Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 5
112
| repo_url
stringlengths 34
141
| action
stringclasses 3
values | title
stringlengths 1
855
| labels
stringlengths 4
721
| body
stringlengths 1
261k
| index
stringclasses 13
values | text_combine
stringlengths 96
261k
| label
stringclasses 2
values | text
stringlengths 96
240k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
344,099
| 10,339,995,360
|
IssuesEvent
|
2019-09-03 20:44:33
|
oslc-op/jira-migration-landfill
|
https://api.github.com/repos/oslc-op/jira-migration-landfill
|
closed
|
Some form of subscription and/or update notification may be required
|
Domain: CfgM Priority: High Status: Deferred Xtra: Jira
|
If a Global Configuration server is to handle local contributions to local configurations, or if subsidiary layers of global configuration servers are to be supported, then a leaf node in the contribution tree might not be an immediate contribution to a GC in that first GC server. In that case, that first GC server must either be notified of the change to the contribution tree, or must delegate component resolution to the next layer down (the subsidiary GC server, or the local configuration server with multiple levels of contributed configurations).
The current draft spec does not address delegated layers of component resolution.
---
_Migrated from https://issues.oasis-open.org/browse/OSLCCORE-120 (opened by @oslc-bot; previously assigned to @undefined)_
|
1.0
|
Some form of subscription and/or update notification may be required - If a Global Configuration server is to handle local contributions to local configurations, or if subsidiary layers of global configuration servers are to be supported, then a leaf node in the contribution tree might not be an immediate contribution to a GC in that first GC server. In that case, that first GC server must either be notified of the change to the contribution tree, or must delegate component resolution to the next layer down (the subsidiary GC server, or the local configuration server with multiple levels of contributed configurations).
The current draft spec does not address delegated layers of component resolution.
---
_Migrated from https://issues.oasis-open.org/browse/OSLCCORE-120 (opened by @oslc-bot; previously assigned to @undefined)_
|
priority
|
some form of subscription and or update notification may be required if a global configuration server is to handle local contributions to local configurations or if subsidiary layers of global configuration servers are to be supported then a leaf node in the contribution tree might not be an immediate contribution to a gc in that first gc server in that case that first gc server must either be notified of the change to the contribution tree or must delegate component resolution to the next layer down the subsidiary gc server or the local configuration server with multiple levels of contributed configurations the current draft spec does not address delegated layers of component resolution migrated from opened by oslc bot previously assigned to undefined
| 1
|
755,497
| 26,430,600,096
|
IssuesEvent
|
2023-01-14 19:17:08
|
LiteLDev/LiteLoaderBDS
|
https://api.github.com/repos/LiteLDev/LiteLoaderBDS
|
closed
|
LLSE ob.setScore / pl.setScore 不会自动在计分板中创建目标
|
type: bug module: script engine priority: high
|
### 异常模块
ScriptEngine (脚本引擎)
### 操作系统
Windows 10
### LiteLoader 版本
LiteLoaderBDS 2.7.2+2b8c54d25
### BDS 版本
Version 1.19.31.01(ProtocolVersion 554)
### 发生了什么?
LLSE JS中(非Nodejs ,这里面没测试)
`ob.setScore` 和 `pl.setScore` 均不能自动创建计分板目标,即第一个函数里面的第一个参数 玩家,第二个函数里面的pl
比如 `ob.setScore(player,1);` (我确定这里的ob 不为null,且是有效的计分板对象,计分板也存在,计分板名称也无误,且player是真实有效的玩家对象)
这句函数在执行的时候 不会返回null,控制台也没有报错,游戏中的计分板(在右侧显示)除了显示标题空空如也
如果使用命令 `/scoreboard players add XXX jifenban` 将player玩家添加到该计分板中,则这条函数可以正常使用(正常设置分数)
如果`ob.setScore()`的第一个参数为字符串(这个字符串目标不在该计分板中的时候),执行函数控制台会报错
**总结 :核心的问题是,这条函数(或许还有其他计分板函数)在之前的版本中是可以检测并创建计分板目标后然后再设置目标分数的,但现在并不会检测并创建计分板目标了(这里的计分板目标指的是计分板中的玩家名)**
### 复现此问题的步骤
附上我的脚本作为参考
```javescript
/*
* 开发者 - CNGEGE
*/
(function(){
//logger.setConsole(true);
let ob = null;
let obname = "onlineplayers";
let obshowname = "在线玩家(ping)";
let timerid = -1;
let timeout = 1000 * 5;
function ServerStarted(){
ob = mc.getScoreObjective(obname);
if(ob == null){
logger.log("计分板为空");
ob = mc.newScoreObjective(obname,obshowname);
ob.setDisplay("sidebar");
}else if(ob.displayName != obshowname)
{
logger.log("计分板名字不对应");
mc.removeScoreObjective(obname);
ob = mc.newScoreObjective(obname,obshowname);
ob.setDisplay("sidebar");
}
logger.log("服务开启");
}
function Join(player){
//logger.log("玩家加入游戏");
//logger.log("当前玩家数量:",mc.getOnlinePlayers().length.toString());
let device = player.getDevice();
player.setScore(obname,device.avgPing);
//如果 当前只有这一位玩家 则开启一个计时器,每5s计算一次玩家的ping值 直到没有玩家在线了,则关闭计时器
if(mc.getOnlinePlayers().length == 1)
{
timerid = setInterval(()=>{
let players = mc.getOnlinePlayers();
if(players.length == 0){
clearInterval(timerid);
}else{
for(i=0;i<players.length;i++){
let dev = players[i].getDevice();
//logger.log("玩家名字:"+players[i].name+" 延迟:"+dev.lastPing.toString());
//players[i].setScore(obname,dev.lastPing);
if(ob.setScore(players[i],dev.lastPing) == null){
logger.log("失败");
}
}
}
},timeout);
}
}
function Left(player){
//logger.log("玩家退出游戏");
player.deleteScore(obname);
}
mc.listen("onJoin",Join);
mc.listen("onLeft",Left);
mc.listen("onServerStarted",ServerStarted);
})()
```
### 有关的日志/输出
_No response_
### 插件列表
```raw
22:20:58 INFO [Server] 插件列表 [12]
22:20:58 INFO [Server] - AutoFishing [v1.2.1] (AutoFishing.dll)
22:20:58 INFO [Server] LL版 BDS服务器全自动挂机钓鱼
22:20:58 INFO [Server] - DispenserGetLavaFromCauldron [v3.2.1] (DispenserGetLavaFromCauldron.dll)
22:20:58 INFO [Server] 用发射器向装有岩浆的炼药锅发射空桶以装岩浆/反之亦然
22:20:58 INFO [Server] - TreeCuttingAndMining [v2.1.0] (TreeCuttingAndMining.dll)
22:20:58 INFO [Server] 砍树与挖矿插件
22:20:58 INFO [Server] - DispenserDestroyBlock [v2.1.1] (DispenserDestroyBlock.dll)
22:20:58 INFO [Server] 激活发射器利用工具破坏方块
22:20:58 INFO [Server] - LLMoney [v2.7.0] (LLMoney.dll)
22:20:58 INFO [Server] EconomyCore for LiteLoaderBDS
22:20:58 INFO [Server] - Hundred_Times [v1.0.0] (Hundred_Times.dll)
22:20:58 INFO [Server] 百倍掉落物 by CNGEGE
22:20:58 INFO [Server] - PlayerKB [v1.2.0] (PlayerKB.dll)
22:20:58 INFO [Server] 玩家击退&间隔控制
22:20:58 INFO [Server] - PermissionAPI [v2.7.0] (PermissionAPI.dll)
22:20:58 INFO [Server] Builtin & Powerful permission API for LiteLoaderBDS
22:20:58 INFO [Server] - ScriptEngine-QuickJs [v2.7.2] (LiteLoader.Js.dll)
22:20:58 INFO [Server] Javascript ScriptEngine for LiteLoaderBDS
22:20:58 INFO [Server] - onLinePlayer [v1.0.0] (onLinePlayer.js)
22:20:58 INFO [Server] onLinePlayer
22:20:58 INFO [Server] - ScriptEngine-Lua [v2.7.2] (LiteLoader.Lua.dll)
22:20:58 INFO [Server] Lua ScriptEngine for LiteLoaderBDS
22:20:58 INFO [Server] - ScriptEngine-NodeJs [v2.7.2] (LiteLoader.NodeJs.dll)
22:20:58 INFO [Server] Node.js ScriptEngine for LiteLoaderBDS
22:20:58 INFO [Server]
22:20:58 INFO [Server]
```
|
1.0
|
LLSE ob.setScore / pl.setScore 不会自动在计分板中创建目标 - ### 异常模块
ScriptEngine (脚本引擎)
### 操作系统
Windows 10
### LiteLoader 版本
LiteLoaderBDS 2.7.2+2b8c54d25
### BDS 版本
Version 1.19.31.01(ProtocolVersion 554)
### 发生了什么?
LLSE JS中(非Nodejs ,这里面没测试)
`ob.setScore` 和 `pl.setScore` 均不能自动创建计分板目标,即第一个函数里面的第一个参数 玩家,第二个函数里面的pl
比如 `ob.setScore(player,1);` (我确定这里的ob 不为null,且是有效的计分板对象,计分板也存在,计分板名称也无误,且player是真实有效的玩家对象)
这句函数在执行的时候 不会返回null,控制台也没有报错,游戏中的计分板(在右侧显示)除了显示标题空空如也
如果使用命令 `/scoreboard players add XXX jifenban` 将player玩家添加到该计分板中,则这条函数可以正常使用(正常设置分数)
如果`ob.setScore()`的第一个参数为字符串(这个字符串目标不在该计分板中的时候),执行函数控制台会报错
**总结 :核心的问题是,这条函数(或许还有其他计分板函数)在之前的版本中是可以检测并创建计分板目标后然后再设置目标分数的,但现在并不会检测并创建计分板目标了(这里的计分板目标指的是计分板中的玩家名)**
### 复现此问题的步骤
附上我的脚本作为参考
```javescript
/*
* 开发者 - CNGEGE
*/
(function(){
//logger.setConsole(true);
let ob = null;
let obname = "onlineplayers";
let obshowname = "在线玩家(ping)";
let timerid = -1;
let timeout = 1000 * 5;
function ServerStarted(){
ob = mc.getScoreObjective(obname);
if(ob == null){
logger.log("计分板为空");
ob = mc.newScoreObjective(obname,obshowname);
ob.setDisplay("sidebar");
}else if(ob.displayName != obshowname)
{
logger.log("计分板名字不对应");
mc.removeScoreObjective(obname);
ob = mc.newScoreObjective(obname,obshowname);
ob.setDisplay("sidebar");
}
logger.log("服务开启");
}
function Join(player){
//logger.log("玩家加入游戏");
//logger.log("当前玩家数量:",mc.getOnlinePlayers().length.toString());
let device = player.getDevice();
player.setScore(obname,device.avgPing);
//如果 当前只有这一位玩家 则开启一个计时器,每5s计算一次玩家的ping值 直到没有玩家在线了,则关闭计时器
if(mc.getOnlinePlayers().length == 1)
{
timerid = setInterval(()=>{
let players = mc.getOnlinePlayers();
if(players.length == 0){
clearInterval(timerid);
}else{
for(i=0;i<players.length;i++){
let dev = players[i].getDevice();
//logger.log("玩家名字:"+players[i].name+" 延迟:"+dev.lastPing.toString());
//players[i].setScore(obname,dev.lastPing);
if(ob.setScore(players[i],dev.lastPing) == null){
logger.log("失败");
}
}
}
},timeout);
}
}
function Left(player){
//logger.log("玩家退出游戏");
player.deleteScore(obname);
}
mc.listen("onJoin",Join);
mc.listen("onLeft",Left);
mc.listen("onServerStarted",ServerStarted);
})()
```
### 有关的日志/输出
_No response_
### 插件列表
```raw
22:20:58 INFO [Server] 插件列表 [12]
22:20:58 INFO [Server] - AutoFishing [v1.2.1] (AutoFishing.dll)
22:20:58 INFO [Server] LL版 BDS服务器全自动挂机钓鱼
22:20:58 INFO [Server] - DispenserGetLavaFromCauldron [v3.2.1] (DispenserGetLavaFromCauldron.dll)
22:20:58 INFO [Server] 用发射器向装有岩浆的炼药锅发射空桶以装岩浆/反之亦然
22:20:58 INFO [Server] - TreeCuttingAndMining [v2.1.0] (TreeCuttingAndMining.dll)
22:20:58 INFO [Server] 砍树与挖矿插件
22:20:58 INFO [Server] - DispenserDestroyBlock [v2.1.1] (DispenserDestroyBlock.dll)
22:20:58 INFO [Server] 激活发射器利用工具破坏方块
22:20:58 INFO [Server] - LLMoney [v2.7.0] (LLMoney.dll)
22:20:58 INFO [Server] EconomyCore for LiteLoaderBDS
22:20:58 INFO [Server] - Hundred_Times [v1.0.0] (Hundred_Times.dll)
22:20:58 INFO [Server] 百倍掉落物 by CNGEGE
22:20:58 INFO [Server] - PlayerKB [v1.2.0] (PlayerKB.dll)
22:20:58 INFO [Server] 玩家击退&间隔控制
22:20:58 INFO [Server] - PermissionAPI [v2.7.0] (PermissionAPI.dll)
22:20:58 INFO [Server] Builtin & Powerful permission API for LiteLoaderBDS
22:20:58 INFO [Server] - ScriptEngine-QuickJs [v2.7.2] (LiteLoader.Js.dll)
22:20:58 INFO [Server] Javascript ScriptEngine for LiteLoaderBDS
22:20:58 INFO [Server] - onLinePlayer [v1.0.0] (onLinePlayer.js)
22:20:58 INFO [Server] onLinePlayer
22:20:58 INFO [Server] - ScriptEngine-Lua [v2.7.2] (LiteLoader.Lua.dll)
22:20:58 INFO [Server] Lua ScriptEngine for LiteLoaderBDS
22:20:58 INFO [Server] - ScriptEngine-NodeJs [v2.7.2] (LiteLoader.NodeJs.dll)
22:20:58 INFO [Server] Node.js ScriptEngine for LiteLoaderBDS
22:20:58 INFO [Server]
22:20:58 INFO [Server]
```
|
priority
|
llse ob setscore pl setscore 不会自动在计分板中创建目标 异常模块 scriptengine 脚本引擎 操作系统 windows liteloader 版本 liteloaderbds bds 版本 version protocolversion 发生了什么 llse js中(非nodejs ,这里面没测试) ob setscore 和 pl setscore 均不能自动创建计分板目标 即第一个函数里面的第一个参数 玩家,第二个函数里面的pl 比如 ob setscore player (我确定这里的ob 不为null 且是有效的计分板对象,计分板也存在 计分板名称也无误,且player是真实有效的玩家对象) 这句函数在执行的时候 不会返回null 控制台也没有报错,游戏中的计分板(在右侧显示)除了显示标题空空如也 如果使用命令 scoreboard players add xxx jifenban 将player玩家添加到该计分板中,则这条函数可以正常使用(正常设置分数) 如果 ob setscore 的第一个参数为字符串(这个字符串目标不在该计分板中的时候),执行函数控制台会报错 总结 :核心的问题是,这条函数(或许还有其他计分板函数)在之前的版本中是可以检测并创建计分板目标后然后再设置目标分数的,但现在并不会检测并创建计分板目标了(这里的计分板目标指的是计分板中的玩家名) 复现此问题的步骤 附上我的脚本作为参考 javescript 开发者 cngege function logger setconsole true let ob null let obname onlineplayers let obshowname 在线玩家 ping let timerid let timeout function serverstarted ob mc getscoreobjective obname if ob null logger log 计分板为空 ob mc newscoreobjective obname obshowname ob setdisplay sidebar else if ob displayname obshowname logger log 计分板名字不对应 mc removescoreobjective obname ob mc newscoreobjective obname obshowname ob setdisplay sidebar logger log 服务开启 function join player logger log 玩家加入游戏 logger log 当前玩家数量 mc getonlineplayers length tostring let device player getdevice player setscore obname device avgping 如果 当前只有这一位玩家 则开启一个计时器, 直到没有玩家在线了,则关闭计时器 if mc getonlineplayers length timerid setinterval let players mc getonlineplayers if players length clearinterval timerid else for i i players length i let dev players getdevice logger log 玩家名字 players name 延迟 dev lastping tostring players setscore obname dev lastping if ob setscore players dev lastping null logger log 失败 timeout function left player logger log 玩家退出游戏 player deletescore obname mc listen onjoin join mc listen onleft left mc listen onserverstarted serverstarted 有关的日志 输出 no response 插件列表 raw info 插件列表 info autofishing autofishing dll info ll版 bds服务器全自动挂机钓鱼 info dispensergetlavafromcauldron dispensergetlavafromcauldron dll info 用发射器向装有岩浆的炼药锅发射空桶以装岩浆 反之亦然 info treecuttingandmining treecuttingandmining dll info 砍树与挖矿插件 info dispenserdestroyblock dispenserdestroyblock dll info 激活发射器利用工具破坏方块 info llmoney llmoney dll info economycore for liteloaderbds info hundred times hundred times dll info 百倍掉落物 by cngege info playerkb playerkb dll info 玩家击退 间隔控制 info permissionapi permissionapi dll info builtin powerful permission api for liteloaderbds info scriptengine quickjs liteloader js dll info javascript scriptengine for liteloaderbds info onlineplayer onlineplayer js info onlineplayer info scriptengine lua liteloader lua dll info lua scriptengine for liteloaderbds info scriptengine nodejs liteloader nodejs dll info node js scriptengine for liteloaderbds info info
| 1
|
488,429
| 14,077,269,881
|
IssuesEvent
|
2020-11-04 11:45:29
|
CLIxIndia-Dev/clixoer
|
https://api.github.com/repos/CLIxIndia-Dev/clixoer
|
opened
|
Filter Option Needs Tool Tip Explaining Types as Explained in SRS Slide
|
backend enhancement highpriority
|
**Is your feature request related to a problem? Please describe.**
Filter Option Needs Tool Tip Explaining Types as Explained in SRS Slide [Here](https://docs.google.com/presentation/d/1ivAPOnd9aXzfNQCtjQuRFx2Iw9nk7PWfM7mNTe9D7XU/edit#slide=id.g74b266a43a_0_22)
**Describe the solution you'd like**
Add Too Tips explaining what filters are
|
1.0
|
Filter Option Needs Tool Tip Explaining Types as Explained in SRS Slide - **Is your feature request related to a problem? Please describe.**
Filter Option Needs Tool Tip Explaining Types as Explained in SRS Slide [Here](https://docs.google.com/presentation/d/1ivAPOnd9aXzfNQCtjQuRFx2Iw9nk7PWfM7mNTe9D7XU/edit#slide=id.g74b266a43a_0_22)
**Describe the solution you'd like**
Add Too Tips explaining what filters are
|
priority
|
filter option needs tool tip explaining types as explained in srs slide is your feature request related to a problem please describe filter option needs tool tip explaining types as explained in srs slide describe the solution you d like add too tips explaining what filters are
| 1
|
193,589
| 6,886,245,620
|
IssuesEvent
|
2017-11-21 18:46:58
|
GoogleCloudPlatform/google-cloud-eclipse
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-eclipse
|
closed
|
Convert to App Engine Standard menu should configure Java 8 instead of Java 7
|
enhancement high priority
|
Currently, it's always Java 7.
It may be OK for Java 7 projects to remain on the Java 7 runtime, although they will have no problem being deployed to the Java 8 runtime.
|
1.0
|
Convert to App Engine Standard menu should configure Java 8 instead of Java 7 - Currently, it's always Java 7.
It may be OK for Java 7 projects to remain on the Java 7 runtime, although they will have no problem being deployed to the Java 8 runtime.
|
priority
|
convert to app engine standard menu should configure java instead of java currently it s always java it may be ok for java projects to remain on the java runtime although they will have no problem being deployed to the java runtime
| 1
|
361,435
| 10,708,890,879
|
IssuesEvent
|
2019-10-24 20:43:45
|
opencv/opencv
|
https://api.github.com/repos/opencv/opencv
|
closed
|
Incorrect window size on Mac OS X Lion
|
auto-transferred bug category: highgui-gui priority: normal
|
Transferred from http://code.opencv.org/issues/2189
```
|| Jan Dlabal on 2012-07-24 17:26
|| Priority: Normal
|| Affected: None
|| Category: highgui-gui
|| Tracker: Bug
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Incorrect window size on Mac OS X Lion
```
See http://stackoverflow.com/questions/11635842/opencv-not-filling-entire-image.
Basically this:
@
cv::Mat cvSideDepthImage1(150, 150, CV_8UC1, cv::Scalar(100));
cv::imshow("side1", cvSideDepthImage1);
@
Creates this 200x150px window:
!http://i.stack.imgur.com/HetAA.png!
When it should just show a gray 150x150 square.
This is OpenCV 2.4.9 on OS X Lion (all updates installed).
```
## History
##### Marina Kolpakova on 2012-07-24 18:05
```
- Category set to highgui-images
```
##### Andrey Kamaev on 2012-08-15 13:42
```
- Assignee set to Vadim Pisarevsky
```
##### Andrey Kamaev on 2012-08-16 15:42
```
- Category changed from highgui-images to highgui-gui
```
|
1.0
|
Incorrect window size on Mac OS X Lion - Transferred from http://code.opencv.org/issues/2189
```
|| Jan Dlabal on 2012-07-24 17:26
|| Priority: Normal
|| Affected: None
|| Category: highgui-gui
|| Tracker: Bug
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
## Incorrect window size on Mac OS X Lion
```
See http://stackoverflow.com/questions/11635842/opencv-not-filling-entire-image.
Basically this:
@
cv::Mat cvSideDepthImage1(150, 150, CV_8UC1, cv::Scalar(100));
cv::imshow("side1", cvSideDepthImage1);
@
Creates this 200x150px window:
!http://i.stack.imgur.com/HetAA.png!
When it should just show a gray 150x150 square.
This is OpenCV 2.4.9 on OS X Lion (all updates installed).
```
## History
##### Marina Kolpakova on 2012-07-24 18:05
```
- Category set to highgui-images
```
##### Andrey Kamaev on 2012-08-15 13:42
```
- Assignee set to Vadim Pisarevsky
```
##### Andrey Kamaev on 2012-08-16 15:42
```
- Category changed from highgui-images to highgui-gui
```
|
priority
|
incorrect window size on mac os x lion transferred from jan dlabal on priority normal affected none category highgui gui tracker bug difficulty none pr none platform none none incorrect window size on mac os x lion see basically this cv mat cv cv scalar cv imshow creates this window when it should just show a gray square this is opencv on os x lion all updates installed history marina kolpakova on category set to highgui images andrey kamaev on assignee set to vadim pisarevsky andrey kamaev on category changed from highgui images to highgui gui
| 1
|
250,942
| 7,992,991,238
|
IssuesEvent
|
2018-07-20 05:18:47
|
AtlasOfLivingAustralia/fieldcapture
|
https://api.github.com/repos/AtlasOfLivingAustralia/fieldcapture
|
closed
|
Support for multiple project funding sources
|
priority-high status-new type-enhancement
|
_migrated from:_ https://code.google.com/p/ala/issues/detail?id=735
_date:_ Sat Jul 5 06:10:04 2014
_author:_ CoolDa...@gmail.com
---
New doE programmes require projects to be apportioned across multiple funding streams. This is also a requirement of the generalised system as many projects undertaken by organisations involve multiple funding partners.
A simple multi-row table in the Admin>Project information tab should be adequate for this putpose.
|
1.0
|
Support for multiple project funding sources - _migrated from:_ https://code.google.com/p/ala/issues/detail?id=735
_date:_ Sat Jul 5 06:10:04 2014
_author:_ CoolDa...@gmail.com
---
New doE programmes require projects to be apportioned across multiple funding streams. This is also a requirement of the generalised system as many projects undertaken by organisations involve multiple funding partners.
A simple multi-row table in the Admin>Project information tab should be adequate for this putpose.
|
priority
|
support for multiple project funding sources migrated from date sat jul author coolda gmail com new doe programmes require projects to be apportioned across multiple funding streams this is also a requirement of the generalised system as many projects undertaken by organisations involve multiple funding partners a simple multi row table in the admin project information tab should be adequate for this putpose
| 1
|
168,478
| 6,376,387,998
|
IssuesEvent
|
2017-08-02 07:18:12
|
leo-project/leofs
|
https://api.github.com/repos/leo-project/leofs
|
closed
|
Consistency Problem with Async. Deletion
|
Bug Priority-HIGH v1.3 _leo_storage
|
## Description
Asynchronous deletion would cause consistency problem if another modification has occurred before the async. deletion is handled
## Root Cause
Directory Deletion
- Object list under the directory is pull when `leo_storage` handles the deletion, the list of objects could have been changed by the time
- Time stamp of the deletion request is not taken into account, re-created file could be incorrectly deleted
Object Deletion
- Time stamp of the deletion request is recorded as the time `leo_storage` starts to handle, not the origin request time
## Action to take
Clarify the consistency model, especially the mix of sync and async. operations, the time stamp record for reconciliation
## Related Issue
Spark first cleanup the temporary folder and then start to write data into the folder https://github.com/leo-project/leofs/issues/595
|
1.0
|
Consistency Problem with Async. Deletion - ## Description
Asynchronous deletion would cause consistency problem if another modification has occurred before the async. deletion is handled
## Root Cause
Directory Deletion
- Object list under the directory is pull when `leo_storage` handles the deletion, the list of objects could have been changed by the time
- Time stamp of the deletion request is not taken into account, re-created file could be incorrectly deleted
Object Deletion
- Time stamp of the deletion request is recorded as the time `leo_storage` starts to handle, not the origin request time
## Action to take
Clarify the consistency model, especially the mix of sync and async. operations, the time stamp record for reconciliation
## Related Issue
Spark first cleanup the temporary folder and then start to write data into the folder https://github.com/leo-project/leofs/issues/595
|
priority
|
consistency problem with async deletion description asynchronous deletion would cause consistency problem if another modification has occurred before the async deletion is handled root cause directory deletion object list under the directory is pull when leo storage handles the deletion the list of objects could have been changed by the time time stamp of the deletion request is not taken into account re created file could be incorrectly deleted object deletion time stamp of the deletion request is recorded as the time leo storage starts to handle not the origin request time action to take clarify the consistency model especially the mix of sync and async operations the time stamp record for reconciliation related issue spark first cleanup the temporary folder and then start to write data into the folder
| 1
|
134,853
| 5,238,529,692
|
IssuesEvent
|
2017-01-31 05:26:26
|
axsh/openvdc
|
https://api.github.com/repos/axsh/openvdc
|
closed
|
Scheduler needs configuration file to run on multi-host environment
|
Priority : High Type : Feature
|
### Problem
It is currently not possible to start scheduler through systemd and have it interact with zookeeper, api, etc. when those are installed on another host.
Also if `openvdc-cli` isn't installed on the same host, scheduler will not start at all because an EnvironmentFile was mistakenly added to the `openvdc-cli` package instead.
### Solution
* Have scheduler accept a configuration file similar to https://github.com/axsh/openvdc/pull/91.
* Remove the EnvironmentFile in favour of the new configuration file.
|
1.0
|
Scheduler needs configuration file to run on multi-host environment - ### Problem
It is currently not possible to start scheduler through systemd and have it interact with zookeeper, api, etc. when those are installed on another host.
Also if `openvdc-cli` isn't installed on the same host, scheduler will not start at all because an EnvironmentFile was mistakenly added to the `openvdc-cli` package instead.
### Solution
* Have scheduler accept a configuration file similar to https://github.com/axsh/openvdc/pull/91.
* Remove the EnvironmentFile in favour of the new configuration file.
|
priority
|
scheduler needs configuration file to run on multi host environment problem it is currently not possible to start scheduler through systemd and have it interact with zookeeper api etc when those are installed on another host also if openvdc cli isn t installed on the same host scheduler will not start at all because an environmentfile was mistakenly added to the openvdc cli package instead solution have scheduler accept a configuration file similar to remove the environmentfile in favour of the new configuration file
| 1
|
792,046
| 27,943,861,482
|
IssuesEvent
|
2023-03-24 00:15:40
|
evo-lua/evo-runtime
|
https://api.github.com/repos/evo-lua/evo-runtime
|
closed
|
Integrate a UUID generation library (to generate IDs for connected WebSocket clients)
|
Priority: High Complexity: Moderate Scope: Dependencies Scope: Runtime Status: In Progress Type: New Feature
|
This looks decent enough at first glance: https://github.com/mariusbancila/stduuid
In the current uws prototype (#73 ) I'm using the remote address of the connected peer as an ID, but that could go wrong. It's also somewhat wasteful (since it's currently a string value that is replicated for every event). I guess using an UUID value and only converting it to a Lua string when needed (via FFI bindings) would solve both of these issues.
Goals:
- [x] Can generate uuid strings (Lua and cdata)
Roadmap:
- [x] Add submodule
- [x] Integrate in build process
- [x] Add version to help command
- [x] Static exports table
- [x] FFI bindings library
- [x] Unit tests
- [x] API documentation
|
1.0
|
Integrate a UUID generation library (to generate IDs for connected WebSocket clients) - This looks decent enough at first glance: https://github.com/mariusbancila/stduuid
In the current uws prototype (#73 ) I'm using the remote address of the connected peer as an ID, but that could go wrong. It's also somewhat wasteful (since it's currently a string value that is replicated for every event). I guess using an UUID value and only converting it to a Lua string when needed (via FFI bindings) would solve both of these issues.
Goals:
- [x] Can generate uuid strings (Lua and cdata)
Roadmap:
- [x] Add submodule
- [x] Integrate in build process
- [x] Add version to help command
- [x] Static exports table
- [x] FFI bindings library
- [x] Unit tests
- [x] API documentation
|
priority
|
integrate a uuid generation library to generate ids for connected websocket clients this looks decent enough at first glance in the current uws prototype i m using the remote address of the connected peer as an id but that could go wrong it s also somewhat wasteful since it s currently a string value that is replicated for every event i guess using an uuid value and only converting it to a lua string when needed via ffi bindings would solve both of these issues goals can generate uuid strings lua and cdata roadmap add submodule integrate in build process add version to help command static exports table ffi bindings library unit tests api documentation
| 1
|
384,110
| 11,383,671,793
|
IssuesEvent
|
2020-01-29 06:51:06
|
ahmedkaludi/accelerated-mobile-pages
|
https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages
|
closed
|
Your setup is not completed. Please setup for better AMP Experience.
|
NEXT UPDATE [Priority: HIGH] bug
|
What am I supposed to do?

|
1.0
|
Your setup is not completed. Please setup for better AMP Experience. - What am I supposed to do?

|
priority
|
your setup is not completed please setup for better amp experience what am i supposed to do
| 1
|
518,563
| 15,030,424,501
|
IssuesEvent
|
2021-02-02 07:25:42
|
red-hat-storage/ocs-ci
|
https://api.github.com/repos/red-hat-storage/ocs-ci
|
closed
|
Deployments to vsphere stuck after initializing terraform work directory
|
High Priority bug
|
Multiple vsphere deployments have become stuck after initializing the terraform work directory. Jobs stuck at this point will remain here until aborted or until the jenkins slave is shut down. The nodes that have been brought up do not appear to be in a running state / created successfully by terraform so they will need to be cleaned up manually through the vsphere console. I have experienced this myself on multiple different datacenters so I know the issue isn't isolated to one. I am still unsure if it is a vsphere issue or a jenkins slave issue.
Logs while job is stuck:
```
18:09:45 - MainThread - ocs_ci.deployment.terraform - INFO - Initializing terraform work directory
18:09:45 - MainThread - ocs_ci.utility.utils - INFO - Executing command: terraform init /home/jenkins/workspace/qe-deploy-ocs-cluster/ocs-ci/external/installer/upi/vsphere/
18:10:04 - MainThread - ocs_ci.utility.utils - INFO - Executing command: terraform apply '-var-file=/home/jenkins/current-cluster-dir/openshift-cluster-dir/terraform_data/terraform.tfvars' -auto-approve '/home/jenkins/workspace/qe-deploy-ocs-cluster/ocs-ci/external/installer/upi/vsphere/'
```
After shutting down the jenkins slave:
```
Cannot contact temp-slave-jnk-pr1377-b976: hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on temp-slave-jnk-pr1377-b976 failed. The channel is closing down or has closed down
```
Jenkins job: https://ocs4-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/qe-deploy-ocs-cluster/4128/console
|
1.0
|
Deployments to vsphere stuck after initializing terraform work directory - Multiple vsphere deployments have become stuck after initializing the terraform work directory. Jobs stuck at this point will remain here until aborted or until the jenkins slave is shut down. The nodes that have been brought up do not appear to be in a running state / created successfully by terraform so they will need to be cleaned up manually through the vsphere console. I have experienced this myself on multiple different datacenters so I know the issue isn't isolated to one. I am still unsure if it is a vsphere issue or a jenkins slave issue.
Logs while job is stuck:
```
18:09:45 - MainThread - ocs_ci.deployment.terraform - INFO - Initializing terraform work directory
18:09:45 - MainThread - ocs_ci.utility.utils - INFO - Executing command: terraform init /home/jenkins/workspace/qe-deploy-ocs-cluster/ocs-ci/external/installer/upi/vsphere/
18:10:04 - MainThread - ocs_ci.utility.utils - INFO - Executing command: terraform apply '-var-file=/home/jenkins/current-cluster-dir/openshift-cluster-dir/terraform_data/terraform.tfvars' -auto-approve '/home/jenkins/workspace/qe-deploy-ocs-cluster/ocs-ci/external/installer/upi/vsphere/'
```
After shutting down the jenkins slave:
```
Cannot contact temp-slave-jnk-pr1377-b976: hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on temp-slave-jnk-pr1377-b976 failed. The channel is closing down or has closed down
```
Jenkins job: https://ocs4-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/qe-deploy-ocs-cluster/4128/console
|
priority
|
deployments to vsphere stuck after initializing terraform work directory multiple vsphere deployments have become stuck after initializing the terraform work directory jobs stuck at this point will remain here until aborted or until the jenkins slave is shut down the nodes that have been brought up do not appear to be in a running state created successfully by terraform so they will need to be cleaned up manually through the vsphere console i have experienced this myself on multiple different datacenters so i know the issue isn t isolated to one i am still unsure if it is a vsphere issue or a jenkins slave issue logs while job is stuck mainthread ocs ci deployment terraform info initializing terraform work directory mainthread ocs ci utility utils info executing command terraform init home jenkins workspace qe deploy ocs cluster ocs ci external installer upi vsphere mainthread ocs ci utility utils info executing command terraform apply var file home jenkins current cluster dir openshift cluster dir terraform data terraform tfvars auto approve home jenkins workspace qe deploy ocs cluster ocs ci external installer upi vsphere after shutting down the jenkins slave cannot contact temp slave jnk hudson remoting channelclosedexception channel unknown remote call on temp slave jnk failed the channel is closing down or has closed down jenkins job
| 1
|
515,966
| 14,973,097,785
|
IssuesEvent
|
2021-01-28 00:16:57
|
DoobDev/Doob
|
https://api.github.com/repos/DoobDev/Doob
|
closed
|
starboard duplication glitch
|
High Priority Stale bug
|
if you keep spamming the star, the number just goes up, it never removes stars.
|
1.0
|
starboard duplication glitch - if you keep spamming the star, the number just goes up, it never removes stars.
|
priority
|
starboard duplication glitch if you keep spamming the star the number just goes up it never removes stars
| 1
|
658,591
| 21,898,043,474
|
IssuesEvent
|
2022-05-20 10:34:31
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
K_ESSENTIAL option doesn't have any effect on k_create_thread
|
bug priority: high area: Kernel
|
**Describe the bug**
In our application we spawn a thread at innit that will be running a loop during all the lifecycle of the app. If that thread stops execution or exits for any reason we want the whole system to restart. To achieve this we configure the thread with the option **K_ESSENTIAL** which according to the documentation:
*This option tags the thread as an essential thread. This instructs the kernel to treat the termination or aborting of the thread as a fatal system error.*
What we observe is that configuring the **K_ESSENTIAL** option has no effect whatsoever and that if the thread stops the rest of the app keeps running.
**To Reproduce**
Here a simple app to reproduce the problem:
```
#include <zephyr.h>
#include <sys/printk.h>
#define LOOP2_STACK_SIZE 1024
#define LOOP2_PRIORITY 1
struct k_thread loop2_thread;
K_THREAD_STACK_DEFINE(loop2_area, LOOP2_STACK_SIZE);
void loop2(void *one, void *two, void *three) {
int i = 0;
while (1)
{
printk("LOOP 2\n");
i++;
if (i > 3)
{
printk("LOOP 2 EXITING\n");
return;
}
k_sleep(K_SECONDS(1));
}
}
void main(void)
{
printk("Hello World! %s\n", CONFIG_BOARD);
k_tid_t loop2_thread_id = k_thread_create(&loop2_thread, loop2_area,
K_THREAD_STACK_SIZEOF(loop2_area),
loop2,
NULL, NULL, NULL,
LOOP2_PRIORITY, K_ESSENTIAL, K_NO_WAIT);
while (1) {
printk("LOOP 1\n");
k_sleep(K_SECONDS(2));
}
}
```
**Expected behavior**
When the spawned thread exits the app should be restarted (or halted depending on the platform)
**Impact**
This is a good fit for our application since the devices that are running it are spread geographically and adds robustness to the design.
**Environment (please complete the following information):**
We have observed this behavior when testing on qemu_cortex as well as on our hardware board which is a Nordic Semiconductors nrf9160 based one.
We are using Nordic SDK 1.9.1 which ships with Zephyr OS v2.7.99
|
1.0
|
K_ESSENTIAL option doesn't have any effect on k_create_thread - **Describe the bug**
In our application we spawn a thread at innit that will be running a loop during all the lifecycle of the app. If that thread stops execution or exits for any reason we want the whole system to restart. To achieve this we configure the thread with the option **K_ESSENTIAL** which according to the documentation:
*This option tags the thread as an essential thread. This instructs the kernel to treat the termination or aborting of the thread as a fatal system error.*
What we observe is that configuring the **K_ESSENTIAL** option has no effect whatsoever and that if the thread stops the rest of the app keeps running.
**To Reproduce**
Here a simple app to reproduce the problem:
```
#include <zephyr.h>
#include <sys/printk.h>
#define LOOP2_STACK_SIZE 1024
#define LOOP2_PRIORITY 1
struct k_thread loop2_thread;
K_THREAD_STACK_DEFINE(loop2_area, LOOP2_STACK_SIZE);
void loop2(void *one, void *two, void *three) {
int i = 0;
while (1)
{
printk("LOOP 2\n");
i++;
if (i > 3)
{
printk("LOOP 2 EXITING\n");
return;
}
k_sleep(K_SECONDS(1));
}
}
void main(void)
{
printk("Hello World! %s\n", CONFIG_BOARD);
k_tid_t loop2_thread_id = k_thread_create(&loop2_thread, loop2_area,
K_THREAD_STACK_SIZEOF(loop2_area),
loop2,
NULL, NULL, NULL,
LOOP2_PRIORITY, K_ESSENTIAL, K_NO_WAIT);
while (1) {
printk("LOOP 1\n");
k_sleep(K_SECONDS(2));
}
}
```
**Expected behavior**
When the spawned thread exits the app should be restarted (or halted depending on the platform)
**Impact**
This is a good fit for our application since the devices that are running it are spread geographically and adds robustness to the design.
**Environment (please complete the following information):**
We have observed this behavior when testing on qemu_cortex as well as on our hardware board which is a Nordic Semiconductors nrf9160 based one.
We are using Nordic SDK 1.9.1 which ships with Zephyr OS v2.7.99
|
priority
|
k essential option doesn t have any effect on k create thread describe the bug in our application we spawn a thread at innit that will be running a loop during all the lifecycle of the app if that thread stops execution or exits for any reason we want the whole system to restart to achieve this we configure the thread with the option k essential which according to the documentation this option tags the thread as an essential thread this instructs the kernel to treat the termination or aborting of the thread as a fatal system error what we observe is that configuring the k essential option has no effect whatsoever and that if the thread stops the rest of the app keeps running to reproduce here a simple app to reproduce the problem include include define stack size define priority struct k thread thread k thread stack define area stack size void void one void two void three int i while printk loop n i if i printk loop exiting n return k sleep k seconds void main void printk hello world s n config board k tid t thread id k thread create thread area k thread stack sizeof area null null null priority k essential k no wait while printk loop n k sleep k seconds expected behavior when the spawned thread exits the app should be restarted or halted depending on the platform impact this is a good fit for our application since the devices that are running it are spread geographically and adds robustness to the design environment please complete the following information we have observed this behavior when testing on qemu cortex as well as on our hardware board which is a nordic semiconductors based one we are using nordic sdk which ships with zephyr os
| 1
|
282,054
| 8,703,250,234
|
IssuesEvent
|
2018-12-05 16:13:29
|
opencollective/opencollective
|
https://api.github.com/repos/opencollective/opencollective
|
closed
|
Duplicated emails received by backers on Collective updates
|
api backend bug high priority
|
Some users(example: I got the same 4 emails from `Manyverse` and @piamancini got 6) are receiving multiple emails through the Collectives Updates:

|
1.0
|
Duplicated emails received by backers on Collective updates - Some users(example: I got the same 4 emails from `Manyverse` and @piamancini got 6) are receiving multiple emails through the Collectives Updates:

|
priority
|
duplicated emails received by backers on collective updates some users example i got the same emails from manyverse and piamancini got are receiving multiple emails through the collectives updates
| 1
|
395,215
| 11,672,653,204
|
IssuesEvent
|
2020-03-04 07:13:41
|
fontforge/fontforge
|
https://api.github.com/repos/fontforge/fontforge
|
closed
|
GUI version of fontforge fails when used within in a script (but didn't used to)
|
High Priority
|
### When reporting a bug/issue:
- [ ] The FontForge version and the operating system you're using
Debian testing, version 1:20190801~dfsg-2
- [ ] The behavior you expect to see, and the actual behavior
I am the maintainer of [mftrace](https://github.com/hanwen/mftrace) for Debian. mftrace uses fontforge to do part of its job, and begins by testing that fontforge is present and working by running "fontforge --version". This ran without issue using older versions of the fontforge package (the most recent such Debian package was 1:20170731~dfsg-2; after that, they jumped to version 20190801), but not with the current version - it just exits with an error. Tracking this back, when the GUI-enabled version of fontforge is run without a valid DISPLAY, even if just running "fontforge --version" or "fontforge --help", it bombs out. The older version did not do this.
I am aware that there is a non-GUI version of fontforge, but the GUI-enabled version used to work for non-GUI tasks in a non-GUI environment, but it no longer does. Furthermore, scripts are used to using plain "fontforge", without caring whether it is GUI-enabled or not (and there is no simple way to check for this), but this change in behaviour has broken at least mftrace, and quite possibly other scripts too.
I have not isolated the change that caused this change in behaviour, though; at a quick glance, the code does not look dramatically different between 20170731 and 20190801.
- [ ] Steps to reproduce the behavior
Run "fontforge --version" or "fontforge --help" using the GUI-enabled fontforge in a terminal without a valid DISPLAY.
- [ ] Possible solution/fix/workaround
I'm not sure where this change in behaviour was introduced, but perhaps there could be a check in `fontforgeexe/startui.c` for whether there is a valid GUI, and to revert to the behaviour of `fontforgeexe/startnoui.c` if there is not instead of crashing.
|
1.0
|
GUI version of fontforge fails when used within in a script (but didn't used to) - ### When reporting a bug/issue:
- [ ] The FontForge version and the operating system you're using
Debian testing, version 1:20190801~dfsg-2
- [ ] The behavior you expect to see, and the actual behavior
I am the maintainer of [mftrace](https://github.com/hanwen/mftrace) for Debian. mftrace uses fontforge to do part of its job, and begins by testing that fontforge is present and working by running "fontforge --version". This ran without issue using older versions of the fontforge package (the most recent such Debian package was 1:20170731~dfsg-2; after that, they jumped to version 20190801), but not with the current version - it just exits with an error. Tracking this back, when the GUI-enabled version of fontforge is run without a valid DISPLAY, even if just running "fontforge --version" or "fontforge --help", it bombs out. The older version did not do this.
I am aware that there is a non-GUI version of fontforge, but the GUI-enabled version used to work for non-GUI tasks in a non-GUI environment, but it no longer does. Furthermore, scripts are used to using plain "fontforge", without caring whether it is GUI-enabled or not (and there is no simple way to check for this), but this change in behaviour has broken at least mftrace, and quite possibly other scripts too.
I have not isolated the change that caused this change in behaviour, though; at a quick glance, the code does not look dramatically different between 20170731 and 20190801.
- [ ] Steps to reproduce the behavior
Run "fontforge --version" or "fontforge --help" using the GUI-enabled fontforge in a terminal without a valid DISPLAY.
- [ ] Possible solution/fix/workaround
I'm not sure where this change in behaviour was introduced, but perhaps there could be a check in `fontforgeexe/startui.c` for whether there is a valid GUI, and to revert to the behaviour of `fontforgeexe/startnoui.c` if there is not instead of crashing.
|
priority
|
gui version of fontforge fails when used within in a script but didn t used to when reporting a bug issue the fontforge version and the operating system you re using debian testing version dfsg the behavior you expect to see and the actual behavior i am the maintainer of for debian mftrace uses fontforge to do part of its job and begins by testing that fontforge is present and working by running fontforge version this ran without issue using older versions of the fontforge package the most recent such debian package was dfsg after that they jumped to version but not with the current version it just exits with an error tracking this back when the gui enabled version of fontforge is run without a valid display even if just running fontforge version or fontforge help it bombs out the older version did not do this i am aware that there is a non gui version of fontforge but the gui enabled version used to work for non gui tasks in a non gui environment but it no longer does furthermore scripts are used to using plain fontforge without caring whether it is gui enabled or not and there is no simple way to check for this but this change in behaviour has broken at least mftrace and quite possibly other scripts too i have not isolated the change that caused this change in behaviour though at a quick glance the code does not look dramatically different between and steps to reproduce the behavior run fontforge version or fontforge help using the gui enabled fontforge in a terminal without a valid display possible solution fix workaround i m not sure where this change in behaviour was introduced but perhaps there could be a check in fontforgeexe startui c for whether there is a valid gui and to revert to the behaviour of fontforgeexe startnoui c if there is not instead of crashing
| 1
|
231,438
| 7,632,506,884
|
IssuesEvent
|
2018-05-05 16:02:30
|
tootsuite/mastodon
|
https://api.github.com/repos/tootsuite/mastodon
|
closed
|
Video modal extends beyond screen
|
bug priority - high ui
|
Referencing #5956 and @yuntan , hoping you can help fix this as you were working on the modal last.
Check this as an example of a large-dimension video. When opened in a modal it extends beyond the screen:
https://mastodon.social/@WAHa_06x36/99972837564881692
To reproduce, paste link in search, open by pressing <- -> arrows next to fullscreen then press play when pressing play it pops outside (as per 2nd screenshot)


* * * *
- [x] I searched or browsed the repo’s other issues to ensure this is not a duplicate.
- [x] This bug happens on a [tagged release](https://github.com/tootsuite/mastodon/releases) and not on `master` (If you're a user, don't worry about this).
|
1.0
|
Video modal extends beyond screen - Referencing #5956 and @yuntan , hoping you can help fix this as you were working on the modal last.
Check this as an example of a large-dimension video. When opened in a modal it extends beyond the screen:
https://mastodon.social/@WAHa_06x36/99972837564881692
To reproduce, paste link in search, open by pressing <- -> arrows next to fullscreen then press play when pressing play it pops outside (as per 2nd screenshot)


* * * *
- [x] I searched or browsed the repo’s other issues to ensure this is not a duplicate.
- [x] This bug happens on a [tagged release](https://github.com/tootsuite/mastodon/releases) and not on `master` (If you're a user, don't worry about this).
|
priority
|
video modal extends beyond screen referencing and yuntan hoping you can help fix this as you were working on the modal last check this as an example of a large dimension video when opened in a modal it extends beyond the screen to reproduce paste link in search open by pressing arrows next to fullscreen then press play when pressing play it pops outside as per screenshot i searched or browsed the repo’s other issues to ensure this is not a duplicate this bug happens on a and not on master if you re a user don t worry about this
| 1
|
531,652
| 15,501,840,759
|
IssuesEvent
|
2021-03-11 11:03:02
|
glennmdt/sdmx-ml
|
https://api.github.com/repos/glennmdt/sdmx-ml
|
closed
|
Metadata Provision Agreement and Metadata Provider Scheme missing from XML Schemas
|
High Priority bug
|
Two structures are missing from the schemas:
* Metadata Provision Agreement
* Metadata Provider Scheme
They are essentially the same as the data equivalents except a Metadata Provision Agreement can link to any other artefact.
See section 7.3 in the following
[https://metatechltd.sharepoint.com/:w:/s/SDMX30TechnicalSpecifications/EflcBha7_PxBuFkeXu_dqRMBISBAy9IsjqXHjYkNHq321Q?e=ZCiUyI](url)
|
1.0
|
Metadata Provision Agreement and Metadata Provider Scheme missing from XML Schemas - Two structures are missing from the schemas:
* Metadata Provision Agreement
* Metadata Provider Scheme
They are essentially the same as the data equivalents except a Metadata Provision Agreement can link to any other artefact.
See section 7.3 in the following
[https://metatechltd.sharepoint.com/:w:/s/SDMX30TechnicalSpecifications/EflcBha7_PxBuFkeXu_dqRMBISBAy9IsjqXHjYkNHq321Q?e=ZCiUyI](url)
|
priority
|
metadata provision agreement and metadata provider scheme missing from xml schemas two structures are missing from the schemas metadata provision agreement metadata provider scheme they are essentially the same as the data equivalents except a metadata provision agreement can link to any other artefact see section in the following url
| 1
|
234,601
| 7,723,682,964
|
IssuesEvent
|
2018-05-24 13:11:16
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
Kernel tests failing at runtime on frdm_kw41z
|
area: Kernel bug priority: high
|
Several kernel tests are failing at runtime on frdm_kw41z on commit e7509c1 (v1.12.0-rc1)
```
$ sanitycheck -T tests/kernel/ -p frdm_kw41z --device-testing --device-serial /dev/ttyACM0
Cleaning output directory /home/maureen/zephyr/sanity-out
Building testcase defconfigs...
63 tests selected, 8604 tests discarded due to filters
total complete: 26/ 63 41% failed: 0
frdm_kw41z tests/kernel/fatal/kernel.common.stack_sentinel FAILED: failed
see: sanity-out/frdm_kw41z/tests/kernel/fatal/kernel.common.stack_sentinel/handler.log
total complete: 32/ 63 50% failed: 1
frdm_kw41z tests/kernel/mem_protect/stackprot/kernel.memory_protection FAILED: timeout
see: sanity-out/frdm_kw41z/tests/kernel/mem_protect/stackprot/kernel.memory_protection/handler.log
total complete: 36/ 63 57% failed: 2
frdm_kw41z tests/kernel/mem_protect/stack_random/kernel.memory_protection.stack_random FAILED: timeout
see: sanity-out/frdm_kw41z/tests/kernel/mem_protect/stack_random/kernel.memory_protection.stack_random/handler.log
total complete: 63/ 63 100% failed: 3
60 of 63 tests passed with 0 warnings in 896 seconds
```
|
1.0
|
Kernel tests failing at runtime on frdm_kw41z - Several kernel tests are failing at runtime on frdm_kw41z on commit e7509c1 (v1.12.0-rc1)
```
$ sanitycheck -T tests/kernel/ -p frdm_kw41z --device-testing --device-serial /dev/ttyACM0
Cleaning output directory /home/maureen/zephyr/sanity-out
Building testcase defconfigs...
63 tests selected, 8604 tests discarded due to filters
total complete: 26/ 63 41% failed: 0
frdm_kw41z tests/kernel/fatal/kernel.common.stack_sentinel FAILED: failed
see: sanity-out/frdm_kw41z/tests/kernel/fatal/kernel.common.stack_sentinel/handler.log
total complete: 32/ 63 50% failed: 1
frdm_kw41z tests/kernel/mem_protect/stackprot/kernel.memory_protection FAILED: timeout
see: sanity-out/frdm_kw41z/tests/kernel/mem_protect/stackprot/kernel.memory_protection/handler.log
total complete: 36/ 63 57% failed: 2
frdm_kw41z tests/kernel/mem_protect/stack_random/kernel.memory_protection.stack_random FAILED: timeout
see: sanity-out/frdm_kw41z/tests/kernel/mem_protect/stack_random/kernel.memory_protection.stack_random/handler.log
total complete: 63/ 63 100% failed: 3
60 of 63 tests passed with 0 warnings in 896 seconds
```
|
priority
|
kernel tests failing at runtime on frdm several kernel tests are failing at runtime on frdm on commit sanitycheck t tests kernel p frdm device testing device serial dev cleaning output directory home maureen zephyr sanity out building testcase defconfigs tests selected tests discarded due to filters total complete failed frdm tests kernel fatal kernel common stack sentinel failed failed see sanity out frdm tests kernel fatal kernel common stack sentinel handler log total complete failed frdm tests kernel mem protect stackprot kernel memory protection failed timeout see sanity out frdm tests kernel mem protect stackprot kernel memory protection handler log total complete failed frdm tests kernel mem protect stack random kernel memory protection stack random failed timeout see sanity out frdm tests kernel mem protect stack random kernel memory protection stack random handler log total complete failed of tests passed with warnings in seconds
| 1
|
538,550
| 15,771,905,197
|
IssuesEvent
|
2021-03-31 21:06:59
|
myRutgers/Web
|
https://api.github.com/repos/myRutgers/Web
|
closed
|
myNews Drawer Article Sponsors
|
High Priority enhancement myNews
|
The article news sites and search bar are hard coded and we need the actual data to show the real ones. Also doesn't have any functionality right now
|
1.0
|
myNews Drawer Article Sponsors - The article news sites and search bar are hard coded and we need the actual data to show the real ones. Also doesn't have any functionality right now
|
priority
|
mynews drawer article sponsors the article news sites and search bar are hard coded and we need the actual data to show the real ones also doesn t have any functionality right now
| 1
|
708,363
| 24,339,774,845
|
IssuesEvent
|
2022-10-01 14:58:23
|
worldmaking/mischmasch
|
https://api.github.com/repos/worldmaking/mischmasch
|
closed
|
dealing with feedback connections
|
Priority: High Audio V0.5.x // Node
|
@grrrwaaa will attempt to handle this directly within Audio.js rather than using the connection cable='history' in patch.document
|
1.0
|
dealing with feedback connections - @grrrwaaa will attempt to handle this directly within Audio.js rather than using the connection cable='history' in patch.document
|
priority
|
dealing with feedback connections grrrwaaa will attempt to handle this directly within audio js rather than using the connection cable history in patch document
| 1
|
624,752
| 19,706,286,633
|
IssuesEvent
|
2022-01-12 22:28:45
|
E3SM-Project/scream
|
https://api.github.com/repos/E3SM-Project/scream
|
closed
|
Need to rethink (or extend) how scream does IO on dyn grid
|
I/O priority:high dynamics
|
Restart files _require_ a Discontinuous Galerkin (DG) version of the dyn grid data (meaning that the corresponding edge dofs on bordering elements need to be separately saved), while right now, our IO on dyn grid assumes a Continuous Galerkin (CG) field (meaning that of the corresponding edge dofs on bordering elements, only one is saved). E.g., for output, we do a dyn_grid->phys_gll remap, and then write the phys_gll field (where the "remap" is simply "picking" one of the occurrences of each dof in the GLL grid).
Some thoughts below.
- The SEGrids class already stores nelem * np * np dofs. The way we currently set it up in `dynamics/homme/interface/dyn_grid_mod.F90`, we replicate edge dofs gids, so that the global gids are still the same as in the corresponding unique PointGrid. However, I don't think we ever use this fact, so we could easily change SEGrid, to store nelem * np * np unique gids (I believe that's what Homme stores in `elem(ie)%gdofP`, so we can just use those).
- The model restart files require to restart dyn grid field in a DG-like fashion, which means IO classes need to be able to handle SEGrid in a DG-like way. However, normal output might still be processed in the way we currently do (i.e., remap dyn->physGLL and then save, and viceversa for input). So we might want to handle both.
- Scorpio IO classes _assume_ that (partitioned) dofs have a layout of the form `(ncols[,...])`. For instance, for a 3d vector quantity, the layout is assumed to be `(ncols,dim,nlevs)`. The natural dyn layout of a 3d field is `(nelem,dim,np,np,nlev)`. We can either make IO handle the latter, or we can output/load the field as a "point-grid-like" field, with layout `(nelem*np*np, dim, nlev)`. The latter is what EAM does, in a way. But we'd have to make the IO class switch based on a) grid type, and b) if SEGrid, whether to output a DG or CG field.
|
1.0
|
Need to rethink (or extend) how scream does IO on dyn grid - Restart files _require_ a Discontinuous Galerkin (DG) version of the dyn grid data (meaning that the corresponding edge dofs on bordering elements need to be separately saved), while right now, our IO on dyn grid assumes a Continuous Galerkin (CG) field (meaning that of the corresponding edge dofs on bordering elements, only one is saved). E.g., for output, we do a dyn_grid->phys_gll remap, and then write the phys_gll field (where the "remap" is simply "picking" one of the occurrences of each dof in the GLL grid).
Some thoughts below.
- The SEGrids class already stores nelem * np * np dofs. The way we currently set it up in `dynamics/homme/interface/dyn_grid_mod.F90`, we replicate edge dofs gids, so that the global gids are still the same as in the corresponding unique PointGrid. However, I don't think we ever use this fact, so we could easily change SEGrid, to store nelem * np * np unique gids (I believe that's what Homme stores in `elem(ie)%gdofP`, so we can just use those).
- The model restart files require to restart dyn grid field in a DG-like fashion, which means IO classes need to be able to handle SEGrid in a DG-like way. However, normal output might still be processed in the way we currently do (i.e., remap dyn->physGLL and then save, and viceversa for input). So we might want to handle both.
- Scorpio IO classes _assume_ that (partitioned) dofs have a layout of the form `(ncols[,...])`. For instance, for a 3d vector quantity, the layout is assumed to be `(ncols,dim,nlevs)`. The natural dyn layout of a 3d field is `(nelem,dim,np,np,nlev)`. We can either make IO handle the latter, or we can output/load the field as a "point-grid-like" field, with layout `(nelem*np*np, dim, nlev)`. The latter is what EAM does, in a way. But we'd have to make the IO class switch based on a) grid type, and b) if SEGrid, whether to output a DG or CG field.
|
priority
|
need to rethink or extend how scream does io on dyn grid restart files require a discontinuous galerkin dg version of the dyn grid data meaning that the corresponding edge dofs on bordering elements need to be separately saved while right now our io on dyn grid assumes a continuous galerkin cg field meaning that of the corresponding edge dofs on bordering elements only one is saved e g for output we do a dyn grid phys gll remap and then write the phys gll field where the remap is simply picking one of the occurrences of each dof in the gll grid some thoughts below the segrids class already stores nelem np np dofs the way we currently set it up in dynamics homme interface dyn grid mod we replicate edge dofs gids so that the global gids are still the same as in the corresponding unique pointgrid however i don t think we ever use this fact so we could easily change segrid to store nelem np np unique gids i believe that s what homme stores in elem ie gdofp so we can just use those the model restart files require to restart dyn grid field in a dg like fashion which means io classes need to be able to handle segrid in a dg like way however normal output might still be processed in the way we currently do i e remap dyn physgll and then save and viceversa for input so we might want to handle both scorpio io classes assume that partitioned dofs have a layout of the form ncols for instance for a vector quantity the layout is assumed to be ncols dim nlevs the natural dyn layout of a field is nelem dim np np nlev we can either make io handle the latter or we can output load the field as a point grid like field with layout nelem np np dim nlev the latter is what eam does in a way but we d have to make the io class switch based on a grid type and b if segrid whether to output a dg or cg field
| 1
|
404,234
| 11,854,119,261
|
IssuesEvent
|
2020-03-24 23:49:03
|
monarch-initiative/mondo
|
https://api.github.com/repos/monarch-initiative/mondo
|
closed
|
remove comment from MONDO_0005597 'cystic renal cell carcinoma'
|
high priority
|
there is a comment on MONDO_0005597 'cystic renal cell carcinoma' to merge MONDO_0005597 with MONDO:0003010 multilocular clear cell renal cell carcinoma, but this has already been done. Need to remove comment.
|
1.0
|
remove comment from MONDO_0005597 'cystic renal cell carcinoma' - there is a comment on MONDO_0005597 'cystic renal cell carcinoma' to merge MONDO_0005597 with MONDO:0003010 multilocular clear cell renal cell carcinoma, but this has already been done. Need to remove comment.
|
priority
|
remove comment from mondo cystic renal cell carcinoma there is a comment on mondo cystic renal cell carcinoma to merge mondo with mondo multilocular clear cell renal cell carcinoma but this has already been done need to remove comment
| 1
|
387,009
| 11,454,589,990
|
IssuesEvent
|
2020-02-06 17:21:32
|
openforcefield/openforcefield
|
https://api.github.com/repos/openforcefield/openforcefield
|
closed
|
Add functionality to create OFFMols with specific indexing
|
effort:high priority:high
|
**Is your feature request related to a problem? Please describe.**
Many people have requested the ability to create OFFMols with specified atom orderings. Most commonly, this is in the context of [Canonical, isomeric, explicit hydrogen, mapped SMILES](https://github.com/openforcefield/cmiles#how-to-use-cmiles), which we attach to OFF-submitted molecules in QCArchive. However, people have also requested the ability to reorder the atoms from a molecule that already has an indexing system (like, an SDF).
**Describe the solution you'd like**
This is two similar requests in one:
* `Molecule.from_object(obj, index_map=dict(current_idx:new_idx)`: Be able to read a molecule with a _defined_ indexing system (like, from SDF), but have the created OFFMol have a different atom/bond ordering.
* `Molecule.from_mapped_smiles(mapped_smiles)`: Read an explicit-hydrogen, fully mapped SMILES, and create an OFFMol with that atom ordering.
* Needs to check+fail if H's aren't explicit.
For both functions, we'll need to resolve what to do if the map indices don't begin at 1, of if the numbering system has gaps. For now, a reasonable behavior is probably just "fail".
|
1.0
|
Add functionality to create OFFMols with specific indexing - **Is your feature request related to a problem? Please describe.**
Many people have requested the ability to create OFFMols with specified atom orderings. Most commonly, this is in the context of [Canonical, isomeric, explicit hydrogen, mapped SMILES](https://github.com/openforcefield/cmiles#how-to-use-cmiles), which we attach to OFF-submitted molecules in QCArchive. However, people have also requested the ability to reorder the atoms from a molecule that already has an indexing system (like, an SDF).
**Describe the solution you'd like**
This is two similar requests in one:
* `Molecule.from_object(obj, index_map=dict(current_idx:new_idx)`: Be able to read a molecule with a _defined_ indexing system (like, from SDF), but have the created OFFMol have a different atom/bond ordering.
* `Molecule.from_mapped_smiles(mapped_smiles)`: Read an explicit-hydrogen, fully mapped SMILES, and create an OFFMol with that atom ordering.
* Needs to check+fail if H's aren't explicit.
For both functions, we'll need to resolve what to do if the map indices don't begin at 1, of if the numbering system has gaps. For now, a reasonable behavior is probably just "fail".
|
priority
|
add functionality to create offmols with specific indexing is your feature request related to a problem please describe many people have requested the ability to create offmols with specified atom orderings most commonly this is in the context of which we attach to off submitted molecules in qcarchive however people have also requested the ability to reorder the atoms from a molecule that already has an indexing system like an sdf describe the solution you d like this is two similar requests in one molecule from object obj index map dict current idx new idx be able to read a molecule with a defined indexing system like from sdf but have the created offmol have a different atom bond ordering molecule from mapped smiles mapped smiles read an explicit hydrogen fully mapped smiles and create an offmol with that atom ordering needs to check fail if h s aren t explicit for both functions we ll need to resolve what to do if the map indices don t begin at of if the numbering system has gaps for now a reasonable behavior is probably just fail
| 1
|
6,673
| 2,590,834,331
|
IssuesEvent
|
2015-02-18 21:21:43
|
chessmasterhong/WaterEmblem
|
https://api.github.com/repos/chessmasterhong/WaterEmblem
|
closed
|
Continuing a suspended game does not restore game back to a playable state
|
bug high priority
|
Due to the large number changes done to the codebase since the suspend game feature was last updated, suspending the game will not save all data necessary to bring the game back to a playable state upon continuing the suspended game. Some data were were overlooked and were not saved in the game suspending or were not properly loaded during the game continuing process.
|
1.0
|
Continuing a suspended game does not restore game back to a playable state - Due to the large number changes done to the codebase since the suspend game feature was last updated, suspending the game will not save all data necessary to bring the game back to a playable state upon continuing the suspended game. Some data were were overlooked and were not saved in the game suspending or were not properly loaded during the game continuing process.
|
priority
|
continuing a suspended game does not restore game back to a playable state due to the large number changes done to the codebase since the suspend game feature was last updated suspending the game will not save all data necessary to bring the game back to a playable state upon continuing the suspended game some data were were overlooked and were not saved in the game suspending or were not properly loaded during the game continuing process
| 1
|
802,540
| 28,966,322,640
|
IssuesEvent
|
2023-05-10 08:10:23
|
gamefreedomgit/Maelstrom
|
https://api.github.com/repos/gamefreedomgit/Maelstrom
|
closed
|
Mana regeneration (any class)
|
Core Status: Confirmed Priority: High
|
**Description:**
Mana regeneration on any class is not acting as expected, with the count regressing every few milliseconds.
**How to reproduce:**
Cast any spell and observe your mana go "backwards" every few milliseconds.
Example:

**How it should work:**
Mana regeneration should be continuous without regressing.
|
1.0
|
Mana regeneration (any class) - **Description:**
Mana regeneration on any class is not acting as expected, with the count regressing every few milliseconds.
**How to reproduce:**
Cast any spell and observe your mana go "backwards" every few milliseconds.
Example:

**How it should work:**
Mana regeneration should be continuous without regressing.
|
priority
|
mana regeneration any class description mana regeneration on any class is not acting as expected with the count regressing every few milliseconds how to reproduce cast any spell and observe your mana go backwards every few milliseconds example how it should work mana regeneration should be continuous without regressing
| 1
|
164,593
| 6,229,530,428
|
IssuesEvent
|
2017-07-11 04:23:14
|
NREL/EnergyPlus
|
https://api.github.com/repos/NREL/EnergyPlus
|
closed
|
User file not calculating zone volumes correctly
|
Priority2 S1 - High
|
Helpdesk ticket 11140
User expected similar loads from four similar zones, but they varied signficantly. The four zones have similar floor area and the same ceiling height but greatly different zone volumes:

The difference in volumes cause the infiltration and ventilation flows to be different, because both were specified in ACH.
Workaround is to put hard values for Volume in the Zone objects.
|
1.0
|
User file not calculating zone volumes correctly - Helpdesk ticket 11140
User expected similar loads from four similar zones, but they varied signficantly. The four zones have similar floor area and the same ceiling height but greatly different zone volumes:

The difference in volumes cause the infiltration and ventilation flows to be different, because both were specified in ACH.
Workaround is to put hard values for Volume in the Zone objects.
|
priority
|
user file not calculating zone volumes correctly helpdesk ticket user expected similar loads from four similar zones but they varied signficantly the four zones have similar floor area and the same ceiling height but greatly different zone volumes the difference in volumes cause the infiltration and ventilation flows to be different because both were specified in ach workaround is to put hard values for volume in the zone objects
| 1
|
134,952
| 5,240,692,673
|
IssuesEvent
|
2017-01-31 13:53:26
|
SIMRacingApps/SIMRacingApps
|
https://api.github.com/repos/SIMRacingApps/SIMRacingApps
|
closed
|
Tire Measurements Don't Change on all Dallara tires
|
bug high priority
|
When I'm running laps in iRacing i click the "fuel and 4 tires" button and when i do go to pit. my crew changes the tires and fill the fuel but the app only tells me my right rear tire was changed and still says i ran *amount of laps* on the other three tires. Please help me fix this
|
1.0
|
Tire Measurements Don't Change on all Dallara tires - When I'm running laps in iRacing i click the "fuel and 4 tires" button and when i do go to pit. my crew changes the tires and fill the fuel but the app only tells me my right rear tire was changed and still says i ran *amount of laps* on the other three tires. Please help me fix this
|
priority
|
tire measurements don t change on all dallara tires when i m running laps in iracing i click the fuel and tires button and when i do go to pit my crew changes the tires and fill the fuel but the app only tells me my right rear tire was changed and still says i ran amount of laps on the other three tires please help me fix this
| 1
|
771,802
| 27,092,857,176
|
IssuesEvent
|
2023-02-14 22:47:27
|
asastats/channel
|
https://api.github.com/repos/asastats/channel
|
closed
|
Add new Yieldly's NFT prize game
|
feature high priority addressed
|
By @SCN9A in [GitHub](https://github.com/asastats/channel/discussions/130#discussioncomment-4970939):
> Yieldly announced new NFT prize game
> https://twitter.com/YieldlyFinance/status/1625471570054418437
> Escrow for YLDY staked to join NFT Prize game is
https://algoexplorer.io/address/E3SJQLV7PHEL6J4MX3EESXG7JP5KI35ORAOOX55OLIE2QOK4YMT3UBPA6Y
|
1.0
|
Add new Yieldly's NFT prize game - By @SCN9A in [GitHub](https://github.com/asastats/channel/discussions/130#discussioncomment-4970939):
> Yieldly announced new NFT prize game
> https://twitter.com/YieldlyFinance/status/1625471570054418437
> Escrow for YLDY staked to join NFT Prize game is
https://algoexplorer.io/address/E3SJQLV7PHEL6J4MX3EESXG7JP5KI35ORAOOX55OLIE2QOK4YMT3UBPA6Y
|
priority
|
add new yieldly s nft prize game by in yieldly announced new nft prize game escrow for yldy staked to join nft prize game is
| 1
|
160,471
| 6,098,163,035
|
IssuesEvent
|
2017-06-20 06:40:05
|
OpenEMS/openems-gui
|
https://api.github.com/repos/OpenEMS/openems-gui
|
closed
|
Show PV Charger as Production
|
Priority: High Type: Bug
|
### Bug Report or Feature Request (mark with an `x`)
```
- [x] bug report -> please search issues before submitting
- [ ] feature request
```
### Desired functionality.
DC-PV chargers (e.g. FENECON Commercial DC) are not shown as "Production"
|
1.0
|
Show PV Charger as Production - ### Bug Report or Feature Request (mark with an `x`)
```
- [x] bug report -> please search issues before submitting
- [ ] feature request
```
### Desired functionality.
DC-PV chargers (e.g. FENECON Commercial DC) are not shown as "Production"
|
priority
|
show pv charger as production bug report or feature request mark with an x bug report please search issues before submitting feature request desired functionality dc pv chargers e g fenecon commercial dc are not shown as production
| 1
|
227,238
| 7,528,125,244
|
IssuesEvent
|
2018-04-13 19:33:36
|
cranndarach/lifetracker
|
https://api.github.com/repos/cranndarach/lifetracker
|
opened
|
[feature] Pre-made entries
|
priority/high task-size/big type/feature
|
I want to find a good way to add pre-made entries or bundles of entries without having to create pages with default values. For example, if there is a list of things that you do every day, and you just want to mark down the time that they all happened but otherwise have all their data pre-populated. It doesn't make sense to have to spend 10 minutes inputting the same data every day.
I know I have been trying for a while to figure out a good way to make this work, but I feel like it is really important to the overall utility of the program.
|
1.0
|
[feature] Pre-made entries - I want to find a good way to add pre-made entries or bundles of entries without having to create pages with default values. For example, if there is a list of things that you do every day, and you just want to mark down the time that they all happened but otherwise have all their data pre-populated. It doesn't make sense to have to spend 10 minutes inputting the same data every day.
I know I have been trying for a while to figure out a good way to make this work, but I feel like it is really important to the overall utility of the program.
|
priority
|
pre made entries i want to find a good way to add pre made entries or bundles of entries without having to create pages with default values for example if there is a list of things that you do every day and you just want to mark down the time that they all happened but otherwise have all their data pre populated it doesn t make sense to have to spend minutes inputting the same data every day i know i have been trying for a while to figure out a good way to make this work but i feel like it is really important to the overall utility of the program
| 1
|
106,979
| 4,288,207,616
|
IssuesEvent
|
2016-07-17 09:22:00
|
fossasia/open-event-orga-server
|
https://api.github.com/repos/fossasia/open-event-orga-server
|
closed
|
Add key to Array of object in existing json output to follow re:Data
|
Priority: High Rest-API
|
We need to add key to the array of objects to follow the re:data conventions. The existing output is not parseable with libraries we are using as well. We need to change it from
```
[
{
"audio":null,
"comments":null,
"end_time":"2016-05-05 08:00:00",
"id":1,
"language":null,
"long_abstract":"",
"short_abstract":"",
"signup_url":null,
"slides":null,
"start_time":"2016-05-05 07:30:00",
"state":"accepted",
"subtitle":null,
"title":"Fantastische Hardware Bauen",
"track":{},
"video":"https://www.youtube.com/channel/UCJoKp-gG2t7lH5_hiKp95yA"
}
]
```
to
```
{
"sessions":[
{
"audio":null,
"comments":null,
"end_time":"2016-05-05 08:00:00",
"id":1,
"language":null,
"long_abstract":"",
"short_abstract":"",
"signup_url":null,
"slides":null,
"start_time":"2016-05-05 07:30:00",
"state":"accepted",
"subtitle":null,
"title":"Fantastische Hardware Bauen",
"track":{
},
"video":"https://www.youtube.com/channel/UCJoKp-gG2t7lH5_hiKp95yA"
}
]
}
```
Please treat this as high priority as we need to start integrating client apps with the orga server and that requires changing the parsers on the client side.
|
1.0
|
Add key to Array of object in existing json output to follow re:Data - We need to add key to the array of objects to follow the re:data conventions. The existing output is not parseable with libraries we are using as well. We need to change it from
```
[
{
"audio":null,
"comments":null,
"end_time":"2016-05-05 08:00:00",
"id":1,
"language":null,
"long_abstract":"",
"short_abstract":"",
"signup_url":null,
"slides":null,
"start_time":"2016-05-05 07:30:00",
"state":"accepted",
"subtitle":null,
"title":"Fantastische Hardware Bauen",
"track":{},
"video":"https://www.youtube.com/channel/UCJoKp-gG2t7lH5_hiKp95yA"
}
]
```
to
```
{
"sessions":[
{
"audio":null,
"comments":null,
"end_time":"2016-05-05 08:00:00",
"id":1,
"language":null,
"long_abstract":"",
"short_abstract":"",
"signup_url":null,
"slides":null,
"start_time":"2016-05-05 07:30:00",
"state":"accepted",
"subtitle":null,
"title":"Fantastische Hardware Bauen",
"track":{
},
"video":"https://www.youtube.com/channel/UCJoKp-gG2t7lH5_hiKp95yA"
}
]
}
```
Please treat this as high priority as we need to start integrating client apps with the orga server and that requires changing the parsers on the client side.
|
priority
|
add key to array of object in existing json output to follow re data we need to add key to the array of objects to follow the re data conventions the existing output is not parseable with libraries we are using as well we need to change it from audio null comments null end time id language null long abstract short abstract signup url null slides null start time state accepted subtitle null title fantastische hardware bauen track video to sessions audio null comments null end time id language null long abstract short abstract signup url null slides null start time state accepted subtitle null title fantastische hardware bauen track video please treat this as high priority as we need to start integrating client apps with the orga server and that requires changing the parsers on the client side
| 1
|
195,355
| 6,911,136,254
|
IssuesEvent
|
2017-11-28 06:49:14
|
vmware/harbor
|
https://api.github.com/repos/vmware/harbor
|
closed
|
bzr failed in Clair due to "permission denied"
|
kind/bug priority/high target/vic-1.3
|
Error:
```
{"Event":"running database migrations","Level":"info","Location":"pgsql.go:216","Time":"2017-11-22 10:05:45.320408"}
ot/.bazaar/bazaar.conf.\\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\\n13751\\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\": invalid syntax","output":"No handlers could be found for logger \"bzr\"\nfailed to open trace file: [Errno 13] Permission denied: '/root/.bzr.log'\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\n13751\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\n"}
```
|
1.0
|
bzr failed in Clair due to "permission denied" - Error:
```
{"Event":"running database migrations","Level":"info","Location":"pgsql.go:216","Time":"2017-11-22 10:05:45.320408"}
ot/.bazaar/bazaar.conf.\\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\\n13751\\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\": invalid syntax","output":"No handlers could be found for logger \"bzr\"\nfailed to open trace file: [Errno 13] Permission denied: '/root/.bzr.log'\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\n13751\nPermission denied while trying to load configuration store file:///root/.bazaar/bazaar.conf.\n"}
```
|
priority
|
bzr failed in clair due to permission denied error event running database migrations level info location pgsql go time ot bazaar bazaar conf npermission denied while trying to load configuration store file root bazaar bazaar conf npermission denied while trying to load configuration store file root bazaar bazaar conf npermission denied while trying to load configuration store file root bazaar bazaar conf npermission denied while trying to load configuration store file root bazaar bazaar conf invalid syntax output no handlers could be found for logger bzr nfailed to open trace file permission denied root bzr log npermission denied while trying to load configuration store file root bazaar bazaar conf npermission denied while trying to load configuration store file root bazaar bazaar conf npermission denied while trying to load configuration store file root bazaar bazaar conf npermission denied while trying to load configuration store file root bazaar bazaar conf npermission denied while trying to load configuration store file root bazaar bazaar conf n
| 1
|
58,657
| 3,090,439,537
|
IssuesEvent
|
2015-08-26 06:41:38
|
TheLens/demolitions
|
https://api.github.com/repos/TheLens/demolitions
|
closed
|
photos above the fold
|
High priority
|
let's work on getting more of the second photo above the fold on laptop and phone.
to save space on the sidebar, should we swap the map and the text/photo column and put the nav buttons/title over the map?
we can also move share buttons up next to address.
i realize we can't get the entire second photo above but we can try for more.
we can also see how it works to put the caption below second photo, like you did before. that will save room between.
|
1.0
|
photos above the fold - let's work on getting more of the second photo above the fold on laptop and phone.
to save space on the sidebar, should we swap the map and the text/photo column and put the nav buttons/title over the map?
we can also move share buttons up next to address.
i realize we can't get the entire second photo above but we can try for more.
we can also see how it works to put the caption below second photo, like you did before. that will save room between.
|
priority
|
photos above the fold let s work on getting more of the second photo above the fold on laptop and phone to save space on the sidebar should we swap the map and the text photo column and put the nav buttons title over the map we can also move share buttons up next to address i realize we can t get the entire second photo above but we can try for more we can also see how it works to put the caption below second photo like you did before that will save room between
| 1
|
156,718
| 5,987,983,533
|
IssuesEvent
|
2017-06-02 02:14:02
|
GeneaLabs/laravel-governor
|
https://api.github.com/repos/GeneaLabs/laravel-governor
|
opened
|
Incorporate publishing of laravel-casts assets
|
enhancement high priority
|
- add to publish command
- document what to include in the layout blade file
|
1.0
|
Incorporate publishing of laravel-casts assets - - add to publish command
- document what to include in the layout blade file
|
priority
|
incorporate publishing of laravel casts assets add to publish command document what to include in the layout blade file
| 1
|
804,580
| 29,493,483,608
|
IssuesEvent
|
2023-06-02 15:03:58
|
octokit/plugin-retry.js
|
https://api.github.com/repos/octokit/plugin-retry.js
|
closed
|
[BUG]: last commit was detected as a fix instead of breaking change (node version)
|
Priority: High Type: Bug
|
### What happened?
the package isn't supporting node >= 16 so I can't install my project. This is because of BREAKING CHANGE of [this commit](https://github.com/octokit/plugin-retry.js/commit/52cf451c23b82898046f861b5806915e93561f98) wasn't detected
### Versions
@octokit/plugin-retry@4.1.4
### Relevant log output
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
1.0
|
[BUG]: last commit was detected as a fix instead of breaking change (node version) - ### What happened?
the package isn't supporting node >= 16 so I can't install my project. This is because of BREAKING CHANGE of [this commit](https://github.com/octokit/plugin-retry.js/commit/52cf451c23b82898046f861b5806915e93561f98) wasn't detected
### Versions
@octokit/plugin-retry@4.1.4
### Relevant log output
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
priority
|
last commit was detected as a fix instead of breaking change node version what happened the package isn t supporting node so i can t install my project this is because of breaking change of wasn t detected versions octokit plugin retry relevant log output no response code of conduct i agree to follow this project s code of conduct
| 1
|
343,146
| 10,325,713,349
|
IssuesEvent
|
2019-09-01 19:40:54
|
OpenSRP/opensrp-client-chw
|
https://api.github.com/repos/OpenSRP/opensrp-client-chw
|
closed
|
Customize the side panel with the list of registers for the CHW app for BA
|
Boresha Afya Size: Small (≤1) high priority
|
Please reference below or issue: [BA issue #2](https://github.com/OpenSRP/opensrp-jhpiego-ba/issues/2)
<img width="318" alt="screen shot 2019-01-14 at 11 55 04" src="https://user-images.githubusercontent.com/20777928/51103317-485c2180-17f3-11e9-855b-831d7133e366.png"><img width="324" alt="screen shot 2019-01-14 at 11 55 13" src="https://user-images.githubusercontent.com/20777928/51103321-4d20d580-17f3-11e9-9aed-a4deafd25e9b.png">
- [x] Update the top logo with "USAID Boresha Afya" instead of "Boresha Afya"
- [ ] Update the registers to include the following registers in the following order: All Families, ANC, PNC, Child Clients, Family Planning, Malaria
- [x] Update the icons associated with each register. You should have access to this file. Please mention me and let me know if yo do not have access
|
1.0
|
Customize the side panel with the list of registers for the CHW app for BA - Please reference below or issue: [BA issue #2](https://github.com/OpenSRP/opensrp-jhpiego-ba/issues/2)
<img width="318" alt="screen shot 2019-01-14 at 11 55 04" src="https://user-images.githubusercontent.com/20777928/51103317-485c2180-17f3-11e9-855b-831d7133e366.png"><img width="324" alt="screen shot 2019-01-14 at 11 55 13" src="https://user-images.githubusercontent.com/20777928/51103321-4d20d580-17f3-11e9-9aed-a4deafd25e9b.png">
- [x] Update the top logo with "USAID Boresha Afya" instead of "Boresha Afya"
- [ ] Update the registers to include the following registers in the following order: All Families, ANC, PNC, Child Clients, Family Planning, Malaria
- [x] Update the icons associated with each register. You should have access to this file. Please mention me and let me know if yo do not have access
|
priority
|
customize the side panel with the list of registers for the chw app for ba please reference below or issue img width alt screen shot at src width alt screen shot at src update the top logo with usaid boresha afya instead of boresha afya update the registers to include the following registers in the following order all families anc pnc child clients family planning malaria update the icons associated with each register you should have access to this file please mention me and let me know if yo do not have access
| 1
|
525,939
| 15,269,435,747
|
IssuesEvent
|
2021-02-22 12:47:37
|
AY2021S2-CS2103T-W13-2/tp
|
https://api.github.com/repos/AY2021S2-CS2103T-W13-2/tp
|
opened
|
Delete customer
|
priority.High type.Story
|
As an individual operating a private book loaning service, I can delete a new customer from the system, so that I can keep track of the customers.
|
1.0
|
Delete customer - As an individual operating a private book loaning service, I can delete a new customer from the system, so that I can keep track of the customers.
|
priority
|
delete customer as an individual operating a private book loaning service i can delete a new customer from the system so that i can keep track of the customers
| 1
|
417,954
| 12,190,828,823
|
IssuesEvent
|
2020-04-29 10:00:32
|
softeng-701-group-5/softeng-701-assignment-1
|
https://api.github.com/repos/softeng-701-group-5/softeng-701-assignment-1
|
closed
|
Searching Causes Crashes on the Frontend
|
APPROVED :+1: HIGH PRIORITY bug frontend
|
**Bug Summary**
<!-- A concise description of what the bug is -->
There is currently a bug where when you search on the frontend you get an application crash.
**Test Case(s)**
<!-- List the relevant unsuccessful test case(s) -->
Currently no test cases for this behaviour
---
**Expected behavior**
<!-- Describe what you expected to happen -->
Expected the search operation to be applied and the results on the news cards to be filtered.
**Observed Behaviour**
<!-- Describe the observed behaviour of the bug -->
The observed behaviour was that the application would crash
**Steps To Reproduce**
<!-- The steps performed to reproduce the bug -->
1. Run the application and sign in.
2. Wait for the results to finish loading
3. In the search bar type any search query
**Environment**
- Version: Current Version
- OS: Windows
- Browser: Chrome
---
**Code Examples**

**Stack Trace**

**Screenshots**
**Error Report**
---
**Additional context**
<!-- Add any other context about the problem here -->
The two data types for the weather API and the covid19 API both don't return strings for either username or title needs to have a manual check before the search for these API's. This could be a potential area to start looking at.
|
1.0
|
Searching Causes Crashes on the Frontend - **Bug Summary**
<!-- A concise description of what the bug is -->
There is currently a bug where when you search on the frontend you get an application crash.
**Test Case(s)**
<!-- List the relevant unsuccessful test case(s) -->
Currently no test cases for this behaviour
---
**Expected behavior**
<!-- Describe what you expected to happen -->
Expected the search operation to be applied and the results on the news cards to be filtered.
**Observed Behaviour**
<!-- Describe the observed behaviour of the bug -->
The observed behaviour was that the application would crash
**Steps To Reproduce**
<!-- The steps performed to reproduce the bug -->
1. Run the application and sign in.
2. Wait for the results to finish loading
3. In the search bar type any search query
**Environment**
- Version: Current Version
- OS: Windows
- Browser: Chrome
---
**Code Examples**

**Stack Trace**

**Screenshots**
**Error Report**
---
**Additional context**
<!-- Add any other context about the problem here -->
The two data types for the weather API and the covid19 API both don't return strings for either username or title needs to have a manual check before the search for these API's. This could be a potential area to start looking at.
|
priority
|
searching causes crashes on the frontend bug summary there is currently a bug where when you search on the frontend you get an application crash test case s currently no test cases for this behaviour expected behavior expected the search operation to be applied and the results on the news cards to be filtered observed behaviour the observed behaviour was that the application would crash steps to reproduce run the application and sign in wait for the results to finish loading in the search bar type any search query environment version current version os windows browser chrome code examples stack trace screenshots error report additional context the two data types for the weather api and the api both don t return strings for either username or title needs to have a manual check before the search for these api s this could be a potential area to start looking at
| 1
|
743,678
| 25,910,188,824
|
IssuesEvent
|
2022-12-15 13:24:39
|
PHI-base/PHI5_web_display
|
https://api.github.com/repos/PHI-base/PHI5_web_display
|
closed
|
'Effector' high level term not displayed on effector gene pages
|
bug high priority
|
The gene pages for effector genes are not showing the 'Effector' high level term.
In the instructions for mapping the PHI-base 4 high level terms to data in PHI-base 5, we decided that the 'Effector' high level term should be displayed in the following cases:
* The annotation is a GO Biological Process annotation with the term GO:0140418 or any of its child terms, **or**
* the annotation is a metagenotype annotation involving a pathogen gene annotated with GO:0140418 (or any of its child terms).
Using PHIG:277 as an example, this means that the 'Effector' high level term should be displayed in all of the following places.
In the **Entry Summary**:

In the **Additional information** section for every phenotype annotation that includes the gene, meaning Gene for Gene Phenotype and PHI Phenotype:

And in the **GO Biological Process** section for any annotation with the term GO:0140418 or any of its child terms.

|
1.0
|
'Effector' high level term not displayed on effector gene pages - The gene pages for effector genes are not showing the 'Effector' high level term.
In the instructions for mapping the PHI-base 4 high level terms to data in PHI-base 5, we decided that the 'Effector' high level term should be displayed in the following cases:
* The annotation is a GO Biological Process annotation with the term GO:0140418 or any of its child terms, **or**
* the annotation is a metagenotype annotation involving a pathogen gene annotated with GO:0140418 (or any of its child terms).
Using PHIG:277 as an example, this means that the 'Effector' high level term should be displayed in all of the following places.
In the **Entry Summary**:

In the **Additional information** section for every phenotype annotation that includes the gene, meaning Gene for Gene Phenotype and PHI Phenotype:

And in the **GO Biological Process** section for any annotation with the term GO:0140418 or any of its child terms.

|
priority
|
effector high level term not displayed on effector gene pages the gene pages for effector genes are not showing the effector high level term in the instructions for mapping the phi base high level terms to data in phi base we decided that the effector high level term should be displayed in the following cases the annotation is a go biological process annotation with the term go or any of its child terms or the annotation is a metagenotype annotation involving a pathogen gene annotated with go or any of its child terms using phig as an example this means that the effector high level term should be displayed in all of the following places in the entry summary in the additional information section for every phenotype annotation that includes the gene meaning gene for gene phenotype and phi phenotype and in the go biological process section for any annotation with the term go or any of its child terms
| 1
|
190,706
| 6,821,748,613
|
IssuesEvent
|
2017-11-07 17:42:24
|
playasoft/weightlifter
|
https://api.github.com/repos/playasoft/weightlifter
|
opened
|
Update questions and scoring for 2018 grant cycle
|
priority: high
|
Update questions to match the 2018 CATS manifesto.
|
1.0
|
Update questions and scoring for 2018 grant cycle - Update questions to match the 2018 CATS manifesto.
|
priority
|
update questions and scoring for grant cycle update questions to match the cats manifesto
| 1
|
29,261
| 2,714,225,240
|
IssuesEvent
|
2015-04-10 00:55:38
|
hamiltont/clasp
|
https://api.github.com/repos/hamiltont/clasp
|
opened
|
Add option to introspect emulator's block devices
|
Antimalware High priority
|
From @bamos
I'll branch off into more fine-grained issues when I run into them, but
the idea is to intercept block device writes from the emulators
and stream the metadata using the
gammaray tool from my research group: https://github.com/cmusatyalab/gammaray
This will probably involve modifying Android's already modified qemu implementation
and providing a publish-subscribe queue of the filesystem metadata with gammaray.
When this is working, I'll add a general feature to the web interface to
see the block device writes in real-time.
After this, I'll add an option to let applications get snapshots of the metadata
for the antimalware application.
I'll send out a message to lock the codebase before making these modifications,
I expect I won't get to them until November or December.
I'll also be careful to make it so the Qemu and other modifications
are only active when a user wants to use this option. :-)
From @hamiltont:
Make sure you consult that andlantis paper before starting. Specifically, I believe tehy claim to have ability to do a block-level scan of the filesystem before and after running malware and report files modified, added, deleted, etc. Your paper will need to compare against that approach (or recreate it and provide some hard evidence on how well it works, I've found all the megadroid-esq papers lacking on hard details of how it works and how well it works).
PS - just in case you were not planning this, try to code it in your fork for now. I'm working on this a lot until Nov and migh force push sometimes so I don't want to blow up your work accidentally
|
1.0
|
Add option to introspect emulator's block devices - From @bamos
I'll branch off into more fine-grained issues when I run into them, but
the idea is to intercept block device writes from the emulators
and stream the metadata using the
gammaray tool from my research group: https://github.com/cmusatyalab/gammaray
This will probably involve modifying Android's already modified qemu implementation
and providing a publish-subscribe queue of the filesystem metadata with gammaray.
When this is working, I'll add a general feature to the web interface to
see the block device writes in real-time.
After this, I'll add an option to let applications get snapshots of the metadata
for the antimalware application.
I'll send out a message to lock the codebase before making these modifications,
I expect I won't get to them until November or December.
I'll also be careful to make it so the Qemu and other modifications
are only active when a user wants to use this option. :-)
From @hamiltont:
Make sure you consult that andlantis paper before starting. Specifically, I believe tehy claim to have ability to do a block-level scan of the filesystem before and after running malware and report files modified, added, deleted, etc. Your paper will need to compare against that approach (or recreate it and provide some hard evidence on how well it works, I've found all the megadroid-esq papers lacking on hard details of how it works and how well it works).
PS - just in case you were not planning this, try to code it in your fork for now. I'm working on this a lot until Nov and migh force push sometimes so I don't want to blow up your work accidentally
|
priority
|
add option to introspect emulator s block devices from bamos i ll branch off into more fine grained issues when i run into them but the idea is to intercept block device writes from the emulators and stream the metadata using the gammaray tool from my research group this will probably involve modifying android s already modified qemu implementation and providing a publish subscribe queue of the filesystem metadata with gammaray when this is working i ll add a general feature to the web interface to see the block device writes in real time after this i ll add an option to let applications get snapshots of the metadata for the antimalware application i ll send out a message to lock the codebase before making these modifications i expect i won t get to them until november or december i ll also be careful to make it so the qemu and other modifications are only active when a user wants to use this option from hamiltont make sure you consult that andlantis paper before starting specifically i believe tehy claim to have ability to do a block level scan of the filesystem before and after running malware and report files modified added deleted etc your paper will need to compare against that approach or recreate it and provide some hard evidence on how well it works i ve found all the megadroid esq papers lacking on hard details of how it works and how well it works ps just in case you were not planning this try to code it in your fork for now i m working on this a lot until nov and migh force push sometimes so i don t want to blow up your work accidentally
| 1
|
784,077
| 27,557,057,106
|
IssuesEvent
|
2023-03-07 18:46:47
|
eclipse/dirigible
|
https://api.github.com/repos/eclipse/dirigible
|
opened
|
[UI] Disable two finger horizontal swipe gesture
|
enhancement web-ide usability priority-high efforts-medium
|
**Describe the bug**
In modern browsers, when using touchpads, the two finger horizontal swipe gesture equals pressing the back button. This is brilliant for normal web pages but in web apps such as Dirigible, it's very intrusive.
**Expected behavior**
The navigational gestures should be disabled.
**Desktop:**
- OS: Fedora Linux 36
- Browser: Firefox 110
- Version: Dirigible 7.1.6
|
1.0
|
[UI] Disable two finger horizontal swipe gesture - **Describe the bug**
In modern browsers, when using touchpads, the two finger horizontal swipe gesture equals pressing the back button. This is brilliant for normal web pages but in web apps such as Dirigible, it's very intrusive.
**Expected behavior**
The navigational gestures should be disabled.
**Desktop:**
- OS: Fedora Linux 36
- Browser: Firefox 110
- Version: Dirigible 7.1.6
|
priority
|
disable two finger horizontal swipe gesture describe the bug in modern browsers when using touchpads the two finger horizontal swipe gesture equals pressing the back button this is brilliant for normal web pages but in web apps such as dirigible it s very intrusive expected behavior the navigational gestures should be disabled desktop os fedora linux browser firefox version dirigible
| 1
|
538,822
| 15,778,976,479
|
IssuesEvent
|
2021-04-01 08:16:54
|
knurling-rs/probe-run
|
https://api.github.com/repos/knurling-rs/probe-run
|
closed
|
backtrace can infinte-loop.
|
difficulty: medium priority: high status: needs info type: bug
|
I'm seeing this behavior with `-C force-frame-pointers=no`.
I think it's to be expected that backtracing doesn't work correctly with it, but I think at least this should be detected and fail with `error: the stack appears to be corrupted beyond this point` instead of looping forever.
If there's interest I can try cooking a binary that reproduces this.
```
stack backtrace:
0: HardFaultTrampoline
<exception entry>
1: tester_gwc::sys::__cortex_m_rt_WDT
at ak/src/bin/../sys.rs:556
2: WDT
at ak/src/bin/../sys.rs:553
<exception entry>
3: <futures_util::future::select::Select<A,B> as core::future::future::Future>::poll
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.8/src/future/select.rs:95
4: tester_gwc::common::abort_on_keypress::{{closure}}
at ak/src/bin/../tester_common.rs:26
5: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
6: tester_gwc::test_network::{{closure}}
at ak/src/bin/tester_gwc.rs:61
7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
8: tester_gwc::main::{{closure}}
at ak/src/bin/tester_gwc.rs:43
9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
10: tester_gwc::sys::main_task::task::{{closure}}
at ak/src/bin/../sys.rs:196
11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
12: embassy::executor::Task<F>::poll
at /home/dirbaio/akiles/embassy/embassy/src/executor/mod.rs:132
13: core::cell::Cell<T>::get
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/cell.rs:432
14: embassy::executor::timer_queue::TimerQueue::update
at /home/dirbaio/akiles/embassy/embassy/src/executor/timer_queue.rs:34
15: embassy::executor::Executor::run::{{closure}}
at /home/dirbaio/akiles/embassy/embassy/src/executor/mod.rs:241
16: embassy::executor::run_queue::RunQueue::dequeue_all
at /home/dirbaio/akiles/embassy/embassy/src/executor/run_queue.rs:65
17: embassy::executor::Executor::run
at /home/dirbaio/akiles/embassy/embassy/src/executor/mod.rs:223
18: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
19: real_main
at ak/src/bin/../sys.rs:478
20: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
21: real_main
at ak/src/bin/../sys.rs:478
22: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
23: real_main
at ak/src/bin/../sys.rs:478
24: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
25: real_main
at ak/src/bin/../sys.rs:478
26: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
27: real_main
at ak/src/bin/../sys.rs:478
28: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
29: real_main
at ak/src/bin/../sys.rs:478
30: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
31: real_main
at ak/src/bin/../sys.rs:478
32: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
33: real_main
at ak/src/bin/../sys.rs:478
34: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
35: real_main
at ak/src/bin/../sys.rs:478
36: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
37: real_main
at ak/src/bin/../sys.rs:478
38: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
39: real_main
at ak/src/bin/../sys.rs:478
40: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
41: real_main
at ak/src/bin/../sys.rs:478
42: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
43: real_main
at ak/src/bin/../sys.rs:478
44: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
45: real_main
at ak/src/bin/../sys.rs:478
46: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
47: real_main
at ak/src/bin/../sys.rs:478
... this goes on forever
```
|
1.0
|
backtrace can infinte-loop. - I'm seeing this behavior with `-C force-frame-pointers=no`.
I think it's to be expected that backtracing doesn't work correctly with it, but I think at least this should be detected and fail with `error: the stack appears to be corrupted beyond this point` instead of looping forever.
If there's interest I can try cooking a binary that reproduces this.
```
stack backtrace:
0: HardFaultTrampoline
<exception entry>
1: tester_gwc::sys::__cortex_m_rt_WDT
at ak/src/bin/../sys.rs:556
2: WDT
at ak/src/bin/../sys.rs:553
<exception entry>
3: <futures_util::future::select::Select<A,B> as core::future::future::Future>::poll
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.8/src/future/select.rs:95
4: tester_gwc::common::abort_on_keypress::{{closure}}
at ak/src/bin/../tester_common.rs:26
5: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
6: tester_gwc::test_network::{{closure}}
at ak/src/bin/tester_gwc.rs:61
7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
8: tester_gwc::main::{{closure}}
at ak/src/bin/tester_gwc.rs:43
9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
10: tester_gwc::sys::main_task::task::{{closure}}
at ak/src/bin/../sys.rs:196
11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
12: embassy::executor::Task<F>::poll
at /home/dirbaio/akiles/embassy/embassy/src/executor/mod.rs:132
13: core::cell::Cell<T>::get
at /home/dirbaio/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/cell.rs:432
14: embassy::executor::timer_queue::TimerQueue::update
at /home/dirbaio/akiles/embassy/embassy/src/executor/timer_queue.rs:34
15: embassy::executor::Executor::run::{{closure}}
at /home/dirbaio/akiles/embassy/embassy/src/executor/mod.rs:241
16: embassy::executor::run_queue::RunQueue::dequeue_all
at /home/dirbaio/akiles/embassy/embassy/src/executor/run_queue.rs:65
17: embassy::executor::Executor::run
at /home/dirbaio/akiles/embassy/embassy/src/executor/mod.rs:223
18: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
19: real_main
at ak/src/bin/../sys.rs:478
20: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
21: real_main
at ak/src/bin/../sys.rs:478
22: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
23: real_main
at ak/src/bin/../sys.rs:478
24: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
25: real_main
at ak/src/bin/../sys.rs:478
26: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
27: real_main
at ak/src/bin/../sys.rs:478
28: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
29: real_main
at ak/src/bin/../sys.rs:478
30: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
31: real_main
at ak/src/bin/../sys.rs:478
32: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
33: real_main
at ak/src/bin/../sys.rs:478
34: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
35: real_main
at ak/src/bin/../sys.rs:478
36: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
37: real_main
at ak/src/bin/../sys.rs:478
38: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
39: real_main
at ak/src/bin/../sys.rs:478
40: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
41: real_main
at ak/src/bin/../sys.rs:478
42: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
43: real_main
at ak/src/bin/../sys.rs:478
44: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
45: real_main
at ak/src/bin/../sys.rs:478
46: cortex_m::asm::wfe
at /home/dirbaio/.cargo/registry/src/github.com-1ecc6299db9ec823/cortex-m-0.6.4/src/asm.rs:117
47: real_main
at ak/src/bin/../sys.rs:478
... this goes on forever
```
|
priority
|
backtrace can infinte loop i m seeing this behavior with c force frame pointers no i think it s to be expected that backtracing doesn t work correctly with it but i think at least this should be detected and fail with error the stack appears to be corrupted beyond this point instead of looping forever if there s interest i can try cooking a binary that reproduces this stack backtrace hardfaulttrampoline tester gwc sys cortex m rt wdt at ak src bin sys rs wdt at ak src bin sys rs as core future future future poll at home dirbaio cargo registry src github com futures util src future select rs tester gwc common abort on keypress closure at ak src bin tester common rs as core future future future poll at home dirbaio rustup toolchains nightly unknown linux gnu lib rustlib src rust library core src future mod rs tester gwc test network closure at ak src bin tester gwc rs as core future future future poll at home dirbaio rustup toolchains nightly unknown linux gnu lib rustlib src rust library core src future mod rs tester gwc main closure at ak src bin tester gwc rs as core future future future poll tester gwc sys main task task closure at ak src bin sys rs as core future future future poll at home dirbaio rustup toolchains nightly unknown linux gnu lib rustlib src rust library core src future mod rs embassy executor task poll at home dirbaio akiles embassy embassy src executor mod rs core cell cell get at home dirbaio rustup toolchains nightly unknown linux gnu lib rustlib src rust library core src cell rs embassy executor timer queue timerqueue update at home dirbaio akiles embassy embassy src executor timer queue rs embassy executor executor run closure at home dirbaio akiles embassy embassy src executor mod rs embassy executor run queue runqueue dequeue all at home dirbaio akiles embassy embassy src executor run queue rs embassy executor executor run at home dirbaio akiles embassy embassy src executor mod rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs cortex m asm wfe at home dirbaio cargo registry src github com cortex m src asm rs real main at ak src bin sys rs this goes on forever
| 1
|
696,541
| 23,904,620,748
|
IssuesEvent
|
2022-09-08 22:35:36
|
bcgov/digital-journeys
|
https://api.github.com/repos/bcgov/digital-journeys
|
opened
|
TELEWORK: 3+ days of submissions not firing confirmation emails after review from manager/supervisor
|
bug dev - TES Telework Agreement high priority
|
Background: final confirmation emails for 3+ days of telework are not firing after the supervisor has reviewed and submitted a decision
Steps to Reproduce: put in a submission for 3+ days of telework. As a manager/supervisor, review the application and put in a decision. The final confirmation emails for both supervisors and employees with pdf attachments are not received.
Actual Behavior: the final confirmation emails after the decision has been submitted by supervisors are not coming in for both the employee and manager/supervisor
Expected Behaviour: Both the employee and supervisor should be receiving confirmation emails with pdf attachments of the final decision made.
Environment: PROD
|
1.0
|
TELEWORK: 3+ days of submissions not firing confirmation emails after review from manager/supervisor - Background: final confirmation emails for 3+ days of telework are not firing after the supervisor has reviewed and submitted a decision
Steps to Reproduce: put in a submission for 3+ days of telework. As a manager/supervisor, review the application and put in a decision. The final confirmation emails for both supervisors and employees with pdf attachments are not received.
Actual Behavior: the final confirmation emails after the decision has been submitted by supervisors are not coming in for both the employee and manager/supervisor
Expected Behaviour: Both the employee and supervisor should be receiving confirmation emails with pdf attachments of the final decision made.
Environment: PROD
|
priority
|
telework days of submissions not firing confirmation emails after review from manager supervisor background final confirmation emails for days of telework are not firing after the supervisor has reviewed and submitted a decision steps to reproduce put in a submission for days of telework as a manager supervisor review the application and put in a decision the final confirmation emails for both supervisors and employees with pdf attachments are not received actual behavior the final confirmation emails after the decision has been submitted by supervisors are not coming in for both the employee and manager supervisor expected behaviour both the employee and supervisor should be receiving confirmation emails with pdf attachments of the final decision made environment prod
| 1
|
822,054
| 30,850,161,099
|
IssuesEvent
|
2023-08-02 16:08:58
|
labdao/plex
|
https://api.github.com/repos/labdao/plex
|
closed
|
[LAB-245] include bacalhau job metadata in IO
|
High priority Migrated
|
The IO should contain the bacalhau job id. I am not yet sure wether it should contain additional metadata such as the peerID
Usually this content is generated via "bacalhau describe *jobID*"
Link to the user [source](https://www.notion.so/labdao/Early-User-Alexandre-Miloski-b67de9d983054611b1e0e4b3fce0ccad?pvs=4)
<sub>From [SyncLinear.com](https://synclinear.com) | [LAB-245](https://linear.app/convexitylabs/issue/LAB-245/include-bacalhau-job-metadata-in-io)</sub>
|
1.0
|
[LAB-245] include bacalhau job metadata in IO - The IO should contain the bacalhau job id. I am not yet sure wether it should contain additional metadata such as the peerID
Usually this content is generated via "bacalhau describe *jobID*"
Link to the user [source](https://www.notion.so/labdao/Early-User-Alexandre-Miloski-b67de9d983054611b1e0e4b3fce0ccad?pvs=4)
<sub>From [SyncLinear.com](https://synclinear.com) | [LAB-245](https://linear.app/convexitylabs/issue/LAB-245/include-bacalhau-job-metadata-in-io)</sub>
|
priority
|
include bacalhau job metadata in io the io should contain the bacalhau job id i am not yet sure wether it should contain additional metadata such as the peerid usually this content is generated via bacalhau describe jobid link to the user from
| 1
|
610,007
| 18,891,885,066
|
IssuesEvent
|
2021-11-15 14:06:02
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
closed
|
FSE: Finalizing the name and menu item placement
|
[Type] Enhancement [Priority] High General Interface Needs Dev [Release] Do Not Punt [Feature] Full Site Editing
|
As we get close to releasing FSE into core, questions have come up **about what the feature will be officially called** within the WordPress Dashboard.
This is also currently a blocker for the Documentation team ([see more details in this post on Make](https://make.wordpress.org/docs/2021/02/18/a-home-and-a-name-for-site-editor-documentation-full-site-editing-feature/)). The sooner a decision is reached on the name, the sooner we can spin up proper documentation for it.
@mtias also mentioned that a discussion is needed around **the placement of the menu item and how it interacts with the other items within Appearance sub menu**.
Related issue: #29031
|
1.0
|
FSE: Finalizing the name and menu item placement - As we get close to releasing FSE into core, questions have come up **about what the feature will be officially called** within the WordPress Dashboard.
This is also currently a blocker for the Documentation team ([see more details in this post on Make](https://make.wordpress.org/docs/2021/02/18/a-home-and-a-name-for-site-editor-documentation-full-site-editing-feature/)). The sooner a decision is reached on the name, the sooner we can spin up proper documentation for it.
@mtias also mentioned that a discussion is needed around **the placement of the menu item and how it interacts with the other items within Appearance sub menu**.
Related issue: #29031
|
priority
|
fse finalizing the name and menu item placement as we get close to releasing fse into core questions have come up about what the feature will be officially called within the wordpress dashboard this is also currently a blocker for the documentation team the sooner a decision is reached on the name the sooner we can spin up proper documentation for it mtias also mentioned that a discussion is needed around the placement of the menu item and how it interacts with the other items within appearance sub menu related issue
| 1
|
316,135
| 9,637,238,204
|
IssuesEvent
|
2019-05-16 08:20:32
|
HGustavs/LenaSYS
|
https://api.github.com/repos/HGustavs/LenaSYS
|
closed
|
Zoombar buttons
|
Diagram W21MergeIssue gruppA2019 highPriority
|
The zoombar buttons have wrong styling. The + button is bigger than the - button. Make so both buttons have the same size
How it looks now

|
1.0
|
Zoombar buttons - The zoombar buttons have wrong styling. The + button is bigger than the - button. Make so both buttons have the same size
How it looks now

|
priority
|
zoombar buttons the zoombar buttons have wrong styling the button is bigger than the button make so both buttons have the same size how it looks now
| 1
|
782,700
| 27,503,811,937
|
IssuesEvent
|
2023-03-06 00:06:08
|
tedious/JShrink
|
https://api.github.com/repos/tedious/JShrink
|
closed
|
"Comments" are detected inside of Regex
|
Priority: High Status: Accepted Type: Bug
|
Error in script after Minify when script contents has some code. For example
return /^([a-z][a-z\d\+\-\.]*:)?\/\//i.test(url); minifed by return /^([a-z][a-z\d\+\-\.]*:)?\/\};
script minifed has errors
file Axios.js - https://github.com/axios/axios
|
1.0
|
"Comments" are detected inside of Regex - Error in script after Minify when script contents has some code. For example
return /^([a-z][a-z\d\+\-\.]*:)?\/\//i.test(url); minifed by return /^([a-z][a-z\d\+\-\.]*:)?\/\};
script minifed has errors
file Axios.js - https://github.com/axios/axios
|
priority
|
comments are detected inside of regex error in script after minify when script contents has some code for example return i test url minifed by return script minifed has errors file axios js
| 1
|
158,572
| 6,031,941,145
|
IssuesEvent
|
2017-06-09 01:19:59
|
bloomberg/bucklescript
|
https://api.github.com/repos/bloomberg/bucklescript
|
closed
|
BuckleScript produces syntactically incorrect js code
|
bug PRIORITY:HIGH
|
Consider the following program:
```
$ cat issues-bs/syntax3.ml
(let _s = (let h = true in fun x -> let j = fun f -> true in fun f -> 0) "" "" in 0)
```
It is vanilla OCaml which can be compiled and run with the bytecode compiler:
```
$ ocamlc -o issues-bs/syntax3.byte issues-bs/syntax3.ml
File "issues-bs/syntax3.ml", line 1, characters 15-16:
Warning 26: unused variable h.
File "issues-bs/syntax3.ml", line 1, characters 40-41:
Warning 26: unused variable j.
$ ./issues-bs/syntax3.byte
$
```
However BuckleScript compiles it into what my node installation (and SpiderMonkey) considers syntactically incorrect JavaScript:
```
$ ./node_modules/bs-platform/bin/bsc.exe issues-bs/syntax3.ml
File "issues-bs/syntax3.ml", line 1, characters 15-16:
Warning 26: unused variable h.
File "issues-bs/syntax3.ml", line 1, characters 40-41:
Warning 26: unused variable j.
$ cat issues-bs/syntax3.js
// Generated by BUCKLESCRIPT VERSION 1.7.4, PLEASE EDIT WITH CARE
'use strict';
function () {
return function () {
return 0;
};
}("")("");
/* Not a pure module */
$ node issues-bs/syntax3.js
/the/path/to/the/dir/issues-bs/syntax3.js:5
function () {
^
SyntaxError: Unexpected token (
at createScript (vm.js:56:10)
at Object.runInThisContext (vm.js:97:10)
at Module._compile (module.js:542:28)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.runMain (module.js:604:10)
at run (bootstrap_node.js:390:7)
at startup (bootstrap_node.js:150:9)
$ js17 issues-bs/syntax3.js
issues-bs/syntax3.js:5:0 SyntaxError: function statement requires a name:
issues-bs/syntax3.js:5:0 function () {
issues-bs/syntax3.js:5:0 .........^
```
This is with BuckleScript 1.7.4 (Using OCaml4.02.3+BS ) and node v6.10.3.
|
1.0
|
BuckleScript produces syntactically incorrect js code - Consider the following program:
```
$ cat issues-bs/syntax3.ml
(let _s = (let h = true in fun x -> let j = fun f -> true in fun f -> 0) "" "" in 0)
```
It is vanilla OCaml which can be compiled and run with the bytecode compiler:
```
$ ocamlc -o issues-bs/syntax3.byte issues-bs/syntax3.ml
File "issues-bs/syntax3.ml", line 1, characters 15-16:
Warning 26: unused variable h.
File "issues-bs/syntax3.ml", line 1, characters 40-41:
Warning 26: unused variable j.
$ ./issues-bs/syntax3.byte
$
```
However BuckleScript compiles it into what my node installation (and SpiderMonkey) considers syntactically incorrect JavaScript:
```
$ ./node_modules/bs-platform/bin/bsc.exe issues-bs/syntax3.ml
File "issues-bs/syntax3.ml", line 1, characters 15-16:
Warning 26: unused variable h.
File "issues-bs/syntax3.ml", line 1, characters 40-41:
Warning 26: unused variable j.
$ cat issues-bs/syntax3.js
// Generated by BUCKLESCRIPT VERSION 1.7.4, PLEASE EDIT WITH CARE
'use strict';
function () {
return function () {
return 0;
};
}("")("");
/* Not a pure module */
$ node issues-bs/syntax3.js
/the/path/to/the/dir/issues-bs/syntax3.js:5
function () {
^
SyntaxError: Unexpected token (
at createScript (vm.js:56:10)
at Object.runInThisContext (vm.js:97:10)
at Module._compile (module.js:542:28)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.runMain (module.js:604:10)
at run (bootstrap_node.js:390:7)
at startup (bootstrap_node.js:150:9)
$ js17 issues-bs/syntax3.js
issues-bs/syntax3.js:5:0 SyntaxError: function statement requires a name:
issues-bs/syntax3.js:5:0 function () {
issues-bs/syntax3.js:5:0 .........^
```
This is with BuckleScript 1.7.4 (Using OCaml4.02.3+BS ) and node v6.10.3.
|
priority
|
bucklescript produces syntactically incorrect js code consider the following program cat issues bs ml let s let h true in fun x let j fun f true in fun f in it is vanilla ocaml which can be compiled and run with the bytecode compiler ocamlc o issues bs byte issues bs ml file issues bs ml line characters warning unused variable h file issues bs ml line characters warning unused variable j issues bs byte however bucklescript compiles it into what my node installation and spidermonkey considers syntactically incorrect javascript node modules bs platform bin bsc exe issues bs ml file issues bs ml line characters warning unused variable h file issues bs ml line characters warning unused variable j cat issues bs js generated by bucklescript version please edit with care use strict function return function return not a pure module node issues bs js the path to the dir issues bs js function syntaxerror unexpected token at createscript vm js at object runinthiscontext vm js at module compile module js at object module extensions js module js at module load module js at trymoduleload module js at function module load module js at module runmain module js at run bootstrap node js at startup bootstrap node js issues bs js issues bs js syntaxerror function statement requires a name issues bs js function issues bs js this is with bucklescript using bs and node
| 1
|
123,019
| 4,848,649,295
|
IssuesEvent
|
2016-11-10 18:06:55
|
smartdevicelink/sdl_android
|
https://api.github.com/repos/smartdevicelink/sdl_android
|
closed
|
Null pointer excetpion in SdlBroadcastReceiver
|
bug high priority
|
See this [line](https://github.com/smartdevicelink/sdl_android/blob/master/sdl_android_lib/src/com/smartdevicelink/transport/SdlBroadcastReceiver.java#L105)
If extras are null it will cause the exception.
|
1.0
|
Null pointer excetpion in SdlBroadcastReceiver - See this [line](https://github.com/smartdevicelink/sdl_android/blob/master/sdl_android_lib/src/com/smartdevicelink/transport/SdlBroadcastReceiver.java#L105)
If extras are null it will cause the exception.
|
priority
|
null pointer excetpion in sdlbroadcastreceiver see this if extras are null it will cause the exception
| 1
|
528,907
| 15,376,838,555
|
IssuesEvent
|
2021-03-02 16:24:48
|
SiLeBAT/FSK-Lab
|
https://api.github.com/repos/SiLeBAT/FSK-Lab
|
closed
|
Update the controlled vocabulary for "Parameter data types"
|
FSK-Lab In Progress enhancement high priority
|
YYYY/MM/DD
and
YYYY.MM.DD
(because of a problem of compatibility between google sheet and excel)
|
1.0
|
Update the controlled vocabulary for "Parameter data types" - YYYY/MM/DD
and
YYYY.MM.DD
(because of a problem of compatibility between google sheet and excel)
|
priority
|
update the controlled vocabulary for parameter data types yyyy mm dd and yyyy mm dd because of a problem of compatibility between google sheet and excel
| 1
|
508,614
| 14,703,718,541
|
IssuesEvent
|
2021-01-04 15:26:43
|
geodesymiami/rsmas_insar
|
https://api.github.com/repos/geodesymiami/rsmas_insar
|
closed
|
Dask not working with current python environment
|
Priority: High Type: Bug
|
I installed the python environments (new and old) in your area (test/test2 and test/testold). Some description of the issue is also at https://github.com/insarlab/MintPy/issues/165 . Since then somebody confirmed that it does not work under PBS either. David's effort are documented at https://github.com/2gotgrossman/dask-rsmas-presentation . As I said, I spent lots of time to install the old environment using the old requirements files but did not get it work.
_(After opening the issue it occurred to me that this is not a rsmas_insar but a MintPy issue. The MintPy environment is simpler as it does not have ISCE. I did install a mintpy python environment (run `s.bmintpy`). It gives the same problem @yunjunz is also interested in this.)._
First run test data with the old (good) python environment (in `3rdparty` dir using `ln -s /projects/scratch/insarlab/famelung/MINICONDA3_GOOD miniconda3`):
```
s.btestold
cd /projects/scratch/insarlab/jaz101/unittestGalapagosSenDT128/mintpy
rm -rf wor* time* S1* *velo* *lock
ifgram_inversion.py /projects/scratch/insarlab/famelung/unittestGalapagosSenDT128/mintpy/inputs/ifgramStack.h5 -t /projects/scratch/insarlab/famelung/unittestGalapagosSenDT128/mintpy/smallbaselineApp.cfg --update
```
You will see the following output on the screen. Once you see the line 'FUTURE #1...` that means the first worker has completed its job.
```
/nethome/jaz101/test/testold/rsmas_insar/3rdparty/miniconda3/lib/python3.7/site-packages/distributed/deploy/local.py:106: UserWarning: diagnostics_port has been deprecated. Please use `dashboard_address=` instead
"diagnostics_port has been deprecated. "
JOB COMMAND CALLED FROM PYTHON: #!/bin/bash
#BSUB -J mintpy_bee
#BSUB -q general
#BSUB -P insarlab
#BSUB -n 2
#BSUB -R "span[hosts=1]"
#BSUB -M 4000
#BSUB -W 00:15
#BSUB -R "rusage[mem=2500]"
#BSUB -o worker_mintpy.%J.o
#BSUB -e worker_mintpy.%J.e
JOB_ID=${LSB_JOBID%.*}
/nethome/jaz101/test/testold/rsmas_insar/3rdparty/miniconda3/bin/python3 -m distributed.cli.dask_worker tcp://10.11.1.13:43577 --nthreads 2 --memory-limit 4.00GB --name mintpy_bee--${JOB_ID}-- --death-timeout 60 --interface ib0
0 [0, 0, 34, 1100]
1 [34, 0, 68, 1100]
2 [68, 0, 102, 1100]
3 [102, 0, 136, 1100]
4 [136, 0, 170, 1100]
5 [170, 0, 204, 1100]
6 [204, 0, 238, 1100]
7 [238, 0, 272, 1100]
8 [272, 0, 306, 1100]
9 [306, 0, 341, 1100]
10 [341, 0, 375, 1100]
11 [375, 0, 409, 1100]
12 [409, 0, 443, 1100]
13 [443, 0, 477, 1100]
14 [477, 0, 511, 1100]
15 [511, 0, 545, 1100]
16 [545, 0, 579, 1100]
17 [579, 0, 613, 1100]
18 [613, 0, 647, 1100]
19 [647, 0, 682, 1100]
20 [682, 0, 716, 1100]
21 [716, 0, 750, 1100]
22 [750, 0, 784, 1100]
23 [784, 0, 818, 1100]
24 [818, 0, 852, 1100]
25 [852, 0, 886, 1100]
26 [886, 0, 920, 1100]
27 [920, 0, 954, 1100]
28 [954, 0, 988, 1100]
29 [988, 0, 1023, 1100]
30 [1023, 0, 1057, 1100]
31 [1057, 0, 1091, 1100]
32 [1091, 0, 1125, 1100]
33 [1125, 0, 1159, 1100]
34 [1159, 0, 1193, 1100]
35 [1193, 0, 1227, 1100]
36 [1227, 0, 1261, 1100]
37 [1261, 0, 1295, 1100]
38 [1295, 0, 1329, 1100]
39 [1329, 0, 1364, 1100]
FUTURE #1 complete in 22.41086196899414 seconds. Box: [1329, 0, 1364, 1100] Time: 1576907836.4373446
FUTURE #2 complete in 22.615032196044922 seconds. Box: [341, 0, 375, 1100] Time: 1576907836.641515
FUTURE #3 complete in 23.67570185661316 seconds. Box: [204, 0, 238, 1100] Time: 1576907837.7021844
FUTURE #4 complete in 23.899144649505615 seconds. Box: [750, 0, 784, 1100] Time: 1576907837.9256275
FUTURE #5 complete in 33.91184902191162 seconds. Box: [1193, 0, 1227, 1100] Time: 1576907847.9383318
FUTURE #6 complete in 34.35519361495972 seconds. Box: [409, 0, 443, 1100] Time: 1576907848.3816762
FUTURE #7 complete in 34.4250373840332 seconds. Box: [1295, 0, 1329, 1100] Time: 1576907848.45152
FUTURE #8 complete in 34.43126916885376 seconds. Box: [1091, 0, 1125, 1100] Time: 1576907848.4577518
FUTURE #9 complete in 34.47014904022217 seconds. Box: [102, 0, 136, 1100] Time: 1576907848.4966319
FUTURE #10 complete in 34.49504494667053 seconds. Box: [1261, 0, 1295, 1100] Time: 1576907848.5215275
FUTURE #11 complete in 34.52674746513367 seconds. Box: [716, 0, 750, 1100] Time: 1576907848.55323
FUTURE #12 complete in 34.56773853302002 seconds. Box: [613, 0, 647, 1100] Time: 1576907848.5942214
FUTURE #13 complete in 34.633689165115356 seconds. Box: [579, 0, 613, 1100] Time: 1576907848.6601717
FUTURE #14 complete in 34.643046855926514 seconds. Box: [954, 0, 988, 1100] Time: 1576907848.6695294
FUTURE #15 complete in 34.737093687057495 seconds. Box: [1125, 0, 1159, 1100] Time: 1576907848.7635763
FUTURE #16 complete in 34.85588765144348 seconds. Box: [784, 0, 818, 1100] Time: 1576907848.8823705
FUTURE #17 complete in 34.89444017410278 seconds. Box: [1023, 0, 1057, 1100] Time: 1576907848.920923
FUTURE #18 complete in 34.98618984222412 seconds. Box: [272, 0, 306, 1100] Time: 1576907849.0126724
FUTURE #19 complete in 34.988025426864624 seconds. Box: [0, 0, 34, 1100] Time: 1576907849.0145073
FUTURE #20 complete in 35.06926655769348 seconds. Box: [852, 0, 886, 1100] Time: 1576907849.0957494
FUTURE #21 complete in 35.12981843948364 seconds. Box: [920, 0, 954, 1100] Time: 1576907849.1563005
FUTURE #22 complete in 35.1398344039917 seconds. Box: [511, 0, 545, 1100] Time: 1576907849.1663165
FUTURE #23 complete in 35.14792537689209 seconds. Box: [545, 0, 579, 1100] Time: 1576907849.1744082
FUTURE #24 complete in 35.26082181930542 seconds. Box: [1057, 0, 1091, 1100] Time: 1576907849.2873044
FUTURE #25 complete in 35.325475454330444 seconds. Box: [818, 0, 852, 1100] Time: 1576907849.3519585
FUTURE #26 complete in 35.357988357543945 seconds. Box: [682, 0, 716, 1100] Time: 1576907849.384471
FUTURE #27 complete in 35.36216115951538 seconds. Box: [443, 0, 477, 1100] Time: 1576907849.388643
FUTURE #28 complete in 35.36387801170349 seconds. Box: [477, 0, 511, 1100] Time: 1576907849.3903596
FUTURE #29 complete in 35.44611406326294 seconds. Box: [1329, 0, 1364, 1100] Time: 1576907849.4725966
FUTURE #30 complete in 35.512518644332886 seconds. Box: [988, 0, 1023, 1100] Time: 1576907849.5390012
FUTURE #31 complete in 35.62356638908386 seconds. Box: [1159, 0, 1193, 1100] Time: 1576907849.6500492
FUTURE #32 complete in 35.67281889915466 seconds. Box: [647, 0, 682, 1100] Time: 1576907849.6993017
FUTURE #33 complete in 35.694395303726196 seconds. Box: [375, 0, 409, 1100] Time: 1576907849.7208781
FUTURE #34 complete in 35.860108613967896 seconds. Box: [306, 0, 341, 1100] Time: 1576907849.8865912
FUTURE #35 complete in 35.878817319869995 seconds. Box: [886, 0, 920, 1100] Time: 1576907849.9052997
FUTURE #36 complete in 35.90852355957031 seconds. Box: [1227, 0, 1261, 1100] Time: 1576907849.9350061
FUTURE #37 complete in 36.021509885787964 seconds. Box: [34, 0, 68, 1100] Time: 1576907850.0479925
FUTURE #38 complete in 36.410053968429565 seconds. Box: [170, 0, 204, 1100] Time: 1576907850.4365366
FUTURE #39 complete in 36.676669120788574 seconds. Box: [68, 0, 102, 1100] Time: 1576907850.703151
FUTURE #40 complete in 36.9715633392334 seconds. Box: [136, 0, 170, 1100] Time: 1576907850.9980462
--------------------------------------------------
converting phase to range
calculating perpendicular baseline timeseries
...
```
To run the current (new) python environment (installed in `/3rparty` dir as described in https://github.com/geodesymiami/rsmas_insar/blob/master/docs/installation.md#installation-guide ) just do (after clearing your old environment) using
```
s.bnew
```
and the same commands above. You will see the screen output below, but the `FUTURE #1` will never show up. If you run `bjobs` you will see that the worker have been started but the don't run. They stop after the time-out period of 30 minutes.
```
/nethome/jaz101/test/test2/rsmas_insar/3rdparty/miniconda3/bin/python3 -m distributed.cli.dask_worker tcp://10.11.1.13:44169 --nthreads 2 --memory-limit 4.00GB --name mintpy_bee--${JOB_ID}-- --death-timeout 60 --interface ib0
0 [0, 0, 34, 1100]
1 [34, 0, 68, 1100]
2 [68, 0, 102, 1100]
3 [102, 0, 136, 1100]
4 [136, 0, 170, 1100]
5 [170, 0, 204, 1100]
6 [204, 0, 238, 1100]
7 [238, 0, 272, 1100]
8 [272, 0, 306, 1100]
9 [306, 0, 341, 1100]
10 [341, 0, 375, 1100]
11 [375, 0, 409, 1100]
12 [409, 0, 443, 1100]
13 [443, 0, 477, 1100]
14 [477, 0, 511, 1100]
15 [511, 0, 545, 1100]
16 [545, 0, 579, 1100]
17 [579, 0, 613, 1100]
18 [613, 0, 647, 1100]
19 [647, 0, 682, 1100]
20 [682, 0, 716, 1100]
21 [716, 0, 750, 1100]
22 [750, 0, 784, 1100]
23 [784, 0, 818, 1100]
24 [818, 0, 852, 1100]
25 [852, 0, 886, 1100]
26 [886, 0, 920, 1100]
27 [920, 0, 954, 1100]
28 [954, 0, 988, 1100]
29 [988, 0, 1023, 1100]
30 [1023, 0, 1057, 1100]
31 [1057, 0, 1091, 1100]
32 [1091, 0, 1125, 1100]
33 [1125, 0, 1159, 1100]
34 [1159, 0, 1193, 1100]
35 [1193, 0, 1227, 1100]
36 [1227, 0, 1261, 1100]
37 [1261, 0, 1295, 1100]
38 [1295, 0, 1329, 1100]
39 [1329, 0, 1364, 1100]
^Z
[1]+ Stopped ifgram_inversion.py /projects/scratch/insarlab/famelung/unittestGalapagosSenDT128/mintpy/inputs/ifgramStack.h5 -t /projects/scratch/insarlab/famelung/unittestGalapagosSenDT128/mintpy/smallbaselineApp.cfg --update
//login3/projects/scratch/insarlab/jaz101/unittestGalapagosSenDT128/mintpy[1004] bjobs
JOBID USER STAT QUEUE FROM_HOST EXEC_HOST JOB_NAME SUBMIT_TIME
23155989 jaz101 RUN general login3 2*n264 mintpy_bee Dec 21 01:16
23155991 jaz101 RUN general login3 2*n276 mintpy_bee Dec 21 01:16
23155990 jaz101 RUN general login3 2*n259 mintpy_bee Dec 21 01:16
23155994 jaz101 RUN general login3 2*n267 mintpy_bee Dec 21 01:16
```
|
1.0
|
Dask not working with current python environment - I installed the python environments (new and old) in your area (test/test2 and test/testold). Some description of the issue is also at https://github.com/insarlab/MintPy/issues/165 . Since then somebody confirmed that it does not work under PBS either. David's effort are documented at https://github.com/2gotgrossman/dask-rsmas-presentation . As I said, I spent lots of time to install the old environment using the old requirements files but did not get it work.
_(After opening the issue it occurred to me that this is not a rsmas_insar but a MintPy issue. The MintPy environment is simpler as it does not have ISCE. I did install a mintpy python environment (run `s.bmintpy`). It gives the same problem @yunjunz is also interested in this.)._
First run test data with the old (good) python environment (in `3rdparty` dir using `ln -s /projects/scratch/insarlab/famelung/MINICONDA3_GOOD miniconda3`):
```
s.btestold
cd /projects/scratch/insarlab/jaz101/unittestGalapagosSenDT128/mintpy
rm -rf wor* time* S1* *velo* *lock
ifgram_inversion.py /projects/scratch/insarlab/famelung/unittestGalapagosSenDT128/mintpy/inputs/ifgramStack.h5 -t /projects/scratch/insarlab/famelung/unittestGalapagosSenDT128/mintpy/smallbaselineApp.cfg --update
```
You will see the following output on the screen. Once you see the line 'FUTURE #1...` that means the first worker has completed its job.
```
/nethome/jaz101/test/testold/rsmas_insar/3rdparty/miniconda3/lib/python3.7/site-packages/distributed/deploy/local.py:106: UserWarning: diagnostics_port has been deprecated. Please use `dashboard_address=` instead
"diagnostics_port has been deprecated. "
JOB COMMAND CALLED FROM PYTHON: #!/bin/bash
#BSUB -J mintpy_bee
#BSUB -q general
#BSUB -P insarlab
#BSUB -n 2
#BSUB -R "span[hosts=1]"
#BSUB -M 4000
#BSUB -W 00:15
#BSUB -R "rusage[mem=2500]"
#BSUB -o worker_mintpy.%J.o
#BSUB -e worker_mintpy.%J.e
JOB_ID=${LSB_JOBID%.*}
/nethome/jaz101/test/testold/rsmas_insar/3rdparty/miniconda3/bin/python3 -m distributed.cli.dask_worker tcp://10.11.1.13:43577 --nthreads 2 --memory-limit 4.00GB --name mintpy_bee--${JOB_ID}-- --death-timeout 60 --interface ib0
0 [0, 0, 34, 1100]
1 [34, 0, 68, 1100]
2 [68, 0, 102, 1100]
3 [102, 0, 136, 1100]
4 [136, 0, 170, 1100]
5 [170, 0, 204, 1100]
6 [204, 0, 238, 1100]
7 [238, 0, 272, 1100]
8 [272, 0, 306, 1100]
9 [306, 0, 341, 1100]
10 [341, 0, 375, 1100]
11 [375, 0, 409, 1100]
12 [409, 0, 443, 1100]
13 [443, 0, 477, 1100]
14 [477, 0, 511, 1100]
15 [511, 0, 545, 1100]
16 [545, 0, 579, 1100]
17 [579, 0, 613, 1100]
18 [613, 0, 647, 1100]
19 [647, 0, 682, 1100]
20 [682, 0, 716, 1100]
21 [716, 0, 750, 1100]
22 [750, 0, 784, 1100]
23 [784, 0, 818, 1100]
24 [818, 0, 852, 1100]
25 [852, 0, 886, 1100]
26 [886, 0, 920, 1100]
27 [920, 0, 954, 1100]
28 [954, 0, 988, 1100]
29 [988, 0, 1023, 1100]
30 [1023, 0, 1057, 1100]
31 [1057, 0, 1091, 1100]
32 [1091, 0, 1125, 1100]
33 [1125, 0, 1159, 1100]
34 [1159, 0, 1193, 1100]
35 [1193, 0, 1227, 1100]
36 [1227, 0, 1261, 1100]
37 [1261, 0, 1295, 1100]
38 [1295, 0, 1329, 1100]
39 [1329, 0, 1364, 1100]
FUTURE #1 complete in 22.41086196899414 seconds. Box: [1329, 0, 1364, 1100] Time: 1576907836.4373446
FUTURE #2 complete in 22.615032196044922 seconds. Box: [341, 0, 375, 1100] Time: 1576907836.641515
FUTURE #3 complete in 23.67570185661316 seconds. Box: [204, 0, 238, 1100] Time: 1576907837.7021844
FUTURE #4 complete in 23.899144649505615 seconds. Box: [750, 0, 784, 1100] Time: 1576907837.9256275
FUTURE #5 complete in 33.91184902191162 seconds. Box: [1193, 0, 1227, 1100] Time: 1576907847.9383318
FUTURE #6 complete in 34.35519361495972 seconds. Box: [409, 0, 443, 1100] Time: 1576907848.3816762
FUTURE #7 complete in 34.4250373840332 seconds. Box: [1295, 0, 1329, 1100] Time: 1576907848.45152
FUTURE #8 complete in 34.43126916885376 seconds. Box: [1091, 0, 1125, 1100] Time: 1576907848.4577518
FUTURE #9 complete in 34.47014904022217 seconds. Box: [102, 0, 136, 1100] Time: 1576907848.4966319
FUTURE #10 complete in 34.49504494667053 seconds. Box: [1261, 0, 1295, 1100] Time: 1576907848.5215275
FUTURE #11 complete in 34.52674746513367 seconds. Box: [716, 0, 750, 1100] Time: 1576907848.55323
FUTURE #12 complete in 34.56773853302002 seconds. Box: [613, 0, 647, 1100] Time: 1576907848.5942214
FUTURE #13 complete in 34.633689165115356 seconds. Box: [579, 0, 613, 1100] Time: 1576907848.6601717
FUTURE #14 complete in 34.643046855926514 seconds. Box: [954, 0, 988, 1100] Time: 1576907848.6695294
FUTURE #15 complete in 34.737093687057495 seconds. Box: [1125, 0, 1159, 1100] Time: 1576907848.7635763
FUTURE #16 complete in 34.85588765144348 seconds. Box: [784, 0, 818, 1100] Time: 1576907848.8823705
FUTURE #17 complete in 34.89444017410278 seconds. Box: [1023, 0, 1057, 1100] Time: 1576907848.920923
FUTURE #18 complete in 34.98618984222412 seconds. Box: [272, 0, 306, 1100] Time: 1576907849.0126724
FUTURE #19 complete in 34.988025426864624 seconds. Box: [0, 0, 34, 1100] Time: 1576907849.0145073
FUTURE #20 complete in 35.06926655769348 seconds. Box: [852, 0, 886, 1100] Time: 1576907849.0957494
FUTURE #21 complete in 35.12981843948364 seconds. Box: [920, 0, 954, 1100] Time: 1576907849.1563005
FUTURE #22 complete in 35.1398344039917 seconds. Box: [511, 0, 545, 1100] Time: 1576907849.1663165
FUTURE #23 complete in 35.14792537689209 seconds. Box: [545, 0, 579, 1100] Time: 1576907849.1744082
FUTURE #24 complete in 35.26082181930542 seconds. Box: [1057, 0, 1091, 1100] Time: 1576907849.2873044
FUTURE #25 complete in 35.325475454330444 seconds. Box: [818, 0, 852, 1100] Time: 1576907849.3519585
FUTURE #26 complete in 35.357988357543945 seconds. Box: [682, 0, 716, 1100] Time: 1576907849.384471
FUTURE #27 complete in 35.36216115951538 seconds. Box: [443, 0, 477, 1100] Time: 1576907849.388643
FUTURE #28 complete in 35.36387801170349 seconds. Box: [477, 0, 511, 1100] Time: 1576907849.3903596
FUTURE #29 complete in 35.44611406326294 seconds. Box: [1329, 0, 1364, 1100] Time: 1576907849.4725966
FUTURE #30 complete in 35.512518644332886 seconds. Box: [988, 0, 1023, 1100] Time: 1576907849.5390012
FUTURE #31 complete in 35.62356638908386 seconds. Box: [1159, 0, 1193, 1100] Time: 1576907849.6500492
FUTURE #32 complete in 35.67281889915466 seconds. Box: [647, 0, 682, 1100] Time: 1576907849.6993017
FUTURE #33 complete in 35.694395303726196 seconds. Box: [375, 0, 409, 1100] Time: 1576907849.7208781
FUTURE #34 complete in 35.860108613967896 seconds. Box: [306, 0, 341, 1100] Time: 1576907849.8865912
FUTURE #35 complete in 35.878817319869995 seconds. Box: [886, 0, 920, 1100] Time: 1576907849.9052997
FUTURE #36 complete in 35.90852355957031 seconds. Box: [1227, 0, 1261, 1100] Time: 1576907849.9350061
FUTURE #37 complete in 36.021509885787964 seconds. Box: [34, 0, 68, 1100] Time: 1576907850.0479925
FUTURE #38 complete in 36.410053968429565 seconds. Box: [170, 0, 204, 1100] Time: 1576907850.4365366
FUTURE #39 complete in 36.676669120788574 seconds. Box: [68, 0, 102, 1100] Time: 1576907850.703151
FUTURE #40 complete in 36.9715633392334 seconds. Box: [136, 0, 170, 1100] Time: 1576907850.9980462
--------------------------------------------------
converting phase to range
calculating perpendicular baseline timeseries
...
```
To run the current (new) python environment (installed in `/3rparty` dir as described in https://github.com/geodesymiami/rsmas_insar/blob/master/docs/installation.md#installation-guide ) just do (after clearing your old environment) using
```
s.bnew
```
and the same commands above. You will see the screen output below, but the `FUTURE #1` will never show up. If you run `bjobs` you will see that the worker have been started but the don't run. They stop after the time-out period of 30 minutes.
```
/nethome/jaz101/test/test2/rsmas_insar/3rdparty/miniconda3/bin/python3 -m distributed.cli.dask_worker tcp://10.11.1.13:44169 --nthreads 2 --memory-limit 4.00GB --name mintpy_bee--${JOB_ID}-- --death-timeout 60 --interface ib0
0 [0, 0, 34, 1100]
1 [34, 0, 68, 1100]
2 [68, 0, 102, 1100]
3 [102, 0, 136, 1100]
4 [136, 0, 170, 1100]
5 [170, 0, 204, 1100]
6 [204, 0, 238, 1100]
7 [238, 0, 272, 1100]
8 [272, 0, 306, 1100]
9 [306, 0, 341, 1100]
10 [341, 0, 375, 1100]
11 [375, 0, 409, 1100]
12 [409, 0, 443, 1100]
13 [443, 0, 477, 1100]
14 [477, 0, 511, 1100]
15 [511, 0, 545, 1100]
16 [545, 0, 579, 1100]
17 [579, 0, 613, 1100]
18 [613, 0, 647, 1100]
19 [647, 0, 682, 1100]
20 [682, 0, 716, 1100]
21 [716, 0, 750, 1100]
22 [750, 0, 784, 1100]
23 [784, 0, 818, 1100]
24 [818, 0, 852, 1100]
25 [852, 0, 886, 1100]
26 [886, 0, 920, 1100]
27 [920, 0, 954, 1100]
28 [954, 0, 988, 1100]
29 [988, 0, 1023, 1100]
30 [1023, 0, 1057, 1100]
31 [1057, 0, 1091, 1100]
32 [1091, 0, 1125, 1100]
33 [1125, 0, 1159, 1100]
34 [1159, 0, 1193, 1100]
35 [1193, 0, 1227, 1100]
36 [1227, 0, 1261, 1100]
37 [1261, 0, 1295, 1100]
38 [1295, 0, 1329, 1100]
39 [1329, 0, 1364, 1100]
^Z
[1]+ Stopped ifgram_inversion.py /projects/scratch/insarlab/famelung/unittestGalapagosSenDT128/mintpy/inputs/ifgramStack.h5 -t /projects/scratch/insarlab/famelung/unittestGalapagosSenDT128/mintpy/smallbaselineApp.cfg --update
//login3/projects/scratch/insarlab/jaz101/unittestGalapagosSenDT128/mintpy[1004] bjobs
JOBID USER STAT QUEUE FROM_HOST EXEC_HOST JOB_NAME SUBMIT_TIME
23155989 jaz101 RUN general login3 2*n264 mintpy_bee Dec 21 01:16
23155991 jaz101 RUN general login3 2*n276 mintpy_bee Dec 21 01:16
23155990 jaz101 RUN general login3 2*n259 mintpy_bee Dec 21 01:16
23155994 jaz101 RUN general login3 2*n267 mintpy_bee Dec 21 01:16
```
|
priority
|
dask not working with current python environment i installed the python environments new and old in your area test and test testold some description of the issue is also at since then somebody confirmed that it does not work under pbs either david s effort are documented at as i said i spent lots of time to install the old environment using the old requirements files but did not get it work after opening the issue it occurred to me that this is not a rsmas insar but a mintpy issue the mintpy environment is simpler as it does not have isce i did install a mintpy python environment run s bmintpy it gives the same problem yunjunz is also interested in this first run test data with the old good python environment in dir using ln s projects scratch insarlab famelung good s btestold cd projects scratch insarlab mintpy rm rf wor time velo lock ifgram inversion py projects scratch insarlab famelung mintpy inputs ifgramstack t projects scratch insarlab famelung mintpy smallbaselineapp cfg update you will see the following output on the screen once you see the line future that means the first worker has completed its job nethome test testold rsmas insar lib site packages distributed deploy local py userwarning diagnostics port has been deprecated please use dashboard address instead diagnostics port has been deprecated job command called from python bin bash bsub j mintpy bee bsub q general bsub p insarlab bsub n bsub r span bsub m bsub w bsub r rusage bsub o worker mintpy j o bsub e worker mintpy j e job id lsb jobid nethome test testold rsmas insar bin m distributed cli dask worker tcp nthreads memory limit name mintpy bee job id death timeout interface future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time future complete in seconds box time converting phase to range calculating perpendicular baseline timeseries to run the current new python environment installed in dir as described in just do after clearing your old environment using s bnew and the same commands above you will see the screen output below but the future will never show up if you run bjobs you will see that the worker have been started but the don t run they stop after the time out period of minutes nethome test rsmas insar bin m distributed cli dask worker tcp nthreads memory limit name mintpy bee job id death timeout interface z stopped ifgram inversion py projects scratch insarlab famelung mintpy inputs ifgramstack t projects scratch insarlab famelung mintpy smallbaselineapp cfg update projects scratch insarlab mintpy bjobs jobid user stat queue from host exec host job name submit time run general mintpy bee dec run general mintpy bee dec run general mintpy bee dec run general mintpy bee dec
| 1
|
412,483
| 12,042,911,239
|
IssuesEvent
|
2020-04-14 11:27:51
|
ahmedkaludi/accelerated-mobile-pages
|
https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages
|
closed
|
Facebook instance article is not working when the permalink settings are in the plain
|
NEXT UPDATE [Priority: HIGH] bug
|
Facebook instance article is not working when the permalink settings are in the plain.
ref:https://secure.helpscout.net/conversation/1126596869/120314?folderId=3575684
|
1.0
|
Facebook instance article is not working when the permalink settings are in the plain - Facebook instance article is not working when the permalink settings are in the plain.
ref:https://secure.helpscout.net/conversation/1126596869/120314?folderId=3575684
|
priority
|
facebook instance article is not working when the permalink settings are in the plain facebook instance article is not working when the permalink settings are in the plain ref
| 1
|
532,975
| 15,574,758,454
|
IssuesEvent
|
2021-03-17 10:13:44
|
telerik/kendo-ui-core
|
https://api.github.com/repos/telerik/kendo-ui-core
|
closed
|
Pasting a table from Word into the Editor throws a js exception (IE11)
|
Bug C: Editor FP: In Development Kendo2 Priority 5 SEV: High
|
### Bug report
Regression introduced in R1 2021 SP1. Related to: #6169
### Reproduction of the problem
Reproducible in the demos in IE11.
1. Create a table in a Word document.
2. Copy the table and paste it in the Editor.
### Current behavior
A js exception is thrown.
### Expected/desired behavior
No exceptions on pasting the table.
### Environment
* **Kendo UI version:** 2021.1.224
* **jQuery version:** x.y
* **Browser:** [IE11 ]
|
1.0
|
Pasting a table from Word into the Editor throws a js exception (IE11) - ### Bug report
Regression introduced in R1 2021 SP1. Related to: #6169
### Reproduction of the problem
Reproducible in the demos in IE11.
1. Create a table in a Word document.
2. Copy the table and paste it in the Editor.
### Current behavior
A js exception is thrown.
### Expected/desired behavior
No exceptions on pasting the table.
### Environment
* **Kendo UI version:** 2021.1.224
* **jQuery version:** x.y
* **Browser:** [IE11 ]
|
priority
|
pasting a table from word into the editor throws a js exception bug report regression introduced in related to reproduction of the problem reproducible in the demos in create a table in a word document copy the table and paste it in the editor current behavior a js exception is thrown expected desired behavior no exceptions on pasting the table environment kendo ui version jquery version x y browser
| 1
|
158,021
| 6,020,546,832
|
IssuesEvent
|
2017-06-07 16:41:01
|
ampproject/amphtml
|
https://api.github.com/repos/ampproject/amphtml
|
reopened
|
Amp carousel issue
|
Category: AMP Caches P1: High Priority
|
AMP Carousel is not showing , when it served from google cache.
google caching url in mobile is :- https://www.google.co.in/amp/s/www.lybrate.com/amp/delhi/sexologist
And it's also not visible from cdn server: https://www-lybrate-com.cdn.ampproject.org/v/s/www.lybrate.com/amp/delhi/sexologist?amp_js_v=0.1
while amp carousel visible on amp Cloudflare
https://www-lybrate-com.amp.cloudflare.com/c/s/www.lybrate.com/amp/delhi/sexologist
<img width="311" alt="screen shot 2017-06-06 at 6 26 02 pm" src="https://cloud.githubusercontent.com/assets/16610202/26830321/1dca20ba-4ae6-11e7-8586-004dc98ac95c.png">
<img width="576" alt="screen shot 2017-06-06 at 6 26 28 pm" src="https://cloud.githubusercontent.com/assets/16610202/26830330/2ca3d482-4ae6-11e7-9491-7a5604ac394b.png">
My Page amp link is as :
https://www.lybrate.com/amp/delhi/sexologist
Plz help me out , what is wrong with this??
Images :
|
1.0
|
Amp carousel issue - AMP Carousel is not showing , when it served from google cache.
google caching url in mobile is :- https://www.google.co.in/amp/s/www.lybrate.com/amp/delhi/sexologist
And it's also not visible from cdn server: https://www-lybrate-com.cdn.ampproject.org/v/s/www.lybrate.com/amp/delhi/sexologist?amp_js_v=0.1
while amp carousel visible on amp Cloudflare
https://www-lybrate-com.amp.cloudflare.com/c/s/www.lybrate.com/amp/delhi/sexologist
<img width="311" alt="screen shot 2017-06-06 at 6 26 02 pm" src="https://cloud.githubusercontent.com/assets/16610202/26830321/1dca20ba-4ae6-11e7-8586-004dc98ac95c.png">
<img width="576" alt="screen shot 2017-06-06 at 6 26 28 pm" src="https://cloud.githubusercontent.com/assets/16610202/26830330/2ca3d482-4ae6-11e7-9491-7a5604ac394b.png">
My Page amp link is as :
https://www.lybrate.com/amp/delhi/sexologist
Plz help me out , what is wrong with this??
Images :
|
priority
|
amp carousel issue amp carousel is not showing when it served from google cache google caching url in mobile is and it s also not visible from cdn server while amp carousel visible on amp cloudflare img width alt screen shot at pm src img width alt screen shot at pm src my page amp link is as plz help me out what is wrong with this images
| 1
|
387,727
| 11,467,180,267
|
IssuesEvent
|
2020-02-08 02:43:41
|
nchen000/parashop_frontend
|
https://api.github.com/repos/nchen000/parashop_frontend
|
opened
|
CreateListForm (2)
|
High priority enhancement
|
Technical Background:
- user should be able to create a new list
Acceptance Criteria:
- allow user to create a new list from `HomeScreen`
- redirect user to `ViewListScreen` after a successful creation
|
1.0
|
CreateListForm (2) - Technical Background:
- user should be able to create a new list
Acceptance Criteria:
- allow user to create a new list from `HomeScreen`
- redirect user to `ViewListScreen` after a successful creation
|
priority
|
createlistform technical background user should be able to create a new list acceptance criteria allow user to create a new list from homescreen redirect user to viewlistscreen after a successful creation
| 1
|
383,330
| 11,354,628,194
|
IssuesEvent
|
2020-01-24 18:04:32
|
DarshanShet777/Model-Airport
|
https://api.github.com/repos/DarshanShet777/Model-Airport
|
closed
|
Structural Engineering: Inner Section Foundation 2 More Columns Necessary
|
High Priority
|
- [x] Build Columns
- [x] Outline column parts on scrap printing paper
- [x] Cut parts
- [x] Assemble parts to create columns
- [ ] Build Foundation 2 Outer Section
- [x] Align layers over Foundation 1 and Columns
- [x] Insert screws where columns are present
- [ ] Reinsert screws with wood glue to secure layers
- [ ] Build Foundation 2 Inner Section
- [x] Align layers over Foundation 1 and Columns
- [ ] Insert screws where columns are present
- [ ] Reinsert screws with wood glue to secure layers
|
1.0
|
Structural Engineering: Inner Section Foundation 2 More Columns Necessary - - [x] Build Columns
- [x] Outline column parts on scrap printing paper
- [x] Cut parts
- [x] Assemble parts to create columns
- [ ] Build Foundation 2 Outer Section
- [x] Align layers over Foundation 1 and Columns
- [x] Insert screws where columns are present
- [ ] Reinsert screws with wood glue to secure layers
- [ ] Build Foundation 2 Inner Section
- [x] Align layers over Foundation 1 and Columns
- [ ] Insert screws where columns are present
- [ ] Reinsert screws with wood glue to secure layers
|
priority
|
structural engineering inner section foundation more columns necessary build columns outline column parts on scrap printing paper cut parts assemble parts to create columns build foundation outer section align layers over foundation and columns insert screws where columns are present reinsert screws with wood glue to secure layers build foundation inner section align layers over foundation and columns insert screws where columns are present reinsert screws with wood glue to secure layers
| 1
|
518,813
| 15,035,250,758
|
IssuesEvent
|
2021-02-02 13:55:50
|
YangCatalog/search
|
https://api.github.com/repos/YangCatalog/search
|
closed
|
Yang files indexing update
|
Priority: High enhancement
|
When we have a yang file that is too big it will create json file that needs to be indexed to elasticsearch way too big to be processed without any problemes. Split this json into smaller chunks and put them into elk separately
|
1.0
|
Yang files indexing update - When we have a yang file that is too big it will create json file that needs to be indexed to elasticsearch way too big to be processed without any problemes. Split this json into smaller chunks and put them into elk separately
|
priority
|
yang files indexing update when we have a yang file that is too big it will create json file that needs to be indexed to elasticsearch way too big to be processed without any problemes split this json into smaller chunks and put them into elk separately
| 1
|
468,292
| 13,464,773,470
|
IssuesEvent
|
2020-09-09 19:45:24
|
ucfopen/Obojobo
|
https://api.github.com/repos/ucfopen/Obojobo
|
opened
|
Lock Navigation During Attempts missing 'onStartAttempt' trigger
|
bug editor high priority
|
The 'Lock Navigation During Attempts' feature automatically adds some triggers to ensuring that when an attempt is started the navigation is locked. However it's missing an 'onStartAttempt' trigger which should run the lock navigation action. If you don't apply the 'lock navigation' action on your Start Assessment button then students can leave the assessment.
|
1.0
|
Lock Navigation During Attempts missing 'onStartAttempt' trigger - The 'Lock Navigation During Attempts' feature automatically adds some triggers to ensuring that when an attempt is started the navigation is locked. However it's missing an 'onStartAttempt' trigger which should run the lock navigation action. If you don't apply the 'lock navigation' action on your Start Assessment button then students can leave the assessment.
|
priority
|
lock navigation during attempts missing onstartattempt trigger the lock navigation during attempts feature automatically adds some triggers to ensuring that when an attempt is started the navigation is locked however it s missing an onstartattempt trigger which should run the lock navigation action if you don t apply the lock navigation action on your start assessment button then students can leave the assessment
| 1
|
281,923
| 8,700,872,814
|
IssuesEvent
|
2018-12-05 09:58:01
|
AICrowd/ai-crowd-3
|
https://api.github.com/repos/AICrowd/ai-crowd-3
|
closed
|
Leaderboard V2
|
feature high priority
|
_From @seanfcarroll on September 28, 2017 08:27_
The crowdAI leaderboard will be redesigned and rebuilt. Changes are:
- Leaderboard changes from being a modal to a full page with a shortened url and buttons to share on Facebook and Twitter
- Comments and likes are available on this full page
- Closing the page returns to the leaderboard
- A challenge-level configuration setting (eg: 1 day) tracks leaderboard rankings and shows up and down arrows on the leaderboard. eg: https://github.com/crowdAI/crowdai/issues/287
- If the logged in participant is on the leaderboard, but not in the top 10, the leaderboard will hide the intervening rows, so that the participant can see their submission without scrolling the entire leaderboard. Pressing the load more button reverts to the original leaderboard.
This issue closes the following tickets (reference for more details):
https://github.com/crowdAI/crowdai/issues/321
https://github.com/crowdAI/crowdai/issues/322
https://github.com/crowdAI/crowdai/issues/303
https://github.com/crowdAI/crowdai/issues/296
https://github.com/crowdAI/crowdai/issues/292
https://github.com/crowdAI/crowdai/issues/287
https://github.com/crowdAI/crowdai/issues/286


_Copied from original issue: crowdAI/crowdai#324_
|
1.0
|
Leaderboard V2 - _From @seanfcarroll on September 28, 2017 08:27_
The crowdAI leaderboard will be redesigned and rebuilt. Changes are:
- Leaderboard changes from being a modal to a full page with a shortened url and buttons to share on Facebook and Twitter
- Comments and likes are available on this full page
- Closing the page returns to the leaderboard
- A challenge-level configuration setting (eg: 1 day) tracks leaderboard rankings and shows up and down arrows on the leaderboard. eg: https://github.com/crowdAI/crowdai/issues/287
- If the logged in participant is on the leaderboard, but not in the top 10, the leaderboard will hide the intervening rows, so that the participant can see their submission without scrolling the entire leaderboard. Pressing the load more button reverts to the original leaderboard.
This issue closes the following tickets (reference for more details):
https://github.com/crowdAI/crowdai/issues/321
https://github.com/crowdAI/crowdai/issues/322
https://github.com/crowdAI/crowdai/issues/303
https://github.com/crowdAI/crowdai/issues/296
https://github.com/crowdAI/crowdai/issues/292
https://github.com/crowdAI/crowdai/issues/287
https://github.com/crowdAI/crowdai/issues/286


_Copied from original issue: crowdAI/crowdai#324_
|
priority
|
leaderboard from seanfcarroll on september the crowdai leaderboard will be redesigned and rebuilt changes are leaderboard changes from being a modal to a full page with a shortened url and buttons to share on facebook and twitter comments and likes are available on this full page closing the page returns to the leaderboard a challenge level configuration setting eg day tracks leaderboard rankings and shows up and down arrows on the leaderboard eg if the logged in participant is on the leaderboard but not in the top the leaderboard will hide the intervening rows so that the participant can see their submission without scrolling the entire leaderboard pressing the load more button reverts to the original leaderboard this issue closes the following tickets reference for more details copied from original issue crowdai crowdai
| 1
|
181,379
| 6,659,194,971
|
IssuesEvent
|
2017-10-01 07:47:40
|
k0shk0sh/FastHub
|
https://api.github.com/repos/k0shk0sh/FastHub
|
closed
|
Invalid indentation for quotation and header
|
Priority: High Status: Completed Type: Enhancement
|
**FastHub Version: 4.3.0**
**Android Version: 7.0 (SDK: 24)**
**Device Information:**
- **Manufacturer:** samsung
- **Brand:** samsung
- **Model:** Galaxy Tab S2 8.0
---
Markdown renderer doesn't render the correct indentation for quotation and headers. See a sample below.
If you open this issue in a browser, you'll see the correct indentation.
As I mentioned in other issue, it's important for this app to have valid markdown rendering, as otherwise I'd need to see how it looks in browser when I update something important (e.g. `Readme`). Also it's annoying when you compose a comment and add unnecessary line breaks because your text looks shitty, and later it appears that markdown is fine and the issue is with the app instead.. :disappointed:
Thank you for your time! :smile_cat:
-------
Here is some text.
> This quotation is expected to have indent between the previous line - same as between the line below.
Some reply to the quote.
Here is some text.
### This header is expected to have indent between the previous line.
|
1.0
|
Invalid indentation for quotation and header - **FastHub Version: 4.3.0**
**Android Version: 7.0 (SDK: 24)**
**Device Information:**
- **Manufacturer:** samsung
- **Brand:** samsung
- **Model:** Galaxy Tab S2 8.0
---
Markdown renderer doesn't render the correct indentation for quotation and headers. See a sample below.
If you open this issue in a browser, you'll see the correct indentation.
As I mentioned in other issue, it's important for this app to have valid markdown rendering, as otherwise I'd need to see how it looks in browser when I update something important (e.g. `Readme`). Also it's annoying when you compose a comment and add unnecessary line breaks because your text looks shitty, and later it appears that markdown is fine and the issue is with the app instead.. :disappointed:
Thank you for your time! :smile_cat:
-------
Here is some text.
> This quotation is expected to have indent between the previous line - same as between the line below.
Some reply to the quote.
Here is some text.
### This header is expected to have indent between the previous line.
|
priority
|
invalid indentation for quotation and header fasthub version android version sdk device information manufacturer samsung brand samsung model galaxy tab markdown renderer doesn t render the correct indentation for quotation and headers see a sample below if you open this issue in a browser you ll see the correct indentation as i mentioned in other issue it s important for this app to have valid markdown rendering as otherwise i d need to see how it looks in browser when i update something important e g readme also it s annoying when you compose a comment and add unnecessary line breaks because your text looks shitty and later it appears that markdown is fine and the issue is with the app instead disappointed thank you for your time smile cat here is some text this quotation is expected to have indent between the previous line same as between the line below some reply to the quote here is some text this header is expected to have indent between the previous line
| 1
|
484,418
| 13,939,585,218
|
IssuesEvent
|
2020-10-22 16:41:22
|
interferences-at/mpop
|
https://api.github.com/repos/interferences-at/mpop
|
closed
|
Update all the English text for each multiple-question pages
|
QML difficulty: medium kiosk_central mpop_kiosk priority: high
|
The text are detailed in #84
There is QML model for the questions.
Ask @aalex in case of doubt about which question has which text.
`ModelQuestions.qml`
Also update min/max text for both languages.
|
1.0
|
Update all the English text for each multiple-question pages - The text are detailed in #84
There is QML model for the questions.
Ask @aalex in case of doubt about which question has which text.
`ModelQuestions.qml`
Also update min/max text for both languages.
|
priority
|
update all the english text for each multiple question pages the text are detailed in there is qml model for the questions ask aalex in case of doubt about which question has which text modelquestions qml also update min max text for both languages
| 1
|
676,785
| 23,137,759,900
|
IssuesEvent
|
2022-07-28 15:34:07
|
opendatahub-io/odh-dashboard
|
https://api.github.com/repos/opendatahub-io/odh-dashboard
|
closed
|
[KFNBC]: File -> Log out does not work
|
kind/bug notebook-controller priority/high
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Clicking on File -> Log Out brings up an error message

Clicking on `dashboard` simply reloads the JL interface.
### Expected Behavior
Logs the current user out, server keeps running
### Steps To Reproduce
Spawn server
Click on `Log Out` in the `File` menu
Click on `dashboard` in the error message
### Workaround (if any)
_No response_
### OpenShift Version
_No response_
### Openshift Version
_No response_
### What browsers are you seeing the problem on?
_No response_
### Open Data Hub Version
_No response_
### Relevant log output
_No response_
|
1.0
|
[KFNBC]: File -> Log out does not work - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Clicking on File -> Log Out brings up an error message

Clicking on `dashboard` simply reloads the JL interface.
### Expected Behavior
Logs the current user out, server keeps running
### Steps To Reproduce
Spawn server
Click on `Log Out` in the `File` menu
Click on `dashboard` in the error message
### Workaround (if any)
_No response_
### OpenShift Version
_No response_
### Openshift Version
_No response_
### What browsers are you seeing the problem on?
_No response_
### Open Data Hub Version
_No response_
### Relevant log output
_No response_
|
priority
|
file log out does not work is there an existing issue for this i have searched the existing issues current behavior clicking on file log out brings up an error message clicking on dashboard simply reloads the jl interface expected behavior logs the current user out server keeps running steps to reproduce spawn server click on log out in the file menu click on dashboard in the error message workaround if any no response openshift version no response openshift version no response what browsers are you seeing the problem on no response open data hub version no response relevant log output no response
| 1
|
461,292
| 13,228,101,627
|
IssuesEvent
|
2020-08-18 05:17:51
|
moonwards1/Moonwards-Virtual-Moon
|
https://api.github.com/repos/moonwards1/Moonwards-Virtual-Moon
|
closed
|
Create material for Factory Foundations and Launch Pads
|
Department: Graphics/GFX Priority: High Type: Feature
|
This is similar to the road, but a little darker - a medium light gray. Like all surfaces for vehicles or people, it needs to be rough so traction is as good as possible. These areas are made of cast stone. In this case, seams between large rectangular sections can be visible.
https://www.pinterest.com.mx/holder3884/factory-floors-and-launch-pads-cast-stone/
|
1.0
|
Create material for Factory Foundations and Launch Pads - This is similar to the road, but a little darker - a medium light gray. Like all surfaces for vehicles or people, it needs to be rough so traction is as good as possible. These areas are made of cast stone. In this case, seams between large rectangular sections can be visible.
https://www.pinterest.com.mx/holder3884/factory-floors-and-launch-pads-cast-stone/
|
priority
|
create material for factory foundations and launch pads this is similar to the road but a little darker a medium light gray like all surfaces for vehicles or people it needs to be rough so traction is as good as possible these areas are made of cast stone in this case seams between large rectangular sections can be visible
| 1
|
787,126
| 27,707,414,824
|
IssuesEvent
|
2023-03-14 12:10:14
|
AY2223S2-CS2113-T14-3/tp
|
https://api.github.com/repos/AY2223S2-CS2113-T14-3/tp
|
closed
|
As a user, I can view my previous inputs
|
type.Task priority.High
|
so that I can view and edit past records while adding new ones
|
1.0
|
As a user, I can view my previous inputs - so that I can view and edit past records while adding new ones
|
priority
|
as a user i can view my previous inputs so that i can view and edit past records while adding new ones
| 1
|
352,980
| 10,547,760,788
|
IssuesEvent
|
2019-10-03 02:38:03
|
wso2/devstudio-tooling-ei
|
https://api.github.com/repos/wso2/devstudio-tooling-ei
|
closed
|
Endpoint inside template is not supported - tooling
|
Priority/High
|
Doc: https://docs.wso2.com/display/EI6xx/Sample+752%3A+Load+Balancing+Between+3+Endpoints+With+Endpoint+Templates
```
<template name="endpoint_template">
<parameter name="suspend_duration"/>
<endpoint name="annonymous">
<address uri="$uri">
<enableAddressing/>
<suspendDurationOnFailure>$suspend_duration</suspendDurationOnFailure>
</address>
</endpoint>
</template>
```
Following error occurs when the above content is pasted in to source view of a template xml file.

|
1.0
|
Endpoint inside template is not supported - tooling - Doc: https://docs.wso2.com/display/EI6xx/Sample+752%3A+Load+Balancing+Between+3+Endpoints+With+Endpoint+Templates
```
<template name="endpoint_template">
<parameter name="suspend_duration"/>
<endpoint name="annonymous">
<address uri="$uri">
<enableAddressing/>
<suspendDurationOnFailure>$suspend_duration</suspendDurationOnFailure>
</address>
</endpoint>
</template>
```
Following error occurs when the above content is pasted in to source view of a template xml file.

|
priority
|
endpoint inside template is not supported tooling doc suspend duration following error occurs when the above content is pasted in to source view of a template xml file
| 1
|
322,369
| 9,816,900,648
|
IssuesEvent
|
2019-06-13 15:33:07
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
closed
|
[studio] Quick Create API
|
new feature priority: high
|
@see #2947
Suggested response:
```js
{
"items": [
{
"label": "Press Article",
"contentTypeId": "/page/article",
"path": "",
/* ...other needed props to open form */
}
]
}
```
|
1.0
|
[studio] Quick Create API - @see #2947
Suggested response:
```js
{
"items": [
{
"label": "Press Article",
"contentTypeId": "/page/article",
"path": "",
/* ...other needed props to open form */
}
]
}
```
|
priority
|
quick create api see suggested response js items label press article contenttypeid page article path other needed props to open form
| 1
|
292,905
| 8,970,330,067
|
IssuesEvent
|
2019-01-29 13:22:32
|
richelbilderbeek/pirouette
|
https://api.github.com/repos/richelbilderbeek/pirouette
|
closed
|
Review pirouette manuscript
|
High priority depends
|
Thanks @Giappo for the feedback! I processed all of it and added an introduction.
I hope you could you review this new version :1st_place_medal:
|
1.0
|
Review pirouette manuscript - Thanks @Giappo for the feedback! I processed all of it and added an introduction.
I hope you could you review this new version :1st_place_medal:
|
priority
|
review pirouette manuscript thanks giappo for the feedback i processed all of it and added an introduction i hope you could you review this new version place medal
| 1
|
710,659
| 24,427,049,883
|
IssuesEvent
|
2022-10-06 04:21:03
|
HiAvatar/backend
|
https://api.github.com/repos/HiAvatar/backend
|
closed
|
rds에서 time_zone 설정을 한국으로 변경
|
Type: Bug Priority: High
|
### Description
데이터베이스의 timestamp 값이 한국 표준 시간보다 9시간 늦게 적용되므로 rds에서 수정이 필요해보임.
<br>
### Todo List
- [x] 한국 시간으로 변경하기
|
1.0
|
rds에서 time_zone 설정을 한국으로 변경 - ### Description
데이터베이스의 timestamp 값이 한국 표준 시간보다 9시간 늦게 적용되므로 rds에서 수정이 필요해보임.
<br>
### Todo List
- [x] 한국 시간으로 변경하기
|
priority
|
rds에서 time zone 설정을 한국으로 변경 description 데이터베이스의 timestamp 값이 한국 표준 시간보다 늦게 적용되므로 rds에서 수정이 필요해보임 todo list 한국 시간으로 변경하기
| 1
|
345,367
| 10,361,532,576
|
IssuesEvent
|
2019-09-06 10:15:04
|
WoWManiaUK/Blackwing-Lair
|
https://api.github.com/repos/WoWManiaUK/Blackwing-Lair
|
closed
|
[NPC] Lady Naz'jar - Throne of The Tides Heroic.
|
Dungeon/Raid Fixed in Dev Priority-High
|
**Links:**
https://www.wowhead.com/npc=40586/lady-nazjar
from WoWHead
**What is happening:**
She hits only for 184 dmg - 314 dmg the tank.
**What should happen:**
She must hits the tank for 10000 - 19.970 Physical Damage
https://www.youtube.com/watch?v=NbpiKcDCSL4
Looking inside the video, we note that the tank suffers from 9000 to 19.970 damage, which is not the case on our server.

This is the damage from our Lady Naz'jar.
|
1.0
|
[NPC] Lady Naz'jar - Throne of The Tides Heroic. - **Links:**
https://www.wowhead.com/npc=40586/lady-nazjar
from WoWHead
**What is happening:**
She hits only for 184 dmg - 314 dmg the tank.
**What should happen:**
She must hits the tank for 10000 - 19.970 Physical Damage
https://www.youtube.com/watch?v=NbpiKcDCSL4
Looking inside the video, we note that the tank suffers from 9000 to 19.970 damage, which is not the case on our server.

This is the damage from our Lady Naz'jar.
|
priority
|
lady naz jar throne of the tides heroic links from wowhead what is happening she hits only for dmg dmg the tank what should happen she must hits the tank for physical damage looking inside the video we note that the tank suffers from to damage which is not the case on our server this is the damage from our lady naz jar
| 1
|
333,984
| 10,134,313,081
|
IssuesEvent
|
2019-08-02 07:08:53
|
AbsaOSS/enceladus
|
https://api.github.com/repos/AbsaOSS/enceladus
|
closed
|
Cache client-side REST calls for immutable data
|
Menas feature priority: high
|
## Background
Currently, the Menas UI makes a lot of repetitive REST calls to the back-end to fetch datasets, the datasets' schemas, the mapping tables used in the datasets' mapping rules and the mapping tables' schemas. This happens on dataset load, and on adding/editing a conformance rule. In addition to creating heavy traffic, it can cause unexpected latency-related issues.
## Feature
We can reduce this by caching the values of requests on immutable data. The possible downside is that this may eat up a lot of memory if users never close their tab.
## Proposed Solution
Add a map to the RestDAO object which maintains a list of previously executed queries and their results, specifically for:
1. `Dataset` name and version
2. `Schema` name and version
3. `Mapping table` name and version
|
1.0
|
Cache client-side REST calls for immutable data - ## Background
Currently, the Menas UI makes a lot of repetitive REST calls to the back-end to fetch datasets, the datasets' schemas, the mapping tables used in the datasets' mapping rules and the mapping tables' schemas. This happens on dataset load, and on adding/editing a conformance rule. In addition to creating heavy traffic, it can cause unexpected latency-related issues.
## Feature
We can reduce this by caching the values of requests on immutable data. The possible downside is that this may eat up a lot of memory if users never close their tab.
## Proposed Solution
Add a map to the RestDAO object which maintains a list of previously executed queries and their results, specifically for:
1. `Dataset` name and version
2. `Schema` name and version
3. `Mapping table` name and version
|
priority
|
cache client side rest calls for immutable data background currently the menas ui makes a lot of repetitive rest calls to the back end to fetch datasets the datasets schemas the mapping tables used in the datasets mapping rules and the mapping tables schemas this happens on dataset load and on adding editing a conformance rule in addition to creating heavy traffic it can cause unexpected latency related issues feature we can reduce this by caching the values of requests on immutable data the possible downside is that this may eat up a lot of memory if users never close their tab proposed solution add a map to the restdao object which maintains a list of previously executed queries and their results specifically for dataset name and version schema name and version mapping table name and version
| 1
|
151,752
| 5,827,081,901
|
IssuesEvent
|
2017-05-08 07:51:47
|
bedita/bedita
|
https://api.github.com/repos/bedita/bedita
|
closed
|
Implement UUID external auth provider
|
Priority - High Topic - API Type - New Feature
|
Implement an external auth provider to allow registration of anonymous user via client UUID.
When the provider is active (row in `auth_providers`), the creation of a user can be done with the request
```http
POST /auth
Authorization: UUID <client-uuid>
```
This request will create a new user with `external_auth.provider_username` and `users.username` equal to `<client-uuid>`.
|
1.0
|
Implement UUID external auth provider - Implement an external auth provider to allow registration of anonymous user via client UUID.
When the provider is active (row in `auth_providers`), the creation of a user can be done with the request
```http
POST /auth
Authorization: UUID <client-uuid>
```
This request will create a new user with `external_auth.provider_username` and `users.username` equal to `<client-uuid>`.
|
priority
|
implement uuid external auth provider implement an external auth provider to allow registration of anonymous user via client uuid when the provider is active row in auth providers the creation of a user can be done with the request http post auth authorization uuid this request will create a new user with external auth provider username and users username equal to
| 1
|
239,599
| 7,799,871,614
|
IssuesEvent
|
2018-06-09 01:29:37
|
tine20/Tine-2.0-Open-Source-Groupware-and-CRM
|
https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM
|
closed
|
0005710:
fix record count display in multiedit dlg for filter selection
|
Mantis Tinebase JavaScript high priority
|
**Reported by pschuele on 15 Feb 2012 10:27**
**Version:** Milan (2012-03) Beta 4
fix record count display in multiedit dlg for filter selection
|
1.0
|
0005710:
fix record count display in multiedit dlg for filter selection - **Reported by pschuele on 15 Feb 2012 10:27**
**Version:** Milan (2012-03) Beta 4
fix record count display in multiedit dlg for filter selection
|
priority
|
fix record count display in multiedit dlg for filter selection reported by pschuele on feb version milan beta fix record count display in multiedit dlg for filter selection
| 1
|
819,208
| 30,723,694,228
|
IssuesEvent
|
2023-07-27 17:51:54
|
iina/iina
|
https://api.github.com/repos/iina/iina
|
closed
|
[mpv Default] Shortcuts for changing window scale are inefficient
|
progress: fixed/implemented priority: high bug: regression
|
**System and IINA version:**
- macOS Monterey 12.6.1
- IINA 1.3.1, 1.3.0
**Expected behavior:**
When choosing the mpv Default shortcuts, the window scale changing shortcuts are displayed in the menu "Video" and typing the the mpv shortcuts ⌥0, ⌥1, ⌥2 changes the scale of the window.
**Actual behavior:**
When choosing the mpv Default shortcuts, the window scale changing shortcuts are not displayed in the menu "Video" and typing the the mpv shortcuts ⌥0, ⌥1, ⌥2 do not affect the scale of the window.
Preferences:

But in the menu Video:

And when clicking on every shortcuts ⌥0, ⌥1, ⌥2, we got:

|
1.0
|
[mpv Default] Shortcuts for changing window scale are inefficient - **System and IINA version:**
- macOS Monterey 12.6.1
- IINA 1.3.1, 1.3.0
**Expected behavior:**
When choosing the mpv Default shortcuts, the window scale changing shortcuts are displayed in the menu "Video" and typing the the mpv shortcuts ⌥0, ⌥1, ⌥2 changes the scale of the window.
**Actual behavior:**
When choosing the mpv Default shortcuts, the window scale changing shortcuts are not displayed in the menu "Video" and typing the the mpv shortcuts ⌥0, ⌥1, ⌥2 do not affect the scale of the window.
Preferences:

But in the menu Video:

And when clicking on every shortcuts ⌥0, ⌥1, ⌥2, we got:

|
priority
|
shortcuts for changing window scale are inefficient system and iina version macos monterey iina expected behavior when choosing the mpv default shortcuts the window scale changing shortcuts are displayed in the menu video and typing the the mpv shortcuts ⌥ ⌥ ⌥ changes the scale of the window actual behavior when choosing the mpv default shortcuts the window scale changing shortcuts are not displayed in the menu video and typing the the mpv shortcuts ⌥ ⌥ ⌥ do not affect the scale of the window preferences but in the menu video and when clicking on every shortcuts ⌥ ⌥ ⌥ we got
| 1
|
568,500
| 16,981,546,078
|
IssuesEvent
|
2021-06-30 09:28:57
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Project API throws Transitive Dependency Cannot be Found
|
Area/ProjectAPI Priority/High Team/DevTools Type/Bug
|
**Description:**
In a Java Micro Service which uses Ballerina Project API to parse a Ballerina build project we get a Transitive Dependency Error for three connectors. The three connectors got pulled were older versions and were incompatible with SL Alpha 5.3 version of Ballerina which was used in our Java Micro Service. The version numbers which got pulled are also mentioned in the below list.
- Azure Event Hub 0.1.1 (Transitive dependency cannot be found: org=ballerina, package=file, version=0.7.0-alpha4)
- Mongo DB 2.0.4 (Transitive dependency cannot be found: org=ballerina, package=time, version=1.1.0-alpha4])
- AWS S3 0.99.2 (Transitive dependency cannot be found: org=ballerina, package=file, version=0.7.0-alpha4)
However, we have been informed by the connector team that all the above three connectors had the latest version upgraded to Ballerina Sl Alpha 5 and those version were also available in Ballerina central by the time the transitive dependency error occurred. The list of latest versions available in the Ballerina Central were as follows.
- Azure Event Hub - 0.1.3
- Mongo DB - 2.0.7
- AWS S3 - 0.99.4
The top part of the error log which got printed by the Java Micro service is as follows,
```
io.ballerina.projects.ProjectException: Transitive dependency cannot be found: org=ballerina, package=file, version=0.7.0-alpha4
at io.ballerina.projects.internal.PackageDependencyGraphBuilder.buildPackageDependencyGraph(PackageDependencyGraphBuilder.java:203) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageResolution.buildDependencyGraph(PackageResolution.java:153) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageResolution.<init>(PackageResolution.java:71) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageResolution.from(PackageResolution.java:78) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageContext.getResolution(PackageContext.java:213) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageCompilation.<init>(PackageCompilation.java:68) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageCompilation.from(PackageCompilation.java:94) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageContext.getPackageCompilation(PackageContext.java:206) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.Package.getCompilation(Package.java:131) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
```
Since the latest versions of the connector were available in the Ballerina central and we did not have any dependency locking in `Dependencies.toml` file of the build project I believe the latest version of the above mentioned three connectors should have been pulled from the Ballerina Central.
The workaround we used to fix the above error was to explicitly mention the three connector versions in the `Dependencies.toml` file of the build project as a temporary fix.
**Steps to reproduce:**
If one tried to load a Ballerina build project using project API and invoke project API's `package.getCompilation()` method using the below mentioned ballerina version without any connector versions mentioned in the `Dependencies.toml` file this issue could be reproduced.
**Affected Versions:**
Java Micro Service was built using Java 11, Ballerina SL Alpha 5.3 with Ballerina Lang version `2.0.0-alpha8-20210513-185400-180f6504`
**OS, DB, other environment details and versions:**
Ubuntu 20.04.2.0
|
1.0
|
Project API throws Transitive Dependency Cannot be Found - **Description:**
In a Java Micro Service which uses Ballerina Project API to parse a Ballerina build project we get a Transitive Dependency Error for three connectors. The three connectors got pulled were older versions and were incompatible with SL Alpha 5.3 version of Ballerina which was used in our Java Micro Service. The version numbers which got pulled are also mentioned in the below list.
- Azure Event Hub 0.1.1 (Transitive dependency cannot be found: org=ballerina, package=file, version=0.7.0-alpha4)
- Mongo DB 2.0.4 (Transitive dependency cannot be found: org=ballerina, package=time, version=1.1.0-alpha4])
- AWS S3 0.99.2 (Transitive dependency cannot be found: org=ballerina, package=file, version=0.7.0-alpha4)
However, we have been informed by the connector team that all the above three connectors had the latest version upgraded to Ballerina Sl Alpha 5 and those version were also available in Ballerina central by the time the transitive dependency error occurred. The list of latest versions available in the Ballerina Central were as follows.
- Azure Event Hub - 0.1.3
- Mongo DB - 2.0.7
- AWS S3 - 0.99.4
The top part of the error log which got printed by the Java Micro service is as follows,
```
io.ballerina.projects.ProjectException: Transitive dependency cannot be found: org=ballerina, package=file, version=0.7.0-alpha4
at io.ballerina.projects.internal.PackageDependencyGraphBuilder.buildPackageDependencyGraph(PackageDependencyGraphBuilder.java:203) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageResolution.buildDependencyGraph(PackageResolution.java:153) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageResolution.<init>(PackageResolution.java:71) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageResolution.from(PackageResolution.java:78) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageContext.getResolution(PackageContext.java:213) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageCompilation.<init>(PackageCompilation.java:68) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageCompilation.from(PackageCompilation.java:94) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.PackageContext.getPackageCompilation(PackageContext.java:206) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
at io.ballerina.projects.Package.getCompilation(Package.java:131) ~[ballerina-lang-2.0.0-alpha8-20210513-185400-180f6504.jar:na]
```
Since the latest versions of the connector were available in the Ballerina central and we did not have any dependency locking in `Dependencies.toml` file of the build project I believe the latest version of the above mentioned three connectors should have been pulled from the Ballerina Central.
The workaround we used to fix the above error was to explicitly mention the three connector versions in the `Dependencies.toml` file of the build project as a temporary fix.
**Steps to reproduce:**
If one tried to load a Ballerina build project using project API and invoke project API's `package.getCompilation()` method using the below mentioned ballerina version without any connector versions mentioned in the `Dependencies.toml` file this issue could be reproduced.
**Affected Versions:**
Java Micro Service was built using Java 11, Ballerina SL Alpha 5.3 with Ballerina Lang version `2.0.0-alpha8-20210513-185400-180f6504`
**OS, DB, other environment details and versions:**
Ubuntu 20.04.2.0
|
priority
|
project api throws transitive dependency cannot be found description in a java micro service which uses ballerina project api to parse a ballerina build project we get a transitive dependency error for three connectors the three connectors got pulled were older versions and were incompatible with sl alpha version of ballerina which was used in our java micro service the version numbers which got pulled are also mentioned in the below list azure event hub transitive dependency cannot be found org ballerina package file version mongo db transitive dependency cannot be found org ballerina package time version aws transitive dependency cannot be found org ballerina package file version however we have been informed by the connector team that all the above three connectors had the latest version upgraded to ballerina sl alpha and those version were also available in ballerina central by the time the transitive dependency error occurred the list of latest versions available in the ballerina central were as follows azure event hub mongo db aws the top part of the error log which got printed by the java micro service is as follows io ballerina projects projectexception transitive dependency cannot be found org ballerina package file version at io ballerina projects internal packagedependencygraphbuilder buildpackagedependencygraph packagedependencygraphbuilder java at io ballerina projects packageresolution builddependencygraph packageresolution java at io ballerina projects packageresolution packageresolution java at io ballerina projects packageresolution from packageresolution java at io ballerina projects packagecontext getresolution packagecontext java at io ballerina projects packagecompilation packagecompilation java at io ballerina projects packagecompilation from packagecompilation java at io ballerina projects packagecontext getpackagecompilation packagecontext java at io ballerina projects package getcompilation package java since the latest versions of the connector were available in the ballerina central and we did not have any dependency locking in dependencies toml file of the build project i believe the latest version of the above mentioned three connectors should have been pulled from the ballerina central the workaround we used to fix the above error was to explicitly mention the three connector versions in the dependencies toml file of the build project as a temporary fix steps to reproduce if one tried to load a ballerina build project using project api and invoke project api s package getcompilation method using the below mentioned ballerina version without any connector versions mentioned in the dependencies toml file this issue could be reproduced affected versions java micro service was built using java ballerina sl alpha with ballerina lang version os db other environment details and versions ubuntu
| 1
|
16,196
| 2,612,477,312
|
IssuesEvent
|
2015-02-27 14:50:06
|
zaphoyd/websocketpp
|
https://api.github.com/repos/zaphoyd/websocketpp
|
closed
|
set function for max_http_body_size is named get_max_http_body_size
|
Bug Core High Priority
|
The function is part of the endpoint class, and currently on line 413 of endpoint.hpp: https://github.com/zaphoyd/websocketpp/blob/master/websocketpp/endpoint.hpp#L413
|
1.0
|
set function for max_http_body_size is named get_max_http_body_size - The function is part of the endpoint class, and currently on line 413 of endpoint.hpp: https://github.com/zaphoyd/websocketpp/blob/master/websocketpp/endpoint.hpp#L413
|
priority
|
set function for max http body size is named get max http body size the function is part of the endpoint class and currently on line of endpoint hpp
| 1
|
376,666
| 11,150,015,531
|
IssuesEvent
|
2019-12-23 20:50:04
|
thibaultmeyer/sparrow
|
https://api.github.com/repos/thibaultmeyer/sparrow
|
closed
|
Compact response not working
|
area/tracker kind/bug priority/high
|
Compact response dont' working as espected. Maybe an encoding error...
|
1.0
|
Compact response not working - Compact response dont' working as espected. Maybe an encoding error...
|
priority
|
compact response not working compact response dont working as espected maybe an encoding error
| 1
|
478,343
| 13,777,926,026
|
IssuesEvent
|
2020-10-08 11:42:53
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
[Debugger] Add support for global variable scopes
|
Component/Debugger Priority/High SwanLakeDump Team/Tooling Type/Improvement
|
**Description:**
Current jBallerina debugger implementation shows only the local variables instances available(visible) for a debug hit. Therefore the implementation should be improved to capture and show all the global variable instances as well.
**Steps to reproduce:**
**Affected Versions:**
Ballerina Version - jBallerina Swan Lake Preview 3 and below.
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
|
1.0
|
[Debugger] Add support for global variable scopes - **Description:**
Current jBallerina debugger implementation shows only the local variables instances available(visible) for a debug hit. Therefore the implementation should be improved to capture and show all the global variable instances as well.
**Steps to reproduce:**
**Affected Versions:**
Ballerina Version - jBallerina Swan Lake Preview 3 and below.
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
|
priority
|
add support for global variable scopes description current jballerina debugger implementation shows only the local variables instances available visible for a debug hit therefore the implementation should be improved to capture and show all the global variable instances as well steps to reproduce affected versions ballerina version jballerina swan lake preview and below os db other environment details and versions related issues optional suggested labels optional suggested assignees optional
| 1
|
518,183
| 15,025,234,644
|
IssuesEvent
|
2021-02-01 20:47:14
|
tysonkaufmann/su-go
|
https://api.github.com/repos/tysonkaufmann/su-go
|
opened
|
[DEV] Create a Get User Information Endpoint
|
High Priority task
|
**Related To**
- [User Profile](https://github.com/tysonkaufmann/su-go/issues/32)
**Description**
Create an API endpoint `/api/userinformation/{USERNAME}` which will serve `GET` requests:
On Success, the API endpoint should return the information on success e.g
```
{
"status":"200",
"success":"true",
"data": {
"username":"",
"fullname":"",
"email":"",
"totaldistancecompleted":"",
"totalroutescompleted":"",
"totaltime":"",
"profilepic":"BASE64ENCODEDIMAGE",
}
}
```
On Error, the API endpoint should return an error if unsuccessful e.g
```
{
"status":"200",
"success":"false",
"error":"The username does not exist"
}
```
**Development Steps**
- Create integration tests to fully test the API endpoint `/api/userinformation/{USERNAME}` (TDD)
- Route the endpoint in `backend/routes/UserInformation`
- Create unit tests for any code added in `backend/controllers/UserInformation` (TDD)
- Add code to the `backend/controllers/UserInformation` to serve the API request
|
1.0
|
[DEV] Create a Get User Information Endpoint - **Related To**
- [User Profile](https://github.com/tysonkaufmann/su-go/issues/32)
**Description**
Create an API endpoint `/api/userinformation/{USERNAME}` which will serve `GET` requests:
On Success, the API endpoint should return the information on success e.g
```
{
"status":"200",
"success":"true",
"data": {
"username":"",
"fullname":"",
"email":"",
"totaldistancecompleted":"",
"totalroutescompleted":"",
"totaltime":"",
"profilepic":"BASE64ENCODEDIMAGE",
}
}
```
On Error, the API endpoint should return an error if unsuccessful e.g
```
{
"status":"200",
"success":"false",
"error":"The username does not exist"
}
```
**Development Steps**
- Create integration tests to fully test the API endpoint `/api/userinformation/{USERNAME}` (TDD)
- Route the endpoint in `backend/routes/UserInformation`
- Create unit tests for any code added in `backend/controllers/UserInformation` (TDD)
- Add code to the `backend/controllers/UserInformation` to serve the API request
|
priority
|
create a get user information endpoint related to description create an api endpoint api userinformation username which will serve get requests on success the api endpoint should return the information on success e g status success true data username fullname email totaldistancecompleted totalroutescompleted totaltime profilepic on error the api endpoint should return an error if unsuccessful e g status success false error the username does not exist development steps create integration tests to fully test the api endpoint api userinformation username tdd route the endpoint in backend routes userinformation create unit tests for any code added in backend controllers userinformation tdd add code to the backend controllers userinformation to serve the api request
| 1
|
809,533
| 30,197,024,456
|
IssuesEvent
|
2023-07-04 23:02:16
|
maxcao13/upstream-practices-demo
|
https://api.github.com/repos/maxcao13/upstream-practices-demo
|
closed
|
[Request] General CI workflows
|
feat high-priority
|
### Describe the feature
Demo workflow for F2F Presentation
### Anything other information?
_No response_
|
1.0
|
[Request] General CI workflows - ### Describe the feature
Demo workflow for F2F Presentation
### Anything other information?
_No response_
|
priority
|
general ci workflows describe the feature demo workflow for presentation anything other information no response
| 1
|
710,749
| 24,431,907,412
|
IssuesEvent
|
2022-10-06 08:44:33
|
keycloak/keycloak-ui
|
https://api.github.com/repos/keycloak/keycloak-ui
|
closed
|
AdminV2 not using admin hostname
|
kind/bug priority/critical impact/high area/admin/ui
|
### Describe the bug
By setting the hostname and admin hostname you can log in fine but the client will attempt to reach out to admin endpoints on the main hostname which are suggested to be blocked.
Example request:
The admin client pings the whoami endpoint which is defined here: https://github.com/keycloak/keycloak-nodejs-admin-client/blob/main/src/resources/whoAmI.ts#L12 which uses (from my understanding) this https://github.com/keycloak/keycloak-admin-ui/blob/main/src/context/auth/AdminClient.tsx#L73 which is set via https://github.com/keycloak/keycloak/blob/main/services/src/main/java/org/keycloak/services/resources/admin/AdminConsole.java#L337
### Version
18.0.2
### Expected behavior
The admin client should communicate to all admin endpoints using the admin base url.
### Actual behavior
The admin client communicates using the frontend urls.
### How to Reproduce?
To reproduce failure you need a level 7 firewall to block off the paths as defined here: https://www.keycloak.org/server/reverseproxy
### Anything else?
Related PRs: https://github.com/keycloak/keycloak-admin-ui/pull/2704
|
1.0
|
AdminV2 not using admin hostname - ### Describe the bug
By setting the hostname and admin hostname you can log in fine but the client will attempt to reach out to admin endpoints on the main hostname which are suggested to be blocked.
Example request:
The admin client pings the whoami endpoint which is defined here: https://github.com/keycloak/keycloak-nodejs-admin-client/blob/main/src/resources/whoAmI.ts#L12 which uses (from my understanding) this https://github.com/keycloak/keycloak-admin-ui/blob/main/src/context/auth/AdminClient.tsx#L73 which is set via https://github.com/keycloak/keycloak/blob/main/services/src/main/java/org/keycloak/services/resources/admin/AdminConsole.java#L337
### Version
18.0.2
### Expected behavior
The admin client should communicate to all admin endpoints using the admin base url.
### Actual behavior
The admin client communicates using the frontend urls.
### How to Reproduce?
To reproduce failure you need a level 7 firewall to block off the paths as defined here: https://www.keycloak.org/server/reverseproxy
### Anything else?
Related PRs: https://github.com/keycloak/keycloak-admin-ui/pull/2704
|
priority
|
not using admin hostname describe the bug by setting the hostname and admin hostname you can log in fine but the client will attempt to reach out to admin endpoints on the main hostname which are suggested to be blocked example request the admin client pings the whoami endpoint which is defined here which uses from my understanding this which is set via version expected behavior the admin client should communicate to all admin endpoints using the admin base url actual behavior the admin client communicates using the frontend urls how to reproduce to reproduce failure you need a level firewall to block off the paths as defined here anything else related prs
| 1
|
51,021
| 3,010,029,586
|
IssuesEvent
|
2015-07-28 10:33:15
|
N4SJAMK/teamboard-client-react
|
https://api.github.com/repos/N4SJAMK/teamboard-client-react
|
closed
|
iPad issue: Mini map square disappears off the minimap
|
bug HIGH PRIORITY ready_to_verify
|
Mini map square disappears off the minimap. (minimapilla neliö pääsee yli minimapin, siirtoneliö häviää yli laudan).
////////CONTRIBOARD TESTING, 24.7.2015////////
Contriboard (SUT) versions:
Client version v1.2.100
API version v0.4.38
IMG version v0.1.10
Testing environment:
Heli (1 kpl iPad iOS 8.3 (12F69) Flowdock -> Apple web-view)
////////CONTRIBOARD TESTING, 24.7.2015////////
|
1.0
|
iPad issue: Mini map square disappears off the minimap - Mini map square disappears off the minimap. (minimapilla neliö pääsee yli minimapin, siirtoneliö häviää yli laudan).
////////CONTRIBOARD TESTING, 24.7.2015////////
Contriboard (SUT) versions:
Client version v1.2.100
API version v0.4.38
IMG version v0.1.10
Testing environment:
Heli (1 kpl iPad iOS 8.3 (12F69) Flowdock -> Apple web-view)
////////CONTRIBOARD TESTING, 24.7.2015////////
|
priority
|
ipad issue mini map square disappears off the minimap mini map square disappears off the minimap minimapilla neliö pääsee yli minimapin siirtoneliö häviää yli laudan contriboard testing contriboard sut versions client version api version img version testing environment heli kpl ipad ios flowdock apple web view contriboard testing
| 1
|
513,209
| 14,917,825,174
|
IssuesEvent
|
2021-01-22 20:32:44
|
aleksn41/corona_world_app
|
https://api.github.com/repos/aleksn41/corona_world_app
|
closed
|
Statistic issues have to be fixed
|
High Priority
|
Statistics are not working properly. Additionaly there are many constellations of criteria, chart type and time yet to be implemented.
|
1.0
|
Statistic issues have to be fixed - Statistics are not working properly. Additionaly there are many constellations of criteria, chart type and time yet to be implemented.
|
priority
|
statistic issues have to be fixed statistics are not working properly additionaly there are many constellations of criteria chart type and time yet to be implemented
| 1
|
821,278
| 30,814,479,400
|
IssuesEvent
|
2023-08-01 12:40:36
|
quentin452/privates-minecraft-modpack
|
https://api.github.com/repos/quentin452/privates-minecraft-modpack
|
closed
|
[1.12.2]: Sometimes Entities become invisible /visible caused by infinite loop
|
bug priority high 1.12.2 Forge Mystic's Monstrosity Modified V6
|
### Modpack
Mystics Monstrosity modified V6.0 beta44
### Game log
no logs
### Description
Sometimes entities become invisible/visible at infinite loop in the nether and certain dimension randomly
|
1.0
|
[1.12.2]: Sometimes Entities become invisible /visible caused by infinite loop - ### Modpack
Mystics Monstrosity modified V6.0 beta44
### Game log
no logs
### Description
Sometimes entities become invisible/visible at infinite loop in the nether and certain dimension randomly
|
priority
|
sometimes entities become invisible visible caused by infinite loop modpack mystics monstrosity modified game log no logs description sometimes entities become invisible visible at infinite loop in the nether and certain dimension randomly
| 1
|
709,995
| 24,399,824,206
|
IssuesEvent
|
2022-10-04 23:35:13
|
okTurtles/group-income
|
https://api.github.com/repos/okTurtles/group-income
|
closed
|
Remove Disagreement Rule from Group Creation setup
|
App:Frontend Level:Starter Priority:High Note:UI/UX
|
### Problem
See #1393 for details on both problem and solution.
Summary: there are problems with having the disagreement rule in the group setup, and there are currently also problems with the disagreement rule implementation.
### Solution
Implement the new designs for #1393.
|
1.0
|
Remove Disagreement Rule from Group Creation setup - ### Problem
See #1393 for details on both problem and solution.
Summary: there are problems with having the disagreement rule in the group setup, and there are currently also problems with the disagreement rule implementation.
### Solution
Implement the new designs for #1393.
|
priority
|
remove disagreement rule from group creation setup problem see for details on both problem and solution summary there are problems with having the disagreement rule in the group setup and there are currently also problems with the disagreement rule implementation solution implement the new designs for
| 1
|
72,444
| 3,385,861,427
|
IssuesEvent
|
2015-11-27 14:08:47
|
CoderDojo/community-platform
|
https://api.github.com/repos/CoderDojo/community-platform
|
opened
|
Issue with special characters in title bar
|
bug high priority suitable for beginners
|
Just put Portuguese up but it looks like there's an issue on the homepage with the special characters:

|
1.0
|
Issue with special characters in title bar - Just put Portuguese up but it looks like there's an issue on the homepage with the special characters:

|
priority
|
issue with special characters in title bar just put portuguese up but it looks like there s an issue on the homepage with the special characters
| 1
|
319,320
| 9,742,145,791
|
IssuesEvent
|
2019-06-02 14:50:34
|
accodeing/fortnox-api
|
https://api.github.com/repos/accodeing/fortnox-api
|
closed
|
Invalid email regexp
|
info:pull-request-open priority:high type:bug
|
"sköldpadda@example.com" (String) has invalid type for :email violates constraints (format?(/^$|\A[\w+-_.]+@[\w+-_.]+\.[a-z]+\z/i, "sköldpadda@example.com") failed)
|
1.0
|
Invalid email regexp - "sköldpadda@example.com" (String) has invalid type for :email violates constraints (format?(/^$|\A[\w+-_.]+@[\w+-_.]+\.[a-z]+\z/i, "sköldpadda@example.com") failed)
|
priority
|
invalid email regexp sköldpadda example com string has invalid type for email violates constraints format a z i sköldpadda example com failed
| 1
|
232,568
| 7,661,602,901
|
IssuesEvent
|
2018-05-11 14:44:27
|
joaosantana/Recibos
|
https://api.github.com/repos/joaosantana/Recibos
|
closed
|
Assegurar a criação da base de dados
|
Priority: High Status: Accepted Type: Maintenance
|
>Due to limitations in the way .NET Core tools interact with UWP projects the model needs to be placed in a non-UWP project to be able to run migrations commands in the Package Manager Console
https://docs.microsoft.com/en-us/ef/core/get-started/uwp/getting-started
O procedimento para fazer isso é simples, mas complicado. Kristina_Ragan escreveu um walkthrough utilíssimo para fazê-lo na seção de comentários do link acima, que copio em Documentation/ModelWalkthough.md.
|
1.0
|
Assegurar a criação da base de dados - >Due to limitations in the way .NET Core tools interact with UWP projects the model needs to be placed in a non-UWP project to be able to run migrations commands in the Package Manager Console
https://docs.microsoft.com/en-us/ef/core/get-started/uwp/getting-started
O procedimento para fazer isso é simples, mas complicado. Kristina_Ragan escreveu um walkthrough utilíssimo para fazê-lo na seção de comentários do link acima, que copio em Documentation/ModelWalkthough.md.
|
priority
|
assegurar a criação da base de dados due to limitations in the way net core tools interact with uwp projects the model needs to be placed in a non uwp project to be able to run migrations commands in the package manager console o procedimento para fazer isso é simples mas complicado kristina ragan escreveu um walkthrough utilíssimo para fazê lo na seção de comentários do link acima que copio em documentation modelwalkthough md
| 1
|
428,749
| 12,416,134,945
|
IssuesEvent
|
2020-05-22 17:32:27
|
X-Plane/XPlane2Blender
|
https://api.github.com/repos/X-Plane/XPlane2Blender
|
opened
|
280: Implement is_directional/omni spec and predicate
|
WYSIWYG Lights priority high
|
All lights in X-Plane follow this test
```
# Where the **WIDTH column** is from the most
# trusted overload of that light after all _sw_callbacks have been applied
is_omni = WIDTH column == 1.0
if is_omni == False:
Light is definitely not omni
elif is_omni and most trusted overload is a billboard and it has a dataref:
Light's dataref may simulate directionality by some means. Must check special cases
(in our implementation, it means apply a _sw_callback and check, which we've already done this)
else:
The answer is the value of is_omni
```
### Special cases:
- lights that use the `_do_rgb_to_dxyz_w_calc` call back (`airplane_generic_(core/glow/flare)`) will never be omni
- lights that use the `_do_force_WIDTH_1` may or may not be omni. See the table below for which is which
Testing is_omni after applying _sw_callbacks solves hard coding these cases into the exporter
#### _do_force_omni special cases
The following is a breakdown of what lights are always omni or (uni)directional as a result of their dataref and WIDTH column == 1:
```
Omni
----
airplane_beacon_rotate
airplane_beacon
appch_rabbit_o
appch_strobe_o
inset_appch_rabbit_o
inset_appch_rabbit_o_sp
inset_appch_strobe_o
inset_appch_strobe_o_sp
(Uni)directional
----------------
VASI
VASI3
appch_rabbit_u
appch_strobe_u
inset_appch_rabbit_u
inset_appch_rabbit_u_sp
inset_appch_strobe_u
inset_appch_strobe_u_sp
wigwag_y1
wigwag_y2
hold_short_y1
hold_short_y2
pad_SGSI_lo
pad_SGSI_on
pad_SGSI_hi
carrier_datum
carrier_waveoff
carrier_meatball1
carrier_meatball2
carrier_meatball3
carrier_meatball4
carrier_meatball5
frigate_SGSI_lo
frigate_SGSI_on
frigate_SGSI_hi
```
- [ ] Test about is omni, is not always omni - by API and fixture
- [ ] Test on how Blender Point/Spot affects filling in WIDTH
- [ ] Code!
This was formerly part of #563
|
1.0
|
280: Implement is_directional/omni spec and predicate - All lights in X-Plane follow this test
```
# Where the **WIDTH column** is from the most
# trusted overload of that light after all _sw_callbacks have been applied
is_omni = WIDTH column == 1.0
if is_omni == False:
Light is definitely not omni
elif is_omni and most trusted overload is a billboard and it has a dataref:
Light's dataref may simulate directionality by some means. Must check special cases
(in our implementation, it means apply a _sw_callback and check, which we've already done this)
else:
The answer is the value of is_omni
```
### Special cases:
- lights that use the `_do_rgb_to_dxyz_w_calc` call back (`airplane_generic_(core/glow/flare)`) will never be omni
- lights that use the `_do_force_WIDTH_1` may or may not be omni. See the table below for which is which
Testing is_omni after applying _sw_callbacks solves hard coding these cases into the exporter
#### _do_force_omni special cases
The following is a breakdown of what lights are always omni or (uni)directional as a result of their dataref and WIDTH column == 1:
```
Omni
----
airplane_beacon_rotate
airplane_beacon
appch_rabbit_o
appch_strobe_o
inset_appch_rabbit_o
inset_appch_rabbit_o_sp
inset_appch_strobe_o
inset_appch_strobe_o_sp
(Uni)directional
----------------
VASI
VASI3
appch_rabbit_u
appch_strobe_u
inset_appch_rabbit_u
inset_appch_rabbit_u_sp
inset_appch_strobe_u
inset_appch_strobe_u_sp
wigwag_y1
wigwag_y2
hold_short_y1
hold_short_y2
pad_SGSI_lo
pad_SGSI_on
pad_SGSI_hi
carrier_datum
carrier_waveoff
carrier_meatball1
carrier_meatball2
carrier_meatball3
carrier_meatball4
carrier_meatball5
frigate_SGSI_lo
frigate_SGSI_on
frigate_SGSI_hi
```
- [ ] Test about is omni, is not always omni - by API and fixture
- [ ] Test on how Blender Point/Spot affects filling in WIDTH
- [ ] Code!
This was formerly part of #563
|
priority
|
implement is directional omni spec and predicate all lights in x plane follow this test where the width column is from the most trusted overload of that light after all sw callbacks have been applied is omni width column if is omni false light is definitely not omni elif is omni and most trusted overload is a billboard and it has a dataref light s dataref may simulate directionality by some means must check special cases in our implementation it means apply a sw callback and check which we ve already done this else the answer is the value of is omni special cases lights that use the do rgb to dxyz w calc call back airplane generic core glow flare will never be omni lights that use the do force width may or may not be omni see the table below for which is which testing is omni after applying sw callbacks solves hard coding these cases into the exporter do force omni special cases the following is a breakdown of what lights are always omni or uni directional as a result of their dataref and width column omni airplane beacon rotate airplane beacon appch rabbit o appch strobe o inset appch rabbit o inset appch rabbit o sp inset appch strobe o inset appch strobe o sp uni directional vasi appch rabbit u appch strobe u inset appch rabbit u inset appch rabbit u sp inset appch strobe u inset appch strobe u sp wigwag wigwag hold short hold short pad sgsi lo pad sgsi on pad sgsi hi carrier datum carrier waveoff carrier carrier carrier carrier carrier frigate sgsi lo frigate sgsi on frigate sgsi hi test about is omni is not always omni by api and fixture test on how blender point spot affects filling in width code this was formerly part of
| 1
|
31,732
| 2,736,617,327
|
IssuesEvent
|
2015-04-19 16:23:09
|
OpenSprites/OpenSprites
|
https://api.github.com/repos/OpenSprites/OpenSprites
|
closed
|
Bundle your commits!
|
high priority ongoing
|
This is something, maybe not VERY important but people still seem to forget this.
In the busy weekends, when everyone is doing work EVERYONE should bundle. You can't imagine the pain of conflicts, just because (SORRY) @GrannyCookies keeps spamming commits because he doesnt bundle them.
I would love to see something done for this.
Look at this:

Yes, it's awesome, but please.
|
1.0
|
Bundle your commits! - This is something, maybe not VERY important but people still seem to forget this.
In the busy weekends, when everyone is doing work EVERYONE should bundle. You can't imagine the pain of conflicts, just because (SORRY) @GrannyCookies keeps spamming commits because he doesnt bundle them.
I would love to see something done for this.
Look at this:

Yes, it's awesome, but please.
|
priority
|
bundle your commits this is something maybe not very important but people still seem to forget this in the busy weekends when everyone is doing work everyone should bundle you can t imagine the pain of conflicts just because sorry grannycookies keeps spamming commits because he doesnt bundle them i would love to see something done for this look at this yes it s awesome but please
| 1
|
145,872
| 5,583,213,016
|
IssuesEvent
|
2017-03-28 23:30:40
|
CS2103JAN2017-T16-B2/main
|
https://api.github.com/repos/CS2103JAN2017-T16-B2/main
|
closed
|
Update documentation for V0.4
|
priority.high type.epic type.task
|
- Update UserGuide.md with relevant changes
- Update DeveloperGuide.md with relevant changes
- Add Component SD for UI.
|
1.0
|
Update documentation for V0.4 - - Update UserGuide.md with relevant changes
- Update DeveloperGuide.md with relevant changes
- Add Component SD for UI.
|
priority
|
update documentation for update userguide md with relevant changes update developerguide md with relevant changes add component sd for ui
| 1
|
180,856
| 6,654,001,915
|
IssuesEvent
|
2017-09-29 10:47:12
|
bitshares/bitshares-ui
|
https://api.github.com/repos/bitshares/bitshares-ui
|
closed
|
[1][ariesjia] Better form field validation
|
bug high priority
|
Closed #459 in order to create an actionable item:
- Should fail validation if account does not exist
- Should fail validation if non-numeric value is present in qty

|
1.0
|
[1][ariesjia] Better form field validation - Closed #459 in order to create an actionable item:
- Should fail validation if account does not exist
- Should fail validation if non-numeric value is present in qty

|
priority
|
better form field validation closed in order to create an actionable item should fail validation if account does not exist should fail validation if non numeric value is present in qty
| 1
|
589,831
| 17,761,637,859
|
IssuesEvent
|
2021-08-29 20:09:42
|
ClinGen/clincoded
|
https://api.github.com/repos/ClinGen/clincoded
|
closed
|
ClinGen awards stats
|
priority: high colleague request requires discussion data request sitewide
|
As part of the ClinGen retreat, there is a plan to hand out awards for 2020 accomplishments (Jan 1st 2020 to Dec 31st 2020). Including:
1. ClinGen Super Submitter VCEP 2020 : Most variant submissions to ClinVar in 2020
2. ClinGen Super Submitter GCEP 2020 : Most gene curations to the ClinGen Website in 2020
3. ClinGen Champion Biocurator (Variant) 2020: most variant curations in ClinGen VCI 2020
4. ClinGen Champion Biocurator (Gene) 2020: most gene curations in ClinGen GCI 2020
As such, we need to know:
total number of GDMs published to the website in 2020 listed per GCEP
total number of edits in the GCI in 2020 listed per individual curator
total number of edits in the VCI in 2020 listed per individual curator
|
1.0
|
ClinGen awards stats - As part of the ClinGen retreat, there is a plan to hand out awards for 2020 accomplishments (Jan 1st 2020 to Dec 31st 2020). Including:
1. ClinGen Super Submitter VCEP 2020 : Most variant submissions to ClinVar in 2020
2. ClinGen Super Submitter GCEP 2020 : Most gene curations to the ClinGen Website in 2020
3. ClinGen Champion Biocurator (Variant) 2020: most variant curations in ClinGen VCI 2020
4. ClinGen Champion Biocurator (Gene) 2020: most gene curations in ClinGen GCI 2020
As such, we need to know:
total number of GDMs published to the website in 2020 listed per GCEP
total number of edits in the GCI in 2020 listed per individual curator
total number of edits in the VCI in 2020 listed per individual curator
|
priority
|
clingen awards stats as part of the clingen retreat there is a plan to hand out awards for accomplishments jan to dec including clingen super submitter vcep most variant submissions to clinvar in clingen super submitter gcep most gene curations to the clingen website in clingen champion biocurator variant most variant curations in clingen vci clingen champion biocurator gene most gene curations in clingen gci as such we need to know total number of gdms published to the website in listed per gcep total number of edits in the gci in listed per individual curator total number of edits in the vci in listed per individual curator
| 1
|
551,270
| 16,165,804,638
|
IssuesEvent
|
2021-05-01 13:19:10
|
nathnet/teamfinder
|
https://api.github.com/repos/nathnet/teamfinder
|
opened
|
The three dots on the top right corner on project listings
|
HIGH priority
|
The three dots on the top right corner, what do they do?
|
1.0
|
The three dots on the top right corner on project listings - The three dots on the top right corner, what do they do?
|
priority
|
the three dots on the top right corner on project listings the three dots on the top right corner what do they do
| 1
|
382,741
| 11,311,677,668
|
IssuesEvent
|
2020-01-20 03:12:48
|
DroidKaigi/conference-app-2020
|
https://api.github.com/repos/DroidKaigi/conference-app-2020
|
closed
|
Travis CI fail (release build fail)
|
high priority welcome contribute
|
## Kind (Required)
- Improvement
## Overview (Required)
- This script run on CI
- .travis/android/script.bash
```
* What went wrong:
Execution failed for task ':android-base:mergeReleaseClasses'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade
> Entry name 'io/github/droidkaigi/confsched2020/util/DaggerFragment_MembersInjector.class' collided
```
## Links
-
|
1.0
|
Travis CI fail (release build fail) - ## Kind (Required)
- Improvement
## Overview (Required)
- This script run on CI
- .travis/android/script.bash
```
* What went wrong:
Execution failed for task ':android-base:mergeReleaseClasses'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade
> Entry name 'io/github/droidkaigi/confsched2020/util/DaggerFragment_MembersInjector.class' collided
```
## Links
-
|
priority
|
travis ci fail release build fail kind required improvement overview required this script run on ci travis android script bash what went wrong execution failed for task android base mergereleaseclasses a failure occurred while executing com android build gradle internal tasks workers actionfacade entry name io github droidkaigi util daggerfragment membersinjector class collided links
| 1
|
60,774
| 3,134,184,186
|
IssuesEvent
|
2015-09-10 08:35:34
|
quantopian/pyfolio
|
https://api.github.com/repos/quantopian/pyfolio
|
closed
|
Show '% of portfolio' traded instead of shares traded for turnover plots
|
enhancement formatting & presentation help wanted high priority
|
where (% of portfolio) = ($ traded) / (portfolio value),
and, ($ traded) = (stock price) * (# of shares)

|
1.0
|
Show '% of portfolio' traded instead of shares traded for turnover plots - where (% of portfolio) = ($ traded) / (portfolio value),
and, ($ traded) = (stock price) * (# of shares)

|
priority
|
show of portfolio traded instead of shares traded for turnover plots where of portfolio traded portfolio value and traded stock price of shares
| 1
|
512,686
| 14,907,205,527
|
IssuesEvent
|
2021-01-22 02:33:11
|
apache/echarts
|
https://api.github.com/repos/apache/echarts
|
closed
|
seriesLayoutBy not working properly when 'row' is set
|
bug en priority: high
|
### Version
5.0.0
### Reproduction link
[https://echarts.apache.org/next/examples/en/editor.html?c=dataset-simple0](https://echarts.apache.org/next/examples/en/editor.html?c=dataset-simple0)
### Steps to reproduce
1 - Open the link
2 - Add , seriesLayoutBy: 'row' on each series
2 - The legend will be wrong on the chart. Right legend is the product name
3 - Change seriesLayoutBy to column, the legend dont change,, and now is right( year)
### What is expected?
Legend change when seriesLayoutBy is changed
### What is actually happening?
legend not change
<!-- This issue is generated by echarts-issue-helper. DO NOT REMOVE -->
<!-- This issue is in English. DO NOT REMOVE -->
|
1.0
|
seriesLayoutBy not working properly when 'row' is set - ### Version
5.0.0
### Reproduction link
[https://echarts.apache.org/next/examples/en/editor.html?c=dataset-simple0](https://echarts.apache.org/next/examples/en/editor.html?c=dataset-simple0)
### Steps to reproduce
1 - Open the link
2 - Add , seriesLayoutBy: 'row' on each series
2 - The legend will be wrong on the chart. Right legend is the product name
3 - Change seriesLayoutBy to column, the legend dont change,, and now is right( year)
### What is expected?
Legend change when seriesLayoutBy is changed
### What is actually happening?
legend not change
<!-- This issue is generated by echarts-issue-helper. DO NOT REMOVE -->
<!-- This issue is in English. DO NOT REMOVE -->
|
priority
|
serieslayoutby not working properly when row is set version reproduction link steps to reproduce open the link add serieslayoutby row on each series the legend will be wrong on the chart right legend is the product name change serieslayoutby to column the legend dont change and now is right year what is expected legend change when serieslayoutby is changed what is actually happening legend not change
| 1
|
99,605
| 4,057,294,082
|
IssuesEvent
|
2016-05-24 21:34:26
|
smartchicago/chicago-early-learning
|
https://api.github.com/repos/smartchicago/chicago-early-learning
|
closed
|
Different zoom level for zip code and community area searches
|
High Priority
|
We need to look at having different zoom level for address searches vs zip code or community area searches. Parents and Early Learning providers are not finding the appropriate location because the pinpoint is at the center of the zip code/community area and does not take the shape into account.
This came up specifically with Stock Elementary. If you type in "Edison Park" it does not appear because of the current zoom level:
<img width="1229" alt="screen shot 2016-05-16 at 2 54 23 pm" src="https://cloud.githubusercontent.com/assets/5550969/15301988/1bf7f01e-1b76-11e6-80f7-bf0448ab4eb4.png">
Stock Elementary is a little further north:
<img width="1226" alt="screen shot 2016-05-16 at 2 54 55 pm" src="https://cloud.githubusercontent.com/assets/5550969/15302002/31bf7250-1b76-11e6-81ce-336091831894.png">
Note: The current zoom level is very important for address searches because we heard from our partners that parents weren't getting locations close enough to the address they searched for. This relates to the work in this milestone: https://github.com/smartchicago/chicago-early-learning/issues?q=milestone%3A%22Improve+Search+Function%22
|
1.0
|
Different zoom level for zip code and community area searches - We need to look at having different zoom level for address searches vs zip code or community area searches. Parents and Early Learning providers are not finding the appropriate location because the pinpoint is at the center of the zip code/community area and does not take the shape into account.
This came up specifically with Stock Elementary. If you type in "Edison Park" it does not appear because of the current zoom level:
<img width="1229" alt="screen shot 2016-05-16 at 2 54 23 pm" src="https://cloud.githubusercontent.com/assets/5550969/15301988/1bf7f01e-1b76-11e6-80f7-bf0448ab4eb4.png">
Stock Elementary is a little further north:
<img width="1226" alt="screen shot 2016-05-16 at 2 54 55 pm" src="https://cloud.githubusercontent.com/assets/5550969/15302002/31bf7250-1b76-11e6-81ce-336091831894.png">
Note: The current zoom level is very important for address searches because we heard from our partners that parents weren't getting locations close enough to the address they searched for. This relates to the work in this milestone: https://github.com/smartchicago/chicago-early-learning/issues?q=milestone%3A%22Improve+Search+Function%22
|
priority
|
different zoom level for zip code and community area searches we need to look at having different zoom level for address searches vs zip code or community area searches parents and early learning providers are not finding the appropriate location because the pinpoint is at the center of the zip code community area and does not take the shape into account this came up specifically with stock elementary if you type in edison park it does not appear because of the current zoom level img width alt screen shot at pm src stock elementary is a little further north img width alt screen shot at pm src note the current zoom level is very important for address searches because we heard from our partners that parents weren t getting locations close enough to the address they searched for this relates to the work in this milestone
| 1
|
704,012
| 24,181,532,854
|
IssuesEvent
|
2022-09-23 09:21:37
|
mizdra/happy-css-modules
|
https://api.github.com/repos/mizdra/happy-css-modules
|
closed
|
Support `--load-path` for sass
|
Type: Feature Priority: High Priority: Low Status: PR Welcome
|
- We want to resolve `specifier` using the [`--load-path`](https://sass-lang.com/documentation/cli/dart-sass#load-path) option of sass.
- I think it can be implemented by extending `src/resolver/webpack-resolver.ts` module.
- The `--sass-load-path` option must also be added.
|
2.0
|
Support `--load-path` for sass - - We want to resolve `specifier` using the [`--load-path`](https://sass-lang.com/documentation/cli/dart-sass#load-path) option of sass.
- I think it can be implemented by extending `src/resolver/webpack-resolver.ts` module.
- The `--sass-load-path` option must also be added.
|
priority
|
support load path for sass we want to resolve specifier using the option of sass i think it can be implemented by extending src resolver webpack resolver ts module the sass load path option must also be added
| 1
|
164,621
| 6,229,988,492
|
IssuesEvent
|
2017-07-11 06:33:27
|
eoglethorpe/deep
|
https://api.github.com/repos/eoglethorpe/deep
|
closed
|
Entry template editor - return to project management panel when saved
|
priority - high
|
When i create a new project and i create a new analysis framework, and i clic edit framework, then a new tab appear and i access the "entry template editor" ( we should rename this Analysis framework editor for consistency).
I create my framework, all is okay, life is great. then i clic save. and nothing happen. Normally when clicking "save" i should go back to the project management panel, i should see here the preview of the analysis framework i just created. Currently, when i clic "save", nothing really happens. i stay in the same web page, and the saved analysis framework does not appear in my preview of the project panel.
Cheers!
|
1.0
|
Entry template editor - return to project management panel when saved - When i create a new project and i create a new analysis framework, and i clic edit framework, then a new tab appear and i access the "entry template editor" ( we should rename this Analysis framework editor for consistency).
I create my framework, all is okay, life is great. then i clic save. and nothing happen. Normally when clicking "save" i should go back to the project management panel, i should see here the preview of the analysis framework i just created. Currently, when i clic "save", nothing really happens. i stay in the same web page, and the saved analysis framework does not appear in my preview of the project panel.
Cheers!
|
priority
|
entry template editor return to project management panel when saved when i create a new project and i create a new analysis framework and i clic edit framework then a new tab appear and i access the entry template editor we should rename this analysis framework editor for consistency i create my framework all is okay life is great then i clic save and nothing happen normally when clicking save i should go back to the project management panel i should see here the preview of the analysis framework i just created currently when i clic save nothing really happens i stay in the same web page and the saved analysis framework does not appear in my preview of the project panel cheers
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.