| id string (lengths 4–10) | text string (lengths 4–2.14M) | source string (2 classes) | created timestamp (2001-05-16 21:05:09 – 2025-01-01 03:38:30) | added date (2025-04-01 04:05:38 – 2025-04-01 07:14:06) | metadata dict |
|---|---|---|---|---|---|
1921659761 | 🛑 Laman AMS BSrE is down
In 441bb60, Laman AMS BSrE (https://portal-bsre.bssn.go.id/login) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Laman AMS BSrE is back up in e1719f6 after 10 days, 17 hours, 14 minutes.
| gharchive/issue | 2023-10-02T10:37:25 | 2025-04-01T04:32:19.244604 | {
"authors": [
"BSrE-ID"
],
"repo": "BSrE-ID/monitor",
"url": "https://github.com/BSrE-ID/monitor/issues/415",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
271454725 | BTDF V5.5 - missing to create .bat files
Hi,
I have an issue with v5.5. The Create Deployment Project template creates all files except the .bat files.
Is there an issue with my installation?
Attachments
BTDF_V5.5.PNG
This work item was migrated from CodePlex
CodePlex work item ID: '10928'
Vote count: '1'
[tfabraham@4/27/2015]
The batch files should always be created. If you're using a build server, ensure that you have the final release version of v5.5 installed on the build server. Ensure that you have the final release installed on all development workstations too.
[UnknownUser@6/10/2015]
[Narsaiah@6/10/2015]
I downloaded and installed BTDF 5.5 in April, so it should be the latest version. I'm not sure what the issue is with my development system; I still have the same problem: no .bat files are created when creating a deployment project.
Can you suggest how to re-install this tool? Screenshot attached for your reference.
[tfabraham@8/26/2015]
Sorry, I thought you were referring to the MSI build output. The batch files within the deployment project folder are no longer required or supported. You should not see them in v5.5.
[UnknownUser@8/26/2015]
Issue closed by tfabraham with comment
See comments
Reason closed
As Designed
| gharchive/issue | 2017-11-06T12:12:15 | 2025-04-01T04:32:19.249819 | {
"authors": [
"tfabraham"
],
"repo": "BTDF/DeploymentFramework",
"url": "https://github.com/BTDF/DeploymentFramework/issues/376",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
How can I use OpenCC from Python these days?
http://pypi.python.org/pypi/opencc-python/ is an old version of OpenCC. Thanks.
I tried converting the new TWPhrasesRev.txt (I also tried TWPhrases.txt) to TWPhrasesRev.ocd with opencc_dict.exe, then wrote zhs2zhtp.ini:
【zhs2zhtp.ini】
title = simp_to_trad_pharase
description = Standard Configuration for Conversion from Simplified Chinese to Traditional Chinese
dict0 = OCD TWPhrasesRev.ocd
dict1 = OCD simp_to_trad_characters.ocd
【/zhs2zhtp.ini】
and modified __init__.py:
BUILDIN_CONFIGS = {
's2t': os.path.join(DATA_PATH, 'zhs2zht.ini'),
't2s': os.path.join(DATA_PATH, 'zht2zhs.ini'),
'mix2t': os.path.join(DATA_PATH, 'mix2zht.ini'),
'mix2s': os.path.join(DATA_PATH, 'mix2zhs.ini'),
'zhs2zhtp': os.path.join(DATA_PATH, 'zhs2zhtp.ini'),
}
However, it still fails to convert 鼠标 into 滑鼠.
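The ordering of dict0 and dict1 in the config above matters: the phrase dictionary has to match whole phrases before the per-character dictionary runs, otherwise 鼠标 is mapped character by character and the phrase 滑鼠 can never match. A minimal pure-Python sketch of that two-pass idea (illustrative only, not OpenCC's actual implementation; the function is hypothetical):

```python
# Illustrative sketch of the dict0/dict1 chaining, not OpenCC's code.
def convert(text, phrase_dict, char_dict):
    # Pass 1: whole-phrase replacements (e.g. 鼠标 -> 滑鼠).
    for simp, trad in phrase_dict.items():
        text = text.replace(simp, trad)
    # Pass 2: remaining characters mapped one by one.
    return "".join(char_dict.get(ch, ch) for ch in text)
```

Reversing the passes would convert 标 to 標 first and the phrase lookup would then fail, which is one reason a phrase-level dictionary sits before the character dictionary in the chain.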
You could consider trying ctypes:
https://docs.python.org/2/library/ctypes.html#module-ctypes
https://github.com/osfans/PIME/tree/master/python/opencc
Here is one version, but it doesn't implement generating .ocd files via ctypes; that seems rather cumbersome.
Thanks.
Also take a look at a python implementation of opencc. It uses the opencc data files.
https://github.com/yichen0831/opencc-python
| gharchive/issue | 2016-09-17T10:11:57 | 2025-04-01T04:32:19.280227 | {
"authors": [
"BYVoid",
"Hopkins1",
"osfans",
"retsyo"
],
"repo": "BYVoid/OpenCC",
"url": "https://github.com/BYVoid/OpenCC/issues/200",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
854917690 | Cloning a glTF object with LOD remove its material
This can be repro here: https://www.babylonjs-playground.com/#FHHM51#40
Whereas simple spheres are working fine: https://playground.babylonjs.com/#LUMDVM#2
Adding @bghgary for the material issue
In a nutshell, the mesh clone is not getting the material. It is actually getting one but the LOD extension from gltf is somehow disposing it
This isn't exactly a bug.
The problem is that we are loading the LODs progressively. Cloning the object before the LODs are done loading means it will clone an intermediate LOD. The current behavior is that materials for intermediate LODs are disposed when the final LOD is loaded.
It would be a difficult feature to implement if we want to be able to clone before the LODs are complete.
Here is how you would clone after the LODs are complete:
https://www.babylonjs-playground.com/#FHHM51#41
I like when there is no bug :D
Closing the issue as we know how to deal with it
| gharchive/issue | 2021-04-09T23:22:52 | 2025-04-01T04:32:19.284613 | {
"authors": [
"bghgary",
"deltakosh",
"sebavan"
],
"repo": "BabylonJS/Babylon.js",
"url": "https://github.com/BabylonJS/Babylon.js/issues/10171",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
338100059 | [Bug] (Some?) animations consume more CPU with time
Bugs
Bug repro on playground
Expected result: I expect my game to perform in a consistent way & not degrade as time passes.
Current result: My game goes from ~10% cpu to 100% after a few minutes of idle animation.
I binary searched the commits & narrowed it down to this PR https://github.com/BabylonJS/Babylon.js/pull/4563
If I revert this change, my game does not degrade with time.
Here's a screenshot of the profiler output. The listener curve is truncated, but it's a saw-tooth & doesn't leak upward / accumulate beyond a certain point.
Thanks a lot for finding this!!
Will fix it asap :)
Do you have a repro in the PG that can help us fix it?
I'll try to craft one
Much appreciated
Updated post & https://www.babylonjs-playground.com/index.html#4GJCJ3
This playground abuses the crap out of the issue. I would suggest profiling within the first 10 seconds of opening it & then let it run in a fresh window for a couple minutes and then check your CPU. The difference is vast.
If you switch the version to latest, you won't see any degradation (though it's still choppy because the code is very adversarial).
Thanks this is perfect
@bghgary will probably check on thursday as tomorrow is 4th of july :)
Sounds good. You guys will probably hear from me a lot. I'm working on a 3d fighting game that touches nearly every part of the babylon codebase & pushes it pretty hard. The regressions are easy to catch (though tough to narrow down).
Cheers
Well this is an excellent news!!! More than happy to fix everything you can find :D
It looks like the playground code is creating new animation objects every time objectTick is called. This is definitely going to leak. You can see this if you type BABYLON.Engine.LastCreatedScene.animatables in the debug console. This array just keeps growing and growing as calling scene.beginDirectAnimation will just keep adding to the list.
@fmmoret
I binary searched the commits & narrowed it down to this PR #4563
I'm not sure what you are seeing with this. My PR makes it such that animations can have only one frame. Without this change, the code in your playground doesn't work at all since it immediately throws an exception.
Did you visit the playground link on stable vs not:
https://www.babylonjs-playground.com/index.html#4GJCJ3
https://www.babylonjs-playground.com/indexStable.html#4GJCJ3
And per https://github.com/BabylonJS/Babylon.js/blob/master/src/Animations/babylon.animatable.ts#L381 they should be getting cleaned up when they finish. I suspect your early return is bypassing some clean-up mechanism down the road.
BABYLON.Engine.LastCreatedScene.animatables doesn't grow on stable -- stays at a flat 3-4 depending on how fast setInterval kicks off the next function call
Makes sense to me
Agreed. The animatables should be removed on end. I'll look at it more.
It's https://github.com/BabylonJS/Babylon.js/blob/master/src/Animations/babylon.animatable.ts#L372 getting the wrong value back from https://github.com/bghgary/Babylon.js/blob/ee0b2ad49a57c9094337ae9385bce749c986daea/src/Animations/babylon.runtimeAnimation.ts#L408
Agree, should just return loop instead of !loop
I think it should be return loop;? If we're looping, we're definitely still running, and if we're not, we're done because it's just one frame.
Yes, I relied on the comment too much. But I don't think that's the only issue.
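The one-frame termination check under discussion can be modeled in a few lines (an illustrative Python sketch, not Babylon's actual runtimeAnimation code; the function name and signature are hypothetical). The return value stands for "still animating"; animatables that report finished get removed from the scene's list, so inverting the flag for single-frame animations is exactly what lets them accumulate:

```python
# Illustrative model only, not Babylon's runtimeAnimation code.
def still_animating(current_frame, from_frame, to_frame, loop):
    if from_frame == to_frame:
        # Single-frame animation: it keeps running only if it loops.
        # The bug returned `not loop`, so non-looping one-frame
        # animatables never reported "finished" and were never removed.
        return loop
    # Multi-frame animation: running until the last frame, or forever if looping.
    return loop or current_frame < to_frame
```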
I'm having the same issue but it is consuming that much CPU because of the garbage collector
The function _registerTargetForLateAnimationBinding is allocating 50% of the memory of Decentraland
We unfortunately need memory to blend animations together. We need to keep track of all animations targeting the same object/path and then resolve the merge.
Do you have a lot of blended animations?
Yes... several, less than 10 tho, we made an adapter for animations and the default interface creates weighted animations even if you decide to play only one animation at 100% or the model has only one animation.
https://docs.decentraland.org/sdk-reference/entity-interfaces/#gltf-models
Well we can try to see if I can optimize that :)
can you repro something in the PG that shows this problem?
| gharchive/issue | 2018-07-04T01:16:38 | 2025-04-01T04:32:19.298106 | {
"authors": [
"bghgary",
"deltakosh",
"fmmoret",
"menduz"
],
"repo": "BabylonJS/Babylon.js",
"url": "https://github.com/BabylonJS/Babylon.js/issues/4673",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
227104502 | Adjustment to position of CannonJS HeightmapImpostor
I hope this works! :)
Hi guys,
sorry for a late reply as always :smile:
Can you explain what was wrong with the old implementation and why you think this is the right way to go?
Cannon has z as the up vector, which is not needed until you get to very specific object types like the one we are handling here. Any playground you want to show me?
Hi R, thanks for asking. I guess it was #8 in the series that was first seen as an issue. The camera was aiming +x (I changed to a +z-aimed camera soon after).
Notice that the physics impostor is only active in the southwest quadrant of the terrain, and that the impostor extends southwest BEYOND the edges of the terrain, too. It is a 500x500 terrain, and I believe the impostor needs repositioning 250 units north (+x) and 250 units east (+z) (or +y if the Cannon heightmap has z vertical).
Thank god you're here! I can use all the help I can get. But don't let this stand in the way of real life stuff, R. I think boundingBox.extendSize changed from returning DIAMETER-like values... to returning radius-like values (half of what they once were). But I have no proof of that, and nobody is saying I am correct. So, likely I'm wrong. :) Thanks for advice/input, R. Sorry if I'm causing you hassles, or lost on wild goose chase.
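The repositioning math described above can be sketched as follows (assumed values; this is not the actual Babylon/Cannon code, and the function name is hypothetical). If boundingBox.extendSize now returns half-extents (radius-like) rather than full diameters, the heightmap impostor must be shifted by half the terrain size on each horizontal axis to line up with the mesh:

```python
# Sketch of the offset math only, not the library implementation.
def heightmap_offset(terrain_width, terrain_depth):
    # Half the terrain size on each horizontal axis, e.g. 250 units
    # for the 500x500 terrain discussed above.
    return terrain_width / 2, terrain_depth / 2
```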
DOA
| gharchive/pull-request | 2017-05-08T16:40:19 | 2025-04-01T04:32:19.301625 | {
"authors": [
"RaananW",
"Wingnutt"
],
"repo": "BabylonJS/Babylon.js",
"url": "https://github.com/BabylonJS/Babylon.js/pull/2112",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
506474190 | aws-ecr
amazon web services
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:0c27bfab932b60f1c60a4c2e74bee114f8d4b795
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:0c27bfab932b60f1c60a4c2e74bee114f8d4b795
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=8.0.0
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:984cb69bcfb2f3e1ad17578f906c8617619cdff2
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:984cb69bcfb2f3e1ad17578f906c8617619cdff2
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:984cb69bcfb2f3e1ad17578f906c8617619cdff2
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=8.1.0
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:f7245e32ec81adb22d4e05658220a4e86f2dd90c
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:f7245e32ec81adb22d4e05658220a4e86f2dd90c
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=8.1.1
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:20543408da67b0013c3adea8a0dac192ab0b5540
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:7ceb2edb57dc9f42a1ecc5ec3a5830fc00140ab8
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:7ceb2edb57dc9f42a1ecc5ec3a5830fc00140ab8
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=8.1.2
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:7ceb2edb57dc9f42a1ecc5ec3a5830fc00140ab8
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:14861880e15d2323048d623370c33fb42c5c82a8
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:1071ee6384fa0af636f0bdc5eeb14a902a798b81
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:362bc6a44ec2cc89a936bd0aeddfc58d5f058e75
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:eef51910b354d996676931da6205061435dd12b3
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:f9924a08cc4740e35ee54ff2ce01bbb42b534eb6
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:c710193b32fd9281d806b8e915377e5c9bdb8c0b
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:a8bbc90c4d6d963a29ecf2bbdd515e75f5bcf47d
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:4332b83c04cd80f29a2fead59e613b28af6e85ef
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:8b56848d4294634c97eaaa73d7173958e9d31cce
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=8.2.0
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:1d72438e3487c2f871f962dcff3a00b4ee1cb706
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:1d72438e3487c2f871f962dcff3a00b4ee1cb706
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=8.2.1
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:6264a75e89df2e34f8ca42f2559a3efd824853cf
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:596dd3b9f5c7da01d1fec2ac561e8cf48f89c7a6
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:e275ff964679f259f60ee6fc1d39103304089e7e
Your development orb has been published. It will expire in 30 days.
You can preview what this will look like on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=dev:9632aa04a1d515422cb53a1b3df7ce1c787c1acd
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=9.0.0
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=9.0.1
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=9.0.2
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=9.0.3
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=9.0.4
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=9.1.0
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=9.2.0
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=9.3.0
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=9.3.1
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=9.3.4
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=9.3.6
Your orb has been published to the CircleCI Orb Registry.
You can view your published orb on the CircleCI Orb Registry at the following link:
https://circleci.com/developer/orbs/orb/circleci/aws-ecr?version=9.3.7
| gharchive/pull-request | 2019-10-14T06:43:48 | 2025-04-01T04:32:19.383407 | {
"authors": [
"orb-publisher",
"sunilchalla"
],
"repo": "BackEndTea/aws-ecr-orb",
"url": "https://github.com/BackEndTea/aws-ecr-orb/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1306604809 | 🛑 Warranty.Buildsafe.co.uk is down
In afb4f0b, Warranty.Buildsafe.co.uk (https://warranty.buildsafe.co.uk) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Warranty.Buildsafe.co.uk is back up in 3ee8e9c.
| gharchive/issue | 2022-07-15T22:45:01 | 2025-04-01T04:32:19.426322 | {
"authors": [
"BaileyStudio"
],
"repo": "BaileyStudio/Monitor",
"url": "https://github.com/BaileyStudio/Monitor/issues/140",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2328937168 | [Feat]: NavBar should close When Navitem is clicked in mobile devices
Issue No: #43
Actual Behaviour:
The NavBar does not close when a Nav Item is clicked on mobile devices
Expected Behaviour:
The NavBar closes when a Nav Item is clicked on mobile devices
If it's not good, please assign me again I will do it better
Thank you for submitting your pull request! 🙌 We'll review it as soon as possible. In the meantime, please ensure that your changes align with our CONTRIBUTING.md. If there are any specific instructions or feedback regarding your PR, we'll provide them here. Thanks again for your contribution! 😊
| gharchive/pull-request | 2024-06-01T05:52:45 | 2025-04-01T04:32:19.451191 | {
"authors": [
"BamaCharanChhandogi",
"SrinivasDevolper"
],
"repo": "BamaCharanChhandogi/Diabetes-Prediction",
"url": "https://github.com/BamaCharanChhandogi/Diabetes-Prediction/pull/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2591654887 | Fixing top space on mobile view in Home and Blog page
There is some space above the "post something" section on the Home page and the Blog page in mobile view. We should remove it. Please assign this issue to me.
👋 Thank you for raising an issue! We appreciate your effort in helping us improve. Our GitFinder team will review it shortly. Stay tuned!
| gharchive/issue | 2024-10-16T11:56:26 | 2025-04-01T04:32:19.452406 | {
"authors": [
"BamaCharanChhandogi",
"DevanshuTripathi"
],
"repo": "BamaCharanChhandogi/GitFinder",
"url": "https://github.com/BamaCharanChhandogi/GitFinder/issues/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
185814327 | Drag-n-Drop
Find better alternative to the drag and drop directive, or maybe a better implementation of what is currently there. These changes need to update the Trello board, as well.
Can I start this?
@sandaruny certainly, thanks! I haven't worked on this project in about a year, but I do remember improving the drag-n-drop functionality slightly.
| gharchive/issue | 2016-10-28T00:45:03 | 2025-04-01T04:32:19.453656 | {
"authors": [
"Banjerr",
"sandaruny"
],
"repo": "Banjerr/electrello",
"url": "https://github.com/Banjerr/electrello/issues/3",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2289905036 | Set PlayerDefaultFaction
For New Contributors
How to contribute
Description
Current Clan.PlayerClan refers to the clan with id of "player_faction", this is the server clan.
Intended Design
Clan.PlayerClan should point to the local player clan.
Find a good place to assign this.
Location
Create a branch based from development
Related Issues
Requirements
Additional information
Definition of Done
[ ] Class level comments exist for all new classes.
[ ] XUnit tests exist for every method that does not require the game to be ran.
On hold
| gharchive/issue | 2024-05-10T15:01:54 | 2025-04-01T04:32:19.499197 | {
"authors": [
"EgardA"
],
"repo": "Bannerlord-Coop-Team/BannerlordCoop",
"url": "https://github.com/Bannerlord-Coop-Team/BannerlordCoop/issues/802",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
168153203 | What is the changes on the last update?
Hello just wanted to know
nuffin
Just performance optimization; it reduces lag when the server has a lot of players.
| gharchive/issue | 2016-07-28T17:36:46 | 2025-04-01T04:32:19.557084 | {
"authors": [
"Barbosik",
"lulzyyy",
"multa2"
],
"repo": "Barbosik/MultiOgar",
"url": "https://github.com/Barbosik/MultiOgar/issues/236",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2211902300 | 🛑 Infrastructure is down
In 13bed77, Infrastructure (https://www.inf.gov.nt.ca) was down:
HTTP code: 503
Response time: 9958 ms
Resolved: Infrastructure is back up in f36f1bc after 8 minutes.
| gharchive/issue | 2024-03-27T21:01:06 | 2025-04-01T04:32:19.559470 | {
"authors": [
"Barctic"
],
"repo": "Barctic/gnwt-monitor",
"url": "https://github.com/Barctic/gnwt-monitor/issues/10259",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2598251885 | 🛑 NWT Housing Corporation is down
In 1c6a977, NWT Housing Corporation (https://www.nwthc.gov.nt.ca) was down:
HTTP code: 503
Response time: 10248 ms
Resolved: NWT Housing Corporation is back up in 261a382 after 6 minutes.
| gharchive/issue | 2024-10-18T19:34:43 | 2025-04-01T04:32:19.562082 | {
"authors": [
"Barctic"
],
"repo": "Barctic/gnwt-monitor",
"url": "https://github.com/Barctic/gnwt-monitor/issues/12716",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1520513149 | 🛑 Health and Social Services is down
In 70bebfa, Health and Social Services (https://www.hss.gov.nt.ca) was down:
HTTP code: 403
Response time: 418 ms
Resolved: Health and Social Services is back up in 3c0eeaf.
| gharchive/issue | 2023-01-05T10:42:07 | 2025-04-01T04:32:19.564386 | {
"authors": [
"Barctic"
],
"repo": "Barctic/gnwt-monitor",
"url": "https://github.com/Barctic/gnwt-monitor/issues/1882",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1530767370 | 🛑 Infrastructure is down
In 77565ab, Infrastructure (https://www.inf.gov.nt.ca) was down:
HTTP code: 500
Response time: 504 ms
Resolved: Infrastructure is back up in 7791e44.
| gharchive/issue | 2023-01-12T14:02:33 | 2025-04-01T04:32:19.566842 | {
"authors": [
"Barctic"
],
"repo": "Barctic/gnwt-monitor",
"url": "https://github.com/Barctic/gnwt-monitor/issues/2074",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1533294958 | 🛑 Health and Social Services is down
In b7ebee6, Health and Social Services (https://www.hss.gov.nt.ca) was down:
HTTP code: 403
Response time: 452 ms
Resolved: Health and Social Services is back up in 60517ad.
| gharchive/issue | 2023-01-14T14:59:14 | 2025-04-01T04:32:19.569161 | {
"authors": [
"Barctic"
],
"repo": "Barctic/gnwt-monitor",
"url": "https://github.com/Barctic/gnwt-monitor/issues/2407",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1669288541 | 🛑 Environment and Natural Resources is down
In db83620, Environment and Natural Resources (https://www.enr.gov.nt.ca) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Environment and Natural Resources is back up in 5452d15.
| gharchive/issue | 2023-04-15T09:51:09 | 2025-04-01T04:32:19.571454 | {
"authors": [
"Barctic"
],
"repo": "Barctic/gnwt-monitor",
"url": "https://github.com/Barctic/gnwt-monitor/issues/4547",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2042569654 | 🛑 Health and Social Services Authority is down
In f9c30d7, Health and Social Services Authority (https://www.nthssa.ca) was down:
HTTP code: 503
Response time: 3217 ms
Resolved: Health and Social Services Authority is back up in 355f2fc after 9 minutes.
| gharchive/issue | 2023-12-14T21:59:55 | 2025-04-01T04:32:19.573762 | {
"authors": [
"Barctic"
],
"repo": "Barctic/gnwt-monitor",
"url": "https://github.com/Barctic/gnwt-monitor/issues/9242",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1002753701 | 🛑 Norcal - Search is down
In 5f559ac, Norcal - Search (https://demo-norcal-v1.builtforyou.com/results?search=) was down:
HTTP code: 500
Response time: 284 ms
Resolved: Norcal - Search is back up in 0b345f3.
| gharchive/issue | 2021-09-21T15:51:01 | 2025-04-01T04:32:19.576158 | {
"authors": [
"risadams"
],
"repo": "BarkleyREI-ArchiTECH/ArchiTECH-upptime",
"url": "https://github.com/BarkleyREI-ArchiTECH/ArchiTECH-upptime/issues/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2362967992 | 🛑 QBitTorrent webUI is down
In 3277135, QBitTorrent webUI (https://qbittorrent.jonthan.xyz) was down:
HTTP code: 404
Response time: 427 ms
Resolved: QBitTorrent webUI is back up in ee0efa7 after 6 minutes.
| gharchive/issue | 2024-06-19T18:52:18 | 2025-04-01T04:32:19.582875 | {
"authors": [
"Baronhez"
],
"repo": "Baronhez/upptime",
"url": "https://github.com/Baronhez/upptime/issues/2453",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2429748318 | 🛑 QBitTorrent webUI is down
In bbf1841, QBitTorrent webUI (https://qbittorrent.jonthan.xyz) was down:
HTTP code: 404
Response time: 426 ms
Resolved: QBitTorrent webUI is back up in a29843f after 6 minutes.
| gharchive/issue | 2024-07-25T11:41:30 | 2025-04-01T04:32:19.585376 | {
"authors": [
"Baronhez"
],
"repo": "Baronhez/upptime",
"url": "https://github.com/Baronhez/upptime/issues/3372",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
229999743 | support version camt.053.001.04
I have added support for camt.053.001.04.
Please review the pull request and let me know if you are fine with it.
This is the spec I used:
https://www.six-interbank-clearing.com/dam/downloads/en/standardization/iso/swiss-recommendations/implementation-guidelines-camt.pdf
| gharchive/pull-request | 2017-05-19T14:53:43 | 2025-04-01T04:32:19.610303 | {
"authors": [
"manubo"
],
"repo": "Barzahlen/camt_parser",
"url": "https://github.com/Barzahlen/camt_parser/pull/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
976188909 | removing waitlist copy
Description
Testing required outside of automated testing?
[ ] Not Applicable
Screenshots (if appropriate):
[ ] Not Applicable
Rollback / Rollforward Procedure
[ ] Roll Forward
[ ] Roll Back
Reviewer Checklist
[ ] Description of Change
[ ] Description of outside testing if applicable.
[ ] Description of Roll Forward / Backward Procedure
[ ] Documentation updated for Change
:tada: This PR is included in version 1.1.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2021-08-21T17:58:43 | 2025-04-01T04:32:19.645566 | {
"authors": [
"armsteadj1",
"bweber"
],
"repo": "Basis-Theory/docs",
"url": "https://github.com/Basis-Theory/docs/pull/68",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
714368275 | Re-Optimised Dockerfile
Re-Optimised Dockerfile with prebuilt ffmpeg-alpine image. Everything should work fine
@retrodaredevil or anyone else who is reading this, can you re-clone this repository as I've just pushed a commit, follow the instructions under "Building with Docker" in the README, and then try the YouTube downloader? I'm getting the following error when doing so:
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/app/yt.py", line 262, in yt_downloader
return send_json_response(session['progress_file_path'], video_id, ' [audio].')
File "/app/yt.py", line 144, in send_json_response
correct_file = [filename for filename in downloads if video_id in filename and download_type in filename][0]
IndexError: list index out of range
The most informative lines being:
File "/app/yt.py", line 144, in send_json_response
correct_file = [filename for filename in downloads if video_id in filename and download_type in filename][0]
IndexError: list index out of range
I'm wondering if anyone else will get the same issue. @telegrambotdev are you sure that you don't get the aforementioned error?
I haven't tested this in a while, but we could start using just youtube-dl as a dependency instead of git+https://github.com/ytdl-org/youtube-dl.git@refs/pull/25717/head like we currently have it. I think the change we needed a while back got merged (https://github.com/ytdl-org/youtube-dl/pull/25717/).
I'll try recloning and building the docker image if I find time.
I haven't tested this in a while, but we could start using just youtube-dl as a dependency instead of git+https://github.com/ytdl-org/youtube-dl.git@refs/pull/25717/head like we currently have it. I think the change we needed a while back got merged (ytdl-org/youtube-dl#25717).
I thought I had already removed it from requirements.txt. Just had a look and it hasn't been removed, so I'll do so now and edit the README to list it as a dependency when running locally.
I would merge this pull request to make testing the new Dockerfile easier, but this will close this issue and I'd rather keep this issue open to make my above request more visible to people who view this repository.
@telegrambotdev @retrodaredevil Here's the reason for the issue I mentioned as well as the solution:
I explored inside the Docker container using docker exec -it name-of-container bash...
What I found is that youtube-dl isn't able to run with a simple /usr/local/bin/youtube-dl, I got the following error:
env: can't execute 'python': No such file or directory
The path to Python 3 also needs to be specified. So, to run youtube-dl in the container, you need to do something like /usr/bin/python3 /usr/local/bin/youtube-dl.
This means that yt.py needs to be edited before building the container. The way we are using youtube-dl is by passing a list to subprocess.run(), and therefore you need to make youtube_dl_path a list like so youtube_dl_path = ['/usr/bin/python3', '/usr/local/bin/youtube-dl'], and then concatenate all of the args lists in yt.py with this list. Here's an example:
args = youtube_dl_path + ['--newline', '--restrict-filenames', '--cookies', 'cookies.txt',
'-o', download_template, '--', video_id]
You can see the changes that needed to be made to yt.py in this commit.
Is there a reason you are calling youtube-dl as a subprocess? youtube-dl is written in Python and I think you are able to do the same thing by just calling functions from its public API. Then you don't have to worry about the path to the python interpreter or the path to the executable.
@retrodaredevil That was a good suggestion, so I've changed the code to use the API instead of subprocess.run(). https://github.com/BassThatHertz/AudioAndVideoConverter/commit/891f03100ba5dc4cfe83648d1949d8288ec7e218
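For anyone reading later, here is a rough sketch of what the API-based approach can look like. This is not the project's actual code: the option keys mirror the old CLI flags, and the output template and directory layout are assumptions for illustration.

```python
def build_ydl_opts(download_dir, audio_only=False):
    """Build a youtube-dl options dict roughly equivalent to the old CLI args."""
    opts = {
        "restrictfilenames": True,    # was: --restrict-filenames
        "cookiefile": "cookies.txt",  # was: --cookies cookies.txt
        # Assumed output template; the real app uses its own naming scheme.
        "outtmpl": download_dir + "/%(title)s-%(id)s.%(ext)s",
    }
    if audio_only:
        opts["format"] = "bestaudio"
    return opts


def download(video_id, download_dir, audio_only=False):
    """Download via the youtube-dl Python API instead of a subprocess."""
    # Import lazily so this sketch degrades gracefully where youtube-dl
    # isn't installed.
    try:
        from youtube_dl import YoutubeDL
    except ImportError:
        return False
    with YoutubeDL(build_ydl_opts(download_dir, audio_only)) as ydl:
        ydl.download(["https://www.youtube.com/watch?v=" + video_id])
    return True
```

With this approach there is no need to worry about the interpreter path or the executable path inside the container, since everything runs in the same Python process.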
| gharchive/pull-request | 2020-10-04T18:03:09 | 2025-04-01T04:32:19.654740 | {
"authors": [
"BassThatHertz",
"retrodaredevil",
"telegrambotdev"
],
"repo": "BassThatHertz/AudioAndVideoConverter",
"url": "https://github.com/BassThatHertz/AudioAndVideoConverter/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2170448945 | [FEATURE] Individual Entry Preview
Requested Feature
I'd like to be able to render a specific entry on its own, without any of the frontmatter and appendix.
single entry
no frontmatter
no appendix
Current Implementation
You must include the following for the entry to actually render in the preview
#show notebook.with(/*...*/)
Motivation
This would make it easier to work on individual entries and quickly check that everything looks correct without any scrolling in the pdf preview.
(Optional) Possible Implementations and Alternatives
open the pdf to a specific page but i couldn't figure out how to do this
dont provide cover page -> radial theme creates a placeholder cover page
I don't think its possible for us to support this. I considered having each create_entry function return content instead of modifying the global state of the template, however this created some issues that I didn't see a way to get around.
Entries could be created out of order (ie you could have a frontmatter entry, a body entry, and then another frontmatter entry)
We suddenly have no way of guaranteeing that the page count is correct at any point in the document
Even if it was possible this is too large of a refactor for the number of users it affects.
I know that my PDF viewer (Zathura) remembers which page it was viewing when the document reloads, so you may want to consider switching up your approach to viewing PDFs.
| gharchive/issue | 2024-03-06T01:27:00 | 2025-04-01T04:32:19.676774 | {
"authors": [
"BattleCh1cken",
"meisZWFLZ"
],
"repo": "BattleCh1cken/notebookinator",
"url": "https://github.com/BattleCh1cken/notebookinator/issues/32",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
2600694063 | bump to 1.21.1?
could new versions of this mod for neoforge be bumped to 1.21.1?
1.21 works with 1.21.1; there's no code difference.
ok
| gharchive/issue | 2024-10-20T17:14:57 | 2025-04-01T04:32:19.691608 | {
"authors": [
"Bawnorton",
"VaporeonScripts"
],
"repo": "Bawnorton/AllTheTrims",
"url": "https://github.com/Bawnorton/AllTheTrims/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1842366582 | it's not working on proxy
Could you add the function to support to search and download music under proxy?
I use Clash as a US proxy, and with it Creamplayer v4.0.0 can play and download normally. Within China, direct-connection downloads are fairly fast.
| gharchive/issue | 2023-08-09T02:47:56 | 2025-04-01T04:32:19.728634 | {
"authors": [
"Beadd",
"alucadoli"
],
"repo": "Beadd/Creamplayer",
"url": "https://github.com/Beadd/Creamplayer/issues/57",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
444957109 | Adding a feedback dialogue.
A user can select an app rating mark.
If it’s positive, then the store is going to open with a possibility to give feedback. If it’s negative, then mail is going to open directed to our address.
The dialogue is going to be displayed every 10th start or with successful getting/sending of the 2nd transactions.
example - https://cdn-images-1.medium.com/max/1200/0*kHtaRGaYqPEAVvo7.png
@alexandrashelenkova please design
https://zpl.io/29w3Bne
https://zpl.io/VkpPk75
@alexandrashelenkova please check the designs
I think the name is misleading. It's about featuring ratings on the settings page.
and I'm sure we don't need a popup
checked
| gharchive/issue | 2019-05-16T13:29:41 | 2025-04-01T04:32:19.732196 | {
"authors": [
"Denis-Asipenko",
"DenisDemyanko",
"dariatarakanova",
"sasha-abramovich"
],
"repo": "BeamMW/ios-wallet",
"url": "https://github.com/BeamMW/ios-wallet/issues/141",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1864041317 | 🟢 Add a Maps page that pinpoints users to the location they identified in the JSON file.
Create a Map page with a UI design map to display Contributors' locations; you can use the Python library called "Leafmap". Currently, info on where Contributors are based is located in the Contributors.json file and is structured in the following way. We might need to adjust how the JSON file is structured when reaching the phase of creating code to iterate over the JSON file and pulling users' locations, which are structured by a city name, country name, and flag emoji.
🔨 We can break this open issue into smaller issues as it might require several moving parts to execute
The GitHub repository fpdf2 is a great example ⬇️
When viewing the map from the open-source project fpdf2, you can click through to a user's GitHub link and their specific contribution to the project ⬇️
Current steps to execute for this project (or, better yet, reverse-engineer how the map in the fpdf2 repository was constructed):
[ ] 1. Create a new page component: Create a new React component that will serve as the page for displaying the world map and contributor locations. You can use the create-react-app command to create a new component or create a new file manually.
[ ] 2. Install the necessary dependencies: Install any necessary dependencies for displaying the map and markers, such as react-leaflet or google-maps-react.
[ ] 3. Fetch the contributor data: Fetch the contributor data from the JSON file or API endpoint and store it in the component's state or props.
[ ] 4. Display the map: Use the map library to display a map of the world on the page. You can use react-leaflet or google-maps-react to display the map.
[ ] 5. Add markers for each contributor: Iterate over the contributor data and add a marker for each contributor's location on the map. You can use the Marker component from react-leaflet or google-maps-react to add markers.
[ ] 6. Customize the markers: Customize the markers to display information about each contributor, such as their name, location, and contribution details. You can use the Popup component from react-leaflet or google-maps-react to display this information.
[ ] 7. Style the page: Style the page to make it visually appealing and easy to use. You can use CSS or a CSS framework such as Bootstrap or Material UI to style the page.
From research, the two options with documentation from others would be using Leaflet for React or integrating Google Maps React. This doesn't mean it has to be done this way. If you have another solution to creating a map to display users' locations, go for it!
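Since Leafmap (Python) is one of the options mentioned above, here is a rough Python sketch of the data-prep step: iterating over a Contributors.json-style list and splitting each location into city and country, ready for geocoding and markers. The JSON keys and the "City, Country 🇺🇸" format are assumptions for illustration; adjust them to match the real file.

```python
import json


def parse_location(location):
    """Split a 'City, Country 🇺🇸' string into (city, country)."""
    parts = [p.strip() for p in location.strip().split(",")]
    city = parts[0]
    country = ""
    if len(parts) > 1:
        # Drop the flag emoji (Unicode regional indicator symbols).
        country = "".join(
            ch for ch in parts[1] if not 0x1F1E6 <= ord(ch) <= 0x1F1FF
        ).strip()
    return city, country


def contributor_markers(json_text):
    """Turn the raw JSON into a list of marker dicts ready for geocoding."""
    markers = []
    for person in json.loads(json_text):
        city, country = parse_location(person.get("location", ""))
        markers.append({"name": person.get("name", ""),
                        "city": city, "country": country})
    return markers
```

Each resulting dict can then be geocoded to latitude/longitude and handed to whichever map library ends up being chosen.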
Resources:
GitHub Repository fpdf2 - That has in the README section at the bottom a Map function that shows where all the contributors are located
Example of the map by the open source project fpdf2 - This shows all the contributors involved in the project in addition to link to the GitHub profile and their specific(s) contribution to the project for a pull request.
YouTube video for using React and Leaflet - Additionally, this channel demonstrates several more videos on the topic to customize the layering and popups of the map.
Medium Article for using React and Leaflet
Digital Ocean article on using Google Maps and React
hi XanderRubio, pls, I would love to work on this issue, please assign it to me, thanks
Hello @XanderRubio ,
i'm a student this issue would be my first contribution on github,
I'm interested in doing it but i still have some questions since i'm still a beginner.
when you said "You can use the create-react-app command" do you mean creating a new project? or
do I make the issue in an already made project? by adding a file in the /component folder for example?
and lastly how long do you think this issue would take?
-Marciano
Hi Marciano! I appreciate your enthusiasm in tackling this issue. I really appreciate it. Before tackling I would recommend you have a look over the README and then head over to the CONTRIBUTION-GUIDELINES and then open up your first pull request to share what you want to do before you die. Look at the live link for examples.
Once you get your info added, we will merge your first pull request, and this is a great way to get an idea of how pull requests work. Then I can go ahead and assign you to this task, and I can assist with helping you with providing more clarification on this issue.
Additionally, you can make the file manually, as this can be easier for figuring out the structure of the file directory visually in VSCode or whichever code editor you're using. How long the issue takes depends on how much time and effort you want to put into it. The main objective of your contribution is to learn and grow from it, so even if it takes longer than you expected, if you are gaining new insight into working with React, then it is worth it. I hope that helps. Have a great day!
Xander
Hi @XanderRubio, I'm excited to do this task. I will start now! Can you put me as Assignees?
@XanderRubio
Awesome!, That sounds good!
I will review the README and CONTRIBUTION-GUIDELINES and try to make my first pull request today!.
Hi @XanderRubio, I'm excited to do this task. I will start now! Can you put me as Assignees?
Hi @lucasfirmo62!
It's wonderful to see your enthusiasm for contributing to our project! I'll go ahead and assign you this task. Additionally, could you please update your text for your "Before I Die" statement, as mentioned in issue #75? We encourage you to share a meaningful statement that resonates with you, as other contributors will view your text, and we want it to serve as a positive example for others to share their meaningful goals.
Thank you, Lucas, and we look forward to seeing your contribution. If you have any questions or issues while integrating the contributors' map, please don't hesitate to reach out. Have a fantastic day!
@XanderRubio Awesome!, That sounds good! I will review the README and CONTRIBUTION-GUIDELINES and try to make my first pull request today!.
Thanks for making your first pull request @MarcianoN. I've assigned this issue to @lucasfirmo62. Please feel free to contribute to other open issues, create your own open issues, and also visit the ROADMAP for ideas on how we can continually grow this project with different tech stacks. Have a great day!
Hi @XanderRubio
If this issue is still pending please let me know I will start working on it as soon as possible as I am good at MERN and I want to contribute to open-source So, This will my first contribution
please can assign me this issue?
I also worked on ReactJS but want to explore open-source
Thank you
Hi @XanderRubio If this issue is still pending please let me know I will start working on it as soon as possible as I am good at MERN and I want to contribute to open-source So, This will my first contribution please can assign me this issue? I also worked on ReactJS but want to explore open-source
Thank you
Hi @KulkarniShrinivas,
Thank you for following up on your interest in working on this issue. As of now, I assigned the issue to @lucasfirmo62 two weeks ago without any further communication from him. If by the end of Saturday, September 30th, he has not responded, we will go ahead and reassign the issue to you so that you can start working on it.
In the meantime, feel free to open your first pull request by following the contribution guidelines outlined here. Share what you want to do before you die - I look forward to your contribution!
Have a great day,
Xander
I think it would be better to reuse the LocationMap component since it does exactly that. Nest it within a bigger component and move LocationMap itself to a separate function that returns the jsx. The bigger component would utilize a smaller function to do the geocoding and returns all the geo positions for all the users and then call LocationMap only once with { markers } argument.
Hi @XanderRubio, sorry for the delay, over the next week I will be finalizing this issue, I encountered some difficulties when developing, but I believe I will finish it next week.
Hi @XanderRubio, sorry for the delay, over the next week I will be finalizing this issue, I encountered some difficulties when developing, but I believe I will finish it next week.
Hi @lucasfirmo62. No worries about the delay 😉 I hope you're well, and if you would like, you can reference how @sherikovic built out the LocationMap component displayed on the user card. This could help with any difficulties, in addition to following what @sherikovic is suggesting. You two can even partner on the code if that helps. Please let me know what you would like to do; @sherikovic is a friendly developer who is happy to assist in working together.
"I think it would be better to reuse the LocationMap component since it does exactly that. Nest it within a bigger component and move LocationMap itself to a separate function that returns the jsx. The bigger component would utilize a smaller function to do the geocoding and returns all the geo positions for all the users and then call LocationMap only once with { markers } argument."
seems like a few people are already working on this, i'd like to help where i can 🥺
| gharchive/issue | 2023-08-23T21:30:32 | 2025-04-01T04:32:19.813539 | {
"authors": [
"KulkarniShrinivas",
"MarcianoN",
"XanderRubio",
"goodylove",
"lucasfirmo62",
"samejima-san",
"sherikovic"
],
"repo": "BeforeIDieCode/BeforeIDieAchievements",
"url": "https://github.com/BeforeIDieCode/BeforeIDieAchievements/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
272163788 | Add idle timeout
Fixes #301
Adds idle timeout option to browserstack serivce.
See https://www.browserstack.com/automate/timeouts
+1 on this change - tests seem to be failing due to PHP 5.3 testing on Travis: PHP 5.3 is supported only on Precise. See https://github.com/Behat/MinkExtension/pull/293 for the fix
| gharchive/pull-request | 2017-11-08T11:24:10 | 2025-04-01T04:32:19.816478 | {
"authors": [
"adamclark-dev",
"pvhee"
],
"repo": "Behat/MinkExtension",
"url": "https://github.com/Behat/MinkExtension/pull/302",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1847936402 | VibrateNope is two instead of three on Android
In VibrateNope your android pattern is { 0, 50, 50, 50 }, which is two vibrations instead of three. Android patterns are like { repeats, vibration_length, delay, vibration_length, delay, ... }.
Don't hesitate to send a pull request to fix this ;).
| gharchive/issue | 2023-08-12T11:01:29 | 2025-04-01T04:32:19.879025 | {
"authors": [
"BenoitFreslon",
"Trotsenkov"
],
"repo": "BenoitFreslon/Vibration",
"url": "https://github.com/BenoitFreslon/Vibration/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
345701143 | How do I use this framework correctly on iOS?
The framework exported in debug mode works fine when added to my own project, but with the framework exported in release mode, my project's release build fails with: file was built for arm64 which is not the architecture being linked (armv7). My build settings are: Build Active Architecture Only = NO, Valid Architectures = x86_64 arm64 armv7
32-bit devices are currently not supported and cannot be used; only x86_64 or arm64 are supported.
| gharchive/issue | 2018-07-30T10:31:32 | 2025-04-01T04:32:19.900430 | {
"authors": [
"SmilngCat",
"panxiaoqin"
],
"repo": "Bepal/eosio",
"url": "https://github.com/Bepal/eosio/issues/1",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1041711290 | [Bug] Opening the CSS editor to a detached window causes everything to disappear.
Describe the Bug
Detaching the CSS editor causes everything but the editor to go away, and this is only fixed with a restart.
To Reproduce
Detach the css editor
Expected Behavior
To detach normally
Screenshots
Right after detaching it -> https://i.imgur.com/xcxcIBU.png
After closing the editor -> https://i.imgur.com/MrTMPcA.png
Discord Version
Stable
Additional Context
I have disabled all my plugins and themes. It does this whether you detach it through the editor or have it detach on opening
You can press escape after detaching the window to display everything again. It's not ideal, but it should work for the time being...(Thanks DemetedElmo for telling me this)
| gharchive/issue | 2021-11-01T22:54:13 | 2025-04-01T04:32:19.962533 | {
"authors": [
"TheGreenPig",
"identity7"
],
"repo": "BetterDiscord/BetterDiscord",
"url": "https://github.com/BetterDiscord/BetterDiscord/issues/1102",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1453189985 | Invalid region sent when state mismatch
After creating a monitor resource, if the monitor is updated from the betteruptime dashboard, the next deployments start failing with error:
Error: PATCH https://betteruptime.com/api/v2/monitors/[MONITOR_ID] returned 422: {"errors":{"regions":["are not included in the list: us, eu, as, au"]}}
This can be replicated following the steps:
Create a monitor
resource "betteruptime_monitor" "this" {
url = "https://example.com"
monitor_type = "status"
}
This creates a monitor with regions: null. Which is perfectly fine.
Deploying this multiple times works fine.
Now open your betteruptime dashboard and update "On call escalation" (region is not updated):
Now deployment starts failing with aforementioned error.
The tf plan shows the following change:
~ resource "betteruptime_monitor" "this" {
~ regions = [
- "us",
- "eu",
- "as",
- "au",
]
... // truncated
}
I assume regions: [] is being sent in the PATCH HTTP call, which fails with 422 from betteruptime (as verified with Postman).
It should be regions: null in the PATCH HTTP call. Even if we want this to fail, a user-friendly message would be of greater help than a 422 error.
The error here is that the HTTP PATCH call is being sent with regions: [], which is not accepted by betteruptime and fails with 422.
It should be either a valid region array with at least 1 region, or null.
Thanks for the issue and thorough description @rahulpsd18!
We'll have a look into fixing this.
@rahulpsd18 Thanks again for opening the issue. I looked into it, and based on the open issues in Terraform, I wasn't able to figure out a good enough fix for this one.
I wanted to use the DiffSuppressFunc first, to ignore the diff if the original regions is an empty list or nil; however, DiffSuppressFunc doesn't work for schema.TypeList https://github.com/hashicorp/terraform-plugin-sdk/issues/477
Then I wanted to add the Default value with all of the regions to match the default from the dashboard, and found out that this is also not supported for schema.TypeList https://github.com/hashicorp/terraform-plugin-sdk/issues/142
As such, I'm sorry to say I run out of ideas here. If you got any tips, happy to reopen the issue & address in a proper way; until then, I'd recommend always setting the regions in the resource specifications explicitly, which will solve the problem for you as well.
| gharchive/issue | 2022-11-17T11:34:36 | 2025-04-01T04:32:19.969323 | {
"authors": [
"adikus",
"gyfis",
"rahulpsd18"
],
"repo": "BetterStackHQ/terraform-provider-better-uptime",
"url": "https://github.com/BetterStackHQ/terraform-provider-better-uptime/issues/37",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
465799733 | upgrade omniauth-saml
Summary
Security patch noise reduction: bump omniauth-saml to 1.10.1, which now requires ruby-saml >= 1.7 (the version that fixed CVE-2017-11430). We already had ruby-saml 1.9 in our Gemfile.lock, so we were already patched up.
/domain @Betterment/test_track_core
/no-platform
Also hit CVE-2015-9284. This is a pretty tricky one because omniauth gem decided not to accept a fix (optionally) relying on rails CSRF protection in order to mitigate. They suggested creating an omniauth-rails gem which never got released. Subsequently Cookpad released a different gem that patches things up similar to the original omniauth proposal.
So now we're good if we added multi-SAML-provider support. I'm not sure how to get github alerts to be quiet up about our vulnerable version of omniauth now that we've mitigated. But lets get this in first.
<< domain LGTM
| gharchive/pull-request | 2019-07-09T14:02:36 | 2025-04-01T04:32:19.986524 | {
"authors": [
"jmileham",
"smudge"
],
"repo": "Betterment/test_track",
"url": "https://github.com/Betterment/test_track/pull/120",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2334931790 | [Account page] Clicking on History gave 404 error
Reproduce:
Click on History. I have no assets in my account.
Can I be assigned this issue please? I will deliver ASAP
| gharchive/issue | 2024-06-05T05:28:25 | 2025-04-01T04:32:19.992528 | {
"authors": [
"devcollinss",
"ponderingdemocritus"
],
"repo": "BibliothecaDAO/RealmsWorld",
"url": "https://github.com/BibliothecaDAO/RealmsWorld/issues/225",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1603295577 | Sets lists implementation
Creating temporary PR to get feedback on display format for lists.
Something of note:
Identical sets and lists do not have identical displays.
Using the side bar navigation takes you to the first instance of the anchor.
One additional feature that is highish priority is to update all links to the sets page.
Any reference to a partID attribute that doesn't begin with 'na' and ends in 'catSet' or 'Set' should link to set.html. The link is currently parts.html.
This applies to the following attributes:
Compartment Set
Unit Set ID
Aggregation Scale ID
Quality Set ID
Category Set ID
Unit Set ID
The PR looks good. The first part is rendering without a problem.
The version yesterday had a collapsible navigation bar on the right side, but it doesn't today. I like the collapsible nav.
I pushed version rc.3.10. Why don't you update with that version and we'll see if the duplicates get removed. Then we should be GTG.
I also noticed that the content structure for the different set types (list, catSet, setType) are the same. The main difference is what the grouping is (list, catSet, setID) and what the categories are in that grouping. I believe the code can be significantly reduced and made less confusing if we make one function to render each grouping and use it everywhere. Let me know if this does not make sense and we can discuss it.
Yes, those lists are the same, essentially. I am talking this morning to @mathew-thomson about a modest clean-up renaming of these parts (spurred by this work). It shouldn't change the structure and location of the parts -- at least for version 2. I'll put a Discourse post in this discussion.
| gharchive/pull-request | 2023-02-28T15:32:20 | 2025-04-01T04:32:19.997371 | {
"authors": [
"DougManuel",
"rvyuha",
"yulric"
],
"repo": "Big-Life-Lab/PHES-ODM-Doc",
"url": "https://github.com/Big-Life-Lab/PHES-ODM-Doc/pull/27",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
11091347 | upgraded project, added a few icons and a more complex example
The IOSandMac_NoXib example also shows how the same codebase and the same xcode project can be shared -- ios (iphone and ipad) using the apple UIKit, the osx variant using Chameleon. (Without Xibs)
It adds an NSStatusItem to emphasize that Cocoa and UIKit can be combined.
k guess this wasn't helpful - a comment would be nice
| gharchive/pull-request | 2013-02-17T18:24:27 | 2025-04-01T04:32:20.005472 | {
"authors": [
"Daij-Djan"
],
"repo": "BigZaphod/Chameleon",
"url": "https://github.com/BigZaphod/Chameleon/pull/95",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2250488845 | A question about debugging the code
Hi, I'd like to gain a deeper understanding of this codebase. But when trying to debug it (starting from the main function of train.py), I found that when the config file is read (e.g. config/bevformer_small.py), the model, backbone, etc. defined there are all plain strings, so I can't step into the concrete module classes. This seems to be caused by some mechanism of the mmlab framework, but I really know very little about it. Could you explain some debugging methods and tips? (I'm not from a computer science background and my coding skills may be weak, so please explain in some detail.)
This is implemented with a registry. It's actually quite easy to understand once you grasp mmcv's registration mechanism.
This is implemented with a registry. It's actually quite easy to understand once you grasp mmcv's registration mechanism.
Thanks for the answer. Could you provide some quick-start material on this? One more question: at inference time, what are the inputs to the modified network? Are the six surround-view images from the nuScenes dataset enough? And if I later want to try deploying it on a real vehicle, would it suffice to obtain the six cameras' intrinsics/extrinsics plus the images?
Just read the official mmcv documentation; it's quite detailed.
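To make the mechanism concrete, here is a deliberately simplified, self-contained stand-in for mmcv's registry (the real Registry has more features and uses register_module() as a called decorator); the class name and config values below are illustrative:

```python
# A minimal sketch of the registry idea: a config dict's "type" string
# is looked up in a dict of registered classes, which is why a debugger
# appears to jump from plain strings into concrete module classes.

class Registry:
    def __init__(self, name):
        self.name = name
        self._modules = {}

    def register_module(self, cls):
        # Used as a decorator: @MODELS.register_module
        self._modules[cls.__name__] = cls
        return cls

    def build(self, cfg):
        # cfg mimics a config entry such as dict(type='BEVFormer', ...)
        cfg = dict(cfg)                    # copy so pop() doesn't mutate
        cls = self._modules[cfg.pop("type")]
        return cls(**cfg)                  # remaining keys become kwargs


MODELS = Registry("model")


@MODELS.register_module
class BEVFormer:
    def __init__(self, embed_dims=256):
        self.embed_dims = embed_dims


# The string "BEVFormer" in the config resolves to the registered class.
model = MODELS.build(dict(type="BEVFormer", embed_dims=128))
```

When debugging, a breakpoint inside the registered class's __init__ (rather than in the config file) is usually the easiest way to land in the real module code.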
| gharchive/issue | 2024-04-18T11:48:53 | 2025-04-01T04:32:20.034715 | {
"authors": [
"Bin-ze",
"azxcdewq123"
],
"repo": "Bin-ze/BEVFormer_segmentation_detection",
"url": "https://github.com/Bin-ze/BEVFormer_segmentation_detection/issues/28",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2054546215 | Stuck on loading screen each time I insert a rom
Trying to do a Pokémon Nuzlocke, and it was working fine the other day, but now each time I try to insert a ROM such as Radical Red or Black 2, it just leaves me on the loading screen and doesn't actually load. What do I do to fix this?
That's a bad ROM. Make sure it works on another emulator first (for most FireRed romhacks, make sure you're using the v1.0 ROM)
That's a bad ROM. Make sure it works on another emulator first (for most FireRed romhacks, make sure you're using the v1.0 ROM)
I don't understand how my Radical Red ROM is bad if it was working the other day, though
It could be that mGBA (the emulator used here) just doesn't support it.
Damn, What about black 2 though?
Im using melonDS for that
Same thing
| gharchive/issue | 2023-12-22T22:18:09 | 2025-04-01T04:32:20.037375 | {
"authors": [
"0o0f1234",
"BinBashBanana"
],
"repo": "BinBashBanana/webretro",
"url": "https://github.com/BinBashBanana/webretro/issues/99",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2658225979 | Adding the license of the individual databases
Create a config file that lists the license for each database we have an annotator for, and include the license information extracted from this config file in the KG's metadata file.
It could also be used in the Croissant schema
@YojanaGadiya do you agree with this addition?
Makes sense.
| gharchive/issue | 2024-11-14T09:40:36 | 2025-04-01T04:32:20.044471 | {
"authors": [
"YojanaGadiya",
"tabbassidaloii"
],
"repo": "BioDataFuse/pyBiodatafuse",
"url": "https://github.com/BioDataFuse/pyBiodatafuse/issues/193",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1943133698 | 🛑 S-BIAD531 is down
In fe71fb8, S-BIAD531 (https://www.ebi.ac.uk/biostudies/files/S-BIAD531/Zebrafish_ML_Archive/outputs/2020.07.30_FishDev_WT_01_1/obj_probs/FishDev_WT_01_1_MMStack_A10-Site_0.ome_Object Probabilities.tiff_results.txt) was down:
HTTP code: 0
Response time: 0 ms
Resolved: S-BIAD531 is back up in fe63fcc after 1 hour, 30 minutes.
| gharchive/issue | 2023-10-14T09:38:47 | 2025-04-01T04:32:20.047760 | {
"authors": [
"matthewh-ebi"
],
"repo": "BioImage-Archive/upptime",
"url": "https://github.com/BioImage-Archive/upptime/issues/114",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
442892443 | Feature Suggestion: Improvements to Resuming Batches
Is your feature request related to a problem? Please describe
It's unclear if resuming a batch download has remembered what has previously been downloaded, or if it's retrying what has failed.
Describe the solution you'd like
Keep all results previously available in the batch download window. Retain the completed status of those previously downloaded. Remove the error state of failed downloads so they can be retried. Show a Missing/Removed status for any items that are no longer available.
Change the "Downloading pages" message in this case to "Downloading pages and comparing to previous results"
Alternative solutions
Change the "Downloading pages" message in this case to "Downloading pages and comparing to previous results. Completed downloads will be omitted. Failed downloads will be retried."
Also, does "Follow downloaded images" mean remembering previously downloaded images? And is that within the batch only, or across all past batches? It would be good to make this a little clearer. "Remember downloaded images" might also be more appropriate, as "Follow" implies checking server status for updates, like following a tag, unless that's what it means?
Finally, when an incomplete batch resumes, does it resume from a page position, or does it load all pages and all files and skip the ones it has downloaded previously? In the first case, some items can be missed entirely if the search is ordered by a factor that can change, such as favorites.
| gharchive/issue | 2019-05-10T20:58:52 | 2025-04-01T04:32:20.069608 | {
"authors": [
"frebeee"
],
"repo": "Bionus/imgbrd-grabber",
"url": "https://github.com/Bionus/imgbrd-grabber/issues/1660",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2466266400 | chore: Readme clarity improvements
Added a few parts to the readme files to encourage using the Docker setup instead of a local environment. Also made sure these parts link to the dockerized setup in Build.md.
Once I have a better understanding of the simulator and scripts, I will probably also try to write up at least a small guide for those.
Please rebase, there appears to be a conflict already :innocent:
| gharchive/pull-request | 2024-08-14T16:13:00 | 2025-04-01T04:32:20.082395 | {
"authors": [
"RostarMarek",
"benma"
],
"repo": "BitBoxSwiss/bitbox02-firmware",
"url": "https://github.com/BitBoxSwiss/bitbox02-firmware/pull/1271",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
49287581 | Command line is inconsistent: --rpcport vs. --p2p-port
I'm thinking of writing a patch to accept the following hyphenated command line options in addition to their non-hyphenated counterparts:
--rpc-user
--rpc-password
--rpc-port
--httpd-endpoint
--http-port
I think we should prefer the hyphenated versions (e.g. not showing non-hyphenated versions in help). I'm also thinking of printing a deprecation warning if non-hyphenated versions are used, but I think we need to continue to support non-hyphenated usage for backward compatibility with existing scripts at least until the next major upgrade.
Too low priority.
| gharchive/issue | 2014-11-18T20:12:45 | 2025-04-01T04:32:20.114601 | {
"authors": [
"drltc",
"vikramrajkumar"
],
"repo": "BitShares/bitshares",
"url": "https://github.com/BitShares/bitshares/issues/1012",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
497837481 | Update ',' to '.' in iOS keyboard
Bug report via email:
I tried to use the Badger wallet app on my iPhone 7 with the latest iOS 13, and I can't send decimal amounts because the Badger wallet app uses ',' while the iPhone keyboard uses '.'.
I also tried on a Samsung S9, and on Android it works just fine because the app and the keyboard both use '.', so on Android you can send decimal units of an SLP token.
Thank you for your support in advance.
Odd, cannot recreate this on any of my own iOS devices.
Anyone else seeing this issue? Perhaps it's in a language with different localization? Will continue to investigate
| gharchive/issue | 2019-09-24T17:54:52 | 2025-04-01T04:32:20.122756 | {
"authors": [
"SpicyPete",
"cgcardona"
],
"repo": "Bitcoin-com/badger-mobile",
"url": "https://github.com/Bitcoin-com/badger-mobile/issues/166",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
453749787 | Is mocha dependency necessary to run bitbox?
I upgraded to bitbox-sdk 8.4.0 in my react/Typescript project and I think @types/mocha dependency is causing a conflict with my jest dependency. According to my package-lock.json, @types/mocha is needed by bitbox-sdk. Is this actually true? Or can you make it a dev dependency?
Thanks for pointing this out. I moved the types for mocha, chai and sinon to dev dependencies. The fix is in bitbox-sdk v8.4.2
🎩
| gharchive/issue | 2019-06-08T03:12:05 | 2025-04-01T04:32:20.125140 | {
"authors": [
"cgcardona",
"devalbo"
],
"repo": "Bitcoin-com/bitbox-sdk",
"url": "https://github.com/Bitcoin-com/bitbox-sdk/issues/120",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1392554756 | Changed AsRef<CStr> type bound to AsRef<CStr> + ?Sized
In the places we're using S: AsRef<CStr>, we only ever deal with references to S.
The implicit Sized bound is thus unnecessarily restrictive (and in some contexts, actively obstructive), so this PR removes it.
Merging.
| gharchive/pull-request | 2022-09-30T14:46:15 | 2025-04-01T04:32:20.140215 | {
"authors": [
"zec"
],
"repo": "BlackCAT-CubeSat/n2o4",
"url": "https://github.com/BlackCAT-CubeSat/n2o4/pull/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1160862171 | vuejs devtools v6 has been released, we could update the vendors
The official release of Vue.js devtools v6:
https://chrome.google.com/webstore/detail/vuejs-devtools/nhdogjmejiglipccpnnnanhbledajbpd
Okay
done: electron-devtools-vendor@1.0.5
| gharchive/issue | 2022-03-07T03:39:26 | 2025-04-01T04:32:20.143472 | {
"authors": [
"BlackHole1",
"yzqdev"
],
"repo": "BlackHole1/electron-devtools-vendor",
"url": "https://github.com/BlackHole1/electron-devtools-vendor/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2346306430 | Batch Processing Feature
Overview
The goal is to add support for efficient batch processing of inputs to the MLX-VLM library. This will allow users to process multiple images and text prompts simultaneously to generate corresponding outputs in a single batch, improving performance.
Use cases:
Generating captions for a large dataset of images.
Localizing objects or regions in a batch of images based on textual descriptions.
Classifying a large number of images into predefined categories, considering accompanying text information.
Answering questions based on a batch of images (single and multiple question prompts).
Video processing.
Note: Tag @Blaizzy for code reviews and questions.
Requirements
Support batched inputs:
Accept a batch of images as input, provided as a list or array of image objects.
Accept a batch of text prompts as input, provided as a list or array of strings.
Accept a single text prompt as input, provided as a string.
Perform batch processing:
Process the batch of images and text prompts simultaneously (async) using the MLX-VLM model.
Utilize parallel processing or GPU acceleration to optimize batch processing performance.
Ensure that the processing of one input in the batch does not affect the processing of other inputs.
Generate batched outputs:
Return the generated outputs for each input in the batch.
Maintain the order of the outputs corresponding to the order of the inputs.
Support different output formats such as text, embeddings, or visual representations based on the specific task.
Error handling:
Handle errors gracefully during batch processing.
Provide informative error messages for invalid inputs or processing failures.
Continue processing the remaining inputs in the batch if an error occurs for a specific input.
API design:
Provide a clear and intuitive API for users to perform batch processing.
Allow users to specify the maximum batch size supported by their system.
Provide options to control the batch processing behavior, such as enabling/disabling parallel processing.
Documentation and examples:
Update the library documentation to include information about the batch processing feature.
Provide code examples demonstrating how to use the batch processing API effectively.
Include performance benchmarks and guidelines for optimal batch sizes based on system resources.
Implementation
Modify the existing input handling logic to accept batches of images and text prompts.
Implement batch processing functionality using parallel processing techniques or GPU acceleration libraries.
Optimize memory usage and performance for efficient batch processing.
Update the output generation logic to handle batched outputs and maintain the correct order.
Implement error handling mechanisms to gracefully handle and report errors during batch processing.
Design and expose a user-friendly API for performing batch processing.
Write unit tests to verify the correctness and performance of the batch processing implementation.
Update the library documentation and provide code examples for using the batch processing feature.
Testing
Prepare a comprehensive test suite to validate the batch processing functionality.
Test with different batch sizes and input variations to ensure robustness.
Verify that the generated outputs match the expected results for each input in the batch.
Measure the performance improvement gained by batch processing compared to individual processing.
Conduct error handling tests to ensure graceful handling of invalid inputs and processing failures.
Delivery
Integrate the batch processing feature into the existing MLX-VLM library codebase.
Ensure backward compatibility with previous versions of the library.
Provide release notes highlighting the new batch processing capability and any breaking changes.
Update the library version number following semantic versioning conventions.
Publish the updated library package to the relevant package repositories or distribution channels.
By implementing this batch processing feature, MLX-VLM will provide users with the ability to efficiently process multiple inputs simultaneously, improving performance and usability of the library for various vision-language tasks.
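The error-handling requirement above (continue processing the remaining inputs in the batch if one fails, while preserving output order) can be sketched as follows. This is a hypothetical illustration, not the MLX-VLM API; the `model` callable and the result shape are assumptions for the sake of the example:

```python
def process_batch(model, images, prompts):
    """Run model on each (image, prompt) pair, isolating per-item failures."""
    results = []
    for image, prompt in zip(images, prompts):
        try:
            # A failure on one input must not abort the rest of the batch.
            results.append({"ok": True, "output": model(image, prompt)})
        except Exception as exc:
            # Record an informative error for this item and keep going.
            results.append({"ok": False, "error": str(exc)})
    return results  # same order as the inputs
```

A real implementation would batch the forward passes rather than looping item by item, but the ordering and error-isolation contract would stay the same.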
Will take this on for implementation! Hope to meet the standards :)
Here are some details:
@willccbb
Sorry, just saw this -- will take a swing when #53 is merged.
@willccbb done ✅
#53 is merged
Hey @willccbb, any update on this? Would be super helpful to have
@willccbb doesn't have the bandwidth.
This feature is now open and back in backlog.
| gharchive/issue | 2024-06-11T12:31:02 | 2025-04-01T04:32:20.203169 | {
"authors": [
"Benjoyo",
"Blaizzy",
"eDeveloperOZ",
"willccbb"
],
"repo": "Blaizzy/mlx-vlm",
"url": "https://github.com/Blaizzy/mlx-vlm/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
253546546 | Why is the cache size I get always 0?
Does cacheUtils.getInstance() return all of the cache under android/data/<package>/cache, including the contents of subfolders?
The cache size I get with this method is always 0.
There is a folder named xBitmapCache inside the cache directory with 3 files in it, but the size I get with cacheUtils.getInstance("xBitmapCache") is still 0.
I don't quite understand; please advise.
How can I reliably get the cache under android/data/<package>/cache and then implement a clear-cache feature?
Probably because the size is computed asynchronously; if you fetch it after a delay it won't be 0. I'll add a callback for this later.
So does cacheUtils.getInstance() return all of the cache under android/data/<package>/cache, including everything in subfolders?
If you store cache with my CacheUtils, how would it create folders? It can only fetch files. If you want file sizes, use FileUtils.
Got it.
This bug has been fixed; fetching the size asynchronously now works. Fetching it synchronously would block, though.
Great, thanks for your work!
@Blankj Hi, CacheUtils.getInstance().getCacheSize() still returns 0. Is any other configuration needed? The result is 0 both on the main thread and in an RxJava async call, while Settings shows a few megabytes of cache. The device is an emulator.
Use the latest version.
@Blankj I was using 1.9.10 before and have now switched to 1.9.12; it is still 0 on both a real device and the emulator. On the real device, the cache size shown in the system Settings is 304 KB.
The cache in Settings and this cache are not the same thing, mate.
I saw "cache" and assumed it meant the system cache; my misunderstanding. Thanks.
| gharchive/issue | 2017-08-29T06:54:25 | 2025-04-01T04:32:20.208648 | {
"authors": [
"Blankj",
"HeJingWei",
"candrwow"
],
"repo": "Blankj/AndroidUtilCode",
"url": "https://github.com/Blankj/AndroidUtilCode/issues/262",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
287687030 | Please add an ArrayUtils
For example, an ArrayUtils class.
It's already there; try 1.26.0.
| gharchive/issue | 2018-01-11T07:24:31 | 2025-04-01T04:32:20.210005 | {
"authors": [
"Blankj",
"liaolintao"
],
"repo": "Blankj/AndroidUtilCode",
"url": "https://github.com/Blankj/AndroidUtilCode/issues/381",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
380040712 | After using ToastUtils, refreshing the list causes the list to display abnormally and sometimes crash, on a Huawei phone (model EVA-AL00)
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="match_parent">
<com.scwang.smartrefresh.layout.SmartRefreshLayout
android:id="@+id/farm_refresh"
android:layout_width="match_parent"
android:layout_height="match_parent">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical">
<LinearLayout
android:id="@+id/noCropLayout"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@color/background"
android:gravity="center"
android:orientation="vertical">
<ImageView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:background="@drawable/empty_page" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="暂未找到未绑定的作物" />
</LinearLayout>
<LinearLayout
android:id="@+id/cropLayout"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:visibility="gone">
<android.support.v7.widget.RecyclerView
android:id="@+id/choose_crop_list"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</LinearLayout>
</LinearLayout>
</com.scwang.smartrefresh.layout.SmartRefreshLayout>
<LinearLayout
android:id="@+id/sure_to_add_crop"
android:layout_width="match_parent"
android:layout_height="70dp"
android:layout_alignParentBottom="true"
android:layout_centerHorizontal="true"
android:layout_marginLeft="10dp"
android:layout_marginRight="10dp"
android:layout_marginBottom="15dp"
android:gravity="center"
android:visibility="gone">
<ImageView
android:id="@+id/connaconCropToDikuai"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:background="@mipmap/tjzw_sure_btn_copy" />
</LinearLayout>
</RelativeLayout>
Are you only using ToastUtils, or something else as well?
I ran into this on a Sunmi tablet too; it caused display issues. It is probably related to modifying density. I suggest adding a switch so the operations unrelated to showing the Toast can be skipped.
Are you only using ToastUtils, or something else as well?
No, I imported the whole library. I just noticed some odd problems when using toasts; the error is like what @candyguy242 described, a missing-resource error.
I didn't use those adaptation solutions either; I called the ToastUtils methods directly, set centering and so on, and also hit compatibility issues. Other devices didn't show the problem.
Try my latest adaptation solution in a couple of days; I'm going to remove the old one.
Upgrade to 1.22.0 and try again.
Verified working with the new version on the Sunmi tablet~
| gharchive/issue | 2018-11-13T03:15:05 | 2025-04-01T04:32:20.214509 | {
"authors": [
"Blankj",
"candyguy242",
"xiangyao0906"
],
"repo": "Blankj/AndroidUtilCode",
"url": "https://github.com/Blankj/AndroidUtilCode/issues/704",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2073274939 | netstandard2.0
Is there a specific reason why you're targeting .net6? I would really like to use the dependency injection for my new PowerShell module, but I would also like to still support the old Windows PowerShell.
I was thinking about maybe dual targeting if that would be necessary.
@svrooij thank you for the input. That's a great suggestion. I ran some quick tests locally to check for regressions, and there are some runtime/language features that are not available in netstandard2.0 and break the current build (e.g. using declarations, the not pattern, the xunit.runner.visualstudio package dependency, file-scoped namespaces, etc.). Though targeting netstandard2.0 is not fully feasible with the current project right now, let me know if there are any other suggestions that may help ease support for legacy modules.
Added support for netstandard2.1
| gharchive/issue | 2024-01-09T22:29:01 | 2025-04-01T04:32:20.217014 | {
"authors": [
"kenswan",
"svrooij"
],
"repo": "BlazorFocused/Automation",
"url": "https://github.com/BlazorFocused/Automation/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
718632618 | Feature/empty
Implements the empty-state views for pages such as the Favorite page, Shop page, and Cart page.
Related issue
#46 [FEATURE] Empty Page
Changes
Added the Favorite page icon
Added empty-state views for the Shop page and Cart page
PR Point
Please check whether the views look okay.
Reference
@kimdg1105
In Zeplin, the button on the Favorite page turns purple on mouse hold, but it seems to be the other way around here!
Ah, the existing component goes purple -> white, so I meant to check with the design team first, but I opened the PR without discussing or explaining it. I'll let you know after we talk it over!
Also, the Favorite empty page for logged-out users doesn't seem to be updated yet!
That will probably be handled later when we work on the login modal feature. Sharp eye, thanks!
@kimdg1105
In Zeplin, the button on the Favorite page turns purple on mouse hold, but it seems to be the other way around here!
Ah, the existing component goes purple -> white, so I meant to check with the design team first. I'll let you know after we talk it over!
After discussing it, they said it should go white -> purple, so I'll change it back and push again!
Changed it!
Without mouse hover
With mouse hover
| gharchive/pull-request | 2020-10-10T14:32:59 | 2025-04-01T04:32:20.277137 | {
"authors": [
"Seogeurim"
],
"repo": "Bletcher-Project/bletcher-front",
"url": "https://github.com/Bletcher-Project/bletcher-front/pull/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
617541326 | Error in m4a file check
I have an m4a file that I download each week (a church sermon). WordPress will not accept it for security reasons. I've tried your converter, and it renames the file as an MP4 file. Below is a copy of the debug output. Is this a file conversion error on the supplier's side (Zoom.us), or is it a bug in the converter? If I put the 'accept all' code into my wp-config file, it uploads just fine.
Thanks
Paul Proefrock
VALIDATION:
Naive Name: 5.m4a
Naive Extension: m4a
Naive Type: audio/mpeg
Magic Type: video/mp4
Best Type: video/mp4
FINAL:
Name: 5.mp4
Extension: mp4
Type: video/mp4
Code: 107
SYSTEM:
Kernel: Linux mi3-ss49.a2hosting.com 3.10.0-962.3.2.lve1.5.28.el7.x86_64 #1 SMP Tue Jan 28 04:53:14 EST 2020 x86_64
PHP: 7.2.30
Modules: Core; PDO; Phar; Reflection; SPL; SimpleXML; Zend OPcache; bcmath; bz2; calendar; ctype; curl; date; dom; exif; fileinfo; filter; ftp; gd; gettext; gmp; hash; iconv; imagick; imap; intl; ionCube Loader; json; libxml; litespeed; mbstring; memcache; memcached; mysqli; mysqlnd; openssl; pcntl; pcre; pdo_mysql; pdo_pgsql; pdo_sqlite; pgsql; posix; readline; session; shmop; soap; sockets; sqlite3; standard; tidy; tokenizer; xml; xmlreader; xmlrpc; xmlwriter; xsl; zip; zlib
WordPress: 5.4.1
Plugins: a2-optimized [2.0.10.9.8]; akismet [4.1.5]; blob-mimes [1.1.1]; contact-form-7 [5.1.7]; content-control [1.1.4]; download-button-for-elementor [1.0.0]; elementor [2.9.8]; elementor [2.9]; elementor-pro [2.9.4]; facebook-auto-publish [2.3.1]; force-regenerate-thumbnails-master [2.0.6]; jetpack [8.5]; media-link-for-elementor-master [1.0.0]; pdf-embedder [4.6]; tablepress [1.11]; user-registration [1.8.3]; user-role-editor [4.54]; wp-meta-and-date-remover [1.7.9]; wpcf7-recaptcha [1.2.6]; wpdatatables [2.8.1]
Theme: responsive-brix [4.8.7]
Hi @PaulProe,
Thanks for reporting! WordPress allows both MP4 audio and video by default, so there might just be something unusual in the encodings for these files that's confusing things.
Are you able to share one of the original sermon files with me (a URL, Dropbox link, etc.)? I'd be happy to take a look and see what might be causing the issues.
Josh
Thanks for the quick reply. You may download the file at this URL: https://us02web.zoom.us/rec/play/tZUsf--qqzg3SNCS4wSDC6JwW468e_ms1ncZrvFYxUa3AHgBYVChNeAWMbaPOtapAnxWtZtDPoIFc1sT
Thanks,
Paul Proefrock
Thanks @PaulProe , but it looks like that link is behind a password lock. Can you email me a temporary access code at hello@blobfolio.com?
Josh
It was easier and quicker to share the file through Dropbox. This is the link:
https://www.dropbox.com/s/ah2qmlz2oo6lxzg/GMT20200510-145044_West-Count.m4a?dl=0
Paul
Thanks @PaulProe! I appreciate you taking the time to get me a copy of the file. That's very helpful.
So... there wasn't actually anything wrong with the encoding of that file.
PHP's fileinfo extension was just making some bad assumptions and Lord of the Files wasn't being assertive enough to override that. I added a workaround and pushed a new release of the plugin (1.1.2) that should fix the issue for you. (That file is being correctly handled on my test site now, at least.)
When you have a moment, please update the plugin, then try to re-upload the sermon and let me know how it goes!
Josh
Thanks for your help, the update fixed the issue.
Paul
| gharchive/issue | 2020-05-13T15:23:11 | 2025-04-01T04:32:20.337586 | {
"authors": [
"PaulProe",
"joshstoik1"
],
"repo": "Blobfolio/righteous-mimes",
"url": "https://github.com/Blobfolio/righteous-mimes/issues/1",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
} |
60657741 | Technical debt output not escaped
The technical debt plugin doesn't escape the message output, so if you have TODO and FIXME tags in HTML and JS files you can get bits of actual HTML and JavaScript inserted into the page.
This should be easy to fix in PHPCI/Plugin/TechnicalDebt.php:
$content = trim($allLines[$lineNumber - 1]);
To:
$content = htmlspecialchars(trim($allLines[$lineNumber - 1]));
If I don't get around to creating a PR for this over the next few days, please feel free to go ahead and do it. :)
Searched pull requests and couldn't see one.
Implemented @mikebronner 's suggested fix.
That's more of a front-end issue. It would be better to store the data unaltered and escape it in the view/JavaScript.
@Adirelle that was what I was thinking
@adamazing @mikebronner Sorry about not doing the pull request, been a bit busy
No worries @REBELinBLUE I just happened to be bored and browsing the easy-fix tag and thought I'd help. I'm rushed off my feet now so I'll just delete the PR and leave the issue open for another person/time. :)
| gharchive/issue | 2015-03-11T13:27:05 | 2025-04-01T04:32:20.343178 | {
"authors": [
"Adirelle",
"REBELinBLUE",
"adamazing",
"mikebronner"
],
"repo": "Block8/PHPCI",
"url": "https://github.com/Block8/PHPCI/issues/865",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2498876648 | diy: add support for TTGO T-Display S3
I expected the T-Display S3 to have the same USB Vendor ID and Product ID as the T-Display, but I discovered this isn't the case.
Thanks for this - will include it asap.
Merged, many thanks.
| gharchive/pull-request | 2024-08-31T14:33:45 | 2025-04-01T04:32:20.345935 | {
"authors": [
"JamieDriver",
"umzr"
],
"repo": "Blockstream/Jade",
"url": "https://github.com/Blockstream/Jade/pull/157",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
114697092 | no icons
Great add-on!
The library will be more useful if messages can be shown without icons.
What is the simplest way to remove the icon requirements?
Thanks!
@friksa This is not something we have planned for the library. By all means fork the repository, and adjust to fit your needs. Should be relatively easy to remove the icon markup from the component template.
| gharchive/issue | 2015-11-02T22:19:50 | 2025-04-01T04:32:20.388333 | {
"authors": [
"friksa",
"ynnoj"
],
"repo": "Blooie/ember-cli-notifications",
"url": "https://github.com/Blooie/ember-cli-notifications/issues/61",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1273539962 | Error: undefined symbol: _Z9nrnpy_hocv
Hi,
I have successfully compiled and built Coreneuron on my laptop. I have also managed to compile my mod files (nrnivmodl -coreneuron .).
However, I get the following error (undefined symbol: _Z9nrnpy_hocv) every time I try to run my network. Can you help me?
Command
mpirun -n 6 ./x86_64/special -mpi -python completeModel-myOwnSTDP.py
Output:
--------------------------------------------------------------------------
[[2982,1],3]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:
Module: OpenFabrics (openib)
Host: jhielson-Alienware-15-R4
Another transport will be used instead, although this may result in
lower performance.
NOTE: You can disable this warning by setting the MCA parameter
btl_base_warn_component_unused to 0.
--------------------------------------------------------------------------
numprocs=6
NEURON -- VERSION 8.1.0 HEAD (047dd824) 2022-03-25
Duke, Yale, and the BlueBrain Project -- Copyright 1984-2021
See http://neuron.yale.edu/neuron/credits
Additional mechanisms from files
"./fi_stdp.mod" "./GP.mod" "./Izhi2003b.mod" "./izhi2007a.mod" "./izhi2007bS.mod" "./mySTDP.mod" "./spikeout.mod" "./stdwa_songabbott.mod" "./STN2.mod" "./Str.mod" "./SynExp2NMDA.mod" "./thalamus.mod"
Traceback (most recent call last):
File "/home/jhielson/coreneuron_repository/nrn/build/lib/python/neuron/__init__.py", line 135, in <module>
from . import hoc
ImportError: /home/jhielson/coreneuron_repository/nrn/build/lib/python/neuron/hoc.cpython-38-x86_64-linux-gnu.so: undefined symbol: _Z9nrnpy_hocv
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "completeModel-myOwnSTDP.py", line 2, in <module>
from netpyne import specs, sim
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/__init__.py", line 21, in <module>
from netpyne import analysis
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/analysis/__init__.py", line 17, in <module>
from .spikes import prepareSpikeData, prepareRaster, prepareSpikeHist, popAvgRates
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/analysis/spikes.py", line 34, in <module>
from ..specs import Dict
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/specs/__init__.py", line 14, in <module>
from .netParams import NetParams, CellParams
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/specs/netParams.py", line 30, in <module>
from .. import conversion
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/conversion/__init__.py", line 13, in <module>
from .neuronPyHoc import importCell, importCellsFromNet, mechVarList, getSecName
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/conversion/neuronPyHoc.py", line 19, in <module>
from neuron import h
File "/home/jhielson/coreneuron_repository/nrn/build/lib/python/neuron/__init__.py", line 137, in <module>
import neuron.hoc
ImportError: /home/jhielson/coreneuron_repository/nrn/build/lib/python/neuron/hoc.cpython-38-x86_64-linux-gnu.so: undefined symbol: _Z9nrnpy_hocv
Traceback (most recent call last):
File "/home/jhielson/coreneuron_repository/nrn/build/lib/python/neuron/__init__.py", line 135, in <module>
from . import hoc
ImportError: /home/jhielson/coreneuron_repository/nrn/build/lib/python/neuron/hoc.cpython-38-x86_64-linux-gnu.so: undefined symbol: _Z9nrnpy_hocv
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "completeModel-myOwnSTDP.py", line 2, in <module>
from netpyne import specs, sim
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/__init__.py", line 21, in <module>
from netpyne import analysis
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/analysis/__init__.py", line 17, in <module>
from .spikes import prepareSpikeData, prepareRaster, prepareSpikeHist, popAvgRates
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/analysis/spikes.py", line 34, in <module>
from ..specs import Dict
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/specs/__init__.py", line 14, in <module>
from .netParams import NetParams, CellParams
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/specs/netParams.py", line 30, in <module>
from .. import conversion
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/conversion/__init__.py", line 13, in <module>
from .neuronPyHoc import importCell, importCellsFromNet, mechVarList, getSecName
File "/home/jhielson/.local/lib/python3.8/site-packages/netpyne/conversion/neuronPyHoc.py", line 19, in <module>
from neuron import h
File "/home/jhielson/coreneuron_repository/nrn/build/lib/python/neuron/__init__.py", line 137, in <module>
import neuron.hoc
ImportError: /home/jhielson/coreneuron_repository/nrn/build/lib/python/neuron/hoc.cpython-38-x86_64-linux-gnu.so: undefined symbol: _Z9nrnpy_hocv
[The same ImportError traceback was printed by each of the remaining MPI ranks.]
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[2982,1],3]
Exit code: 1
--------------------------------------------------------------------------
[jhielson-Alienware-15-R4:67411] 5 more processes have sent help message help-mpi-btl-base.txt / btl:no-nics
[jhielson-Alienware-15-R4:67411] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Thanks!!! I believe that might be the problem. It was off when I built the package.
-- Python | ON
-- EXE | /usr/bin/python3
-- INC | /usr/include/python3.8
-- LIB | /usr/lib/x86_64-linux-gnu/libpython3.8.so
-- MODULE | ON
-- DYNAMIC | OFF
I don't have my laptop with me right now to test it but I'll try that as soon I return to the lab. And I will let you know here. Thank you again.
Dear @iomaganaris,
Sorry, I believe I have misunderstood your message. It was disabled the whole time as you can see below:
Command:
cmake .. -DNRN_ENABLE_CORENEURON=ON -DCORENRN_ENABLE_GPU=ON -DNRN_ENABLE_INTERVIEWS=OFF -DNRN_ENABLE_RX3D=OFF -DCMAKE_INSTALL_PREFIX=$HOME/install -DCMAKE_C_COMPILER=nvc -DCMAKE_CXX_COMPILER=nvc++ -DCMAKE_CUDA_COMPILER=nvcc
Output:
-- The C compiler identification is PGI 22.3.0
-- The CXX compiler identification is PGI 22.3.0
-- Check for working C compiler: /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/compilers/bin/nvc
-- Check for working C compiler: /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/compilers/bin/nvc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/compilers/bin/nvc++
-- Check for working CXX compiler: /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/compilers/bin/nvc++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Setting build type to 'RelWithDebInfo' as none was specified.
-- 3rd party project: using Random123 from "external/Random123"
-- Found BISON: /usr/bin/bison (found version "3.5.1")
-- Found FLEX: /usr/bin/flex (found version "2.6.4")
-- Found Readline: /usr/include
-- Found MPI_C: /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/lib/libmpi.so (found version "3.1")
-- Found MPI_CXX: /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/lib/libmpi_cxx.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- Detected OpenMPI 3.1.5
-- -DPYTHON_EXECUTABLE not specified. Looking for `python3` in the PATH exclusively...
-- Setting PYTHON_EXECUTABLE=/usr/bin/python3
-- Found PythonInterp: /usr/bin/python3 (found suitable version "3.8.10", minimum required is "3.7")
-- Found PythonInterp: /usr/bin/python3 (found suitable version "3.8.10", minimum required is "3")
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.8.so
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Building CoreNEURON from submodule
-- Sub-project : using coreneuron from from /home/jhielson/coreneuron_repository/nrn/external/coreneuron
-- Found HpcCodingConv: /home/jhielson/coreneuron_repository/nrn/external/coreneuron/CMake/hpc-coding-conventions
-- CORENRN_FORMATTING: OFF
-- CORENRN_TEST_FORMATTING: OFF
-- CORENRN_CLANG_FORMAT: OFF
-- CORENRN_CMAKE_FORMAT: OFF
-- CORENRN_GIT_HOOKS: OFF
-- CORENRN_GIT_COMMIT_HOOKS:
-- CORENRN_GIT_PUSH_HOOKS: courtesy-msg
-- CORENRN_STATIC_ANALYSIS: OFF
-- CORENRN_TEST_STATIC_ANALYSIS: OFF
-- Found Random123: /home/jhielson/coreneuron_repository/nrn/external/coreneuron/external/Random123
-- Found Git: /usr/bin/git (found version "2.25.1")
-- 3rd party project: using CLI11 from "external/CLI11"
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
-- Doxygen not found, building docs has been disabled
-- Setting default CUDA architectures to 70;80
-- The CUDA compiler identification is NVIDIA 11.6.112
-- Check for working CUDA compiler: /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/compilers/bin/nvcc
-- Check for working CUDA compiler: /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/compilers/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Found PythonInterp: /usr/bin/python3 (found version "3.8.10")
-- Found Perl: /usr/bin/perl (found version "5.30.0")
-- Found MOD2C: /home/jhielson/coreneuron_repository/nrn/external/coreneuron/external/mod2c
-- Using mod2c submodule from /home/jhielson/coreneuron_repository/nrn/external/coreneuron/external/mod2c
-- mod2c is used as APPLICATION_NAME
-- Boost found, enabling use of memory pools for Random123...
--
-- Configured CoreNEURON 8.2.0
--
-- You can now build CoreNEURON using:
-- cmake --build . --parallel 8 [--target TARGET]
-- You might want to adjust the number of parallel build jobs for your system.
-- Some non-default targets you might want to build:
-- --------------------+--------------------------------------------------------
-- Target | Description
-- --------------------+--------------------------------------------------------
-- install | Will install CoreNEURON to: /home/jhielson/install
-- docs | Build full docs. Calls targets: doxygen, sphinx
-- --------------------+--------------------------------------------------------
-- Build option | Status
-- --------------------+--------------------------------------------------------
-- CXX COMPILER | /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/compilers/bin/nvc++
-- COMPILE FLAGS | -g -O2 --c++14 -cuda -gpu=cuda11.6,lineinfo,cc70,cc80 -acc -Mautoinline -DEIGEN_DONT_VECTORIZE=1
-- Build Type | STATIC
-- MPI | ON
-- DYNAMIC | OFF
-- INC | /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/include
-- OpenMP | ON
-- Use legacy units | OFF
-- NMODL | OFF
-- MOD2CPP PATH | /home/jhielson/coreneuron_repository/nrn/build/bin/mod2c_core
-- GPU Support | ON
-- CUDA |
-- Offload | OpenACC
-- Unified Memory | OFF
-- Auto Timeout | ON
-- Wrap exp() | OFF
-- SplayTree Queue | ON
-- NetReceive Buffer | ON
-- Caliper | OFF
-- Likwid | OFF
-- Unit Tests | OFF
-- Reporting | OFF
-- --------------+--------------------------------------------------------------
-- See documentation : https://github.com/BlueBrain/CoreNeuron/
-- --------------+--------------------------------------------------------------
--
Extracting link flags from target 'Threads::Threads', beware that this can be fragile. Got:
Generating link flags from path /usr/lib/x86_64-linux-gnu/libreadline.so Got: /usr/lib/x86_64-linux-gnu/libreadline.so -Wl,-rpath,/usr/lib/x86_64-linux-gnu
Generating link flags from path /usr/lib/x86_64-linux-gnu/libpython3.8.so Got: /usr/lib/x86_64-linux-gnu/libpython3.8.so -Wl,-rpath,/usr/lib/x86_64-linux-gnu
Generating link flags from path /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/lib/libmpi.so Got: /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/lib/libmpi.so -Wl,-rpath,/opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/lib
--
-- Configured NEURON 8.2.0
--
-- You can now build NEURON using:
-- cmake --build . --parallel 8 [--target TARGET]
-- You might want to adjust the number of parallel build jobs for your system.
-- Some non-default targets you might want to build:
-- --------------+--------------------------------------------------------------
-- Target | Description
-- --------------+--------------------------------------------------------------
-- install | Will install NEURON to: /home/jhielson/install
-- | Change the install location of NEURON using:
-- | cmake <src_path> -DCMAKE_INSTALL_PREFIX=<install_path>
-- docs | Build full docs. Calls targets: doxygen, notebooks, sphinx, notebooks-clean
-- uninstall | Removes files installed by make install (todo)
-- --------------+--------------------------------------------------------------
-- Build option | Status
-- --------------+--------------------------------------------------------------
-- C COMPILER | /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/compilers/bin/nvc
-- CXX COMPILER | /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/compilers/bin/nvc++
-- BUILD_TYPE | RelWithDebInfo (allowed: Custom;Debug;Release;RelWithDebInfo;Fast)
-- COMPILE FLAGS | -g -O2 --diag_suppress=1,47,111,128,170,174,177,180,186,301,541,550,816,941,2465
-- Shared | ON
-- Default units | modern units (2019 nist constants)
-- MPI | ON
-- DYNAMIC | OFF
-- INC | /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/include
-- LIB | /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/lib/libmpi_cxx.so
-- Python | ON
-- EXE | /usr/bin/python3
-- INC | /usr/include/python3.8
-- LIB | /usr/lib/x86_64-linux-gnu/libpython3.8.so
-- MODULE | ON
-- DYNAMIC | OFF
-- Readline | /usr/lib/x86_64-linux-gnu/libreadline.so
-- RX3D | OFF
-- Interviews | OFF
-- CoreNEURON | ON
-- PATH | /home/jhielson/coreneuron_repository/nrn/external/coreneuron
-- LINK FLAGS | -cuda -gpu=cuda11.6,lineinfo,cc70,cc80 -acc -rdynamic -lrt -Wl,--whole-archive -Lx86_64 -lcorenrnmech -L$(libdir) -lcoreneuron -Wl,--no-whole-archive /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/lib/libmpi_cxx.so /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/lib/libmpi.so
-- Legacy Units| OFF
-- Tests | OFF
-- --------------+--------------------------------------------------------------
-- See documentation : https://www.neuron.yale.edu/neuron/
-- --------------+--------------------------------------------------------------
--
-- Configuring done
-- Generating done
-- Build files have been written to: /home/jhielson/coreneuron_repository/nrn/build
Hello @jhielson,
It's disabled as it should be for now. Could you try prepending the NEURON python package installation directory to your PYTHONPATH and launching the special again? You can do this by running the following before your script:
export PYTHONPATH=$HOME/install/lib/python:$PYTHONPATH
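One quick way to verify the export took effect (a sketch, assuming the `$HOME/install` prefix used above) is to check which `neuron` package Python actually picks up:

```shell
# Point Python at the installed NEURON package (install prefix assumed from above)
export PYTHONPATH=$HOME/install/lib/python:$PYTHONPATH

# Print the path of the neuron package that gets imported and make sure
# the compiled hoc extension loads without the undefined-symbol error
python3 -c "import neuron; print(neuron.__file__); from neuron import h"
```

If the printed path still points at the build tree rather than the install prefix, the wrong package is shadowing the installed one.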
Also it could be useful to share the output of the following commands:
echo $PYTHONPATH
ldd $HOME/install/lib/python/neuron/hoc.cpython-*.so
nm $HOME/install/lib/libnrniv.so
Thank you very much
Ok, I am gonna do that and let you know here. I have been testing some previous versions of NVHPC but got the same issue.
Hi,
I exported the python path as suggested and removed the "x86_64" folder. Then I recompiled my mod files (nrnivmodl -coreneuron .). Now I get the following error:
No such file or directory: '/home/jhielson/install/lib/python/neuron/.data/bin/nrnivmodl'
Command:
echo $PYTHONPATH
Output:
/home/jhielson/install/lib/python:/home/jhielson/coreneuron_repository/nrn/build/lib/python:/opt/ros/noetic/lib/python3/dist-packages
Command:
ldd $HOME/install/lib/python/neuron/hoc.cpython-*.so
Output:
linux-vdso.so.1 (0x00007ffd707d0000)
libnrniv.so => /home/jhielson/install/lib/libnrniv.so (0x00007f5cc5294000)
libatomic.so.1 => /usr/lib/x86_64-linux-gnu/libatomic.so.1 (0x00007f5cc5240000)
libnvhpcatm.so => /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/compilers/lib/libnvhpcatm.so (0x00007f5cc5035000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f5cc4e53000)
libnvomp.so => /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/compilers/lib/libnvomp.so (0x00007f5cc4151000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f5cc4149000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f5cc4126000)
libnvcpumath.so => /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/compilers/lib/libnvcpumath.so (0x00007f5cc3ce5000)
libnvc.so => /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/compilers/lib/libnvc.so (0x00007f5cc3a84000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5cc3892000)
libgcc_s.so.1 => /../lib/libgcc_s.so.1 (0x00007f5cc3877000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5cc3726000)
libreadline.so.8 => /lib/x86_64-linux-gnu/libreadline.so.8 (0x00007f5cc36d6000)
libpython3.8.so.1.0 => /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0 (0x00007f5cc3180000)
libmpi.so.40 => /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/lib/libmpi.so.40 (0x00007f5cc2cf8000)
/lib64/ld-linux-x86-64.so.2 (0x00007f5cc57b6000)
libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007f5cc2cc8000)
libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f5cc2c98000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f5cc2c7c000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f5cc2c77000)
libopen-rte.so.40 => /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/lib/libopen-rte.so.40 (0x00007f5cc2912000)
libopen-pal.so.40 => /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/lib/libopen-pal.so.40 (0x00007f5cc23f0000)
librdmacm.so.1 => /usr/lib/x86_64-linux-gnu/librdmacm.so.1 (0x00007f5cc23d1000)
libibverbs.so.1 => /usr/lib/x86_64-linux-gnu/libibverbs.so.1 (0x00007f5cc23b2000)
libnuma.so.1 => /opt/nvidia/hpc_sdk/Linux_x86_64/22.3/comm_libs/openmpi/openmpi-3.1.5/lib/libnuma.so.1 (0x00007f5cc21a7000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f5cc219d000)
libnl-3.so.200 => /lib/x86_64-linux-gnu/libnl-3.so.200 (0x00007f5cc2178000)
libnl-route-3.so.200 => /usr/lib/x86_64-linux-gnu/libnl-route-3.so.200 (0x00007f5cc2100000)
Command:
nm $HOME/install/lib/libnrniv.so
The output of this one was too long, so I changed the command to:
Command:
nm $HOME/install/lib/libnrniv.so | grep _Z9nrnpy_hocv
Output:
00000000003f7840 T _Z9nrnpy_hocv
Dear @iomaganaris,
I have reinstalled coreneuron and now I got a different error:
mpirun -n 6 ./x86_64/special -mpi -python completeModel-myOwnSTDP.py
--------------------------------------------------------------------------
[[40883,1],0]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:
Module: OpenFabrics (openib)
Host: jhielson-Alienware-15-R4
Another transport will be used instead, although this may result in
lower performance.
NOTE: You can disable this warning by setting the MCA parameter
btl_base_warn_component_unused to 0.
--------------------------------------------------------------------------
numprocs=6
NEURON -- VERSION 8.2a-5-g6e4f8cb0+ master (6e4f8cb0+) 2022-06-11
Duke, Yale, and the BlueBrain Project -- Copyright 1984-2021
See http://neuron.yale.edu/neuron/credits
Additional mechanisms from files
"./fi_stdp.mod" "./GP.mod" "./Izhi2003b.mod" "./izhi2007a.mod" "./izhi2007bS.mod" "./mySTDP.mod" "./spikeout.mod" "./stdwa_songabbott.mod" "./STN2.mod" "./Str.mod" "./SynExp2NMDA.mod" "./thalamus.mod"
Start time: 2022-06-20 12:53:29.684249
Creating network of 674 cell populations on 6 hosts...
Number of cells on node 1: 138
Number of cells on node 2: 138
Number of cells on node 3: 138
Number of cells on node 4: 138
Number of cells on node 5: 138
Number of cells on node 0: 138
Done; cell creation time = 0.24 s.
Making connections...
Number of connections on node 5: 1040
Number of synaptic contacts on node 5: 1046
Number of connections on node 0: 1061
Number of synaptic contacts on node 0: 1069
[jhielson-Alienware-15-R4:37703] 5 more processes have sent help message help-mpi-btl-base.txt / btl:no-nics
[jhielson-Alienware-15-R4:37703] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Number of connections on node 4: 1647
Number of synaptic contacts on node 4: 1653
Number of connections on node 1: 1682
Number of synaptic contacts on node 1: 1688
Number of connections on node 2: 1683
Number of synaptic contacts on node 2: 1689
Number of connections on node 3: 1658
Number of synaptic contacts on node 3: 1664
Done; cell connection time = 3.27 s.
Adding stims...
Number of stims on node 4: 30
Number of stims on node 1: 31
Number of stims on node 0: 30
Number of stims on node 2: 31
Number of stims on node 3: 30
Number of stims on node 5: 30
Done; cell stims creation time = 0.04 s.
exp(764.628) out of range, returning exp(700)
exp(764.628) out of range, returning exp(700)
exp(834.429) out of range, returning exp(700)
exp(808.336) out of range, returning exp(700)
exp(808.336) out of range, returning exp(700)
exp(727.905) out of range, returning exp(700)
exp(727.905) out of range, returning exp(700)
exp(796.118) out of range, returning exp(700)
exp(796.118) out of range, returning exp(700)
exp(814.896) out of range, returning exp(700)
exp(814.896) out of range, returning exp(700)
exp(834.429) out of range, returning exp(700)
Running with interval func using CoreNEURON for 300000.0 ms...
5 ./x86_64/special: NEURON model for CoreNEURON requires cvode.cache_efficient(1)
5 near line 0
5 tstop=300000.0
3 ./x86_64/special: NEURON model for CoreNEURON requires cvode.cache_efficient(1)
3 near line 0
3 tstop=300000.0
2 ./x86_64/special: NEURON model for CoreNEURON requires cvode.cache_efficient(1)
1 ./x86_64/special: NEURON model for CoreNEURON requires cvode.cache_efficient(1)
1 near line 0
1 tstop=300000.0
4 ./x86_64/special: NEURON model for CoreNEURON requires cvode.cache_efficient(1)
4 near line 0
4 tstop=300000.0
0 ./x86_64/special: NEURON model for CoreNEURON requires cvode.cache_efficient(1)
0 near line 0
0 tstop=300000.0
^
^
3 ParallelContext[5].psolve( ^
1 ParallelContext[5].psolve(1)
^
4 ParallelContext[5].psolve(1)
^
0 ParallelContext[5].psolve(1)
5 ParallelContext[5].psolve(1)
1)
2 near line 0
2 tstop=300000.0
^
2 ParallelContext[5].psolve(1)
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD
with errorcode -1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[jhielson-Alienware-15-R4:37703] 5 more processes have sent help message help-mpi-api.txt / mpi-abort
Basically, it says:
NEURON model for CoreNEURON requires cvode.cache_efficient(1)
Can you help me with this new issue?
Hello @jhielson,
Nice. It looks like we're making some progress. Regarding your new issue and assuming that you're using NetPyNE, can you make sure that in your NetPyNE SimConfig the variable cache_efficient is True?
You can enable it by setting the following in your cfg.py:
cfg.cache_efficient = True
See also: https://github.com/suny-downstate-medical-center/netpyne/blob/development/examples/M1detailed/cfg.py
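A minimal sketch of the relevant `cfg.py` lines, with attribute names assumed from the linked example:

```python
from netpyne import specs

cfg = specs.SimConfig()

cfg.coreneuron = True        # hand the simulation off to CoreNEURON
cfg.cache_efficient = True   # required: enables cvode.cache_efficient(1) in NEURON
```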
Thanks. Now it says it cannot find the GPU.
--------------------------------------------------------------------------
[[38348,1],0]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:
Module: OpenFabrics (openib)
Host: jhielson-Alienware-15-R4
Another transport will be used instead, although this may result in
lower performance.
NOTE: You can disable this warning by setting the MCA parameter
btl_base_warn_component_unused to 0.
--------------------------------------------------------------------------
numprocs=6
NEURON -- VERSION 8.2a-5-g6e4f8cb0+ master (6e4f8cb0+) 2022-06-11
Duke, Yale, and the BlueBrain Project -- Copyright 1984-2021
See http://neuron.yale.edu/neuron/credits
Additional mechanisms from files
"./fi_stdp.mod" "./GP.mod" "./Izhi2003b.mod" "./izhi2007a.mod" "./izhi2007bS.mod" "./mySTDP.mod" "./spikeout.mod" "./stdwa_songabbott.mod" "./STN2.mod" "./Str.mod" "./SynExp2NMDA.mod" "./thalamus.mod"
Start time: 2022-06-20 13:01:36.929230
Creating network of 149 cell populations on 6 hosts...
Number of cells on node 1: 51
Number of cells on node 2: 51
Number of cells on node 3: 50
Number of cells on node 4: 50
Number of cells on node 5: 50
Number of cells on node 0: 51
Done; cell creation time = 0.05 s.
Making connections...
Number of connections on node 3: 452
Number of synaptic contacts on node 3: 460
Number of connections on node 1: 510
Number of synaptic contacts on node 1: 516
Number of connections on node 4: 542
Number of synaptic contacts on node 4: 548
Number of connections on node 0: 523
Number of synaptic contacts on node 0: 529
Number of connections on node 2: 430
Number of synaptic contacts on node 2: 436
Number of connections on node 5: 541
Number of synaptic contacts on node 5: 547
Done; cell connection time = 0.59 s.
Adding stims...
Number of stims on node 0: 30
Number of stims on node 4: 31
Number of stims on node 5: 31
Number of stims on node 1: 30
Number of stims on node 2: 30
Number of stims on node 3: 30
Done; cell stims creation time = 0.02 s.
exp(751.718) out of range, returning exp(700)
exp(751.718) out of range, returning exp(700)
exp(823.67) out of range, returning exp(700)
exp(823.67) out of range, returning exp(700)
exp(874.045) out of range, returning exp(700)
exp(874.045) out of range, returning exp(700)
exp(720.81) out of range, returning exp(700)
exp(720.81) out of range, returning exp(700)
exp(732.031) out of range, returning exp(700)
exp(732.031) out of range, returning exp(700)
exp(823.791) out of range, returning exp(700)
exp(823.791) out of range, returning exp(700)
Running with interval func using CoreNEURON for 20000.0 ms...
num_mpi=6
ERROR : Enabled GPU execution but couldn't find NVIDIA GPU!
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD
with errorcode -1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[jhielson-Alienware-15-R4:39224] 5 more processes have sent help message help-mpi-btl-base.txt / btl:no-nics
[jhielson-Alienware-15-R4:39224] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[jhielson-Alienware-15-R4:39224] 5 more processes have sent help message help-mpi-api.txt / mpi-abort
But I do have one, and it is working fine.
Command:
nvidia-smi
Output:
Mon Jun 20 13:03:27 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.73.05 Driver Version: 510.73.05 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 On | N/A |
| N/A 71C P0 47W / N/A | 1418MiB / 8192MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1136 G /usr/lib/xorg/Xorg 150MiB |
| 0 N/A N/A 1717 G /usr/lib/xorg/Xorg 583MiB |
| 0 N/A N/A 1847 G /usr/bin/gnome-shell 162MiB |
| 0 N/A N/A 2557 G /usr/lib/firefox/firefox 284MiB |
| 0 N/A N/A 14545 G ...RendererForSitePerProcess 222MiB |
+-----------------------------------------------------------------------------+
Hello @jhielson,
What is the GPU model you're trying to execute CoreNEURON on? It might be that CoreNEURON is not built for your GPU architecture and this is why CoreNEURON cannot find it. If you find the Compute Capability for your GPU you can build CoreNEURON for it using the following CMake variable:
-DCMAKE_CUDA_ARCHITECTURES=61
for Compute Architecture 6.1 for example.
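Plugging that flag into the configure command from earlier in the thread would give something like the following (a sketch; paths and other options as in the original command):

```shell
# Reconfigure NEURON/CoreNEURON for a specific GPU compute capability (here 6.1)
cmake .. -DNRN_ENABLE_CORENEURON=ON -DCORENRN_ENABLE_GPU=ON \
      -DCMAKE_CUDA_ARCHITECTURES=61 \
      -DNRN_ENABLE_INTERVIEWS=OFF -DNRN_ENABLE_RX3D=OFF \
      -DCMAKE_INSTALL_PREFIX=$HOME/install \
      -DCMAKE_C_COMPILER=nvc -DCMAKE_CXX_COMPILER=nvc++ -DCMAKE_CUDA_COMPILER=nvcc
```

After reconfiguring, the build and install steps need to be rerun so the GPU code is regenerated for the new architecture.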
Thanks. I have the following device:
description: VGA compatible controller
product: GP104BM [GeForce GTX 1080 Mobile]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:01:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
configuration: driver=nvidia latency=0
resources: irq:163 memory:9c000000-9cffffff memory:70000000-7fffffff memory:80000000-81ffffff ioport:5000(size=128) memory:c0000-dffff
The compute capability seems to be 6.1.
I am going to compile coreneuron again and I will keep you informed.
Dear @iomaganaris,
Now, it is recognizing the GPU but there is a new error:
Command:
mpirun -n 6 ./x86_64/special -mpi -python completeModel-100GC.py
Output:
--------------------------------------------------------------------------
[[49409,1],2]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:
Module: OpenFabrics (openib)
Host: jhielson-Alienware-15-R4
Another transport will be used instead, although this may result in
lower performance.
NOTE: You can disable this warning by setting the MCA parameter
btl_base_warn_component_unused to 0.
--------------------------------------------------------------------------
numprocs=6
NEURON -- VERSION 8.2a-5-g6e4f8cb0+ master (6e4f8cb0+) 2022-06-11
Duke, Yale, and the BlueBrain Project -- Copyright 1984-2021
See http://neuron.yale.edu/neuron/credits
Additional mechanisms from files
"./fi_stdp.mod" "./GP.mod" "./Izhi2003b.mod" "./izhi2007a.mod" "./izhi2007bS.mod" "./mySTDP.mod" "./spikeout.mod" "./stdwa_songabbott.mod" "./STN2.mod" "./Str.mod" "./SynExp2NMDA.mod" "./thalamus.mod"
Start time: 2022-06-20 14:42:52.738020
Creating network of 149 cell populations on 6 hosts...
Number of cells on node 1: 51
Number of cells on node 2: 51
Number of cells on node 3: 50
Number of cells on node 4: 50
Number of cells on node 5: 50
Number of cells on node 0: 51
Done; cell creation time = 0.08 s.
Making connections...
Number of connections on node 2: 428
Number of synaptic contacts on node 2: 434
Number of connections on node 5: 545
Number of synaptic contacts on node 5: 551
Number of connections on node 3: 454
Number of synaptic contacts on node 3: 462
Number of connections on node 0: 525
Number of synaptic contacts on node 0: 531
Number of connections on node 4: 542
Number of synaptic contacts on node 4: 548
Number of connections on node 1: 506
Number of synaptic contacts on node 1: 512
Done; cell connection time = 0.54 s.
Adding stims...
Number of stims on node 4: 31
Number of stims on node 2: 30
Number of stims on node 5: 31
Number of stims on node 3: 30
Number of stims on node 0: 30
Number of stims on node 1: 30
Done; cell stims creation time = 0.02 s.
exp(760.937) out of range, returning exp(700)
exp(760.937) out of range, returning exp(700)
exp(930.274) out of range, returning exp(700)
exp(930.274) out of range, returning exp(700)
exp(762.501) out of range, returning exp(700)
exp(762.501) out of range, returning exp(700)
exp(810.312) out of range, returning exp(700)
exp(810.312) out of range, returning exp(700)
exp(871.081) out of range, returning exp(700)
exp(871.081) out of range, returning exp(700)
exp(829.235) out of range, returning exp(700)
exp(829.235) out of range, returning exp(700)
Running with interval func using CoreNEURON for 20000.0 ms...
num_mpi=6
Info : 1 GPUs shared by 6 ranks per node
Duke, Yale, and the BlueBrain Project -- Copyright 1984-2020
Version : 8.2.0 6215225f (2022-06-08 10:09:55 +0200)
Additional mechanisms from files
GP.mod Izhi2003b.mod STN2.mod Str.mod SynExp2NMDA.mod exp2syn.mod expsyn.mod fi_stdp.mod hh.mod izhi2007a.mod izhi2007bS.mod mySTDP.mod netstim.mod passive.mod pattern.mod spikeout.mod stdwa_songabbott.mod stim.mod svclmp.mod thalamus.mod
Memory (MBs) : After mk_mech : Max 398.7812, Min 398.0742, Avg 398.4473
GPU Memory (MiBs) : Used = 2132.000000, Free = 5953.937500, Total = 8085.937500
Memory (MBs) : After MPI_Init : Max 398.7812, Min 398.0742, Avg 398.4473
GPU Memory (MiBs) : Used = 2132.000000, Free = 5953.937500, Total = 8085.937500
Memory (MBs) : Before nrn_setup : Max 399.2930, Min 398.5078, Avg 398.8932
GPU Memory (MiBs) : Used = 2132.000000, Free = 5953.937500, Total = 8085.937500
best_balance=0.978723 ncell=47 ntype=1 nwarp=47
best_balance=0.978723 ncell=47 ntype=1 nwarp=47
best_balance=0.978723 ncell=47 ntype=1 nwarp=47
best_balance=0.978723 ncell=47 ntype=1 nwarp=47
best_balance=0.978723 ncell=47 ntype=1 nwarp=47
best_balance=0.978723 ncell=47 ntype=1 nwarp=47
Setup Done : 0.00 seconds
Model size : 766.18 kB
Memory (MBs) : After nrn_setup : Max 399.2930, Min 398.5078, Avg 398.8932
GPU Memory (MiBs) : Used = 2132.000000, Free = 5953.937500, Total = 8085.937500
GENERAL PARAMETERS
--mpi=true
--mpi-lib=
--gpu=true
--dt=0.1
--tstop=1
GPU
--nwarp=65536
--cell-permute=2
--cuda-interface=false
INPUT PARAMETERS
--voltage=1000
--seed=-1
--datpath=.
--filesdat=files.dat
--pattern=
--report-conf=
--restore=
PARALLEL COMPUTATION PARAMETERS
--threading=false
--skip_mpi_finalize=true
SPIKE EXCHANGE
--ms_phases=2
--ms_subintervals=2
--multisend=false
--spk_compress=0
--binqueue=false
CONFIGURATION
--spikebuf=100000
--prcellgid=-1
--forwardskip=0
--celsius=6.3
--mindelay=1
--report-buffer-size=4
OUTPUT PARAMETERS
--dt_io=0.1
--outpath=.
--checkpoint=
Start time (t) = 0
Memory (MBs) : After mk_spikevec_buffer : Max 399.2930, Min 398.5078, Avg 398.8932
GPU Memory (MiBs) : Used = 2132.000000, Free = 5953.937500, Total = 8085.937500
Memory (MBs) : After nrn_finitialize : Max 399.9531, Min 399.1055, Avg 399.5358
GPU Memory (MiBs) : Used = 2132.000000, Free = 5953.937500, Total = 8085.937500
psolve |=========================================================| t: 1.00 ETA: 0h00m01s
Solver Time : 0.100294
Simulation Statistics
Number of cells: 282
Number of compartments: 846
Number of presyns: 481
Number of input presyns: 1137
Number of synapses: 3058
Number of point processes: 3419
Number of transfer sources: 0
Number of transfer targets: 0
Number of spikes: 12
Number of spikes with non negative gid-s: 12
special: /home/jhielson/coreneuron_repository/nrn/src/nrniv/nrncore_write/callbacks/nrncore_callbacks.cpp:1093: void core2nrn_SelfEvent_event(int, double, int, int, double, unsigned long, int): Assertion `nc->target_ == pnt' failed.
special: /home/jhielson/coreneuron_repository/nrn/src/nrniv/nrncore_write/callbacks/nrncore_callbacks.cpp:1093: void core2nrn_SelfEvent_event(int, double, int, int, double, unsigned long, int): Assertion `nc->target_ == pnt' failed.
special: /home/jhielson/coreneuron_repository/nrn/src/nrniv/nrncore_write/callbacks/nrncore_callbacks.cpp:1093: void core2nrn_SelfEvent_event(int, double, int, int, double, unsigned long, int): Assertion `nc->target_ == pnt' failed.
special: /home/jhielson/coreneuron_repository/nrn/src/nrniv/nrncore_write/callbacks/nrncore_callbacks.cpp:1093: void core2nrn_SelfEvent_event(int, double, int, int, double, unsigned long, int): Assertion `nc->target_ == pnt' failed.
[jhielson-Alienware-15-R4:52731] *** Process received signal ***
[jhielson-Alienware-15-R4:52731] Signal: Aborted (6)
[jhielson-Alienware-15-R4:52731] Signal code: (-6)
special: /home/jhielson/coreneuron_repository/nrn/src/nrniv/nrncore_write/callbacks/nrncore_callbacks.cpp:1093: void core2nrn_SelfEvent_event(int, double, int, int, double, unsigned long, int): Assertion `nc->target_ == pnt' failed.
[jhielson-Alienware-15-R4:52733] *** Process received signal ***
[jhielson-Alienware-15-R4:52733] Signal: Aborted (6)
[jhielson-Alienware-15-R4:52733] Signal code: (-6)
special: /home/jhielson/coreneuron_repository/nrn/src/nrniv/nrncore_write/callbacks/nrncore_callbacks.cpp:1093: void core2nrn_SelfEvent_event(int, double, int, int, double, unsigned long, int): Assertion `nc->target_ == pnt' failed.
[jhielson-Alienware-15-R4:52730] *** Process received signal ***
[jhielson-Alienware-15-R4:52730] Signal: Aborted (6)
[jhielson-Alienware-15-R4:52730] Signal code: (-6)
[jhielson-Alienware-15-R4:52734] *** Process received signal ***
[jhielson-Alienware-15-R4:52734] Signal: Aborted (6)
[jhielson-Alienware-15-R4:52734] Signal code: (-6)
[jhielson-Alienware-15-R4:52732] *** Process received signal ***
[jhielson-Alienware-15-R4:52732] Signal: Aborted (6)
[jhielson-Alienware-15-R4:52732] Signal code: (-6)
[jhielson-Alienware-15-R4:52729] *** Process received signal ***
[jhielson-Alienware-15-R4:52729] Signal: Aborted (6)
[jhielson-Alienware-15-R4:52729] Signal code: (-6)
[jhielson-Alienware-15-R4:52731] [ 0] [jhielson-Alienware-15-R4:52734] [ 0] [jhielson-Alienware-15-R4:52733] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x14420)[0x7fe493b6a420]
[jhielson-Alienware-15-R4:52731] [ 1] /lib/x86_64-linux-gnu/libpthread.so.0(+0x14420)[0x7f326cdec420]
[jhielson-Alienware-15-R4:52733] [ 1] [jhielson-Alienware-15-R4:52729] [ 0] [jhielson-Alienware-15-R4:52732] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x14420)[0x7f65d5fdf420]
[jhielson-Alienware-15-R4:52732] [ 1] /lib/x86_64-linux-gnu/libpthread.so.0(+0x14420)[0x7f2442676420]
[jhielson-Alienware-15-R4:52734] [ 1] [jhielson-Alienware-15-R4:52730] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x14420)[0x7f5fd03a0420]
[jhielson-Alienware-15-R4:52730] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7fe49330500b]
[jhielson-Alienware-15-R4:52731] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x12b)[0x7fe4932e4859]
[jhielson-Alienware-15-R4:52731] [ 3] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7f326c58700b]
[jhielson-Alienware-15-R4:52733] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x12b)[0x7f326c566859]
[jhielson-Alienware-15-R4:52733] /lib/x86_64-linux-gnu/libpthread.so.0(+0x14420)[0x7f095c9b7420]
[jhielson-Alienware-15-R4:52729] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7f095c15200b]
[jhielson-Alienware-15-R4:52729] [ 2] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7f65d577a00b]
[jhielson-Alienware-15-R4:52732] [ 2] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7f2441e1100b]
[jhielson-Alienware-15-R4:52734] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x12b)[0x7f2441df0859]
[jhielson-Alienware-15-R4:52734] [ 3] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7f5fcfb3b00b]
[jhielson-Alienware-15-R4:52730] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x22729)[0x7fe4932e4729]
[jhielson-Alienware-15-R4:52731] [ 4] [ 3] /lib/x86_64-linux-gnu/libc.so.6(+0x22729)[0x7f2441df0729]
[jhielson-Alienware-15-R4:52734] [ 4] /lib/x86_64-linux-gnu/libc.so.6(+0x33fd6)[0x7f2441e01fd6]
[jhielson-Alienware-15-R4:52734] [ 5] /lib/x86_64-linux-gnu/libc.so.6(abort+0x12b)[0x7f5fcfb1a859]
[jhielson-Alienware-15-R4:52730] [ 3] /lib/x86_64-linux-gnu/libc.so.6(+0x22729)[0x7f5fcfb1a729]
[jhielson-Alienware-15-R4:52730] [ 4] /lib/x86_64-linux-gnu/libc.so.6(+0x33fd6)[0x7f5fcfb2bfd6]
[jhielson-Alienware-15-R4:52730] [ 5] /lib/x86_64-linux-gnu/libc.so.6(+0x33fd6)[0x7fe4932f5fd6]
[jhielson-Alienware-15-R4:52731] [ 5] /lib/x86_64-linux-gnu/libc.so.6(abort+0x12b)[0x7f095c131859]
[jhielson-Alienware-15-R4:52729] [ 3] /lib/x86_64-linux-gnu/libc.so.6(+0x22729)[0x7f095c131729]
[jhielson-Alienware-15-R4:52729] [ 4] /lib/x86_64-linux-gnu/libc.so.6(+0x33fd6)[0x7f095c142fd6]
[jhielson-Alienware-15-R4:52729] [ 5] /lib/x86_64-linux-gnu/libc.so.6(abort+0x12b)[0x7f65d5759859]
[jhielson-Alienware-15-R4:52732] [ 3] /lib/x86_64-linux-gnu/libc.so.6(+0x22729)[0x7f65d5759729]
[jhielson-Alienware-15-R4:52732] [ 4] /lib/x86_64-linux-gnu/libc.so.6(+0x33fd6)[0x7f65d576afd6]
[jhielson-Alienware-15-R4:52732] [ 5] /lib/x86_64-linux-gnu/libc.so.6(+0x22729)[0x7f326c566729]
[jhielson-Alienware-15-R4:52733] [ 4] /lib/x86_64-linux-gnu/libc.so.6(+0x33fd6)[0x7f326c577fd6]
[jhielson-Alienware-15-R4:52733] [ 5] /home/jhielson/install/lib/libnrniv.so(+0x1e5e9e)[0x7fe4969b2e9e]
[jhielson-Alienware-15-R4:52731] [ 6] /home/jhielson/install/lib/libnrniv.so(+0x1e5e9e)[0x7f24454bee9e]
[jhielson-Alienware-15-R4:52734] [ 6] /home/jhielson/install/lib/libnrniv.so(+0x1e5e9e)[0x7f326fc34e9e]
[jhielson-Alienware-15-R4:52733] [ 6] ./x86_64/special(_ZN10coreneuron20core2nrn_data_returnEv+0x12cc)[0x53cbcc]
[jhielson-Alienware-15-R4:52731] [ 7] /home/jhielson/install/lib/libnrniv.so(+0x1e5e9e)[0x7f095f7ffe9e]
[jhielson-Alienware-15-R4:52729] [ 6] /home/jhielson/install/lib/libnrniv.so(+0x1e5e9e)[0x7f65d8e27e9e]
[jhielson-Alienware-15-R4:52732] [ 6] /home/jhielson/install/lib/libnrniv.so(+0x1e5e9e)[0x7f5fd31e8e9e]
[jhielson-Alienware-15-R4:52730] [ 6] ./x86_64/special(_ZN10coreneuron20core2nrn_data_returnEv+0x12cc)[0x53cbcc]
[jhielson-Alienware-15-R4:52733] [ 7] ./x86_64/special(_ZN10coreneuron20core2nrn_data_returnEv+0x12cc)[0x53cbcc]
[jhielson-Alienware-15-R4:52734] [ 7] ./x86_64/special(run_solve_core+0xa5f)[0x4f639f]
[jhielson-Alienware-15-R4:52731] [ 8] ./x86_64/special(_ZN10coreneuron20core2nrn_data_returnEv+0x12cc)[0x53cbcc]
[jhielson-Alienware-15-R4:52729] [ 7] ./x86_64/special(_ZN10coreneuron20core2nrn_data_returnEv+0x12cc)[0x53cbcc]
[jhielson-Alienware-15-R4:52730] [ 7] ./x86_64/special(_ZN10coreneuron20core2nrn_data_returnEv+0x12cc)[0x53cbcc]
[jhielson-Alienware-15-R4:52732] [ 7] ./x86_64/special(run_solve_core+0xa5f)[0x4f639f]
[jhielson-Alienware-15-R4:52734] [ 8] ./x86_64/special(corenrn_embedded_run+0x8e)[0x4555ce]
[jhielson-Alienware-15-R4:52731] [ 9] ./x86_64/special(run_solve_core+0xa5f)[0x4f639f]
[jhielson-Alienware-15-R4:52733] [ 8] ./x86_64/special(corenrn_embedded_run+0x8e)[0x4555ce]
[jhielson-Alienware-15-R4:52734] [ 9] ./x86_64/special(run_solve_core+0xa5f)[0x4f639f]
[jhielson-Alienware-15-R4:52729] [ 8] ./x86_64/special(corenrn_embedded_run+0x8e)[0x4555ce]
[jhielson-Alienware-15-R4:52733] [ 9] ./x86_64/special(run_solve_core+0xa5f)[0x4f639f]
[jhielson-Alienware-15-R4:52730] [ 8] ./x86_64/special(run_solve_core+0xa5f)[0x4f639f]
[jhielson-Alienware-15-R4:52732] [ 8] ./x86_64/special(corenrn_embedded_run+0x8e)[0x4555ce]
[jhielson-Alienware-15-R4:52729] [ 9] ./x86_64/special(corenrn_embedded_run+0x8e)[0x4555ce]
[jhielson-Alienware-15-R4:52730] [ 9] ./x86_64/special(corenrn_embedded_run+0x8e)[0x4555ce]
[jhielson-Alienware-15-R4:52732] [ 9] /home/jhielson/install/lib/libnrniv.so(_Z14nrncore_psolvedi+0x229)[0x7fe4969ad929]
[jhielson-Alienware-15-R4:52731] [10] /home/jhielson/install/lib/libnrniv.so(_Z14nrncore_psolvedi+0x229)[0x7f24454b9929]
[jhielson-Alienware-15-R4:52734] [10] /home/jhielson/install/lib/libnrniv.so(_Z14nrncore_psolvedi+0x229)[0x7f326fc2f929]
[jhielson-Alienware-15-R4:52733] [10] /home/jhielson/install/lib/libnrniv.so(_Z14nrncore_psolvedi+0x229)[0x7f095f7fa929]
[jhielson-Alienware-15-R4:52729] [10] /home/jhielson/install/lib/libnrniv.so(+0x205c91)[0x7fe4969d2c91]
[jhielson-Alienware-15-R4:52731] [11] /home/jhielson/install/lib/libnrniv.so(_Z14nrncore_psolvedi+0x229)[0x7f65d8e22929]
[jhielson-Alienware-15-R4:52732] [10] /home/jhielson/install/lib/libnrniv.so(+0x205c91)[0x7f24454dec91]
[jhielson-Alienware-15-R4:52734] [11] /home/jhielson/install/lib/libnrniv.so(+0x205c91)[0x7f326fc54c91]
[jhielson-Alienware-15-R4:52733] [11] /home/jhielson/install/lib/libnrniv.so(+0x205c91)[0x7f095f81fc91]
[jhielson-Alienware-15-R4:52729] [11] /home/jhielson/install/lib/libnrniv.so(_Z14nrncore_psolvedi+0x229)[0x7f5fd31e3929]
[jhielson-Alienware-15-R4:52730] [10] /home/jhielson/install/lib/libnrniv.so(+0x205c91)[0x7f65d8e47c91]
[jhielson-Alienware-15-R4:52732] [11] /home/jhielson/install/lib/libnrniv.so(_Z20hoc_object_componentv+0xc4e)[0x7fe496a6738e]
[jhielson-Alienware-15-R4:52731] [12] /home/jhielson/install/lib/libnrniv.so(_Z20hoc_object_componentv+0xc4e)[0x7f326fce938e]
[jhielson-Alienware-15-R4:52733] [12] /home/jhielson/install/lib/libnrniv.so(_Z20hoc_object_componentv+0xc4e)[0x7f244557338e]
[jhielson-Alienware-15-R4:52734] [12] /home/jhielson/install/lib/libnrniv.so(_Z20hoc_object_componentv+0xc4e)[0x7f095f8b438e]
[jhielson-Alienware-15-R4:52729] [12] /home/jhielson/install/lib/libnrniv.so(+0x205c91)[0x7f5fd3208c91]
[jhielson-Alienware-15-R4:52730] [11] /home/jhielson/install/lib/libnrniv.so(+0x3ef0bb)[0x7fe496bbc0bb]
[jhielson-Alienware-15-R4:52731] [13] /home/jhielson/install/lib/libnrniv.so(_Z20hoc_object_componentv+0xc4e)[0x7f65d8edc38e]
[jhielson-Alienware-15-R4:52732] [12] /home/jhielson/install/lib/libnrniv.so(+0x3ef0bb)[0x7f095fa090bb]
[jhielson-Alienware-15-R4:52729] [13] /home/jhielson/install/lib/libnrniv.so(+0x3ef0bb)[0x7f326fe3e0bb]
[jhielson-Alienware-15-R4:52733] [13] /home/jhielson/install/lib/libnrniv.so(+0x3ef0bb)[0x7f24456c80bb]
[jhielson-Alienware-15-R4:52734] [13] /home/jhielson/install/lib/libnrniv.so(_ZN6OcJump7fpycallEPFPvS0_S0_ES0_S0_+0x22f)[0x7fe4969d66af]
[jhielson-Alienware-15-R4:52731] [14] /home/jhielson/install/lib/libnrniv.so(_Z20hoc_object_componentv+0xc4e)[0x7f5fd329d38e]
[jhielson-Alienware-15-R4:52730] [12] /home/jhielson/install/lib/libnrniv.so(+0x3ef0bb)[0x7f65d90310bb]
[jhielson-Alienware-15-R4:52732] [13] /home/jhielson/install/lib/libnrniv.so(_ZN6OcJump7fpycallEPFPvS0_S0_ES0_S0_+0x22f)[0x7f095f8236af]
[jhielson-Alienware-15-R4:52729] [14] /home/jhielson/install/lib/libnrniv.so(_ZN6OcJump7fpycallEPFPvS0_S0_ES0_S0_+0x22f)[0x7f326fc586af]
[jhielson-Alienware-15-R4:52733] [14] /home/jhielson/install/lib/libnrniv.so(_ZN6OcJump7fpycallEPFPvS0_S0_ES0_S0_+0x22f)[0x7f24454e26af]
[jhielson-Alienware-15-R4:52734] [14] /home/jhielson/install/lib/libnrniv.so(+0x3ef772)[0x7fe496bbc772]
[jhielson-Alienware-15-R4:52731] [15] /home/jhielson/install/lib/libnrniv.so(+0x3ef0bb)[0x7f5fd33f20bb]
[jhielson-Alienware-15-R4:52730] [13] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyObject_MakeTpCall+0xab)[0x7fe496485b1b]
[jhielson-Alienware-15-R4:52731] [16] /home/jhielson/install/lib/libnrniv.so(+0x3ef772)[0x7f095fa09772]
[jhielson-Alienware-15-R4:52729] [15] /home/jhielson/install/lib/libnrniv.so(_ZN6OcJump7fpycallEPFPvS0_S0_ES0_S0_+0x22f)[0x7f65d8e4b6af]
[jhielson-Alienware-15-R4:52732] [14] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74df3)[0x7fe496251df3]
[jhielson-Alienware-15-R4:52731] [17] /home/jhielson/install/lib/libnrniv.so(+0x3ef772)[0x7f326fe3e772]
[jhielson-Alienware-15-R4:52733] [15] /home/jhielson/install/lib/libnrniv.so(+0x3ef772)[0x7f24456c8772]
[jhielson-Alienware-15-R4:52734] [15] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7fe496259ef6]
[jhielson-Alienware-15-R4:52731] [18] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyObject_MakeTpCall+0xab)[0x7f095f2d2b1b]
[jhielson-Alienware-15-R4:52729] [16] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74df3)[0x7f095f09edf3]
[jhielson-Alienware-15-R4:52729] [17] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyObject_MakeTpCall+0xab)[0x7f326f707b1b]
[jhielson-Alienware-15-R4:52733] [16] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7f095f0a6ef6]
[jhielson-Alienware-15-R4:52729] [18] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyObject_MakeTpCall+0xab)[0x7f2444f91b1b]
[jhielson-Alienware-15-R4:52734] [16] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7fe4963a7e3b]
[jhielson-Alienware-15-R4:52731] [19] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74df3)[0x7f326f4d3df3]
[jhielson-Alienware-15-R4:52733] [17] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f095f1f4e3b]
[jhielson-Alienware-15-R4:52729] [19] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7f326f4dbef6]
[jhielson-Alienware-15-R4:52733] [18] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7fe496485114]
[jhielson-Alienware-15-R4:52731] [20] /home/jhielson/install/lib/libnrniv.so(_ZN6OcJump7fpycallEPFPvS0_S0_ES0_S0_+0x22f)[0x7f5fd320c6af]
[jhielson-Alienware-15-R4:52730] [14] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7f095f2d2114]
[jhielson-Alienware-15-R4:52729] [20] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74df3)[0x7f2444d5ddf3]
[jhielson-Alienware-15-R4:52734] [17] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7f2444d65ef6]
[jhielson-Alienware-15-R4:52734] [18] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f326f629e3b]
[jhielson-Alienware-15-R4:52733] [19] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7f326f707114]
[jhielson-Alienware-15-R4:52733] [20] /home/jhielson/install/lib/libnrniv.so(+0x3ef772)[0x7f65d9031772]
[jhielson-Alienware-15-R4:52732] [15] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyObject_MakeTpCall+0xab)[0x7f65d88fab1b]
/usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(PyVectorcall_Call+0x60)[0x7fe496485830]
[jhielson-Alienware-15-R4:52731] [21] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x590a)[0x7fe496257a7a]
[jhielson-Alienware-15-R4:52731] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(PyVectorcall_Call+0x60)[0x7f095f2d2830]
[jhielson-Alienware-15-R4:52729] [21] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x590a)[0x7f095f0a4a7a]
[jhielson-Alienware-15-R4:52729] [22] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(PyVectorcall_Call+0x60)[0x7f326f707830]
[jhielson-Alienware-15-R4:52733] [21] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x590a)[0x7f326f4d9a7a]
[jhielson-Alienware-15-R4:52733] [22] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f2444eb3e3b]
[jhielson-Alienware-15-R4:52734] [19] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7f2444f91114]
[jhielson-Alienware-15-R4:52734] [20] [jhielson-Alienware-15-R4:52732] [16] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74df3)[0x7f65d86c6df3]
[jhielson-Alienware-15-R4:52732] [17] [22] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7fe4963a7e3b]
[jhielson-Alienware-15-R4:52731] [23] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f095f1f4e3b]
[jhielson-Alienware-15-R4:52729] [23] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7f65d86ceef6]
[jhielson-Alienware-15-R4:52732] [18] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7fe496485114]
[jhielson-Alienware-15-R4:52731] [24] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f326f629e3b]
[jhielson-Alienware-15-R4:52733] [23] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(PyVectorcall_Call+0x60)[0x7f2444f91830]
[jhielson-Alienware-15-R4:52734] [21] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x590a)[0x7f2444d63a7a]
[jhielson-Alienware-15-R4:52734] [22] /home/jhielson/install/lib/libnrniv.so(+0x3ef772)[0x7f5fd33f2772]
[jhielson-Alienware-15-R4:52730] [15] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7f095f2d2114]
[jhielson-Alienware-15-R4:52729] [24] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7f095f09ed6d]
[jhielson-Alienware-15-R4:52729] [25] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7f095f0a6ef6]
[jhielson-Alienware-15-R4:52729] [26] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f65d881ce3b]
[jhielson-Alienware-15-R4:52732] [19] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7fe496251d6d]
[jhielson-Alienware-15-R4:52731] [25] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7f326f707114]
[jhielson-Alienware-15-R4:52733] [24] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyObject_MakeTpCall+0xab)[0x7f5fd2cbbb1b]
[jhielson-Alienware-15-R4:52730] [16] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x8006b)[0x7f095f0aa06b]
[jhielson-Alienware-15-R4:52729] [27] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7f095f09ed6d]
[jhielson-Alienware-15-R4:52729] [28] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0xea8)[0x7f095f0a0018]
[jhielson-Alienware-15-R4:52729] [29] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7f65d88fa114]
[jhielson-Alienware-15-R4:52732] [20] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(PyVectorcall_Call+0x60)[0x7f65d88fa830]
[jhielson-Alienware-15-R4:52732] [21] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7fe496259ef6]
[jhielson-Alienware-15-R4:52731] [26] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x8006b)[0x7fe49625d06b]
[jhielson-Alienware-15-R4:52731] [27] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7fe496251d6d]
[jhielson-Alienware-15-R4:52731] [28] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0xea8)[0x7fe496253018]
[jhielson-Alienware-15-R4:52731] [29] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f2444eb3e3b]
[jhielson-Alienware-15-R4:52734] [23] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7f2444f91114]
[jhielson-Alienware-15-R4:52734] [24] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7f326f4d3d6d]
[jhielson-Alienware-15-R4:52733] [25] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7f326f4dbef6]
[jhielson-Alienware-15-R4:52733] [26] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x8006b)[0x7f326f4df06b]
[jhielson-Alienware-15-R4:52733] [27] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7f326f4d3d6d]
[jhielson-Alienware-15-R4:52733] [28] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74df3)[0x7f5fd2a87df3]
[jhielson-Alienware-15-R4:52730] [17] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7f5fd2a8fef6]
[jhielson-Alienware-15-R4:52730] [18] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f095f1f4e3b]
[jhielson-Alienware-15-R4:52729] *** End of error message ***
/usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x590a)[0x7f65d86cca7a]
[jhielson-Alienware-15-R4:52732] [22] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7f2444d5dd6d]
[jhielson-Alienware-15-R4:52734] [25] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f5fd2bdde3b]
[jhielson-Alienware-15-R4:52730] [19] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7f5fd2cbb114]
[jhielson-Alienware-15-R4:52730] [20] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0xea8)[0x7f326f4d5018]
[jhielson-Alienware-15-R4:52733] [29] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f326f629e3b]
[jhielson-Alienware-15-R4:52733] *** End of error message ***
/usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7f2444d65ef6]
[jhielson-Alienware-15-R4:52734] [26] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7fe4963a7e3b]
[jhielson-Alienware-15-R4:52731] *** End of error message ***
/usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f65d881ce3b]
[jhielson-Alienware-15-R4:52732] [23] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x8006b)[0x7f2444d6906b]
[jhielson-Alienware-15-R4:52734] [27] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7f65d88fa114]
[jhielson-Alienware-15-R4:52732] [24] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(PyVectorcall_Call+0x60)[0x7f5fd2cbb830]
[jhielson-Alienware-15-R4:52730] [21] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7f2444d5dd6d]
/usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7f65d86c6d6d]
[jhielson-Alienware-15-R4:52732] [25] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x590a)[0x7f5fd2a8da7a]
[jhielson-Alienware-15-R4:52730] [22] [jhielson-Alienware-15-R4:52734] [28] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f5fd2bdde3b]
[jhielson-Alienware-15-R4:52730] [23] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0xea8)[0x7f2444d5f018]
[jhielson-Alienware-15-R4:52734] [29] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7f65d86ceef6]
[jhielson-Alienware-15-R4:52732] [26] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7f5fd2cbb114]
[jhielson-Alienware-15-R4:52730] [24] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x8006b)[0x7f65d86d206b]
[jhielson-Alienware-15-R4:52732] [27] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f2444eb3e3b]
[jhielson-Alienware-15-R4:52734] *** End of error message ***
/usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7f5fd2a87d6d]
[jhielson-Alienware-15-R4:52730] [25] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7f5fd2a8fef6]
[jhielson-Alienware-15-R4:52730] [26] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7f65d86c6d6d]
[jhielson-Alienware-15-R4:52732] [28] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0xea8)[0x7f65d86c8018]
[jhielson-Alienware-15-R4:52732] [29] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f65d881ce3b]
[jhielson-Alienware-15-R4:52732] *** End of error message ***
/usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x8006b)[0x7f5fd2a9306b]
[jhielson-Alienware-15-R4:52730] [27] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7f5fd2a87d6d]
[jhielson-Alienware-15-R4:52730] [28] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0xea8)[0x7f5fd2a89018]
[jhielson-Alienware-15-R4:52730] [29] /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f5fd2bdde3b]
[jhielson-Alienware-15-R4:52730] *** End of error message ***
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 4 with PID 0 on node jhielson-Alienware-15-R4 exited on signal 6 (Aborted).
--------------------------------------------------------------------------
[jhielson-Alienware-15-R4:52725] 5 more processes have sent help message help-mpi-btl-base.txt / btl:no-nics
[jhielson-Alienware-15-R4:52725] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Hi,
The error comes from line 1093 of the following code: https://nrn.readthedocs.io/en/latest/doxygen/nrncore__callbacks_8cpp_source.html. I am not sure how to fix that.
@jhielson: Just FYI, we (neuron/coreneuron developers) are meeting today+tomorrow for a hackathon. In case you have time and would like to, you can join for some time (via zoom) and we can take a look at this together. Can send you the zoom link if you are interested.
Hi @pramodk, I would appreciate that.
Hello @jhielson ,
I sent you a zoom invite to your email. You can join our zoom meeting and sync there
| gharchive/issue | 2022-06-16T13:06:38 | 2025-04-01T04:32:20.439512 | {
"authors": [
"iomaganaris",
"jhielson",
"pramodk"
],
"repo": "BlueBrain/CoreNeuron",
"url": "https://github.com/BlueBrain/CoreNeuron/issues/827",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2545315896 | Add first iteration of plotting tool as a script
NOT MEANT TO BE MERGED
Might revisit this later when the simulation tool is done.
| gharchive/pull-request | 2024-09-24T12:48:52 | 2025-04-01T04:32:20.443400 | {
"authors": [
"BoBer78",
"WonderPG"
],
"repo": "BlueBrain/neuroagent",
"url": "https://github.com/BlueBrain/neuroagent/pull/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
556068333 | Version Endpoint change - nexus-sdk-js
service endpoint result is a little different, make sure to reflect this change in types
part of https://github.com/BlueBrain/nexus/issues/996
We've decided not to implement this in the SDK, but leave it up to the clients
| gharchive/issue | 2020-01-28T09:00:04 | 2025-04-01T04:32:20.444992 | {
"authors": [
"kenjinp"
],
"repo": "BlueBrain/nexus",
"url": "https://github.com/BlueBrain/nexus/issues/1005",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1587193605 | 🛑 hosted-PalmettoGBA is down
In 7acb5e6, hosted-PalmettoGBA (https://palmettogba.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: hosted-PalmettoGBA is back up in 947321b.
| gharchive/issue | 2023-02-16T07:47:21 | 2025-04-01T04:32:20.447429 | {
"authors": [
"BlueDude0"
],
"repo": "BlueDude0/BlueSiteStatus",
"url": "https://github.com/BlueDude0/BlueSiteStatus/issues/3112",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1856644607 | 🛑 hosted-PalmettoGBA is down
In 9db9cfc, hosted-PalmettoGBA (https://palmettogba.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: hosted-PalmettoGBA is back up in f24564f.
| gharchive/issue | 2023-08-18T12:30:18 | 2025-04-01T04:32:20.451352 | {
"authors": [
"BlueDude0"
],
"repo": "BlueDude0/BlueSiteStatus",
"url": "https://github.com/BlueDude0/BlueSiteStatus/issues/6538",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2162248905 | 🛑 hosted-PalmettoGBA is down
In 0a0bb03, hosted-PalmettoGBA (https://palmettogba.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: hosted-PalmettoGBA is back up in 1c4ecff after 4 minutes.
| gharchive/issue | 2024-02-29T23:09:37 | 2025-04-01T04:32:20.453673 | {
"authors": [
"BlueDude0"
],
"repo": "BlueDude0/BlueSiteStatus",
"url": "https://github.com/BlueDude0/BlueSiteStatus/issues/9274",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
851783490 | Pirate Speak Language
Added Pirate Speak.
Please add onto this! It's not very good yet
Cancelling the pull request for now, while it's still being worked on in the dry dock! (My fork: https://github.com/TechnicJelle/BlueMapVue)
| gharchive/pull-request | 2021-04-06T20:24:44 | 2025-04-01T04:32:20.460380 | {
"authors": [
"TechnicJelle"
],
"repo": "BlueMap-Minecraft/BlueMapVue",
"url": "https://github.com/BlueMap-Minecraft/BlueMapVue/pull/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2286651112 | Move DeltaTime from property to Update method
Suggestion
Instead of public double DeltaTime { get; } -> Update(in NexusInputCollection inputs, in double deltaTime)
Additional context
No response
DeltaTime should be kept as a property if the user wants to use it outside of Update
| gharchive/issue | 2024-05-08T23:47:24 | 2025-04-01T04:32:20.476940 | {
"authors": [
"BlyZeYT"
],
"repo": "BlyZeYT/ConsoleNexusEngine",
"url": "https://github.com/BlyZeYT/ConsoleNexusEngine/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
SMTP email sending does not seem to work
I tried Outlook, Gmail, Yandex, and NetEase; after running tasks such as the email queue and the daily user report, no emails are sent. Clicking "get verification code" shows "send error 0".
Looking at the database, it seems the messages are never successfully added to email_queue.
This is a problem on your own end.
| gharchive/issue | 2022-04-04T19:34:36 | 2025-04-01T04:32:20.481640 | {
"authors": [
"BobCoderS9",
"RealSlakey"
],
"repo": "BobCoderS9/SSPanel-Metron",
"url": "https://github.com/BobCoderS9/SSPanel-Metron/issues/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
440373572 | App freezes and crashes with "out of memory"
OS: Windows 10
Same behavior with packaged version 0.0.87 and master at ab0fc60 (via npm install and npm start). App starts normally and allows me to open the TeslaCam folder from the USB drive. Selecting a date with multiple Dashcam and Sentry recordings renders time block headers, but no videos are shown:
At this point the app freezes, even affecting the system somewhat (screen flickers once, possibly reloading display drivers? In one instance my Firefox running in the background crashed silently). When run via npm, following is output on the terminal:
Olexandr@Leviathan-PC MINGW64 /c/Tools/teslacam-browser (master)
$ npm start
> teslacam-browser@0.0.85 start C:\Tools\teslacam-browser
> electron .
Skip checkForUpdatesAndNotify because application is not packed
<--- Last few GCs --->
1425 ms: Mark-sweep 17.9 (38.0) -> 15.3 (40.0) MB, 4.5 / 0.0 ms (+ 16.0 ms in 174 steps since start of marking, biggest step 0.5 ms) [GC interrupt] [GC in old space requested].
19633 ms: Mark-sweep 17.0 (40.0) -> 14.8 (25.0) MB, 3.7 / 0.0 ms (+ 20.8 ms in 5 steps since start of marking, biggest step 7.6 ms) [Incremental marking task: finalize incremental marking] [GC in old space requested].
<--- JS stacktrace --->
FATAL ERROR: Committing semi space failed. Allocation failed - process out of memory
Olexandr@Leviathan-PC MINGW64 /c/Tools/teslacam-browser (master)
$ [9552:0504/231712.502:ERROR:gles2_cmd_decoder.cc(10044)] [.BrowserCompositor-00000278C8B25690]GL ERROR :GL_INVALID_OPERATION : glUseProgram: program not linked
[9552:0504/231712.502:ERROR:gles2_cmd_decoder.cc(9494)] [.BrowserCompositor-00000278C8B25690]GL ERROR :GL_INVALID_OPERATION : glUniform2fv: wrong uniform function for type
[9552:0504/231712.503:ERROR:gles2_cmd_decoder.cc(9494)] [.BrowserCompositor-00000278C8B25690]GL ERROR :GL_INVALID_OPERATION : glUniform2fv: wrong uniform function for type
[9552:0504/231712.504:ERROR:gles2_cmd_decoder.cc(9494)] [.BrowserCompositor-00000278C8B25690]GL ERROR :GL_INVALID_OPERATION : glUniform2fv: wrong uniform function for type
[9552:0504/231712.504:ERROR:gles2_cmd_decoder.cc(9520)] [.BrowserCompositor-00000278C8B25690]GL ERROR :GL_INVALID_OPERATION : glUniform4fv: unknown location
[9552:0504/231712.504:ERROR:gles2_cmd_decoder.cc(9520)] [.BrowserCompositor-00000278C8B25690]GL ERROR :GL_INVALID_OPERATION : glUniform4fv: unknown location
[9552:0504/231712.504:ERROR:gles2_cmd_decoder.cc(9520)] [.BrowserCompositor-00000278C8B25690]GL ERROR :GL_INVALID_OPERATION : glUniform1i: unknown location
[9552:0504/231712.505:ERROR:gles2_cmd_decoder.cc(9520)] [.BrowserCompositor-00000278C8B25690]GL ERROR :GL_INVALID_OPERATION : glUniform1i: unknown location
[9552:0504/231712.505:ERROR:gles2_cmd_decoder.cc(9520)] [.BrowserCompositor-00000278C8B25690]GL ERROR :GL_INVALID_OPERATION : glUniform1fv: unknown location
[9552:0504/231712.505:ERROR:gles2_cmd_decoder.cc(9520)] [.BrowserCompositor-00000278C8B25690]GL ERROR :GL_INVALID_OPERATION : glUniform1fv: unknown location
[9552:0504/231712.505:ERROR:gles2_cmd_decoder.cc(9520)] [.BrowserCompositor-00000278C8B25690]GL ERROR :GL_INVALID_OPERATION : glUniform1fv: unknown location
[9552:0504/231712.505:ERROR:gles2_cmd_decoder.cc(9494)] [.BrowserCompositor-00000278C8B25690]GL ERROR :GL_INVALID_OPERATION : glUniformMatrix4fv: wrong uniform function for type
Olexandr@Leviathan-PC MINGW64 /c/Tools/teslacam-browser (master)
$
(note especially how the second part with GL errors is output after the process already terminates and releases the terminal input)
One thing I've tried is increasing Node memory allocation by adding the following directly after imports in main.js:
app.commandLine.appendSwitch('js-flags', '--max-old-space-size=4096')
When run with this line added, the app behaves similarly. The freeze occurs at the same point, but after the screen flicker the Electron wrapper remains alive and the loaded page goes blank. There is no terminal output aside from Skip checkForUpdatesAndNotify because application is not packed, however, and the process does not terminate.
Running the app with Electron dev tools enabled shows no errors or any information on the console. The following error is shown after the page content "crashes" and the page is blanked out:
Ok, further investigation shows that this only happens when opening days with very many recordings (e.g. today after a lot of driving with a full RecentClips folder + 5 Sentry events) - I guess instantiating that many videos brings the HTML5 video API to its knees on my system.
Idea: only actually render video elements for N segments with a very limited or choosable N. Render placeholders instead of actual video elements for videos that are not visible on screen, replace them with video elements when the user scrolls to there.
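That idea can be sketched as a small windowing helper (hypothetical names and logic, not the code that later shipped in v0.0.88):

```javascript
// Given the index of the first visible segment, the total number of
// segments, and a budget of N real <video> elements, return the segment
// indices that should get real video elements; everything else stays a
// cheap placeholder until the user scrolls it into view.
function videoWindow(firstVisible, total, budget) {
  const start = Math.max(0, Math.min(firstVisible, total - budget));
  const end = Math.min(total, start + budget);
  const indices = [];
  for (let i = start; i < end; i++) indices.push(i);
  return indices;
}

console.log(videoWindow(0, 100, 6));  // [ 0, 1, 2, 3, 4, 5 ]
console.log(videoWindow(98, 100, 6)); // [ 94, 95, 96, 97, 98, 99 ]
```

On scroll, placeholders inside the window would be swapped for real video elements and the ones that left the window reverted, keeping at most `budget` decoders alive at once.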
Impressive debugging! Want to fix & submit a pull request? (no worries if not; definitely needs to be fixed 👍🏻🤓).
Latest release 0.0.88 will lazy-load the video elements when scrolled. Let me know if this helps! :)
@olexs please re-open if still having a problem, otherwise I think this is resolved.
| gharchive/issue | 2019-05-04T21:38:30 | 2025-04-01T04:32:20.490442 | {
"authors": [
"BobStrogg",
"mitchcapper",
"olexs"
],
"repo": "BobStrogg/teslacam-browser",
"url": "https://github.com/BobStrogg/teslacam-browser/issues/7",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
186089023 | Add docker container with MongoDB
related to issue #26.
setup up Docker with MongoDB as database
update readme with basic instructions to run the container
Coverage remained the same at 87.5% when pulling 2398a559c3915a9d7bc8fdab0bd008168413c850 on nickgnd:datastore/mongodb-container into 7c5a044580af8bcecfc55fe4371be5680a518ab8 on BondAnthony:master.
@nickgnd thank you for the PR and the support!! Please check out our wiki for more issues that have yet to be opened. Let me know if you have other ideas to help enhance this project.
Thank you for the support @nickgnd please feel free to continue to contribute. Checkout our wiki page for a rough project plan. The Wiki
| gharchive/pull-request | 2016-10-29T17:01:26 | 2025-04-01T04:32:20.511853 | {
"authors": [
"BondAnthony",
"coveralls",
"nickgnd"
],
"repo": "BondAnthony/status-service",
"url": "https://github.com/BondAnthony/status-service/pull/35",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
419704050 | Could not create the vault client: error parsing supplied token: failed to lookup Vault periodic token: Get http://vault:8200/v1/auth/token/lookup-self: dial tcp 10.0.29.179:8200: connect: connection refused
Tried to run the quick start guide. Each time I deploy the sample app, kubernetes-vault reports this error and crashes.
time="2019-03-11T21:50:00Z" level=debug msg="Discovered 0 nodes: []"
time="2019-03-11T21:50:00Z" level=fatal msg="Could not create the vault client: error parsing supplied token: failed to lookup Vault periodic token: Get http://vault:8200/v1/auth/token/lookup-self: dial tcp 10.0.29.179:8200: connect: connection refused"
This is a dev environment and I did not make much change to the configure files. Any suggestions?
It seems that your Vault server is not running correctly. Can you post the logs from the vault server using kubectl logs?
One thing I should mention is that I didn't start vault in dev mode. Instead I started and initialized it with key and a token. Here is the log
==> Vault server started! Log data will stream in below:
2019-03-11T21:12:34.723Z [WARN] no api_addr value specified in config or in VAULT_API_ADDR; falling back to detection if possible, but this value should be manually set
2019-03-11T21:13:28.306Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:13:28.306Z [INFO] core: security barrier not initialized
2019-03-11T21:13:38.428Z [INFO] core: security barrier not initialized
2019-03-11T21:13:38.429Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:13:48.517Z [INFO] core: security barrier not initialized
2019-03-11T21:13:48.517Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:13:58.604Z [INFO] core: security barrier not initialized
2019-03-11T21:13:58.605Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:14:08.698Z [INFO] core: security barrier not initialized
2019-03-11T21:14:08.699Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:14:18.785Z [INFO] core: security barrier not initialized
2019-03-11T21:14:18.786Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:14:28.876Z [INFO] core: security barrier not initialized
2019-03-11T21:14:28.877Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:14:32.418Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:14:38.965Z [INFO] core: security barrier not initialized
2019-03-11T21:14:38.966Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:14:49.050Z [INFO] core: security barrier not initialized
2019-03-11T21:14:49.058Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:14:59.139Z [INFO] core: security barrier not initialized
2019-03-11T21:14:59.140Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:15:09.225Z [INFO] core: security barrier not initialized
2019-03-11T21:15:09.226Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:15:19.320Z [INFO] core: security barrier not initialized
2019-03-11T21:15:19.320Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:15:29.405Z [INFO] core: security barrier not initialized
2019-03-11T21:15:29.406Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:15:39.488Z [INFO] core: security barrier not initialized
2019-03-11T21:15:39.489Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:15:49.572Z [INFO] core: security barrier not initialized
2019-03-11T21:15:49.573Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:15:59.657Z [INFO] core: security barrier not initialized
2019-03-11T21:15:59.658Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:16:09.744Z [INFO] core: security barrier not initialized
2019-03-11T21:16:09.744Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:16:19.825Z [INFO] core: security barrier not initialized
2019-03-11T21:16:19.826Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:16:29.916Z [INFO] core: security barrier not initialized
2019-03-11T21:16:29.916Z [INFO] core: seal configuration missing, not initialized
2019-03-11T21:16:31.515Z [INFO] core: security barrier not initialized
2019-03-11T21:16:31.515Z [INFO] core: security barrier initialized: shares=1 threshold=1
2019-03-11T21:16:31.515Z [INFO] core: post-unseal setup starting
2019-03-11T21:16:31.528Z [INFO] core: loaded wrapping token key
2019-03-11T21:16:31.528Z [INFO] core: successfully setup plugin catalog: plugin-directory=
2019-03-11T21:16:31.528Z [INFO] core: no mounts; adding default mount table
2019-03-11T21:16:31.529Z [INFO] core: successfully mounted backend: type=kv path=secret/
2019-03-11T21:16:31.529Z [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2019-03-11T21:16:31.529Z [INFO] core: successfully mounted backend: type=system path=sys/
2019-03-11T21:16:31.529Z [INFO] core: successfully mounted backend: type=identity path=identity/
2019-03-11T21:16:31.531Z [INFO] core: successfully enabled credential backend: type=token path=token/
2019-03-11T21:16:31.531Z [INFO] core: restoring leases
2019-03-11T21:16:31.531Z [INFO] rollback: starting rollback manager
2019-03-11T21:16:31.532Z [INFO] expiration: lease restore complete
2019-03-11T21:16:31.532Z [INFO] identity: entities restored
2019-03-11T21:16:31.532Z [INFO] identity: groups restored
2019-03-11T21:16:31.532Z [INFO] core: post-unseal setup complete
2019-03-11T21:16:31.532Z [INFO] core: starting listener: listener_address=127.0.0.1:8201
2019-03-11T21:16:31.532Z [INFO] core: root token generated
2019-03-11T21:16:31.532Z [INFO] core: pre-seal teardown starting
2019-03-11T21:16:31.532Z [INFO] core: stopping cluster listeners
2019-03-11T21:16:31.532Z [INFO] core: shutting down forwarding rpc listeners
2019-03-11T21:16:31.532Z [INFO] core: forwarding rpc listeners stopped
2019-03-11T21:16:31.532Z [INFO] core: serving cluster requests: cluster_listen_address=127.0.0.1:8201
2019-03-11T21:16:31.532Z [INFO] core: rpc listeners successfully shut down
2019-03-11T21:16:31.532Z [INFO] core: cluster listeners successfully shut down
2019-03-11T21:16:31.532Z [INFO] rollback: stopping rollback manager
2019-03-11T21:16:31.532Z [INFO] core: pre-seal teardown complete
2019-03-11T21:18:38.712Z [INFO] core: vault is unsealed
2019-03-11T21:18:38.712Z [INFO] core: post-unseal setup starting
2019-03-11T21:18:38.712Z [INFO] core: loaded wrapping token key
2019-03-11T21:18:38.712Z [INFO] core: successfully setup plugin catalog: plugin-directory=
2019-03-11T21:18:38.712Z [INFO] core: successfully mounted backend: type=kv path=secret/
2019-03-11T21:18:38.713Z [INFO] core: successfully mounted backend: type=system path=sys/
2019-03-11T21:18:38.713Z [INFO] core: successfully mounted backend: type=identity path=identity/
2019-03-11T21:18:38.713Z [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2019-03-11T21:18:38.714Z [INFO] core: successfully enabled credential backend: type=token path=token/
2019-03-11T21:18:38.714Z [INFO] core: restoring leases
2019-03-11T21:18:38.714Z [INFO] rollback: starting rollback manager
2019-03-11T21:18:38.714Z [INFO] identity: entities restored
2019-03-11T21:18:38.714Z [INFO] identity: groups restored
2019-03-11T21:18:38.714Z [INFO] core: post-unseal setup complete
2019-03-11T21:18:38.714Z [INFO] core: starting listener: listener_address=127.0.0.1:8201
2019-03-11T21:18:38.714Z [INFO] core: serving cluster requests: cluster_listen_address=127.0.0.1:8201
2019-03-11T21:18:38.714Z [INFO] expiration: lease restore complete
2019-03-11T21:21:06.951Z [INFO] core: successful mount: namespace= path=root-ca/ type=pki
2019-03-11T21:21:56.141Z [INFO] core: successful mount: namespace= path=intermediate-ca/ type=pki
2019-03-11T21:29:24.729Z [INFO] core: enabled credential backend: path=approle/ type=approle
The problem is that Vault listener is started on 127.0.0.1:8201. Your Kubernetes-Vault config expects Vault to be at 10.0.29.179:8200, but since Vault is listening on its loopback address, it will not be reachable.
I tried not to start vault in dev mode so added a line in vault.yaml
command: ["vault", "server", "-config", "/vault/config/config.json"]
The config.json is
{
    "listener": {
        "tcp": {
            "address": "127.0.0.1:8200",
            "tls_disable": 1
        }
    },
    "storage": {
        "inmem": {}
    },
    "ui": true
}
If I didn't set the listener, it didn't work either. Any suggestions on how to set the listener address? Much appreciated.
Try setting the address to 0.0.0.0:8200. This will tell it to listen to all interfaces.
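For reference, a sketch of the adjusted config.json under that suggestion (same in-memory storage as before; note 0.0.0.0 binds all interfaces, so only use it where the pod network is trusted):

```json
{
    "listener": {
        "tcp": {
            "address": "0.0.0.0:8200",
            "tls_disable": 1
        }
    },
    "storage": { "inmem": {} },
    "ui": true
}
```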
Tried that and it seems to have worked. However, the sample app pods all have status Init, even though the pods did receive tokens; logs are:
sample-app-57bb645f76-5q2lj 0/1 Init:0/1 0 3s
sample-app-57bb645f76-jbszd 0/1 Init:0/1 0 3s
sample-app-57bb645f76-lrx5b 0/1 Init:0/1 0 3s
sample-app-57bb645f76-r58s5 0/1 Init:0/1 0 3s
sample-app-57bb645f76-x4v2c 0/1 Init:0/1 0 3s
Found Vault token...
Token: s.Ah6aRMoI5a4dbDsJTTSUOGGm
Accessor: WrkDRhg0hjOIPwj9zX5ZG4ke
Lease Duration: 21600
Renewable: true
Vault Address: http://vault:8200
Sample App v0.6.1 (503d2759d5e925c79891b9601427e0329e1a8369) built on Mon Nov 26 22:44:38 UTC 2018
CA Bundle Exists: false
Then in production, I probably don't want it listening on all interfaces. Should I specify a cluster_addr and have my app listen on that address, i.e. use the cluster address instead of http://vault:8200? Thanks
I am not sure why the pods are showing as status init, perhaps this is because the status is lagging? If the app pods received the token, then everything should work together.
If you won't want to listen on 0.0.0.0, you can use Kubernetes' downward API. The status.podIP might allow you to bind to the pod's IP address.
If you are using a service mesh such as istio, I think all communication is on 127.0.0.1, so that might work. It's been a while since I looked at istio, so the details are hazy.
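A hedged sketch of the downward-API approach (the fieldRef path is part of the Kubernetes API; how you substitute the value into Vault's listener address — for example via an entrypoint script — is up to your deployment):

```yaml
# Pod spec fragment: expose the pod's IP to the container,
# so the entrypoint can render it into Vault's listener address.
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```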
I have an unrelated question. I tried to build the sample-app image without pushing, as I wanted to play around with the code. It was on a Mac and the images always crashed with this error:
standard_init_linux.go:190: exec user process caused "exec format error"
This is before I made any code changes
Can you post the commands you used and the logs/output of the build?
Here is the build-dev-image.sh. I created a repository, so I changed the tag and only build the sample-app container. As for the log file, I did not find any log/output under the kubernetes-vault directory. Thanks
#!/bin/sh

VERSION=dev

# Build the binaries
docker run --rm -v "$PWD":/source/kubernetes-vault -w /source/kubernetes-vault golang:1.11-alpine ./build.sh

# Build the images
#docker build -t daywalker128/mydepot/kubernetes-vault:"$VERSION" -f cmd/controller/Dockerfile.dev cmd/controller/
#docker build -t daywalker128/mydepot/kubernetes-vault-init:"$VERSION" -f cmd/init/Dockerfile.dev cmd/init/
docker build -t daywalker128/mydepot/kubernetes-vault-sample-app:"$VERSION" -f cmd/sample-app/Dockerfile.dev cmd/sample-app/

# Push images
#docker push daywalker128/mydepot/kubernetes-vault:"$VERSION"
#docker push daywalker128/mydepot/kubernetes-vault-init:"$VERSION"
#docker push daywalker128/mydepot/kubernetes-vault-sample-app:"$VERSION"
Hmm. It's hard to tell what the problem is. According to this thread, it's because you're trying to build on an unsupported architecture: https://forums.docker.com/t/standard-init-linux-go-190-exec-user-process-caused-exec-format-error/49368
If your docker is really old, I'd suggest upgrading it to the latest version.
Purging the repository did the trick. Really appreciate it. Thanks
Glad you got it working. I am going to go ahead and close this issue.
I have the same problem as the title, but not the same log output. I ran etcd and Vault on Kubernetes and managed to get them working. But when I deploy kubernetes-vault I get connection refused.
| gharchive/issue | 2019-03-11T21:59:42 | 2025-04-01T04:32:20.563341 | {
"authors": [
"DrissiReda",
"F21",
"ju187"
],
"repo": "Boostport/kubernetes-vault",
"url": "https://github.com/Boostport/kubernetes-vault/issues/147",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2445526552 | Fix/malla inf
I fixed a few things the CEEINF asked me for; basically, some courses are offered in both semesters, and IO had 6 credits instead of 5.
Great, let's do a complete update. I understand that a while back one or two courses changed prerequisites, and every time I remember to change it I forget which ones they are. I quickly checked SIGA, and Opti now apparently requires Progra, but I don't know if there is another course as well.
I tried to push the change to the PR but for some reason it wouldn't let me.
Hi!! When I have some time I'll review the curriculum properly 😄
Strange that the PR isn't going through; something of mine may have gotten corrupted.
Regards!
Hugo Campos Castro
Computer Engineering student
Universidad Técnica Federico Santa María - Campus Casa Central Valparaíso
I reviewed the curriculum against SIGA and updated the prerequisites for the Opti and IA courses; I didn't find any other issues, so I'm going to merge. If anything else is missing, I'll keep an eye out.
Okay, thanks!! I hadn't had time to look at it.
| gharchive/pull-request | 2024-08-02T18:18:43 | 2025-04-01T04:32:20.573235 | {
"authors": [
"BooterMan98",
"uwo-o"
],
"repo": "BooterMan98/malla-interactiva",
"url": "https://github.com/BooterMan98/malla-interactiva/pull/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2484641327 | No saving consitency in orderservice.
Orders can be changed outside of the service; a rule needs to be clearly stated on the desired behaviour of the object.
Intended behavior:
NewOrder -> AddItemThroughService -> ItemSavedInDb
NewOrder -> AddNewItemToArray -> NoChangesInDb
| gharchive/issue | 2024-08-24T14:38:11 | 2025-04-01T04:32:20.574547 | {
"authors": [
"Borkanie"
],
"repo": "Borkanie/FoodStand",
"url": "https://github.com/Borkanie/FoodStand/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
202382812 | For access via NPM
It fixes #7
Now it's possible to use as NPM package!
What a Great Work @UsulPro ! Thanks :smile: :+1:
| gharchive/pull-request | 2017-01-22T13:27:20 | 2025-04-01T04:32:20.582520 | {
"authors": [
"BosNaufal",
"UsulPro"
],
"repo": "BosNaufal/react-scrollbar",
"url": "https://github.com/BosNaufal/react-scrollbar/pull/10",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
183707501 | Fix Our Examples page
It seems to have gone down https://bottr.co/examples
We should update it anyhow to fit in the new website as well as add some better examples such as a weather bot, todo bot and reminder bot.
Hey!
Do you have an identity guideline or something?
@Fineas Are you talking in terms of branding. I have all the logos here https://github.com/Bottr-js/Bottr-Brand but I don't really have a guideline yet as it's something I need to develop
@jcampbell05 It looks like the /examples page was removed in the migration to the new site, see here.
Do you have any working examples to link to?
@geoffdavis92 The links on that original page should still work :) the examples themselves are hosted at https://github.com/Bottr-js/Bottr-Examples
Eventually I want to make it easier for people to update the hosted examples via PR (I have to upload any PR).
If you could create a new page for examples, initally it can host the original examples but I want to replace them in the long term with better examples.
Should be Up now. But needs to be tweaked to match our design :) and to have some good community examples :)
@jcampbell05 I can take a look at this sometime this week.
Currently, all the examples are located here. At some point, we need to migrate it into this repository in gh-pages branch.
| gharchive/issue | 2016-10-18T14:44:27 | 2025-04-01T04:32:20.587233 | {
"authors": [
"Fineas",
"geoffdavis92",
"jcampbell05",
"ummahusla"
],
"repo": "Bottr-js/Bottr",
"url": "https://github.com/Bottr-js/Bottr/issues/44",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1283262846 | added exporting LCSC property to KiCad v6 Schematic Files
This is very experimental, I wanted to get feedback if it is a maintainable feature. I tested it on a couple of my designs so far.
Wow, that looks very good 😍
Give me a bit of time to test this before I merge it!
@computergeek1507 I did a quick test and it seems to work as expected!
But it fooled me on te very first try 🙈 I clicked the export button selected the .kicad_sch file and clicked ok.
So far so good I thought and wanted to see in the schematic if the LCSC property is set, but it wasn't.
Then I realized that the FileDialog gave me another project as the default directory, and I wasn't reading properly when I selected that .kicad_sch file. That way the function wrote data from one project's PCB into the schematic of another one.
Sure, there's the _old file as a backup, but still I think this should be tuned a bit.
In my opinion at least the defaults of the FileDialog should be set to reasonable values:
with wx.FileDialog(
self,
"Select Schematics",
"", # DefaultDir
"", # DefaultFile
"KiCad V6 Schematics (*.kicad_sch)|*.kicad_sch",
wx.FD_OPEN | wx.FD_FILE_MUST_EXIST | wx.FD_MULTIPLE,
) as openFileDialog:
DefaultDir should be the project dir
DefaultFile could be the same name as the .kicad_pcb but with kicad_sch
Let me know what you think about this.
@computergeek1507 Holy 💩 I tried to add my suggestions but messed it up 🙈 I think I'll simply push a new branch with your changes and add mine, otherwise I'm going to mess up the main branch.
| gharchive/pull-request | 2022-06-24T04:58:47 | 2025-04-01T04:32:20.601002 | {
"authors": [
"Bouni",
"computergeek1507"
],
"repo": "Bouni/kicad-jlcpcb-tools",
"url": "https://github.com/Bouni/kicad-jlcpcb-tools/pull/183",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
437769826 | Matches multiple schemas when only one must validate
I've setup an HTML project and when I created a launch debugger I'm getting the following message:
Matches multiple schemas when only one must validate.
Here's the launch.json:
{
"version": "0.2.0",
"configurations": [
{
"name": "Launch Firefox (localhost)",
"type": "firefox",
"request": "launch",
"url": "http://localhost:8080/index.html",
"webRoot": "${workspaceRoot}"
}
]
}
The warning is on the first bracket in the configurations array (line 4).
Here is the asconfig.json:
{
"config": "js",
"compilerOptions": {
"source-map": true,
"html-template": "template.html"
},
"files":
[
"src/Main.as"
]
}
I guess that the creators of the Firefox extension for VSCode must have changed something in their launch configuration format. I'll be happy to update the documentation with a better launch configuration, if you can figure out what's wrong and share the details here!
Hmmm... I just tried copying your launch.json into one of my workspaces, and I'm not seeing the same error when I have the file open. It also launches correctly when I start the debugger.
I can confirm it's launching OK. It's not hitting the breakpoints I've set in Main.as. I've restarted VS Code and now the original message is gone:
Matches multiple schemas when only one must validate.
FWIW, I tried to use the Add Configuration button in VS Code, and that might have had something to do with the message, but it's gone after a restart. I'll keep working on the breakpoints issue in case it's somehow related. FYI, the page is working.
There is a known issue where breakpoints may not work when a Royale app is first starting up. It's like the Chrome/Firefox debugger in VSCode doesn't connect fast enough. However, breakpoints in event listeners that are triggered later (like a "click" listener) work fine. You may be running into that.
Details: https://github.com/BowlerHatLLC/vscode-as3mxml/wiki/Common-issues#debugger-will-not-stop-at-breakpoints-on-startup-with-apache-royale
OK It's hitting breakpoints later on. I found a note about it in the Firefox debugger on it:
Breakpoints that should get hit immediately after the javascript file is loaded may not work the first time: You will have to click "Reload" in Firefox for the debugger to stop at such a breakpoint. This is a weakness of the Firefox debug protocol: VS Code can't tell Firefox about breakpoints in a file before the execution of that file starts.
That's good info. I'll try to link to it as an official explanation.
I've restarted VSC and now the original message is gone:
Matches multiple schemas when only one must validate.
I'm just here to report that I encountered the same message while editing my settings.json file, and I couldn't figure out what the heck could be causing it. It was saying the problem was in the "launch" section of my settings, but I wasn't even editing that part of my settings. I thought I was going insane until I saw this comment from @velara3 and tried restarting VS Code. The message went away.
Moral of the story is: When you see weird warnings, restarting the editor is always worth a try.
Moral of the story is: When you see weird warnings, restarting the editor is always worth a try.
I continually restart VS Code; the warning persists. Oh well, it's a false positive anyway, for the most part.
Still, it really does kick me in the stomach when I'm making fairly common edits to a .json file; however, it can suck my toe.
There was a recent comment but it seems to have been removed. Since this post I've made a repository of working AS3 projects:
https://github.com/velara3/as3
| gharchive/issue | 2019-04-26T17:22:10 | 2025-04-01T04:32:20.615564 | {
"authors": [
"Alzarax",
"jkyeung",
"joshtynjala",
"velara3"
],
"repo": "BowlerHatLLC/vscode-as3mxml",
"url": "https://github.com/BowlerHatLLC/vscode-as3mxml/issues/358",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
971607091 | EPMLABSBRN-900 - BE: fix bug post headphones for current user: 403
EPMLABSBRN-900
fix for the current api
added new constraint for headphones table
may be it is reasonable to add integration test on this bug?
@goopnigoop , may be it is reasonable to add integration test on this bug?
ok, I'll add
| gharchive/pull-request | 2021-08-16T10:48:22 | 2025-04-01T04:32:20.626084 | {
"authors": [
"ElenaSpb",
"goopnigoop"
],
"repo": "Brain-up/brn",
"url": "https://github.com/Brain-up/brn/pull/2079",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2000739914 | 🛑 Plex is down
In 3c2a249, Plex (https://plex.bramboeckx.be) was down:
HTTP code: 530
Response time: 3133 ms
Resolved: Plex is back up in 98fc788 after 7 minutes.
| gharchive/issue | 2023-11-19T07:49:14 | 2025-04-01T04:32:20.630498 | {
"authors": [
"BramB-1952444"
],
"repo": "BramB-1952444/uptime",
"url": "https://github.com/BramB-1952444/uptime/issues/722",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2082799299 | 🛑 Drank jowile is down
In f4fa688, Drank jowile (https://drank.jowile.be) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Drank jowile is back up in 73888a0 after 21 minutes.
| gharchive/issue | 2024-01-15T23:29:19 | 2025-04-01T04:32:20.632819 | {
"authors": [
"BramB-1952444"
],
"repo": "BramB-1952444/uptime",
"url": "https://github.com/BramB-1952444/uptime/issues/799",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
131140444 | 1.10.5 breaking builds
I don't see a tag for 1.10.5, but it's on jcenter, being vended out to gradle configs using compile 'io.branch.sdk.android:library:1.+', and is also up on your AWS direct download.
Build-break being caused by this PR #204
Hey @elincoln - thanks for opening this. Can you specify what's breaking? Do you have any code relying on these private methods? Let us know
It was all related to the debug mode; I couldn't get it reliably working using your touch hijacking code, so I disabled it and wrote a toggle for my SettingsActivity. It looks like that live-debug-mode thing is completely removed now from both web and SDK?
If that's the case I can just delete my code. It's not production related in any way, I just wasn't expecting a build break out of a patch release. PrefHelper is meant to be private?
@elincoln - nice - this was the intended reason for removing the touch-hijacker; it ended up causing more pain than solving. You are correct, we have removed this from our SDK.
we plan on making PrefHelper package-private now (hopefully as soon as the next release), as removing this allows us to do that. But yes, it's intended to be private. Sorry about this. I'll leave this open for a little while until we move on to the next release.
@Sarkar Sounds good to me; thanks for the quick responses! :smile:
| gharchive/issue | 2016-02-03T20:19:16 | 2025-04-01T04:32:20.635933 | {
"authors": [
"Sarkar",
"elincoln"
],
"repo": "BranchMetrics/Android-Deferred-Deep-Linking-SDK",
"url": "https://github.com/BranchMetrics/Android-Deferred-Deep-Linking-SDK/issues/207",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
139182663 | allow sending option append_deeplink_path
@jsaleigh @jeanmw per a partner's request. This is a very advanced option so let's leave undocumented for now.
:+1:
Should probably get a thumb from @jeanmw too since i touched the code
:+1: your test fix makes sense. @jeanmw please thumb
@derrickstaten LGTM :+1:
Hmm...nope, my tests didn't work either, they broke mobile tests this time :/
| gharchive/pull-request | 2016-03-08T05:03:17 | 2025-04-01T04:32:20.638030 | {
"authors": [
"derrickstaten",
"jeanmw",
"jsaleigh"
],
"repo": "BranchMetrics/Smart-App-Banner-Deep-Linking-Web-SDK",
"url": "https://github.com/BranchMetrics/Smart-App-Banner-Deep-Linking-Web-SDK/pull/295",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
167004095 | BranchUniversalObject.getShortUrl() return long url when called inside AsyncTask
I get a long url when I try to create a short url from an AsyncTask.doInBackground() by calling the method BranchUniversalObject.getShortUrl()
@bubinimara BranchUniversalObject.getShortUrl() is internally using an AsyncTask for getting the URL, since URL creation involves a network operation. Async tasks cannot be executed from a background thread, and that is the reason for getting a long url.
Could you please try to call BranchUniversalObject.getShortUrl() from the onPreExecute of your AsyncTask and then use it in your doInBackground method?
Updated a fix with PR https://github.com/BranchMetrics/android-branch-deep-linking/pull/273
| gharchive/issue | 2016-07-22T09:18:17 | 2025-04-01T04:32:20.640424 | {
"authors": [
"bubinimara",
"sojanpr"
],
"repo": "BranchMetrics/android-branch-deep-linking",
"url": "https://github.com/BranchMetrics/android-branch-deep-linking/issues/272",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
353691665 | Shared links are not shown as hyperlinks in Samsung native email client.
I used this sample app. I have shared the link from Samsung native email client. After that I opened the email in the same email client in a different device. The branch link is not highlighted and not tappable. It works properly with Gmail.
PFA the screenshot for reference.
Hi @bhagyae5308 - Are you still experiencing this issue? If yes, can you provide more details about your integration?
Thanks!
| gharchive/issue | 2018-08-24T08:28:00 | 2025-04-01T04:32:20.642531 | {
"authors": [
"Psrinath-branch",
"bhagyae5308"
],
"repo": "BranchMetrics/android-branch-deep-linking",
"url": "https://github.com/BranchMetrics/android-branch-deep-linking/issues/606",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
303698980 | Tracking Control
Adding SDK compliance with upcoming policy changes outlined by the GDPR.
Adds a way to enable/disable tracking of user data or actions.
Key features are:
When tracking is disabled:
- Branch SDK will not cache or save any user tracking data
- Branch SDK will not send any network request
- Branch SDK will clear all previously collected user data
- URL creation APIs would still work, but send a long url in place of a short url
Support hot app init with Branch as soon as tracking is enabled.
A few other fixes and changes to support the new feature.
@E-B-Smith @aaustin @Sarkar
LGTM 👍
I think this PR needs extensive load testing to attempt to cause some concurrent exception modifications. Like, can you write a test case that inits / fires off 100 events in a for loop and triggers the disable flow halfway through? Similar for re-init. I'm mostly worried about the queue/network request manipulation while it's in use.
you're planning to do a separate PR to handle the case of direct deep linking?
This seems like it should work. I'd definitely want that thread testing per my comment above on both iOS / Android.
Added an automated test to exercise tracking mode switching and test link creation on both tracking-enabled and disabled cases. Passes all the tests.
@aaustin
Nice 👍
LGTM 👍2
| gharchive/pull-request | 2018-03-09T02:07:52 | 2025-04-01T04:32:20.648875 | {
"authors": [
"E-B-Smith",
"aaustin",
"sojanpr"
],
"repo": "BranchMetrics/android-branch-deep-linking",
"url": "https://github.com/BranchMetrics/android-branch-deep-linking/pull/557",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
160575652 | expose branch view header files in framework
make Branch view header files public in the framework since our umbrella header for building modules imports it
@aaustin @derrickstaten
👍
| gharchive/pull-request | 2016-06-16T04:32:53 | 2025-04-01T04:32:20.650069 | {
"authors": [
"aaustin",
"ahmednawar"
],
"repo": "BranchMetrics/ios-branch-deep-linking",
"url": "https://github.com/BranchMetrics/ios-branch-deep-linking/pull/373",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
385518546 | loadRewards for a unique user
Hi guys. Tell me what I'm doing wrong. I call branch.setIdentity(visitorId), then make createBranchUniversalObject with the transition. But when I change the visitorId, it does not affect the number of points that the loadRewards method returns. How do I get points for a specific user ID? I'm checking everything on the iOS 11.4 emulator, react-native-branch 2.3.0.
@abaddonGIT - Can you please see if you can call branch.setIdentity(visitorId) before the branch.subscribe method and see if you are still having the issue
Yes, it fixed the problem. Thanks
| gharchive/issue | 2018-11-29T00:43:28 | 2025-04-01T04:32:20.654071 | {
"authors": [
"abaddonGIT",
"sequoiaat"
],
"repo": "BranchMetrics/react-native-branch-deep-linking",
"url": "https://github.com/BranchMetrics/react-native-branch-deep-linking/issues/396",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
843703958 | i index variable used out of its loop
in the file
https://github.com/Breakend/experiment-impact-tracker/blob/master/experiment_impact_tracker/cpu/intel.py
the function :
def get_rapl_power(pid_list, logger=None, **kwargs):
at line 370
shouldn't it be:
for i, p in enumerate(process_list):
instead of
for p in process_list:
The way it is currently, it seems the i variable will get set to the last value from the previous loop and as a consequence, in the except block, the process added to the zombies list will also be the last process in the list process_list instead of being the process which generated the exception.
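The stale-index behavior described above is easy to reproduce in plain Python (a minimal, self-contained sketch; the exception and list contents here are hypothetical stand-ins for the psutil calls in get_rapl_power). A for-loop variable stays bound after its loop ends, so an except block in a later loop that reuses i will blame the wrong process:

```python
process_list = ["proc_a", "proc_b", "proc_c"]

# An earlier loop leaves its loop variable bound after it finishes,
# so `i` is already 2 before the buggy loop below even starts.
for i, _ in enumerate(["x", "y", "z"]):
    pass

zombies_buggy = []
for p in process_list:  # buggy: `i` is never rebound in this loop
    try:
        if p == "proc_a":  # pretend this process has gone away
            raise ProcessLookupError(p)
    except ProcessLookupError:
        # Stale `i` (== 2) points at the last element, not the one
        # that actually failed.
        zombies_buggy.append(process_list[i])

zombies_fixed = []
for i, p in enumerate(process_list):  # fixed: `i` tracks `p`
    try:
        if p == "proc_a":
            raise ProcessLookupError(p)
    except ProcessLookupError:
        zombies_fixed.append(process_list[i])

print(zombies_buggy)  # ['proc_c'] -- wrong process blamed
print(zombies_fixed)  # ['proc_a'] -- the process that raised
```

With enumerate, the index appended in the except block always refers to the process that raised, which is exactly the fix proposed in the issue.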
Thanks for catching this! Should be fixed in #59 . please re-open if it's still an issue
| gharchive/issue | 2021-03-29T19:07:02 | 2025-04-01T04:32:20.676845 | {
"authors": [
"Breakend",
"paulgay"
],
"repo": "Breakend/experiment-impact-tracker",
"url": "https://github.com/Breakend/experiment-impact-tracker/issues/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1301428212 | M1 Mac install errors
Hello, just got an M1 mac and am facing an error setting up some projects that use prisma-client-rust that worked just find on my intel mac:
❯ cargo prisma generate
Finished dev [unoptimized + debuginfo] target(s) in 0.17s
Running `/Users/aaronleopold/Projects/stump/target/debug/prisma-cli generate`
Downloading https://prisma-photongo.s3-eu-west-1.amazonaws.com/prisma-cli-3.13.0-darwin-arm64.gz to /Users/aaronleopold/Library/Caches/prisma/binaries/cli/3.13.0/prisma-cli-darwin-arm64
Downloading https://binaries.prisma.sh/all_commits/efdf9b1183dddfd4258cd181a72125755215ab7b/darwin/query-engine.gz to /Users/aaronleopold/Library/Caches/prisma/binaries/cli/3.13.0/efdf9b1183dddfd4258cd181a72125755215ab7b/prisma-query-engine-darwin
Downloading https://binaries.prisma.sh/all_commits/efdf9b1183dddfd4258cd181a72125755215ab7b/darwin/migration-engine.gz to /Users/aaronleopold/Library/Caches/prisma/binaries/cli/3.13.0/efdf9b1183dddfd4258cd181a72125755215ab7b/prisma-migration-engine-darwin
Downloading https://binaries.prisma.sh/all_commits/efdf9b1183dddfd4258cd181a72125755215ab7b/darwin/introspection-engine.gz to /Users/aaronleopold/Library/Caches/prisma/binaries/cli/3.13.0/efdf9b1183dddfd4258cd181a72125755215ab7b/prisma-introspection-engine-darwin
Downloading https://binaries.prisma.sh/all_commits/efdf9b1183dddfd4258cd181a72125755215ab7b/darwin/prisma-fmt.gz to /Users/aaronleopold/Library/Caches/prisma/binaries/cli/3.13.0/efdf9b1183dddfd4258cd181a72125755215ab7b/prisma-prisma-fmt-darwin
Prisma schema loaded from prisma/schema.prisma
Error: Get config: Error: Command failed with Unknown system error -86: /Users/aaronleopold/Library/Caches/prisma/binaries/cli/3.13.0/efdf9b1183dddfd4258cd181a72125755215ab7b/prisma-query-engine-darwin cli get-config --ignoreEnvVarErrors
spawn Unknown system error -86
❯ cargo prisma -v
Finished dev [unoptimized + debuginfo] target(s) in 0.18s
Running `/Users/aaronleopold/Projects/stump/target/debug/prisma-cli -v`
Error: Command failed with Unknown system error -86: /Users/aaronleopold/Library/Caches/prisma/binaries/cli/3.13.0/efdf9b1183dddfd4258cd181a72125755215ab7b/prisma-introspection-engine-darwin --version
spawn Unknown system error -86
The error looks like it just can't recognize my system, and it looks like prisma-cli got the arm download, but none of the others did? Not sure if that matters for those at all though. I've deleted the downloads from my library cache a few times to see if maybe it was a fluke but unfortunately this happens each time.
🤦 I needed to install Rosetta, I'll go ahead and close this issue! Sorry about that, I'll link to where I found my solution in case anyone else finds it useful: https://github.com/prisma/prisma/issues/6397
| gharchive/issue | 2022-07-12T01:46:42 | 2025-04-01T04:32:20.679883 | {
"authors": [
"aaronleopold"
],
"repo": "Brendonovich/prisma-client-rust",
"url": "https://github.com/Brendonovich/prisma-client-rust/issues/103",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
58810490 | Parameterize CDN
Added parameterization to the CDN functionality. If no bucket name is provided in the creds object, it will default to the production CDN, so existing functionality will be preserved. If a bucket name is provided, however, it will function as if it is publishing to some other S3 bucket.
Reasoning: This was done in order to allow testing free-range apps without having to upload the apps to the production CDN. Buckets can be created strictly for testing purposes so that the production CDN can stay production-only.
Coverage remained the same at 100.0% when pulling 1ab1f39b6d65d7ad8b853d699b5d663be75a2796 on mthjones:parameterize_cdn into 12977761f66f753c249d6cfee2c29a43e2b82a01 on Brightspace:master.
Coverage remained the same at 100.0% when pulling d50158cd076597651201daa8bd2fe4015b9248e8 on mthjones:parameterize_cdn into 12977761f66f753c249d6cfee2c29a43e2b82a01 on Brightspace:master.
Coverage remained the same at 100.0% when pulling 7797ad299f3f9759069c335a5e5af7d07a5ebb45 on mthjones:parameterize_cdn into 12977761f66f753c249d6cfee2c29a43e2b82a01 on Brightspace:master.
Coverage remained the same at 100.0% when pulling f4fdd1540e942a77924ba0a0346e46137b7a7690 on mthjones:parameterize_cdn into 12977761f66f753c249d6cfee2c29a43e2b82a01 on Brightspace:master.
| gharchive/pull-request | 2015-02-24T21:44:16 | 2025-04-01T04:32:20.702410 | {
"authors": [
"coveralls",
"mthjones"
],
"repo": "Brightspace/gulp-frau-publisher",
"url": "https://github.com/Brightspace/gulp-frau-publisher/pull/38",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
926457834 | [ignore for now] Issue 1398: fix dropdown auto-close not working when wrapped in another component
The problem seemed to be with the focus event on document.body not being triggered for this scenario. If I instead listen for the blur event on the component then that does seem to work as expected, but I'm not sure if there is a reason that it wasn't done that way initially that I'm missing.
Yeah, unfortunately I don't recall if there was a specific reason for focus over blur. I think there were a lot of scenarios, and I don't necessarily think they are well captured in tests unfortunately. Things like clicking out of the browser, and then moving focus back into the browser. I vaguely recall some problems with the browser's Find/Search as well. I would just suggest doing lots of manual unit testing.
Thanks for that additional context. I've been playing around with it this morning and the change looks fairly safe (with the addition of the change mentioned below in "Dropdown in card").
I tested the following in addition to the issue case on Chrome, Safari, Firefox, and Edge (non-legacy):
Clicking outside of the browser window then returning focus to the window (dropdown does not close)
Find/search then exiting (behaves as before - with Safari and Edge pressing escape to close the search also closes the dropdown)
Menu (e.g., variety of keyboard and click interactions)
Dropdown in card (I did find a bit of a problem with this where if I clicked on the dropdown to open it, then tabbed the dropdown wouldn't close until focus left the card. If I keep document.body.addEventListener('focus', this.__onAutoCloseFocus, true); in addition to the new this.addEventListener('blur', this.__onAutoCloseFocus, true); then that problem goes away. Is there any problem with having both?)
Something buggy that I noticed, which also existed before this change: in Firefox, I can't tab within the dropdown; focus just leaves it (e.g., there are two links in the dropdown; opening the dropdown with Enter causes focus to go to the first link, but tabbing then causes focus to leave the dropdown. With any other browser, focus will go to the second link).
That does sound a bit peculiar, but I think it should be ok to have both. If it were the other way around I can imagine scenarios where a focus or blur event would not be observed from the perspective of the body, but I don't think that's the case here.
Card does have some special logic for tracking the dropdown open/closed state but I think it's just to control z-index so that dropdowns stack correctly when the user hovers over adjacent cards. That should be unrelated to this as well.
This seems reasonable to me. I actually don't understand how this was working previously, since the focus event isn't supposed to bubble at all... which means it should never be reaching the <body> element. Hmm.
It was using capture: true.
| gharchive/pull-request | 2021-06-21T17:53:51 | 2025-04-01T04:32:20.709446 | {
"authors": [
"dbatiste",
"margaree"
],
"repo": "BrightspaceUI/core",
"url": "https://github.com/BrightspaceUI/core/pull/1423",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
144258887 | Full control over required JS library locations
BuilderOptions currently provides a setting for the location of JS libraries needed by Brunel. Improvements needed are:
Brunel in Toree notebooks need to use BuilderOptions settings instead of hard-coding locations
Allow the user control over where D3, JQuery or any other required JS libraries are located
For now, we will add a single environment variable: BRUNEL_CONFIG. Eventually this can move to a file.
Within this variable, we can set:
jvm: The path to the JVM used by Jpype (Python notebooks only)
locd3: The URL to find D3 (Python/Toree notebooks)
locJavaScript: The location to the Brunel JS files (Python/Toree notebooks)
Added on 1.1. Only added support for D3, brunel JS/CSS & brunel geo data loading since control over JQuery is not needed for notebook deployments which was the main issue this was intended to address.
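The semicolon-delimited format above can be parsed with a few lines of Python (an illustrative sketch only; parse_brunel_config is a hypothetical helper name, not the actual code in brunel_util.py):

```python
import os


def parse_brunel_config(raw):
    """Parse 'key1=value1;key2=value2' into a dict, ignoring empty entries."""
    settings = {}
    for entry in raw.split(";"):
        entry = entry.strip()
        if not entry:
            continue
        # Split on the FIRST '=' only, so URLs containing '=' survive intact.
        key, _, value = entry.partition("=")
        settings[key.strip()] = value.strip()
    return settings


# Example default shown here is illustrative, not Brunel's real default.
raw = os.environ.get(
    "BRUNEL_CONFIG",
    "locd3=http://localhost/d3.js;locJavaScript=http://localhost/brunel",
)
config = parse_brunel_config(raw)
print(config.get("locd3"))
```

Note that because the Java side also reads this variable, it must be set in the OS environment before the notebook server starts, as discussed below.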
If using a Jupyter notebook via python, how can I set locMaps environment variable to specify a different source for the geo json files given that this variable does not appear in brunel_util.py (Brunel/python/brunel/brunel_util.py)?
I've can successfully set the the d3 to point to a different location, however I can't get locMaps to be updated via the same environment variable. How can this be achieved?
How are you setting the env variable?
Indeed, some settings in the environment variable are read and used during Java execution. So, it is likely this environment variable must be set directly in the OS prior to starting Jupyter.
I'm setting the environment variable via export, i.e.:
~ $ export BRUNEL_CONFIG="locMaps=http://localhost/brunel_geo;locd3=http://localhost2/d3.js"
and then
~ $ jupyter notebook
When I create a new notebook, the locd3 is pointing to the correct location (localhost). However, the locMaps location is still pointing to brunelvis.org.
Could this be the scope of the environment variable, where Java can't see this variable but Python can? (Are they run in the same context?)
(Screenshots attached)
OK. This is indeed a defect. Re-opening..
Fixed on develop. Confirmed loaded file locations in python & R notebooks from the following env variable. Note that requirejs is used to load D3, so the trailing ".js" is omitted. Also note that the actual path to the map content is /notebooks/data/geo/2.0/high/.. (high, low, or med). Brunel will pre-pend the version number to the supplied path in the env. var.
BRUNEL_CONFIG=locMaps=http://localhost:8888/tree/notebooks/data/geo;locd3=http://localhost:8888/files/notebooks/data/d3/d3
Thanks :)
| gharchive/issue | 2016-03-29T13:07:15 | 2025-04-01T04:32:20.746515 | {
"authors": [
"danrope",
"rbrasier"
],
"repo": "Brunel-Visualization/Brunel",
"url": "https://github.com/Brunel-Visualization/Brunel/issues/77",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
134991075 | Problem in faculty member registration
In the faculty member registration validation, it is only possible to register a faculty member whose date of admission to the university is earlier than their date of birth (it should be the other way around).
Fixed
| gharchive/issue | 2016-02-19T22:27:09 | 2025-04-01T04:32:20.757667 | {
"authors": [
"Tomahaawk",
"leticianb1"
],
"repo": "BrunoSoares-LABORA/sdd-ufg",
"url": "https://github.com/BrunoSoares-LABORA/sdd-ufg/issues/113",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1349918171 | Is this React
Is this reactjs or nodejs?
Yes, both are used.
| gharchive/issue | 2022-08-24T19:23:18 | 2025-04-01T04:32:20.763949 | {
"authors": [
"Buddhi-Chathuranga",
"SeliyaMindula"
],
"repo": "Buddhi-Chathuranga/Portfolio",
"url": "https://github.com/Buddhi-Chathuranga/Portfolio/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1805704193 | Talk wallet questions
Questions at questions.json file
An admin can see all the questions at /admin/questions
An admin can open a question to be answered
The users can go to the answer page by scanning a QR code
The users can select your answer
The admin can see all the addresses that answered the question
The admin can set the question to the revealed state
The users can see the right answer
The admin can see how many users have selected the right answer
Each question has a value
A leaderboard at /questions/leaderboard is updated with the user scores after each question is revealed
All the data is saved at the Vercel KV Redis store.
Auth for the admin is not done yet; this is left for another PR after merging this one.
closes #47
This is great @damianmarti
I tested it and everything works as expected! I added some basic instructions to the README and some other small tweaks.
| gharchive/pull-request | 2023-07-14T23:11:50 | 2025-04-01T04:32:20.786943 | {
"authors": [
"carletex",
"damianmarti"
],
"repo": "BuidlGuidl/event-wallet",
"url": "https://github.com/BuidlGuidl/event-wallet/pull/50",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2472231682 | Set project video mandatory?
I think it's nice to make them upload a 2 minute video explaining their project (Extension).
Maybe we should make it mandatory in the form+schema. If not we'd need to tweak the Q/A proposal of #45
No hard opinion on this...I see PROS / CONS. I think it depends on where we want to set the bar for a submission.
I think by making the video mandatory we set the bar a little higher, and maybe in this way we avoid receiving bad or incomplete extensions.
Ok, let's do it! Assigned it to you @damianmarti !
| gharchive/issue | 2024-08-19T00:39:44 | 2025-04-01T04:32:20.788736 | {
"authors": [
"Pabl0cks",
"carletex",
"damianmarti"
],
"repo": "BuidlGuidl/extensions-hackathon",
"url": "https://github.com/BuidlGuidl/extensions-hackathon/issues/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
646957420 | current changes
Description
post request
All pull requests should be related to open issues. Indicate the issue(s) below and remove this line:
Fixes #(issue number 1) #(issue number 2 if applicable)
How Has This Been Tested?
Please describe in detail how you tested your changes.
Include details of your testing environment, and the tests you ran to see how your change affects other areas of the code etc.
Screenshots (if applicable, else remove this line / section)
Checklist:
[ ] My code follows the style guidelines of this project
[ ] I have performed a self-review of my own code
[ ] I have commented my code, particularly in hard-to-understand areas
[ ] I have added necessary inline code documentation
[ ] I have added tests that prove my fix is effective and that this feature works
[ ] New and existing unit tests pass locally with my changes
ggggg
| gharchive/pull-request | 2020-06-28T16:18:55 | 2025-04-01T04:32:20.796202 | {
"authors": [
"Chiuri254",
"GeorgeKariuki7205"
],
"repo": "BuildForSDG/Team-00142-FrontEnd",
"url": "https://github.com/BuildForSDG/Team-00142-FrontEnd/pull/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1635943805 | useStore(...) proxy now handles delete proxy[k]
What is it?
[X] Bug
Description
Previously, proxy objects didn't implement a deleteProperty handler, thus:
the implicit default implementation allows deletion of any key, including symbolic ones, which could be problematic
deleting a key won't trigger subscribers' update callback, which could be considered a BUG
Use cases and why
const store=useStore({key:1})
componentCall(store) // internally uses `store.key`
delete store.key // this is valid javascript and store usage, but it won't trigger reactivity inside `componentCall`
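The need for a deletion trap can be sketched with a Python analogue (illustrative only; Qwik's actual store is a JavaScript Proxy with a deleteProperty handler, and the class below is a hypothetical stand-in). Unless deletion is intercepted, subscribers are never notified when a key is removed:

```python
class ReactiveStore(dict):
    """Dict-like store that notifies subscribers on writes AND deletions."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def _notify(self, key):
        for cb in self.subscribers:
            cb(key)

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self._notify(key)

    # Without this override (the analogue of a missing deleteProperty
    # trap), `del store[key]` would succeed silently and subscribers
    # would never re-run.
    def __delitem__(self, key):
        super().__delitem__(key)
        self._notify(key)


store = ReactiveStore(key=1)
events = []
store.subscribe(events.append)
del store["key"]  # triggers the subscriber, mirroring the PR's fix
print(events)     # ['key']
```

The PR applies the same idea at the Proxy level: intercepting deletion so that components reading the deleted key are re-rendered.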
Checklist:
[x] My code follows the developer guidelines of this project
[x] I have performed a self-review of my own code
[x] I have made corresponding changes to the documentation
[ ] Added new tests to cover the fix / functionality
about the failing check, it seems I don't have the permission to run it
Error: Parameter token or opts.auth is required
Curious why not handle immutable for now? It would be great to have some unit tests too.
We should add a new component here:
https://github.com/BuilderIO/qwik/blob/main/starters/apps/e2e/src/components/render/render.tsx
that tests this, maybe a onClick that deletes it, a e2e test that validates the correct behaviour:
https://github.com/BuilderIO/qwik/blob/main/starters/e2e/e2e.render.spec.ts#L258
| gharchive/pull-request | 2023-03-22T14:59:44 | 2025-04-01T04:32:20.806999 | {
"authors": [
"manucorporat",
"revintec"
],
"repo": "BuilderIO/qwik",
"url": "https://github.com/BuilderIO/qwik/pull/3475",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |