| added (string, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T04:54:46.326152
| 2016-01-29T16:08:45
|
129810542
|
{
"authors": [
"ChrisBAshton",
"issyl0"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13350",
"repo": "BBC-News/wraith",
"url": "https://github.com/BBC-News/wraith/pull/375"
}
|
gharchive/pull-request
|
Consistently capitalize GitHub
"GitHub" is its correct capitalization, so here's a PR. :sunny:
(This tool looks cool - good work, btw!)
Thanks!
|
2025-04-01T04:54:46.359004
| 2017-10-10T16:38:45
|
264296690
|
{
"authors": [
"gmfricke",
"wfvining"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13351",
"repo": "BCLab-UNM/SwarmBaseCode-ROS",
"url": "https://github.com/BCLab-UNM/SwarmBaseCode-ROS/issues/71"
}
|
gharchive/issue
|
Segfault when GUI closed and popout map used
There is a coredump in rqt_rover_gui when the GUI is closed and the popout map frame is open. Related to an std::pair assignment.
Fixed in PR #78.
|
2025-04-01T04:54:46.362042
| 2015-10-01T01:53:14
|
109214059
|
{
"authors": [
"arkal",
"hannes-ucsc"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13352",
"repo": "BD2KGenomics/toil",
"url": "https://github.com/BD2KGenomics/toil/issues/445"
}
|
gharchive/issue
|
static addChild definition of a node does what Encapsulate claims to do by default
If A is a job with dynamically added children, and we run
a, b = A(), B()
a.addChild(b)
then b should run in parallel with the dynamically added children unless A().encapsulate() is specified. Instead, b appears to run only after all of a's dynamically added children have completed.
@arkal, I slightly tweaked your comment. I hope I didn't change the semantics.
yep. I'll properly format any new issues I find.
Here's the sample code if you want to reproduce it
from __future__ import print_function
from toil.job import Job
import argparse
import time


def f(job):
    '''DOCSTRING'''
    with open('test_toil.txt', 'w', 0) as outfile:
        print('F', sep='', file=outfile)
    # job.addChildJobFn(h)
    # job.addChildJobFn(i)
    return 'F'


def g(job):
    '''DOCSTRING'''
    with open('test_toil.txt', 'a', 0) as outfile:
        print('G', sep='', file=outfile)


def h(job):
    '''DOCSTRING'''
    time.sleep(5)  # So this will end after G and I
    with open('test_toil.txt', 'a', 0) as outfile:
        print('H', sep='', file=outfile)
    return 'H'


def i(job):
    '''DOCSTRING'''
    with open('test_toil.txt', 'a', 0) as outfile:
        print('I', sep='', file=outfile)
    return 'I'


def j(job, my_rv):
    '''DOCSTRING'''
    with open('test_toil.txt', 'a', 0) as outfile:
        print(my_rv, sep='', file=outfile)


def test_1():
    '''DOCSTRING'''
    parser = argparse.ArgumentParser()
    parser.add_argument('-f', dest='txt', default='txt')
    Job.Runner.addToilOptions(parser)
    params = parser.parse_args()
    F = Job.wrapJobFn(f).encapsulate()
    G = Job.wrapJobFn(g)
    F.addChild(G)
    Job.Runner.startToil(F, params)


def test_2():
    '''DOCSTRING'''
    parser = argparse.ArgumentParser()
    parser.add_argument('-f', dest='dummy', default='dummy')
    Job.Runner.addToilOptions(parser)
    params = parser.parse_args()
    F = Job.wrapJobFn(f).encapsulate()
    J = Job.wrapJobFn(j, F.rv())
    F.addChild(J)
    Job.Runner.startToil(F, params)


if __name__ == '__main__':
    test_1()  # Expect FIGH or FGIH, get FIHG
    test_2()  # TypeError
|
2025-04-01T04:54:46.367676
| 2024-02-06T00:01:53
|
2119687161
|
{
"authors": [
"AdilCodeBX",
"Agnieszka-Dzwolak",
"Dnyandeo33",
"SowmyaPuttaswamygowda",
"ahlamboudali",
"dspodina",
"emrahhko",
"enteryana",
"rathiNamrata",
"richellepintucan",
"rodicailciuc",
"rohma19",
"samirm00"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13353",
"repo": "BF-FrontEnd-class-2024/home",
"url": "https://github.com/BF-FrontEnd-class-2024/home/issues/162"
}
|
gharchive/issue
|
Tuesday 06-02-2024 roll call
Roll Call!
Leave a comment with these two things:
A summary of your study using only emojis
Something you'd like to share, anything goes! (within respect)
emoji cheat sheet
Good morning!!!
Hello!
Good Morning!
Good morning
Hello
hey hey hey
hi
hi
good morning 👋
morning
✌️
hello
|
2025-04-01T04:54:46.427472
| 2023-09-27T15:54:16
|
1915880195
|
{
"authors": [
"frehburg"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13354",
"repo": "BIH-CEI/ERKER2Phenopackets",
"url": "https://github.com/BIH-CEI/ERKER2Phenopackets/issues/160"
}
|
gharchive/issue
|
add help command
This command could list all commands one can call from our package.
We probably have to rename it to something else, since help is built into the cmd shell; I don't know if that command name is usable.
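As a hedged sketch of what such a listing command might look like (the command names below are illustrative assumptions, not the package's actual API):

```python
# Hypothetical sketch: a "commands" listing for a CLI package. The command
# names and descriptions below are illustrative, not ERKER2Phenopackets' API.
COMMANDS = {
    'map': 'Map an ERKER dataset to phenopackets',
    'validate': 'Validate generated phenopackets',
    'commands': 'List all available commands',  # renamed from "help" to avoid the built-in
}

def list_commands():
    # One line per command: name padded to a fixed width, then its description.
    return '\n'.join('{:<10} {}'.format(name, desc)
                     for name, desc in sorted(COMMANDS.items()))

print(list_commands())
```

Registering this under a name like `commands` sidesteps the clash with the shell's built-in `help` mentioned above.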
|
2025-04-01T04:54:46.430179
| 2023-09-05T13:33:15
|
1881995306
|
{
"authors": [
"frehburg"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13355",
"repo": "BIH-CEI/ERKER2Phenopackets",
"url": "https://github.com/BIH-CEI/ERKER2Phenopackets/issues/57"
}
|
gharchive/issue
|
Implement _map_chunk
This method should implement the logic to map a chunk (subset) of the dataset to phenopackets.
implement helper method _map_instance to map a single row of the dataset to one phenopacket
Include:
Patient description:
[x] #60
[x] #62
Genotyping:
[x] #64
[x] #65
Phenotyping:
[x] #67
Debug
|
2025-04-01T04:54:46.436836
| 2022-11-30T19:12:43
|
1470103645
|
{
"authors": [
"codeKameleon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13356",
"repo": "BLSQ/iaso",
"url": "https://github.com/BLSQ/iaso/pull/218"
}
|
gharchive/pull-request
|
IA-1644/IA-1688: frontend grid and responsive behavior refactor
Explain what problem this PR is resolving
There was a lack of consistency in the design rules. Sometimes the search button is under the filters, sometimes beside them.
Self proof reading checklist
[x] Did I use eslint and black formatters
[x] Is my code clear enough and well documented
[ ] Are my typescript files well typed
[ ] Did I add translations
[ ] My migrations file are included
[ ] Are there enough tests
Changes
used xs={12} if on mobile/tablet
if there are <= 3 columns in filters, the search button is aligned on the same level
if there are more than 3 columns, the search button is aligned below the filters
used useTheme and useMediaQuery to improve/fix design issues on mobile
How to test
Go through the entire app and check that there are no design issues on desktop, tablet, and mobile
Print screen / video
@cypress
|
2025-04-01T04:54:46.494240
| 2021-12-05T20:16:54
|
1071547312
|
{
"authors": [
"BRAVO68WEB",
"Procoder16"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13357",
"repo": "BRAVO68WEB/propstar-theme",
"url": "https://github.com/BRAVO68WEB/propstar-theme/pull/1"
}
|
gharchive/pull-request
|
Readme Updated!
@BRAVO68WEB do review and merge the PR if you find it perfectly alright.
Great job !!
|
2025-04-01T04:54:46.552018
| 2016-11-07T12:04:55
|
187694974
|
{
"authors": [
"ftrader"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13358",
"repo": "BTCfork/hardfork_prototype_1_mvf-bu",
"url": "https://github.com/BTCfork/hardfork_prototype_1_mvf-bu/issues/24"
}
|
gharchive/issue
|
Set up Continuous Integration for this project on GitHub
This project needs to set up Continuous Integration in the form of Travis or some other service which can run builds and tests at least for every commit into the master branch.
I'm raising this issue here so that we don't forget about it. If anyone with experience would like to volunteer to help set this up, that would be great. The Bitcoin Unlimited project is just in the process of doing the same (actually, they've already set up Travis), so we might be able to get helpful advice from them if we go the Travis route.
I've activated integration with Travis, but I expect it to fail initially and require some support action to resolve (as happened with Bitcoin Unlimited).
Fixed with successful test of PR #52
|
2025-04-01T04:54:46.559570
| 2017-11-06T06:55:00
|
271375322
|
{
"authors": [
"tfabraham"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13359",
"repo": "BTDF/DeploymentFramework",
"url": "https://github.com/BTDF/DeploymentFramework/issues/124"
}
|
gharchive/issue
|
Add a SetRegistryValue MSBuild task to DeploymentFramework.BuildTasks.dll
Add a SetRegistryValue MSBuild task to DeploymentFramework.BuildTasks.dll. This can be used in any .btdfproj file (as can the SDC Tasks).
This work item was migrated from CodePlex
CodePlex work item ID: '6303'
Assigned to: 'tfabraham'
Vote count: '0'
[UnknownUser@4/12/2010]
Resolved with changeset 39648.
|
2025-04-01T04:54:46.561412
| 2022-01-01T23:38:13
|
1091916065
|
{
"authors": [
"SimonMeskens"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13360",
"repo": "BTW-Community/BTW-Public",
"url": "https://github.com/BTW-Community/BTW-Public/issues/154"
}
|
gharchive/issue
|
[Build tool] Turn Build tool into proper dev environment
Since the build tool is already almost a full build environment, turning it into one would be great. Copying the BTW sources into the MCP /src/ folder and then updating the md5s would likely be enough.
It also needs me to set up the multiple folders again, like I did with BTW-gradle alpha, but this time without the useless /common/ folder experiment.
$mcpDir/src should contain the BTW source
$projectDir/src should contain the addon source
Might potentially need to move the stuff that now goes in $projectDir/src to a temp build folder, potentially the MCP one and move that one from $mcpDir/temp to $projectDir/temp (or is it /tmp/?)
|
2025-04-01T04:54:46.566076
| 2018-08-04T05:49:15
|
347590653
|
{
"authors": [
"BUG1989",
"nihui"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13361",
"repo": "BUG1989/caffe-int8-convert-tools",
"url": "https://github.com/BUG1989/caffe-int8-convert-tools/issues/4"
}
|
gharchive/issue
|
per-group input and weight int8_scale for group convolution
scenario 1
A channel = 10
B channel = 10
A -> depthwiseconv -> B
expected behavior
generate 10 int8_scale for A, one for each channel
generate 10 int8_scale for depthwiseconv weight, one for each channel
scenario 2
A channel = 10
B channel = 10
A -> conv (group=2) -> B
expected behavior
generate 2 int8_scale for A, one for each 5 channels
generate 2 int8_scale for conv weight, one for each group
This feature will be supported by caffe-int8-convert-tool-dev.py. It may take a long time to implement. I will try to optimize it.
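The expected behavior can be sketched with a symmetric abs-max scale per group. This is an illustrative quantization scheme under stated assumptions, not the tool's actual implementation:

```python
# Hedged sketch: one symmetric int8 scale per group, computed from the
# absolute maximum of that group's weights. Illustrative only, not the
# actual code in caffe-int8-convert-tools.
import numpy as np

def per_group_weight_scales(weights, groups):
    """weights: (out_channels, in_channels_per_group, kh, kw) array."""
    per_group = weights.shape[0] // groups
    scales = []
    for g in range(groups):
        chunk = weights[g * per_group:(g + 1) * per_group]
        scales.append(127.0 / np.abs(chunk).max())  # symmetric range [-127, 127]
    return np.array(scales)

# Scenario 1: depthwise conv, 10 channels -> 10 scales (one per channel).
dw = np.random.randn(10, 1, 3, 3)
print(per_group_weight_scales(dw, groups=10).shape)   # (10,)
# Scenario 2: conv with group=2, 10 channels -> 2 scales (one per 5 channels).
gc = np.random.randn(10, 5, 3, 3)
print(per_group_weight_scales(gc, groups=2).shape)    # (2,)
```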
|
2025-04-01T04:54:46.592357
| 2023-07-26T20:30:55
|
1823117021
|
{
"authors": [
"Aragas",
"Mrcubix"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13362",
"repo": "BUTR/Bannerlord.Module.Template",
"url": "https://github.com/BUTR/Bannerlord.Module.Template/pull/160"
}
|
gharchive/pull-request
|
Add support for vscode
Premises
vscode cannot make use of the Properties/launchSettings.json file as it uses an unsupported commandName, only project is supported in vscode.
What is this PR for
This PR aims to add launch & debug support for modules in vscode.
Necessary files
To achieve this, 3 files are necessary:
launch.json
settings.json
tasks.json
Only one of those files, launch.json, requires the use of variables to make a newly created module work properly.
Those variables are stored in the settings.json file and are used by vscode to feed in the appropriate values.
The two required variables are:
The game binary folder (by default, Win64_Shipping_Client)
The name of the module to debug
Unfortunately, the addition of 3 template symbols, including a single parameter, is required.
Template symbols
-c or --code will need to be used if the user intends to use vscode as their IDE.
Generated type symbols are used to replace the placeholders present within settings.json initially.
gameBinariesFolder:
If the gameWindows parameter is true, gameBinariesFolder's value is the Win64_Shipping_Client folder; otherwise,
if the gameWindowsStore parameter is true, gameBinariesFolder's value is the Gaming.Desktop.x64_Shipping folder.
displayName:
a separate symbol from moduleName that controls its presence; if it is not set, sourceName will be used (the name symbol represents that value)
Folder exclusion
If the user specifies the use of vscode, the Properties folder will be excluded from the template.
Conversely, if the user specifies no value or false, .vscode will be excluded.
What could be done on top of what is currently provided
If necessary, I can add the publish and watch tasks back to tasks.json
This adds some concerns if we're implementing it this way
There are two independent ways for configuration - MSBuild and VSCode
Hardcoded gameBinariesFolder - if you change the BANNERLORD_GAME_DIR env, you'll also need to change the variable
I think that adding a new VSCode specific templates for Standard/Sdk would be better - we could move all of the dynamic variables stored in MSBuild to settings.json. When building, MSBuild will look into settings.json for the values.
I should be able to adapt the changes as a new project if you're not able to. It kinda doubles the codebase, but it should provide a better user experience
Also, as a comment, I think it would be better to file a bug report to VSCode/ C# Dev Kit extension for the lack of executable support in commandName. Looks like someone reported an IIS error and it's being fixed
There are two independent ways for configuration - MSBuild and VSCode
I think it would be better to file a bug report to VSCode/ C# Dev Kit extension for the lack of executable support in commandName. Looks like someone reported an IIS error and it's being fixed
I think that adding a new VSCode specific templates for Standard/Sdk would be better
https://code.visualstudio.com/docs/csharp/debugger-settings#_launchsettingsjson-support
Seems like vscode originally planned it to be this way. I agree that mixing both configs isn't the best of ideas; that would reduce the complexity of the templates, but it will make maintaining them more difficult.
Hardcoded gameBinariesFolder - if you change the BANNERLORD_GAME_DIR env, you'll also need to change the variable
I don't really see what the concern is here; this would mostly be an issue if the BANNERLORD_GAME_DIR variable isn't set properly, or if you don't restart vscode after changing it.
If you move the game to another location on steam or the store, the binary folder will stay the same.
we could move all of the dynamic variables stored in MSBuild to settings.json. When building, MSBuild will look into settings.json for the values.
This would require the use of an msbuild task just for that purpose, which is somewhat confusing to use, even with the documentation provided.
I will try playing around with it since it at least has more proper documentation than templates do.
I've added my own interpretation of VSCode, but as you said, MSBuild code is a mess, especially the way to read JSON without any JSON dependency
https://github.com/BUTR/Bannerlord.Module.Template/tree/Mrcubix_master_BUTR
you should be able to make use of System.Text.Json just for this task, it shouldn't pose any issues, or worst case scenario, you can make use of the game's newtonsoft dependency.
Surprisingly, System.Text.Json isn't available in MSBuild because the target is netstandard2.0, as I understand
Reuse of Newtonsoft could work when the game folder is available and reference assemblies are not set, but there are other issues.
We need Newtonsoft.Json to get the game folder where Newtonsoft.Json is, so that's a paradox. It also won't work with Newtonsoft.Json as a NuGet package, because we need to resolve the game folder before the project dependencies are resolved.
So we need some built-in mechanism to read Json without external deps, where System.Text.Json should have helped
https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.configuration.configurationbuilder?view=dotnet-plat-ext-7.0&viewFallbackFrom=netstandard-2.0
could try using a ConfigurationBuilder which is available in netstandard 2.0
We should be able to revisit this, looks like we could potentially reuse launchSettings.json
So it's marked as merged, but it's not "really" merged.
@Mrcubix if you're still modding Bannerlord, can you create a standard SDK/non SKD template and check whether such support for VSCode is correct and sufficient?
I have moved on from bannerlord modding, but I can check tomorrow; it's still useful for other templates I work on.
So there are multiple issues with the template:
The project generated from the template is created in another folder than the current folder, resulting in VScode being unable to find the launch.json file. Usually, templates just dump their content in the current folder.
Once moved, VScode is still unable to debug the module, and will error with the following:
Unable to determine debug settings for project '[...]\Bannerlord\Testing\Template-testing\Bannerlord.Template.Test.csproj'
Looking into how to fix the last error.
So there are multiple issues with the SDK template:
The project generated from the template is created in another folder than the current folder, resulting in VScode being unable to find the launch.json file. Usually, templates just dump their content in the current folder.
No default profile is being specified, which could lead to some errors,
If I open the generated directory instead in vscode and try to debug the module, I get the following error:
[...]\Bannerlord\Testing\Template-testing\Bannerlord.Template.Test\Bannerlord.Template.Test.csproj' is not an executable project.
Update: Noticed again that only Profiles of type Project are compatible with vscode, so these templates won't work, see: https://code.visualstudio.com/docs/csharp/debugger-settings#:~:text=Properties/launchSettings.json"-,Restrictions The same applies to dotnet CLI.
Not sure I can reproduce the issues.
So I've created the template with two methods:
dotnet new blmodfx - will treat the current directory as the project name
dotnet new blmodfx --name "MyModule" - will create a new folder MyModule with the content
Do you mean the VSCode profile or the profile from launchSettings?
I'm able to select Start Debugging [net472] (Steam/GOG/Epic) and it starts the game as it should - with correct args, so it looks like Executable is working correctly in VSCode
Do you mean the VSCode profile or the profile from launchSettings?
launchSettings.json
I'm able to select Start Debugging [net472] (Steam/GOG/Epic) and it starts the game as it should - with correct args, so it looks like Executable is working correctly in VSCode
I tried multiple times, none of which worked; I also tried on another mostly clean machine just to make sure it wasn't a setup issue.
Now, with blmodfx:
dotnet new blmodfx
Try to debug
Get the error '[...]\Bannerlord\Testing\template_test\template_test.csproj' is not an executable project.
Just to be sure, both ms-dotnettools.csharp and ms-dotnettools.csdevkit are installed?
I can confirm that without ms-dotnettools.csharp it will say that dotnet is not supported, and without ms-dotnettools.csdevkit it states 'c:\Git\f\f.csproj' is not an executable project.
I definitely need to add extensions.json with their recommendations
{
"recommendations": [
"ms-dotnettools.csharp",
"ms-dotnettools.csdevkit"
]
}
ms-dotnettools.csdevkit is not installed as it is not Open source
Okay, then we'll still need a non-proprietary solution, but we don't yet have an alternative that would not require a new template. And even with the template, those issues still persist.
It seems like custom operations work in json files, so you could try to put placeholders in and replace those placeholders with the value in question, or make use of settings vars, something like that.
|
2025-04-01T04:54:46.594654
| 2020-09-09T01:40:37
|
696318043
|
{
"authors": [
"ROODAY"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13363",
"repo": "BUUPE/interview",
"url": "https://github.com/BUUPE/interview/issues/60"
}
|
gharchive/issue
|
Pre-commit hook runs prettier on all files instead of just staged files
[x] Make prettier only write to staged files
[x] Have eslint warnings prevent commit
Completed in 1e22647aa04346f8a811d67a4f2b1af82fed0c70
|
2025-04-01T04:54:46.618253
| 2019-12-12T22:46:20
|
537254183
|
{
"authors": [
"daniekpo",
"jaydenmilne",
"kolbytn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13364",
"repo": "BYU-PCCL/holodeck",
"url": "https://github.com/BYU-PCCL/holodeck/issues/351"
}
|
gharchive/issue
|
Holodeck Make Scenario Config
Describe the bug
When using a python data structure rather than a scenario config file, the following error is thrown.
holodeck/src/holodeck/packagemanager.py", line 345, in get_binary_path_for_scenario
with open(config_path, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'config.json'
To Reproduce
Call holodeck.make with keyword argument scenario_cfg
Expected behavior
Should load correct world with scenario that was provided as keyword arg
Additional context
I believe this is caused by line 62 in holodeck.py
scenario_name = "{}-{}".format(scenario["name"], scenario["world"])
The scenario name should follow the format [WORLDNAME]-[SCENARIONAME].
Additionally, I don't think it should be a requirement that the scenario name you are using be an existing scenario name. As is, this is a requirement for holodeck to be able to find the package config file because of the logic in packagemanager.py/get_binary_path_for_scenario.
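Under that reading, the fix would be to swap the two format arguments. A tiny sketch, with made-up scenario values rather than ones from a real Holodeck config:

```python
# Illustrative sketch of the swap implied above; the scenario values are
# hypothetical, not taken from an actual Holodeck scenario file.
scenario = {"name": "MazeRunner", "world": "MazeWorld"}

buggy = "{}-{}".format(scenario["name"], scenario["world"])  # SCENARIONAME-WORLDNAME
fixed = "{}-{}".format(scenario["world"], scenario["name"])  # WORLDNAME-SCENARIONAME
print(fixed)  # MazeWorld-MazeRunner
```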
I'm currently looking into this issue.
You can assign yourself, then we know you are working on it (right hand side)
@kolbytn did you say the documentation specified that you can use any name for the scenario? I think the right behavior is to specify an existing scenario name. I think we may need to fix the documentation.
To my knowledge, the documentation doesn't specify, but I don't think a custom scenario should need to be the same name as an existing scenario file.
I see what you mean. Yeah, that makes sense now that I looked at it again. We need to change how we're getting the executable. Right now it's using the path to an existing scenario to find the executable path.
|
2025-04-01T04:54:46.643950
| 2021-09-10T16:20:31
|
993386575
|
{
"authors": [
"PirateJC",
"RolandCsibrei",
"sebavan"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13365",
"repo": "BabylonJS/Assets",
"url": "https://github.com/BabylonJS/Assets/pull/46"
}
|
gharchive/pull-request
|
Added tileAndOffset NME custom node
Hey! Hopefully this fits here!
I would say yes :-) adding @PirateJC and @PatrickRyanMS
LOL This is hilarious! I'm working on something like this as we speak! Awesome! Great minds!
I was just watching your video Custom Nodes in the Node Material Editor: Part 1 today and tried to create a useful one. So I did this one! Crazy!
|
2025-04-01T04:54:46.653436
| 2024-09-23T11:31:00
|
2542354709
|
{
"authors": [
"eoineoineoin",
"nyan-left",
"sebavan",
"tangobravo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13366",
"repo": "BabylonJS/Babylon.js",
"url": "https://github.com/BabylonJS/Babylon.js/issues/15609"
}
|
gharchive/issue
|
Havok physics compatibility for older devices
Repro
Expected result: Havok physics working on older devices (iOS pre-16.4, Android Webview < 81)
Current result: Havok physics fails due to lack of SIMD support
Screenshots
N/A
Desktop (please complete the following information):
OS: Various (see details in linked issues)
Browser: Safari, Chrome
Version: Various (focus on pre-16.4 iOS Safari and Android Webview < 81)
Smartphone (please complete the following information):
Device: Various iOS and Android devices
OS: iOS < 16.4, Android with Webview < 81
Browser: Safari, Android Webview
Version: Various (see details in linked issues)
Additional context
@sebavan @CedricGuillemet @eoineoineoin
This issue aims to resurface the conversation from #14315, which was closed without a clear resolution. We're still encountering compatibility issues with Havok physics on older devices, affecting a significant user base (>8% of iOS users as of April 2023).
Key points and potential solutions:
SIMDe or LLVM's vector extensions have been suggested as possible approaches
Goal: maintain compatibility without compromising performance on newer hardware
Relevant links:
Previous issue: #14315
Forum discussions:
https://forum.babylonjs.com/t/webassembly-error-havok-project-in-old-ios-device-iphone-6s/40213
https://forum.babylonjs.com/t/havok-cannot-be-loaded-in-any-safari-browser/41481/14?u=nyan-left
Emscripten issue: https://github.com/emscripten-core/emscripten/issues/22470
Can we reopen this discussion and explore viable solutions? Your insights would be greatly appreciated.
@CedricGuillemet / @eoineoineoin, are we planning to fix the issue? From what I understand, no, but I might be wrong and wanted to check with you before closing. I am pretty sure the iOS numbers are now pretty different as well?
@sebavan I'm not particularly inclined here, just because of the extra third-party dependency, doubling the targets and the difficulty testing. Not even sure if those old OS versions are supported by their respective vendors, either. Let's discuss on our Thursday call.
Thanks, that is what I thought, so let me close the issue as Won't Fix and reopen if you decide otherwise on Thursday
OK, thanks for a response. One error in @nyan-left's summary was the >8% stats were from April this year, 2024 not 2023. Unfortunately I can't find any newer ones that include detailed version breakdowns. Apple announced over 2.2 billion active devices in total in February, so with a conservative estimate of 60% of them being iPhones means there are likely over 100 million users running versions of iOS without wasm simd support.
Testing-wise a universal "baseline wasm" build will still run on all current wasm engines, so all the same test suite used to verify functionality could be used, it wouldn't need specific testing on these older engines. Or perhaps this fallback build could be released with an explicitly lower "support tier" and left to users to handle the runtime load decision if they really want a fallback (eg Unity's WebGL output is officially unsupported on mobile but still sees usage).
I haven't completely given up on trying to do this on the wasm directly but it looks so much easier to achieve at compile time. I do take the point there might be an extra dependency required and some extra testing cost so won't keep pushing it if it's still a no! :)
|
2025-04-01T04:54:46.659336
| 2021-01-22T11:14:07
|
791919540
|
{
"authors": [
"CedricGuillemet",
"Drigax"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13367",
"repo": "BabylonJS/BabylonNative",
"url": "https://github.com/BabylonJS/BabylonNative/issues/581"
}
|
gharchive/issue
|
[Nightly] Texture bindings and validation tests
The crash reported by d3d11 when running UnindexedMesh PG is:
D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: The Shader Resource View dimension declared in the shader code (TEXTURE2D) does not match the view type bound to slot 0 of the Pixel Shader unit (TEXTURECUBE). This mismatch is invalid if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #354: DEVICE_DRAW_VIEW_DIMENSION_MISMATCH]
D3D11: **BREAK** enabled for the previous message, which was: [ ERROR EXECUTION #354: DEVICE_DRAW_VIEW_DIMENSION_MISMATCH ]
The Fresnel PG must be run before UnindexedMesh.
It seems like Fresnel binds some TextureCubes that are not unbound or replaced by new 2D textures. When rendering UnindexedMesh, the shader is expecting a different texture type and crashes.
To repro, you have to use the same Preprocessor as the nightly when building solution: -DBGFX_CONFIG_MEMORY_TRACKING=ON -DBGFX_CONFIG_DEBUG=ON
I tried :
discard bindings after bgfx::submit but subsequent drawcalls that share the same state will fail
bind invalidHandle to each texture slot used by the shader when setProgram is called
call bgfx::discard when binding a new program
I've tried to revert some commits and I've found the cause:
If I replace these lines:
NativeEngine.prototype.getHardwareScalingLevel = function () {
return this._native.getHardwareScalingLevel();
};
NativeEngine.prototype.setHardwareScalingLevel = function (level) {
this._native.setHardwareScalingLevel(level);
}
with previous version:
NativeEngine.prototype.getHardwareScalingLevel = function () {
return 1.0;
}
Then validation test works fine.
I do not understand how this leads to a shader and texture attribution error.
Interesting... I just ran the validation test with the current repo state, but the issue persists. The current babylon.max.js script that the validation test runs against does NOT have the changes that you reverted @CedricGuillemet.
|
2025-04-01T04:54:46.716364
| 2023-01-28T19:46:54
|
1560985607
|
{
"authors": [
"Badgerati",
"fatherofinvention"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13368",
"repo": "Badgerati/Pode",
"url": "https://github.com/Badgerati/Pode/issues/1075"
}
|
gharchive/issue
|
Mixing Pode.Web in Pode Views
Question
I am playing around with custom HTML in Pode views and thought it would be cool to also be able to leverage Pode.Web when needed but cannot get them to render. Is this possible? Here is a basic example:
<!DOCTYPE html>
<html lang="en">
<title>Mixing Pode.Web With Pode Views</title>
<body>
<div class="container">
<div class="columns">
<div class="column col-12">
$(New-PodeWebCard -Name 'Test' -Content @(
New-PodeWebForm -Name 'TestingIt091827' -ScriptBlock {
#noop
} -Content @(
New-PodeWebTextbox -Name 'TestName' -DisplayName 'Name'
New-PodeWebRadio -Name 'TestScope' -DisplayName 'Scope' -Options @('AllUsers', 'CurrentUser')
)
))
</div>
</div>
</div>
</body>
</html>
Hi @fatherofinvention,
It's an interesting idea, though with how Pode.Web works at the moment it's not currently possible. Calling New-PodeWebCard, etc., will each return a hashtable defining the element which is then used in Pode.Web to dynamically build a lot of Pode Views.
Calling them in isolation, within custom Pode Views, they will simply return bare hashtables 😛
It is teeechnically possible, but would require a change in Pode.Web to render the required HTML, something like this possibly:
<!DOCTYPE html>
<html lang="en">
<title>Mixing Pode.Web With Pode Views</title>
<body>
<div class="container">
<div class="columns">
<div class="column col-12">
$(ConvertTo-PodeWebHtml (New-PodeWebCard -Name 'Test' -Content @(
New-PodeWebForm -Name 'TestingIt091827' -ScriptBlock {
#noop
} -Content @(
New-PodeWebTextbox -Name 'TestName' -DisplayName 'Name'
New-PodeWebRadio -Name 'TestScope' -DisplayName 'Scope' -Options @('AllUsers', 'CurrentUser')
)
)))
</div>
</div>
</div>
</body>
</html>
It'll also need helper functions to load the CSS (this could be optional), and also the required JavaScript (definitely mandatory, otherwise controls wouldn't work).
Might be a good one to raise over on Pode.Web 😄
Thanks @Badgerati! I'll close this and open a new issue in the Pode.Web repo as suggested. Just to give you a little more context, I love how easy it is to build with Pode and Pode.Web, but I'm not a fan of the default Bootstrap styling. When I learned about views, I got really excited because it makes it super simple to write my own HTML and style it however I like, plus add PowerShell logic. However, there are a few things I'd love to use from Pode.Web (like file upload), so it would be great if I could pick and choose what I need from Pode.Web and use custom markup, styles, etc. the rest of the time.
|
2025-04-01T04:54:46.746588
| 2024-07-08T21:01:50
|
2396564102
|
{
"authors": [
"BamaCharanChhandogi",
"Soumya6Tiwari"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13369",
"repo": "BamaCharanChhandogi/Diabetes-Prediction",
"url": "https://github.com/BamaCharanChhandogi/Diabetes-Prediction/pull/138"
}
|
gharchive/pull-request
|
Added social media icons
Added Suitable social media profile links such as github, quora etc in the footer section
Thank you for submitting your pull request! 🙌 We'll review it as soon as possible. In the meantime, please ensure that your changes align with our CONTRIBUTING.md. If there are any specific instructions or feedback regarding your PR, we'll provide them here. Thanks again for your contribution! 😊
@BamaCharanChhandogi kindly review the PR
@BamaCharanChhandogi But I only made changes in footer.jsx
To add the GitHub icon and the Quora icon, it was necessary to import them from react-icons.
@BamaCharanChhandogi But I only made changes in footer.jsx To add the GitHub icon and the Quora icon, it was necessary to import them from react-icons.
This is all right. I am asking why you changed the package.json file. I think it was by mistake; you can undo the changes to that file, i.e. revert them.
@BamaCharanChhandogi Ok I reverted back the changes
|
2025-04-01T04:54:46.767455
| 2023-08-17T15:36:10
|
1855246964
|
{
"authors": [
"BanchouBoo",
"BhaturaGuy",
"mrfragger"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13370",
"repo": "BanchouBoo/mpv-youtube-chat",
"url": "https://github.com/BanchouBoo/mpv-youtube-chat/issues/4"
}
|
gharchive/issue
|
[Feature Request] support downloaded videos? + more
Videos downloaded with yt-dlp have a PURL field that contains the video link, so the script could just look for that to download the live chat. Also, if a video already has the live chat embedded in it, could it use that?
Insert line breaks when a message reaches a certain character threshold, so that spam messages don't take up the entire horizontal screen.
Custom font support + colors, borders config. People usually love to add emoji in chat, but our usual fonts can't display them, so we could use Nerd Fonts for the chat so they can be displayed.
Videos downloaded with yt-dlp have PURL and it contains video link. So it can just look for that to download live chat, also if a video already has livechat embedded in it then use it?
If you try to load a live chat without manually passing in a file name (script-message load-chat) and it's from a local video file, it will look for a live chat with the same name as the video in the same directory as the video (e.g. file.mp4 will look for file.live_chat.json), so you can just download the live chat with the video when you download via yt-dlp.
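The lookup rule described above can be sketched as follows (a JavaScript illustration only; the actual script is written in Lua, and the function name is made up here, with just the `file.mp4` → `file.live_chat.json` naming rule taken from the comment):

```javascript
// Derive the expected live-chat file name from a local video path by
// stripping the final extension and appending ".live_chat.json".
function chatFileFor(videoPath) {
  return videoPath.replace(/\.[^./\\]+$/, "") + ".live_chat.json";
}
```

So `chatFileFor("file.mp4")` yields `"file.live_chat.json"`, matching the lookup the script performs next to the video.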
Insert line breaks if characters reach certain threshold so that spam messages don't take entire horizontal screen.
I do have a limit for line length but it's flexible so it only breaks at word boundaries, I guess this would be a problem if a message was really long and had no spaces, I'll think about how to approach it
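The wrapping rule being discussed, i.e. prefer breaking at word boundaries but force a hard break inside any single "word" longer than the limit, could look roughly like this (sketched in JavaScript for illustration; the actual script is Lua, and the function name and limit handling are assumptions):

```javascript
// Wrap a message into lines of at most maxLen characters: break at spaces
// when possible, and hard-split any unbroken run longer than a full line
// so spam without spaces cannot span the whole screen.
function wrapLine(text, maxLen) {
  const lines = [];
  let current = "";
  for (let word of text.split(" ")) {
    // Hard-split words that are longer than a whole line.
    while (word.length > maxLen) {
      if (current) { lines.push(current); current = ""; }
      lines.push(word.slice(0, maxLen));
      word = word.slice(maxLen);
    }
    if (!current) {
      current = word;
    } else if (current.length + 1 + word.length <= maxLen) {
      current += " " + word;
    } else {
      lines.push(current);
      current = word;
    }
  }
  if (current) lines.push(current);
  return lines;
}
```

With a limit of 5, a 12-character run of "a"s gets force-split across lines, while normal words still break only at spaces.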
Custom font support + colors, borders config
Done, update to the latest commit. Also changed color format option to author-color.
Wasn't able to get emoji to work when changing the font though, let me know if you get that working.
Nerd Fonts doesn't directly support emoji, but there is a long path to patching fonts with the Noto Color Emoji font to support them.
A second method is to use Noto Color Emoji as the default font for emoji. Since live_chat.json does have text, emoji, and emojiId, it should somehow be possible to use a different font for them: text could be configurable by the user, and the emoji font would be Noto Color Emoji.
But for some reason I still can't see the emojis in mpv even after setting font=Noto Color Emoji
I definitely could make a separate font setting for emoji pretty easily, but I'm not sure to what extent mpv supports emoji. It seems libass doesn't support colored emoji (https://github.com/libass/libass/issues/381); supposedly monochrome emoji fonts should work, but I haven't tested that myself yet. If you test one, let me know the results; if monochrome emoji fonts work then I'll add a setting for emoji font as well.
I tried tons of workarounds: EmojiOne, patched Nerd Fonts, Google's Noto Emoji (25% are monochrome), and they all fail, displaying boxes for colored emoji. Monochrome emoji show just fine in most any font.
The only font that doesn't show boxes is Twitter Color Emoji: "Regular B&W outline emoji are included for backwards/fallback compatibility."
https://github.com/13rac1/twemoji-color-font/releases
opts['font'] = 'Twitter Color Emoji'
In the end it might just be better to use a more readable font and just ignore the boxes.
Not worth running a ("emojiId": ").*?(",) / $1$2 regex replace on them each time either.
Now using mpv 0.38 and it sure seems to be getting almost all emojis. Maybe it just has so many B&W for the colored ones. I tried nerd font..that was a no go. Also tried Noto Emoji again and the spacing between letters was way off. Only downside to using this Twitter Color Emoji is it's 14MB.
|
2025-04-01T04:54:46.813643
| 2019-05-02T03:42:43
|
439416017
|
{
"authors": [
"JaredWright",
"connojd",
"tklengyel"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13380",
"repo": "Bareflank/hypervisor",
"url": "https://github.com/Bareflank/hypervisor/issues/860"
}
|
gharchive/issue
|
CMake issues
As of 6dc4866bc417c396b40cdf255f417adcda579596 I'm running into the following using the default compile instructions:
[ 21%] Performing configure step for 'bfruntime_x86_64-vmm-elf'
-- The C compiler identification is Clang 3.9.1
-- The CXX compiler identification is Clang 3.9.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The ASM_NASM compiler identification is NASM version 2.12.01
-- Found assembler: /usr/bin/nasm
-- Configuring done
CMake Error at CMakeLists.txt:50 (add_library):
Target "bfcrt" links to target "vmm::bfroot" but the target was not found.
Perhaps a find_package() call is missing for an IMPORTED target, or an
ALIAS target is missing?
CMake Error at CMakeLists.txt:70 (add_library):
Target "bfdso" links to target "vmm::bfroot" but the target was not found.
Perhaps a find_package() call is missing for an IMPORTED target, or an
ALIAS target is missing?
CMake Error at CMakeLists.txt:78 (add_library):
Target "bfpthread" links to target "vmm::bfroot" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
CMake Error at CMakeLists.txt:89 (add_library):
Target "bfsyscall" links to target "vmm::bfroot" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
CMake Error at CMakeLists.txt:50 (add_library):
Target "bfcrt" links to target "vmm::bfroot" but the target was not found.
Perhaps a find_package() call is missing for an IMPORTED target, or an
ALIAS target is missing?
CMake Error at CMakeLists.txt:70 (add_library):
Target "bfdso" links to target "vmm::bfroot" but the target was not found.
Perhaps a find_package() call is missing for an IMPORTED target, or an
ALIAS target is missing?
CMake Error at CMakeLists.txt:78 (add_library):
Target "bfpthread" links to target "vmm::bfroot" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
CMake Error at CMakeLists.txt:89 (add_library):
Target "bfsyscall" links to target "vmm::bfroot" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
-- Generating done
-- Build files have been written to: /home/ssjtoma/workspace/bareflank/build/bfruntime/x86_64-vmm-elf/build
CMakeFiles/bfruntime_x86_64-vmm-elf.dir/build.make:108: recipe for target 'bfruntime/x86_64-vmm-elf/stamp/bfruntime_x86_64-vmm-elf-configure' failed
make[2]: *** [bfruntime/x86_64-vmm-elf/stamp/bfruntime_x86_64-vmm-elf-configure] Error 1
CMakeFiles/Makefile2:751: recipe for target 'CMakeFiles/bfruntime_x86_64-vmm-elf.dir/all' failed
make[1]: *** [CMakeFiles/bfruntime_x86_64-vmm-elf.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
This line looks suspect to me:
The C compiler identification is Clang 3.9.1
I'm not sure how CMake in your environment finds/reports the compiler version being used, but Clang 6+ is currently supported. Which version of CMake are you using, and which OS/distro?
Also, did you start your build from a completely empty build directory? That error looks very familiar to me from a few commits back, so I'm wondering if there's something leftover in your build directory that is causing it.
As of this morning, 6dc4866bc417c396b40cdf255f417adcda579596 builds on my Ubuntu 18.04 environment using CMake 3.14.0 and the default build configuration. If you have any more information about your particular environment that might help me reproduce this, that would be helpful!
This is on debian stretch using a clean build directory. With clang-6.0 and clang++-6.0 I get:
CMake Error at scripts/cmake/macros.cmake:975 (message):
ENABLE_BUILD_USERSPACE is not supported with clang
Call Stack (most recent call first):
scripts/cmake/validate.cmake:44 (invalid_config)
CMakeLists.txt:199 (include)
CMake Error at scripts/cmake/macros.cmake:962 (message):
Build validation failed
Call Stack (most recent call first):
CMakeLists.txt:200 (validate_build)
-- Configuring incomplete, errors occurred!
$ cmake --version
cmake version 3.13.2
With -DENABLE_BUILD_USERSPACE=OFF I get
[ 52%] Performing bfruntime_x86_64-vmm-elf_cleanup step for 'bfruntime_x86_64-vmm-elf'
Scanning dependencies of target bfsyscall
Scanning dependencies of target bfpthread
Scanning dependencies of target bfcrt
Scanning dependencies of target bfdso
[ 10%] Building CXX object CMakeFiles/bfdso.dir/src/dso/dso.cpp.o
[ 20%] Building CXX object CMakeFiles/bfcrt.dir/src/crt/crt.cpp.o
[ 30%] Building CXX object CMakeFiles/bfsyscall.dir/src/syscall/syscall.cpp.o
error: invalid value 'c++17' in '-std=c++17'
[ 40%] Building CXX object CMakeFiles/bfpthread.dir/src/pthread/pthread.cpp.o
CMakeFiles/bfdso.dir/build.make:62: recipe for target 'CMakeFiles/bfdso.dir/src/dso/dso.cpp.o' failed
make[5]: *** [CMakeFiles/bfdso.dir/src/dso/dso.cpp.o] Error 1
CMakeFiles/Makefile2:108: recipe for target 'CMakeFiles/bfdso.dir/all' failed
make[4]: *** [CMakeFiles/bfdso.dir/all] Error 2
make[4]: *** Waiting for unfinished jobs....
[ 50%] Building ASM_NASM object CMakeFiles/bfpthread.dir/src/pthread/threadcontext.asm.o
error: invalid value 'c++17' in '-std=c++17'error: error:
invalid invalid valuevalue 'c++17''c++17' in in '-std=c++17''-std=c++17'
CMakeFiles/bfsyscall.dir/build.make:62: recipe for target 'CMakeFiles/bfsyscall.dir/src/syscall/syscall.cpp.o' failed
Thanks for the info! Here's a couple things I can think of for you to try:
It looks like CMake is trying to use a version of clang to compile the VMM that doesn't have c++17 support enabled. When you specified a version of clang to use, did you do it using a command line configuration like -DCMAKE_C_COMPILER=clang-6? That will set the compiler for building userspace tools, but the vmm compiler is defined through a separate toolchain file in Bareflank's build system
Instead, try setting an environment variable CLANG_BIN to see if that helps. Maybe the VMM toolchain file is finding clang 3.9.1 in your environment first, and trying to use that to compile the VMM
I've set them manually via the environment variables
export CC=clang-6.0
export CXX=clang-6.0
I've removed all other clang packages from the system, new build folder, now it's back to the same error:
[ 33%] Performing configure step for 'bfruntime_x86_64-vmm-elf'
-- The C compiler identification is Clang 6.0.0
-- The CXX compiler identification is Clang 6.0.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The ASM_NASM compiler identification is NASM version 2.12.01
-- Found assembler: /usr/bin/nasm
-- Configuring done
CMake Error at CMakeLists.txt:50 (add_library):
Target "bfcrt" links to target "vmm::bfroot" but the target was not found.
Perhaps a find_package() call is missing for an IMPORTED target, or an
ALIAS target is missing?
CMake Error at CMakeLists.txt:70 (add_library):
Target "bfdso" links to target "vmm::bfroot" but the target was not found.
Perhaps a find_package() call is missing for an IMPORTED target, or an
ALIAS target is missing?
CMake Error at CMakeLists.txt:78 (add_library):
Target "bfpthread" links to target "vmm::bfroot" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
CMake Error at CMakeLists.txt:89 (add_library):
Target "bfsyscall" links to target "vmm::bfroot" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
CMake Error at CMakeLists.txt:50 (add_library):
Target "bfcrt" links to target "vmm::bfroot" but the target was not found.
Perhaps a find_package() call is missing for an IMPORTED target, or an
ALIAS target is missing?
CMake Error at CMakeLists.txt:70 (add_library):
Target "bfdso" links to target "vmm::bfroot" but the target was not found.
Perhaps a find_package() call is missing for an IMPORTED target, or an
ALIAS target is missing?
CMake Error at CMakeLists.txt:78 (add_library):
Target "bfpthread" links to target "vmm::bfroot" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
CMake Error at CMakeLists.txt:89 (add_library):
Target "bfsyscall" links to target "vmm::bfroot" but the target was not
found. Perhaps a find_package() call is missing for an IMPORTED target, or
an ALIAS target is missing?
-- Generating done
-- Build files have been written to: /home/ssjtoma/workspace/bareflank/build/bfruntime/x86_64-vmm-elf/build
CMakeFiles/bfruntime_x86_64-vmm-elf.dir/build.make:108: recipe for target 'bfruntime/x86_64-vmm-elf/stamp/bfruntime_x86_64-vmm-elf-configure' failed
make[2]: *** [bfruntime/x86_64-vmm-elf/stamp/bfruntime_x86_64-vmm-elf-configure] Error 1
CMakeFiles/Makefile2:1324: recipe for target 'CMakeFiles/bfruntime_x86_64-vmm-elf.dir/all' failed
make[1]: *** [CMakeFiles/bfruntime_x86_64-vmm-elf.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
Getting closer...
Could you post the output of ls <path/to/your/build/dir>/export?
drt@l1:~/workspace/bareflank/build$ ls -la export/
total 12
drwxr-xr-x 2 drt drt 4096 May 2 09:20 .
drwxr-xr-x 18 drt drt 4096 May 2 09:19 ..
-rw-r--r-- 1 drt drt 1181 May 2 09:20 bfruntime-vmm-config.cmake
-rw-r--r-- 1 drt drt 0 May 2 09:17 pkg.list
Just to make sure as well, you aren't trying to build an extension? The above output is just from building the base hypervisor project, right?
I just built using your exact version of CMake and clang on my environment (Ubuntu 18.04), and it worked. I may have to spin up a Debian environment later today to investigate further... Could be a Debain specific thing?
Right now I'm just trying to build the default vmm with default settings
git clone https://github.com/bareflank/hypervisor.git
mkdir build; cd build
cmake ../hypervisor
make
Ideally once this works I can move the LibVMI extension over to the latest master
Interestingly, on Ubuntu the same thing does work ¯\_(ツ)_/¯. At least on Ubuntu 18.10, after I had to compile CMake from scratch and fix up the clang paths with symlinks.
Huh... good to know. With all of this hassle you had to go through in mind, would it be useful to you if we updated our Vagrant support to provide an easily accessible and sane build environment?
Our Vagrantfile is currently out of date (doesn't have the right version of CMake), but thats something we should be able to work out very quickly.
Change line 117 in the top CMakeLists.txt to
add_subproject(bfruntime vmm DEPENDS gsl libcxx bfroot)
Ding ding! We have a winner :) It begs the question though... why does it work on Ubuntu without this?
The variation is coming from how the actual jobs are being scheduled. Without the explicit dependency on bfroot, cmake is able to make bfruntime without waiting for bfroot.
Alright, the fix for this is merged. Thanks for letting us know about this issue @tklengyel!
|
2025-04-01T04:54:46.817668
| 2018-03-08T23:26:01
|
303669925
|
{
"authors": [
"connojd"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13381",
"repo": "Bareflank/hypervisor",
"url": "https://github.com/Bareflank/hypervisor/pull/645"
}
|
gharchive/pull-request
|
x64/hve: Fix KVM and promote bugs
exit_handler.cpp:
KVM does not allow the "load IA32_PERF_GLOBAL_CTRL" to be set to 1 in
both the vm-exit and vm-entry controls. This prevents the vmcs field
guest_ia32_perf_global_ctrl from existing, so reads and writes
to it (e.g. handle_rdmsr and handle_wrmsr in exit_handler.cpp) cause an
exception to be raised by Bareflank.
Modify the bfvmm's exit_handler private functions to only enable the
entry/exit control if it is allowed to be set to 1.
Read/write from the guest_ia32_perf_global_ctrl vmcs field only if it
exists.
vmcs_promote.asm:
Restore host cr8 on promote
@rianquinn ok if I merge this?
|
2025-04-01T04:54:46.829577
| 2023-12-28T15:26:58
|
2058563055
|
{
"authors": [
"BartoszJarocki",
"SujithThirumalaisamy",
"realsnick",
"thomasdavis"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13385",
"repo": "BartoszJarocki/cv",
"url": "https://github.com/BartoszJarocki/cv/issues/12"
}
|
gharchive/issue
|
adapt json-resume
https://jsonresume.org/
I would like to work on this. Can you please give a bit more detail? As I understand it, that link describes a standard schema. Should we base the whole thing on that standard? Sorry if I misunderstood. I think this is a pretty big change.
@SujithThirumalaisamy you can take a look at my repo https://github.com/realsnick/resume
i think there are a few things here that can be modified...i think your UI should be changed to a theme, and maybe utilize the dev environment, similar to what i have ... i use a nix project
@realsnick the new jsonresume monorepo uses next+turbo, and this seems to be a decently straightforward react component.
I might play with the idea of creating an npm package called jsonresume-theme-cv (some other name) and then clone this repo into it, setup some build scripts to induce compatibility (that can also convert between the two schemas for now). Will report back on that strategy.
closing as duplicate of #1
|
2025-04-01T04:54:46.848912
| 2016-11-12T23:56:28
|
188942253
|
{
"authors": [
"edubxb",
"nwinkler"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13386",
"repo": "Bash-it/bash-it",
"url": "https://github.com/Bash-it/bash-it/pull/834"
}
|
gharchive/pull-request
|
Fix search command for RedHat based distributions
Closes #765
Sorry for the long wait! Thanks again!
|
2025-04-01T04:54:46.865517
| 2022-01-26T07:05:46
|
1114694999
|
{
"authors": [
"Bastian",
"me4502"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13387",
"repo": "Bastian/bStats-Metrics",
"url": "https://github.com/Bastian/bStats-Metrics/pull/98"
}
|
gharchive/pull-request
|
Port to Sponge API8
Ports the Sponge bStats package to API8, the latest released version of Sponge.
Confirmed to produce correct data by adding to a dev build of WorldEdit; https://bstats.org/plugin/sponge/WorldEdit/3329
Resolves https://github.com/Bastian/bStats-Metrics/issues/97
Thank you for the PR! Lgtm!
Released in version 3.0.0. 🚀
Thanks again!
|
2025-04-01T04:54:46.872199
| 2021-03-23T17:17:28
|
838946402
|
{
"authors": [
"IceRaptor",
"ajkroeg"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13388",
"repo": "BattletechModders/IRTweaks",
"url": "https://github.com/BattletechModders/IRTweaks/issues/61"
}
|
gharchive/issue
|
Feature request: Gun + pilot ability procs status effect on hit
Per discussion with BTA, a tweak that allows applying a status effect to a target unit on hit would be interesting. Gated / specified by Gun level, perhaps.
--- BEGIN ---
At certain tiers of Gun, when hitting arms/legs/head proc a status effect with a duration driven by Gun value would be pretty easy
Could this be done such that it is not enabled automatically by taking gunnery but is activated by taking a pilot ability? So that we could make a, say, Gunnery 10 pilot skill that activates this functionality?
Because I really like the idea of weapons fire causing debuffs but don't like the idea of it being on every pilot all the time.
That's something I'd want in a specific pilot skill so that it is a choice to be activated.
FrostRaptor — Today at 12:46 PM
From a vanilla standpoint - yes, I do that all the time in my mods. I check the skill + modify it if the pilot has a specific ability.
How that works with all the new ability stuff t-bone and Jamie added I can't speak to.
bloodydoves — Today at 12:47 PM
basically, could it be added to IRTweaks or whatever with a statname that "activates" it
and that without that statname it doesn't take effect
that's all I'd need
just a way to flag it on and off
FrostRaptor — Today at 12:47 PM
For example, here's how I calculate Tactics Mods for SBI, LowVis:
public static int GetTacticsModifier(Pilot pilot) {
    return GetModifier(pilot, pilot.Tactics, "AbilityDefT5A", "AbilityDefT8A");
}
Oh yeah, statname? No problem.
bloodydoves — Today at 12:47 PM
so that I could make a pilot skill that turns it on for that pilot via statname
I'd also want configuration so I could set the various debuffs etc
legs/arms/torsos/head would probably be necessary locations
FrostRaptor — Today at 12:49 PM
Yeah, I think we'd be talking about a simple patch on AttackDirector.OnAttackSequenceEnd to check for a hit, if hit check attacker for enablement, then apply status effect to target
Famous last words but I think that'd be fairly straightforward.
Implemented
|
2025-04-01T04:54:46.877107
| 2024-01-25T19:13:50
|
2101010084
|
{
"authors": [
"DavidOry",
"arashasadabadi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13389",
"repo": "BayAreaMetro/tm2py",
"url": "https://github.com/BayAreaMetro/tm2py/issues/130"
}
|
gharchive/issue
|
🚀 Feature: Change the DTIME name to GENCOST in the skim cores
It looks like, in the Emme transit skims, DTIME represents generalized cost rather than drive-access time only. To avoid further confusion, we are going to rename DTIME to GENCOST in all skim cores. This requires changes to the tm2py scripts and to all the UECs that use DTIME.
Progress:
[x] Sufficiently defined
[ ] Approach determined
[ ] Tests developed
A related question is whether we want to include distance in the generalized cost formulation.
|
2025-04-01T04:54:46.942441
| 2020-06-24T15:00:08
|
644687149
|
{
"authors": [
"KatsiarynaV",
"dariatarakanova"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13390",
"repo": "BeamMW/android-wallet",
"url": "https://github.com/BeamMW/android-wallet/issues/460"
}
|
gharchive/issue
|
Change "Scalable confidential cryptocurrency" to "Confidential, fast, easy to use"
https://zpl.io/aXkM95M
https://zpl.io/br0v8YW
change it everywhere it appears
App's loading screen: change "Scalable confidential cryptocurrency" to "Confidential, fast, easy to use"
Checked
|
2025-04-01T04:54:47.104515
| 2015-07-10T00:56:58
|
94185702
|
{
"authors": [
"Belphemur",
"rknell"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13391",
"repo": "Belphemur/node-json-db",
"url": "https://github.com/Belphemur/node-json-db/issues/4"
}
|
gharchive/issue
|
Corruption & Async
Hello,
I have rolled my own system similar to this and am having trouble with corruption of the data when the app closes suddenly (or exits)
Just wondering if the Sync calls you make to the file system are sync because that solves that problem, and then by extension, why are they sync calls instead of Async? I would think that might hurt performance or is there a reason?
I know the answer is "Just try it out" but shoehorning your library in now would take a fair while, and it might not solve anything so I figure its worth a shot asking!
Kind regards,
Ryan
Hello @rknell,
Well, I chose sync I/O for multiple reasons. I didn't want to juggle different Promises when doing a push + save to disk. I discovered that with async I/O, if it is not used properly with callbacks or Promises, you corrupt the file you write to (since there is no locking of the file, all the writes happen at the same time).
For the sake of simplicity and readability I gave up on async and went fully sync. Sure, there is a performance downside; this is why I added the option of not saving to the file on each push, but only when the user decides (leading to possible data loss in case of a power failure).
If you want to go for async I/O, I'd advise using a lock like this lib: https://www.npmjs.com/package/rwlock
so you can block access to the file and do one write at a time.
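If full async I/O is ever attempted, the core requirement is that writes never interleave. A minimal sketch of one way to serialize them with a promise chain (a hypothetical helper, not part of this library; the rwlock package linked above is another option):

```javascript
// Hypothetical helper (not part of node-json-db): serialize async saves by
// chaining each write onto the previous one, so two saves can never write
// to the file at the same time.
// (A production version would also handle rejections so one failed write
// does not poison the queue.)
class SerializedWriter {
  constructor(writeFn) {
    this._writeFn = writeFn;        // async function performing the real write
    this._queue = Promise.resolve();
  }

  // Each save waits for all previously queued saves to finish first.
  save(data) {
    this._queue = this._queue.then(() => this._writeFn(data));
    return this._queue;
  }
}
```

Each call to save() starts only after the previous write has finished. For crash safety, writing to a temporary file and renaming it over the original is a common complementary technique.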
|
2025-04-01T04:54:47.124957
| 2022-07-30T17:32:08
|
1323234935
|
{
"authors": [
"Damen57",
"Xseba360",
"fivaz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13392",
"repo": "Benjamin-Dobell/IntelliJ-Luanalysis",
"url": "https://github.com/Benjamin-Dobell/IntelliJ-Luanalysis/issues/133"
}
|
gharchive/issue
|
FiveM support
Many people use Lua to code FiveM mods, it would be cool to have some support for FiveM global functions in Lua
@fivaz
Check these definitions out:
https://github.com/Xseba360/fivem-lua-docs
Feel free to use them in your project, let me know if you do.
I have added support for the hash literal in a forked repository:
https://github.com/Xseba360/IntelliJ-Luanalysis/commit/24a40218269ea65ddbb74a1588fcc7ef9988d298 (build artifact here)
I don't know if there's a better way to implement this for now, but it's all I was able to figure out with my close-to-zero Java/Kotlin knowledge.
Unless @Benjamin-Dobell plans to implement game/application specific syntaxes behind a plugin setting, it will have to stay in a fork.
For example, issue #109 has been left pretty much unanswered and AFAIK issue #91 has been pretty much untouched since April, so I don't predict any changes regarding this matter 😥.
I checked EmmyLua issues and it seems no one has any plans for implementing FiveM-specific backtick syntax there:
https://github.com/EmmyLua/IntelliJ-EmmyLua/issues/345
https://github.com/EmmyLua/IntelliJ-EmmyLua/issues/377
https://github.com/EmmyLua/IntelliJ-EmmyLua/issues/452#issuecomment-1022232281
i had issues with phpstorm crashing during indexing with this unfortunately :(
|
2025-04-01T04:54:47.142626
| 2019-10-04T11:41:37
|
502588819
|
{
"authors": [
"BennyCarlsson",
"souravs17031999"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13393",
"repo": "BennyCarlsson/MyPortfolio-Hacktoberfest2019",
"url": "https://github.com/BennyCarlsson/MyPortfolio-Hacktoberfest2019/pull/147"
}
|
gharchive/pull-request
|
changing link of home button on home page
Changing the previous "#home" to "#" will take you directly to the top of the page; before, the button did not work once you had scrolled down.
Awesome 👍
|
2025-04-01T04:54:47.174338
| 2022-01-21T12:16:52
|
1110406685
|
{
"authors": [
"creditcardscissors"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13394",
"repo": "Berry-Pool/nami-wallet",
"url": "https://github.com/Berry-Pool/nami-wallet/issues/264"
}
|
gharchive/issue
|
NAMI wallet won't load! The smart contracts failed me on a couple of sales as well
This is my nami address:
addr1qywpsrhcl0pzw6k8jal6rnpdgg7ej8mjmd4x28u7jyefshhxapr38pk06jjf6qtw86nasv4058zk7ngnf8lymkshw0vqhj4jxh
It will not load. It just continues to buffer endlessly. Also, I had a couple smart contract sales on jpeg store fail. Please help me resolve this issue as soon as possible.
Please do not ask me to plug in my seed phrase somewhere. Please give me real solutions and fix the problem.
|
2025-04-01T04:54:47.207835
| 2019-08-06T14:26:53
|
477414101
|
{
"authors": [
"OrkhanAlikhanov",
"ormaa"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13395",
"repo": "BiAtoms/Socket.swift",
"url": "https://github.com/BiAtoms/Socket.swift/issues/12"
}
|
gharchive/issue
|
cannot compile in Ubuntu 16.04
Hello,
the project does not seem to compile on Ubuntu 16.04 using Swift 4.1:
libressl cannot be installed.
swift build reports several errors around CLibreSSL.
Hey! Make sure you obtain them from http://apt.orkhanalikhanov.com.
Also you need to update LD_LIBRARY_PATH to include libressl. I use following in travis builds:
env LD_LIBRARY_PATH='/usr/local/lib:/usr/local/opt/libressl/lib:$LD_LIBRARY_PATH' swift test
To install libressl following should work:
echo "deb [trusted=yes] http://apt.orkhanalikhanov.com ./" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install libressl
Closing due to inactivity
|
2025-04-01T04:54:47.214340
| 2023-08-28T17:32:04
|
1870166420
|
{
"authors": [
"aymericdelab"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13396",
"repo": "BibliothecaDAO/eternum",
"url": "https://github.com/BibliothecaDAO/eternum/pull/109"
}
|
gharchive/pull-request
|
feat: [draft] entity updates + notifications
fix #106 + notifications from #66
I'm using graphql-ws to subscribe to torii ws.
I use graphql query entityUpdated to be notified when there is an entity update, but unfortunately we don't have the component values with it yet (will be implemented soon). So I'm temporarily forced to make a second query to torii to get the entity component values.
I create an Observable "entityUpdates" that I can access from anywhere in the app.
UseNotifications custom hook:
retrieves the list of entityUpdates from Torii and filters them + gives them the right EventType
checks if the labor can be harvested or the incoming resources are claimable
Notification Types
Accept/Create/Cancel Offer
Incoming resources claimable
Labor ready to be harvested
Missing features from #66 that should be added later
historical notifications (ex: your offers accepted before page refresh)
notifications management (filtering)
notification interaction (click on harvest notification and it harvests the labor, click on resources claimable and it should claim them, ...)
@r0man1337 could take a look at :
how to deal with overflow when there are too many notifications
better layout for the claimable order notifications
|
2025-04-01T04:54:47.219150
| 2022-07-21T18:18:02
|
1313632125
|
{
"authors": [
"123rolle",
"joecksma"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13397",
"repo": "BigBoot/GW4Remap",
"url": "https://github.com/BigBoot/GW4Remap/issues/11"
}
|
gharchive/issue
|
Samsung pay -> google pay doesn't work (wear os 3.2)
Samsung Galaxy Watch 4 on Wear OS 3.2. Remapping Samsung Pay to Google Pay stopped working... tried with 2 different watches, same thing.
See https://github.com/BigBoot/GW4Remap/issues/10#issuecomment-1191855225
|
2025-04-01T04:54:47.238007
| 2024-12-09T16:15:30
|
2727590798
|
{
"authors": [
"robross0606"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13398",
"repo": "BigThunderSR/adt-pulse-mqtt",
"url": "https://github.com/BigThunderSR/adt-pulse-mqtt/issues/202"
}
|
gharchive/issue
|
No longer parsing zones correctly
It appears the HTML being returned by ADT Pulse now uses a div instead of a span tag wrapping the zone data. Because of this, all zones are evaluating to 0 out of cheerio and breaking updates.
I'd like permission to put together an MR for both this and #201.
This is working fine for HASS because HASS doesn't really need to discern the difference between "new" detected devices and "update" to existing devices. However, SmartThings MQTT Discovery requires this. If you debug the code, you'll find that theZone isn't being parsed correctly and theZoneNumber ends up being 0 all the time. This, in turn, makes every device have an id of "sensor-0", which then causes SmartThings to mix up new devices vs. updates when attempting to work in tandem with #201 changes.
Looking at the HTML being fed into cheerio during Pulse parsing, you can see that the zone information is in a div tag instead of a span:
<div class="p_grayNormalText">Zone&nbsp;8</div>
The use of a non-breaking space also ends up messing with the regex to parse out theZoneNumber. I have a fix for all this that I'm testing right now and can have an MR up fairly soon, but I do have some extra questions first:
I'm a bit confused about the purposes of the adt-pulse-mqtt-test and x-adt-pulse-mqtt-test-alpha folders. Do I need to update these in the MR as well? Should those be removed? Left alone?
Should I bump the version to 4.0.0 as part of the MR?
In order to separate #202 and #201, I'd have to do #202 first. Right now, I have them at the same time because a #202 fix is required in order to make #201 work.
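The parsing change described above can be sketched roughly like this. This is a hypothetical helper, not the actual adt-pulse-mqtt code; the selector, function name, and the 0 fallback are assumptions taken from the description:

```javascript
// Hypothetical sketch of the zone parsing fix described above, not the
// actual adt-pulse-mqtt code. With cheerio, the selector would move from
// span to div, e.g. $('div.p_grayNormalText').
function parseZoneNumber(text) {
  // A regex written with a literal space won't match the &nbsp; (\u00a0)
  // in "Zone\u00a08", so normalize it to a plain space first.
  const normalized = text.replace(/\u00a0/g, ' ');
  const match = normalized.match(/Zone (\d+)/i);
  // Falling back to 0 on a failed match is what produced the
  // "sensor-0" id collisions described in the issue.
  return match ? parseInt(match[1], 10) : 0;
}

console.log(parseZoneNumber('Zone\u00a08')); // 8
```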
|
2025-04-01T04:54:47.335540
| 2016-12-16T14:25:44
|
196070673
|
{
"authors": [
"BinRoot",
"DiegoPortoJaccottet",
"chmnoh",
"enterkey1a",
"johndpope",
"minkoon",
"toasteez"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13399",
"repo": "BinRoot/TensorFlow-Book",
"url": "https://github.com/BinRoot/TensorFlow-Book/issues/3"
}
|
gharchive/issue
|
ch02_basics Concept06_saving_variables
When running the following code from code:
save_path = saver.save(sess, "spikes.ckpt")
print("spikes data saved in file: %s" % save_path)
I receive an error.
I am running on Windows 7 in conda virtual environment and Jupyter.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-36-444ce612b9fd> in <module>()
----> 1 save_path = saver.save(sess, "spikes.ckpt")
2 print("spikes data saved in file: %s" % save_path)
C:\Program Files\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\saver.py in save(self, sess, save_path, global_step, latest_filename, meta_graph_suffix, write_meta_graph, write_state)
1312 if not gfile.IsDirectory(os.path.dirname(save_path)):
1313 raise ValueError(
-> 1314 "Parent directory of {} doesn't exist, can't save.".format(save_path))
1315
1316 save_path = os.path.dirname(save_path)
ValueError: Parent directory of spikes.ckpt doesn't exist, can't save.
I recommend you provide an Ubuntu virtual machine image link (not Docker), rather than trying to get this code running on every machine.
Try this
<EMAIL_ADDRESS>
I have the same issue. I am currently using Windows 10 and Jupyter notebooks. To solve this problem, is it necessary to use a virtual environment?
You may try absolute path, for example:
"C:\\Users\\YourName\\Documents\\spikes.ckpt"
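The absolute-path workaround lines up with the traceback above: the saver checks that the parent directory of the save path exists, and for a bare filename `os.path.dirname()` returns an empty string. A quick illustration:

```python
import os

# os.path.dirname() of a bare filename is '', which is not a directory,
# so the saver's IsDirectory() check fails and raises ValueError.
print(os.path.dirname("spikes.ckpt"))                   # ''
print(os.path.isdir(os.path.dirname("spikes.ckpt")))    # False

# A "./" prefix (or an absolute path) gives the check a real directory.
print(os.path.dirname("./spikes.ckpt"))                 # '.'
print(os.path.isdir(os.path.dirname("./spikes.ckpt")))  # True
```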
I got a similar problem in "Ch 02: Concept 07", couldn't load the variables until I used the full path:
try:
... saver.restore(sess, 'spikes.ckpt')
... print(spikes.eval())
... except:
... print('file not found')
...
W tensorflow/core/framework/op_kernel.cc:975] Not found: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for spikes.ckpt
file not found
try:
... saver.restore(sess, '/home/diego/spikes.ckpt')
... print(spikes.eval())
... except:
... print('file not found')
...
[False False True False False True False True]
Interesting. Have you tried running restore on "./spikes.ckpt"? It seems to work with the "./" prefix.
Change the path to point to an exact directory.
For example:
checkpoint_name = "C:\Users\Minkun\Desktop\VTIS_Project\model.ckpt"
Note the backslashes ('\') used instead of forward slashes ('/').
|
2025-04-01T04:54:47.391305
| 2015-03-15T18:32:55
|
61879921
|
{
"authors": [
"Bionus",
"GoogleCodeExporter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13401",
"repo": "Bionus/imgbrd-grabber",
"url": "https://github.com/Bionus/imgbrd-grabber/issues/283"
}
|
gharchive/issue
|
Derpibooru won't load pictures, produce correct URL, and the login doesn't worked either.
What steps will reproduce the problem?
Do a search on any valid tag, and you get zero pictures.
When you copy the URLs, you get the following.
http://derpibooru.org//derpicdn.net/etc.
The above will not work.
To make it work you have to remove the http://derpibooru.org// part. So I'm guessing Derpibooru must have changed its coding a little.
Try to log in to your account...I dare you. Either I'm doing something wrong, or it just plain won't work for some reason.
What is the expected output? What do you see instead?
I expect images to appear when a valid tag is used...this hasn't happened yet.
Also, the downloads screen is only semi-functional. It will give you links, but it won't download anything. You have to copy the URLs and edit out the http://derpibooru.org// part from the links using Notepad's find-and-replace feature.
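The broken http://derpibooru.org//derpicdn.net/... URLs look like protocol-relative URLs (//derpicdn.net/...) being naively prefixed with the site base. A sketch of the normalization the fix would need, in illustrative Python (not Grabber's actual C++/Qt code):

```python
# Illustrative only: not Grabber's actual code. Derpibooru appears to return
# protocol-relative URLs like "//derpicdn.net/img/...", and prepending the
# site base to those yields the broken "http://derpibooru.org//derpicdn.net/...".
def normalize_url(url, base="http://derpibooru.org"):
    if url.startswith("//"):   # protocol-relative: only a scheme is missing
        return "http:" + url
    if url.startswith("/"):    # site-relative: prepend the base
        return base + url
    return url                 # already absolute

print(normalize_url("//derpicdn.net/img/view/2014/2/21/pic.png"))
# http://derpicdn.net/img/view/2014/2/21/pic.png
```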
What version of the product are you using? On what operating system?
3.4.1, and I'm using Windows 7.
Please also attach the log file if possible.
Being 789KB in size it's too big for posting, but here is a truncated version.
[18:28:54.534] New session started.
[18:28:54.535] Software version: 3.4.1.
[18:28:54.535] Path: C:/Program Files (x86)/Grabber
[18:28:54.535] Loading preferences from C:\Users\Ande\Grabber\settings.ini
[18:28:54.549] It seems that Imgbrd-Grabber hasn't shut down properly last time.
[18:28:56.938] Updating "New tab" tab options.
[18:28:56.938] Updating checkboxes.
[18:28:56.943] Loading results...
[18:28:56.944] Loading page http://derpibooru.org/images.json?key=511985&page=1&nocomments=1&nofav=1
[18:28:56.973] Updating "New tab" tab options.
[18:28:56.973] Updating checkboxes.
[18:28:59.189] Receiving page
http://derpibooru.org/images.json?key=511985&page=1&nocomments=1&nof
av=1
[18:29:01.760] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558491__safe_solo_oc_oc+
only_pegasus_blue+flame_artist-colon-meetbun.png).
[18:29:01.824] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558490__safe_rainbow+das
h_meme.jpg).
[18:29:01.877] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558489__safe_solo_oc_ani
mated_oc+only_pegasus_artist-colon-mirry92_blue+flame.gif).
[18:29:01.955] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558485__safe_pinkie+pie_
fluttershy_rarity_artist-colon-dac0n.png).
[18:29:01.980] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558487__safe_applejack_p
rincess+luna_monochrome_sketch_artist-colon-archonix_lunajack.png).
[18:29:02.081] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558486__twilight+sparkle
_questionable_lesbian_sunset+shimmer_strategically+covered_sunsetsparkle_artist-
colon-xenalollie.jpeg).
[18:29:02.084] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558483__solo_oc_anthro_s
uggestive_solo+female_clothes_oc+only_unicorn_cleavage_corset.png).
[18:29:02.163] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558481__solo_questionabl
e_solo+female_ponified_unicorn_wink_cameltoe_wet_catherine_artist-colon-paulyena
.png).
[18:29:02.174] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558488__safe_solo_oc_oc+
only_glasses_pegasus_headphones_artist-colon-xenalollie.jpeg).
[18:29:02.240] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558482__oc_nudity_questi
onable_human_crossover_breasts_source+needed_interspecies_bioshock_bioshock+infi
nite.jpeg).
[18:29:02.268] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558479__safe_solo_socks_
diamond+tiara_eyes+closed_artist-colon-handsockz.png).
[18:29:02.354] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558480__safe_oc_human_cr
ossover_source+needed_interspecies_bioshock_bioshock+infinite_artist-colon-metal
foxxx_elizabeth.jpeg).
[18:29:02.378] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558477__safe_solo_sweeti
e+belle_socks_eyes+closed_artist-colon-handsockz.png).
[18:29:02.427] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558467__safe_oc_anthro_p
rincess+celestia_questionable_obese_artist-colon-duragan_princess+celestialess.p
ng).
[18:29:02.759] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2014/2/21/558484__safe_twilight+sp
arkle_pinkie+pie_fluttershy_comic_princess+twilight_equestria+girls_princess+cad
ance_changeling_mirror.png).
[18:29:13.401] Loading results...
[18:29:13.403] Loading page
http://derpibooru.org/search.json?key=511985&page=1&q=twilight_princess&
amp;nocomments=1&nofav=1
[18:29:15.155] Receiving page
http://derpibooru.org/search.json?key=511985&page=1&q=twilight_princess&
amp;nocomments=1&nofav=1
[18:29:18.010] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/10/16/449813__safe_oc_crossov
er_nintendo_commission_the+legend+of+zelda_twilight+princess_artist-colon-hosend
amaru.png).
[18:29:18.086] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/11/25/481688__safe_solo_meme_
twilight+scepter_the+legend+of+zelda_zelda_twilight+princess_midna_artist-colon-
dovey_quill.png).
[18:29:18.140] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/10/1/439493__safe_applejack_c
rossover_riding_winona_the+legend+of+zelda_artist-colon-dimwitdog_zelda_twilight
+princess_midna.jpg).
[18:29:18.213] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/9/27/435920__safe_princess+lu
na_clothes_fat_costume_morbidly+obese_the+legend+of+zelda_impossibly+thumb+belly
_big+belly_chubby+cheeks.png).
[18:29:18.489] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/9/22/432741__safe_twilight+sp
arkle_applejack_animated_scootaloo_equestria+girls_nintendo_youtube_the+legend+o
f+zelda_youtube+link.gif).
[18:29:18.578] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/9/22/432739__safe_twilight+sp
arkle_applejack_animated_scootaloo_equestria+girls_nintendo_the+legend+of+zelda_
transformation_link.gif).
[18:29:18.603] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/9/22/432535__safe_fluttershy_
the+legend+of+zelda_uncanny+valley_lantern_zelda_twilight+princess_artist-colon-
letekky_legend+of+zelda+-colon-+twilight+princess_ooccoo.png).
[18:29:18.970] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/8/5/391596__safe_solo_twiligh
t+sparkle_traditional+art_watermark_nintendo_the+legend+of+zelda_twilight+prince
ss_artist-colon-puppyluver.jpeg).
[18:29:19.015] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/6/7/342413__safe_equestria+gi
rls_sunset+shimmer_spoiler-colon-equestria+girls_the+legend+of+zelda_sunset+sata
n_twilight+princess_midna.jpg).
[18:29:19.043] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/5/13/323512__safe_spike_eques
tria+girls_spike+the+dog_twilight+princess_wolf+link.jpg).
[18:29:19.087] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/3/30/283686__safe_solo_twilig
ht+sparkle_crossover_princess+twilight_smile_alternate+hairstyle_nintendo_the+le
gend+of+zelda_twilight+princess.png).
[18:29:19.268] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/3/30/283153__safe_twilight+sp
arkle_princess+twilight_princess_the+legend+of+zelda_twilight+(series)_twilight+
princess_midna_edward+cullen_know+the+difference.png).
[18:29:19.406] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/3/21/275887__safe_vector_poni
fied_nintendo_the+legend+of+zelda_twilight+princess_midna_artist-colon-sadlylove
r.jpeg).
[18:29:19.784] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/6/7/342527__safe_equestria+gi
rls_sunset+shimmer_spoiler-colon-equestria+girls_the+legend+of+zelda_sunset+sata
n_twilight+princess_zant.jpg).
[18:29:20.020] Warning: one of the thumbnails is empty
(http://derpibooru.org//derpicdn.net/img/view/2013/9/4/418931__safe_twilight+spa
rkle_crossover_the+legend+of+zelda_artist-colon-gardelius_twilight+princess_wolf
+link_legend+of+zelda+-colon-+twilight+princess.png).
[18:29:20.403] Batch download started.
[18:29:20.405] Loading page
http://derpibooru.org/search.json?key=511985&page=1&q=twilight_princess&
amp;nocomments=1&nofav=1
[18:29:20.405] Loading page
http://derpibooru.org/search.json?key=511985&page=2&q=twilight_princess&
amp;nocomments=1&nofav=1
[18:29:20.406] Loading page
http://derpibooru.org/search.json?key=511985&page=3&q=twilight_princess&
amp;nocomments=1&nofav=1
[18:29:20.406] Loading page
http://derpibooru.org/search.json?key=511985&page=4&q=twilight_princess&
amp;nocomments=1&nofav=1
[18:29:20.407] Loading page
http://derpibooru.org/search.json?key=511985&page=5&q=twilight_princess&
amp;nocomments=1&nofav=1
[18:29:20.407] Loading page
http://derpibooru.org/search.json?key=511985&page=6&q=twilight_princess&
amp;nocomments=1&nofav=1
[18:29:20.408] Loading page
http://derpibooru.org/search.json?key=511985&page=7&q=twilight_princess&
amp;nocomments=1&nofav=1
[18:29:20.408] Loading page
http://derpibooru.org/search.json?key=511985&page=8&q=twilight_princess&
amp;nocomments=1&nofav=1
[18:29:20.409] Loading page
http://derpibooru.org/search.json?key=511985&page=9&q=twilight_princess&
amp;nocomments=1&nofav=1
[18:29:20.409] Loading page
http://derpibooru.org/search.json?key=511985&page=10&q=twilight_princess
&nocomments=1&nofav=1
[18:29:20.410] Loading page
http://derpibooru.org/search.json?key=511985&page=11&q=twilight_princess
&nocomments=1&nofav=1
[18:29:20.410] Loading page
http://derpibooru.org/search.json?key=511985&page=12&q=twilight_princess
&nocomments=1&nofav=1
[18:29:20.411] Loading page
http://derpibooru.org/search.json?key=511985&page=13&q=twilight_princess
&nocomments=1&nofav=1
[18:29:20.411] Loading page
http://derpibooru.org/search.json?key=511985&page=14&q=twilight_princess
&nocomments=1&nofav=1
[18:29:20.412] Loading page
http://derpibooru.org/search.json?key=511985&page=15&q=twilight_princess
&nocomments=1&nofav=1
[18:29:20.412] Loading page
http://derpibooru.org/search.json?key=511985&page=16&q=twilight_princess
&nocomments=1&nofav=1
<Truncated pass this point>
[18:34:11.065] Received page
http://derpibooru.org/search?key=511985&page=800&sbq=twilight_princess
(0 results)
[18:34:11.084] Received page
http://derpibooru.org/search?key=511985&page=467&sbq=twilight_princess
(0 results)
[18:34:11.209] Received page
http://derpibooru.org/search?key=511985&page=139&sbq=twilight_princess
(0 results)
[18:34:12.531] Received page
http://derpibooru.org/search?key=511985&page=2125&sbq=twilight_princess
(0 results)
[18:34:12.626] Received page
http://derpibooru.org/search?key=511985&page=2241&sbq=twilight_princess
(0 results)
[18:34:12.638] Received page
http://derpibooru.org/search?key=511985&page=794&sbq=twilight_princess
(0 results)
[18:34:12.903] Received page
http://derpibooru.org/search?key=511985&page=2119&sbq=twilight_princess
(0 results)
[18:34:13.406] Received page
http://derpibooru.org/search?key=511985&page=2521&sbq=twilight_princess
(0 results)
[18:34:13.589] Received page
http://derpibooru.org/search?key=511985&page=2247&sbq=twilight_princess
(0 results)
[18:34:13.685] Received page
http://derpibooru.org/search?key=511985&page=2516&sbq=twilight_princess
(0 results)
[18:34:13.686] All images' urls have been received (42 images).
[18:34:13.726] Downloading images directly.
[18:34:13.728] File already exists: c:/pony/Pony
[18:34:13.730] File already exists: c:/pony/Pony
...
[18:34:13.807] File already exists: c:/pony/Pony
[18:34:13.808] File already exists: c:/pony/Pony
[18:34:13.809] Images download finished.
[19:07:01.580] Batch download finished
[19:07:07.601] Saving...
Original issue reported on code.google.com by<EMAIL_ADDRESS>on 22 Feb 2014 at 12:13
See issue #301
|
2025-04-01T04:54:47.401030
| 2017-02-07T14:57:50
|
205912620
|
{
"authors": [
"Bionus",
"MasterPetrik"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13402",
"repo": "Bionus/imgbrd-grabber",
"url": "https://github.com/Bionus/imgbrd-grabber/issues/770"
}
|
gharchive/issue
|
Crash when trying to open preview
What steps will reproduce the problem?
MD5 search "25b48c3adf74f61f7afe37836fa4ee7f"
trying to open preview from sonohara.donmai.us
CRASH
What is the expected behavior? What do you get instead?
Crash of Grabber instead of opening preview.
What version of the program are you using? On what operating system?
tested from 5.0.0 to 5.1.1 win7 64
Please provide any additional information below
Can confirm that the crash still appears in 5.2.0 and is still 100% repeatable.
P.S.
The 5.2.0 alpha is very unstable; it's a clever decision not to release it yet.
I couldn't reproduce it myself when testing. Is this image already saved on your hard drive? Is there anything in the log that could explain this crash?
What do you mean 5.2.0 is unstable?
I did a lot of tests without issues yesterday. I'll gladly make fixes and push the 5.2.1 tonight or tomorrow.
Well, I used 5.2.0 only for a couple of minutes but already found several different crashes and other bugs.
I couldn't reproduce it myself when testing. Is this image already saved on your hard drive? Is there anything in the log that could explain this crash?
No, the image isn't saved on the hard drive. Nothing unusual in the log after displaying results, but trying to open the preview from sonohara.donmai.us results in an immediate crash, so I can't see any changes in the log.
here is the info about crash:
https://drive.google.com/open?id=0B3SK4Pf9wsnmYWxyNEJsOW5Hb00
Indeed, it seems the released version had differences with the one I was using for testing, I'll check that out and push another 5.2.1 pre-release, sorry.
Can you please check whether the crash reporter window and the restore session window have "always-on-top" status?
For me they are not always-on-top; is that normal behavior?
Thanks for the log, I get it now. I'll fix it in next patch.
BTW, if you have examples of problems you have with 5.2.0, I'm all ears.
better post it here or create issues for each problem?
Better create one issue for all, like "5.2.0 issues".
|
2025-04-01T04:54:47.402451
| 2020-11-14T01:35:17
|
742889729
|
{
"authors": [
"GiovanH"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13403",
"repo": "Bionus/imgbrd-grabber",
"url": "https://github.com/Bionus/imgbrd-grabber/pull/2178"
}
|
gharchive/pull-request
|
Develop builds are on travis-ci.org, not github
Resolves #2177
(Note the changes to the contributors list are line-endings only)
This is incorrect, develop builds do not seem to be available on travis either
|
2025-04-01T04:54:47.405104
| 2017-03-02T11:21:43
|
211362443
|
{
"authors": [
"b2m",
"coveralls",
"fraboeni"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13404",
"repo": "BioroboticsLab/bb_tracking",
"url": "https://github.com/BioroboticsLab/bb_tracking/pull/3"
}
|
gharchive/pull-request
|
Feature engineering and fragment tracking
The feature engineering and fragment tracking code contains features that are useful for tracking on a per-fragment basis. Even though not all of the implemented features are used in the current tracking solution, they might be important in further iterations and should therefore be available in the repo.
Coverage decreased (-20.3%) to 79.709% when pulling cfcf6a3a3aba85222e5e00a34130426cdb0043d7 on feature_engineering_and_fragment_tracking into 8905bf03aa30fd43b91ab94f8659343afb1a60dc on master.
Closed because of missing tests for scoring functions.
|
2025-04-01T04:54:47.429510
| 2022-03-29T12:44:30
|
1184811408
|
{
"authors": [
"codecov-commenter",
"fuxingloh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13405",
"repo": "BirthdayResearch/oss-governance-bot",
"url": "https://github.com/BirthdayResearch/oss-governance-bot/pull/126"
}
|
gharchive/pull-request
|
Update README.md to point to new org
What kind of PR is this?:
/kind chore
Codecov Report
:exclamation: No coverage uploaded for pull request base (main@99f1d48). Click here to learn what that means.
The diff coverage is n/a.
:exclamation: Current head 91a7cd4 differs from pull request most recent head bd1a41e. Consider uploading reports for the commit bd1a41e to get more accurate results
@@ Coverage Diff @@
## main #126 +/- ##
=======================================
Coverage ? 97.33%
=======================================
Files ? 14
Lines ? 488
Branches ? 139
=======================================
Hits ? 475
Misses ? 8
Partials ? 5
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 99f1d48...bd1a41e. Read the comment docs.
|
2025-04-01T04:54:47.441918
| 2024-10-20T02:07:11
|
2599838996
|
{
"authors": [
"Anuj3553",
"Himanshu-kumar025"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13406",
"repo": "Bitbox-Connect/Bitbox",
"url": "https://github.com/Bitbox-Connect/Bitbox/pull/176"
}
|
gharchive/pull-request
|
Improve the styling of FAQ section
Fixes issue #155
Title:
Improve the styling of FAQ section
Screenshots/Video (mandatory)
before:
after:
Checklist:
[x] I have mentioned the issue number in my Pull Request.
Additional context (Mandatory):
Are you contributing under any Open-source programme?
GSSoC'24 and Hacktoberfest'24
[x] I'm a GSSOC-EXT contributor
[x] I'm a HACKTOBERFEST contributor
Well done @Himanshu-kumar025
|
2025-04-01T04:54:47.504001
| 2018-10-16T19:56:03
|
370774066
|
{
"authors": [
"davidtmiller",
"kavierkoo",
"narbhar",
"rk1809"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13407",
"repo": "BlackrockDigital/startbootstrap-resume",
"url": "https://github.com/BlackrockDigital/startbootstrap-resume/issues/33"
}
|
gharchive/issue
|
why profile pic is not showing in mobile browsers
Hi,
Thanks for the awesome template.
I modified and hosted the template, which looks good on desktop and mobile, but the profile picture is missing in mobile browsers.
Is it designed that way, or am I missing something?
I think it's by design.
Anyway, if you want to show your profile picture on mobile,
simply remove the d-none class from the image's wrapper.
Instead of:
<span class="d-none d-lg-block">
<img class="img-fluid img-profile rounded-circle ">
</span>
This works for me on mobile browser:
<span class="d-lg-block">
<img class="img-fluid img-profile rounded-circle ">
</span>
If you remove the "d-none" class, it seems to display only the profile image in mobile view.
I see what you meant... after removing "d-none" the image looks huge,
even after I tried to resize it by changing the width and height:
<span class="d-lg-block">
<img class="img-fluid img-profile rounded-circle mx-auto mb-2" src="xxxx" height="100px" width="100px">
</span>
Anyway, if you only want to change the mobile view, you can try responsive CSS:
@media only screen and (max-width:500px) {
/* For mobile */
}
but it still looks weird, as it falls under the navbar.
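One possible shape for that media query, constraining the image on small screens (the 500px breakpoint and the sizes here are purely illustrative, not values from the theme):

```css
@media only screen and (max-width: 500px) {
  /* Keep the profile picture a reasonable size on mobile and push it
     below the fixed-top navbar it was falling behind. */
  .img-profile {
    max-width: 8rem;
    max-height: 8rem;
    margin-top: 4rem;
  }
}
```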
This is a feature of the theme. You could certainly add it back in at another part of the page by using Bootstrap's responsive display utilities, but as it stands the profile picture will disappear on mobile since it's included as part of the navigation.
|
2025-04-01T04:54:47.517556
| 2022-05-05T15:41:32
|
1226852122
|
{
"authors": [
"Blake-Madden",
"PBfordev"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13408",
"repo": "Blake-Madden/Wisteria-Dataviz",
"url": "https://github.com/Blake-Madden/Wisteria-Dataviz/pull/4"
}
|
gharchive/pull-request
|
Fix setting compiler flags in multi-config generators
This is an attempt to fix Issue #2. TBH, I am not a fan of forcing compiler options like this, but I followed the suit.
Using $<$<NOT:$<CONFIG:Debug>X> instead of $<$<CONFIG:Release>X> would probably be more to the letter of the original code but IMO my solution is more to its spirit.
It appears there could be more improvements done in the CMakeLists.txt: For example turning on multiprocessor compilation for MSVS generators, which improves the build times significantly. I did not dare to make these yet.
EDIT
I somehow managed to make a typo in the comment, which I fixed in another commit (I also fixed the typo that was already there). So, were the PR to be accepted, the two commits would have to be squashed.
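For context, the two generator-expression styles being compared look roughly like this (the target name and flag are placeholders, not the project's actual options):

```cmake
# Apply a flag in every configuration except Debug: safe for multi-config
# generators, since RelWithDebInfo/MinSizeRel are covered too.
target_compile_options(mytarget PRIVATE
    $<$<NOT:$<CONFIG:Debug>>:/O2>)

# Keying on Release only would silently skip the other non-Debug configs.
target_compile_options(mytarget PRIVATE
    $<$<CONFIG:Release>:/O2>)
```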
Thanks, I'll try this out with CMakeUI and VS this weekend...
|
2025-04-01T04:54:47.530632
| 2016-06-02T11:03:09
|
158114611
|
{
"authors": [
"Kurre",
"gregorypratt"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13409",
"repo": "BlazeCSS/blaze",
"url": "https://github.com/BlazeCSS/blaze/issues/37"
}
|
gharchive/issue
|
Grid cell widths per media query breakpoint
http://blazecss.com/community/forum/#!/general:grid-width-classes-per-bre
Might increase file size quite a bit but would make the grid very flexible
I had to improvise this feature because we needed this on our project.
You can take a look in here if you want to: https://github.com/Kurre/blaze/commit/9e35413fdccd4ca479bec9263c24400498d650b9
I'm by no means a Sass expert, and this was a minimum viable product just to get the job done until you roll out an official way to do it (when it's ready of course, no rush) :smile:
Yeah, that's a nice approach. I'm conscious of how much code this will produce, but I'm happy the final payload size will be outweighed by the increased usefulness of the framework :)
Thank you for this feature (and other new ones)! Really appreciate it :+1:
|
2025-04-01T04:54:47.559359
| 2022-06-21T13:02:03
|
1278416901
|
{
"authors": [
"DaanV2",
"mitgobla"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13410",
"repo": "Blockception/Minecraft-bedrock-json-schemas",
"url": "https://github.com/Blockception/Minecraft-bedrock-json-schemas/pull/29"
}
|
gharchive/pull-request
|
Modify timer schema
Update the example to use the correct event schema. Previously it was using a string and not the event schema.
Wrote documentation for the value field in random_time_choices
Looking good!
|
2025-04-01T04:54:47.604437
| 2019-03-05T13:38:06
|
417302670
|
{
"authors": [
"julienmachon",
"kenjinp"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13411",
"repo": "BlueBrain/nexus",
"url": "https://github.com/BlueBrain/nexus/issues/501"
}
|
gharchive/issue
|
Previews
Previews should be lazy loaded and displayed in both lists and resource detail pages based on the mediatype
done for images
|
2025-04-01T04:54:47.613839
| 2023-11-05T12:06:56
|
1977759339
|
{
"authors": [
"BlueHatbRit",
"brunetton"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13414",
"repo": "BlueHatbRit/mdpdf",
"url": "https://github.com/BlueHatbRit/mdpdf/issues/162"
}
|
gharchive/issue
|
Margins are not respected (--border options)
Hi,
First of all: Thanks for this great tool ! It's exactly what I was looking for ! :heart:
I'm trying to adjust the margins of a generated document, but the margins I give are not respected. Here are the commands used and the measured left margins in mm:
mdpdf test.md => 21.9 (no problem here as default margins is 20, there's probably a little margin due to CSS)
mdpdf test.md --border=20 => 7.5
mdpdf test.md --border=50 => 15
mdpdf test.md --border=75 => 22
mdpdf test.md --border=100 => 29
mdpdf test.md --border=150 => 41
The factor between asked border and effective border seems to be around 3.3, except for --border=20 where it's around 2.6. This is quite strange and not really ideal to use ;)
Steps to reproduce
create a test.md file containing:# Test page
Adipisicing veniam eu laborum esse sit. Reprehenderit sunt adipisicing culpa labore eiusmod voluptate enim Lorem fugiat duis. Anim Lorem voluptate duis qui.
generate corresponding PDF file using mdpdf only using --border options
use Inkscape PDF functionality to mesure effective generated margins
mdpdf 3.0.1
Thanks for your time
Sorry for taking so long to get to this. Unfortunately those border values are consumed by puppeteer and we don't really touch them. I'd suggest raising this with them if you think the values aren't popping out as they should be. I wish I could be more help here!
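One plausible explanation for the factor observed above (not verified against mdpdf's or puppeteer's source, so treat it as a guess): the border value may be interpreted as CSS pixels while the measurements were taken in millimetres. At the CSS standard of 96 px per inch, the px/mm ratio is close to the ~3.3 factor reported:

```javascript
// Hypothesis only: --border values read as CSS pixels, measurements in mm.
// At 96 CSS px per inch and 25.4 mm per inch, 1 mm ~= 3.78 px,
// which is in the neighbourhood of the ~3.3 ratio measured above.
const PX_PER_MM = 96 / 25.4;

const mmToPx = (mm) => mm * PX_PER_MM; // e.g. request this many px to get `mm` millimetres
const pxToMm = (px) => px / PX_PER_MM; // e.g. what a px border comes out as, in mm
```

Under this hypothesis, `--border=100` would produce roughly `pxToMm(100)` ≈ 26.5 mm, close to the 29 mm measured.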
|
2025-04-01T04:54:47.615527
| 2023-06-19T18:08:09
|
1763983328
|
{
"authors": [
"Tu5k4rr"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13415",
"repo": "BlueMap-Minecraft/BlueMapWiki",
"url": "https://github.com/BlueMap-Minecraft/BlueMapWiki/pull/34"
}
|
gharchive/pull-request
|
Update Installation.md
Missing "/app/" and docker container cant find world mount.
can find it now fixing path.
|
2025-04-01T04:54:47.653641
| 2023-04-05T20:00:51
|
1656208347
|
{
"authors": [
"mindthespines"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13416",
"repo": "BobaBoard/boba-backend",
"url": "https://github.com/BobaBoard/boba-backend/pull/127"
}
|
gharchive/pull-request
|
Setup enhancement
Updates a dead link in the repo's README
Adds the dev environment's .env and firebase-sdk.json contents to an env-setup directory as example files
Adds a node script (env-setup/copyfiles.js) to copy the example files to the project root and name them appropriately
Adds a script (setup-env) to the package.json to run as part of project setup that runs the copying
We'll also want to update the docs to include the new step if it looks good. (But the current way of doing it will still work, so there's no real urgency there.)
I figured since the info we use to populate these files is already out on the unprotected internet it would be fine to include it in the repo itself, but if that's not the case I'll delete my branch!
We could also get away with just using cp in the package.json script since we ask Windows devs to use WSL, but on principle I wanted to make it cross-platform. There are also libraries that copy files but it seemed very silly to add a new dependency for that, hence just writing it myself.
Update made! I've also opened a PR on the docs repo here to update the described backend setup process.
|
2025-04-01T04:54:47.656603
| 2016-11-03T13:05:10
|
187054244
|
{
"authors": [
"ChristianPeter",
"danfri86",
"viptec"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13417",
"repo": "Bogdan1975/ng2-slider-component",
"url": "https://github.com/Bogdan1975/ng2-slider-component/issues/19"
}
|
gharchive/issue
|
Problem with margin-left on parent component
if I have a html element which has a margin-left set, the calculation of the sliders seems buggy.
just open the demo page and change the margin-left style attribute of div id="app-container" to 200px and try to use any of the sliders.
The result I'm getting is that I cannot scroll to the lower limit - on the other hand, I'm able to scroll beyond the limits of the slider component.
I'm having this problem as well. The problem is not only when the parent has margin, but also the parents parent. And padding gives me the same problem.
Did you find a solution to this @viptec?
No, I switched to bootstrap-slider ;-)
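For what it's worth, bugs of this kind often come from mixing page coordinates with offsets that ignore ancestor margins/padding. A minimal sketch of a margin-independent position calculation (illustrative names only, not ng2-slider's actual code):

```javascript
// Compute the pointer position relative to the slider track, as a 0..1 ratio.
// getBoundingClientRect() already accounts for every ancestor's margin,
// padding and border, so parent styling cannot skew the result.
function relativePosition(event, trackEl) {
  const rect = trackEl.getBoundingClientRect();
  const ratio = (event.clientX - rect.left) / rect.width;
  return Math.min(Math.max(ratio, 0), 1); // clamp so the handle can't escape the track
}
```

Using `offsetLeft`/`pageX` arithmetic instead is what typically breaks once a parent gains a margin.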
|
2025-04-01T04:54:47.677490
| 2023-03-12T10:46:32
|
1620316661
|
{
"authors": [
"exentio",
"ssddanbrown"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13418",
"repo": "BookStackApp/BookStack",
"url": "https://github.com/BookStackApp/BookStack/issues/4094"
}
|
gharchive/issue
|
Markdown editor on Chrome for Android largely unusable
Describe the Bug
When trying to edit text with the markdown editor on Chrome for Android, the cursor skips all over the place, the scroll jumps back to the previous position of the cursor, selecting (especially select all) is often pointless, and other major annoyances that make it impossible to use. The issue doesn't happen with the WYSIWYG editor, and it doesn't seem to happen with Firefox. Also confirmed not to happen on iPad OS 16.3.1 using Chrome (on SlideOver to force the mobile interface).
My keyboard is Gboard and my device is a Google Pixel 6 on Android 13 (stock firmware).
Steps to Reproduce
Open the markdown editor on any text
Try to delete and write characters, scroll up or down
Expected Behaviour
The editing process should happen normally.
Screenshots or Additional Context
https://user-images.githubusercontent.com/14871029/224539657-0a1c8ff9-0a88-47ec-a8d8-d02402b5134c.mp4
Browser Details
Chrome (Android) 110.0.5481.153
Exact BookStack Version
BookStack v23.02
PHP Version
No response
Hosting Environment
Arch Linux, using the LinuxServer.io docker image and Nginx as a reverse proxy
when I hit the backspace button in the markdown editor on mobile, the line doesn't get removed and the keyboard disappears.
Yup, having the same issue too. I have a feeling that it might interpret every line as an independent textbox. Doesn't explain all the other issues, but it explains this one
I'm bumping this because it's a severe usability issue. Yesterday I wrote a new page from my phone and I had no way to write a new line by pressing enter.
It's seriously keeping me from using the application, and I don't want to look for an alternative after having spent a lot of time moving my notes from a different service.
This may just be due to how the library used (CodeMirror 5) behaves on certain mobile device/browser scenarios to be honest, I don't think it was intended for mobile use.
I don't see much point spending the time to investigate and (if possible) fix the reported issues at this time since:
This has a low user impact.
This has likely been a pre-existing issue.
You reported this is not an issue on Firefox so there's a potential workaround.
I'm currently spending a lot of effort updating the library used to CodeMirror 6 which I think may be more mobile friendly, but is a very different editor so would make any additional work on the current editor redundant.
I'm waiting until after the CodeMirror 6 upgrade to check back on this to see how things work on mobile.
If this is really a big deal right now, it might be possible to add in a hack to show a plain text area input (With no syntax highlighting and no working preview) on mobile screen sizes instead of the current editor. Just let me know if that would help.
Here's a hack if you want it:
<script type="module">
const textArea = document.querySelector('#markdown-editor-input');
if (textArea && window.innerWidth < 820) {
setTimeout(() => {
const cmEditor = document.querySelector('#markdown-editor .CodeMirror');
if (!cmEditor) {
return;
}
cmEditor.style.display = 'none';
textArea.style.display = 'flex';
textArea.style.flex = '1';
textArea.style.height = '100%';
}, 100);
}
</script>
Just add that to the "Custom HTML Head Content" customization setting.
Will hide the editor and use a plain text area on smaller screen sizes.
Note: may break on updates, not official or supported.
Thank you, I also added textArea.style.WebkitOverflowScrolling = 'touch'; from here because I kept having some scroll issues
With the release of v23.05 the editor (and other code blocks) now use CodeMirror 6, which is a major overhaul and a significant change to these blocks, and I believe mobile usability has been built into CodeMirror 6 to some degree.
From my testing, on FireFox and Chrome on Android, I encountered no issues while editing a markdown page.
Therefore I'm going to go ahead and close this off but if you have new issues on v23.05 feel free to open as a new issue.
|
2025-04-01T04:54:47.691780
| 2023-06-29T13:05:35
|
1780784370
|
{
"authors": [
"jgriffithsuk",
"ssddanbrown"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13419",
"repo": "BookStackApp/BookStack",
"url": "https://github.com/BookStackApp/BookStack/issues/4350"
}
|
gharchive/issue
|
LDAP and standard login error on Roles page
Describe the Bug
When we try to go to the Role page we get a page saying
An Error Occured
An unknown error occured.
When disabling Ldap and logging in locally I still get the issue.
I basically want to create some LDAP groups so they sync the users,
Weirdly it did work once, but when I press Roles again I get the issue.
I have cleared the cache and tried a different browser (also another PC).
Is this a bug?
I am on version v23.05.1
thanks
Steps to Reproduce
Go to settings then roles
Expected Behaviour
See the roles page
Screenshots or Additional Context
No response
Browser Details
Firefox, Chrome
Exact BookStack Version
v23.05.1
PHP Version
8.1
Hosting Environment
Ubuntu 22.04 LTS
Hi @jgriffithsuk,
Please follow our debugging guidance to gain more detail for the error from your logs:
https://www.bookstackapp.com/docs/admin/debugging/
Hi Dan
With Debug enabled I get this error
Illuminate\Database\QueryException
SQLSTATE[42S22]: Column not found: 1054 Unknown column 'users.created_at' in 'order clause' (SQL: select *, (select count(*) from users inner join role_user on users.id = role_user.user_id where roles.id = role_user.role_id) as users_count, (select count(*) from role_permissions inner join permission_role on role_permissions.id = permission_role.permission_id where roles.id = permission_role.role_id) as permissions_count from roles order by users.created_at desc limit 20 offset 0)
Thanks
Thanks for the extra detail @jgriffithsuk,
Can confirm this occurs when sorting roles by created date.
I've assigned this to be addressed for the next patch release.
I believe the sorting preference is saved per-user, so other admin user accounts may have working access to the roles list if that helps.
This has now been patched within 18ee80a743cc54869a7b0f9c5e00527c62539ecd, and will be part of the next BookStack release.
Thanks again @jgriffithsuk for reporting.
|
2025-04-01T04:54:47.696294
| 2017-10-11T00:51:35
|
264423739
|
{
"authors": [
"ssddanbrown",
"svarlamov",
"tuaris"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13420",
"repo": "BookStackApp/BookStack",
"url": "https://github.com/BookStackApp/BookStack/issues/551"
}
|
gharchive/issue
|
Make images and attachments non-public
Desired Feature: Make images and attachments non-public.
Expected Behavior
Images should not be accessible when Bookstack is configured to require login to view content.
My proposed solution (keep in mind I know nothing about Laravel) is to create an image proxy that boots up a minimal version of the application (just enough) to get authentication functional. The image proxy would be used when Bookstack is configured to require login to view content. This would be transparent, meaning it can be enabled and disabled without having to modify existing content. Something (I think) can be achieved with URL re-writing.
A user knowing a URL can view any image/attachment and could result in sensitive information being leaked.
Steps to Reproduce
Configure Bookstack to require login when viewing content, copy the URL of an image, open a new private/incognito browser window. Image is visible.
Potential solution deployed as part of v0.20.0, Details can be found in blog post:
https://www.bookstackapp.com/blog/beta-release-v0-20-0/
Leaving this open for feedback purposes.
Is there a plan to have this feature setup with S3 via URL signing/proxy?
Closing this now that the feature has made release, Feel free to open new issues for any image-auth related problems.
|
2025-04-01T04:54:47.697272
| 2019-09-04T09:05:36
|
489034477
|
{
"authors": [
"kostefun",
"ssddanbrown"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13421",
"repo": "BookStackApp/BookStack",
"url": "https://github.com/BookStackApp/BookStack/pull/1627"
}
|
gharchive/pull-request
|
Update common.php
update ru
@kostefun Thanks again for all your translation update pull requests. I'll merge them in now for the next patch release.
|
2025-04-01T04:54:47.788462
| 2024-07-17T20:38:05
|
2414531575
|
{
"authors": [
"BottlecapDave",
"justwatchme"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13423",
"repo": "BottlecapDave/HomeAssistant-OctopusEnergy",
"url": "https://github.com/BottlecapDave/HomeAssistant-OctopusEnergy/pull/952"
}
|
gharchive/pull-request
|
Update Target Rate Sensor documentation for Target Timeframe flipped direction in window example
I believe "before" is a typo here and should read "after". In the example I read the minimum time "20:00:00" as "after" maximum time "05:00:00" because 20>5. Therefore the window will be from 20:00:00 today until 05:00:00 tomororrow.
Thank you for the correction.
|
2025-04-01T04:54:47.794010
| 2020-06-19T12:32:18
|
641936384
|
{
"authors": [
"renzon"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13424",
"repo": "Bouke/django-two-factor-auth",
"url": "https://github.com/Bouke/django-two-factor-auth/issues/359"
}
|
gharchive/issue
|
Translate messages to pt-BR
Messages on all stages of the form are not translated to Brazilian Portuguese.
Expected Behavior
Messages are not translated to pt-BR
Translation should be available
Current Behavior
Change language to Portuguese and log in to the Django application
Provide translation
Possible Solution
I can do the translation but haven't found instructions, a Transifex link maybe. Is there a link for translation?
Steps to Reproduce (for bugs)
Set LANGUAGE_CODE = 'pt-br' on settings
Try to log in
Check that messages are not translated
Context
Your Environment
Browser and version:
Python version:
Django version:
django-otp version:
django-two-factor-auth version:
Link to your project:
I beg your pardon, I've just found the Transifex link in the Readme: https://www.transifex.com/projects/p/django-two-factor-auth/
But what is the status of #353? If the Transifex sync is out of date, the translation will not be effective.
Excuse me, I just checked Transifex and it's translated. Probably the bug is due to #353
|
2025-04-01T04:54:47.799630
| 2017-04-26T08:35:22
|
224386500
|
{
"authors": [
"BrainMaestro",
"supernova23"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13425",
"repo": "BrainMaestro/elixir-phoenix-realworld",
"url": "https://github.com/BrainMaestro/elixir-phoenix-realworld/pull/5"
}
|
gharchive/pull-request
|
Fix namespace again
Fixes #4
@supernova23 can you see if it uses the correct namespace now
another problem
I think regenerating the application using phoenix 1.3 will be the best solution
Yeah I just did that
it works fine now
Great.
|
2025-04-01T04:54:47.802400
| 2016-05-11T21:00:25
|
154341868
|
{
"authors": [
"ziogaschr"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13426",
"repo": "BranchMetrics/iOS-Deferred-Deep-Linking-SDK",
"url": "https://github.com/BranchMetrics/iOS-Deferred-Deep-Linking-SDK/pull/346"
}
|
gharchive/pull-request
|
Add CocoaPod podspec without IDFA
As a developer I would like to be able to install the SDK using CocoaPods and without IDFA.
In the docs you pinpoint correctly how to achieve this.
I added a new podspec that any developer will be able to use for installing the SDK without IDFA. This will allow us to keep the SDK updated easily
This is not ready to be merged
|
2025-04-01T04:54:47.808130
| 2016-12-21T00:53:08
|
196819307
|
{
"authors": [
"aaustin",
"aeromusek",
"dave-mcclurg"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13427",
"repo": "BranchMetrics/ios-branch-deep-linking",
"url": "https://github.com/BranchMetrics/ios-branch-deep-linking/pull/528"
}
|
gharchive/pull-request
|
Fix for potential race condition
Suggestion comes from https://mobilegrowth.org/discussions/thread/unity-branch-initsession-callback-never-fires-when-deep-linking/
Original notes:
In my [Unity] app, probably due to how scenes are loaded, the Branch.initSession callback never fires when the app is launched from a branch link (handleDeepLink). After lengthy debugging, I found the following patch fixes the issue for me. Is this a legit fix that others might encounter?
I think there is probably a better solution. If handleDeepLink tries to call the callback before it is setup by the app (via initSession), it should remember that (eg, callbackAttempted flag?) and then when initSession is called, it can check that flag and do the callback immediately.
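The deferred-callback pattern suggested above can be sketched like this (names such as `makeSession`, `handleDeepLink` and `initSession` are illustrative, not the Branch SDK's actual API):

```javascript
// Sketch: remember a deep link that arrives before the app registers its
// callback, and replay it as soon as initSession provides one. This makes
// the result independent of which of the two calls happens first.
function makeSession() {
  let handler = null;       // callback registered via initSession
  let pendingParams = null; // deep link that arrived too early

  return {
    handleDeepLink(params) {
      if (handler) handler(params); // normal path: callback already registered
      else pendingParams = params;  // early path: stash for later
    },
    initSession(cb) {
      handler = cb;
      if (pendingParams) {          // a deep link beat us here: replay it now
        cb(pendingParams);
        pendingParams = null;
      }
    },
  };
}
```

Either ordering (deep link first, or init first) then delivers the parameters exactly once.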
@dave-mcclurg Just so I understand, the broken time sequence is as follows:
handleDeepLink or continueUserActivity receives the deep link URL
/v1/open is called with that deep link URL
parameters are returned to the app and stored in the ns user defaults
you then call initSession with an options dictionary filled with the URL and register your deep link handler
callback block doesn't trigger because initSession falls through
This is an edge case we've never supported as the normal iOS lifecycle never has initSession being called before continueUserActivity or openUrl.
If this is happening 100% of the time, can you just fetch the deep link parameters in your scene using our local nsuserdefaults method shown here? https://github.com/BranchMetrics/unity-branch-deep-linking#retrieve-session-install-or-open-parameters
I hesitate to open up support for this lifecycle flow since it's so unnatural, but let me think a bit more about this.
@dave-mcclurg Please read and comment on the above, but also note that I've changed the logic here. Would love to see if this new fix would work for you.
@aaustin - The patch works great. I’ll go with that.
Making against staging in this PR. Closing this out https://github.com/BranchMetrics/ios-branch-deep-linking/pull/535
|
2025-04-01T04:54:47.810697
| 2017-10-05T22:36:57
|
263287786
|
{
"authors": [
"BranchMacMini",
"parthkalavadia"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13428",
"repo": "BranchMetrics/ios-branch-deep-linking",
"url": "https://github.com/BranchMetrics/ios-branch-deep-linking/pull/731"
}
|
gharchive/pull-request
|
Bug fix for automatic deeplinking controller
To support old and new API for automatic deeplinking Controller, there are two types of value existed in deepLinkControllers dictionary: BNCDeepLinkViewControllerInstance and UIViewController.
branchSharingController.deepLinkingCompletionDelegate = self was set before checking the type of the deepLinkControllers' value
Result of Integration 1
Duration: 10 minutes and 39 seconds
Result: All 203 tests passed, but please fix 1 warning.
Test Coverage: 61%
|
2025-04-01T04:54:47.823310
| 2018-08-22T21:56:28
|
353140828
|
{
"authors": [
"csalmi-branch",
"gustavonecore",
"sequoiaat"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13429",
"repo": "BranchMetrics/react-native-branch-deep-linking",
"url": "https://github.com/BranchMetrics/react-native-branch-deep-linking/issues/370"
}
|
gharchive/issue
|
Is there any doc/example for install a new fresh app from a branchio deeplink?
I was searching in the examples but I think all of them are related to already installed applications.
There are a lot of links; I have reviewed several of them, but I can't find something covering:
User open deeplink
User install app from play store
App launches with some custom data from the deep link
Any suggestion will be good guys!
Thanks
@gustavonecore
Please let us know if this will help you answer your queries -
User open deeplink
For this can you see if this document will answer this question you have. If not please let us know and we can try to help you
User install app from play store
Can you please elaborate a little bit more on this. Also see if this link answers your query - https://docs.branch.io/pages/apps/react-native/#configure-branch
App launches with some custom data from the deep link
Please see if this document is what you are looking for- https://docs.branch.io/pages/apps/react-native/#navigate-to-content
@sequoiaat Thanks for the insight, I was able to get it working.
But I'm facing some weird issue (this is my progress so far):
I'm creating the deeplinks from my server using this tool: https://github.com/iivannov/branchio
The deeplinks are created properly using the live keys and sent to the user as SMS
But this is the weird thing:
After the install my app launch with the custom data on it using this:
branch.getFirstReferringParams().then((params) => {
if (params.phone && params['$deeplink_path'] === DEEPLINK_TYPE_INSTALL){
dispatch(Actions.fetchDeepLinkSuccess(params.phone));
}
else{
dispatch(Actions.fetchDeepLinkSuccess(''));
}
}).catch((error) => {
dispatch(Actions.fetchDeepLinkSuccess(error));
});
But sometimes it works and sometimes it doesn't. So, I have some assumptions/questions here:
I know the deep link has some timeout, but can I configure that?
Are there any restrictions on using several deep links on the same device?
Is there any issue with using the (Android) URL market://id=?app.example.id.com instead of the web URL http://play.store.com/... ? By the way, I'm using the market one.
Where can I see the deep links created from my server in the Branch console?
@gustavonecore are you still having issues, and do you still need help with those questions?
@gustavonecore is there anything else we can help you with here?
Due to the age and inactivity of this issue, I am closing this now. If you are encountering this problem still, please reach out to our support team (support.branch.io) or open a new github issue and we would be happy to help you out!
|
2025-04-01T04:54:47.829569
| 2021-11-01T21:38:00
|
1041666169
|
{
"authors": [
"merosenlund"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13430",
"repo": "Brassicaceaee/project-catwalk",
"url": "https://github.com/Brassicaceaee/project-catwalk/pull/17"
}
|
gharchive/pull-request
|
Reviews unit tests up to speed
I've added unit tests to test whether each of my components are rendering.
No big rush on this one; it is just the unit tests for my components.
Reviews Unit Tests
|
2025-04-01T04:54:47.854414
| 2024-04-28T23:11:13
|
2267868949
|
{
"authors": [
"BrianPugh",
"shadow2560"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13431",
"repo": "BrianPugh/gnwmanager",
"url": "https://github.com/BrianPugh/gnwmanager/issues/156"
}
|
gharchive/issue
|
Seems to detect a ghost adapter on Windows
Before explaining my problem, note that on my Windows system this seems to work properly; the test was made on a friend's computer.
Without connecting any adapter to the computer, I tried to launch the command "gnwmanager --verbosity debug unlock". On my computer this displays a list of tries with all adapters and says that it couldn't find any adapter, which is normal. On my friend's computer, when I try the same thing, it tries to launch the stlink.cfg config and just after that it tries to reset the device with the "reset_and_halt()" function. That's not normal because, like I said, no adapter is connected; and the program seems totally frozen after that, because even the "ctrl+c" shortcut doesn't terminate it.
A classic flash works perfectly (the old game-and-watch-patch, for example), so I really don't understand what could cause this problem or where to search for clues about the error. I've searched in the Device Manager and I don't see anything suspect; I've also searched the installed programs, but nothing there either.
I've tried the unlock command from a classic Python installation and a Chocolatey install of OpenOCD via "gnwmanager install openocd". Our two Windows systems are the same (Windows 11 Pro, latest version) and we have the same ST-Link drivers installed; really, I don't understand...
Thanks.
hmmm, definitely odd. Please follow up if you get more concrete information! Also maybe try rebooting your windows machine.
I've tried to launch this command on the two computers:
openocd -c "tcl_port 6666" -c "adapter speed 4000" -c "source [find interface/stlink.cfg]" -c "transport select hla_swd" -c "source [find target/stm32h7x.cfg]"
Result in my computer:
Open On-Chip Debugger 0.12.0
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
adapter speed: 4000 kHz
hla_swd
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
Info : Listening on port 6666 for tcl connections
Info : Listening on port 4444 for telnet connections
Info : clock speed 1800 kHz
Error: open failed
Now on my friend's computer:
Open On-Chip Debugger 0.12.0
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
adapter speed: 4000 kHz
hla_swd
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
Error: couldn't bind tcl to socket on port 6666: No error
I looked into the PATH global variable to see if something could interfere, but nothing. Really, I don't understand; it seems to be an OpenOCD problem, but it also seems to freeze gnwmanager when it occurs.
This code (and the functions above it) iterate over the various openocd configurations until it successfully connects to a debug probe.
I've looked into it (that's how I constructed my OpenOCD test commands), but unfortunately I don't know what to search for now; I'm not familiar with this type of hardware. I think my next test will be on another fresh Windows computer that I own; I'll post my results when it's done.
On my fresh Windows computer this seems to work, I'm desperate...
OK, I finally fixed the problem. Using "netstat /q /b" to list the ports used on the computer, I identified that port 6666 was used by a program (with the netstat command you only get the exe filename, so you must identify the process with the Task Manager). For me it was the program MiniTool ShadowMaker that was causing the bug; after uninstalling it, everything works. Maybe the possibility to specify a port number with a parameter in the gnwmanager command would be a good feature to add.
But now I have another problem: I have tried to flash Retro-go, but every 11 chunks the program freezes. Do you think it's a Retro-go problem? Should I open another issue for that?
does it continue to make progress (i.e. 11 more chunks each flash attempt)? Some people experience this, but there's been no solid leads on the cause. Not a retro-go issue.
Yes, the flash progresses; I can complete it by relaunching the make command without cleaning. We will run some more tests in the next few days; I'll inform you if I make progress on this.
Some info on my tests: on Windows this doesn't progress; every 11 chunks the program stops working properly and needs to be stopped and relaunched to continue the flash (inside or outside MSYS2, it's the same). The next test will be to use pyOCD outside MSYS2; I've successfully installed it, so I think I can do some tests with it later.
On Ubuntu 22.04 the flash worked with pyOCD, but with OpenOCD it doesn't: it waits for the GnW to boot and stops working after the timeout, with both the custom OpenOCD and the official one installed by gnwmanager. And another strange thing: the unlock process doesn't work. When the program says to reboot the GnW to get the blue screen, it reboots normally but seems to break the flasher, which then appears to be in a sort of deep sleep state; the flasher is detected but won't communicate with the memory or reset the connection (for the ST-Link it ended with an error 9 with pyOCD). We lost an ST-Link and a Pico yesterday, and re-flashing the firmware on them didn't help; we have flashed several GnWs before and no problems like that have happened. If I press "enter", even if the blue screen doesn't show, I get a hash mismatch error. I haven't tried the unlock process on Windows, but for now I won't; I don't want to break any more flashers.
Last thing I have tried: compiling the firmware doesn't work. With gcc-arm-none-eabi-10.3-2021.10 and with the latest mingw64 arm toolchain package on mingw64 it always does this, except that with the latest mingw64 arm toolchain package I also get warnings:
text data bss dec hex filename
36628 944276 2316 983220 f00b4 build/gnwmanager.elf
copy from `build/gnwmanager.elf' [elf32-littlearm] to `build/gnwmanager.bin' [binary]
make: copy: No such file or directory
make: *** [Makefile:245: gnwmanager/firmware.bin] Error 127
And with the latest mingw64 arm toolchain:
c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/bin/ld.exe: c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard\libc_nano.a(libc_a-closer.o): in function `_close_r':
c:\m\b\src\newlib-<IP_ADDRESS>31231\newlib\libc\reent/closer.c:47:(.text+0xc): warning: _close is not implemented and will always fail
c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/bin/ld.exe: c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard\libc_nano.a(libc_a-fstatr.o): in function `_fstat_r':
c:\m\b\src\newlib-<IP_ADDRESS>31231\newlib\libc\reent/fstatr.c:55:(.text+0x12): warning: _fstat is not implemented and will always fail
c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/bin/ld.exe: c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard\libc_nano.a(libc_a-signalr.o): in function `_getpid_r':
c:\m\b\src\newlib-<IP_ADDRESS>31231\newlib\libc\reent/signalr.c:83:(.text+0x2c): warning: _getpid is not implemented and will always fail
c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/bin/ld.exe: c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard\libc_nano.a(libc_a-isattyr.o): in function `_isatty_r':
c:\m\b\src\newlib-<IP_ADDRESS>31231\newlib\libc\reent/isattyr.c:52:(.text+0xc): warning: _isatty is not implemented and will always fail
c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/bin/ld.exe: c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard\libc_nano.a(libc_a-signalr.o): in function `_kill_r':
c:\m\b\src\newlib-<IP_ADDRESS>31231\newlib\libc\reent/signalr.c:53:(.text+0x12): warning: _kill is not implemented and will always fail
c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/bin/ld.exe: c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard\libc_nano.a(libc_a-lseekr.o): in function `_lseek_r':
c:\m\b\src\newlib-<IP_ADDRESS>31231\newlib\libc\reent/lseekr.c:49:(.text+0x14): warning: _lseek is not implemented and will always fail
c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/bin/ld.exe: c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard\libc_nano.a(libc_a-readr.o): in function `_read_r':
c:\m\b\src\newlib-<IP_ADDRESS>31231\newlib\libc\reent/readr.c:49:(.text+0x14): warning: _read is not implemented and will always fail
c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/bin/ld.exe: c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard\libc_nano.a(libc_a-writer.o): in function `_write_r':
c:\m\b\src\newlib-<IP_ADDRESS>31231\newlib\libc\reent/writer.c:49:(.text+0x14): warning: _write is not implemented and will always fail
c:/msys2/mingw64/bin/../lib/gcc/arm-none-eabi/12.2.0/../../../../arm-none-eabi/bin/ld.exe: warning: build/gnwmanager.elf has a LOAD segment with RWX permissions
text data bss dec hex filename
38556 944268 2660 985484 f098c build/gnwmanager.elf
copy from `build/gnwmanager.elf' [elf32-littlearm] to `build/gnwmanager.bin' [binary]
make: copy: No such file or directory
make: *** [Makefile:245: gnwmanager/firmware.bin] Error 127
See you for the next report...
And an other strange thing, when trying the unlock process it doesn't work
Currently the unlock process doesn't work with pyocd; there are some upstream issues.
If everything seems to "freeze", try removing and replugging the battery in your GnW. I doubt your debug probes are actually broken.
It seems like it mostly built fine; only this last copy call failed. Maybe try replacing this line with something else. I rarely use Windows, so I'm not much help 🙈 .
We have disconnected the GnWs (from power and even from the flasher) but they are not working anymore, with the same reactions in the same scenario. Maybe the problem comes from another source (bad wires, for example, but the person who made them has more than enough competence to do it correctly, no doubt about it). For now I have tried to analyse what I did and what happened, and the only thing in common is that the problem comes after the failed flash (without any error reported by the program, as I explained in my last comment) during the unlock process.
I'll try compiling on Linux when I can and I will try what you suggest on Windows.
So I have finally finished my project. Here are the problems I encountered and did not resolve:
Flashing with OpenOCD works on Windows, but it doesn't seem to communicate properly with gnwmanager: it freezes more or less randomly (every 11 or 13 chunks with Retro-go, and it doesn't finish the CFW patch properly because the reset seems to freeze the app); you just need to launch the command again to continue the flash process. With pyOCD the problem is not present and it works perfectly, but pyOCD is more difficult to install in some environments; on MSYS I can't install it.
On Linux, OpenOCD doesn't want to communicate properly with gnwmanager: it reboots the GnW into the flash screen but doesn't flash (it says it is waiting for the device to boot and ends with the timeout). With pyOCD it works, but it was not possible for me to unlock (pyOCD is not supported for unlocking in gnwmanager for now) because of the OpenOCD problem, so the old unlock method is needed. I have not retried the unlock process because I suspect it broke my programmers; we are not sure that is what broke the devices, but I don't want to break any more programmers.
After flashing the CFW and Retro-go, the commands only launch if the GnW is on the Retro-go main screen; I don't know why (error "No cores were discovered").
I think I can't help more than that. I'll follow the different projects; thanks for your work on them.
I've run some more tests in real conditions (ST-Link V2 and Windows). If I flash with pyOCD it works once; after that, the error "No cores were discovered" is raised, and I need to use OpenOCD to get into flash mode (with the "gnwmanager info" command, for example), then disconnect and reconnect the device; after that the flash can be done. This problem seems to occur only when I'm on the CFW; if Retro-go is booted the problem seems absent, but I need to do more tests to confirm that by flashing Retro-go on bank 1. This error seems frequent with pyOCD, see this page or that one for example; personally I don't really understand the problems explained there.
Note: I think I will also run these tests with a Pico probe; for now we don't have one. I will also enable debug verbosity in my projects to get more detailed info.
|
2025-04-01T04:54:47.857646
| 2016-02-17T10:16:48
|
134238857
|
{
"authors": [
"BrianSidebotham",
"StuartCartmill",
"sergio-uma"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13432",
"repo": "BrianSidebotham/arm-tutorial-rpi",
"url": "https://github.com/BrianSidebotham/arm-tutorial-rpi/issues/6"
}
|
gharchive/issue
|
Problem with interrupts on the Pi2
I am having a problem with interrupts on the Pi2. It appears that the first interrupt causes execution to jump somewhere and not return. I have checked the disassembly listing and the vector table looks ok. Has anyone had this problem and fixed it?
I had the same issue because I updated the firmware (bootcode.bin & start.elf), and from that moment my RPi 2 Model B (v1.1) boots in hypervisor mode (0x1A). I put this code just before the IVT copy:
.arch_extension sec
.arch_extension virt
mrs r0, cpsr            @ read the current mode bits
and r0, r0, #0x1F
cmp r0, #CPSR_MODE_SVR  @ already in supervisor mode?
beq svc_mode
mov r0, #CPSR_MODE_SVR  @ otherwise request SVC mode on exception return
msr spsr_cxsf, r0
add r0, pc, #4          @ r0 = address of svc_mode (pc reads as current + 8)
msr ELR_hyp, r0
eret                    @ exception return: leaves HYP mode into SVC mode
svc_mode:
And now it is working properly.
@sergio-uma Thanks for reporting this. I have only recently been able to fix the entire tutorial set. Just part-5 to go now. All the rest work across the complete set of RPI hardware, including RPI4.
|
2025-04-01T04:54:47.865687
| 2024-03-28T23:44:55
|
2214342970
|
{
"authors": [
"carlosduarteroa",
"gtfierro"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13433",
"repo": "BrickSchema/Brick",
"url": "https://github.com/BrickSchema/Brick/issues/630"
}
|
gharchive/issue
|
Possible syntax error when using Prefix Declaration for SPARQL Queries
I downloaded the nightly Brick version but got the error shown below when I tried to use it in python.
from brickschema import Graph
g = Graph(load_brick=True, load_brick_nightly=True)
g.load_file(bldg_brick_file)
<Graph identifier=Nd22db2bd15974015ab4bbd8c477fa056 (<class 'brickschema.graph.Graph'>)>
print(f'Before: {len(g)} triples')
Before: 75986 triples
g.expand(profile='owlrl+shacl+vbis+shacl')
*** pyshacl.errors.ConstraintLoadError: sh:declare must have exactly one sh:prefix predicate.
For reference, see https://www.w3.org/TR/shacl/#sparql-prefixes
I tracked the issue to the following block in the Brick.ttl file. It seems to be a syntax issue when using sh:declare.
<https://brickschema.org/schema/1.4-rc1/Brick> a owl:Ontology ;
rdfs:label "Brick" ;
dcterms:creator ( [ a sdo:Person ;
sdo:email<EMAIL_ADDRESS>;
sdo:name "Gabe Fierro" ] [ a sdo:Person ;
sdo:email<EMAIL_ADDRESS>;
sdo:name "Jason Koh" ] ) ;
dcterms:issued "2016-11-16" ;
dcterms:license <https://github.com/BrickSchema/brick/blob/master/LICENSE> ;
dcterms:modified "2024-03-28" ;
dcterms:publisher [ a sdo:Consortium ;
sdo:legalName "Brick Consortium, Inc" ;
sdo:sameAs <https://brickschema.org/consortium/> ] ;
rdfs:isDefinedBy <https://brickschema.org/schema/1.4-rc1/Brick> ;
rdfs:seeAlso <https://brickschema.org> ;
owl:imports <http://data.ashrae.org/bacnet/2020>,
<http://qudt.org/2.1/schema/shacl/overlay/qudt>,
<http://qudt.org/2.1/schema/shacl/qudt>,
<http://qudt.org/2.1/vocab/dimensionvector>,
<http://qudt.org/2.1/vocab/prefix>,
<http://qudt.org/2.1/vocab/quantitykind>,
<http://qudt.org/2.1/vocab/sou>,
<http://qudt.org/2.1/vocab/unit>,
<https://brickschema.org/schema/Brick/ref>,
<https://w3id.org/rec/recimports> ;
owl:versionInfo "1.4-rc1.0-rc1" ;
sh:declare [ sh:namespace "http://www.w3.org/1999/02/22-rdf-syntax-ns#"^^xsd:anyURI ;
sh:prefix "rdf" ],
[ sh:namespace "https://w3id.org/rec#"^^xsd:anyURI ;
sh:prefix "rec" ],
[ sh:namespace "https://brickschema.org/schema/Brick/ref#"^^xsd:anyURI ;
sh:prefix "ref" ],
[ sh:namespace "http://www.w3.org/2002/07/owl#"^^xsd:anyURI ;
sh:prefix "owl" ],
[ sh:namespace "http://data.ashrae.org/standard223#"^^xsd:anyURI ;
sh:prefix "s223" ],
[ sh:namespace "http://www.w3.org/2000/01/rdf-schema#"^^xsd:anyURI ;
sh:prefix "rdfs" ],
[ sh:namespace "https://brickschema.org/schema/Brick#"^^xsd:anyURI ;
sh:prefix "brick" ],
[ sh:namespace "http://www.w3.org/ns/shacl#"^^xsd:anyURI ;
sh:prefix "sh" ] .
I changed the syntax to the following, based on the website listed in the error, and it seems to have resolved the issue. I'm not sure where it is in the source code, but I found one reference where the syntax may need to be fixed.
<https://brickschema.org/schema/1.4-rc1/Brick> a owl:Ontology ;
rdfs:label "Brick" ;
dcterms:creator ( [ a sdo:Person ;
sdo:email<EMAIL_ADDRESS>;
sdo:name "Gabe Fierro" ] [ a sdo:Person ;
sdo:email<EMAIL_ADDRESS>;
sdo:name "Jason Koh" ] ) ;
dcterms:issued "2016-11-16" ;
dcterms:license <https://github.com/BrickSchema/brick/blob/master/LICENSE> ;
dcterms:modified "2024-03-28" ;
dcterms:publisher [ a sdo:Consortium ;
sdo:legalName "Brick Consortium, Inc" ;
sdo:sameAs <https://brickschema.org/consortium/> ] ;
rdfs:isDefinedBy <https://brickschema.org/schema/1.4-rc1/Brick> ;
rdfs:seeAlso <https://brickschema.org> ;
owl:imports <http://data.ashrae.org/bacnet/2020>,
<http://qudt.org/2.1/schema/shacl/overlay/qudt>,
<http://qudt.org/2.1/schema/shacl/qudt>,
<http://qudt.org/2.1/vocab/dimensionvector>,
<http://qudt.org/2.1/vocab/prefix>,
<http://qudt.org/2.1/vocab/quantitykind>,
<http://qudt.org/2.1/vocab/sou>,
<http://qudt.org/2.1/vocab/unit>,
<https://brickschema.org/schema/Brick/ref>,
<https://w3id.org/rec/recimports> ;
owl:versionInfo "1.4-rc1.0-rc1" ;
sh:declare [
sh:namespace "http://www.w3.org/1999/02/22-rdf-syntax-ns#"^^xsd:anyURI ;
sh:prefix "rdf" ;
] ;
sh:declare [
sh:namespace "https://w3id.org/rec#"^^xsd:anyURI ;
sh:prefix "rec" ;
] ;
sh:declare [
sh:namespace "https://brickschema.org/schema/Brick/ref#"^^xsd:anyURI ;
sh:prefix "ref" ;
] ;
sh:declare [
sh:namespace "http://www.w3.org/2002/07/owl#"^^xsd:anyURI ;
sh:prefix "owl" ;
] ;
sh:declare [
sh:namespace "http://data.ashrae.org/standard223#"^^xsd:anyURI ;
sh:prefix "s223" ;
] ;
sh:declare [
sh:namespace "http://www.w3.org/2000/01/rdf-schema#"^^xsd:anyURI ;
sh:prefix "rdfs" ;
] ;
sh:declare [
sh:namespace "https://brickschema.org/schema/Brick#"^^xsd:anyURI ;
sh:prefix "brick" ;
] ;
sh:declare [
sh:namespace "http://www.w3.org/ns/shacl#"^^xsd:anyURI ;
sh:prefix "sh" ;
] .
Here is one example where the syntax may need to be fixed.
https://github.com/BrickSchema/Brick/blob/3af06c05f8f8725db9d270722f138afeb7d42690/support/ref-schema.ttl#L123
@gtfierro Thanks for the quick response. I continued to investigate the issue, and you are correct; it is not a syntax issue. The error does not appear when I only run shacl inference, but it still persists if I use owlrl+shacl, even though I don't have any sh:declare blocks in my building graph and also don't have a copy of Brick in it.
However, it seems that I don't need owlrl+shacl. I was thinking it was a multistep inference process where, for example, the owlrl part would do the graph simplification and inference and the shacl part would do the validation. But if I understand correctly, using shacl does both.
Yep! SHACL does both as of Brick 1.3. I believe you can still turn on "simplification" in the brickschema package and that will get done as a post-SHACL processing step
Closing for now ; let me know if you have any further questions!
|
2025-04-01T04:54:47.964935
| 2016-12-13T17:56:56
|
195323848
|
{
"authors": [
"emylonas"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13434",
"repo": "Brown-University-Library/usepweb_project",
"url": "https://github.com/Brown-University-Library/usepweb_project/issues/23"
}
|
gharchive/issue
|
text in collection description box on single collection view is not formatting properly
for ex: http://worfdev.services.brown.edu/projects/usep/collections/CA.Berk.UC.HMA/
closed because it's the same as #19
|
2025-04-01T04:54:47.968822
| 2017-01-13T21:42:07
|
200738077
|
{
"authors": [
"cjiang-ibm",
"danrope"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13435",
"repo": "Brunel-Visualization/Brunel",
"url": "https://github.com/Brunel-Visualization/Brunel/issues/190"
}
|
gharchive/issue
|
Feature request: provide a way to accept and use user defined "visInstance" name
It is required that each visInstance has its own unique name (e.g. var v = new BrunelVis("visualization")) when you have multiple charts in one page.
Currently D3Builder#endVisSystem(VisItem main) hard-codes the name "v" in all the places that require the visInstance name, such as line 381:
controls.writeEventHandler(out, "v");
or line 387:
out.add("BrunelD3.animateBuild(v,", .....
or line 390:
out.add("v.build(");
In our code, in order to display multiple charts in one page, we are replacing "v" with our own unique id. The code is messy.
The string replacement doesn't work in BrunelJQueryControlFactory.make_range_slider(...) any more because it contains "....v .data(null, 0).field( ..".
The best and cleanest solution would be for Brunel to provide a way, such as a BuilderOptions.visInstanceName, for users to set the vis instance name, which Brunel would then use when writing out the script.
Added new field in BuilderOptions:
BuilderOptions.visObject
|
2025-04-01T04:54:48.021711
| 2024-01-28T03:26:16
|
2103943981
|
{
"authors": [
"JollyRogerz",
"KannuSingh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13436",
"repo": "BuidlGuidl/batch1.buidlguidl.com",
"url": "https://github.com/BuidlGuidl/batch1.buidlguidl.com/pull/37"
}
|
gharchive/pull-request
|
Tweaked MainPage and TailwindCss
Description
Tweaked the mainpage and tailwindCss
tool
Additional Information
[x] I have read the contributing docs (if this is your first contribution)
[x] This is not a duplicate of any existing pull request
Related Issues
Closes #{6}
Your ENS/address: jollyv.eth
I see the above build failed on Vercel.
Is adding a new package dependency forbidden? Because this PR adds a new dependency, "typewriter-effect".
Closed it and I will submit a new one with the yarn.lock file and MetaHeader
|
2025-04-01T04:54:48.043334
| 2023-11-01T13:08:54
|
1972349748
|
{
"authors": [
"samijaber",
"smeijer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13437",
"repo": "BuilderIO/mitosis",
"url": "https://github.com/BuilderIO/mitosis/issues/1289"
}
|
gharchive/issue
|
support children as render prop
I am interested in helping provide a fix!
No
Which generators are impacted?
[ ] All
[ ] Angular
[ ] HTML
[ ] Preact
[ ] Qwik
[ ] React
[ ] React-Native
[ ] Solid
[ ] Stencil
[X] Svelte
[x] Vue
[ ] Web components
Reproduction case
https://mitosis.builder.io/?outputTab=M4NwpgNgLmQ%3D&code=JYWwDg9gTgLgBAbzgVwM4FMDKNrrgXzgDMoIQ4AiAAQCNlgAbAE3SgDpgIB6EYHVYKgoBuAFCiYATzB4ACqTCo4AXkSi4cAMYALRkyjoAdgH4AXHAAUYBanMJ1GuKhgBDGGnMByBhBdNghgDmnnAAPnCeqMiamuioqJ5iGvgAlCoAfHAAogzoIEYwYvji6AAekLBwLEQuyAzwRMiGmjCchnAAspIAwmSQhgVWNmn2GpoQhs5OrjB4qmhYOAYWo47Obh4RPn4BwQA0DqliDlxcTmR46ERE6C1K44YwpAwMu3AABuvuqO9wLoZMOAQGDaVjTNx4awQRTiDQGdxQdoWBwaAA86RRjgQUMUbB0egMhhW4O%2B5i%2B6DYXzQBBSxUccFRXAxGhSRSAA%3D
Expected Behaviour
I should be able to use a render prop to let my component consumer control the appearance. The component from the fiddle would be used like
<MyComponent>
{({ status, ...props }) => <button {...props}>{status}</button>}
</MyComponent>
This works for react, solid, and vue, but not for svelte.
Actual Behaviour
It renders the properties wrong and throws a build error:
<slot name="default({
status: status
})"/>
[vite-plugin-svelte] …/button.svelte:10:6 Error while preprocessing …/button.svelte - Expected }
file: …/button.svelte:10:6
8 |
9 | <slot name="default({
10 | status: _state.status,
^
Additional Information
No response
children as a render prop is something that doesn't easily exist in all frameworks. I don't believe there is anything in Svelte that allows you to pass a children render prop. The same goes for Qwik: you cannot provide a children render prop function.
Mitosis tries to include features that can easily be mapped to something that already exists in the majority of frameworks it focuses on, so I don't know if this is something we can easily add support for.
|
2025-04-01T04:54:48.055246
| 2023-06-29T08:18:02
|
1780350574
|
{
"authors": [
"duerselen",
"manucorporat",
"tpannickeduo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13438",
"repo": "BuilderIO/qwik",
"url": "https://github.com/BuilderIO/qwik/issues/4636"
}
|
gharchive/issue
|
[🐞] Can't set multiple Set-Cookie Headers (Middleware)
Which component is affected?
Qwik City (routing)
Describe the bug
I'm setting multiple cookies in a single request.
I'm using the azure swa middleware to create an azure function.
The problem is that Chrome, Edge, Firefox, etc. can't parse multiple cookies from a single Set-Cookie header, so these browsers will only set one cookie and not all of them.
Vite did return a Set-Cookie header for each cookie, so I didn't find the bug until just before going live.
I already found the problem in the code.
It's the mergeHeadersCookie function in the request-handler middleware; it uses the header append function.
https://developer.mozilla.org/en-US/docs/Web/API/Headers/append
The append() method of the Headers interface appends a new value onto an existing header inside a Headers object, or adds the header if it does not already exist.
The next problem is that the azure swa middleware sets the headers per object index string (line 60), which means every header can only exist once and duplicates overwrite the old values.
Just from looking at the qwik city middleware code it looks like the problem should exist with azure-swa, cloudflare, deno and vercel.
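To illustrate the distinction, here is a minimal, hypothetical Python sketch (not the actual Qwik City or adapter code) of a header container that keeps Set-Cookie values as separate header lines while folding everything else:

```python
class MultiHeaders:
    """Hypothetical sketch: a header map that never comma-joins Set-Cookie."""

    def __init__(self):
        self._headers = {}  # lowercase name -> list of values

    def append(self, name: str, value: str) -> None:
        self._headers.setdefault(name.lower(), []).append(value)

    def to_pairs(self):
        """Emit one (name, value) pair per Set-Cookie; fold other headers."""
        pairs = []
        for name, values in self._headers.items():
            if name == "set-cookie":
                # Browsers cannot split a comma-joined Set-Cookie, so each
                # cookie must stay on its own header line.
                pairs.extend((name, v) for v in values)
            else:
                pairs.append((name, ", ".join(values)))
        return pairs

h = MultiHeaders()
h.append("Set-Cookie", "a=1; Path=/")
h.append("Set-Cookie", "b=2; Path=/")
h.append("Accept", "text/html")
h.append("Accept", "application/json")
pairs = h.to_pairs()
```

The point is that a spec-style Headers.append (which folds repeated headers into one comma-separated value) is safe for most headers, but not for Set-Cookie, whose values can themselves contain commas (e.g. in Expires dates).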
Reproduction
qwik.new uses vite and is not reproducible with vite
Steps to reproduce
Create an app that sets multiple cookies in a single request.
Create a azure functions function and test it.
Or use this http get request (removed the link, since i posted the http response from azure swa in the comment beneath)
System Info
System:
OS: Windows 10 10.0.22621
CPU: (20) x64 12th Gen Intel(R) Core(TM) i7-12700H
Memory: 18.51 GB / 31.71 GB
Binaries:
Node: 18.14.2 - C:\node\current\node.EXE
npm: 9.5.0 - C:\node\current\npm.CMD
Browsers:
Edge: Spartan (44.22621.1848.0), Chromium (114.0.1823.43)
Internet Explorer: 11.0.22621.1
npmPackages:
@builder.io/qwik: 1.1.5 => 1.1.5
@builder.io/qwik-city: 1.1.5 => 1.1.5
undici: 5.22.1 => 5.22.1
vite: 4.3.9 => 4.3.9
Additional Information
No response
Forgot to mention.
The url is a web app but rewrites to the azure function.
The function app returns
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=utf-8
< Date: Thu, 29 Jun 2023 08:20:25 GMT
< Set-Cookie: duo_shop_dealer_id=ff38d23f-c175-4fae-a7b6-fd8e1d597ca9; HttpOnly; Path=/; SameSite=Strict; Secure, duo_shop_restricted_mode=true; HttpOnly; Path=/; SameSite=Strict; Secure
< Transfer-Encoding: chunked
< Request-Context: appId=
Tested with insomnia.
Do you have any idea of how we should fix the mergeHeadersCookies? for azure?
This might help:
https://github.com/geoffrich/svelte-adapter-azure-swa/pull/74/files
could you help us with a PR?!
I mean it's not just azure; it's cloudflare, deno and vercel too, since they use that function as well.
But for azure the code is faulty at two positions.
I can create a qwik.new playground (instance?), but like I said vite doesn't use this function to add the cookies to the headers. The middlewares are using it.
https://stackblitz.com/edit/qwik-starter-zamsoc?file=src%2Froutes%2Flayout.tsx
Thats the code.
Just simple setting 2 cookies in a single request.
But no idea how stackblitz works, I have to reload the whole page for it to work after the first page load.
I think it's only azure, deno and netlify, and vercel we have unit tests for this case, they have a patched version of append() that works correctly! Problem is that our e2e for azure is broken at the moment, we need to set it up!
Sadly I'm not able to compile qwik (tried it multiple times), so I can't test changes if I tried to help.
I just checked the code and there seems to be no tests for cookies.
So I found the cookie unit tests for the request-handler.
And the unit test is nowhere calling mergeHeadersCookies like the middlewares do.
https://github.com/BuilderIO/qwik-city-e2e/blob/main/custom-src/routes/sign-in/index.tsx#L20-L26
This part of the app needs multiple cookies in order to work! I restored the Azure app and in fact it's failing!
Got a fix
Thank you very much.
|
2025-04-01T04:54:48.059165
| 2023-03-12T12:46:42
|
1620352541
|
{
"authors": [
"cunzaizhuyi",
"shairez"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13439",
"repo": "BuilderIO/qwik",
"url": "https://github.com/BuilderIO/qwik/pull/3337"
}
|
gharchive/pull-request
|
fix(docs): modify solidStart routing link
What is it?
[ ] Feature / enhancement
[ ] Bug
[X] Docs / tests
Description
modify solidStart routing link
Use cases and why
One use case
Another use case
Checklist:
[ ] My code follows the developer guidelines of this project
[ ] I have performed a self-review of my own code
[ ] I have made corresponding changes to the documentation
[ ] Added new tests to cover the fix / functionality
thanks @cunzaizhuyi !
|
2025-04-01T04:54:48.062014
| 2016-11-29T15:25:31
|
192311588
|
{
"authors": [
"opticod"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13440",
"repo": "BuildmLearn/BuildmLearn-Toolkit-Android",
"url": "https://github.com/BuildmLearn/BuildmLearn-Toolkit-Android/pull/236"
}
|
gharchive/pull-request
|
Fixed Travis
Fixed Travis.
Along with that, I observed that we are currently not running any manual tests in the emulator, so we don't require it for now. This will cut our build time from approx. 16 min to approx. 3 min.
Everyone should rebase their current PRs if they are still failing.
|
2025-04-01T04:54:48.062965
| 2016-12-12T22:55:00
|
195110279
|
{
"authors": [
"codingblazer",
"zssMostE"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13441",
"repo": "BuildmLearn/BuildmLearn-Toolkit-Android",
"url": "https://github.com/BuildmLearn/BuildmLearn-Toolkit-Android/pull/284"
}
|
gharchive/pull-request
|
All Unused Files,Imports,Value Resources removed
#262 (Part 1)
All unused files, resources, and imports have been removed successfully.
How did you know the resources, files, and imports are unused? Any metrics?
|
2025-04-01T04:54:48.090932
| 2022-04-16T12:05:41
|
1206097352
|
{
"authors": [
"scala-steward"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13442",
"repo": "BusyByte/flutterby",
"url": "https://github.com/BusyByte/flutterby/pull/219"
}
|
gharchive/pull-request
|
Update scalafmt-core to 3.5.1
Updates org.scalameta:scalafmt-core from 2.7.5 to 3.5.1.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scalameta", artifactId = "scalafmt-core" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequest = { frequency = "@monthly" },
dependency = { groupId = "org.scalameta", artifactId = "scalafmt-core" }
}]
labels: library-update, early-semver-major, semver-spec-major, commit-count:1
Superseded by #220.
|
2025-04-01T04:54:48.112085
| 2023-11-06T09:25:01
|
1978671092
|
{
"authors": [
"Michael-LinYu",
"ozcelgozde",
"vivek-debug"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13443",
"repo": "ByConity/ByConity",
"url": "https://github.com/ByConity/ByConity/issues/854"
}
|
gharchive/issue
|
cnch_auxility_policy clarification
Question
Looking at the cluster configuration documentation, I see a reference to:
cnch_auxility_policy, specifies the StoragePolicy used by ByConity to store temporary data on the local disk, optional configuration item, the default is default
What constitutes temporary data here? Is it read/write agnostic, or specific to certain functions? Can you give us an example usage of this setting?
@Michael-LinYu Can I get your help to answer this question?
Check StorageLocation. A table's data is divided into two categories: one is main storage, which stores the actual table data; the other is auxiliary storage, which is used to store temporary data. For example, cnch uses hdfs/s3 as main storage for table data, and uses auxiliary storage to write temporary data or cache files.
|
2025-04-01T04:54:48.150822
| 2023-07-05T16:16:00
|
1789882903
|
{
"authors": [
"LuchunPen"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13444",
"repo": "C0untFloyd/bark-gui",
"url": "https://github.com/C0untFloyd/bark-gui/issues/50"
}
|
gharchive/issue
|
No gradio module after clean install
Hi! I have a problem with a clean installation via the one-click installer.
There is a "No gradio module" exception :(
I tried v0.7.0 and v0.7.1.
Maybe it's because of this?
I tried to install manually and it worked, though not as well as I'd hoped, because all the dependencies were installed on disk C:\, which has almost no free space.
The one-click installer doesn't work for me.
|
2025-04-01T04:54:48.255838
| 2024-09-12T11:07:11
|
2522085361
|
{
"authors": [
"buildmachine-sou-jenkins2",
"michael-bryson"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13446",
"repo": "CAFapi/opensuse-opensearch2-image",
"url": "https://github.com/CAFapi/opensuse-opensearch2-image/pull/47"
}
|
gharchive/pull-request
|
I941175: Fix project registry name sanitization
Ticket: https://internal.almoctane.com/ui/entity-navigation?p=131002/6001&entityType=work_item&id=941175
A developer build has not yet been created for this branch. Click here to go ahead and create the build...
CI Build Link:
https://sou-jenkins2.swinfra.net/job/CAFapi/job/CAFapi~opensuse-opensearch2-image~I941175~CI
|
2025-04-01T04:54:48.257920
| 2022-09-02T22:21:23
|
1360653256
|
{
"authors": [
"alistairking",
"ckreibich"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13447",
"repo": "CAIDA/cc-common",
"url": "https://github.com/CAIDA/cc-common/pull/11"
}
|
gharchive/pull-request
|
Fix GCC 12.2 warning in access to patricia_t members
Hi folks — we're seeing a few warnings in the libpatricia code with newer GCCs:
/home/christian/devel/zeek/zeek/src/3rdparty/patricia.c:246:18: warning: array subscript ‘prefix_t {aka struct _prefix_t}[0]’ is partly outside array bounds of ‘unsigned char[12]’ [-Warray-bounds]
246 | prefix->bitlen = (bitlen >= 0) ? bitlen : default_bitlen;
| ~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/christian/devel/zeek/zeek/src/3rdparty/patricia.c:228:18: note: object of size 12 allocated by ‘calloc’
228 | prefix = calloc(1, sizeof(prefix4_t));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/christian/devel/zeek/zeek/src/3rdparty/patricia.c:247:18: warning: array subscript ‘prefix_t {aka struct _prefix_t}[0]’ is partly outside array bounds of ‘unsigned char[12]’ [-Warray-bounds]
247 | prefix->family = family;
| ~~~~~~~~~~~~~~~^~~~~~~~
/home/christian/devel/zeek/zeek/src/3rdparty/patricia.c:228:18: note: object of size 12 allocated by ‘calloc’
228 | prefix = calloc(1, sizeof(prefix4_t));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/christian/devel/zeek/zeek/src/3rdparty/patricia.c:248:21: warning: array subscript ‘prefix_t {aka struct _prefix_t}[0]’ is partly outside array bounds of ‘unsigned char[12]’ [-Warray-bounds]
248 | prefix->ref_count = 0;
| ~~~~~~~~~~~~~~~~~~^~~
/home/christian/devel/zeek/zeek/src/3rdparty/patricia.c:228:18: note: object of size 12 allocated by ‘calloc’
228 | prefix = calloc(1, sizeof(prefix4_t));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/christian/devel/zeek/zeek/src/3rdparty/patricia.c:250:22: warning: array subscript ‘prefix_t {aka struct _prefix_t}[0]’ is partly outside array bounds of ‘unsigned char[12]’ [-Warray-bounds]
250 | prefix->ref_count++;
| ~~~~~~~~~~~~~~~~~^~
/home/christian/devel/zeek/zeek/src/3rdparty/patricia.c:228:18: note: object of size 12 allocated by ‘calloc’
228 | prefix = calloc(1, sizeof(prefix4_t));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
The above is based on Zeek's version of libpatricia, but the code in question is identical. I wasn't able to test the included tweak directly with your repo but the equivalent fix on the Zeek side resolves them.
Thanks!
|
2025-04-01T04:54:48.295668
| 2024-11-04T20:00:14
|
2633725716
|
{
"authors": [
"eschrom",
"swo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13448",
"repo": "CDCgov/cfa-immunization-uptake-projection",
"url": "https://github.com/CDCgov/cfa-immunization-uptake-projection/issues/42"
}
|
gharchive/issue
|
Consistent method of enforcing polars subclass
Subclasses of a polars data frame (e.g. any UptakeData) will revert to a regular polars data frame after a polars operation (e.g. filter, with_columns, rename, etc.). We can make versions of these polars functions that return the subclass they were given, or we can coerce polars data frames into subclasses more frequently (i.e. UptakeData(df)). Choose a strategy and be consistent.
I lean toward option #2 (let data frames be data frames, and use the subclasses for explicit validation or when we need extra functionality)
Also, choose class vs. object methods carefully to avoid self = self.some_method(...), as is currently the case in augment_implicit_columns
Also, choose class vs. object methods carefully to avoid self = self.some_method(...), as is currently the case in augment_implicit_columns
Or, at least, keep something like that as an object method, but make it not modify-in-place. Eg my_df = my_df.augment() rather than my_df.augment() changing the content of my_df.
this was resolved in a prior PR, maybe #65
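For reference, the reversion-and-coerce pattern discussed above can be sketched with a built-in container as a stand-in for pl.DataFrame (UptakeData here is illustrative, not the project's actual class):

```python
class UptakeData(list):
    """Stand-in for a pl.DataFrame subclass; illustrative only."""

    def validate(self):
        # Hypothetical explicit validation hook (option #2 in the thread).
        assert all(isinstance(x, int) for x in self)


data = UptakeData([1, 2, 3])

# Built-in operations return the base type, just as polars operations
# return a plain pl.DataFrame rather than the subclass:
filtered = data[1:]
print(type(filtered).__name__)  # list

# Option #2: let data be plain, and coerce/validate explicitly when needed:
validated = UptakeData(filtered)
validated.validate()
print(type(validated).__name__)  # UptakeData
```

The same trade-off applies to polars: either wrap every operation so it re-wraps its result, or keep plain frames and coerce at the validation boundary.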
|
2025-04-01T04:54:48.323135
| 2022-12-14T23:10:49
|
1497547515
|
{
"authors": [
"BerniXiongA6",
"Jcavallo7",
"TaneaY",
"brandonnava",
"brick-green",
"oslynn",
"sliu1000"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13449",
"repo": "CDCgov/prime-reportstream",
"url": "https://github.com/CDCgov/prime-reportstream/issues/7689"
}
|
gharchive/issue
|
CA - Poorly Formatted Accession Number
User states: "Hello, we have received a set of reports that have an incorrect / poorly formatted accession number. I did a quick review and it looks like these are from Gaslamp Medical Center 250 Market St, San Diego CA 92101 USA.
Please correct and resend the reports for the affected messages."
Sharon and I proceeded to conduct research and responded to the user: "possible that during the conversion on your end it created the shortform version you are seeing"
That value is typically provided by the sender. We wouldn't be doing any alteration. The work here would be to double check that the value we're sending is exactly the same as what was sent to CA. If it is, then we relay that info to CA. If it isn't then we need to do additional investigation into why it's not the same.
@BerniXiongA6 check with sender (according to Brick's recommendation) and then follow up with CA. cc: @sliu1000
Hey @Jcavallo7 @sliu1000 -- I've tried to find the originating email from the sender but there's nothing in the RS shared inbox. Did this come from the SR shared inbox? Could you provide our team with the sender's info? We'd like to start working on the ticket in this coming sprint and we'll need to reach out to the sender to ask follow up questions. Thank you! cc: @brick-green @brandonnava
Hey @BerniXiongA6 it came in through the simple report box, apologies if I neglected that detail. Here are the original senders info:
Jill Meesey<EMAIL_ADDRESS>Marjorie Richardson<EMAIL_ADDRESS>
BX sent email to original requesters of this ticket on 12/30:
Hi Jill and Marjorie Richardson,
Our operations team here at ReportStream will be bringing ticket (# 7689) into our upcoming work. In order to troubleshoot, we'll need to know what the Accession numbers are so we can confirm whether ReportStream is receiving the correct values from Gaslamp Medical Center that we are passing through to public health.
Can either of you confirm what the Accession numbers should be for this request? I'm attaching the screenshot that we received with the original request.
Once we receive this information, that can help our team identify what could be happening and determine how to fix this issue.
Thanks,
Berni
BX send email to sending facility (since Jill and Marjorie are the receivers) on 12/30:
<EMAIL_ADDRESS>Report Stream (CDC);Green, Brick (CDC/DDPHSS/OD/HITSSU) (CTR)
Hello Gas Lamp Medical Center Team:
Our operations team here at ReportStream received a request from your public health authority to troubleshoot an issue regarding COVID-19 data that was submitted to ReportStream with incorrect Accession numbers—which failed validation at your jurisdiction.
In order to investigate this further on the ReportStream end, we'll need to know what the Accession numbers are for the records that are included in this screenshot shared with us by your public health authority. Once we know what the correct Accession numbers should've been submitted to ReportStream, that will allow us to verify whether ReportStream is receiving the correct values from Gaslamp Medical Center that we are passing through to public health.
Can your team confirm what the Accession numbers should be for the following messages?
Thanks,
Berni
@brandonnava fyi
@Jcavallo7 @sliu1000 Since this came from SR, can you or someone on the SR side follow up with them?
@TaneaY Do you mind looking into this further.
@sliu1000 Follow-up email sent to requestor<EMAIL_ADDRESS>and submitter<EMAIL_ADDRESS>as I do not see a reply from either for additional information. Prior research of file received on 12/14 shows that CA DPH was sent an HL7 file in the correct format.
@brandonnava
Receiver - Jill from California is requesting for the "expanded format" to be used for files being sent to CA as it's causing parsing issues on their side: See below the specific response:
------------------------------
(https://app.zenhub.com/files/304423150/44837246-5615-441a-a5e0-e6b7d3587152/download)
Hi Tanea, I apologize for my confusion. I was busy last week. I don’t believe the December issue was resolved. And the issue is still happening. Any of the shortened scientific numbers that are the same, will attach to the first instance of the number coming across since accession number is one of our primary matching metrics.
Just going back to 12/1/22, there are 2466 entries with scientific format instead of the expanded number. Many of these have repeated values causing multiple different people to be attached together.
Please escalate this issue to be fixed and resubmitted.
In column H and I, I have highlighted three sets of accession number all for different people that have been attached to each other.
Emailed Jill for an updated example - as the one that is referenced is older than 60 days.
@oslynn Receiver sent over new examples
Please see attached for the accession numbers:
(https://app.zenhub.com/files/304423150/273343fa-3355-4b61-b19d-b85f313970b3/download)
Hi Sharon, here is a list for February. The two examples below are from today.
Shortened and causing reports for different people to attach to each other:
SPM|1|1.67683E+15 & Action Urgent Care I & 05D2078131&CLIA^1.67683E+15 & Action Urgent Care I
Expanded and unique:
SPM|1|1c807a03-f24a-49b6-8f02-36e7eef3bbb5&Torrance Memorial&05D0642594&CLIA^1c807a03-f24a-49b6-8f02-36e7eef3bbb5&Torrance Memorial&05D0642594&CLIA||
After looking at the messages from SimpleReport, I think the sender uploaded a CSV file produced by Excel that has Format Cells -> Number -> Decimal places set to [ 2 ]. This will automatically display a large number in engineering number format (i.e. 1234E+15).
@sliu1000 I need to get with SimpleReport folk to see the csv file that was uploaded to them.
Email sent to SimpleReport Lab:
Hi Noah,
CA State is having issues with reviewing your lab's Accession Numbers for 2/21/2023. Can you please advise on the format you are using when you submit your CSV files via SimpleReport? Perhaps you can send us a sample file so we can see what format you are using. Please DO NOT send any PII/PHI. You can just delete those columns from your file.
Best regards,
Sharon
The ReportStream Support Team
Contact us at<EMAIL_ADDRESS>Support hours: 8:00 am through 9:00 pm Eastern time
Monday-Friday, excluding US public holidays
Sender used Excel or Spreadsheet software to generate the CSV file to upload to SimpleReport. That software has default Format Cells to Number Decimal places of 2. Therefore when they save the file, it saves to the engineering format for a large number that has 12 digits or more (i.e<PHONE_NUMBER>12 save to 123457E+11).
To fix this, the sender needs to reformat all Cells to Number Decimal places of 0, as follows in MS Excel:
1.) Press Ctrl + A to select all cells
2.) Right-click on the mouse.
3.) Select Format Cells
4.) Select Number -> Decimal place -> 0
5.) Press OK
6.) Save the file.
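To illustrate why this breaks accession-number matching, here is a small sketch (the accession values below are made up) showing three distinct 16-digit numbers collapsing into the same 6-significant-digit scientific form:

```python
# Three distinct (made-up) numeric accession numbers:
accessions = [1676834567890123, 1676830000000000, 1676831111111111]

# A scientific display that keeps ~6 significant digits, like Excel's
# "1.67683E+15", throws away the low-order digits:
displayed = [f"{a:.5E}" for a in accessions]
print(displayed)  # ['1.67683E+15', '1.67683E+15', '1.67683E+15']

# All three now look identical, so downstream matching on accession
# number attaches different people's reports to each other:
assert len(set(displayed)) == 1

# And the value cannot be recovered; parsing it back loses the low digits:
print(int(float(displayed[0])))  # 1676830000000000
```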
@oslynn I sent the email and cc'd you.
@sliu1000 for Google Sheets do the following:
1.) Ctrl + A to select all cells
2.) Click the 123 format button and select Custom number format
3.) Select 0 and Apply
4.) File -> Download -> Comma Separated Values (.csv)
Screenshot before:
Screenshot after:
I am closing this ticket: Done.
|
2025-04-01T04:54:48.328738
| 2023-03-27T20:07:01
|
1642738836
|
{
"authors": [
"JessicaWNava",
"arnejduranovic"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13450",
"repo": "CDCgov/prime-reportstream",
"url": "https://github.com/CDCgov/prime-reportstream/issues/8861"
}
|
gharchive/issue
|
CA Flu Pilot: Create SR Sender Transform
User Story:
As ReportStream, I want to achieve an appropriate level of separation of concerns in our transforms by making sure sender-specific transforms, in this case Simple Report, are not stored with receiver transforms, in this case CA.
Description/Use Case
When getting SimpleReport FHIR FLU data sent to California, sender transforms did not exist and so all our transforms happened in the California receiver FHIR -> HL7 transforms under prime-router/metadata/hl7_mapping/STLTs/CA. Now that Sender transforms are implemented (FHIR -> FHIR) we need to comb through all the mappings in the CA folder and determine which ones are specific to SimpleReport and not CA and then move them to a SR sender transform.
Risks/Impacts/Considerations
Dev Notes:
Receiver transforms are FHIR -> HL7 transforms that occur to accommodate special requirements of the receiver (California)
Sender transforms are FHIR -> FHIR transforms that occur to accommodate special requirements or missing data of the sender (SimpleReport)
There are currently no Sender transforms in the repo. I suggest seeing if prime-router/metadata/fhir_mapping/fhir would be a good place to store them? We could make a folder here for each sender, like SimpleReport?
Victor added comments in the CA mappings files for what should be moved over to SimpleReport Sender transform
Acceptance Criteria
[ ] Created a sender transform for SimpleReport and stored it in the appropriate location in repo
[ ] Moved non-CA specific receiver mappings from CA folder to SimpleReport sender (FHIR) transform
[ ] Validated CA output HL7 is the same before and after changes to the mappings
After talking with Patricia and Victor, there is not anything to move over to SimpleReport. However, there is going to be a ticket coming out of this to figure out how to handle the cliaForSender setting, since this is in a bit of a middle no man's land between sender and receiver and may require a custom FHIR function or some other very specific workaround. We also determined that the sending-facility_namespace-id did not actually need to be set, since it was not being set in the COVID pipeline, so next @victor-chaparro is going to remove it and test to make sure there aren't any issues without it.
|
2025-04-01T04:54:48.333310
| 2024-09-27T07:41:20
|
2552242425
|
{
"authors": [
"jorg3lopez"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13451",
"repo": "CDCgov/trusted-intermediary",
"url": "https://github.com/CDCgov/trusted-intermediary/issues/1366"
}
|
gharchive/issue
|
Feature: Complex Map for args for Transformation Engine
DevEx/OpEx
Currently, the transformation_definitions.json file is capable of passing a Map<String,String> as a parameter to the translations objects. Because of this constraint, we are unable to pass more complex types of maps and this forces us to add the complex map/data as part of the transformation class.
Proposed Solution
Add the feature of passing a complex map (Map<String,Object>) from the transformation_definitions.json file. This feature will make it possible to inject the data via the transformation_definitions.json, adding flexibility, reusability, and generalizing our transformations.
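A minimal sketch of the idea (the entry and key names below are hypothetical, not the project's actual schema): once the args value is parsed as a generic object map rather than a string-to-string map, nested structures pass through from the JSON naturally.

```python
import json

# Hypothetical transformation_definitions entry; field names are made up.
entry = json.loads("""
{
  "name": "mapLocalCodesToUniversal",
  "args": {
    "defaultCode": "UNK",
    "codingMap": {"L": "LOINC", "S": "SNOMED"}
  }
}
""")

args = entry["args"]  # behaves like a Map<String, Object>
# A flat Map<String, String> could not carry codingMap; an
# object-valued map can hold nested maps, lists, numbers, etc.:
print(args["codingMap"]["S"])  # SNOMED
```

This is why generalizing the args type lets the complex data live in the JSON file instead of being hard-coded in the transformation class.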
Tasks
[x] Add Map<String,Object> args to Interfaces
[x] CustomFhirTransformation
[x] HappyPathCustomTransformationMockClass
[x] Add Map<String,Object> to TransformationRuleMethod
[x] Add Map<String,Object> to TransformationRule
[x] Add Map<String,Object> args to all available transformations
[x] Refactor
[x] failing tests cases
[x] null check transformations that use args
[x] Test coverage of new code
Additional Context
Add any other context or screenshots about the work here.
PR has been merged!
|
2025-04-01T04:54:48.334638
| 2023-12-11T19:22:34
|
2036372832
|
{
"authors": [
"cdmbase",
"devmax214"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13452",
"repo": "CDEBase/fullstack-pro",
"url": "https://github.com/CDEBase/fullstack-pro/pull/345"
}
|
gharchive/pull-request
|
Update getRoutes
This branch should be based on upgrade/react-router of common-stack
Can one of the admins verify this patch?
|
2025-04-01T04:54:48.343027
| 2015-10-26T18:39:49
|
113423112
|
{
"authors": [
"shlake",
"stephaniesimms"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13453",
"repo": "CDLUC3/dmptool",
"url": "https://github.com/CDLUC3/dmptool/issues/157"
}
|
gharchive/issue
|
Allow HTML formatting on Template "Detailed Question"
Sort of related to issue:
https://github.com/CDLUC3/dmptool/issues/6
The formatting tools would be great to "see" and use when creating "detailed questions", "suggested text", "guidance", and "example text". HTML code can be used within ALL of these except for "Detailed Questions".
Please let HTML work on detailed questions (if it is easy) OR I would love to have the formatting tools for all of the text entry if it is not too hard.
include CKEditor support for this field and Customization resources
|
2025-04-01T04:54:48.401774
| 2021-12-14T19:34:55
|
1080139848
|
{
"authors": [
"michalvasko",
"panlinux"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13454",
"repo": "CESNET/netopeer2",
"url": "https://github.com/CESNET/netopeer2/issues/1106"
}
|
gharchive/issue
|
Test failure in armhf (ubuntu and debian): test_rpc
Hi,
netopeer 2.0.35 is currently failing to build on armhf[1] due to a test_rpc failure. Here is my attempt at getting the relevant output (it's intertwined with other test results):
2: [ RUN ] test_lock_basic
2: [ OK ] test_lock_basic
2: [ RUN ] test_lock_fail
2: "<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="68">
2: <rpc-error>
2: <error-type>protocol</error-type>
2: <error-tag>lock-denied</error-tag>
2: <error-severity>error</error-severity>
2: <error-message xml:lang="en">Access to the requested lock is denied because the lock is currently held by another entity.</error-message>
2: <error-info>
2: <session-id>1</session-id>
2: </error-info>
2: </rpc-error>
2: </rpc-reply>
2: " != "<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="0">
2: <rpc-error>
2: <error-type>protocol</error-type>
2: <error-tag>lock-denied</error-tag>
2: <error-severity>error</error-severity>
2: <error-message xml:lang="en">Access to the requested lock is denied because the lock is currently held by another entity.</error-message>
2: <error-info>
2: <session-id>68</session-id>
2: </error-info>
2: </rpc-error>
2: </rpc-reply>
2: "
4: [ RUN ] test_xpath_basic
4: [ OK ] test_xpath_basic
4: [ RUN ] test_xpath_boolean_operator
4: [ OK ] test_xpath_boolean_operator
4: [ RUN ] test_xpath_union
4: [ OK ] test_xpath_union
4: [ RUN ] test_xpath_namespaces
2: [ LINE ] --- ./tests/test_rpc.c:154: error: Failure!
2/15 Test #2: test_rpc .........................Subprocess aborted***Exception: 7.96 sec
[==========] Running 11 test(s).
[ RUN ] test_lock_basic
[ OK ] test_lock_basic
[ RUN ] test_lock_fail
"<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="68">
<rpc-error>
<error-type>protocol</error-type>
<error-tag>lock-denied</error-tag>
<error-severity>error</error-severity>
<error-message xml:lang="en">Access to the requested lock is denied because the lock is currently held by another entity.</error-message>
<error-info>
<session-id>1</session-id>
</error-info>
</rpc-error>
</rpc-reply>
" != "<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="0">
<rpc-error>
<error-type>protocol</error-type>
<error-tag>lock-denied</error-tag>
<error-severity>error</error-severity>
<error-message xml:lang="en">Access to the requested lock is denied because the lock is currently held by another entity.</error-message>
<error-info>
<session-id>68</session-id>
</error-info>
</rpc-error>
</rpc-reply>
"
[ LINE ] --- ./tests/test_rpc.c:154: error: Failure!
In Debian it seems to also be failing on other architectures, see [2]. All but ppc64el have the same test_rpc failure. ppc64el has other failures, so I wonder if this is a 32 bits thing.
https://launchpadlibrarian.net/574054935/buildlog_ubuntu-jammy-armhf.netopeer2_2.0.35-1_BUILDING.txt.gz
https://tracker.debian.org/pkg/netopeer2
Yes, it would seem like a 32b problem because incorrect printf flags were being used, should be fixed now.
Thanks, the test passes now
|
2025-04-01T04:54:48.403845
| 2020-12-18T07:20:53
|
770641806
|
{
"authors": [
"mdivyamohan",
"michalvasko"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13455",
"repo": "CESNET/netopeer2",
"url": "https://github.com/CESNET/netopeer2/issues/781"
}
|
gharchive/issue
|
Question: Hide a yang module from client
Hi,
Is there a way in which I can install a yang module on server side, but hide it from client?
Something similar to this in netconfd-pro:
The --hide-module parameter specifies the name of a module to hide from advertisements to client sessions. If the specified module name is loaded into the server, then this parameter will cause it to be omitted from the following data structures:
YANG 1.0 <hello> message
/netconf-state/schemas/schema list
/modules-state/module list
No, netopeer2 does not support such a feature.
|
2025-04-01T04:54:48.603959
| 2024-06-11T09:14:38
|
2345856058
|
{
"authors": [
"CKegel",
"DenisSergeevitch",
"GFLJS2100",
"GFLJS2100-user",
"bulieme",
"gwashark"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13456",
"repo": "CKegel/Web-SSTV",
"url": "https://github.com/CKegel/Web-SSTV/issues/1"
}
|
gharchive/issue
|
Where is the decoder
It’s still coming, but the decoder is not there; it’s been a long time and we still don’t have a decoder
And more modes too
I've been toying around with a few possible implementations for the decoder. It is much more difficult than I anticipated and requires a custom AudioWorklet to be written. That being said, over the past few weeks I've made some excellent progress and hope to have something functioning soon.
As for more modes, I just added a few more this month!
Cheers and 73!
Can't wait for the decoding feature!
please add more modes, like the robot 36 mode on the encoder and decoder
Looking forward to have a decoder too, thank you
|
2025-04-01T04:54:48.613204
| 2024-06-03T14:17:12
|
2331280001
|
{
"authors": [
"DangerRevolution"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13457",
"repo": "CM-14/CM-14",
"url": "https://github.com/CM-14/CM-14/pull/2220"
}
|
gharchive/pull-request
|
remove all mentions of tactics key
About the PR
previously added in my previous PR, based on my memory of a much older version of CM; this no longer exists, and I would just like to clarify my mistake before proceeding with other PRs :)
que? where's the merge conflict ;(
git genius, merge conflict solved
|
2025-04-01T04:54:48.616080
| 2023-12-02T18:01:05
|
2022147207
|
{
"authors": [
"Tunguso4ka"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13458",
"repo": "CM-14/CM-14",
"url": "https://github.com/CM-14/CM-14/pull/728"
}
|
gharchive/pull-request
|
Vending machines
About the PR
todo:
[x] Add all sprites
[x] Fix sprite animation speeds
[x] Make prototypes
[x] Fill what can be filled
Why / Balance
Resolves #263
Media
[x] I have added screenshots/videos to this PR showcasing its changes ingame, or this PR does not require an ingame showcase
Ready to review.
My opinion is that med vendors should be filled in the medical PR.
|
2025-04-01T04:54:48.640336
| 2022-02-24T02:51:02
|
1148799039
|
{
"authors": [
"philiponions"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13459",
"repo": "CMPUT301W22T31/QRAdventure",
"url": "https://github.com/CMPUT301W22T31/QRAdventure/issues/40"
}
|
gharchive/issue
|
Implement an efficient way to keep navbar across activities
I noticed a couple of things while working on the navbar:
The way I implemented it was by putting the navbar inside a LinearLayout, but I realized that I would have to paste it into every single activity's XML. That sounds very inefficient.
I also noticed that for every onClick it has, the current activity has to take care of it. That would mean every single activity has to implement goToStats, goToAccount, and goToLeaderboard just so that the navbar works (it crashes if we don't implement them). Again, incredibly inefficient.
As of right now, my current solution is to put the navbar in a separate XML and use setContentView() to "inflate" it. This definitely takes care of the first issue, so you do NOT have to copy and paste it into every activity's XML. However, I'm still not sure how to approach the second issue.
I did this today.
|
2025-04-01T04:54:48.646997
| 2024-09-25T18:30:21
|
2548707993
|
{
"authors": [
"StevenWadeOddball",
"patrickseguraoddball"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13460",
"repo": "CMS-Enterprise/mint-app",
"url": "https://github.com/CMS-Enterprise/mint-app/pull/1381"
}
|
gharchive/pull-request
|
[NOREF] Seed data for echimp
Description
Introduce seed data with existing echimp model data
Update Postman collection
How to test this change
Query the seeded model plan, look at the related echimp CR or TDL
( you can use the updated postman collection to do the query)
PR Author Checklist
[ ] I have provided a detailed description of the changes in this PR.
[ ] I have provided clear instructions on how to test the changes in this PR.
[ ] I have updated tests or written new tests as appropriate in this PR.
[ ] Updated the Postman Collection if necessary.
PR Reviewer Guidelines
It's best to pull the branch locally and test it, rather than just looking at the code online!
When approving a PR, provide a reason why you're approving it
e.g. "Approving because I tested it locally and all functionality works as expected"
e.g. "Approving because the change is simple and matches the Figma design"
Don't be afraid to leave comments or ask questions, especially if you don't understand why something was done! (This is often a great time to suggest code comments or documentation updates)
Check that all code is adequately covered by tests - if it isn't feel free to suggest the addition of tests.
I realize this may have been outside scope, but this was the original issue I brought to clay. I'm unable to query the model plan when querying echimpCRsAndTDLs. This is the error it throws.
Thanks @patrickseguraoddball! As we discussed, the error occurs locally when data is not seeded. Though not a result of this PR, we decided to implement a change to address this in this PR!
Now, if you can't get the cache, we should expect that nil is returned, and an error is logged in the backend. This will make sure we are alerted if this happens in a deployed environment, but allows flexibility when developing. You can confirm that the error shouldn't exist in the GQL response now, but you can look at the docker logs to see an error is logged if you query echimp crs or tdls without seeded data.
|
2025-04-01T04:54:48.739927
| 2024-12-19T10:24:37
|
2749860487
|
{
"authors": [
"muratams"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13461",
"repo": "CMU-cabot/cabot-navigation",
"url": "https://github.com/CMU-cabot/cabot-navigation/pull/114"
}
|
gharchive/pull-request
|
fix localization demo script to set use_sim_time parameter
fix localization demo launch file to correctly set use_sim_time parameter to nodes
fix tf_speed_control node to use node->get_clock so that it also works with ROS bag play
It was confirmed that the tf_speed_control node and multi_floor_manager worked correctly on a physical robot after these changes.
|
2025-04-01T04:54:48.747079
| 2023-07-27T20:54:03
|
1825208810
|
{
"authors": [
"AutonomicPerfectionist",
"ryanmrichard"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13462",
"repo": "CMakePP/CMakePPLang",
"url": "https://github.com/CMakePP/CMakePPLang/pull/105"
}
|
gharchive/pull-request
|
Treat pointer types like desc
Is this pull request associated with an issue(s)?
Fixes #96
Description
This PR adds support for pointer types in signatures and type assertions by treating them like they are desc.
TODOs
[x] Add tests for cpp_is_type() for pointer types
[ ] Restrict cpp_is_type pointer type detection, right now it's a simple regex
[x] Update documentation
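For illustration, a pointer-type check of the kind the TODO describes might look like this regex sketch (the actual cpp_is_type implementation is CMake code; this Python analogy is only a sketch of the matching logic):

```python
import re

# Matches a base type name followed by one or more '*' characters.
POINTER_RE = re.compile(r"^(?P<base>[A-Za-z_]\w*)(?P<stars>\*+)$")

def pointer_depth(type_name):
    """Return (base_type, depth), e.g. 'list**' -> ('list', 2), else None."""
    m = POINTER_RE.match(type_name)
    if m is None:
        return None
    return m.group("base"), len(m.group("stars"))

print(pointer_depth("list**"))  # ('list', 2)
print(pointer_depth("bool"))    # None
```

A simple pattern like this only checks the spelling of the type; it cannot verify that, say, a bool* actually points at a bool, which is the restriction being punted on here.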
I think this is ready for review, I haven't restricted the is_type check but let me know if that would be desirable
To be clear, when you say restricting the is_type check, are you referring to making sure that the the say a bool* actually points to a bool rather than pointing to say a list?
Yeah that's what I mean
Yeah, I'm fine punting on that for now.
Sounds good. I did see a couple diagrams in the documentation, I didn't modify any of them but should we? I think the only relevant one is the type relations diagram
How would a pointer to a pointer be handled in this implementation (for example, list**)?
I did think of that and it should work, but I'll definitely add tests and documentation for it
Sounds good. I did see a couple diagrams in the documentation, I didn't modify any of them but should we? I think the only relevant one is the type relations diagram
It's be good to keep the documentation in synch or at least open an issue about it.
@zachcran is there a source file somewhere for the type_relations.png diagram?
Hmmm, tried opening it in drawio.com and it said it wasn't a diagram file... I checked the true file type and it is indeed a png file. Checked metadata with ImageMagick and I don't see anything that could be interpreted as the diagram source
It's possible I forgot to click the "embed diagram" (or whatever they call the option) upon saving. I guess that means the image will probably need to be remade. Don't worry about doing that unless you really want to (and if you do I recommend using Excalidraw, since we've been using that instead of draw.io for some time now).
Sounds good, for now, I just added a small blurb to the RST file where the diagram is shown explaining the relationships for pointers
|
2025-04-01T04:54:48.759603
| 2020-05-19T02:06:34
|
620620775
|
{
"authors": [
"bam241",
"gonuke",
"kkiesling"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13463",
"repo": "CNERG/IsogeomGenerator",
"url": "https://github.com/CNERG/IsogeomGenerator/pull/33"
}
|
gharchive/pull-request
|
[WIP] Refactoring + Adding comprehensive tests
This is not fully complete yet, but can start to get reviewed.
The goal of this PR was to refactor the vol.py tool such that previously single class is separated into two classes each corresponding to two different steps in the geometry creation process.
The two new classes are:
IsoVolDatabase: this is the first step that uses only VisIt to generate a database of isovolumes.
IsoSurfGeom: this is the second step that creates the DAGMC geometry from the database created in the first step.
There are three pieces of information that connect these two steps:
data: the name of the data on the original cartesian mesh file
dbname: the path to the folder containing the database of isovolume files generated in the visit step
levels: the values that were used for the isosurfaces
This information for each step can be assigned in many different ways in each class. These member variables can be assigned when the object is instantiated or when the over-arching method is called. In the case of the IsoSurfGeom class, an IsoVolDatabase object can also be passed in that contains all this information. In this refactor, I have implemented some new logic for handling these various methods for providing input.
Additional changes in this PR:
Introduce the package meshio which allows me to remove some complicated logic for checking if the min/max values for the levels are within the bounds of the provided data.
removal of unused methods and variables
adds a method for writing out a levelfile in the first step and reading it in the second step
Add tests for everything (NOTE: some tests are still needed, this is still incomplete)
code coverage analysis in CI
warnings and errors are real warnings and errors and not just print statements now
Things that still need updating in this PR (WIP):
[ ] complete tests
[ ] update the CLI/argparse options in generate_isogeom.py to be consistent with the refactor
[ ] update readme
[ ] make sure docstrings are all up to date (or existent)
[ ] confirm that all the necessary checks for information are present
[ ] confirm that methods are complete
If you are reviewing the files, there is a lot added because of the tests. The most important file is IsogeomGenerator/vol.py. The changes here are significant enough I would recommend viewing it as a split diff or just look at the file entirely separate on my branch. The other most important file is test/test_vol.py. Both of these files need the diff loaded to view.
@bam241 - can you take a look at the new structure of the files I made here? I separated my huge vol.py file that contained both classes for the two steps into two separate files (IsoVolDatabase.py and IsoSurfGeom.py). I made a new file called tools.py that is meant to be the driver file. There are three available "tools" in this file: the first is a tool to generate levels() (optional in the full workflow), generate_volumes() which is the overarching workflow for the information in the IsoVolDatabase class (the Visit step), and generate_geometry() which is the workflow for the second step (moab step). Can you briefly take a look at the function signatures in tools.py and the inits of the two new classes just to see if this is what you were suggesting?
Also, any suggestion of a file name that's better than tools.py or driver.py?
(note, the file generate_isogeom.py is the CLI and has yet to be updated, and the tests aren't updated with the new addition of the driver file).
In both of the classes, there is the common member variables for data, levels, db, and the method read_levels(). So I think I might make a parent class to hold these and then these two classes will inherit from that parent class.
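That proposed parent class could look roughly like the sketch below. All names besides the two class names are assumptions based on the description above, not the project's actual implementation.

```python
class IsoGeomBase:
    """Shared state and behavior for the two workflow steps:
    IsoVolDatabase (the Visit step) and IsoSurfGeom (the MOAB step)."""

    def __init__(self, data=None, levels=None, db=None):
        self.data = data      # name of the data field to contour
        self.levels = levels  # list of level values
        self.db = db          # path to the database directory

    def read_levels(self, levelfile):
        """Read one level value per line from a level file and store
        them sorted on self.levels."""
        with open(levelfile) as f:
            self.levels = sorted(float(line) for line in f if line.strip())


class IsoVolDatabase(IsoGeomBase):
    pass  # Visit-step specifics would go here


class IsoSurfGeom(IsoGeomBase):
    pass  # MOAB-step specifics would go here
```

With this layout, read_levels() and the common attributes live in one place, and each step's class only adds its own step-specific methods.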
@bam241 - can you take a look at the new structure of the files I made here (specifically in the IsogeomGenerator folder)? I separated my huge vol.py file, which contained the classes for both steps, into two separate files (IsoVolDatabase.py and IsoSurfGeom.py). I made a new file called tools.py that is meant to be the driver file. There are three available "tools" in this file: generate_levels() (optional in the full workflow); generate_volumes(), which is the overarching workflow for the information in the IsoVolDatabase class (the Visit step); and generate_geometry(), which is the workflow for the second step (the MOAB step). Can you briefly take a look at the function signatures in tools.py and the inits of the two new classes just to see if this is what you were suggesting?
yes it is !
Also, any suggestion of a file name that's better than tools.py or driver.py?
I think I prefer driver.py, as it is not a set of utility methods to be used elsewhere but rather the main course :)
(note, the file generate_isogeom.py is the CLI and has yet to be updated, and the tests aren't updated with the new addition of the driver file).
👍
Thanks so much for taking a look @bam241!
Merging this to provide a clean slate for incremental improvements
|
2025-04-01T04:54:48.770436
| 2024-07-02T13:26:55
|
2386257909
|
{
"authors": [
"dwpeng",
"jamshed"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13464",
"repo": "COMBINE-lab/cuttlefish",
"url": "https://github.com/COMBINE-lab/cuttlefish/issues/42"
}
|
gharchive/issue
|
How to output maximal unitigs?
Hi, Jamshed @jamshed
Thanks for cuttlefish, which gives me a new, very fast way to build a cDBG. I have one question after reading your paper. I noticed that you find the maximal unitigs by traversing vertices as described in the paper (the "two walks"). Each walk stops when it finds a "Fuzzy-Side", and the two paths are concatenated if possible. So I checked the code in the GitHub repo and tried to find where the corresponding logic is implemented, but the details in the repo seem to differ somewhat from the paper.
Reading output_maximal_unitigs_plain in src/CdBG_Plain_Writer.cpp, I find there is no vertex traversal; instead, every k-mer in seq[left_end:right_end] is visited. First, a start_kmer is found, and then the code tries to find the matching end_kmer. If such a pair of end k-mers is found, the unitig is output (if it has not been output already).
This is my understanding of how the unitigs are output. Please point it out for me if I'm wrong.
template <uint16_t k>
size_t CdBG<k>::output_maximal_unitigs_plain(const uint16_t thread_id, const char* const seq, const size_t seq_len, const size_t right_end, const size_t start_idx)
{
size_t kmer_idx = start_idx;
// assert(kmer_idx <= seq_len - k);
Annotated_Kmer<k> curr_kmer(Kmer<k>(seq, kmer_idx), kmer_idx, *hash_table);
// The subsequence contains only an isolated k-mer, i.e. there's no valid left or right
// neighboring k-mer to this k-mer. So it's a maximal unitig by itself.
if((kmer_idx == 0 || Kmer<k>::is_placeholder(seq[kmer_idx - 1])) &&
(kmer_idx + k == seq_len || Kmer<k>::is_placeholder(seq[kmer_idx + k])))
output_plain_unitig(thread_id, seq, curr_kmer, curr_kmer);
else // At least one valid neighbor exists, either to the left or to the right, or on both sides.
{
// No valid right neighbor exists for the k-mer.
if(kmer_idx + k == seq_len || Kmer<k>::is_placeholder(seq[kmer_idx + k]))
{
// A valid left neighbor exists as it's not an isolated k-mer.
Annotated_Kmer<k> prev_kmer(Kmer<k>(seq, kmer_idx - 1), kmer_idx, *hash_table);
if(is_unipath_start(curr_kmer.state_class(), curr_kmer.dir(), prev_kmer.state_class(), prev_kmer.dir()))
// A maximal unitig ends at the ending of a maximal valid subsequence.
output_plain_unitig(thread_id, seq, curr_kmer, curr_kmer);
// The contiguous sequence ends at this k-mer.
return kmer_idx + k;
}
// A valid right neighbor exists for the k-mer.
Annotated_Kmer<k> next_kmer = curr_kmer;
next_kmer.roll_to_next_kmer(seq[kmer_idx + k], *hash_table);
bool on_unipath = false;
Annotated_Kmer<k> unipath_start_kmer;
Annotated_Kmer<k> prev_kmer;
// No valid left neighbor exists for the k-mer.
if(kmer_idx == 0 || Kmer<k>::is_placeholder(seq[kmer_idx - 1]))
{
// A maximal unitig starts at the beginning of a maximal valid subsequence.
on_unipath = true;
unipath_start_kmer = curr_kmer;
}
// Both left and right valid neighbors exist for this k-mer.
else
{
prev_kmer = Annotated_Kmer<k>(Kmer<k>(seq, kmer_idx - 1), kmer_idx, *hash_table);
if(is_unipath_start(curr_kmer.state_class(), curr_kmer.dir(), prev_kmer.state_class(), prev_kmer.dir()))
{
on_unipath = true;
unipath_start_kmer = curr_kmer;
}
}
if(on_unipath && is_unipath_end(curr_kmer.state_class(), curr_kmer.dir(), next_kmer.state_class(), next_kmer.dir()))
{
output_plain_unitig(thread_id, seq, unipath_start_kmer, curr_kmer);
on_unipath = false;
}
// Process the rest of the k-mers of this contiguous subsequence.
for(kmer_idx++; on_unipath || kmer_idx <= right_end; ++kmer_idx)
{
prev_kmer = curr_kmer;
curr_kmer = next_kmer;
if(is_unipath_start(curr_kmer.state_class(), curr_kmer.dir(), prev_kmer.state_class(), prev_kmer.dir()))
{
on_unipath = true;
unipath_start_kmer = curr_kmer;
}
// No valid right neighbor exists for the k-mer.
if(kmer_idx + k == seq_len || Kmer<k>::is_placeholder(seq[kmer_idx + k]))
{
// A maximal unitig ends at the ending of a maximal valid subsequence.
if(on_unipath)
{
output_plain_unitig(thread_id, seq, unipath_start_kmer, curr_kmer);
on_unipath = false;
}
// The contiguous sequence ends at this k-mer.
return kmer_idx + k;
}
else // A valid right neighbor exists.
{
next_kmer.roll_to_next_kmer(seq[kmer_idx + k], *hash_table);
if(on_unipath && is_unipath_end(curr_kmer.state_class(), curr_kmer.dir(), next_kmer.state_class(), next_kmer.dir()))
{
output_plain_unitig(thread_id, seq, unipath_start_kmer, curr_kmer);
on_unipath = false;
}
}
}
}
// Return the non-inclusive ending index of the processed contiguous subsequence.
return kmer_idx + k;
}
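Stripped of the boundary handling, the control flow above is a single left-to-right scan: open a unitig when a k-mer qualifies as a unipath start, emit it when a k-mer qualifies as a unipath end. A toy sketch of that pattern, purely illustrative — `is_start`/`is_end` stand in for the state-class/direction checks and are not the real API:

```python
def scan_maximal_unitigs(seq, k, is_start, is_end):
    """Emit (start_index, end_index) pairs of maximal unitigs found
    by one left-to-right scan over `seq`, mimicking the control flow
    of output_maximal_unitigs_plain.

    `is_start(i)` / `is_end(i)` are predicates over the k-mer at
    index i, standing in for the unipath start/end checks.
    """
    unitigs = []
    on_unipath = False
    start = None
    for i in range(len(seq) - k + 1):
        if not on_unipath and is_start(i):
            on_unipath, start = True, i
        if on_unipath and is_end(i):
            unitigs.append((start, i + k))  # unitig spans seq[start:i + k]
            on_unipath = False
    return unitigs
```

If every k-mer is both a start and an end, each k-mer is its own unitig; if only the first k-mer starts and only the last one ends, the whole sequence is emitted as a single unitig.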
Hi @dwpeng: good to know your interest in the algorithm!
Based on your description, you read the Cuttlefish 2 paper; but the implementation you're looking at is for the original Cuttlefish paper. You'll find the relevant implementation of the specific methods you're looking for at these files (and their .cpp counterparts): Read_CdBG.hpp, Read_CdBG_Constructor.hpp, and Read_CdBG_Extractor.hpp.
Regards.
I am so happy to receive your reply. Thank you. I will reread your code.
Regards.
|
2025-04-01T04:54:48.771566
| 2023-10-13T22:39:35
|
1942732401
|
{
"authors": [
"COMPRyanLI"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13465",
"repo": "COMPRyanLI/COMP333Hw2",
"url": "https://github.com/COMPRyanLI/COMP333Hw2/pull/12"
}
|
gharchive/pull-request
|
FIrst draft of login page
The PHP code works: users can check whether their username and password are correct.
The code for creating a session still needs to be changed.
|
2025-04-01T04:54:48.783904
| 2024-07-29T07:25:05
|
2434660156
|
{
"authors": [
"u21631532"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:13466",
"repo": "COS301-SE-2024/occupi",
"url": "https://github.com/COS301-SE-2024/occupi/pull/249"
}
|
gharchive/pull-request
|
Chore/occupi tech
Description
This PR adds Framer Motion animations to the VisualFeatures component, enhancing the user experience with smooth transitions and engaging animations. The changes include:
- Adding container animations for each major section
- Implementing text animations for content to fade in and slide up
- Adding image animations to fade in and scale up slightly
- Using Framer Motion variants for easy animation management and reuse
These changes improve the visual appeal of the VisualFeatures component and create a more dynamic and engaging user interface.
Fixes #123 (assuming there was an issue to enhance the VisualFeatures component)
Type of change
[x] New feature (non-breaking change which adds functionality)
How Has This Been Tested?
The changes have been tested in the following ways:
- [x] Manual testing in development environment
- [x] Verified animations work correctly on different screen sizes
- [x] Checked performance impact of animations
Checklist:
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] Any dependent changes have been merged and published in downstream modules
Additional Notes
- The animations are designed to be subtle and not overwhelming
- Performance impact should be minimal, but should be monitored in production
- Consider adding a toggle for users who prefer reduced motion
@waveyboym please check if occupi.tech is suitable. Also, a few additions are still needed, such as the FAQ page, the About Us page, the Privacy Policy, and the Terms of Service.
|