| added (string, date: 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], date: 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, lengths 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, lengths 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T04:35:04.958677
| 2018-02-23T02:54:17
|
299582591
|
{
"authors": [
"ejmg",
"nelsonlim"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9609",
"repo": "pallets/click",
"url": "https://github.com/pallets/click/issues/937"
}
|
gharchive/issue
|
Importing group commands from separate module
Hi there.
I have two modules. My driver, cli.py, and then another, parser.py, which is in a subdirectory, commands.
I know one can create a group as follows:
import click

@click.group()
def cli():
    pass

@cli.command()
def foo():
    click.echo("sup")
and so forth.
However, according to the docs, if I were to define my subcommands in a separate module (as I have), I must add them as such in cli.py:
from myModule.commands import foo, bar

@click.group()
def cli():
    pass

cli.add_command(foo)
cli.add_command(bar)
and so forth.
Is there any way to avoid the above and instead declare my subcommands like in the first example while also keeping them in a separate module?
Importing my driver cli into parser.py seems to trigger a circular dependency and gives me AttributeError: module 'myPackage' has no attribute 'cli' when I attempt to use decorators like @cli.command(). Actual code as follows:
from quoteBot.quoteBot import cli

@cli.command()
def extractTXT(file):
    """Extracts quotes from a .txt file.

    :param file: the file being parsed
    """
    click.echo("extractTXT called!")
Using import quoteBot.quoteBot syntax throws the same error, FYI.
This is not a "make or break" issue, I am just interested to know if such logic is possible. Thank you for your time.
Could you clarify the folder structure of your .py files?
Is it?
-cli.py
-myModule/
-commands/
-parser.py
It's a bit hard to tell what the issue might be.
My bad, that is clearly something I should have included. Here is the package:
├── LICENSE
├── MANIFEST.in
├── quoteBot
│ ├── commands
│ │ ├── authenticate.py
│ │ ├── __init__.py
│ │ ├── parser.py
│ │ └── tweet.py
│ ├── __init__.py
│ ├── __main__.py
│ └── quoteBot.py
├── README.rst
My naming might have slightly changed since the original post. cli is inside quotebot.py and is still the driver.
Thank you for taking the time to respond :slightly_smiling_face:
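For reference, a minimal sketch of one common way to avoid the circular import with a layout like the one above (module and command names are taken from the thread; the late import in __main__.py is an assumption, not an official Click recommendation):
# quoteBot/quoteBot.py -- defines the group only and does NOT import the command modules
import click

@click.group()
def cli():
    pass

# quoteBot/commands/parser.py -- imports the group and attaches commands to it
import click
from quoteBot.quoteBot import cli

@cli.command()
@click.argument("file")
def extracttxt(file):
    """Extracts quotes from a .txt file."""
    click.echo("extractTXT called!")

# quoteBot/__main__.py -- imports the command modules only after the group exists,
# so quoteBot.py never has to import the commands package and no cycle is created
from quoteBot.quoteBot import cli
import quoteBot.commands.parser  # noqa: F401  (imported for the registration side effect)

if __name__ == "__main__":
    cli()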
|
2025-04-01T04:35:04.960794
| 2018-04-22T02:21:14
|
316542450
|
{
"authors": [
"PyDever",
"davidism"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9610",
"repo": "pallets/flask",
"url": "https://github.com/pallets/flask/issues/2716"
}
|
gharchive/issue
|
Port Issue
Not sure if this is a bug or a system issue.
(debug=True, port=3000)
yet my app still runs on 5000 (default port)
You're presumably referring to passing arguments to app.run. But the flask command doesn't execute that function. Use the --port option: flask run --port 3000.
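For reference, a minimal sketch of the distinction (a hypothetical app.py, not taken from the reporter's project):
from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    # Only honored when the script starts the server itself: python app.py
    app.run(debug=True, port=3000)

# The flask CLI never calls app.run(), so set the port on the command line instead:
#   flask run --port 3000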
Thank you.
I usually don't build with flask, I just do make.
|
2025-04-01T04:35:04.964471
| 2017-09-20T10:37:07
|
259117516
|
{
"authors": [
"ThiefMaster",
"indranilroyaiem"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9611",
"repo": "pallets/jinja",
"url": "https://github.com/pallets/jinja/issues/773"
}
|
gharchive/issue
|
How can I render in HTML the JSON data below?
## Your code
{'Namelist': {'thomas': {'gender': 'male', 'age': '23'}, 'david': {'gender': 'male'}, 'jennie': {'gender': 'female', 'age': '23'}, 'alex': {'gender': 'male'}}, 'selectors': {'naming': 'studentlist', 'code': 16}}
Scenario: I iterated through the serialized JSON, converted it to a Python object, and evaluated the output, as I only needed the names whose age is 23 and gender is male (in this case two results should be shown: david, alex), but this fails to do so.
Any suggestions on how to achieve that?
## Your Environment
* Python version: 2.7
* Jinja version: 2.9.7
this is not the right place for support questions. use IRC or stack overflow.
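For reference, a minimal sketch of one way to express that filter in a template, assuming the data shape shown above (the variable name data is an assumption):
from jinja2 import Template

data = {
    'Namelist': {
        'thomas': {'gender': 'male', 'age': '23'},
        'david': {'gender': 'male'},
        'jennie': {'gender': 'female', 'age': '23'},
        'alex': {'gender': 'male'},
    },
    'selectors': {'naming': 'studentlist', 'code': 16},
}

template = Template("""
<ul>
{% for name, info in data.Namelist.items() %}
{% if info.get('gender') == 'male' and info.get('age') == '23' %}
  <li>{{ name }}</li>
{% endif %}
{% endfor %}
</ul>
""")

# Prints an <li> entry for every name matching both conditions
print(template.render(data=data))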
|
2025-04-01T04:35:04.975171
| 2021-05-05T01:33:12
|
875968703
|
{
"authors": [
"mtblanton"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9612",
"repo": "palmetto/palmetto-components",
"url": "https://github.com/palmetto/palmetto-components/pull/461"
}
|
gharchive/pull-request
|
feat: allow popover to be server-side rendered
Github Issue or Trello Card
This PR addresses this issue: https://trello.com/c/bC292LqA
Palmetto.com has to force some clients to only render on the client-side instead of being server-side rendered. This fixes that for any components using Popover directly, but not through DateInput (DateInput forces withPortal to true, which renders it with a portalTarget of document.body).
LMK what you think of the typing, esp around the discriminated union. It prohibits setting portalTarget unless withPortal is true. I did this because we only ever need portalTarget if withPortal is true. I wanted to make it only required if withPortal is true but couldn't do that with the param initialization!
What type of change is this?
Please delete options that are not relevant.
[x] Bug fix (non-breaking change which fixes an issue)
[x] Breaking change (fix or feature that would cause existing functionality to not work as expected)
Checklist:
[x] My code follows the style guidelines of this project
The typing for Popover isn't quite the same. Let me know if that's a problem.
[x] I have performed a self-review of my own code
[x] I have commented my code, particularly in hard-to-understand areas
[x] I have made corresponding changes to the documentation
[x] My changes generate no new warnings
[ ] I have added tests that prove my fix is effective or that my feature works
[x] New and existing unit tests pass locally with my changes
UI Checklist
[x] I have conducted visual UAT on my changes/features.
the storybook still looked good/worked! P.com proposal also still looks good.
[x] My solution works well on desktop, tablet, and mobile browsers.
proposal controls not showing up
|
2025-04-01T04:35:05.162085
| 2022-04-26T11:53:28
|
1215843157
|
{
"authors": [
"ffashion",
"pandax381"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9614",
"repo": "pandax381/microps",
"url": "https://github.com/pandax381/microps/issues/18"
}
|
gharchive/issue
|
global var socks and tcp_pcb pcbs
I found that the socks array's length is 128 and the tcp_pcbs array's length is 16, so the tcp_pcb array's length is small and the socks array's length is too big. Should they be equal?
Also, we use the array index to associate these two variables. Can we use a list or something else to associate them, so that we can use more fds?
I have reserved a larger array for the socket, but surely it would be better to match the sum of the TCP and UDP PCB arrays.
The use of array indexes to tie sockets to PCBs is for simplicity. I'm sure there are other, more efficient ways to do this, but I don't want to complicate the code too much for this project.
Yes, I get it, thanks very much. I was wrong.
|
2025-04-01T04:35:05.235203
| 2015-08-16T03:58:42
|
101231786
|
{
"authors": [
"jsirois",
"piotr-dobrogost",
"rouge8"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9615",
"repo": "pantsbuild/pex",
"url": "https://github.com/pantsbuild/pex/pull/153"
}
|
gharchive/pull-request
|
Use Travis's container-based infrastructure
Jobs on the container-based infrastructure should start faster.
Replaced by #168
Aha - did not see this. I just started helping out on pex maintenance.
Thanks for having done this despite being trampled.
Actually for pip they are slower – https://github.com/pypa/pip/pull/3095
We shall see as CI runs accumulate, there is more at play than pip here. Pex has many shards to schedule, and that may give an overall win. We've had 1 container based build land and the variances are high, but it is ~10% faster overall: https://travis-ci.org/pantsbuild/pex/builds.
For pants - which also has many shards - the win was noticeable. We switched back on 2014-12-19 and have been happy customers.
|
2025-04-01T04:35:05.237040
| 2018-07-12T07:44:15
|
340524435
|
{
"authors": [
"guyuecode",
"pantsel"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9616",
"repo": "pantsel/konga",
"url": "https://github.com/pantsel/konga/issues/235"
}
|
gharchive/issue
|
Can't create routes
When I create a route, it pops up a hint.
Thanks.
@guyuecode ,
do what it says on the blue box in your screenshot.
Cheers
Thanks.
|
2025-04-01T04:35:05.239400
| 2017-07-18T07:36:13
|
243625706
|
{
"authors": [
"edwardjrp",
"pantsel",
"thucnc"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9617",
"repo": "pantsel/konga",
"url": "https://github.com/pantsel/konga/issues/80"
}
|
gharchive/issue
|
New form-api-011.html for Kong v0.11 release
Kong will release the next version, 0.11, soon. Actually, it has the 0.11 RC already, so please add a new form to add a new API for this version.
Thanks.
@here Any news on upgrading? I'm using it with yesterday's 0.11.0 stable release and most things work except editing an API entry; plugin editing and other things work.
@edwardjrp , @thucnc ,
I'm going to work on the 0.11.* integration tomorrow probably.
Cheers
Check out v0.8.3 release.
|
2025-04-01T04:35:05.303048
| 2021-11-30T21:24:00
|
1067689340
|
{
"authors": [
"DocOfPineapples",
"TheDogg",
"dopazz",
"dustinosity"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9618",
"repo": "papyrus-mc/papyruscs",
"url": "https://github.com/papyrus-mc/papyruscs/issues/93"
}
|
gharchive/issue
|
Overworld rendering broken in 1.18
This morning I tried running papyrus in Minecraft 1.18. However, it seems this patch has broken the rendering for papyrus as new and some old areas are rendered as if they were underground (below is a picture of what happens). I'm running papyrus on ubuntu server 20.04 and I'm using the latest commit from the master branch (the 1.17.30 chunk fix). I don't know enough about this project to try and contribute to this code but I figured this would be the better place to post this issue instead of the discord. Let me know if there is any other information I should be providing.
I can confirm I have the same issue.
I have the same problem. As a workaround I added two switches to the command line:
--notrimceiling --limity 200
This output a proper rendering of the surface for me.
I tested it out and I eventually got it to work with --notrimceiling --y 319. I used 319 instead of 200 since it is the new build height for 1.18.
Hopefully, someone will be able to modify the source so it works correctly with 1.18 soon so we don't have to rely on this.
@DocOfPineapples can I ask what the full command you use to generate is? I just tried to use this yesterday on a 1.18 world and all I'm getting is blank image tiles. Thinking I have more than one issue at once here...
ok, some progress: just built master from source and that helped a bit. rendering non blank chunks now. just has some weird missing blocks.
ok, i just forced an overwrite of everything and now it's not so bad.
Hi @dustinosity, yeah I had some issues too when I first tried to get it working, and also assumed the problems I had were related to another issue I might be having. Here is the command that eventually worked:
PapyrusCs --world WorldLocation --output OutputLocation --htmlfile index.html --playericons true --deleteexistingupdatefolder true -d 0 --notrimceiling true -y 319 --forceoverwrite
After I got a full render I stopped using --forceoverwrite; I still need to check whether new chunks generated after the render come through properly. The first time I did it, it didn't fully render everything. I think --forceoverwrite helped fix that, but it could have been something else and I don't remember exactly what I did. But it sounds like you got it working.
This pull request fixes this issue.
|
2025-04-01T04:35:05.305762
| 2024-02-10T23:07:20
|
2128747564
|
{
"authors": [
"loocapro",
"shekhirin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9619",
"repo": "paradigmxyz/reth",
"url": "https://github.com/paradigmxyz/reth/issues/6543"
}
|
gharchive/issue
|
Update current_stage tracker for Status message when unwinding
Describe the feature
Currently, we update it only on PipelineEvent::Run https://github.com/paradigmxyz/reth/blob/8cfa5efe62e79923b317d9685fc9f8205d5a92b5/bin/reth/src/commands/node/events.rs#L112 and PipelineEvent::Ran https://github.com/paradigmxyz/reth/blob/8cfa5efe62e79923b317d9685fc9f8205d5a92b5/bin/reth/src/commands/node/events.rs#L158 events. We need to do the same on unwind to keep the Status message up-to-date.
Additional context
No response
I can take this!
@loocapro assigned!
|
2025-04-01T04:35:05.431581
| 2023-02-22T04:12:53
|
1594428348
|
{
"authors": [
"harrysolovay",
"tjjfvi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9620",
"repo": "paritytech/capi",
"url": "https://github.com/paritytech/capi/issues/639"
}
|
gharchive/issue
|
initializing runes from constants
Currently, we do the following:
const rune = Rune.constant(theValue).into(TheRuneSubclassCtor)
Let's consider a larger example.
const denoFsHost = Rune
  .constant({
    readFile(filePath: string) {
      return Deno.readFile(filePath)
    },
    // more ...
  })
  .into(FsHostRune)
If we rework the design as follows...
- the parameter types can be inferred
- we need not import Rune
- confusion about the meaning of Rune.constant is avoided entirely (#569)
- the code is more legible
const denoFsHost = FsHostRune.of({
  readFile(filePath) {
    return Deno.readFile(filePath)
  },
  // more ...
})
This is in line with my suggestion from #569:
I think Rune.resolve should be renamed to Rune.from. Then, it can be redesigned to work with subclasses, such that FooRune.from(runic, ...ctorArgs) is a shorthand for Rune.resolve(runic).into(FooRune, ...ctorArgs) (though I would need to confirm that the typing is fine).
Blocked on #517
@harrysolovay I think you commented on the wrong issue?
|
2025-04-01T04:35:05.435473
| 2021-05-06T14:09:16
|
877540078
|
{
"authors": [
"notlesh",
"sorpaas",
"tgmichel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9621",
"repo": "paritytech/frontier",
"url": "https://github.com/paritytech/frontier/pull/377"
}
|
gharchive/pull-request
|
Optional net_peerCount response format
There are inconsistencies in the response format expected when calling net_peerCount. Some tools/dapps expect decimal format, others hex.
This PR adds a bool peer_count_as_hex parameter to NetApi, so projects instantiating the handler at service level can decide which format to use. Defaulted to false in the template.
@sorpaas Since we've gone back and forth on the return type for this, what do you think about this solution? We're hoping to put out a new moonbeam release soon and would love to be as close to master as possible.
The PR looks fine, but honestly I think if Ethereum dapps expect different types themselves, then that's not really much we can do. We just have to make sure by default we follow Geth's type definition.
Yeah, I will re-open tomorrow; we realized it was missing untagged for serializing the enum correctly. I will also make sure that we default to Geth's format, thanks for pointing that out.
|
2025-04-01T04:35:05.455771
| 2022-03-01T23:22:00
|
1155931253
|
{
"authors": [
"joao-paulo-parity"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9622",
"repo": "paritytech/pr-custom-review",
"url": "https://github.com/paritytech/pr-custom-review/issues/55"
}
|
gharchive/issue
|
Allow for optionally specifying target branches for rules
Problem: users might be inconvenienced by the CI status not passing when they want to merge a branch into another branch which is not master, e.g. a big feature branch (that does happen from time to time).
Solution: allow for specifying target_branches which means the rule will only be active if those branches are the targets of the pull request.
rules:
  - name: Foo
+   target_branches:
+     - master
+     - release-v[0-9]
    condition: .*
    check_type: diff
    min_approvals: 1
This can be implemented with workflows' if directly
steps:
  - uses: paritytech/pr-custom-review@v1
    if: ${{ github.base_ref == 'master' }}
|
2025-04-01T04:35:05.458174
| 2023-08-04T23:24:52
|
1837439352
|
{
"authors": [
"MrishoLukamba",
"paritytech-cicd-pr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9623",
"repo": "paritytech/substrate",
"url": "https://github.com/paritytech/substrate/pull/14717"
}
|
gharchive/pull-request
|
ext fees inclusion on tip
fixes #12169
In the quest to get a clear picture of the problem,
I have included the final_scaled_tip for normal txns to include their inclusion fees. But I have a question: normal txns will be competing with operational txns, and I think we have to make sure the normal txn scaled_tip won't be greater than the operational txn scaled tip.
Also, concerning Gav's issue on standard-scale txn priority (#11405), I think we should change how we operate on txn priority and maybe have a trait so that Substrate chains can define how they want the priority to work, and perhaps provide a default implementation.
The CI pipeline was cancelled due to the failure of one of the required jobs.
Job name: test-linux-stable
Logs: https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/3335903
|
2025-04-01T04:35:05.466614
| 2016-03-03T07:52:45
|
138104187
|
{
"authors": [
"dpawasi",
"ropable"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9624",
"repo": "parksandwildlife/oim-cms",
"url": "https://github.com/parksandwildlife/oim-cms/pull/22"
}
|
gharchive/pull-request
|
Added org_unit__secondary_location__name to UserResource class.
Reviewed 1 of 1 files at r1.
Review status: all files reviewed at latest revision, all discussions resolved.
Comments from the review on Reviewable.io
|
2025-04-01T04:35:05.474541
| 2023-04-23T17:36:12
|
1680105478
|
{
"authors": [
"masayag"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9625",
"repo": "parodos-dev/parodos",
"url": "https://github.com/parodos-dev/parodos/pull/273"
}
|
gharchive/pull-request
|
Flpath 328: assessment workflow options should be available to the user
The patch introduces a new API endpoint. The purpose of the new API is
to enable users to retrieve context parameters of an executed workflow,
specifically of type "assessment".
The patch adds a new controller method that maps to a new endpoint.
This new endpoint accepts the workflow execution ID and a list of context
parameters to retrieve. The method then retrieves the specified context
parameters from the workflow service and returns them as a JSON response.
An example of the request URL:
http://localhost:8080/api/v1/workflows/${workflowExecutionId}/context?param=WORKFLOW_OPTIONS
In the future, when additional parameters are defined as 'public' visible, the URL can be used as:
http://localhost:8080/api/v1/workflows/${workflowExecutionId}/context?param=WORKFLOW_OPTIONS,OTHER_PARAM
or a list of separate 'param's:
http://localhost:8080/api/v1/workflows/${workflowExecutionId}/context?param=WORKFLOW_OPTIONS&param=OTHER_PARAM
The response body includes the workflow execution ID and the retrieved
context parameters. An example of the response body:
{
  "workFlowExecutionId": "91811189-a70f-4bce-bd0b-6aa41e36986d",
  "workFlowOptions": {
    "newOptions": [
      {
        "identifier": "onboardingOption",
        "displayName": "Onboarding",
        "description": "An example of a complex WorkFlow",
        "details": [
          "An example workflow option of a complex workFlow with status checks"
        ],
        "workFlowName": "complexWorkFlow"
      }
    ]
  }
}
To retrieve context parameters, users will have to specify the parameters
they want to retrieve as defined in the WorkContextDelegate.Resource, for
resources that are defined as "public" visible. Currently, only the
WORKFLOW_OPTIONS context parameter is defined as "public" visible.
The schema change can be viewed in the generated openapi.json within the commit.
Integration tests will follow...
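For illustration, a minimal sketch of calling the new endpoint with Python's requests library (host, port and execution ID are taken from the examples above; the snippet is an assumption, not part of the patch):
import requests

workflow_execution_id = "91811189-a70f-4bce-bd0b-6aa41e36986d"
url = f"http://localhost:8080/api/v1/workflows/{workflow_execution_id}/context"

# Request only the context parameters that are defined as "public" visible;
# pass a list to repeat the param key, e.g. ["WORKFLOW_OPTIONS", "OTHER_PARAM"]
response = requests.get(url, params={"param": "WORKFLOW_OPTIONS"})
response.raise_for_status()
print(response.json()["workFlowOptions"])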
|
2025-04-01T04:35:05.495498
| 2018-02-05T09:00:14
|
294316433
|
{
"authors": [
"RaimundasSakalauskas",
"rahulbbit"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9626",
"repo": "particle-iot/particle-sdk-ios",
"url": "https://github.com/particle-iot/particle-sdk-ios/issues/3"
}
|
gharchive/issue
|
issue with keychain
In KeychainItemWrapper.m, in the writeToKeychain method, the app crashes with a -25299 error.
This should no longer be the case. Keychain helper has been refactored. If that still happens, please reopen the issue.
|
2025-04-01T04:35:05.600011
| 2024-09-10T08:56:34
|
2515831934
|
{
"authors": [
"Mildophin",
"oliver-gordon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9627",
"repo": "pasqal-io/pasqal-cloud",
"url": "https://github.com/pasqal-io/pasqal-cloud/pull/136"
}
|
gharchive/pull-request
|
[FEATURE] emu fresnel access in SDK
Description
Introduce EMU_FRESNEL device in the SDK.
The SDK needs to provide no configuration when speaking to the server to schedule an EMU_FRESNEL job.
Remaining Tasks
Related PRs in other projects (PASQAL developers only)
Additional merge criteria
Breaking changes
Checklist
[ ] The title of the PR follows the right format: [{Label}] {Short Message}. Label examples: IMPROVEMENT, FIX, REFACTORING... Short message is about what your PR changes.
Versioning (PASQAL developers only)
[ ] Update the version of pasqal-cloud in _version.py following the changes in your PR and by using semantic versioning.
Documentation
[ ] Update CHANGELOG.md with a description explaining briefly the changes to the users.
Tests
[ ] Unit tests have been added or adjusted.
[ ] Tests were run locally.
Internal tests pipeline (PASQAL developers only)
[ ] Update and run the internal tests while targeting the branch of this PR.
If your PR hasn't changed any functionality, it still needs to be validated against internal tests.
After updating the version (PASQAL developers only)
[ ] Open a PR on the internal tests that updates the version used for the pasqal-cloud backward compatibility tests.
Missing changelog and version update ;)
@Mildophin There is zero reason to update those until it's reviewed; otherwise I just need to update them again if someone merges in before me or if I need to change the MR.
|
2025-04-01T04:35:05.666887
| 2017-01-19T09:57:00
|
201812131
|
{
"authors": [
"Zeered",
"patchthecode"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9628",
"repo": "patchthecode/JTAppleCalendar",
"url": "https://github.com/patchthecode/JTAppleCalendar/issues/270"
}
|
gharchive/issue
|
exc_bad_instruction (code=exc_i386_invop subcode=0x0)
I am using 1 row of the calendar view.
I made it so a cell can be clicked and changes to the colour I wanted,
but I can only do this in the row of the first page when running it.
For example, 1 month has 4 weeks,
but I can only click the first week, which is the first one shown when I run.
When I click on the others, it shows the error
at
let myCustomCell = cell as! cellView exc_bad_instruction (code=exc_i386_invop subcode=0x0)
can you copy and paste the entire function?
No response for 5 days. I assume your problem has been resolved.
If this is still an issue for you, then click on the re-open button and I will continue.
|
2025-04-01T04:35:05.668207
| 2021-12-16T17:44:43
|
1082478732
|
{
"authors": [
"CannonLock",
"josiewatkins"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9629",
"repo": "path-cc/path-cc.github.io",
"url": "https://github.com/path-cc/path-cc.github.io/pull/244"
}
|
gharchive/pull-request
|
Add precision mental health article
https://path-cc.io/web-preview/preview-gaylen-article/
Added the precision mental health article and also added the subtitle and author to the USGS article, which I had forgotten to include when I published that article originally.
LGTM, you can take or leave these suggestions.
|
2025-04-01T04:35:05.670632
| 2024-01-06T15:19:44
|
2068677022
|
{
"authors": [
"ColCarroll",
"patrick-kidger"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9630",
"repo": "patrick-kidger/optimistix",
"url": "https://github.com/patrick-kidger/optimistix/pull/34"
}
|
gharchive/pull-request
|
Check last point when using best so far minimiser
Fixes #33
There are some unnecessary-looking lines:
best_f, best_aux = fn(state.best_y, args)
best_loss = self._to_loss(state.best_y, best_f)
where I would expect to just use state.best_loss, but the test doesn't pass without it!
Thank you for the fix! Always happy to squash bugs :)
|
2025-04-01T04:35:05.672213
| 2022-02-22T08:26:33
|
1146597063
|
{
"authors": [
"patrickdemooij9"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9631",
"repo": "patrickdemooij9/uSeoToolkit.Umbraco",
"url": "https://github.com/patrickdemooij9/uSeoToolkit.Umbraco/issues/1"
}
|
gharchive/issue
|
Add new mediapicker to meta fields image
Metafields package currently doesn't support the new mediapicker of Umbraco. Should add that to the list of supported editors.
Fixed with this commit: https://github.com/patrickdemooij9/uSeoToolkit.Umbraco/commit/7c0edcad80800099e9eac242e0c3c4642ef85fe6
Will be in beta2
|
2025-04-01T04:35:05.688517
| 2017-11-09T09:20:24
|
272491099
|
{
"authors": [
"patrys",
"ray007"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9632",
"repo": "patrys/vscode-code-outline",
"url": "https://github.com/patrys/vscode-code-outline/issues/59"
}
|
gharchive/issue
|
JS prototype properties
It would be really nice if they were treated like methods in classes and showed up in the hierarchy below the function they are added on.
This extension does not do any extraction on its own, all symbols are provided by the language support extensions (in this case the JS/TS extension).
The only extension I've installed in Visual Studio Code is this one.
Could you please point me in the right direction where to ask for this?
The JS and TS extensions are part of Code and come with the editor: https://github.com/Microsoft/vscode/tree/master/extensions/javascript
|
2025-04-01T04:35:05.689773
| 2016-01-14T17:42:37
|
126708730
|
{
"authors": [
"Machinas",
"dmolsen"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9633",
"repo": "pattern-lab/patternlab-php",
"url": "https://github.com/pattern-lab/patternlab-php/issues/334"
}
|
gharchive/issue
|
How do you change the breakpoint values for S, M, L in the nav?
In which file can I amend this? Thanks!
That's going to be in public/styleguide/js/styleguide.js. You'll want to look at lines 90-128. Example
|
2025-04-01T04:35:05.690634
| 2016-12-22T19:22:37
|
197248568
|
{
"authors": [
"jeff-phillips-18"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9634",
"repo": "patternfly/angular-patternfly",
"url": "https://github.com/patternfly/angular-patternfly/pull/382"
}
|
gharchive/pull-request
|
Use $doCheck instead of $scope.$watch in filter, sort, and toolbar components
Changed to angular.copy, thanks @dtaylor113
|
2025-04-01T04:35:05.693023
| 2018-01-04T19:35:02
|
286090650
|
{
"authors": [
"catrobson",
"dlabrecq"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9635",
"repo": "patternfly/patternfly-ng",
"url": "https://github.com/patternfly/patternfly-ng/issues/251"
}
|
gharchive/issue
|
Tour pattern
Implement the tour pattern as specified on patternfly.org:
http://www.patternfly.org/pattern-library/communication/tour/
Side note: Should rectify with iPaaS team as they were looking at some advancements to the tour capability. Need to coordinate design and development contributions in this area.
Closing due to inactivity.
|
2025-04-01T04:35:05.695269
| 2018-07-31T16:04:58
|
346255389
|
{
"authors": [
"patternfly-build",
"riccardo-forina",
"seanforyou23"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9636",
"repo": "patternfly/patternfly-ng",
"url": "https://github.com/patternfly/patternfly-ng/pull/435"
}
|
gharchive/pull-request
|
fix(list): allow setting a custom trackBy function for the underlying ngFor directive
It is now possible to set a custom trackBy function to be used in the underlying ngFor directive, which by default tracks changes in the items by identity.
This is useful in scenarios where the items passed to the component are periodically polled, and components with state (eg. a pfng-action component) are used in the item's template.
This should help fix syndesisio/syndesis/issues/3121 where we have exactly this problem.
Deploy preview for patternfly-ng ready!
Built with commit 0b187e5bfa1641c7509ec6c51116ae82ce615df8
https://deploy-preview-435--patternfly-ng.netlify.com
Tested this against Syndesis UI and it works like a charm - nice enhancement
|
2025-04-01T04:35:05.736306
| 2024-10-21T18:04:23
|
2603296108
|
{
"authors": [
"jenny-s51",
"patternfly-build"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9637",
"repo": "patternfly/react-topology",
"url": "https://github.com/patternfly/react-topology/pull/243"
}
|
gharchive/pull-request
|
add customStatusIcon support to DefaultTaskGroup
What
Closes #
Description
Type of change
[ ] Feature
[ ] Bugfix
[ ] Code style update (formatting, renaming)
[ ] Refactoring (no functional changes, no api changes)
[ ] Build related changes
[ ] Documentation content changes
[ ] Other (please describe):
Screen shots / Gifs for design review
Preview: https://react-topology-pr-topology-243.surge.sh
|
2025-04-01T04:35:05.738653
| 2018-03-19T07:03:20
|
306352488
|
{
"authors": [
"arwhirang",
"patverga"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9638",
"repo": "patverga/bran",
"url": "https://github.com/patverga/bran/issues/1"
}
|
gharchive/issue
|
Question regarding the "Bi-affine Pairwise Scores"
Hello?
I read the paper and I found the "Bi-affine Pairwise Scores" concept interesting. However, in your code, it doesn't seem to use the "Bi-affine Pairwise Scores" equation described in the paper.
I think the base classifier class, "ClassifierModel" in classifier_models.py, does not have this "Bi-affine Pairwise Scores" feature.
If I am mistaken, could you tell me where to look?
Hi
The bi-affine pairwise scores are being calculated here: https://github.com/patverga/bran/blob/master/src/models/transformer.py#L466
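For orientation, a minimal NumPy sketch of the bi-affine pairwise scoring idea from the paper (shapes and names are illustrative, not the repository's actual code):
import numpy as np

def biaffine_scores(head, tail, W):
    """head: [n, d] head-mention representations
    tail: [n, d] tail-mention representations
    W:    [d, r, d] bi-affine tensor, one slice per relation type
    Returns an [n, n, r] tensor where entry (i, j, k) is head_i^T W[:, k, :] tail_j."""
    return np.einsum("id,drk,jk->ijr", head, W, tail)

# Example with random values
n, d, r = 4, 8, 3
scores = biaffine_scores(np.random.randn(n, d), np.random.randn(n, d), np.random.randn(d, r, d))
print(scores.shape)  # (4, 4, 3)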
|
2025-04-01T04:35:05.744894
| 2024-06-13T18:14:16
|
2351777578
|
{
"authors": [
"youknow04"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9639",
"repo": "paul-gauthier/aider",
"url": "https://github.com/paul-gauthier/aider/pull/677"
}
|
gharchive/pull-request
|
Improve SEARCH/REPLACE accuracy
LLMs, including GPT-4, often provide a very short context for the SEARCH block to match.
For example:
<<<<<<< SEARCH
}
=======
// some long code block
>>>>>>> REPLACE
In this case, the current Aider code just uses the first match for the lone }, which is very likely to be wrong.
This issue could be mitigated by prompt engineering, and we may need to do so.
However, in my opinion, we should get the full benefit of classic deterministic code before relying on LLMs.
This PR handles multiple perfect matches since, at the very least, it is safe to reject if there are multiple perfect matches.
(BTW, you did a great job. I read the related code for this PR, and it handles many crazy dirty situations effectively.)
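As a minimal sketch of the rejection behavior described above (illustrative only, not the PR's actual implementation):
def apply_search_replace(text: str, search: str, replace: str) -> str:
    """Apply a SEARCH/REPLACE edit, refusing to guess between identical matches."""
    count = text.count(search)
    if count == 0:
        raise ValueError("SEARCH block not found")
    if count > 1:
        # Several perfect matches: safer to reject and ask the LLM for more context
        raise ValueError(
            "Multiple code matches found. Please provide more lines on SEARCH "
            "block to disambiguate."
        )
    return text.replace(search, replace, 1)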
I reproduced such a very short SEARCH context with this PR version of Aider in my project.
Aider+GPT-4o handled it well, as follows:
// prev LLM response with very short SEARCH block
The LLM did not conform to the edit format.
https://aider.chat/docs/troubleshooting/edit-errors.html
Multiple code matches found. Please provide more lines on SEARCH block to disambiguate.
Understood. I'll provide more lines to disambiguate the search block.
// correctly modified LLM response with long enough SEARCH block
I found that this case happens more frequently than I expected.
However, with this retry, I noticed that GPT-4 often breaks the previous code by just including more lines in the SEARCH block only (not including them in the REPLACE block).
I will fix this when I have time.
Until then, I will convert it to a draft.
|
2025-04-01T04:35:05.754032
| 2019-11-02T15:41:57
|
516633435
|
{
"authors": [
"ViktorVonVN",
"jabis",
"paulhodel"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9640",
"repo": "paulhodel/jexcel",
"url": "https://github.com/paulhodel/jexcel/issues/703"
}
|
gharchive/issue
|
How to cancel changes to cells?
I just learned how to use this library recently and I don't know how to cancel the changes made to the cells when the onchange event occurs.
Please help, i'm using it for a project.
Thank you in advance.
I am not sure what you meant. Do you mean cancel the execution of the change? If yes, you can use onbeforechange to intercept the change, or press the ESC key during editing to close the editor and restore the correct value. Maybe trigger UNDO after any checking.
Yes, how can i intercept the change with onbeforechange?
@ViktorVonVN arguments passed to onbeforechange are el, obj.records[y][x], x, y, value - if you return anything from it, the new value will be what you return
|
2025-04-01T04:35:05.757477
| 2020-10-05T02:13:55
|
714460773
|
{
"authors": [
"scala-steward",
"zakpatterson"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9641",
"repo": "pauljamescleary/scala-pet-store",
"url": "https://github.com/pauljamescleary/scala-pet-store/pull/420"
}
|
gharchive/pull-request
|
Update sbt to 1.4.0
Updates org.scala-sbt:sbt from 1.3.13 to 1.4.0.
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scala-sbt", artifactId = "sbt" } ]
labels: library-update, semver-minor
@mergify refresh
@mergifyio update
@mergifyio update
@mergifyio update
|
2025-04-01T04:35:05.759704
| 2023-09-15T19:00:53
|
1898931606
|
{
"authors": [
"mismathh",
"paulkim26"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9642",
"repo": "paulkim26/til-to-html",
"url": "https://github.com/paulkim26/til-to-html/issues/5"
}
|
gharchive/issue
|
... content does not get updated when there is a title in input files
Based on the optional requirement stated in the Release 0.1 wiki to parse a title from the input files: if there is a title, it also needs to populate the <title>...</title> with the title from the input file.
Quote from Release 0.1 Wiki
try to parse a title from your input files. If there is a title, it will be the first line followed by two blank lines. In your generated HTML, use this to populate the <title>...</title> and add an <h1>...</h1> to the top of the <body>.
If this was a custom optional requirement, then please disregard this issue.
Ah I see, I interpreted "title" here as the filename but in the quote you provided it refers to the title within the text file, thanks, will change.
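A minimal sketch of the parsing rule quoted above (function and variable names are illustrative, not the tool's actual code):
def parse_title(text):
    """Return (title, body). The title is the first line when it is
    followed by two blank lines; otherwise there is no title."""
    lines = text.split("\n")
    if len(lines) >= 3 and lines[0].strip() and lines[1] == "" and lines[2] == "":
        return lines[0].strip(), "\n".join(lines[3:])
    return None, text

with open("example.txt", encoding="utf-8") as f:
    title, body = parse_title(f.read())

html_title = title if title else "example"  # fall back to the filename stem
print(f"<title>{html_title}</title>")
if title:
    print(f"<h1>{title}</h1>")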
|
2025-04-01T04:35:05.773756
| 2017-05-19T07:32:15
|
229893604
|
{
"authors": [
"Vadixem",
"romanwue",
"tschust"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9643",
"repo": "paulovap/qtpdfium",
"url": "https://github.com/paulovap/qtpdfium/issues/22"
}
|
gharchive/issue
|
SIGSEGV on CPDF_ColorSpace::GetBufSize()
Hello, Paulo!
I am trying to test your library. I added it to my Qt project, but when I test it using your sample code in README.md, it crashes. Consider this data:
Stack trace:
1 CPDF_ColorSpace::GetBufSize cpdf_colorspace.cpp 411 0x5555559be96a
2 CPDF_ColorSpace::CreateBuf cpdf_colorspace.cpp 418 0x5555559be99e
3 CPDF_Color::SetColorSpace cpdf_color.cpp 65 0x5555559bd60b
4 CPDF_ColorState::ColorData::SetDefault cpdf_colorstate.cpp 152 0x5555559c33b7
5 CPDF_ColorState::SetDefault cpdf_colorstate.cpp 25 0x5555559c2ce2
6 CPDF_ContentParser::Continue cpdf_contentparser.cpp 170 0x5555559c64c3
7 CPDF_PageObjectHolder::ContinueParse cpdf_pageobjectholder.cpp 33 0x5555557b953b
8 CPDF_Page::ParseContent cpdf_page.cpp 99 0x5555557b7239
9 FPDF_LoadPage fpdfview.cpp 636 0x5555557afbbc
10 QPdfium::page qpdfium.cpp 109 0x5555557a9af2
11 main main.cpp 46 0x5555556c5075
int CPDF_ColorSpace::GetBufSize() const {
  if (m_Family == PDFCS_PATTERN) {  // Crashes when debugger gets here
    return sizeof(PatternValue);
  }
  return m_nComponents * sizeof(FX_FLOAT);
}
main.cpp:
#include <QApplication>
#include <QtPdfium>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    QPdfium pdf;
    auto result = pdf.loadFile("path/to/some/file"); // Returns success
    auto pdfPage = pdf.page(0); // Crashes here
    auto valid = pdfPage.isValid();
    return a.exec();
}
I might have just linked the library the wrong way, so it cannot find the .a file.
@Vadixem I run into the same issue when rendering a pdf file in iOS. How did you fix it?
Hello @tschust. To be honest, I am not sure whether I fixed it, but I wrote:
I might have just linked the library the wrong way, so it cannot find the .a file.
So I would suggest you make sure in the .qmake file that you have LIBS += -L"/path/to/qtpdfium_lib_files" -llibname,
where -llibname is the name of the library file (be it .a or .dylib) without the .a or .dylib extension and without the "lib" prefix.
For instance, if we have usr/user/bin/libcoolstuff.a, in the .qmake file it will turn into
LIBS += -L"usr/user/bin" -lcoolstuff
Hope it helps!
I already added the fix; hopefully the pull request gets merged soon.
|
2025-04-01T04:35:05.776446
| 2023-06-18T07:13:50
|
1762154095
|
{
"authors": [
"BOTHRAJ",
"vishwajeetio"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9644",
"repo": "paulpierre/RasaGPT",
"url": "https://github.com/paulpierre/RasaGPT/issues/32"
}
|
gharchive/issue
|
Organization already exists
Getting the error below when running the command
make install.
Traceback (most recent call last):
  File "/app/api/seed.py", line 128, in
    org_obj = create_org_by_org_or_uuid(
  File "/app/api/helpers.py", line 95, in create_org_by_org_or_uuid
    raise HTTPException(status_code=404, detail="Organization already exists")
fastapi.exceptions.HTTPException
make[1]: *** [seed] Error 1
make: *** [install] Error 2
use make db-reset cmd
|
2025-04-01T04:35:05.778731
| 2024-08-17T18:04:11
|
2471616124
|
{
"authors": [
"jewettg",
"paulscottrobson"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9645",
"repo": "paulscottrobson/neo6502-firmware",
"url": "https://github.com/paulscottrobson/neo6502-firmware/issues/599"
}
|
gharchive/issue
|
v0.99 shows v0.39 when booting
When rebooting after loading v0.99, you see:
"Morpheus Firmware: v0.39".
Minor issue, but was expecting a v0.99
Not sure why this happened. It seems to be working again. There seems to have been some mixup with the release.
|
2025-04-01T04:35:05.779795
| 2023-03-02T16:27:47
|
1607118484
|
{
"authors": [
"eriktier",
"paulscottrobson"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9646",
"repo": "paulscottrobson/superbasic",
"url": "https://github.com/paulscottrobson/superbasic/issues/37"
}
|
gharchive/issue
|
SolarFox, Demo2 and Invaders fail with Out of Range errors
After updating to the 01/03/2023 Beta 1 version of Basic/Kernel, I notice that these two demo games don't work anymore. Both fail in a routine that updates or draws all sprites with an Out of Range error. Perhaps this is caused by the changes in the sprite handling code.
Probably, as it seems to work on my machine; I did update all the various demos that use sprites, as they require new graphics.bin files; old ones will probably do something odd?
|
2025-04-01T04:35:05.794476
| 2019-07-04T13:52:48
|
464278888
|
{
"authors": [
"hosek",
"mihaiblaga89"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9647",
"repo": "pavjacko/renative",
"url": "https://github.com/pavjacko/renative/issues/129"
}
|
gharchive/issue
|
Error while rnv is listing Android target devices
Can't deploy to device/emulator due to this:
ReNative run - _listAndroidTargets:android:false:false:false - Starting!
execCLI:null:echo "avd name" | nc localhost 5554
execCLI:null:echo "avd name" | nc localhost 5556
ReNative run - ERRROR! Cannot read property 'slice' of undefined
TypeError: Cannot read property 'slice' of undefined
platformTools/android.js:162
also please note typo "ERRROR"
What version of rnv do you have?
Oh sorry, I had 0.23.17.
Retesting it now with the latest version 0.23.19 gives me ReNative run - ERRROR! No simulator -t target name specified! Also, not all emulators (3 of 6) are listed in the CLI.
Fixed in 0.23.20, closing
|
2025-04-01T04:35:05.803642
| 2023-06-03T18:18:24
|
1739748525
|
{
"authors": [
"Impre-visible"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9648",
"repo": "pawanpaudel93/rs-m3u-parser",
"url": "https://github.com/pawanpaudel93/rs-m3u-parser/issues/1"
}
|
gharchive/issue
|
"Info" printed when importing the package
Hi, when importing:
from m3u_parser import M3uParser
"INFO" lines are printed, worst thing ever, can you patch that please?
Also, a warning: PyPI links back to this website, not the right one...
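A possible workaround sketch, assuming the printed INFO lines come from Python's standard logging module (this is an assumption; the package may print directly instead):
import logging

# Raise the threshold for the library's logger before using it,
# so INFO-level messages are suppressed
logging.getLogger("m3u_parser").setLevel(logging.WARNING)

from m3u_parser import M3uParser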
|
2025-04-01T04:35:05.809439
| 2020-10-27T18:50:44
|
730722177
|
{
"authors": [
"pawelwiejkut",
"widlok"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9649",
"repo": "pawelwiejkut/hana_sharepoint_list_adapter",
"url": "https://github.com/pawelwiejkut/hana_sharepoint_list_adapter/issues/1"
}
|
gharchive/issue
|
java developer check required
Because I'm not a daily Java developer, it would be nice if someone who is more into the topic could check the code and correct it. Thank you!
Hi @pawelwiejkut, I looked through your code briefly. You might want to add a build management tool like Gradle or Maven. This will make it easier to manage your dependencies and project build. Other than that the code looks quite clean :)
@widlok, thank you for checking in 👍 Can you create a pull request with a management tool? I don't know how to combine this in the case of an OSGi project.
|
2025-04-01T04:35:05.878642
| 2023-06-19T13:24:42
|
1763544598
|
{
"authors": [
"JarrodMFlesch",
"MarvinVrdoljak",
"jmikrut",
"taismassaro"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9650",
"repo": "payloadcms/payload",
"url": "https://github.com/payloadcms/payload/issues/2854"
}
|
gharchive/issue
|
replaceState for blocks not working properly anymore
Link to reproduction
To Reproduce
Insert code above in your project
log in to your Payload admin area and go to my test collection "Tests"
add a new "Test"
click on the button "Click" in the sidebar
-> The new state of Title is visible; the new state of the array is NOT
Note: This code works fine in version 1.8.5
Collection:
import {CollectionConfig} from 'payload/types'
import {ReplaceStateComponent} from '../component/collectionComponents/ReplaceStateComponent'

export const CollectionTest: CollectionConfig = {
  slug: 'test',
  admin: {
    useAsTitle: 'title',
    defaultColumns: ['fullTitle', 'author', 'createdAt', 'status'],
  },
  versions: {
    drafts: true,
  },
  fields: [
    {
      name: 'title',
      type: 'text',
      required: true,
    },
    {
      name: 'array',
      type: 'blocks',
      blocks: [
        {
          slug: 'text',
          fields: [
            {
              name: 'text',
              type: 'text',
              required: true,
            },
          ],
        },
        {
          slug: 'textarea',
          fields: [
            {
              name: 'textarea',
              type: 'textarea',
              required: true,
            },
          ],
        },
      ],
    },
    {
      name: 'preview',
      type: 'ui',
      admin: {
        position: 'sidebar',
        components: {
          Field: ReplaceStateComponent,
        },
      },
    },
  ],
}
Component:
import React from 'react'
import {useForm} from 'payload/components/forms'

interface StateValue {
  initialValue: any
  value: any
  valid: boolean
  disableFormData?: boolean
}

type State = Record<string, StateValue>

export const ReplaceStateComponent: React.FC = () => {
  const {getFields, replaceState, setModified} = useForm()
  const fields = getFields()

  const importBlocks = async (e: React.MouseEvent<HTMLElement>) => {
    e.preventDefault()
    const newState: State = {...fields}
    newState['title'] = {
      initialValue: 'title',
      value: 'title',
      valid: true,
      disableFormData: true,
    }
    newState['array'] = {
      initialValue: 1,
      value: 1,
      valid: true,
      disableFormData: true,
    }
    newState['array.0.blockName'] = {
      initialValue: 'textarea',
      value: 'textarea',
      valid: true,
    }
    newState['array.0.blockType'] = {
      initialValue: 'textarea',
      value: 'textarea',
      valid: true,
    }
    newState['array.0.id'] = {
      initialValue: '64904e5c8714d9702677551c',
      value: '64904e5c8714d9702677551c',
      valid: true,
    }
    newState['array.0.textarea'] = {
      initialValue: 'This is a test',
      value: 'This is a test',
      valid: true,
    }
    // Sets the new state
    replaceState(newState)
    setModified(true)
    // setTimeout(() => {
    //   submit()
    // }, 1000)
  }

  return (
    <button type="button" onClick={importBlocks}>
      Click
    </button>
  )
}
Describe the Bug
The code (example) was running perfectly fine in version 1.8.5.
After clicking the "click"-button you need to submit the collection-post manually or use the submit() function at the end of the code to see the new state of the array.
Payload Version
1.9.5
Hey @MarvinVrdoljak — I can shine some light on this for you.
Updating array rows is a not yet documented feature, but we are working on making it significantly easier.
TL;DR - we used to store row state separately in its own state, per row field, which is what made updating rows of array / block fields difficult.
We now have flattened row state to exist in the parent form state itself, which will allow us to expose simple helper functions like addRow and removeRow directly from form state. That will simplify the work involved here significantly.
Right now, you can leverage dispatchFields by sending an ADD_ROW action to add rows to an array field, but, easier methods will be coming shortly like I mentioned.
You can also simply adjust your code by adding rows into your existing replaceState function by updating your array field to the following:
newState.array = {
  initialValue: 1,
  value: 1,
  valid: true,
  disableFormData: true,
  // NEWLY CONSOLIDATED ROWS PROPERTY
  rows: [
    {
      blockType: 'textarea',
      collapsed: false,
      id: '64904e5c8714d9702677551c',
    },
  ],
};
Give that a shot, will you?
Sorry about the hassle - this will all become officially supported, and documented, shortly! We're making a lot of progress here.
@jmikrut Thanks a lot for your reply. Do I understand correctly that I can only use the rows property to add empty rows? Because I actually need to add them prefilled. This is how my code looks now:
newState['array'] = {
  initialValue: 1,
  value: 1,
  valid: true,
  disableFormData: true,
  rows: [
    {
      blockType: 'textarea',
      collapsed: false,
      id: '64904e5c8714d9702677551c',
      // textarea: 'textarea', // Not working -> Would be great if this would work
      // blockName: 'textarea', // Not working -> Would be great if this would work
    },
  ],
}

// To fill the added rows with content:
newState['array.0.textarea'] = {
  initialValue: 'textarea',
  value: 'textarea',
  valid: true,
}
newState['array.0.blockName'] = {
  initialValue: 'textarea',
  value: 'textarea',
  valid: true,
}
I really appreciate it and am looking forward to your official solutions regarding this topic. Can you already estimate when you will release them?
@MarvinVrdoljak what you have above is correct, to replace state you will need to add the rows individually like you are doing with newState['array.0.textarea'].
With the new approach you would need to implement something similar to this:
https://github.com/payloadcms/payload/blob/master/src/admin/components/forms/field-types/Array/index.tsx#L108
The helper functions will likely just abstract this out and have you pass rowIndex, path and fields so we can do the buildStateFromSchema part for you. As for ETA, not sure yet. I will talk with the team!
hey @JarrodMFlesch, do you have any updates on this? I've got an Article collection that has an array field for article sections and I would like to prefill some section titles when creating a new article. Ideally this would be done based on a previously chosen article type, but for now I'm just trying to prefill one "Summary" section in any new article. I got it somewhat working following the code in this issue, but it's very unclear to me what needs to go in the rows array and I'm getting a console error:
Uncaught TypeError: Cannot read properties of undefined (reading 'size')
at setsAreEqual (setsAreEqual.js:4:1)
at index.js:120:1
at Array.every (<anonymous>)
at index.js:117:1
at Array.forEach (<anonymous>)
at index.js:116:1
at WatchFormErrors.js:10:1
at useThrottledEffect.js:10:1
this is the code I have for the UI field:
const { getFields, replaceState, setModified } = useForm();
const fields = getFields();
console.log({ fields });

useEffect(() => {
  if (fields.sections.rows.length <= 0) {
    const updatedFields = {
      ...fields,
      sections: {
        ...fields.sections,
        disableFormData: true,
        rows: [
          {
            id: "64904e5c8714d9702677551c",
            content: "richText",
          },
        ],
      },
      "sections.0.title": {
        initialValue: "Summary",
        value: "Summary",
        valid: true,
      },
    };
    replaceState(updatedFields);
    setModified(true);
  }
}, []);
any insights would be appreciated! 🙏🏽
|
2025-04-01T04:35:05.889499
| 2023-12-20T20:38:09
|
2051234195
|
{
"authors": [
"AlessioGr",
"AndriiSherman",
"AntoineBx",
"DanRibbens",
"denolfe",
"devj3ns",
"didiraja",
"ericalli",
"knynkwl",
"richleach",
"spencerxl"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9651",
"repo": "payloadcms/payload",
"url": "https://github.com/payloadcms/payload/issues/4580"
}
|
gharchive/issue
|
db-postgres & Supabase: drizzle detects schema changes when there are none
Link to reproduction
No response
Describe the Bug
See https://discord.com/channels/967097582721572934/1183518316191568023/1183518316191568023 :
npm run dev runs mostly fine, but then I receive the following message:
? Warnings detected during schema push:
· You're about to set not-null constraint to email column without default, which contains 1 items
DATA LOSS WARNING: Possible data loss detected if schema is pushed.
Accept warnings and push schema to database? › (y/N)
When I select no, then the startup process terminates. When I select yes, then I lose my user data. Where can I read documentation on what exactly happens during npm run dev? Is it necessary to push the database schema even though there haven't been any changes in between runs (with the exception of my creation of a user)?
To Reproduce
This only seems to happen on supabase
Payload Version
?
Adapters and Plugins
No response
To Reproduce
Just install Payload and use Supabase for PostgreSQL, simple as that
Env has nothing beyond supabase link for db and secret key
Same issue, connecting to Supabase.
I don't have a solution, but the warning is coming from db-postgres>dist>connect.js
Payload is using Drizzle under the hood, and drizzle is set to always push the schema if not in prod. (connect.js : 43)
Looking at the postgresAdapter (db-postgres>dist>index.js), push can be passed as an arg from your payload.config.ts, and disabled like so:
db: postgresAdapter({
  pool: {
    connectionString: process.env.DATABASE_URI,
  },
  push: false,
}),
I am having the same issue when using the postgresAdapter with supabase.
For me, this makes the postgresAdapter unusable, because every time I change my collection schema, it not only wants to push these schema changes but also the email one, which clears my whole users table.
? Warnings detected during schema push:
· You're about to set not-null constraint to email column without default, which contains 1 items
· ... my other schema changes
DATA LOSS WARNING: Possible data loss detected if schema is pushed.
Accept warnings and push schema to database? » (y/N)
Using push: false is not really an option if you are adding new fields. I suppose we will need to wait until this is out of beta. 😄
Does anyone know what Supabase is doing differently that trigger this data loss?
I just tried to replicate this issue on the latest version of Payload and was not able to since we updated the dependency of drizzle.
Tested on:
<EMAIL_ADDRESS><EMAIL_ADDRESS>Inherited dependency:
<EMAIL_ADDRESS>Please @DanRibbens (me) if you still experience an issue and I'll reopen.
We've reached out to Drizzle for assistance with this one. Stay tuned.
Same for me, Supabase Postgres. I added "push: false" which let's me work for now. Hoping this helps with visibility.
Has Drizzle responded to your request? @denolfe?
@richleach Yes, they determined this is likely a bug on Supabase's end. Gathering all needed info to open an issue right now.
After a few investigations, it's actually not a bug on the Supabase side, but Drizzle needed an update. I have already fixed that locally. I'm going to run a few more tests and then send an updated tag to the Payload team for upgrading
So grateful to see all this teamwork to solve the issue, thx so much. I really believe a lot in Payload.
Appreciate the fix for this @denolfe. Would it be possible to get a version bump on the @payload/db-postgres package? We're not able to test this fix atm. Thank you!
We're planning on doing releases tomorrow 👍
|
2025-04-01T04:35:05.924011
| 2016-01-20T16:17:04
|
127718474
|
{
"authors": [
"dglozic",
"pklicnik",
"samsel"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9652",
"repo": "paypal/react-engine",
"url": "https://github.com/paypal/react-engine/issues/128"
}
|
gharchive/issue
|
Client-side error when using code splitting in webpack
Our routes file is configured to use require.ensure so we can code-split our page into separate chunks using webpack. Eg:
if (typeof require.ensure === "undefined") {
require.ensure = require("node-ensure");
}
var Page = require("page.jsx");
module.exports = {
component: Page,
path: "/home",
childRoutes: [
{
path: ":id",
getComponent: function(location, callback) {
require.ensure([], function() {
callback(null, require("home.jsx"));
});
}
}
]
};
Server-side rendering succeeds without error, but an error appears in the console when mounted on the client:
Warning: React attempted to reuse markup in a container but the checksum was invalid. This generally means that you are using server rendering and the markup generated on the server was not what the client was expecting. React injected new markup to compensate which works but you have lost many of the benefits of server rendering. Instead, figure out why the markup being generated is different on the client or server:
(client) <noscript data-reacti
(server) <div data-reactid=".1
After debugging through the issue, this seems to be a problem/bug/limitation in react-router.
As a work around, you need to wrap the call to render inside match to pre-load the routes configuration on the client. Snippet of the change needed on the react-engine side
match({ routes: options.routes, location: location }, function() {
// for any component created by react-router, merge model data with the routerProps
// NOTE: This may be imposing too large of an opinion?
var routerComponent = React.createElement(RouterComponent, {
createElement: function(Component, routerProps) {
return React.createElement(Component, merge({}, props, routerProps));
},
routes: options.routes,
history: routerHistory.createHistory()
});
render(routerComponent, mountNode);
});
Solution is discussed here:
https://github.com/rackt/react-router/issues/2036#issuecomment-153541487
Example provided here:
https://github.com/rackt/example-react-router-server-rendering-lazy-routes
@pklicnik is the change needed on the server or client side of react-engine?
@samsel Client side.
See PR #129 for my workaround. Note: I didn't fully test all scenarios, so tweak as needed
+1
merged & published v3.1.0 with the change!
|
2025-04-01T04:35:05.926912
| 2015-01-27T00:26:12
|
55558394
|
{
"authors": [
"arschles",
"ayakushev99",
"kionka",
"milesoc",
"taylorleese",
"wpalmeri"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9653",
"repo": "paypal/scala-style-guide",
"url": "https://github.com/paypal/scala-style-guide/pull/9"
}
|
gharchive/pull-request
|
remove case ordering section
Ordering cases by having the simplest thing be first is common practice in functional programming. I see two reasons to do so:
Tail recursion
Understanding the base case makes it easier to read the more complicated case.
On the other hand, I understand the desire to have Failure cases be at the bottom, so it looks more like a try/catch block.
To avoid coming up with complicated rules I suggest removing this clause altogether.
I'm in favor of this change - this section hasn't been overly useful to us in practice. Better to use one's judgement.
No objections from me.
LGTM
forgot about this PR. we should merge it so I can update #10 properly. @ayakushev99 can you do the honors?
I'm fine with removing this from Scalastyle, but I still believe that we should follow this in almost all cases. We've certainly discussed it when deciding how to structure code.
|
2025-04-01T04:35:05.929807
| 2018-05-24T04:43:34
|
325962098
|
{
"authors": [
"akara"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9654",
"repo": "paypal/squbs",
"url": "https://github.com/paypal/squbs/pull/662"
}
|
gharchive/pull-request
|
Added Java API for and made Scala API more flexible (fixes #661)
Allows independently accessing matValue
Thanks for your pull request. Please review the following guidelines.
[X] Title includes issue id.
[X] Description of the change added.
[X] Commits are squashed.
[X] Tests added.
[X] Documentation added/updated.
[X] Also please review CONTRIBUTING.md.
In order to have ActorLookup behavior, it's best to use ActorLookup itself.
Closing this based on comments. We should do better integration with ActorLookup for this instead of forcing the API at this level.
|
2025-04-01T04:35:05.949571
| 2017-07-23T08:16:55
|
244896281
|
{
"authors": [
"Pogman",
"cfaagaard",
"pazaan"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9655",
"repo": "pazaan/600SeriesAndroidUploader",
"url": "https://github.com/pazaan/600SeriesAndroidUploader/issues/142"
}
|
gharchive/issue
|
Extract alarm flags from pump status message
hi guys,
thank you for your great work.
I think I found out some more "byte-meaning" in the status message.
At index 72 there is a 2-byte (int16) warning number (e.g. 810), and at indexes 74 and 78 the date for the warning is shown. By downloading the carelink-csv I have found the following warnings:
public enum Warnings: Int16
{
No_Warning_0=0,
Alert_On_High_816 = 816,
Alert_Before_High_817 = 817,
Alert_On_Low_while_suspended_803 = 803,
Battery_Depleted_11 = 11,
Device_Alarm_100 = 100,
Device_Alarm_109 = 109,
Device_Alarm_73 = 73,
Device_Alarm_84 = 84,
No_Delivery_7 = 7,
Low_Battery_104 = 104,
Low_Reservoir_105 = 105,
Sensor_Alert_Calibrate_Now_775 = 775,
Sensor_Alert_Calibration_Error_776 = 776,
Sensor_Alert_Calibration_Reminder_869 = 869,
Sensor_Alert_Low_Transmitter_870 = 870,
Sensor_Alert_Sensor_Error_801 = 801,
Sensor_Alert_Change_Sensor_778 = 778,
Sensor_Alert_Lost_Sensor_780 = 780,
Sensor_Alert_Lost_Sensor_781 = 781,
Sensor_Alarm_786 = 786,
Sensor_Alarm_787 = 787,
Sensor_Alarm_788 = 788,
Sensor_Alarm_795 = 795,
Sensor_Alarm_798 = 798,
Sensor_Alarm_799 = 799,
Suspend_Before_Low_Alarm_quiet_810 = 810,
Suspend_Before_Low_Alarm_811 = 811,
Basal_Delivery_Resumed_Alert_quiet_806 = 806,
Suspend_Before_Low_Alarm_patient_unresponsive_medical_device_emergency_812 = 812,
Basal_Delivery_Resumed_Alert_maximum_suspend_reached_808 = 808,
Basal_Delivery_Resumed_Alert_glucose_still_low_maximum_suspend_reached_814 = 814,
}
Thanks @cfaagaard, this is super-handy! Can you please clarify what you mean by index 72 and index 74?
@cfaagaard I hope you don't mind, but I added some extras I found in my own CSVs 😄
Index number in the byte stream.
(On mobile right now... will clarify later, OK?)
Indexes 74 and 78 are the RTC and offset.
A clarification later would be good. 😄
I don't think we're starting from the same index.
I am on the decrypted payload. To "calibrate" our index: I have:
[BinaryElement(69)]
public byte BolusWizardRecent { get; set; }
[BinaryElement(70)]
public Int16 BolusWizardBGLRaw { get; set; }
start at index 70 and it is an "Int16", two bytes. So the warning and the warning date should be right after your BGL.
I have been playing around with a binary serialization of the messages. Right now I am testing the framework on a C#/UWP client on a Raspberry Pi 3.
And the whole class has an attribute: [BinaryType(IsLittleEndian = false)]
which tells the serialization that it is a big-endian array.
Got it, thanks! I have BolusWizardBGLRaw as 0x49 (73), so that's where we weren't aligned. Thanks again! I'll make sure to code this up 😄
And thank you for all your Work.
It's my pleasure! Is your project on GitHub? I'm currently working on a Node app for Pi/CHIP/Whatever.
Not on GitHub yet. I have no experience using GitHub for source control. Using Microsoft TFS right now, but I plan to move to GitHub.
The C# is coded against .NET Standard 1.4, so it should run on Linux/Windows platforms. I wanted to try out Windows 10 IoT on the Pi.
Looked into the GitHub source control from Visual Studio. It was not that hard, so I have published the latest version of my project. https://github.com/cfaagaard/CGM.NET
Are there any central nightscout ressource where you can notify the community of the project I have just published?
Not so much "normal" users, but more developers.
Thanks @cfaagaard some good stuff there.
Thanks @Pogman and thank you for your work on the 600uploader. As I am saying in the readme.md; I am standing on the shoulders of giants.
offset in the uploader:
warning = statusBuffer.getShort(0x4B)
The downside with this warning is that it is only available while the alert shows on the pump screen. If a user cancels the alert between uploader polls we will never see it.
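To make that byte layout concrete, here is a small illustrative sketch (JavaScript rather than the uploader's actual Java code) that reads the big-endian 16-bit alert code at offset 0x4B and looks up a few of the labels listed in this thread; the statusPayload variable and the subset of codes shown are assumptions for the example only:

// Illustrative only: a handful of the alert codes collected above.
const ALERT_NAMES = {
  0: 'No Warning',
  104: 'Low Battery',
  105: 'Low Reservoir',
  775: 'Sensor Alert: Calibrate Now',
  810: 'Suspend Before Low Alarm, quiet',
  816: 'Alert On High',
};

// statusPayload: Uint8Array holding the decrypted pump status message
// (big-endian, as noted earlier in the thread).
function readActiveAlert(statusPayload) {
  const view = new DataView(statusPayload.buffer, statusPayload.byteOffset, statusPayload.byteLength);
  const code = view.getUint16(0x4B, false); // false = big-endian
  return { code, name: ALERT_NAMES[code] || 'Unknown (' + code + ')' };
}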
Pulled these from a year of saved Carelink data:
Alert Before High (817)
Alert Before Low (805)
Alert On High (816)
Alert On Low (802)
Alert On Low while suspended (803)
Basal Delivery Resumed Alert, maximum suspend reached (808)
Basal Delivery Resumed Alert, quiet (806)
Basal Delivery Resumed Alert, settings change (815)
Battery Depleted (11)
Battery Out Limit Exceeded (6)
Device Alarm (100)
Device Alarm (110)
Device Alarm (113)
Device Alarm (73)
Device Alarm (84)
Low Battery (104)
Low Reservoir (105)
No Delivery (7)
No Reservoir (66)
Sensor Alarm (787)
Sensor Alarm (788)
Sensor Alarm (789)
Sensor Alarm (790)
Sensor Alarm (791)
Sensor Alarm (795)
Sensor Alarm (797)
Sensor Alarm (798)
Sensor Alarm (799)
Sensor Alert: Calibrate Now (775)
Sensor Alert: Calibration Reminder (869)
Sensor Alert: Lost Sensor (780)
Sensor Alert: Lost Sensor (781)
Sensor Alert: Rising Rate of Change (784)
Sensor Alert: Sensor End (794)
Sensor Alert: Sensor Error (801)
Sensor Alert: Weak Signal (796)
Suspend Before Low Alarm, patient unresponsive, medical device emergency (812)
Suspend Before Low Alarm, quiet (810)
Suspend On Low Alarm (809)
Sensor Alarm (786)
Sensor Alert: Calibration Error (776)
No Delivery After Estimate (8)
Sensor Alert: Change Sensor (777)
Device Alarm (111)
Sensor Alert: Low Transmitter (870)
Device Alarm (58)
Device Alarm (70)
This is great. And yes, you're correct @Pogman; it is more an "ActiveAlert" than just an "Alert". But it is great for a parent (like me :-)) when my kid gets a little warning/alarm/alert tired, or at night when he does not hear the alarm.
Now I just need to find out how to utilize this ActiveAlert in the nightscout website. Any suggestions? Is there any Alert functionality?
Besides the nightscout website, I use Microsoft Flow and Wunderlist for our "diabetes management", so I am thinking a notification/alert via Wunderlist just might do the trick...
The alert could be sent to NS as a "Note" or "Announcement" or "D.A.D. Alert" and use the built-in IFTTT to do something with that.
http://www.nightscout.info/wiki/labs/ifttt-integration
When the SGV value in the status response is over 400, it looks like the SGV value is an Alert number instead. And sometimes the actual Alert bytes are also filled out. So there are two alert "placements".
I have observed the following values instead of sgv:
Warmup_769 = 769,
Need_Calibration_770 = 770,
Sensor_Alert_Change_774 = 774
Here are the meanings of those numbers for you:
769: SENSOR_INIT;
770: SENSOR_CAL_NEEDED;
771: SENSOR_ERROR;
772: SENSOR_CAL_ERROR;
773: SENSOR_CHANGE_SENSOR_ERROR;
774: SENSOR_END_OF_LIFE;
775: SENSOR_NOT_READY;
776: SENSOR_READING_HIGH;
777: SENSOR_READING_LOW;
778: SENSOR_CAL_PENDING;
779: SENSOR_CHANGE_CAL_ERROR;
780: SENSOR_TIME_UNKNOWN;
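As a hedged illustration of that observation (plain JavaScript, not code from the uploader), SGV values above 400 can be translated with the table above instead of being treated as glucose readings:

// Sensor status meanings listed above (769-780).
const SENSOR_STATUS = {
  769: 'SENSOR_INIT',
  770: 'SENSOR_CAL_NEEDED',
  771: 'SENSOR_ERROR',
  772: 'SENSOR_CAL_ERROR',
  773: 'SENSOR_CHANGE_SENSOR_ERROR',
  774: 'SENSOR_END_OF_LIFE',
  775: 'SENSOR_NOT_READY',
  776: 'SENSOR_READING_HIGH',
  777: 'SENSOR_READING_LOW',
  778: 'SENSOR_CAL_PENDING',
  779: 'SENSOR_CHANGE_CAL_ERROR',
  780: 'SENSOR_TIME_UNKNOWN',
};

// Per the observation above, values over 400 are status codes, not glucose.
function interpretSgv(sgv) {
  if (sgv > 400) {
    return { type: 'status', code: sgv, name: SENSOR_STATUS[sgv] || 'Unknown status' };
  }
  return { type: 'glucose', value: sgv };
}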
aahhh, thanks.
Observed this today:
Device_Alarm_Change_Battery_73 = 73,
Device_Alarm_Insert_Battery_84 = 84,
Closing this out now... if I see another alarm code I'll scream ;) v0.7.0 now handles all things alerts and alarms.
|
2025-04-01T04:35:05.957432
| 2021-01-09T22:50:14
|
782699917
|
{
"authors": [
"Pogman",
"TomBusche",
"psit"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9656",
"repo": "pazaan/600SeriesAndroidUploader",
"url": "https://github.com/pazaan/600SeriesAndroidUploader/issues/299"
}
|
gharchive/issue
|
Feature Request: Second Screen for 640G pump without upload
I'm wondering if the Android Uploader can be used or modified as a "simple" second Screen of the 640G pump, without uploading any data anywhere.
Features would be:
show current glucose and trend
show alarms
ring on alarm (until dismissed)
The app would run on my android smart phone, I would attach the Contour Next Link 2.4 USB stick at night and start the app, so background-tasks are not necessary.
We need this as a babymonitor for a 2 year old and I think a whole nightscout setup would be overkill for just getting a second screen for the pump.
I'm sorry that I write this request here, but I do not and will not have Facebook and thus do not see any other way of getting support.
Kind regards!
Peter
You can do this with the uploader as it stands now. Simply install and plug in the Contour meter and it will act as a display without needing to touch a setting.
The uploader itself only alarms for meter disconnects but you can get glucose alarms by using xDrip installed on the same device.
Thank you very much for this fast reply. I was experimenting today a little with my old Samsung S4.
The plain Uploader App works perfectly as a simple second screen. I think you should note this somewhere in the feature list.
The xDrip+ App looks promising - I need to check the capabilities of alerting me and my wife on a hypo. I will know more after this coming night ;)
I know this is not an "issue", but I will keep it open until I can write more about the alerts. I'll update the subject/title.
The alerts of xDrip+ worked perfectly this night. Thanks a lot. I have some other questions, I will open another issue for that.
I'll keep this issue open. Maybe some contributor feels like adding documentation about using the uploader app without uploading ;)
Maybe a hint for the second-screen topic:
Please have a look at this project: https://github.com/mlukasek/M5_NightscoutMon
I have 3 of them. Awesome.
Thanks a lot. That project looks really interesting if you already have the data at nightscout - not what I was looking for back then, because I was looking for something that does not need a server - but very nice and maybe I will check one in the future.
|
2025-04-01T04:35:06.192764
| 2016-04-02T18:30:26
|
145401295
|
{
"authors": [
"mcadariu",
"pc035860"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9657",
"repo": "pc035860/angular-highlightjs",
"url": "https://github.com/pc035860/angular-highlightjs/issues/64"
}
|
gharchive/issue
|
Marking specific subset of source code in addition to highlighting
Hi,
Thanks for the really useful library, great work.
I'd like to ask for a little guidance to proceed, help appreciated.
How can I use angular-highlightjs to "mark" a subset of the source code lines with yellow background?
This JSFiddle shows what I mean, which works with highlight.js:
http://jsfiddle.net/tovic/059x3ygs/2/
Thanks,
Mircea
Hi @mcadariu ,
It's not possible for current angular-highlightjs, the stream-merging feature can only be invoked by hljs.highlightBlock(), and angular-highlightjs's implementation uses hljs.highlight().
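For readers unfamiliar with the two highlight.js entry points, a rough sketch of the difference (classic highlight.js API of that era; treat the exact signatures as assumptions):

// highlightBlock() works on a DOM element and runs the stream-merging step,
// so existing markup such as <mark> tags survives highlighting:
hljs.highlightBlock(document.querySelector('pre code'));

// angular-highlightjs instead highlights the raw source string, so any extra
// markup inside the code is not preserved:
var result = hljs.highlight('javascript', sourceCode);
element.innerHTML = result.value;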
Hi,
Clear, many thanks for the helpful clarification!
Greetings,
Mircea
|
2025-04-01T04:35:06.197263
| 2023-08-10T07:52:14
|
1844625947
|
{
"authors": [
"Phil-Friderici",
"pcfens"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9658",
"repo": "pcfens/puppet-ca_cert",
"url": "https://github.com/pcfens/puppet-ca_cert/pull/78"
}
|
gharchive/pull-request
|
Update to PDK v3.0.0
Based on my previous PR #77.
With this update, the validation and unit tests run successfully. You can see the results in my fork here: https://github.com/Phil-Friderici/puppet-ca_cert/pull/2
Thanks for your work on getting us to PDK 3. Is there any particular order to the 3 MRs that you've submitted? I'm happy to merge them all, but would like to avoid any weird broken states along the way too.
Glad you like them :)
I only separated them into several PRs for better readability. You can merge the latest one and the others should get automatically closed.
Thank you for your fast review \o/
|
2025-04-01T04:35:06.214873
| 2013-06-26T18:46:25
|
16049220
|
{
"authors": [
"Chad813",
"abeburnett"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9659",
"repo": "pcottle/learnGitBranching",
"url": "https://github.com/pcottle/learnGitBranching/issues/109"
}
|
gharchive/issue
|
Level Relative Refs #2 could use more elaboration
First of all, thank you so much for creating this intuitive way to learn and understand Git branching. This is an awesome learning tool.
My issue is that I go through the first 6 lessons and then hit lesson 7 (Relative Refs #2) and hit a brick wall. I get the general idea of how '~' can be used to jump up many levels at once, but using it in the context of the given exercise doesn't make any sense to me. I don't have a clue how to begin.
I'd suggest elaborating on this lesson and making the given examples hew closer to what will be required for completing the exercise...maybe even split this lesson into two lessons which cover the material at greater length (maybe an easier exercise followed by a harder exercise). As it is since I don't have any clue how to use the lesson material to complete the exercise, I can't move past the lesson. And worse, I don't understand the concepts shared.
Thanks again!
didn't know you could force branch to a commit that hasn't been made yet. how do you do that without knowing the hash??
and why does checking out bugFix at c5 and committing make a c7 and not a c6?? more questions than answers.
|
2025-04-01T04:35:06.216853
| 2015-11-05T14:21:04
|
115292874
|
{
"authors": [
"JuhoKang",
"pcottle"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9660",
"repo": "pcottle/learnGitBranching",
"url": "https://github.com/pcottle/learnGitBranching/pull/327"
}
|
gharchive/pull-request
|
Translated multipleParents.js into Korean
Now all "main" levels are translated into Korean.
"remote" to go
:tada: :tada: :tada: :tada: woohoo! awesome man
pushing the site now...
btw there have been almost 400 sessions of korean users on the website this month!
|
2025-04-01T04:35:06.250376
| 2020-04-11T23:10:18
|
598369613
|
{
"authors": [
"jbarlow83",
"pietermarsman"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9661",
"repo": "pdfminer/pdfminer.six",
"url": "https://github.com/pdfminer/pdfminer.six/issues/415"
}
|
gharchive/issue
|
Unfork this repository
Contact GitHub support and ask them to "unfork" this repository, making it a primary repository.
That will enable code indexing, and reflects the fact that this is now the actively maintained version of pdfminer.six rather than euske/pdfminer.
Hi @jbarlow83,
Thanks for this reminder. I am going to contact github and see what they can do.
Pieter.
Now we wait...
It's done :D
|
2025-04-01T04:35:06.274415
| 2023-05-04T00:38:48
|
1695027485
|
{
"authors": [
"jordanpadams",
"pds-ops"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9662",
"repo": "pds-data-dictionaries/ldd-cart",
"url": "https://github.com/pds-data-dictionaries/ldd-cart/pull/51"
}
|
gharchive/pull-request
|
PDS4 Information Model Release <IP_ADDRESS>
Automated tagging of repo for nominal release of sub-model for PDS4 Release <IP_ADDRESS>
@thareUSGS it looks like there may be some changes to the latest GEOM LDD that need to be updated in the CART LDD?
https://github.com/pds-data-dictionaries/ldd-cart/actions/runs/4877878178/jobs/8702978850#step:5:566
|
2025-04-01T04:35:06.278405
| 2024-02-25T09:40:19
|
2152680593
|
{
"authors": [
"AkihiroSuda",
"msackman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9663",
"repo": "pdtpartners/nix-snapshotter",
"url": "https://github.com/pdtpartners/nix-snapshotter/issues/130"
}
|
gharchive/issue
|
defaultRuntime = "runsc" seems to have no effect
I'm experimenting with the new gvisor support.
virtualisation.containerd.rootless = {
enable = true;
nixSnapshotterIntegration = true;
gVisorIntegration = true;
defaultRuntime = "runsc";
};
# nerdctl run nix:0/nix/store/adnry81s33j2lmvy5bxpmlyxdc5z0jq7-nix-image-my-redis2.tar:latest
...
it certainly starts up and works, but on the host a ps aux | grep runsc gives nothing. ps aux | grep runc does give results.
Whereas:
# nerdctl run --runtime runsc nix:0/nix/store/adnry81s33j2lmvy5bxpmlyxdc5z0jq7-nix-image-my-redis2.tar:latest
and now a ps aux | grep runsc shows runsc-gofer and runsc-sandbox working.
Incidentally:
# nerdctl help run | grep runsc
--runtime string Runtime to use for this container, e.g. "crun", or "io.containerd.runsc.v1" (default "io.containerd.runc.v2")
But if I set defaultRuntime = "io.containerd.runsc.v1" then I get the cgroup error (WARN[0002] cannot set cgroup manager to "systemd" for runtime "io.containerd.runsc.v1") because it's not going through your wrapper to ignore the cgroups. So your runsc wrapper definitely works when explicitly used, but for some reason it doesn't seem to be found when set as the default.
The containerd.toml does contain default_runtime_name = "runsc", so I do not understand why it's not taking effect.
See https://github.com/containerd/nerdctl/blob/v2.0.0-rc.2/docs/faq.md#nerdctl-ignores-pluginsiocontainerdgrpcv1cri-config
|
2025-04-01T04:35:06.343711
| 2017-03-31T00:17:42
|
218363919
|
{
"authors": [
"keegan-lillo",
"rajid"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9666",
"repo": "pebble/clay",
"url": "https://github.com/pebble/clay/issues/157"
}
|
gharchive/issue
|
Can the colors of things be changed?
I'd like to change the color of the toggle, when it's "on" from red to green. I'd also like to make the submit button green. Is there a way to set colors in the Clay driven API? I was expecting some "attribute" but can't find any such documention.
There isn't a "direct" API for changing styles as the intent is for it to look as close to the native styling as possible, however you do have access to the "Minified" api of the element itself via the $element property: https://github.com/pebble/clay#clayitem-object-config-
http://minifiedjs.com/api/
clayConfig.on(clayConfig.EVENTS.AFTER_BUILD, function() {
var coolStuffToggle = clayConfig.getItemByMessageKey('cool_stuff');
coolStuffToggle.on('change', function() {
// only set the background if toggled
var toggled = this.get();
coolStuffToggle.$element.select('.marker').set('$backgroundColor', toggled ? 'green' : null);
coolStuffToggle.$element.select('.slide').set('$backgroundColor', toggled ? 'darkgreen' : null);
})
});
Ok, thanks! I'll play with it.
/raj
Thanks for this! It works just fine for setting the slider colors. I'm still trying to set the background color of the "Submit" button itself. I defined it in config.js as:
{
"type": "submit",
"id": "submit",
"defaultValue": "save"
}
which allows me to find it easily with "clayConfig.getItemById('submit')" (I think?), but when I do:
clayConfig.getItemById('submit').$element.set({$backgroundColor: 'green'});
it sets the color of the background behind the button instead of the button itself. I tried things like:
clayConfig.getItemById('submit').$element.select(".label").set({$backgroundColor: 'green'});
clayConfig.getItemById('submit').$element.select(".button").set({$backgroundColor: 'green'});
but nothing works. I don't know enough JS to be able to discover the name of the element to set.
Can you help me with this? I promise this is the last thing I need to do here!
/raj
Yeh none of those selectors will target what you want. Have a look at the source for a submit button https://github.com/pebble/clay/blob/master/src/templates/components/submit.tpl
You want .select("button") not .select(".button")
I'm also not sure why you want to change the colors of the elements. The whole point behind Clay is for it to maintain visual consistency with the rest of the Pebble app (and other developers). By you changing the colors of things, you're interrupting that consistency.
Sorry, I wasn't very clear about my purposes. The program is a special version of my "Orbits" watchface which I produced for my wife. She has type 1 diabetes and I wanted a watchface which would visually remind her when it's time to change your insulin pump (every 3 days). She liked my Orbits watchface, so I modified it with the additional functionality. Later, I realized that I wanted to use your "Clay" package, since it doesn't require an Internet connection in order to make the configuration page work. When I switched it, she immediately said, "Why is the 'Save' button red, and when the toggles turn red it makes it look to me as though it's not selected when it is. I think they should be green." I couldn't help but agree with her logic, actually, and while it really didn't matter to me, if you're married, then you know that making the wife happy is a worthwhile endeavor! Plus, I was just wondering how hard it could be to change something as simple as color, and I figured I'd learn more javascript in the process! Well, now I know that it's not trivial, but it's not that hard either, thanks to your help! I was pretty close to the correct solution and I enjoyed playing with it. I think she'll be happy with this, and since the program is custom for her, no else will see it!
Thanks a lot for all of your help! I really appreciate it!
/raj
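Putting the thread's pieces together, a minimal sketch (reusing the 'submit' id from the config shown above and the AFTER_BUILD event from the first example) of turning the Submit button green:

clayConfig.on(clayConfig.EVENTS.AFTER_BUILD, function() {
  var submitItem = clayConfig.getItemById('submit');
  // Target the <button> element itself ('button'), not a '.button' class.
  submitItem.$element.select('button').set({$backgroundColor: 'green'});
});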
|
2025-04-01T04:35:06.384969
| 2012-12-11T02:15:16
|
9166812
|
{
"authors": [
"StoneCypher",
"curvedmark"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9667",
"repo": "pegjs/pegjs",
"url": "https://github.com/pegjs/pegjs/issues/136"
}
|
gharchive/issue
|
Move some options inside the grammar file
Currently, some CLI options are critical to the action code. Having to specify them on a command line (or inside a Makefile) means scattering of the code:
--export-var
If an action needs to recursively call the parser, then this API is critical to it.
--allowed-start-rule
It's very unlikely authors will frequently change this option without modifying the grammar
So I propose moving these options inside the grammar. To bikeshed, JSON might be used:
{
  "export-var": "var parser",
  "allowed-start-rule": ["start", "nonstart"]
}
{
  // initializer
}
start = 'a'
nonstart = 'b'
The corresponding CLI options could still be preserved, to override options in the grammar file.
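For comparison, a sketch of how these options are supplied today through the JavaScript API rather than the grammar file (the allowedStartRules and startRule option names are taken from contemporary PEG.js releases and should be treated as assumptions here):

var PEG = require("pegjs");

// Options currently live outside the grammar source:
var parser = PEG.buildParser(grammarSource, {
  allowedStartRules: ["start", "nonstart"]
});

// A non-default start rule then has to be named at parse time:
parser.parse("b", { startRule: "nonstart" });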
Notably, the example option given here is 95% of what you need for minimalist Typescript support #562 ; this gives you enough to do es6 modules if you just don't impose it as an assignment, and then add a string for the return type of parse and you're done
|
2025-04-01T04:35:06.401885
| 2024-09-29T13:02:47
|
2554926337
|
{
"authors": [
"devsecur",
"pellepelster"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9669",
"repo": "pellepelster/solidblocks",
"url": "https://github.com/pellepelster/solidblocks/issues/42"
}
|
gharchive/issue
|
Feature Request: Support Hetzner S3 for backups
As Hetzner has started offering S3, and there are other S3 providers besides AWS, I would like to not be limited to AWS but also use Hetzner. Is there a way to use non-AWS S3 providers?
I had the same thought when I heard that Hetzner released the object storage service; the funny thing is, it already works. The underlying backup solution (pgbackrest) supports any S3-compatible service. I applied for the beta and tested it. I noticed though that the docs are lacking a little bit, so here are some updates:
the postgres docker container that holds PostgreSQL as well as pgbackrest now has some updated docs on how to use it with other S3 providers: https://pellepelster.github.io/solidblocks/rds/index.html#s3-backup
the Hetzner RDS Terraform module also got some documentation updates, an integration test as well as an example snippet https://pellepelster.github.io/solidblocks/hetzner/rds/usage/index.html#hetzner-object-storage
until the next Solidblocks version is released, the example can be viewed here https://github.com/pellepelster/solidblocks/tree/main/solidblocks-hetzner/snippets/hetzner-postgres-rds-hetzner-s3-backup
|
2025-04-01T04:35:06.416444
| 2015-09-16T22:42:55
|
106878247
|
{
"authors": [
"johnsmyth",
"stuartpreston"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9670",
"repo": "pendrica/chef-provisioning-azurerm",
"url": "https://github.com/pendrica/chef-provisioning-azurerm/pull/3"
}
|
gharchive/pull-request
|
Azure virtual network
first attempt at virtual network - comments and suggestions welcome.
This is great, @johnsmyth - any chance of adding a spec for the resource to match the others?
I'd be happy to add some tests, but I'm not great with rspec. Do you know of any good examples for testing chef resource attributes?
@stuartpreston - I added some spec tests
I have the same issue on Windows where the comparison of the label is done as if it's the only member of an array vs just the label itself. If you find an answer let me know, otherwise feel free to comment out that particular test for now and let me know when we're good to merge.
Oh, also - could you add an example usage for this resource to README.md? Thanks!
@stuartpreston - I commented out that test and added an example to the README. I think you're good to merge.
Fyi - I started on azure_network_interface yesterday. What are your thoughts on handling dependent resources? A network interface may need a public ip, a vm will need network interfaces, etc...
Also, what's the best forum for questions like this?
Merged #3.
|
2025-04-01T04:35:06.426206
| 2022-06-28T14:24:49
|
1287432532
|
{
"authors": [
"2disbetter",
"gopher333"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9671",
"repo": "penk/penkesu",
"url": "https://github.com/penk/penkesu/issues/11"
}
|
gharchive/issue
|
Keyboard connection
Hi, I would appreciate a little more detail about the hardware configuration of the keyboard. How do I attach the Arduino Mini to the keyboard (pin numbers)? I suppose the USB port has to be to the outside of the PCB, but in which direction? USB side to the PCB or away from it? A picture would do... also the connection of the 4-pin USB connector would be helpful (pins?)
Thanks in advance!
The micro usb port goes facing the Keyboard PCB. This picture should show you the pins. Be careful that nothing is touching metal on metal there. I actually put a small piece of kapton tape in between just to make sure.
Thanks a lot for thte immediate answer, that does it!
|
2025-04-01T04:35:06.427833
| 2022-03-25T12:56:02
|
1180762791
|
{
"authors": [
"e404r",
"penleychan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9672",
"repo": "penleychan/ngx-transloco-router",
"url": "https://github.com/penleychan/ngx-transloco-router/issues/4"
}
|
gharchive/issue
|
transloco v4.0.0
Hi, transloco v4.0.0 and angular 13 not working
Hello, I pushed an update for this. It should work with <EMAIL_ADDRESS>. Please grab the latest version 1.1.1.
|
2025-04-01T04:35:06.450577
| 2017-07-06T19:07:57
|
241055832
|
{
"authors": [
"ddiroma",
"wingman-pentaho"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9673",
"repo": "pentaho/pentaho-kettle",
"url": "https://github.com/pentaho/pentaho-kettle/pull/4125"
}
|
gharchive/pull-request
|
[BACKLOG-15718] Fixes for the following:
Fixes for #'s 6, 7, 8, 10, 11, 12, and 13 for https://docs.google.com/spreadsheets/d/17QboBxK0V064iFzd3GXdpYGZ5I2bg-yWP5OOCYyC9u0/edit#gid=1166424294
6: Normal card background color should be #FDFDFD. (this was my bad - PJ)
7: Right now the text in the cards (name and date info) are not vertically aligned. Adding a line height of 14 pix to the Card title should fix this.
8: When there are no recent files the message "You haven't opened anything recently." is missing a period.
10: Before the user selects a recent file, the open button should be disabled.
11: Before vertical scrollbar appears, maintain 20px gutter to right, currently looks bigger than 20px.
12: When vertical scrollbar appears, maintain 20px gutter to right of card between card edge and scrollbar (dev to see if thats possible).
13: At the end of the 12 recent files, there is no 20px of space under the last row of cards. Right now they stick to the container edge.
@bmorrise
@e-cuellar
Build Completed
:fire: This pull request has some issues. It would be preferable to fix them in order for it to be just perfect. See below for more details. Some links are also available below for further assistance in addressing those issues.
Build Commands
mvn -B -fn -f pom.xml clean install && mvn -B -f pom.xml site
Cleanup Commands
mvn -B -f pom.xml build-helper:remove-project-artifact
Changed files
plugins/file-open-save/core/src/main/javascript/app/app.css
plugins/file-open-save/core/src/main/javascript/app/app.html
plugins/file-open-save/core/src/main/javascript/app/components/card/card.css
plugins/file-open-save/core/src/main/javascript/app/components/card/card.html
plugins/file-open-save/core/src/main/resources/i18n/file-open-save/messages_en.properties
Unit test coverage change
These statistics help you identify how your changes have affected the coverage of the following files. If a file is not in this list, then its coverage was not affected by your changes. To get some help interpreting these metrics, please refer to Jacoco's documentation.
org.pentaho.di.core.injection.bean.BeanInjector
Branch Change: + .84%
Complexity Change: + 1.39%
org.pentaho.di.trans.steps.dbproc.DBProcMeta
Instruction Change: + .11%
org.pentaho.di.trans.steps.dimensionlookup.DimensionLookupMeta
Branch Change: + .31%
Instruction Change: + .06%
Line Change: -.23%:small_red_triangle_down:
Method Change: -1.02%:small_red_triangle_down:
org.pentaho.di.trans.steps.groupby.GroupByMeta
Branch Change: + 2.08%
Complexity Change: + 2.11%
org.pentaho.di.trans.steps.ifnull.IfNullMeta
Branch Change: -2.00%:small_red_triangle_down:
Complexity Change: -1.89%:small_red_triangle_down:
Instruction Change: -.17%:small_red_triangle_down:
org.pentaho.di.trans.steps.insertupdate.InsertUpdateMeta
Instruction Change: + .05%
org.pentaho.di.trans.steps.memgroupby.MemoryGroupByMeta
Branch Change: + 1.18%
Complexity Change: + 1.35%
org.pentaho.di.trans.steps.replacestring.ReplaceStringMeta
Instruction Change: -.07%:small_red_triangle_down:
org.pentaho.di.trans.steps.singlethreader.SingleThreaderMeta
Instruction Change: + .10%
org.pentaho.di.trans.steps.sort.SortRowsMeta
Instruction Change: -.08%:small_red_triangle_down:
org.pentaho.di.trans.steps.uniquerows.UniqueRowsMeta
Instruction Change: -.18%:small_red_triangle_down:
org.pentaho.di.trans.steps.webservices.WebServiceMeta
Branch Change: -1.79%:small_red_triangle_down:
Complexity Change: -1.19%:small_red_triangle_down:
Instruction Change: -.07%:small_red_triangle_down:
org.pentaho.di.www.TransformationMap
Branch Change: -14.77%:small_red_triangle_down:
Complexity Change: -10.77%:small_red_triangle_down:
Instruction Change: -15.67%:small_red_triangle_down:
Line Change: -15.38%:small_red_triangle_down:
org.pentaho.di.trans.steps.googleanalytics.GaInputStepMeta
Instruction Change: + .07%
org.pentaho.di.trans.steps.gpload.GPLoadMeta
Instruction Change: + .09%
|
2025-04-01T04:35:06.506938
| 2018-03-12T17:31:04
|
304468955
|
{
"authors": [
"tmcsantos",
"wingman-pentaho"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9674",
"repo": "pentaho/pentaho-kettle",
"url": "https://github.com/pentaho/pentaho-kettle/pull/5032"
}
|
gharchive/pull-request
|
[Backlog-20158] implements annotated job plugin dialogs
refactors previous StepDialog annotation to PluginDialog
updated core plugins to use new annotation
Build Completed
:x: This pull request has errors. They will need to be addressed before it can be accepted. See below for more details. Some links are also available below for further assistance in addressing those issues.
Build Commands
mvn -Dsurefire.runOrder=alphabetical -B -fn -f 'pom.xml' -pl 'ui,.,engine,core,plugins/core/impl,plugins/core/ui' -P '!assemblies' -amd clean install && mvn -B -f 'pom.xml' -pl 'ui,.,engine,core,plugins/core/impl,plugins/core/ui' -P '!assemblies' -amd site
Cleanup Commands
mvn -B -f 'pom.xml' -pl 'ui,.,engine,core,plugins/core/impl,plugins/core/ui' -P '!assemblies' -amd build-helper:remove-project-artifact
Changed files
core/src/main/java/org/pentaho/di/core/plugins/PluginFolder.java
core/src/main/java/org/pentaho/di/core/plugins/PluginRegistryPluginType.java
engine/src/main/java/org/pentaho/di/core/KettleEnvironment.java
engine/src/main/java/org/pentaho/di/core/annotations/JobEntry.java
engine/src/main/java/org/pentaho/di/core/annotations/PluginDialog.java
engine/src/main/java/org/pentaho/di/core/plugins/JobEntryDialogFragmentType.java
engine/src/main/java/org/pentaho/di/core/plugins/StepDialogFragmentType.java
engine/src/main/java/org/pentaho/di/job/entry/JobEntryBase.java
engine/src/main/java/org/pentaho/di/job/entry/JobEntryInterface.java
engine/src/main/java/org/pentaho/di/trans/step/BaseStepMeta.java
engine/src/main/java/org/pentaho/di/trans/step/StepMetaInterface.java
plugins/core/impl/src/main/java/org/pentaho/di/job/entries/addresultfilenames/JobEntryAddResultFilenames.java
plugins/core/impl/src/main/java/org/pentaho/di/job/entries/checkdbconnection/JobEntryCheckDbConnections.java
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_en_US.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_es_AR.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_fr_FR.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_it_IT.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_ja_JP.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_ko_KR.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_zh_CN.properties
plugins/core/ui/src/main/java/org/pentaho/di/ui/job/entries/addresultfilenames/JobEntryAddResultFilenamesDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/job/entries/checkdbconnection/JobEntryCheckDbConnectionsDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/abort/AbortDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/analyticquery/AnalyticQueryDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/append/AppendDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/autodoc/AutoDocDialog.java
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_en_US.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_es_AR.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_fr_FR.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_it_IT.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_ja_JP.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_ko_KR.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_zh_CN.properties
pom.xml
ui/src/main/java/org/pentaho/di/ui/spoon/Spoon.java
ui/src/main/java/org/pentaho/di/ui/spoon/delegates/SpoonJobDelegate.java
ui/src/main/java/org/pentaho/di/ui/spoon/delegates/SpoonStepsDelegate.java
ui/src/main/java/org/pentaho/di/ui/spoon/delegates/SpoonTabsDelegate.java
ui/src/main/java/org/pentaho/di/ui/spoon/job/JobGraph.java
ui/src/main/resources/org/pentaho/di/ui/spoon/messages/messages_en_US.properties
ui/src/test/java/org/pentaho/di/ui/spoon/delegates/SpoonJobDelegateTest.java
Unit Test Coverage
These statistics help you identify how your changes have affected the coverage of the following files. If a file is not in this list, then its coverage was not affected by your changes. To get some help interpreting these metrics, please refer to Jacoco's documentation.
:warning: Coverage Changes: <0.1% (click to expand)
org.pentaho.di.core.plugins.PluginFolder
Branch Change: -8.3% :small_red_triangle_down:
Complexity Change: -7.5% :small_red_triangle_down:
Instruction Change: -2.9% :small_red_triangle_down:
Line Change: -5.7% :small_red_triangle_down:
:new: org.pentaho.di.core.annotations.PluginDialog.PluginType
Coverage: 0% :exclamation:
:new: org.pentaho.di.core.plugins.JobEntryDialogFragmentType
Branch Coverage: 50% :exclamation:
Complexity Coverage: 30% :exclamation:
Instruction Coverage: 30.6% :exclamation:
Line Coverage: 38.5% :exclamation:
Method Coverage: 25% :exclamation:
org.pentaho.di.core.plugins.StepDialogFragmentType
Branch Change: -50% :small_red_triangle_down:
Complexity Change: -7.5% :small_red_triangle_down:
Instruction Change: -11.1% :small_red_triangle_down:
Line Change: -11.5% :small_red_triangle_down:
Method Change: -3.6% :small_red_triangle_down:
org.pentaho.di.job.entries.evaluatetablecontent.JobEntryEvalTableContent
Branch Change: -1% :small_red_triangle_down:
Complexity Change: -1% :small_red_triangle_down:
Instruction Change: -0.2% :small_red_triangle_down:
org.pentaho.di.trans.steps.ifnull.IfNullMeta
Branch Change: -2% :small_red_triangle_down:
Complexity Change: -1.9% :small_red_triangle_down:
Instruction Change: -0.2% :small_red_triangle_down:
org.pentaho.di.trans.steps.insertupdate.InsertUpdateMeta
Instruction Change: -0.1% :small_red_triangle_down:
org.pentaho.di.trans.steps.ldapoutput.LDAPOutputMeta
Instruction Change: -0.1% :small_red_triangle_down:
org.pentaho.di.trans.steps.salesforceupdate.SalesforceUpdateMeta
Instruction Change: -0.2% :small_red_triangle_down:
org.pentaho.di.trans.steps.salesforceupsert.SalesforceUpsertMeta
Instruction Change: -0.1% :small_red_triangle_down:
org.pentaho.di.ui.spoon.delegates.SpoonTabsDelegate
Instruction Change: -0.1% :small_red_triangle_down:
Method Change: -1.1% :small_red_triangle_down:
License header violations
:heavy_exclamation_mark: Copyright year is not the current year on plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/abort/AbortDialog.java. Found 2017. Was expecting 2018
:heavy_exclamation_mark: Copyright year is not the current year on plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/analyticquery/AnalyticQueryDialog.java. Found 2017. Was expecting 2018
:heavy_exclamation_mark: Copyright year is not the current year on plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/append/AppendDialog.java. Found 2017. Was expecting 2018
:heavy_exclamation_mark: Copyright year is not the current year on plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/autodoc/AutoDocDialog.java. Found 2017. Was expecting 2018
Build Completed
:fire: This pull request has some issues. It would be preferable to fix them in order for it to be just perfect. See below for more details. Some links are also available below for further assistance in addressing those issues.
Build Commands
mvn -Dsurefire.runOrder=alphabetical -B -fn -f 'pom.xml' -pl 'ui,.,engine,core,plugins/core/impl,plugins/core/ui' -P '!assemblies' -amd clean install && mvn -B -f 'pom.xml' -pl 'ui,.,engine,core,plugins/core/impl,plugins/core/ui' -P '!assemblies' -amd site
Cleanup Commands
mvn -B -f 'pom.xml' -pl 'ui,.,engine,core,plugins/core/impl,plugins/core/ui' -P '!assemblies' -amd build-helper:remove-project-artifact
Changed files
core/src/main/java/org/pentaho/di/core/plugins/PluginFolder.java
core/src/main/java/org/pentaho/di/core/plugins/PluginRegistryPluginType.java
engine/src/main/java/org/pentaho/di/core/KettleEnvironment.java
engine/src/main/java/org/pentaho/di/core/annotations/JobEntry.java
engine/src/main/java/org/pentaho/di/core/annotations/PluginDialog.java
engine/src/main/java/org/pentaho/di/core/plugins/JobEntryDialogFragmentType.java
engine/src/main/java/org/pentaho/di/core/plugins/StepDialogFragmentType.java
engine/src/main/java/org/pentaho/di/job/entry/JobEntryBase.java
engine/src/main/java/org/pentaho/di/job/entry/JobEntryInterface.java
engine/src/main/java/org/pentaho/di/trans/step/BaseStepMeta.java
engine/src/main/java/org/pentaho/di/trans/step/StepMetaInterface.java
plugins/core/impl/src/main/java/org/pentaho/di/job/entries/addresultfilenames/JobEntryAddResultFilenames.java
plugins/core/impl/src/main/java/org/pentaho/di/job/entries/checkdbconnection/JobEntryCheckDbConnections.java
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_en_US.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_es_AR.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_fr_FR.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_it_IT.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_ja_JP.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_ko_KR.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_zh_CN.properties
plugins/core/ui/src/main/java/org/pentaho/di/ui/job/entries/addresultfilenames/JobEntryAddResultFilenamesDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/job/entries/checkdbconnection/JobEntryCheckDbConnectionsDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/abort/AbortDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/analyticquery/AnalyticQueryDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/append/AppendDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/autodoc/AutoDocDialog.java
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_en_US.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_es_AR.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_fr_FR.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_it_IT.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_ja_JP.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_ko_KR.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_zh_CN.properties
pom.xml
ui/src/main/java/org/pentaho/di/ui/spoon/Spoon.java
ui/src/main/java/org/pentaho/di/ui/spoon/delegates/SpoonJobDelegate.java
ui/src/main/java/org/pentaho/di/ui/spoon/delegates/SpoonStepsDelegate.java
ui/src/main/java/org/pentaho/di/ui/spoon/delegates/SpoonTabsDelegate.java
ui/src/main/java/org/pentaho/di/ui/spoon/job/JobGraph.java
ui/src/main/resources/org/pentaho/di/ui/spoon/messages/messages_en_US.properties
ui/src/test/java/org/pentaho/di/ui/spoon/delegates/SpoonJobDelegateTest.java
Unit Test Coverage
These statistics help you identify how your changes have affected the coverage of the following files. If a file is not in this list, then its coverage was not affected by your changes. To get some help interpreting these metrics, please refer to Jacoco's documentation.
:warning: Coverage Changes: <0.1% (click to expand)
org.pentaho.di.core.plugins.PluginFolder
Branch Change: -8.3% :small_red_triangle_down:
Complexity Change: -7.5% :small_red_triangle_down:
Instruction Change: -2.9% :small_red_triangle_down:
Line Change: -5.7% :small_red_triangle_down:
:new: org.pentaho.di.core.annotations.PluginDialog.PluginType
Coverage: 0% :exclamation:
:new: org.pentaho.di.core.plugins.JobEntryDialogFragmentType
Branch Coverage: 50% :exclamation:
Complexity Coverage: 30% :exclamation:
Instruction Coverage: 30.6% :exclamation:
Line Coverage: 38.5% :exclamation:
Method Coverage: 25% :exclamation:
org.pentaho.di.core.plugins.StepDialogFragmentType
Branch Change: -50% :small_red_triangle_down:
Complexity Change: -7.5% :small_red_triangle_down:
Instruction Change: -11.1% :small_red_triangle_down:
Line Change: -11.5% :small_red_triangle_down:
Method Change: -3.6% :small_red_triangle_down:
org.pentaho.di.job.entries.ftpsget.JobEntryFTPSGet
Branch Change: -0.5% :small_red_triangle_down:
Complexity Change: -0.6% :small_red_triangle_down:
Instruction Change: -0.1% :small_red_triangle_down:
org.pentaho.di.job.entries.waitforsql.JobEntryWaitForSQL
Branch Change: -0.7% :small_red_triangle_down:
Complexity Change: -0.9% :small_red_triangle_down:
Instruction Change: -0.1% :small_red_triangle_down:
org.pentaho.di.trans.steps.constant.ConstantMeta
Instruction Change: -0.1% :small_red_triangle_down:
org.pentaho.di.trans.steps.csvinput.CsvInputMeta
Instruction Change: -0.1% :small_red_triangle_down:
org.pentaho.di.trans.steps.dbproc.DBProcMeta
Instruction Change: -0.1% :small_red_triangle_down:
org.pentaho.di.trans.steps.memgroupby.MemoryGroupByMeta
Branch Change: -1.1% :small_red_triangle_down:
Complexity Change: -1.3% :small_red_triangle_down:
org.pentaho.di.trans.steps.setvalueconstant.SetValueConstantMeta
Instruction Change: -0.2% :small_red_triangle_down:
org.pentaho.di.trans.steps.singlethreader.SingleThreaderMeta
Instruction Change: -0.1% :small_red_triangle_down:
org.pentaho.di.trans.steps.tableoutput.TableOutputMeta
Branch Change: -0.9% :small_red_triangle_down:
Complexity Change: -0.8% :small_red_triangle_down:
org.pentaho.di.trans.steps.webservices.WebServiceMeta
Branch Change: -1.7% :small_red_triangle_down:
Complexity Change: -1.2% :small_red_triangle_down:
Instruction Change: -0.1% :small_red_triangle_down:
org.pentaho.di.trans.steps.salesforceinsert.SalesforceInsertMeta
Instruction Change: -0.1% :small_red_triangle_down:
Note: The list of offending coverage changes extends beyond the presented results and was truncated to prevent reaching the comment size limit.
Build Completed
:fire: This pull request has some issues. It would be preferable to fix them in order for it to be just perfect. See below for more details. Some links are also available below for further assistance in addressing those issues.
Build Commands
mvn -Dsurefire.runOrder=alphabetical -B -fn -f 'pom.xml' -pl 'ui,.,engine,core,plugins/core/impl,plugins/core/ui' -P '!assemblies' -amd clean install && mvn -B -f 'pom.xml' -pl 'ui,.,engine,core,plugins/core/impl,plugins/core/ui' -P '!assemblies' -amd site
Cleanup Commands
mvn -B -f 'pom.xml' -pl 'ui,.,engine,core,plugins/core/impl,plugins/core/ui' -P '!assemblies' -amd build-helper:remove-project-artifact
Changed files
core/src/main/java/org/pentaho/di/core/plugins/PluginFolder.java
core/src/main/java/org/pentaho/di/core/plugins/PluginRegistryPluginType.java
engine/src/main/java/org/pentaho/di/core/KettleEnvironment.java
engine/src/main/java/org/pentaho/di/core/annotations/JobEntry.java
engine/src/main/java/org/pentaho/di/core/annotations/PluginDialog.java
engine/src/main/java/org/pentaho/di/core/plugins/JobEntryDialogFragmentType.java
engine/src/main/java/org/pentaho/di/core/plugins/StepDialogFragmentType.java
engine/src/main/java/org/pentaho/di/job/entry/JobEntryBase.java
engine/src/main/java/org/pentaho/di/job/entry/JobEntryInterface.java
engine/src/main/java/org/pentaho/di/trans/step/BaseStepMeta.java
engine/src/main/java/org/pentaho/di/trans/step/StepMetaInterface.java
plugins/core/impl/src/main/java/org/pentaho/di/job/entries/addresultfilenames/JobEntryAddResultFilenames.java
plugins/core/impl/src/main/java/org/pentaho/di/job/entries/checkdbconnection/JobEntryCheckDbConnections.java
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_en_US.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_es_AR.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_fr_FR.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_it_IT.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_ja_JP.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_ko_KR.properties
plugins/core/impl/src/main/resources/org/pentaho/di/job/entries/checkdbconnection/messages/messages_zh_CN.properties
plugins/core/ui/src/main/java/org/pentaho/di/ui/job/entries/addresultfilenames/JobEntryAddResultFilenamesDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/job/entries/checkdbconnection/JobEntryCheckDbConnectionsDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/abort/AbortDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/analyticquery/AnalyticQueryDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/append/AppendDialog.java
plugins/core/ui/src/main/java/org/pentaho/di/ui/trans/steps/autodoc/AutoDocDialog.java
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_en_US.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_es_AR.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_fr_FR.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_it_IT.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_ja_JP.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_ko_KR.properties
plugins/core/ui/src/main/resources/org/pentaho/di/ui/job/entries/checkdbconnection/messages/messages_zh_CN.properties
pom.xml
ui/src/main/java/org/pentaho/di/ui/spoon/Spoon.java
ui/src/main/java/org/pentaho/di/ui/spoon/delegates/SpoonJobDelegate.java
ui/src/main/java/org/pentaho/di/ui/spoon/delegates/SpoonStepsDelegate.java
ui/src/main/java/org/pentaho/di/ui/spoon/delegates/SpoonTabsDelegate.java
ui/src/main/java/org/pentaho/di/ui/spoon/job/JobGraph.java
ui/src/main/resources/org/pentaho/di/ui/spoon/messages/messages_en_US.properties
ui/src/test/java/org/pentaho/di/ui/spoon/delegates/SpoonJobDelegateTest.java
Unit Test Coverage
These statistics help you identify how your changes have affected the coverage of the following files. If a file is not in this list, then its coverage was not affected by your changes. To get some help interpreting these metrics, please refer to Jacoco's documentation.
:warning: Coverage Changes: <0.1% (click to expand)
org.pentaho.di.core.plugins.PluginFolder
Branch Change: -8.3% :small_red_triangle_down:
Complexity Change: -7.5% :small_red_triangle_down:
Instruction Change: -2.9% :small_red_triangle_down:
Line Change: -5.7% :small_red_triangle_down:
:new: org.pentaho.di.core.annotations.PluginDialog.PluginType
Coverage: 0% :exclamation:
:new: org.pentaho.di.core.plugins.JobEntryDialogFragmentType
Branch Coverage: 50% :exclamation:
Complexity Coverage: 30% :exclamation:
Instruction Coverage: 30.6% :exclamation:
Line Coverage: 38.5% :exclamation:
Method Coverage: 25% :exclamation:
org.pentaho.di.core.plugins.StepDialogFragmentType
Branch Change: -50% :small_red_triangle_down:
Complexity Change: -7.5% :small_red_triangle_down:
Instruction Change: -11.1% :small_red_triangle_down:
Line Change: -11.5% :small_red_triangle_down:
Method Change: -3.6% :small_red_triangle_down:
org.pentaho.di.trans.steps.calculator.CalculatorMetaFunction
Branch Change: -8.7% :small_red_triangle_down:
Complexity Change: -3% :small_red_triangle_down:
Instruction Change: -0.7% :small_red_triangle_down:
Line Change: -0.9% :small_red_triangle_down:
org.pentaho.di.trans.steps.checksum.CheckSumMeta
Instruction Change: -0.2% :small_red_triangle_down:
org.pentaho.di.trans.steps.datagrid.DataGridMeta
Branch Change: -2% :small_red_triangle_down:
Complexity Change: -1.7% :small_red_triangle_down:
Instruction Change: -0.2% :small_red_triangle_down:
org.pentaho.di.trans.steps.dimensionlookup.DimensionLookupMeta
Branch Change: -1.8% :small_red_triangle_down:
Complexity Change: -1.1% :small_red_triangle_down:
Instruction Change: -0.4% :small_red_triangle_down:
Line Change: -0.3% :small_red_triangle_down:
org.pentaho.di.trans.steps.ifnull.IfNullMeta
Branch Change: -2% :small_red_triangle_down:
Complexity Change: -1.9% :small_red_triangle_down:
Instruction Change: -0.2% :small_red_triangle_down:
org.pentaho.di.trans.steps.memgroupby.MemoryGroupByMeta
Branch Change: -2.2% :small_red_triangle_down:
Complexity Change: -2.6% :small_red_triangle_down:
org.pentaho.di.trans.steps.replacestring.ReplaceStringMeta
Instruction Change: -0.1% :small_red_triangle_down:
org.pentaho.di.trans.steps.rowgenerator.RowGeneratorMeta
Branch Change: -2.6% :small_red_triangle_down:
Complexity Change: -1.5% :small_red_triangle_down:
Instruction Change: -0.1% :small_red_triangle_down:
org.pentaho.di.trans.steps.setvalueconstant.SetValueConstantMeta
Instruction Change: -0.2% :small_red_triangle_down:
org.pentaho.di.trans.steps.textfileinput.TextFileInputMeta
Branch Change: -0.5% :small_red_triangle_down:
org.pentaho.di.trans.steps.uniquerows.UniqueRowsMeta
Instruction Change: -0.1% :small_red_triangle_down:
Note: The list of offending coverage changes extends beyond the presented results and was truncated to prevent reaching the comment size limit.
@pentaho/x-wing pls review
|
2025-04-01T04:35:06.510391
| 2019-12-04T16:58:13
|
532813278
|
{
"authors": [
"buildguy",
"cseverino789"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9675",
"repo": "pentaho/pentaho-kettle",
"url": "https://github.com/pentaho/pentaho-kettle/pull/7120"
}
|
gharchive/pull-request
|
[BACKLOG-31210] Update 'Get file names' step to use enhanced VFS Browser
Updated to use Files or Folders dialog.
:white_check_mark: Build finished in 10m 23s
Build command:
mvn clean verify -B -e -Daudit -amd -pl ui
:ok_hand: All tests passed!
Tests run: 1125, Failures: 0, Skipped: 0 Test Results
:information_source: This is an automatic message
|
2025-04-01T04:35:06.522465
| 2020-06-02T12:43:49
|
629172832
|
{
"authors": [
"elemoine",
"ewjoachim"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9676",
"repo": "peopledoc/procrastinate",
"url": "https://github.com/peopledoc/procrastinate/pull/229"
}
|
gharchive/pull-request
|
WIP Removing the lock table
Refs #212 (not sure it closes it yet)
Successful PR Checklist:
[ ] Tests
[ ] (not applicable?)
[ ] Documentation
[ ] (not applicable?)
[ ] Had a good time contributing?
@marco44 FYI, this is the code @ewjoachim and I are currently testing.
Closed in favor of #231
|
2025-04-01T04:35:06.543478
| 2018-04-06T03:14:38
|
311835638
|
{
"authors": [
"Vlady17",
"coooool123",
"iDefineHD",
"pepzwee"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9677",
"repo": "pepzwee/node-csgo-web-tradebot",
"url": "https://github.com/pepzwee/node-csgo-web-tradebot/issues/149"
}
|
gharchive/issue
|
Steam Login not redirecting back to the sites
Expected behavior
A successful login from hitting the login button and being redirected to the site as usual
Actual behavior
Once you hit the button, it won't redirect you back to the page and won't log in even though you have hit the login button (login button > sign in with credentials > the page doesn't redirect back to the site; instead it just stays on the login page after signing in and doesn't redirect you to the previous pages).
Steps to reproduce the behavior
Hit the login button and the actual behavior listed above will happen
Other
[Operating System]: Ubuntu 16.04.4 x64
[Node.js version]: whichever is listed on the steamapiskey sites
[How did you start the script]: via putty + digital ocean
Error Stack
I opened up the console and had this message when trying to login "The SSL certificate used to load resources from https://steamcommunity-a.akamaihd.net will be distrusted in M70. Once distrusted, users will be prevented from loading these resources. See https://g.co/chrome/symantecpkicerts for more information."
All Symantec certs are no longer trusted by browsers, as they were bought out. Therefore they are not a "trusted" certificate authority.
You need to update passport-steam, which can be done like so npm i passport-steam inside the script directory.
I did that and now the error isn't showing up in the console anymore, so thanks, but I still can't get redirected back to the site.
To be able to login do I need to purchase/install the steamapiskey key first?
No, to be able to login your config file needs to have proper URL's and Steam API key. SteamApis.com key is required to load inventories and prices.
The error is still persistent please help
fix steam-passport library https://github.com/liamcurry/passport-steam/pull/74
Thanks i'll try it out vlad
@Vlady17 It still doesn't work for me!! I went to root/.npm/passport-steam and typed in the npm i passport-steam command and still nothing :(
|
2025-04-01T04:35:06.705684
| 2024-06-19T09:32:09
|
2361863997
|
{
"authors": [
"JNKPercona",
"inelpandzic"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9678",
"repo": "percona/percona-server-mysql-operator",
"url": "https://github.com/percona/percona-server-mysql-operator/pull/673"
}
|
gharchive/pull-request
|
K8SPS-241: Cluster-wide support
CHANGE DESCRIPTION
Problem:
CW support missing, add it.
CHECKLIST
Jira
[x] Is the Jira ticket created and referenced properly?
[x] Does the Jira ticket have the proper statuses for documentation (Needs Doc) and QA (Needs QA)?
[x] Does the Jira ticket link to the proper milestone (Fix Version field)?
Tests
[x] Is an E2E test/test case added for the new feature/change?
[x] Are unit tests added where appropriate?
Config/Logging/Testability
[x] Are all needed new/changed options added to default YAML files?
[x] Did we add proper logging messages for operator actions?
[x] Did we ensure compatibility with the previous version or cluster upgrade process?
[x] Does the change support oldest and newest supported PS version?
[x] Does the change support oldest and newest supported Kubernetes version?
Test name: Status
async-ignore-annotations: passed
auto-config: passed
config: passed
config-router: passed
demand-backup: passed
gr-demand-backup: passed
gr-demand-backup-haproxy: passed
gr-finalizer: passed
gr-haproxy: passed
gr-ignore-annotations: passed
gr-init-deploy: passed
gr-one-pod: passed
gr-recreate: passed
gr-scaling: passed
gr-security-context: passed
gr-self-healing: passed
gr-tls-cert-manager: passed
gr-users: passed
haproxy: passed
init-deploy: passed
limits: failure
monitoring: passed
one-pod: passed
operator-self-healing: passed
recreate: passed
scaling: passed
service-per-pod: passed
sidecars: passed
smart-update: passed
tls-cert-manager: passed
users: passed
version-service: failure
We run 32 out of 32
commit: https://github.com/percona/percona-server-mysql-operator/pull/673/commits/5ce222e46d2cdddb44bf55061897544d575e6714
image: perconalab/percona-server-mysql-operator:PR-673-5ce222e4
Test name: Status
async-ignore-annotations: passed
auto-config: passed
config: passed
config-router: passed
demand-backup: passed
gr-demand-backup: passed
gr-demand-backup-haproxy: passed
gr-finalizer: passed
gr-haproxy: passed
gr-ignore-annotations: passed
gr-init-deploy: passed
gr-one-pod: passed
gr-recreate: passed
gr-scaling: passed
gr-security-context: passed
gr-self-healing: passed
gr-tls-cert-manager: passed
gr-users: passed
haproxy: passed
init-deploy: passed
limits: failure
monitoring: passed
one-pod: passed
operator-self-healing: passed
recreate: passed
scaling: passed
service-per-pod: passed
sidecars: passed
smart-update: failure
tls-cert-manager: passed
users: passed
version-service: failure
We run 32 out of 32
commit: https://github.com/percona/percona-server-mysql-operator/pull/673/commits/e97ad85b57c29db68a33ac2af9f9cb8993def4f9
image: perconalab/percona-server-mysql-operator:PR-673-e97ad85b
Test name: Status
async-ignore-annotations: passed
auto-config: passed
config: passed
config-router: passed
demand-backup: passed
gr-demand-backup: passed
gr-demand-backup-haproxy: passed
gr-finalizer: passed
gr-haproxy: passed
gr-ignore-annotations: passed
gr-init-deploy: passed
gr-one-pod: passed
gr-recreate: passed
gr-scaling: passed
gr-security-context: passed
gr-self-healing: passed
gr-tls-cert-manager: passed
gr-users: passed
haproxy: passed
init-deploy: passed
limits: passed
monitoring: passed
one-pod: passed
operator-self-healing: passed
recreate: passed
scaling: passed
service-per-pod: passed
sidecars: passed
smart-update: passed
tls-cert-manager: failure
users: passed
version-service: failure
We run 32 out of 32
commit: https://github.com/percona/percona-server-mysql-operator/pull/673/commits/a1a9a8c76845c8a9180d688a2f183f52d4a16011
image: perconalab/percona-server-mysql-operator:PR-673-a1a9a8c7
Test name: Status
async-ignore-annotations: passed
auto-config: passed
config: passed
config-router: passed
demand-backup: failure
gr-demand-backup: failure
gr-demand-backup-haproxy: failure
gr-finalizer: passed
gr-haproxy: passed
gr-ignore-annotations: passed
gr-init-deploy: passed
gr-one-pod: failure
gr-recreate: failure
gr-scaling: passed
gr-security-context: failure
gr-self-healing: passed
gr-tls-cert-manager: passed
gr-users: passed
haproxy: passed
init-deploy: passed
limits: passed
monitoring: failure
one-pod: failure
operator-self-healing: passed
recreate: passed
scaling: passed
service-per-pod: passed
sidecars: passed
smart-update: passed
tls-cert-manager: passed
users: failure
version-service: passed
We run 32 out of 32
commit: https://github.com/percona/percona-server-mysql-operator/pull/673/commits/1eb58d9bed545ed7f16146e9f74e7f038c08100e
image: perconalab/percona-server-mysql-operator:PR-673-1eb58d9b
Test name: Status
async-ignore-annotations: passed
auto-config: passed
config: passed
config-router: passed
demand-backup: failure
gr-demand-backup: failure
gr-demand-backup-haproxy: failure
gr-finalizer: passed
gr-haproxy: passed
gr-ignore-annotations: passed
gr-init-deploy: passed
gr-one-pod: failure
gr-recreate: passed
gr-scaling: passed
gr-security-context: failure
gr-self-healing: passed
gr-tls-cert-manager: passed
gr-users: passed
haproxy: passed
init-deploy: passed
limits: failure
monitoring: failure
one-pod: failure
operator-self-healing: passed
recreate: passed
scaling: passed
service-per-pod: passed
sidecars: passed
smart-update: passed
tls-cert-manager: passed
users: passed
version-service: failure
We run 32 out of 32
commit: https://github.com/percona/percona-server-mysql-operator/pull/673/commits/c4dc92376ed296cf3ffc4c863c4a93aac65bea97
image: perconalab/percona-server-mysql-operator:PR-673-c4dc9237
Test name: Status
async-ignore-annotations: passed
auto-config: passed
config: passed
config-router: passed
demand-backup: passed
gr-demand-backup: passed
gr-demand-backup-haproxy: passed
gr-finalizer: passed
gr-haproxy: passed
gr-ignore-annotations: passed
gr-init-deploy: passed
gr-one-pod: passed
gr-recreate: passed
gr-scaling: passed
gr-security-context: passed
gr-self-healing: passed
gr-tls-cert-manager: passed
gr-users: passed
haproxy: passed
init-deploy: passed
limits: passed
monitoring: passed
one-pod: failure
operator-self-healing: passed
recreate: passed
scaling: passed
service-per-pod: passed
sidecars: passed
smart-update: passed
tls-cert-manager: passed
users: passed
version-service: passed
We run 32 out of 32
commit: https://github.com/percona/percona-server-mysql-operator/pull/673/commits/482fc5598d737a122e3c8184ce3aed4d3e265518
image: perconalab/percona-server-mysql-operator:PR-673-482fc559
Test name: Status
async-ignore-annotations: passed
auto-config: passed
config: passed
config-router: passed
demand-backup: passed
gr-demand-backup: passed
gr-demand-backup-haproxy: passed
gr-finalizer: passed
gr-haproxy: passed
gr-ignore-annotations: passed
gr-init-deploy: passed
gr-one-pod: passed
gr-recreate: passed
gr-scaling: passed
gr-security-context: passed
gr-self-healing: passed
gr-tls-cert-manager: passed
gr-users: passed
haproxy: passed
init-deploy: passed
limits: passed
monitoring: passed
one-pod: passed
operator-self-healing: passed
recreate: passed
scaling: passed
service-per-pod: passed
sidecars: passed
smart-update: passed
tls-cert-manager: passed
users: passed
version-service: passed
We run 32 out of 32
commit: https://github.com/percona/percona-server-mysql-operator/pull/673/commits/482fc5598d737a122e3c8184ce3aed4d3e265518
image: perconalab/percona-server-mysql-operator:PR-673-482fc559
Test name: Status
async-ignore-annotations: passed
auto-config: passed
config: passed
config-router: passed
demand-backup: passed
gr-demand-backup: passed
gr-demand-backup-haproxy: passed
gr-finalizer: passed
gr-haproxy: passed
gr-ignore-annotations: passed
gr-init-deploy: passed
gr-one-pod: passed
gr-recreate: passed
gr-scaling: passed
gr-security-context: passed
gr-self-healing: passed
gr-tls-cert-manager: passed
gr-users: passed
haproxy: passed
init-deploy: passed
limits: passed
monitoring: passed
one-pod: passed
operator-self-healing: passed
recreate: passed
scaling: passed
service-per-pod: passed
sidecars: passed
smart-update: passed
tls-cert-manager: passed
users: passed
version-service: passed
We run 32 out of 32
commit: https://github.com/percona/percona-server-mysql-operator/pull/673/commits/2411b41c00c06147f6a350bcfd0e6ce7bfbbc4ff
image: perconalab/percona-server-mysql-operator:PR-673-2411b41c
|
2025-04-01T04:35:06.715763
| 2022-09-05T07:42:56
|
1361534640
|
{
"authors": [
"AudunWA",
"Robdel12"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9679",
"repo": "percy/percy-testcafe",
"url": "https://github.com/percy/percy-testcafe/issues/268"
}
|
gharchive/issue
|
Support testcafe@2
Testcafé just released a new major version, which means this package has to update its peer dependency on it:
"peerDependencies": {
"testcafe": "~1"
},
npm ERR! Could not resolve dependency:
npm ERR! peer testcafe@"~1" from<EMAIL_ADDRESS>npm ERR! node_modules/@percy/testcafe
npm ERR! dev<EMAIL_ADDRESS>from the root project
npm ERR!
npm ERR! Conflicting peer dependency<EMAIL_ADDRESS>npm ERR! node_modules/testcafe
npm ERR! peer testcafe@"~1" from<EMAIL_ADDRESS>npm ERR! node_modules/@percy/testcafe
npm ERR! dev<EMAIL_ADDRESS>from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
This was fixed by #270, could you publish a new release @Robdel12?
Hey @AudunWA! This shouldn't be an issue anymore with v1.0.3 https://github.com/percy/percy-testcafe/releases/tag/v1.0.3
|
2025-04-01T04:35:06.716635
| 2017-06-02T11:47:38
|
233158698
|
{
"authors": [
"percyfal"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9680",
"repo": "percyfal/snakemake-rules",
"url": "https://github.com/percyfal/snakemake-rules/issues/44"
}
|
gharchive/issue
|
Make one aggregate function
All aggregate functions look alike. Either reference one aggregation function or utilize a factory
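A minimal sketch of the factory option, in Python with made-up names (nothing below is taken from the snakemake-rules or bioodo code); one parametrised constructor stands in for the family of near-identical aggregate functions:

def make_aggregator(suffix):
    # Factory: build an aggregate function for a given file suffix.
    def aggregate(samples):
        return [f"{sample}{suffix}" for sample in samples]
    return aggregate

aggregate_bam = make_aggregator(".bam")
aggregate_vcf = make_aggregator(".vcf")

print(aggregate_bam(["A", "B"]))  # ['A.bam', 'B.bam']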
Should go in bioodo
|
2025-04-01T04:35:06.718951
| 2024-01-18T18:56:59
|
2088879320
|
{
"authors": [
"perennialtech"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9681",
"repo": "perennialtech/upptime",
"url": "https://github.com/perennialtech/upptime/issues/35"
}
|
gharchive/issue
|
⚠️ Rimgo has degraded performance
In 4f9b913, Rimgo (https://rimgo.perennialte.ch) experienced degraded performance:
HTTP code: 200
Response time: 10885 ms
Resolved: Rimgo performance has improved in 9fbece0 after 11 minutes.
|
2025-04-01T04:35:06.734366
| 2018-10-28T08:47:44
|
374731496
|
{
"authors": [
"JJ",
"chsanch",
"coke",
"hankache",
"tbrowder"
],
"license": "Artistic-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9682",
"repo": "perl6/doc",
"url": "https://github.com/perl6/doc/issues/2424"
}
|
gharchive/issue
|
Error when building the documentation with: make html
Hi,
I've tried to build the documentation with:
make html
But I'm getting this error:
Cannot resolve caller handle(Pod::Defn
Pod::Blo..., :part-config({anchored => True, h...), :pod-name(/language/pod.pod6), :part-number(48), :toc-counter(Pod::To::BigPage::TO...)); none of these signatures match:
(Pod::Block::Code $node, :$pod-name, :$part-number, :$toc-counter, :%part-config)
(Pod::Block::Comment $node, :$pod-name, :$part-number, :$toc-counter, :%part-config)
(Pod::Block::Declarator $node, :$pod-name, :$part-number, :$toc-counter)
(Pod::Block::Named $node, :$pod-name, :$part-number, :$toc-counter, :%part-config)
(Pod::Block::Named $node where { ... }, :$pod-name, :$part-number, :$toc-counter, :%part-config)
(Pod::Block::Named $node where { ... }, :$pod-name, :$part-number, :$toc-counter, :%part-config)
(Pod::Block::Named $node where { ... }, :$pod-name, :$part-number, :$toc-counter, :%part-config)
(Pod::Block::Named $node where { ... }, :$pod-name, :$part-number, :$toc-counter, :%part-config)
(Pod::Block::Para $node, $context where { ... }, :$pod-name, :$part-number, :$toc-counter, :%part-config)
(Pod::Block::Para $node, $context = Context::None, :$pod-name, :$part-number, :$toc-counter, :%part-config)
(Pod::Block::Para $node, $context where { ... }, :$pod-name, :$part-number, :$toc-counter)
(Pod::Block::Table $node, :$pod-name, :$part-number, :$toc-counter, :%part-config)
(Pod::Config $node, :$pod-name, :$part-number, :$toc-counter, :%part-config)
(Pod::FormattingCode $node, $context where { ... }, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context = Context::None, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context = Context::None, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context where { ... } = Context::None, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context = Context::None, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context = Context::None, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context = Context::None, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context = Context::None, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context = Context::None, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context = Context::None, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context = Context::None, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context = Context::None, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context = Context::None, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context = Context::None, :$pod-name, :$part-number, :$toc-counter)
(Pod::FormattingCode $node where { ... }, $context where { ... }, :$pod-name, :$part-number, :$toc-counter)
(Pod::Heading $node, :$pod-name, :$part-number, :$toc-counter, :%part-config)
(Pod::Item $node, :$pod-name, :$part-number, :$toc-counter, :%part-config)
(Pod::Item $node where { ... }, :$part-number, :$toc-counter, :%part-config)
(Pod::Raw $node, :$pod-name, :$part-number, :$toc-counter)
(Str $node, Context $context?, :$pod-name, :$part-number, :$toc-counter)
(Str $node, Context $context where { ... }, :$pod-name, :$part-number, :$toc-counter)
(Nil, :$pod-name, :$part-number, :$toc-counter)
in sub handle at /Users/chsanch/rakudo/install/share/perl6/site/sources/AECBB210A3A07DC20E6B2491D30B4B37D07542DF (Pod::To::BigPage) line 318
in sub parse-pod-file at /Users/chsanch/rakudo/install/share/perl6/site/resources/9A47DEF5F33A2E9D9B3BEECBE1A2DA8B45DD2E59 line 86
in block at /Users/chsanch/rakudo/install/share/perl6/site/resources/9A47DEF5F33A2E9D9B3BEECBE1A2DA8B45DD2E59 line 48
in sub MAIN at /Users/chsanch/rakudo/install/share/perl6/site/resources/9A47DEF5F33A2E9D9B3BEECBE1A2DA8B45DD2E59 line 47
in block <unit> at /Users/chsanch/rakudo/install/share/perl6/site/resources/9A47DEF5F33A2E9D9B3BEECBE1A2DA8B45DD2E59 line 100
in sub MAIN at /Users/chsanch/rakudo/install/share/perl6/site/bin/pod2onepage line 2
in block <unit> at /Users/chsanch/rakudo/install/share/perl6/site/bin/pod2onepage line 2
make: *** [bigpage] Error 1
I've installed all the modules needed with:
zef --deps-only install .
And this is my Perl 6 version on MacOS Mojave:
This is Rakudo version 2018.09-530-gf81146ae9 built on MoarVM version 2018.09-141-gb2e870c6f
implementing Perl 6.d.
I would say it's a Pod::To::Bigpage error, but I'll check and try to reproduce it.
I could reproduce it, and it's a problem with Pod::Defn in that page. I'll try to fix it there.
See if Rakudo PR #2439 fixes this.
Still not working.
How is this not breaking the build?
Link to Rakudo PR (above link is to the DOC PR) https://github.com/rakudo/rakudo/pull/2439
That PR has been merged to rakudo master, error is still occurring.
This is not breaking the build because the site is being built with an old version of rakudo:
+ echo 'Building docs for 98ed216de31d81926dd48cf8121dc6862db27e24 with ' This is Rakudo version 2018.03-148-g916b41a21 built on MoarVM version 2018.03-68-ged4201e92 implementing Perl 6.c.
from https://docs.perl6.org/build-log/
Which is probably the reason why it's not been upgraded yet...
Someone updated rakudo on docs.perl6.org so now this bug is impacting the live site.
We actually need Defn in the documentation now, after it's been incorporated by @finanalyst. I guess that until it's fixed we'll have to go back to 2018.06 or the latest version before this...
Besides, those logs are really old...
I'll check the latest version for this bug also... It should have been fixed.
That was patched by @finanalyst just today: perl6/perl6-pod-to-bigpage#33
The logs are not old: they're sorted in reverse order, however (look to the bottom) Updating the app config doesn't mean that the installed versions on the doc site are updated; we need to do a zef install-deps on the site. Checking that now.
Doesn't look like 0.5.0 is an available version for me yet. I just did a force install of the module and only got 0.4.0 (and it keeps telling me it's up to date otherwise), and zef doesn't complain about not having that version when you do an install-deps, so I'm not sure the META6.json is being enforced.
I just upgraded it. Can you please check?
It's working now. The main problem is that, with old versions of Perl 6, it will not render Defn. I'll try and see where I can insert a warning for that and will close this again.
|
2025-04-01T04:35:06.737705
| 2016-09-07T03:48:53
|
175406809
|
{
"authors": [
"AlexDaniel",
"ahalbert"
],
"license": "Artistic-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9683",
"repo": "perl6/doc",
"url": "https://github.com/perl6/doc/pull/890"
}
|
gharchive/pull-request
|
Converted -- to –
See #888
Is it important if code chunks also use – instead of --? I'll go back and change that if so.
It fails this in the CI: what does this mean?
pod2onepage --threads=1 -v --source-path=./doc --exclude=404.pod6,/.git,/precompiled > html/perl6.xhtml
Type check failed in binding to $id; expected CompUnit::PrecompilationId but got Str ("4CDC84469EF12631D41D...)
in block at /home/travis/.rakudobrew/moar-nom/install/share/perl6/site/resources/5E2F5C8269024A43AAC8425E863FC9AD6F5D0B0C line 66
in sub parse-pod-file at /home/travis/.rakudobrew/moar-nom/install/share/perl6/site/resources/5E2F5C8269024A43AAC8425E863FC9AD6F5D0B0C line 59
in block at /home/travis/.rakudobrew/moar-nom/install/share/perl6/site/resources/5E2F5C8269024A43AAC8425E863FC9AD6F5D0B0C line 38
in sub MAIN at /home/travis/.rakudobrew/moar-nom/install/share/perl6/site/resources/5E2F5C8269024A43AAC8425E863FC9AD6F5D0B0C line 38
in block <unit> at /home/travis/.rakudobrew/moar-nom/install/share/perl6/site/resources/5E2F5C8269024A43AAC8425E863FC9AD6F5D0B0C line 54
Is it important if code chunks also use – instead of --?
What kind of code chunks? I did a quick grep but did not find anything relevant to what you are mentioning. Can you show some examples?
An example is
proto congratulate(Str $reason, Str $name, |) {*}
multi congratulate($reason, $name) {
say "Hooray for your $reason, $name";
}
multi congratulate($reason, $name, Int $rank) {
say "Hooray for your $reason, $name -- got rank $rank!";
}
in functions.pod6
@ahalbert I think that an argument can be made both ways, so just leave it as is. It really does not matter that much if you're using monospace font, and I think that most people do.
|
2025-04-01T04:35:06.752269
| 2017-03-10T03:06:59
|
213230610
|
{
"authors": [
"LRonHubs"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9684",
"repo": "perminder-klair/angular-soundmanager2",
"url": "https://github.com/perminder-klair/angular-soundmanager2/issues/66"
}
|
gharchive/issue
|
How to access commands from controller?
I understand how to add track, etc using the buttons provided.
For my use case I need to remove these buttons and instead do everything programatically (i.e. without user input).
I'm new to Angular JS. Basically I just want to use this library to do simple things like add a specific track and play it. That's really it.
I have this part:
angular.module('memoryGameApp', ['angularSoundManager']);
Now how do I reference this library inside the scope of my project? What I mean is, do I refer to the angular sound manager object as "angularSoundManager", "soundManager", "ngSoundManager", or "angularPlayer"? I see all of these used in different contexts in the code.
Just trying to wrap my head around angularJS. thanks.
Using this stackoverflow post, you can basically hack Angular JS to click buttons that are already in the HTML. Then add the property hidden="true" to your button if you don't want it to be seen
|
2025-04-01T04:35:06.772645
| 2015-11-30T15:31:25
|
119515296
|
{
"authors": [
"ClintLiddick",
"jeking04",
"mkoval",
"psigen"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9685",
"repo": "personalrobotics/herbpy",
"url": "https://github.com/personalrobotics/herbpy/issues/65"
}
|
gharchive/issue
|
No rosdep key for or_pushing
What's the correct way to handle installing herbpy now? rosdep installs test dependencies by default, and exits if a key isn't found.
or_pushing is a package in @jeking04's randomized_rearrangement_planning repository. rosdep only installs apt and pip packages: it does not build code from source. You need to have randomized_rearrangement_planning checked out in your workspace before running rosdep. This means that we need to add randomized_rearrangement_planning to the appropriate .rosinstall files in the pr-rosinstalls repository.
Unfortunately, we can't do that because randomized_rearrangement_planning is currently private. Our best bet is to remove the dependency on or_pushing and temporarily disable the tests. We can revert this change (and update the appropriate .rosinstall files) once the repository is public.
Can we reverse the dependency and put the tests in randomized_rearrangement_planning? If not then yeah, I agree we need to pull it out for now.
Sounds good. I'll pull it out.
The test is for the herbpy action that is calling the planner in
randomized_rearrangement_planning. So I think it makes sense for it
to be in herbpy. But let's disable it for now.
On Tue, Dec 1, 2015 at 8:48 AM, Clint Liddick <EMAIL_ADDRESS> wrote:
Can we reverse the dependency and put the tests in
randomized_rearrangement_planning? If not then yeah, I agree we need to
pull it out for now.
—
Reply to this email directly or view it on GitHub
https://github.com/personalrobotics/herbpy/issues/65#issuecomment-160974905
.
--
Jen King
Does it make sense to have a separate .rosinstall that is the equivalent of test_depends?
It seems like in general there could always be optional packages that might not be public/open-source, but that we would want to test against.
Or, correspondingly, there could be test packages that are not required for use, but are required for testing, like test environments with extra metadata.
You can always test the core .rosinstall (or really, you should probably test all 4 of them) to make sure that they can still run herbpy, but that's potentially separate from testing every piece of optional functionality.
Does it make sense to have a separate .rosinstall that is the equivalent of test_depends?
Travis does not use a .rosinstall file at all to install dependencies. Instead, it dynamically installs the dependencies specified in the package.xml file. This is, in my opinion, the correct approach because it guarantees that the package.xml file is correct.
Or, correspondingly, there could be test packages that are not required for use, but are required for testing, like test environments with extra metadata.
This is the point of test_depend.
You can always test the core .rosinstall (or really, you should probably test all 4 of them) to make sure that they can still run herbpy, but that's potentially separate from testing every piece of optional functionality.
I agree. In addition to running tests on individual packages, we should also test each .rosinstall file included in pr-rosinstalls as a unit. I started implementing (see the feature/travis branch), but currently only herb-minimal-sim.rosinstall succeeds. I haven't had time to fix the issues in the other .rosinstall files.
@jeking04 removed the dependency from package.xml in 554e8e06e69d865456c48ebae3ba3663b1d5ef44, so this has been resolved.
|
2025-04-01T04:35:06.816099
| 2020-02-14T13:00:34
|
565313062
|
{
"authors": [
"vineetvk01"
],
"license": "WTFPL",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9686",
"repo": "pesto-students/batch-12-WeConnect",
"url": "https://github.com/pesto-students/batch-12-WeConnect/pull/31"
}
|
gharchive/pull-request
|
CRUD of Locations, WorkSpaces
APIs to perform CRUD operations on location and workspaces
@fenilgandhi Please provide MONGODB_URL in .env file for complete CI testing.
|
2025-04-01T04:35:06.821023
| 2022-07-20T10:52:17
|
1310845754
|
{
"authors": [
"kasperbjerby",
"pestopancake"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9687",
"repo": "pestopancake/laravel-backpack-database-notifications",
"url": "https://github.com/pestopancake/laravel-backpack-database-notifications/issues/16"
}
|
gharchive/issue
|
Support latest backpack crud, please
Why? Just why?
Locked to 4.0.*|4.1.*|5.0.*
That means it won't work with the latest backpack crud without manually installing it
Please change it to
"^4.0|^5.0"
hey @kasperbjerby, good point. I've just prepared a new release (1.0.8-alpha), would be great if you can try it out for us
Works fine, but you should maybe mark this addon as a pro addon, cause you can't use it without backpack pro, cause it uses filters
You also don't seem to support overwriting controllers, models, routes etc.
So I just ended up copying the files, and removing it again
Works fine, but you should maybe mark this addon as a pro addon, because you can't use it without backpack pro, because it uses filters, so I guess I have to modify it myself
paging the boss, @tabacitu. Would be nice if this addon could work within both the free and pro version?
Also a side note, I have modified my own installation to use Laravel's broadcasting instead of constantly polling the server for updates; I would also add support for that if I were you
Also a side note, I have modified my own installation to use Laravel's broadcasting instead of constantly polling the server for updates; I would also add support for that if I were you
Ah yes I've done the same too, but for this package I thought it best not to require broadcasting as not everyone will have that set up on their project.
PRs welcome if you'd like to help add this functionality in - maybe we can allow choosing between polling or broadcasting in the config.
|
2025-04-01T04:35:06.829534
| 2019-10-09T08:21:48
|
504491345
|
{
"authors": [
"alice-cool",
"chlorane",
"zihaocode"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9688",
"repo": "peteanderson80/bottom-up-attention",
"url": "https://github.com/peteanderson80/bottom-up-attention/issues/71"
}
|
gharchive/issue
|
Could not open test.prototxt
I'm using the res-101 network and tools/generate_tsv.py to extract bounding box features to a tab-separated-values (tsv) file. I'm using a new dataset based on cartoon series.
So I run:
python tools/generate_tsv.py --cfg experiments/cfgs/faster_rcnn_end2end_resnet.yml --def models/vg/ResNet-101/faster_rcnn_end2end/test.prototxt --out test2014_resnet101_faster_rcnn_genome.tsv --net data/faster_rcnn_models/resnet101_faster_rcnn_final.caffemodel --split cartoon
However, when I tried to run generate_tsv.py, an error happened:
Process Process-1:
Traceback (most recent call last):
File "/home/chlorane/anaconda3/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/chlorane/anaconda3/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "tools/generate_tsv.py", line 162, in generate_tsv
net = caffe.Net(prototxt, caffe.TEST, weights=weights)
RuntimeError: Could not open file ./models/vg/ResNet-101/faster_rcnn_end2end/test.prototxt
Even if I use the absolute path here, the error still happens. What's wrong and how can I solve it?
Hi, I had the same problem. You can solve it by changing the arg --def models/vg/ResNet-101/faster_rcnn_end2end/test.prototxt to --def models/vg/ResNet-101/faster_rcnn_end2end_final/test.prototxt. That works for me!
https://github.com/peteanderson80/bottom-up-attention/issues/71#issuecomment-542163318
Dear scholar, did you get it to run? I got all zeros and NaN in the second picture. I am so confused
|
2025-04-01T04:35:06.843988
| 2021-11-21T21:20:17
|
1059489915
|
{
"authors": [
"Nohus",
"peterLaurence"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9689",
"repo": "peterLaurence/MapCompose",
"url": "https://github.com/peterLaurence/MapCompose/pull/9"
}
|
gharchive/pull-request
|
Added a version of scrollTo() that scrolls to an area
A new version of scrollTo() that scrolls to an area defined by a BoundingBox. For example, if you want to show a city on a map so that it's zoomed in on the city as much as possible while still keeping the entire city in view. It also includes an optional padding parameter in case we don't want the area to occupy the viewport edge to edge.
This is looking good. Have you tried the behavior of this API when the map is rotated?
I don't use rotations in my app so I haven't tried it. I will try and see what happens.
Alright. The documentation can clarify that it's only applicable when the rotation is 0. Nevertheless, it's interesting for you so see what happens with rotation enabled.
After that doc change, I'm +1 to merge.
It did not work when rotated. But now it does.
Again, this is looking good, thank you. I just have a few questions (see the review)
I don't think you submitted the review.
You're right haha, submitted!
That was a great first contribution. Well done.
Well, at least comparing to all the other contributors. : )
Haha, sure. But it's not easy to contribute to this kind of project, because of the complexity.
|
2025-04-01T04:35:06.845823
| 2021-09-25T09:52:18
|
1007041212
|
{
"authors": [
"AhmadShkour71",
"JoniXTech"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9690",
"repo": "peterhanania/Pogy-Old",
"url": "https://github.com/peterhanania/Pogy-Old/issues/26"
}
|
gharchive/issue
|
Will there be a new version of Pogy?
Will there be a new version of Pogy, or another bot like Pogy?
I love Pogy, but it is using an old Discord.js version.
From Owner Pogy Petter
|
2025-04-01T04:35:06.860429
| 2019-08-07T04:43:09
|
477704337
|
{
"authors": [
"VasilyFomin",
"peteroupc"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9691",
"repo": "peteroupc/CBOR",
"url": "https://github.com/peteroupc/CBOR/issues/32"
}
|
gharchive/issue
|
Async version?
Hi,
Thanks for the library. I'm wondering if you have any plans to work on the async support?
Async support might not be there for a while. There are several things that come into play here:
There is a risk of code duplication.
The async/await version of the relevant methods will be very similar to the non-async version.
Having a non-async version with substantially the same code is needed for compatibility with the .NET 2.0 and 4.0 versions and consistency with the Java version. For example, compare the async/await versions with the non-async version.
Also, apparently, code analysis suggests including ConfigureAwait on any await, which sounds burdensome, unnatural, or both.
As someone who recently added async support to a sync library, I can only agree that it's a pain.
In the end, I used "The Flag Argument Hack" from Async Programming - Brownfield Async Development by Stephen Cleary to reuse the code without the risk of a deadlock, and yes, you need to ConfigureAwait(false) in each awaited call.
Looking at the sample you provided you had an initial version already, did you stop for the reasons above?
I have decided against supporting async functions for now.
Especially when writing small CBOR objects (no more than 1024 bytes as required, for example, by the Client-to-Authenticator Protocol by default), it's generally enough to use EncodeToBytes and write the resulting byte array using existing async methods (e.g., Stream.WriteAsync()). In the case of reading CBOR objects, a higher-level protocol usually provides the byte length, which is usually small, so that existing async methods (e.g., Stream.ReadAsync()) can be used to read those bytes and DecodeToBytes then used. Note that EncodeToBytes and DecodeToBytes are CPU-bound, so they won't benefit from async.
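For illustration only, here is roughly what that pattern looks like in Python with asyncio and the cbor2 package (a different CBOR library, used here only as a stand-in; the 4-byte length prefix is an assumed higher-level framing): the encode/decode steps stay synchronous and only the socket I/O is awaited.

import asyncio
import cbor2  # stand-in CBOR library for this sketch

async def send_obj(writer: asyncio.StreamWriter, obj) -> None:
    data = cbor2.dumps(obj)                     # CPU-bound encode, nothing to await
    writer.write(len(data).to_bytes(4, "big"))  # assumed length-prefix framing
    writer.write(data)
    await writer.drain()                        # only the actual I/O is async

async def recv_obj(reader: asyncio.StreamReader):
    length = int.from_bytes(await reader.readexactly(4), "big")
    return cbor2.loads(await reader.readexactly(length))  # CPU-bound decode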
|
2025-04-01T04:35:06.875551
| 2019-03-04T17:35:41
|
416913791
|
{
"authors": [
"FranDias",
"petrovicstefanrs"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9692",
"repo": "petrovicstefanrs/30_seconds_of_knowledge",
"url": "https://github.com/petrovicstefanrs/30_seconds_of_knowledge/pull/26"
}
|
gharchive/pull-request
|
Clarify that a number as a string should be passed into toOrdinalSuffix
Supplying a number with a leading zero would cause the number to be interpreted as base 8.
toOrdinalSuffix(010) // 8th
toOrdinalSuffix('010') // 10th
Just makes it a little more clear that the input should be a string, not a number.
Lemme know if you'd prefer the solution to do input cleaning or throw a warning instead. I didn't add it because I think it takes away from the core part of the example.
@FranDias I believe you are right about that particular case, when the number has a leading zero, but if I recall it's happening only in really old browsers that use JavaScript before ES5.
However, please use the dev branch as base for all further PRs. 😄
ah, sorry just went straight into the PR w/o checking on CoC. The extension is solid 😄.
|
2025-04-01T04:35:06.893875
| 2024-02-22T19:35:41
|
2149802573
|
{
"authors": [
"DiannaAN",
"andreireporter13"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9693",
"repo": "peviitor-ro/Scrapers_start_with_digi",
"url": "https://github.com/peviitor-ro/Scrapers_start_with_digi/issues/301"
}
|
gharchive/issue
|
CECBank Company - Scraper not found job listing discrepancies
Upon inspecting the company "CECBank" on our scraper, it was noticed that the scraper is labeled only as "cec". Upon further investigation using Postman, it was discovered that the scraper is not found. Additionally, discrepancies were found between the job openings listed on our website "peviitor.ro" and the main website of CECBank.
Steps to Reproduce:
Access the link: _https://scrapers.peviitor.ro/_
Search for the company "CECBank" (listed as "cec" on the scraper).
Right-click and inspect the page to obtain the Request URL.
Copy the Request URL: _https://dev.laurentiumarian.ro/scraper/Scrapers_start_with_digi/_
Check the Request Method, which is POST.
Open Postman and create a new entry with a POST request.
Observe the error message indicating "Scraper not found."
Further examine the discrepancies in job listings between "peviitor.ro" and the main website of CECBank.
Observed Issues:
The scraper for "CECBank" is not found, likely due to it being labeled only as "cec"
Discrepancies exist between the job openings listed on "peviitor.ro" and the main website of CECBank. The positions for "Consilier back-up - Sucursala Arad," "Consilier tranzacții clienți - Agentia Pecica, Sucursala Arad," and "Telebanker - Serviciul Administrare Market Place, Direcția Operațiuni la Distanță" are no longer available on the main website but are still listed on "peviitor.ro."
The location for the position "Consilier Tranzacții Clienți - Miercurea Ciuc" is listed as "Ciuc" on "peviitor.ro," whereas the main website specifies the location as "Miercurea Ciuc - Odorheiu Secuiesc, Petofi Sandor, Harghita."
I will take care of the locations here and of CECBank later.
|
2025-04-01T04:35:06.899182
| 2021-12-29T17:55:16
|
1090669631
|
{
"authors": [
"DianaDascalu2",
"Kristinica"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9694",
"repo": "peviitor-ro/orase",
"url": "https://github.com/peviitor-ro/orase/issues/48"
}
|
gharchive/issue
|
[Arges] Mica component of Bascov village is not found in Arges county - Fail
Precondition
SOLR: http://zimbor.go.ro/solr/#/romania/query
browser: Chrome
Steps to Reproduce:
Step
Action
Expected
Status
1
open SOLR in browser
SOLR UI is loaded without any errors
Fail
2
type Mica in the q field
you are able to type Mica in the q field
Fail
3
type judet: Arges in the fq field
you are able to type judet: Arges in the fq field
Fail
4
click on Execute Query button
"judet:Arges" and "localitate: Mica", "comuna: Bascov " are displayed
Fail
Actual Results:
Mica component of Bascov village is not found in Arges county
https://orase.testquality.com defect D48.
It is the commune of Bascov with the village of Mica.
|
2025-04-01T04:35:06.901716
| 2024-04-01T18:44:55
|
2218845387
|
{
"authors": [
"iBixee"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9695",
"repo": "peviitor-ro/ui.orase",
"url": "https://github.com/peviitor-ro/ui.orase/issues/1913"
}
|
gharchive/issue
|
"Dudești" is listed in the drop-down menu
Description:
"Dudești" is listed in the drop-down menu alongside its corresponding county and commune, according to the law
Precondition:
The website is up and running.
Step 1
Type "Dudești" in the search bar.
Expected results
The location is listed in the drop-down menu as "Dudești, Brăila (Dudești)".
|
2025-04-01T04:35:06.914792
| 2024-06-01T09:38:35
|
2329039967
|
{
"authors": [
"Elena1303996"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9696",
"repo": "peviitor-ro/ui.orase",
"url": "https://github.com/peviitor-ro/ui.orase/issues/7538"
}
|
gharchive/issue
|
"Vulpeni", "Cotorbești", "Gropșani", "Mardale", "Pescărești", "Plopșorelu", "Prisaca", "Simniceni", "Tabaci" and "Valea Satului" are listed in the drop-down menu
Description:
The villages are listed in the drop-down menu alongside its corresponding county and commune, according to the law
Preconditions:
The website (https://peviitor-ro.github.io/ui.orase/) is up and running. After every step, delete the text you typed from the search bar.
Step 1
Write in the search bar "Vulpeni".
Expected results
The location is listed in the drop-down menu as "Sat Vulpeni OLT (Vulpeni)".
Step 2
Write in the search bar "Cotorbești".
Expected results
The location is listed in the drop-down menu as "Sat Cotorbești OLT (Vulpeni)".
Step 3
Write in the search bar "Gropșani".
Expected results
The location is listed in the drop-down menu as "Sat Gropșani OLT (Vulpeni)".
Step 4
Write in the search bar "Mardale".
Expected results
The location is listed in the drop-down menu as "Sat Mardale OLT (Vulpeni)".
Step 5
Write in the search bar "Pescărești".
Expected results
The location is listed in the drop-down menu as "Sat Pescărești OLT (Vulpeni)".
Step 6
Write in the search bar "Plopșorelu".
Expected results
The location is listed in the drop-down menu as "Sat Plopșorelu OLT (Vulpeni)".
Step 7
Write in the search bar "Prisaca".
Expected results
The location is listed in the drop-down menu as "Sat Prisaca OLT (Vulpeni)".
Step 8
Write in the search bar "Simniceni".
Expected results
The location is listed in the drop-down menu as "Sat Simniceni OLT (Vulpeni)".
Step 9
Write in the search bar "Tabaci".
Expected results
The location is listed in the drop-down menu as "Sat Tabaci OLT (Vulpeni)".
Step 10
Write in the search bar "Valea Satului".
Expected results
The location is listed in the drop-down menu as "Sat Valea Satului OLT (Vulpeni)".
|
2025-04-01T04:35:06.944014
| 2021-05-13T10:10:10
|
890912953
|
{
"authors": [
"Thrilleratplay",
"create-atl-delete",
"riccardospeggiorin-centropaghe"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9698",
"repo": "pfelk/docker",
"url": "https://github.com/pfelk/docker/issues/32"
}
|
gharchive/issue
|
No indexes in Kibana
I have installed the pfelk in docker from the zip provided and run the sh script for creating templates and dashboards.
All seems OK: port 5140 of Logstash is receiving packets (checked with tcpdump and saw logs from the firewall IP), but the dashboard shows me an error and I cannot see any index in the Kibana dashboard management
These are the logs of logstash
[INFO ] 2021-05-13 12:31:41.197 [[pfelk]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://es01:9200"]}
[ERROR] 2021-05-13 12:31:41.270 [[pfelk]-pipeline-manager] javapipeline - Pipeline error {:pipeline_id=>"pfelk", :exception=>#<Grok::PatternError: pattern %{HAPROXY} not defined>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:123:in `block in compile'", "org/jruby/RubyKernel.java:1442:in `loop'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:93:in `compile'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.3.0/lib/logstash/filters/grok.rb:288:in `block in register'", "org/jruby/RubyArray.java:1809:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.3.0/lib/logstash/filters/grok.rb:282:in `block in register'", "org/jruby/RubyHash.java:1415:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.3.0/lib/logstash/filters/grok.rb:277:in `register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:75:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:228:in `block in register_plugins'", "org/jruby/RubyArray.java:1809:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:227:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:586:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:240:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:185:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:137:in `block in start'"], "pipeline.sources"=>["/etc/pfelk/conf.d/01-inputs.conf", "/etc/pfelk/conf.d/02-types.conf", "/etc/pfelk/conf.d/03-filter.conf", "/etc/pfelk/conf.d/05-apps.conf", "/etc/pfelk/conf.d/20-interfaces.conf", "/etc/pfelk/conf.d/30-geoip.conf", "/etc/pfelk/conf.d/37-enhanced_user_agent.conf", "/etc/pfelk/conf.d/38-enhanced_url.conf", "/etc/pfelk/conf.d/45-cleanup.conf", "/etc/pfelk/conf.d/50-outputs.conf"], :thread=>"#<Thread:0x78d20b07@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}
[INFO ] 2021-05-13 12:31:41.271 [[pfelk]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>"pfelk"}
[ERROR] 2021-05-13 12:31:41.277 [Converge PipelineAction::Create<pfelk>] agent - Failed to execute action {:id=>:pfelk, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<pfelk>, action_result: false", :backtrace=>nil}
[INFO ] 2021-05-13 12:31:41.325 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2021-05-13 12:31:42.323 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[INFO ] 2021-05-13 12:31:43.319 [LogStash::Runner] runner - Logstash shut down.
Using bundled JDK: /usr/share/logstash/jdk
Are you sure that the paths are all correct? Because in the docker-compose I see:
- ./etc/pfelk/conf.d/patterns/:/etc/pfelk/patterns:ro
- ./etc/pfelk/conf.d/databases/:/etc/pfelk/databases:ro
but these directories are empty. The files are in /etc/pfelk/patterns and /etc/pfelk/databases on the host
So after doing everything from scratch and without using the zip file, all seems to work!
There are some problems with the zip!
@riccardospeggiorin-centropaghe Thank you for this. I spent too much time trying to figure out this issue. Can you reopen this issue as I think the zip should be fixed?
No problem. There are some files that are missing, as mentioned in the first post.
I was able to resolve this to some extent by modifying the docker-compose.yml and correcting the paths for patterns and databases. Indexes are now populating, but none of the dashboards will load. Instead, I get a variety of errors regarding "Terms."
Given that @riccardospeggiorin-centropaghe stated that everything seems to work after copying everything down manually, it seems to me that there must be other issues with .zip beyond the incorrect paths in the docker-compose.yml.
The docker-compose.yml in main has the correct paths. Can use `wget https://raw.githubusercontent.com/pfelk/docker/main/docker-compose.yml` as a workaround until the .zip is fixed.
|
2025-04-01T04:35:06.947008
| 2020-03-23T14:03:36
|
586233912
|
{
"authors": [
"momohatt",
"shinh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9699",
"repo": "pfnet-research/chainer-compiler",
"url": "https://github.com/pfnet-research/chainer-compiler/pull/819"
}
|
gharchive/pull-request
|
Display results for all children of nn.Sequential
When nn.Sequential is used, invoking the __call__ method of an nn.Module object in PyTorch can end up calling more than one forward function.
However, the current implementation of the inference engine does not consider such cases.
This PR fixes this behavior by changing the type of the subroutine_node attribute of InferenceEngine from Dict[gast.Call, gast.FunctionDef] to Dict[gast.Call, List[gast.FunctionDef]].
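For illustration, a minimal sketch of the data-structure change (the attribute and type names are taken from the PR description; the surrounding InferenceEngine code is assumed):
from typing import Dict, List
import gast
# Before: each call site was assumed to resolve to exactly one forward function.
subroutine_node_before: Dict[gast.Call, gast.FunctionDef] = {}
# After: a single call (e.g. through nn.Sequential) may resolve to several
# forward functions, so the value becomes a list of function definitions.
subroutine_node_after: Dict[gast.Call, List[gast.FunctionDef]] = {}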
/test
|
2025-04-01T04:35:06.955038
| 2017-01-25T05:11:06
|
203014357
|
{
"authors": [
"rezoo",
"yuyu2172"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9700",
"repo": "pfnet/chainer",
"url": "https://github.com/pfnet/chainer/pull/2174"
}
|
gharchive/pull-request
|
Optimize getitem for boolean array indices
This PR optimizes __getitem__ when the indices contain a boolean array.
I did 4 optimizations.
stop using flatten and use ravel instead (no DtoD memcpy)
avoid unnecessary cupy_subtract (one less kernel call)
stop calling cupy.max on result of scan. The maximum of scan is always stored at the last index.
use numpy.int32 for input and output of scan whenever possible. It is faster to compute than the case when they are numpy.int64.
I confirmed that all the optimizations contribute to speed up.
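As a rough illustration of points 1, 3 and 4 (this is not the actual CuPy kernel code; the variable names below are made up):
import cupy
import numpy as np
mask = cupy.array(np.random.choice([True, False], size=(4, 4)))
# (1) ravel() returns a view where possible, while flatten() always copies,
#     so preferring ravel() avoids a device-to-device memcpy.
flat = mask.ravel()
# (4) int32 is cheaper to scan than int64.
scan = cupy.cumsum(flat.astype(np.int32))
# (3) the number of True elements is simply the last value of the inclusive
#     scan, so no separate cupy.max() call over the scan result is needed.
n_true = int(scan[-1])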
Benchmark results
You can benchmark speed using the following code.
The average speed of boolean array indexing improved by 6x.
Before optimizations, it took 56.68 ms.
After optimizations, it took 9.38 ms.
All experiments are conducted on CUDA8.0, TitanX Pascal.
import cupy
import numpy as np
n = 2 ** 13
a = cupy.arange(n * n, dtype=np.float32).reshape(n, n)
mask = cupy.array(np.random.choice([True, False], size=(n, n)))
n_dry = 5
n_try = 10
for i in range(n_dry):
    b = a[mask]
times = []
with cupy.cuda.profile():
    for _ in range(n_try):
        start = cupy.cuda.Event()
        end = cupy.cuda.Event()
        start.record()
        b = a[mask]
        end.record()
        end.synchronize()
        time = cupy.cuda.get_elapsed_time(start, end)
        times.append(time)
print('mean={} std={}'.format(np.mean(times), np.std(times)))
Here are the results of kernel calls when n_try=1 for the case with and without the optimizations.
with optimizations
Time(%) Time Calls Avg Min Max Name
41.12% 3.5233ms 3 1.1744ms 3.9360us 3.3970ms inclusive_scan_kernel
27.85% 2.3864ms 1 2.3864ms 2.3864ms 2.3864ms cupy_boolean_array_indexing_nth
19.11% 1.6374ms 2 818.73us 140.52us 1.4969ms add_scan_blocked_sum_kernel
11.90% 1.0198ms 1 1.0198ms 1.0198ms 1.0198ms cupy_copy
0.02% 1.4080us 1 1.4080us 1.4080us 1.4080us [CUDA memcpy DtoH]
without optimizations
Time(%) Time Calls Avg Min Max Name
70.31% 39.517ms 1 39.517ms 39.517ms 39.517ms cupy_max
6.58% 3.6989ms 3 1.2330ms 3.7120us 3.5922ms inclusive_scan_kernel
5.68% 3.1904ms 2 1.5952ms 133.57us 3.0568ms add_scan_blocked_sum_kernel
5.42% 3.0435ms 1 3.0435ms 3.0435ms 3.0435ms cupy_subtract
5.31% 2.9871ms 1 2.9871ms 2.9871ms 2.9871ms cupy_boolean_array_indexing_nth
3.35% 1.8840ms 2 942.01us 378.86us 1.5052ms [CUDA memcpy DtoD]
3.35% 1.8810ms 1 1.8810ms 1.8810ms 1.8810ms cupy_copy
0.00% 896ns 1 896ns 896ns 896ns [CUDA memcpy DtoH]
Thank you for your PR!
As far as I can see, removing the cupy_max kernel from __getitem__ has the biggest effect on the reduction of the computational time.
Good. I think all the points you have modified seem to be valid. LGTM.
|
2025-04-01T04:35:06.958778
| 2015-11-11T16:58:43
|
116377115
|
{
"authors": [
"cbuechler",
"phil-davis"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9701",
"repo": "pfsense/pfsense",
"url": "https://github.com/pfsense/pfsense/pull/2061"
}
|
gharchive/pull-request
|
Limit alias info popup size #5415 RELENG_2_2
This is code for RELENG_2_2 pfSense 2.2.5 that will limit the number of rows in the alias info popup to 100.
Note: I expect that this works, but it is a quick demo of limiting the length of alias entries displayed in the popup - all the various types of long alias need to be tested. The final limit would need to be decided (50 suggested in the issue in Redmine). And of course the necessary code would need to be engineered for however this works in 2.3.
merged, thanks!
|
2025-04-01T04:35:06.960318
| 2018-07-04T20:38:32
|
338371516
|
{
"authors": [
"jwsi",
"rbgarga"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9702",
"repo": "pfsense/pfsense",
"url": "https://github.com/pfsense/pfsense/pull/3959"
}
|
gharchive/pull-request
|
Fix #8617
File in /usr/local/share/pear/Auth is actually RADIUS.php not RADIUS.inc.
[x] Redmine Issue: https://redmine.pfsense.org/issues/8617
[x] Ready for review
I already pushed a fix. Thanks!
|
2025-04-01T04:35:06.961052
| 2020-08-05T20:41:20
|
673833778
|
{
"authors": [
"rasmus-storjohann-PG",
"tomy-pg"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9703",
"repo": "pg-irc/pathways-frontend",
"url": "https://github.com/pg-irc/pathways-frontend/issues/1240"
}
|
gharchive/issue
|
Make screenshots for the stores
Use Niko's tool to make the screenshots
Live on both stores. Closing.
|
2025-04-01T04:35:07.010021
| 2012-01-24T00:30:40
|
2944195
|
{
"authors": [
"daw42",
"nate-yocom"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9704",
"repo": "pgina/pgina",
"url": "https://github.com/pgina/pgina/issues/86"
}
|
gharchive/issue
|
Installer should set permissions on registry/ui/installed directory
such that only local admins can run the ui/see the registry
This has already been added to master. See: #81
ah oops! :) #81 prevents the ui from running if it cannot escalate privileges, however, we could probably still protect the registry hive by default with a better set of permissions (so that 'everyone' cannot read by default)
Looks like Inno Setup will allow granting registry permissions, but not denying them. We may need to call out to a batch script to get this done. Ugh! Inno setup can be frustrating.
Perhaps worth writing a little post-install exe that we can do whatever we want in? Lets us use C# instead of batch too.
Good idea Nate. In fact, we could have this exe call the ServiceHost exe and the registration exe so that the installer needs to run only a single post-install/uninstall app.
Indeed. Removes our dependence on inno script too. I like it!
If I remove the read or read&execute permission (for users group) from the configuration exe, then admin users can't run the config app (with full control allowed for the admin group), even when right-clicking and choosing "Run as administrator..". I'm not quite sure why...(I'm still learning about Windows ACLs). Appears that Win7 still treats the admin user as a regular user when trying to execute the file. I suggest that we allow the read&execute permission for users on the configuration exe, and rely on the registry ACLs and the UAC to protect from non-admin user access.
On second thought, I suggest that we don't mess with the ACLs in the install directory at all except perhaps for removing read permissions on the log directory. What do you think?
I like it. The sensitive data is the Registry values themselves, and possibly the logs on disk (less so). We should make sure the UI fails gracefully if it can't read (or write) the registry though.
|
2025-04-01T04:35:07.041911
| 2022-11-21T07:42:26
|
1457465713
|
{
"authors": [
"itaishmida",
"pgonzaleznetwork"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9705",
"repo": "pgonzaleznetwork/sfdc-happy-soup",
"url": "https://github.com/pgonzaleznetwork/sfdc-happy-soup/issues/214"
}
|
gharchive/issue
|
Fail to connect to my Salesforce org without IP
I am trying to login to https://www.happysoup.io/
I always get this error:
We were unable to log into your salesforce org. Try clearing the cache and cookies, using another browser or another org.
I am trying to connect to many different sandboxes of mine - all give same results
I have seen on my user that the IP address is blocked: <IP_ADDRESS>
I have added this IP and now all is good ;o)
Maybe this needs to be added to the docs.
After logging in I tried to get some metadata (any metadata gave the same results) and I got this error:
`We are sorry, something went wrong. Please click here to log a Github issue so that we can review the error. Please include the following details: INVALID_SESSION_ID: Invalid Session ID found in SessionHeader: Illegal Session
Error: INVALID_SESSION_ID: Invalid Session ID found in SessionHeader: Illegal Session at Object.listMetadata (/app/node_modules/sfdc-happy-api/lib/metadata.js:27:19) at processTicksAndRejections (internal/process/task_queues.js:95:5) at async Object.listMetadataJob (/app/backend/db/queue/jobs.js:55:26) at async Queue. (/app/backend/db/queue/worker.js:25:20)`
So in Session Settings (/lightning/setup/SecuritySession/home) I have removed the setting:
Enforce login IP ranges on every request
Now all works ok
Thank you @itaishmida !
This is indeed because of the configuration of IP restrictions in your org, so the behavior was expected.
Of course, I could have returned a better error message for this, so I'll do that in the future.
Failed SOAP response while calling listMetadata() on metadata API {"fetchOptions":{"method":"POST","headers":{"Content-Type":"text/xml;charset=UTF-8","SOAPAction":"c","Accept-Encoding":"gzip,deflate"},"body":"<soapenv:Envelope xmlns:soapenv=\"[http://schemas.xmlsoap.org/soap/envelope/\](http://schemas.xmlsoap.org/soap/envelope/%5C)" xmlns:met=\"[http://soap.sforce.com/2006/04/metadata\](http://soap.sforce.com/2006/04/metadata%5C)">\n <soapenv:Header>\n <met:SessionHeader>\n <met:sessionId>00D8M0000004ekt!AQMAQN0Ofzlvo8ScZJM5G.CvvM0NcMxlsR9uSy2tie.4tXYa4WIJ2mxXHE.kW2TmFtgMpZhIsIKuyKQdkFyxDW437.zkEKcG</met:sessionId>\n </met:SessionHeader>\n </soapenv:Header>\n <soapenv:Body>\n <met:listMetadata>\n <met:queries>\n <met:type>CustomField</met:type>\n </met:queries>\n <met:asOfVersion>48.0</met:asOfVersion>\n </met:listMetadata>\n </soapenv:Body>\n </soapenv:Envelope>"},"json":{"soapenv:Envelope":{"soapenv:Body":{"soapenv:Fault":{"faultcode":"sf:INSUFFICIENT_ACCESS","faultstring":"INSUFFICIENT_ACCESS: Access from current IP address is not allowed"}}}}
Thanks @pgonzaleznetwork
Maybe just a few words on the docs would be enough
|
2025-04-01T04:35:07.312981
| 2024-09-25T14:20:24
|
2548137142
|
{
"authors": [
"AndersAskeland",
"bms63",
"bundfussr",
"kathrinflunkert",
"manciniedoardo",
"starosto",
"yurovska"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9708",
"repo": "pharmaverse/admiralmetabolic",
"url": "https://github.com/pharmaverse/admiralmetabolic/issues/31"
}
|
gharchive/issue
|
Feature Request: derive_param_computed() wrapper for waist/hip or waist/height ratio
Feature Idea
As suggested by @starosto, waist/hip ratio and waist/height ratio are common calculations done at ADaM level. These could be easily added as ADVS params using admiral::derive_param_computed(). But should we make a wrapper function for it, similar to derive_param_bmi() or derive_param_bsa()?
If we decide to implement, should we have one or two functions?
Relevant Input
See derive_param_bsa examples.
Relevant Output
See derive_param_bsa examples.
Reproducible Example/Pseudo Code
NA
@pharmaverse/admiralmetabolic thoughts?
We keep it simple, just 100*WSTCIR /HIPCIR
Interesting. For me the term 'ratio' implied it is rather a unitless parameter (fraction of 1).
@ Novo team: How is it done on your end?
In this publication (https://www.nature.com/articles/s41591-024-02996-7) it sounds like Novo does it as a fraction of 1:
"Waist-to-height ratio
At baseline, mean WHtR was 0.66 for the study population. The lowest tertile of the SELECT population at baseline had a mean WHtR <0.62, which is higher than the cutoff point of 0.5 used to indicate increased cardiometabolic risk, [...]"
I just looked it up in our (Novo) metadata system, and it does indeed use percentages. But I would argue that this is a mistake in our system, and we should change it. From my understanding, even though ratios can be represented as percentages, that does not make them a percentage.
Imagine for instance a WHR above 1 (which is possible), providing percentages can appear really strange. So, I would vote for us to use the ratio and not provide any functionality for percentages.
Sorry for the confusion. It should be simple ratio WSTCIR /HIPCIR as Kathrin suggested.
No need to apologize. :)
It was good to have this discussion!
I would suggest 3 functions:
derive_param_ratio() - a wrapper for admiral::derive_param_computed() that derives any ratio parameter;
derive_param_waisthip() - a wrapper for derive_param_ratio() to derive Waist-to-Hip Ratio;
derive_param_waisthgt() - a wrapper for derive_param_ratio() to derive Waist-to-Height Ratio.
Actually, I already have a draft implementation and would be more than happy to share. Looking forward to joining @pharmaverse/admiralmetabolic!
I would suggest 3 functions:
derive_param_ratio() - a generic wrapper for admiral::derive_param_computed() that derives any ratio parameter in any ADaM BDS (could be useful also for other potential ratio parameters, e.g. Albumin/Creatinine Ratio that is specifically mentioned in TAUG for Diabetes, 3.1.3 Kidney Function, or things like Glucose/Insulin Ratio, etc.);
derive_param_waisthip() - a wrapper for derive_param_ratio() to derive Waist-to-Hip Ratio;
derive_param_waisthgt() - a wrapper for derive_param_ratio() to derive Waist-to-Height Ratio.
supplemental compute_ratio() (similar to admiral::compute_bmi(), admiral::compute_bsa(), etc.) that properly handles devision by zero (just in case).
Actually, I already have a draft implementation and would be more than happy to share. Looking forward to contributing to @pharmaverse/admiralmetabolic!
This is a nice idea for setup - I am wondering though if the ratio function would be better in {admiral}? @bms63 @bundfussr what do you think?
If it can be used by other TAs or general ADaMS then yes we should move it to admiral
Good point. It's indeed more suitable for {admiral}.
What would be the workflow in this kind of case? We propose it to {admiral} by creating an issue/PR there and wait till it becomes available? Or as an alternative we could make it internal here for the moment (@export flag to be removed) and create another issue for the future to replace it with the one from admiral when/if available.
Yes - create issue and do a PR. Be best if you work on it or someone from your team.
We would merge it to main - and it can easily become available with Remotes or staged.dependencies.
There is an admiral released planned for January 2025 that could include this...but we can do a mini-release of admiral if things don't line up perfectly.
If the ratio function is implemented, it should be in {admiral} because ratios are not metabolic specific.
However, I wouldn't implement this setup. We have something similar in admiral but I would consider it as historic. I see no benefit in implementing these functions. With these function we could for example derive the waist-hip ratio by
derive_param_waisthip(
  advs,
  by_vars = exprs(STUDYID, USUBJID, ADT),
  wstcir_code = "WSTCIR",
  hipcir_code = "HIPCIR",
  set_values_to = exprs(
    PARAMCD = "WAISTHIP"
  )
)
instead of
derive_param_computed(
  advs,
  by_vars = exprs(STUDYID, USUBJID, ADT),
  parameters = c("WSTCIR", "HIPCIR"),
  set_values_to = exprs(
    AVAL = AVAL.WSTCIR / AVAL.HIPCIR,
    PARAMCD = "WAISTHIP"
  )
)
The first call doesn't look simpler, shorter, or clearer to me. I think the second call is even clearer than the first one because it is obvious how the ratio derived. Although the new functions are mainly wrapper functions they need to be implemented, documented, and tested, which is some work. And the users and reviewers would need to learn more functions.
I would suggest not implementing new derive_param_*() functions for parameters which are defined by a formula. For complex formulas like "Framingham Heart Study Cardiovascular Disease 10-Year Risk Score" a compute function should be implemented. I don't think we need a compute function for ratios.
Thanks @bundfussr for your feedback.
I don't think the point of creating wrapper functions is only to simplify the call. It's more about specializing them for a certain purpose. It's easy for you, with your vast experience with {admiral}, to understand which function is best for specific purposes, but not for a user who installed the package five minutes ago. If they need parameters for a ratio, the first thing they'll do is enter specific keywords in the search bar, and quickly find what they need.
In fact, the derive_param_waisthip function does simplify the call a bit. Firstly, it provides default values right away, so you can omit them, which the generic function doesn’t do, and also, as you may have noticed, it lacks the constant_parameters and constant_by_vars arguments, as they don’t make sense specifically for Waist and Hip measurements.
Besides, specialized wrapper functions may have additional checks for the input arguments or data in order to make them more fool-proof, or provide the user with some useful info regarding the correctness of their input data.
It reminded me of my yesterday’s struggle with the admiral::derive_var_joined_exist_flag function, which is so freaking flexible that it took me a couple of hours to make it do something very simple, which would have taken 2-3 lines in a SAS DATA step. I have a strong feeling to vote even for wrappers of the wrappers.
Thanks for the feedback @yurovska on admiral
It reminded me of my yesterday’s struggle with the admiral::derive_var_joined_exist_flag function, which is so freaking flexible that it took me a couple of hours to make it do something very simple, which would have taken 2-3 lines in a SAS DATA step. I have a strong feeling to vote even for wrappers of the wrappers if it puts the right values to the right arguments for me.
Is it possible for you to reconstruct this simple example so we can include it in our documentation.
Thanks @yurovska - I made it an issue in admiral.
I think we should discuss this issue during our meeting today. To summarize, I think we need to consider the following:
Will derive_param_ratio() be added to admiral?
If derive_param_ratio() is not added, should we have dedicated functions for calculating waist hip ratio and waist height ratio?
If derive_param_ratio() is added, does it make sense to create dedicated functions for waist/hip and waist/height or should we just call derive_vars_ratio()?
Summary of discussion and decision at standup:
The ratio function will not be implemented at the {admiral} level due to the comments by @bms63 and @bundfussr
Within {admiralmetabolic}, we will go ahead and implement one or two wrappers for derive_param_computed() to derive waist to hip ratio and waist to height ratio, notwithstanding the fact that these functions will be relatively simple. This is because:
a) We don't have functions in the package yet and it would do the team good to see how they are implemented in terms of documentation, reference pages, unit tests, etc.
b) The function(s) can also do some unit conversions on the fly to calculate the unitless ratios as the team has indicated that at times the two tests are measured in different units (inches vs cm or similar).
c) We can always remove/supersede later on.
To that end, at your earliest convenience @yurovska please update #33 taking note of the above. We leave to your discretion (and the eventual reviewers'):
Whether you want to implement two separate functions for waist/hip and waist/height, or one
Whether you still want to use an intermediate ratio function (especially if you implement two functions), as long as it's marked as not exported
derive_param_ratio is now an internal function (not exported).
Both derive_param_waisthip and derive_param_waisthgt are kept.
Units conversion on the fly has been implemented. See https://github.com/pharmaverse/admiralmetabolic/pull/33/commits/16857612ea8175001e607d021ce462bca5ec5daa
|
2025-04-01T04:35:07.366290
| 2019-04-22T19:03:42
|
435847523
|
{
"authors": [
"KatieWoe",
"ariel-phet",
"jbphet",
"mattpen"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9709",
"repo": "phetsims/QA",
"url": "https://github.com/phetsims/QA/issues/310"
}
|
gharchive/issue
|
RC Test: Isotopes and Atomic Mass 1.1.0-rc.2
@KatieWoe, @ariel-phet, and @arouinfar - Isotopes and Atomic Mass 1.1.0-rc.2 is ready for RC testing. This is the second RC on this branch, and addresses the issues found during the first RC. Please document issues in https://github.com/phetsims/isotopes-and-atomic-mass/issues and link to this issue.
Assigning @ariel-phet for prioritization.
General RC Test
What to Test
Click every single button.
Test all possible forms of input.
Test all mouse/trackpad inputs.
Test all touchscreen inputs.
Make sure you can't lose anything.
Play with the sim normally.
Try to break the sim.
Test all query parameters on all platforms. (See QA Book for a list of query parameters.)
Download HTML on Chrome and iOS.
Make sure the iFrame version of the simulation is working as intended on all platforms.
Make sure the XHTML version of the simulation is working as intended on all platforms.
Complete the test matrix.
Don't forget to make sure the sim works with Legends of Learning.
Check this LoL spreadsheet and notify AR if it is not there.
Focus and Special Instructions
The focus should be on the issues listed below, and particularly on the "Nature's Mix" behavior on the 2nd screen, since some significant modifications were made to decrease memory usage.
Issues to Verify
[x] Trace string has no max width
[x] Slight pointer area overlap on Isotopes screen
[x] Fluorine 18 and 19 should have different amu
[x] Memory Test jumps around
[ ] User suggestion: tritium (trace) abundance
Link(s)
Simulation
iFrame
XHTML
Test Matrix
Legends of Learning Harness
FAQs for QA Members
There are multiple tests in this issue... Which test should I do first?
Test in order! Test the first thing first, the second thing second, and so on.
How should I format my issue?
We typically assign the developer who opened the issue in the QA repository.
My question isn't in here... What should I do?
You should:
Consult the QA Book.
Google it.
Ask Katie.
Ask a developer.
Google it again.
Cry.
Memory test results:
Start: 26.5
1: 48.7
2: 60.4
3: 60.5
4: 61.3
5: 64.3
6: 40.8
7: 60.5
8: 62.6
9: 64.4
10: 63.9
@KatieWoe currently high priority, as there is not too much else in the queue currently
This branch was patched as part of the batch maintenance release in phetsims/chipper#746. This should just have added a .gz file to the build artifacts.
QA is done
Thanks, I will address the logged issues individually (actually, I already have), so closing this one.
|
2025-04-01T04:35:07.493404
| 2024-11-11T21:36:18
|
2650459859
|
{
"authors": [
"KatieWoe",
"pixelzoom"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9710",
"repo": "phetsims/qa",
"url": "https://github.com/phetsims/qa/issues/1169"
}
|
gharchive/issue
|
RC Spot-Check Test: Vector Addition: Equations 1.1.0-rc.2
RC Spot-Check Test
Mentions: @arouinfar @KatieWoe @kathy-phet
Simulation links
github repository for issues <= Yes, this is vector-addition.
phet top-level directory
sim: all_phet.html
Test Matrices
General Test:
[x] Tester = NS , Platform = mac +safari, Time = .25
[x] Tester = KW, Platform = Win 11 Chrome, Time = .25
Features included
PhET-iO
Dynamic Locale
Alternative Input
UI Sound
Sonification
Description
Voicing
Focus and Special Instructions
This sim is the Equations screen from the Vector Addition sim, and was built from the identical code. So please do https://github.com/phetsims/qa/issues/1168 first, and then you can do an RC-lite test on this sim.
Issues to Verify
It is sufficient to test these issues on 1 platform.
These issues have the "status:ready-for-review" label. Unless an issue says to close after verifying, assign the
issue back to the developer.
[x] https://github.com/phetsims/vector-addition/issues/287
[x] https://github.com/phetsims/vector-addition/issues/289
[x] https://github.com/phetsims/vector-addition/issues/284
For QA...
General features
What to Test
Click every single button.
Test all possible forms of input.
Test all mouse/trackpad inputs.
Test all touchscreen inputs.
If there is sound, make sure it works.
Make sure you can't lose anything.
Play with the sim normally.
Try to break the sim.
Test some query parameters. (See QA Book for a
list of query parameters.)
When making an issue, check to see if it was in a previously published version.
Try to include version numbers for browsers.
If there is a console available, check for errors and include them in the Problem Description.
As an RC begins and ends, check the sim repo. If there is a maintenance issue, check it and notify developers if there
is a problem.
As the RC ends, notify the developer of any new QA credits that need to be added.
PhET-iO features
What to Test
Make sure that public files do not have password protection. Use a private browser for this.
Make sure that private files do have password protection. Use a private browser for this.
Make sure standalone sim is working properly.
Make sure the wrapper index is working properly.
Make sure each wrapper is working properly.
Launch the simulation in Studio with ?stringTest=xss and make sure the sim doesn't navigate to youtube
For newer PhET-iO wrapper indices, save the "basic example of a functional wrapper" as a .html file and open it. Make
sure the simulation loads without crashing or throwing errors.
Load the login wrapper just to make sure it works. Do so by adding this link from the sim deployed root:/wrappers/login/?wrapper=record&validationRule=validateDigits&&numberOfDigits=5&promptText=ENTER_A_5_DIGIT_NUMBER
Further instructions in QA Book
Conduct a recording test to Metacog, further instructions in the QA Book. Do this for iPadOS + Safari and one other
random platform.
Conduct a memory test on the stand alone sim wrapper (rc.1).
Test one platform combination with ?phetioDebug=true on the Studio and State wrapper.
If Pan/Zoom is supported, make sure that it works when set with PhET-iO State.
Test that the sim works offline:
Click the link to the phet-io zip file (at top of issue) to download the zip file.
Unzip it to a spot locally.
Open index.html by double clicking it on your desktop or in a Finder-view.
It should look like the standalone version of the sim in PhET-iO brand.
Accessibility features
What to Test
Specific instructions can be found above.
Make sure the accessibility (a11y) feature that is being tested doesn't negatively affect the sim in any way. Here is
a list of features that may be supported in this test:
Alternative Input
Interactive Description
Sound and Sonification
Pan and Zoom
Mobile Description
Voicing
Test all possible forms of input.
Test all mouse/trackpad inputs.
Test all touchscreen inputs.
Test all keyboard navigation inputs (if applicable).
Test all forms of input with a screen reader (if applicable).
If this sim is not in this list or up to date there, make an
issue in website to ask if PhET research page links need updating. Please
assign to @terracoda and @emily-phet.
Screen Readers
This sim may support screen readers. If you are unfamiliar with screen readers, please ask Katie to introduce you to
screen readers. If you simply need a refresher on screen readers, please consult the
QA Book, which should have all of the information
you need as well as a link to a screen reader tutorial made by Jesse. Otherwise, look over the a11y view before opening
the simulation. Once you've done that, open the simulation and make sure alerts and descriptions work as intended.
Platforms and Screen Readers to Be Tested
Windows 10 + Latest Chrome + Latest JAWS
Windows 10 + Latest Firefox + Latest NVDA
macOS + Safari + VoiceOver
iOS + Safari + VoiceOver (only if specified in testing issue)
Critical Screen Reader Information
We are tracking known screen reader bugs in
here. If you find a screen reader bug,
please check it against this list.
Keyboard Navigation
This sim supports keyboard navigation. Please make sure it works as intended on all platforms by itself and with a
screen reader.
Magnification
This sim supports magnification with pinch and drag gestures on touch screens, keyboard shortcuts, and mouse/wheel
controls. Please test magnfication and make sure it is working as intended and well with the use cases of the
simulation. Due to the way screen readers handle user input, magnification is NOT expected to work while using a screen
reader so there is no need to test this case.
FAQs for QA Members
There are multiple tests in this issue... Which test should I do first?
Test in order! Test the first thing first, the second thing second, and so on.
How should I format my issue?
We typically assign the developer who opened the issue in the QA repository.
My question isn't in here... What should I do?
You should:
Consult the QA Book.
Google it.
Ask Katie.
Ask a developer.
Google it again.
Cry.
QA is done
Thanks QA! Onward to publishing 1.1.
|
2025-04-01T04:35:07.543913
| 2015-05-24T01:14:07
|
80006797
|
{
"authors": [
"4xrsJCr9",
"phildawes",
"tomjakubowski"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9711",
"repo": "phildawes/racer",
"url": "https://github.com/phildawes/racer/issues/249"
}
|
gharchive/issue
|
having trouble completing from cargo crates
Hi, first of all thanks so much for creating racer!
I'm trying to use racer from Emacs with a Cargo project. My understanding is that Cargo dependencies are now supported. However, I've been having trouble with it. As an example, I edited src/main.rs in the racer project itself, added use toml:: to the top level scope of the file and asked Company for a completion. I got this:
Company: Back-end racer-company-complete error "/home/tom/.local/bin/racer exited with status 101" with args (candidates )
If I run racer complete use toml:: from inside that src directory of my Racer repo I see nothing printed and racer exits with a 0 error code.
Completions for std all work just as one would expect, so I think that RUST_SRC_PATH is set up and racer is otherwise happy.
Hi @tomjakubowski, What does your Cargo.toml look like?
Cargo.toml
[package]
name = "racer"
version = "0.0.1"
license = "MIT"
description = "Code completion for Rust"
authors = ["Phil Dawes<EMAIL_ADDRESS>
[[bin]]
name = "racer"
path = "src/main.rs"
[profile.release]
debug = true
[dependencies]
log = "*"
syntex_syntax = "*"
toml = "*"
[features]
nightly = []
Cargo.lock
[root]
name = "racer"
version = "0.0.1"
dependencies = [
"log 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
"syntex_syntax 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)",
"toml 0.1.20 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "bitflags"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "kernel32-sys"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.1.18 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "libc"
version = "0.1.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "log"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rustc-serialize"
version = "0.3.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "syntex_syntax"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"bitflags 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rustc-serialize 0.3.14 (registry+https://github.com/rust-lang/crates.io-index)",
"term 0.2.7 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-xid 0.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "term"
version = "0.2.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"kernel32-sys 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.1.18 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "toml"
version = "0.1.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"rustc-serialize 0.3.14 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "unicode-xid"
version = "0.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi"
version = "0.1.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
Are you using multirust by any chance? If so, you'll need to symlink ~/.cargo to ~/.multirust/toolchains/stable/cargo or the like.
@phildawes: Perhaps the location of the cargo registry could be passed as an environment variable till https://github.com/rust-lang/cargo/issues/1098 is sorted out?
@4xrsJCr9 yes that might be the easiest way. Or maybe racer should interrogate the .multirust directory to back out the directory. (i.e. look in .multirust/overrides and .multirust/default).
(n.b. I'm not sure this is something a cargo lib can support easily without help from multirust)
@tomjakubowski Is @4xrsJCr9 correct about multirust? (Completing toml:: in the racer source works for me.) Thanks!
Yep, I've got multirust installed and that seems like a probable explanation. I'll give it a try with a workaround soon.
I added some multirust support last night. It currently doesn't do overrides, but it should look in the current multirust 'default' cargo directory now. You have to delete your ~/.cargo dir for the new functionality to kick in. Hope this helps!
|
2025-04-01T04:35:07.548297
| 2015-08-21T23:53:00
|
102482472
|
{
"authors": [
"Wilfred",
"phildawes"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9712",
"repo": "phildawes/racer",
"url": "https://github.com/phildawes/racer/issues/346"
}
|
gharchive/issue
|
racer doesn't discover variables inside llvm_sys files
Completion works in this case:
extern crate llvm_sys;
use llvm_sys::
However, it doesn't work here:
extern crate llvm_sys;
use llvm_sys::core::
Hi @Wilfred. I think this is because all the functions in llvm_sys::core:: are extern "C", and racer doesn't support extern yet.
@varding opened #347 with the same problem. I will try and get this functionality added soon
Aha, that looks like the issue. Should I close this issue in favour of #347?
Yeah, that would be good thanks. If after I've fixed that this still doesn't work then reopen. Thanks!
|
2025-04-01T04:35:07.564445
| 2018-05-31T10:10:03
|
328077541
|
{
"authors": [
"codylane",
"michaelhajjar",
"philpep"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9713",
"repo": "philpep/testinfra",
"url": "https://github.com/philpep/testinfra/issues/331"
}
|
gharchive/issue
|
how to import class and give arguments?
Hello,
I have a question. I have created a class with some tests.
I would like to add the class to a CLI tool we are creating (in the cement framework) and add arguments to it.
How can I add arguments in the class since the init costructor is not working?
is is for the backend configuration (paramiko with a username arg, server arg and private_key arg).
here is the test:
import testinfra
import pytest
class TestTemp(object):
    # establish connection to server
    connect = testinfra.get_host("paramiko://usernamearg@serverarg", ssh_identity_file=private_keyarg)
    # test if packages are installed, before template creation
    def test_pkg_installed(self):
        # define packages that need to be checked
        packages = ["cloud-init", "open-vm-tools", "perl"]
        for pkg in packages:
            assert self.connect.package(pkg).is_installed
    # test if service is enabled
    def test_service_enabled(self):
        # define services that need to be checked
        packages = ["vmtoolsd", "cloud-init", "sshd"]
        for pkg in packages:
            service = self.connect.service(pkg)
            assert service.is_enabled
    # test if service is running
    def test_service_running(self):
        # define services that need to be running
        packages = ["vmtoolsd"]
        for pkg in packages:
            service = self.connect.service(pkg)
            assert service.is_running
    # test if repo exists
    def test_repo_exists(self):
        # define repos that need to be checked
        repos = ["repo_os", "repo_updates", "repo_saltstack",
                 "repo_epel", "repo_duo", "repo_cloud_init",
                 "repo_extras", "repo_ansible"]
        for repo in repos:
            repo_path = "/etc/yum.repos.d/" + repo + ".repo"
            h_file = self.connect.file
            assert h_file(repo_path).exists
    def test_yum_update(self):
        assert self.connect.check_output("yum check-update")
def run_tests():
    pytest.main(['-v', '--disable-warnings', '-r', 'P'])
Hello, I think your TestTemp should inherit from unittest.TestCase.
I'm not sure how your CLI thingy is supposed to work and how it's related to this problem, but I will attempt to answer your Class based test question.
testinfra use pytest and in it's self is a pytest plugin. I like pytest, but it took me a while to understand how fixtures work. Fixtures, are the things that testinfra provides in order to make our lives easier as testers. There is nothing really wrong with what you are asking or doing but think there might be some confusion how testinfra works with your tests.
Take this for example
import testinfra
import pytest
class TestTemp(object):
    # establish connection to server
    connect = testinfra.get_host("paramiko://usernamearg@serverarg", ssh_identity_file=private_keyarg)
    # test if packages are installed, before template creation
    def test_pkg_installed(self):
        # define packages that need to be checked
        packages = ["cloud-init", "open-vm-tools", "perl"]
        for pkg in packages:
            assert self.connect.package(pkg).is_installed
There is nothing wrong with that, but that is not really the correct way to use testinfra or pytest. In fact, I think I understand why you are doing it that way, but I'm going to try and steer you down the path that pytest testers use and encourage you to use fixtures as decroators or inside your test methods. Try to avoid wrapping test arguments into global variable(s) or you will have surprises and potential race conditions especially if you plan to use xdist and parallelize your tests. Avoid that hassle!
Instead I would like to encourage you to try this for your Class based tests DISCLAIMER I have not tested this, this will left for you but it should highlight the documented way of using pytest fixtures and testinfra
import os
import testinfra
import pytest
SSH_PRIV_KEY = os.environ.get('PARAMIKO_SSH_PRIVATE_KEY', '~/.ssh/id_rsa')
PARAMIKO_CONNECT = (
    # NOTE: the keyword-style "ssh_identity_file=..." is not valid inside a
    # tuple; passing it as a query-string option on the connection spec is
    # one (assumed, untested) way to keep it as a single host string.
    "paramiko://usernamearg@serverarg?ssh_identity_file=" + SSH_PRIV_KEY,
)
class TestTemp(object):
    # "host" is a pytest test fixture
    @pytest.mark.testinfra_hosts(*PARAMIKO_CONNECT)
    def test_pkg_installed(self, host):
        # define packages that need to be checked
        packages = ["cloud-init", "open-vm-tools", "perl"]
        for pkg in packages:
            assert host.package(pkg).is_installed
    @pytest.mark.testinfra_hosts(*PARAMIKO_CONNECT)
    def test_foobar(self, host):
        assert False, 'need to implement this'
The above is not only more explicit of what, you are testing for, but also where. No magic! I strongly encourage no magic when you are testing, duplicate tests if you have too, don't go overboard with variables and configuration of tests, it will bite you down the road and you never want to be in the boat "Hey, my tests are complicated and I don't know how they work anymore."
Small, concise, terse testing is the key to better testing. Also, the above example has the added benefits of what makes pytest so freaking awesome. When used in the decorator form above you also take advance of automatic setup and teardown routines to fire depending on how you scope your tests. See the pytest docs for examples on scoping.
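For reference, testinfra also documents a module-level testinfra_hosts variable that parametrizes the host fixture without any decorators; a minimal sketch, with a placeholder connection string:
# Every test in this module that takes the "host" fixture runs once per
# entry in testinfra_hosts (picked up by the testinfra pytest plugin).
testinfra_hosts = ["paramiko://usernamearg@serverarg"]
def test_sshd_running(host):
    # "host" is injected by testinfra for each entry in testinfra_hosts
    assert host.service("sshd").is_running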
Anyway, I hope this helped.
@philpep, @codylane thank you very much, I have found a solution with your help.
|
2025-04-01T04:35:07.574147
| 2022-10-07T09:09:28
|
1400862793
|
{
"authors": [
"phloxic",
"venomone"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9714",
"repo": "phloxic/videojs-sprite-thumbnails",
"url": "https://github.com/phloxic/videojs-sprite-thumbnails/issues/30"
}
|
gharchive/issue
|
Thumbnail does not appear under chrome
Hello,
I would like to know why I don't see any sprites in chrome, where Firefox seems to work fine!?
Is this already a known issue?
Thanks in advance
@venomone - works fine in Chrome for me[tm]. Could you be more specific? Link to an example where this happens?
Sadly I dont have any example online :( But is this only CSS controlled? The only really diffrence I see is that with chrome I get the following at my browsers console:
VIDEOJS: video-player: spriteThumbnails: WARN: connection.downlink < 2
This message does not appear with Firefox. Any Idea?
See the downlink config option.
This is expected behaviour. At the moment only Chrome based browsers support the network connection interface, so Firefox loads and shows them.
You can can of course also disable this check.
|
2025-04-01T04:35:07.614572
| 2023-09-13T03:16:32
|
1893646367
|
{
"authors": [
"AdamMPieroni",
"phonedude"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9715",
"repo": "phonedude/cs533-f23",
"url": "https://github.com/phonedude/cs533-f23/pull/14"
}
|
gharchive/pull-request
|
Assignment 1
Adam Pieroni Assignment 1
Your README.md needs to be within your "1" directory. For example, see: https://github.com/phonedude/cs533-f23/tree/main/assignments/Nelson/1
|