| added (string, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 – 2025-01-01 03:51:31) | id (string, length 4–10) | metadata (dict) | source (string, 2 classes) | text (string, length 0–1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:38:21.819295
| 2020-06-09T02:58:26
|
635071975
|
{
"authors": [
"bingbongle",
"emplums"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5249",
"repo": "defund12/defund12.org",
"url": "https://github.com/defund12/defund12.org/pull/1034"
}
|
gharchive/pull-request
|
Add workflow for running tests
This PR takes the work in #1019 and moves running the tests into our Actions workflow 😄 I'm still new-ish to creating Actions so lmk if anything needs to be changed!
This should be merged into #1019 and if approved I believe we can go ahead and delete the dockerfile from test/markdown/
@avimoondra could you give this a look?
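For readers new to Actions, a minimal workflow along the lines described above might look like this. The file path, job name, and test commands are assumptions for illustration, not taken from the actual PR:

```yaml
# .github/workflows/test.yml (hypothetical path and contents)
name: tests
on: [pull_request]

jobs:
  markdown-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt   # assumed dependency file
      - run: pytest test/markdown/             # assumed test command
```

The trade-off discussed below is exactly this: a workflow like the above duplicates setup that a Dockerfile already encodes, while Docker keeps CI and local development on the same path.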
I personally feel a bit more inclined to use Docker here, as @avimoondra has already set up. Docker is easier to maintain long-term and is a bit more extensible, and ideally the CI pipeline will use the same workflow as local development. Setting up pip correctly can be a huge barrier to entry for new engineers, and I feel like Docker really simplifies the process. WDYT?
I'm fine with going the other route! Closing :)
|
2025-04-01T06:38:21.830072
| 2015-07-23T19:19:06
|
96886871
|
{
"authors": [
"bacongobbler",
"mboersma",
"technosophos"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5250",
"repo": "deis/deis",
"url": "https://github.com/deis/deis/pull/4095"
}
|
gharchive/pull-request
|
fix(deisctl): exit when stop/start fails
This fixes issue #3880, where a failed start or stop command will hang
deisctl. The fix simply prints out an error message and exits from the
resolution loop.
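deisctl itself is written in Go, but the control-flow change described above can be sketched in a few lines of Python (function and unit names are hypothetical, not from the deisctl source):

```python
def resolve_units(units, apply):
    """Hypothetical stand-in for deisctl's resolution loop.

    Before the fix, a failed start/stop was effectively swallowed and the
    loop kept waiting forever (the hang in issue #3880). The fix: print an
    error message and exit the loop with a non-zero status instead.
    """
    for unit in units:
        try:
            apply(unit)
        except RuntimeError as err:
            print(f"Error: failed to start/stop {unit}: {err}")
            return 1  # exit the resolution loop instead of hanging
    return 0
```

The essential point is the early return: any per-unit failure terminates the whole resolution pass.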
LGTM, though some unit tests would be nice
Now with three times the line count! And tests! And I fixed the typo @mboersma noted.
:heart: tests :heart:
code LGTM
Code LGTM.
|
2025-04-01T06:38:21.851244
| 2023-06-08T07:07:31
|
1747222511
|
{
"authors": [
"IndependentCreator",
"delay"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5251",
"repo": "delay/sveltekit-auth-starter",
"url": "https://github.com/delay/sveltekit-auth-starter/issues/15"
}
|
gharchive/issue
|
Bug? Signed in without email verification
Testing locally, I accidentally discovered a way to login without ever clicking on (or even receiving) the confirmation email:
Configure your .env with incorrect SMTP login credentials
Start the dev server via npm run dev
Fill in the sign up fields in the browser and click "Sign Up"
Notice the server crashes. Console logs incorrectly report the email was sent successfully before subsequently failing with an SMTP authentication error, e.g.
E-mail sent successfully!
log: {"level":"info","method":"GET","path":"/auth/verify/resend-email-a0455918-de53-4979-9aa4-3d72a06e491b","status":200,"timeInMs":1552,"user":"mrbogus@example.com","userId":"AhXN3CVU2QsGzOP","referer":"/auth/verify/email"}
log: {"level":"info","method":"GET","path":"/","status":200,"timeInMs":4056,"user":"mrbogus@example.com","userId":"AhXN3CVU2QsGzOP","referer":"/auth/verify/email"}
E-mail sent successfully!
log: {"level":"info","method":"GET","path":"/auth/verify/resend-email-a0455918-de53-4979-9aa4-3d72a06e491b","status":200,"timeInMs":654,"user":"mrbogus@example.com","userId":"AhXN3CVU2QsGzOP","referer":"/auth/verify/resend-email-a0455918-de53-4979-9aa4-3d72a06e491b"}
/src/lib/server/email-send.ts:78
throw new Error(`Error sending email: ${JSON.stringify(err)}`);
^
Error: Error sending email: {"code":"EAUTH","response":"535 Incorrect authentication data","responseCode":535,"command":"AUTH PLAIN"}
at eval (/src/lib/server/email-send.ts:78:19)
Restart the server with npm run dev
Return to the browser and click on the "If you did not receive the email, [click here] to resend it" link
Notice the server crashes again with an SMTP Auth error
Restart the server with npm run dev
Return to the browser and visit http://localhost:5173/dashboard
Notice under Protected Area, it says "If you are seeing this page, you are logged in."
Visit http://localhost:5173/profile and notice that you can see your profile and make changes to it.
Thanks for the reply. Do you have any concerns that someone could use this behavior to create an exploit that would allow them to register without using a valid email address?
If you configure your smtp settings with the correct info, can you replicate this trouble? It would concern me if this still happened with a correct configuration.
Also, I don't really understand how you can end up verified even if the email fails to send. When a user signs up, verified is set to false. The only way to verify the user is to visit this page https://github.com/delay/sveltekit-auth-starter/blob/main/src/routes/auth/verify/email-[token]/%2Bpage.server.ts so that verified can be set to true. Only then would you see the protected page; without visiting that page with the correct token, I am not sure how that could happen unless verified were changed to true in the sign-up function.
Oops, you are right, thanks for sending this bug report… The problem is that the resend-verification-email flow incorrectly sets verified to true. Thanks very much for reporting this! I thought it was just a configuration issue, but it is actually a bad bug!
It should be fixed now. Thanks once again for reporting this trouble! And sorry for not checking this out better after your first report. Thanks so much for the follow-up question, because it enabled me to think more about the problem and determine it shouldn't be happening whether the server was misconfigured or not.
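The corrected invariant can be sketched in a few lines of Python (all names here are hypothetical; the actual project is SvelteKit/TypeScript): only the token-verification route may flip `verified`, and resending the email must never touch it.

```python
import secrets

def sign_up(db, email):
    # New accounts always start unverified.
    db[email] = {"verified": False, "token": secrets.token_hex(8)}

def resend_verification(db, email):
    # The bug: this handler also set verified = True. It must only
    # re-issue the token/email and leave verification state alone.
    return db[email]["token"]

def verify(db, email, token):
    # The only code path allowed to set verified = True, and only
    # when the presented token matches.
    user = db[email]
    if token == user["token"]:
        user["verified"] = True
    return user["verified"]
```

With this shape, a crashed or misconfigured SMTP server can at worst prevent the email from going out; it can never leave an account verified.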
No worries, I could have been more explicit in the initial report. Thanks for the quick fix and for putting together this example 🙂
|
2025-04-01T06:38:21.865828
| 2021-07-28T15:38:14
|
954990585
|
{
"authors": [
"lwilson",
"sujit-jadhav"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5252",
"repo": "dellhpc/omnia",
"url": "https://github.com/dellhpc/omnia/issues/437"
}
|
gharchive/issue
|
Ansible Lint test failing on control plane
Describe the bug
The Ansible lint workflow is failing on control_plane
To Reproduce
See https://github.com/dellhpc/omnia/runs/3183237513?check_suite_focus=true
Expected behavior
Ansible lint workflow should pass
List of errors
[201] Trailing whitespace
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:53
snmp_enabled: false
[601] Don't compare to literal True/False
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:107
- ethernet_switch_support == true or ethernet_switch_support == false
[601] Don't compare to literal True/False
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:114
- ib_switch_support == true or ib_switch_support == false
[601] Don't compare to literal True/False
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:121
- powervault_support == true or powervault_support == false
[201] Trailing whitespace
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:340
[201] Trailing whitespace
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:342
stat:
[201] Trailing whitespace
control_plane/roles/control_plane_common/tasks/fetch_base_inputs.yml:346
[201] Trailing whitespace
control_plane/roles/control_plane_common/tasks/package_installation.yml:21
[201] Trailing whitespace
control_plane/roles/control_plane_common/tasks/password_config.yml:39
cobbler_password | length < 1 or
[201] Trailing whitespace
control_plane/roles/control_plane_device/tasks/check_prerequisites.yml:35
when: backup_map.stat.exists == true
[601] Don't compare to literal True/False
control_plane/roles/control_plane_device/tasks/check_prerequisites.yml:35
when: backup_map.stat.exists == true
[601] Don't compare to literal True/False
control_plane/roles/control_plane_device/tasks/check_prerequisites.yml:72
- mngmnt_network_container_status == true
[601] Don't compare to literal True/False
control_plane/roles/control_plane_device/tasks/configure_mngmnt_network_container.yml:26
when: mngmnt_network_container_status == true and mngmnt_network_container_config_status == false
[601] Don't compare to literal True/False
control_plane/roles/control_plane_device/tasks/configure_mngmnt_network_container.yml:44
when: mngmnt_network_container_config_status == false
[601] Don't compare to literal True/False
control_plane/roles/control_plane_device/tasks/main.yml:42
when: (not mngmnt_network_container_image_status) or ( backup_map_status == true)
[201] Trailing whitespace
control_plane/roles/control_plane_ib/tasks/check_prerequisites.yml:35
when: infiniband_backup_map.stat.exists == true
[601] Don't compare to literal True/False
control_plane/roles/control_plane_ib/tasks/check_prerequisites.yml:35
when: infiniband_backup_map.stat.exists == true
[601] Don't compare to literal True/False
control_plane/roles/control_plane_ib/tasks/check_prerequisites.yml:72
- infiniband_container_status == true
[601] Don't compare to literal True/False
control_plane/roles/control_plane_ib/tasks/configure_infiniband_container.yml:26
when: infiniband_container_status == true and infiniband_container_config_status == false
[601] Don't compare to literal True/False
control_plane/roles/control_plane_ib/tasks/main.yml:38
when: (not infiniband_container_image_status) or ( infiniband_backup_map_status == true)
[306] Shells that use pipes should set the pipefail option
control_plane/roles/control_plane_repo/tasks/install_dsu.yml:30
Task/Handler: Execute bootstrap.cgi
[601] Don't compare to literal True/False
control_plane/roles/control_plane_repo/tasks/validate_idrac_vars.yml:23
- firmware_update_required == true or firmware_update_required == false
[206] Variables should have spaces before and after: {{ var_name }}
control_plane/roles/control_plane_sm/tasks/create_pod.yml:46
replace: " image: 'localhost/{{sm_docker_image_name}}:{{ sm_docker_image_tag }}'"
[208] File permissions unset or incorrect
control_plane/roles/control_plane_sm/tasks/pre_requisites.yml:44
Task/Handler: Copy opensm configuration file
[201] Trailing whitespace
control_plane/roles/provision_cobbler/tasks/check_prerequisites.yml:35
when: backup_map.stat.exists == true
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/check_prerequisites.yml:35
when: backup_map.stat.exists == true
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/main.yml:63
when: (not cobbler_image_status) or ( backup_map_status == true)
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/main.yml:67
when: (not cobbler_image_status) and (host_mapping_file == true) or ( backup_map_status == true)
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:33
Task/Handler: Remove blank spaces
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:51
Task/Handler: Count the hostname
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:57
Task/Handler: Count the ip
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:63
Task/Handler: Count the macs
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:69
Task/Handler: Check for duplicate hostname
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:75
Task/Handler: Check for duplicate ip
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:81
Task/Handler: Check for duplicate mac
[602] Don't compare to empty string
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:115
when: hostname_result.stdout != ""
[206] Variables should have spaces before and after: {{ var_name }}
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:121
shell: diff {{ role_path }}/files/new_host_mapping_file.csv {{role_path}}/files/backup_host_mapping_file.csv| tr -d \>|tr -d \<| grep -E -- ', & :| '
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:123
when: backup_map_status == true
[602] Don't compare to empty string
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:128
when: diff_output.stdout!= ""
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:147
when: (not cobbler_image_status) or (new_node_status == true)
[208] File permissions unset or incorrect
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:150
Task/Handler: Create a backup file
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:166
when: ( cobbler_container_status == true ) and ( new_node_status == true )
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:171
when: ( cobbler_container_status == true ) and ( new_node_status == true )
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/mapping_file.yml:176
when: ( cobbler_container_status == true ) and ( new_node_status == true )
[208] File permissions unset or incorrect
control_plane/roles/provision_cobbler/tasks/mount_iso.yml:20
Task/Handler: Create iso directory
[601] Don't compare to literal True/False
control_plane/roles/provision_cobbler/tasks/mount_iso.yml:43
when: mount_check == true
[306] Shells that use pipes should set the pipefail option
control_plane/roles/provision_cobbler/tasks/provision_password.yml:29
Task/Handler: Encrypt cobbler password
[201] Trailing whitespace
control_plane/roles/webui_awx/tasks/awx_configuration.yml:29
[206] Variables should have spaces before and after: {{ var_name }}
control_plane/roles/webui_awx/tasks/awx_configuration.yml:132
loop: "{{ scheduled_templates}}"
[306] Shells that use pipes should set the pipefail option
control_plane/roles/webui_awx/tasks/configure_settings.yml:23
Task/Handler: Get AWX admin password
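Most of the [601] and [602] findings above have a mechanical fix: use bare boolean tests instead of comparing to literals. An illustrative before/after (not the actual Omnia source):

```yaml
# rule [601]: don't compare to literal True/False
# before
when: backup_map.stat.exists == true
# after
when: backup_map.stat.exists

# before
when: mngmnt_network_container_config_status == false
# after
when: not mngmnt_network_container_config_status

# rule [602]: don't compare to empty string
# before
when: hostname_result.stdout != ""
# after
when: hostname_result.stdout | length > 0
```

The [201] trailing-whitespace and [206] variable-spacing findings are similarly mechanical; [306] requires adding `set -o pipefail` (via `args: executable: /bin/bash`) to shell tasks that use pipes.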
@lwilson We are using ansible lint 5.1.2 and we are not observing these linting issues.
The GitHub action pins ansible-lint==4.2.0; see https://github.com/ansible/ansible-lint-action.
Can you upgrade the lint version?
@sujit-jadhav there is currently a PR pending to resolve this: https://github.com/ansible/ansible-lint-action/pull/48.
@lwilson If the team goes back to 4.2.0, many issues will surface and the team will end up spending more time. Until the pending PR to upgrade the lint version is resolved, I think we should disable the lint check. I have asked that a lint check be performed before a PR is created on GitHub.
The new lint appears to be working except for 2 errors in tools/olm.yml. I will work to correct those errors.
|
2025-04-01T06:38:21.874667
| 2021-06-20T13:52:39
|
925604400
|
{
"authors": [
"adityamcodes",
"bramrodenburg",
"houqp",
"wjones127"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5253",
"repo": "delta-io/delta-rs",
"url": "https://github.com/delta-io/delta-rs/pull/294"
}
|
gharchive/pull-request
|
Added .to_pandas to deltalake python
Description
Many users who work in Python use pandas in their daily work. Adding a .to_pandas method makes it super easy for users to read a Delta table into a pandas dataframe.
Related Issue(s)
Documentation
Added documentation to the .to_pandas method. Will update the docs if the proposed change looks fine.
Ignoring the type check there works for me :) I will merge this after https://github.com/delta-io/delta-rs/pull/296 to address the new clippy error.
So if we're sourcing from a bronze Delta table, converting to pandas, applying pandas transforms, and then writing back to Delta Lake as a silver table (for example), does that Delta table still store and retain the pandas transforms in its history?
@adityamcodes To ask a question, could you please open an issue or discussion rather than commenting on an old pull request?
|
2025-04-01T06:38:21.875875
| 2024-06-10T17:52:17
|
2344506300
|
{
"authors": [
"linzhou-db",
"pranavsuku-db"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5254",
"repo": "delta-io/delta-sharing",
"url": "https://github.com/delta-io/delta-sharing/pull/497"
}
|
gharchive/pull-request
|
Fix test and protocol description about delta sharing streaming rpc internal
Fix test and protocol description about delta sharing streaming rpc internal
thanks for the fix, lgtm!
|
2025-04-01T06:38:21.881066
| 2019-12-11T04:32:19
|
536136739
|
{
"authors": [
"Tagar",
"ananthtony",
"brkyvz",
"gerardwolf",
"tdas",
"zsxwing"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5255",
"repo": "delta-io/delta",
"url": "https://github.com/delta-io/delta/issues/268"
}
|
gharchive/issue
|
Allow multiple UPDATE actions in Delta Lake MERGE INTO statement
A common practice in slowly changing dimension (SCD) load patterns is to soft-delete records rather than hard delete them. This is often done by setting a flag marking the record as deleted. This in itself is easy to achieve; however, sometimes deleted records reappear in the source system and therefore need to be re-inserted (effectively a special kind of update). This requires two WHEN MATCHED clauses with different conditions and attributes to be UPDATEd. A workaround for some scenarios is using a CASE statement, but this makes the logic unintuitive and much harder to read and maintain. It would be extremely useful if we could use UPDATE more than once in a WHEN MATCHED clause.
Issue discussed with @tdas on Delta Lake Slack channel where he suggested I raise this to be tracked.
https://delta-users.slack.com/archives/CGK79PLV6/p1575927346351000
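The two-clause pattern being requested looks roughly like this (table and column names are illustrative, not from the issue):

```sql
MERGE INTO dim AS t
USING updates AS s
ON t.key = s.key
WHEN MATCHED AND s.deleted THEN
  UPDATE SET t.is_deleted = true                        -- soft delete
WHEN MATCHED AND t.is_deleted AND NOT s.deleted THEN
  UPDATE SET t.is_deleted = false, t.value = s.value    -- "re-insert"
WHEN NOT MATCHED THEN
  INSERT (key, value, is_deleted) VALUES (s.key, s.value, false)
```

At the time the issue was filed, Delta Lake allowed only one UPDATE action per MERGE, which forced the CASE-expression workaround described above.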
This is the API in Apache Spark, so we may be able to support this with Spark 3.0.
@brkyvz would this restriction be lifted with this improvement - https://github.com/apache/spark/pull/28875 ? thx
We are investigating this right now. :)
This has been fixed by https://github.com/delta-io/delta/commit/13c9c6ee9ee6e6921d59e940243f5eabbee3841e
Hi,
Could you please explain how the reappearance of soft-deleted records is handled in a single MERGE statement? I'd appreciate your response.
|
2025-04-01T06:38:21.924548
| 2022-08-14T21:47:06
|
1338357144
|
{
"authors": [
"deluchen",
"itz-winter",
"whutermeloon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5256",
"repo": "deluchen/fll",
"url": "https://github.com/deluchen/fll/pull/3"
}
|
gharchive/pull-request
|
Create apiIMPROVED.py
Better
Full of bugs, not stable. Duplicated functions, please don't rename anything.
Please don't make a copy of the original code file; just update the same file in your branch and create the pull request. That way we can see which lines of code changed and understand your change better.
|
2025-04-01T06:38:21.926217
| 2022-07-07T20:41:26
|
1298071604
|
{
"authors": [
"bdemann",
"lastmjs"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5257",
"repo": "demergent-labs/azle",
"url": "https://github.com/demergent-labs/azle/issues/481"
}
|
gharchive/issue
|
Figure out why ic_cdk::api::stable::stable_write is not panicking when we try and write out of bounds
[x] Create a bare bones rust example
[ ] Figure out where our api is breaking down
If the problem persists, open a forum post
If it gets updated we need to update the stable_write tests
This has been resolved
|
2025-04-01T06:38:21.933140
| 2022-10-20T11:35:14
|
1416458589
|
{
"authors": [
"ShacharKidor"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5258",
"repo": "demisto/content-docs",
"url": "https://github.com/demisto/content-docs/pull/1210"
}
|
gharchive/pull-request
|
Improved the resubmission to a contribution section in the docs
Status
Ready/In Progress/In Hold (Reason for hold)
Related Issues
fixes: CIAC-3793.
related: CIAC-4347
Description
When contributing to an existing pack, the user can open a contribution PR from the UI and set the contribution's name (title) to whatever they like (the selection of the existing pack they contribute to happens in the redirected contribution form).
When resubmitting changes from the UI to those kinds of PRs (contributions to existing packs from the UI), the user must set the contribution's name (title) to the exact display name of the existing pack they contributed to; otherwise, instead of updating the already existing PR, a new PR will be opened.
In this PR I'm trying to improve the instructions in the resubmission section so that users will pay attention to this known limitation.
Screenshots
Paste here any images that will help the reviewer
@ShahafBenYakir - thanks for reviewing. I'll try to explain better:
First I would like to mention that this change in content docs is temporary (I will revert it once we fix this open dev-bug).
The original resubmission section in the docs included the following sentence:
.
I think that this sentence alone wasn't clear enough: it is not clear to users that in order to resubmit a change to an open contribution PR of an existing pack, the title/name of the contribution must match the pack's display name.
In this PR I tried to improve the explanation here so that contributors will pay attention to this existing limitation in the resubmission flow.
@ShahafBenYakir, @dansterenson, @darkushin - If you can think of better wordings for that please add your suggestions.
I added this doc change due to an issue that was solved in this PR.
PR is not relevant anymore - Closing it.
|
2025-04-01T06:38:22.016327
| 2022-04-27T06:48:18
|
1216878878
|
{
"authors": [
"dc-builder",
"jochman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5259",
"repo": "demisto/dockerfiles",
"url": "https://github.com/demisto/dockerfiles/pull/7553"
}
|
gharchive/pull-request
|
trigger sane pdf reports build
Status
Ready/In Progress/In Hold (Reason for hold)
Related Content Pull Request
Related PR: link to the PR at demisto/content
Related Issues
Related: link to the issue
Description
A few sentences describing the overall goals of the pull request's commits.
Docker Image Ready - Dev
Docker automatic build at CircleCI has deployed your docker image: devdemisto/sane-pdf-reports:<IP_ADDRESS>997
It is available now on docker hub at: https://hub.docker.com/r/devdemisto/sane-pdf-reports/tags
Get started by pulling the image:
docker pull devdemisto/sane-pdf-reports:<IP_ADDRESS>997
Docker Metadata
Image Size: 667.82 MB
Image ID: sha256:30a7cac4ecc4ed652606becd6e4f9acacb096221c90e248677e2bc516ec2ae50
Created: 2022-04-27T06:52:36.390981922Z
Arch: linux/amd64
Command: ["python3"]
Environment:
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LANG=C.UTF-8
GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568
PYTHON_VERSION=3.9.6
PYTHON_PIP_VERSION=21.1.3
PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/a1675ab6c2bd898ed82b1f58c486097f763c74a9/public/get-pip.py
PYTHON_GET_PIP_SHA256=6665659241292b2147b58922b9ffe11dda66b39d52d8a6f3aa310bc1d60ea6f7
DOCKER_IMAGE=devdemisto/sane-pdf-reports:<IP_ADDRESS>997
Labels:
org.opencontainers.image.authors: Demisto <EMAIL_ADDRESS>
org.opencontainers.image.revision: 4eeecc53cc5d7a9dc2e719a265923090a1686977
org.opencontainers.image.version:<IP_ADDRESS>997
|
2025-04-01T06:38:22.029013
| 2021-04-28T15:37:38
|
870085871
|
{
"authors": [
"jasonsjones",
"kevinslin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5260",
"repo": "dendronhq/dendron-site",
"url": "https://github.com/dendronhq/dendron-site/pull/87"
}
|
gharchive/pull-request
|
doc: fix typos and format yml example
Hey @kevinslin! I found this project and approach to note taking after listening to you on FLOSS Weekly. Awesome work!
After going through some of the wiki docs (which are very helpful, btw 🙂 ), I found a few small spots that needed some attention.
Thanks.
Awesome, thanks for the corrections :)
|
2025-04-01T06:38:22.030668
| 2015-08-20T20:17:06
|
102227567
|
{
"authors": [
"cavie78",
"denehyg"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5261",
"repo": "denehyg/reveal.js-menu",
"url": "https://github.com/denehyg/reveal.js-menu/issues/3"
}
|
gharchive/issue
|
IE issues
There seems to be an issue with IE. Menu.js sometimes loads correctly but frequently does not. When this occurs all named links appear as one big link. Clicking makes reveal try to load all slides on top of one another.
Can you let me know what version of Windows and IE you are using? I've tested on IE9 and IE11 under Windows 7 and can't see the issue.
Thanks for the quick response. I've tried IE11 on 7 (actually Server 2008 R2) and XP
I think this was to do with the page load rather than any inherent problem with menu.js. Slimming down my site and loading the .js earlier seems to have solved the issue.
|
2025-04-01T06:38:22.033504
| 2016-10-14T09:54:17
|
183009188
|
{
"authors": [
"Garasjuk",
"ilya-shknaj"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5262",
"repo": "deniotokiari/training-epam-2016",
"url": "https://github.com/deniotokiari/training-epam-2016/issues/119"
}
|
gharchive/issue
|
HW 14 10 2016 Backend
https://github.com/Garasjuk/EPAMtraning2016/tree/master/BackendApplication
Please provide a link where I can see all the changes on one page.
https://github.com/Garasjuk/EPAMtraning2016/commit/80ee042a891fb6f66ca8b96ea88a2417c8ff5d5b
|
2025-04-01T06:38:22.058530
| 2023-04-03T14:58:13
|
1652240527
|
{
"authors": [
"HamzaChx",
"denisneuf"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5263",
"repo": "denisneuf/python-amazon-ad-api",
"url": "https://github.com/denisneuf/python-amazon-ad-api/pull/132"
}
|
gharchive/pull-request
|
Updating Product Ads to version 3
New endpoints for product ads, check : ad_api/api/sp/product_ads_v3.py
adding the new endpoints to product_ads_v3.py
adding the @Utils.deprecated to v2 of product ads
changes to the docs
Great job @HamzaChx
Great!
I will push some changes in the rst documents:
/Users/hanuman/Documents/PycharmProjects/python-amazon-ad-api-dev/source/sp/product_ads_v3.rst:2: WARNING: Title underline too short.
Product Ads
WARNING: autodoc: failed to import function 'ProductAdsV3.create_campaigns' from module 'ad_api.api.sp'; the following exception was raised:
No module named 'ad_api.api.sp.ProductAdsV3'
WARNING: autodoc: failed to import function 'ProductAdsV3.edit_campaigns' from module 'ad_api.api.sp'; the following exception was raised:
No module named 'ad_api.api.sp.ProductAdsV3'
WARNING: autodoc: failed to import function 'ProductAdsV3.list_campaigns' from module 'ad_api.api.sp'; the following exception was raised:
No module named 'ad_api.api.sp.ProductAdsV3'
WARNING: autodoc: failed to import function 'ProductAdsV3.delete_campaigns' from module 'ad_api.api.sp'; the following exception was raised:
No module named 'ad_api.api.sp.ProductAdsV3'
The methods were wrong, as they belong to the Campaigns endpoint; they need to be updated to the specific methods of the current endpoint.
I will also push the updated __init__.py so that the new endpoints can be imported:
from .campaigns import Campaigns
from .campaigns_v3 import CampaignsV3
from .ad_groups import AdGroups
from .ad_groups_v3 import AdGroupsV3
from .product_ads import ProductAds
from .product_ads_v3 import ProductAdsV3
from .bid_recommendations import BidRecommendations
from .keywords import Keywords
from .negative_keywords import NegativeKeywords
from .campaign_negative_keywords import CampaignNegativeKeywords
from .suggested_keywords import SuggestedKeywords
from .product_targeting import Targets
from .negative_product_targeting import NegativeTargets
from .reports import Reports
from .snapshots import Snapshots
from .budget_rules import BudgetRules
from .campaings_optimization import CampaignOptimization
from .ranked_keywords_recommendations import RankedKeywordsRecommendations
from .budget_recommendations import BudgetRecommendations
from .budget_rules_recommendations import BudgetRulesRecommendations
from .product_recommendations import ProductRecommendations
__all__ = [
"Campaigns",
"CampaignsV3",
"AdGroups",
"AdGroupsV3",
"ProductAds",
"ProductAdsV3",
"BidRecommendations",
"Keywords",
"NegativeKeywords",
"CampaignNegativeKeywords",
"SuggestedKeywords",
"Targets",
"NegativeTargets",
"Reports",
"Snapshots",
"BudgetRules",
"CampaignOptimization",
"RankedKeywordsRecommendations",
"BudgetRecommendations",
"BudgetRulesRecommendations",
"ProductRecommendations"
]
You can pull it later in a while.
On 3 Apr 2023, at 23:17, Hamza @.***> wrote:
Great!
|
2025-04-01T06:38:22.067180
| 2018-09-24T09:58:42
|
363079966
|
{
"authors": [
"dennisdoomen",
"gormac"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5264",
"repo": "dennisdoomen/Beacon",
"url": "https://github.com/dennisdoomen/Beacon/pull/29"
}
|
gharchive/pull-request
|
Improves Beacon logging to console output
Improve logging by using ResetColor to reset console colors to the original colors when the process was started.
What problem is this fixing?
Apparently, ResetColor is the slightly better way of resetting the text colors. It saves us from having to do that ourselves. Nothing more, nothing less.
|
2025-04-01T06:38:22.069634
| 2015-11-24T13:22:57
|
118607588
|
{
"authors": [
"TerraVenil",
"dennisdoomen"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5265",
"repo": "dennisdoomen/fluentassertions",
"url": "https://github.com/dennisdoomen/fluentassertions/issues/312"
}
|
gharchive/issue
|
Can ShouldBeEquivalentTo match by lambda?
public class Failure
{
public string Code { get; set; }
public IEnumerable<string> ErrorMessages { get; set; }
}
var expectedCodes = new[] { "123", "456", "789" };
var expectedErrorMessages = new[] { "Required" };
var failures = new List<Failure> {
new Failure { Code = "123", ErrorMessages = new List<string> { "Required" } },
new Failure { Code = "123", ErrorMessages = new List<string> { "Required" } }};
Can I do with ShouldBeEquivalentTo similar to pseudo code?
failures.ShouldBeEquivalentTo(x => expectedCodes.Any(x.Code) && x.ErrorMessages.ContainInOrder(expectedErrorMessages));
Because right now I can do that with custom extension method.
No, you can't do that. Instead, you could create another List<Failure> that contains the data as you expect it, and pass that to ShouldBeEquivalentTo using the WithStrictOrdering option.
@TerraVenil does this answer your question?
Yes, please close this issue.
|
2025-04-01T06:38:22.072081
| 2020-12-24T15:48:36
|
774494557
|
{
"authors": [
"andrepadeti",
"dennismorello"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5266",
"repo": "dennismorello/react-awesome-reveal",
"url": "https://github.com/dennismorello/react-awesome-reveal/issues/65"
}
|
gharchive/issue
|
react-reveal props I miss: spy and appear
Is this repo a fork of react-reveal? I'd love to use this more modern version but my use case makes use of two props I haven't been able to find here: spy and appear. Is there a workaround?
Hi, this is not a fork of react-reveal, this is a completely new and different implementation
That's okay. I thought it might be a fork because of this comment.
If I could just put in my two pennies worth, it would be nice if that could be implemented :-)
Keep up the good work!
Thank you @andrepadeti, feel free to submit a PR!
Merry Christmas 🎄
|
2025-04-01T06:38:22.119328
| 2023-12-22T17:07:47
|
2054230428
|
{
"authors": [
"arnauorriols",
"nicrosengren"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5267",
"repo": "denoland/deployctl",
"url": "https://github.com/denoland/deployctl/issues/227"
}
|
gharchive/issue
|
Gh action deploy without --prod
I've tried to find a way to allow gh actions to only do preview deploys.
It seems to perform deploys in a similar way to when the --prod flag is sent to the cli.
If the default must be --prod, it would be really nice to have a config flag that only does preview deploys and requires manual promotion.
Sorry if this exists and I've missed it; in that case I would gladly submit a PR adding it to the action README.
When you link a Deploy project to a Github repository in order to use the GH action, you set which branch will act as the "production" branch. Any commit in that branch will result in a production deployment. I presume you have only 1 branch (main), in which case you could create a new branch and set it as the "production" branch. You can forget about it if you want to promote deployments manually, or you could merge to it those commits that should be deployed to production.
Thanks for your reply!
We've used a separate production branch before but what we're running now is a tag based versioning system and branching from tag on issues.
You're mentioning that I can forget about it if I want to promote manually which is exactly what I'd like to do. However, it seems deno deploy instantly promotes the latest deploy to production. This is the behaviour I'd like to turn off.
Enabling me to deploy, but to manually promote a deploy.
From your last sentence I got the feeling this is possible, but how?
What I suggest is to create a production branch, but never commit to it. This way, the branch where you commit to will produce preview deployments instead of production deployments.
Ahh, brilliant! I did not think of that.
Thanks so much for your help!
|
2025-04-01T06:38:22.123138
| 2022-11-06T21:21:01
|
1437544026
|
{
"authors": [
"baetheus",
"crowlKats"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5268",
"repo": "denoland/dotland",
"url": "https://github.com/denoland/dotland/issues/2566"
}
|
gharchive/issue
|
Show full comment for files with @module block comments.
I maintain a Deno "library" with many entrypoints here. One of the things I'd like to do is document the overall usage and paradigm of each file in its heading using the jsdoc @module tag. This sort of works with deno.land, but it seems that the module information is truncated to only the first paragraph. An example is:
Source rendering on deno.land
Documentation rendering on deno.land
Documentation rendering on doc.deno.land
Is it possible to have deno.land/x modules render documentation more in line with doc.deno.land?
I don't see anything being truncated on either; the only difference is that deno.land/x/ currently doesn't render examples, which is something we do want to add
@crowlKats That tracks. I did a quick search and couldn't find a ticket for rendering examples. If you want I can adjust the title of this ticket to cover adding examples rendering, otherwise it seems this should be closed.
Example rendering has been implemented.
|
2025-04-01T06:38:22.134039
| 2023-07-18T17:38:22
|
1810411648
|
{
"authors": [
"afifurrohman-id",
"marvinhagemeister",
"mct-dev",
"sant123"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5269",
"repo": "denoland/fresh",
"url": "https://github.com/denoland/fresh/issues/1477"
}
|
gharchive/issue
|
JSX element implicitly has type 'any' because no interface 'JSX.IntrinsicElements' exists. When using Preact v10.16.0
This is my deno.json
{
"compilerOptions": {
"jsx": "react-jsx",
"jsxImportSource": "preact"
},
"tasks": {
"start": "deno run -A --watch=static/,routes/ dev.ts",
"vpn": "DENO_DEPLOYMENT_ID=\"$(git rev-parse HEAD)\" deno run -A main.ts"
},
"imports": {
"$fresh/"<EMAIL_ADDRESS> "preact"<EMAIL_ADDRESS> "preact/"<EMAIL_ADDRESS> "preact-render-to-string"<EMAIL_ADDRESS> "@preact/signals": "https://esm.sh/*<EMAIL_ADDRESS> "@preact/signals-core": "https://esm.sh/<EMAIL_ADDRESS> }
}
Thanks!
Can you check that the deno vscode plugin is initialized? There is a command for that in the vscode command palette.
Yep it is, in fact this is a project I'm currently running. Changing it to Preact v10.15.1 works good. That's my workaround for now.
Screencast from 2023-07-18 14-03-35.webm
Solutions
Ctrl/Cmd + Shift + P then choose Reload Window (vscode)
run fresh
deno task start
and reload the browser window (deno automatically caches missing dependencies)
I use all latest preact / twind and it's work fine
Really weird @afifurrohman-id. I followed the steps you provided and added --reload to my task to renew the cache, but it is still showing that warning.
Alright I think I figured out the issue, but the workaround is a little odd to me:
To replicate the issue:
Close any VSCode instance
Delete Deno's cache. On Linux rm -r $HOME/.cache/deno
Open a Fresh project with VSCode and select any .tsx file
You should see something like this:
Run deno task start
Now restart the Deno language server
Now you should be able to see the error above:
To get rid of this annoying issue, stop the fresh server and run deno check main.ts
My question is, why using deno check solves the issue? I tried running with both dev.ts and main.ts and did not work.
@sant123 this worked for me. Idk why deno check works either, but it did. I was just using the default fresh template from their "Getting Started", as well. Might be worth fixing...
FWIW For people commenting here: It's not an issue with Fresh but with Deno's LSP. We are aware of the issue but haven't found the root cause yet nor a reliable way to reproduce it. Sometimes I can reproduce it and when I try again it doesn't work anymore. My guess as to why deno check works is that it may refresh the internal type cache or something.
I'll transfer this upstream to the deno cli repository, since this is not an issue with Fresh.
|
2025-04-01T06:38:22.138084
| 2023-02-12T08:07:54
|
1581165483
|
{
"authors": [
"deer",
"marvinhagemeister",
"sylc"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5270",
"repo": "denoland/fresh",
"url": "https://github.com/denoland/fresh/pull/1025"
}
|
gharchive/pull-request
|
feat: Params for middleware
Fix https://github.com/denoland/fresh/issues/903
This PR introduces middlewareParams in the MiddlewareHandlerContext. Each middleware has access to the params that are upstream of its definition.
For example:
With a route '/api/[id1]/[id2]/foo'
and a middleware located at '/api/[id1]/_middleware',
middlewareParams will only have a property of 'id1'; 'id2' will be undefined.
Why not all the params of the route?
For a middleware to access the params that are downstream from its level would require, I think, duplicating in fresh the logic that currently lives in the Rutt router. This would introduce a speed decrease. One solution could be to integrate the Rutt router into fresh, although I did not want to tackle that.
While this implementation has a limitation, I think it covers most use cases and does not add any significant overhead
Note the linter is failing not because of this PR but because of a recent change in deno affecting the whole project. There are 16 instances of Deno.run being deprecated, which should probably be fixed separately.
@marvinhagemeister, don't forget that this is no longer necessary due to my 1314.
Closing in favour of #1314
|
2025-04-01T06:38:22.142287
| 2024-01-16T02:07:32
|
2082899227
|
{
"authors": [
"iuioiua",
"paudrow"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5271",
"repo": "denoland/saaskit",
"url": "https://github.com/denoland/saaskit/issues/655"
}
|
gharchive/issue
|
Feature request: Have a more minimal saaskit option
Is your feature request related to a problem? Please describe.
It takes a bit of work to get the saaskit into a form that I can start building from. It'd be nice to have a more minimal version.
Describe the solution you'd like
I'd like a repository that has only the following:
User and auth setup
Stripe integration
And not the following:
Blogs
Graphs
Items in database
Here is my attempt: https://github.com/paudrow/saaskit-minimal
Describe alternatives you've considered
I can delete everything myself, which I've done (link above), but it is something that I have to maintain and possibly fight merge conflicts. It'd be nicer to have an officially maintained version.
Having two versions of SaaSKit would be difficult to maintain. It'd be better to have a single version that improves its modularity, making it easier to modify. That's what I've tried to do with the addition of plugins.
I'd be happy to hear ideas on how to improve modularity, but I will close this as not planned. Either way, thank you for your suggestion 🙂
|
2025-04-01T06:38:22.143752
| 2022-08-16T21:58:18
|
1340924405
|
{
"authors": [
"DjDeveloperr",
"elycheikhsmail"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5272",
"repo": "denosaurs/deno_python",
"url": "https://github.com/denosaurs/deno_python/issues/24"
}
|
gharchive/issue
|
python_dono ?!
Is it possible to import a Deno module in Python code?
Possible, but that's completely different than what this module does so it is out of scope for this project.
Thank you @DjDeveloperr
|
2025-04-01T06:38:22.165902
| 2016-11-28T11:56:45
|
191987663
|
{
"authors": [
"Grawl",
"denysdovhan"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5273",
"repo": "denysdovhan/spaceship-zsh-theme",
"url": "https://github.com/denysdovhan/spaceship-zsh-theme/issues/42"
}
|
gharchive/issue
|
Logo ideas
Woo-hoo! Spaceship is going to get 200 stars! I think it's time to make a logo for the theme. I will be glad if anybody can help me with that, so I'm looking for volunteers.
I think it has to be something simple, clean and expressive. Something that can reflect the essence and idea of the theme itself.
Examples
Bullet Train:
I am not using zsh or this zsh-spaceship-theme, but in the screenshots I see you have no backgrounds. What is the difference from other shell themes? I see simplicity and completeness, like in other good themes. What is the idea of spaceship-zsh-theme?
spaceship-zsh-theme implements three core ideas:
It has a lot of useful indicators: exit, host, user, sudo, git, nvm, rvm, rbenv, chruby, virtualenv, vi-mode (swiftenv and xenv are coming). Some of them aren't supported in most other themes.
It shows only indicators which are required at the moment. Without any overkills. You see what you need.
It's almost completely customizable. With #28 and #39 we'll be able to change almost everything in this theme as you want to. (Custom colors and ability to add sectors like in agnoster-zsh-theme)
BTW, a big update is coming. I've found a way to create cross-shell and testable themes, so Spaceship will probably be the first shell theme that has tests and works on any shell (sh, bash, zsh, fish, etc.) with a single and universal code base.
Okay got it.
And another important question: why did you call it “spaceship”? Did you mean “it's like a Fantasy UI on the sensor panels of devices used in space”?
In my imagination, a real spaceship is an extremely complex system with dozens of indicators (this refers to the first core idea) which show data about the whole system.
Systems which provide life support in real spaceships are always maximally simple. You always get everything that you need right now, without overkill (this refers to the second idea).
A spaceship's systems give you the ability to do whatever you need, like scientific research, experiments, etc. (this refers to the third point, the customizability of the theme).
Something like this. Maybe I'm wrong about real spaceships, but that's the reasons why I named this theme Spaceship.
Okay, that's enough great thinking for me for now. Now, give me some time.
As you wish, up to you!
200★ are here!
@Grawl hey, any updates?
As a starting point, I made an abstraction of the screenshot from the repository.
This helped me a lot to understand what I'm doing.
Then, I tried to make some flying objects:
And then I noticed that the arrow and the rocket are very similar, and tried to combine them:
Notice that colors are important here, because the terminal is all about typography, so I put them everywhere.
So, what direction should I follow?
@Grawl hey, I'll answer you in private.
For inspiration Coats of arms of NASA missions:
Yet another awesome example:
My thoughts:
will try to add a rays and use colors from that awesome illustration
I like this much more than my previous tries
|
2025-04-01T06:38:22.529445
| 2019-05-19T19:21:18
|
445854341
|
{
"authors": [
"nandahkrishna",
"orsinium"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5274",
"repo": "dephell/dephell",
"url": "https://github.com/dephell/dephell/pull/104"
}
|
gharchive/pull-request
|
Adding owner name for license
Fixes issue #87 (adding owner name in license).
I've made the changes mentioned, hope they're right. Do tell me if there's anything to change!
I'll check it tomorrow. Looks perfect :)
Thanks!
Perfect! Thank you for your help :) We have very similar #106 and #107. I'm sure, you can implement them in a few minutes. So, if you're interested in more contributions -- I would be happy to review your solutions :)
Sure, I'll take a look. Thanks!
|
2025-04-01T06:38:22.532798
| 2016-10-18T15:49:20
|
183726981
|
{
"authors": [
"elfet",
"tebaly"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5275",
"repo": "deployphp/deployer",
"url": "https://github.com/deployphp/deployer/issues/816"
}
|
gharchive/issue
|
Please add checksum deployer.phar archive
| Q | A |
| --- | --- |
| Issue Type | Feature Request |
| Deployer Version | N/A |
| Local Machine OS | N/A |
| Remote Machine OS | N/A |
Description
Pages:
http://deployer.org/docs/getting-started
http://deployer.org/docs/installation
Please add MD5 and SHA-256 checksums for the deployer.phar archive, to use with Ansible like
- name: download file with check (md5)
get_url:
url: http://deployer.org/deployer.phar
dest: /usr/local/bin/dep
checksum: md5:66dffb5228a211e61d6d7ef4a86f5758
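For reference, checksums like the ones requested can be published with a short script. This is a generic sketch using Python's standard hashlib module (the chunked reading keeps memory flat for large archives); it is illustrative tooling, not part of Deployer itself:

```python
import hashlib

def file_checksums(path: str) -> dict:
    """Compute the MD5 and SHA-256 digests of a file, reading in chunks."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return {"md5": md5.hexdigest(), "sha256": sha256.hexdigest()}
```

The resulting md5 value is exactly what Ansible's get_url expects in its checksum: md5:... parameter.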
Ok, i have plans to do it.
Done.
|
2025-04-01T06:38:22.538960
| 2023-06-09T18:10:44
|
1750306812
|
{
"authors": [
"codecov-commenter",
"sumi-0011"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5276",
"repo": "depromeet/na-lab-client",
"url": "https://github.com/depromeet/na-lab-client/pull/219"
}
|
gharchive/pull-request
|
Create my question form - question deletion feature
🤔 What problem does this solve?
close #196
🎉 Changes
Removed the drag-to-delete code (removed the trash-can code)
When the delete button in the header is clicked, the trash-can icon is shown and the question can be deleted
🙏 Please make sure to review this part!
Codecov Report
Patch and project coverage have no change.
Comparison is base (81604db) 91.90% compared to head (da53617) 91.90%.
:exclamation: Current head da53617 differs from pull request most recent head 34c34b9. Consider uploading reports for the commit 34c34b9 to get more accurate results
Additional details and impacted files
@@ Coverage Diff @@
## main #219 +/- ##
=======================================
Coverage 91.90% 91.90%
=======================================
Files 38 38
Lines 284 284
Branches 52 52
=======================================
Hits 261 261
Misses 23 23
:umbrella: View full report in Codecov by Sentry.
|
2025-04-01T06:38:22.542153
| 2020-01-14T14:25:52
|
549602364
|
{
"authors": [
"mjpitz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5277",
"repo": "deps-cloud/deps.cloud",
"url": "https://github.com/deps-cloud/deps.cloud/issues/2"
}
|
gharchive/issue
|
Introduce monitoring
No project integrates monitoring right now. This means that we need to leverage application logs to get a sense of what is going on in a system. By having something like statsd or prometheus monitoring, we would be able to better monitor the systems over time.
My proposal would be to leverage statsd as the main stat emission protocol, but then leverage prometheus sidecar containers to advertise the metrics. This should fit in rather nicely to many existing stat collection tools, like prometheus and datadog, without being too opinionated about which ones companies are using.
On the helm charts, the deployment of the sidecar should be optional and the host/port should be configurable through two environment variables: STATSD_HOST and STATSD_PORT
Golang Library: https://godoc.org/github.com/etsy/statsd/examples/go
NodeJS Library: https://www.npmjs.com/package/statsd-client
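For illustration, the statsd wire protocol the proposal relies on is plain text over UDP ("name:value|type", where the type is c for counters, g for gauges, ms for timers). The sketch below is a hypothetical minimal client honoring the proposed STATSD_HOST/STATSD_PORT environment variables; it is not code from deps.cloud, and the metric names are assumptions:

```python
import os
import socket

class StatsdClient:
    """Minimal statsd emitter: fire-and-forget UDP datagrams."""

    def __init__(self, host=None, port=None):
        self.addr = (
            host or os.environ.get("STATSD_HOST", "127.0.0.1"),
            int(port or os.environ.get("STATSD_PORT", "8125")),
        )
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def format(self, name, value, kind):
        # statsd wire format: "<name>:<value>|<type>"
        return f"{name}:{value}|{kind}"

    def incr(self, name, value=1):
        self.sock.sendto(self.format(name, value, "c").encode(), self.addr)
```

A prometheus statsd-exporter sidecar can then scrape-translate these datagrams, which is the sidecar pattern described above.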
I recently put together a simple grafana dashboard for the system using metrics that were already in place from Kubernetes.
I'm going to close this for now. We can certainly add more later.
|
2025-04-01T06:38:22.598404
| 2016-01-27T20:15:26
|
129256190
|
{
"authors": [
"deranjer",
"lazyest"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5278",
"repo": "deranjer/OpenVPN-PHP-Management-Gui",
"url": "https://github.com/deranjer/OpenVPN-PHP-Management-Gui/issues/12"
}
|
gharchive/issue
|
PHP 5.5 & newest?
Just checked with old Debian 6 - it's working from scratch, but with 8.0 it's completely wrong. Is the project abandoned, or is it possible to take a quick look at adapting it for current Debian/Ubuntu?
Unfortunately, I don't plan on working on this any longer. You are free to fork it or if you submit pull requests I can still merge them, but that is the extent of my work on this from now on (unless I get a ton of time on my hands)
|
2025-04-01T06:38:22.613261
| 2023-06-13T09:36:25
|
1754460867
|
{
"authors": [
"DrGrixel",
"derfloh205"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5279",
"repo": "derfloh205/CraftSim",
"url": "https://github.com/derfloh205/CraftSim/issues/149"
}
|
gharchive/issue
|
Lua Error Crafting Results Export JSON
Hi there, I might have a little problem.
Soon after I did some Prospecting, I went to the CraftSim Crafting Results and saw there is a button "Export JSON". After I pressed it, my game froze a bit and I got a cute little Lua error, just like this:
Message: ...terface/AddOns/CraftSim/Data/Classes/JSONBuilder.lua:74: script ran too long
Time: Tue Jun 13 11:28:38 2023
Count: 1
Stack: ...terface/AddOns/CraftSim/Data/Classes/JSONBuilder.lua:74: script ran too long
[string "=[C]"]: ?
[string "@Interface/AddOns/CraftSim/Data/Classes/JSONBuilder.lua"]:74: in function `Add'
[string "@Interface/AddOns/CraftSim/Data/Classes/CraftResultItem.lua"]:35: in function `GetJSON'
[string "@Interface/AddOns/CraftSim/Data/Classes/JSONBuilder.lua"]:42: in function `AddList'
[string "@Interface/AddOns/CraftSim/Data/Classes/CraftResult.lua"]:121: in function `GetJSON'
[string "@Interface/AddOns/CraftSim/Data/Classes/JSONBuilder.lua"]:42: in function `AddList'
[string "@Interface/AddOns/CraftSim/Data/Classes/CraftRecipeData.lua"]:104: in function `GetJSON'
[string "@Interface/AddOns/CraftSim/Data/Classes/JSONBuilder.lua"]:42: in function `AddList'
[string "@Interface/AddOns/CraftSim/Data/Classes/CraftSessionData.lua"]:109: in function <...ce/AddOns/CraftSim/Data/Classes/CraftSessionData.lua:101>
[string "=(tail call)"]: ?
[string "@Interface/AddOns/CraftSim/Modules/CraftResults/Frames.lua"]:47: in function `clickCallback'
[string "@Interface/AddOns/CraftSim/Libs/GGUI-1.0/GGUI.lua"]:1107: in function <Interface/AddOns/CraftSim/Libs/GGUI-1.0/GGUI.lua:1105>
Locals: (*temporary) = defined =[C]:-1
Any tips on how I can fix it?
This currently happens when you have crafted a lot before exporting, due to the sheer amount of data accumulating!
Currently there is no planned fix for this but its on my todo :)
|
2025-04-01T06:38:22.618259
| 2024-04-18T12:30:08
|
2250574541
|
{
"authors": [
"coveralls",
"yaswanth-deriv"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5280",
"repo": "deriv-com/ui",
"url": "https://github.com/deriv-com/ui/pull/166"
}
|
gharchive/pull-request
|
[FEQ]Yaswanth/FEQ-1765/Improve/Disabled label animation and added props
Added an "isLabelAnimationDisabled" prop with which we can control the label animation for the Input component.
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
Details
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage increased (+0.2%) to 61.243%
Totals
Change from base Build<PHONE_NUMBER>:
0.2%
Covered Lines:
202
Relevant Lines:
319
💛 - Coveralls
|
2025-04-01T06:38:22.706248
| 2024-04-20T02:50:15
|
2254338717
|
{
"authors": [
"diamondhands0",
"tholonious"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5281",
"repo": "deso-protocol/core",
"url": "https://github.com/deso-protocol/core/pull/1252"
}
|
gharchive/pull-request
|
All fixed now. Summary:
Fixed a bug whereby we would never increase the fee above the minimum fee because of the placement of an if bucketMinFee <= globalMinFeeRate check.
Fixed a bug where we were overwriting the mempool fee register in Start(), which was causing the mempool fee estimator to have no txns in it, and thus always return the minimum value. This meant we were not considering mempool congestion at all.
In many places, we were confusing "fee bucket growth rate basis points" with "fee bucket multiplier". The former value would be something like 1000 (= 10%) while the latter would map to 1.1 (= (10000 + 1000) / 10000). This caused fee-time ordering to be basically completely broken. All fixed now, and fixed tests.
Just to add a little more detail: The tests were very well-written and I think they exercise this logic very well. The reason why they were passing before, though, is because we were setting the value incorrectly in Init() and passed it wrong as an argument, and the two sort of compensated for each other in the tests. But in production, we would go down a different path that wouldn't compensate properly, which is how I found the bug. Anyway it's all fixed now.
Set optimized defaults for the mempool dynamic fee params and added a deep comment explaining why we chose these values where they are define. Also made sure we're using them consistently in all the relevant places. These params optimize heavily toward getting your txn into the next block, which is what we want. They cause reordering issues if you're sending txns at a rate much higher than 1 block per second, but this is correct behavior, and the comments include suggestions on how to mitigate these issues (eg by manually setting the fee or using an atomic txn):
MempoolCongestionFactorBasisPoints
MempoolPriorityPercentileBasisPoints
PastBlocksCongestionFactorBasisPoints
PastBlocksPriorityPercentileBasisPoints
In computeFeeTimeBucketRangeFromExponent, there was a weird edge-case where we could have a fee bucket with start less than end. This can't happen in a real scenario, though, only when the bucket growth rate is like 1bp, which is ridiculously small. And I only found it because of the growth rate <> multiplier issue mentioned previously, which was causing a 10% growth rate to be threaded through as 1bp.
In EstimateFee, we accept a minFeeRateNanosPerKB, but we were ignoring it if the fee estimators returned a higher fee. This was much less useful than using the minFeeRateNanosPerKB as a straight-up override so I changed the behavior there. Doing this made it so that my script was able to blast the mempool with txns, with a custom fee rate, without any reordering issues (because all the txns were being put in the same fee bucket). Eventually, we should probably change the name of this field to something like overrideFeeRateNanosPerKB but I think it's fine for now.
For reference, in case it's useful, the way I found all this stuff was I slowed the block time down to 1 block every 10s and made the NumPastBlocks for the block estimator 5 blocks using the params in constants.go (so that txns would accumulate in the mempool) and added logging of the fees. Then I wrote a script that blasted the mempool with txns and noticed that the fees weren't adjusting properly, which led me down the rabbit-hole to find all of these issues. After fixing all the issues I took some time to optimize all the params, and then used my script to exercise everything and make sure it's fully 100% adapting correctly. Specifically, I saw that the fee goes up correctly once the mempool has a full block's worth of txns accumulated in it, stays high for a few blocks because of the block estimator, and then starts to go down as more blocks come through. It all works really well.
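To make the basis-points vs. multiplier distinction above concrete, here is a small sketch (in Python, purely illustrative; the actual implementation lives in the Go core) of how a growth rate in basis points maps to a fee-bucket multiplier, assuming exponential bucketing where each bucket's floor is the base fee scaled by the multiplier raised to the bucket exponent:

```python
BASIS_POINTS = 10_000

def growth_rate_to_multiplier(growth_rate_bps: int) -> float:
    # A growth rate of 1000 bps (= 10%) corresponds to a multiplier of 1.1,
    # i.e. (10000 + 1000) / 10000. Confusing the two values (1000 vs 1.1)
    # is exactly the bug described above.
    return (BASIS_POINTS + growth_rate_bps) / BASIS_POINTS

def bucket_min_fee(base_fee: float, exponent: int, growth_rate_bps: int) -> float:
    # Assumed exponential bucketing: floor of bucket N = base * multiplier^N.
    return base_fee * growth_rate_to_multiplier(growth_rate_bps) ** exponent
```

With a 10% growth rate, bucket 2 floors at roughly 1.21x the base fee; threading 1000 through where 1.1 was expected would instead produce wildly wrong bucket boundaries.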
[!WARNING]
This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more
#1252 👈
#1253
feature/proof-of-stake
This stack of pull requests is managed by Graphite. Learn more about stacking.
|
2025-04-01T06:38:22.753363
| 2021-01-04T20:56:46
|
778360479
|
{
"authors": [
"BraisGabin",
"cortinico",
"robstoll",
"schalkms"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5282",
"repo": "detekt/detekt",
"url": "https://github.com/detekt/detekt/issues/3344"
}
|
gharchive/issue
|
throw if detekt cannot find any files, i.e input was most likely configured wrong
Expected Behavior
The detekt task exits with at least a non-zero code, preferably with a message that the input was configured wrong
Current Behavior
Detekt is happy
Context
I had a project which was a multi-project build. I have configured detekt as follows:
detekt{
input = files(subprojects*.collect { it.projectDir })
}
At some point I simplified the structure and converted it to a single project. Detekt continued to run but did not analyse anything and I have not detekted (:wink:) that for a long time
I can't reproduce your issue. Which version of detekt are you using? Could you provide a sample project to demostrate this?
The latest one I reckon, you can try out https://github.com/robstoll/niok/commit/c4917338abc38bfe0d008b40cba8afa02380d558
With the next commit I fixed the input again and I had to fix a few issues detekt detected:
https://github.com/robstoll/niok/commit/fb376a43c86b571c98896e78389e491ac4fbaf8e
How should a computer program detect whether such a configuration was on purpose or not? It's entirely possible that projects don't contain Kotlin but rather Java, Scala or source code of other JVM languages.
Detekt reports in all kinds of flavours show how many files have been analyzed.
I'm thinking about the following cases, where detekt is frequently used.
Template projects that as the name suggests contain no Kotlin sources.
Not every single project in multi-projects contains Kotlin sources.
Detekt task exits at least with a non 0 code
Why should it exit with a different code? Which other static source code analyzers yield an error due to the set of input source files being empty?
Is entirely possible that projects don't contain Kotlin but rather Java, Scala or source code of other JVM languages.
I don't get why you think I am talking about Kotlin only here. is detekt { input =...} only for Kotlin?
Anyway, I am not talking about any specific language. The use case I am talking about here is that I have messed up input so that detekt has basically 0 files to analyse. That's almost always an error and not on purpose; otherwise there is no point in setting up detekt at all IMO. There is also no need for a flag such as "allow 0 files" IMO; one can implement such a check on one's own and not apply the detekt plugin in such cases.
Why should it exit with a different code?
I guess you are in a sarcastic mood or something and can answer yourself why a program which has an erroneous setup exits with something different than 0 :wink:
I don't get why you think I am talking about Kotlin only here. is detekt { input =...} only for Kotlin?
Suppose input = mydir:
How should detekt know whether mydir contains no Kotlin source files on purpose or not?
It's entirely possible that a directory without Kotlin code at the given point in time is configured on purpose.
input so that detekt has basically 0 files to analyse.
This can be seen in the built-in reports of detekt. Why should detekt terminate because of that?
By the way, you can implement this behavior with detekt's custom report feature. If there is actually 0 analyzed code, you could throw an exception.
I guess you are in a sarcastic mood or something and can answer yourself why a program which has an erroneous setup exits with something different than 0 😉
Excuse me, this wasn't meant sarcastic. It was a serious question.
Which other static source code analyzers yield an error and terminate due to the set of input source files being empty?
It's entirely possible that a directory without Kotlin code at the given point in time is configured on purpose.
I agree that the directory can be empty at the point of configuration, but when running detekt then, IMO, detekt should fail if it does not need to analyse anything at all. I don't know detekt well enough; maybe my assumption that input = can only be specified once leads to the confusion here. Or in other words, maybe I should change the title of the issue to throw if detekt does not need to analyse any files (again, I am not talking about Kotlin files exclusively, I am talking about 0 files of any kind).
Which other static source code analyzers yield an error and terminate due to the set of input source files being empty?
Don't know any on top of my head but I think it makes sense to error on misconfiguration instead of happily exit with 0 which in turn means the build/CI will not fail, giving the dev creating e.g. a PR as well as the reviewer a false view of the actual state (i.e. that the source code might be full of violations.)
I know a test-runner which exits with 0 if it cannot find any tests to execute and IMO this is the same use case. junit5 fails if --fail-if-no-tests is specified
I know a test-runner which exits with 2 if it cannot find any tests to execute and IMO this is the same use case. junit5 fails if --fail-if-no-tests is specified
Technically we could add a flag in the config file to achieve this. I'm unsure of the usefulness of this flag as it seems more an edge case to me 🤔
Also JUnit 5, as you mentioned, is returning a success if there are no tests to run (as they have the aforementioned flag).
Moreover the detekt task is a SourceTask. I don't know your exact detekt configuration, but you can probably add a doLast that will fail the task if source is empty.
First of all, feel free to close this issue, it's an idea, but I can live without it (can add an own check).
IMO a static analysis tool which tries to detect problems should also detect a wrong setup which is a problem as well and IMO a big problem because it covers up all real problems sitting there in the code. IMO this check should not be behind a flag but better behind a flag than nothing.
I don't know your exact detekt configuration
This was the erroneous configuration:
https://github.com/robstoll/niok/commit/fb376a43c86b571c98896e78389e491ac4fbaf8e#diff-49a96e7eea8a94af862798a45174e6ac43eb4f8b4bd40759b5da63ba31ec3ef7L71
As you can see, I had misconfigured input and pointed it to an empty list. The project once had subprojects where this configuration was correct, but not anymore, and I forgot to change it when I refactored the project to a single project. Since detekt was always green, I did not detect it for a long time.
I've done a bit more research on this front just to understand what was happening.
but you can probably add a doLast that will fail the task if source is empty.
This is the check you can add to your build.gradle if you want your build to fail once the input is empty.
gradle.taskGraph.afterTask {
    if (it.state.noSource && it.path == ":detekt") {
        throw new StopExecutionException("Detekt has an empty input")
    }
}
What happened in your case was that you provided an empty input for detekt. The detekt task is a SourceTask that exits with the NO-SOURCE status if the input is empty. Technically our code never runs, and Gradle just realises that the task has no input so it can be skipped.
This is making adding a failOnEmptyInput config property even more complicated, as the culprit in this case was Gradle and how task execution is computed.
IMO a static analysis tool which tries to detect problems should also detect a wrong setup which is a problem as well and IMO a big problem because it covers up all real problems sitting there in the code.
Agree. Though in this specific case you instructed Detekt to pick an input that turned out being empty. I have several examples of Gradle modules that have no source code but have Detekt applied (a BOM, an Android resource only module, etc.). For those modules it is totally reasonable to have Detekt just being skipped and resulting in a success.
The problem was that what you consider a "wrong setup" could be valid instead for another use case.
What we could do is list the snippet I posted in our official documentation, so others can benefit from it.
Thanks for the analysis. Surely good to include the snippet 🙂👍
Personally, I would add a flag doNotFailOnEmptyInput which one needs to use in case of a BOM (pom) project or similar. I would even go as far as to not provide a flag at all. Instead, such projects should simply not apply detekt because it's for nothing. Or does detekt also check non-source related stuff?
Instead, such projects should simply not apply detekt because its for nothing
Agree, but if you use subprojects {} or allprojects {} block in your top level build.gradle file (as a lot of our users are doing), you're applying the plugin to all the modules.
The current behavior makes sure those modules with no source are not breaking your overall builds.
I most of the time use subprojects or configure(...) instead of a build.gradle in the subproject. So I would do the following
configure(subprojects.filter { !it.name.contains("-bom") }) {
    apply(...)
}
// or
subprojects {
    if (!it.name.contains("-bom")) apply(...)
}
And there you have your flag. No big deal IMO. But I see that you are hesitant to take a more restrictive approach than the current one in the sense of fail-if-no-input by default. Fine with me, I brought up my points. I think it's clear by now that both approaches require more or less the same amount of implementation in detekt and for the workaround. In the end, members of detekt need to decide, more on a principle level IMO.
I agree with @schalkms and @cortinico. Your points make sense, but it could break other users' flows. It seems that the use cases where you want empty source sets are legit. So I'm going to close this issue.
I do appreciate this kind of issue related to UX. Few people report things like this. But I think that in this case it's better to keep the plugin as it is now.
|
2025-04-01T06:38:22.756251
| 2024-01-03T11:26:03
|
2063828262
|
{
"authors": [
"3flex",
"detekt-ci"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5283",
"repo": "detekt/detekt",
"url": "https://github.com/detekt/detekt/pull/6802"
}
|
gharchive/pull-request
|
Exclude FirErrors class from JaCoCo instrumentation to avoid ASM MethodTooLargeException
This is required for code coverage to work correctly when building using K2 compiler.
Warnings
:warning:
This PR is approved with no milestone set. If merged, it won't appear in the detekt release notes.
Generated by :no_entry_sign: dangerJS against 1971bee72b275a141265039ace7d6bc98faeaa75
|
2025-04-01T06:38:22.762003
| 2023-03-14T23:41:35
|
1624466483
|
{
"authors": [
"SubOptimal",
"dev-lu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5284",
"repo": "dev-lu/osint_toolkit",
"url": "https://github.com/dev-lu/osint_toolkit/pull/2"
}
|
gharchive/pull-request
|
Use self-hosted fonts instead of web-fonts.
The Poppins web-fonts served by fonts.google.com are replaced by self-hosted fonts.
This pr is no longer necessary, as it does not fit into the folder structure of the project and a font picker for user-defined fonts has been implemented.
|
2025-04-01T06:38:22.785079
| 2024-12-13T14:10:36
|
2738490438
|
{
"authors": [
"mmartinortiz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5286",
"repo": "devcontainers/cli",
"url": "https://github.com/devcontainers/cli/issues/940"
}
|
gharchive/issue
|
[Question] How to build the image in a remote host
Hi,
We started to adopt devcontainers at work and so far, it has been a smooth experience. Our development setup is as follows:
Developers has their own Windows Virtual Device
Developers connect via SSH (with VS Code) to a remote Linux machine where development happens
We are migrating a project to devcontainer which Dockerfile requires an environment variable to be passed as a build argument, like:
docker build --build-arg MY_VAR=$VAR_VALUE .
The MY_VAR variable exists in the Linux machines. When the devcontainer is built from the Linux machine it finishes correctly. However, when the devcontainer is built from the Windows machine (using VS Code devctontainers extension), the build fails because such environment variable is empty.
Creating the environment variable in the Windows machine is not an option.
Is there a way to make sure that the devcontainer is always built and run in the Linux machine, even when it is built from the VS Code instance running in the Windows machine?
I'll reply to myself with the details of the actual problem and how we solved it, just in case it helps others.
When connecting from the Windows machine via SSH to the Linux machine, the build process of the docker image happens on the Linux machine. We were under the impression that at least the variables were injected from the Windows machine. But that is not the case.
The environment variables are present in the Linux machine when the user's ~/.profile and ~/.bashrc are loaded. But our Linux machines have a managed identity (we authenticate against an AD server). That detail is important, because when the building process is started by the VS Code's devcontainer plugin, it looks for the login shell of the current user. This thread on StackOverflow put me on the right track.
In other words: the environment variables are empty because the shell used by the devcontainer process, /bin/sh, does not load the user's profile files.
For loading environment variables in that kind of scenarios, docker compose with an .env file would be an option.
In our case, we needed the variable to include an extra pip index. We solved it in two steps:
In the devcontainer.json we include the user's pip configuration folder as extra context:
"build": {
    "dockerfile": "../Dockerfile",
    "context": "..",
    "options": [
        "--build-context=user_home=${localEnv:HOME}/.config/pip"
    ]
}
Within the Dockerfile, we copy the pip.conf file into the image, install the packages, and remove the config file
ENV PIP_CONFIG_FILE=/tmp/pip.conf
COPY --from=user_home pip.conf /tmp/pip.conf
# Install the Python packages
RUN rm /tmp/pip.conf
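The root cause described above, a non-login, non-interactive shell skipping profile files, can be reproduced with a small self-contained sketch. The profile file and variable name below are throwaway examples, not the project's real configuration:

```shell
# Show why variables exported in ~/.profile are invisible to a plain,
# non-login /bin/sh invocation such as the one build tooling uses.
unset MY_VAR

# Simulate a profile file that exports a variable, as ~/.profile would.
profile=$(mktemp)
echo 'export MY_VAR=from_profile' > "$profile"

# Non-login shell: profile files are never sourced, so the variable is empty.
sh -c 'echo "non-login: MY_VAR=${MY_VAR:-<unset>}"'

# Sourcing the profile first makes the variable visible, which is in
# effect what an interactive login shell does with ~/.profile.
sh -c ". '$profile'; echo \"login-like: MY_VAR=\${MY_VAR:-<unset>}\""

rm -f "$profile"
```

The first invocation prints the variable as unset, the second picks it up from the sourced profile, which mirrors why the devcontainer build saw empty values while an interactive SSH session did not.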
|
2025-04-01T06:38:22.786910
| 2023-01-29T17:30:13
|
1561384755
|
{
"authors": [
"samruddhikhandale",
"tudortimi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5287",
"repo": "devcontainers/images",
"url": "https://github.com/devcontainers/images/issues/384"
}
|
gharchive/issue
|
Image list for Jekyll links to image list for Ruby
On https://github.com/devcontainers/images/tree/main/src/jekyll, the "full list" link points to the image list for Ruby, not for Jekyll.
Closing as completed with https://github.com/devcontainers/images/pull/389
|
2025-04-01T06:38:22.787819
| 2023-12-31T05:39:18
|
2060948697
|
{
"authors": [
"Jordanwaslistening",
"bamurtaugh"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5288",
"repo": "devcontainers/spec",
"url": "https://github.com/devcontainers/spec/pull/364"
}
|
gharchive/pull-request
|
Create devcontainer.json
"ghcr.io/devcontainers/features/aws-cli:1": {}
Thanks again for your interest in contributing. As I mentioned in the other couple of PRs, going to close this one as I don't think it's intended for this repo.
|
2025-04-01T06:38:22.789863
| 2023-01-07T08:37:14
|
1523615571
|
{
"authors": [
"DHRUVKHANDELWAL00",
"ManavLohia945",
"developer-diganta"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5289",
"repo": "developer-diganta/Dino",
"url": "https://github.com/developer-diganta/Dino/issues/59"
}
|
gharchive/issue
|
Starter code for website of Dino
Subject of the issue
Construct the code for the website as per the design mockup.
Prefer HTML,CSS and JS rather than any framework (up for discussion if you want)
@developer-diganta Can i try it?
Sorry @DHRUVKHANDELWAL00 . I have been discussing about this issue with @developer-diganta even before when SWOC started, So he is going to assign me this work. I have been looking forward to work on this issue for quite some time. So, I am sorry but you can look out for other issues
Okay manav. Also No need to say sorry @ManavLohia945 .
Thanks @DHRUVKHANDELWAL00 for understanding! @ManavLohia945 assigned
|
2025-04-01T06:38:22.814639
| 2019-01-06T18:25:40
|
396281790
|
{
"authors": [
"marvinhagemeister",
"rmacklin"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5290",
"repo": "developit/preact",
"url": "https://github.com/developit/preact/issues/1285"
}
|
gharchive/issue
|
Providing a minified mjs bundle
It's recommended to use preact.min.js instead of preact.js in production builds due to the file size reduction from the property mangling: https://github.com/developit/preact/blob/87a4ebe99dc068eaeb8503644c60ffe8ad735771/config/properties.json#L1-L27
This works well if you are using a regular script to pull preact from the CDN and relying on the preact global or if you are bundling with webpack. However, it doesn't work if you are importing preact as a native ES module.
In that case, you can successfully import the unminified preact.mjs bundle, as seen here:
http://jsfiddle.net/tpck6Lf4/
But since preact.min.js is not a native ES module, you cannot change it like this:
- import { Component, h, render } from<EMAIL_ADDRESS>
+ import { Component, h, render } from<EMAIL_ADDRESS>
(you'd get SyntaxError: The requested module<EMAIL_ADDRESS>does not provide an export named 'h')
It'd be awesome if there was a corresponding preact.min.mjs that we could import from in production builds to take advantage of the reduced file size.
One way to do this might be to switch from uglify to terser where the minifier can understand ES2015 syntax. Alternatively perhaps we can minify the code with export stripped out, and then add it back. Thoughts?
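As a hedged sketch of the terser route: terser understands ES2015+ module syntax when given its `--module` flag. The file paths here are illustrative, and this is not an official build step of the project:

```shell
# Minify an ES module bundle while keeping import/export statements
# intact, so the output remains loadable as a native module.
npx terser dist/preact.mjs --module --compress --mangle -o dist/preact.min.mjs
```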
We published an alpha version for Preact X just a few hours ago. It's available on npm via the preact@next tag and ships with a minified mjs bundle 💯
|
2025-04-01T06:38:22.845699
| 2013-10-01T05:04:51
|
20310370
|
{
"authors": [
"matrosovDev",
"mital87"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5291",
"repo": "devinross/tapkulibrary",
"url": "https://github.com/devinross/tapkulibrary/issues/264"
}
|
gharchive/issue
|
Mark event to Daily / Weekly / Monthly / Yearly
Hello,
I have created an event on 1st September, 2013 with weekly intervals, so my marked dates are 1, 8, 15, 22, 29... How can I implement this using this library? I am stuck at this point. Which method of your library helps me resolve this issue? Please help me as soon as possible.
Thank you so much in advance.....
yea the same question http://stackoverflow.com/users/587415/matrosov-alexander )
|
2025-04-01T06:38:22.850436
| 2018-09-17T01:51:33
|
360702248
|
{
"authors": [
"corroded",
"coveralls",
"devinus"
],
"license": "0BSD",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5292",
"repo": "devinus/poison",
"url": "https://github.com/devinus/poison/pull/178"
}
|
gharchive/pull-request
|
Update README to use current version
The current released version says it's in 4.0.1 so I just updated the version in the installation.
Pull Request Test Coverage Report for Build 244
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 89.423%
Totals
Change from base Build 236:
0.0%
Covered Lines:
186
Relevant Lines:
208
💛 - Coveralls
Fixed in 5.0.
|
2025-04-01T06:38:22.870318
| 2022-09-06T08:32:39
|
1362907135
|
{
"authors": [
"James4Ever0",
"JounQin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5293",
"repo": "devockr/deeplx",
"url": "https://github.com/devockr/deeplx/issues/3"
}
|
gharchive/issue
|
zu1k's docker image got removed
The docker image is gone. I see your app is running fine. Would you help restore it or provide the exported .tar file of that image?
How can I help with that?
found the release here.
Should I/you close this issue or is there anything to be done?
It's alive again now.
|
2025-04-01T06:38:22.872630
| 2019-06-24T13:51:09
|
459904092
|
{
"authors": [
"CLAassistant",
"hohwille",
"maciejmalecki"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5294",
"repo": "devonfw-forge/keywi",
"url": "https://github.com/devonfw-forge/keywi/pull/19"
}
|
gharchive/pull-request
|
Enable h2-console for debug purposes
I have tricked sec config to make h2-console work.
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it.
@maciejmalecki could you please sign the CLA in order to contribute to devonfw?
|
2025-04-01T06:38:22.929422
| 2022-12-24T04:45:22
|
1509939780
|
{
"authors": [
"FluffyBumblebees",
"Frontear",
"MBatt1",
"aeaver"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5295",
"repo": "devs-immortal/Paradise-Lost",
"url": "https://github.com/devs-immortal/Paradise-Lost/issues/748"
}
|
gharchive/issue
|
Bug: Paradise lost is breaking some leaf textures from other mods due to christmas ornaments.
What happened?
A bug happened!
To replicate:
Install paradise lost & BYG (Or any mod that adds leaves)
Some leaves will not have textures.
Mod Version
Beta 1.6.9 1.18.2
Fabric API Version
0.67.0
Relevant log output
https://gist.github.com/FluffyBumblebees/3862e4b0fff36bc84c82630e7c72ff42
Other mods
BYG
Additional Information
No response
It seems that I was able to replicate this bug by installing Sodium with BYG and Paradise Lost; when I remove Sodium, both Paradise Lost and BYG leaves start to have textures.
For me:
BYG + Paradise Lost (no Sodium) = load
BYG + Sodium = load
Paradise Lost + Sodium = does not load
BYG + Paradise Lost + Sodium = does not load
Mod Version
Release 1.4.7 1.18.2
Fabric API Version
0.67.0
Ah its a sodium incompat.
I am currently using these 3 mods with my friend in a modded server; from his point of view the BYG and Paradise Lost leaves do load, but chests from Paradise Lost do not load.
We actually don't have Christmas textures for the chests, that's why they're untextured. I believe the leaf problems are fixed in the 1.19.x versions however, but I will make a note of both.
1.19.2 Modpack MedievalMC users have reported this exact issue as well. Most notably, leaf textures from BYG are missing and are replaced with the purple/black texture.
This is fixed in my PR.
|
2025-04-01T06:38:22.944183
| 2024-10-20T13:20:10
|
2600452483
|
{
"authors": [
"Elessar1802",
"eshankvaish"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5296",
"repo": "devtron-labs/dashboard",
"url": "https://github.com/devtron-labs/dashboard/pull/2140"
}
|
gharchive/pull-request
|
chore: restructure into a monorepo using pnpm
Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Fixes # (issue)
Type of change
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[x] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
[ ] Test A
[ ] Test B
Checklist:
[ ] The title of the PR states what changed and the related issues number (used for the release note).
[ ] Does this PR require documentation updates?
[ ] I've updated documentation as required by this PR.
[ ] I have performed a self-review of my own code
[ ] I have commented my code, particularly in hard-to-understand areas
Remove linter.py, sentry.sh
Common out custom.d.ts
vite-env
|
2025-04-01T06:38:22.972444
| 2017-02-02T08:50:30
|
204813617
|
{
"authors": [
"GuiFSimoes",
"M4NC1O",
"Niraj-Sharma",
"PatilPritam",
"SerhiiTsybulskyi",
"SimoneMSR",
"Trenrod",
"arianul",
"bashoogzaad",
"bgaillard",
"calvingferrando18",
"chintharr",
"danilocubo",
"devyumao",
"dinusuresh",
"emidel",
"flexkiran",
"gepisolo",
"hambardzumyan-mane",
"justicewebtech",
"kthomas80",
"marciomsm",
"michalzfania",
"nolafs",
"oslanier",
"qcnguyen",
"rshatf",
"slesarevns",
"smainz",
"stanciupaul",
"superKalo",
"tim-hoffmann",
"tmwnni",
"trueflywood"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5297",
"repo": "devyumao/angular2-busy",
"url": "https://github.com/devyumao/angular2-busy/issues/33"
}
|
gharchive/issue
|
ERROR in BusyModule is not an NgModule
I get this error when I import BusyModule. Or does this library only work with Angular 2.0.0?
"dependencies": {
"@angular/common": "^2.3.1",
"@angular/compiler": "^2.3.1",
"@angular/core": "^2.3.1",
"@angular/forms": "^2.3.1",
"@angular/http": "^2.3.1",
"@angular/platform-browser": "^2.3.1",
"@angular/platform-browser-dynamic": "^2.3.1",
"@angular/router": "^3.3.1",
"angular2-busy": "^1.0.2",
"angularfire2": "^2.0.0-beta.7",
"bootstrap": "^3.3.7",
"core-js": "^2.4.1",
"firebase": "^3.6.8",
"ng2-bootstrap": "^1.3.2",
"rxjs": "^5.0.1",
"ts-helpers": "^1.1.1",
"zone.js": "^0.7.2"
},
"devDependencies": {
"@angular/compiler-cli": "^2.3.1",
"@types/jasmine": "2.5.38",
"@types/node": "^6.0.42",
"angular-cli": "1.0.0-beta.26",
"codelyzer": "~2.0.0-beta.1",
"jasmine-core": "2.5.2",
"jasmine-spec-reporter": "2.5.0",
"karma": "1.2.0",
"karma-chrome-launcher": "^2.0.0",
"karma-cli": "^1.0.1",
"karma-jasmine": "^1.0.2",
"karma-remap-istanbul": "^0.2.1",
"protractor": "~4.0.13",
"ts-node": "1.2.1",
"tslint": "^4.3.0",
"typescript": "~2.0.3"
}
same issue
same issue
same issue
I have the same problem.
same issue
Same
same
Has anyone fixed this bug?
Same error for me with:
"@angular/compiler-cli": "^2.3.1",
"angular-cli": "1.0.0-beta.24",
"typescript": "~2.0.3"
same
Any idea how to fix this?
same issue!!
same issue. any idea how to fix?
same! any idea?
the same issue. any idea when it will be fixed?
Same error for me with:
"@angular/compiler-cli": "^2.4.8",
"angular-cli": "1.0.0-beta.32.3",
Same issues
"@angular/compiler-cli": "^2.3.1",
"angular-cli": "1.0.0-beta.28.3",
Same issue here after upgrading to @angular/cli@latest - is there a fix in the works?
@angular/cli: 1.0.0-beta.32.3
node: 7.3.0
os: win32 x64
@angular/cli: 1.0.0-beta.32.3
@angular/common: 2.4.8
@angular/compiler: 2.4.8
@angular/compiler-cli: 2.4.8
@angular/core: 2.4.8
@angular/forms: 2.4.8
@angular/http: 2.4.8
@angular/platform-browser: 2.4.8
@angular/platform-browser-dynamic: 2.4.8
@angular/router: 3.4.8
+1
A possible workaround, until it is patched, is to comment out
imports: [ ...
//,BusyModule
...]
Then ng build -w or whatever you prefer. Once it is built remove the comments. It just throws a fit during the build but it still works fine.
+1
same issue
yep! same issue...
same here
Same here
Changing the order in app.module.ts from
@NgModule({
imports: [
...
BusyModule,
],
declarations: [
...
],
...
}
to
@NgModule({
declarations: [
...
],
imports: [
...
BusyModule,
],
...
}
worked for me
same here :(
@devyumao , you can take a look at this issue: https://github.com/angular/angular-cli/issues/3426#issuecomment-269673735 maybe it will be able to help.
Do you plan to keep the library updated or it is abandoned currently?
I have fixed this error. Please use this until the official module is fixed.
https://github.com/dinusuresh/angular2-busy
Thank you @dinusuresh ! The solution you provided works on my side too! 👍
Why don't you consider to issue a pull request to https://github.com/devyumao/angular2-busy ? I bet this would help @devyumao deal with this issue faster.
Yay! Works great! Thanks, Dinesh!
Thanks a lot @dinusuresh 👍 your fixed also worked for me.
Thanks @dinusuresh its works for me too!!!
I updated "ts-metadata-helper" and "angular2-dynamic-component" packages in my projetc too...
Glad I could help 😄
@superKalo I have created a pull request but considering the author is not active ATM I do not know if and when it will be accepted.
Can anyone please explain to me how to solve this problem? I didn't understand what I have to change :)
@M4NC1O Sure. If you are looking to just get it working, then
In your package.json file make sure the angular2-busy line is like this:
"angular2-busy": "https://github.com/dinusuresh/angular2-busy.git"
It works!
thank you!
Thank you !
It works well..thanks
When I build --aot I lose the spinner and the label "Loading..".. does anyone have the same issue?
@dinusuresh: started getting error "DynamicComponentModule is not an NgModule"..
@Niraj-Sharma Please can you post your package.json ?
It is working well on my end, the only warning is the rxjs version 5.0.2 which is used by angular2-dynamic-component
@M4NC1O I will look into it and let you know.
@Niraj-Sharma https://github.com/devyumao/angular2-busy/issues/39
Now the module has been fixed.
Thanks, @dinusuresh ! Your solution is great. 👍
Also thank @superKalo . The issue you mentioned is very helpful.
Sorry for the late reply. The library will be keeping updated :)
@Niraj-Sharma Now the module does not rely on angular2-dynamic-component any more. Please update to the latest version.
@devyumao Thank you
Hello,
I followed the solution by dinusuresh, but it's not working. Please provide a solution.
I also wanted to ask whether it will create a separate instance for different components if we add it in each component.
Currently, I am using the loader with observables, but the problem is that the loader gets activated for any request to the server, just because of the service we are using.
Please provide a solution.
@dinusuresh
|
2025-04-01T06:38:23.008840
| 2018-02-27T10:59:19
|
300583569
|
{
"authors": [
"dasboe",
"dfahlander"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5298",
"repo": "dfahlander/Dexie.js",
"url": "https://github.com/dfahlander/Dexie.js/issues/669"
}
|
gharchive/issue
|
Dexie is sometimes hard to debug because of missing context in error event object
I'm using window.addEventListener('unhandledrejection', callback) for global error handling of Dexie.js. My problem is that I often find myself in a situation where it's hard to debug my code because the error event object does not give enough meaningful information about where the error occurred in my code. Here is a very distilled example of a type of error which I had a hard time debugging:
window.addEventListener('unhandledrejection', function(e) {
// The output of the error object does not give any meaningful information where the error occured
console.log(e);
});
var db = new Dexie("TestDuplicateKey");
db.version(1).stores({ test: 'id' });
db.open().then(function() {
return db.test.add({ id: 1 });
}).then(function() {
return db.test.add({ id: 1 });
});
Fiddle: https://jsfiddle.net/1ksfk1hv/
Here is all information I get from the error event object in the console output:
message: "Key already exists in the object store."
name: "ConstraintError"
stack: "Error
at getErrorWithStack (https://unpkg.com/dexie@2.0.1/dist/dexie.js:322:12)
at new DexieError (https://unpkg.com/dexie@2.0.1/dist/dexie.js:451:19)
at mapError (https://unpkg.com/dexie@2.0.1/dist/dexie.js:481:14)
at handleRejection (https://unpkg.com/dexie@2.0.1/dist/dexie.js:965:14)
at IDBRequest.<anonymous> (https://unpkg.com/dexie@2.0.1/dist/dexie.js:4220:9)
at IDBRequest.<anonymous> (https://unpkg.com/dexie@2.0.1/dist/dexie.js:1178:23)"
I'm not sure if I should do something differently, this is a limitation I have to live with or if this is something which could be improved in Dexie.js.
Yes, you can set Dexie.debug to true to enable long call stacks:
Dexie.debug = true;
I updated the fiddle accordingly and after running it, the log will tell you the exact line where the error occurred, see screenshot:
Note: Dexie.debug will be default true when served from localhost.
http://dexie.org/docs/Dexie/Dexie.debug
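Conceptually, the long-stack idea behind Dexie.debug can be illustrated with a plain-JavaScript sketch (this is only a simplified illustration, not Dexie's actual implementation; the helper name is made up):

```javascript
// Simplified idea behind "long stacks": capture a stack synchronously at
// call time and attach it to any later async rejection, so the error can
// point back at the line of application code that started the operation.
function withLongStack(asyncFn) {
  const callSite = new Error('long-stack').stack; // captured synchronously
  return asyncFn().catch((err) => {
    err.longStack = callSite; // preserve where the async work was started
    throw err;
  });
}

// Usage: the rejected error now carries the original call site.
withLongStack(() => Promise.reject(new Error('boom')))
  .catch((e) => console.log(e.longStack.split('\n')[0]));
```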
Thank you - this helps a lot! I'm running my tests in a Jasmine suite and therefore Dexie.debug has been set to false by default.
|
2025-04-01T06:38:23.015746
| 2015-02-28T21:56:48
|
59360487
|
{
"authors": [
"dfarrell07",
"stefanoperone"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5299",
"repo": "dfarrell07/wcbench",
"url": "https://github.com/dfarrell07/wcbench/issues/59"
}
|
gharchive/issue
|
parameters information (wcbench)
Hi Daniel,
I'd like to find out more about "cbench_max, cbench_min, cbench_avg, one_min_load, five_min_load, fifteen_min_load" parameters?
What are exactly this parameters? How are they calculated??
Thanks :)
With these parameters:
I don't understand these values:
and the average responses/sec (817.41): are these the answers that the "switches" receive from the "controller"?
What are exactly this parameters?
They are described in detail in the WCBench Results section of the README.
How are they calculated??
This array contains most of the commands used to gather system stats, including all of the *_load ones you asked about. The CBench min/max/avg values are parsed from CBench's result output. Here's the relevant code.
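As a rough illustration of that parsing step, here is a toy JavaScript sketch (WCBench itself does this in Bash; the sample RESULT line below is illustrative, reusing the 817.41 average from the screenshot above):

```javascript
// Toy sketch of extracting cbench_min/max/avg/stdev from a CBench
// "RESULT" line. Illustrative only; WCBench's real code is Bash.
function parseCbenchResult(line) {
  const m = /min\/max\/avg\/stdev\s*=\s*([\d.]+)\/([\d.]+)\/([\d.]+)\/([\d.]+)/.exec(line);
  if (!m) return null; // not a RESULT line
  const [min, max, avg, stdev] = m.slice(1).map(Number);
  return { min, max, avg, stdev };
}

const sample =
  'RESULT: 16 switches 12 tests min/max/avg/stdev = 760.23/892.01/817.41/12.55 responses/s';
console.log(parseCbenchResult(sample).avg); // 817.41
```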
are these the answers that the "switches" receive from the "controller"?
I don't think I understand what you mean by this - can you restate it more clearly?
You can find general information about system load from many places, including this wiki article.
Daniel, excuse me,
correct me if I'm wrong;
"CBench is a somewhat classic SDN controller benchmark tool. It blasts a controller with OpenFlow packet-in messages and counts the rate of flow mod messages returned."
Is "cbench_avg" parameter calculated according to these messages returned?
Is "cbench_avg" parameter calculated according to these messages returned?
Yes, it's the average packet_ins/flow_mods per second.
My slides from LinuxCon provide a pretty good overview of CBench's algorithm.
Perfect ;)
Thanks you !!
|
2025-04-01T06:38:23.020079
| 2015-09-06T06:43:57
|
105075444
|
{
"authors": [
"dfch",
"rufer7"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5300",
"repo": "dfch/biz.dfch.CS.System.Utilities",
"url": "https://github.com/dfch/biz.dfch.CS.System.Utilities/issues/9"
}
|
gharchive/issue
|
MEF exports with IODataEndpoint or IODataEndpointData must support Version information
Currently MEF endpoints that are loaded via the IODataEndpoint interface do not specify their version. When using these endpoints more intensively it will become difficult to manage or detect their versions.
Therefore it is suggested to either implement a static property on IODataEndpointData which specifies the version of the assembly/plugin or to add a method/property to IODataEndpoint that returns the version.
The disadvantage of having the version information supplied in IODataEndpointData is that it is static, but it is then very obvious (within the data annotation) which version the plugin is supposed to have.
If the version information was supplied via IODataEndpoint then it could be read from the assembly or somewhere else (which could lead to a more consistent version scheme and could be detectable from the outside via file information).
Both of these suggested changes are breaking changes!
@rufer7 what do you think of this?
The downside is, that we have that IODataEndpoint inside the utilities package so everyone upgrading this package for other reasons will run into that breaking change.
I know that breaking changes are not the way to go, but this is very early in the dev process of that component, so I think we should go for it.
@dfch Seems to be a good point providing the version in IODataEndpoint. Another option could be reading the assembly version itself. The disadvantage is that with this approach the version gets increased with every build. Exposing the version by a method or field seems to be more flexible. I would suggest taking the approach you suggested. Breaking changes in early development state should not be the reason for not doing it.
|
2025-04-01T06:38:23.045501
| 2022-04-06T23:13:43
|
1195292529
|
{
"authors": [
"computechrr",
"dfinke"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5301",
"repo": "dfinke/ImportExcel",
"url": "https://github.com/dfinke/ImportExcel/issues/1155"
}
|
gharchive/issue
|
not saving correctly
this is the part of my script that exports the data to excel
foreach($tedserver in $tedservers) {
$kbtenables | Where-Object {$_.'dns name' -match $tedserver} | Export-Excel -Path 'D:\Powershell Scripts\Data\WindowsUpdates\Tenable.xlsx' -WorksheetName Ted-KB -Append -AutoSize
$othertenables | Where-Object {$_.'dns name' -match $tedserver} | Export-Excel -Path 'D:\Powershell Scripts\Data\WindowsUpdates\Tenable.xlsx' -WorksheetName Ted-Other -Append -AutoSize
}
I have the same thing about 6 times, one for each tech. When I run one loop everything works, but with 2 or more I get this error message
MethodInvocationException: C:\Users\my user\Documents\PowerShell\Modules\ImportExcel\7.4.1\Public\Export-Excel.ps1:679:20
Line |
679 | else { $pkg.Save() }
| ~~~~~~~~~~~
| Exception calling "Save" with "0" argument(s): "Error saving file D:\Powershell Scripts\Data\WindowsUpdates\Tenable.xlsx"
what i end up with is a spreadsheet with 2 sheets which is from the last loop instead of all sheets for all techs
Thank you
i went back to this the next day, closed code and reran the script and everything works
Sometimes it only works every other Thursday ;-)
|
2025-04-01T06:38:23.054768
| 2021-11-18T14:11:35
|
1057403239
|
{
"authors": [
"mintar",
"relffok"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5302",
"repo": "dfki-ric/mir_robot",
"url": "https://github.com/dfki-ric/mir_robot/pull/96"
}
|
gharchive/pull-request
|
Draft: Galactic-Devel (Help wanted)
Hi there,
I have started porting this package to ROS2 galactic a while ago. It was a lot of effort so far and there is still heaps to do. I wanted to get further in the development before opening a PR but because of the lack of time I am guessing it would be easiest to all work together (hopefully) on this one to move along. And finally be able to use ROS2 for all the MiR platforms.
It is still pretty rudimentary, but the simulation and driver work together with our MiR100 platform and it is great to be able to use ROS2 package once the driver connection is up and running.
Overview:
mir_description: ported
mir_driver: supports the laser_scans, cmd_vel, odom and tf so far. Since I didn't need the other topics, I have commented them out. (ROS1 to ROS2 msg require some dict filter, but that shouldn't be a big effort to implement for most topics) This was the biggest chunk to be solved, I ended up porting rospy_msg_converter to ROS2 (rclpy_msg_converter) as well. Will link the PR here shortly.
mir_gazebo: simulation is up and running
mir_navigation: mapping + navigation ported using slam_toolbox and nav2 (which is great) but there is still a lot of parameter tuning to be done (mostly waiting on 1.0.8 release right now)
Some issues to point out:
Separate laserscans: The laserscans are the most important part in mapping, localization and navigation, and we have two separate scans (front and back). In comparison to ROS1, the slam and nav nodes require the scan to be merged and not only be passed alternately to the same topic. So I needed to merge those (using another package I ported) from laserscan to pointcloud, then merge the clouds, and then convert it back to a single laserscan. At that time I could not find a better solution; if somebody else can point out anything, I'd be happy to review that.
Known Bug: Gazebo simulated Laserscans drift away from time to time (before merging in a pointcloud). This seems to be an issue in the urdf configuration, but I haven't had the time to track it down. https://github.com/relffok/mir_robot/issues/1
Missing namespace support: While porting I missed to drag the namespace everywhere (I also felt like there is an ongoing discussion about namespaces in lots of ros2 pkg), but this will also be added some time soon https://github.com/relffok/mir_robot/issues/2
To sum up, it is usable for a few tasks, but there are still lots of things to do and bugs to remove. Help wanted and appreciated!
Thanks a lot for all this work! I'll review it as soon as possible (I'm pretty swamped with work right now). Just a quick comment on one of your points:
Separate laserscans: The laserscans are the most important part in mapping, localization and mapping and we have two separate scans (front and back). In comparison to ROS1 the slam and nav nodes require the scan to be merged and not only be passed alternately to the same topic. So I needed to merge those (using another package I ported) from laserscan to pointcloud then merge the clouds and then convert it back to a single laserscan. At that time I could not find a better solution, if somebody else can point out anything, I'd be happy to review that.
This is really unfortunate. The problem with merging the laserscans like that is that you lose the information which frame a particular point was recorded from. At least in mapping and navigation there's some ray tracing going on between the sensor frame and the point (to determine free space), so that'll probably be a problem. Perhaps for pure localization it's going to work.
I remember having to do the same hack because gmapping in ROS1 also doesn't support multiple laser scanners, but it didn't produce good results, so I switched to hector_mapping instead (which supports multiple laser scanners).
Thanks a lot for all this work! I'll review it as soon as possible (I'm pretty swamped with work right now).
Thank you. I'm looking forward to get this one fully ported!
This is really unfortunate. The problem with merging the laserscans like that is that you lose the information which frame a particular point was recorded from. At least in mapping and navigation there's some ray tracing going on between the sensor frame and the point (to determine free space), so that'll probably be a problem. Perhaps for pure localization it's going to work.
I used it on MiR for mapping and navigation and also SLAM with what I called a virtual_laser_link (which I placed in the middle of the robot platform) and I didn't run into any issues. But maybe I am overlooking something here and you'll find something explicit to show me once you tested it.
@ros-pull-request-builder retest this please
That doesn't seem to have worked. Closing and reopening to trigger retesting.
|
2025-04-01T06:38:23.401773
| 2015-10-02T22:29:01
|
109582530
|
{
"authors": [
"dgrijalva"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5303",
"repo": "dgrijalva/jwt-go",
"url": "https://github.com/dgrijalva/jwt-go/issues/90"
}
|
gharchive/issue
|
Migrate the FromRequest things into a sub-package with some more flexible features
The original ParseFromRequest method was just a helper I inserted based on how I was using this library. It is useful as an example, but far more specific than I'm comfortable with for this library. It's also hard to change its behavior without introducing risk to users of the library.
I'd like to migrate, for version 3.0, all the request parsing behavior into a sub-package. In doing so, we should also modify it to be flexible, but have well defined behavior. Adding functionality should not introduce unexpected behavior for existing users.
Now's a good time to talk about a desired set of functionality. I'll go through the PRs and Issues and see what people have asked for in the past as a jumping off point. If anyone has any other thoughts, please post here.
|
2025-04-01T06:38:23.404645
| 2022-02-02T06:15:03
|
1121552977
|
{
"authors": [
"0pdd"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5304",
"repo": "dgroup/lazylead",
"url": "https://github.com/dgroup/lazylead/issues/574"
}
|
gharchive/issue
|
touch.rb:54: Add support for search over multiple branches (.locations)
The puzzle 567-859269e5 from #567 has to be resolved:
https://github.com/dgroup/lazylead/blob/cab193ac3d5bedcf8f8c6ad487e898fc03f5d864/lib/lazylead/task/svn/touch.rb#L54-L54
The puzzle was created by rultor on 02-Feb-22.
role: DEV.
If you have any technical questions, don't ask me, submit new tickets instead. The task will be "done" when the problem is fixed and the text of the puzzle is removed from the source code. Here is more about PDD and about me.
The puzzle 567-859269e5 has disappeared from the source code, that's why I closed this issue.
|
2025-04-01T06:38:23.525196
| 2024-05-10T12:09:55
|
2289612737
|
{
"authors": [
"ThomasVitale",
"eddumelendez"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5305",
"repo": "diagridio/testcontainers-dapr",
"url": "https://github.com/diagridio/testcontainers-dapr/pull/45"
}
|
gharchive/pull-request
|
Start DaprPlacementContainer automatically when DaprContainer is started
Currently, DaprPlacementContainer and DaprContainer should be started manually. This commit adds DaprPlacementContainer as a dependency when DaprContainer is started.
/cc @ThomasVitale
This is great! Thanks a lot @eddumelendez!
|
2025-04-01T06:38:23.528645
| 2018-07-24T05:27:55
|
343889493
|
{
"authors": [
"karanvs",
"matthewayne",
"sarahdwyer",
"syedatifakhtar"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5306",
"repo": "dialogflow/dialogflow-fulfillment-nodejs",
"url": "https://github.com/dialogflow/dialogflow-fulfillment-nodejs/issues/105"
}
|
gharchive/issue
|
How to handle list/carousel selection events ?
I have made a carousel following this example,
link
How can I handle the selection event for this ?
Hi,
Your link does not work for me.
I got it to work for me on a project i did earlier using
actions_intent_OPTION as an EVENT TRIGGER on a followup intent to the intent containing the carousel.
To extract the value in a parameter use #actions_intent_option.OPTION
@karanvs Your link is broken, can you post the code you have so far and be more specific?
List selection events are Action on Google specific events and are not supported by this library. If you are building on Dialogflow with only Actions on Google in mind please use the Actions on Google client library . If you are building for Actions on Google and other platforms please see the Dialogflow fulfillment & Actions on Google client library sample.
|
2025-04-01T06:38:23.536376
| 2018-05-23T17:27:11
|
325802810
|
{
"authors": [
"JustinBeckwith",
"mephicide",
"pbragaalves"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5307",
"repo": "dialogflow/dialogflow-nodejs-client-v2",
"url": "https://github.com/dialogflow/dialogflow-nodejs-client-v2/issues/89"
}
|
gharchive/issue
|
SSL error when trying to run example
Environment details
OS: Windows 10
Node.js version: 10.1.0
npm version: 5.0.3
dialogflow version: 0.4.0
Steps to reproduce
Follow the QuickStart steps on https://github.com/dialogflow/dialogflow-nodejs-client-v2
Create a file using the code on https://github.com/dialogflow/dialogflow-nodejs-client-v2#using-the-client-library
Change the project ID as intended.
Run node filename.js
After struggling with the grpc installation, I followed the steps for Windows users here.
Now I have the following error when trying to run the code, as well as with any sample on the /samples directory:
Auth error:Error: write EPROTO 21584:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:openssl\ssl\record\ssl3_record.c:252:
ERROR: { Error: 14 UNAVAILABLE: Getting metadata from plugin failed with error: write EPROTO
21584:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:openssl\ssl\record\ssl3_record.c:252:
at Object.exports.createStatusError (C:\Users\foo\Desktop\dialogflow\samples\node_modules\grpc\src\common.js:87:15)
at Object.onReceiveStatus (C:\Users\foo\Desktop\dialogflow\samples\node_modules\grpc\src\client_interceptors.js:1214:28)
at InterceptingListener._callNext (C:\Users\foo\Desktop\dialogflow\samples\node_modules\grpc\src\client_interceptors.js:590:42)
at InterceptingListener.onReceiveStatus (C:\Users\foo\Desktop\dialogflow\samples\node_modules\grpc\src\client_interceptors.js:640:8)
at callback (C:\Users\foo\Desktop\dialogflow\samples\node_modules\grpc\src\client_interceptors.js:867:24)
code: 14,
metadata: Metadata { _internal_repr: {} },
details: 'Getting metadata from plugin failed with error: write EPROTO 21584:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:openssl\\ssl\\record\\ssl3_record.c:252:\n' }
I searched for problems with SSL relating to both grpc and Google cloud projects, but didn't find any clue on what to do.
For me it seems to be related to the installation issues involving openssl and Windows as cited on the grpc installation page, so I'm considering testing the same steps on some Linux platform to see if the same problem occurs.
Could anyone help?
Update:
Running on an Ubuntu VM with
Node.js: 8.11.2
npm: 5.6.0
I got a similar error, here tried to run detect.js from /samples:
ubuntu@ubuntu-VirtualBox:~/dialogflow-nodejs-client-v2/samples$ node detect.js text -q "hi"
Sending query "hi"
E0523 16:58:36.419127299 10093 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number.
E0523 16:58:36.774246703 10093 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number.
E0523 16:58:37.124691019 10093 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number.
E0523 16:58:37.471887273 10093 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number.
E0523 16:58:38.195687058 10093 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number.
E0523 16:58:38.542390862 10093 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number.
Also I'm pretty sure the problem isn't with my credentials, because they work perfectly fine with the Python SDK.
I found out that the problem was my proxy: when I tried to use it on a network without the proxy, I had forgotten to temporarily deactivate the proxy configuration.
This is not an issue with the proxy from what I can tell. This issue appears to be with the node.js library itself. The TLS handshake needs to be initiated with an HTTP CONNECT message to the proxy so the destination server (google) can receive the TLS handshake initiation instead of your proxy. Otherwise you get a 400 or other error from your proxy server, which is invalid TLS protocol with respect to your client, resulting in the error. If the node.js grpc implementation really did send the CONNECT message to the proxy, then this all should work.
Greetings! We're tracking proxy support over in #20.
|
2025-04-01T06:38:23.538874
| 2022-06-11T10:29:29
|
1268229895
|
{
"authors": [
"wokalski"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5308",
"repo": "dialohq/inline-test-ppx",
"url": "https://github.com/dialohq/inline-test-ppx/pull/9"
}
|
gharchive/pull-request
|
Fix rethrowing errors
🎁
reject was not defined here 😄. I could use return Promise.reject(e) but opted for the simpler throw.
|
2025-04-01T06:38:23.546150
| 2023-07-26T21:18:47
|
1823189480
|
{
"authors": [
"juliodialpad"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5309",
"repo": "dialpad/dialtone-vue",
"url": "https://github.com/dialpad/dialtone-vue/pull/1102"
}
|
gharchive/pull-request
|
feat(avatar): extract initials from full name
:hammer_and_wrench: Type Of Change
[ ] Fix
[x] Feature
[ ] Refactoring
[ ] Documentation
:book: Description
Refactored Avatar to remove slot based usage in favor of prop based
Added iconSize property to be able to resize the avatar's icon
Removed slot related documentation
Migrated avatar's usage on components that were using DtAvatar
Changed imported images to public path to improve Percy visual tests stability with images
Included changes from #1097 and #1098
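For illustration, the "extract initials from full name" behaviour could look roughly like the following hypothetical sketch (not the component's actual implementation; the function name and the first-plus-last rule are assumptions):

```javascript
// Hypothetical sketch: take the first letter of the first and last name
// parts, upper-cased; single-word names yield a single initial.
function extractInitials(fullName) {
  const parts = String(fullName).trim().split(/\s+/).filter(Boolean);
  if (parts.length === 0) return '';
  const first = parts[0][0];
  const last = parts.length > 1 ? parts[parts.length - 1][0] : '';
  return (first + last).toUpperCase();
}

console.log(extractInitials('Ada Lovelace')); // AL
```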
:bulb: Context
We were having a lot of issues maintaining the avatar as it was initially created slot-based to make it more customizable.
:pencil: Checklist
[x] I have reviewed my changes
[x] I have added tests
[x] I have added all relevant documentation
[ ] I have validated components with a screen reader
[ ] I have validated components keyboard navigation
[ ] I have considered the performance impact of my change
[ ] I have checked that my change did not significantly increase bundle size
[ ] I am exporting any new components or constants in the index.js in the component directory
[ ] I am exporting any new components or constants in the index.js in the root
:crystal_ball: Next Steps
Migrate usages of DtAvatar, DtRecipeFeedItemRow, DtRecipeContactRow and DtRecipeContactInfo components on product.
:camera: Screenshots / GIFs
Seems like visual tests are running even though the PR is not approved yet; I'll take a look into that.
|
2025-04-01T06:38:23.572584
| 2023-07-23T21:51:22
|
1817308060
|
{
"authors": [
"diazona"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5310",
"repo": "diazona/setuptools-pyproject-migration",
"url": "https://github.com/diazona/setuptools-pyproject-migration/issues/39"
}
|
gharchive/issue
|
Separate static checks from pytest
I want to separate the static code checks (black, mypy, ruff) so that they don't run as part of pytest. Instead they should run either as part of pre-commit, or as their own standalone tox environments. We only need to run these checks once, not for every Python version as we do with pytest, and besides, the dependencies needed to run these tests as part of pytest are breaking compatibility with Python 3.6.
From what I've figured out so far, I think this should be done together with, or after, #20. Anyway I'm working on both.
|
2025-04-01T06:38:23.580754
| 2017-03-08T00:59:07
|
212606758
|
{
"authors": [
"Roam-Cooper",
"dickeyxxx"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5311",
"repo": "dickeyxxx/npm-register",
"url": "https://github.com/dickeyxxx/npm-register/issues/69"
}
|
gharchive/issue
|
Can't logout
npm logout
Results in
13:48:16 Fennec@VERGIL testpublish: npm logout
npm ERR! Darwin 15.6.0
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "logout"
npm ERR! node v6.9.1
npm ERR! npm v3.10.2
npm ERR! code E404
npm ERR! 404 Not found : -/user/token/191c8da2-07c5-43de-ac98-c8704cd915cd
npm ERR! Please include the following file with any support request:
npm ERR! /Users/Fennec/testpublish/npm-debug.log
Is this api just not supported?
not at the moment, feel free to submit a PR though, should be pretty simple
|
2025-04-01T06:38:23.593478
| 2024-06-03T07:37:15
|
2330400434
|
{
"authors": [
"didalgolab",
"pgebert"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5312",
"repo": "didalgolab/chatgpt-intellij-plugin",
"url": "https://github.com/didalgolab/chatgpt-intellij-plugin/issues/24"
}
|
gharchive/issue
|
Suffix v1/chat/completions added to custom server endpoint
Plugin Version
1.0.0-231
Actual Behaviour
I use a custom endpoint in the server settings https://ete-openai-experiments.openai.azure.com/openai/deployments/gpt-4/chat/completions?api-version=2024-02-15-preview and this worked fine until the latest version of the plugin.
Now it fails with the message
404 Not Found from POST https://ete-openai-experiments.openai.azure.com/openai/deployments/gpt-4/chat/completions/v1/chat/completions
Please notice the v1/chat/completions suffix that seems to be added.
Expected Behaviour
If I add a URL in the server settings configuration it gets applied as it is
Azure OpenAI services have got a separate configuration in the latest version of the plugin. Please go to Settings | Tools | ChatGPT Integration | Azure OpenAI and provide your endpoint configuration there: API Key, API Endpoint and Deployment Name.
And if you are not using previous endpoints for GPT-4 and GPT-3.5-Turbo you may also disable them to not show them in a tool window.
I hope this helps.
Great - thanks 👍 That resolved my issues. Thanks for this nice plugin.
|
2025-04-01T06:38:23.611190
| 2018-09-06T11:18:37
|
357609588
|
{
"authors": [
"TravisBuddy",
"codecov-io",
"theniceangel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5313",
"repo": "didi/cube-ui",
"url": "https://github.com/didi/cube-ui/pull/302"
}
|
gharchive/pull-request
|
Feat locale
Please make sure these boxes are checked before submitting your PR, thank you!
[x] Make sure you follow DiDi's contributing guide.
[x] Make sure you are merging your commits to dev branch.
[x] Add some descriptions and refer relative issues for you PR.
Codecov Report
Merging #302 into dev will decrease coverage by 0.04%.
The diff coverage is 91.52%.
@@ Coverage Diff @@
## dev #302 +/- ##
==========================================
- Coverage 92.89% 92.84% -0.05%
==========================================
Files 131 134 +3
Lines 2801 2853 +52
Branches 418 427 +9
==========================================
+ Hits 2602 2649 +47
- Misses 105 110 +5
Partials 94 94
| Impacted Files | Coverage Δ | |
| --- | --- | --- |
| src/components/picker/picker.vue | 82.45% <ø> (ø) | :arrow_up: |
| src/components/cascade-picker/cascade-picker.vue | 81.81% <ø> (ø) | :arrow_up: |
| src/components/date-picker/date-picker.vue | 98.73% <ø> (ø) | :arrow_up: |
| src/modules/locale/index.js | 100% <100%> (ø) | |
| src/modules/time-picker/index.js | 100% <100%> (ø) | :arrow_up: |
| src/components/time-picker/time-picker.vue | 89.41% <100%> (+0.52%) | :arrow_up: |
| src/modules/date-picker/index.js | 100% <100%> (ø) | :arrow_up: |
| src/components/action-sheet/action-sheet.vue | 90% <100%> (+1.11%) | :arrow_up: |
| src/modules/select/index.js | 100% <100%> (ø) | :arrow_up: |
| src/components/dialog/dialog.vue | 96.66% <100%> (+0.11%) | :arrow_up: |
| ... and 13 more | | |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f0d64b8...d9498e8. Read the comment docs.
Hey @theniceangel,
Something went wrong with the build.
TravisCI finished with status errored, which means the build failed because of something unrelated to the tests, such as a problem with a dependency or the build process itself.
View build log
TravisBuddy Request Identifier: 0c3648f0-cd1d-11e8-9706-8d7bf71fb7b5
|
2025-04-01T06:38:23.623557
| 2021-04-01T19:08:23
|
848717858
|
{
"authors": [
"bmwill",
"bors-libra",
"vgao1996"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5314",
"repo": "diem/diem",
"url": "https://github.com/diem/diem/pull/8114"
}
|
gharchive/pull-request
|
[language] remove unused diem-types dependency from disassembler
This removes the (unused) diem-types dependency from crate disassembler.
/land
@bmwill :exclamation: Unable to run the provided command on a closed PR
/land
@vgao1996 :exclamation: Unable to run the provided command on a closed PR
Unable to run the provided command on a closed PR
@bmwill this doesn't look quite right
Let me try to close and then reopen this
/land
Looks like it works. Hmm... maybe bors didn't add this to In Review in the first place?
:broken_heart: Test Failed - ci-test-success
/land
|
2025-04-01T06:38:23.633065
| 2017-07-08T09:11:43
|
241437938
|
{
"authors": [
"Thomasdezeeuw",
"agersant",
"sgrif",
"tingfeng-key",
"weiznich"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5315",
"repo": "diesel-rs/diesel",
"url": "https://github.com/diesel-rs/diesel/issues/1007"
}
|
gharchive/issue
|
Support for order by random()
Setup
Versions
Rust: 1.18
Diesel: 0.14.0
Database: SQLite
Operating System Windows
Feature Flags
diesel: sqlite
diesel_codegen: sqlite
Problem Description
There is no way to randomly sort the results of a select statement.
What are you trying to accomplish?
Returning a random subset of rows from a table. (i.e. SELECT * FROM my_table ORDER BY RANDOM() LIMIT 20)
I know it's possible to work around this with my_table.order(sql::<types::Bool>("RANDOM()")).load(connection) but this seems like a useful feature to support in a safe manner.
Checklist
[x] I have already looked over the issue tracker for similar issues.
You can use sql_function! or no_arg_sql_function! for this. Generally we want to avoid exporting every possible function in SQL from Diesel, since it's trivial to declare the ones that you want.
Thanks!
It took me too long to figure this out so this is the code required:
no_arg_sql_function!(RANDOM, (), "Represents the sql RANDOM() function");
// Usage, using the post schema from the getting started guide.
let results = posts
.order(RANDOM)
.limit(5)
.load::<Post>(&*connection)
.expect("unable to load posts");
Which will generate the following query:
SELECT * FROM posts ORDER BY RANDOM() LIMIT 5
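Outside Diesel, the SQL this relies on can be sanity-checked directly against SQLite; a minimal sketch (table and column names are illustrative, not taken from this thread's schema):

```python
import sqlite3

# In-memory SQLite database with an illustrative `posts` table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany(
    "INSERT INTO posts (title) VALUES (?)",
    [("post %d" % i,) for i in range(20)],
)

# The same shape of query Diesel builds from .order(RANDOM).limit(5):
rows = conn.execute(
    "SELECT id, title FROM posts ORDER BY RANDOM() LIMIT 5"
).fetchall()

print(len(rows))  # always 5, returned in a random order
```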
@agersant How do you solve this problem? (The version I am using now is 1.4.2)
no_arg_sql_function!(
random,
sql_types::Integer,
"Represents the SQL RANDOM() function"
);
let results = table
.limit(10)
.order(random)
.load(connection);
I used this code in version 1.4.2, but it still reports an error.
pub fn query_by_random_order_by_id_desc(conn: &MysqlConnection, category_id_data: i32, limit_num: i64) -> Result<Vec<Self>, Error> {
no_arg_sql_function!(random, sql_types::Integer,"Represents the SQL RANDOM() function");
albums::table
.order(random)
.filter(category_id.eq(category_id_data))
.filter(status.eq(1))
.limit(limit_num)
.load::<Self>(conn)
}
@tingfeng-key Our issue tracker is used to track bugs and feature request. For asking questions please use our gitter channel
|
2025-04-01T06:38:23.675137
| 2020-01-31T07:35:37
|
557964729
|
{
"authors": [
"Elianne",
"oystein-asnes"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5316",
"repo": "difi/dcat-ap-no",
"url": "https://github.com/difi/dcat-ap-no/issues/120"
}
|
gharchive/issue
|
Create spdx:algorithm
#52
"Range: spdx:checksumAlgorithm_sha1 Cardinality: 1..1 This property identifies the algorithm used to
produce the subject Checksum. Currently, SHA-1 is the only supported algorithm. It is anticipated that other algorithms will be supported at a later time."
Applies to version 1.1
|
2025-04-01T06:38:23.681151
| 2024-08-06T07:12:33
|
2450126212
|
{
"authors": [
"seanes"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5317",
"repo": "digdir/dialogporten-frontend",
"url": "https://github.com/digdir/dialogporten-frontend/issues/921"
}
|
gharchive/issue
|
feat: add "all organisations" to global menu
When the option "all organisations" in the party dropdown (situated to the left of the filters) is chosen, this is not reflected in the global menu (cf. picture). It is also not possible to choose "all organisations" from the global menu.
Missing design?
Put on hold until specified in more detail. Removed from PartyDropdown.
|
2025-04-01T06:38:23.698092
| 2024-12-09T13:01:15
|
2726991797
|
{
"authors": [
"lukew-cogapp",
"martinleveille"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5318",
"repo": "diginov/craft-sentry-logger",
"url": "https://github.com/diginov/craft-sentry-logger/issues/15"
}
|
gharchive/issue
|
Question: Only sending specific errors to Sentry
Hello,
We have a module that collects and deals with lots of personal information (including payment data).
For our staging site, all data is obfuscated or test data, so there's no issue in sending every error to Sentry.
However, for the production site, we would only want to send specific errors (either via something like an includedExceptions array, or the ability to only send errors we throw manually, e.g. MyCustomSentryError). This is because we cannot guarantee that personal data isn't contained within an error until we go through a scrubbing process, and it's a large code base.
Is something like this possible with this plugin? I note that the craft-sentry plugin seems to support this when writing module code, as it explicitly needs the Sentry error to be thrown.
Any ideas welcome!
Thanks.
Hello @lukew-cogapp,
I believe you can achieve the desired result with the categories parameter. This parameter is empty by default, which means all exception categories are sent to Sentry. If you specify classes in this parameter, only the mentioned exception categories will be sent to Sentry.
https://github.com/diginov/craft-sentry-logger?tab=readme-ov-file#categories
Since this plugin is a direct extension of a Yii Log Target, you can use all the base parameters.
https://www.yiiframework.com/doc/api/2.0/yii-log-target
|
2025-04-01T06:38:23.703255
| 2021-11-18T13:30:51
|
1057360894
|
{
"authors": [
"glenrobson",
"jpadfield",
"stephenwf",
"tomcrane"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5319",
"repo": "digirati-co-uk/iiif-manifest-editor",
"url": "https://github.com/digirati-co-uk/iiif-manifest-editor/issues/24"
}
|
gharchive/issue
|
Use IIIF terminology
For the training it is useful if things are referred to as canvas and manifest etc. This might be at odds with making the tool universally accessible, especially to those less familiar with IIIF, but for the training it's useful if the terms mentioned in the tool are the ones we've already taught them from the specifications.
In v1 this was configurable, so some users could see terms appropriate for making an exhibit, and others would see the IIIF terms.
I think this is related to i18n but not quite the same - a French training course would use localised text but not translate "Canvas", for example.
So formal model terms maybe should have separate config.
Would it not just be easier to ensure that all terms and titles can be controlled by the i18n process - with the default being the standard English IIIF terms? If a use case requires simplified or domain-specific terms then a single config file can be created to control it. This provides consistency of programming and administration.
TODO - decide what this looks like in https://github.com/digirati-co-uk/iiif-manifest-editor/wiki/Configuration
The UI strings can be included in Language Maps; there should be a special flag to indicate where a string is a label for an actual property value, e.g., requiredStatement, so they can be left intact if needed.
Different Apps can have their own overriding strings. E.g., a bespoke slide show editor.
There is a new sub-package of the Manifest Editor that specifically tries to match the IIIF specification with a static definition that can be read in applications.
https://codesandbox.io/s/iiif-meta-27v3q9?file=/index.html
Contains things like:
Required/recommended properties per resource
Valid rights statements + descriptions of each
Link + summary of each property (from IIIF specification)
This can be used to show contextual information inline and relate it back to the specification. Labels will be in the i18n configuration as previously mentioned.
|
2025-04-01T06:38:23.704238
| 2024-01-10T00:45:09
|
2073394286
|
{
"authors": [
"digiserf01"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5320",
"repo": "digiserf01/srp",
"url": "https://github.com/digiserf01/srp/pull/1"
}
|
gharchive/pull-request
| |
2025-04-01T06:38:23.706498
| 2020-05-19T15:22:25
|
621069343
|
{
"authors": [
"andreolf-da",
"digitalasset-cla"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5321",
"repo": "digital-asset/daml-cheat-sheet",
"url": "https://github.com/digital-asset/daml-cheat-sheet/pull/9"
}
|
gharchive/pull-request
|
Add google analytics tag
@bame-da asked me to do this
thanks @anthonylusardi-da for the help!
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T06:38:23.750955
| 2016-02-10T09:49:21
|
132651152
|
{
"authors": [
"Forshortmrmeth",
"jonscottclark",
"maximal"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5322",
"repo": "digitalBush/jquery.maskedinput",
"url": "https://github.com/digitalBush/jquery.maskedinput/issues/351"
}
|
gharchive/issue
|
Edit package.json config
Hi.
Can you change the main file in package.json from Gruntfile.js to src/jquery.maskedinput.js? This is needed to build a bundle using webpack.
:+1:
It would be useful if we could build a bundle using webpack.
Please fix this. Anyone hoping to use this with a module bundler is out of luck.
In Browserify, you can just require the file directly until there's a fix.
require('jquery.maskedinput/src/jquery.maskedinput.js');
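For webpack users, a comparable workaround (a hypothetical sketch assuming webpack's resolve.alias option, not an official fix from this repo) is to alias the package name to its source file:

```javascript
// webpack.config.js — hypothetical sketch of the equivalent webpack workaround.
// The alias redirects imports of the package name to the unbundled source file,
// mirroring the Browserify workaround above.
const config = {
  resolve: {
    alias: {
      'jquery.maskedinput': 'jquery.maskedinput/src/jquery.maskedinput.js'
    }
  }
};

module.exports = config;
```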
|
2025-04-01T06:38:23.780453
| 2022-02-01T08:07:22
|
1120351458
|
{
"authors": [
"bmuramatsu",
"sethduffin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5323",
"repo": "digitalcredentials/learner-credential-wallet",
"url": "https://github.com/digitalcredentials/learner-credential-wallet/pull/120"
}
|
gharchive/pull-request
|
Updated spacing to match designs
There were a couple of spacing issues that have been fixed to match the original designs.
Adobe XD Designs - https://xd.adobe.com/view/6f1463ae-cefd-4b5e-8641-474ab7880353-3a47
Before
After
Which screens? While not matching the designs, I'm not sure the implemented ones looked bad...
Brandon
@bmuramatsu There were a couple of small padding issues that Brandon Findlay had pointed out to me. These were quick fixes that took no longer than 5 min.
👍
|
2025-04-01T06:38:23.802328
| 2024-06-05T06:04:37
|
2334979460
|
{
"authors": [
"RamakrushnaBiswal"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5324",
"repo": "digitomize/digitomize",
"url": "https://github.com/digitomize/digitomize/pull/1011"
}
|
gharchive/pull-request
|
feat: Required logos added in the supported by section
Pull Request Details
Description
Required logos added in the supported by section
Fixes
#989 solved
Type of PR
[x] Bug fix
[x] Feature enhancement
[ ] Documentation update
[ ] Refactoring
[ ] Other (specify): _______________
Summary
[Summarize the changes made in this PR.]
Screenshots (if applicable)
Additional Notes
[Include any additional information or context that might be helpful for reviewers.]
Checklist
[x] I have read and followed the Pull Requests and Issues guidelines.
[x] The code has been properly linted and formatted using npm run lint:fix and npm run format:fix.
[x] I have tested the changes thoroughly before submitting this pull request.
[x] I have provided relevant issue numbers, snapshots, and videos after making the changes.
[x] I have not borrowed code without disclosing it, if applicable.
[x] This pull request is not a Work In Progress (WIP), and only completed and tested changes are included.
[x] I have tested these changes locally.
[x] My code follows the project's style guidelines.
[x] I have updated the documentation accordingly.
[x] This PR has a corresponding issue in the issue tracker.
Summary by CodeRabbit
New Features
Added new partner logos to the Home section with direct links to Netlify, Google Cloud, and Holopin websites.
@pranshugupta54 LGTM 💥
|
2025-04-01T06:38:23.846191
| 2017-12-19T17:24:12
|
283309407
|
{
"authors": [
"dannyroberts",
"snopoke"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5325",
"repo": "dimagi/commcarehq-ansible",
"url": "https://github.com/dimagi/commcarehq-ansible/pull/1179"
}
|
gharchive/pull-request
|
Merge commcare hq deploy
Sorry for the long diff here. To review this PR
Consider your thoughts about merging commcare-hq-deploy into commcarehq-ansible/fab
Then skip down all the way to the bottom and review the commits starting with https://github.com/dimagi/commcarehq-ansible/pull/1179/commits/09cb79714574556f51dc2c0782c7ebcd4dd48293.
See whether you think https://github.com/dimagi/commcare-hq/pull/18959 is an okay bridge for easing the change.
When this is merged, people will be able to log into the control machine and (assuming they've already run the 2 steps to set it up) can run
update-code
and then (and every time they log in thereafter) they can run all our fab commands like we always have (without changing directories or anything).
Locally the change is going to be a little less automated but it's basically
workon ansible
cd commcarehq-ansible
git pull
pip install -r fab/requirements.txt
./control/check_install.sh
cd fab
and then you can run the fab command. Thereafter to run fab commands you must workon ansible and choose one of three options:
enter the fab directory:
cd commcarehq-ansible/fab
fab production deploy
use the -f option on fab:
fab -f ~/.commcare-cloud/repo/fab/fabfile.py production deploy
put an alias in your bash profile:
echo "alias fab='fab -f ~/.commcare-cloud/fab/fabfile.py'" >> ~/.bash_profile
# forever after, from any directory:
fab production deploy
I prefer the third one, which is what's done on control machines, and I suggest that in https://github.com/dimagi/commcare-hq/pull/18959/files
👍 from me. I think this makes a lot of sense. I've been thinking about making it a submodule for a while, but I think this is better.
This also breaks ground on getting the services deploy out of HQ deploy. I could also see us breaking the environments.yml file up and putting it with the other env vars files.
Yes! Exactly. Glad to hear we're on the same page. Those were also two of the top things on my wish-list related to this change.
It just occurred to me that we could possibly do this in two steps, where we merge this one first, and then merge the commcare-hq counterpart of this a week or two later. That way we can make sure fab is working in commcarehq-ansible for a number of people before pulling out the rug.
The only thing we'd have to do is to remember to make all changes in the interim to the current commcare-hq-deploy submodule and then I could make sure they get more or less continuously merged into this repo. Not sure it's worth the effort, but just wanted to throw that out there and highlight that merging this would initially be a very quiet change (but would mean committing to a louder change within a couple weeks).
your rollout plan seems good. I doubt much will change in the next few days
just tested this on icds-new and it worked fine:
(ansible) skelly@kafka0:~/commcarehq-ansible/fab$ fab icds-new restart_services
@dannyroberts FYI there are some commits in commcare-hq-deploy that need to be merged in here.
Looked at https://github.com/dimagi/commcare-hq-deploy/commits/master (latest commit on Dec. 21) and it's all been merged in; must have done that last week after you wrote this but before I read it.
|
2025-04-01T06:38:23.872968
| 2024-01-19T03:28:32
|
2089491171
|
{
"authors": [
"boilinabag"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5326",
"repo": "dimdenGD/OldTweetDeck",
"url": "https://github.com/dimdenGD/OldTweetDeck/issues/160"
}
|
gharchive/issue
|
OTD stopped working
looks like i'm not the only one. about 7pm PST, stopped loading the columns.
I see you're on it. Thanks.
|
2025-04-01T06:38:23.909243
| 2015-02-16T01:30:21
|
57756414
|
{
"authors": [
"EspadaV8",
"devmark",
"jasonlewis"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5327",
"repo": "dingo/api",
"url": "https://github.com/dingo/api/issues/363"
}
|
gharchive/issue
|
API routes fail with Debugbar enabled
I've just installed barryvdh/laravel-debugbar and attempting to load a page with an API named route now fails with a 'Route not defined' error.
I'm filing the bug here and with the debugbar (https://github.com/barryvdh/laravel-debugbar/issues/290) since I'm not sure which project is doing something funky.
Sample code? Works fine here mate.
maybe try to load api first.
'Dingo\Api\Provider\ApiServiceProvider',
'Barryvdh\Debugbar\ServiceProvider',
I've created a minimal test case laravel install that shows the issue
https://github.com/EspadaV8/dingo-debugbar/tree/develop
All good I'll give this a run when I can. Cheers.
I'm no longer supporting Laravel 4.x. Try this in either Lumen or Laravel 5 and report back if there's still issues. Cheers mate.
|
2025-04-01T06:38:23.943590
| 2022-11-30T13:31:14
|
1469642769
|
{
"authors": [
"julasamer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5328",
"repo": "diogopribeiro/Limn",
"url": "https://github.com/diogopribeiro/Limn/issues/12"
}
|
gharchive/issue
|
Limn(of:) an URL causes an EXC_BAD_ACCESS
In Limn+Objc.swift line 10.
Test:
Limn(of: URL(string: "http://foo.com")).dump()
Not sure what's so evil about reading the _urlString ivar, but it doesn't work.
Nice, thank you!
|
2025-04-01T06:38:24.000433
| 2020-11-15T02:53:56
|
743166815
|
{
"authors": [
"130s",
"dirk-thomas"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5329",
"repo": "dirk-thomas/vcstool",
"url": "https://github.com/dirk-thomas/vcstool/pull/194"
}
|
gharchive/pull-request
|
[log] merge-only option
Problem, approach
See https://github.com/dirk-thomas/vcstool/issues/174
Example operation and output
# git checkout 0.1.27
# vcs-log --merge-only
=== . (git) ===
commit d82cb6ec3be31533dc90979e083072c6440c68d3
Merge: dcee1a0 c174cd8
Author: Dirk Thomas<EMAIL_ADDRESS>
Merge pull request #61 from dirk-thomas/use_pytest
commit 68a2451d33c4f4de6f148de57439f08f0330d898
Merge: f7f008a 93eb8c6
Author: Dirk Thomas<EMAIL_ADDRESS>
Merge pull request #58 from dirk-thomas/support_nested_repos
commit f7f008a1279dce9b213c9cfc9e72a2f1fbf73c33
Merge: 3d98c90 3d8d011
Author: Dirk Thomas<EMAIL_ADDRESS>
Merge pull request #59 from dirk-thomas/flake8
Open question
This PR at the time of writing just adds a simple wrapper for the underlying vcs tool, without new advantages. The design with a custom verb to delegate commands to each vcs tool seems to be the right approach. I can agree to close this PR if a simple change like what's in this PR is undesirable.
What I envisioned originally was to take advantage of vcstool's unique functionality and run a command against multiple local repos. Something like this (obviously the argument oneline doesn't yet exist, so the command failed):
# vcs-log --merge-only --oneline
usage: vcs log [-h] [-l N] [--limit-tag TAG | --limit-untagged] [--merge-only] [--verbose] [--debug] [-s] [-n] [-w N] [--repos] [paths [paths ...]]
vcs log: error: unrecognized arguments: --oneline
Then I found the custom verb lets you easily achieve what I wanted:
# vcs-custom --git --args log --oneline --merges -n 10
...
=== ./ros_tutorials (git) ===
f40abd5 Merge pull request #31 from JavaJeremy/ThetaBugfix
626a3e5 Merge pull request #35 from jproft/kinetic-devel
f21c4d2 Merge pull request #29 from ros/fix_compiler_warnings_jade
d6d11f3 Merge pull request #27 from gusmonod/patch-1
36715e2 Merge pull request #23 from adamheins/jade-devel
9a1f606 Merge pull request #22 from ros/jade-devel-add-turtle
:
=== ./vcstool (git) ===
d82cb6e Merge pull request #61 from dirk-thomas/use_pytest
68a2451 Merge pull request #58 from dirk-thomas/support_nested_repos
f7f008a Merge pull request #59 from dirk-thomas/flake8
3d98c90 Merge pull request #55 from dirk-thomas/style
3a687d2 Merge pull request #54 from dirk-thomas/convert_version_to_string
:
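The flag's translation into the underlying git invocation can be sketched as follows (illustrative Python, not vcstool's actual internals; the function name is made up):

```python
# Hedged sketch: how a --merge-only flag could map onto the
# `git log --merges` command run per repository, alongside the
# existing -l/--limit option and pass-through arguments.
def build_git_log_command(limit=None, merge_only=False, extra_args=()):
    cmd = ["git", "log"]
    if merge_only:
        cmd.append("--merges")
    if limit is not None:
        cmd.extend(["-n", str(limit)])
    cmd.extend(extra_args)
    return cmd

print(build_git_log_command(limit=10, merge_only=True, extra_args=["--oneline"]))
# → ['git', 'log', '--merges', '-n', '10', '--oneline']
```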
Thanks for the patch and apologies for the late merge.
|
2025-04-01T06:38:24.079239
| 2024-06-22T07:56:48
|
2367677255
|
{
"authors": [
"ArMot",
"junetried"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5330",
"repo": "discordlinux/feedback",
"url": "https://github.com/discordlinux/feedback/issues/64"
}
|
gharchive/issue
|
screen sharing DO NOT WORK in WAYLAND !
This is somewhat an old issue for me but it has not been fixed yet!!
It's been some time since modern Linux systems moved from X to the Wayland protocol, but I'm still not able to share my screen on Wayland!
People have to use the Firefox/Chrome web-app version of Discord so they can share their screen, show teammates/friends something, and then switch back to the desktop version! :)
That's kind of a terrible experience for users...
This repo is not for tracking client issues, and you're in the wrong place.
|
2025-04-01T06:38:24.093463
| 2018-11-19T17:03:58
|
382309917
|
{
"authors": [
"discosultan",
"meowxiik",
"nanodesu88",
"richard-hajek",
"romanov"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5331",
"repo": "discosultan/penumbra",
"url": "https://github.com/discosultan/penumbra/issues/18"
}
|
gharchive/issue
|
.NET Core Support Possible?
I know you stated that you will not update the library, so I'm not asking whether you will do it; I'm asking how hard it would be. If it's easy enough I should be able to do it myself and possibly fork this one, but if it's hard or near impossible without a complete rewrite, I won't attempt it, because I would most likely fail, to be honest.
Thanks
Currently, MonoGame relies on WinForms implementations of .NET Framework or Mono for desktop GUI rendering. .NET Core 2.1 (latest as of this post) does not support WinForms, making it impossible to support it.
However, .NET Core 3.0 will introduce WinForms support for Windows platform. If MonoGame decides to support it, I'd be more than happy to move this code base over to .NET Core. Main benefits would be faster runtime and vastly improved .csproj format to simplify code.
Wait, if I understand correctly, you're saying that MonoGame itself cannot currently run on .NET Core?
Okay, not to argue with you, but I am running a .NET Core MonoGame game on Linux right next to me, and MonoGame download links list Linux as well link
It's cool to see that there's working third party effort for .NET Core support!
The Linux download link on the official page is the Mono version of MonoGame.
It's hard to tell what problems may rise during the port. I'm mostly worried about the content pipeline, as Penumbra needs to compile custom shaders for each supported platform.
If it is any help, when I attempted to build Penumbra in .NET Core it compiled successfully, don't know if that included the shaders or not though.
Well, keep us updated, whether you plan to port or not!
Personally, I will wait for official support in MonoGame 3.8 before dabbling into it. The whole .NET Core story around MonoGame is still unclear for me.
Well what.
If you install nuGet package Penumbra.DesktopGL it works out of the box.
Just like that, no nothing needed.
Your thing does work for .NET Core.
After some testing, I have concluded that it works reliably; I had 0 problems.
Edit: Tested on HelloPenumbra, saw shadow thing rotating, the whole window was slightly blue, I suppose that's what it is supposed to do
Edit2: Well someone is definitely a genius, don't actually know if specifically you.
How do you run MonoGame on .NET Core?
Install MonoGame like you normally would
Create new project
Add Penumbra.DesktopGL nuGet package
Should work now
Penumbra.DesktopGL is not working with Monogame UWP Core project:
System.TypeLoadException: 'Could not load type 'MonoGame.Framework.GameFrameworkViewSource`1' from assembly 'MonoGame.Framework, Version=<IP_ADDRESS>1, Culture=neutral, PublicKeyToken=null'.'
Closing, as this has been supported for a while already.
|
2025-04-01T06:38:24.115486
| 2012-01-27T02:21:23
|
2989599
|
{
"authors": [
"borfast",
"dcramer"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5332",
"repo": "disqus/disqus-php",
"url": "https://github.com/disqus/disqus-php/issues/14"
}
|
gharchive/issue
|
Minimum required PHP version
README.rst mentions PHP 5.3 as the minimum required version but from a quick code analysis with this neat little tool, it's actually 5.2 and just for json_decode().
For all the rest you only need PHP 5.1. Could you please confirm this and, if correct, change the docs accordingly?
Thanks! :)
5.3 might have actually been a guess. Will try to review and confirm.
Neat tool :)
Thanks for looking into it :)
I forgot to mention this but since you provide an alternative json implementation, the minimum required version is effectively 5.1.
I also went through the code manually to confirm the tool's result and I think it's right, I don't see anything that requires PHP over 5.1 (or 5.3 for json_decode()).
|
2025-04-01T06:38:24.142184
| 2023-05-09T10:25:47
|
1701806114
|
{
"authors": [
"codecov-commenter",
"thaJeztah"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5333",
"repo": "distribution/distribution",
"url": "https://github.com/distribution/distribution/pull/3905"
}
|
gharchive/pull-request
|
update to go1.19.9
Added back minor versions in these, so that we have a somewhat more
reproducible state in the repository when tagging releases.
Codecov Report
Patch and project coverage have no change.
Comparison is base (8900e90) 56.76% compared to head (b320abd) 56.76%.
Additional details and impacted files
@@ Coverage Diff @@
## main #3905 +/- ##
=======================================
Coverage 56.76% 56.76%
=======================================
Files 106 106
Lines 10681 10681
=======================================
Hits 6063 6063
Misses 3944 3944
Partials 674 674
We'll probably need an updated golangci-lint;
> [stage-2 1/1] RUN --mount=type=bind,target=. --mount=type=cache,target=/root/.cache --mount=from=golangci-lint,source=/usr/bin/golangci-lint,target=/usr/bin/golangci-lint golangci-lint run:
#17 10.70 panic: load embedded ruleguard rules: rules/rules.go:13: can't load fmt
#17 10.70
#17 10.70 goroutine 1 [running]:
#17 10.70 github.com/go-critic/go-critic/checkers.init.22()
#17 10.70 github.com/go-critic/go-critic@v0.6.2/checkers/embedded_rules.go:46 +0x4b4
temporarily rebased on top of https://github.com/distribution/distribution/pull/3906 to see if things are looking good after that
(I'll rebase this one once the other PR is merged to remove the golangci-lint update commits)
|
2025-04-01T06:38:24.155165
| 2019-02-21T14:19:26
|
412947596
|
{
"authors": [
"jelovirt",
"masofcon",
"qvrijt",
"robander"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5334",
"repo": "dita-ot/dita-ot",
"url": "https://github.com/dita-ot/dita-ot/issues/3232"
}
|
gharchive/issue
|
Coderef using keyref also prints temporary file name
Expected Behavior
I am using a codeblock with coderef to a C# file pointed to by a keyref.
Only the file content is listed in the codeblock.
Following snapshot shows output using direct href path to the same file.
Actual Behavior
After the codeblock, the file name of the temporary file is printed.
Environment
DITA-OT version: 3.2.1 out-of the box, no external plugins
Operating system and version: Windows 10
How did you run DITA-OT? calling dita.bat
Transformation type: PDF, using FOP
My bookmap project used:
coderef-test.zip
Reproduced in 3.3.4 with both HTML and PDF.
In HTML output, the last line of the code block includes the full path name of the snippet:
}snippets/csharp-regions-simple.cs
Same in DITA-OT version 3.4.1
Do you have any plans to fix this?
Reproduced in 3.3.4 with both HTML and PDF.
In HTML output, the last line of the code block includes the full path name of the snippet:
}snippets/csharp-regions-simple.cs
Same in DITA-OT version 3.4.1
Do you have any plans to fix this?
Unable to reproduce with 3.5. @robander, how about you?
|
2025-04-01T06:38:24.156107
| 2016-06-01T15:03:14
|
157930304
|
{
"authors": [
"jelovirt",
"robander"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5335",
"repo": "dita-ot/dita-ot",
"url": "https://github.com/dita-ot/dita-ot/pull/2404"
}
|
gharchive/pull-request
|
Add support for configuring tm scope in PDF #1245
Add mode to control whether trademark symbol is created.
:+1:
|
2025-04-01T06:38:24.161685
| 2020-10-22T03:30:39
|
727014893
|
{
"authors": [
"heri2468",
"palakdavda22",
"tyagi619"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5336",
"repo": "div-bargali/Data-Structures-and-Algorithms",
"url": "https://github.com/div-bargali/Data-Structures-and-Algorithms/issues/622"
}
|
gharchive/issue
|
Graph Dfs Application
Title - Count the number of islands in a grid
what will change - Code will be added
Type of Issue - Application of DFS
Please add/delete options that are not relevant.
[x] Adding New Code
[x] Improving Code
[x] Improving Documentation
[x] Bug Fix
Programming Language
Please add/delete options that are not relevant.
[] Python
[x] C++
[] Java
[] C
[] Go
[] Other language
Self Check
Ask for issue assignment before making Pull Request.
Add your file in the proper folder
Clean Code and Documentation for better readability
Add Title and Description of the program in the file
:star2: Star it :fork_and_knife:Fork it :handshake: Contribute to it!
Happy Coding,
Please assign me (@div-bargali @jai2dev) for this task; this is an interview question and a nice application of DFS.
Can I solve this issue?
I would also like to add some variations to this question; please assign it to me so that I can add more relevant material here.
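For reference, the island-counting application being requested can be sketched with an iterative DFS. The issue is tagged C++, but this illustration uses Python; the function name and interface are assumptions, not the final contribution:

```python
def count_islands(grid):
    """Count connected groups of 1s (4-directional) in a binary grid
    using an explicit-stack depth-first search."""
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])
    seen = set()
    islands = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                islands += 1          # a new, unvisited land cell starts an island
                stack = [(r, c)]
                while stack:          # flood-fill the whole island via DFS
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and grid[ny][nx] == 1:
                            stack.append((ny, nx))
    return islands
```

For example, `count_islands([[1, 1, 0], [0, 0, 0], [0, 0, 1]])` returns 2.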
|
2025-04-01T06:38:24.163644
| 2015-10-08T15:19:10
|
110474340
|
{
"authors": [
"FinalAngel",
"mkoistinen"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5337",
"repo": "divio/django-cms",
"url": "https://github.com/divio/django-cms/issues/4566"
}
|
gharchive/issue
|
Documentation for 3.2 features...
This should include apphooks_reload in core, content creation wizards, etc. (see: https://github.com/divio/django-cms/pull/4563)
Also, replace any references of "click" et al, in the docs to something more touch-friendly.
Finally, review this closed PR for text changes: https://github.com/divio/django-cms/pull/4551
done by @evildmp
|
2025-04-01T06:38:24.166295
| 2016-07-19T21:40:52
|
166442452
|
{
"authors": [
"evildmp",
"pdbethke"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5338",
"repo": "divio/django-cms",
"url": "https://github.com/divio/django-cms/issues/5555"
}
|
gharchive/issue
|
Make note of need for migrations in plugin tutorial
Just a suggestion from a user working to develop a new CMS plugin that uses a model -- it may seem intuitive to some Django programmers, but you might want to add a line in the http://docs.django-cms.org/en/release-3.3.x/how_to/custom_plugins.html#storing-configuration tutorial noting that you should run manage.py makemigrations and manage.py migrate following your model creation to properly set up the configuration fields - it tripped me up a little before I realized what the issue was when I went to remove or save the newly created plugin.
Thanks, fixed in https://github.com/divio/django-cms/pull/5566.
|
2025-04-01T06:38:24.174883
| 2016-04-06T18:13:26
|
146393382
|
{
"authors": [
"bank32",
"divmain",
"rmnbrd",
"stoivo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5339",
"repo": "divmain/GitSavvy",
"url": "https://github.com/divmain/GitSavvy/issues/384"
}
|
gharchive/issue
|
GitSavvy doesn't work after updated
After installing GitGutter, my Sublime can't use GitSavvy anymore. It is not found in my command palette. So I tried to remove GitGutter and reinstall GitSavvy, but it still doesn't work.
Please help me as soon as possible; GitSavvy is a part of my life now.
Same issue here, GitSavvy is totally absent from the command palette since the last update :disappointed:
There is some major bug now, I am working on fixing it now.
@bank32 @rmnbrd, I just pushed out an update that will disable the offending code as a temporary work-around. Syntax highlighting will disabled in inline-diff views, but we'll address that regression separately. If you see this in the next few minutes, it would be super helpful if you could pull down master and confirm that things are working for you. Sorry for the bad update!
fixed it. Thanks a lot.
@divmain I have another question. I fixed it for now by a less simple method, but I want to know when we can install the latest version via Sublime Package Control?
@bank32 it should be going out now. You may need to either 1) restart Sublime, or 2) run Package Control: Upgrade Package in the command palette. There is a delay between when I push a new tag in my Git repo and when packagecontrol.io picks up that changes and starts pushing it out to clients.
It's working like a charm now.
Thank you guys for your great reactivity! You rock! :+1:
|
2025-04-01T06:38:24.176815
| 2017-08-10T13:25:39
|
249345408
|
{
"authors": [
"asfaltboy",
"hanoii"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5340",
"repo": "divmain/GitSavvy",
"url": "https://github.com/divmain/GitSavvy/issues/732"
}
|
gharchive/issue
|
Is it possible to revert from gitsavvy?
I didn't find a way... I'd guess it could be done from the graph window.
I don't believe we support it, and I don't use it personally.
However this is a nice feature request, to add a git: revert command, should be easy to implement if you want to give it a go and submit a Pull Request we'll be happy to assist.
Meanwhile, as a workaround I can suggest using custom commands.
|
2025-04-01T06:38:24.209629
| 2022-12-08T15:20:03
|
1484865337
|
{
"authors": [
"Trondtr",
"bbqsrc",
"dylanhand",
"rueter",
"snomos"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5342",
"repo": "divvun/kbdgen",
"url": "https://github.com/divvun/kbdgen/issues/4"
}
|
gharchive/issue
|
Cyrillic-based keyboards not selectable on Mac
On Mac 12.5.1, I have had Erzya (myv) and Moksha (mdf) keyboards as well as several Latin-based keyboards Apurinã (apu), Lushootseed (lut), Tenino (tqn), Võro (vro).
After restarting my Mac today, 2022-12-08, I have been unable to select a Cyrillic-based keyboard, either (a) from the keyboard or (b) by manually attempting to select the language from the drop-down menu.
Upon adding a Mac Russian keyboard, both Erzya and Moksha mysteriously disappeared.
The same issue affects the keyboards: Mansi (mns), Kildin Saami, Khanty (kca)
I can confirm the behaviour reported: Today I installed the Mansi (mns) cyrillic-based keyboard, with the same result:
I installed the Mansi keyboard, and after a restart the Divvun installer claims it is installed, as expected. The keyboard, however, is nowhere to be found in the System menu for adding keyboards. DI lists the keyboard as installed ("No updates"), but to no avail. Latin-based keyboards (e.g. fao, rmn) work well; the problem seems to be with the Cyrillic-based ones.
The issue is somewhat acute, since we work on mns this week, but it is of course a problem also generally speaking.
Further confirmed by me using Mansi as a test case. @zoomix could you have someone look at this?
Details:
macOS 13.3.1 (a)
bundle properly installed as /Library/Keyboard\ Layouts/no.uit.giella.keyboards.mns.keyboardlayout.mns.bundle using Divvun Manager
not listed or visible at all in System Preferences (System Preferences > Keyboards > Input sources)
@dylanhand @SteffenErn — sorry, I forgot one step: you need to switch to the nightly channel in the app settings. After that the All repositories view will show Mansi near the bottom (it is written in Cyrillic, and the Cyrillic entries are towards the bottom - the actual packages are written in Latin, so should be no problem finding it).
We found the issue.
This occurs when the .keylayout file contains a self-closing tag, such as:
<actions />
The .keylayout file can be found in /Library/Keyboard\ Layouts/<your language>.bundle/Contents/Resources
The problem doesn't occur if this tag is either omitted or re-written as:
<actions></actions>
Our current plan to fix this is to fork https://github.com/bbqsrc/xmlem and add an option to disable self-closing tags.
Our current plan to fix this is to fork https://github.com/bbqsrc/xmlem and add an option to disable self-closing tags.
Feel free to make a pull request and i can publish the fixes on cargo.
So, what next? I understand the pull request is now done.
From my perspective I would then like to have a working Mansi keyboard on PC (Mac and Windows), my coworkers are typing in hex codes. Does the code in keyboard-xxx (here: keyboard-mns) need any changes, and if so, what changes?
@bbqsrc the option to disable self-closing tags ended up causing other issues. MacOS being persnickety 😄
The fix was instead to remove the <actions /> tag altogether if it contained no children when generating MacOS layouts.
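kbdgen itself is written in Rust, but the shape of that fix — dropping a childless actions element entirely rather than serializing it as a self-closing tag — can be sketched in Python with the standard library. This is only an illustration; it ignores the XML declaration and DOCTYPE that real .keylayout files carry:

```python
import xml.etree.ElementTree as ET

def drop_empty_actions(keylayout_xml: str) -> str:
    """Remove any <actions> element with no children, since macOS chokes
    on a self-closing <actions /> in a .keylayout file."""
    root = ET.fromstring(keylayout_xml)
    # Snapshot the tree first so removals don't disturb iteration.
    for parent in list(root.iter()):
        for child in list(parent):
            if child.tag == "actions" and len(child) == 0:
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")
```

A non-empty <actions> block is left in place; only the childless variant is removed.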
@Trondtr changes to keyboard-xxx repos should not be required for this fix to work on MacOS. I'm not sure about Windows though - are you having issue there too? If so please create an issue with details.
Next step is to merge this and then have kbdgen re-generate MacOS layouts so they're available in the nightly channel. Hopefully the merge will trigger that. Still learning the tool chain. Will merge (if no objections) and investigate how to deploy the layouts to nightly after lunch 😄
Please do merge.
@dylanhand rebuild of the keyboards do not happen automatically yet (cross-repo build deps have been planned, but not yet implemented). I will trigger new builds of the most critical keyboards as soon as the fix is merged and kbdgen has been rebuilt.
@snomos thanks for the info.
Just merged, so feel free to trigger new builds of the keyboards.
I was able to download and install Mns (Mansi) keyboard on my Mac M2 13.0.1
It seems to work fine in the command line, so I am happy.
The keyboards for mhr (Meadow & Eastern Mari), mrj (Hill Mari aka Western Mari), myv (Erzya), mdf (Moksha), kpv (Komi-Zyrian), yrk (Nenets), did not show up as possible keyboards to install.
@rueter all the mentioned keyboards have now been rebuilt to fix the issue. They also have a new version number. They should be available in Divvun Manager as updates.
For keyboards with a Sámi flag as menu item icon, it has been replaced with a best effort alternative. Feel free to suggest other flags or icons 😊
@snomos
The Cyrillic keyboards presently working are: myv, mdf, kpv
The mrj (Hill Mari aka Western Mari) keyboard is only partial. It contains none of the extras required for mrj, so we will have to work on that.
The mhr (Eastern & Meadow Mari) keyboard does not appear on my Ventura as a selectable, even after down loading it.
The udm keyboard, has a Saami flag and is, in fact, a Latin letter content. More work for us;)
Thanks for the feedback, I will look into the various points. Further discussion should take place in new issues specific for the relevant keyboards, if needed.
This issue is now fixed, thanks for reporting it 😊
|
2025-04-01T06:38:24.217320
| 2020-12-07T17:42:22
|
758711340
|
{
"authors": [
"Halix267",
"diyajaiswal11"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5345",
"repo": "diyajaiswal11/Bloggitt",
"url": "https://github.com/diyajaiswal11/Bloggitt/issues/44"
}
|
gharchive/issue
|
Update About button (about.html) to show the user profile
Can I work on this @diyajaiswal11
Can I work on this @diyajaiswal11
Similar issue has been raised in #13 . Go through it.
|
2025-04-01T06:38:24.255134
| 2023-11-13T20:24:22
|
1991421067
|
{
"authors": [
"dj95",
"mike-lloyd03"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5346",
"repo": "dj95/zjstatus",
"url": "https://github.com/dj95/zjstatus/issues/24"
}
|
gharchive/issue
|
Plugin does not appear on session start. Only appears after serialization-interval time.
Describe the bug
After updating to Zellij 0.39.1, the status line is not initially displayed after a session is started. After some time, the statusbar displays as normal.
To Reproduce
Steps to reproduce the behavior:
Update to Zellij 0.39.1
Start a zellij session using this plugin
It appears that the amount of time it takes for the status bar to appear is the same as Zellij's session serialization frequency. Starting a session with just zellij, it will take 60 seconds for the status line to appear. 60 seconds is the new default session serialization frequency in 0.39.1. It was previously 1 second. Running zellij with zellij --serialization-interval 5, the statusline will appear after 5 seconds. With zellij --serialization-interval 20, it will appear after 20 seconds.
Expected behavior
Plugin should load immediately as it did on Zellij 0.39.0.
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
OS: 6.5.6-arch2-1
Zellij version: v0.39.1
Version: v0.9.0
Layout
What does the layout look like? Please copy it into a code block.
layout {
pane split_direction="vertical" {
pane
}
pane size=1 borderless=true {
plugin location="file:/home/mike/.config/zellij/plugins/zjstatus.wasm" {
format_left "{mode}#[fg=#1a1c23,bg=#4fa6ed,bold]{session}#[fg=#4fa6ed,bg=#1a1c23]{tabs}"
format_right "#[fg=#1a1c23,bg=#4fa6ed,bold]{datetime}"
format_space "#[bg=#1a1c23]"
border_enabled "false"
hide_frame_for_single_pane "true"
tab_normal "#[fg=#000000,bg=#4C4C59] {index} {name} #[fg=#4C4C59,bg=#1a1c23]"
tab_normal_fullscreen "#[fg=#000000,bg=#4C4C59] {index} {name} Z #[fg=#4C4C59,bg=#1a1c23]"
tab_normal_sync "#[fg=#000000,bg=#4C4C59] {index} {name} S #[fg=#4C4C59,bg=#1a1c23]"
tab_active "#[fg=#1a1c23,bg=#ffffff,bold] {index} {name} #[fg=#ffffff,bg=#1a1c23]"
tab_active_fullscreen "#[fg=#1a1c23,bg=#ffffff,bold] {index} {name} Z #[fg=#ffffff,bg=#1a1c23]"
tab_active_sync "#[fg=#1a1c23,bg=#ffffff,bold] {index} {name} S #[fg=#ffffff,bg=#1a1c23]"
datetime "#[fg=#1a1c23,bg=#4fa6ed,bold] {format} "
datetime_format "%A, %Y%m%d %H%M"
datetime_timezone "America/Los_Angeles"
mode_normal "#[fg=#1a1c23,bg=#4fa6ed,bold] NORMAL "
mode_locked "#[fg=#1a1c23,bg=#e55561,bold] LOCKED "
mode_resize "#[fg=#1a1c23,bg=#e2b86b,bold] RESIZE "
mode_pane "#[fg=#1a1c23,bg=#e2b86b,bold] PANE "
mode_tab "#[fg=#1a1c23,bg=#e2b86b,bold] TAB "
mode_scroll "#[fg=#1a1c23,bg=#e2b86b,bold] SCROLL "
mode_enter_search "#[fg=#1a1c23,bg=#e2b86b,bold] ENTER SEARCH "
mode_search "#[fg=#1a1c23,bg=#e2b86b,bold] SEARCH "
mode_rename_tab "#[fg=#1a1c23,bg=#e2b86b,bold] RENAME TAB "
mode_rename_pane "#[fg=#1a1c23,bg=#e2b86b,bold] RENAME PANE "
mode_session "#[fg=#1a1c23,bg=#e2b86b,bold] SESSION "
mode_move "#[fg=#1a1c23,bg=#e2b86b,bold] MOVE "
mode_prompt "#[fg=#1a1c23,bg=#e2b86b,bold] PROMPT "
mode_tmux "#[fg=#1a1c23,bg=#8ebd6b,bold] TMUX "
}
}
}
Awesome thanks for the quick fix
Hey,
the fix is available with the new release (0.9.1). Seems to work on my side. Hope this resolves the issue.
|
2025-04-01T06:38:24.359164
| 2023-06-28T13:21:44
|
1778937372
|
{
"authors": [
"christophehenry",
"matthewhegarty"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5347",
"repo": "django-import-export/django-import-export",
"url": "https://github.com/django-import-export/django-import-export/pull/1598"
}
|
gharchive/pull-request
|
Add more customizable blocks in import.html
Problem
The fields' detail and definition cannot easily be overridden in import.html unless you override the whole import_form block.
Solution
This PR adds sub-blocks in the import_form block to allow overriding form parts more easily.
Thanks, feel free to add your name to AUTHORS
Done, thank you!
Thanks - we had an issue with an upstream lib which caused the build to fail. I've fixed that. I'd be grateful if you could merge the main branch and re-push your change.
Done.
|
2025-04-01T06:38:24.371782
| 2024-08-05T01:16:23
|
2447406535
|
{
"authors": [
"GDay",
"ThomasDeudon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5348",
"repo": "django-q2/django-q2",
"url": "https://github.com/django-q2/django-q2/issues/204"
}
|
gharchive/issue
|
Get more info on tasks
Hello,
Is there a way to know the cluster / machine used by a task ?
I guess I could add it to the result for the successful ones, but how would it be done for failed / queued ones (maybe the most important ones)?
Thanks,
Couldn't you get this through the Q_CLUSTER_NAME environment variable?
Yes, actually the cluster name can be found. For the machine, I guess I'll have to save it as a variable (as a few clusters can have the same name across different machines).
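One way to record that information, sketched with the standard library (the environment-variable name comes from the comment above; attaching this dict to the task result is up to the caller — it is not a django-q2 API):

```python
import os
import socket

def task_origin():
    """Identify where a task ran: the cluster name from configuration plus
    the machine's hostname, since several clusters may share a name."""
    return {
        "cluster": os.environ.get("Q_CLUSTER_NAME", "default"),
        "host": socket.gethostname(),
    }
```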
|
2025-04-01T06:38:24.379166
| 2021-04-20T06:38:52
|
862469732
|
{
"authors": [
"davidjb",
"felixxm"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5349",
"repo": "django/django",
"url": "https://github.com/django/django/pull/14286"
}
|
gharchive/pull-request
|
Removed unnecessary line in OrderBy.as_sql().
The first line of as_sql (https://github.com/django/django/blob/ed0cc52dc3b0dfebba8a38c12b6157a007309900/django/db/models/expressions.py#L1213) is a duplicate of this line that's being removed, meaning that in this function template is already defined in the same manner, making this line a noop.
@davidjb Thanks :+1: Good catch :dart:
|
2025-04-01T06:38:24.408679
| 2021-08-07T14:53:57
|
963245405
|
{
"authors": [
"cjerdonek",
"felixxm",
"francoisfreitag",
"jacobtylerwalls"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5350",
"repo": "django/django",
"url": "https://github.com/django/django/pull/14751"
}
|
gharchive/pull-request
|
Fixed #29026 -- Added --scriptable option to makemigrations.
ticket-29026 desires to separate logging from a simple list of filepaths created in makemigrations.
Today, there is logging to both stdout and stderr (for errors), but no real "program output": the filenames are bolded and indented to flow between other log messages.
This PR creates a new option --scriptable that will 1. divert all current logging to stderr and 2. log only the filepaths of created migration files to stdout, one per line (without leading spaces and styling)
--noinput mode is still necessary to suppress input completely, otherwise interactive prompts go to stderr when --scriptable is used
EDIT: moved ticket-29470 solution to #14805
All of this logging can be silenced with --verbosity 0: no changes in this respect.
The thing with structured is that text lines aren't that structured. To me that sounds more like json.
Very reasonable--I'll think about a name for this option.
It seems like the user-facing option should be more about whether logging should go to stderr. Then the non-logging ("output") lines would continue to go to stdout as is.
My thought process was this: merely switching the output to stderr is achievable today: you simply subclass the command and set self.stdout to whatever you want, sys.stderr, or whatever. I'm not sure we would add complexity if that were the only thing we were achieving. I think the reason for this PR is that we don't have any "non-logging/output" lines today, since this is not really nonlogging (leading spaces, hyphen, colors):
\x1b[1m/var/folders/6q/jljmh5xs27v8_7557q3rgtfw0000gn/T/django_klnxitjw/tmpwiefzufk/tmp82ptgwlr/migrations/0001_initial.py\x1b[0m\n
My thoughts would be to add an initial commit that adds a log() method and makes all stderr / stdout output go through that.
So I'm a little hesitant to do this, because then we're making makemigrations more special than the other commands. This ticket is asking to make makemigrations more special by producing different kinds of output, so I guess we're already starting down that path.
Also, in the case that logging is going to stderr, I think you'd still want to log the output lines using log() as a message, so that info would be available both in the diagnostic stderr logs, as well as the output.
Agreed, I made sure this was the case.
Any thoughts on the ticket-29470 stuff? If we have design questions about ticket-29026 I could ask the fellows to re-triage 29470 and move it to a separate PR to keep it moving.
btw, thanks for the review, @cjerdonek! And it looks like I need to liberalize one of the tests to pass on Windows.
Maybe --separatelogs?
Everything but the last line going to stderr, last line to stdout:
Migrations for 'migrations':
/var/folders/6q/jljmh5xs27v8_7557q3rgtfw0000gn/T/django_e_o35bow/tmpso0994dn/tmp7ogni0kc/migrations/0001_initial.py
- Create model ModelWithCustomBase
- Create model SillyModel
- Create model UnmigratedModel
/var/folders/6q/jljmh5xs27v8_7557q3rgtfw0000gn/T/django_e_o35bow/tmpso0994dn/tmp7ogni0kc/migrations/0001_initial.py
you simply subclass the command
This didn't feel right when I wrote it, and indeed--it's easier than that, there's call_command(stdout=), see: https://docs.djangoproject.com/en/3.2/ref/django-admin/#output-redirection
The drift of my earlier comment was that I would be hesitant to rework the documented API we have for that. I see the issue as more defining wanting separate logs, period.
To respond to one point now:
My thoughts would be to add an initial commit that adds a log() method and makes all stderr / stdout output go through that.
So I'm a little hesitant to do to this, because then we're making makemigrations more special than the other commands.
I think making log() a method that is passed a message is more natural and has advantages over making it an attribute with a write() method. For example, it would give people a way to use a Python logger instead of writing to a stream. This is the approach taken in this commit for ticket #14150. Also, the word "log" is more commonly used in Python as the verb / method name rather than the stream (see e.g. Logger.log() in Python's logging module). It would also make the calling sites simpler / cleaner. Finally, if this pattern is found useful, it could be moved to BaseCommand so makemigrations.py wouldn't be so special.
Also, the word "log" is more commonly used in Python as the verb / method name rather than the stream (see e.g. Logger.log() in Python's logging module).
I have to admit, this did occur to me when I was writing it, and if you also noticed it, lots of folks will. :-O
For example, it would give people a way to use a Python logger instead of writing to a stream. This is the approach taken in this commit for ticket #14150.
That's a good reason.
Finally, if this pattern is found useful, it could be moved to BaseCommand so makemigrations.py wouldn't be so special.
I guess I wasn't looking at it that way. 👍🏻 Thanks for the quick feedback, and I'll have a look at implementing a log() method.
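As a rough, framework-free sketch of the routing under discussion (this is not Django's actual implementation; the class, method, and flag names are placeholders):

```python
import sys

class Command:
    """Toy stand-in for a management command: all diagnostic output goes
    through one log() method, which is diverted to stderr in a
    hypothetical scriptable mode, leaving stdout for program output."""

    def __init__(self, scriptable=False, stdout=None, stderr=None):
        self.scriptable = scriptable
        self.stdout = stdout or sys.stdout
        self.stderr = stderr or sys.stderr

    def log(self, msg):
        # Diagnostics: stderr when scriptable, stdout otherwise.
        target = self.stderr if self.scriptable else self.stdout
        target.write(msg + "\n")

    def handle(self, written_paths):
        self.log("Migrations for 'app':")
        for path in written_paths:
            self.log("  " + path)
            if self.scriptable:
                # Program output proper: one bare path per line on stdout.
                self.stdout.write(path + "\n")
```

With scriptable=True, stdout carries only the created filepaths while the human-readable log lands on stderr, which is the separation the ticket asks for.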
Maybe --separatelogs?
Maybe --scriptable or --scriptmode? The option seems like a higher-level mode as it does two things: it changes logging to go to stderr, and it adds additional info to stdout.
Maybe --scriptable or --scriptmode?
At a glance, I would be a bit worried it implies setting --noinput can be skipped when using in a script.
At a glance, I would be a bit worried it implies setting --noinput can be skipped when using in a script.
I think there's an argument the mode should imply --noinput. The reason is that, when programmatically consuming the stdout (e.g. piping it to a file), you wouldn't see the prompt anyways. (It would show up to the user as a hang.) This is because the command's questioner uses Python's input() built-in, which writes to stdout. So it seems incompatible with consuming stdout for programmatic use. Maybe you could write a prompt to stderr, but that seems non-standard.
Either way, I think you'd want to document in the help whether --noinput is implied, which should eliminate any worries.
Reading and thinking more about Python's input() and how --noinput behaves, I think the scriptable mode option we're discussing in this PR shouldn't default to --noinput, and when it's used, the questioner should write its prompt to stderr instead of stdout (and not use log()). There are a couple reasons for the latter. First, there is a very old Python ticket to change input() to use stderr, so it wouldn't actually be non-standard to use stderr like I suggested in my comment above. Secondly, if the mode didn't use stderr, then there would be no way to use scriptable mode while capturing the output stream (one of its intended use cases) when answers are required that differ from the default answers when --noinput is used. Lastly, I said "not use log()" because if someone, say, changed log() to log to a file, you would still want the prompt to go to stderr so the user could provide interactive feedback.
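The stderr-prompt idea can be sketched like this. Python's input() echoes its prompt to stdout, so a command whose stdout must stay machine-readable would write the prompt to stderr itself; the helper name and signature here are illustrative:

```python
import sys

def ask(prompt, stream=None, reader=input):
    """Write the prompt to stderr so a caller capturing stdout still sees
    the question, then read the interactive answer."""
    stream = stream if stream is not None else sys.stderr
    stream.write(prompt)
    stream.flush()
    return reader("")  # empty prompt: input() then writes nothing to stdout
```

The reader parameter exists only so the behavior can be exercised without a terminal.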
Also, on this question:
Any thoughts on the ticket-29470 stuff?
After reading and thinking about it, I think it should be re-opened and handled separately. I can go ahead and add a comment to the ticket.
Hello @francoisfreitag -- I noticed you have a WIP branch for ticket-21429 implementing logging for each of the management commands. As you can see above, Chris is making the good case that to move this PR forward, I should do something similar for makemigrations. Are you still interested in contributing your work? If #13853 is the only blocker, we can see about getting it into the review queue. Let me know if I can be helpful in any respect. 👍🏻
@jacobtylerwalls By the way, regarding the log() method I was suggesting you add above, the collectstatic management command is another class that already has a method like that: https://github.com/django/django/blob/8208381ba6a3d1613bb746617062ccf1a6a28591/django/contrib/staticfiles/management/commands/collectstatic.py#L207-L212
It will be of help here independent of that ticket.
Hi @cjerdonek,
Thanks for the ping! Fixing ticket-21429 is pretty time consuming and personal life has been busy (and will be for at least a few months). My employer is not interested in sponsoring the work for now, so it’s all on my personal time.
Basically, my todo list includes:

- rebasing on main (last rebase was probably 9 months ago)
- a readthrough, making sure:
  - assertNoLogs and assertLogRecords are used where possible
  - reviewing uses of io, StringIO, stdout and stderr in test code
  - code polish (e.g. grouping context managers, preferring single quotes, f-strings, etc.)
  - consider introducing flake8-logging-format
- installing my branch on an existing project shows unexpected line returns if I don't change the existing project config. Needs investigation.
- completing the documentation
- testing against all DBs. I tried to use exact assertions for the logging output as much as possible, but different DB engines or configuration may cause messages to change slightly.
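For illustration, the logging assertions mentioned in the list above can be sketched with the standard-library `unittest`. Note the hedges: `assertLogs` has been in the stdlib since Python 3.4 and `assertNoLogs` since 3.10, while `assertLogRecords` is from the WIP branch and is not shown here; the `make_migrations` function is a made-up stand-in for a management command:

```python
import logging
import sys
import unittest

logger = logging.getLogger("migrations")

def make_migrations(verbose=True):
    # Hypothetical stand-in for a management command that logs progress.
    if verbose:
        logger.info("Migrations for 'app': 0001_initial.py")

class LoggingAssertionsTest(unittest.TestCase):
    def test_logs_when_verbose(self):
        # assertLogs fails if no matching record is emitted.
        with self.assertLogs("migrations", level="INFO") as cm:
            make_migrations(verbose=True)
        self.assertIn("0001_initial.py", cm.output[0])

    @unittest.skipUnless(sys.version_info >= (3, 10), "assertNoLogs is 3.10+")
    def test_silent_when_not_verbose(self):
        # assertNoLogs fails if any matching record is emitted.
        with self.assertNoLogs("migrations", level="INFO"):
            make_migrations(verbose=False)
```

This is the kind of exact-output assertion the list item about DB engines warns can be fragile across backends.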
Thanks for the update. I certainly wouldn't ask you to commit to a timeline! I was just merely curious if I should try to conform to any likely pending changes. I think I have enough to go on for now. Good luck with everything, and be well. --Jacob
Thanks for the design guidance @cjerdonek , that was very helpful. I haven't squashed/reordered commits yet, and I am still thinking about https://github.com/django/django/pull/14751#discussion_r684664382, but I wanted to ask if you thought these changes were looking right-track to you. I'd be grateful if you had time for a re-review.
I didn't as of yet tackle a refactor of how verbosity is tracked. There's enough going on (and I feel like this PR is already two or so.) Speaking of which, thanks for commenting on ticket-29470.
The first one I'd recommend is a refactoring PR that just adds a log() method, and in particular doesn't make any reference to scriptable, etc. The next step PR can be discussed after that.
Done! :tada:
https://github.com/django/django/pull/14936
Summary:
Adds --scriptable flag to makemigrations, causing:

- log() method to divert output to self.stderr (agreement on this here: "whether log() uses stderr or stdout could be controlled centrally in that method")
- Interactive questioner to divert prompts to self.stderr (agreement here)
- an additional line of "clean output" (no styling or indentation) written to self.stdout (original case from ticket)
That's all the diff does, it's just bloated because the interactive questioner was using print() statements, which had to be rewritten. I can move that to another commit (or PR?) if folks like.
Could also be good to rebase after #15212.
@jacobtylerwalls Thanks :+1:
I added writing a path of generated migration file to the --merge option (with tests) and pushed small edits.
@felixxm Thanks for the updates and the additional test!
|
2025-04-01T06:38:24.411981
| 2023-04-15T11:10:21
|
1669313237
|
{
"authors": [
"ankushagar99",
"felixxm"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5351",
"repo": "django/django",
"url": "https://github.com/django/django/pull/16768"
}
|
gharchive/pull-request
|
Added username in AbstractBaseUser as a index
I changed this
from this
Q: Why did I do this?
A: This field is used to log users in and to look users up most of the time; as the number of users grows, applications get slow at finding users by username. So I thought to add an index here. Every time I have to create a custom user field for this, and I think most developers do too. I think this will help beginners too.
db_index is unnecessary as this field is already marked as unique.
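The reviewer's point, that a unique field already gets an index, can be illustrated with SQLite from the standard library (the table and column names here are illustrative, not Django's actual generated schema):

```python
import sqlite3

# A UNIQUE constraint is enforced with an index, so adding db_index=True
# to a field that is already unique would only create a redundant second
# index on the same column.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE auth_user (id INTEGER PRIMARY KEY, username TEXT UNIQUE)"
)
indexes = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index' AND tbl_name = 'auth_user'"
).fetchall()
print(indexes)  # the automatic index backing the UNIQUE constraint
```

Other backends behave the same way in spirit: PostgreSQL, for example, also creates a unique index to enforce a UNIQUE constraint.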
|
2025-04-01T06:38:24.421201
| 2024-09-27T16:29:25
|
2553318907
|
{
"authors": [
"nessita",
"pauloxnet"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5352",
"repo": "django/django",
"url": "https://github.com/django/django/pull/18630"
}
|
gharchive/pull-request
|
Update docs.yml
Trac ticket number
"N/A"
Branch description
We're trying to check the alignment between the docs in the Django repository and on the Django project website.
The goal for using the long format of the attributes is to increase clarity and make it easier for those who will interact with this action in the future.
The removal of the 'q' parameter is intentional and should help us read more information in the command execution logs.
Checklist
[x] This PR targets the main branch.
[ ] The commit message is written in past tense, mentions the ticket number, and ends with a period.
[ ] I have checked the "Has patch" ticket flag in the Trac system.
[ ] I have added or updated relevant tests.
[ ] I have added or updated relevant docs, including release notes if applicable.
[ ] I have attached screenshots in both light and dark modes for any UI changes.
@pauloxnet could you say more about why that change helps? It looks like you’ve changed the parameter names from the short-hands to long-hands, which I assume is to make it clearer what the command does?
Two other questions:
* I see the -q option is now gone, not sure if you missed it or intentionally removed it?
* We also have sphinx-build in use in the Makefile – if we did this change from short-hand to long-hand, should it also be done there?
The goal for using the long format of the attributes is to increase clarity and make it easier for those who will interact with this action in the future.
The removal of the 'q' parameter is intentional and should help us read more information in the command execution logs. This improvement is preparatory to the alignment work on documentation generation that we want to implement between the Django repository and the website's.
See:
https://github.com/django/djangoproject.com/issues/1634
I would leave the Makefile changes to a future PR; in any case it already differs from the command executed here, and the long or short form of the options doesn't change the behavior anyway.
The Ubuntu version is outdated: the django website runs in the Python 3.12 docker container based on Debian 12, and the server runs Ubuntu 24.04.
Thank you @pauloxnet for this PR! Though, in all honesty, it's hard to justify the extra entry in git history for this change; the longer argument forms don't seem to justify the cost.
> Thank you @pauloxnet for this PR! Though, in all honesty, it's hard to justify the extra entry in git history for this change; the longer argument forms don't seem to justify the cost.
Personally, I think that using the long form of arguments helps other people a lot to understand what those arguments do without having to consult the Sphinx guide, so in my opinion this commit had every right to be part of the Git history of the repository, just like commits that fix typos do.
> > Thank you @pauloxnet for this PR! Though, in all honesty, it's hard to justify the extra entry in git history for this change; the longer argument forms don't seem to justify the cost.
>
> Personally, I think that using the long form of arguments helps other people a lot to understand what those arguments do without having to consult the Sphinx guide, so in my opinion this commit had every right to be part of the Git history of the repository, just like commits that fix typos do.
I understand your point, but docs.yml is a configuration file that's primarily intended for use by the build system, not for regular contributors. Most contributors will not need to read or understand this file in depth. While clarity in configuration files is important, the level of detail you're suggesting might not be necessary for this context. Unlike documentation or code, where clarity directly impacts the user experience or maintainability, changes like this don't provide significant value in terms of readability for most contributors. That said, I do appreciate your attention to detail!
|