| added (string, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 – 2025-01-01 03:51:31) | id (string, length 4–10) | metadata (dict) | source (string, 2 classes) | text (string, length 0–1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:37:08.565378
| 2021-02-11T08:11:43
|
806168918
|
{
"authors": [
"bungle",
"darshandeep"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1470",
"repo": "Kong/kong",
"url": "https://github.com/Kong/kong/pull/6830"
}
|
gharchive/pull-request
|
Fix/balancer eventual consistency improve
Summary
SUMMARY_GOES_HERE
Full changelog
[Implement ...]
[Add related tests]
...
Issues resolved
Fix #false stoper nic-7016
I guess this was opened by accident, thus closing it.
|
2025-04-01T06:37:08.575201
| 2023-02-03T16:36:51
|
1570115891
|
{
"authors": [
"codecov-commenter",
"czeslavo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1471",
"repo": "Kong/kubernetes-testing-framework",
"url": "https://github.com/Kong/kubernetes-testing-framework/pull/537"
}
|
gharchive/pull-request
|
tests(gke): ensure cluster cleanup is always called
Fixes the cluster not being cleaned up, as mentioned in #533.
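The fix described here, making sure cluster cleanup always runs, is the classic try/finally pattern (in Go, a `defer`). A minimal Python sketch of the idea, with illustrative names that are not the framework's actual API:

```python
class Cluster:
    """Stand-in for a provisioned test cluster."""
    def __init__(self):
        self.deleted = False

    def delete(self):
        self.deleted = True

def run_cluster_test(test):
    """Build a cluster, run the test, and always clean up afterwards."""
    cluster = Cluster()
    try:
        test(cluster)
    finally:
        # The finally block guarantees cleanup even when the test raises.
        cluster.delete()
```

The point is that cleanup is attached to the control flow itself, not to the happy path, so a failing test can no longer leak a cluster.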
Codecov Report
Base: 54.08% // Head: 53.78% // Decreases project coverage by -0.31% :warning:
Coverage data is based on head (158730e) compared to base (82ecf03).
Patch has no changes to coverable lines.
Additional details and impacted files
@@ Coverage Diff @@
## main #537 +/- ##
==========================================
- Coverage 54.08% 53.78% -0.31%
==========================================
Files 50 50
Lines 3901 3901
==========================================
- Hits 2110 2098 -12
- Misses 1534 1543 +9
- Partials 257 260 +3
Flag
Coverage Δ
integration-test
58.75% <ø> (-0.36%)
:arrow_down:
unit-test
3.28% <ø> (ø)
Flags with carried forward coverage won't be shown. Click here to find out more.
Impacted Files
Coverage Δ
pkg/clusters/addons/knative/knative.go
60.44% <0.00%> (-5.98%)
:arrow_down:
pkg/clusters/utils.go
50.20% <0.00%> (-1.66%)
:arrow_down:
Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here.
:umbrella: View full report at Codecov.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
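The numbers in the diff above are internally consistent: coverage is hits divided by total lines, and the reported delta follows from losing 12 hits out of 3901 lines. A quick check of the arithmetic:

```python
def coverage(hits, lines):
    """Coverage as a percentage of covered lines."""
    return 100.0 * hits / lines

base = coverage(2110, 3901)   # reported as 54.08%
head = coverage(2098, 3901)   # reported as 53.78%
delta = head - base           # 12 lost hits -> about -0.31%
```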
|
2025-04-01T06:37:08.577890
| 2024-08-29T09:16:22
|
2493894773
|
{
"authors": [
"KonjacSource"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1472",
"repo": "KonjacSource/ShiTT",
"url": "https://github.com/KonjacSource/ShiTT/issues/2"
}
|
gharchive/issue
|
Substitution wrong
fun test (A : U) (x : A) (f : A -> U) : f x where
| A x f = _
This results in a stack overflow.
This is due to the wrong implementation of the ShiTT.Eval.subst function.
Use refresh to reimplement it.
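The bug class described here, a substitution that loops or captures because bound variables are not refreshed, can be illustrated with a tiny capture-avoiding substitution over lambda terms. This is a hypothetical Python sketch of the refresh technique, not ShiTT's actual Haskell code:

```python
import itertools

_fresh = itertools.count()

def fresh(name):
    """Generate a name not used anywhere else (the "refresh" step)."""
    return f"{name}_{next(_fresh)}"

# Terms: ("var", x) | ("lam", x, body) | ("app", f, a)
def subst(term, name, value):
    """Capture-avoiding substitution of `value` for `name` in `term`."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "app":
        return ("app", subst(term[1], name, value), subst(term[2], name, value))
    _, x, body = term  # "lam"
    if x == name:
        return term  # the binder shadows `name`; nothing to substitute below it
    # Rename the binder first so free variables in `value` cannot be captured.
    x2 = fresh(x)
    body = subst(body, x, ("var", x2))
    return ("lam", x2, subst(body, name, value))
```

Without the renaming step, substituting `x` for `y` under a binder named `x` would wrongly capture the substituted variable, which is exactly the kind of confusion that can send a naive evaluator into unbounded recursion.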
fixed
|
2025-04-01T06:37:08.594510
| 2023-02-13T15:23:04
|
1582558429
|
{
"authors": [
"Akron",
"margaretha"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1473",
"repo": "KorAP/Kalamar",
"url": "https://github.com/KorAP/Kalamar/issues/196"
}
|
gharchive/issue
|
type:text is not supported
While investigating https://github.com/KorAP/Krill/issues/86, I found that corpusTitle eq gingko is serialized as
{
"@type": "koral:doc",
"key": "corpusTitle",
"match": "match:eq",
"value": "gingko",
"type": "type:text"
}
whereas type:text is not a supported type according to the KoralQuery doc, and in practice it is also not supported in Krill.
The type is not added by any query rewrite as it is not added when sending a direct API request:
https://korap.ids-mannheim.de/instance/test/api/v1.0/search?q=ich&cq=availability+%3D+%2FCC-BY.*%2F+%26+docTitle+%3D+"gingko"&ql=poliqarp&cutoff=1&state=&pipe=
Could it be that Kalamar adds the type?
It's interesting that this shows up in the KQ-Viewer. The type:text is an index type and is introduced to help the VC Builder to show allowed operators. With this issue: Do you mean this shouldn't show up in the serialization or is there a bigger issue?
Yes, it shouldn't show up in the serialization and it shouldn't be used in general. There should be no problem with that in the backend since Kalamar only sends the corpus query, not KoralQuery.
Could you check what request Kalamar actually sends to Kustvakt since in the example request, there are no matches while in https://github.com/KorAP/Krill/issues/86, there are some results?
Could you please check what request Kalamar actually sends to Kustvakt? I don't get any results sending the example direct API request using OAuth2 token and VPN, while Kalamar shows some results as reported in https://github.com/KorAP/Krill/issues/86.
Well - it is used by the corpus builder and it is used for indexing - so what do you mean by "it shouldn't be used in general"? Yes it is not helpful in a corpus request, but that is not happening.
I am not sure which query you are referring to.
Well - it is used by the corpus builder and it is used for indexing - so what do you mean by "it shouldn't be used in general"? Yes it is not helpful in a corpus request, but that is not happening.
I suppose it shouldn't be used since it is not part of the KoralQuery doc and not supported in the backend. Why is it used by the corpus builder and for indexing?
I am not sure which query you are referring to.
sorry for not being clear. I mean the query in https://github.com/KorAP/Krill/issues/86 or
the one I wrote above:
https://korap.ids-mannheim.de/instance/test/api/v1.0/search?q=ich&cq=availability+%3D+%2FCC-BY.*%2F+%26+docTitle+%3D+"gingko"&ql=poliqarp&cutoff=1&state=&pipe=
but using Kalamar instead of a direct API request.
The KoralQuery doc currently only covers the request and error reporting stuff - neither the indexing nor the response data format. Krill supports it for indexing (see index/FieldDocument) and for responses (see response/MetaFieldsObj). type:text means, the field is indexed tokenized, so single words can be searched in (like for title) as well as a whole string match works. This obviously means that the operators in the visual corpus builder should differ.
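The distinction described above, a tokenized text field where single words and the whole string both match, versus a field where only the exact string matches, can be sketched like this (hypothetical Python, not Krill's indexing code):

```python
def index_field(value, field_type):
    """Build the searchable terms for a metadata field.

    "text"   -> tokenized: single words AND the whole string match
    "string" -> only the exact whole string matches
    """
    terms = {value.lower()}
    if field_type == "text":
        terms |= set(value.lower().split())
    return terms

def matches(terms, query):
    return query.lower() in terms
```

This is also why the visual corpus builder needs different operators per field type: a tokenized field supports "contains word" queries that an exact-match field cannot answer.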
That query doesn't show results to me. The request is:
https://korap.ids-mannheim.de/instance/test/api/v1.0/search?context=40-t%2C40-t&count=25&cq=availability+%3D+%2FCC-BY.*%2F+%26+docTitle+%3D+%22gingko%22&cutoff=true&offset=0&q=ich&ql=poliqarp
Thanks for your explanation.
The query should show results with OAuth2 token and VPN since the Gingko corpus is restricted.
But the VC is limited to CC-BY.*
Sorry you are right. The request shouldn't be restricted to CC-BY.*
Besides, I made a mistake due to the URL encoding for diacritics etc.
For the following query
https://korap.ids-mannheim.de/instance/test?q=Z%C3%BCndkerze&cq=corpusTitle+%3D+%22gingko%22&ql=poliqarp&cutoff=1&state=&pipe=
Kalamar would send the query below to Kustvakt, right?
curl -v -H "Authorization: Bearer token" 'https://korap.ids-mannheim.de/instance/test/api/v1.0/search?q=Z%C3%BCndkerze&cq=corpusTitle+%3D+%22gingko%22&ql=poliqarp&cutoff=1&state=&pipe='
This doesn't seem to be a problem from Kalamar and isn't related to type:text so I suppose we should discuss in https://github.com/KorAP/Krill/issues/86 instead
Yes, this is unrelated. Regarding this topic: I think the corpus assistant shouldn't alter the query serialized by the KoralQuery helper - but I think that's the only problem there is and it's a minor one, not affecting any functionality of the platform.
|
2025-04-01T06:37:08.617352
| 2015-05-12T18:22:28
|
75692065
|
{
"authors": [
"bj0",
"dave08",
"gregpardo",
"yanex"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1474",
"repo": "Kotlin/anko",
"url": "https://github.com/Kotlin/anko/issues/39"
}
|
gharchive/issue
|
Add example of embedded fragment in activity
I searched a bit and I couldn't seem to figure out how to embed a fragment in my activity.
For example I want to do something like this.
frameLayout {
linearLayout {
baselineAligned = false
orientation = LinearLayout.HORIZONTAL
fragment {
name = "FragmentClass"
}.layoutParams(width = matchParent, height = matchParent)
}.layoutParams(width = matchParent, height = matchParent)
}
This feature is not supported yet, though it is planned for the next version. Thank you!
Is there a workaround for this? Would a custom inline function that accomplishes this be difficult to make?
This would be a nice little feature, since as far as I know, the only solution is a "long" piece of code (at least compared to how much anko tries to save us), see: http://stackoverflow.com/questions/18296868/how-to-add-a-fragment-to-a-programmatically-generated-layout
I think the problem is that this does not conform to just creating a view like the rest of the dsl...
But for those who are lazy like me, we are forced to still keep some XML layouts around just because this feature is not implemented...
Nice job to the anko team anyways for all the rest of the features they give us!!
Unfortunately, it's impossible to create a fragment in Android without an explicit class declaration.
@yanex I don't think anyone is saying to create a fragment without an explicit class declaration. In Android xml layouts, you can insert a <fragment> tag directly, instead of inserting a container element and then using FragmentManager to inject the fragment into the placeholder.
Both methods require explicit Fragment classes. I believe this issue was created because we couldn't figure out how to translate XML layouts with <fragment> tags into Anko language.
My example shows what it might look like while referencing the class.
I've created the new issue for this: #362.
|
2025-04-01T06:37:08.662107
| 2022-04-17T16:48:22
|
1206429513
|
{
"authors": [
"DenL",
"Kouzukii",
"filliph"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1475",
"repo": "Kouzukii/ffxiv-priceinsight",
"url": "https://github.com/Kouzukii/ffxiv-priceinsight/issues/7"
}
|
gharchive/issue
|
Price display cut off
For long tooltips that rest at the bottom of the screen, the price display is cut off like this:
Hm, I'm not sure how this could be solved.
I was experimenting with placing the price on the item description itself, but there are so many different formats of item descriptions that it would take quite a lot of effort to set up.
I guess for a quickfix I could move the item description window up if it cuts past the bottom of the screen.
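The quickfix proposed here, shifting the window up when it extends past the bottom of the screen, is a simple clamp on the window's vertical position. A hypothetical Python sketch of the arithmetic:

```python
def clamp_to_screen(y, window_height, screen_height):
    """Shift a window up just enough that its bottom edge stays on screen."""
    overflow = (y + window_height) - screen_height
    return y - overflow if overflow > 0 else y
```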
@Kouzukii could you please circle back to this? AllaganTools doesn't cause this issue because it adds its item information in the description area.
See screenshot:
Should be fixed now
|
2025-04-01T06:37:08.679474
| 2014-12-08T21:16:14
|
51353279
|
{
"authors": [
"KrauseFx",
"diogomaximo",
"rodrigocotton",
"shams-ahmed"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1476",
"repo": "KrauseFx/TSMessages",
"url": "https://github.com/KrauseFx/TSMessages/pull/196"
}
|
gharchive/pull-request
|
Accept UIAppearance
Added the capability to customize the layout using UIAppearance.
Now you have an alternative way to customize TSMessageView: you can use UIAppearance.
These are the properties:
[[TSMessageView appearance] setTitleFont:[UIFont boldSystemFontOfSize:6]];
[[TSMessageView appearance] setTitleTextColor:[UIColor redColor]];
[[TSMessageView appearance] setContentFont:[UIFont boldSystemFontOfSize:10]];
[[TSMessageView appearance] setContentTextColor:[UIColor greenColor]];
[[TSMessageView appearance] setErrorIcon:[UIImage imageNamed:@"NotificationButtonBackground"]];
[[TSMessageView appearance] setSuccessIcon:[UIImage imageNamed:@"NotificationButtonBackground"]];
[[TSMessageView appearance] setMessageIcon:[UIImage imageNamed:@"NotificationButtonBackground"]];
[[TSMessageView appearance] setWarningIcon:[UIImage imageNamed:@"NotificationButtonBackground"]];
@KrauseFx what are your thoughts on this PR? I think it's a really good approach to custom design...
+1 @KrauseFx
Any updates on this?
Looks great, thanks for the pull request :+1:
Could you just fix the merge conflicts:
We can’t automatically merge this pull request.
Thanks!
Thanks!
@KrauseFx Merged with 'master'.
Thanks @diogomaximo for working on this! :+1:
And sorry it took so long :disappointed:
|
2025-04-01T06:37:08.710611
| 2019-06-24T16:08:50
|
459977027
|
{
"authors": [
"arlac77",
"coveralls"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1477",
"repo": "Kronos-Integration/kronos-endpoint",
"url": "https://github.com/Kronos-Integration/kronos-endpoint/pull/538"
}
|
gharchive/pull-request
|
merge package from arlac77/npm-package-template-esm-only
package.json
chore(package<EMAIL_ADDRESS>chore(scripts): cover@#overwrite c8 --temp-directory build/tmp ava && c8 report -r lcov -o build/coverage --temp-directory build/tmp
chore(scripts): posttest@markdown-doctest
Coverage increased (+2.6%) to 76.225% when pulling e08caa427485371143fa8f11a0b04137a1bc1371 on npm-template-sync-1 into e00dc2d82fc7f15442bf6496a2ac0446b5434eea on master.
:tada: This PR is included in version 3.0.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T06:37:08.719037
| 2019-11-14T11:18:44
|
522793160
|
{
"authors": [
"arlac77",
"codecov-io",
"coveralls"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1478",
"repo": "Kronos-Integration/kronos-endpoint",
"url": "https://github.com/Kronos-Integration/kronos-endpoint/pull/599"
}
|
gharchive/pull-request
|
merge from arlac77/npm-package-template-esm-only
.gitignore
chore(git): update .gitignore from template
Coverage remained the same at 86.377% when pulling d883d70f47ea4a69b603338ff2d49cce3a47c5f7 on npm-template-sync/1 into 2e2d5bfb6aac23bb2cef3b2538c62802bac1399b on master.
Codecov Report
Merging #599 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #599 +/- ##
=======================================
Coverage 84.97% 84.97%
=======================================
Files 3 3
Lines 892 892
Branches 62 62
=======================================
Hits 758 758
Misses 134 134
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2e2d5bf...d883d70. Read the comment docs.
:tada: This PR is included in version 3.1.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T06:37:08.722059
| 2019-12-30T20:29:53
|
544002661
|
{
"authors": [
"arlac77"
],
"license": "0BSD",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1479",
"repo": "Kronos-Integration/service-logger-gelf",
"url": "https://github.com/Kronos-Integration/service-logger-gelf/pull/672"
}
|
gharchive/pull-request
|
merge from arlac77/npm-package-template-esm-only
README.md
docs(README): update from template
:tada: This PR is included in version 2.0.13 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T06:37:08.727829
| 2020-09-12T17:14:33
|
700308259
|
{
"authors": [
"arlac77",
"codecov-commenter"
],
"license": "0BSD",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1480",
"repo": "Kronos-Integration/service-logger-gelf",
"url": "https://github.com/Kronos-Integration/service-logger-gelf/pull/820"
}
|
gharchive/pull-request
|
merge from arlac77/template-github-action,arlac77/template-kronos-component
README.md
docs(README): update from template
Codecov Report
Merging #820 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #820 +/- ##
=======================================
Coverage 28.94% 28.94%
=======================================
Files 1 1
Lines 76 76
Branches 1 1
=======================================
Hits 22 22
Misses 54 54
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 549f308...3bd81d1. Read the comment docs.
|
2025-04-01T06:37:08.734588
| 2019-01-10T16:24:58
|
397909282
|
{
"authors": [
"arlac77"
],
"license": "0BSD",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1481",
"repo": "Kronos-Integration/service-uti",
"url": "https://github.com/Kronos-Integration/service-uti/pull/423"
}
|
gharchive/pull-request
|
merge package from arlac77/npm-package-template
package.json
chore(package<EMAIL_ADDRESS>chore(scripts): cover@#overwrite c8 --temp-directory build/coverage ava && c8 report -r lcov --temp-directory build/coverage
chore(package): add nyc from template
chore(package): set $.ava.require='esm' as in template
chore(package): set $.ava.files='tests/-test.js,tests/-test.mjs' as in template
chore(package): set $.ava.extensions='js,mjs' as in template
:tada: This PR is included in version 2.0.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T06:37:08.751491
| 2021-07-21T12:39:49
|
949665122
|
{
"authors": [
"guicassolato"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1482",
"repo": "Kuadrant/authorino",
"url": "https://github.com/Kuadrant/authorino/issues/138"
}
|
gharchive/issue
|
Support for Envoy's Ext Authz Dynamic Metadata
Right now, the only way to retrieve information from the auth pipeline is through the Festival Wristbands and their support for custom claims. This will probably not play well with the Envoy feature for emitting "dynamic metadata" from the external authorization to be consumed by other filters (e.g. rate limit). See External Authorization Dynamic Metadata for more info.
Ideas for the implementation
To introduce a new phase to the auth pipeline, to be defined by the user in the CR. This phase will build a JSON (map[string]string) with entries declared in the CR and dynamically resolved in the auth pipeline, similarly to how it's done for wristband custom claims. See https://github.com/Kuadrant/authorino/blob/7612c4965ba08d1edee4ac410bbb35aad9392953/pkg/config/wristband.go#L67-L70 and https://github.com/Kuadrant/authorino/blob/7612c4965ba08d1edee4ac410bbb35aad9392953/pkg/config/wristband.go#L125-L132
Another possible implementation could be to continue relying on the wristband as the vessel, but setting a different location for it to be passed back (#113), perhaps enhanced with additional support for issuing non-signed and non-encoded wristbands. I'm afraid that this might be an overuse of the wristband feature though.
A more concrete implementation idea for this:
To define a new (final) phase for the auth pipeline called response, i.e. another array of evaluators (or "response configs"), just like we have for the other phases (identity, metadata and authorization), to be cached within the APIConfig object as ResponseConfigs []common.AuthConfigEvaluator.
After authorization phase is finished (and successful), the AuthPipeline would call the evaluators of the response phase (concurrently between them, as usual), handling the evaluated objects like it does for the other 3 phases, i.e. storing them in a map Response map[*config.ResponseConfig]interface{}.
The wristband issuer would become a type of evaluator of the response phase. Another type of response evaluator would be the DynamicMetadata evaluator:
type ResponseConfig struct {
Name string `yaml:"name"`
Wristband *response.WristbandIssuer `yaml:"wristband,omitempty"`
DynamicMetadata *response.DynamicMetadata `yaml:"dynamicMetadata,omitempty"`
}
with
type DynamicMetadata struct {
Name string
Value struct {
Static string
FromJSON string
}
}
evaluateAllAuthConfigs strategy would be used at the response phase. Once finished with all evaluators of the phase and returned control to AuthPipeline.Evaluate(), the pipeline would then build the AuthResult object, now:
type AuthResult struct {
Code rpc.Code
Message string
Headers []map[string]string
DynamicMetadata interface{} // or perhaps `map[string]interface{}`
}
OTB, this implementation would enable having multiple wristbands issued at the end of an auth pipeline instead of just one (if that even makes sense for any use case), as well as multiple “dynamic metadata” objects (probably to be merged into a single one before responding back to Envoy).
Something that would make this even more interesting would be modifying the AuthPipeline so the evaluated objects returned in the authorization phase also end up in the authorization JSON – i.e., an extension of what goes in https://github.com/Kuadrant/authorino/blob/7612c4965ba08d1edee4ac410bbb35aad9392953/pkg/service/auth_pipeline.go#L338
So the response evaluators (wristband issuer and dynamic metadata) could select values from the authorization phase as well (aside from the already possible ones identity and metadata). This could open up for solving #109.
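The DynamicMetadata evaluator proposed above resolves each entry either from a static value or from a selector into the authorization JSON. A hypothetical Python model of that resolution step (field names mirror the struct sketched earlier, but the code is illustrative, not Authorino's implementation):

```python
from functools import reduce

def resolve_dynamic_metadata(entries, authorization_json):
    """Resolve response-phase entries into a dynamic-metadata object.

    Each entry carries either a static value or a dot-separated
    selector ("fromJSON") into the authorization JSON.
    """
    resolved = {}
    for entry in entries:
        value = entry["value"]
        if "static" in value:
            resolved[entry["name"]] = value["static"]
        elif "fromJSON" in value:
            # Walk the authorization JSON one key at a time.
            resolved[entry["name"]] = reduce(
                lambda obj, key: obj.get(key, {}),
                value["fromJSON"].split("."),
                authorization_json,
            )
    return resolved
```

The resolved object would then be what the pipeline hands back to Envoy in the `DynamicMetadata` field of `AuthResult`.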
|
2025-04-01T06:37:08.759685
| 2023-07-10T07:53:43
|
1796198237
|
{
"authors": [
"david-martin",
"roivaz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1483",
"repo": "Kuadrant/multicluster-gateway-controller",
"url": "https://github.com/Kuadrant/multicluster-gateway-controller/pull/318"
}
|
gharchive/pull-request
|
Improvements/fixes to the E2E CI workflow
Some improvements/fixes to the E2E CI workflow:
Add the contributor "role" to those that don't require approval. I have reviewed the PRs after the e2e workflow was in place and have seen that even though we are all members of the Kuadrant org, in the event for the pull_request we are marked as contributors.
Rename the GH environments so they are clearly identified as part of the e2e workflow
Merge all jobs into one using several steps instead
/cc @david-martin
/cc @mikenairn
/lgtm
/approve
/hold
Holding until we're happy the 2 new e2e environments are ready
@david-martin both e2e-external and e2e-internal are created, the external one with the required approval process
/unhold
|
2025-04-01T06:37:08.764023
| 2023-11-30T11:59:28
|
2018500267
|
{
"authors": [
"YoonJeongLulu",
"mfkrause"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1484",
"repo": "Kuatsu/react-native-cloud-storage",
"url": "https://github.com/Kuatsu/react-native-cloud-storage/issues/16"
}
|
gharchive/issue
|
Please help! I cannot access "app_data"; only "documents" works.
Describe the bug
First, I made a container ID with "a".
When I call CloudStorage.appendFile(path, content), I get an error like this:
ERR_DIRECTORY_NOT_FOUND app_data
But after calling CloudStorage.setDefaultScope("documents"), it works.
At first, I thought I couldn't find it because the names of the bundle identifier and container ID identifier were different.
My bundle identifier was com.a-test, and I realized that the cloud container Id was set to iCloud.com.a, so I created a new container Id iCloud.com.a-test.
However, app_data is still not found.
I don't want the data to be saved to be visible to the user.
Please help.
(I already checked that iCloud is available using your method.)
Environment:
Device: Mobile
OS: iOS 17.0
Unfortunately, I'm not able to reproduce this on my end. The container ID indeed needs to be iCloud.com.a-test when the bundle identifier is com.a-test, so that might've caused the initial trouble. Beyond that, this seems like a configuration issue I can't diagnose on my end.
If you can provide a repository with a minimal reproducible example, I should be able to help you further.
|
2025-04-01T06:37:08.777568
| 2018-03-15T05:59:18
|
305425252
|
{
"authors": [
"Kungsgeten",
"zeltak"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1485",
"repo": "Kungsgeten/yankpad",
"url": "https://github.com/Kungsgeten/yankpad/issues/29"
}
|
gharchive/issue
|
yankpad-map direct bind?
Hi again!
so as you can see I'm doing my yankpad spring cleaning, so more questions, hope you don't mind :)
following your great advice in issue #28, I started using yankpad-map.
for several reasons I would like to use them via hydras. so I started playing around with trying to get a function for each yankpad-map and came up with something like this (spoilers: I can't code :))):
(defun zzzzxxxx ()
  " insert yankpad o "
  (interactive)
  (yankpad-map "o"))
Now this clearly doesn't work, since I don't know how to instruct Emacs to press/enter 'o' after I launch yankpad-map via the function.
any tips on how to do so?
thx!
Z
It seems like #32 may be of interest to you, since that adds hydra-like functionality to yankpad-map. If you want to do what you describe, you could use something like this:
(defun zzzzxxxx ()
  " insert yankpad o "
  (interactive)
  (setq unread-command-events (listify-key-sequence (kbd "o")))
  (yankpad-map))
You can now use yankpad-map-simulate for this.
|
2025-04-01T06:37:08.782475
| 2016-09-03T18:33:48
|
174910952
|
{
"authors": [
"Flarp",
"Kurimizumi"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1486",
"repo": "Kurimizumi/Honeybee-Hive",
"url": "https://github.com/Kurimizumi/Honeybee-Hive/pull/15"
}
|
gharchive/pull-request
|
Add dev branch badges
Added specific branch badges in order to provide a quick summary to people
I'm fixing a conflict; I accidentally merged a commit from a Gitter badge bot which broke everything. The reversion checks have to pass, so give it a few minutes.
|
2025-04-01T06:37:08.791450
| 2019-10-23T15:00:07
|
511384187
|
{
"authors": [
"KurtE",
"mr-stivo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1487",
"repo": "KurtE/ILI9341_t3n",
"url": "https://github.com/KurtE/ILI9341_t3n/issues/20"
}
|
gharchive/issue
|
Teensy 3.6 vs Teensy 4.0 DMA Speed and frameCount()
Hi.
I've recently been working on migrating a project from a Teensy 3.6 to a Teensy 4. I'm noticing the Teensy 3.6 is much faster updating the screen using DMA.
It looks like the SPI clock speed setting for the Teensy 4 is not ignored, but capped at a fairly slow speed. I've tried allowing the library to allocate the frame buffer as well as allocating it myself using DMAMEM.
I'm wondering if anyone else is seeing similar speed differences? Any workarounds?
Thanks in advance for the help.
Arduino 1.8.10
Teensyduino 1.48
Teensy 3.6
240 Mhz
#define ILI9341_SPICLOCK 30000000
frameCount() = 24 fps
#define ILI9341_SPICLOCK 60000000
frameCount() = 45 fps
Teensy 4.0
600MHz
#define ILI9341_SPICLOCK 144000000u
frameCount() = 28 fps
#define ILI9341_SPICLOCK 72000000u
frameCount() = 28 fps
#define ILI9341_SPICLOCK 36000000u
frameCount() = 19 fps
#define ILI9341_SPICLOCK 18000000u
frameCount() = 12 fps
#define ILI9341_SPICLOCK 999999999u
frameCount() = 28 fps
Sorry, I don't have too much time to look into this.
If you need higher SPI speeds, then maybe you need to change which clock is used to control SPI.
In particular what is the setting for CCM_CBCMR register.
I think by default we choose The second clock:
CCM_CBCMR = (CCM_CBCMR & ~(CCM_CBCMR_LPSPI_PODF_MASK | CCM_CBCMR_LPSPI_CLK_SEL_MASK)) |
CCM_CBCMR_LPSPI_PODF(6) | CCM_CBCMR_LPSPI_CLK_SEL(2); // pg 714
Actually it is now page 1112...
Which, from our beginTransaction code, looks like it starts off with a 528 MHz clock going into the SPI subsystem. If you try changing that (2) to (1), or change it to (1) after begin is called, then I believe it will feed a 720 MHz clock into SPI....
Also need to look at the PODF fields of that as well...
You might ask these types of questions on the forum, and maybe I will have some time to look again...
Hi Kurt,
I see there was some action in the PJRC forums regarding the SPI speed. Lucky timing for me.
I'm using your modified SPI library and setting #define ILI9341_SPICLOCK 80000000. frameCount() is around 61-62 fps. Very fast.
Thanks for the help and thank you so much for your libraries. I'll ask future questions in the PJRC forums.
|
2025-04-01T06:37:08.796276
| 2024-10-04T18:16:41
|
2567009725
|
{
"authors": [
"Chin-may02",
"SakiraAli1115"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1488",
"repo": "Kushal997-das/Project-Guidance",
"url": "https://github.com/Kushal997-das/Project-Guidance/issues/1366"
}
|
gharchive/issue
|
Title: Addition of a few basic codes like: contact book, note making app, to-do list
Is your proposal related to a problem? Please describe.
Although the repository is very vast, it can be updated with a few basic programs in Python. I would like to contribute to this issue.
Would you please add the gssoc-ext and hacktoberfest-accepted tags so that I can immediately work on it?
Add any other context or screenshots about the proposal request here.
N/A
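One of the proposed beginner programs, a to-do list, is small enough to sketch here (hypothetical code, not the contributor's actual submission):

```python
def add_task(tasks, title):
    """Append a new, not-yet-done task to the list."""
    tasks.append({"title": title, "done": False})

def complete_task(tasks, index):
    """Mark the task at `index` as done."""
    tasks[index]["done"] = True

def pending(tasks):
    """Titles of tasks still to do."""
    return [t["title"] for t in tasks if not t["done"]]
```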
I am the mentor of this project.
Hello @Chin-may02
Can you please describe a little how the output will look after the update, and give screenshots?
Hello, Here you go
Contact Book
Note making- yet to code
To do list
@SakiraAli1115 There are more projects that I will be adding along with the ones I have mentioned above, with a README for beginners to better understand Python.
Please assign me this task with the tags gssoc-ext, hacktoberfest and hacktoberfest-accepted, along with whatever level seems fit.
Hello @Chin-may02
This is a program, not an application. Please read our project to understand it, and thank you for your contribution. Please open another issue proposing new concepts that do not already exist in the project!
@Kushal997-das Kindly close this issue since it already exists in our project.
|
2025-04-01T06:37:08.808318
| 2023-03-27T11:45:55
|
1641964121
|
{
"authors": [
"Peefy",
"coveralls"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1489",
"repo": "KusionStack/KCLVM",
"url": "https://github.com/KusionStack/KCLVM/pull/471"
}
|
gharchive/pull-request
|
feat: impl schema query API.
1. Does this PR affect any open issues?(Y/N) and add issue references (e.g. "fix #123", "re #123".):
[ ] N
[x] Y
feat: #418
2. What is the scope of this PR (e.g. component or file name):
3. Provide a description of the PR(e.g. more details, effects, motivations or doc link):
[ ] Affects user behaviors
[ ] Contains syntax changes
[ ] Contains variable changes
[ ] Contains experimental features
[ ] Performance regression: Consumes more CPU
[ ] Performance regression: Consumes more Memory
[x] Other
4. Are there any breaking changes?(Y/N) and describe the breaking changes(e.g. more details, motivations or doc link):
[x] N
[ ] Y
5. Are there test cases for these changes?(Y/N) select and add more details, references or doc links:
[x] Unit test
[ ] Integration test
[ ] Benchmark (add benchmark stats below)
[ ] Manual test (add detailed scripts or steps below)
[ ] Other
kclvm/capi/src/service/service.rs
6. Release note
Please refer to Release Notes Language Style Guide to write a quality release note.
None
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
0 of 0 changed or added relevant lines in 0 files are covered.
1 unchanged line in 1 file lost coverage.
Overall coverage remained the same at 89.313%
Files with Coverage Reduction:
compiler_base/parallel/src/executor/timeout.rs: 1 new missed line (92.86%)
Totals
Change from base Build<PHONE_NUMBER>:
0.0%
Covered Lines:
2106
Relevant Lines:
2358
💛 - Coveralls
|
2025-04-01T06:37:08.817501
| 2022-05-26T10:29:16
|
1249390632
|
{
"authors": [
"chai2010",
"coveralls"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1490",
"repo": "KusionStack/kclvm-go",
"url": "https://github.com/KusionStack/kclvm-go/pull/11"
}
|
gharchive/pull-request
|
test CLA
What problem does this PR solve?
Issue Number: close #issue-id
Problem Summary:
What is changed and how it works?
Check List
Tests
[ ] Unit test
[ ] Integration test
[ ] Manual test (add detailed scripts or steps below)
[ ] No code
Side effects
[ ] Performance regression: Consumes more CPU
[ ] Performance regression: Consumes more Memory
[ ] Breaking backward compatibility
Documentation
[ ] Affects user behaviors
[ ] Contains syntax changes
[ ] Contains variable changes
[ ] Contains experimental features
Release note
Please refer to Release Notes Language Style Guide to write a quality release note.
None
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 46.461%
Totals
Change from base Build<PHONE_NUMBER>:
0.0%
Covered Lines:
2422
Relevant Lines:
5213
💛 - Coveralls
|
2025-04-01T06:37:08.842268
| 2023-05-17T23:03:56
|
1714760415
|
{
"authors": [
"Karansankhe",
"MaverickDe",
"Rishitha-VasiReddy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1491",
"repo": "KwickerHub/frontend",
"url": "https://github.com/KwickerHub/frontend/issues/239"
}
|
gharchive/issue
|
OVERLAPBLOCK (DESIGN 6)
Overlapblock (design 6)
Design a overlap block that look exactly like the above image...
Create a file "OVERLAP_DESIGN_6.html" in the modules/overlapblock folder.
Use simple HTML, CSS, and JavaScript to create the overlapblock. You can use the development interface at https://kwickerhub.com (this project) or write the lines of code yourself.
Step by Step Guide.
Fork this project(Use the 'fork' button in the top right corner) and Clone your Fork.
git clone https://github.com/YOUR_USERNAME/frontend
Open your code Editor and Create a file "OVERLAP_DESIGN_6.html" in the "modules/overlapblock" folder of this project you just cloned. No need to create head, title and body tags, Just Add a div tag with some embedded style(i.e use the style tag) and the script tag where necessary.
If you want to add an image resource, please add it in the folder "modules/overlapblock/images_and_icons".
We recommend you use an svg for your image/icon.
Push your Code: You need to push your recent changes back to the cloud. Use the command below in the main directory of this Repository
git push origin dev
or use a GUI tool to avoid mistakes or complexity. LOL.
Make your Pull Request...
Good-luck.
Please assign me this issue under SSoC 23.
Hello @MaverickDe, Please assign this issue to me. I've started working on it.
Go on
On Mon, Nov 20, 2023, 4:31 AM Rishitha-VasiReddy @.***>
wrote:
Hello @MaverickDe https://github.com/MaverickDe, Please assign this
issue to me. I've started working on it.
—
Reply to this email directly, view it on GitHub
https://github.com/KwickerHub/frontend/issues/239#issuecomment-1818172570,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/AYAUKVMDUHGRPERESWCQEGTYFLFJ5AVCNFSM6AAAAAAYFXG6UGVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQMJYGE3TENJXGA
.
You are receiving this because you were mentioned.Message ID:
@.***>
|
2025-04-01T06:37:08.844454
| 2017-07-20T17:17:22
|
244436245
|
{
"authors": [
"Kwoth",
"QuantumToasted"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1492",
"repo": "Kwoth/NadekoBot",
"url": "https://github.com/Kwoth/NadekoBot/issues/1426"
}
|
gharchive/issue
|
.streamrole takes an unusually long time to execute.
.streamrole seems to take an unusually long time to execute (regularly 15+ seconds). I'm not sure if this is expected for the commands functionality or if it is a bug. Someone else was having this issue yesterday and they claimed it would take minutes to execute.
It is not a bug. A recent patch to streamrole forces the bot to immediately check everyone in the first role to see if they're streaming, and if they are, give them the role right away (assuming they fulfill the preconditions).
I sped it up 5x now, but it may error out if there are too many users. I'm not sure about role add ratelimiting.
|
2025-04-01T06:37:08.845663
| 2016-04-01T15:27:09
|
145209994
|
{
"authors": [
"Kwoth",
"LawlyPopz"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1493",
"repo": "Kwoth/NadekoBot",
"url": "https://github.com/Kwoth/NadekoBot/pull/171"
}
|
gharchive/pull-request
|
Fixed pokemon
I have fixed the stats between the Pokémon types and added the Fairy
type to the types!
great, thanks a lot
|
2025-04-01T06:37:08.852357
| 2021-08-24T14:28:33
|
978162921
|
{
"authors": [
"sashkab",
"stephenc-ie"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1494",
"repo": "KxSystems/pyq",
"url": "https://github.com/KxSystems/pyq/issues/144"
}
|
gharchive/issue
|
Running pyq in terminal results in the exit .pyq.run line of code executing from the python.q file
Questions
[ ] Which operating system are you using (if Linux, please provide flavour of it, i.e RedHat, CentOS or Ubuntu), is it 32-bit, or 64-bit?
Raspbian 32 bit
[ ] Which version of PyQ are you running? Please provide output of pyq --versions, if PyQ isn't operational, please provide Python interpreter version and PyQ version python -V; python3 -V; pip list | grep pyq:
Python 3.7.3
Python 3.7.3
pyq 5.0.0
[ ] Which version of kdb+ are you using, is it 32-bit or 64-bit?
32 bit
[ ] If on 64-bit, is your QLIC set? Please provide output env | grep QLIC on linux/macOS, or set|grep QLIC on Windows.
[ ] Did you use virtual environment to install PyQ? If not, why?
yes
[ ] Where is your QHOME? Please provide output env | grep QHOME on linux/macOS, or set|grep QHOME on Windows.
QHOME=/home/pi/virtualenv/q
[ ] Do you use Conda? If so, what version?
no
Steps to reproduce the issue
Expected result
Expected q/python interactive session in terminal
Actual result
(virtualenv) pi@raspberrypi:~/virtualenv $ pyq
[2] /home/pi/virtualenv/q/python.q:9: if[`python.q~last` vs hsym .z.f;exit .pyq.run .pyq.args]
^
Arrow ^ points to exit .pyq.run. Any ideas what to try from that point? What may have gone wrong on install process? Any suggestions very welcome - or if any Rasp Pi install guides available please share
Q works in terminal on my virtual environment.
Python version 3.7.3
pyq 5.0.0 installed using pip
Workaround
If you know workaround, please provide it here.
I don't think PyQ was ever tested on Raspberry Pi. Could you please provide the output of pip install pyq in a clean venv?
Thanks for the above suggestion and resolution, sashkab. I am getting access to an existing work dev server, so I won't be pursuing this further at the moment on my RPi 3.
|
2025-04-01T06:37:08.862465
| 2023-10-11T19:38:06
|
1938589518
|
{
"authors": [
"zml2008"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1495",
"repo": "KyoriPowered/adventure",
"url": "https://github.com/KyoriPowered/adventure/pull/986"
}
|
gharchive/pull-request
|
1.20.3
This is a branch to gather work on supporting features and representation changes from Minecraft 1.20.3.
Relevant changelogs:
23w40a
So, my current thought for handling version differences is that we should have a 'feature flag' system, where things like 'emit legacy hover event' or 'emit uuid as ints' vs 'emit uuid as string' are toggleable options. We'd need to produce presets for compatibility with different game versions, plus probably 'latest' and 'most compatible' levels.
Should this be game versions? Then how do we handle snapshots?
Alternately, we could mark these revisions by:
datapack versions
protocol versions
data versions
Thoughts on what makes most sense?
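As a rough illustration of the preset idea, here is a minimal sketch (flag names, version keys, and the resolver function are all hypothetical, and it is in Python for brevity — adventure itself is Java):

```python
# Hypothetical sketch of serializer feature flags with per-version presets.
# Flag names, version keys, and the function are illustrative only.

LEGACY_HOVER_EVENT = "emit_legacy_hover_event"
UUID_AS_INTS = "emit_uuid_as_int_array"

# Presets keyed by game/data version, plus "latest" and "most_compatible".
PRESETS = {
    "1.16": {LEGACY_HOVER_EVENT: True, UUID_AS_INTS: False},
    "1.20.3": {LEGACY_HOVER_EVENT: False, UUID_AS_INTS: True},
}
PRESETS["latest"] = dict(PRESETS["1.20.3"])
PRESETS["most_compatible"] = {LEGACY_HOVER_EVENT: True, UUID_AS_INTS: False}

def serializer_flags(preset, **overrides):
    """Resolve a preset into a flag dict, allowing individual overrides."""
    flags = dict(PRESETS[preset])
    flags.update(overrides)
    return flags
```

Snapshots could then be handled by adding a preset per data version rather than per release name.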
We still need to implement:
[ ] handling for the type field
[ ] NBT component serialization (or DFU integration)
[ ] matching the strictness of Vanilla serialization
but I think it's worth merging this as-is just to have published snapshots that others can depend on.
|
2025-04-01T06:37:08.881186
| 2020-04-11T21:02:50
|
598350594
|
{
"authors": [
"Kyusung4698",
"tragicnate",
"ymyt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1496",
"repo": "Kyusung4698/PoE-Overlay",
"url": "https://github.com/Kyusung4698/PoE-Overlay/issues/611"
}
|
gharchive/issue
|
Restrictive search behavior for defense values
🐞 bug report
Hi! I'm a new user of PoE-Overlay and really liking it so far. Thanks for developing this tool!
Here's a report for an adverse behavior I noticed, which I wasn't able to work around. I'm assuming it's a bug (apologies if I'm mistaken), but if not, then this could be a feature request instead.
📝 Desription
When filtering searches based on total defense values (AR/EV/ES), specifying the range doesn't work the same way as in other modifiers. This is unintuitive, and forces search based on flat/percent/hybrid defense modifiers, which is not very useful for e.g. rare items.
The minimum value is capped at the raw defense value(s) for the given item base (even if the item base is not filtered for). The maximum value can't be increased beyond the current modified defense value(s) of the item.
To Reproduce
Bring up a search for any armor piece, and scroll up/down on the value range for defenses. In the example in the screenshot, the range can't be expanded further than 246 (Slink Boots base EV) ~ 490 (the current EV on the item) in either direction.
Expected behavior
I would expect range specification for defenses to work in exactly the same way as modifiers. The user should be able to search in the range (#, current value) ~ (current value, #), with the corresponding handles on the configuration interface to set default search behavior.
Screenshots
🌍 My Environment
OS: Windows 10 Pro x64
Version: 0.6.20
PoE: Steam 3.10.1c English
Thank you so much for posting this. I am in a similar boat. I love this program and want to use it. However, not being able to auto select a defensive value like energy shield and setting the minimum value to like -10% and max value to uncapped is the biggest hurdle that is keeping me from being able to use this program exclusively over poe trade macro.
0.6.21 (2020-04-12)
add own min max range settings for properties
add preselect attack and defense as setting
remove quality min/ max restriction (#611)
Thank you so much for this update. This is amazing tool and these changes are huge! I feel like this is the best tool out there now and no reason to use anything else. The ability to set min-max ranges separately for defensive values and stat values is also a great feature. Thank you!
Thanks for your positive feedback. Closed.
|
2025-04-01T06:37:08.916465
| 2024-06-05T23:18:51
|
2336978681
|
{
"authors": [
"FIM43-Redeye",
"L-Spiro"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1497",
"repo": "L-Spiro/BeesNES",
"url": "https://github.com/L-Spiro/BeesNES/issues/4"
}
|
gharchive/issue
|
Input doesn't seem to work
Since this is a very small project, I'm not actually sure how to go ahead and report a bug, What I do know is that BeesNES doesn't seem to take any inputs from my keyboard, and when I do put in new input configs, they stay temporarily (for as long as the program is running) but disappear on restart. Any idea what's up?
Someone else was having such an issue, and I suspect the reason is because
there is something on your system that registers as a game controller.
When a controller is plugged in, input uses that instead of the keyboard.
This is all temporary code; your key settings disappear because they are
not saved to a settings file. In the final version it will check your
controller settings and keyboard settings so this won't happen, but I still
want to know what is causing it to think there is a controller attached
when there seemingly isn't. The other person said he had no controllers
attached.
If you are building the source, can you have it print the names of the
controllers it is detecting?
In the final version, you will be able to specify at least 4 devices for
input, and each device will be polled in order until a key-press is found,
so this won't happen.
So I'm building the source, but I'm unfortunately really not adept in C++ at all- I'm mostly really interested in this project and wanted to have a look, but to be honest, I don't really know where to begin, most of what I do is C#. Would a simple printf work? Should I use one for each function in Input that enumerates controllers? Apologies for my ineptitude.
Find this function (…\BeesNES\Src\Input\LSNDirectInput8.cpp):
BOOL PASCAL CDirectInput8::DIEnumDevicesCallback_GatherDevices(
LPCDIDEVICEINSTANCEW _lpdDi, LPVOID _pvRef ) {
Change it to:
BOOL PASCAL CDirectInput8::DIEnumDevicesCallback_GatherDevices(
LPCDIDEVICEINSTANCEW _lpdDi, LPVOID _pvRef ) {
std::vector<DIDEVICEINSTANCEW> * pvVector =
static_cast<std::vector<DIDEVICEINSTANCEW> *>(_pvRef);
pvVector->push_back( (*_lpdDi) );
::OutputDebugStringW( (*_lpdDi).tszProductName );
::OutputDebugStringW( L"\r\n" );
return DIENUM_CONTINUE;
}
You can also breakpoint that line to see what it prints if you run in the
debugger (via hitting F5).
So, oddly, that line doesn't seem to be called at all when I run it in the debugger. When I breakpoint it, the breakpoint never trips, and nothing prints to the debug window that looks like a product name.
Try the latest commit. It polls the keyboard even if a controller is
detected.
Still nothing, so far. Buttons are useless, but the wrap_oal problem disappeared, so that was probably on my end. Interestingly, when I do hook up a controller, it picks up on it just fine, but then I can't assign inputs with it at all.
I wanted to redo the CPU and see if that was an issue.
This is now CPU #3. It’s like the original CPU but refined, sleek, and no hidden bugs outside of what I already know needs to be fixed.
CPU #2 had all kinds of issues and passed very few tests. It might have been causing input-polling issues.
So try things now.
Tried now. Unfortunately input is still broken. No errors, though. Sorry for such a late response.
regsvr32 oleaut32.dll
Try this to address your error. Run with admin permissions.
Does it respond to your keyboard when you configure a controller?
Try running in compatibility mode.
Registering oleaut32.dll did not seem to help. More concerningly, I can't configure a controller at ALL, and none show up in the input devices list. Running in compatibility mode didn't help either.
Anything I can do on this end to make the code give more information as to what's wrong? This dumbfounds me.
Nothing shows in the list yet and those boxes don’t respond to controllers yet.
For now I am only trying to see if your keyboard can at least be recognized.
Input is polled here:
https://github.com/L-Spiro/BeesNES/blob/main/Src/Windows/MainWindow/LSNMainWindow.cpp#L1038
You could add print-outs to see if ::GetAsyncKeyState() returns anything for a key you know to be pressing.
https://github.com/L-Spiro/BeesNES/blob/main/Src/Windows/MainWindow/LSNMainWindow.cpp#L1587
This is where controllers are searched. You can try to follow it in any of the debug builds and check where it fails to gather your controller.
ChatGPT can help you adjust the code so that it finds your controller and then you can tell me what you did if it works.
Found it!
So the issue is that the controller dialog appears to not have any effect on what keys are actually recognized. The hardcoded keys starting at line 1158 of LSNMainWindow.cpp work fine (though a little futzing was needed to replace VK_OEM_1 with K; I assume that key's nice and handy on your keyboard). Aside from that it's a bit crunchy-sounding (significant slowdown on an R9 7940HS), but that's probably my issue to debug and not yours. Though it IS absolutely tapping out one of my cores.
My laptop seems much lower-spec than that and it can run at up to 90 FPS.
A lot of work goes into the visuals and audio. How well does Options -> Video Filter -> None work?
It might be a difference in AVX capabilities and I may have chosen bad defaults for my L. Spiro Filter for your AVX support since my own support may vary.
Audio may also be playing a part:
https://github.com/L-Spiro/BeesNES/blob/main/Src/Apu/LSNApu2A0X.h#L187
Try changing that * 3 to * 6 or something.
When you load a ROM, you will see this in Visual Studio: Kernel size: 95.
Change * 3 to something else to lower that number. Increasing it should lower that value but I may be misremembering which part of the equation that is influencing, so it might need to be lowered instead. Check the debug print to confirm.
My laptop has AVX, AVX2, and AVX-512, but not AVX10. Disabling the filter significantly increases performance, but still doesn't hit stable FPS.
Bumping up that value to 6 did not meaningfully improve performance, but it did push the kernel size to 191. However, I can confirm that the L. Spiro filter runs far worse than with no filter or the Blargg filter. Using it to artificially slow the system down creates a sound best described as the following, where | stands for a good audio sample and - stands for silence:
|---|---|---|---
The slices are extremely small, but there's definitely an audible distance between them. Not sure if that'll help much, but I wanted to report it.
Bumping up that value to 6 did not meaningfully improve performance, but it did push the kernel size to 191.
That number is supposed to go down. Higher means worse performance.
Change it to * 1.
The audio buffers are 16 samples long. This will be configurable later but for now I want it to be as close to real-time as possible.
All-in-all this is supposed to run on medium-end hardware even with some settings cranked up, but I am going to just have to get some more hardware for testing on my end.
https://github.com/L-Spiro/BeesNES/blob/main/Src/Audio/LSNAudioBase.h#L142
You can also decrease the output rate by changing both of those 44100’s to something else.
Even with the kernel shrunk to 31, performance appears similar to what it initially was. I think something other than audio is eating cycles.
What are the specs of your laptop? If it's Intel, that may explain how optimizations that work excellently on it seem to crash and burn on an AMD system. I can try to get access to another Intel system and test it there to see if the performance is linked to manufacturer.
I was thinking it is the AMD part too, because your specs are way better than mine:
11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz.
I will need to get an AMD machine unless you are able to find bottlenecks.
Performance might improve once I had GPU support but if “None” filter is still slow then it seems unlikely to be the only issue.
Eating an entire core is by design, but there may be something about that design that just doesn’t play well with AMD.
I ran the performance profiler- it's a bit clumsy, but here are my results:
[Uploading Report20241003-1059.zip…](Github didn't like the report, so I had to put it in a zip file.)
I always hate to doublepost, but I think I found the bottleneck:
https://github.com/L-Spiro/BeesNES/blob/main/Src/LSNLSpiroNes.cpp#L61
This particular line is eating nearly 50% of total CPU through the external call. I can't analyze it any further due to some confusion about missing symbol files.
I don’t know how that can be such a problem; it’s just peeking for a new message.
Can you work with Ms. ChatGPT to find some way to make it better on AMD?
I have embarrassed myself. Turns out I just had to swap it to Release configuration and now it runs smoothly. Swapping PeekMessageW for GetMessageW still uses a very large amount of CPU, if a little less, but I'm not sure how to optimize that - maybe it's just normal and the profiler is just seeing the work that function does. I think the input issue is solved per earlier.
BeesNES now runs beautifully on my system. I don't know how to strip the FPS limit out, so I can't push it to its limit and see just how well it runs, but it handles full speed like a champ. The filters also run buttery smooth, with no FPS drop on your custom ones. I think the AVX optimization is working well on AMD as well.
Thanks again for making such an amazing program! I genuinely feel privileged for the opportunity to use a sub-cycle NES emulator in real time. Hope I didn't waste too much of your time, and thanks again for all your help.
I’m just glad to hear it is working as-expected and input works!!!
|
2025-04-01T06:37:08.949351
| 2022-05-05T20:07:07
|
1227125868
|
{
"authors": [
"pixeltrix"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1498",
"repo": "LBHackney-IT/bonuscalc-api",
"url": "https://github.com/LBHackney-IT/bonuscalc-api/pull/83"
}
|
gharchive/pull-request
|
Improve the SonarCloud reports
Report coverage correctly
Remove migrations from duplications
Fix current warnings/errors
Seems like dotnet-coverage swallows the exit status of dotnet test and always returns 0 🤦🏻♂️
Will rip out and replace with coverlet 😩
|
2025-04-01T06:37:08.957768
| 2019-05-29T21:23:34
|
450027075
|
{
"authors": [
"coveralls",
"mjgiarlo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1499",
"repo": "LD4P/sinopia_editor",
"url": "https://github.com/LD4P/sinopia_editor/pull/580"
}
|
gharchive/pull-request
|
Fix malformed URI in PropertyComponent test
Thanks, @ndushay!
Coverage remained the same at 81.825% when pulling af1ccf4f09571bf9ad2653f477ec0fe95e3bd5b2 on mjgiarlo-patch-1 into 5fd3365354fde2fff4dfb4f7a44f45b070972457 on master.
|
2025-04-01T06:37:08.958969
| 2023-07-29T13:24:01
|
1827477864
|
{
"authors": [
"cneshi",
"nebulorum"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1500",
"repo": "LDOMotors/steppy",
"url": "https://github.com/LDOMotors/steppy/issues/1"
}
|
gharchive/issue
|
Remove need for supports
On the multi-part model the bottom part has countersunk holes that will require support. Totally workable, but since you are such a big name in the Voron community, and their models are an example of how to design without supports, it would be cool to bring this model up to the same standard.
Thanks for noticing this issue. I will merge a fix for this soon
Super cool. Thanks...
|
2025-04-01T06:37:08.960313
| 2022-05-23T08:01:59
|
1244738990
|
{
"authors": [
"stevenBrownie"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1501",
"repo": "LEB-EPFL/eda-napari",
"url": "https://github.com/LEB-EPFL/eda-napari/issues/14"
}
|
gharchive/issue
|
When removing all layers must delete all widget data
If the widget data is not deleted, errors can occur when the widget is reopened, as connections are still connected, etc.
At the moment the dock widget is removed, but this doesn't delete the widget.
This problem only occurred in debug and seemed to be caused by the stepping buttons in the debugger.
|
2025-04-01T06:37:08.969376
| 2023-03-21T02:41:13
|
1633135585
|
{
"authors": [
"Arthur-99",
"LFhase",
"ZYF150322661776"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1502",
"repo": "LFhase/CIGA",
"url": "https://github.com/LFhase/CIGA/issues/2"
}
|
gharchive/issue
|
keyError 6
Hi, I'm very inspired by your work, but when I create the SPMotif class, it raises a KeyError: 6 in torch.save(collate(data_list, id)):
in _collate
key, [v[key] for v in values], data_list, stores, increment)
Remove the `pos` attribute in spmotif_dataset.py line 103:
data = Data(x=x,
y=y,
z=z,
edge_index=edge_index,
edge_attr=edge_attr,
# pos=p,
edge_gt_att=torch.LongTensor(ground_truth),
name=f'SPMotif-{self.mode}-{idx}',
idx=idx)
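For intuition, the failure mode is that PyG-style collation gathers each attribute key across all Data objects, so an attribute present on some items but not others breaks the `[v[key] for v in values]` lookup. A pure-Python sketch of that mechanism (illustrative only, not torch_geometric's actual code):

```python
# Minimal illustration of why collation fails when an attribute such as `pos`
# is inconsistent across items (not torch_geometric's real implementation).

def collate(data_list):
    """Gather each key of the first item across every item in the list."""
    keys = list(data_list[0].keys())
    return {key: [item[key] for item in data_list] for key in keys}

good = [{"x": 1, "pos": (0, 0)}, {"x": 2, "pos": (1, 1)}]
bad = [{"x": 1, "pos": (0, 0)}, {"x": 2}]  # second item lacks `pos`

collate(good)  # fine
try:
    collate(bad)
except KeyError:
    pass  # dropping `pos` from every item, as above, avoids this
```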
I'm closing the issue and feel free to reopen it if you guys have any further questions.
|
2025-04-01T06:37:09.002462
| 2021-10-13T13:27:07
|
1025252172
|
{
"authors": [
"LKaemmerling",
"geisi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1503",
"repo": "LKDevelopment/hetzner-cloud-php-sdk",
"url": "https://github.com/LKDevelopment/hetzner-cloud-php-sdk/pull/83"
}
|
gharchive/pull-request
|
Prevent API undefined property: stdClass:$server error when firewall has attached tags
Hello, this is my first PR in this project:
Before this PR, the following error occurred when you queried a firewall with an attached label:
Undefined property: stdClass::$server thrown in hetzner-cloud-php-sdk/src/Models/Firewalls/Firewall.php:125
I reproduced this error by extending the firewall.json and firewalls.json fixtures with tag objects.
This error happens because the firewall model assumes that only server resources can be attached to a firewall.
This PR checks every resource the firewall is applied to and only adds it to the appliedTo array when it is a server object. This is only a workaround for the problem. In the future these tags (and their corresponding servers) should also be added to the appliedTo array, but that needs some bigger refactoring in the firewall model code, so this is only a quick fix.
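The guard can be sketched as follows (in Python for brevity; the SDK itself is PHP, and the names here are illustrative, not the SDK's real API):

```python
# Sketch of the defensive check: only applied_to entries that are actually
# server objects are collected; other types (e.g. label selectors) are skipped
# instead of triggering an undefined-property error.

def applied_servers(applied_to):
    """Collect server ids from a firewall's applied_to list, skipping non-servers."""
    servers = []
    for entry in applied_to:
        if entry.get("type") == "server" and "server" in entry:
            servers.append(entry["server"]["id"])
    return servers

mixed = [
    {"type": "server", "server": {"id": 42}},
    {"type": "label_selector", "label_selector": {"selector": "env=prod"}},
]
```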
Thank you for your hard work!
Good catch! Thank you!
|
2025-04-01T06:37:09.007363
| 2017-03-09T05:42:37
|
212940645
|
{
"authors": [
"ericrosenbaum",
"rachel-fenichel",
"thisandagain"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1504",
"repo": "LLK/scratch-blocks",
"url": "https://github.com/LLK/scratch-blocks/pull/829"
}
|
gharchive/pull-request
|
Do not accept an input for some drop-down menus
Resolves
GH-827
Proposed Changes
Removes the ability to accept an input from some drop-down menus
looks_changeeffectby
looks_seteffectto
sound_changeeffectby
sound_seteffectto
operator_mathop
Reorder sound effects blocks to match graphic effects sort order
Minor adjustment to sound effects "clear" block language
The changes look good!
One requested addition:
The "set rotation style" block also has a drop-down with a fixed set of options that should probably not accept an input, so I suggest making the same change there.
Also, a note:
The < key [space] pressed > boolean reporter does not accept an input in 2.0, but currently in 3.0 it does. I got excited for a minute about this because it could be a feature: you could, for example, loop through a list of letters to check if each is pressed, rather than using separate if statements. This already works for letters, but numbers seem to be treated as ASCII codes, so they do not work as expected (but as a result checking < key 10 pressed > is true when you press tab!). I'll open a separate issue about these questions.
Looks good to me.
@ericrosenbaum Good catch! All set. Give it one more check and then I'd be happy to land this if it looks ok to you. Thanks for filing the new issue about the key [x] pressed block. 😄
Looks good!
|
2025-04-01T06:37:09.087516
| 2019-02-17T16:41:06
|
411207028
|
{
"authors": [
"coveralls",
"mrfelton"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1505",
"repo": "LN-Zap/zap-desktop",
"url": "https://github.com/LN-Zap/zap-desktop/pull/1594"
}
|
gharchive/pull-request
|
fix(wallet): ensure currency unit is saved on change
Description:
Ensure selected currency unit is saved and restored properly.
Motivation and Context:
This fixes a bug in which the selected currency unit is lost when you logout of a wallet or restart the app.
How Has This Been Tested?
Manually - change currency unit, log out, log back in, or restart the app and ensure that currency unit preference is restored.
Types of changes:
Bug fix
Checklist:
[x] My code follows the code style of this project.
[x] I have reviewed and updated the documentation accordingly.
[x] I have read the CONTRIBUTING document.
[ ] I have added tests to cover my changes where needed.
[ ] All new and existing tests passed.
[x] My commits have been squashed into a concise set of changes.
Coverage decreased (-0.03%) to 19.674% when pulling 4692d3403b8dfa4f800ad150248b9f3bd4412075 on mrfelton:fix/save-selected-currency-unit into 22e670a25d5351fff6444230144bffced389d678 on LN-Zap:master.
|
2025-04-01T06:37:09.097905
| 2022-08-17T14:19:37
|
1341840615
|
{
"authors": [
"Pheonix-Flames",
"rajarsheechatterjee"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1506",
"repo": "LNReader/lnreader",
"url": "https://github.com/LNReader/lnreader/issues/395"
}
|
gharchive/issue
|
No Novel Cover on NovelHall
Steps to reproduce
Go to Browse and Click on NovelHall
Expected behavior
It should show the novel names with their covers
Actual behavior
It does not show the novel cover, but when we click on a novel, its cover is shown
LNReader version
1.1.12
Android version
11
Device
Realme 3 pro
Other details
Acknowledgements
[X] I have searched the existing issues and this is a new ticket, NOT a duplicate or related to another open or closed issue.
[X] I have written a short but informative title.
[X] If this is an issue with an source, I should be opening an issue in the sources repository.
[X] I have updated the app to version 1.1.12.
[X] I will fill out all of the requested information in this form.
|
2025-04-01T06:37:09.152127
| 2023-09-06T03:23:26
|
1883108345
|
{
"authors": [
"ajshajib",
"sibirrer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1507",
"repo": "LSST-strong-lensing/sim-pipeline",
"url": "https://github.com/LSST-strong-lensing/sim-pipeline/pull/48"
}
|
gharchive/pull-request
|
fix most of the docstring errors
I made some minor changes to hopefully fix some docstrings in readthedocs
@ajshajib I expected that the pre-commit formats the lines to fulfill the PEP8 standards with black, or am I missing something?
@sibirrer, that's what it's expected to do, yes. But it doesn't always work perfectly, unfortunately, particularly docformatter for docstring formatting. The black formatter itself works very well in my experience, but it only works on the code lines, not the docstrings. So we still have to keep an eye out for blemishes. One reason docformatter could break is when it sees an unexpected layout in the docstring. For example, the expected Sphinx option is :return: and not :returns:.
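As an illustration of the layout these tools expect (a minimal sketch, not taken from the sim-pipeline codebase), a Sphinx-style docstring would use :return: rather than :returns::

```python
def add(a: int, b: int) -> int:
    """Return the sum of two integers.

    :param a: first addend
    :param b: second addend
    :return: the sum of ``a`` and ``b``
    """
    return a + b
```

With this field list, docformatter and Sphinx agree on how the docstring is parsed.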
|
2025-04-01T06:37:09.164239
| 2018-10-04T21:20:38
|
366975313
|
{
"authors": [
"beckermr",
"rainwoodman",
"villarrealas"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1508",
"repo": "LSSTDESC/CCL",
"url": "https://github.com/LSSTDESC/CCL/issues/485"
}
|
gharchive/issue
|
C Install and Python Install Don't Get Along
In recent bug-testing I've been trying to have both the C install (generated with cmake combined with make + sudo make install) and a Conda Python install (generated from a python setup.py install --user) up and running at the same time. During the installation of Python though, the following error pops up.
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[3]: *** [libccl.dylib] Error 1
make[2]: *** [CMakeFiles/ccl.dir/all] Error 2
make[1]: *** [pyccl/CMakeFiles/_ccllib.dir/rule] Error 2
make: *** [_ccllib] Error 2
Traceback (most recent call last):
File "setup.py", line 65, in <module>
'Topic :: Scientific/Engineering :: Physics'
File "/Users/avillarreal/anaconda3/lib/python3.6/site-packages/setuptools/__init__.py", line 140, in setup
return distutils.core.setup(**attrs)
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/Users/avillarreal/anaconda3/lib/python3.6/site-packages/setuptools/command/install.py", line 67, in run
self.do_egg_install()
File "/Users/avillarreal/anaconda3/lib/python3.6/site-packages/setuptools/command/install.py", line 109, in do_egg_install
self.run_command('bdist_egg')
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/Users/avillarreal/anaconda3/lib/python3.6/site-packages/setuptools/command/bdist_egg.py", line 172, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "/Users/avillarreal/anaconda3/lib/python3.6/site-packages/setuptools/command/bdist_egg.py", line 158, in call_command
self.run_command(cmdname)
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/Users/avillarreal/anaconda3/lib/python3.6/site-packages/setuptools/command/install_lib.py", line 11, in run
self.build()
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/command/install_lib.py", line 105, in build
self.run_command('build_py')
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/avillarreal/anaconda3/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "setup.py", line 17, in run
raise Exception("Could not build CCL")
Exception: Could not build CCL
This error does not occur when only pyccl is installed or only the C library is compiled. Not sure what causes this, but I would guess it is some sort of permissions error. Adding a sudo to the python install fixes this, but as this seems to be new behavior, I'm bringing it up just to make certain it is intended.
It's a little hard to say. I did a little further playing around to make sure things still worked fine. I think the result was due to an unnecessary sudo in the make install for my MacOS X set-up. It looks like the Python installer attempts to build C shared library libccl.dylib in the same location, but if the python installer isn't run at the same permissions as the C installer, there can be a conflict.
I wonder if this exists in the pip install version; if this is just in the development installation I am happy to throw a warning up and call it a day. If this conflicts with the pip install, that is a bit more problematic.
Wait is sudo hard coded? That is bad form and we should fix immediately.
I don't think sudo is hard coded. Typical procedure for building the C library is make followed by make install. Previous builds have required me to use sudo make install due to permissions. This would then interfere with the new python installer (even if I try to build with --user) unless I include a sudo.
Now I seem to need to drop the sudo from make install and it all goes fine.
What is the known issue then? Sounds like this is an old thing the cmake build has solved?
Specifically that if you need to use a sudo make install to set up the C side, you need to use a sudo python setup.py install for the Python side.
ahhhh. I don't think this qualifies as a CCL install issue, but we can add it to the docs!
unix/linux permissions are an issue no matter what you are doing. :P
Indeed. I agree. I'm going to add this to the wiki and close this.
pyccl shall probably build the c lib in a standard setuptools directory rather than the recommended cmake directory.
Ugh. This is hard. Do we really want to force people to install the C lib when doing python?
I thought the final python so file links to an archive library. That archive has to be built. Currently it shares the same location as the building location of the C builder. It is not hard to split them. I will file a PR.
|
2025-04-01T06:37:09.173469
| 2016-10-24T14:17:00
|
184853476
|
{
"authors": [
"Henk-JanVanHasselaar",
"mevinbabuc",
"mikedingjan",
"robmoorman"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1509",
"repo": "LUKKIEN/wagtailtrans",
"url": "https://github.com/LUKKIEN/wagtailtrans/issues/26"
}
|
gharchive/issue
|
Can we have the module specify which version it's on?
Right now doing
import wagtailtrans
wagtailtrans.VERSION
does not tell which version the package is. Is this something we can add, so that on the implementation (project) side we can handle release changes smoothly?
Would this work for you @mevinbabuc ?
import pkg_resources
__version__ = pkg_resources.get_distribution("wagtailtrans").version
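A runnable sketch of this pattern (querying "setuptools" here only because it is almost certainly installed; wagtailtrans would pass its own distribution name):

```python
import pkg_resources

# Look up the installed version string of a distribution at runtime.
# "setuptools" is only a stand-in that is nearly always present;
# the real code would use "wagtailtrans".
version = pkg_resources.get_distribution("setuptools").version
print(version)
```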
made a PR #30
PR #30 is merged to master, closing this ticket.
|
2025-04-01T06:37:09.178148
| 2024-05-25T15:42:42
|
2317093291
|
{
"authors": [
"LUXTACO",
"Legnatbird"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1510",
"repo": "LUXTACO/DBFarmer",
"url": "https://github.com/LUXTACO/DBFarmer/issues/1"
}
|
gharchive/issue
|
Not working
OS: Windows 11
Python: 3.12.2
Emulator: Memu
Resolution: 1366*768 (os)
Runs fine, but the bot does not detect any objects to interact with.
You might need to re-take the screenshots: just go through the game, take screenshots of the elements and replace them. Make sure your game is in English and that you don't have something like HDR on! If the problem persists, change the confidence value inside the program.
|
2025-04-01T06:37:09.215761
| 2020-12-11T20:28:53
|
762864560
|
{
"authors": [
"labkey-adam"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1513",
"repo": "LabKey/platform",
"url": "https://github.com/LabKey/platform/pull/1781"
}
|
gharchive/pull-request
|
Issue 41940: Fix up TabLoader junit tests
Rationale
Issue 41940: Fix up TabLoader junit tests
Related Pull Requests
https://github.com/LabKey/platform/pull/1733
https://github.com/LabKey/sampleManagement/pull/411
Changes
Close all TabLoaders, CloseableIterators, etc.
Correct reversed expected and actual parameters in assertEquals()
Combine nearly identical tests for TSV and CSV
Assert that file deletes were successful
Add test to verify that infer fields works for single data row with no \n
Note: New testSmallFile() junit test will fail until release20.11 branch is merged to develop
|
2025-04-01T06:37:09.217331
| 2017-10-25T14:46:21
|
268426522
|
{
"authors": [
"pollockm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1514",
"repo": "LabVIEW-DCAF/Scaling",
"url": "https://github.com/LabVIEW-DCAF/Scaling/issues/15"
}
|
gharchive/issue
|
Remove tab page
We show the tab control pages, even though there is just one.
Hide the pages, and reclaim some of the UI space.
Closing, since this is currently the standard for DCAF UIs. Whether this should be the standard is an entirely different question.
|
2025-04-01T06:37:09.264438
| 2024-11-26T22:54:41
|
2696361772
|
{
"authors": [
"Arrkayd",
"Aryanblood17",
"FeyrisTan",
"JDOGGOKUSSJ2",
"K1LL3RPUNCH",
"Lacro59",
"Maple-Elter",
"PaulTheCarman",
"Sergiokool",
"Storbfall",
"Thiagojustino1",
"Verssgn",
"WerewolfNandah",
"bivasbh",
"bryjo3",
"johnnywnb",
"pedrosacca",
"robbely",
"samarthc",
"wensleyoliv"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1515",
"repo": "Lacro59/playnite-howlongtobeat-plugin",
"url": "https://github.com/Lacro59/playnite-howlongtobeat-plugin/issues/243"
}
|
gharchive/issue
|
Nothing shows up when clicking view HowLongToBeat data (again)
Describe the bug
This issue was reported by Travers50 a few weeks ago. Now it's happening all over again.
I'm using his last post as a reference, since it's the same issue:
When pressing view HowLongToBeat datas to add the overlay onto my dashboard, no games show up.
To Reproduce
Steps to reproduce the behavior:
Click any game, press 3 bars, hover over howlongtobeat, press view howlongtobeat datas, search, see nothing
Expected behavior
I expect to see a list of any games that could be related to the title of the entry you're searching for.
Screenshots
Extensions log
Attach Playnite's Extensions.log file. It is located in Playnite's installation directory in the portable version or in %AppData%\Playnite (Can be pasted in Explorer) in the installed version
It's either a coincidence and they're simply making changes to their website, or they are actively trying to break the plugin. Let's hope it's the former and that it can be fixed again.
yeah can confirm getting the same issue again
Can confirm I have the same issue.
Same issue here.
Same problem for any games.
same anyone have the fix like the last time or it will be fixed?
Same here
Issue persisting after v3.6 update. The latest data being fetched is empty.
Could someone please upload one of their already set up games? The files are stored in ExtensionsData\e08cd51f-9c9a-4ee3-a094-fde03b55492f\HowLongToBeat. I want to write a script that would allow you to manually add the HLTB data while this issue is getting fixed by the devs. (Don't have any game data since I reinstalled the addon). Thanks.
@Verssgn here's one.
02887464-34d7-4485-a2f1-38987d9601ac.json
@Verssgn here's one. 02887464-34d7-4485-a2f1-38987d9601ac.json
Thanks! If I get it to work I will post it here.
Here is a script that will allow you to download files for the addon from the HLTB website (this is a bandage solution). Please note that the script is rough, so for the button to appear, refresh the website and give it a couple of seconds to load. SCRIPT + TUTORIAL: https://greasyfork.org/en/scripts/519319-how-long-to-beat-to-playnite
Please do not post issues with the script here! Use the feedback section on greasyfork.
Works great, thank you :)
For information:
https://howlongtobeat.com/forum/thread/681/6#post118870
Works like a charm. Thanks
Hi, I found the issue and submitted a fix via Pull Request.
While waiting for the official new release to be available, you can use this temporary release to test the fix on your side:
https://github.com/johnnywnb/playnite-howlongtobeat-plugin/releases/tag/v3.6-hotfix%23243
Just download and double-click on the .pext file while Playnite is open, or drag&drop the file into Playnite.
Hope it helps.
This fixed it for me. Thanks a ton. People like you are what make Playnite so great!
Ur truly a life saver mate, works like a charm, thanks a lot , hope Lacro could put you on his team of dev support :D
ACK I really want to thank you for this, but now my stupid antivirus tags it as a virus and auto-blocks it every time I try to download it. How can I stop it from doing that?
Great, I managed to download it from Firefox browser, but now my antivirus auto-tags the add-on itself as a virus and I can't load it anymore.
How can I fix that?
Hi @WerewolfNandah, I don't know why your antivirus flagged this file as malicious. You could upload it to VirusTotal to scan it and check that it is totally safe.
Perhaps adding it as an exception in your antivirus could sort your issue.
If it does not work, you may decide to reinstall it from Playnite interface and wait for the next official release. That way, no manual installation will be required, so less risk of triggering any AV overzealous scans.
I finally managed to install the fix and make the app work again.
Only two days later, though, the bug returns.
I'm just going to wait for an official fix up to this point, but I'm starting to lose my confidence in this app entirely. This is just ridiculously annoying.
This is broken again
Broken for me as well. The HLTB extension has always had a history of only working half the time though lol. I don't think it's the dev's fault - I think HLTB just updates their website a lot. Verssgn's workaround still works for me though.
Yeah, I mean, I can tell this is clearly not the devs' intentions, and I bet they must be as pissed as we users. But it's just annoying to see this extension can't work on its own.
At least we've got that workaround, you're right about that.
Same thing happened here. I was updating my library with two games, and while the plugin was downloading the information for both games simultaneously, one got the data, but the other failed to download anything. I retried for the other game with multiple different search words, but to no avail
HLTB has changed the default value of the parameter they recently added.
I updated the Pull Request branch to reflect the change.
Here is the compiled extension with the updated fix:
https://github.com/johnnywnb/playnite-howlongtobeat-plugin/releases/tag/v3.6-hotfix%23243-2
It works again!
Thank you so much!
Thank you man, mad respect. 🫡
Thank you very much! This worked for me.
Yeah, same here :p
Working great now. Thanks a ton.
Does this still work? I had to (sadly) switch from arch kde to windows because kde HDR does not work for me. So I went back to playnite and tried this fix. Data still does not show up. Did something change again?
Does this still work? I had to (sadly) switch from arch kde to windows because kde HDR does not work for me. So I went back to playnite and tried this fix. Data still does not show up. Did something change again?
Yes, it has...
It's not working once again. Damnit.
HLTB has decided to change their Search API parameters again.
I updated the Pull Request branch to reflect the latest change.
Here is the compiled extension with the updated fix:
https://github.com/johnnywnb/playnite-howlongtobeat-plugin/releases/tag/v3.6-hotfix%23243-3
Well, I'm afraid they've probably changed something again, because the issue still persists after the hotfix.
You're our hero, bud, taking your time and effort to keep fixing something that stubbornly insists on breaking itself again.
Yes, the bug is back again. I'm so sorry. You guys fixed it, and again, they changed something... 😢
The new update has fixed the issue for me, thanks!
@johnnywnb Thanks a lot. I downloaded it and it works really well. Thank you @Lacro59 and @johnnywnb for all the hard work.
Oh yes! It works like a charm now. Thanks a lot again!!
I hate to be that guy again, but the issue has returned...
yeah its messed up again
HLTB has changed their API endpoint again. I've submitted a Pull Request for @Lacro59 to have a look at.
In the meantime, here is a hotfix release I compiled while waiting for the official release. Feel free to use it to test the fix on your side:
https://github.com/johnnywnb/playnite-howlongtobeat-plugin/releases/tag/v3.6.1-hotfix-search
Tested and works thank you. However the latest official version is 3.7.0 - so playnite wants to update to a broken one again. Maybe change the version number to 3.7.1?
@robbely thanks!
I've updated the release to use version 3.7.0 indeed:
https://github.com/johnnywnb/playnite-howlongtobeat-plugin/releases/tag/v3.7.0-hotfix-search
Lacro will decide on the next version name (3.7.1 or 3.8). In the meantime, it's best to keep this hotfix release version similar to the current one so that when the new official release gets available, Playnite picks it up naturally.
|
2025-04-01T06:37:09.277164
| 2024-12-16T18:44:05
|
2743124690
|
{
"authors": [
"Lubrsi",
"shlyakpavel"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1516",
"repo": "LadybirdBrowser/ladybird",
"url": "https://github.com/LadybirdBrowser/ladybird/issues/2941"
}
|
gharchive/issue
|
WhatsApp Web does not display the QR code for login
Summary
When I load WhatsApp Web in Ladybird, the QR code for scanning does not appear.
Operating system
macOS
Steps to reproduce
Open Ladybird
Change UA to Chrome
Navigate to https://web.whatsapp.com/
Observe
Expected behavior
QR code shows up
Actual behavior
QR code doesn't show up
URL for a reduced test case
https://web.whatsapp.com/
HTML/SVG/etc. source for a reduced test case
N/A
Log output and (if possible) backtrace
It's spammed with A LOT of
342811.755 WebContent(25408): FIXME: InlineFormattingContext::dimension_box_on_line got unexpected box in inline context:
342811.755 WebContent(25408): Label <label.x17fgdl5.x1f6kntn.xt0psk2> at (885.15625,482.84375) content-size 0x0 [0+0+0 0 0+0+0] [0+0+0 0 0+0+0] children: inline
TextNode <#text>
Screenshots or screen recordings
Build flags or config settings
No response
Contribute a patch?
[ ] I’ll contribute a patch for this myself.
Our Text/input/wpt-import/html/syntax/parsing/html5lib_tests11.html test produces the same output; it is possibly easier to reduce that
This seems to be caused by missing CacheStorage: https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage
Performing delete caches in Chrome Devtools as it's loading causes it to infinitely spin as well.
|
2025-04-01T06:37:09.278895
| 2024-11-29T20:39:34
|
2706125166
|
{
"authors": [
"alimpfard",
"rmg-x"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1517",
"repo": "LadybirdBrowser/ladybird",
"url": "https://github.com/LadybirdBrowser/ladybird/pull/2649"
}
|
gharchive/pull-request
|
LibDNS+RequestServer: Miscellaneous fixes and cleanup
See individual commits
@alimpfard I've removed the "g_dns_cache" again, but this time because we know that curl_slist_append copies the string. I don't believe we need to keep those around ourselves anymore.
That's odd, I remember seeing invalid data being read by curl when that hashmap wasn't present (that's actually why I added it to begin with) :thinking:
|
2025-04-01T06:37:09.280100
| 2024-06-05T21:56:47
|
2336898191
|
{
"authors": [
"BertalanD",
"negge"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1518",
"repo": "LadybirdWebBrowser/ladybird",
"url": "https://github.com/LadybirdWebBrowser/ladybird/pull/67"
}
|
gharchive/pull-request
|
Tests: Stop invoking UB in AK::NeverDestroyed's tests
Instead of attempting a stack use-after-free by reading an out-of-scope object's data member, let's keep a flag that checks if the destructor had been called in the outer scope.
Fixes #64
Can confirm this fixes #64.
|
2025-04-01T06:37:09.293173
| 2021-07-02T06:25:45
|
935460326
|
{
"authors": [
"Lagrang3"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1519",
"repo": "Lagrang3/gevolution-1.2",
"url": "https://github.com/Lagrang3/gevolution-1.2/pull/2"
}
|
gharchive/pull-request
|
2nd order finite differences to compute forces
We need to improve the numerical methods used to solve the field equations. This pull request includes 2nd order finite differences to solve the Poisson eq.
pw-008.pdf
Correcting the forces in k-space for the CIC interpolation/sampling does not solve the problem.
There seems to be something wrong with the 2nd order Finite Differences method proposed here.
Bug fixed: 2 ghost cells were necessary.
Notice that the small 2% difference with respect to gadget is already reduced with the use of a 2nd order FD.
If the CIC correction (p=4) is applied, the power spectrum approximates better to Gadget's
TreePM with the 2nd order finite differences:
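For concreteness, the 2nd-order central-difference stencil discussed above can be sketched on a periodic 1D grid (a minimal illustration only; the actual gevolution code operates on 3D fields with ghost cells):

```python
import math

def gradient_2nd_order(phi, h):
    # 2nd-order central difference on a periodic 1D grid:
    # dphi/dx ~ (phi[i+1] - phi[i-1]) / (2h), truncation error O(h^2).
    n = len(phi)
    return [(phi[(i + 1) % n] - phi[(i - 1) % n]) / (2.0 * h) for i in range(n)]

# Check the order of accuracy against a known derivative: d(sin)/dx = cos.
n = 64
h = 2.0 * math.pi / n
phi = [math.sin(i * h) for i in range(n)]
dphi = gradient_2nd_order(phi, h)
err = max(abs(d - math.cos(i * h)) for i, d in enumerate(dphi))
```

On this grid the maximum error scales as h^2, which is the improvement over a 1st-order scheme.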
|
2025-04-01T06:37:09.356259
| 2024-08-06T02:13:41
|
2449787711
|
{
"authors": [
"AIENGINE",
"ahmadbilaldev"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1520",
"repo": "LangbaseInc/langbase-examples",
"url": "https://github.com/LangbaseInc/langbase-examples/pull/31"
}
|
gharchive/pull-request
|
👌 IMPROVE: Expert proofreader update
👌 IMPROVE: Expert proofreader update
added image and env example file
LGTM.
|
2025-04-01T06:37:09.359471
| 2017-05-20T01:34:08
|
230121857
|
{
"authors": [
"ProbablePrime",
"coveralls"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1521",
"repo": "Lange/node-elgato-stream-deck",
"url": "https://github.com/Lange/node-elgato-stream-deck/pull/23"
}
|
gharchive/pull-request
|
tests: Add tests for just about everything resolves #18
I'll add some inline comments
Coverage increased (+46.8%) to 97.403% when pulling 44a93302bfd7b84860e056c81bc39c6614891ee2 on tests/MOORRRREEEE into daa90d604cf25be253302bc022797dd0bc7d8e04 on master.
Coverage increased (+49.4%) to 100.0% when pulling ddba5f93af39d6171dd6cd9c5dea4d235c4284f3 on tests/MOORRRREEEE into daa90d604cf25be253302bc022797dd0bc7d8e04 on master.
|
2025-04-01T06:37:09.368402
| 2016-10-01T04:21:53
|
180437119
|
{
"authors": [
"OwenMelbz",
"b8ne",
"tabacitu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1522",
"repo": "Laravel-Backpack/CRUD",
"url": "https://github.com/Laravel-Backpack/CRUD/issues/151"
}
|
gharchive/issue
|
Pass Model Id to updateCRUD
Heya, fairly sure this is a support request, because I know it will be simple, but I can't for the life of me find the answer.
I've set up a custom nested controller, so the parent model is saved as normal, but through the display multiple child models also exist (they can be created and updated).
I've already handled this through the controller; the only thing I can't seem to find is the parent model's id on update requests. Obviously I need this to reference all of the child models.
I will need to access the id not only on save/update, but also on render, so that I can fill the update view with existing data.
Any help would be appreciated.
Not sure if this is any help, or if I've understood correctly, but these are just some ideas that may be helpful to you.
When you submit the form, the request should contain the ID, so you can access the item's ID via $request->input('id'), as there is a hidden field on the form. Maybe you could use this?
If you have an instance of the parent model already, you could try using $model->getKey().
Additionally, you could set up a method/relationship on the child models which returns the parent, e.g. if it's stored in a column within the database.
Hmm... I'm not sure if I understand correctly. Telling us what you're trying to achieve (what columns and what fields) would help a lot.
I think your problem can be solved by having a "parent" field and column, just like it's done in the NewsCRUD documentation. Take a look at the Categories files - is this similar to what you need?
Cheers!
I just had a look at the NewsCRUD; I think it's a little different.
Currently I have it working by doing a query in the field blade template based off of $id, but thought there would be a better way to do this outside of the templates with the other business logic.
I don't think refactoring will increase performance, so there's no point trying to explain; I think I'll just confuse everyone :)
|
2025-04-01T06:37:09.378342
| 2018-09-24T10:54:54
|
363096622
|
{
"authors": [
"danielbidala",
"jsvini",
"pxpm",
"tabacitu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1523",
"repo": "Laravel-Backpack/CRUD",
"url": "https://github.com/Laravel-Backpack/CRUD/issues/1648"
}
|
gharchive/issue
|
Display multiselect field in column
I have a multiselect field in my controller for example with the options below:
'options' => ['S' => 'Small', 'M' => 'Medium', 'L' => 'Large']
The values (the array keys S, M, L) are stored in the DB as JSON. When I display the column as an array I get a comma-separated list, for example L, M.
My question is how to display the $values (Large, Medium) instead of the $keys. Thanks in advance!
You could use the model_function column type, in the function you can do the transformations you need.
Thanks @jsvini, I figured it out but my solution is not too elegant:

```php
public function getServiceAttributes() {
    $service_keys = $this->service;
    $service_labels = [1 => "pdf bizonyítvány", 2 => "nyomtatott bizonyítvány", 3 => "beszabályozás", 4 => "gravírozás", 5 => "minősítés"];
    $service = array_combine($service_keys, $service_keys);
    $service = array_intersect_key($service_labels, $service);
    return implode(', ', $service);
}
```
I think a select_from_array_multiple column type is a must have...
Maybe I can get the defined field options from my controller in the above model function?
Maybe you can use $this->crud->getCreateFields() or getUpdateFields() and work it from there.
Hope it helps you.
Br,
Pxpm
Hi @danielbidala ,
Help me understand your use case - it sounds like something we could improve in Backpack:
you're using a select_from_array field type, with 'allows_multiple' => true;
but neither select_from_array column nor an array column won't show the value properly;
Is this correct?
Thanks!
PS. If so, I think the most intuitive thing to do would be to make the select_from_array column support multiple values too (if {is array} echo {imploded string with enumeration}). I think it's reasonable to expect the select_from_array column to be the best way to show the select_from_array field.
HI @tabacitu ,
Thanks for your interest. I have a select2_from_array field defined as below:
$this->crud->addField([
'name' => 'service',
'type' => 'select2_from_array',
'label' => 'Szolgáltatások',
'options' => [1 => "pdf bizonyítvány", 2 => "nyomtatott bizonyítvány", 3 => "beszabályozás", 4 => "gravírozás", 5 => "minősítés" ],
'allows_multiple' => true
]);
(The field is casted to array)
If I use array column type I get the array $keys (1,2,3 etc) in list view. But I need a comma separated list of array $values (pdf bizonyítvány, nyomtatott bizonyítvány etc.)
With 'select_from_array' column type I get a pnotify error in list view: Error loading page. Please refresh the page. And my list table is totally empty...
Fixed! A composer update should fix it for you - select_from_array can now correctly display multiple values, if that’s the case.
Thank you for opening this issue @danielbidala !
|
2025-04-01T06:37:09.395368
| 2019-08-29T15:19:30
|
487024395
|
{
"authors": [
"tabacitu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1524",
"repo": "Laravel-Backpack/CRUD",
"url": "https://github.com/Laravel-Backpack/CRUD/pull/1997"
}
|
gharchive/pull-request
|
[4.0][Refactor][Ready] Settings API, clean up CrudPanel properties
Fixes #1942
using solution 2 - the $settings array uses the operation as a main identifier
Fixes #1941
using the last solution suggested: $crud->set('list.view', 'new_view') and $crud->get('list.view')
This basically creates a key-value storage, and a simple API for working with operation settings/features/whatchamacallit. The key's format is operation.key_name. So the API is:
$crud->get('list.detailsView'); // get the value of that key
$crud->set('list.detailsView', 'smth'); // set the value of that key
$crud->has('list.detailsView'); // checks if a value has been set for that key
We could (and should) overwrite all PanelTraits to use this Settings API, instead of storing stuff directly on the CrudPanel object, as properties. That will clean up the CrudPanel object (#1942):
[x] Access
[x] AutoFocus
[x] AutoSet
[x] Buttons
[x] Columns
[x] Create
[x] Delete
[x] Errors
[x] FakeColumns
[x] FakeFields
[x] Fields
[x] Filters
[x] HeadingsAndTitles
[x] Macroable
[x] Operations - $operation property is ok to keep on the CrudPanel object;
[x] Query - $query and $request are ok to keep on the CrudPanel object;
[x] Read
[x] RequiredFields
[x] SaveActions
[x] Search
[x] Settings - $settings property is ok to keep on the CrudPanel object;
[x] Tabs
[x] Update - $entry is ok to keep on the CrudPanel object;
[x] Views
[x] ViewsAndRestoresRevisions
Merged into the v4 branch. Needs docs changes.
I feel like I should add more details about this change, and there's no better place to put it than here:
The Problem
When first creating v3 (in 2015), we had no idea we were going to add so many features, operations, fields, options, access etc. That all came after years and years of adding new stuff. So the $crud object became bloated - each new feature added its own property, and we got to a bloated mess. One that worked - and worked well. But a mess nonetheless :-)
Additionally, this way of doing things (each feature having a property on the CrudPanel object), did NOT allow people to add their own features. We made the CrudPanel object Macroable, but that only allows you to add methods, not properties. So in v3 people could create their own operations, and add methods on CrudPanel, but not quite do everything a default operation can do. Custom operations were limited in possibilities.
The Solution
Instead of storing each feature inside a property on the CrudPanel object, we made one property to rule them all - CrudPanel::settings, a plain and simple array where operations can add anything they want. We decided to store EVERYTHING inside settings, and the convention is to store it by operation, using dot notation:
$this->crud->buttons is now $this->crud->settings['list.buttons'];
$this->crud->create_fields is now $this->crud->settings['create.fields'];
$this->crud->update_fields is now $this->crud->settings['update.fields'];
$this->crud->columns is now $this->crud->settings['list.columns'] for the List operation and $this->crud->settings['show.columns'] for the Show operation;
Here's a before and after shot of the $crudPanel object:
Not only is the CrudPanel object much MUCH cleaner, custom operations are now first-class citizens. There's no distinction between a default operation that comes with Backpack, and an operation a user creates. They both have access to do the same things (add methods through macros, add settings using the Settings API). This will help us create new Operations as first-party and third-party packages, that you can just slap on top of a CrudPanel. And it also allows anybody to create a custom Operation, no matter how complicated.
It IS a breaking change and it ISN'T a breaking change
This property cleanup was a HUGE change, that required changing probably 80% of the CrudPanel methods. But we managed to achieve it without breaking changes to the general API. MOST people use methods to manipulate the CrudPanel object for operations - that's the only thing we documented. They don't care how that information is stored on the CrudPanel object. And they won't be affected - it's a non-breaking change for them.
But some people, in advanced use cases, might have used those properties directly. Things like $this->crud->columns, $this->crud->create_fields, $this->crud->update_fields, $this->crud->buttons, $this->crud->access. It was our intention to leave those public so that people can easily manipulate them, but yes, now that we've eliminated them, they will have to use the Settings API too.
The Settings API
To interact with $this->crud->settings, use the Settings getters and setters. You can do:
$this->crud->getOperationSetting('buttons') and it'll get a setting for the current operation;
$this->crud->setOperationSetting('buttons', $value) and it'll set something for the current operation;
$this->crud->hasOperationSetting('buttons') and it'll check that setting exists;
If you want to add/edit/check settings for a specific operation, you can pass the operation name as a last parameter:
$this->crud->getOperationSetting('buttons', 'list')
$this->crud->setOperationSetting('buttons', $value, 'list')
$this->crud->hasOperationSetting('buttons', 'list')
You can use the Settings API to change a default setting for an operation, or add a setting that did not exist before.
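The dot-notation key-value store behind this API is language-agnostic. Below is a minimal Python sketch of the idea (names are illustrative only; this is not Backpack's actual PHP implementation):

```python
class SettingsStore:
    """Minimal dot-notation key-value store, similar in spirit to
    CrudPanel::settings, where keys look like 'operation.key_name'."""

    def __init__(self):
        self._settings = {}

    def set(self, key, value):
        # 'list.detailsView' is stored flat under its full dotted key
        self._settings[key] = value

    def get(self, key, default=None):
        return self._settings.get(key, default)

    def has(self, key):
        return key in self._settings

    def for_operation(self, operation):
        # Collect every setting belonging to one operation, key prefix stripped
        prefix = operation + "."
        return {k[len(prefix):]: v
                for k, v in self._settings.items()
                if k.startswith(prefix)}
```

A store like this keeps the panel object itself clean: each operation (default or custom) reads and writes only its own `operation.*` namespace.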
|
2025-04-01T06:37:09.484669
| 2014-07-18T19:06:22
|
38203554
|
{
"authors": [
"binary1248",
"danharibo"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1526",
"repo": "LaurentGomila/SFML",
"url": "https://github.com/LaurentGomila/SFML/pull/665"
}
|
gharchive/pull-request
|
Use WM_STATE_FULLSCREEN for fullscreen under X11
This adds a more functional full screen mode for X11 that doesn't block window manager shortcuts like alt-tab because it doesn't grab the input.
However, it is not finished. One downside of this approach is that changing the video mode from the desktop seems to introduce some graphical issues with compositing window managers. Before it's merged, that will either have to be fixed, or we will have to do what games that use SDL seem to do and just upscale to the native desktop resolution (compare with how Half Life 2, Super Meat Boy and Battleblock Theatre all behave under Linux).
This became part of #825, as such I'm marking this PR as superseded.
|
2025-04-01T06:37:09.490035
| 2020-10-01T07:18:28
|
712567203
|
{
"authors": [
"jerry73204"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1527",
"repo": "LaurentMazare/tch-rs",
"url": "https://github.com/LaurentMazare/tch-rs/issues/253"
}
|
gharchive/issue
|
Hunt SIGBUS from Tensor::drop
This is a weird bug and I still cannot tell the reason. The program runs in a forward-backward loop and gets killed by SIGBUS in less than five minutes (sometimes immediately). It only happens on a specific machine. I opened rust-gdb to catch the SIGBUS; it says the SIGBUS was raised from Tensor::drop, and the line number points to the end of a function. I would bet it's a CUDA memory error of some kind, but I couldn't gather further information. Let me paste the details to see if anyone has thoughts.
OS: CentOS 7
CPU: Intel Xeon Processor (Skylake)
GPU: Tesla P100-PCIE-16GB
Python 3.6 and PyTorch 1.6.0 installed by pip3 install --user
CUDA 10.2 and CUDNN 7
GDB points the error happens here.
The GDB backtrace.
#0 0x00007fffab4b1cf0 in c10::cuda::CUDACachingAllocator::DeviceCachingAllocator::free_block(c10::cuda::CUDACachingAllocator::(anonymous namespace)::Block*) ()
from /home/centos/.local/lib/python3.6/site-packages/torch/lib/libc10_cuda.so
#1 0x00007fffab4b38bb in c10::cuda::CUDACachingAllocator::raw_delete(void*) () from /home/centos/.local/lib/python3.6/site-packages/torch/lib/libc10_cuda.so
#2 0x00007fffac8b59cd in c10::TensorImpl::release_resources() () from /home/centos/.local/lib/python3.6/site-packages/torch/lib/libc10.so
#3 0x000055555583bbdb in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_ (this=0x7ffbf5313280) at /opt/rh/devtoolset-9/root/usr/include/c++/9/ext/atomicity.h:69
#4 c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::~intrusive_ptr (this=0x7ffbf5313280, __in_chrg=<optimized out>)
at /home/centos/.local/lib/python3.6/site-packages/torch/include/c10/util/intrusive_ptr.h:249
#5 at::Tensor::~Tensor (this=0x7ffbf5313280, __in_chrg=<optimized out>) at /home/centos/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:86
#6 torch::autograd::AutogradMeta::~AutogradMeta (this=0x7ffbf5313270, __in_chrg=<optimized out>)
at /home/centos/.local/lib/python3.6/site-packages/torch/include/torch/csrc/autograd/variable.h:189
#7 torch::autograd::AutogradMeta::~AutogradMeta (this=0x7ffbf5313270, __in_chrg=<optimized out>)
at /home/centos/.local/lib/python3.6/site-packages/torch/include/torch/csrc/autograd/variable.h:189
#8 0x00007fffac8b59a0 in c10::TensorImpl::release_resources() () from /home/centos/.local/lib/python3.6/site-packages/torch/lib/libc10.so
#9 0x000055555584400a in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_ (this=0x7ffbf5313330) at /opt/rh/devtoolset-9/root/usr/include/c++/9/bits/atomic_base.h:326
#10 0x00005555558440d7 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::~intrusive_ptr (this=0x7ffbf5313330, __in_chrg=<optimized out>)
at /home/centos/.local/lib/python3.6/site-packages/torch/include/c10/util/intrusive_ptr.h:248
#11 at::Tensor::~Tensor (this=0x7ffbf5313330, __in_chrg=<optimized out>) at /home/centos/.local/lib/python3.6/site-packages/torch/include/ATen/core/TensorBody.h:86
#12 at_free (t=0x7ffbf5313330) at libtch/torch_api.cpp:374
#13 0x000055555582bcbe in <tch::wrappers::tensor::Tensor as core::ops::drop::Drop>::drop (self=<optimized out>)
at /home/centos/.cargo/registry/src/github.com-1ecc6299db9ec823/tch-0.2.0/src/wrappers/tensor.rs:470
#14 0x000055555571dd0f in core::ptr::drop_in_place () at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:175
#15 core::ptr::drop_in_place () at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:175
#16 <alloc::vec::Vec<T> as core::ops::drop::Drop>::drop (self=0x7fff4d7ed5a0)
at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/vec.rs:2637
#17 core::ptr::drop_in_place () at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:175
#18 core::ptr::drop_in_place () at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:175
#19 core::ptr::drop_in_place () at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:175
#20 core::ptr::drop_in_place () at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:175
#21 alloc::sync::Arc<T>::drop_slow (self=0x7fffa837d2d8) at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/sync.rs:934
#22 0x00005555556432cc in train::train_worker (config=..., input_channels=<optimized out>, num_classes=<optimized out>, training_rx=...) at src/train/main.rs:134
#23 0x00005555556f1499 in train::main::main::{{closure}}::{{closure}} () at src/train/main.rs:61
#24 blocking::unblock::{{closure}}::{{closure}} () at /home/centos/.cargo/registry/src/github.com-1ecc6299db9ec823/blocking-1.0.0/src/lib.rs:303
#25 <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll (self=..., cx=0x7fffa837d6b0)
at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
#26 blocking::Executor::spawn::{{closure}} () at /home/centos/.cargo/registry/src/github.com-1ecc6299db9ec823/blocking-1.0.0/src/lib.rs:187
#27 <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll (self=..., cx=0x7fffa837d6b0)
at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80
#28 0x00005555558deff4 in blocking::Runnable::run (self=...) at /home/centos/.cargo/registry/src/github.com-1ecc6299db9ec823/blocking-1.0.0/src/lib.rs:133
#29 0x00005555558df226 in blocking::Executor::main_loop::{{closure}} () at /home/centos/.cargo/registry/src/github.com-1ecc6299db9ec823/blocking-1.0.0/src/lib.rs:215
#30 std::panicking::try::do_call (data=<optimized out>) at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:381
#31 std::panicking::try (f=...) at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:345
#32 std::panic::catch_unwind (f=...) at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panic.rs:382
#33 blocking::Executor::main_loop (self=0x555555b40050 <blocking::EXECUTOR+8>) at /home/centos/.cargo/registry/src/github.com-1ecc6299db9ec823/blocking-1.0.0/src/lib.rs:215
#34 0x00005555558dfa36 in blocking::Executor::grow_pool::{{closure}} () at /home/centos/.cargo/registry/src/github.com-1ecc6299db9ec823/blocking-1.0.0/src/lib.rs:267
#35 std::sys_common::backtrace::__rust_begin_short_backtrace (f=...)
at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/sys_common/backtrace.rs:137
#36 0x00005555558e044a in std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}} ()
at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/mod.rs:465
#37 <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>)
at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panic.rs:308
#38 std::panicking::try::do_call (data=<optimized out>) at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:381
#39 std::panicking::try (f=...) at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:345
#40 std::panic::catch_unwind (f=...) at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panic.rs:382
#41 std::thread::Builder::spawn_unchecked::{{closure}} () at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/mod.rs:464
#42 core::ops::function::FnOnce::call_once{{vtable-shim}} () at /home/centos/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:227
#43 0x0000555555939f7a in <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once () at /rustc/fc2daaae610b5515438b551a2f3706196a997f35/library/alloc/src/boxed.rs:1042
#44 <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once () at /rustc/fc2daaae610b5515438b551a2f3706196a997f35/library/alloc/src/boxed.rs:1042
#45 std::sys::unix::thread::Thread::new::thread_start () at library/std/src/sys/unix/thread.rs:87
#46 0x00007fffab956ea5 in start_thread (arg=0x7fffa8384700) at pthread_create.c:307
#47 0x00007fffabf6b8dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
It was solved by restricting GPU usage by reducing the batch size. It might be related to power issues.
Not sure whether it stems from tch, so I'm closing this issue.
|
2025-04-01T06:37:09.496636
| 2024-05-19T01:11:08
|
2304367592
|
{
"authors": [
"KibblesTheKitten",
"MeynethG"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1528",
"repo": "LavaGang/MelonLoader",
"url": "https://github.com/LavaGang/MelonLoader/issues/650"
}
|
gharchive/issue
|
[Bug]: Failed to initialize MelonLoader: Failed to load library
All of the following criteria must be met
[X] All Requirements must be installed.
[X] Changed the title so that it doesn't just says "[Bug]: "
[X] I have searched the GitHub issues for my bug, even in the closed issues.
All of the following are optional to answer
[X] Tried reinstalling the Game.
[X] Tried reinstalling MelonLoader.
[X] Tried restarting PC.
[ ] Was able to see the Start Screen.
Describe the issue.
So I installed MelonLoader for SRMP2 (a multiplayer mod for Slime Rancher 2), MelonLoader 0.6.1, and it says that. I've looked everywhere for a fix but nothing works. I did the same exact steps on my girlfriend's computer and it worked; the only difference (I believe) is that her computer is running Windows 11 while mine is still on 10. I tried reinstalling the game, downloading it from another website, and reinstalling MelonLoader multiple times, same for all the requirements. I even tried uploading my girlfriend's files to Drive to download them on my computer, but nothing works. Does anyone know how I could fix this?
Did you attach your log file?
No, I could not find a log file at {Game_Directory}\MelonLoader\Latest.log
We can't provide help with cracked games, and the fact that it was downloaded from a different website is what makes me think it is one.
|
2025-04-01T06:37:09.500509
| 2023-08-23T18:51:39
|
1863848256
|
{
"authors": [
"boneskull",
"naugtur"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1529",
"repo": "LavaMoat/LavaMoat",
"url": "https://github.com/LavaMoat/LavaMoat/pull/677"
}
|
gharchive/pull-request
|
chore(ci): do not use npm ci
npm ci rimrafs node_modules then installs everything in package-lock.json. However, it is highly likely that the result of running npm install in a development environment will result in a very different tree, which updates package-lock.json. If one of the dependencies installed ships a poorly-built shrinkwrap file, the updated package-lock.json will contain potentially incompatible software from the shrinkwrap. This will be manifested upon the next npm ci because it strictly installs what's in package-lock.json.
TL;DR: npm ci is fast, but using it exclusively in CI can be dangerous.
This may or may not be a bug in npm, but the better solution is for projects to not use shrinkwrap files.
This is surprising to me. I was more leaning towards using npm ci exclusively and only running install when adding new dependencies or manually updating some.
@legobeat https://github.com/DavidAnson/markdownlint-cli2/issues/198
...but I'll close this, given @naugtur's comment. Things can get very ugly if a package starts shipping a shrinkwrap file and a dev doesn't notice; but I agree it's likely a rare occurrence. Socket will catch this sort of thing, but only if you click thru to the details about the package change.
|
2025-04-01T06:37:09.503548
| 2022-02-05T01:36:55
|
1124767684
|
{
"authors": [
"3hashsu",
"lawnchairci"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1530",
"repo": "LawnchairLauncher/lawnicons",
"url": "https://github.com/LawnchairLauncher/lawnicons/issues/337"
}
|
gharchive/issue
|
Icon Request
App Name: XPlayer
Package Name: video.player.videoplayer
Install: 50M+
Hello! Weʼre switching to a new system for icon requests in order to be able to better manage them. From now on, icon requests can be submitted here: https://forms.gle/Fx8vZAiWdW1Tyjo57. If your icon request hasnʼt yet been fulfilled, it has been copied to our new icon request database, meaning no action is required. Consequently, this issue will be closed.
|
2025-04-01T06:37:09.511900
| 2024-03-07T14:59:40
|
2174064775
|
{
"authors": [
"Pancham1603",
"vera-bernhard"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1531",
"repo": "Layout-Parser/layout-parser",
"url": "https://github.com/Layout-Parser/layout-parser/pull/208"
}
|
gharchive/pull-request
|
getsize() deprecated in Pillow>=10.0.1
getsize() is deprecated and replaced with getbbox() in Pillow>=10.0.1
I replaced it in visualization.py such that it works with both newer and older Pillow versions.
There are other instances that also need to be adapted, but I have refrained from adapting them everywhere due to a lack of testing possibilities.
getbbox() doesn't return the width and height; it gives the coordinates. You will have to use the left/right/top/bottom values to get the width and height.
|
2025-04-01T06:37:09.518092
| 2023-04-25T17:26:36
|
1683567710
|
{
"authors": [
"ricksanchez"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1532",
"repo": "LazyBallsZealots/Results.Immutable",
"url": "https://github.com/LazyBallsZealots/Results.Immutable/issues/12"
}
|
gharchive/issue
|
Remove yarn pnp
Remove yarn pnp and use node-modules or pnpm to save time while cloning the repo.
We can use the cache action to speed up the workflows
The pre-commit hooks were never running before unless Husky was installed manually.
|
2025-04-01T06:37:09.523464
| 2015-12-05T14:38:11
|
120560776
|
{
"authors": [
"LeaVerou",
"lipis",
"seebaermichi",
"verpixelt"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1533",
"repo": "LeaVerou/bliss",
"url": "https://github.com/LeaVerou/bliss/issues/6"
}
|
gharchive/issue
|
Something can't fit on the page and there is a horizontal scrolling
But I'm too lazy to figure out what's wrong.. :)
Huh, odd. I can’t reproduce it. Which browser?
Yes, sorry: Chrome. But I'm on the dev channel, so you can ignore it, I guess: Version 48.0.2564.22 dev.
Can you run console.log(innerWidth, innerHeight) in the console and let me know what it says?
I noticed the same issue in Firefox 42 on Mac Yosemite, and on Windows 10 with the latest Edge and Firefox. It obviously depends on the browser window size.
What I found out so far: it might have something to do with the code within pre and code boxes. In Firefox there is much more space for indentation than in Chrome, and it tries to fit the code within the box.
While debugging I found another issue in Chrome on Windows 10 and 7: some code boxes also show horizontal scrollbars.
I guess the reason is the same: the code doesn't fit into the boxes, and the code has overflow: auto.
I tried to fix it, but I couldn't find a solution without creating other issues.
I've opened a PR with a possible solution, tested in FF 42 on OS X 10.11.1 https://github.com/LeaVerou/bliss/pull/28.
|
2025-04-01T06:37:09.535152
| 2017-04-19T18:13:08
|
222824267
|
{
"authors": [
"ddproxy",
"germanjoey",
"jziggas"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1534",
"repo": "Leaflet/Leaflet.draw",
"url": "https://github.com/Leaflet/Leaflet.draw/issues/715"
}
|
gharchive/issue
|
How to approach limiting the size of a rectangle?
I'm wondering if anyone else has tried this before, as Googling didn't show much. My project uses the rectangle polygon from Leaflet.draw as a bounding box to search within an area of interest on a map. I'd like to be able to place a limit on how large a rectangle can become (whether by specifying a real unit of area such as 1000 square miles, or by hooking into something and calculating the area myself). But I'm not even sure how to approach this, or if it is even possible to stop a polygon from reaching a certain size while being drawn. Any thoughts?
https://github.com/Leaflet/Leaflet.draw/pull/651 basically does this. I recommend applying it yourself, or at the portions related to bounds. With it, just set map's maxbounds after hooking into drawstart and L.Draw will force all drawn or edited shapes to remain in that box.
If you'd like to enforce other sorts of size limits, such as by subarea, you might need to modify L.Draw.Rectangle and L.Edit.Rectangle yourself. It would be a pretty similar modification though.
... just set map's maxbounds ...
I'm not trying to restrict what the user can view inside the map though, just the size of the bounding box within the map. Unless I'm misunderstanding something?
You need a listener while drawing the rectangle to check its size on each change and perform operations on the geometry while drawing.
It would be easier if you only have one drawn geometry at a time, but the events exist. https://leaflet.github.io/Leaflet.draw/docs/leaflet-draw-latest.html#l-draw-event
Thanks. Yeah we only allow one rectangle (and no other polygons) to exist on the map at any time. Does an event actually fire as the rectangle is being sized or just when drawing has started? What would stop the rectangle from being drawn beyond a certain size?
During edit mode, the event fires continuously as a rectangle is being resized. However, during draw mode, the event only fires once the rectangle has finished drawing. I think ddproxy is suggesting that you use an event to check the size/area of your rectangle after it has been created, and then resize it to your constraints.
What I was suggesting was that you hook onto drawstart and set maxbounds, and then remove it on drawstop. However, that would only work if you were drawing squares I guess. That's why I think what you really want to do is do what I had done for maxbounds in #651, but instead for an arbitrary bounds that is calculated on the fly based on the shape's area.
I guess I'm not understanding when you are saying to use maxBounds. My understanding of maxBounds is the one in the Leaflet docs that is restricting the view of the map:
When this option is set, the map restricts the view to the given geographical bounds, bouncing the user back if the user tries to pan outside the view. To set the restriction dynamically, use setMaxBounds method.
Is this what you're talking about or something different?
That's right. If you look at #651, there's code there that will also force any drawn shapes to respect that boundary. This is similar to what you want. My suggestion is that you modify that code to use your own criteria (e.g. require a bounding box that has an upper limit on area, or whatever you want) instead of simply using the map's maxbounds.
Ah okay. I will have to look into the code a bit then. Thanks for clarifying.
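Whichever event you hook into, the actual size check is plain geometry: compute the approximate area of the rectangle's lat/lng bounds and compare it to a limit. A hedged sketch of that logic follows (written in Python for clarity; in practice you would port it into the JavaScript event handler, or use a geodesic-area helper from a Leaflet plugin):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; good enough for a size limit

def bbox_area_km2(south, west, north, east):
    """Approximate area of a lat/lng bounding box on a sphere.

    Uses the spherical band formula: R^2 * |sin(lat2) - sin(lat1)| * dlng.
    """
    lat1, lat2 = math.radians(south), math.radians(north)
    dlng = math.radians(east - west)
    return EARTH_RADIUS_KM ** 2 * abs(math.sin(lat2) - math.sin(lat1)) * dlng

def exceeds_limit(bounds, max_area_km2):
    """bounds is (south, west, north, east) in degrees."""
    south, west, north, east = bounds
    return bbox_area_km2(south, west, north, east) > max_area_km2
```

In a draw/edit event handler you would read `layer.getBounds()`, run the equivalent of `exceeds_limit`, and either revert the edit or clamp the corner being dragged.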
|
2025-04-01T06:37:09.538059
| 2016-08-09T10:27:44
|
170133022
|
{
"authors": [
"EricSch",
"IvanSanchez"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1535",
"repo": "Leaflet/Leaflet",
"url": "https://github.com/Leaflet/Leaflet/issues/4809"
}
|
gharchive/issue
|
How to init map correctly?
I have a map which is displayed or hidden depending on a button click.
I init the map while it is hidden, add a layer, add markers, and call fitBounds() for the markers.
My problem is that when the map is hidden, fitBounds doesn't work. After the switch, the map is shown as fully zoomed out. When I call fitBounds again, the map is shown correctly.
How do I know when the map is ready? I tried mapReady, but it is called even while the map is hidden.
Is there another way?
Thanks
Hi, great to hear that you find Leaflet useful!
However, this issue tracker is used for reporting bugs and discussing new features. For questions on using Leaflet, please use gis.stackexchange.com or stackoverflow.
|
2025-04-01T06:37:09.571216
| 2020-03-17T14:34:40
|
583058456
|
{
"authors": [
"Falke-Design",
"johnd0e",
"jonkoops",
"mourner"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1536",
"repo": "Leaflet/Leaflet",
"url": "https://github.com/Leaflet/Leaflet/issues/7032"
}
|
gharchive/issue
|
Manage stale branches
Leaflet repo currently contains 96 branches, most of them are stale.
So I propose:
[x] Remove merged
[ ] Remove closed
[ ] Where related PR absent, open one, with some description.
Perhaps except - 0.7, which is reserved for 0.7.*.
Thoughts?
Merged:
#6748 plugin-maidenhead
Updated 8 months ago by @IvanSanchez
#5859 fix-circle-bounds-calc
Updated 2 years ago by Per Liedman
#5690 fix-paused-drag-inertia
Updated 3 years ago by Per Liedman
#5689 warn-adding-non-layers
Updated 3 years ago by Per Liedman
#5667 isflat
Updated 3 years ago by @yohanboniface
#5577 docsUpdateWhenIdle
Updated 3 years ago by perliedman
#5581 1.1.0-blog-post
Updated 3 years ago by Per Liedman
#5580 default-icon-regexp
Updated 3 years ago by IvanSanchez
#5574 divIconContent
Updated 3 years ago by @ghybs
#5555 imageoverlay-classname
Updated 3 years ago by @mourner
#5572 quick-start-map-id
Updated 3 years ago by IvanSanchez
#5507 fix-marker-enter-key
Updated 3 years ago by @perliedman
#4883 bubbly-option
Updated 3 years ago by IvanSanchez
#5498 fix-canvas-empty-polyline
Updated 3 years ago by perliedman
#5476 no-gap-hooks
Updated 3 years ago by IvanSanchez
#5480 scroll-pixelratio
Updated 3 years ago by IvanSanchez
#5465 only-rearrange-dom-when-needed
Updated 3 years ago by Per Liedman
#5404 delayed-paths
Updated 3 years ago by IvanSanchez
#5007 tutorial-zoom-delta
Updated 3 years ago by perliedman
#5378 stop-map-on-drag-start
Updated 3 years ago by IvanSanchez
#5331 keyboard-escape
Updated 3 years ago by IvanSanchez
#5318 disable-click-prop-in-zoom-control
Updated 3 years ago by Per Liedman
#5303 tap-highlight-color
Updated 3 years ago by IvanSanchez
#5157 zero-bounds
Updated 3 years ago by IvanSanchez
#5280 layers-control-scroll-2
Updated 3 years ago by IvanSanchez
#5274 release-v1.0.3
Updated 3 years ago by IvanSanchez
#5166 wraplatlngbounds
Updated 3 years ago by IvanSanchez
#5177 infinite-tile-errors
Updated 3 years ago by IvanSanchez
#5100 canvas-click-single-layer
Updated 3 years ago by perliedman
#4711 control-layers-comparelayers
Updated 3 years ago by Per Liedman
#4916 layers-max-zoom
Updated 3 years ago by Per Liedman
#5049 as-feature-collection
Updated 3 years ago by Per Liedman
#5070 attribution-on-all-layers
Updated 3 years ago by Per Liedman
#5054 path-events-refactor
Updated 3 years ago by IvanSanchez
#5021 dup-if
Updated 4 years ago by yohanboniface
#4555 fix-removing-all-listeners
Updated 4 years ago by perliedman
#4543 grid-layer-docs
Updated 4 years ago by @nathancahill
#4531 wms-docs
Updated 4 years ago by IvanSanchez
#4520 insecure-geolocation
Updated 4 years ago by IvanSanchez
#4513 control-extension-docs
Updated 4 years ago by nathancahill
#4512 correct-geojson-docs
Updated 4 years ago by nathancahill
Closed:
#6691 cssIconDefault
Updated 2 years ago by ghybs
#5290 pixelgrid
Updated 3 years ago by IvanSanchez
#5279 layers-control-scroll
Updated 3 years ago by IvanSanchez
#5249 project-update-paths
Updated 3 years ago by perliedman
#5172 canvas-reset-2
Updated 3 years ago by IvanSanchez
#5171 canvas-reset
Updated 3 years ago by IvanSanchez
#5002 rollup3
Updated 4 years ago by IvanSanchez
#4965 tutorials-cleanup
Updated 4 years ago by IvanSanchez
#4947 v1-blog-the-wait-is-over
Updated 4 years ago by IvanSanchez
#4894 blog-foss4g
Updated 4 years ago by IvanSanchez
#4356 gh-pages-extending
Updated 4 years ago by IvanSanchez
#4709 event-refactor-round-2
Updated 4 years ago by Per Liedman
#4710 control-layers-eachlayer
Updated 4 years ago by yohanboniface
#4649 tile-prune-lru
Updated 4 years ago by IvanSanchez
#4614 gridlayer-margin
Updated 4 years ago by IvanSanchez
#4516 type-definitions
Updated 4 years ago by nathancahill
#4507 mutation-observer
Updated 4 years ago by IvanSanchez
#6520 0.7
Updated 4 years ago by Christopher Green
#4197 slimerjs
Updated 4 years ago by IvanSanchez
#4074 move-latlng-equals-to-crs
Updated 4 years ago by perliedman
#3809 mobile-setview
Updated 5 years ago by yohanboniface
#3598 geojson-round-trip-test
Updated 5 years ago by @patrickarlt
#3581 scroll-prevent
Updated 5 years ago by IvanSanchez
#3528 domevent-once
Updated 5 years ago by yohanboniface
Without related PR:
propagate-marker-drag
Updated 3 months ago by IvanSanchez
pr/6021
Updated 2 years ago by IvanSanchez
fix-circle-while-zooming
Updated 3 years ago by Per Liedman
canvas-improvements
Updated 3 years ago by Per Liedman
ffcfcc1
Updated 3 years ago by IvanSanchez
rotate
Updated 4 years ago by IvanSanchez
rollup
Updated 4 years ago by IvanSanchez
split-css-autoprefixer
Updated 4 years ago by IvanSanchez
gh-pages-custom-crs
Updated 4 years ago by IvanSanchez
drag-cancel-click
Updated 5 years ago by jfirebaugh
cache-mouse-pos
Updated 6 years ago by mourner
We should definitely remove merged branches. I actually just turned on the option to delete them automatically on merge, which will save us from those from now on.
For branches with closed PRs, I think they need a review — e.g. if the changes are substantial, sometimes they contain interesting attempts that may be useful in the future. If they are trivial, they should be deleted. If they are substantial but super old or addressing a part of the code that was significantly changed since then, they should be deleted too.
Did a pass and removed all merged branches and some unmerged ones with closed PRs — down to 46, but this definitely needs more cleanup.
For branches with closed PRs, I think they need a review — e.g. if the changes are substantial, sometimes they contain interesting attempts that may be useful in the future.
If we find such case - then related PR should be reopened, right?
(Or create new PR with some description)
@johnd0e not necessarily. PRs should only be opened if there's an intention to merge. If there's just a branch with some code that may be useful in future, we can keep it without a PR.
If not in a PR, then where should we discuss that branch?
When there is no intention to merge - PR can be opened in "draft" mode.
I haven't done any investigation, but this might help our cause: https://github.com/actions/stale
I think this is exactly what was described here, but I personally still think it's wrong to close issues automatically without interaction from a maintainer. There are around 430 issues, and they can't get solved if the PRs don't get merged or if a decision is needed and no one makes it. A bot would close them all, but the issues would still be valid. I think this makes sense once we have reduced the number of issues and PRs manually and Leaflet is better maintained (which is currently good, but we don't know what happens in 2 months).
|
2025-04-01T06:37:09.579816
| 2022-06-18T15:52:07
|
1275821861
|
{
"authors": [
"Falke-Design",
"bigomega",
"mourner",
"muditlambda"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1537",
"repo": "Leaflet/Leaflet",
"url": "https://github.com/Leaflet/Leaflet/issues/8297"
}
|
gharchive/issue
|
Access to the open source program of Browserstack.com
I registered us for the open source program of Browserstack and I hope we get a positive answer.
The reason for the registration is issues #7403 and #3575. We get different results in different browsers.
https://www.browserstack.com/open-source
FYI: @mourner @IvanSanchez @Malvoz @jonkoops
@mourner do you know something about an existing access to Browserstack?
@Falke-Design nope, no idea. I never used Browserstack myself.
@Falke-Design the response is very confusing. Let me ping them on Twitter, maybe they can elaborate...
If you wanna explore LambdaTest.com
feel free to ping me back 😁
or drop a mail at <EMAIL_ADDRESS>: https://www.lambdatest.com/open-source
@muditlambda the first bug in my code has already been found via your test suite 😄 I think LambdaTest has everything we need and is worth a try. Can you get us into the open-source sponsoring?
We can discuss more over my mail: <EMAIL_ADDRESS>
Browserstack responsed too:
@Falke-Design The confusion was primarily because of another registration and sponsorship on the name of Leaflet from kustalex5-at-gmail.com on 2022-04-23. Currently, we do not verify the open-source project ownership of the developer requesting access. We believe anyone can, and should-be-able-to contribute to any open-source project, so we trust developer and give them access.
Thanks for bringing this to our attention regardless. Glad we're able to resolve this.
@bigomega thank you for solving this!
@mourner Yes, dealing with this abuse of open-source support is an ongoing battle. The price is usually paid by genuine open source devs. TravisCI stopped their support for OSS because of crypto miners.
|
2025-04-01T06:37:09.583137
| 2021-03-20T18:22:53
|
836879846
|
{
"authors": [
"akrigline"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1538",
"repo": "League-of-Foundry-Developers/foundryvtt-devMode",
"url": "https://github.com/League-of-Foundry-Developers/foundryvtt-devMode/issues/5"
}
|
gharchive/issue
|
Reset Setting to Default
Is your feature request related to a problem? Please describe.
It is frustrating to try testing setting defaults because as soon as the setting exists, that default is not applicable.
Describe the solution you'd like
A button which allows a given setting to be 're-set' to the default on the module settings page.
Describe alternatives you've considered
Something which allows a key to be entered or even autocompleted that would let this work for non-config-able settings.
this lets you set a setting to its default:
game.settings.set('gm-screen', 'rows', game.settings.settings.get("gm-screen.rows").default)
|
2025-04-01T06:37:09.584311
| 2022-06-21T06:13:40
|
1277942793
|
{
"authors": [
"LeagueRaINi",
"zoulztealer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1539",
"repo": "LeagueRaINi/smuc_rs",
"url": "https://github.com/LeagueRaINi/smuc_rs/issues/3"
}
|
gharchive/issue
|
exe download
is there any link yet to download the compiled exe?
ty
https://github.com/LeagueRaINi/smuc_rs/releases/tag/v0.1.0
|
2025-04-01T06:37:09.595708
| 2017-10-04T22:00:55
|
262944584
|
{
"authors": [
"LeanplumBuild",
"alexisoyama"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1540",
"repo": "Leanplum/Leanplum-iOS-SDK",
"url": "https://github.com/Leanplum/Leanplum-iOS-SDK/pull/88"
}
|
gharchive/pull-request
|
fix(message): Fix openURL freeze on iOS10
Use the newer openURL for iOS10+
Can one of the admins verify this patch?
|
2025-04-01T06:37:09.631558
| 2022-06-10T17:01:54
|
1267800301
|
{
"authors": [
"GuilaneDen",
"apaillier-ledger",
"tjulien-ledger"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1541",
"repo": "LedgerHQ/app-ethereum",
"url": "https://github.com/LedgerHQ/app-ethereum/pull/316"
}
|
gharchive/pull-request
|
feat(velas): Add support for Velas
Description
Add support for Velas.
chain ID: 106
Changes include
[ ] Bugfix (non-breaking change that solves an issue)
[x] New feature (non-breaking change that adds functionality)
[ ] Breaking change (change that is not backwards-compatible and/or changes current functionality)
[ ] Tests
[ ] Documentation
[ ] Other (for changes that might not fit in any category)
Hi @GuilaneDen, we will be closing this PR as it needs to be rebased. Please apply the required changes, making sure to follow our guidelines detailed here, and submit the form.
Already added by #391
|
2025-04-01T06:37:09.641437
| 2022-10-25T14:19:46
|
1422548202
|
{
"authors": [
"thomasrogerlux"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1542",
"repo": "LedgerHQ/ledger-live",
"url": "https://github.com/LedgerHQ/ledger-live/pull/1675"
}
|
gharchive/pull-request
|
Fix lotties for Nano X in new BLE pairing flow
📝 Description
The wrong lotties were used for the Nano X in BLE pairing flow. Replaced them with the correct one and adjusted styling
❓ Context
Impacted projects: ledger-live-mobile
Linked resource(s): FAT-536
✅ Checklist
[ ] ~Test coverage~: no UI testing on mobile yet
[x] Atomic delivery
[x] No breaking changes
📸 Demo
N/A, see slack channels
🚀 Expectations to reach
Please make sure you follow these Important Steps.
Pull Requests must pass the CI and be internally validated in order to be merged.
QA OK
|
2025-04-01T06:37:09.726218
| 2020-08-05T07:01:58
|
673305130
|
{
"authors": [
"Dreace",
"Miloas",
"Rosonlee",
"yihong0618"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1551",
"repo": "LeetCode-OpenSource/vscode-leetcode",
"url": "https://github.com/LeetCode-OpenSource/vscode-leetcode/issues/609"
}
|
gharchive/issue
|
Session expired
When I log in on the web, the account in VS Code says the session expired.
It just happened today. Does that mean I can't be logged in in VS Code and on the web at the same time?
I used to submit from VS Code and write solutions through the web.
It always worked until today.
The same problem; even logging in with a cookie logs the web session out.
Did you use leetcode or leetcode-cn? I didn't have this problem for leetcode both cookie login or third party login.
Did you use leetcode or leetcode-cn? I didn't have this problem for leetcode both cookie login or third party login.
Leetcode-cn. It just happened today.
Yes, I checked. Leetcode-cn changed the policy.
I think there is no better solution for now.
The old leetcode-cli used a shared-cookie approach when it had this problem (https://github.com/skygragon/leetcode-cli-plugins/blob/master/plugins/cookie.chrome.js), but I don't think it is a good way.
Yes, I checked. Leetcode-cn changed the policy.
Ok, hope it gets fixed later, thank you
fixed
It happens again
fixed
It happens again
|
2025-04-01T06:37:09.765451
| 2024-04-27T02:00:14
|
2266709849
|
{
"authors": [
"Lenivaya",
"hellishvictor"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1552",
"repo": "Lenivaya/qrrs",
"url": "https://github.com/Lenivaya/qrrs/issues/226"
}
|
gharchive/issue
|
Feature request: Support for SVG format.
Hi, the title is self-explanatory.
Cheers.
Added with #228
Works fine for me, closing this as resolved
And here too, awesome!
|
2025-04-01T06:37:09.773817
| 2021-08-12T21:59:10
|
969640539
|
{
"authors": [
"LeoRiether"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1553",
"repo": "LeoRiether/FPGRARS",
"url": "https://github.com/LeoRiether/FPGRARS/issues/15"
}
|
gharchive/issue
|
Change display resolution
There's no 320x240 bitmap display in the original RARS, only in the modified version
--width and --height were implemented in v2.2!
|
2025-04-01T06:37:09.792063
| 2023-07-02T08:56:53
|
1784534652
|
{
"authors": [
"LeonYang95",
"dino-chiio",
"superzeroT"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1554",
"repo": "LeonYang95/PLELog",
"url": "https://github.com/LeonYang95/PLELog/issues/23"
}
|
gharchive/issue
|
Training process error
The training process encountered the following problems
With bidirectional set to False, another problem arose.
I have removed the latter part of the multiplication in the image below. It can then be trained, but the result is wrong.
Below are the contents of my data file.
I can't solve the problem yet. I hope I can get some help to solve it. Thank you very much!
Can you check which one is 200d, and which one is 100d in hiddens * sent_probs? This could help clarify this issue.
Can you check which one is 200d, and which one is 100d in hiddens * sent_probs? This could help clarify this issue.
hiddens is 200d, sent_probs is 100d.
Sorry, I failed to reproduce this error. However, here's a tip that may help:
The shape of sent_prob should be batch_size * seq_len after the attention mechanism. And become batch_size * seq_len * 1 after the view operation. Therefore, I am not sure which part goes wrong. As shown in your output, 100 could be the batch_size.
Here I attach a screen shot of my runtime outputs with shapes, I hope this can help you with debugging.
What are the shapes after the view operation?
So it seems fine?
sent_probs can be regarded as the attention score of the hidden states for each log event in the log sequence. The multiplication between hidden_states and sent_probs is actually an averaged summation of hidden states so that it gives a final representation for each log sequence.
So it seems fine?
sent_probs can be regarded as the attention score of the hidden states for each log event in the log sequence. The multiplication between hidden_states and sent_probs is actually an averaged summation of hidden states so that it gives a final representation for each log sequence.
Thank you very much for your help. I'll keep looking for a solution.
Hi @superzeroT, may I ask if the issue is still unresolved? And, as mentioned in your screenshot, "此处有相应的修改" ("corresponding changes were made here"), what were those exactly?
Hi @LeonYang95, I haven't solved the problem yet. I tried to unify the dimensions, but it didn't work. Don't worry about the note I added. Since your code runs successfully, I guess it has something to do with the environment configuration, etc.
Hi @LeonYang95, can I see the shapes of your sent_probs and hiddens values?
Mine were:
hiddens: [100, 38, 200]
sent_probs: [100,38,1]
Your first two shapes seems fine. But the sequence length of the last two shapes is only 1?
Hi, you guys. Have you solved this problem? I have the same error when evaluating the testing set.
I see that it throws the error in the last batch of the testing set.
Hi @LeonYang95. I think there is an error in module/Attention.py, class LinearAttention, in https://github.com/LeonYang95/PLELog/blob/c8bb56b08fe6368f3b3c71ff88de8a87c48c7607/module/Attention.py#L275
combined_tensors.squeeze(1) will remove the input dimension which has size 1. So, when the input has sequence length 1, it will be removed in dimension 1. When I remove the squeeze function, the code works completely.
Do I misunderstand anything in this situation?
Hi @dino-chiio ,
Your comment about the shape issue is correct, the squeeze will produce this error when sequence length is one. But for other situations, the code works fine. Please consider this error as an "anomaly" prediction.
The CPUEmbedding keeps the embedding weights in CPU instead of GPU to lower the cost of GPU memory. While we were doing this research, our GPUs resources were limited. If you have more advanced GPUs, you can try training the weights along with other parameters, hopefully, you will get a considerable improvement.
I do not recommend to cancel the squeeze operation. The attention I used was learned from other projects, and I am not sure about the results after the cancellation.
I believe regarding log sequences of length one as anomalies is an acceptable solution, since it is possible that those log sequences are actual anomalies or irrelavant to the system running status.
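The squeeze hazard discussed above can be illustrated without PyTorch. The helper below is a hypothetical stand-in that mirrors `Tensor.squeeze(dim)` semantics on plain shape tuples: the dimension is dropped only when its size is 1, which is exactly why a batch of length-1 log sequences loses its sequence dimension.

```python
def squeeze_shape(shape, dim):
    """Mimic torch's Tensor.squeeze(dim) on a plain shape tuple:
    drop dimension `dim` only when its size is 1."""
    if shape[dim] == 1:
        return shape[:dim] + shape[dim + 1:]
    return shape

# Intended use: drop a trailing singleton, e.g. (batch, seq_len, 1).
print(squeeze_shape((100, 38, 1), 2))   # (100, 38)

# Hazard: with sequence length 1, squeezing dim 1 removes seq_len itself.
print(squeeze_shape((100, 1, 200), 1))  # (100, 200) -- the seq dim is gone
```

This is why treating length-1 sequences specially (or predicting them as anomalies, as suggested above) sidesteps the shape mismatch.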
|
2025-04-01T06:37:09.794614
| 2021-04-15T14:06:46
|
858917876
|
{
"authors": [
"LeonardoCardoso",
"jeffhodsdon"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1555",
"repo": "LeonardoCardoso/SwiftLinkPreview",
"url": "https://github.com/LeonardoCardoso/SwiftLinkPreview/pull/138"
}
|
gharchive/pull-request
|
Support for video
Action
Add a brief description of what was made
Issues: #issue-number #issue-number ...
Commits: #commit-hash ...
...
Thanks, @jeffhodsdon. I will check it out ASAP.
|
2025-04-01T06:37:09.918556
| 2020-08-04T10:00:36
|
672668028
|
{
"authors": [
"DevTimur",
"Liahim85"
],
"license": "Artistic-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1556",
"repo": "Liahim85/MistyWorld_Open",
"url": "https://github.com/Liahim85/MistyWorld_Open/issues/79"
}
|
gharchive/issue
|
[BUG] planks localisation and icon
None of the mist wood planks has a proper name or icon.
If I try to middle-click it (pick up in creative), it will give me an errored item.
This is strange because everything works fine for me.
This is probably a conflict with another mod.
Oh! Wait!
Did you change something in the lang file?
Yep! It's a bug. Will be fixed in the next update.
|
2025-04-01T06:37:09.929838
| 2023-12-20T07:46:27
|
2050025798
|
{
"authors": [
"CSUPZW",
"Liang-Ding"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1557",
"repo": "Liang-Ding/seishmc",
"url": "https://github.com/Liang-Ding/seishmc/issues/1"
}
|
gharchive/issue
|
Can it only calculate the parameters related to the P wave?
Dear sir,
Because most of the collected microseisms are P-wave signals, and most S-wave signals are difficult to utilize, we hope to calculate the source mechanism based only on P-waves. Can the function "DHMC_DC" run with only the parameters related to the P wave?
Hello and thank you for your suggestion! We're thrilled to let you know that SeisHMC is continuously evolving, with ongoing updates and enhancements. Expect to see a range of new and exciting features soon. Stay connected for more updates, and thanks again for being interested in our work.
|
2025-04-01T06:37:09.933768
| 2023-08-13T15:21:40
|
1848634904
|
{
"authors": [
"Dok0xv",
"binx6"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1558",
"repo": "LibChecker/LibChecker-Rules",
"url": "https://github.com/LibChecker/LibChecker-Rules/issues/507"
}
|
gharchive/issue
|
libx264.so
Library filename / 库文件名
libx264.so
Library label / 库的文字标签
x264
Library team / 库的开发团队
VideoLAN
Files & Comment / 同组文件 & 备注
No response
In Apps / 出现于
酷狗音乐 (KuGou Music)
Library icon / 库图标
https://www.iconfont.cn/search/index?q=视频编码
Library description / 库描述
x264 is a free software library and application for encoding video streams into the H.264/MPEG-4 AVC compression format, and is released under the terms of the GNU GPL.
Library relative URL / 相关链接
https://www.videolan.org/developers/x264.html
👍🏼
|
2025-04-01T06:37:09.940902
| 2016-02-16T10:25:10
|
133940364
|
{
"authors": [
"kevbite",
"rmourato"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1559",
"repo": "LiberisLabs/CompaniesHouse.NET",
"url": "https://github.com/LiberisLabs/CompaniesHouse.NET/pull/3"
}
|
gharchive/pull-request
|
Extension for retrieving company profile info
Hi folks,
First of all thanks for making this library available!
My use case for using the Companies House API was for retrieving details of a specific company (given its number) rather than a generic search. Instead of rolling out my own thing I thought about extending your library to do this.
My proposed changes are an extension to your API to offer the request to GET /company/{company_number} and deserialize the response to the appropriate POCO, a CompanyProfile object.
The new method signature for this is Task<CompaniesHouseClientResponse<CompanyProfile>> GetCompanyProfileAsync(string companyNumber, CancellationToken cancellationToken = default(CancellationToken));
I kept the implementation in-line with yours so hopefully you should see no surprises, and I made no changes to existing code apart from moving a few common types around for reuse.
I might further extend this later (I have the officerList in mind to do next) but thought I'd put this forward for early review.
Hello,
Contributions are always welcome and it's great to see people helping out with the OSS community.
A quick review of the code - It looks great and keeps in line with the rest of the coding styles within the project.
If you could just update the version to the next minor (1.2.0) within the appveyor.yml we'll get that merged in and pushed to nuget.
Thanks
LiberisLabs
Hi Kevin,
Thanks for the quick reply.
The version has been updated and build looks green now.
Cheers,
Rui
It's all merge in now and pushed to nuget, we've had to bump up the version again to 1.2.1 due to appveyor but fixed this issue for next time round.
http://www.nuget.org/packages/CompaniesHouse/1.2.1
Thanks Kevin.
I'll have the version in mind if I submit another PR.
|
2025-04-01T06:37:09.981067
| 2023-12-02T12:44:39
|
2022027299
|
{
"authors": [
"derneuere",
"loulou91"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1560",
"repo": "LibrePhotos/librephotos",
"url": "https://github.com/LibrePhotos/librephotos/issues/1086"
}
|
gharchive/issue
|
feeback on new caption model: strange result
The result from the log and the displayed caption are different: "a stream in the middle of a field" on one side, "a white and blue boat traveling down a mountain" on the other...
2023-12-02 12:31:13,666 : api_util.py : get_search_term_examples : 186 : INFO : 100 possible ids 2023-12-02 12:31:13,667 : api_util.py : get_search_term_examples : 201 : INFO : Getting search terms for user 1 2023-12-02 12:31:14,306 : api_util.py : get_search_term_examples : 202 : INFO : Found 93 photos 2023-12-02 12:32:32,031 : photo.py : _generate_captions_im2txt : 189 : INFO : generated im2txt captions for image /protected_media/thumbnails_big/16dd51def768c326e131d5b70bbe51a51.webp with SiteConfig blip_base_capfilt_large with Blip: True and Onnx: False caption: a stream in the middle of a field
The suggestion will now be displayed above the caption. This should make it clearer what the description is and what the suggestion is :)
Well... not so sure it's clearer for me... ;-)
where is "generate" button for image without caption? what means the icon?
sorry to be boring with all that stuff 😅 in addition doc will have to be updated accordingly ;-)
The wand button is now the generating button. You can click on the suggestion, which will add it to the caption field. You can then save it by clicking on submit.
This makes it clear what the suggestion is, what your description is, and when it gets saved!
|
2025-04-01T06:37:09.993700
| 2022-11-30T17:27:45
|
1469988258
|
{
"authors": [
"m-lyon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1561",
"repo": "Lightning-AI/lightning",
"url": "https://github.com/Lightning-AI/lightning/issues/15877"
}
|
gharchive/issue
|
pl.LightningModule.load_from_checkpoint fails when matlab.engine is imported
Bug description
pytorch_lightning fails to load a previously saved model when matlab.engine is imported within the script.
This only happens when import matlab.engine is included in the script. The checkpoint_path is not empty.
How to reproduce the bug
import matlab.engine
from my_custom_model import LITModel # this is a pytorch_lightning.LightningModule model
model = LITModel.load_from_checkpoint(checkpoint_path='/path/to/my/checkpoint.ckpt')
Error messages and logs
model = model_class.load_from_checkpoint(checkpoint_path=ckpt_path)
File "/home/anaconda3/envs/torch/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 139, in load_from_checkpoint
checkpoint = pl_load(checkpoint_path, map_location=lambda storage, loc: storage)
File "/home/anaconda3/envs/torch/lib/python3.8/site-packages/pytorch_lightning/utilities/cloud_io.py", line 47, in load
return torch.load(f, map_location=map_location)
File "/home/anaconda3/envs/torch/lib/python3.8/site-packages/torch/serialization.py", line 705, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
File "/home/anaconda3/envs/torch/lib/python3.8/site-packages/torch/serialization.py", line 243, in __init__
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: [enforce fail at inline_container.cc:106] . archive does not contain any files
Environment
- Linux: Ubuntu 22.04.1 LTS
- python==3.8.12
- pytorch_lightning==1.6.4
- torch==1.11.0
- matlabengine==9.13.1
- Matlab: R2022b
- Lightning Component: LightningModule
- GPU models and configuration: model checkpoint was originally trained on different GPU to the one loading it
- How you installed Lightning: conda
- Running: locally
More info
No response
This is most likely a bug with matlabengine; I need to investigate further to pinpoint the issue.
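The error message says the checkpoint zip archive contains no files. Before blaming the matlab.engine interaction, a stdlib-only sanity check (a hypothetical diagnostic, not part of Lightning) can confirm the file itself is a valid, non-empty zip, since torch >= 1.6 checkpoints are zip archives:

```python
import zipfile

def checkpoint_looks_valid(path: str) -> bool:
    """Rough sanity check for a torch>=1.6 checkpoint file:
    it must be a zip archive containing at least one entry."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return len(zf.namelist()) > 0
```

If this returns True yet torch.load still fails only when matlab.engine was imported first, the conflict likely comes from libraries the MATLAB engine loads into the process; importing matlab.engine only after the checkpoint has been loaded is a possible workaround.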
|
2025-04-01T06:37:09.995773
| 2023-07-16T17:04:49
|
1806652335
|
{
"authors": [
"Andrei-Aksionov",
"carmocca"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1562",
"repo": "Lightning-AI/lit-gpt",
"url": "https://github.com/Lightning-AI/lit-gpt/pull/267"
}
|
gharchive/pull-request
|
Support lora with disabled qkv
Hi there 👋
That's basically a leftover from my previous PR that I totally forgot to add.
Although creating LoRAQKVLinear without lora_A/lora_B at all (if enable_lora=(False, False, False)) is supported by checking
if self.r > 0 and any(self.enable_lora)
in the __init__ and train methods, for some reason the same check hasn't been done in the forward method (even in the original code), so if someone wants to apply LoRA to the projection head only, it's going to fail.
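The guard in question can be sketched as a stand-alone predicate (a simplified illustration, not the lit-gpt code itself): the LoRA path should run only when the rank is positive and at least one of the Q/K/V projections has LoRA enabled, and the PR applies the same condition in forward:

```python
def lora_is_active(r: int, enable_lora: tuple) -> bool:
    """The condition checked in __init__ and train, and (after this PR)
    in forward: LoRA applies only with a positive rank and at least one
    of Q/K/V enabled."""
    return r > 0 and any(enable_lora)

print(lora_is_active(8, (True, False, True)))    # True
print(lora_is_active(8, (False, False, False)))  # False: skip LoRA entirely
print(lora_is_active(0, (True, True, True)))     # False: rank 0 disables it
```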
Thanks!
|
2025-04-01T06:37:10.001696
| 2023-11-09T11:19:48
|
1985410024
|
{
"authors": [
"Andrei-Aksionov",
"carmocca"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1563",
"repo": "Lightning-AI/lit-gpt",
"url": "https://github.com/Lightning-AI/lit-gpt/pull/715"
}
|
gharchive/pull-request
|
BNB>=0.41.2 uses quant_state as a class, not as a list
Hi there 👋
Fixes #707
Bitsandbytes starting from version 0.41.2 uses quant_state as a class, not as a list: https://github.com/TimDettmers/bitsandbytes/commit/61a4a20da91b7f780a98d3ff8235f1455835ecf2.
As a result, when trying to access the shape by the index
https://github.com/Lightning-AI/lit-gpt/blob/d9283f420be2ee3b9af77b2e12edf92aa200379e/lit_gpt/utils.py#L35
... we get:
TypeError: 'QuantState' object is not subscriptable
As proposed in #710, maybe we indeed need to pin the version of BNB, since such breaking changes might appear again?
Can you also update the minimum bitsandbytes version in the requirements?
Sure I can 😄.
But do you want me to specify the exact version (==) or the minimum version (>=)?
If we specify only the minimum version, a breaking change can hit us again in the future.
I would specify a minimum, and if this becomes a trend in the future, we can pin it to a specific version
I don't know how they do versioning, but a bugfix-level version bump (0.41.0 -> 0.41.2) introduced a number of breaking changes.
And doing a manual training/generation check on a GPU isn't sufficient.
GitHub has slowly started rolling out GPU-accelerated CI.
That's what we need: a short, sanity-check type of training/generation run. Otherwise many more issues might be overlooked, and without proper stability the repo will only go so far in terms of popularity.
@Andrei-Aksionov Would you recommend that I revert this commit?
Yes, revert it.
A pinned version of BNB (0.41.0) is the best option for now.
|
2025-04-01T06:37:10.008805
| 2022-11-24T06:49:40
|
1462846959
|
{
"authors": [
"SkafteNicki",
"Xiaotian0726"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1564",
"repo": "Lightning-AI/metrics",
"url": "https://github.com/Lightning-AI/metrics/issues/1359"
}
|
gharchive/issue
|
I can't get rid of the userwarning of WER and CER: Torchmetrics v0.9 introduced a new argument class property called full_state_update that has not been set for this class...
I can't get rid of this userwarning while using WordErrorRate and CharErrorRate:
/home/hxt/ASR_adversarial_examples/venv/lib/python3.7/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Torchmetrics v0.9 introduced a new argument class property called `full_state_update` that has
not been set for this class (WordErrorRate). The property determines if `update` by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to `False`.
We provide an checking function
`from torchmetrics.utilities import check_forward_full_state_property`
that can be used to check if the `full_state_update=True` (old and potential slower behaviour,
default for now) or if `full_state_update=False` can be used safely.
According to the description above, a relevant property called full_state_update should be set for WER, but it is not.
Actually, my torchmetrics version is:
torchmetrics 0.9.3
and the full_state_update property has been set already according to source code:
class WordErrorRate(Metric):
is_differentiable: bool = False
higher_is_better: bool = False
full_state_update: bool = False
# ...
Then I searched the source code for where this userwarning was thrown, and I found this in the base class torchmetrics/Metrics:
class Metric(Module, ABC):
# ...
is_differentiable: Optional[bool] = None
higher_is_better: Optional[bool] = None
full_state_update: Optional[bool] = None
def __init__(
self,
**kwargs: Any,
) -> None:
super().__init__()
# ...
if self.full_state_update is None and not is_overridden("forward", self, Metric):
rank_zero_warn(
f"""Torchmetrics v0.9 introduced a new argument class property called `full_state_update` that has
not been set for this class ({self.__class__.__name__}). The property determines if `update` by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to `False`.
We provide an checking function
`from torchmetrics.utilities import check_forward_full_state_property`
that can be used to check if the `full_state_update=True` (old and potential slower behaviour,
default for now) or if `full_state_update=False` can be used safely.
""",
UserWarning,
)
So is this a problem with overriding the class member full_state_update?
And how to get rid of this? Thank you.
Hi @Xiaotian0726, thanks for reporting this issue.
I am a bit confused that you still get the warning even when you overwrite the property in your metric class. Here you can see that we do exactly that for our own metrics:
https://github.com/Lightning-AI/metrics/blob/ed2249d1e250dbfc4f2e654b455f43f3d29abfb1/src/torchmetrics/regression/mse.py#L42-L44
Also I cannot reproduce the error on v0.9.3 with this small example:
from torchmetrics import Metric
class WordErrorRate(Metric):
full_state_update: bool = False
def update(self):
pass
def compute(self):
return 1
if __name__ == "__main__":
metric = WordErrorRate()
Have you set up your warnings filtering in some weird way?
Regardless of all this, the warning has been removed for the next release in this PR https://github.com/Lightning-AI/metrics/pull/1349. You can update directly from master now if you want the change:
pip install git+https://github.com/Lightning-AI/metrics.git
or wait for the next release (it will probably be within a week or so).
Yes, the warning is first removed in what will be v0.11.0, which will officially be released within the next week.
Closing issue.
|
2025-04-01T06:37:10.024663
| 2022-02-02T11:32:17
|
1121813735
|
{
"authors": [
"bibitibooo1",
"svetlio8"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1565",
"repo": "LimeChain/hashport-validator",
"url": "https://github.com/LimeChain/hashport-validator/issues/391"
}
|
gharchive/issue
|
Dashboard Metrics for NFT
We will need to implement the same metrics for the NFT assets as we already have for the fungible ones.
❗ We will need both the NFT implementation and the metrics. At the moment they are on NFT branch & Metrics branch
Locked same as #353
Minted - #354
Balance between networks - #357
❓ Is it possible that new metrics will be needed based on the requests?
This task will become a lot larger with the amount of new tokens coming. To think of organising the panels in Grafana.
-> NFTs
-> FT
Add prefix to filter on a new panel.
Add NFT to the names?
|
2025-04-01T06:37:10.039763
| 2024-07-08T22:31:46
|
2396685582
|
{
"authors": [
"namanlalitnyu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1566",
"repo": "Lind-Project/safeposix-rust",
"url": "https://github.com/Lind-Project/safeposix-rust/pull/293"
}
|
gharchive/pull-request
|
Updates to "read_syscall" and "pread_syscall".
Description
Fixes # (issue)
The following changes include the tests and comments in the code for the "read_syscall" and "pread_syscall" file system calls under RustPosix.
The tests were added to cover all the possible scenarios that might happen when calling the file system_calls read_syscall and pread_syscall.
Type of change
[x] This change just contains the tests for an existing file system call.
[x] This change contains the minor code changes and comments for read_syscall and pread_syscall.
[x] This change contains code reformatting for existing file system calls.
How Has This Been Tested?
In order to run the tests, we need to run the cargo test --lib command inside the safeposix-rust directory.
All the tests are present under this directory: lind_project/src/safeposix-rust/src/tests/fs_tests.rs
Test A - ut_lind_fs_read_write_only_fd()
Test B - ut_lind_fs_read_from_directory()
Test C - ut_lind_fs_read_from_epoll()
Test D - ut_lind_fs_read_from_regular_file()
Test E - ut_lind_fs_read_from_chardev_file()
Test F - ut_lind_fs_read_from_sockets()
Test G - ut_lind_fs_read_from_pipe_blocking_mode()
Test H - ut_lind_fs_read_from_pipe_nonblocking_mode()
Test I - ut_lind_fs_pread_write_only_fd()
Test J - ut_lind_fs_pread_from_file()
Test K - ut_lind_fs_pread_from_directory()
Test L - ut_lind_fs_pread_invalid_types()
Checklist:
[x] My code follows the style guidelines of this project
[x] I have commented my code, particularly in hard-to-understand areas
[x] My changes generate no new warnings
[x] I have added tests that prove my fix is effective or that my feature works
[x] Any dependent changes have been added to a pull request and/or merged in other modules (native-client, lind-glibc, lind-project)
@ve1nard and @Anway-Agte. Requesting your review of these changes.
Thanks!
@yashaswi2000, could you please review these changes?
Thanks!
|
2025-04-01T06:37:10.041277
| 2018-02-20T17:20:19
|
298678476
|
{
"authors": [
"AdrianDC",
"jurf"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1567",
"repo": "LineageOS/www",
"url": "https://github.com/LineageOS/www/pull/11"
}
|
gharchive/pull-request
|
footer: use en dash for year range
Same thing as LineageOS/lineage_wiki#78. Small but pleases the eye :-)
Please do not open Pull requests, use LineageOS Gerrit instead.
Never found the time to migrate this.
|
2025-04-01T06:37:10.050257
| 2024-07-20T04:21:08
|
2420549436
|
{
"authors": [
"piste-jp",
"tonyperk"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1568",
"repo": "LinearTapeFileSystem/ltfs",
"url": "https://github.com/LinearTapeFileSystem/ltfs/issues/471"
}
|
gharchive/issue
|
LTO9 Tape Drive fails to mount a successfully formatted LTO8 media
Describe the bug
mkltfs has been run on an LTO9 tape drive that has LTO8 tape media (000001L8) and runs successfully. There are output errors related to:
cannot get remaining capacity log page x17 failed
But it successfully wrote the index to partitions a and b, and it states the media is successfully formatted.
Then running ltfs, it states that it cannot mount the volume due to it not being partitioned.
To Reproduce
Steps to reproduce the behavior:
Run mkltfs on the LTO9 tape drive with LTO8 media.
Run ltfs on the /dev/IBMtape1 device, which is the LTO9 tape drive with the LTO8 media loaded
See error
Expected behavior
Expect the ltfs to be able to mount the successfully formatted tape.
Desktop (please complete the following information):
OS: Rocky Linux 8.4
Lintape: 3.0.59
ltfs: <IP_ADDRESS>
Please read the quick start document before opening an issue. As it says, you need to use an sg device instead.
Support of lin_tape was dropped many years ago. I'm not sure why you want to use it at this time.
Please contact to IBM if you are using IBM provided pre-compiled version of the LTFS.
I'm using a build from 2.4.5, which was released at least 5 years ago.
Also, I have code that sends inquiries and TURs to the sg device for health checks, so I cannot use the sg device for ltfs. Also, I still use the lin_tape driver.
This should still work, no? It works with LTO 7 and LTO 8 tape drives. Just not LTO 9 tape drives.
Also, I have code that sends inquiries and TURs to the sg device for health checks, so I cannot use the sg device for ltfs. Also, I still use the lin_tape driver.
This should still work, no? It works with LTO 7 and LTO 8 tape drives. Just not LTO 9 tape drives.
I believe the lin_tape driver backend was no longer maintained when 2.4.5 was released. You are just lucky to be able to use it on LTO7 and LTO8. Please do not expect the same quality when using it in an unofficial manner.
One more thing, I strongly recommend that you should not access to the same device with different device files. It could cause a catastrophic result against data on tape. For example #446, we never say LTFS works in such kind of condition at all.
How do you address multipath devices? IBMtapeX supported multipath and sg does not.
Data path failover is already supported on the sg backend.
Please see PR #27 and
https://github.com/LinearTapeFileSystem/ltfs/commit/a23c6da1b64fe5d9c3b27bedd22b32f8dba67a24.
That's very good news that we now have sg-ibmtape data path failover.
What about multi-path from a tape library? That is something that lin_tape was able to provide.
I'm not sure what kind of answer you are expecting, but IBM Spectrum Archive Library Edition supports control path failover on its sg backend for libraries, I think.
But it is not related to this LTFS project at all, because this project targets only standalone drives. So you need to contact IBM if you want an answer.
|
2025-04-01T06:37:10.067613
| 2019-12-19T16:19:33
|
540416090
|
{
"authors": [
"LingDong-",
"chaomenghsuan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1574",
"repo": "LingDong-/wenyan-lang",
"url": "https://github.com/LingDong-/wenyan-lang/issues/233"
}
|
gharchive/issue
|
[installation failed]
Installation failed when running npm run make_cmdline
here's part of the log:
18 verbose node v13.5.0
19 verbose npm v6.13.4
20 error code ELIFECYCLE
21 error errno 2
22 error wenyan-lang@ make_cmdline: `node ./tools/make_cmdline.js && pkg ./build/wenyan.js --out-path ./build`
22 error Exit status 2
23 error Failed at the wenyan-lang@ make_cmdline script.
23 error This is probably not a problem with npm. There is likely additional logging output above.
24 verbose exit [ 2, true ]
I wonder what the problem is
Hi there, thanks for the report. I haven't run into this issue. Maybe it's something wrong with the pkg packager? Meanwhile you can still run the cmdline tool by using node ./build/wenyan.js
Thanks.
|
2025-04-01T06:37:10.071601
| 2022-04-02T01:45:10
|
1190458017
|
{
"authors": [
"2220666627",
"Xiangyue-Zhang",
"yanghhx"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1575",
"repo": "LinguoLi/CrosSCLR",
"url": "https://github.com/LinguoLi/CrosSCLR/issues/3"
}
|
gharchive/issue
|
The linear evaluation is very low
I have run your code and the linear evaluation accuracy is only 45.84%. When I set "param.requires_grad = True" in lines 44, 49 and 54 of linear.py, the linear evaluation accuracy on NTU60 view can reach 83.3%. However, that operation may be the fine-tuning setting described in your paper. I am very confused.
I have the same issue: the linear evaluation is very low, and only setting "param.requires_grad = True" in linear.py reaches the 83.3% accuracy.
same here!
Maybe try checking your torch version and make sure it is 1.4.0; that works for me.
|
2025-04-01T06:37:10.090178
| 2021-12-30T00:30:40
|
1090809581
|
{
"authors": [
"ByQuartz",
"Matteo02p",
"Smurf1987",
"UInt2048"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1576",
"repo": "LinusHenze/Fugu14",
"url": "https://github.com/LinusHenze/Fugu14/issues/219"
}
|
gharchive/issue
|
Setup failed - Fugu14App.closure doesn't exist
So I get the error "The File Fugu14App.closure doesn't exist."
Here is a screenshot of that.
Any idea how to fix that?
I have an iPhone 11 with iOS 14.3,
AltStore 1.4.8,
and tried to install unc0ver 8.0.0.
Try 8.0.1
Also, Fugu14 != unc0ver
Still the same problem
Same to me
I managed to fix the issue by just deleting my previous unc0ver jailbreak.
After I deleted the unc0ver app, I restarted my phone so it's not jailbroken anymore, and then tried to install Unc0ver x Fugu14 again, and that worked. I hope I can help somebody with that.
|
2025-04-01T06:37:10.103345
| 2018-08-26T15:24:36
|
354103284
|
{
"authors": [
"kennethdave",
"sjackman"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1577",
"repo": "Linuxbrew/brew",
"url": "https://github.com/Linuxbrew/brew/issues/827"
}
|
gharchive/issue
|
Error: undefined method `sdk_path_if_needed' for OS::Mac:Module
During installation got an issue below, please advise. Thanks.
==> Installing dependencies for node: python@2, icu4c
==> Installing node dependency: python@2
==> Downloading https://www.python.org/ftp/python/2.7.15/Python-2.7.15.tar.xz
Already downloaded<EMAIL_ADDRESS>==> Verifying<EMAIL_ADDRESS>checksum
tar xf<EMAIL_ADDRESS>-C /tmp/d20180826-3521-rvhobn
Error: undefined method `sdk_path_if_needed' for OS::Mac:Module
Please report this bug:
https://github.com/Linuxbrew/brew/wiki/troubleshooting
/home/ec2-user/.linuxbrew/Homebrew/Library/Taps/homebrew/homebrew-core/Formula/python@2.rb:117:in `install'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/build.rb:153:in `block (2 levels) in install'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/formula.rb:1117:in `block in brew'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/formula.rb:2037:in `block (2 levels) in stage'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/utils.rb:564:in `with_env'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/formula.rb:2036:in `block in stage'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/resource.rb:120:in `block in unpack'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/resource.rb:189:in `block in mktemp'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/mktemp.rb:55:in `block in run'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/mktemp.rb:55:in `chdir'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/mktemp.rb:55:in `run'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/resource.rb:188:in `mktemp'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/resource.rb:115:in `unpack'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/resource.rb:93:in `stage'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/2.3.0/forwardable.rb:202:in `stage'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/formula.rb:2014:in `stage'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/formula.rb:1112:in `brew'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/build.rb:124:in `block in install'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/utils.rb:564:in `with_env'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/build.rb:121:in `install'
/home/ec2-user/.linuxbrew/Homebrew/Library/Homebrew/build.rb:202:in `<main>'
Please surround copied-and-pasted logs with triple back ticks. See GitHub Help / Quoting Code.
Thanks for the bug report, @kennethdave. Fixed by PR https://github.com/Linuxbrew/homebrew-core/pull/9073
Thanks for the fix, @sjackman. :)
Happy to help!
|
2025-04-01T06:37:10.111400
| 2018-06-12T23:58:05
|
331794284
|
{
"authors": [
"codecov-io",
"sjackman"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1578",
"repo": "Linuxbrew/brew",
"url": "https://github.com/Linuxbrew/brew/pull/727"
}
|
gharchive/pull-request
|
shims/super/cc: Use cc on Linux
Address the error
.../shims/linux/super/cc:440:in `exec': No such file or directory - clang (Errno::ENOENT)
See this failed Docker Hub build:
https://hub.docker.com/r/bcgsc/tigmint/builds/bd6bopkjfmbyn9q4bdacsgm/
Codecov Report
Merging #727 into master will decrease coverage by <.01%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #727 +/- ##
==========================================
- Coverage 68.29% 68.28% -0.01%
==========================================
Files 393 393
Lines 20983 20983
==========================================
- Hits 14330 14329 -1
- Misses 6653 6654 +1
Impacted Files
Coverage Δ
hardware.rb
47.31% <0%> (-1.08%)
:arrow_down:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9184475...24b7b90. Read the comment docs.
This PR did resolve the above failed Docker Hub build of Tigmint. https://hub.docker.com/r/bcgsc/tigmint/builds/
|
2025-04-01T06:37:10.114222
| 2020-03-27T13:33:38
|
589137120
|
{
"authors": [
"BehnazDibayee"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1579",
"repo": "Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB",
"url": "https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/issues/172"
}
|
gharchive/issue
|
version-RFB-640.onnx issue
Hi
I've tried to use "version-RFB-640.onnx" model with "ultra_face_opencvdnn_inference.py" code but it gives me this error:
cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\dnn\src\layers\slice_layer.cpp:147: error: (-215:Assertion failed) requiredOutputs > 0 && inpShape[axis] % requiredOutputs == 0 in function 'cv::dnn::SliceLayerImpl::getMemoryShapes'
However your other simplified onnx models work well. But these three don't: "version-RFB-640.onnx" , "version-RFB-320.onnx" , "version-slim-320.onnx"
Would you please add the "version-RFB/slim-640-simplified" models to your repo, or tell me how to work around this error? I need your "640" models.
Thanks
Also, it would be great if you added Caffe models for RFB-640 and slim-640.
|
2025-04-01T06:37:10.123992
| 2022-05-26T15:42:52
|
1249736290
|
{
"authors": [
"SeungheonOh",
"emiflake"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1580",
"repo": "Liqwid-Labs/agora",
"url": "https://github.com/Liqwid-Labs/agora/issues/101"
}
|
gharchive/issue
|
Deprecate agora-testlib
With #94, agora-testlib lost its place as the home for all test-related functions. Currently, it only has a few functions that are loosely related to testing itself; agora-testlib is ready to be retired.
Hmmm, but shouldn't Specification be part of agora-testlib?
This is how I see the three coexisting:
agora-testlib "library code" in order to test / write specs.
agora-spec the actual spec tree written out.
agora-sample the sample library containing a bunch of Agora preset components and contexts.
Additionally:
agora-test: a "frontend" for agora-spec which runs the spec as tests
agora-bench a "frontend" for agora-spec which runs the spec as benchmarks
|
2025-04-01T06:37:10.162910
| 2023-08-16T06:39:43
|
1852608333
|
{
"authors": [
"Little-Podi",
"jananithangavel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1581",
"repo": "Little-Podi/GRM",
"url": "https://github.com/Little-Podi/GRM/issues/7"
}
|
gharchive/issue
|
Why didn't your model directly apply the mask with infinity and then apply the softmax function? Could you please explain the following code?
For stable training
max_att, _ = torch.max(attn, dim=-1, keepdim=True)
attn = attn - max_att  # subtract the row max for numerical stability
attn = attn.to(torch.float32).exp_() * attn_policy.to(torch.float32)  # multiply by the keep-policy so masked entries become ~0
attn = (attn + eps / N) / (attn.sum(dim=-1, keepdim=True) + eps)  # normalize with an eps guard against all-masked rows
The direct replace operation is not differentiable, which means the attn_policy produced by our prediction modules would not receive any gradients during training. Thus, it is not suitable, as the prediction modules for token division would not learn at all. However, during inference, you can use any operation with identical functionality if you think it is better.
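The numerically stable masked softmax in the snippet above can be sketched in plain Python — a minimal illustration of the per-row formula, not the model's actual tensor code:

```python
import math

def masked_softmax(scores, policy, eps=1e-6):
    # Subtract the row max for numerical stability, exponentiate,
    # multiply by the (differentiable) keep-policy so masked entries
    # contribute ~0, then normalize with a small eps so a fully
    # masked row does not divide by zero.
    n = len(scores)
    m = max(scores)
    exps = [math.exp(s - m) * p for s, p in zip(scores, policy)]
    total = sum(exps)
    return [(e + eps / n) / (total + eps) for e in exps]
```

Because the policy enters multiplicatively rather than via a hard replace, gradients can flow back into whatever produces it.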
|
2025-04-01T06:37:10.166053
| 2021-05-26T05:19:20
|
901784572
|
{
"authors": [
"jzongker",
"tec7z7"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1582",
"repo": "LiveChurchSolutions/ChumsApp",
"url": "https://github.com/LiveChurchSolutions/ChumsApp/issues/62"
}
|
gharchive/issue
|
Wrong donations amounts showing on person page
When a person gives to multiple funds in a single donation, the values reported on the donation summary list on the profile page are wrong.
Example: Total of $20 donated, split between two funds
Profile page shows two $20 donations. Show the value from the fundDonations table instead of the donations table.
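The intended fix can be sketched as follows; the record shapes (donationId, fundId, amount keys) are hypothetical stand-ins for the actual schema:

```python
# Each fundDonations row carries its own per-fund amount. The profile
# list should display these per-fund amounts, not the donation's total
# repeated once per fund.
def fund_amounts(fund_donations, donation_id):
    return [fd["amount"] for fd in fund_donations if fd["donationId"] == donation_id]
```

For the example in the report, a $20 donation split across two funds would yield two rows summing to $20 rather than two $20 rows.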
Fixed in https://github.com/LiveChurchSolutions/ChumsApp/pull/117
|
2025-04-01T06:37:10.179224
| 2019-05-06T08:54:00
|
440601539
|
{
"authors": [
"black-snow"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1583",
"repo": "LivePersonInc/dropwizard-websockets",
"url": "https://github.com/LivePersonInc/dropwizard-websockets/issues/26"
}
|
gharchive/issue
|
Upgrade Jetty
We're still using Jetty 9.4.10 while the current version is already 9.4.18.
There's at least one breaking change in 9.4.17:
3464 Split SslContextFactory into Client and Server
which blows up my current project, because another dependency in my fat jar includes Jetty 9.4.18, leading to a NoClassDefFoundError.
Can we upgrade to the latest jetty version? I'll try to file a PR if I find some spare time.
My crash is gone with dropwizard 1.3.10. I'll file a PR with some dependency updates.
PR here: https://github.com/LivePersonInc/dropwizard-websockets/pull/27
|
2025-04-01T06:37:10.243883
| 2017-09-10T21:49:06
|
256542952
|
{
"authors": [
"kulla",
"vroland"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1585",
"repo": "Lodifice/mfnf-pdf-export",
"url": "https://github.com/Lodifice/mfnf-pdf-export/issues/41"
}
|
gharchive/issue
|
How to handle inline images
Currently inline images are ignored with an error. Is this how it should be generally handled? Maybe just setting the image height to \lineheight might work, for example in https://de.wikibooks.org/wiki/Mathe_für_Nicht-Freaks:_Intervall.
Yes, it is a bug to output an error. Some of the inline images are smileys via https://de.wikibooks.org/wiki/Vorlage:Smiley which can be handled separately...
As there are not that many cases of inline images, just displaying them with the line height might be fine. (64c13f0062fd5b002dba050963a3d7aa52d5ab72)
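The line-height approach can be sketched in LaTeX as below; note that \lineheight is not a standard LaTeX length, so \baselineskip is assumed here as the equivalent, and the file name is a placeholder:

```latex
% Render an inline image no taller than the surrounding text line,
% using graphicx's key-value height option.
\includegraphics[height=\baselineskip]{smiley-placeholder}
```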
@kulla @Lodifice @elbotho Do you think this implementation is enough for the 1.0 milestone?
@vroland Good for me :smile:
|
2025-04-01T06:37:10.293097
| 2016-08-17T11:33:35
|
171635827
|
{
"authors": [
"LBegnaud",
"WouterTinus",
"garyrowswell",
"gdau"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:1586",
"repo": "Lone-Coder/letsencrypt-win-simple",
"url": "https://github.com/Lone-Coder/letsencrypt-win-simple/issues/279"
}
|
gharchive/issue
|
Manual SAN?
Would it be possible to specify a list of comma-separated SANs via a command-line argument like manualhost, instead of typing them interactively?
e.g. letsencrypt --manualsan a.example.com,b.example.com
The existing --san seems to just be a boolean triggering the interactive question
This would be amazing... I need this too.
It would be nice to know if this is even under consideration?
not sure why these aren't linked, but there is a PR built off of this issue
https://github.com/Lone-Coder/letsencrypt-win-simple/pull/299
Which has been merged in the mean while :)
|