590632096 | Date::parse('February')->month returns 3
When using Date::parse('February')->month, the returned result is 3. The same result is returned when using another language: Date::parse('Février')->month always returns 3.
But it works as expected when running Date::parse('1 February')->month, which correctly returns 2.
Is this a bug, or something else I'm missing?
The same happens with new DateTime('February') in pure PHP if today is the 30th or 31st (or > 28/29 depending on leap year). PHP creates a date from "now", let's say 2020-03-31 10:52:30 UTC, then changes the month to February as asked by the string, giving 2020-02-31 10:52:30 UTC. Since this date is not valid, it overflows into the next month (by 2 days: 31 minus the number of days in February 2020), giving 2020-03-02 10:52:30 UTC.
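That overflow behavior can be emulated outside PHP. Here is a Python sketch (the function name and approach are illustrative, not PHP internals):

```python
from datetime import date, timedelta
import calendar

def php_style_set_month(d, month):
    # PHP keeps the day-of-month when changing the month, and lets
    # invalid dates overflow into the following month.
    days_in_target = calendar.monthrange(d.year, month)[1]
    if d.day <= days_in_target:
        return d.replace(month=month)
    overflow = d.day - days_in_target
    return d.replace(month=month, day=days_in_target) + timedelta(days=overflow)

print(php_style_set_month(date(2020, 3, 31), 2))  # → 2020-03-02
```

Parsing a bare month name from a safe day of the month (as suggested below) avoids the overflow entirely.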
You rather should use the start of month with:
Date::parse('February 1');
Date::parse('Février 1');
This explains why when I run Date::parse('1 February')->month it works properly.
Thank you! — Gonna close the issue now.
| gharchive/issue | 2020-03-30T22:12:23 | 2025-04-01T06:44:36.065955 | {
"authors": [
"kylekatarnls",
"waiylgeek"
],
"repo": "jenssegers/date",
"url": "https://github.com/jenssegers/date/issues/337",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1006573755 | fix: morphTo when relation name is in camel case
Commit e19e10a in this PR updates existing tests to use a 'multi-word' relation for the morph relationship. This means that:
imageable is changed to has_image when passed as the $name parameter to morphOne() and morphMany()
Note that snake case is required here - the parent Eloquent getMorphs() method constructs the expected column names by appending '_id' and '_type' to the name that's passed.
imageable is changed to hasImage when accessed as the relation name
That commit causes RelationsTest::testMorph() to fail, then commit c02e584 fixes the issue in line with how Eloquent's morphTo() method works, where $name variable defaults to the name of the calling function and is only snake-cased when passed to getMorphs() (reference)
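As a rough illustration (not Laravel's actual code), the snake-casing and column-name construction that getMorphs performs can be sketched in Python:

```python
import re

def snake_case(name):
    # Approximation of Laravel's Str::snake: insert an underscore
    # before each interior uppercase letter, then lowercase everything.
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

def get_morphs(name):
    # Eloquent constructs the expected column names by appending
    # '_type' and '_id' to the (already snake-cased) relation name.
    return (f"{name}_type", f"{name}_id")

print(snake_case("hasImage"))   # → has_image
print(get_morphs("has_image"))  # → ('has_image_type', 'has_image_id')
```

This shows why the camelCase accessor name and the snake_case $name parameter must be kept in sync by the fix.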
Codecov Report
Merging #2318 (e26c877) into master (6aa6ad1) will not change coverage.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## master #2318 +/- ##
=========================================
Coverage 88.12% 88.12%
Complexity 664 664
=========================================
Files 33 33
Lines 1566 1566
=========================================
Hits 1380 1380
Misses 186 186
Impacted Files
Coverage Δ
src/Eloquent/HybridRelations.php
93.58% <100.00%> (ø)
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6aa6ad1...e26c877. Read the comment docs.
This was merged in #2498.
Thanks!
| gharchive/pull-request | 2021-09-24T15:17:47 | 2025-04-01T06:44:36.077128 | {
"authors": [
"codecov-commenter",
"divine",
"willtj"
],
"repo": "jenssegers/laravel-mongodb",
"url": "https://github.com/jenssegers/laravel-mongodb/pull/2318",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1854278536 | [FP]: Wrongly reporting vulnerability CVE-2021-41033 on org.eclipse.osgi-3.18.0
Package URl
pkg:maven/org.eclipse.platform/org.eclipse.osgi@3.18.0
CPE
cpe:2.3:a:eclipse:equinox:*:*:*:*:*:*:*:* versions up to (excluding) 4.21
CVE
CVE-2021-41033
ODC Integration
{"label"=>"Maven Plugin"}
ODC Version
8.3.1
Description
Per CVE Affected component:
Eclipse Equinox, at least until version 4.21
cpe:2.3:a:eclipse:equinox:*:*:*:*:*:*:*:* versions up to (excluding) 4.21
Only the 3PP "org.eclipse.osgi-3.18.0.jar" is used; the vulnerable 3PP component "Eclipse Equinox" is not packaged or used, not even as an indirect dependency in the environment. This vulnerability is also more specific to the Eclipse IDE and its plugin installation. But the tool reports this vulnerability on org.eclipse.osgi-3.18.0.jar, which is wrong.
From Dependency Check tool team, we need confirmation on these false positives. Could you please validate and confirm?
Hi team, any update on this pls?
What is the status of this?
I tried to summarize the problem in order to make it more clear.
The Equinox Platform just bundles a bunch of components which can be also individually used and consumed from maven central. The Equinox Platform has a version number but each bundled component has its own version number (and a maven group/artifact id). The latest version of the Equinox Platform is 4.30, see here for a list of contained components are their version.
CVE-2021-41033 states that all Equinox versions < 4.21 affected.
A common use case is that you are using just part of the Equinox Platform, e.g. just the OSGi Framework
<dependency>
<groupId>org.eclipse.platform</groupId>
<artifactId>org.eclipse.osgi</artifactId>
<version>3.18.0</version>
</dependency>
For an outsider it is hard to determine whether the component you use suffers from a CVE which only states which Equinox Platform version is affected. In this specific case I think that the OSGi Framework was not vulnerable to CVE-2021-41033.
The problem was in P2, according to this issue.
However, I think this is a general problem, so let's assume that the OSGi Framework was affected.
CVE-2021-41033 was fixed in Equinox Platform >= 4.21, which contains Equinox OSGi Framework in version 3.17, see here.
So using Equinox OSGi Framework in version 3.17 and above should be fine, but OWASP complains because it mixes up the Equinox Platform version with the Equinox OSGi Framework version.
I think this is a general problem that might affect other parts of the Equinox Platform as well (Equinox CM, Equinox Console, etc.), and it would be nice if we could work out a solution for this.
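The mix-up can be illustrated with a naive version check (a hypothetical sketch, not Dependency Check's actual matching code): comparing the component's own version against the platform's CVE range makes every 3.x framework release look affected:

```python
def cpe_matches(component_version, cve_max_excl="4.21"):
    # Naive numeric comparison of dotted versions, like the one behind
    # the false positive: the component's version (e.g. 3.18.0) is
    # compared against the *platform* version range (< 4.21).
    def key(v):
        return [int(p) for p in v.split(".")]
    return key(component_version) < key(cve_max_excl)

print(cpe_matches("3.18.0"))  # → True (flagged, though the artifact isn't the platform)
print(cpe_matches("3.17"))    # → True as well — any framework 3.x "matches"
```

The fixed framework version 3.17 still "matches", which is exactly the confusion described above.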
The eclipse maintainer stated that this is indeed a false positive
and that the mentioned artifact (Equinox OSGi Framework) was never affected by CVE-2021-41033, see here.
So please add this as a false positive, thanks in advance.
| gharchive/issue | 2023-08-17T05:29:23 | 2025-04-01T06:44:36.114746 | {
"authors": [
"prabutdr",
"profhenry"
],
"repo": "jeremylong/DependencyCheck",
"url": "https://github.com/jeremylong/DependencyCheck/issues/5881",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
581682211 | Customizing FontAwesome version
Hi,
I have a FontAwesome Pro license, and I'd like to use it in my Laravel-AdminLTE project. Is it possible to customize the FontAwesome-related URIs? I checked out the solution on #261, but the solution seems to be no longer available.
Thanks!
run adminlte:install --only=main_views
this will publish the master views; edit them and replace the fontawesome includes
please note that if the views get updated in future releases you will need to do that again
| gharchive/issue | 2020-03-15T13:49:05 | 2025-04-01T06:44:36.117409 | {
"authors": [
"LBreda",
"andcarpi"
],
"repo": "jeroennoten/Laravel-AdminLTE",
"url": "https://github.com/jeroennoten/Laravel-AdminLTE/issues/500",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
273510038 | ExitStatus Program terminated with exit(0)
Hi,
I have this error when I deploy on server (platform.sh), I got this error after accepting camera :
Uncaught ExitStatus ar.js:19
message: "Program terminated with exit(0)"
name: "ExitStatus"
status: 0
__proto__: Error at https://xxx.eu.platform.sh/assets/js/ar.js:19:2247
constructor: ƒ ExitStatus(status)
arguments: null
caller: null
length: 1
name: "ExitStatus"
prototype: Error at https://xxx.eu.platform.sh/assets/js/ar.js:19:2247
__proto__: ƒ ()
[[FunctionLocation]]: ar.js:19
[[Scopes]]: Scopes[1]
stack: "Error↵ at https://xxx.eu.platform.sh/assets/js/ar.js:19:2247"
How to fix that ?
Thanks,
PS: This happen on latest versions of Chrome, Safari and Firefox
PS2: SO link: https://stackoverflow.com/questions/47269591/uncaught-exitstatus-with-ar-js-on-server
@T1l3 Are you using AR.js in a node environment?
I have the same problem when I use it in a node environment. Anyone able to solve it?
solved
| gharchive/issue | 2017-11-13T17:19:03 | 2025-04-01T06:44:36.121990 | {
"authors": [
"T1l3",
"ZoltanVeres",
"ericflyfly",
"nicolocarpignoli"
],
"repo": "jeromeetienne/AR.js",
"url": "https://github.com/jeromeetienne/AR.js/issues/245",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
397034698 | Update AR.js and aframe versions at README
Hi,
At the README.md, the code snippet of the basic example has outdated versions of AR.js and aframe. The latest versions are
Thanks!
absolutely. changed on master Readme. thanks!
| gharchive/issue | 2019-01-08T18:41:05 | 2025-04-01T06:44:36.123693 | {
"authors": [
"cernadasjuan",
"nicolocarpignoli"
],
"repo": "jeromeetienne/AR.js",
"url": "https://github.com/jeromeetienne/AR.js/issues/454",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
777587649 | Assertion 'ECMA_PROPERTY_GET_TYPE (property) == ECMA_PROPERTY_TYPE_NAMEDDATA || ECMA_PROPERTY_GET_TYPE (property) == ECMA_PROPERTY_TYPE_VIRTUAL' in ecma_is_property_writable
JerryScript revision
2faafa4
Build platform
Ubuntu 18.04.5 LTS(Linux 4.15.0-119-generic x86_64)
Build steps
```
./tools/build.py --clean --debug --compile-flag=-fsanitize=address
--compile-flag=-m32 --compile-flag=-fno-omit-frame-pointer
--compile-flag=-fno-common --compile-flag=-g --strip=off
--system-allocator=on --logging=on --linker-flag=-fuse-ld=gold
--error-messages=on --profile=es2015-subset --builddir=$PWD/build
```
Test case
```
var count = "0000";
var arr = Math.cos;
function func(val) {
if (count++ > 300)
return;
try { arr = new Float32Array([Symbol('a'), 1]);
} catch(e) {}
arr[i] = -1;
try { Object.setPrototypeOf(arr, new Int8Array(arr)); } catch(e) {}
}
for (var i=0; i<10; i ++) {
func(new Proxy(func, []));
}
```
Output
```
ICE: Assertion 'ECMA_PROPERTY_GET_TYPE (property) == ECMA_PROPERTY_TYPE_NAMEDDATA || ECMA_PROPERTY_GET_TYPE (property) == ECMA_PROPERTY_TYPE_VIRTUAL' failed at /root/jerryscript/jerry-core/ecma/base/ecma-helpers.c(ecma_is_property_writable):1035.
Error: ERR_FAILED_INTERNAL_ASSERTION
Aborted
```
Credits: Found by chong from OWL337.
Minimal testcase:
var arr = [1];
var ta = new Int8Array(arr);
Object.setPrototypeOf(arr, ta);
arr[1] = 2;
ta = new Int8Array(arr);
Object.setPrototypeOf(arr, ta);
| gharchive/issue | 2021-01-03T07:11:19 | 2025-04-01T06:44:36.142271 | {
"authors": [
"owl337",
"rerobika"
],
"repo": "jerryscript-project/jerryscript",
"url": "https://github.com/jerryscript-project/jerryscript/issues/4405",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
274350079 | Add Travis CI jobs for build testing several targets
Hitherto, code under the targets directory was not tested and so
its maintenance was sometimes speculative. This commit adds build
testing for several targets to prevent them from bit rotting.
Targets covered by this commit are: ESP8266, Mbed, Mbed OS 5,
NuttX, RIOT, Tizen RT, and Zephyr.
Some issues were revealed and fixed:
ESP8266: added missing include for uint32_t typedef.
Tizen RT: replaced missing str_to_uint with strtol.
JerryScript-DCO-1.0-Signed-off-by: Akos Kiss akiss@inf.u-szeged.hu
This PR does NOT attempt to deal with the above issues; they are left for the target maintainers for consideration.
@akosthekiss Please open an issue for them.
| gharchive/pull-request | 2017-11-16T00:09:10 | 2025-04-01T06:44:36.145553 | {
"authors": [
"LaszloLango",
"akosthekiss"
],
"repo": "jerryscript-project/jerryscript",
"url": "https://github.com/jerryscript-project/jerryscript/pull/2102",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
640402706 | Refactor Number.prototype methods toFixed, toExponential, toPrecision
JerryScript-DCO-1.0-Signed-off-by: Adam Szilagyi aszilagy@inf.u-szeged.hu
Updated the PR!
Also refactored toExponential and toPrecision, now every method uses one common method.
Note: I don't have a solution for the toFixed method where I can use ecma_number_to_decimal yet.
Did an update so I use ecma_number_to_decimal in all three methods.
| gharchive/pull-request | 2020-06-17T12:28:49 | 2025-04-01T06:44:36.148198 | {
"authors": [
"szilagyiadam"
],
"repo": "jerryscript-project/jerryscript",
"url": "https://github.com/jerryscript-project/jerryscript/pull/3911",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2057840813 | Highlight current month (underline) instead of printing it & week-starts-on-Monday setting
I find it more convenient to underline the current month of the year instead of printing it twice, since printing it twice gave the contribution graph an offset: the month labels were not aligned with the contribution squares.
It's a bit more accurate with the actual GitHub profile contribution graph as seen on the profile page.
Totally agree! The alignment is way better now, and underlining the current month just makes it look cleaner. Appreciate your contribution! 👍
| gharchive/pull-request | 2023-12-27T23:22:19 | 2025-04-01T06:44:36.149660 | {
"authors": [
"aighita",
"jervw"
],
"repo": "jervw/dono",
"url": "https://github.com/jervw/dono/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
148354001 | Support do-while control structure
do-while is a basic control structure, but Groovy does not support it.
@blackdrag Can we support it in the new parser?
We did not support it due to issues with ambiguity. Afaik "do" is a keyword in the old grammar, so it would be no breaker... I guess we can support it, if it does not cause problems
OK. I'll try to support it when all the GinA2 test cases pass ;-)
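For reference, the at-least-once semantics of do-while can be emulated in languages that lack the construct (illustrative sketch only):

```python
def do_while(body, cond):
    # Run body once, then keep running while cond() holds — the
    # at-least-once semantics of a do-while loop.
    while True:
        body()
        if not cond():
            break

counter = {"n": 0}
do_while(lambda: counter.update(n=counter["n"] + 1),
         lambda: counter["n"] < 3)
print(counter["n"])  # → 3
```

Even when the condition is false from the start, the body still executes exactly once, which is the behavior the new grammar would add.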
| gharchive/issue | 2016-04-14T12:59:49 | 2025-04-01T06:44:36.156910 | {
"authors": [
"blackdrag",
"danielsun1106"
],
"repo": "jespersm/groovy",
"url": "https://github.com/jespersm/groovy/issues/38",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
665365306 | Prioritize new files
There's gotta be a way to see date created metadata and sort by that when selecting random files.
Can't figure out an algorithm that would make a noticeable difference. Going with better sound file organization.
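For what it's worth, one simple way to bias random selection toward newer files would be rank-based weighting (a hypothetical sketch, not something the project adopted):

```python
import random

def pick_weighted_by_age(files_with_mtime):
    # Bias random selection toward newer files: sort by modification
    # time and weight each file by its rank, so the newest file gets
    # the largest weight while older files stay selectable.
    ordered = sorted(files_with_mtime, key=lambda f: f[1])
    weights = [i + 1 for i in range(len(ordered))]
    name, _ = random.choices(ordered, weights=weights, k=1)[0]
    return name
```

Whether the bias is noticeable depends on how many files there are, which matches the conclusion above that organization may matter more than the algorithm.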
| gharchive/issue | 2020-07-24T19:17:36 | 2025-04-01T06:44:36.169643 | {
"authors": [
"jessemillar"
],
"repo": "jessemillar/screm",
"url": "https://github.com/jessemillar/screm/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
163489362 | Clean up garden.py
Hey, I just became very interested in the language and I did some clean up to garden.py.
Use strip() to remove quotes
Remove all pesky PEP8 violations
Thanks, was waiting for someone to teach me the ways! ;)
yeah, there is conflict because I've cloned the repository before your changes.
so I just pull the changes again and make a new branch then I send a pull request :+1:
you could just close this pull request.
@jesserayadkins
| gharchive/pull-request | 2016-07-01T23:26:32 | 2025-04-01T06:44:36.172724 | {
"authors": [
"azbshiri",
"justjoeyuk"
],
"repo": "jesserayadkins/lily",
"url": "https://github.com/jesserayadkins/lily/pull/199",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
52597635 | Link colors flicker when updating cells in high frequency
Hi,
I set the colors of my text chat bubbles in this function:
override func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell like this:
if let textView = cell.textView {
textView.textColor = UIColor.whiteColor()
textView.linkTextAttributes = [NSForegroundColorAttributeName:myColor, NSUnderlineStyleAttributeName: NSUnderlineStyle.StyleThick.rawValue,
NSUnderlineColorAttributeName: mycolor];
}
However, in situations where I load multiple images for image cells and refresh the collectionview each time an image has finished downloading, I can observe a flickering of the links in the text view. It flickers between having the default text color and the color I specify for links.
I am not entirely sure if this has something to do with JSQMessagesViewController, but I didn't find anything using google, so I thought I'd ask here.
Thanks for your time,
Philipp
Has anyone found a fix yet?
I have the same issues, anyone solved this?
Hello everyone!
I'm sorry to inform the community that I'm officially deprecating this project. 😢 Please read my blog post for details:
http://www.jessesquires.com/blog/officially-deprecating-jsqmessagesviewcontroller/
Thus, I'm closing all issues and pull requests and making the necessary updates to formally deprecate the library. I'm sorry if this is unexpected or disappointing. Please know that this was an extremely difficult decision to make. I'd like to thank everyone here for contributing and making this project so great. It was a fun 4 years. 😊
Thanks for understanding,
— jsq
| gharchive/issue | 2014-12-21T15:10:15 | 2025-04-01T06:44:36.176626 | {
"authors": [
"brokeniceinteractive",
"eschanet",
"jessesquires",
"pflenker"
],
"repo": "jessesquires/JSQMessagesViewController",
"url": "https://github.com/jessesquires/JSQMessagesViewController/issues/690",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1415374263 | Add explicit assignments for the dead keys
There are no explicit assignments for the dead keys yet.
Checklist:
[ ] Group 1, level 1: U+0202, U+0301
[ ] Group 1, level 2: U+0300
[ ] Group 1, level 3: ...
(No explicit assignments are possible for group 2; see the limitation in the README)
That is due to the Microsoft Keyboard Layout Creator, though. kbdedit can already do it.
| gharchive/issue | 2022-10-19T17:54:01 | 2025-04-01T06:44:36.179096 | {
"authors": [
"jessestricker",
"markusC64"
],
"repo": "jessestricker/e1-tastatur",
"url": "https://github.com/jessestricker/e1-tastatur/issues/1",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1925500814 | Measurements that disappear when the joulescope is disconnected
Joulescope model
No response
UI version
1.0.31
Your idea
Hi Matt,
This has been written by my coworker. We absolutely did not discuss together about this before but it bothered me exactly the same. Just not enough to open an issue on my side.
https://forum.joulescope.com/t/measurements-that-disappear-when-the-joulescope-is-disconnected/659
How often does this behavior occur that you accidentally unplug your Joulescope?
Are you actually unplugging your Joulescope, or are you finding that plugging/unplugging other devices on USB is causing your Joulescope to disconnect? If so, can you provide more detail about your USB topology including the use of hubs, docks, & adapters?
What type of environment are you working in where this seems to occur: your desk, while traveling, in a lab?
Are you frequently unplugging other devices (like your device under test, debug adapters, etc)? Simply labeling the Joulescope USB connector may significantly help in this case.
For me about 1 to 5 times a day. It's not about unplugging the Joulescope but rather touching the USB with my mouse that disconnects the Joulescope for some milliseconds.
No related to other devices
Not happening at the desk where we have space, a second screen, and an additional mouse/keyboard. It happens at a smaller desk when we need to be near an oven or other specific instrument.
Unrelated
A popup "connection lost" would be annoying but could be a solution. Then either save data or drop them as you already do. Reconnection could happen only after user made his choice.
Does your idea concern a specific OS?
No response
For me it was just bumping on the USB cable (on computer side of JS110) for him it was bumping on JS220 USB-C side.
We just did some additional tests by playing with USB cables on both sides of both JS110 and JS220 and got no disconnection. Very strange.
Let's put this into standby for now. We will try to reproduce and add information if it happens again. If you don't hear from us within 1 month, consider closing the issue.
Hi @atsju - Is this still a problem that we should address?
| gharchive/issue | 2023-10-04T06:34:31 | 2025-04-01T06:44:36.213710 | {
"authors": [
"atsju",
"mliberty1"
],
"repo": "jetperch/pyjoulescope_ui",
"url": "https://github.com/jetperch/pyjoulescope_ui/issues/226",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
607652360 | Waveform view should clearly indicate sample drops and missing data.
Version: 0.8.11
Platform: All
Description
The Joulescope software is designed to reliably transfer data from the instrument to the host computer over USB. This communication method normally works well, but the host computer can "forget" to service the Joulescope instrument, either because of too much USB traffic or too high of CPU utilization. Sample drops are represented by NaNs (not a number). The Joulescope UI currently ignores these samples. The waveform view will draw a straight line from the last sample before the drop to the first sample after the drop.
Desired Behavior
The Joulescope UI should include a preference to add a visual indicator showing the drop. Possible display methods are:
A vertical rectangular shaded region
Different color and/or trace width.
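Detecting the regions to shade can be sketched as a scan for NaN runs (an illustrative sketch, not the UI's actual implementation):

```python
import math

def nan_regions(samples):
    # Return (start, end) index pairs covering runs of dropped (NaN)
    # samples — each pair marks one rectangle to shade in the waveform.
    regions, start = [], None
    for i, s in enumerate(samples):
        if math.isnan(s):
            if start is None:
                start = i
        elif start is not None:
            regions.append((start, i))
            start = None
    if start is not None:
        regions.append((start, len(samples)))
    return regions

nan = float("nan")
print(nan_regions([1.0, nan, nan, 2.0, 3.0, nan]))  # → [(1, 3), (5, 6)]
```

Each region would then be drawn as a vertical shaded rectangle spanning the dropped samples.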
1.0.0 now draws rectangles over regions with missing samples, like this:
| gharchive/issue | 2020-04-27T15:43:17 | 2025-04-01T06:44:36.216925 | {
"authors": [
"mliberty1"
],
"repo": "jetperch/pyjoulescope_ui",
"url": "https://github.com/jetperch/pyjoulescope_ui/issues/76",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
462345314 | challenges controller attempts to list secrets at cluster scope
Describe the bug:
Upgrading a namespace-scoped (i.e. uses Issuers, not ClusterIssuers) cert-manager from v0.7.2 to v0.8.1, the challenges controller now tries to access cluster-scoped secrets. We forbid this in RBAC policy, which catches the change in cert-manager's behavior (between v0.7.2 and v0.8.0-alpha.0 actually):
E0629 21:36:22.953032 1 reflector.go:131] vendor/k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Secret: secrets is forbidden: User "system:serviceaccount:cert-manager:cert-ma
nager" cannot list resource "secrets" in API group "" at the cluster scope
Does the cert-manager challenges controller have new/changed code that doesn't respect the namespace scope configuration? Is there a new configuration parameter we need now to keep it namespace-scoped?
Expected behaviour:
cert-manager is specifically configured to run only a few controllers and use only a specific namespace. It should not access anything at cluster scope in this configuration (repro example below).
Steps to reproduce the bug:
Here is the most minimal reproducible example with just the challenges controller (not very practical, but shows the problem)
apiVersion: apps/v1
kind: Deployment
metadata:
name: cert-manager
namespace: cert-manager
spec:
replicas: 1
selector:
matchLabels:
name: cert-manager
template:
metadata:
labels:
name: cert-manager
spec:
serviceAccountName: cert-manager
containers:
- name: cert-manager
image: quay.io/jetstack/cert-manager-controller:v0.8.0-alpha.0
args:
- --namespace=$(POD_NAMESPACE)
- --leader-election-namespace=$(POD_NAMESPACE)
- --cluster-resource-namespace=$(POD_NAMESPACE)
- --controllers=challenges
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
Such pods will spew the RBAC error listed above (for repro, you could use a SA with not permissions). Change the version back to v0.7.2 and the challenges controller will start.
Anything else we need to know?:
Typically, we only use issuers,certificates,orders,challenges controllers and do not use the webhook component. This should narrow the search space of possible regression points.
git log --oneline v0.7.2..v0.8.0-alpha.0 -- pkg/controller/acmechallenges
bbf4012e Handle expired challenge responses in acmechallenges controller
57075123 Merge pull request #1585 from munnerz/validate-caa-feature-gate
49f587c8 Set Reason field on ACME challenges during Present/CleanUp
9906c0d9 Add feature gate for ValidateCAA functionality and default it to off
af9bce72 Add 'webhook' DNS01 provider type
871ed428 Allow controller constructors to return errors
eaeefdf5 Update acmechallenges controller
/kind bug
Doing some bisect tests of the images cert-manager publishes between releases,
quay.io/jetstack/cert-manager-controller:f3910e0d # bad
quay.io/jetstack/cert-manager-controller:113c424c # good, looks to be v0.7.1
quay.io/jetstack/cert-manager-controller:076ecb4e # good
Seems like maybe the new DNS solver feature accesses cluster-level secrets https://github.com/jetstack/cert-manager/compare/076ecb4e...f3910e0d Also, I guess not a lot of users lock cert-manager down to a namespace, or if they do, they haven't updated yet.
Thanks for the in-depth description and analysis of this problem. You are correct I think in suggesting it's due to the way the new webhook providers are working.
Specifically you can see here where the problematic informer is instantiated.
We will need to work out how best to plumb through the namespace parameter to the DNS solver's Initialize function so this can be appropriately filtered.
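The scope difference is visible in the Kubernetes core v1 REST paths: an informer built without a namespace filter issues the cluster-scoped LIST, which RBAC confined to one namespace rejects. The helper below is an illustrative sketch, not client-go code:

```python
def secrets_list_url(namespace=None):
    # Kubernetes core v1 LIST paths for Secrets:
    # namespace-scoped vs cluster-scoped. An informer created without
    # a namespace filter uses the cluster-scoped path.
    if namespace:
        return f"/api/v1/namespaces/{namespace}/secrets"
    return "/api/v1/secrets"

print(secrets_list_url("cert-manager"))  # → /api/v1/namespaces/cert-manager/secrets
print(secrets_list_url())                # → /api/v1/secrets
```

Plumbing the namespace through to the informer factory makes it take the first path, which the namespace-restricted RBAC role allows.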
/milestone v0.9
/priority important-soon
/area acme
I've opened #1849 which should fix this 😄
Thanks for your work and the fix!
| gharchive/issue | 2019-06-29T21:53:12 | 2025-04-01T06:44:36.231355 | {
"authors": [
"dghubble",
"munnerz"
],
"repo": "jetstack/cert-manager",
"url": "https://github.com/jetstack/cert-manager/issues/1838",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
708315569 | Ability to disable webhook in the helm chart
Is your feature request related to a problem? Please describe.
In the past you could disable the webhook deployment with webhook.enabled, but it doesn't seem to be an option now.
I understand that it's not good to do so, but in some cases may be necessary, so it's always interesting to be able to do so.
Describe the solution you'd like
Restore that functionality.
/kind feature
If we add this to the Helm chart it means we would have to support it and handle cases where it goes wrong. The webhook became an essential part (especially the conversion one) of the cert-manager controller.
If you want to disable this I invite you to make a private copy of the chart, however we will not be able to provide you any guarantees or help with this disabled.
I'd like to know more about a case where this is necessary, we try to do our best to improve documentation on fixing webhook issues where needed.
/close
I see, thanks. I just found that the way to go is using the host network in my case, thanks!
I'm actually not sure about why the validation doesn't work. I use k3s with the ipsec backend on flannel, and I've activated the hostNetwork on webhook. I have the master on a remote network and one worker in my local lan. The worker is the one that has all the cert-manager related stuff.
I still can't create issuers or clusterissuers because of the timeout on the validating api:
Error from server (InternalError): error when creating "manifests/cluster-issuer.yml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post https://cert-manager-webhook.kube-system.svc:443/mutate?timeout=10s: context deadline exceeded
What I don't understand is that I launch a pod in the master and do some curl requests to https://cert-manager-webhook.kube-system.svc:443/ and it works. I even get the warning about the ssl certificate being self-signed. I don't know how to debug it further, could you point in some direction?
| gharchive/issue | 2020-09-24T16:34:20 | 2025-04-01T06:44:36.236186 | {
"authors": [
"alexppg",
"meyskens"
],
"repo": "jetstack/cert-manager",
"url": "https://github.com/jetstack/cert-manager/issues/3317",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
728878933 | Issue #356: reverse example: writing a test
OK - so this is a test for the reverse example, which is supposed to illustrate how to work with non-native ABI sizes.
The test successfully reverses everything on x86 from sse2 to AVX2.
Please have a look and see how much it looks like the example you are looking for.
I have questions:
I don't believe I can compile with EVE_NO_SIMD in a meaningful way since current_api is still going to be SSE2, correct?
https://github.com/jfalcou/eve/blob/e749a9ce6f029d362825ff16961e7dc59ff190cf/include/eve/arch/spec.hpp#L27
I don't know how to correctly disable this test for anything but the targets I want. For example, avx512 is still going to try to compile my test with the current macro, right?
How do I include parts of the test in the doc and not the entire code? There is a bit of boiler plate that is probably not necessary in the doc.
I don't believe I can compile with EVE_NO_SIMD in a meaningful way since current_api is still going to be SSE2, correct?
It's indeed an oversight. SPY provides a undefined_simd_ marker so I guess I need to use this whenever the macro is defined. I'm fixing that in turbo mode today
2. I don't know how to correctly disable this test for anything but the ones I want. Like - avx512 is still going to try to compile my test with current macro?
One way to do so is to wrap your test around a #if !defined(SPY_SIMD_IS_X86_AVX512).
3. How do I include parts of the test in the doc and not the entire code?
Currently, it's complicated. I am sending a message to Morgan from markdeep to see if we could have some support for that.
Meanwhile, I guess you can put your code in a .hpp then include it in the cpp. We'll then write somethign to wrap loose .hpp into markdeep.
I'm going over the code and filling small comments manwhile.
As we discussed on slack, the reverse for chars on sse should happen via shorts.
However, at the moment eve::convert(wide<uint16_t>, eve::as_<uint8_t>{}) generates horrendous code, see #348
I can fix that but I am confused about where that should happen, can you point me please?
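The chars-via-shorts idea can be sketched in scalar Python (illustrative only; the real implementation operates on SIMD lanes):

```python
def reverse_bytes_via_shorts(data):
    # Pair bytes into 16-bit lanes, reverse the lane order, then swap
    # the two bytes inside each lane — equivalent to reversing the
    # buffer byte-by-byte, but using the wider (natively supported) lanes.
    assert len(data) % 2 == 0
    lanes = [data[i:i + 2] for i in range(0, len(data), 2)]
    out = bytearray()
    for lane in reversed(lanes):
        out += lane[::-1]
    return bytes(out)

print(reverse_bytes_via_shorts(b"abcdef"))  # → b'fedcba'
```

The two-step decomposition is what lets SSE targets without a byte shuffle express the reverse through 16-bit operations.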
Updated the pr.
Fixing conversions in general is a bit more tricky, so I have a separate implementation here.
Fixed the test. However, I don't like that we only test the native size, when the whole point of the test is to show all emulation.
the types coverage is insufficient.
I adapted it as a regular test, i.e. variations of size are handled by the CI.
We can change to a model where developer tests are more manually driven, not a big deal.
@jfalcou - can we merge this?
| gharchive/pull-request | 2020-10-24T21:16:04 | 2025-04-01T06:44:36.250237 | {
"authors": [
"DenisYaroshevskiy",
"jfalcou"
],
"repo": "jfalcou/eve",
"url": "https://github.com/jfalcou/eve/pull/381",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1695632801 | frogbot on Azure DevOps Server 2020 broken since 2.6.2
Hello,
I had successfully run frogbot 2.6.1 on Azure DevOps Server 2020; however, since 2.6.2, with the same config, it's not working anymore.
I've also tried with the latest 2.8.0 and it's the same behavior.
Here are logs for a 2.6.1 run:
Frogbot downloaded successfully!
11:17:00 [Info] Frogbot version: 2.6.1
11:17:00 [Info] Checking whether the build-info extractors exist locally
11:17:00 [Info] Downloading maven extractor to path: /home/alm/.jfrog/dependencies/maven/2.39.5
11:17:00 [Info] Downloading build-info-extractor from https://<ARTIFACTORY_URL>/artifactory/jfroginsight-generic-external/artifactory/oss-release-local/org/jfrog/buildinfo/build-info-extractor-maven3/2.39.5/build-info-extractor-maven3-2.39.5-uber.jar
11:17:00 [Debug] Sending HTTP GET request to: https://<ARTIFACTORY_URL>/artifactory/jfroginsight-generic-external/artifactory/oss-release-local/org/jfrog/buildinfo/build-info-extractor-maven3/2.39.5/build-info-extractor-maven3-2.39.5-uber.jar
11:17:01 [Info] Downloading gradle extractor to path: /home/alm/.jfrog/dependencies/gradle/4.31.5
11:17:01 [Info] Downloading build-info-extractor from https://<ARTIFACTORY_URL>/artifactory/jfroginsight-generic-external/artifactory/oss-release-local/org/jfrog/buildinfo/build-info-extractor-gradle/4.31.5/build-info-extractor-gradle-4.31.5-uber.jar
11:17:01 [Debug] Sending HTTP GET request to: https://<ARTIFACTORY_URL>/artifactory/jfroginsight-generic-external/artifactory/oss-release-local/org/jfrog/buildinfo/build-info-extractor-gradle/4.31.5/build-info-extractor-gradle-4.31.5-uber.jar
11:17:02 [Debug] Reading config from file system. Looking for .frogbot/frogbot-config.yml
11:17:02 [Debug] frogbot-config.yml found in /opt/agt/_work/1/s/.frogbot/frogbot-config.yml
11:17:02 [Info] Running Frogbot "scan-and-fix-repos" command
11:17:02 [Debug] Created temp working directory: /tmp/jfrog.cli.temp.-1683191822-355528740
11:17:02 [Debug] Usage Report: Sending info...
11:17:02 [Debug] Downloading Global/CockpIT-back , branch: master to: /tmp/jfrog.cli.temp.-1683191822-355528740
11:17:02 [Debug] Download url: https://<AZURE_URL>/global/CockpIT/_apis/git/repositories/CockpIT-back/items/items?path=/&versionDescriptor[version]=master&$format=zip
11:17:02 [Debug] Sending HTTP GET request to: https://<ARTIFACTORY_URL>/artifactory/api/system/version
11:17:02 [Debug] Artifactory response: 200
11:17:02 [Debug] JFrog Artifactory version is: 7.55.3
11:17:02 [Debug] Sending HTTP POST request to: https://<ARTIFACTORY_URL>/artifactory/api/system/usage
11:17:02 [Debug] Usage Report: Usage info sent successfully. Artifactory response: 200
11:17:02 [Info] CockpIT-back repository downloaded successfully. Starting with repository extraction...
11:17:02 [Info] Extracted repository successfully
11:17:02 [Debug] Repository download completed
11:17:02 [Info] Auditing project: /tmp/jfrog.cli.temp.-1683191822-355528740
11:17:02 [Info] Detected: maven.
[...]
And here is a run for 2.6.2
11:17:44 [Info] Frogbot version: 2.6.2
11:17:44 [Info] Checking whether the build-info extractors exist locally
11:17:44 [Info] Downloading maven extractor to path: /home/alm/.jfrog/dependencies/maven/2.39.5
11:17:44 [Info] Downloading build-info-extractor from https://<ARTIFACTORY_URL>/artifactory/jfroginsight-generic-external/artifactory/oss-release-local/org/jfrog/buildinfo/build-info-extractor-maven3/2.39.5/build-info-extractor-maven3-2.39.5-uber.jar
11:17:44 [Debug] Sending HTTP GET request to: https://<ARTIFACTORY_URL>/artifactory/jfroginsight-generic-external/artifactory/oss-release-local/org/jfrog/buildinfo/build-info-extractor-maven3/2.39.5/build-info-extractor-maven3-2.39.5-uber.jar
11:17:44 [Info] Downloading gradle extractor to path: /home/alm/.jfrog/dependencies/gradle/4.31.5
11:17:44 [Info] Downloading build-info-extractor from https://<ARTIFACTORY_URL>/artifactory/jfroginsight-generic-external/artifactory/oss-release-local/org/jfrog/buildinfo/build-info-extractor-gradle/4.31.5/build-info-extractor-gradle-4.31.5-uber.jar
11:17:44 [Debug] Sending HTTP GET request to: https://<ARTIFACTORY_URL>/artifactory/jfroginsight-generic-external/artifactory/oss-release-local/org/jfrog/buildinfo/build-info-extractor-gradle/4.31.5/build-info-extractor-gradle-4.31.5-uber.jar
11:17:44 [Debug] Reading config from file system. Looking for .frogbot/frogbot-config.yml
11:17:44 [Debug] frogbot-config.yml found in /opt/agt/_work/1/s/.frogbot/frogbot-config.yml
11:17:44 [Info] Running Frogbot "scan-and-fix-repos" command
11:17:44 [Debug] Usage Report: Sending info...
11:17:44 [Debug] Sending HTTP GET request to: https://<ARTIFACTORY_URL>/artifactory/api/system/version
11:17:44 [Debug] Artifactory response: 200
11:17:44 [Debug] JFrog Artifactory version is: 7.55.3
11:17:44 [Debug] Sending HTTP POST request to: https://<ARTIFACTORY_URL>/artifactory/api/system/usage
11:17:44 [Debug] Usage Report: Usage info sent successfully. Artifactory response: 200
11:17:44 [Error] repository CockpIT-back returned the following error:
TF200016: The following project does not exist: CockpIT-back. Verify that the name of the project is correct and that the project exists on the specified Azure DevOps Server.
repository CockpIT-front returned the following error:
TF200016: The following project does not exist: CockpIT-front. Verify that the name of the project is correct and that the project exists on the specified Azure DevOps Server.
##[error]Bash exited with code '1'.
Let me know if you need more details.
Hi @anael-l, thank you for using Frogbot and bringing this issue to our attention.
Could you please provide us with more details, such as your frogbot-config.yml and azure-pipelines.yml files? This would greatly assist us in our investigation. Thank you very much.
No problem, I will just redact some info.
For the pipeline, as I use the JFrog Azure DevOps extension and jfrog cli curl, I can't download and execute the bash file at the same time. That's why it is in two steps
azure-pipelines.yml
jobs:
  - job:
    timeoutInMinutes: 360
    displayName: "Frogbot Scan And Fix Repos"
    steps:
      - checkout: self
        persistCredentials: true
      - task: JFrogToolsInstaller@1
        displayName: Install Jfrog CLI
        inputs:
          artifactoryConnection: 'artifactory-prd-$(System.TeamProject)'
          cliInstallationRepo: 'jfrog-cli-remote'
          installExtractors: true
          extractorsInstallationRepo: 'extractors'
      - task: JfrogCliV2@1
        inputs:
          jfrogPlatformConnection: 'jfrog-prd-$(System.TeamProject)'
          command: 'jf rt curl -vLJO frogbot-generic-external/v2/2.6.2/getFrogbot.sh'
      - task: CmdLine@2
        displayName: 'Run Frogbot'
        env:
          # [Mandatory]
          # Azure Repos personal access token with Code -> Read & Write permissions
          JF_GIT_TOKEN: $(System.AccessToken)
          # [Mandatory]
          # JFrog platform URL (This functionality requires version 3.29.0 or above of Xray)
          JF_URL: https://<ARTIFACTORY_URL>
          # [Mandatory if JF_USER and JF_PASSWORD are not provided]
          # JFrog access token with 'read' permissions for Xray
          JF_USER: $(artifactory_username)
          JF_PASSWORD: $(artifactory_password)
          #JF_ACCESS_TOKEN: $(artifactory_password)
          # [Mandatory]
          # The name of the organization that owns this project
          JF_GIT_OWNER: "Global"
          JF_RELEASES_REPO: "frogbot-generic-external"
          # Predefined Azure Pipelines variables. There's no need to set them.
          JF_GIT_PROJECT: $(System.TeamProject)
          JF_GIT_API_ENDPOINT: $(System.CollectionUri)
          JF_GIT_PROVIDER: 'azureRepos'
          JFROG_CLI_LOG_LEVEL: "DEBUG"
          JAVA_HOME: $(JAVA_HOME_17_X64)
          MAVEN_OPTS: "-Dmaven.gitcommitid.skip=true -DskipTests"
        inputs:
          script: |
            sh getFrogbot.sh
            ./frogbot scan-and-fix-repos
frogbot-config.yml
- params:
    git:
      repoName: CockpIT-back
      branches:
        - master
    scan:
      projects:
    jfrogPlatform:
- params:
    git:
      repoName: CockpIT-front
      branches:
        - master
    scan:
      projects:
        - installCommand: "npm ci"
    jfrogPlatform:
I can, but it's the exact same as 2.6.2
Frogbot downloaded successfully!
11:16:02 [Info] Frogbot version: 2.8.0
11:16:02 [Info] Checking whether the build-info extractors exist locally
11:16:02 [Info] Downloading maven extractor to path: /home/alm/.jfrog/dependencies/maven/2.39.9
11:16:02 [Info] Downloading build-info-extractor from https://<ARTIFACTORY_URL>/artifactory/jfroginsight-generic-external/artifactory/oss-release-local/org/jfrog/buildinfo/build-info-extractor-maven3/2.39.9/build-info-extractor-maven3-2.39.9-uber.jar
11:16:02 [Debug] Sending HTTP GET request to: https://<ARTIFACTORY_URL>/artifactory/jfroginsight-generic-external/artifactory/oss-release-local/org/jfrog/buildinfo/build-info-extractor-maven3/2.39.9/build-info-extractor-maven3-2.39.9-uber.jar
11:16:03 [Debug] Reading config from file system. Looking for .frogbot/frogbot-config.yml
11:16:03 [Debug] frogbot-config.yml found in /opt/agt/_work/1/s/.frogbot/frogbot-config.yml
11:16:03 [Info] Running Frogbot "scan-and-fix-repos" command
11:16:03 [Debug] Usage Report: Sending info...
11:16:03 [Debug] Sending HTTP GET request to: https://<ARTIFACTORY_URL>/artifactory/api/system/version
11:16:03 [Debug] Artifactory response: 200
11:16:03 [Debug] JFrog Artifactory version is: 7.55.3
11:16:03 [Debug] Sending HTTP POST request to: https://<ARTIFACTORY_URL>/artifactory/api/system/usage
11:16:03 [Debug] Usage Report: Usage info sent successfully. Artifactory response: 200
11:16:03 [Error] repository CockpIT-back returned the following error:
TF200016: The following project does not exist: CockpIT-back. Verify that the name of the project is correct and that the project exists on the specified Azure DevOps Server.
repository CockpIT-front returned the following error:
TF200016: The following project does not exist: CockpIT-front. Verify that the name of the project is correct and that the project exists on the specified Azure DevOps Server.
Dear @anael-l,
We would like to express our gratitude for bringing the bug to our attention. We have identified a discrepancy between the project and repository names in Azure DevOps that caused the issue. We are pleased to inform you that we have now released a solution to the problem, and we would appreciate any feedback you may have.
Thank you again for your valuable contribution.
Wow that was fast !
Indeed, this fixes my issue.
Thank you very much for your reactivity !
| gharchive/issue | 2023-05-04T09:33:04 | 2025-04-01T06:44:36.281700 | {
"authors": [
"EyalDelarea",
"anael-l",
"omerzi"
],
"repo": "jfrog/frogbot",
"url": "https://github.com/jfrog/frogbot/issues/327",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
382107254 | Panic when using --spec-vars - bug fix
Fixes #253.
👍 @RobiNino
| gharchive/pull-request | 2018-11-19T08:36:53 | 2025-04-01T06:44:36.283135 | {
"authors": [
"RobiNino",
"eyalbe4"
],
"repo": "jfrog/jfrog-cli-go",
"url": "https://github.com/jfrog/jfrog-cli-go/pull/275",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2291551260 | [website] Amir’s email from May 9th regarding the configuration he used to cause an internal server error.
TODO: pull the graphs from that email and stick them here
This is the config Amir used:
{
"name": "Subprefix Hijack",
"desc": "Subprefix hijack with custom announcements",
"scenario": null,
"announcements": [
{
"prefix": "1.2.0.0/16",
"as_path": [
777
],
"seed_asn": 777
},
{
"prefix": "1.2.0.0/24",
"as_path": [
666
],
"seed_asn": 666
}
],
"roas": [
{
"prefix": "1.2.0.0/16",
"origin": 777
}
],
"attacker_asns": [
666
],
"victim_asns": [
1
],
"graph": {
"cp_links": [
[
999,
2
],
[
2,
1
],
[
3,
666
],
[
4,
999
],
[
4,
3
],
[
2,
666
],
[
3,
5
]
],
"peer_links": [
[
2,
3
]
]
},
"asn_policy_map": {
"1": "aspa",
"2": "aspa",
"3": "aspa",
"4": "aspa",
"5": "aspa"
},
"propagation_rounds": 1
}
This causes a KeyError in the SimulationEngine because there is an announcement for AS 777, which does not exist in the graph. We should catch this on the backend and return an error to the client before a simulation can be run
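A minimal sketch of the pre-flight check described above (pure Python; the function name and config shape are illustrative, not BGPy's actual API): collect every ASN that appears in the graph's links and reject any announcement whose seed ASN is missing.

```python
def validate_config(config):
    """Return a list of error messages; an empty list means the config passes."""
    graph = config.get("graph", {})
    asns_in_graph = set()
    # Every ASN in the graph appears in at least one customer-provider
    # or peer link, so the link lists define the set of known ASNs.
    for link_type in ("cp_links", "peer_links"):
        for a, b in graph.get(link_type, []):
            asns_in_graph.update((a, b))

    errors = []
    for ann in config.get("announcements", []):
        if ann["seed_asn"] not in asns_in_graph:
            errors.append(
                "announcement for AS %d refers to an ASN that is not in the graph"
                % ann["seed_asn"]
            )
    return errors


# Miniature of the failing config above: AS 777 seeds an announcement
# but never appears in cp_links or peer_links.
config = {
    "announcements": [
        {"prefix": "1.2.0.0/16", "as_path": [777], "seed_asn": 777},
        {"prefix": "1.2.0.0/24", "as_path": [666], "seed_asn": 666},
    ],
    "graph": {
        "cp_links": [[999, 2], [2, 1], [3, 666]],
        "peer_links": [[2, 3]],
    },
}
print(validate_config(config))  # exactly one error, naming AS 777
```

Running a check like this before constructing the SimulationEngine would let the backend return a clear error to the client instead of raising a KeyError mid-simulation.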
Agreed, although how is the scenario null in this case? Is that expected here?
Yeah, that just means it's using a custom scenario (i.e., announcements and ROAs)
Agreed. Custom scenario is now "CustomScenario" in the config and the website will not let you create an announcement for an AS that is not on the graph
clicking the edit announcement and then announcing AS causes the box to move for some weird reason
| gharchive/issue | 2024-05-12T23:33:02 | 2025-04-01T06:44:36.299516 | {
"authors": [
"Arvonit",
"jfuruness"
],
"repo": "jfuruness/bgpy_pkg",
"url": "https://github.com/jfuruness/bgpy_pkg/issues/130",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1450749143 | hrnet 2d 数据
Could you please provide the 17-keypoint HRNet data on hm36 (that is, the version before it was trimmed down to your 16-keypoint files), or let me know where it can be downloaded?
Hi~ this one: https://github.com/Nicholasli1995/EvoSkeleton/blob/master/docs/TRAINING.md
Thank you very much!
| gharchive/issue | 2022-11-16T03:04:33 | 2025-04-01T06:44:36.301280 | {
"authors": [
"Garfield-kh",
"qiqiApink"
],
"repo": "jfzhang95/PoseAug",
"url": "https://github.com/jfzhang95/PoseAug/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
56765291 | time-1.5 compatibility
The time package now exposes its own TimeLocale.
Nice fix!
The upper limit on time needs to be removed in the cabal file though.
@nikomi as far as I can tell there are no bounds on time (which is itself a bit problematic). Are you seeing something I'm not?
In hslogger.cabal of version 1.2.6 on Hackage I see the following:
if flag(small_base)
  build-depends: base >= 4 && < 5, containers, directory, process,
                 time < 1.5, old-locale
else
I believe this < 1.5 needs to be removed when applying this pull request. The else part describes dependencies for base < 3 - I don't think that path needs to be changed.
Since the pull request commit does not seem to mention an update to hslogger.cabal I thought I'd better mention it.
@nikomi ahh, right. Unfortunately the latest Hackage release hasn't been pushed to the repository so there's really no way to fix this at the moment.
No prob - it should just be remembered when the pull request is merged.
| gharchive/pull-request | 2015-02-06T03:23:22 | 2025-04-01T06:44:36.344754 | {
"authors": [
"bgamari",
"nikomi"
],
"repo": "jgoerzen/hslogger",
"url": "https://github.com/jgoerzen/hslogger/pull/29",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
Recover a web file after clearing cookies
Good morning,
I was working on some important diagrams and saved the URL in my browser as a bookmark, but I recently cleared my history and cookies for something else and the file no longer opens.
I'm asking for your help in recovering it, as it's very important, in case it's in your database or something like that.
The 3 URLs to recover are:
https://app.diagrams.net/#LDiagrama disponibilidad.drawio#{"pageId"%3A"k4bv-V6Oib1x1KGIiG4g"}
https://app.diagrams.net/#LCyberArk-Prod
https://app.diagrams.net/#LCyberark - Test.drawio
The #L part means it's stored in your browser. When you cleared everything out you deleted it from the storage in your browser. Can I ask why you selected "browser" as the storage?
It was because of the urgency that I chose that method, planning to change it in the future, but then this incident happened. Is there any way to recover them?
| gharchive/issue | 2023-09-26T20:17:48 | 2025-04-01T06:44:36.353599 | {
"authors": [
"Ricardo2e",
"davidjgraph"
],
"repo": "jgraph/drawio",
"url": "https://github.com/jgraph/drawio/issues/3886",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
How to avoid the edge passing through the vertex
The edge can pass through the vertex; how can I avoid it?
Move the edge toBack.
| gharchive/issue | 2019-08-06T07:59:06 | 2025-04-01T06:44:36.354833 | {
"authors": [
"davidjgraph",
"paper-play"
],
"repo": "jgraph/mxgraph",
"url": "https://github.com/jgraph/mxgraph/issues/353",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1895649450 | BBox adds 1 to height and width
Not sure why, I forget.
>>> BBox(top=50, left=40, bottom=150, right=180).width
141
>>> BBox(top=50, left=40, bottom=150, right=180).height
101
That's what PIL's bounding boxes do, we'll leave it as is.
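The behaviour is easy to reproduce with a toy dataclass (illustrative only, not the project's actual BBox class): if both edge coordinates are treated as inside the box, the span is right - left + 1 pixels, which is where the extra 1 comes from.

```python
from dataclasses import dataclass


@dataclass
class BBox:
    """Illustrative re-creation of the behaviour shown above: both edge
    coordinates count as inside the box, hence the extra pixel."""
    top: int
    left: int
    bottom: int
    right: int

    @property
    def width(self):
        # Inclusive span: pixels left..right are all part of the box.
        return self.right - self.left + 1

    @property
    def height(self):
        # Inclusive span: pixels top..bottom are all part of the box.
        return self.bottom - self.top + 1


box = BBox(top=50, left=40, bottom=150, right=180)
print(box.width, box.height)  # 141 101
```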
| gharchive/issue | 2023-09-14T04:46:21 | 2025-04-01T06:44:36.451208 | {
"authors": [
"jhanarato"
],
"repo": "jhanarato/uposatha-inky",
"url": "https://github.com/jhanarato/uposatha-inky/issues/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Euler angles -> DCM -> Euler angles picks up minus sign
If I take three Euler angles, compute a DCM, and then get the Euler angles back, the result is the negation of the starting angle set.
There are two parts to this issue. The first was that many of the Euler angle -> DCM calculations were incorrect. These rotations need to be passive, not active. Furthermore, with the proper Euler angles (form i-j-i), the result is correct but Euler angles from DCM will return angle % pi for angles 1 and 3, and -angle for angle 2. The resulting rotation is the same, though. I'm adding a method to swap proper Euler angles to handle this. d687e30
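The sign flip is easy to demonstrate for a single axis (a pure-Python sketch, independent of this package's code): the passive DCM is the transpose of the active rotation matrix, and for one axis that transpose equals the active rotation through the negated angle, so reading a DCM with the wrong convention hands back negated angles.

```python
import math


def rz_active(theta):
    """Active rotation matrix about z: rotates a vector by theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]


def transpose(m):
    return [list(row) for row in zip(*m)]


theta = 0.3
passive = transpose(rz_active(theta))  # passive DCM = transpose of the active matrix
negated = rz_active(-theta)            # active rotation through -theta

# Element by element the two matrices agree, so interpreting a passive
# DCM as if it were active recovers -theta: the round-trip sign bug.
assert all(
    math.isclose(passive[i][j], negated[i][j], abs_tol=1e-12)
    for i in range(3) for j in range(3)
)
print("passive DCM equals active DCM(-theta)")
```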
| gharchive/issue | 2023-05-15T15:52:33 | 2025-04-01T06:44:36.452849 | {
"authors": [
"jhand1993"
],
"repo": "jhand1993/attitude",
"url": "https://github.com/jhand1993/attitude/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
.gitignore file is created without the leading dot
When creating an app with create-mf-app, the .gitignore file is created without the leading dot (as gitignore).
This causes my project to now show 10K file changes.
I have no problem renaming the file myself, but it seems like a little bug.
Which configuration was this? And which operating system? Were there any error messages during the run?
Windows 10.
This is the configuration
Tried again and the same thing happened
Can you try again with version 1.0.14?
Same
Figured it out, turns out it wasn't Windows specific at all. Should be good now with 1.0.15.
Very much appreciated, Jack. Wonderful tool; I will keep using it for future projects.
| gharchive/issue | 2021-12-02T02:57:32 | 2025-04-01T06:44:36.489745 | {
"authors": [
"AndresBetancourt-Dev",
"jherr"
],
"repo": "jherr/create-mf-app",
"url": "https://github.com/jherr/create-mf-app/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1228637816 | Angular: health page
After generating Angular as the client, it would be nice to have the possibility to generate a health page:
entry in menu
health html
health component
health service
health tests
Similar to:
@pascalgrimaud I don't understand why on JHLite, I have:
whereas on Generator:
(I have property components). An idea?
@qmonmert : I edit your pull request to keep this opened.
Can you add the front part to generate the health check plz? Then, we can close this one after that :)
@pascalgrimaud https://opencollective.com/generator-jhipster/expenses/82559 :)
@qmonmert : approved
| gharchive/issue | 2022-05-07T14:06:59 | 2025-04-01T06:44:36.580186 | {
"authors": [
"pascalgrimaud",
"qmonmert"
],
"repo": "jhipster/jhipster-lite",
"url": "https://github.com/jhipster/jhipster-lite/issues/1652",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
129350045 | Support of several JDL files
From the last meetup: it would be nice to have and use more than one JDL file when generating entities.
Indeed, that would be a nice feature, so you can just create a new file when you need to add new entities to your application. Relationships to existing entities should not require including them again, just like relationships to the User entity do not require to include the User entity.
BR
This could "break" the change introduced by #135.
One would have to concatenate the JDL files to see where an error is...
| gharchive/issue | 2016-01-28T06:10:46 | 2025-04-01T06:44:36.581846 | {
"authors": [
"MathieuAA",
"david-vasquez"
],
"repo": "jhipster/jhipster-uml",
"url": "https://github.com/jhipster/jhipster-uml/issues/125",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
55712282 | Update riotify
Make it work with the current version of Riot (2.0.5)
Thank you! :+1:
| gharchive/pull-request | 2015-01-28T04:15:34 | 2025-04-01T06:44:36.589614 | {
"authors": [
"jhthorsen",
"mathieulegrand"
],
"repo": "jhthorsen/riotify",
"url": "https://github.com/jhthorsen/riotify/pull/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
956830284 | Image traceability and parallel builds
About
Updates idc-buildkit CI to build images in parallel (using a matrix strategy). Adds labels to each image to support traceability.
creates a new setup job to capture the commit hashes and image tag used in subsequent jobs.
build job is updated to identify each image by name; each image is labeled and pushed
Remaining jobs and steps are left alone, except to refer to the constants established in the setup job.
Parallel Image Builds
Uses the matrix strategy to build and push docker images in parallel, invoking ./gradlew ${{ matrix.image-name }}:push for each image specified in the matrix. Each build occurs in isolation in its own runner, and each runner starts with exactly the same state (i.e. the provided idc-isle-buildkit commit hash).
The specified images are:
- activemq
- alpaca
- cantaloupe
- crayfish
- crayfits
- drupal
- drupal-dev
- fits
- homarus
- houdini
- hypercube
- idp
- ldap
- mariadb
- solr
Only the ultimate images, like drupal or crayfish, are specified, not intermediate images like nginx. The Gradle plugin ensures that the intermediate images are built first, before pushing the ultimate image to the registry.
Caching
Mindful that we have seen issues relating to caching, I cannot see how parallel builds increase the risk of caching related issues. Nevertheless, we have seen inexplicable behavior with respect to buildkit caching.
Builds are isolated in their own runner, and each build begins with the same state. Each build uses the same commit hash of idc-isle-dc, and no GitHub cache artifacts are being shared between the runners.
FROM statements in isle-buildkit Dockerfiles refer to the local repository, e.g. FROM local/base:latest
Images named for the local repository (e.g. local/nginx) are never pushed to the registry by that name. Therefore, in order for docker build to resolve any image in the local repository, it must be built locally first.
The gradle build plugin is responsible for resolving and building the intermediate images (e.g. resolving and building local/nginx:latest when the crayfish image is built)
The gradle plugin will build each image using the publicly tagged latest version as a cache. For example, when building the nginx image, the gradle plugin will specify --cache-from ghcr.io/jhu-sheridan-libraries/idc-isle-dc/nginx:latest
Example
Given that images are being built in parallel using the matrix strategy, let's suppose a change is made to the nginx image, which is an intermediate image used for the crayfish, crayfits, drupal, homarus, houdini, and hypercube images. The concern is: is there a race condition that exists by which one of the downstream images (crayfish, crayfits, drupal, etc...) will receive an outdated nginx image that does not contain the change?
The answer is "no". The rationale is that each build, executing in parallel, will be forced to build local/nginx:latest, because that image is not published anywhere. As long as each parallel build is starting with the same isle-buildkit commit hash, each image (crayfish, crayfits, drupal, etc...) will include the change introduced to the nginx image. Layers that are cached by ghcr.io/jhu-sheridan-libraries/idc-isle-dc/nginx:latest will not be built.
Related to: https://github.com/jhu-idc/iDC-general/issues/398
Question @emetsger , in the following:
each build, executing in parallel, will be forced to build local/nginx:latest, because that image is not published anywhere. As long as each parallel build is starting with the same isle-buildkit commit hash, each image (crayfish, crayfits, drupal, etc...) will include the change introduced to the nginx image. Layers that are cached by ghcr.io/jhu-sheridan-libraries/idc-isle-dc/nginx:latest will not be built.
Let's suppose we made a change strictly to the nginx image, and not any of the leaf images. Who would push a new nginx:latest reflecting those changes? Is the answer one of the following:
All child image builds (e.g. homarus, houdini, drupal, etc) would push nginx:latest... but that is not a problem, because all will be byte-for-byte identical
All child image builds (e.g. homarus, houdini, drupal, etc) would push nginx:latest... and that is a problem, because these images will be subtly different (e.g. dates) and their hashes won't match. It's a race condition!
No child image build will result in pushing to nginx:latest. That would need to be handled some other way.
Let's suppose we made a change strictly to the nginx image, and not any of the leaf images. Who would push a new nginx:latest reflecting those changes? Is the answer one of the following:
The layers will get pushed by a leaf image, but nginx:latest won't be tagged.
All child image builds (e.g. homarus, houdini, drupal, etc) would push nginx:latest... but that is not a problem, because all will be byte-for-byte identical, and only one will have any real effect (the others are essentially reduced to noops). To the outside world, it looks just like before.
Yes, that's one possibility. A leaf image builds the new layers from nginx and pushes them.
All child image builds (e.g. homarus, houdini, drupal, etc) would push nginx:latest... and that is a problem, because these images will be subtly different (e.g. dates) and their hashes won't match. It's a race condition!
The layer content will be the same, so I don't think there is a race condition.
No child image build will result in pushing to nginx:latest. That would need to be handled some other way.
nginx:latest won't ever be tagged unless we tag it manually, but I don't think that's really a problem.
This is actually simpler than I thought it would be. It looks like layer caching might be unaffected in the end, but it's really hard to reason about. Let's go forward with this approach
Yeah, nothing is consulting the layer chain tagged as ghcr.io/jhu-sheridan-libraries/idc-isle-dc/nginx:latest. The leaf images consult ghcr.io/jhu-sheridan-libraries/idc-isle-dc/<image>:latest as a cache, and each leaf image is required to build the layers locally or use them from their cache.
| gharchive/pull-request | 2021-07-30T15:06:12 | 2025-04-01T06:44:36.606457 | {
"authors": [
"birkland",
"emetsger"
],
"repo": "jhu-idc/idc-isle-buildkit",
"url": "https://github.com/jhu-idc/idc-isle-buildkit/pull/61",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Android lint check issue
When lint checks run, this issue is reported: TrustAllX509TrustManager.
The class org.jsoup.helper.HttpConnection$Response$2 triggers the lint warning.
The exact message is below:
Insecure TLS/SSL trust manager (TrustAllX509TrustManager : Warning) :
checkClientTrusted is empty, which could cause insecure network traffic due to trusting arbitrary TLS/SSL certificates presented by peers
Insecure TLS/SSL trust manager (TrustAllX509TrustManager : Warning) :
checkServerTrusted is empty, which could cause insecure network traffic due to trusting arbitrary TLS/SSL certificates presented by peers
Insecure TLS/SSL trust manager (TrustAllX509TrustManager : Warning) :
checkClientTrusted is empty, which could cause insecure network traffic due to trusting arbitrary TLS/SSL certificates presented by peers
Insecure TLS/SSL trust manager (TrustAllX509TrustManager : Warning) :
checkServerTrusted is empty, which could cause insecure network traffic due to trusting arbitrary TLS/SSL certificates presented by peers
Insecure TLS/SSL trust manager (TrustAllX509TrustManager : Warning) :
checkClientTrusted is empty, which could cause insecure network traffic due to trusting arbitrary TLS/SSL certificates presented by peers
Insecure TLS/SSL trust manager (TrustAllX509TrustManager : Warning) :
checkServerTrusted is empty, which could cause insecure network traffic due to trusting arbitrary TLS/SSL certificates presented by peers
I really want this issue to be fixed.
thank you very much
Dupe of #912. Please use a current version.
| gharchive/issue | 2020-03-12T03:51:32 | 2025-04-01T06:44:36.615679 | {
"authors": [
"jhy",
"lipanpan1030"
],
"repo": "jhy/jsoup",
"url": "https://github.com/jhy/jsoup/issues/1338",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1307381846 | Update README.md
Update Performance
Add classification accuracy data of VGG-16, MobileNet V3, ResNet50, DarkNet.
| gharchive/pull-request | 2022-07-18T04:18:55 | 2025-04-01T06:44:36.647150 | {
"authors": [
"Sunrise723",
"jiansowa"
],
"repo": "jiansowa/powervr_paddle_model",
"url": "https://github.com/jiansowa/powervr_paddle_model/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
450882742 | Trying import on app.module.ts error
Hi, I have a problem: when I try to import the plugin in app.module.ts it crashes. When I call it from my component.ts (I am using NativeScript + Angular) it works fine, but only on a Yezz 5E; I have tested on devices like the Samsung S7, J7 Prime and S9, and it crashes whenever I call it.
The error is:
System.err: Error: Unexpected value 'Mediafilepicker' imported by the module 'AppModule'. Please add a @NgModule annotation. System.err: File: "file:///data/data/com.qrgroup.letsrelo/files/app/tns_modules/@angular/compiler/bundles/compiler.umd.js, line: 18118, column: 16
my app.module.ts
import { Mediafilepicker } from 'nativescript-mediafilepicker';

@NgModule({
  bootstrap: [
    AppComponent
  ],
  imports: [
    NativeScriptModule,
    AppRoutingModule,
    NativeScriptRouterModule,
    NativeScriptRouterModule.forRoot(routes),
    TNSFontIconModule.forRoot({
      'flaticon': './assets/css/flaticon.css',
    }),
    NativeScriptUIListViewModule,
    TNSCheckBoxModule,
    Mediafilepicker,
  ],
  declarations: [.....],
  entryComponents: [......],
  providers: [.....],
  schemas: [
    NO_ERRORS_SCHEMA
  ]
})
Curiously, the error does not appear on my Yezz 5E when I delete everything related to Mediafilepicker from app.module.ts, but doing it this way it crashes on all the devices mentioned previously.
Sorry for my bad English; thanks in advance for your help with this bug/problem.
Hi @melaniol, try not adding the import for this plugin in your app.module.ts
Thanks for your quick response. Yes, I am not adding it in app.module.ts and it works on my phone (Yezz 5E), but when I run it on the S7 and J7 Prime it crashes with this same error. I don't understand it, because I am not adding it in app.module.ts. I removed the platform and tried again and again, and nothing :(
How are you using the plugin? You should just import it directly in your components.
import { Mediafilepicker, ImagePickerOptions } from 'nativescript-mediafilepicker';
`public selectFile(){
console.log("tap on function");
let options: ImagePickerOptions = {
android: {
isCaptureMood: false, // if true then camera will open directly.
isNeedCamera: true,
maxNumberFiles: 10,
isNeedFolderList: true
}, ios: {
isCaptureMood: false, // if true then camera will open directly.
maxNumberFiles: 10
}
};
let mediafilepicker = new Mediafilepicker();
mediafilepicker.openImagePicker(options);
mediafilepicker.on("getFiles", function (res) {
let results = res.object.get('results');
console.dir(results);
});
mediafilepicker.on("error", function (res) {
let msg = res.object.get('msg');
console.log(msg);
});
mediafilepicker.on("cancel", function (res) {
let msg = res.object.get('msg');
console.log(msg);
});
}
`
and just one tap on view.
I am following the documentation guide. I don't understand why this error appears if I am not adding it in app.module.ts.
It seems you got confused about how to implement the plugin, then...
But here's a way to test whether the plugin works.
In your component, e.g.
home.component.ts
... // other imports
import { Mediafilepicker, FilePickerOptions } from 'nativescript-mediafilepicker';
@Component({ ... })
export class HomeComponent {
mediaFilePicker = new Mediafilepicker();
onButtonClick() {
let fileExtensions = [];
if (app.ios) {
fileExtensions = ["kUTTypePDF"];
} else {
fileExtensions = ["pdf"];
}
const options: FilePickerOptions = {
android: {
extensions: fileExtensions,
maxNumberFiles: 1
},
ios: {
extensions: fileExtensions,
multipleSelection: false
}
};
this.mediaFilePicker.openFilePicker(options);
this.mediaFilePicker.on("getFiles", (res) => {
const results = res.object.get("results");
console.dir(results);
});
this.mediaFilePicker.on("error", (res) => {
const msg = res.object.get("msg");
console.dir(msg);
});
this.mediaFilePicker.on("cancel", (res) => {
const msg = res.object.get("msg");
console.dir(msg);
});
}
}
Hope that helps you.
In your case this is how you implement it on a component
import { Component, OnInit } from "@angular/core";
import { Mediafilepicker, ImagePickerOptions } from "nativescript-mediafilepicker";
@Component({
selector: "app-name",
templateUrl: "./name.component.html",
styleUrls: ["./name.component.scss"]
})
export class NameComponent implements OnInit {
constructor() { }
ngOnInit(): void { }
public selectFile() {
let options: ImagePickerOptions = {
android: {
isCaptureMood: false, // if true then camera will open directly.
isNeedCamera: true,
maxNumberFiles: 10,
isNeedFolderList: true
},
ios: {
isCaptureMood: false, // if true then camera will open directly.
maxNumberFiles: 10
}
};
let mediafilepicker = new Mediafilepicker();
mediafilepicker.openImagePicker(options);
mediafilepicker.on("getFiles", (res) => {
let results = res.object.get("results");
console.dir(results);
});
mediafilepicker.on("error", (res) => {
let msg = res.object.get("msg");
console.log(msg);
});
mediafilepicker.on("cancel", (res) => {
let msg = res.object.get("msg");
console.log(msg);
});
}
}
Exactly, I have it like that, and it works on my phone (Yezz 5E), but when I run it on other devices like a Samsung S7 and J7 Prime it crashes with the error mentioned earlier. The odd thing about the crash is that it claims I am adding it in app.module.ts, but it is not added there; I am adding it only in my component.
Can you show me the full stack trace?
`import { NgModule, NO_ERRORS_SCHEMA } from "@angular/core";
import { NativeScriptModule } from "nativescript-angular/nativescript.module";
import { NativeScriptRouterModule } from "nativescript-angular/router";
import { routes, navigatableComponents } from "./app-routing.module";
import { TNSFontIconModule } from 'nativescript-ngx-fonticon';
import { TNSCheckBoxModule } from 'nativescript-checkbox/angular';
// Class for the media picker
import { Mediafilepicker } from 'nativescript-mediafilepicker';
import { AppRoutingModule } from "./app-routing.module";
import { AppComponent } from "./app.component";
import { ItemsComponent } from "./item/items.component";
import { ItemDetailComponent } from "./item/item-detail.component";
import { LoginComponent } from "./modulos/login/login.component";
import { InicioComponent } from "./modulos/inicio/inicio.component";
import { TutorialComponent } from "./modulos/tutorial/tutorial.component";
import { HomeComponent } from "./modulos/home/home.component";
import { RequestComponent } from "./modulos/request/request.component";
import { TasksComponent } from "./modulos/tasks/tasks.component";
import { NotesComponent } from "./modulos/notes/notes.component";
import { ContactComponent } from "./modulos/contact/contact.component";
import { FooterComponent } from "./modulos/footer/footer.component";
import { ReportComponent } from "./modulos/report/report.component";
import { ChatComponent } from "./modulos/chat/chat.component";
import { ConversacionComponent } from "./modulos/conversacion/conversacion.component";
import { ConversacionService } from "./modulos/conversacion/conversacion.service";
import { LocationsComponent } from "./modulos/locations/locations.component";
import { OptionsComponent } from "./modulos/options/options.component";
import { NotificationsComponent } from "./modulos/notifications/notifications.component";
import { ConfigurationComponent } from "./modulos/configuration/configuration.component";
import { NewrequestsComponent } from "./modulos/newrequests/newrequests.component";
import { animacionesService } from "./modulos/servicios_generales/peticiones.service";
import { Desplegable } from "./modulos/desplegable/desplegable.component";
import { detail_desplegableComponent } from "./modulos/desplegables/desplegables_detail.component";
import { task_desplegableComponent } from "./modulos/desplegables/desplegables_task.component";
import { detail_desplegable_botonsComponent } from "./modulos/desplegables/despegable_detail_botons.component";
import { WebViewComponent } from "./modulos/desplegables/desplegables_webview.component";
import { PropertiesComponent } from "./modulos/properties/properties.component";
import { PropertiesViewComponent } from "./modulos/properties/properties_view.component";
import { desplegable_consultantComponent } from "./modulos/desplegables/desplegable_consultant.component";
//
import { CenectorServerService } from "./modulos/servicios_generales/ConectorServer.service";
import { NativeScriptUIListViewModule } from "nativescript-ui-listview/angular";
// Uncomment and add to NgModule imports if you need to use two-way binding
// import { NativeScriptFormsModule } from "nativescript-angular/forms";
// Uncomment and add to NgModule imports if you need to use the HttpClient wrapper
// import { NativeScriptHttpClientModule } from "nativescript-angular/http-client";
// descargar plugin nativescript-loader-indicator
@NgModule({
bootstrap: [
AppComponent
],
imports: [
NativeScriptModule,
AppRoutingModule,
NativeScriptRouterModule,
NativeScriptRouterModule.forRoot(routes),
TNSFontIconModule.forRoot({
'flaticon': './assets/css/flaticon.css',
}),
NativeScriptUIListViewModule,
TNSCheckBoxModule,
Mediafilepicker,
],
declarations: [
AppComponent,
ItemsComponent,
ItemDetailComponent,
LoginComponent,
InicioComponent,
HomeComponent,
TutorialComponent,
RequestComponent,
TasksComponent,
NotesComponent,
ContactComponent,
FooterComponent,
ChatComponent,
ConversacionComponent,
OptionsComponent,
NotificationsComponent,
LocationsComponent,
ConfigurationComponent,
NewrequestsComponent,
ReportComponent,
PropertiesComponent,
PropertiesViewComponent,
Desplegable,
detail_desplegableComponent,
task_desplegableComponent,
detail_desplegable_botonsComponent,
WebViewComponent,
desplegable_consultantComponent,
...navigatableComponents
],
entryComponents: [Desplegable,detail_desplegableComponent,task_desplegableComponent,detail_desplegable_botonsComponent,WebViewComponent,PropertiesViewComponent,desplegable_consultantComponent],
providers: [animacionesService,CenectorServerService,ConversacionService],
schemas: [
NO_ERRORS_SCHEMA
]
})
/*
Pass your application module to the bootstrapModule function located in main.ts to start your app
*/
export class AppModule { }`
This is the error I get when I run tns run android, after tns platform remove android and tns prepare android
Remove Mediafilepicker from your imports.
I am compiling again without the import. Waiting for the results.
Oh bro, this issue was solved. Thanks, I just cleaned and built everything again without the import and it works like a charm. Thanks so much again! @virtualbjorn
Glad I could help.
| gharchive/issue | 2019-05-31T16:32:43 | 2025-04-01T06:44:36.687101 | {
"authors": [
"melaniol",
"virtualbjorn"
],
"repo": "jibon57/nativescript-mediafilepicker",
"url": "https://github.com/jibon57/nativescript-mediafilepicker/issues/71",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1933745983 | feat: Add clang-format-17 to package registry
Hi @jidicula,
As you requested in the Pull Request #163, I am creating this PR dedicated to the new entry in the package registry :wink:
Part of: https://github.com/jidicula/clang-format-action/issues/162
Edited your PR's body to link to https://github.com/jidicula/clang-format-action/issues/162
| gharchive/pull-request | 2023-10-09T20:27:23 | 2025-04-01T06:44:36.689845 | {
"authors": [
"Xav83",
"jidicula"
],
"repo": "jidicula/clang-format-action",
"url": "https://github.com/jidicula/clang-format-action/pull/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
621486284 | Update app.py
While running the project I got an error in the terminal and saw that it was due to the key and region being inserted without quotes, so I made a slight change by adding quotes around the key and region fields.
Thanks for your contribution! I always appreciate it when folks help make my code better.
In this case I'm going to reject the PR - the key and region are actually variables set further up in the code by loading the values from environment variables in a .env file. The READMEs should have made this clear, but if you've hit this then I haven't done as good a job as I should. I'll work back over the READMEs and code comments to make sure this is as clear as possible. I've created #4 to track this.
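The pattern the maintainer describes can be sketched in a few lines. Note the variable names (`SPEECH_KEY`, `SPEECH_REGION`) are assumptions for illustration; check the sample's own .env template for the real names.

```python
import os

def load_speech_config(env=os.environ):
    """Read the Speech service key/region from environment variables.

    The variable names below are hypothetical; the actual sample may
    use different names in its .env file.
    """
    key = env.get("SPEECH_KEY", "")
    region = env.get("SPEECH_REGION", "")
    if not key or not region:
        raise ValueError("SPEECH_KEY and SPEECH_REGION must be set (e.g. via a .env file)")
    return key, region
```

With this in place, no literal key or region ever needs to be pasted into the source file.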
| gharchive/pull-request | 2020-05-20T06:06:40 | 2025-04-01T06:44:36.699488 | {
"authors": [
"jimbobbennett",
"naseeb0"
],
"repo": "jimbobbennett/SpeechToTextSamples",
"url": "https://github.com/jimbobbennett/SpeechToTextSamples/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
745898003 | Encoding issues while generating schema from endpoint
When generating a schema from an endpoint, the resulting schema incorrectly interprets UTF-8 characters.
We have comments in our schema that contain UTF-8 characters such as 'é'
When we run introspection query, the generated schema shows the comments containing 'é' ('C383 C2A9')
Under UTF-8, 'é' has a code of 'C3A9'. If you interpret this as ascii, it shows up as 'é'. When you save that as UTF-8, it gives the result 'C383 C2A9'
It looks like the introspection query is assuming that the encoding is ascii, but it should be whatever the server tells you (UTF-8 in our case)
I dug a little deeper, and noticed some things:
1 - our server did not specify a charset of utf-8
2 - application/json does not specify a charset
3 - json is always supposed to be utf-8
I managed to force an encoding on the responses, which solves the parsing issues
It does seem that the charset parameter is optional and should in fact be ignored (!)
https://www.iana.org/assignments/media-types/application/json
It is possible that other servers will omit the charset and cause problems
Similar issues:
https://github.com/dart-lang/http/issues/175#issuecomment-415721621
| gharchive/issue | 2020-11-18T18:18:34 | 2025-04-01T06:44:36.705900 | {
"authors": [
"fearhq"
],
"repo": "jimkyndemeyer/js-graphql-intellij-plugin",
"url": "https://github.com/jimkyndemeyer/js-graphql-intellij-plugin/issues/414",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1931423174 | 🛑 Hacker News is down
In 3f8dd65, Hacker News (https://news.ycombinator.com) was down:
HTTP code: 502
Response time: 282 ms
Resolved: Hacker News is back up in 699a9ff after 13 minutes.
| gharchive/issue | 2023-10-07T16:57:45 | 2025-04-01T06:44:36.709726 | {
"authors": [
"jimmymjin"
],
"repo": "jimmymjin/uptime",
"url": "https://github.com/jimmymjin/uptime/issues/223",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
772586415 | A new chinese text example on the version 0.8.0
A new Chinese text example based on the webwq-search example, on jina==0.8.0. Because there are some problems with the query flow, I need your kind help.
Thanks for your PR @ultimatedaotu ! We're currently restructuring our examples repo, so it might be more useful to post this on our own personal repo and we can link it in our upcoming "Community Examples" section
@ultimatedaotu We've created an example in Chinese for the 1.0 release; please check it out.
| gharchive/pull-request | 2020-12-22T02:21:39 | 2025-04-01T06:44:36.727509 | {
"authors": [
"alexcg1",
"nan-wang",
"ultimatedaotu"
],
"repo": "jina-ai/examples",
"url": "https://github.com/jina-ai/examples/pull/322",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2168536367 | Master test
发射点士大夫
手动阀手动阀
sadasd
| gharchive/pull-request | 2024-03-05T07:54:16 | 2025-04-01T06:44:36.742749 | {
"authors": [
"jingdeluren"
],
"repo": "jingdeluren/weathermap",
"url": "https://github.com/jingdeluren/weathermap/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
53357428 | Many to many relations
See https://github.com/jinzhu/gorm/issues/307
Hello @pjebs
Thank you for confirming; let me write some samples to check it.
Hello @pjebs
Sorry for the delay; I finally tested it. After pushing a commit to fix table-name quoting, everything seems to work for me. Here is the sample.
Thank you
package main

import (
	"log"

	_ "github.com/go-sql-driver/mysql"
	"github.com/jinzhu/gorm"
)

type Customer struct {
	Id     int
	Name   string
	Events []Event `gorm:"many2many:db1.customer_events"`
}

func (Customer) TableName() string {
	return "db1.customers"
}

type Event struct {
	Id   int
	Name string
}

func (Event) TableName() string {
	return "db2.events"
}

func main() {
	db, err := gorm.Open("mysql", "root:@/gorm?charset=utf8&parseTime=True")
	if err != nil {
		log.Fatal(err)
	}
	db.LogMode(true)

	db.AutoMigrate(&Customer{}, &Event{})

	customer := Customer{Name: "c1", Events: []Event{{Name: "e2"}, {Name: "e3"}}}
	db.Save(&customer)

	var events []Event
	db.Model(&customer).Association("Events").Find(&events)
}
outputs
(/home/jinzhu/sample.go:36)
[2015-02-24 18:04:41] [183.04ms] CREATE TABLE db1.customer_events (`customer_id` int,`event_id` int)
(/home/jinzhu/sample.go:36)
[2015-02-24 18:04:41] [199.54ms] CREATE TABLE `db1`.`customers` (`id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,`name` varchar(255))
(/home/jinzhu/sample.go:36)
[2015-02-24 18:04:41] [197.37ms] CREATE TABLE `db2`.`events` (`id` int NOT NULL AUTO_INCREMENT PRIMARY KEY,`name` varchar(255))
(/home/jinzhu/sample.go:39)
[2015-02-24 18:04:41] [2.15ms] INSERT INTO `db1`.`customers` (`name`) VALUES ('c1')
(/home/jinzhu/sample.go:39)
[2015-02-24 18:04:41] [2.11ms] INSERT INTO `db2`.`events` (`name`) VALUES ('e2')
(/home/jinzhu/sample.go:39)
[2015-02-24 18:04:41] [2.38ms] INSERT INTO db1.customer_events (`customer_id`,`event_id`) SELECT '1','1' FROM DUAL WHERE NOT EXISTS (SELECT * FROM db1.customer_events WHERE `customer_id` = '1' AND `event_id` = '1');
(/home/jinzhu/sample.go:39)
[2015-02-24 18:04:41] [1.65ms] INSERT INTO `db2`.`events` (`name`) VALUES ('e3')
(/home/jinzhu/sample.go:39)
[2015-02-24 18:04:41] [1.54ms] INSERT INTO db1.customer_events (`customer_id`,`event_id`) SELECT '1','2' FROM DUAL WHERE NOT EXISTS (SELECT * FROM db1.customer_events WHERE `customer_id` = '1' AND `event_id` = '2');
INNER JOIN db1.customer_events ON db1.customer_events.`event_id` = `db2`.`events`.`id`
(/home/jinzhu/sample.go:42)
[2015-02-24 18:04:41] [3.28ms] SELECT * FROM `db2`.`events` INNER JOIN db1.customer_events ON db1.customer_events.`event_id` = `db2`.`events`.`id` WHERE (db1.customer_events.`customer_id` = '1')
[{1 e2} {2 e3}]
| gharchive/issue | 2015-01-05T01:56:00 | 2025-04-01T06:44:36.823934 | {
"authors": [
"jinzhu",
"pjebs"
],
"repo": "jinzhu/gorm",
"url": "https://github.com/jinzhu/gorm/issues/328",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
86167865 | Angular toaster appears twice when invoked from Angular Modal
I use (and abuse) angular toaster since it is really good looking and simple to manage.
Recently I found what looks like a bug where the toaster appears twice when invoked from an Angular UI modal.
Toasters related to the main view behave the way they are supposed to (they disappear when their timeout is done);
these are the real toasters.
Toasters (exact copies of the previous ones) stuck to the modal don't disappear on timeout (they need a click);
these are the wrong, unneeded toasters.
Bug or not, I will conclude by:
Thank you for this cool toaster :+1:
Ok, I'll close this issue and post the reason (maybe it will help someone not lose time like I did).
As I said, this occurred in an Angular modal view.
The problem was that my calling view already had a toaster container declared:
<toaster-container toaster-options="{
'position-class': 'toast-top-left',
'timeOut':1000,
}">
</toaster-container>
and (which was a bit silly on my part) I added another one in my modal view (same code as above).
When using angular toaster, if you have already declared a container in the main (modal-calling) view, don't declare another one in the modal view; just manage it from your controller:
toaster.pop({
type: 'warning',
title: 'Entered option is not unique' ,
body: '\''+ $scope.newOptionBasicSelect.saisie + '\'' + ' already exists.',
showCloseButton: true
});
| gharchive/issue | 2015-06-08T13:02:03 | 2025-04-01T06:44:36.830197 | {
"authors": [
"MacKentoch"
],
"repo": "jirikavi/AngularJS-Toaster",
"url": "https://github.com/jirikavi/AngularJS-Toaster/issues/132",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2503356039 | Convert to package and add libritts data prep script
Adds pyproject.toml to be able to run pip install -e . and use the wavtokenizer module
Adds gitignore
Adds prepare_libritts.py to download, extract, and prep libritts for training
@jishengpeng Thanks for making WavTokenizer! I wanted to try it out in Speechbrain, and then best way I could find was converting to a package. Please let me know if you have feedback or changes
I'm attempting to update to Pytorch Lightning 2.0+ and then will open a PR
| gharchive/pull-request | 2024-09-03T16:58:40 | 2025-04-01T06:44:36.833338 | {
"authors": [
"saveriyo"
],
"repo": "jishengpeng/WavTokenizer",
"url": "https://github.com/jishengpeng/WavTokenizer/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Problems with saving video on server in k8s cluster
I run Jitsi in a local bare-metal k8s cluster with the local-storage storageClass. I created PVs and Jitsi successfully connected to them and created dirs and files. There's no problem saving video to the local machine, but when I try to save to the server I get an error:
Jibri 2024-08-19 13:17:30.397 FINE: [71] [hostname=teamok-jitsi-prosody.teamok.svc.cluster.local id=teamok-jitsi-prosody.teamok.svc.cluster.local] MucClient$3.handleIQRequest#513: Received an IQ with type set: IQ Stanza (jibri http://jitsi.org/protocol/jibri) [to=jibri@auth.meet.jitsi/-LOGAde-xxFs,from=jibribrewery@internal-muc.meet.jitsi/focus,id=amlicmlAYXV0aC5tZWV0LmppdHNpLy1MT0dBZGUteHhGcwBNM0RGQS0yNTMArR2PG9W3dic=,type=set,]
Jibri 2024-08-19 13:17:30.397 INFO: [71] XmppApi.handleJibriIq#230: Received JibriIq <iq xmlns='jabber:client' to='jibri@auth.meet.jitsi/-LOGAde-xxFs' from='jibribrewery@internal-muc.meet.jitsi/focus' id='amlicmlAYXV0aC5tZWV0LmppdHNpLy1MT0dBZGUteHhGcwBNM0RGQS0yNTMArR2PG9W3dic=' type='set'><jibri xmlns='http://jitsi.org/protocol/jibri' action='start' recording_mode='file' room='chat152-member67-68.27994178584757-1724062614766@muc.meet.jitsi' session_id='c9cc925b-b555-4e57-b62f-73073ce2d27c' app_data='{"file_recording_metadata":{"share":true}}'/></iq> from environment [MucClient id=teamok-jitsi-prosody.teamok.svc.cluster.local hostname=teamok-jitsi-prosody.teamok.svc.cluster.local]
Jibri 2024-08-19 13:17:30.398 INFO: [71] XmppApi.handleStartJibriIq#262: Received start request, starting service
Jibri 2024-08-19 13:17:30.401 INFO: [71] XmppApi.handleStartService#373: Parsed call url info: CallUrlInfo(baseUrl=jitsi.teamok.area, callName=chat152-member67-68.27994178584757-1724062614766, urlParams=[])
Jibri 2024-08-19 13:17:30.401 INFO: [71] JibriManager.startFileRecording#128: Starting a file recording with params: FileRecordingRequestParams(callParams=CallParams(callUrlInfo=CallUrlInfo(baseUrl=jitsi.teamok.area, callName=chat152-member67-68.27994178584757-1724062614766, urlParams=[]), email='', passcode=null, callStatsUsernameOverride=, displayName=), sessionId=c9cc925b-b555-4e57-b62f-73073ce2d27c, callLoginParams=XmppCredentials(domain=recorder.meet.jitsi, port=null, username=recorder, password=*****))
Jibri 2024-08-19 13:17:30.402 FINE: [71] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] FfmpegCapturer.<init>#92: Detected os as OS: LINUX
Jibri 2024-08-19 13:17:30.402 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: ConfigSourceSupplier: Trying to retrieve key 'jibri.chrome.flags' from source 'config' as type kotlin.collections.List<kotlin.String>
Jibri 2024-08-19 13:17:30.403 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: ConfigSourceSupplier: Found value [--use-fake-ui-for-media-stream, --start-maximized, --kiosk, --enabled, --autoplay-policy=no-user-gesture-required] for key 'jibri.chrome.flags' from source 'config' as type kotlin.collections.List<kotlin.String>
Starting ChromeDriver 126.0.6478.182 (5b5d8292ddf182f8b2096fa665b473b6317906d5-refs/branch-heads/6478@{#1776}) on port 1563
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
Jibri 2024-08-19 13:17:31.032 INFO: [71] org.openqa.selenium.remote.ProtocolHandshake.createSession: Detected dialect: OSS
Jibri 2024-08-19 13:17:31.042 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: FallbackSupplier: checking for value via suppliers:
LambdaSupplier: 'JibriConfig::recordingDirectory'
ConfigSourceSupplier: key: 'jibri.recording.recordings-directory', type: 'kotlin.String', source: 'config'
Jibri 2024-08-19 13:17:31.043 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: LambdaSupplier: Trying to retrieve value via JibriConfig::recordingDirectory
Jibri 2024-08-19 13:17:31.043 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: FallbackSupplier: failed to find value via LambdaSupplier: 'JibriConfig::recordingDirectory': org.jitsi.metaconfig.ConfigException$UnableToRetrieve$Error: class java.lang.NullPointerException
Jibri 2024-08-19 13:17:31.044 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: ConfigSourceSupplier: Trying to retrieve key 'jibri.recording.recordings-directory' from source 'config' as type kotlin.String
Jibri 2024-08-19 13:17:31.045 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: ConfigSourceSupplier: Found value /data/recordings for key 'jibri.recording.recordings-directory' from source 'config' as type kotlin.String
Jibri 2024-08-19 13:17:31.045 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: FallbackSupplier: value found via ConfigSourceSupplier: key: 'jibri.recording.recordings-directory', type: 'kotlin.String', source: 'config'
Jibri 2024-08-19 13:17:31.045 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: FallbackSupplier: checking for value via suppliers:
LambdaSupplier: 'JibriConfig::finalizeRecordingScriptPath'
ConfigSourceSupplier: key: 'jibri.recording.finalize-script', type: 'kotlin.String', source: 'config'
Jibri 2024-08-19 13:17:31.046 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: LambdaSupplier: Trying to retrieve value via JibriConfig::finalizeRecordingScriptPath
Jibri 2024-08-19 13:17:31.046 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: FallbackSupplier: failed to find value via LambdaSupplier: 'JibriConfig::finalizeRecordingScriptPath': org.jitsi.metaconfig.ConfigException$UnableToRetrieve$Error: class java.lang.NullPointerException
Jibri 2024-08-19 13:17:31.047 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: ConfigSourceSupplier: Trying to retrieve key 'jibri.recording.finalize-script' from source 'config' as type kotlin.String
Jibri 2024-08-19 13:17:31.047 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: ConfigSourceSupplier: Found value /config/finalize.sh for key 'jibri.recording.finalize-script' from source 'config' as type kotlin.String
Jibri 2024-08-19 13:17:31.048 FINE: [71] MainKt$setupMetaconfigLogger$1.debug#234: FallbackSupplier: value found via ConfigSourceSupplier: key: 'jibri.recording.finalize-script', type: 'kotlin.String', source: 'config'
Jibri 2024-08-19 13:17:31.048 INFO: [71] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] FileRecordingJibriService.<init>#134: Writing recording to /data/recordings/c9cc925b-b555-4e57-b62f-73073ce2d27c, finalize script path /config/finalize.sh
Jibri 2024-08-19 13:17:31.048 FINE: [71] JibriMetrics.incrementStatsDCounter#41: Incrementing statsd counter: start:recording
Jibri 2024-08-19 13:17:31.049 INFO: [71] JibriStatusManager$special$$inlined$observable$1.afterChange#75: Busy status has changed: IDLE -> BUSY
Jibri 2024-08-19 13:17:31.049 FINE: [71] WebhookClient$updateStatus$1.invokeSuspend#109: Updating 0 subscribers of status
Jibri 2024-08-19 13:17:31.050 INFO: [71] XmppApi.updatePresence#203: Jibri reports its status is now JibriStatus(busyStatus=BUSY, health=OverallHealth(healthStatus=HEALTHY, details={})), publishing presence to connections
Jibri 2024-08-19 13:17:31.050 FINE: [71] MucClientManager.setPresenceExtension#160: Setting a presence extension: org.jitsi.xmpp.extensions.jibri.JibriStatusPacketExt@7cae0660
Jibri 2024-08-19 13:17:31.050 FINE: [71] MucClientManager.saveExtension#185: Replacing presence extension: org.jitsi.xmpp.extensions.jibri.JibriStatusPacketExt@40811ac8
Jibri 2024-08-19 13:17:31.051 INFO: [71] XmppApi.handleStartJibriIq#275: Sending 'pending' response to start IQ
Jibri 2024-08-19 13:17:31.052 INFO: [88] AbstractPageObject.visit#32: Visiting url jitsi.teamok.area
Jibri 2024-08-19 13:17:31.055 FINE: [48] org.jitsi.xmpp.extensions.DefaultPacketExtensionProvider.parse: Could not add a provider for element busy-status from namespace http://jitsi.org/protocol/jibri
Jibri 2024-08-19 13:17:31.055 FINE: [48] org.jitsi.xmpp.extensions.DefaultPacketExtensionProvider.parse: Could not add a provider for element health-status from namespace http://jitsi.org/protocol/health
Jibri 2024-08-19 13:17:31.077 SEVERE: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] JibriSelenium.joinCall$lambda$3#333: An error occurred while joining the call
org.openqa.selenium.InvalidArgumentException: invalid argument
(Session info: chrome=126.0.6478.126)
(Driver info: chromedriver=126.0.6478.182 (5b5d8292ddf182f8b2096fa665b473b6317906d5-refs/branch-heads/6478@{#1776}),platform=Linux 5.10.0-28-amd64 x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 0 milliseconds
Build info: version: 'unknown', revision: 'unknown', time: 'unknown'
System info: host: 'teamok-jitsi-jitsi-meet-jibri-79d5d589fb-k5x6r', ip: '10.244.43.197', os.name: 'Linux', os.arch: 'amd64', os.version: '5.10.0-28-amd64', java.version: '17.0.11'
Driver info: org.openqa.selenium.chrome.ChromeDriver
Capabilities {acceptInsecureCerts: false, acceptSslCerts: false, browserConnectionEnabled: false, browserName: chrome, chrome: {chromedriverVersion: 126.0.6478.182 (5b5d8292ddf..., userDataDir: /tmp/.org.chromium.Chromium...}, cssSelectorsEnabled: true, databaseEnabled: false, fedcm:accounts: true, goog:chromeOptions: {debuggerAddress: localhost:46587}, handlesAlerts: true, hasTouchScreen: false, javascriptEnabled: true, locationContextEnabled: true, mobileEmulationEnabled: false, nativeEvents: true, networkConnectionEnabled: false, pageLoadStrategy: normal, platform: LINUX, platformName: LINUX, proxy: Proxy(), rotatable: false, setWindowRect: true, strictFileInteractability: false, takesHeapSnapshot: true, takesScreenshot: true, timeouts: {implicit: 0, pageLoad: 300000, script: 30000}, unexpectedAlertBehaviour: ignore, unhandledPromptBehavior: ignore, version: 126.0.6478.126, webStorageEnabled: true, webauthn:extension:credBlob: true, webauthn:extension:largeBlob: true, webauthn:extension:minPinLength: true, webauthn:extension:prf: true, webauthn:virtualAuthenticators: true}
Session ID: 91ceaefa4c4046deb14bdbb59bdaaed1
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
at org.openqa.selenium.remote.ErrorHandler.createThrowable(ErrorHandler.java:214)
at org.openqa.selenium.remote.ErrorHandler.throwIfResponseFailed(ErrorHandler.java:166)
at org.openqa.selenium.remote.http.JsonHttpResponseCodec.reconstructValue(JsonHttpResponseCodec.java:40)
at org.openqa.selenium.remote.http.AbstractHttpResponseCodec.decode(AbstractHttpResponseCodec.java:80)
at org.openqa.selenium.remote.http.AbstractHttpResponseCodec.decode(AbstractHttpResponseCodec.java:44)
at org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:158)
at org.openqa.selenium.remote.service.DriverCommandExecutor.execute(DriverCommandExecutor.java:83)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:543)
at org.openqa.selenium.remote.RemoteWebDriver.get(RemoteWebDriver.java:271)
at org.jitsi.jibri.selenium.pageobjects.AbstractPageObject.visit(AbstractPageObject.kt:35)
at org.jitsi.jibri.selenium.JibriSelenium.joinCall$lambda$3(JibriSelenium.kt:297)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
Jibri 2024-08-19 13:17:31.078 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] JibriSelenium.onSeleniumStateChange#218: Transitioning from state Starting up to Error: FailedToJoinCall SESSION Failed to join the call
Jibri 2024-08-19 13:17:31.078 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] StatefulJibriService.onServiceStateChange#39: File recording service transitioning from state Starting up to Error: FailedToJoinCall SESSION Failed to join the call
Jibri 2024-08-19 13:17:31.078 INFO: [88] XmppApi$createServiceStatusHandler$1.invoke#311: Current service had an error Error: FailedToJoinCall SESSION Failed to join the call, sending error iq <iq xmlns='jabber:client' to='jibribrewery@internal-muc.meet.jitsi/focus' id='8NX7V-27' type='set'><jibri xmlns='http://jitsi.org/protocol/jibri' status='off' failure_reason='error' should_retry='true'/></iq>
Jibri 2024-08-19 13:17:31.079 FINE: [88] JibriMetrics.incrementStatsDCounter#41: Incrementing statsd counter: stop:recording
Jibri 2024-08-19 13:17:31.079 INFO: [88] JibriManager.stopService#250: Stopping the current service
Jibri 2024-08-19 13:17:31.079 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] FileRecordingJibriService.stop#182: Stopping capturer
Jibri 2024-08-19 13:17:31.079 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] JibriSubprocess.stop#75: Stopping ffmpeg process
Jibri 2024-08-19 13:17:31.079 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] JibriSubprocess.stop#89: ffmpeg exited with value null
Jibri 2024-08-19 13:17:31.079 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] FileRecordingJibriService.stop#184: Quitting selenium
Jibri 2024-08-19 13:17:31.080 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] FileRecordingJibriService.stop#191: No media was recorded, deleting directory and skipping metadata file & finalize
Jibri 2024-08-19 13:17:31.080 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] JibriSelenium.leaveCallAndQuitBrowser#344: Leaving call and quitting browser
Jibri 2024-08-19 13:17:31.080 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] JibriSelenium.leaveCallAndQuitBrowser#347: Recurring call status checks cancelled
Jibri 2024-08-19 13:17:31.090 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] JibriSelenium.leaveCallAndQuitBrowser#353: Got 0 log entries for type browser
Jibri 2024-08-19 13:17:31.099 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] JibriSelenium.leaveCallAndQuitBrowser#353: Got 62 log entries for type driver
Jibri 2024-08-19 13:17:31.104 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] JibriSelenium.leaveCallAndQuitBrowser#353: Got 0 log entries for type client
Jibri 2024-08-19 13:17:31.105 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] JibriSelenium.leaveCallAndQuitBrowser#362: Leaving web call
Jibri 2024-08-19 13:17:31.136 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] JibriSelenium.leaveCallAndQuitBrowser#369: Quitting chrome driver
Jibri 2024-08-19 13:17:31.221 INFO: [88] [session_id=c9cc925b-b555-4e57-b62f-73073ce2d27c] JibriSelenium.leaveCallAndQuitBrowser#371: Chrome driver quit
Jibri 2024-08-19 13:17:31.222 INFO: [88] JibriStatusManager$special$$inlined$observable$1.afterChange#75: Busy status has changed: BUSY -> IDLE
Jibri 2024-08-19 13:17:31.222 FINE: [88] WebhookClient$updateStatus$1.invokeSuspend#109: Updating 0 subscribers of status
Jibri 2024-08-19 13:17:31.222 INFO: [88] XmppApi.updatePresence#203: Jibri reports its status is now JibriStatus(busyStatus=IDLE, health=OverallHealth(healthStatus=HEALTHY, details={})), publishing presence to connections
I tried it with chart 1.3.8 and 1.4.0.
Can anyone show me the way?
Hello @truecorax!
Can you please show your publicURL value and the whole jibri: section? It looks like Jibri cannot connect to your call room, which usually happens when the Jibri pod is somehow unable to reach the Jitsi Meet installation using the URL provided in the .Values.publicURL value. Since it has to "emulate" a real (albeit invisible) person joining the room, it has to connect to Jitsi Meet as if it were connecting from some remote location and not from inside the cluster.
Same problem. This happens even without any auth enabled.
It seems Jibri expects the public_url to start with https://
So instead of publicURL: "jitsi.example.com" (as the comment in the values suggests) it needs to be publicURL: "https://jitsi.example.com"
Yep, with https:// Jitsi started recording.
But now there's another problem: I can't find the URL to download the video, and in the logs I've got an error with finalize.sh
Jibri 2024-09-24 10:18:50.022 INFO: [69] [session_id=d1261748-08b5-487d-93f9-7d691ab4d070] JibriSelenium.leaveCallAndQuitBrowser#369: Quitting chrome driver
Jibri 2024-09-24 10:18:50.098 INFO: [69] [session_id=d1261748-08b5-487d-93f9-7d691ab4d070] JibriSelenium.leaveCallAndQuitBrowser#371: Chrome driver quit
Jibri 2024-09-24 10:18:50.098 INFO: [69] [session_id=d1261748-08b5-487d-93f9-7d691ab4d070] FileRecordingJibriService.stop#232: Finalizing the recording
Jibri 2024-09-24 10:18:50.098 INFO: [69] JibriServiceFinalizeCommandRunner.doFinalize#44: Finalizing the jibri service operation using command [/config/finalize.sh, /data/recordings/d1261748-08b5-487d-93f9-7d691ab4d070]
Jibri 2024-09-24 10:18:50.103 SEVERE: [69] JibriServiceFinalizeCommandRunner.doFinalize#63: Failed to run finalize script
java.io.IOException: Cannot run program "/config/finalize.sh": error=2, No such file or directory
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1143)
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1073)
at org.jitsi.jibri.util.ProcessWrapper.start(ProcessWrapper.kt:88)
at org.jitsi.jibri.service.impl.JibriServiceFinalizeCommandRunner.doFinalize(JibriServiceFinalizeCommandRunner.kt:47)
at org.jitsi.jibri.service.impl.FileRecordingJibriService.stop(FileRecordingJibriService.kt:233)
at org.jitsi.jibri.JibriManager.stopService(JibriManager.kt:253)
at org.jitsi.jibri.api.xmpp.XmppApi.handleStopJibriIq(XmppApi.kt:344)
at org.jitsi.jibri.api.xmpp.XmppApi.handleJibriIq(XmppApi.kt:238)
at org.jitsi.jibri.api.xmpp.XmppApi.handleIq(XmppApi.kt:219)
at org.jitsi.xmpp.mucclient.MucClient.handleIq(MucClient.java:551)
at org.jitsi.xmpp.mucclient.MucClient$3.handleIQRequest(MucClient.java:514)
at org.jivesoftware.smack.AbstractXMPPConnection$3.run(AbstractXMPPConnection.java:1561)
at org.jivesoftware.smack.AbstractXMPPConnection$10.run(AbstractXMPPConnection.java:2146)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.io.IOException: error=2, No such file or directory
at java.base/java.lang.ProcessImpl.forkAndExec(Native Method)
at java.base/java.lang.ProcessImpl.<init>(ProcessImpl.java:314)
at java.base/java.lang.ProcessImpl.start(ProcessImpl.java:244)
at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1110)
... 15 more
Jibri 2024-09-24 10:18:50.104 FINE: [69] MainKt$setupMetaconfigLogger$1.debug#234: FallbackSupplier: checking for value via suppliers:
LambdaSupplier: 'JibriConfig::singleUseMode'
ConfigSourceSupplier: key: 'jibri.single-use-mode', type: 'kotlin.Boolean', source: 'config'
Jibri 2024-09-24 10:18:50.104 FINE: [69] MainKt$setupMetaconfigLogger$1.debug#234: LambdaSupplier: Trying to retrieve value via JibriConfig::singleUseMode
Jibri 2024-09-24 10:18:50.104 FINE: [69] MainKt$setupMetaconfigLogger$1.debug#234: FallbackSupplier: failed to find value via LambdaSupplier: 'JibriConfig::singleUseMode': org.jitsi.metaconfig.ConfigException$UnableToRetrieve$Error: class java.lang.NullPointerException
Jibri 2024-09-24 10:18:50.104 FINE: [69] MainKt$setupMetaconfigLogger$1.debug#234: ConfigSourceSupplier: Trying to retrieve key 'jibri.single-use-mode' from source 'config' as type kotlin.Boolean
Jibri 2024-09-24 10:18:50.107 FINE: [69] MainKt$setupMetaconfigLogger$1.debug#234: ConfigSourceSupplier: Found value false for key 'jibri.single-use-mode' from source 'config' as type kotlin.Boolean
Jibri 2024-09-24 10:18:50.107 FINE: [69] MainKt$setupMetaconfigLogger$1.debug#234: FallbackSupplier: value found via ConfigSourceSupplier: key: 'jibri.single-use-mode', type: 'kotlin.Boolean', source: 'config'
Jibri 2024-09-24 10:18:50.107 INFO: [69] JibriStatusManager$special$$inlined$observable$1.afterChange#75: Busy status has changed: BUSY -> IDLE
Oh, there's no default script finalize.sh. So I should create it on my own.
Works for me with https:// prefix in publicURL. Thanks!
Sorry for the delay. 💀
Yes, Jibri seems to require the scheme (so, http:// or https://) to always be present in publicURL. However, you don't always need to specify the URL yourself, as the chart can generate it for you from the ingress config:
{{- define "jitsi-meet.publicURL" -}}
{{- if .Values.publicURL }}
{{- .Values.publicURL -}}
{{- else -}}
{{- if .Values.web.ingress.tls -}}https://{{- else -}}http://{{- end -}}
{{- if .Values.web.ingress.tls -}}
{{- (.Values.web.ingress.tls|first).hosts|first -}}
{{- else if .Values.web.ingress.hosts -}}
{{- (.Values.web.ingress.hosts|first).host -}}
{{- else -}}
{{ required "You need to define a publicURL or some value for ingress" .Values.publicURL }}
{{- end -}}
{{- end -}}
{{- end -}}
As for the file recording, Jibri makes the recording and saves it on a local filesystem. You can either use a finalize script to upload the recording somewhere, or use a PVC as a permanent file storage for your recordings.
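Since the chart doesn't ship a finalize script, here is a hedged sketch of what a minimal one could look like. The archive destination is a placeholder for whatever storage you mount yourself; the only contract visible in the logs above is that Jibri invokes the script with the session's recording directory as its first argument (e.g. /config/finalize.sh /data/recordings/<session-id>):

```shell
#!/bin/sh
# Hypothetical finalize.sh sketch. Jibri passes the recording directory
# as $1; copy it to some mounted archive location. The defaults below
# only exist so the sketch is runnable standalone.
RECORDING_DIR="${1:-/tmp/jibri-demo/recordings/session-1}"
ARCHIVE_DIR="${2:-/tmp/jibri-demo/archive}"

mkdir -p "$RECORDING_DIR" "$ARCHIVE_DIR"
cp -r "$RECORDING_DIR" "$ARCHIVE_DIR/"
echo "archived $(basename "$RECORDING_DIR") to $ARCHIVE_DIR"
```

Remember to mount the script at /config/finalize.sh inside the Jibri container and make it executable, since the earlier error shows Jibri looks for it at exactly that path.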
| gharchive/issue | 2024-08-19T10:40:28 | 2025-04-01T06:44:36.846155 | {
"authors": [
"d-mo",
"jens-kuerten",
"spijet",
"truecorax"
],
"repo": "jitsi-contrib/jitsi-helm",
"url": "https://github.com/jitsi-contrib/jitsi-helm/issues/128",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
How do I use PipeTransport?
Thanks for sharing. How do I use PipeTransport? Is there a demo?
You can take a look at pipe_transport_test.go; there is currently no demo for this. If you are cascading on the same host, you can use the router.PipeToRouter method. If you are cascading across different hosts, you need to use router.PipeToRouter as a reference and exchange the signaling across hosts yourself.
| gharchive/issue | 2022-11-22T11:40:36 | 2025-04-01T06:44:36.884099 | {
"authors": [
"jiyeyuran",
"ouxiand"
],
"repo": "jiyeyuran/mediasoup-go",
"url": "https://github.com/jiyeyuran/mediasoup-go/issues/21",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
2693437812 | Panic can't be recover in both Then() and ThenAsnyc() function.
package future_test

import (
	"errors"
	"testing"
	"time"

	future "github.com/jizhuozhi/go-future" // assumed import path for the library
)

func TestThen(t *testing.T) {
	f1 := future.Async(func() (interface{}, error) {
		time.Sleep(1000 * time.Millisecond)
		return "", errors.New("MAGA")
	})
	then := future.Then(f1, func(in interface{}, err error) (interface{}, error) {
		if err != nil {
			panic(err)
		}
		return "success!", nil
	})
	get, err := then.Get()
	if err != nil {
		t.Error(err)
	}
	t.Log(get)
}
I expected the error to be reported via t.Error(err), but instead the test panics.
Panic recovery itself is not part of the design of Then and ThenAsync; it is the responsibility of user code. In go-future, only future.Async implements panic-recovery logic, even though that should not be its responsibility. However, because future.Async creates a new goroutine, an unrecovered panic there would make the whole process exit unexpectedly. To avoid that situation, go-future chose to handle the panic in future.Async, even though doing so is not its responsibility.
| gharchive/issue | 2024-11-26T06:56:46 | 2025-04-01T06:44:36.886199 | {
"authors": [
"jizhuozhi",
"nan-www"
],
"repo": "jizhuozhi/go-future",
"url": "https://github.com/jizhuozhi/go-future/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
56819609 | Use default version not working in CentOS
While yum-epel has packages for CentOS 6.5, 6.6, and 7.0, these packages are out of date and therefore will not likely align with the default 3.4.3. I think the only way they would align is if someone replicated them to their own personal server. As such, the "default-use-distro-version" test suite will always fail with our default values.
I have come up with a few solutions that I can submit a PR for.
Edit the test-suite "default-use-distro-version" with versions that we know are available in YUM epel. (probably the easiest and most clean)
Create a new test-suite in mini-test fashion and import a repo that might have up to date versions.
Deprecate this feature in CentOS since the packages from yum-epel are several x.Y.z behind.
Great points. I'm leaning towards to 1 out of these; it seems like the best option.
Closed via #223
| gharchive/issue | 2015-02-06T14:43:39 | 2025-04-01T06:44:36.889320 | {
"authors": [
"cmluciano",
"jjasghar"
],
"repo": "jjasghar/rabbitmq",
"url": "https://github.com/jjasghar/rabbitmq/issues/218",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1111628773 | Customizations to concurency branch
Adds several options, including one option to pass on accounts without at least a minimum of equity. This should help with the cases where it liquidates 0 of a token on accounts without any assets.
Thank you for the contribution <3 !
Thank you for the liquidator!
| gharchive/pull-request | 2022-01-22T17:17:44 | 2025-04-01T06:44:36.902805 | {
"authors": [
"jkbpvsc",
"ricardojmendez"
],
"repo": "jkbpvsc/reactive-liquidator-marketmaker",
"url": "https://github.com/jkbpvsc/reactive-liquidator-marketmaker/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
426161899 | docker on qnap x64 possible?
Hello Jocelyn :)
I have acquired a new QNAP TS-228A and am trying to configure this, but the container won't even start. I click the Start option, it seems to start for a split second, and then immediately quits.
Console repeatedly lists:
standard_init_linux.go:187: exec user process caused "exec format error"
I know now that it's because of the different architecture, but even new Synology NAS devices come with either ARM or x64 processors. Is it possible to change this image to make it compatible, or do I have to buy an older NAS system? I just bought it to use it with your docker app.
See this issue https://github.com/jlesage/docker-crashplan-pro/issues/107. This container is compiled for x86 (which stands for intel). Also, this TS-228A has too little memory, so even if it started you would not have enough memory to run crashplan. If you want to use this container, find a nas with intel processor and at least 4 Gb internal memory (but preferably 8Gb or even more).
Also note that CrashPlan is not open source and there is no ARM binaries available...
So, is it possible to run CrashPlan on a QNAP NAS? I have the TVS-872XT, which should be more than capable. Any ideas? Thanks!
Yes, it's possible. Only arm-based models can't run it.
Closing this issue. Please re-open if needed.
| gharchive/issue | 2019-03-27T20:11:33 | 2025-04-01T06:44:36.941998 | {
"authors": [
"Raainman",
"fryhight",
"jarms03",
"jlesage"
],
"repo": "jlesage/docker-crashplan-pro",
"url": "https://github.com/jlesage/docker-crashplan-pro/issues/167",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1552290101 | syntax error in expression (error token is "80 / syntax error in expression (error token is "01")
Errors
/usr/local/bin/lsusb: line 89: 16#80
: syntax error in expression (error token is "80
/usr/local/bin/lsusb: line 89: 16#00
01: syntax error in expression (error token is "01")
/usr/local/bin/lsusb: line 89: 16#: invalid integer constant (error token is "16#")
https://github.com/jlhonora/lsusb/issues/18
Full output
⇒ lsusb
2023-01-23 08:16:39.121 system_profiler[29463:4207731] SPUSBDevice: IOCreatePlugInInterfaceForService failed 0xe00002be
2023-01-23 08:16:39.126 system_profiler[29463:4207731] SPUSBDevice: IOCreatePlugInInterfaceForService failed 0xe00002be
2023-01-23 08:16:39.126 system_profiler[29463:4207731] SPUSBDevice: IOCreatePlugInInterfaceForService failed 0xe00002be
2023-01-23 08:16:39.127 system_profiler[29463:4207731] SPUSBDevice: IOCreatePlugInInterfaceForService failed 0xe00002be
2023-01-23 08:16:39.127 system_profiler[29463:4207731] SPUSBDevice: IOCreatePlugInInterfaceForService failed 0xe00002be
2023-01-23 08:16:39.127 system_profiler[29463:4207731] SPUSBDevice: IOCreatePlugInInterfaceForService failed 0xe00002be
2023-01-23 08:16:39.128 system_profiler[29463:4207731] SPUSBDevice: IOCreatePlugInInterfaceForService failed 0xe00002be
2023-01-23 08:16:39.128 system_profiler[29463:4207731] SPUSBDevice: IOCreatePlugInInterfaceForService failed 0xe00002be
Bus 020 Device 005: ID 1532:0032 1532 Razer Ouroboros
Bus 020 Device 000: ID 0bda:5452 Realtek Semiconductor Corp. billboard Serial: 123456789ABCDEFGH
Bus 128 Device 006: ID 05ac:8104 Apple Inc. Composite Device Serial: 000000000000
/usr/local/bin/lsusb: line 89: 16#80
80
80
80
80
80
80: syntax error in expression (error token is "80
80
80
80
80
80")
Bus 000 Device 000: ID 05ac
05ac
05ac
05ac
05ac
05ac
05ac:8102
8302
0340
8103
8262
8514
8233 Apple Inc.
Apple Inc.
Apple Inc.
Apple Inc.
Apple Inc.
Apple Inc.
Apple Inc. Touch Bar Backlight Serial: 0000000000000000
0000000000000000
FM713350289HYYKC0+EMN
000000000000
000000000000
CC2123601X806G7B6
0000000000000000
/usr/local/bin/lsusb: line 89: 16#00
01: syntax error in expression (error token is "01")
Bus 000 Device 001: ID 1d6b:CITR
CITR
XHCI Linux Foundation USB 3.1 Bus
/usr/local/bin/lsusb: line 89: 16#: invalid integer constant (error token is "16#")
Bus 000 Device 000: ID 05ac:8104 Apple Inc. Apple T2 Bus
I get this same error when trying to run it
use:
brew install laniksj/tap/lsusb-plus
| gharchive/issue | 2023-01-22T21:23:28 | 2025-04-01T06:44:36.947915 | {
"authors": [
"0xdevalias",
"6xdd",
"danielboston38"
],
"repo": "jlhonora/lsusb",
"url": "https://github.com/jlhonora/lsusb/issues/21",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2531880874 | DOES IT WORK ?
Does this app still work? I have Riot open and it's not loading.
This should still work? I'm unable to replicate this bug on my machine. Would be great if you could post some logs from %APPDATA%/valorant-chat-client/logs but remember to redact out sensitive info such as tokens.
What are tokens?
It still doesn't open.
[image: image.png]
[image: b39ded0f-0582-4bd1-9716-3946ba4137b1.png]
It's still like this. Please help me, I would love to use this chat thing. It's so annoying to have to open the game every time.
Hi please navigate to %APPDATA%/valorant-chat-client/logs on your Windows file explorer and send it here. Redact lines containing the word "token"
[2024-09-18 21:52:12.697] [info] [Background] Lock Retrieved and log initialized
[2024-09-18 21:52:12.774] [info] [Preferences] Path to Preferences: C:\Users\brach\AppData\Roaming\valorant-chat-client/preferences.json
[2024-09-18 21:52:12.781] [info] [Preferences] Found preferences: {"minimizeToTray":true,"notifications":true,"winWidth":1200,"winHeight":800,"winX":360,"winY":120}
[2024-09-18 21:52:12.832] [info] [Background] Main window created
[2024-09-18 21:52:12.889] [info] [Background] Tray created
[2024-09-18 21:52:12.890] [info] [Background] VALORANT API initialized
[2024-09-18 21:52:13.159] [info] [WINDOWS] Uninstall Path: "C:\Riot Games\Riot Client\RiotClientServices.exe" --uninstall-product=valorant --uninstall-patchline=live
[2024-09-18 21:52:13.291] [info] Checking for update
[2024-09-18 21:52:13.294] [info] [VALORANT] Lockfile data: Riot Client:17052:62993:XsUDXJgBl28nwOp_RLVF4Q:https
[2024-09-18 21:52:13.295] [info] [VALORANT] Region: eu
[2024-09-18 21:52:13.296] [info] [VALORANT] Shard: eu
[2024-09-18 21:52:14.690] [info] [WINDOWS] Uninstall Path: "C:\Riot Games\Riot Client\RiotClientServices.exe" --uninstall-product=valorant --uninstall-patchline=live
[2024-09-18 21:52:15.204] [info] Update for version 1.0.8 is not available (latest version: 1.0.8, downgrade is disallowed).
[2024-09-18 21:52:18.361] [info] [VALORANT] Lockfile data: Riot Client:17052:63028:tNdQk68l4VRkLhzMtosxcw:https
[2024-09-18 21:52:18.362] [info] [VALORANT] Region: eu
[2024-09-18 21:52:18.362] [info] [VALORANT] Shard: eu
[2024-09-18 21:52:18.381] [info] [VALORANT] Entitlement:
[2024-09-18 21:52:18.450] [info] [VALORANT] Auth User Info: {"country":"isr","sub":"90ea0bac-bee9-5d60-91ad-a9300b13f6da","lol_account":{"summoner_id":3585294410458400,"profile_icon":29,"summoner_level":1,"summoner_name":""},"email_verified":true,"player_plocale":null,"country_at":1591879749000,"pw":{"cng_at":1591879750000,"reset":false,"must_reset":false},"lol":{"cuid":3585294410458400,"cpid":"EUW1","uid":3585294410458400,"pid":"EUW1","apid":null,"ploc":"en","lp":false,"active":true},"original_platform_id":"EUW1","original_account_id":3585294410458400,"phone_number_verified":false,"photo":"https://avatar.leagueoflegends.com/euw/.png","preferred_username":"foxy2007b","ban":{"restrictions":[]},"ppid":null,"lol_region":[{"cuid":3585294410458400,"cpid":"EUW1","uid":3585294410458400,"pid":"EUW1","lp":false,"active":true}],"player_locale":"en","pvpnet_account_id":3585294410458400,"region":{"locales":["de_DE","en_GB","es_ES","fr_FR","it_IT"],"id":"EUW1","tag":"euw"},"acct":{"type":0,"state":"ENABLED","adm":false,"game_name":"Serious Fox","tag_line":"Crazy","created_at":1591879749000},"jti":"_DHKJyW-emk","username":"foxy2007b"}
Thanks for the help btw :D I really want this chat without opening the app, it will be so useful.
Have you found a fix yet?
Jonathan, have you found an answer yet?
Sorry I've been pretty busy with other work lately. There seems to be an issue with getting entitlement token in which the relevant code is here. If I have time I will check this but I cannot reproduce this on my end
https://github.com/jloh02/valorant-chat-client/blob/57426167c7437f1aa44c0c1d27dc2032fa3afff6/packages/main/valorant.ts#L180C1-L189C4
Thanks for responding. Is there anybody you can refer me to who might be able to help understand the issue and find a solution? I don't really understand anything about coding and programming, so it would be really great if you or anybody you know could help. I tried to find more info online, but nothing, including AI chat boxes, has helped me find the issue.
Unfortunately I am currently the only developer working on this. If anyone else is interested in OSS development, feel free to pick this up and create a PR!
Omg, I'm sorry to hear that. This program sounds like a huge life saver though. I talk to friends so much on Val, but I don't like having to open the app on my phone, and opening a full game just to see what they are playing is annoying. Please, if you can find time for the solution it would be great, and I would be forever appreciative of you.
Is there anything you need me to send to you? Any files? Logs? Anything? Please let me know. I really want to get this working.
Unless there's someone who has the same issue in the EU, I can't really replicate this issue. Can I check about the Entitlement line: was it removed by you, or was it empty to begin with?
Also noticed your logs stopped at a specific step, can you try seeing if this returns anything for you
https://valorant-api.com/v1/version
[image: image.png] when I open the link,
and these are the full logs
I play on EU servers; that's where my account is. Is it possible that the services don't work for me because I tried to install it late? Like it works for everyone else because it's already on their PC?
vcc-20241013-185235.log
vcc-20241014-181034.log
These are the full logs; maybe something got broken.
I play on EU servers, which is also where my account is located. As for the link you sent,
this is what I see.
Ok that helps. But please remember not to post your entitlement token as someone can use it to login and hack your account. For now, please change your passwords for your Riot account for security.
Thanks for the full log though, for some reason I think the web query to https://valorant-api.com/v1/version is not working and I have no idea why. Specifically it's this line of code
https://github.com/jloh02/valorant-chat-client/blob/57426167c7437f1aa44c0c1d27dc2032fa3afff6/packages/main/valorant.ts#L208
I changed my password.
How can I fix that line then? I have no idea about coding. Do I copy that somewhere?
@jloh02 any news ?
Have you found some time? If you have the files it should be fast, no? It seems like you found the issue already.
Unfortunately, as much as I am able to find the request that fails, I am unable to replicate this on my system. This will require more debugging, preferably with someone who is able to replicate this issue reliably or can debug why it occurs on a public API endpoint.
| gharchive/issue | 2024-09-17T19:02:12 | 2025-04-01T06:44:37.019975 | {
"authors": [
"foxy7002b",
"jloh02"
],
"repo": "jloh02/valorant-chat-client",
"url": "https://github.com/jloh02/valorant-chat-client/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
199828599 | Fix tests for Windows.
~~https://github.com/demoneaux/prettier/commit/95a89e9cdf9a7eadbafad51f17e51c46ec1286c7~~ https://github.com/jlongster/prettier/pull/11/commits/258575af9ade68a28813ea79a829468b61c94f22 allows the tests to run on Windows.
~~https://github.com/demoneaux/prettier/commit/8e2eaccfc1c5c4c456d5787992d2d19b67e11b31~~ ~~https://github.com/jlongster/prettier/pull/11/commits/29334081d2f7cd0ffcd8535b145e36d2f266e3d4~~ https://github.com/jlongster/prettier/pull/11/commits/e72347e51b3cd9d1694462856fad0047c4c6b38f attempts to solve EOL problems with Windows, but there are still some issues with that.
Closes #10.
Thanks a lot!
| gharchive/pull-request | 2017-01-10T14:00:55 | 2025-04-01T06:44:37.023392 | {
"authors": [
"demoneaux",
"jlongster"
],
"repo": "jlongster/prettier",
"url": "https://github.com/jlongster/prettier/pull/11",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2002666385 | Delayed responses due to openai-node 10 minute default request timeout
Issue
The openai-node library has a default request timeout of 10 minutes, and then automatically retries. That's why some chat responses and image creations seem to magically pop in at the 10-minute mark: it's when the API retry finally happened.
>req-1176148572933210162-intent 2023-11-20T13:14:31.600Z
>req-1176148572933210162-intent 2023-11-20T13:23:59.998Z (last preceding log timestamp)
>res-1176148572933210162-intent 2023-11-20T13:24:32.493Z
(took 10 full minutes with one req retry logged)
Sometimes the requests do become unstuck and finish after minutes without retrying.
>req-1176150955545342022-intent 2023-11-20T13:23:59.652Z
>res-1176150955545342022-intent 2023-11-20T13:23:59.997Z
>req-1176150955545342022-chat 2023-11-20T13:23:59.998Z
>res-1176150955545342022-chat 2023-11-20T13:31:35.646Z
(chat response took about 8 minutes)
>req-1176151555938992138-intent 2023-11-20T13:26:22.804Z
>res-1176151555938992138-intent 2023-11-20T13:33:09.261Z
>req-1176151555938992138-emoji 2023-11-20T13:33:09.262Z
>res-1176151555938992138-emoji 2023-11-20T13:33:09.600Z
(intent response took about 7 minutes)
Solution
The openai-node library implements XBR+Jitter (source), so I'll implement a default timeout of 5 seconds, and see how the library handles retries after that.
This will have to be implemented in:
CreateChatCompletion.ts
CreateChatCompletionConfiguration.ts
CreateImage.ts
CreateImageConfiguration.ts
I'll add a configuration option to ConfigTemplate.json:
{
"name": "openai_api_timeoutSec",
"allowedValues": 0,
"defaultValue": 5,
"required": false,
"secret": false
},
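Since the library already applies exponential backoff with jitter between retries, a short client timeout mostly shifts when the first retry fires. Here is a minimal sketch of that kind of retry-delay schedule (illustrative only; the base delay and cap are made-up values, not openai-node's actual internals):

```python
import random

def backoff_delays(max_retries, base=0.5, cap=8.0, seed=None):
    """Exponential backoff with 'full jitter': each retry waits a random
    amount between 0 and min(cap, base * 2**attempt) seconds."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(max_retries):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(rng.uniform(0, ceiling))
    return delays

print(backoff_delays(4, seed=1))
```

With a 5-second request timeout, the first retry would typically start within a few seconds instead of after 10 minutes.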
Merged, tested, and deployed in #142
| gharchive/issue | 2023-11-20T17:19:39 | 2025-04-01T06:44:37.029678 | {
"authors": [
"jlyons210"
],
"repo": "jlyons210/discord-bot-ol-bootsie",
"url": "https://github.com/jlyons210/discord-bot-ol-bootsie/issues/131",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
215766514 | A list of Problems about the Examples
Dear Junior,
Right now I am systematically learning how to use RISE, as it really lowers the barrier to solving and estimating Markov-switching DSGE models. This is very good for beginners. Thanks!
Now I am trying to run several of the examples included in RISE and have encountered some problems. Here they are:
(1)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/VariousModels/TaoZha/Tutorials/DSGE/Tutorial1
The results are:
....
Regime 2 : const = 1 & nk = 2
PAI R X ZD ZS
________ _______ ________ ____ ______
R{-1} -0.70877 0.34035 -0.60807 0 0
ZD{-1} 1.6535 0.5933 1.2358 0.68 0
ZS{-1} 2.2564 0.70276 0.11434 0 0.82
ED 0.65652 0.23558 0.4907 0.27 0
ER -0.2717 0.13047 -0.23309 0 0
ES 2.3943 0.7457 0.12133 0 0.8701
________ _______ ________ ____ ______
PAI R X ZD ZS
Error using isfield
Too many input arguments.
Error in dsge/print_solution (line 91)
if ~isfield(obj.solution,'Tz')||isempty(obj.solution.Tz{1})
Error in howto (line 71)
print_solution(m)
(2)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/VariousModels/TaoZha/Tutorials/DSGE/Tutorial3
The results are:
List of issues
none
--------------------------------------------------------------
----------- MCMC for svar_constant model-------------
--------------------------------------------------------------
Undefined function or variable 'posterior_simulator'.
Error in howto (line 121)
[obj{imod},postSims{imod}]=posterior_simulator(estim_models{imod},'mcmc_number_of_simulations',...
(3)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/MarkovSwitching/FarmerWaggonerZha2010
The results are:
Parameterization ::1, solver ::mfi
MODEL HAS NOT BEEN SOLVED AT LEAST UP TO THE FIRST ORDER
SOLVER :: mfi
Reference to non-existent field 'Tz'.
Error in dsge/print_solution>print_solution_engine/build_printing_array (line 211)
Tz=obj.solution.(Tzname){regime_index}(ids,:).';
Error in dsge/print_solution>print_solution_engine (line 152)
build_printing_array(ii,myprologue);
Error in dsge/print_solution (line 102)
print_solution_engine(obj(iobj),varlist,compact_form,orders);
Error in howto (line 62)
eval(['models_with_solver_',int2str(solver),'(ii,1).print_solution'])
(4)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/VariousModels/PeterIreland/Endogenous_JME2003
The results are:
none
file2blocks:: (gentle warning): I is also a matlab function
file2blocks:: (gentle warning): MU is also a matlab function
file2blocks:: (gentle warning): beta is also a matlab function
file2blocks:: (gentle warning): alpha is also a matlab function
Error using parser.capture_equations/capture_equations_engine (line 124)
capture_equations:: attributes to model block no longer permitted in file
emosp_linear at line 23
Error in parser.capture_equations (line 58)
[block,equation]=capture_equations_engine(block,listing(ii,:),block_name,equation);
Error in parser.parse_model (line 25)
[Model_block,dictionary]=parser.capture_equations(dictionary,blocks(current_block_id).listing,'model');
Error in parser.parse (line 195)
[Model_block,dictionary,blocks]=parser.parse_model(dictionary,blocks);
Error in dsge (line 320)
dictionary=parser.parse(model_filename,cmfArgins{:});
Error in rise (line 64)
obj=obj@dsge(varargin{:});
Error in master (line 59)
ml=rise('emosp_linear','rise_flags',{'original',true});
(5)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/VariousModels/PeterIreland/NKperspective_JMCB2011
The results are:
Error using generic/set/set_one_at_a_time (line 232)
forecast_start_date' is not a settable option or property of class rise
Error in generic/set (line 178)
set_one_at_a_time();
Error in dsge/set (line 132)
obj=set@generic_switch(obj,varargin{:});
Error in generic/forecast (line 107)
obj=set(obj,varargin{:});
Error in master (line 115)
fkst=forecast(mest,'data',pageify(serials(2)-1,db),...
(6)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/VariousModels/PeterIreland/productivity_RED2008
The results are:
Insufficient number of outputs from right hand side of equal sign to satisfy
assignment.
Error in
parser.preparse>process_flow_controls/process_flow_control_engine/do_for/update_set
(line 324)
sset=info.set;
Error in parser.preparse>process_flow_controls/process_flow_control_engine/do_for
(line 308)
update_set()
Error in parser.preparse>process_flow_controls/process_flow_control_engine (line
289)
do_for()
Error in parser.preparse>process_flow_controls (line 279)
process_flow_control_engine()
Error in parser.preparse>preparse_expand (line 140)
[rawfile,has_macro]=process_flow_controls(rawfile,definitions,has_macro);
Error in parser.preparse (line 105)
[output,has_macro]=preparse_expand(RawFile,parsing_definitions);
Error in parser.parse (line 137)
RawFile=parser.preparse(FileName,DefaultOptions.rise_flags);
Error in dsge (line 320)
dictionary=parser.parse(model_filename,cmfArgins{:});
Error in rise (line 64)
obj=obj@dsge(varargin{:});
Error in master (line 6)
m=rise('pdk','rise_flags',{'Sektors',{'C','I'}},...
(7)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/VariousModels/PeterIreland/sgusea_JEEA2013
The results are:
Insufficient number of outputs from right hand side of equal sign to satisfy
assignment.
Error in
parser.preparse>process_flow_controls/process_flow_control_engine/do_for/update_set
(line 324)
sset=info.set;
Error in parser.preparse>process_flow_controls/process_flow_control_engine/do_for
(line 308)
update_set()
Error in parser.preparse>process_flow_controls/process_flow_control_engine (line 289)
do_for()
Error in parser.preparse>process_flow_controls (line 279)
process_flow_control_engine()
Error in parser.preparse>preparse_expand (line 140)
[rawfile,has_macro]=process_flow_controls(rawfile,definitions,has_macro);
Error in parser.preparse (line 105)
[output,has_macro]=preparse_expand(RawFile,parsing_definitions);
Error in parser.parse (line 137)
RawFile=parser.preparse(FileName,DefaultOptions.rise_flags);
Error in dsge (line 320)
dictionary=parser.parse(model_filename,cmfArgins{:});
Error in rise (line 64)
obj=obj@dsge(varargin{:});
Error in master12 (line 17)
m=rise('sgusea12',...
That's all.
Best
Zixiang Zhu
Hi Zixiang,
Thank you for reporting. There are many examples that need updating with the new syntax of RISE, but I have addressed the issues you've raised:
(1)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/VariousModels/TaoZha/Tutorials/DSGE/Tutorial1
Fixed
(2)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/VariousModels/TaoZha/Tutorials/DSGE/Tutorial3
You should use howto_new instead of howto
(3)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/MarkovSwitching/FarmerWaggonerZha2010
Fixed
(4)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/VariousModels/PeterIreland/Endogenous_JME2003
Fixed
(5)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/VariousModels/PeterIreland/NKperspective_JMCB2011
Fixed
(6)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/VariousModels/PeterIreland/productivity_RED2008
Fixed
(7)/Users/zhuzixiang/Desktop/RISE_toolbox-master/examples/VariousModels/PeterIreland/sgusea_JEEA2013
Fixed
Cheers,
Junior
Many thanks! :)
| gharchive/issue | 2017-03-21T14:52:08 | 2025-04-01T06:44:37.056777 | {
"authors": [
"jmaih",
"zhuzixiang"
],
"repo": "jmaih/RISE_toolbox",
"url": "https://github.com/jmaih/RISE_toolbox/issues/44",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
323328503 | RS-DSGE estimation: filtered, updated and smoothed variables
Hi Junior,
Regarding RS-DSGE model estimation, in particular the output that is stored in the structure "filtering" after estimation, is the terminology as in the Dynare output? I.e., is it correct that filtered_variables are one step ahead forecast (t+1) for the endogenous variables given information up to time t; updated_variables are the best guess for the endogenous variables at t given information up to time t; and smoothed_variables are best guess for the endogenous variables given all available information?
Regarding the “Expected” concepts of the "filtering" output, what is the relation between the smoothed_state_probabilities, updated_state_probabilities and filtered_state_probabilities and the Expected_smoothed, Expected_updated and Expected_filtered variables?
Many thanks!
Cheers,
Martin
Hi Martin,
Your intuition is exactly right:
1- filtered_variables refer to one-step-ahead forecasts: a_{t|t-1}
2- updated_variables refer to the updates: a_{t|t}
3- smoothed_variables refer to the final estimates conditional on all available information: a_{t|n}
As for the expected counterparts, they are averages across all regimes. This means that in a regime-switching model, you will have, for every variable, as many series as the number of regimes. The weights used to compute those averages are the probabilities.
Cheers,
J.
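In other words, for each variable the Expected_* series is the probability-weighted average of the per-regime series at each date. A tiny pure-Python sketch of that weighting (names are illustrative, not RISE's actual API):

```python
def expected_series(series_by_regime, probs_by_regime):
    """Weighted average across regimes, one value per time period.

    series_by_regime: list of per-regime series, each a list over time
    probs_by_regime:  matching list of probability series (sum to 1 at
                      each date)
    """
    n_periods = len(series_by_regime[0])
    out = []
    for t in range(n_periods):
        # weight each regime's value at date t by that regime's probability
        out.append(sum(s[t] * p[t]
                       for s, p in zip(series_by_regime, probs_by_regime)))
    return out
```

So Expected_filtered uses the filtered probabilities, Expected_updated the updated ones, and Expected_smoothed the smoothed ones.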
Correct!!!
In May 2018 at 21:03, mhardinga notifications@github.com wrote:
Thanks!
So in the expected counterparts, say the Expected_filtered_variables are the filtered variables in each regime weighted with the filtered_state_probabilities, correct?
Martin
| gharchive/issue | 2018-05-15T18:17:18 | 2025-04-01T06:44:37.067107 | {
"authors": [
"jmaih",
"mhardinga"
],
"repo": "jmaih/RISE_toolbox",
"url": "https://github.com/jmaih/RISE_toolbox/issues/78",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
98188251 | Gridlines not shown
Even if <x:DisplayGridlines/> is present in xls template, gridlines are not shown when opened in Excel (Excel 2011 Mac OS).
I had to add <meta name=ProgId content=Excel.Sheet> <meta name=Generator content="Microsoft Excel 11"> to the template, just before <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">, to make them appear.
Thank you, very helpful.
| gharchive/issue | 2015-07-30T14:50:29 | 2025-04-01T06:44:37.068767 | {
"authors": [
"dackmin",
"jy1989"
],
"repo": "jmaister/excellentexport",
"url": "https://github.com/jmaister/excellentexport/issues/29",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
727083459 | Iss1
train_cnn.ipynb - The os.system method throws an error. I've replaced it with os.symlink() to avoid the ln shell command. The symlink creation can be verified (commands in the README file).
download_images.ipynb - While dropping the images that are already downloaded, it happens that some images present in the CSV file 'image_download_locs.csv' are not present in the countries/<country_name> directory, and vice versa. So I have included only those images that are common to both.
Do let me know what you think of it.
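The intersection described in point 2 boils down to keeping only the names that appear both in the CSV and on disk. A small sketch (the file-name and directory layout here is an assumption about the repo, not its actual code):

```python
import os

def common_images(csv_image_names, country_dir):
    """Keep only image names that appear both in the CSV listing
    and in the downloaded-images directory, preserving CSV order."""
    on_disk = set(os.listdir(country_dir))
    return [name for name in csv_image_names if name in on_disk]
```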
Hey, I don't have too much time to look at this but generally what you have seems fine. A few important things:
"While dropping the images that are already downloaded, it happens that some images present in the csv file ' image_download_locs.csv ' are not present in the countries/<country_name> directory and vice versa." Are you sure? The logic of dropping images that were not downloaded seems correct.
I won't merge the PR directly because your changes include more than just the minimum (there is your history + some other little things changed here and there). Can you summarize what you want to change in master so I can just apply that in one commit? The README diff is pretty clear, but with Jupyter Notebooks the diff can be a little troublesome to read.
This fork has a lot of commit history in the main branch; I'm sorry about that. I will delete this fork and raise another PR from a new one with just the essential commits. I'm closing this.
| gharchive/pull-request | 2020-10-22T06:16:11 | 2025-04-01T06:44:37.076361 | {
"authors": [
"San411",
"jmather625"
],
"repo": "jmather625/predicting-poverty-replication",
"url": "https://github.com/jmather625/predicting-poverty-replication/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
205066587 | Add integration tests
I suspect that under-the-hood, things will change quite a bit as I add more features. What I can do to prevent regressions is to write unit tests that just run migrations, then hit endpoints. That way, I can be sure that I'm not introducing bugs into existing features, like editing regular attrs or meta.
Gonna close this. All of the CRUD commands are mostly tested. Now I'm just filling in some holes here and there.
| gharchive/issue | 2017-02-03T03:57:20 | 2025-04-01T06:44:37.094045 | {
"authors": [
"jmeas"
],
"repo": "jmeas/api-pls",
"url": "https://github.com/jmeas/api-pls/issues/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
308788285 | Need the ability to unique a list
Need the ability to get a unique list of elements in an array (similar to UNIX uniq)
Say I have an array { "result": [ "G", "L", "G", "G", "L" ] }
uniq(result) yields [ "G", "L" ]
I honestly think it can't be done right now with JMESPath.
Just try the last tip here (new Set() from ES2015): https://www.jstips.co/en/javascript/deduplicate-an-array/
This document is intended to help you: jmespath.py CustomFunctions
"authors": [
"DanielFTwum",
"abeelan",
"maxxyme"
],
"repo": "jmespath/jmespath.site",
"url": "https://github.com/jmespath/jmespath.site/issues/51",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
614092697 | Support for merging SQLCMD scripts
In version 1.1.0 we've added support for pre- and post-deployment scripts (see #9). Unfortunately we do not yet support including other scripts from those scripts using the :r OtherScript.sql syntax.
Some initial work has been done to support this (see feature/sqlcmd-file-merging branch), but it has some limitations in that it doesn't work cross-platform. This is due to the fact that the SQL libraries we're currently using seem to uppercase the path, which doesn't work on platforms that have a case-sensitive file system (i.e. Linux). Ideally we would like to keep this library cross-platform, so we'll need to figure out how to work around this.
Is this all that is holding up merging in https://github.com/jmezach/MSBuild.Sdk.SqlProj/tree/feature/sqlcmd-file-merging ?
I'm wondering if you could just release this knowing that reference path resolution within DacFX is not working cross-platform yet (or I should say, converting given the platform it's building and resolving on).
I notice the DACExtensions project you commented on hasn't been worked on for about 3 years, which leaves me little hope that they'll resolve this anytime soon.
I'm wondering if a short-term solution would be to do something like this.
Include 2 post/pre scripts, and ONLY include reference paths in them
1 set of pre/post scripts is named .windows. and the other .linux. with proper pathing syntax in both.
developers could build for their single environment, or use csproj conditionals to determine which one should be the included pre/post scripts
Does that make sense? I haven't tried proving this out on my side yet to know if that's a viable solution or not.
I just know as it stands now, this SDK doesn't support pre/post at all (in v 1.1.0), so it would be nice to have SOMETHING sooner than later.
1.1.0 DOES support pre and post scripts, what makes you think otherwise? They must just be single scripts, and reside in the correctly named folders
Sorry, @ErikEJ! I should've proofread my response a little better. I edited it above.
I meant to say that it doesn't support resolving the references in those scripts!
I was thinking about making this work in a way that it would work as expected on Windows, while having the "handicap" on Linux and macOS of having to use upper-cased filenames. That way it would pass CI and it would just work on Windows as expected. Not sure when I'll have the time to finish this though.
I've had another look at this, but I don't think there's an easy fix for this issue. Not only does it uppercase the filename, but it uppercases the absolute file path. That means I would need to rename the entire folder structure to be uppercases in order for it to work during CI, but I can't do that because some higher level folders are outside of my control.
So I guess we can only hope that someone at Microsoft is going to fix this, or we have to do the parsing ourselves which is non-trivial IMHO.
I had another look at this. Apparently we're allowed to use code from microsoft/sqltoolsservice as long as we include the license. I've added a submodule and reference the ManagedBatchParser project directly and I think that will give us what we need to implement this. I'm already able to get the list of included scripts from the pre/post-deployment script. I think it will work for nested includes as well. Then I'll just need to figure out how to flatten everything into a single script. I can probably draw some inspiration from how SSDT does it ;).
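For illustration, the flattening step can be sketched in Python (the real implementation is C# on top of the ManagedBatchParser; the regex and file handling here are simplifying assumptions, e.g. no quoted paths or SQLCMD variables):

```python
import re
from pathlib import Path

# Matches lines like `:r OtherScript.sql` (illustrative; SQLCMD allows more)
INCLUDE_RE = re.compile(r'^\s*:r\s+(.+?)\s*$', re.IGNORECASE)

def flatten_sqlcmd(path, _seen=None):
    """Recursively inline `:r Other.sql` includes into one list of lines."""
    path = Path(path)
    _seen = _seen if _seen is not None else set()
    if path.resolve() in _seen:          # guard against include cycles
        return []
    _seen.add(path.resolve())
    lines = []
    for line in path.read_text().splitlines():
        m = INCLUDE_RE.match(line)
        if m:
            # include paths are resolved relative to the including script
            lines.extend(flatten_sqlcmd(path.parent / m.group(1), _seen))
        else:
            lines.append(line)
    return lines
```

Nested includes come out flattened depth-first, which matches the intuition of how SSDT inlines pre/post-deployment scripts.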
@jmezach , I have a database project which utilizes the :r sqlcmd syntax in the post deployment context. I'm happy to help test when you get to that point (I'm afraid it would be limited to Windows though :-\ ).
@jmezach some news when the merge SQL scripts support will be available?
This one is currently blocking us from upgrading our solutions to container builds and it would be great to have this feature available.
If nobody else is looking into this, I may attempt it in the near future, as it's also important for my organization. I can try to work on it next week.
This isn't very high on our priority list since we are not using this in our projects. I did some initial work on it in feature/sqlcmd-file-merging which wasn't very successful in that it wouldn't work in cross-platform scenarios. I then tried again in feature/sqlcmd-merging-v2 by re-using code from another project (using a git submodule). That looked a bit more promising in that I was able to collect the list of files that would need merging. I just haven't gotten around to do the actual merging yet.
This is definitely a big blocker for my org as we use includes heavily.
Quick status update: This is in progress; I hope to have a PR for review this week or early next.
That's great news @jeffrosenberg. If you need any help with this please let me know.
Support for this got added in 1.8.0 thanks to @jeffrosenberg.
Pinging @gregj77, @mrozwod, @ltk-mpadelsky, @MarkIannucci, @knoxi, @ssa3512 and @whighsmith since they seem to have been waiting for this feature to drop. Please let us know if this covers your scenario's and if it doesn't don't hesitate to file an issue.
@jmezach looks like 1.8.0 actually got cut just before my commit, so this change isn't in there 😕
Hmm, then something must have gone wrong. I'll have a look.
You're right. I guess my local master branch wasn't up-to-date when I prepared the release. Pretty sure I pulled all the changes but I guess I was too soon or something. Anyway, I've cherry-picked the commit to the release branch, bumped the version number and the CI is running now. 1.8.1 that includes these changes should be up on NuGet momentarily.
| gharchive/issue | 2020-05-07T14:15:31 | 2025-04-01T06:44:37.117921 | {
"authors": [
"ErikEJ",
"MarkIannucci",
"jeffrosenberg",
"jmezach",
"knoxi",
"ltk-mpadelsky",
"ssa3512"
],
"repo": "jmezach/MSBuild.Sdk.SqlProj",
"url": "https://github.com/jmezach/MSBuild.Sdk.SqlProj/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
164657388 | Unable to build android target. android:dexDebug task fails
The task fails with com.android.dex.DexException: Multiple dex files define Lorg/hamcrest/Description
This comes up when I click the run button in Android Studio as well as when I use gradlew from the command line:
> ./gradlew android:installDebug
...
:android:dexDebug
Unknown source file : UNEXPECTED TOP-LEVEL EXCEPTION:
Unknown source file : com.android.dex.DexException: Multiple dex files define Lorg/hamcrest/Description;
Unknown source file : at com.android.dx.merge.DexMerger.readSortableTypes(DexMerger.java:591)
Unknown source file : at com.android.dx.merge.DexMerger.getSortedTypes(DexMerger.java:546)
Unknown source file : at com.android.dx.merge.DexMerger.mergeClassDefs(DexMerger.java:528)
Unknown source file : at com.android.dx.merge.DexMerger.mergeDexes(DexMerger.java:164)
Unknown source file : at com.android.dx.merge.DexMerger.merge(DexMerger.java:188)
Unknown source file : at com.android.dx.command.dexer.Main.mergeLibraryDexBuffers(Main.java:504)
Unknown source file : at com.android.dx.command.dexer.Main.runMonoDex(Main.java:334)
Unknown source file : at com.android.dx.command.dexer.Main.run(Main.java:277)
Unknown source file : at com.android.dx.command.dexer.Main.main(Main.java:245)
Unknown source file : at com.android.dx.command.Main.main(Main.java:106)
My setup:
Android Studio 2.2 Preview 1Build #AI-145.2878421, built on May 17, 2016
JRE: 1.8.0_40-b25 x86_64
JVM: Java HotSpot(TM) 64-Bit Server VM by Oracle Corporation
Gradle version 2.2
From the internet, it looks like this has something to do with the fact that junit 4.10 includes a copy of hamcrest 1.1[1]
This happens randomly; what you'll need to do is enable the multidex option and delete the generated folder. To do this, go to the build.gradle file for the Android project and add the following lines within the android block:
defaultConfig {
multiDexEnabled true
}
After this, go to the project explorer on the left, go to the android project, go to build, and delete the entire generated folder. Then clean and rebuild and it should work.
Thanks for the tip @jmrapp1 and the very detailed explanation! This ended up not being the issue though, I fixed it by excluding hamcrest-core from json-simple as described here: http://stackoverflow.com/questions/32543197/error-json-simple-java-util-zip-zipexception-duplicate-entry-org-hamcrest-bas
It now runs on my android!
| gharchive/issue | 2016-07-09T08:29:33 | 2025-04-01T06:44:37.144822 | {
"authors": [
"dsyang",
"jmrapp1"
],
"repo": "jmrapp1/TerraLegion",
"url": "https://github.com/jmrapp1/TerraLegion/issues/23",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2247854678 | Update tested JRuby version
... and also fix tests for newer Minitest (not MiniTest).
/cc @christophweegen
coverage: 98.23% (+0.1%) from 98.126% when pulling 5d772a2f5652ce9d2c338b2720c7f4f45757f6ae on 87-test-for-jruby-94x into 501ab4865c22ac76ea0408474cca3a26b39b8898 on master.
| gharchive/pull-request | 2024-04-17T09:43:01 | 2025-04-01T06:44:37.182720 | {
"authors": [
"coveralls",
"jnbt"
],
"repo": "jnbt/candy_check",
"url": "https://github.com/jnbt/candy_check/pull/88",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
864358098 | max_num_cycles: undefined variable in _build_skeleton_path_graph when _buffer_size_offset is set
error
s = Skeleton(skeleton0, spacing=1, _buffer_size_offset=10000)
---------------------------------------------------------------------------
UnboundLocalError Traceback (most recent call last)
<ipython-input-29-f661641f342d> in <module>
----> 1 s = Skeleton(skeleton0, spacing=1, _buffer_size_offset=10000)
~/anaconda3/envs/test-py37/lib/python3.7/site-packages/skan/csr.py in __init__(self, skeleton_image, spacing, source_image, _buffer_size_offset, keep_images, unique_junctions)
331 self.coordinates = coords
332 self.paths = _build_skeleton_path_graph(self.nbgraph,
--> 333 _buffer_size_offset=_buffer_size_offset)
334 self.n_paths = self.paths.shape[0]
335 self.distances = np.empty(self.n_paths, dtype=float)
~/anaconda3/envs/test-py37/lib/python3.7/site-packages/skan/csr.py in _build_skeleton_path_graph(graph, _buffer_size_offset)
241 # of the number of points.
242 n_points = (graph.indices.size + np.sum(endpoint_degrees - 1) +
--> 243 max_num_cycles)
244 path_indices = np.zeros(n_points, dtype=int)
245 path_data = np.zeros(path_indices.shape, dtype=float)
UnboundLocalError: local variable 'max_num_cycles' referenced before assignment
originating from
def _build_skeleton_path_graph(graph, *, _buffer_size_offset=None):
if _buffer_size_offset is None:
max_num_cycles = graph.indices.size // 4
_buffer_size_offset = max_num_cycles
where the code does not set max_num_cycles variable properly in case _buffer_size_offset is passed to function call.
Oops! Well-spotted @matbb! I think #112 is the right fix. If you could comment on that, I can merge and maybe even cut a release if you need this fix on PyPI.
I tested the code in the pull request at c02bde43d00b1c5479234598e26cb51cf54d86b1 and confirm that it works for me and the bug does not appear.
Thanks for the module, it solves my problem pretty well.
And I don't need a new release. I have noticed, though, that skan on conda-forge (https://anaconda.org/conda-forge/skan) is a release behind (0.8 vs 0.9 on PyPI).
| gharchive/issue | 2021-04-21T22:34:05 | 2025-04-01T06:44:37.197432 | {
"authors": [
"jni",
"matbb"
],
"repo": "jni/skan",
"url": "https://github.com/jni/skan/issues/111",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
3279431 | Compiled ../lib/ not in npm
When installing via npm, I don't have the ../lib/ folder, where I assume you want to compile the ../src/ folder to. I did it myself and everything works fine :)
Currently the built CoffeeScript is not checked in; you can build it using ./bin/dev/compile_coffee. There is also a development binary, ./bin/dev/cli, that will use the CoffeeScript source directly.
And I just pushed up the latest version to npm, so an npm install wintersmith should work now.
Also note that the docs aren't updated yet, and I might do some more refactoring before I consider it ready for release :)
| gharchive/issue | 2012-02-18T14:56:36 | 2025-04-01T06:44:37.199226 | {
"authors": [
"jnordberg",
"jouz"
],
"repo": "jnordberg/wintersmith",
"url": "https://github.com/jnordberg/wintersmith/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
267403763 | Test failures on rakudo HEAD (the code is relying on buggy behavior)
…
…
…
ok 39 - LABEL with multi-line value
ok 1 - Parsed successfully
ok 2 - 1 instruction
ok 3 - Correct type
ok 4 - Correct instruction
ok 5 - Correct number of labels
ok 6 - Correct label (1)
ok 7 - Correct label (2)
ok 8 - Correct label (3)
1..8
ok 40 - LABEL with multiple values on one line
Could not parse Docker file
in method parse at /home/alex/git/p6-docker-file/lib/Docker/File.pm6 (Docker::File) line 593
in block <unit> at t/parse-basic.t line 514
Bisectable points at https://github.com/rakudo/rakudo/commit/963a0f0657abaa0431d465e601c75b50462b4cd2
Most likely the issue is in the code (and not in rakudo).
It's probably the || alternation backtracking change. I won't have time to get to this particularly soon.
Hah, but I do have time to merge a PR :heart:
| gharchive/issue | 2017-10-21T18:35:21 | 2025-04-01T06:44:37.218041 | {
"authors": [
"AlexDaniel",
"jnthn"
],
"repo": "jnthn/p6-docker-file",
"url": "https://github.com/jnthn/p6-docker-file/issues/2",
"license": "Artistic-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1766574542 | 🛑 RolCris.ro is down
In 9f81a77, RolCris.ro (https://rolcris.ro) was down:
HTTP code: 0
Response time: 0 ms
Resolved: RolCris.ro is back up in 3e91384.
| gharchive/issue | 2023-06-21T02:44:01 | 2025-04-01T06:44:37.229907 | {
"authors": [
"joahn3"
],
"repo": "joahn3/upptime2",
"url": "https://github.com/joahn3/upptime2/issues/361",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
223788724 | Wave deletion does not happen as expected
I must first say I'm impressed by this library. I'm using it for a hobby CNC project, and it makes it possible to do parallel execution of waves for several motors based on GPIOs.
But I think there is an issue with wave deletion.
When using a python add-create-send-delete loop for waves, I cannot always remove the completed waves. This leads to 'No more CBs for waveform' when creating new waves.
I can recreate the problem with a simple loop. When the deletion does not happen, the wave id increases continuously.
and here is the code
import pigpio
import time
import random
pi = pigpio.pi('navio2.local')
# system info
print 'pigpiod version: {}'.format(pi.get_pigpio_version()) # 61
print 'pigpio.py version: {}'.format(pigpio.VERSION) # 1.36
print 'wave_get_max_pulses: {}'.format(pi.wave_get_max_pulses()) # 12000
gpio = [19, 5, 25]
total = 30000
blocksize = 3000
iter = total / blocksize
waves = []
wavepos = 0
"""
When running with different delays, the loop fails with pigpio.error:
'No more CBs for waveform'.
But when running with equal delays, the loop completes.
Probably because the waves are reused in gpioWaveCreate.
"""
delays_that_fails = [200, 0] # fails
equal_delay_does_not_fail = [200] # to see an execution as expected
delay = delays_that_fails
for p in range(total / 6):
waves.append(pigpio.pulse(1 << gpio[0], 0, random.choice(delay)))
waves.append(pigpio.pulse(1 << gpio[1], 0, random.choice(delay)))
waves.append(pigpio.pulse(1 << gpio[2], 0, random.choice(delay)))
waves.append(pigpio.pulse(0, 1 << gpio[0], random.choice(delay)))
waves.append(pigpio.pulse(0, 1 << gpio[1], random.choice(delay)))
waves.append(pigpio.pulse(0, 1 << gpio[2], random.choice(delay)))
print 'pulses: {}'.format(len(waves))
pi.wave_clear()
iteration = 0
old_id = -1
# add-create-send-delete loop with a sleep delay after send
while wavepos <= len(waves) - blocksize:
new_wavepos = wavepos + blocksize
slice = waves[wavepos:new_wavepos]
wavepos = new_wavepos
iteration += 1
print 'iteration %s' % iteration
pi.wave_add_generic(slice)
new_id = pi.wave_create()
pi.wave_send_using_mode(new_id, pigpio.WAVE_MODE_ONE_SHOT_SYNC)
while pi.wave_tx_at() != new_id:
time.sleep(0.01)
if old_id >= 0:
print 'delete wave id %s' % old_id
pi.wave_delete(old_id)
old_id = new_id
pi.stop()
print 'complete'
I can also replicate this issue.
wave_clear or wave_delete does not work as expected, not sure which one.
I can't remember why I just labelled this without comment.
Did I reply by e-mail?
When a wave is deleted it is simply marked for deletion. Its resources will not be reused except under certain conditions. One of these is that if a new wave is created needing exactly the same resources then it will reuse the old space. It's safest to force all waves to use the same resources if you want to rapidly create/delete waves in a loop.
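The "same resources" condition can be made concrete: give every wave an identical pulse/delay structure so the next wave_create() can reuse the deleted slot. A standalone sketch under that assumption (the struct mirrors pigpio's gpioPulse_t so it compiles without pigpio.h; the real pigpio calls are only indicated in comments, and note that delays also affect the CB count, so they must match too, as discussed below):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Local mirror of pigpio's gpioPulse_t so this sketch compiles standalone. */
typedef struct { uint32_t gpioOn; uint32_t gpioOff; uint32_t usDelay; } pulse_t;

/* Build one rising/falling pair per step with a fixed delay structure.
   Every wave built this way needs the same CB/OOL resources (same pulse
   count, same pattern of non-zero masks, same delays), so a deleted
   wave's slot can be reused by the next wave_create(). */
size_t build_fixed_wave(pulse_t *out, unsigned gpio, size_t steps,
                        uint32_t on_us, uint32_t off_us)
{
    size_t n = 0;
    for (size_t i = 0; i < steps; i++) {
        out[n++] = (pulse_t){1u << gpio, 0, on_us};   /* rising edge  */
        out[n++] = (pulse_t){0, 1u << gpio, off_us};  /* falling edge */
    }
    return n;  /* always 2 * steps */
}
/* Real loop (sketch): wave_add_generic(slice); id = wave_create();
   wave_send_using_mode(id, ONE_SHOT_SYNC); ... wave_delete(old_id); */
```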
I'm not the original poster, so can't comment on that.
Here is my scenario:
wave_clear()
wave_add_generic() #returns 1700 pulses for instance
wid = wave_create()
wave_send_once(wid)
while wave_tx_busy():
time.sleep(0.1)
wave_delete(wid)
If I run this code again, wave_add_generic will return 3400, 5100 on the next one, etc...
No, I did not receive a mail.
In my case I was able to release the waveform resources after deletion when the number of control blocks was the same as the previous wave's. It was not deleted / reused if different.
The tricky part is that it's not only the number of pulses that determines the number of control blocks. The delay can also influence the number of CBs, depending on its length. So counting the pulses when building waves will not always work.
I think the issue here is that the documentation on wave_delete should at least have a note about the conditional deletion of the waves.
Umm, this appears to be a real issue that needs to be addressed. Try waveAddSerial() and you find that, unless you want to send the same message over and over, you quickly run out of resources and can no longer transmit. Is there something about the hardware that prevents waveDelete() from freeing the resources immediately?
The source is the documentation:
int gpioWaveDelete(unsigned wave_id)
{
DBG(DBG_USER, "wave id=%d", wave_id);
CHECK_INITED;
if ((wave_id >= waveOutCount) || waveInfo[wave_id].deleted)
SOFT_ERROR(PI_BAD_WAVE_ID, "bad wave id (%d)", wave_id);
waveInfo[wave_id].deleted = 1;
if (wave_id == (waveOutCount-1))
{
/* top wave deleted, garbage collect any other deleted waves */
while ((wave_id > 0) && (waveInfo[wave_id-1].deleted)) --wave_id;
waveOutBotCB = waveInfo[wave_id].botCB;
waveOutBotOOL = waveInfo[wave_id].botOOL;
waveOutTopOOL = waveInfo[wave_id].topOOL;
waveOutCount = wave_id;
}
return 0;
}
So resources are only freed if the most recently created wave is deleted. Ok, so now the issue is that you can't create a new wave while the current wave is transmitting, send the new wave synchronously, and then delete the wave that just completed. It would be really nice if this worked!
Also, gpioWaveClear() seems pretty worthless. Is there not a way to re-initialize the wave generator without terminating pigpio?
I have clarified the documentation for wave delete.
I am not planning any code changes in this area, mainly because I can think of no way of doing it better than it is currently done.
Please reopen if you know of a workable method of better reusing deleted waves.
@joan2937,
Thanks for looking into this and other issues in the pigpio backlog. It has been a while but the use case I have is being able to transmit a long waveform by breaking it into smaller segments then transmitting the segments back-to-back to reproduce a continuous waveform at the output. To do so, you need to create the next waveform segment, N, while the last segment, N-1, is transmitting. After detecting the status of waveform segment N-1 being completed, you need to delete N-1 and then begin creation of waveform segment N+1. As pigpio currently stands, wave delete removes the last (latest) wave, which in my example above is waveform N.
I understand that what I have described is the use case and not the solution. I'm hoping that you have an inspirational moment and come up with one.
I wouldn't hold your breath waiting for a fresh inspiration from me. I have given the area a considerable amount of thought. Reusing resources boils down to shifting DMA control blocks around while they are actually being used. It is too horrendous to contemplate. Have a look at the wave chain code which allows loops to be repeated a specific number of times. I couldn't write that again however hard I tried.
The best I can think of is making it easier to create waves of consistent size thus allowing their reuse.
Hello,
I came across this issue in my use case and I am looking into how to solve it. What does 'consistent size' mean in this context? Is the same length in microseconds and the same number of pulses enough?
Let's take an example. Suppose two waves of length 10 ms composed of pulses of 1 ms:
Wave1: off-off-on-on-off-off-on-on-off-off
Wave2: off-on-off-on-off-on-off-on-off-on
Are these two waves 'consistent in size'?
Thanks
@ingvardsen how did you match the number of used control blocks? I ran into the same issue and looking at the source code it seems that not only the CBS need to match but also the OOL for a reuse of a deleted wave. Since the OOL can't be determined from the python interface I wonder what your approach was to get around this limitation?
As @joan2937 mentioned a easy way to create waves of consistent size would be a a nice addition here. Are there any updates on this?
@Jul3k I did not solve the problem.
But in case you are working on a CNC or 3D-printing project, or other real-time pulse generation from Python code, have a look at PyCNC.
GitHub repos here.
@ingvardsen I do indeed work on a project for synchronized stepper movement. I started with PyCNC, but the problem is that it does not fit all the requirements. I found extending PyCNC difficult because it is already a very complete solution with everything included for a 3D printer. The library I wrote is small and actually works quite well with step frequencies up to 100 kHz for synchronized movements (using numpy) and a variable number of drives. The problem is that I discovered this limitation of pigpio only after most of the work was already done, during testing of the code with low accelerations. I really hope @joan2937 has an idea on how to get around this limitation.
@Jul3k , are you running the latest version? The reason I ask is there was a fix to #223 which may have had some influence on this issue.
If you want to get attention on this issue you should reopen it. Let me know if that is not possible since you are not the author.
@guymcswain, yes I am running version 74 and I am not able to reopen this issue.
from gpioWaveCreate:
if (waveInfo[i].deleted &&
(waveInfo[i].numCB == numCB) &&
(waveInfo[i].numBOOL == numBOOL) &&
(waveInfo[i].numTOOL == numTOOL))
{
/* Reuse the deleted waves resources. */
wid = i;
break;
}
from waveCBsOOLs:
numCB++;
for (i=0; i<numWaves; i++)
{
if (waves[i].gpioOn) {numCB++; numBOOL++;}
if (waves[i].gpioOff) {numCB++; numBOOL++;}
if (waves[i].flags & WAVE_FLAG_READ) {numCB++; numTOOL++;}
if (waves[i].flags & WAVE_FLAG_TICK) {numCB++; numTOOL++;}
numCB += waveDelayCBs(waves[i].usDelay);
}
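To help reason about when two waves "match", the counting logic quoted above can be mirrored in a standalone sketch. The waveDelayCBs() part is an assumption here: a zero delay adds no CBs (confirmed later in this thread), a short non-zero delay adds one, and very long delays need several:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define WAVE_FLAG_READ 1
#define WAVE_FLAG_TICK 2

/* Local mirror of rawWave_t so the sketch is self-contained. */
typedef struct { uint32_t gpioOn, gpioOff, usDelay, flags; } raw_wave_t;

typedef struct { int cbs, bools, tools; } resources_t;

/* Assumption: 0 CBs for a zero delay, 1 CB for a short non-zero delay;
   very long delays need several CBs (see waveDelayCBs in pigpio.c). */
static int delay_cbs(uint32_t us) { return us ? 1 : 0; }

/* Mirrors the waveCBsOOLs() loop quoted above: one base CB, then one
   CB+BOOL per non-zero on/off mask, one CB+TOOL per READ/TICK flag,
   plus the delay CBs. */
resources_t count_resources(const raw_wave_t *w, size_t n)
{
    resources_t r = {1, 0, 0};
    for (size_t i = 0; i < n; i++) {
        if (w[i].gpioOn)  { r.cbs++; r.bools++; }
        if (w[i].gpioOff) { r.cbs++; r.bools++; }
        if (w[i].flags & WAVE_FLAG_READ) { r.cbs++; r.tools++; }
        if (w[i].flags & WAVE_FLAG_TICK) { r.cbs++; r.tools++; }
        r.cbs += delay_cbs(w[i].usDelay);
    }
    return r;
}
/* Per the gpioWaveCreate snippet above, a deleted wave id is reused only
   when cbs, bools AND tools all match the deleted wave's counts. */
```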
so is it correct that not only the number of CBs but also the OOLs need to match?
So is it correct that not only the number of CBs but also the OOLs need to match?
I don't currently know the answer. But first, can we agree on a problem statement? I propose this one:
"Pigpio wave methods do not support the generation and transmission of continuous waves." Continuous wave generation requires a "create-sync-delete" loop that can run indefinitely without exhausting the CB or other DMA resources (within reason). I think this covers the use case for the OP as well as my own.
Please note that:
an easy way to create waves of consistent size would be a nice addition here
would not resolve what I'm describing as a continuous wave.
Given your situation fits this problem statement, I'm willing to put in the time to investigate if and how it can be accomplished. I believe this issue would then fit into the 'feature/enhancement' category as opposed to a fix.
I agree that this is the general use case and I would say that this is also a use case which would make pigpio a very powerful tool (it is already). I would be happy if you could look into this and I am also willing to support.
My understanding from what @joan2937 said is that it will be quite difficult to implement a feature that frees resources from deleted waves directly.
Let me explain why I was asking for a way to create waves of consistent size:
There could be two waves that each require half of the available resources. One acts as a buffer which is filled while the other one is being transmitted. The transmitted wave is then deleted and overwritten with the newly created wave occupying the same amount of resources. The waves could be padded so that they always require the same amount of resources independent from the pulses which have originally been added to them. This could repeat indefinitely.
A way to delete a wave and free the resources directly would of course be the more elegant and cleaner approach, which I would also prefer. I also see this as a feature/enhancement.
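The slicing half of that double-buffer idea can be sketched independently of the resource question (hypothetical helper; the real loop would hand each block to wave_add_generic()/wave_create()):

```c
#include <assert.h>
#include <stddef.h>

typedef struct { size_t start, count; } block_t;

/* Split a long pulse sequence into fixed-size blocks for the two-buffer
   rotation described above: while block N-1 transmits, block N is built
   in the half of the resources just freed. */
size_t plan_blocks(size_t total_pulses, size_t block_size,
                   block_t *out, size_t max_blocks)
{
    size_t n = 0;
    for (size_t pos = 0; pos < total_pulses && n < max_blocks;
         pos += block_size) {
        size_t left = total_pulses - pos;
        out[n].start = pos;
        out[n].count = (left < block_size) ? left : block_size; /* short tail */
        n++;
    }
    return n;
}
/* The short tail block would then need padding up to block_size — which,
   as the padding experiments further down show, is the non-trivial part. */
```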
There could be two waves that each require half of the available resources. One acts as a buffer which is filled while the other one is being transmitted.
I was thinking along the same line. But I was also assuming that it would not make sense to force fit it into the existing regime but to make a new mode perhaps. At the end of the day, it can't break existing functionality.
@joan2937 , if you could let us know if we're going down a bad path or give us other ideas to consider, we'll do the heavy lifting.
The ability to reuse a previous wave in-situ was an attempt to solve this problem. Obviously an insufficient attempt.
Wave resources are allocated from several blocks of memory. These blocks are easily fragmented as waves are created and deleted, especially if the resources required change. You cannot (rather, I know of no way to) move allocated blocks to defragment the space. In theory I suppose you could build up a wave from non-contiguous memory. I almost certainly would have done so if I had thought of this problem.
My current idea was to pad out waves so they would use a consistent size, that is what I was going to do. Perhaps it would be better now to address the root problem and think of a fresh method of allocation.
Two waves padded out to consume half the DMA resources, while not necessarily elegant may be the right approach.
Is there a way to pad the waves with the current version of PIGPIO?
I'm still coming up to speed on the waveform generation code. If you pad with a pulse of [0,0,0] pigpio will not consume a CB. You could try to pad with [0,0,1] within your waveform to experiment.
I had a similar problem and numpy and numba where a great help. Here is a link to some code I use for testing Stepper Puls Generation. Have a look at the wave_add_numpy_array function, this was a great step to speed up the generation process.
I'll check that out later, I'm in information overload :)
The control blocks of the DMA are natural linked list structures. While the CB resources are the critical factor in determining if a waveform 'can fit', there is this concept of OOL which are linear structures tied to the start and end of the CB chain. Therefore the CB must be structured linearly in memory. See this comment:
https://github.com/joan2937/pigpio/blob/ef48a043af0f54ecfcd8bcb2515c96ae1b13ccee/pigpio.h#L461-L472
... so the result of your padding experiment would be of great interest.
Ok, I tried what you suggested: using [0,0,0] gives no change in the consumed CBs. Padding with [0,0,1] increases the consumed CBs, but not by the number of added [0,0,1] pulses. I do not see a pattern in the added CBs, but the amount is always below the number of pulses added for padding.
https://github.com/joan2937/pigpio/blob/ef48a043af0f54ecfcd8bcb2515c96ae1b13ccee/pigpio.c#L2948-L2957
This might explain why no CBs are added for a zero delay
I don't see debug statements in the internal wave related functions. What specifically do you want to see? I could compile a version with a custom function to return some of the internal data. I'd rather wait though until I can get more understanding of the OOL data structure.
But it looks like your discovering some potentially useful information in your experiments!
I believe the OOLs also need to fit.
Yes, indeed both OOLs and CBs must match to reuse a wave id. This makes it quite challenging.
@joan2937 , you can unsubscribe to this issue and we will only pull you into the discussion when needed - as in now for this topic:
The waveform flags seem to only be accessible using the C/IF methods and are designated as "expert" and "not intended for general use". In the waveform data structures, the flags run from top to bottom, counter to normal waveform constructs. There is probably some mechanism that I'm not seeing to prevent writing the incoming 'level' and 'tick' data into the same buffer that has the outgoing OOLs. Do I understand this correctly so far?
And my real question is who uses the flags or where are they used? If they're not used, it may simplify the waveform data structures if they could be eliminated. Your thoughts please.
@guymcswain padding with [0,gpio,0] has an unwanted side effect: it adds an additional delay to the sequence. It kind of makes sense, because the DMA also needs cycles to execute the copy. 1000 such pulses at the start of the wave result in an additional delay of around 240 us. So [0,gpio,0] is also not a good option for padding. Currently I don't see any more options to perform the padding from the Python side. Hope you are having more luck ...
I have a proposed solution that I'm going to code and test over the next few days. gpioCreateWave(size) will allocate the number of CBs and OOLs for the wave by calling wave2Cbs(...) with parameters that are sized accordingly. wave2Cbs() will link the last CB to the end of the wave's allocated block. All waves will have an exact count of CB and OOL so waveCreate(size) will always find a place as long as a wave got deleted.
I'm sure I'll uncover some problems but it looks like the basic structures in place are suitable for this change. I'll push a 'feature branch' when I'm code complete and have begun testing so that you can do so in parallel.
size will be the proportion of the maximum resources. ie, size=4 would consume one quarter of the full resources available. I believe your application would want size=2.
@guymcswain
Are you referring to the following structure?
#define WAVE_FLAG_READ 1
#define WAVE_FLAG_TICK 2
typedef struct
{
uint32_t gpioOn;
uint32_t gpioOff;
uint32_t usDelay;
uint32_t flags;
} rawWave_t;
The READ flag is only used by rawWaveAddSPI.
The TICK flag is only used by some example code (pps.c) which generates a pulse on the second.
Thanks for pointing that out. I was just observing how the wave2Cbs() function was building the CBs and OOLs and noticed flags were in the mix.
The path I'm pursuing, if it works, should be compatible with flags.
@joan2937 , How do I compile without the built-in tests (x_pigpio.c) executing? I have deliberately broken the gpioWaveCreate() api but just want it to compile anyway.
I see now that it is the tests themselves that are failing to compile.
First problem:
#define CBS_PER_OPAGE 118
#define OOL_PER_OPAGE 79
The total CBs and OOLs are scaled by the above constants. To greatly simplify things, I need to force these constants to be equal. Let me know if this is not possible, and what relationship needs to be maintained between these two quantities.
@Jul3k , I have pushed an experimental branch, 'wavesize', which allows the creation of fixed size waves. I modified the script from the top of this thread to create 4 fixed size waves using:
new_id = pi.wave_create(4)
The script now runs to completion. I did not hook up my scope to observe the waveforms, yet.
Give it a try and let me know how your application responds.
Latest commit 58c5ad756bb751fab80619ac89d19fbe8bbd8829
Errata:
size parameter must be greater than 2
flags are not supported
@guymcswain, great job! Thank you! It seems to run perfectly after some testing. I can confirm that size must be greater than 2. Just a note from my side: The name size might be misleading as it does not really reflect the size. Maybe something like splits or max_waves would be better?
Can you observe your generated waveforms? I haven't put my test together to do so. I worry that the 'dummy' cb that got added at the end of the wave might have side effects.
@joan2937 ,
The READ flag is only used by rawWaveAddSPI.
Consistent with the above usage, can we generalize and say the usage of flags and ools are mutually exclusive within a given wave?
The TICK flag is only used by some example code (pps.c) which generates a pulse on the second.
I could not find the file pps.c. I assume it also uses flags exclusive of ools.
@guymcswain
I wouldn't make any assumptions about flags and ools being mutually exclusive.
I can only comment on my usage of the library. I'd be loath to remove anything someone else might be relying on.
I agree not to remove/break anything that users could be relying on. That said, users shouldn't rely on anything that is undocumented. I'm having difficulty finding anything in the documentation that explains the usage of flags with waves. If no such documentation exists, then it should be fair game to change, but then nail it down with documentation. It's good to think of the documentation of the APIs as a contract.
Forgive my rambling, but, I do think there are valid use cases for flags in waves. To my understanding, flags in waves allow you to create a sort of "input wave" where you can get time stamped level information similar to what the alertEmit() thread does for notifications. Mutual exclusion would still allow these cases.
What puzzles me is a use case for a simultaneous output (ools) and input (flags) waveform. Also, it appears there may be a conflict with both waves using a common OOL buffer.
@Jul3k , which part of that comment are you reacting to?
@guymcswain, I didn't react to your comment. I tested your addition today running 4 steppers simultaneously with microstepping and it worked perfectly. I did not see or hear any jerk. I also looked at the timings with a logic analyser and saw no jitter. Step frequencies up to 100kHz for infinite time 😃
@guymcswain, yes probably. I'm gonna have a look at it tomorrow.
A new function is possible, but it is a lot of work to change all the clients' code and documentation.
@guymcswain, it is not possible to have variadic functions that also accept no parameters, see this question on Stack Overflow. I would go for a new function wave_set_resource_splits(int n) or a dedicated wave_create function. What is your opinion?
Perhaps
gpioWaveCreatePad(int percent) C
wave_create_and_pad(percent) Python
wave_create_and_pad(percent) pigpiod_if2
WVCAP percent pigs
Where the percent gives the percentage of the resources to use (in terms of the theoretical maximum, not the current amount free).
So if you wanted to have three waves being rotated you could pad each to 25% leaving another 25% free for any other waves which might be wanted.
Should it be a percent or a fraction (total number of waves)? Percent has the problem that e.g. having 3 waves would require setting 33.3333 percent, or mean losing some resources.
I would be happier with a percent. In practice I reckon we are only talking about a few waves. The percent could be specified out of a million as done for hardware PWM dutycycle.
There are going to be wasted resources whatever happens with padding. It is not always trivial to create a wave which fills a particular hole (if it was there would be no need for this function).
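For illustration, a parts-per-million argument (as used for hardware PWM dutycycle) sidesteps the 33.3333%-for-three-waves rounding problem. A hypothetical sketch of how a padded wave's CB budget could be derived; the actual parameter name and scaling are still being decided in this thread:

```c
#include <assert.h>
#include <stdint.h>

/* Budget in CBs for one padded wave, given a parts-per-million share of
   the total pool.  Hypothetical: pigpio does not expose this calculation;
   it only shows why millionths round more gracefully than whole percent. */
uint32_t padded_cbs(uint32_t total_cbs, uint32_t ppm)
{
    /* widen before multiplying to avoid 32-bit overflow */
    return (uint32_t)(((uint64_t)total_cbs * ppm) / 1000000u);
}
```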
Ok, thanks for direction on APIs. Do we have agreement on this statement:
"Standard waves and padded waves can coexist, but padded waves do not support flags."
Would flags in principle allow creating a callback once a certain pulse has been transmitted by the DMA? There might be a use for a callback once a certain position has been reached, in the example of stepper motor control.
From what I can see in the rawWaveAddSpi() or rawWaveAddGeneric(), flags allow a wave to be constructed that collects system tick or input levels (while simultaneously outputting levels?). I don't think it gives us insight into the position of the transmission.
I added the functions as @joan2937 recommended and included your changes @guymcswain https://github.com/Jul3k/pigpio/commits/wavesize
Ok, thanks for direction on APIs. Do we have agreement on this statement:
"Standard waves and padded waves can coexist, but padded waves do not support flags."
That's fine by me. If you think they could be added in the future I would just say "do not currently support flags". Otherwise it's fine as is.
@Jul3k , Thanks I'll take a look at that shortly.
@joan2937 , I need your thoughts on the handling of flags. The code snip below, I believe, shows the crux of the problem with flags on padded waves:
https://github.com/joan2937/pigpio/blob/58c5ad756bb751fab80619ac89d19fbe8bbd8829/pigpio.c#L9619-L9628
The middle condition constrains the shared memory pool of OOLs to be non-overlapping between the allocation for numBOOL (OOLs that DMA writes to gpio registers) and numTOOL (OOLs that DMA reads from the gpio register or system timer).
I initially tried to allocate half the OOL resource to BOOL and half to TOOL, but the OOL resources were consumed too quickly. Meaning it failed for waves padded to 50% (and possibly down to 25%, but I can't remember). Nevertheless, it misses the 'sweet spot' for padded wave applications in my opinion. So in the current experiment I just allocated everything to BOOL and nothing to TOOL. This is what I mean by flags are not supported.
Just because I can't imagine a use case for flags in the case of generating continuous output waves doesn't mean we shouldn't support flags in padded waves - I've been wrong before. So should we decide to support flags in padded waves, we would need to add arguments to specify the proportion of CBs, BOOLs and TOOLs in the new APIs.
@Jul3k , nice! To enable both of us to collaborate on the same branch, you can open a PR from your fork against the 'develop' branch in pigpio. That way we both should be able to pull/push as needed until we are ready to merge.
I'm out the remainder of today. My thoughts are that we should look to make the APIs capable of accepting the additional parameters in the future - (%[, %, %]).
'develop' or 'wavesize'?
'develop'. Eventually this is where it will land.
| gharchive/issue | 2017-04-24T11:32:40 | 2025-04-01T06:44:37.283066 | {
"authors": [
"Jul3k",
"ffleandro",
"guymcswain",
"ingvardsen",
"joan2937",
"nachoplus"
],
"repo": "joan2937/pigpio",
"url": "https://github.com/joan2937/pigpio/issues/126",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
155510144 | Measure PWM timing
Hi,
I read over the docs two times now and couldn't figure out how to measure the timing of an incoming pulse. In the arduino world, there is a function called pulseIn() for that purpose. Do you provide a similar functionality, which I missed somehow?
Thanks
One way to do it would be with gpioSetAlertFunc.
All pulseIn does is a busy spin waiting on pulse changes. It then uses the number of cycles to calculate the pulse length.
You can do the same if you are happy with a busy spin, but use gpioTick() to record the microsecond tick at the start and end of the pulse (this rather assumes you are using C).
The above will be the most accurate but is wasteful.
If you are happy to get results within 10 µs then just use callbacks. A callback gives you the GPIO, level, and the tick. So pulseIn becomes: record the tick, wait for the next callback; the current tick minus the last tick is the pulse duration.
There are plenty of examples in http://abyz.co.uk/rpi/pigpio/examples.html
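A sketch of the arithmetic behind that callback approach, including the one subtlety worth knowing: pigpio's microsecond tick wraps roughly every 71.6 minutes, and unsigned 32-bit subtraction handles that automatically. This is a hypothetical offline helper; in real code each (level, tick) pair would arrive via gpioSetAlertFunc():

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Given a recorded stream of edge events (level, tick), emit the width of
   every pulse that starts at `state` and ends at the opposite level.
   The unsigned subtraction stays correct across the tick wrap-around. */
size_t pulse_widths(const int *levels, const uint32_t *ticks, size_t n,
                    int state, uint32_t *widths_out)
{
    size_t nw = 0;
    for (size_t i = 0; i + 1 < n; i++) {
        if (levels[i] == state && levels[i + 1] != state)
            widths_out[nw++] = ticks[i + 1] - ticks[i];  /* wrap-safe */
    }
    return nw;
}
```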
Here is the Arduino pulseIn code.
/*
wiring_pulse.c - pulseIn() function
Part of Arduino - http://www.arduino.cc/
Copyright (c) 2005-2006 David A. Mellis
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General
Public License along with this library; if not, write to the
Free Software Foundation, Inc., 59 Temple Place, Suite 330,
Boston, MA 02111-1307 USA
$Id: wiring.c 248 2007-02-03 15:36:30Z mellis $
*/
#include "wiring_private.h"
#include "pins_arduino.h"
/* Measures the length (in microseconds) of a pulse on the pin; state is HIGH
* or LOW, the type of pulse to measure. Works on pulses from 2-3 microseconds
* to 3 minutes in length, but must be called at least a few dozen microseconds
* before the start of the pulse. */
unsigned long pulseIn(uint8_t pin, uint8_t state, unsigned long timeout)
{
// cache the port and bit of the pin in order to speed up the
// pulse width measuring loop and achieve finer resolution. calling
// digitalRead() instead yields much coarser resolution.
uint8_t bit = digitalPinToBitMask(pin);
uint8_t port = digitalPinToPort(pin);
uint8_t stateMask = (state ? bit : 0);
unsigned long width = 0; // keep initialization out of time critical area
// convert the timeout from microseconds to a number of times through
// the initial loop; it takes 16 clock cycles per iteration.
unsigned long numloops = 0;
unsigned long maxloops = microsecondsToClockCycles(timeout) / 16;
// wait for any previous pulse to end
while ((*portInputRegister(port) & bit) == stateMask)
if (numloops++ == maxloops)
return 0;
// wait for the pulse to start
while ((*portInputRegister(port) & bit) != stateMask)
if (numloops++ == maxloops)
return 0;
// wait for the pulse to stop
while ((*portInputRegister(port) & bit) == stateMask) {
if (numloops++ == maxloops)
return 0;
width++;
}
// convert the reading to microseconds. The loop has been determined
// to be 20 clock cycles long and have about 16 clocks between the edge
// and the start of the loop. There will be some error introduced by
// the interrupt handlers.
return clockCyclesToMicroseconds(width * 21 + 16);
}
Oh, I must have been blind. Sorry for the trivial question. It works like a charm!
| gharchive/issue | 2016-05-18T14:06:08 | 2025-04-01T06:44:37.290424 | {
"authors": [
"fivdi",
"infusion",
"joan2937"
],
"repo": "joan2937/pigpio",
"url": "https://github.com/joan2937/pigpio/issues/64",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
700465342 | I encountered the following error when building, how to solve it?
qianxu@qianxu-VirtualBox:~/桌面/3dsident-i18n-zh-cn$ make
make[1]: Entering directory '/home/qianxu/桌面/3dsident-i18n-zh-cn/console'
system.c
/home/qianxu/桌面/3dsident-i18n-zh-cn/console/../common/system.c: In function 'System_GetNANDLocalFriendCodeSeed':
/home/qianxu/桌面/3dsident-i18n-zh-cn/console/../common/system.c:252:9: error: writing 1 byte into a region of size 0 [-Werror=stringop-overflow=]
252 | buf[6] = '\0';
| ~^~
/home/qianxu/桌面/3dsident-i18n-zh-cn/console/../common/system.c:213:22: note: at offset 6 to an object with size 0 allocated by 'malloc' here
213 | char *buf = (char *)malloc(6);
| ^~~~~~~
cc1: all warnings being treated as errors
make[2]: *** [/opt/devkitpro/devkitARM/base_rules:85:system.o] Error 1
make[1]: *** [Makefile:207:all] Error 2
make[1]: Leaving directory '/home/qianxu/桌面/3dsident-i18n-zh-cn/console'
make[1]: Entering directory '/home/qianxu/桌面/3dsident-i18n-zh-cn/gui'
system.c
/home/qianxu/桌面/3dsident-i18n-zh-cn/gui/../common/system.c: In function 'System_GetNANDLocalFriendCodeSeed':
/home/qianxu/桌面/3dsident-i18n-zh-cn/gui/../common/system.c:252:9: error: writing 1 byte into a region of size 0 [-Werror=stringop-overflow=]
252 | buf[6] = '\0';
| ~^~
/home/qianxu/桌面/3dsident-i18n-zh-cn/gui/../common/system.c:213:22: note: at offset 6 to an object with size 0 allocated by 'malloc' here
213 | char *buf = (char *)malloc(6);
| ^~~~~~~
cc1: all warnings being treated as errors
make[2]: *** [/opt/devkitpro/devkitARM/base_rules:85:system.o] Error 1
make[1]: *** [Makefile:207:all] Error 2
make[1]: Leaving directory '/home/qianxu/桌面/3dsident-i18n-zh-cn/gui'
make: *** [Makefile:4:all] Error 2
qianxu@qianxu-VirtualBox:~/桌面/3dsident-i18n-zh-cn$
This hasn't been built for over 2 years. I'd suggest removing the "-Werror" cflag from the makefile and see if it still compiles. It will still produce warnings however. This honestly has a lot of room for improvement.
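For what it's worth, the warning that -Werror turns fatal points at a real off-by-one: malloc(6) yields valid indices 0..5, yet the code writes buf[6]. Presumably the intended string is six characters, in which case the allocation should be seven bytes to hold the terminating NUL (alternatively, terminate at buf[5] if only five characters are needed). A hypothetical reduction of the pattern and its fix:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Reduced form of the pattern in System_GetNANDLocalFriendCodeSeed:
   a 6-character string needs 7 bytes including its NUL terminator. */
char *six_char_string(void)
{
    char *buf = (char *)malloc(7);   /* was malloc(6) in the original */
    if (buf == NULL) return NULL;
    memcpy(buf, "ABCDEF", 6);        /* stand-in for the formatted seed */
    buf[6] = '\0';                   /* now in bounds */
    return buf;
}
```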
Thank you. But I encountered a new error.
qianxu@qianxu-VirtualBox:~/桌面/3DSident$ make
make[1]: Entering directory '/home/qianxu/桌面/3DSident/console'
main.c
hardware.c
system.c
/home/qianxu/桌面/3DSident/console/../common/system.c: In function 'System_GetNANDLocalFriendCodeSeed':
/home/qianxu/桌面/3DSident/console/../common/system.c:252:9: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
252 | buf[6] = '\0';
| ~^~
/home/qianxu/桌面/3DSident/console/../common/system.c:213:22: note: at offset 6 to an object with size 0 allocated by 'malloc' here
213 | char *buf = (char *)malloc(6);
| ^~~~~~~
storage.c
utils.c
misc.c
wifi.c
kernel.c
fs.c
ac.c
am.c
actu.c
linking 3DSident.elf
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: kernel.o:/home/qianxu/桌面/3DSident/console/../common/fs.h:6: multiple definition of 'archive'; system.o:/home/qianxu/桌面/3DSident/console/../common/fs.h:6: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: fs.o:/home/qianxu/桌面/3DSident/console/../common/fs.h:6: multiple definition of 'archive'; system.o:/home/qianxu/桌面/3DSident/console/../common/fs.h:6: first defined here
collect2: error: ld returned 1 exit status
make[2]: *** [/opt/devkitpro/devkitARM/3ds_rules:42:/home/qianxu/桌面/3DSident/console/3DSident.elf] Error 1
make[1]: *** [Makefile:207:all] Error 2
make[1]: Leaving directory '/home/qianxu/桌面/3DSident/console'
make[1]: Entering directory '/home/qianxu/桌面/3DSident/gui'
sprites.t3s
C2D_helper.c
menus.c
config.c
main.c
menu_control.c
textures.c
hardware.c
system.c
/home/qianxu/桌面/3DSident/gui/../common/system.c: In function 'System_GetNANDLocalFriendCodeSeed':
/home/qianxu/桌面/3DSident/gui/../common/system.c:252:9: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
252 | buf[6] = '\0';
| ~^~
/home/qianxu/桌面/3DSident/gui/../common/system.c:213:22: note: at offset 6 to an object with size 0 allocated by 'malloc' here
213 | char *buf = (char *)malloc(6);
| ^~~~~~~
storage.c
utils.c
misc.c
wifi.c
kernel.c
fs.c
ac.c
am.c
actu.c
linking 3DSident-GUI.elf
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menus.o:/home/qianxu/桌面/3DSident/gui/include/common.h:8: multiple definition of 'exitJmp'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/common.h:8: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menus.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: multiple definition of 'sizeBuf'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menus.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: multiple definition of 'dynamicBuf'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menus.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: multiple definition of 'staticBuf'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menus.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:15: multiple definition of 'RENDER_BOTTOM'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:15: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menus.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:15: multiple definition of 'RENDER_TOP'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:15: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'volumeIcon'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'cursor'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_home'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Cstick'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Cpad'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Dpad'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_ZR'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_ZL'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_R'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_L'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Start_Select'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Y'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_X'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_B'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_A'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:6: multiple definition of 'drive_icon'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:6: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:6: multiple definition of 'banner'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:6: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/common.h:8: multiple definition of 'exitJmp'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/common.h:8: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: multiple definition of 'sizeBuf'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: multiple definition of 'dynamicBuf'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: multiple definition of 'staticBuf'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:15: multiple definition of 'RENDER_BOTTOM'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:15: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: main.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:15: multiple definition of 'RENDER_TOP'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:15: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'volumeIcon'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'cursor'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_home'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Cstick'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Cpad'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Dpad'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_ZR'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_ZL'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_R'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_L'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Start_Select'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Y'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_X'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_B'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_A'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:6: multiple definition of 'drive_icon'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:6: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:6: multiple definition of 'banner'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:6: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: multiple definition of 'sizeBuf'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: multiple definition of 'dynamicBuf'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: multiple definition of 'staticBuf'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:16: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:15: multiple definition of 'RENDER_BOTTOM'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:15: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: menu_control.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:15: multiple definition of 'RENDER_TOP'; C2D_helper.o:/home/qianxu/桌面/3DSident/gui/include/C2D_helper.h:15: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'volumeIcon'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'cursor'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_home'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Cstick'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Cpad'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Dpad'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_ZR'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_ZL'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_R'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_L'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Start_Select'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_Y'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_X'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_B'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: multiple definition of 'btn_A'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:7: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:6: multiple definition of 'drive_icon'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:6: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: textures.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:6: multiple definition of 'banner'; menus.o:/home/qianxu/桌面/3DSident/gui/include/textures.h:6: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: kernel.o:/home/qianxu/桌面/3DSident/gui/../common/fs.h:6: multiple definition of 'archive'; system.o:/home/qianxu/桌面/3DSident/gui/../common/fs.h:6: first defined here
/opt/devkitpro/devkitARM/bin/../lib/gcc/arm-none-eabi/10.2.0/../../../../arm-none-eabi/bin/ld: fs.o:/home/qianxu/桌面/3DSident/gui/../common/fs.h:6: multiple definition of 'archive'; system.o:/home/qianxu/桌面/3DSident/gui/../common/fs.h:6: first defined here
collect2: error: ld returned 1 exit status
make[2]: *** [/opt/devkitpro/devkitARM/3ds_rules:42:/home/qianxu/桌面/3DSident/gui/3DSident-GUI.elf] Error 1
make[1]: *** [Makefile:207:all] Error 2
make[1]: Leaving directory '/home/qianxu/桌面/3DSident/gui'
make: *** [Makefile:4:all] Error 2
qianxu@qianxu-VirtualBox:~/桌面/3DSident$
Some variable is probably redefined somewhere. I don't really maintain this anymore.
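For anyone hitting this: the "multiple definition" errors look like the well-known GCC 10 behavior change. Variables defined (not just declared) in headers such as fs.h, textures.h, and C2D_helper.h get one definition per .c file that includes them, and GCC 10 links with -fno-common by default, so the linker now rejects the duplicates. A minimal sketch of the usual fix follows; the variable name `archive` matches the log, but the `int` type and helper function are illustrative stand-ins, not the project's real declarations:

```c
/* In the real project the cure is split across two files, roughly:
 *
 *   fs.h:  extern FS_Archive archive;   // declaration only
 *   fs.c:  FS_Archive archive;          // the one definition
 *
 * Shown here in a single translation unit so it compiles standalone. */
extern int archive;   /* what belongs in the header: a declaration */
int archive = 0;      /* what belongs in exactly one .c file: the definition */

/* Tiny accessor, just to exercise the variable. */
int archive_is_zero(void) {
    return archive == 0;
}
```

The same pattern applies to every symbol in the log (exitJmp, sizeBuf, the btn_* textures, and so on): keep `extern` declarations in the headers and move each definition into a single .c file. Adding `-fcommon` to CFLAGS would also paper over it, but fixing the headers is the cleaner route.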
| gharchive/issue | 2020-09-13T02:47:07 | 2025-04-01T06:44:37.361611 | {
"authors": [
"joel16",
"qianxu2001"
],
"repo": "joel16/3DSident",
"url": "https://github.com/joel16/3DSident/issues/27",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
} |
29062481 | Allow BigDecimal and BigInteger to be used for number and integer properties
My JSON schema has geo coordinates, which have been mapped as follows:
"lat": {
"description": "Latitude",
"type": "number",
"required" : true
},
"lng": {
"description": "Longitude",
"type": "number",
"required" : true
}
These get transformed into Double. Can a configuration option be added to transform them into BigDecimal? Precision is critical for me.
The Jackson ObjectMapper also has a feature
DeserializationFeature.USE_BIG_DECIMAL_FOR_FLOATS
Hello,
the "useDoubleNumbers" configuration (developped by abloomston) could be added a future release ?
Thanks,
Hi @mifernandez, useDoubleNumbers has been included in this plugin since 0.4.0.
Hi,
Sorry, I meant to ask for 'useBigDecimalNumbers', which does not seem to be in the latest version.
+1 for this. Useful for money-related things.
+1 we need this too, same reason like rozhok
| gharchive/issue | 2014-03-09T21:02:20 | 2025-04-01T06:44:37.369602 | {
"authors": [
"akushe",
"joelittlejohn",
"mifernandez",
"rozhok",
"xandrox"
],
"repo": "joelittlejohn/jsonschema2pojo",
"url": "https://github.com/joelittlejohn/jsonschema2pojo/issues/161",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
115585952 | Missing classes ? when generating Java from a JSON document (not schema)
I concatenated all my JSON instances into a single file and ran it through:
sh jsonschema2pojo.sh -s all.json -T JSON -t X3D
I looked at the resulting class files and noticed that some stuff was missing. Could you have a look? One thing that was missing was Sphere. JSON is here: https://github.com/coderextreme/x3djson/blob/master/all.json.zip?raw=true
Thanks!
Hi John. When jsonschema2pojo encounters an array inside an example JSON file, it uses the first array value to decide the array's type. I notice that you seem to have concatenated different JSON objects into a single array. Only the first object will be read.
We don't support arrays that contain varying types, since these would require polymorphic deserialization.
This is an assumption about what's wrong here. If you have an example that you don't think is explained by the above, it would be useful if you could reduce it to a minimal example (rather than the full 0.5GB JSON file).
It's easy to create examples; you can just take elements out of the array. But here's my original source for the JSON files I concatenated together: http://www.web3d.org/x3d/content/examples/X3dExampleArchivesJsonScenes.zip. I had to filter out the JSON files that wouldn't parse. I can provide a zip with only the parseable files.

I think, however, there is an issue that you bring up about polymorphic deserialization. We might have a Geometry class which contains an IndexedFaceSet, Box, Sphere, Cone, or Cylinder, but only one of them at a time. I see from the generated Java code that it only has IndexedFaceSet, so your explanation is good. Do you know any JSON-to-JSON-Schema tools which support this?

Also, I'm interested in recursive JSON schemas, if you know a tool which does both. I have Transforms within Transforms, and I only want one resulting Transform class.
Thanks,
John
Attached is a single example that doesn't seem to do the right thing: no Shape, no Sphere, and multiple Child classes (with underscores). Thanks. If the attachment doesn't go through, I can put it on GitHub with a link.
John
Yes, no attachment I don't think
https://raw.githubusercontent.com/coderextreme/x3djson/master/SeaStarGroup.json
Thanks,
John
Multiple Child*.java classes might be acceptable. I am not so sure about other repeated classes, and not Transforms.
I can try to find an example that generates multiple Transforms, if that will help.
John
Looks like the first example in the file has multiple transforms. Here it is:
https://github.com/coderextreme/x3djson/blob/master/CleatClamp.json
John
Here’s an example of the worse form of nesting Transforms we have:
https://raw.githubusercontent.com/coderextreme/x3djson/master/RedPlayer.json — probably way too many Child.java and Transform.java classes.
Even if you don't handle polymorphism, can you notify me in the Java code or in an error log, and I can deal with it? Also, if possible, generate the subclasses of the unknown superclass without generating the inheritance hierarchy. I need the classes generated, please; having them will help immensely. I know what the superclass is: it comes from the XSD and the standard specification. Unfortunately, so far we are not generating the JSON schema from the XSD, as there are differences between the JSON and the XML. We prefix attributes, comments and arrays with @, #, - in the JSON
John
Can you suggest a good JSON Schema generator that works from multiple JSON files and is suitable for your jsonschema2pojo program?
As you can see from my issues (coderextreme), I'm not having much luck with the schema files I try. Either no Java files are generated, or only one file is generated.
Thanks,
John
JSONs:
{
  "person" : {
    "name" : "sandeep",
    "id" : 12,
    "address" : {
      "add1" : "ab",
      "add2" : "cd",
      "state" : "up",
      "country" : "India"
    }
  }
}
{
  "employee" : {
    "name" : "rajesh",
    "id" : 13,
    "address" : {
      "add1" : "xy",
      "state" : "ap"
    }
  }
}
classes:
public class Address {
  private String add1;
  private String add2;
  private String state;
  private String country;
  // setters and getters
}
public class Address_ {
  private String add1;
  private String state;
}
Rather than creating multiple classes (with the "_" suffix), is there any way to merge into the existing class if we get another "address" JSON object in another JSON file?
JSONFiles.pdf
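The merge being asked for, i.e. one Address class with the union of fields seen across files, could be sketched like this (hypothetical helper, not part of jsonschema2pojo):

```python
def merge_object_fields(*objects):
    """Union the fields of every 'address' object seen, keeping the first
    observed type for each field (sketch of the requested merge)."""
    merged = {}
    for obj in objects:
        for field, value in obj.items():
            merged.setdefault(field, type(value).__name__)
    return merged

a = {"add1": "ab", "add2": "cd", "state": "up", "country": "India"}
b = {"add1": "xy", "state": "ap"}
print(merge_object_fields(a, b))
# all four fields end up in a single class instead of Address and Address_
```

Fields missing from one file would simply be nullable in the merged class.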
Let's discuss this on #441
| gharchive/issue | 2015-11-06T20:54:10 | 2025-04-01T06:44:37.429999 | {
"authors": [
"coderextreme",
"joelittlejohn",
"sandeepmeenuga"
],
"repo": "joelittlejohn/jsonschema2pojo",
"url": "https://github.com/joelittlejohn/jsonschema2pojo/issues/439",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1091998377 | #92 Use dnd-kit, drop sortable-hoc
#92
Tests passing locally
| gharchive/pull-request | 2022-01-02T11:12:24 | 2025-04-01T06:44:37.434845 | {
"authors": [
"joepio"
],
"repo": "joepio/atomic-data-browser",
"url": "https://github.com/joepio/atomic-data-browser/pull/129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1900092855 | Handle array access AST creation
Added handling for AST creation of array access.
@ankit-privado As discussed, handle assigning typeFullName to the call node, and add respective unit tests for the same.
Also add unit tests inside io.joern.go2cpg.dataflow.ArrayDataflowTests covering the use cases handled in this PR.
| gharchive/pull-request | 2023-09-18T03:18:49 | 2025-04-01T06:44:37.435982 | {
"authors": [
"ankit-privado",
"pandurangpatil"
],
"repo": "joernio/joern",
"url": "https://github.com/joernio/joern/pull/3663",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1997101022 | [x2cpg] Unification of Lambda Naming
Added io.joern.x2cpg.AstCreatorBase.nextClosureName to generate names for closures/lambdas/anonymous functions.
Chose the naming scheme following Kotlin/Python using <lambda>0, <lambda>1, etc. due to its low likelihood of collisions with real source code method naming schemes, and because it doesn't include special regex characters.
Replaced naming conventions with this unified one in:
c2cpg
kotlin2cpg
javasrc2cpg
jssrc2cpg
php2cpg
pysrc2cpg
The result is that all lambdas in the CPG now share the same naming scheme.
Resolves #3792
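A rough sketch of the counter scheme in Python (the real implementation lives on AstCreatorBase in Scala; names here are illustrative):

```python
class AstCreator:
    """One creator per file, so the closure counter restarts at 0 per file."""
    def __init__(self, filename):
        self.filename = filename
        self._closure_counter = 0

    def next_closure_name(self):
        name = f"<lambda>{self._closure_counter}"
        self._closure_counter += 1
        return name

creator = AstCreator("Foo.scala")
print(creator.next_closure_name())  # <lambda>0
print(creator.next_closure_name())  # <lambda>1
print(AstCreator("Bar.scala").next_closure_name())  # <lambda>0 again
```

Because the counter is per-creator rather than global, names are stable as long as each file is processed by a single creator without internal concurrency.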
Wait.. I guess with this approach the names are not stable across multiple runs/files in parallel. Did you check this? Maybe I am wrong here.
That's a good point.
Apologies for the close. I fumbled and accidentally hit "Close with comment" instead of just comment
Yeah, it's bound to AstCreator, so it'll restart at 0 for each file, and as long as there is no concurrency within that task level it should be fine. A similar pattern was followed in Kotlin and Java with the key-pools.
@maltek FYI: when this goes live (i.e., jssrc2cpg-internal is also updated) the sptests expectations for js2cpg and jssrc2cpg will differ a bit more from each other.
Not just sptests, that's fine. (Nobody cares about the method names in those...)
This will have some customer impact for us since stable method full names are a requirement for tracking findings across different scans. @bbrehm you're the fingerprint guy - will this only affect findings where the source or sink is a lambda, or also any findings where it's anywhere on the path? (I think most of those frontends aren't considered GA yet, so we might still be able to get away with such a change. But it requires a discussion.)
I don't particularly like that this is per-file now everywhere. At least jssrc had a per-method counter for this, which means that a code change of a lambda only affected stability of the fullnames within that method - the fullnames in the rest of that file staying as they were before. Ideally, we'd move all frontends in that direction instead of the opposite one.
Alternatively, we could add a modifier like CLOSURE to unify this "special" type of method, similar to https://github.com/joernio/joern/pull/3826? The main idea of this issue and PR is to really be able to easily separate this from other methods without considering every unique frontend's naming scheme.
Either solution works for me, the modifier direction may be less intrusive.
This is not a pressing issue, it's more of a larger project towards cleaning up. Lambda and module-defining methods seem to follow different patterns in each frontend. For the CPG to be more of a uniform abstraction, I'd say that these structures should either share the same naming conventions or have a defining property.
for the use-case of finding all lambdas, a modifier feels cleaner to me anyway than a regex search :+1: (Though I would bikeshed the naming a bit - if it's about unnamed functions, I would go for something like ANONYMOUS or LAMBDA. After all, named functions can also be closures / close over variables from an outer scope.)
It still would be nice if we could unify these names, just to have things cleaner... with fullnames that's just breaking a deep assumption on the qwiet side :(
@maltek Let me keep this PR open for a while and prioritize the modifier then. That way, we can migrate to checking the modifier to retrieve these first.
https://github.com/ShiftLeftSecurity/codepropertygraph/pull/1746
Related PR https://github.com/joernio/joern/pull/3842
I'd like to merge this at some point tomorrow to conclude the unification of lambdas in the CPG. Then this week can conclude the uniform representation of lambdas in the CPG.
| gharchive/pull-request | 2023-11-16T15:16:16 | 2025-04-01T06:44:37.445839 | {
"authors": [
"DavidBakerEffendi",
"johannescoetzee",
"maltek",
"max-leuthaeuser"
],
"repo": "joernio/joern",
"url": "https://github.com/joernio/joern/pull/3831",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
416461411 | not an issue, just saying good work
good work on this, will feature it in our weekly python on hardware newsletter. feel free to delete/close issue!
Thanks!
I appreciate that coming from you guys! You rock!
| gharchive/issue | 2019-03-03T01:23:47 | 2025-04-01T06:44:37.450423 | {
"authors": [
"joewez",
"ptorrone"
],
"repo": "joewez/WifiMarquee",
"url": "https://github.com/joewez/WifiMarquee/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1153997597 | handle invalidParameters error response
When order_ref is non-existing, the BankID API returns an error message with the additional attributes errorCode and details, like:
{"errorCode":"invalidParameters","details":"No such order"}
This should fix the problem.
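A minimal sketch of handling that body (key names taken from the example above; this is not the gem's actual code):

```python
import json

class BankIDError(Exception):
    """Raised for error bodies like the one in the report."""

def parse_response(body):
    """Treat bodies carrying errorCode as failures instead of data."""
    data = json.loads(body)
    if "errorCode" in data:
        raise BankIDError(f"{data['errorCode']}: {data.get('details', '')}")
    return data

try:
    parse_response('{"errorCode":"invalidParameters","details":"No such order"}')
except BankIDError as err:
    print(err)  # invalidParameters: No such order
```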
Thanks! Sorry for letting your PR rot here - I noticed this myself and fixed it, didn't get the notification I guess :(
| gharchive/pull-request | 2022-02-28T11:50:34 | 2025-04-01T06:44:37.458577 | {
"authors": [
"johanhalse",
"morkevicius"
],
"repo": "johanhalse/bankid",
"url": "https://github.com/johanhalse/bankid/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
39467372 | Interactive transcript feature/plugin for MediaElement.js - accessibility/ usability
I have a proposed solution/patch... -- relates to #1262.
Features:
Highlight word/phrase in transcript as audio/video is played,
Search for word, and seek to that point in media,
Click on word/phrase to jump to that point in media (in PROGRESS),
Auto-scroll of the transcript as media plays (in PROGRESS).
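The first three features all reduce to a mapping between time cues and words; a tiny illustrative sketch (made-up cue data, not the plugin's API):

```python
cues = [(0.0, "Interactive"), (0.8, "transcripts"), (1.6, "help"), (2.4, "everyone")]

def word_at(t):
    """Highlight support: which word is being spoken at playback time t."""
    current = None
    for start, word in cues:
        if start <= t:
            current = word
    return current

def seek_to(word):
    """Click/search support: jump to the start time of a word."""
    for start, w in cues:
        if w == word:
            return start
    return None

print(word_at(1.0))     # transcripts
print(seek_to("help"))  # 1.6
```

In practice the cue data would come from the same timed-text track (e.g. WebVTT) that drives the captions.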
@nfreear I have a question, since I'm not very familiar with accessibility: why do we need a transcript when we have closed captioning?
Closing this issue since no answer has been posted in more than 2 weeks. If you'd like to reopen this, let us know. Thanks!
| gharchive/issue | 2014-08-04T22:22:20 | 2025-04-01T06:44:37.523961 | {
"authors": [
"nfreear",
"ron666"
],
"repo": "johndyer/mediaelement",
"url": "https://github.com/johndyer/mediaelement/issues/1264",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
166006838 | Fix way pluginVersions is being looped
Fix for #1656
@johndyer Please check the last commit; I don't know why it's showing previous commits when I pushed this request branch. Thanks
Closing PR since the issue couldn't be reproduced
| gharchive/pull-request | 2016-07-18T01:30:09 | 2025-04-01T06:44:37.525798 | {
"authors": [
"ron666"
],
"repo": "johndyer/mediaelement",
"url": "https://github.com/johndyer/mediaelement/pull/1778",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
296915577 | Team editor fixes
Edit team details now loads data and saves properly. I also removed a file that appeared to be unused and it was a bit confusing (my-teams.js).
Fixes issue: https://github.com/johnneed/GreenUpVermont/issues/11
| gharchive/pull-request | 2018-02-13T22:53:53 | 2025-04-01T06:44:37.569134 | {
"authors": [
"smaraf"
],
"repo": "johnneed/GreenUpVermont",
"url": "https://github.com/johnneed/GreenUpVermont/pull/41",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
163558489 | Munging and progress indicator
Needs a graphical indicator that it is loading. Also there is munging to force the aspect ratio, and there should be an indicator to reflect that.
Done
| gharchive/issue | 2016-07-03T11:36:05 | 2025-04-01T06:44:37.574967 | {
"authors": [
"johnny-morrice"
],
"repo": "johnny-morrice/webdelbrot",
"url": "https://github.com/johnny-morrice/webdelbrot/issues/10",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1437325349 | after format, eslint(vue/html-indent) show warning
after format:
after eslint fix:
how can I get the right indent like eslint do with just format?
use just eslint.
prettier is bullshit!!!
https://antfu.me/posts/why-not-prettier
| gharchive/issue | 2022-11-06T09:15:55 | 2025-04-01T06:44:37.594760 | {
"authors": [
"transtone"
],
"repo": "johnsoncodehk/volar",
"url": "https://github.com/johnsoncodehk/volar/issues/2097",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
297945092 | We need a better error for corruption in the password DB
We need a better error for corruption in the password DB
AFAICT, the only time corruption is identified is when either FileBackingStore::line_has_user or FileBackingStore::hash_is_locked discovers that a line is missing a password. (v0.8.0 has another case, which I added in 0.8.0 and took back out in a71ae76 because it's not very important.)
At that point, both of them return BackingStoreError::MissingData, which is correct. We could rename it to DataCorruption if you like, but otherwise, I think we're set.
If either of them finds a bad line, the rest of the database is suspect anyway, so barfing is the right thing to do.
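For illustration, the check both functions effectively perform can be sketched as follows (the user:hash line format is assumed here, and this is Python rather than the crate's Rust):

```python
def split_password_line(line):
    """Reject lines missing a password, the MissingData case described above."""
    parts = line.split(":", 1)
    if len(parts) < 2 or not parts[1]:
        raise ValueError("MissingData: line has no password field")
    user, password_hash = parts
    return user, password_hash

print(split_password_line("alice:$2b$12$abcdef"))  # ('alice', '$2b$12$abcdef')
```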
| gharchive/issue | 2018-02-16T22:54:49 | 2025-04-01T06:44:37.638663 | {
"authors": [
"therealbstern"
],
"repo": "jolhoeft/websession.rs",
"url": "https://github.com/jolhoeft/websession.rs/issues/37",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
56660769 | jolici just stop itself in the middle
➜ Project git:(feature/ci) jolici run -vvv
Creating builds...
1 builds created
Running job php = 5.4
Step 0 : FROM jolicode/php54:latest
---> Using cache
---> a3d266dc994f
Step 2 : ADD . $WORKDIR
---> 6cd615595474
Removing intermediate container ad9ba4567285
Step 3 : WORKDIR $HOME/project
---> Running in 55c778a47f92
---> 8d8068fd451e
Removing intermediate container 55c778a47f92
Step 4 : RUN sudo chown travis:travis -R $HOME/project
---> Running in 7486cbd87b2a
➜ Project git:(feature/ci) ✗
I have no idea why it stops here.
@joelwurtz you need to upgrade to docker-php ~0.4.0
Here is another build:
➜ Project git:(feature/ci) ✗ jolici run -vvv
Creating builds...
1 builds created
Running job php = 5.4
Step 0 : FROM jolicode/php54:latest
---> 15d056b9de11
Step 1 : ENV WORKDIR $HOME/project
---> Using cache
---> a3d266dc994f
Step 2 : ADD . $WORKDIR
Do you still encounter the issue with the last release ?
I'm closing this issue, if you still have the bug don't hesitate to reopen it.
| gharchive/issue | 2015-02-05T11:54:20 | 2025-04-01T06:44:37.641247 | {
"authors": [
"Nek-",
"joelwurtz"
],
"repo": "jolicode/JoliCi",
"url": "https://github.com/jolicode/JoliCi/issues/43",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1567362697 | More features for planes: Landing gears, jetways, more planes, Airport utilities
Hello MTR Mod.
I would like to add more features for planes for the MTR Mod.
1. Landing Gears
In the mod, the planes don't have landing gear, as shown here.
I would like to make it so that when planes are on the ground, the landing gear is shown.
During taxiing and takeoff the landing gear is shown; after takeoff it is retracted, and then when landing it shows up again.
2. Jetways
You need to add Plane Platforms to Minecraft Transit Railway.
Make a jetway (could be any block or just Airstairs)
Use the Platform Station Rail to make the plane stop at the gate!
Place 3 plane platforms in the exit of the jetway that leads to the plane (max height 3 blocks)
Enjoy!

3. Add more planes + edit the A320 to add Business Class
Add more planes to MTR Mod and add Business Class to A320.
A320-200
2-2 business class configuration
2-2 economy class configuration
1. B737-800
2-2 business class configuration
2-2 economy class configuration
2. A330-300
1-2-1 business class configuration
2-3-2 economy class configuration
3. B757-200ER
2-2 business class configuration
3-3 economy class configuration
4. B787-8
1-2-1 first class configuration
1-2-1 business class configuration
2-4-2 economy class configuration
5. B777-300ER
1-2-1 first class configuration
1-2-1 business class configuration
2-4-2 economy class configuration
6. A350-900
1-2-1 first class configuration
1-2-1 business class configuration
2-4-2 economy class configuration
Plane / airport utilities
Add airport utilities for MTR Mod.
Airport Signs
You can enter text and icons and add more lines; up to 3 text lines can be added.
Airport PIDS
This shows all the flights on the gates (or platforms), departures and arrivals.
Airport Ticket Check-In, Airport Ticket entrance/arrival
This allows you to get airport tickets with emeralds or money.
Build a check-in/go to the check-in counters
Buy a ticket of any flight
Go enter the airport with your ticket (Airport Ticket Enterance)
Exit with your airport ticket when arrival (Airport Ticket Arrival)
Done!
I think you can add more features! Thanks!
Add more planes + edit the A320 to add Business Class
Once again, stop asking for new vehicles to be added (trains, planes, or other transit vehicles). 👎
If you want more transit vehicle types, create a resource pack yourself.
If you don't know how to create a resource pack, or really want Jonathan to add the vehicles you want to the mod officially, support him on Patreon.
I can't believe you are so rude to me, trying to reject my suggestion. I am just adding ideas when you are trying to ruin people's ideas. Try to be considerate.
I can't believe you are rude to me and others. I am just making suggestions while you are floating around ruining people's ideas. Try to be more considerate!
Making suggestions doesn't mean asking for something impossible, unreal, or against the rules, OK?
First, on both Discord and MTRBBS it is already mentioned that only Patreon supporters can ask for new vehicles to be added.
Second, why don't you think about the reason there is a tool called the resource pack creator?
Third, being considerate doesn't mean accepting everything, especially things that break the rules, understand?
Other than the plane requests, the other items seem to be purely cosmetic. Closing this for now.
| gharchive/issue | 2023-02-02T05:46:46 | 2025-04-01T06:44:37.666496 | {
"authors": [
"52PD",
"Neon4UltraPlays",
"chortle3",
"jonafanho"
],
"repo": "jonafanho/Minecraft-Transit-Railway",
"url": "https://github.com/jonafanho/Minecraft-Transit-Railway/issues/618",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1390761753 | Skip serializing None with method?
Not sure if this fits here, but is there a way to optionally skip serializing None?
Example,
I have a query that is select * that I want every value to show up null or not.. But if select id, location, whatever I'd like only those 3 to be serialized... Seems like an easy thing to do, I'm probably just not looking at the problem the right way..
This question seems to belong to the https://github.com/serde-rs/serde/ repository and not here. Further, you are very sparse on details, so helping you is hard right now. Please check here: https://github.com/jonasbb/serde_with/blob/master/CONTRIBUTING.md#reporting-bugs
But I try to answer anyway: Skipping Nones is already using a function #[serde(skip_serializing_if = "Option::is_none")]. If you need more flexibility, you can always write the Serialize implementation by hand. It is quite easy, and you can use the derived implementation as start.
If you are unfamiliar with serde, you should take a look over the documentation at https://serde.rs/ since it explains the available attributes and manual implementation.
Yeah the skip_serializing_if isn't flexible enough. Trying to avoid writing my own serialize impl because it seems like something that should be able to be done already, or if not should be implemented, because being able to skip None sometimes and not others is useful in many situations.
It can be done already, by writing your own Serialize. There is no reason to shy away from it, if the derive stuff is not flexible enough. You definitely can use skip_serializing_if to "skip sometimes and not others". You can provide any function as argument there and the function can access whatever they want, be it global state, environment values, thread locals, etc.
I'm going to close the issue though, since problems with the derive macros cannot be fixed here.
Any examples you know of that use actix_web and pass the request to serialize by chance?
This struct has 112 fields that all need to be serialized based on whether the request has a select= in the query string and what is in it if it does...
Figured out a way to do it, but I don't like it. It's not pretty. Basically create a Value that has either "result": MyStruct or a "result": Value that has been paired down to selected fields..
| gharchive/issue | 2022-09-29T12:09:06 | 2025-04-01T06:44:37.674561 | {
"authors": [
"jonasbb",
"letto4135"
],
"repo": "jonasbb/serde_with",
"url": "https://github.com/jonasbb/serde_with/issues/520",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1043638756 | 🛑 Nextcloud is down
In ecf39f0, Nextcloud (https://nextcloud.jonasled.de) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Nextcloud is back up in fa51fd6.
| gharchive/issue | 2021-11-03T14:03:10 | 2025-04-01T06:44:37.677436 | {
"authors": [
"jonasled"
],
"repo": "jonasled/status",
"url": "https://github.com/jonasled/status/issues/320",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1293337215 | 🛑 Cultuurcentrum is down
In 12af4ae, Cultuurcentrum (https://cultuurcentrum.mechelen.be/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Cultuurcentrum is back up in e691548.
| gharchive/issue | 2022-07-04T15:53:11 | 2025-04-01T06:44:37.680084 | {
"authors": [
"jonassalen"
],
"repo": "jonassalen/uptime-mechelen",
"url": "https://github.com/jonassalen/uptime-mechelen/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2465107431 | GetX 5 web refresh blank screen
Using the example_nav2 example project, if I do one of the following two things, every route except /home shows a blank screen after refreshing the browser.
If I change to Flutter 3.22 stable's default web/index.html, like this (no matter debug or build):
<body>
<script src="flutter_bootstrap.js" async></script>
</body>
but not if I use the old index file:
<body>
<!-- This script installs service_worker.js to provide PWA functionality to
application. For more information, see:
https://developers.google.com/web/fundamentals/primers/service-workers -->
<script>
var serviceWorkerVersion = null;
var scriptLoaded = false;
function loadMainDartJs() {
if (scriptLoaded) {
return;
}
scriptLoaded = true;
var scriptTag = document.createElement('script');
scriptTag.src = 'main.dart.js';
scriptTag.type = 'application/javascript';
document.body.append(scriptTag);
}
if ('serviceWorker' in navigator) {
// Service workers are supported. Use them.
window.addEventListener('load', function () {
// Wait for registration to finish before dropping the <script> tag.
// Otherwise, the browser will load the script multiple times,
// potentially different versions.
var serviceWorkerUrl = 'flutter_service_worker.js?v=' + serviceWorkerVersion;
navigator.serviceWorker.register(serviceWorkerUrl)
.then((reg) => {
function waitForActivation(serviceWorker) {
serviceWorker.addEventListener('statechange', () => {
if (serviceWorker.state == 'activated') {
console.log('Installed new service worker.');
loadMainDartJs();
}
});
}
if (!reg.active && (reg.installing || reg.waiting)) {
// No active web worker and we have installed or are installing
// one for the first time. Simply wait for it to activate.
waitForActivation(reg.installing ?? reg.waiting);
} else if (!reg.active.scriptURL.endsWith(serviceWorkerVersion)) {
// When the app updates the serviceWorkerVersion changes, so we
// need to ask the service worker to update.
console.log('New service worker available.');
reg.update();
waitForActivation(reg.installing);
} else {
// Existing service worker is still good.
console.log('Loading app from service worker.');
loadMainDartJs();
}
});
// If service worker doesn't succeed in a reasonable amount of time,
// fallback to plain <script> tag.
setTimeout(() => {
if (!scriptLoaded) {
console.warn(
'Failed to load app from service worker. Falling back to plain <script> tag.',
);
loadMainDartJs();
}
}, 4000);
});
} else {
// Service workers not supported. Just drop the <script> tag.
loadMainDartJs();
}
</script>
</body>
No matter whether web/index.html is changed or not, if I build web with wasm:
flutter build web --wasm
Screenshots:
+1
| gharchive/issue | 2024-08-14T07:30:48 | 2025-04-01T06:44:37.684170 | {
"authors": [
"Ssiswent",
"fisforfaheem"
],
"repo": "jonataslaw/getx",
"url": "https://github.com/jonataslaw/getx/issues/3168",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1419521004 | Limit search result to city
What would it take to extend the plugin with an option to limit search results to cities, for example named citycodes?
Nominatim has an option for this called city=<city>, as explained here, so it seems possible to add it to the OSM provider.
I also saw a viewbox parameter in the ol-ext SearchNominatim control, so another option would be to add an extent parameter.
The Nominatim docs do not specify citycode as a limiting parameter.
I have provided the viewbox parameter in https://github.com/Dominique92/ol-geocoder
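For reference, both restriction styles map onto documented Nominatim query parameters; a sketch of building such a request URL (parameter names from the Nominatim search API, the helper itself is hypothetical):

```python
from urllib.parse import urlencode

def nominatim_search_url(query, city=None, viewbox=None):
    """viewbox + bounded=1 limits results to an extent; city is a
    structured-search field (structured fields and free-form q can't be mixed)."""
    if city:
        params = {"street": query, "city": city, "format": "json"}
    else:
        params = {"q": query, "format": "json"}
    if viewbox:
        params["viewbox"] = ",".join(str(v) for v in viewbox)
        params["bounded"] = 1
    return "https://nominatim.openstreetmap.org/search?" + urlencode(params)

print(nominatim_search_url("Main Street", viewbox=(2.2, 48.9, 2.4, 48.8)))
```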
Closed in https://github.com/Dominique92/ol-geocoder/releases/tag/v4.3.0
| gharchive/issue | 2022-10-22T21:40:57 | 2025-04-01T06:44:37.687816 | {
"authors": [
"Dominique92",
"geraldo"
],
"repo": "jonataswalker/ol-geocoder",
"url": "https://github.com/jonataswalker/ol-geocoder/issues/260",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
67325274 | I select day19 but the log is day18
It logs the past day...
Since this is a very vague problem I am not quite sure what you mean, but if you are just printing out the NSDate provided by the calendarDidDateSelected function of the JTCalendarDataSource delegate, you have to watch out for your time zone. By default NSDate's description variable of the Printable protocol converts the date to UTC, which means that if you are in a timezone like Berlin (GMT+1) it will print out 2014-11-29 23:00:00 +0000 even though you selected the 30th of November.
Concluding, this is not a bug but a general problem of date handling that you need to solve in your application. If I got your problem wrong, feel free to specify it more accurately.
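The same pitfall is easy to reproduce in any language; here it is in Python for illustration (analogous to the NSDate case, not the iOS API itself):

```python
from datetime import datetime, timezone, timedelta

berlin = timezone(timedelta(hours=1))  # GMT+1, as in the Berlin example
selected = datetime(2014, 11, 30, 0, 0, tzinfo=berlin)  # user taps Nov 30
print(selected.astimezone(timezone.utc))  # 2014-11-29 23:00:00+00:00
```

Printing the UTC representation shifts the calendar day back by one, even though the selected instant is unchanged.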
If you log the date using [NSDateFormatter localizedStringFromDate:date dateStyle:NSDateFormatterMediumStyle timeStyle:NSDateFormatterShortStyle] you'll see that @steilerDev is OK ;-)
Don't rely on [date description] which NSLog uses, it takes into account your local timezone, use NSDateFormatter.
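The effect is easy to reproduce outside of iOS; here is a minimal Python sketch (not JTCalendar code) of the same instant printed in UTC versus a GMT+1 zone:

```python
from datetime import datetime, timezone, timedelta

# One instant: 23:00 UTC on Nov 29, which is already Nov 30 in GMT+1.
instant = datetime(2014, 11, 29, 23, 0, tzinfo=timezone.utc)
gmt_plus_1 = timezone(timedelta(hours=1))  # e.g. Berlin in winter

utc_day = instant.strftime("%Y-%m-%d")                           # what the raw log shows
local_day = instant.astimezone(gmt_plus_1).strftime("%Y-%m-%d")  # the day the user selected
```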
| gharchive/issue | 2015-04-09T09:57:21 | 2025-04-01T06:44:37.695756 | {
"authors": [
"jonathantribouharet",
"malaimoo",
"sendoa",
"steilerDev"
],
"repo": "jonathantribouharet/JTCalendar",
"url": "https://github.com/jonathantribouharet/JTCalendar/issues/120",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
197560287 | Date displayed incorrectly
It's because you don't manage the view correctly, the top calendar doesn't have a good height and overstep on the 2nd calendar.
You have to show 6 rows of days, not 5, because some months span 6 weeks.
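A quick way to see why 6 rows are needed (an illustrative Python check, not JTCalendar code):

```python
import calendar

# Week rows needed to render each month of 2020 (weeks start on Monday).
rows_per_month = {m: len(calendar.monthcalendar(2020, m)) for m in range(1, 13)}
max_rows = max(rows_per_month.values())  # some months need 6 rows
```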
| gharchive/issue | 2016-12-26T08:54:26 | 2025-04-01T06:44:37.697562 | {
"authors": [
"Guodadada",
"jonathantribouharet"
],
"repo": "jonathantribouharet/JTCalendar",
"url": "https://github.com/jonathantribouharet/JTCalendar/issues/322",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1183934367 | Example models in protobuf_flow/Services.yaml aren't recognized as valid YAML
Example models in protobuf_flow/Services.yaml aren't recognized as valid YAML.
Example to recreate:
$ aac puml-component python/model/protobuf_flow/Services.yaml
gen_plant_uml: parser_failure
Failed to parse /workspace/AaC/python/python/model/protobuf_flow/Services.yaml
provided content was not YAML
/workspace/AaC/python/python/model/protobuf_flow/Services.yaml
AC:
[ ] Identify the invalid YAML cause
[ ] Correct the issue
[ ] Add a test to demonstrate the fix and prevent regression
[ ] -or- create an issue to address the DSL design issues and change
This error has been resolved.
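The thread does not record the root cause. For reference, one frequent cause of "provided content was not YAML" failures is tab-indented lines, which YAML forbids; a stdlib-only sketch (illustrative, not part of AaC) that flags them:

```python
def tab_indented_lines(text):
    """Return 1-based line numbers whose indentation contains a tab.

    YAML forbids tabs in indentation, a common cause of parse failures.
    """
    bad = []
    for number, line in enumerate(text.splitlines(), start=1):
        indent = line[:len(line) - len(line.lstrip())]
        if "\t" in indent:
            bad.append(number)
    return bad

# Hypothetical snippet where the third line is tab-indented.
sample = "model:\n  name: Services\n\tbehavior: []\n"
flagged = tab_indented_lines(sample)
```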
| gharchive/issue | 2022-03-28T20:11:53 | 2025-04-01T06:44:37.702435 | {
"authors": [
"Coffee2Bits"
],
"repo": "jondavid-black/AaC",
"url": "https://github.com/jondavid-black/AaC/issues/276",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
28301039 | Hooks after message is processed
Created the following hooks that are executed after the message is processed:
on_ack
on_timeout
on_error
on_reject
on_requeue
Sneakers.configure(hooks: { on_timeout: ->{ puts "Timeout happened" } })
or
class TestWorker
include Sneakers::Worker
from_queue 'test', hooks: { on_timeout: ->{ puts "Timeout happened" } }
def work(msg)
end
end
Any tips on how to properly test this?
hi, what's the progress?
@jondot I just rebased this against master, are you willing to merge this?
What's the use-case for this? Don't handlers already do this?
Please rebase this against master if it's still relevant.
Use-cases for this need to be discussed for this because it overlaps too much with handlers. Closing for now.
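One way to unit-test such hooks (a generic Python sketch of the idea, not the Sneakers implementation) is to register a lambda that records its invocations and assert on the recorded events:

```python
# Minimal hook dispatcher: hooks are looked up by event name and fired
# after a message outcome, so a test can inject a recording lambda.
class Hooks:
    def __init__(self, **hooks):
        self._hooks = hooks

    def fire(self, event, *args):
        hook = self._hooks.get(event)
        if hook is not None:
            hook(*args)

events = []
hooks = Hooks(on_timeout=lambda: events.append("timeout"))
hooks.fire("on_timeout")   # registered: records the event
hooks.fire("on_ack")       # not registered: silently a no-op
```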
| gharchive/pull-request | 2014-02-26T01:20:59 | 2025-04-01T06:44:37.706046 | {
"authors": [
"adisos",
"gabrieljoelc",
"michaelklishin",
"rodrigosaito"
],
"repo": "jondot/sneakers",
"url": "https://github.com/jondot/sneakers/pull/29",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2598527515 | 🛑 NO - Norway is down
In c645709, NO - Norway (https://www.skyshowtime.com/no/help/) was down:
HTTP code: 404
Response time: 1068 ms
Resolved: NO - Norway is back up in fae81c3 after 2 minutes.
| gharchive/issue | 2024-10-18T23:02:49 | 2025-04-01T06:44:37.708624 | {
"authors": [
"jonesyriffic"
],
"repo": "jonesyriffic/gsp-sst",
"url": "https://github.com/jonesyriffic/gsp-sst/issues/685",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
375757324 | Ability to display multiple images on the news page
The news page can only display one image (as a thumbnail). Provide the ability to display more images in the text.
@htadeusiak, this should be fixed with the new rich text editor, right?
Yes. However, if you want to position them in specific spots in the article we'll probably need to look into a CKEditor plug-in.
https://ckeditor.com/cke4/addon/easyimage
^ ^ this is the one I was looking at
I would like to get CKEditor working on the production site before we start adding plugins.
Definitely.
I am un-assigning myself from this after speaking with Jon. I never really attempted this issue. However, I do know that this can be easily achieved once CKEditor is installed, as stated above.
Yes, I think the solution is to use the CKEditor plugin that @htadeusiak got working in a local branch.
This is now fixed with the CKEditor. You can add arbitrary number of images to the News post (with alt text support, yay!). Thanks @htadeusiak.
| gharchive/issue | 2018-10-31T01:10:02 | 2025-04-01T06:44:37.716558 | {
"authors": [
"higherdefender",
"htadeusiak",
"jonfroehlich"
],
"repo": "jonfroehlich/makeabilitylabwebsite",
"url": "https://github.com/jonfroehlich/makeabilitylabwebsite/issues/683",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
188489250 | Little bounce animation
Is it possible to do a bounce animation of the side menu to remind the user about it when the view controller is shown?
Thanks
@Fr3E it's possible but not trivial. There's no property in SideMenu that does this or enables you to add it in a simple way. To do it yourself you would either need to write your own custom transition or override SideMenuTransition (SideMenu uses custom transitions to be displayed).
You can learn more about custom transitions here: https://developer.apple.com/library/content/featuredarticles/ViewControllerPGforiPhoneOS/CustomizingtheTransitionAnimations.html
| gharchive/issue | 2016-11-10T11:54:45 | 2025-04-01T06:44:37.721325 | {
"authors": [
"Fr3E",
"jonkykong"
],
"repo": "jonkykong/SideMenu",
"url": "https://github.com/jonkykong/SideMenu/issues/97",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
71272853 | error
Attempted to call method "getLocale" on class "JonnyW\PhantomJs\Message\Request".
Why does this appear?
This looks like something to do with the framework you are using and not the library itself.
| gharchive/issue | 2015-04-27T12:41:18 | 2025-04-01T06:44:37.729008 | {
"authors": [
"jonnnnyw",
"webspin"
],
"repo": "jonnnnyw/php-phantomjs",
"url": "https://github.com/jonnnnyw/php-phantomjs/issues/56",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |