| added (string) | created (timestamp[us]) | id (string) | metadata (dict) | source (string) | text (string) |
|---|---|---|---|---|---|
2025-04-01T06:38:05.847139
| 2017-08-05T15:51:09
|
248192316
|
{
"authors": [
"apodkutin",
"chkal",
"djbehnke",
"edysli"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4335",
"repo": "brettwooldridge/HikariCP",
"url": "https://github.com/brettwooldridge/HikariCP/issues/950"
}
|
gharchive/issue
|
PrometheusMetricsTrackerFactory should support custom registries
Prometheus collectors can typically be registered either in the default registry or in a user supplied registry. As the default registry is basically a singleton maintained in a static field of Prometheus' CollectorRegistry, using it in non-trivial class loader scenarios (like in a Java EE servers) can be problematic. Therefore, I think that providing support for custom registries is very valuable.
I'm aware of the problems discussed in #940 and #851 regarding the difficulty of maintaining single instances of collectors as required by the Prometheus Client API, but I think there must be a way to fix it. If we force users to use a single MetricsTrackerFactory for each CollectorRegistry, this could actually be very simple.
There should also be a way to deregister collectors in a clean way. Something like PrometheusMetricsTrackerFactory.close(). It is important to support this especially if you create and destroy pools at runtime.
I would love to work on this and provide a pull request if you are interested. But we should merge #940 first, as it already improves the overall structure of the code.
Hi @chkal !
Could you please have a look on this PR https://github.com/brettwooldridge/HikariCP/pull/1331 ?
I've introduced the ability to deregister collectors when the connection pool is shutting down.
@apodkutin Thanks. I'll try to find some time in the next few days to have a deeper look at this. But I'm not very familiar with the details, so I may not be the best person for a review.
Now that #1331 has been merged and released, can this issue be closed?
Now that #1331 has been merged and released, can this issue be closed?
I second this. @brettwooldridge Please close this issue.
|
2025-04-01T06:38:05.872274
| 2015-08-12T15:44:04
|
100573408
|
{
"authors": [
"brianc",
"edudutra"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4336",
"repo": "brianc/node-sql",
"url": "https://github.com/brianc/node-sql/pull/255"
}
|
gharchive/pull-request
|
Fix #224 - aliased column in function
Fixed the aliased column in function call issue. When in a function call, columns must not have an alias. Added some tests as well.
Hi @brianc. More conflicts resolved. I think this one is good to merge too
:dancer:
|
2025-04-01T06:38:06.446454
| 2016-06-30T00:03:50
|
163054266
|
{
"authors": [
"brianchirls",
"johanbelin",
"tencircles"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4337",
"repo": "brianchirls/Seriously.js",
"url": "https://github.com/brianchirls/Seriously.js/issues/126"
}
|
gharchive/issue
|
Jitter when animating via transformNode.scale
Hey!
I'm currently trying to put together a project using masking and we've run across an issue wherein setting scale on a transformNode causes a visible jitter.
This behavior seems to become worse the more nodes are added to the graph.
Essentially we are running a requestAnimationFrame loop which sets a new value using
transformNode.scale(value)
on every frame. What we observe is either stale values, or conflicting values within the transform node.
I've attached a video of the observed bug. This doesn't seem to be performance related as the page runs at a solid 60fps in the test.
I'd be happy to try to contribute a fix if you can point me in the right direction in the source. Any help on this one is greatly appreciated as I've got a delivery coming up very soon.
jitter_capture.mp4.zip
Hey @tencircles. That's interesting. Do you have a reduced code sample you can share? Maybe a jsbin, codepen or something live like that?
Does it only happen with whatever effects you're using for masking?
Have you tried it in Firefox as well?
Does it make a difference if you change the resolution of the source image/video?
Hey @brianchirls, thanks for the response!
I've set up a test repo here
https://github.com/tencircles/seriously_mask_test
The javascript code is here:
https://github.com/tencircles/seriously_mask_test/blob/master/js/index.js
And you can find the code live here:
http://<IP_ADDRESS>/seriously_mask_test/
The jitter seems to be most prominent when using the translate method of the transform2d node.
Steps to reproduce:
Check do_translate
Set start/end_translate to any values
Click 'tween'
There seems to be an observable disparity between the numbers we send (viewable under 'translate' in the GUI), and the result on screen. You can see both the scale and translate input values on screen.
You can find a screen cap showing the disparity between the input values and the on-screen result here. If you step through frame by frame it's really easy to see.
http://<IP_ADDRESS>/seriously_mask_test/captures/chrome_counter.mp4
The behavior seems to be the same in firefox and in safari.
Tried changing images, didn't have any effect. Seems to be limited to the transform node, not sure if the behavior is present in other nodes.
Let me know if I can dig anything up!
Okay, first of all, this is a pretty cool-looking composition. I'm looking forward to seeing the finished product.
This is a very strange one. But there is definitely something going wrong here - you're not imagining things. ;-) I'm not able to replicate your problem on my machine, but I can see in your video that the "A" is moving backwards, which it obviously should not do. I've run CPU profiles, timeline recordings and screen captures; I even tried CPU throttling. Apart from a few janky frames, this does not appear to be a performance issue. That wouldn't explain backwards movement anyway.
I wonder if something weird might be going on with TweenMax. It's hard for me to tell for sure, because the code is minified and I'm not familiar with the API or behavior of that library. In theory, it might be possible that two requestAnimationFrame cycles are running and fighting over setting different values on the same target object. A later value might be smaller than the previous one, and the difference might be < 1/1000, in which case you would not see it show up in your scale value text. But the difference might be magnified enough to be visible in the canvas by rounding of pixel positions and/or floating point quirks. I know it sounds like a bit of a stretch, but it's all I got at the moment.
Are you sure you're cleaning up the timeline properly?
Is it possible that this only happens after you've gone through the animation at least once? Or does it happen on the first time you play through it?
Since I can't replicate, I can't do the debugging for you, but I can point you in the right direction. I suggest two things:
Set a conditional breakpoint at line 5933 of seriously.js with this code: translateX < 499 && x < translateX. If it breaks, you'll know that the incoming value is less than the one before it. (This will only work if you let the animation play all the way to the end.) You can see in the call stack where that happened. If it never breaks and you still see the A going backwards, that will tell us we need to look deeper into Seriously.
Try making a reduced test without TweenMax or even dat.gui. Write your own code in the callback passed to seriously.go() to determine the x value of the translation. If you still have the problem, then we'll know for sure the bug is in Seriously.js. If not, then it's more likely that something is going on with TweenMax.
Can you report the results back here?
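For reference, a rough sketch of what that second suggestion could look like (a minimal, untested sketch; the node wiring and element selectors here are assumptions based on the routing described in this thread, not code from the repo):

```javascript
// Reduced test sketch: drive the x translation from the seriously.go()
// callback only, with no TweenMax or dat.gui in the loop.
var seriously = new Seriously();
var source = seriously.source('#letter');   // hypothetical <img> element
var transform = seriously.transform('2d');
var target = seriously.target('#canvas');   // hypothetical <canvas> element

transform.source = source;
target.source = transform;

var start = Date.now();
seriously.go(function () {
  // x only ever increases, so any backwards movement on screen cannot
  // be coming from the values being fed in.
  var x = Math.min((Date.now() - start) / 10, 500);
  transform.translate(x, 0);
});
```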
Set a conditional breakpoint at line 5933 of seriously.js with this code: translateX < 499 && x < translateX. If it breaks, you'll know that the incoming value is less than the one before it. (This will only work if you let the animation play all the way to the end.) You can see in the call stack where that happened. If it never breaks and you still see the A going backwards, that will tell us we need to look deeper into Seriously.
Just tried this out, but the breakpoint doesn't trigger. You can also seek frame by frame in the video above to see the values being passed to seriously. You'll see the values on screen going up constantly, but the result rendered jumps back.
It's very difficult to see with the naked eye, so this might be occurring for you just not quite as noticeable perhaps? For me most of the time it looks like slight performance jank, but at a 60fps screen cap you can actually see it's moving backwards.
Try making a reduced test without TweenMax or even dat.gui. Write your own code in the callback passed to seriously.go() to determine the x value of the translation. If you still have the problem, then we'll know for sure the bug is in Seriously.js. If not, then it's more likely that something is going on with TweenMax.
Rather than coding that manually, I just added all values sent to seriously to an array with timestamp + value. Since all values we send are sent through a single point (line 112 in the example above), we can be 100% sure that nothing else is setting values or calling methods within seriously. Checking the values sent both manually and programmatically, it looks like we are sending values which steadily increase over time. If you want to have a look at the numbers, let me know.
Any ideas on where to look within seriously to try to catch this? Stale matrix values maybe?
Just pinging this. Any clue where we could start looking?
I was only just able to replicate this. Earlier efforts, even with screen recording, didn't show the problem.
It's not jank, because that wouldn't be moving backwards. Also seems unlikely that stale matrix values would cause it to go backwards. I checked most of the matrix values of the different nodes, and they all seem correct.
Another hunch I had is that something weird is happening with ping-ponging textures. Some effects do that, but not the ones you're using. I suppose it's possible that something weird is going on with buffering in the GPU driver, though unlikely. What machines have you tested this on?
I did notice that you're using a lot of "layer" effects that don't seem necessary, either because they only have a single source input or they have 2 sources but only one of them is in use. Maybe there's something going on there? Is there a reason for that? Maybe if you can eliminate those nodes and it fixes the problem, that might get you far enough to deliver your product and it'll give me/us a starting point to hunt down whatever bug.
Hello Brian,
We've run some stress tests to try and see which nodes specifically could be causing the issue.
For each of these tests, we animate the letter by calling the letter transform node's translate method within the callback passed to seriously.go().
All tests have jittering when animating. The jittering is sometimes limited, sometimes erratic, seemingly without taking into account the level of complexity. The video captures in the zip have all been made on Chrome OS X, the behavior is exactly the same in Safari. On Chrome Windows, the jittering is still present but much more subtle.
We haven't been able to identify what worsens or improves the jittering, as the behavior seems to be random. To rule out any possible external causes, we are not updating the translate method's x position with TweenMax anymore.
To answer your question, we are using layer nodes so that we can control each layer's opacity level independently.
You can download the zip and run the directory on a local server.
Here's the routing for each of these tests, in order of complexity.
simple_test_step1_noReformat:
letter source image => letter transform node
target canvas (source: letter transform node)
simple_test_step1:
letter source image => letter reformat node => letter transform node
target canvas (source: letter transform node)
simple_test_step2_noReformat:
letter source image => letter transform node
color node
layers node (source0: color node, source1: letter transform node)
target canvas (source: layers node)
simple_test_step2:
letter source image => letter reformat node => letter transform node
color node
layers node (source0: color node, source1: letter transform node)
target canvas (source: layers node)
simple_testBug:
letter source image => letter reformat node => letter transform node
color node
layers node (source0: color node, source1: letter transform node)
starry sky source image => starry sky reformat node
gradient wipe (source: starry sky reformat node, gradient: layers node)
target canvas (source: gradient wipe)
Do you have any suggestions as to what we could do?
Thank you very much
seriously_stressTests.zip
Okay, thanks for these reduced test cases. I can work with this. I have some ideas of where to start and will dive into it as soon as I can.
@brianchirls Any way I can help out with this one? I've poked around in the source, but it's really just guesswork.
Hi Brian! Have you had any chance to look into this? Can we be of any help?
|
2025-04-01T06:38:06.458523
| 2022-01-22T20:01:59
|
1111673246
|
{
"authors": [
"briandelmsft",
"piaudonn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4338",
"repo": "briandelmsft/SentinelAutomationModules",
"url": "https://github.com/briandelmsft/SentinelAutomationModules/issues/210"
}
|
gharchive/issue
|
AAD SignIns Insights
I wonder if there would be interest in such a module. It's essentially a similar concept to what we have for analysts in the entity page, made available for automation, and a bit of what we have in UEBA.
Takes a user and returns stats such as:
Last successful logon data (timestamp + other metadata)
Last failed logon data
Usual user-agent-string data
Usual countries/IPs
If there is a cloud-logon-session present in the entities (case of an AAD Protection alert), return all the info about this particular login.
That last one maybe could be added to the AAD Risk Module instead.
I like the idea, would this be a new module or perhaps an extension to the capabilities of the AAD Risks? It may make it too complex though if we add it... not sure off hand.
One issue, the cloud-logon-session is not passed from the incident trigger so you have to go back into SecurityAlerts to get it.... this is one of the reasons for #205 so you can use the KQL module to lookup the incident easily and work from there
Maybe also return a table of last successful access per app?
possibilities to include:
Conditional access failures (and the policies that failed)
Named locations the user has been seen from
Device join status of the signins
password resets or other interesting admin actions on account
insights about signin hours
(although we would need to define what baseline could be used to determine out of character behaviors)
|
2025-04-01T06:38:06.471518
| 2014-10-22T21:28:08
|
46562977
|
{
"authors": [
"brianreavis",
"gdibble"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4339",
"repo": "brianreavis/selectize.js",
"url": "https://github.com/brianreavis/selectize.js/issues/604"
}
|
gharchive/issue
|
publish to npm
so ppl can npm install selectize --save :+1:
Caved in and published it: https://www.npmjs.com/package/selectize
:+1: Thank you :)
On Thursday, January 29, 2015, Brian Reavis <EMAIL_ADDRESS> wrote:
Closed #604 https://github.com/brianreavis/selectize.js/issues/604.
|
2025-04-01T06:38:06.472563
| 2017-09-05T15:51:14
|
255324908
|
{
"authors": [
"brianshumate",
"vincent-legoll"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4340",
"repo": "brianshumate/ansible-consul",
"url": "https://github.com/brianshumate/ansible-consul/pull/115"
}
|
gharchive/pull-request
|
Fix removed task file include
Fix bug introduced in : fb936053af1663abe4557befb4db36a845043c3a
I tested that this works on a single-node install (Ubuntu 17.04)
Our fixes just crossed paths; I fixed this as well and made a new release.
|
2025-04-01T06:38:06.493637
| 2024-02-26T18:33:46
|
2154845812
|
{
"authors": [
"Kmschr",
"voximity"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4341",
"repo": "brickadia-community/brickadia-rs",
"url": "https://github.com/brickadia-community/brickadia-rs/pull/3"
}
|
gharchive/pull-request
|
Proper save time handling
Finally resolve the TODO for reading save time
use chrono::DateTime for parsed save time
if save does not have a time, use current time when writing
Result of read_json example
{
"version":10,
"game_version":6781,
"map":"Plate",
"description":"",
"author":{
"name":"x",
"id":"3f5108a0-c929-4e77-a115-21f65096887b"
},
"host":{
"name":"x",
"id":"3f5108a0-c929-4e77-a115-21f65096887b"
},
"save_time":"2021-07-10T22:22:49.135Z",
...
}
I made the chrono and uuid imports pub use so that dependent crates can use them without adding whole crates to their Cargo.toml.
Otherwise, this is great. Thanks!
|
2025-04-01T06:38:06.502959
| 2022-08-24T07:50:18
|
1349017990
|
{
"authors": [
"lonegunmanb"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4342",
"repo": "bridgecrewio/yor",
"url": "https://github.com/bridgecrewio/yor/pull/298"
}
|
gharchive/pull-request
|
Add depth check for brace and paren to fix #297
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
This patch added a depth check for brace and paren to fix incorrect tag value parsing.
Unfortunately I found that the gosec issue and failed tests also exist in the main branch. I'll try to fix them in this pr but cannot guarantee it.
I've passed the unit tests and integration tests on my machine, and I've fixed gosec issue, could we run CI for this pr please? Thanks.
Sign off, I've passed all unit tests and integration tests on my machine, and I've fixed gosec issue, could we run CI for this pr please? Thanks.
Hi @nimrodkor , would you please give this pr a review? Thanks!
Hi, I've solved the conflicts, can anyone give this pr a review? Thanks.
|
2025-04-01T06:38:06.587415
| 2022-08-01T19:26:08
|
1324871695
|
{
"authors": [
"daileytj",
"jeffvg"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4343",
"repo": "brightlayer-ui/react-native-component-library",
"url": "https://github.com/brightlayer-ui/react-native-component-library/pull/286"
}
|
gharchive/pull-request
|
Publish v6.0.3
Publishes v6.0.3
Publishes fixes for all v.6.0.3 milestone issues
ran across this when I toggled theme...is correct with the white-ish?
@jeffvg for that section it's supposed to be a random image. It's just not rendering for you for some reason so I think this is ok for now. There's no styles or anything doing that. I assume that's just what happens when an image doesn't render.
|
2025-04-01T06:38:06.590073
| 2021-10-22T01:49:16
|
1033097366
|
{
"authors": [
"bkarambe",
"huayunh"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4344",
"repo": "brightlayer-ui/react-themes",
"url": "https://github.com/brightlayer-ui/react-themes/issues/11"
}
|
gharchive/issue
|
TableSortLabel should not be hidden when mouse is not hovering on it
Describe the desired behavior
When a .MuiTableSortLabel is not hovered on, the arrow should have the ~disabled color~
(edit: should be a "disabledBackground" color of Gray500@12% for light theme, and a Black200@24% for dark theme).
When it is hovered or in-use, the label should be text.secondary
Describe the current behavior
The arrow has an opacity 0 on it when not in use.
Additional Context
This will be part of the Tables effort.
2633
@huayunh Shall we close this, as PR has been merged?
|
2025-04-01T06:38:06.592484
| 2023-03-27T12:33:10
|
1642017995
|
{
"authors": [
"lostcontrol"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4345",
"repo": "briis/hass-weatherflow",
"url": "https://github.com/briis/hass-weatherflow/issues/66"
}
|
gharchive/issue
|
Weatherflow Hourly based Forecast: Unknown
As visible in the following screenshot, I sometimes get 'unknown' ('Inconnu' in French) as the current weather condition:
I've been getting this for a few weeks/months now. Not sure if it was caused by a change in this plugin or if Weatherflow changed something in their API. One of the statuses is probably not mapped properly. I didn't manage to find which one at the moment. Looks also like there is no French translation in this plugin, so I guess the value "Inconnu" comes from HA directly.
Looking at the code, I guess you take the value from "icon" in "current_conditions" from /better_forecast. I'll see if I can find the value reported by Weatherflow API when the problem occurs.
There is nothing in HA's logs by the way.
Duplicate. Sorry, Github had a hiccup.
|
2025-04-01T06:38:06.598289
| 2023-05-05T07:13:36
|
1697128985
|
{
"authors": [
"brillout",
"samuelstroschein"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4346",
"repo": "brillout/vite-plugin-ssr",
"url": "https://github.com/brillout/vite-plugin-ssr/issues/860"
}
|
gharchive/issue
|
Discord Bot
Description
Create a new Discord Server for Vike, and use/create a Discord bot to teach users about https://github.com/brillout/vite-plugin-ssr/discussions/526.
Having a cozy firechat-like place to casually chit-chat about general topics could be lovely. Example use cases:
Show us what you're building (startup, dev tool, Vike extension, etc.)
Philosophical discussions about the future of programming and/or Vike (e.g. vertical integration vs do-one-thing-do-it-well)
News
Etc.
It's paramount that users don't ask for help in that space. Therefore I think a bot is necessary to prominently show the rules of https://github.com/brillout/vite-plugin-ssr/discussions/526 with maybe (if possible?) a checkbox "I've read the rules" that users have to check before being able to start chatting.
Contribution much welcome to point us to a Discord bot that does this, or maybe implement one? I'm unfamiliar with Discord's bot API.
I express interest in such a bot for inlang
Closing as we don't need this anymore.
|
2025-04-01T06:38:06.601515
| 2023-11-11T15:58:12
|
1989014187
|
{
"authors": [
"jameskerr",
"theearlofsandwich"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4347",
"repo": "brimdata/react-arborist",
"url": "https://github.com/brimdata/react-arborist/issues/187"
}
|
gharchive/issue
|
Show max depth or prevent drop on folders
I've got a tree that has categories and within those there are child links. I render two different trees, one that shows just the top level categories (and I want it only to ever display the root level nodes) and one that when you select a category shows the children of the selected category. Imagine it's a bit like windows explorer with 2 panes, and the left pane only ever shows the top level items.
This is all working great apart from one small thing! :)
If you drag within the top level category tree and drop onto one of the other categories, it then opens the node and shows all the sub items.
Is there a way you can tell a tree to allow drag and drop, but not to automatically open the folders if you drag something into it, OR only ever show a max-depth of children (so you could set it to one, for example)?
I'm using a controlled tree and have my own handlers for onMove etc.
I hope that rather convoluted explanation makes sense... Thanks for any help, it's a terrific package!
I managed to get a solution and tackled it in a different way. What I do is pass a modified version of the top level node data into the left hand tree where I strip all the children from the array. That makes it behave as leaf nodes, so no folder opening!
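For anyone hitting the same thing, a rough sketch of that approach (the data shape and component props here are illustrative assumptions, not code from the thread):

```javascript
// Strip the children from each top-level category so the left-hand tree
// treats every node as a leaf and never opens a folder on drop.
function toLeafNodes(categories) {
  return categories.map(({ children, ...rest }) => rest);
}

// Left tree gets the stripped copy; right tree gets the children of
// whichever category is currently selected.
// <Tree data={toLeafNodes(categories)} onMove={handleMove} />
// <Tree data={selectedCategory.children} onMove={handleMove} />
```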
Great! Glad you found a solution. I love seeing screenshots of people using the package. Happy coding.
|
2025-04-01T06:38:06.604239
| 2022-12-22T16:42:00
|
1508193376
|
{
"authors": [
"nwt",
"philrz"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4348",
"repo": "brimdata/zed",
"url": "https://github.com/brimdata/zed/issues/4278"
}
|
gharchive/issue
|
Replace zio/parquetio with zio/arrowio and pqarrow
We can replace the Parquet reader and writer in zio/parquetio with a combination of zio/arrowio and github.com/apache/arrow/go/v11/parquet/pqarrow, and we probably should since
it'll let us remove about 750 lines from zio/parquetio,
the Arrow Parquet implementation will probably receive more attention than the one we're currently using, and
it fixes #764.
Verifications of this change are in https://github.com/brimdata/zed/issues/764#issuecomment-1526667106 and https://github.com/brimdata/zed/issues/4527#issuecomment-1526671541. Thanks @nwt!
|
2025-04-01T06:38:06.723382
| 2016-02-29T21:06:06
|
137374964
|
{
"authors": [
"davidbenjamin",
"samuelklee"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4349",
"repo": "broadinstitute/gatk-protected",
"url": "https://github.com/broadinstitute/gatk-protected/issues/380"
}
|
gharchive/issue
|
Filter based on Gaussianity
We filter targets with extreme variance, the idea being that tangent normalization did a poor job with these and our model is unreliable for them. However, issue #378 will likely go a long way toward making this less of a concern. One thing we don't filter on and perhaps should is not how much variance remains after tangent normalization but how Gaussian the coverage looks after tangent normalization. Since our model assumes Gaussian copy ratio this amounts to filtering out targets for which our model is unsuitable.
We could implement this, for example, by filtering on the Anderson-Darling test statistic for each target.
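For reference, the Anderson-Darling statistic for an ordered sample Y_1 ≤ … ≤ Y_n against a hypothesized CDF F (here F would be a Gaussian fitted to each target's coverage; applying it per target is an assumption of this sketch) is

$$A^2 = -n - \frac{1}{n}\sum_{i=1}^{n}(2i-1)\Bigl[\ln F(Y_i) + \ln\bigl(1 - F(Y_{n+1-i})\bigr)\Bigr]$$

with larger values indicating a worse fit, so targets whose statistic exceeds some threshold would be the ones filtered out.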
Obviated by new coverage model.
|
2025-04-01T06:38:06.760685
| 2024-10-24T11:38:51
|
2611301659
|
{
"authors": [
"Shchepotin",
"ukpagrace"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4350",
"repo": "brocoders/nestjs-boilerplate",
"url": "https://github.com/brocoders/nestjs-boilerplate/issues/1772"
}
|
gharchive/issue
|
github video player having issues when playing video
Describe the bug
Video keeps restarting or seizing the volume button and playing with no sound
Desktop (please complete the following information):
chrome, edge
Additional context
Suggest the video be moved to YouTube and displayed in the README
@ukpagrace This is regular GitHub behavior 🙏
|
2025-04-01T06:38:06.762051
| 2023-04-27T09:40:37
|
1686493301
|
{
"authors": [
"Seantourage",
"ShatteredGod"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4351",
"repo": "brokiem/auto-hoyolab-checkin",
"url": "https://github.com/brokiem/auto-hoyolab-checkin/issues/2"
}
|
gharchive/issue
|
Honkai Star Rail Support?
With the recent launch of Honkai Star Rail, it seems adding it would be really easy.
https://act.hoyolab.com/bbs/event/signin/hkrpg/index.html?act_id=e202303301540311&bbs_auth_required=true&bbs_presentation_style=fullscreen&lang=en-us&utm_source=share&utm_medium=link&utm_campaign=web
yes please, add Star Rail too
|
2025-04-01T06:38:06.785024
| 2023-03-12T04:04:08
|
1620226847
|
{
"authors": [
"brownag"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4352",
"repo": "brownag/gpkg",
"url": "https://github.com/brownag/gpkg/issues/1"
}
|
gharchive/issue
|
refactor gpkg_write()
[x] vector write not working properly anymore
[x] better handling of list input for naming feature / tile sets / data_null
[x] basic post-processing/validating of result
[x] tests
I am happy with the functioning of gpkg_write() and think it is now more robust and better documented. Will create a new issue for the idea of validating geopackages.
|
2025-04-01T06:38:06.823047
| 2018-01-30T17:47:15
|
292876140
|
{
"authors": [
"arnomi",
"brpandey"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4353",
"repo": "brpandey/interval_tree",
"url": "https://github.com/brpandey/interval_tree/issues/1"
}
|
gharchive/issue
|
License / publication on hex
Hi brpandey,
thanks for this implementation. Is there any chance of publishing this on hex? And/or, could you put an explicit license on the code?
Thanks in advance
arnomi
https://hex.pm/packages/interval_tree
|
2025-04-01T06:38:06.850829
| 2015-01-06T11:14:56
|
53504800
|
{
"authors": [
"devel-pa",
"es128"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4354",
"repo": "brunch/brunch",
"url": "https://github.com/brunch/brunch/issues/904"
}
|
gharchive/issue
|
Test assets folder copied in public folder on production env
It happened in previous versions, too. The current one is 1.17.20
Change the assets convention in your config to be more specific, or rename the assets dir you don't want copied.
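For example, a minimal sketch of that config change (assuming the real assets live under app/assets and a brunch version that reads brunch-config.js with module.exports; older versions use exports.config instead):

```javascript
// brunch-config.js: narrow the assets convention so only app/assets is
// copied to public, and the test assets folder is no longer matched.
module.exports = {
  conventions: {
    assets: /^app\/assets\//
  }
};
```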
|
2025-04-01T06:38:06.854741
| 2020-11-19T08:18:35
|
746353127
|
{
"authors": [
"brxck",
"thomasvaeth",
"yansusanto"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4355",
"repo": "brxck/gatsby-starter-stripe",
"url": "https://github.com/brxck/gatsby-starter-stripe/issues/32"
}
|
gharchive/issue
|
Installation problem
Hi there,
I hope someone in the know could help point me in the right direction. What could I possibly be missing? Thank you!
/usr/local/lib/node_modules/gatsby-cli/node_modules/yoga-layout-prebuilt/yoga-layout/build/Release/nbind.js:53
throw ex;
^
Error: listen EADDRINUSE: address already in use :::8000
at Server.setupListenHandle [as _listen2] (net.js:1280:14)
at listenInCluster (net.js:1328:12)
at Server.listen (net.js:1415:7)
at startDevelopProxy (/Users/adam/PROJECT/MAIONE/node_modules/gatsby/src/utils/develop-proxy.ts:86:10)
at module.exports (/Users/adam/PROJECT/MAIONE/node_modules/gatsby/src/commands/develop.ts:124:17)
at process._tickCallback (internal/process/next_tick.js:68:7)
at Function.Module.runMain (internal/modules/cjs/loader.js:834:11)
at startup (internal/bootstrap/node.js:283:19)
at bootstrapNodeJSCore (internal/bootstrap/node.js:623:3)
Oh apparently, we can't run another app on the same port :8000 (even though I have none running)
Been trying for hours, just couldn't get past
Cannot query field "allStripeSku" on type "Query".
I've created test data on Stripe and followed everything to a "T"
Hopefully someone is kind enough to point me in the right direction. What was supposedly a 5-min installation has turned into hours of scratching my head
Thanks everyone.
@yansusanto Did you figure this out? You have to use the Stripe API to add SKU/quantity.
The latest version of the starter comes with Stripe fixtures, which I hope will alleviate problems others are having with getting the proper data setup in Stripe. It also moves from the Orders API (Skus) to the Prices API.
Please let me know if you have any problems with the new version of the starter!
|
2025-04-01T06:38:06.892131
| 2015-08-02T16:43:25
|
98620899
|
{
"authors": [
"bryanjos",
"cs-victor-nascimento"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4356",
"repo": "bryanjos/joken",
"url": "https://github.com/bryanjos/joken/issues/54"
}
|
gharchive/issue
|
Allow the config_module key to be placed on any config block
Since there is now only one config item for Joken, config_module, it would probably be nice to make it so that the user can define which config block contains the key.
I propose changing the config_module key to joken_module, and adding a using macro that takes an otp_app option which defines which config block to look in for the joken_module key.
Something like below:
# the config block
config :my_app,
  joken_config: My.Config.Module

# Next, tell Joken where to find the config block
use Joken, otp_app: :my_app

# then to encode and decode
{:ok, token} = encode_token(%{username: "johndoe"})
{:ok, decoded_payload} = decode_token(jwt)
I made a branch with the change and the only thing I could come up with is having the functions to encode and decode added to the using macro and called directly in the module using it. Since a lot of libraries use encode and decode as function names, in this branch I renamed the functions encode_token and decode_token.
This is nice! Some things I think would be useful:
Perhaps it is good to have the property name as an optional parameter. This way it would be possible to have 2 config modules: one for user login and another for system to system authentication. I don't need this use case but I think it will be desired by people who are coding microservices.
We could use a @on_load to cache the generated JOSE header for each configuration. :smiley:
Good points. I think making the name of the config property optional makes sense. Or maybe add a property that points directly to the module instead?
I had to look up @on_load :smile:. It sounds like that could work.
@cs-victor-nascimento I added a PR for this, but I haven't added the bit to customize the property itself yet.
Also I'm not sure I completely understand what should be cached using @on_load. Could you give me an idea of what to do?
The cache idea is that for each config module the first part of a JWT (which describes the algorithm and optional parameters) will always be the same. Say you choose to use HS256. Then it will always be {"alg": "HS256", "typ": "jwt"} or something.
So we can generate the JSON and Base64 representation of those once we have that information. Not sure the on_load will work with that (not 100% sure we can ensure our dependencies are all loaded when the on_load function is executed) and probably it would be better to go all in and define an application module. I mentioned the on_load just to avoid adding another breaking change: people would need to add joken to their list of apps in mix.exs.
What do you think?
Actually I was thinking, and I believe we should have a performance benchmark suite set up before attempting any early optimization. I will open another issue for that.
The caching may work, but I will probably do a separate issue/PR for it. I can probably finish this by adding the ability to customize the config key tonight.
With all the changes going on, the next release will be a big one!
Closing this as it's no longer valid
|
2025-04-01T06:38:06.914512
| 2024-07-03T13:28:47
|
2388636511
|
{
"authors": [
"243006306",
"ArnWEB",
"bsorrentino"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4357",
"repo": "bsorrentino/langgraph4j",
"url": "https://github.com/bsorrentino/langgraph4j/issues/8"
}
|
gharchive/issue
|
Evaluate to implement other use cases from langgraph tutorial
The original langgraph tutorial contains a lot of interesting examples; I would like to implement others to promote langgraph4j as a first-class citizen in the langchain4j ecosystem:
RAG
Agentic RAG
Corrective RAG (CRAG)
Corrective RAG (CRAG) using local LLMs
Self-RAG
Self-RAG using local LLMs
SQL Agent
Agent Architectures
Multi-Agent Systems
Collaboration
Supervision
Hierarchical Teams
Planning Agents
Plan-and-Execute
Reasoning without Observation
LLMCompiler
Reflection & Critique
Basic Reflection
Reflexion
Language Agent Tree Search
Self-Discover Agent
That's really great. When will these RAG and Agent frameworks be combined with Langchain4j in an example? I have been following langchain4j and langgraph4j all along.
Hi @243006306 thanks for interest
However Agent Executor and Adaptive RAG are already available
This project looks interesting. Currently going through the code base , please let me know if you want me to contribute on anything. Thanks
|
2025-04-01T06:38:06.920716
| 2022-01-06T05:24:42
|
1094980527
|
{
"authors": [
"bsp2",
"loiteringsloth"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4358",
"repo": "bsp2/VeeSeeVSTRack",
"url": "https://github.com/bsp2/VeeSeeVSTRack/issues/33"
}
|
gharchive/issue
|
Can't load plugins into Ableton 11 or Bitwig 4.1.2
For bug reports
Operating system(s): Windows 10 64-bit 21H2
Version of Rack if using official binary, or commit hash and branch if compiling from source: VCVRack:v2.0.5
All hardware relevant to your issue (e.g. graphic card model, audio/MIDI device): Radeon Pro WX3100, Radeon RX550, SSL 2+ ASIO
Plugins don't show up during scan, both Ableton and Bitwig refuse to let me load the .dll files manually; I get a crossed out circle icon for a mouse cursor when I try this
Both instrument and effect VCVR plugins load fine into VSThostx64, no problems
Tried restarting the machine, running both as admin, still won't load. Tried deleting preferences files and cache, still won't load
Not sure what to try next. Any suggestions?
Thanks
Version of Rack if using official binary, or commit hash and branch if compiling from source: VCVRack:v2.0.5
for questions regarding VCVRack v2.0.5, please go to https://github.com/VCVRack
Sorry, I'm confused. I have the standalone VCVR which works fine, no problems there; I'm trying to get these plugins to work inside Bitwig and Ableton.
I think it's 0.6.1? I just downloaded it a few hours ago
your post mentioned v2.0.5, that's what got me confused.
if you are indeed using VCVR (0.6.x), make sure that the entire vst2_bin/ folder is in the host plugin path.
I vaguely recall that someone once resolved a similar issue by not placing the plugin in a deeply nested folder structure (cannot remember which host required that).
I've tested VCVR in both Ableton Live and Bitwig and it worked fine (just re-tested it with the latest Live version 11.0.12).
Well I got them to work inside Bitwig. Was never able to manually drag and drop the .dlls, but after deleting the Bitwig preferences file and Index folder, then running Bitwig as admin, they were both picked up during the plugin scan.
This is the exact same procedure I tried initially; not sure what I did differently this time that made it work, but not complaining!
|
2025-04-01T06:38:06.929371
| 2021-01-14T19:00:36
|
786236497
|
{
"authors": [
"janoside",
"mflaxman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4359",
"repo": "btcguide/btcguide.github.io",
"url": "https://github.com/btcguide/btcguide.github.io/pull/70"
}
|
gharchive/pull-request
|
UX Improvements
"bootstrap native" breadcrumbs
"alert" wrapper for "Advanced Considerations"
tweak text related to "advanced" sections
use some "»" chars (for "next steps" prefix and before "Advanced Considerations" in titles)
tweak margins/padding in the base layout to give a little more breathing room
Love the new style, thanks!
Love the new style, thanks!
|
2025-04-01T06:38:06.956828
| 2019-12-26T02:02:44
|
542407074
|
{
"authors": [
"FiloSottile",
"davecgh"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4360",
"repo": "btcsuite/btcutil",
"url": "https://github.com/btcsuite/btcutil/issues/152"
}
|
gharchive/issue
|
bech32: Encode accepts invalid uppercase HRP
bech32.Encode will accept an uppercase HRP and generate an invalid bech32 encoding instead of normalizing it or returning an error.
BIP 0173 is clear that mixed case encodings are invalid, and that the lowercase encoding should be used for checksum purposes. This means there are two ways to handle an uppercase HRP in Encode:
treat it as lowercase, and ideally return a lowercase encoding
return an error
Instead, this library will use the uppercase values for the checksum and return a mixed case encoding. Decode will correctly reject the mixed case encoding, and will fail the checksum of the lowercase version.
https://play.golang.org/p/h0ekj8VmiPV
I'm no longer active on this project, but for reference for the new maintainers, you guys might want to go ahead and back port the updated version from Decred at https://github.com/decred/dcrd/tree/master/bech32. We originally based it on this implementation from the LL folks, but improved it in many ways, which includes handling what this issue raises properly.
The primary improvements made are:
Much improved efficiency in terms of memory allocations and overall encoding and decoding performance along with benchmarks
Corrected the issue being reported here by automatically converting the HRP to lowercase and improved error handling to catch other potential misuses
Added convenience functions for EncodeFromBase256 and DecodeToBase256 which automatically handles the typical case of converting to base32 with padding before encoding and back to base256 without padding when decoding
Improved the error handling to be more in line with the rest of the code base such that the errors are more descriptive and programmatically detectable
Fleshed out the test coverage to test more corner cases as well as ensure the actual errors that are produced are the expected errors versus just checking that some error happened
Updated the code and documentation to be more consistent
Created and tagged a separate Go module specifically for bech32 (github.com/decred/dcrd/bech32) and thus provide a tighter module with a smaller API surface which results in less notifications of new versions for consumers due to other things not related to bech32 changing
I've slightly modified the code provided by @FiloSottile accordingly to show the improved version works as expected:
https://play.golang.org/p/VlNZprYObxE
package main

import (
    "log"
    "strings"

    "github.com/decred/dcrd/bech32"
)

func main() {
    s, err := bech32.EncodeFromBase256("UPPERCASE", []byte("xxx"))
    if err != nil {
        log.Fatal(err)
    }
    log.Print("encoded: ", s)
    log.Print(bech32.Decode(s))
    log.Print(bech32.Decode(strings.ToUpper(s)))
}
Output:
... encoded: uppercase10pu8sss7kmp
... uppercase[15 1 28 7 16] <nil>
... uppercase[15 1 28 7 16] <nil>
|
2025-04-01T06:38:06.993855
| 2023-12-20T22:19:56
|
2051369313
|
{
"authors": [
"fulldecent",
"weiqiushi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4361",
"repo": "buckyos/DCRM",
"url": "https://github.com/buckyos/DCRM/issues/6"
}
|
gharchive/issue
|
sortedlist: test two insertions
{ "hash": "0x39c767e230f1cc4d8fa7baa4ef8c39bc2e4add8680d09bfe086e1efdaa0d6437", "score": 290 }, // 10
{ "hash": "0x39c767e230f1cc4d8fa7baa4ef8c39bc2e4add8680d09bfe086e1efdaa0d6438", "score": 290 }, // 10
See which one goes in first.
According to the current implementation, two items with the same score are sorted in order of insertion, from first to last. If the inserted item has the same score as the last one in the list, the insertion will be considered invalid.
I updated the testcase to insert two items which have the same score; they will be ranked by their insertion order.
|
2025-04-01T06:38:07.030191
| 2018-04-12T13:21:54
|
313724601
|
{
"authors": [
"Shiv-Dangi",
"amanthegreatone",
"bengourley",
"pavan-syook"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4362",
"repo": "bugsnag/bugsnag-js",
"url": "https://github.com/bugsnag/bugsnag-js/issues/339"
}
|
gharchive/issue
|
code splitting and uploading all source maps
Hi,
In my react app I am using code splitting (react-loadable) to reduce the bundle size and serve them on demand whenever the respective route is loaded. Using this technique I am getting about 53 chunks and a main file (.js and .map).
How can I upload all these source maps so the error stack traces are identified correctly?
regards
Aman
Hi @amanthegreatone. Are you using webpack? If so you should take a look at our webpack plugins module which can upload sourcemaps:
webpack-bugsnag-plugins (docs)
If you're not using webpack, you can make use of the underlying JS API/CLI tool:
bugsnag-sourcemaps (docs)
I see!
So create-react-app does use webpack for you but it hides the details away, and as far as I can tell doesn't let you add plugins.
Our webpack plugin would do exactly what you want – it iterates over all the chunks/maps and uploads them all. But it seems like your only option would be to "eject" from create-react-app in order to modify the webpack config yourself. But you probably don't want to do that!
Alternatively you can use bugsnag-sourcemaps, iterating over all of the generated source maps using bash or javascript. Here's a sketch of what I mean (in JS):
const { upload } = require('bugsnag-sourcemaps')
const glob = require('glob')

// find all of the map files in ./dist
glob('dist/**/*/*.map', (err, files) => {
  if (err) throw err
  // process each .map file
  Promise.all(files.map(processMap))
})

// returns a Promise which uploads the source map with accompanying sources
function processMap (sourceMap) {
  // remove .map from the file to get the js filename
  const minifiedFile = sourceMap.replace('.map', '')
  // remove the preceding absolute path to the static assets folder
  const minifiedFileRelativePath = minifiedFile.replace(`${__dirname}/dist/`, '')
  // call bugsnag-sourcemaps upload()
  return upload({
    apiKey: 'YOUR_API_KEY_HERE',
    appVersion: '1.2.3',
    minifiedUrl: `http*://your-domain.app/path/to/assets/${minifiedFileRelativePath}`,
    sourceMap,
    minifiedFile,
    projectRoot: __dirname,
    uploadSources: true
  })
}
I didn't run or test this so you will undoubtedly need to tweak paths/urls for your setup and play with the arguments to bugsnag-sourcemaps. This also rather crudely will fire off all the uploads concurrently. See webpack-bugsnag-plugins for an example of how to limit the concurrency of that.
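If the concurrency becomes a problem, one simple alternative is to chain the uploads so they run one at a time (a rough sketch reusing the glob and processMap pieces from the snippet above, again untested and not the webpack-bugsnag-plugins approach):

```javascript
// One-at-a-time alternative to Promise.all(): only start the next
// source map upload once the previous one has finished.
function processMapsSequentially (files) {
  return files.reduce(function (previous, sourceMap) {
    return previous.then(function () { return processMap(sourceMap) })
  }, Promise.resolve())
}

glob('dist/**/*/*.map', function (err, files) {
  if (err) throw err
  processMapsSequentially(files)
    .then(function () { console.log('all source maps uploaded') })
    .catch(function (e) { console.error(e) })
})
```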
Feel free to continue on this thread if you have any further questions, or alternatively email <EMAIL_ADDRESS> where we can dig into your specific project and have a bit more context. We can also then share more detail than we would be able to on this public issue tracker.
Thanks!
Thanks @bengourley.
I will try your suggestions and update.
@bengourley as per your suggestion I followed the same instructions and uploaded the chunk.js.map files to bugsnag. Here is my code.
const upload = require('bugsnag-sourcemaps').upload;
const glob = require('glob');
const appVersion = require('./package.json').version;
const bugsnagKey = require('./src/config/env').bugsnagKey;
const path = require('path');

glob('source-maps/*.js.map', (err, files) => {
  if (err) throw err;
  Promise.all(files.map(processMap));
});

const processMap = sourceMap => {
  const minifiedFileName = sourceMap.split('/')[1].split('.map')[0]; // just extracting the file name.
  return upload({
    apiKey: bugsnagKey,
    appVersion: appVersion,
    minifiedUrl: `http://myAppWebsite.com/static/js/${minifiedFileName}`,
    sourceMap,
    minifiedFile: `${__dirname}/build/static/js/${minifiedFileName}`,
    projectRoot: __dirname,
    uploadSources: true,
    overwrite: true,
  });
};
The files are all uploaded and everything works. But the problem is the maps and chunks are not mapped; when an error is raised, in the stacktrace I still get the chunk code rather than the original code.
Hoping you could point out where I am going wrong or what the error really is.
Thanks.
Hi @pavan-syook. I took a look at your account and it seems you are uploading everything correctly 👍
There is just one minor problem which you need to resolve. For your source maps, you are providing a value for appVersion (it's currently 0.1.0), but events coming in from your notifier have no appVersion. What our system does is look for an uploaded source map matching the url and app version, and since the app version is different (undefined vs. 0.1.0) it doesn't find your source map.
All you need to do is make sure the app version is set in your notifier and kept in sync with the version of your app. Then we'll be able to use the source maps you've uploaded to show the original sources. Hope this helps!
@bengourley Thanks for the quick update. will do that and let you know if I face any issues.
Hi @bengourley. We are also using code splitting in our react web application (groww.in) and in our case the number of chunks is more than 80. All of the chunks' source map files are uploaded to bugsnag for every app release. So my question is: how many source map files can be uploaded to bugsnag? Is there any restriction on the number of files that can be uploaded?
Nope, no restriction. As many as you need.
@bengourley Thanks for the reply.
|
2025-04-01T06:38:07.041568
| 2020-01-30T11:23:07
|
557429558
|
{
"authors": [
"Ashwini-ap",
"SanjanaTailor",
"lc3t35",
"mattdyoung",
"phillipsam",
"xander-jones"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4363",
"repo": "bugsnag/bugsnag-js",
"url": "https://github.com/bugsnag/bugsnag-js/issues/719"
}
|
gharchive/issue
|
RNCNetInfo.getCurrentState got 3 arguments, expected 2.
Environment info:
System:
OS: Windows 10 10.0.17763
CPU: (8) x64 Intel(R) Core(TM) i5-8350U CPU @ 1.70GHz
Memory: 1.06 GB / 7.88 GB
Binaries:
Node: 8.16.1 - C:\Program Files\nodejs\node.EXE
npm: 6.4.1 - C:\Program Files\nodejs\npm.CMD
SDKs:
Android SDK:
API Levels: 21, 22, 23, 24, 25, 26, 27, 28, 29
Build Tools: 26.0.2, 28.0.3, 29.0.2
System Images: android-24 | Google Play Intel x86 Atom, android-28 | Google Play Intel x86 Atom
IDEs:
Android Studio: Version <IP_ADDRESS> AI-191.8<IP_ADDRESS>10548
npmPackages:
react: ~16.9.0 => 16.9.0
react-native: https://github.com/expo/react-native/archive/sdk-36.0.0.tar.gz => 0.61.4
Facing this problem using expo directory.
import {AppState } from 'react-native';
import NetInfo from '@react-native-community/netinfo';
Using this two library
@mattdyoung : I have not used bugsnag-js in current project. I am using it in expo directory.
Above all information mentioned and JS code too attached.
@SanjanaTailor
You've raised this as an issue with bugsnag-js. If you're not using bugsnag-js in your project I don't think there's any issue for the maintainers of bugsnag-js to resolve here.
If you are using bugsnag-js can you provide a reproducible example of the issue bugsnag-js is causing using the latest version v6.5.1.
Please reopen, we need to know what is the correct version of @react-native-community/netinfo that can be used with sdk35 too
@react-native-community/netinfo - expected version range: ~3.2.1 - actual version installed: ^5.5.1
@react-native-community/netinfo - expected version range: ~3.2.1 - actual version installed: 4.6.2
Both crash with RNCNetInfo.getCurrentState got 3 arguments, expected 2..
Hi @lc3t35 - if you have a React Native project, we would recommend using our React Native notifier: https://github.com/bugsnag/bugsnag-react-native
Are you currently using bugsnag-js in your project? what version of bugsnag-js are you using?
I'm using expo sdk35 with "@bugsnag/expo": "6.4.1" for dev and "bugsnag-react-native": "^2.23.2" for production.
Hey @lc3t35, the bugsnag-react-native library should not be used with Expo apps. Additionally, a fix was released that fixes an issue that sounds exactly like this. This was released in v6.5.1 of our bugsnag-js library. Can you please try a version that is at least v6.5.1 to see if the issue persists?
Yes as ejected Expo app, I use bugsnag-react-native ("bugsnag-react-native": "^2.23.6",)
But for staging/dev, I would like to use https://docs.bugsnag.com/platforms/react-native/expo/ too. What do you suggest ?
I had to remove bugsnag-react-native and @bugsnag/expo so I can work on my project; I can't stay stuck... QA is waiting for updates... so no error catching for now...
I expect to update to sdk36 soon, so I'll check again.
@lc3t35 Have you confirmed that you still see this issue with the latest @bugsnag/expo which includes the fix @xander-jones refers to?
https://github.com/bugsnag/bugsnag-js/releases
installing version 3.2.1 worked for me
|
2025-04-01T06:38:07.043737
| 2015-02-20T01:59:12
|
58300908
|
{
"authors": [
"ConradIrwin",
"paton"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4364",
"repo": "bugsnag/bugsnag-node",
"url": "https://github.com/bugsnag/bugsnag-node/pull/51"
}
|
gharchive/pull-request
|
Rename req.host to req.hostname
Just upgraded to Bugsnag v1.6.0, but express v4.11.2 started logging this warning:
express deprecated req.host: Use req.hostname instead node_modules/bugsnag/lib/request_info.js:6:46
This should fix this!
Original pull request broke on old versions of express. Just fixed that.
Awesome, thanks!
|
2025-04-01T06:38:07.049707
| 2020-08-04T13:17:48
|
672786287
|
{
"authors": [
"GOVINDDIXIT",
"mattdyoung"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4365",
"repo": "bugsnag/bugsnag-react-native",
"url": "https://github.com/bugsnag/bugsnag-react-native/issues/473"
}
|
gharchive/issue
|
Sourcemap path in case of android flavor builds
I have created flavor builds in the android side of the React Native app. Earlier there was no flavor builds
So I was uploading the source map using CI like this
- run:
    name: Upload sourcemaps to Bugsnag
    command: |
      if [[ $BUGSNAG_KEY ]]; then
        yarn generate-source-maps-android upload \
          --api-key=$BUGSNAG_KEY \
          --app-version=$CIRCLE_BUILD_NUM \
          --minifiedFile=android/app/build/generated/assets/react/release/app.bundle \
          --source-map=android/app/build/generated/sourcemaps/react/release/app.bundle.map \
          --minified-url=app.bundle \
          --upload-sources
      fi
But now the app is divided into two flavor builds (play and foss) and Bugsnag is available only in play build.
I am getting this error as the source map path needs to be updated.
[error] Error uploading source maps: Error: Source map file does not exist (android/app/build/generated/sourcemaps/react/release/app.bundle.map)
at /home/********/repo/node_modules/bugsnag-sourcemaps/lib/options.js:141:17
Can anyone help me in determining what will be the updated path of source map in case of multiple flavors in the app? Thanks in advance.
PS: I Have gone through the docs but didn't find anything related to this scenario.
Hi @GOVINDDIXIT
Are you using the Hermes JS engine on Android in your React Native app? If you're not using Hermes you can just follow our standard instructions here to upload via the API without bugsnag-sourcemaps:
https://docs.bugsnag.com/platforms/react-native/react-native/showing-full-stacktraces/#uploading-source-maps-to-bugsnag
If Hermes is enabled then using bugsnag-sourcemaps is currently the only upload method we support.
Where does you flavor build output the .bundle and .bundle.map files? Have you tried manually running the build and searching for the files? Is it just the path that's wrong in your upload command?
Hi @mattdyoung
Thanks for the quick response. Yes, I have got the correct path by building source maps locally and the issue is now solved. If possible I would suggest adding a specific section for the android flavor builds in official docs.
After adding flavors the updated path for minified file and source map is
--minifiedFile=android/app/build/generated/assets/react/play/release/app.bundle
--source-map=android/app/build/generated/sourcemaps/react/play/release/app.bundle.map
In general
--minifiedFile=android/app/build/generated/assets/react/{flavor_name}/release/app.bundle
--source-map=android/app/build/generated/sourcemaps/react/{flavor_name}/release/app.bundle.map
@GOVINDDIXIT
Thanks for letting us know! We're looking into various docs improvements currently so we'll consider this clarification.
|
2025-04-01T06:38:07.052955
| 2022-12-30T19:28:28
|
1514757688
|
{
"authors": [
"metaclips"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4366",
"repo": "build-trust/ockam",
"url": "https://github.com/build-trust/ockam/issues/4005"
}
|
gharchive/issue
|
Use global default shell in release-tag.yml workflow
To remove the need to repeat the default shell declaration in our workflow, we should set a global default so that it is available across all jobs in https://github.com/build-trust/ockam/blob/cdf925aa2adb4439061449e5ea5f9ad774833e20/.github/workflows/release-tag.yml#L1, as sketched below.
More here https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#defaults
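For illustration, a minimal sketch of what a workflow-level default could look like (the trigger and job names here are placeholders, not the actual release-tag.yml contents):
name: Release Tag
on:
  workflow_dispatch: {}

defaults:
  run:
    shell: bash

jobs:
  example_job:
    runs-on: ubuntu-20.04
    steps:
      - run: echo "picks up the workflow-level default shell"
With a top-level defaults.run.shell like this, individual jobs only need their own defaults block when they want to override it.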
Closing this now. Thanks @rghdrizzle
|
2025-04-01T06:38:07.058434
| 2016-03-02T07:27:58
|
137796424
|
{
"authors": [
"ab9",
"toolmantim"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4367",
"repo": "buildkite/agent",
"url": "https://github.com/buildkite/agent/issues/249"
}
|
gharchive/issue
|
Bootstrap script doesn't fetch correctly with Git < v1.9
TL;DR: In version 2.1.5, the bootstrap script runs git fetch origin --tags to update all heads and tags. In some older versions of Git, that command will only update the tags, which can cause the subsequent checkout to fail.
We upgraded to version 2.1.5 on Ubuntu 12.04 LTS and the new bootstrap script failed. The problematic command is git fetch origin --tags. It seems to have been introduced in #243.
The relevant comment in the bootstrap script reads: "we fall back to fetching all heads and tags, hoping that the commit is included." With Git 1.9 and later, git fetch origin --tags will indeed fetch all heads and tags, assuming the remote is configured in the usual way. With Ubuntu 12.04 LTS's version of Git (v<IP_ADDRESS>), the command will run cleanly – that is, it will give a successful exit code and won't print error messages – but it will fetch only the tags. As a result, the git checkout command that follows it may fail and complain about "bad object".
We worked around this issue by adding a custom checkout hook that uses the command git fetch origin --tags +refs/heads/*:refs/remotes/origin/*.
#250 may have fixed this.
Yep, you're right, this is fixed in #250. Thanks for the report!
|
2025-04-01T06:38:07.062896
| 2018-05-08T10:44:48
|
321134627
|
{
"authors": [
"ksanderer",
"lox"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4368",
"repo": "buildkite/agent",
"url": "https://github.com/buildkite/agent/issues/761"
}
|
gharchive/issue
|
Docker images docs
I have few suggestions about docker images:
There should be docs on Docker Hub that explicitly state the base image for each tag, or at least explain how to determine which base image was used.
Agent Docker images should have appropriate tag naming for ubuntu, alpine, etc. Take a look at the Python Docker Hub.
buildkite/agent:3.1.1-ubuntu
buildkite/agent:3.1.1-alpine
buildkite/agent:3.1.1-alpine3.6
etc..
You can still say that the default image is alpine, so buildkite/agent:3.1.1 and buildkite/agent:3.1.1-alpine are just symlinks.
IMO it's a much more convenient way.
Why?
Sometimes it's useful to modify an existing image (in my case I need to add envsubst to the image). And it was quite confusing for me that the Dockerfile on the Docker Hub page uses ubuntu:14.04 while agent:latest is based on alpine.
Yup, good idea @ksanderer, we're going through a bit of a transition with how we build and manage docker images, so still trying to figure out best practice and how to work around some of the clunky aspects of Docker Hub.
|
2025-04-01T06:38:07.065677
| 2016-04-25T23:59:00
|
151009758
|
{
"authors": [
"deoxxa",
"mikekap",
"toolmantim"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4369",
"repo": "buildkite/buildkite-aws-stack",
"url": "https://github.com/buildkite/buildkite-aws-stack/issues/44"
}
|
gharchive/issue
|
Add support for other aws regions
It would be nice to launch these in e.g. us-west-2 in my case.
@mikekap @deoxxa this is now live, you can now use the stack in:
us-west-1
us-west-2
eu-west-1
eu-central-1
ap-northeast-1
ap-northeast-2
ap-southeast-1
ap-southeast-2
sa-east-1
Excellent! Thank you!
|
2025-04-01T06:38:07.072317
| 2020-04-21T12:26:04
|
603957345
|
{
"authors": [
"plaindocs",
"toolmantim"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4370",
"repo": "buildkite/docs",
"url": "https://github.com/buildkite/docs/pull/710"
}
|
gharchive/pull-request
|
Make h3's linkable, and provide the ability for id and class overrides
Makes all the h3 elements linkable by adding an automatic id, and lets you customize the id and class.
For example, the following markdown:
## A long complicated section description
{: id="short-id"}
### Subsection 1
## Section 2
### Subsection 2
generates:
<h2 id="short-id">A long complicated section description</h2>
<h3 id="short-id-subsection-1">Subsection 1</h3>
<h2 id="section-2">Section 2</h2>
<h3 id="section-2-subsection-2">Subsection 2</h3>
This means we can now link to individual environment variables, and instead of #buildkite-environment-variables-buildkite-agent-pid we can have #bk-env-vars-buildkite-agent-pid for example.
Fixes #708
Everything here is up for grabs, so if you’ve got any preferred syntax changes or anything, let me know!
Ace! Yeah, I was hoping we could use the class/id for a bunch of things.
I'm happy with this as is, unless you fancy testing the ID clashes just converting H3 without prepending H2 to them? But it is probably not worth the time.
Yeah, I couldn’t think of a good algorithm. Would it be just adding -2, -3 etc to the end of the ids that clash? So as you progress down the page?
Also I didn’t add the hover link doohicky that h2’s have. Should we?
ooh, I didn't notice the lack of doohicky! I reckon add it, saves from having to "view source" :-p. Please and thank you.
On the clash front, if there are a few dozen or so site wide, I'd be happy to add manual overrides to all of them, and just put up with an ID clash breaking the build for new content. What do you think?
ooh, I didn't notice the lack of doohicky! I reckon add it, saves me from having to "view source" :-p. Please and thank you.
All done!
On the clash front, if there are a few dozen or so site wide, I'd be happy to add manual overrides to all of them, and just put up with an ID clash breaking the build for new content. What do you think?
I like the idea, but I think I'm keen to stick with the current method for now — 90% because the code is mostly ready, and 10% because of getting my head around dealing with the clashes. Let's stick with the nesting method we have now, and revisit if we're not feeling it's working well.
:100:
|
2025-04-01T06:38:07.088690
| 2019-02-14T21:22:48
|
410498119
|
{
"authors": [
"ekcasey",
"scothis"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4371",
"repo": "buildpack/pack",
"url": "https://github.com/buildpack/pack/pull/99"
}
|
gharchive/pull-request
|
Set individual environment variables for build
pack allows setting build-time environment variables via a file, but not
as individual values on the cli. This PR adds the ability to set one (or
more) environment variables directly as cli switches.
The same semantics are preserved from --env-file where a value can be
forwarded from the current environment if a variable name is specified
without a value. The new flags can be used to augment or override
specific values from a file.
The BuildFlags struct now includes an Env field in addition to EnvFile.
The BuildConfig struct EnvFile field has been renamed to Env for
consistency as the field contains the parsed values and not a file.
Refs buildpack/roadmap#25
A little background context.
riff currently uses a riff.toml file to pass configuration to function buildpacks. This file is generated based on values provided to the riff cli. Since this config file is not actually part of the project, it's awkward to write a file to the filesystem, do some work and then remove that file. It would be nicer to specify this config as environment variables.
Merged master and fixed a conflict with build config changes.
cc @ekcasey
@scothis I was under the impression that riff was importing github.com/buildpack/pack as a library using the pack.Build(...) function to run builds. In which case, I would imagine the correct solution for this problem is to add a map of env vars as a param to pack.Build. In that case riff should be able to construct that map as desired (no file required).
Is riff shelling out to the pack cli? I am not opposed to supporting a --env flag but to better enable library consumers it's important to us to know where users are integrating. In the near future we plan to make everything except a deliberately exposed library internal and thus not importable.
@ekcasey yes, riff embeds pack rather than shelling out. On its own, this PR is not enough to fully solve riff's needs, but it provides a large step towards scratching that itch. I'll open another PR that takes a baby step towards creating a programmatic API that exposes the same capabilities.
This PR is updated to resolve conflicts.
@scothis Thanks for the explanation. If we think cli users will value having a --env flag in addition to an --env-file, flag this is a good step. I think the only thing missing here is acceptance test coverage. If we are exposing a new flag we should add coverage in acceptance/acceptance_test.go, just like we do with --env-file.
@ekcasey added an acceptance test and manually verified that all the tests are actually passing
|
2025-04-01T06:38:07.112489
| 2023-11-17T16:39:12
|
1999519850
|
{
"authors": [
"jagthedrummer",
"newstler"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4373",
"repo": "bullet-train-co/jbuilder-schema",
"url": "https://github.com/bullet-train-co/jbuilder-schema/pull/73"
}
|
gharchive/pull-request
|
Add CI job for testing against the starter repo
This make it so that the GitHub Actions CI pipeline will:
Checkout the starter repo. We'll look for a branch on the starter repo that matches the name of the branch being tested. If we can't find one we'll test against the main branch.
Checkout the core repo, also looking for a matching branch.
Alter the Gemfile in the starter repo to point to the local branch of jbuilder-schema and the core gems.
Run the Minitest suite of the starter repo
~Run the Super Scaffolding suite of the starter repo~ Edit: I'm not sure it's useful to run the Super Scaffolding tests, so I'm skipping it for now.
@newstler This should be good to go for running tests against the starter repo. There's one test failing on this PR related to https://github.com/bullet-train-co/jbuilder-schema/issues/74 but this doesn't change any shipped code so it should be safe to merge.
@jagthedrummer How urgent is this? I would wait until #74 is resolved if possible (we're looking for a solution with @kaspth) to be able to release the next versions with green tests.
@newstler not super urgent. I just figured that we should get this going sooner rather than later since we kept finding issues after a release instead of before.
@newstler looks like #74 is now resolved, so I rebased this branch and now it looks like everything is good to go.
@jagthedrummer Released this as v2.6.7 so should be good to check in BT gems without updating the minor version number.
|
2025-04-01T06:38:07.126514
| 2020-08-25T06:30:44
|
685214884
|
{
"authors": [
"hazem3500",
"jxom"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4374",
"repo": "bumbag/bumbag-ui",
"url": "https://github.com/bumbag/bumbag-ui/issues/47"
}
|
gharchive/issue
|
Menu with options
Is your feature request related to a problem? Please describe.
It isn't clear how to make a menu that contains options that can be selected, or whether there is a component for this yet
Describe the solution you'd like
If it's already possible with the current components we could add an example for it in the Docs, If it's not possible with the current set of components we could consider adding a component that handles this use-case.
I would say SelectMenu might be what you are looking for... However, it doesn't have the ability to group sections yet...
SelectMenu acts more like an input.
I was thinking more of a menu that works with a Button like the chakra-UI Menu
Yeah true. Reakit exports a MenuItemRadio & MenuItemCheckbox component. So we could definitely make use of them. https://reakit.io/docs/menu/#menu-bar
Added in 1.2.0
|
2025-04-01T06:38:07.181902
| 2017-01-02T02:18:36
|
198291201
|
{
"authors": [
"colby-swandale",
"hmistry",
"pallavi16"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4375",
"repo": "bundler/bundler",
"url": "https://github.com/bundler/bundler/issues/5299"
}
|
gharchive/issue
|
gem error
Error details
Errno::EACCES: Permission denied @ rb_file_s_rename - (/var/folders/vj/dkscdd2x3wbftywhjf9d8lx80000gn/T/bundler-compact-index-20170101-53146-7cinnb/versions, /Users/pallaviaggarwal/.bundle/cache/compact_index/rubygems.org.443.29b0360b937aa4d161703e6160654e47/versions)
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/fileutils.rb:528:in `rename'
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/fileutils.rb:528:in `block in mv'
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/fileutils.rb:1571:in `block in fu_each_src_dest'
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/fileutils.rb:1587:in `fu_each_src_dest0'
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/fileutils.rb:1569:in `fu_each_src_dest'
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/fileutils.rb:517:in `mv'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/compact_index_client/lib/compact_index_client/updater.rb:55:in `block in update'
/Users/pallaviaggarwal/.rvm/rubies/ruby-2.3.3/lib/ruby/2.3.0/tmpdir.rb:89:in `mktmpdir'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/compact_index_client/lib/compact_index_client/updater.rb:29:in `update'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/compact_index_client/lib/compact_index_client.rb:65:in `update'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/compact_index_client/lib/compact_index_client.rb:56:in `update_and_parse_checksums!'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/fetcher/compact_index.rb:67:in `available?'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/fetcher/compact_index.rb:15:in `call'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/fetcher/compact_index.rb:15:in `block in compact_index_request'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/fetcher.rb:157:in `use_api'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/source/rubygems.rb:332:in `block in api_fetchers'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/source/rubygems.rb:332:in `select'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/source/rubygems.rb:332:in `api_fetchers'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/source/rubygems.rb:337:in `block in remote_specs'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/index.rb:10:in `build'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/source/rubygems.rb:336:in `remote_specs'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/source/rubygems.rb:83:in `specs'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:261:in `block (2 levels) in index'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:259:in `each'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:259:in `block in index'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/index.rb:10:in `build'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:256:in `index'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:250:in `resolve'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:174:in `specs'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/definition.rb:162:in `resolve_remotely!'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/installer.rb:225:in `resolve_if_need'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/installer.rb:78:in `run'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/installer.rb:24:in `install'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/cli/install.rb:71:in `run'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/cli.rb:189:in `install'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/thor/lib/thor/invocation.rb:126:in `invoke_command'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/thor/lib/thor.rb:359:in `dispatch'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/cli.rb:20:in `dispatch'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/vendor/thor/lib/thor/base.rb:440:in `start'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/cli.rb:11:in `start'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/exe/bundle:34:in `block in <top (required)>'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/lib/bundler/friendly_errors.rb:100:in `with_friendly_errors'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/gems/bundler-1.13.7/exe/bundle:26:in `<top (required)>'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/bin/bundle:22:in `load'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/bin/bundle:22:in `<main>'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/bin/ruby_executable_hooks:15:in `eval'
/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3/bin/ruby_executable_hooks:15:in `<main>'
Environment
Bundler 1.13.7
Rubygems 2.6.8
Ruby 2.3.3p222 (2016-11-21 revision 56859) [x86_64-darwin16]
GEM_HOME /Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3
GEM_PATH /Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3:/Users/pallaviaggarwal/.rvm/gems/ruby-2.3.3@global
RVM 1.28.0 (latest)
Git 2.11.0
rubygems-bundler (1.4.4)
See the other Permission denied issues. A suite of permission errors has been addressed by #5007, which will be released in Bundler 1.14.
Let us know if you're still having trouble and none of those solutions worked for you.
I'm closing this for now. If you're still experiencing your original issue don't be afraid to re-open this ticket.
|
2025-04-01T06:38:07.184308
| 2019-03-27T14:08:46
|
425982322
|
{
"authors": [
"colby-swandale",
"dylanahsmith"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4376",
"repo": "bundler/bundler",
"url": "https://github.com/bundler/bundler/pull/7069"
}
|
gharchive/pull-request
|
backport: bundle clean native extensions for gems with a git source
Cherry-pick https://github.com/bundler/bundler/pull/7059 to 1-17-stable since, as https://github.com/bundler/bundler/issues/7058 mentioned, the bug is present on 1.17.3
This test error seems unrelated to this PR
ERROR: Error installing rubocop:
parallel requires Ruby version >= 2.2.
Thanks for the PR. I've marked the PR this references with a backport for when we organize the next Bundler 1 release which will get cherry-picked.
|
2025-04-01T06:38:07.240812
| 2017-02-23T14:55:25
|
209784276
|
{
"authors": [
"southpolesteve",
"zfoster"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4377",
"repo": "bustlelabs/shep",
"url": "https://github.com/bustlelabs/shep/issues/202"
}
|
gharchive/issue
|
Master CI build error
Transpile... https://travis-ci.org/bustlelabs/shep/builds/204605328
will fix
I just restarted and it passed
|
2025-04-01T06:38:07.243909
| 2017-10-27T22:20:57
|
269252776
|
{
"authors": [
"bonustrack",
"jm90m",
"ryanbaer"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4378",
"repo": "busyorg/busy",
"url": "https://github.com/busyorg/busy/issues/915"
}
|
gharchive/issue
|
Single post, interesting people block should not appear
The block "Interesting people" on right sidebar should not appear on single post when logged:
Also there should be a bottom margin on sidebar right blocks
Yes, was going to report this. Thanks for fixing it. Also noticed: on Recommended Posts, there's no context menu (i.e. I can't right-click), nor Cmd + Click to open in a new tab. I don't always want to navigate away from the page I'm on right away.
I can file into another issue if you'd like.
@ryanbaer yeah probably should put it in another issue
Couldn't assign you, so here you go: https://github.com/busyorg/busy/issues/917
@ryanbaer thanks!
Fixed in #916
|
2025-04-01T06:38:07.301470
| 2024-04-18T00:35:06
|
2249502540
|
{
"authors": [
"Fabioni",
"bvaughn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4379",
"repo": "bvaughn/react-resizable-panels",
"url": "https://github.com/bvaughn/react-resizable-panels/issues/341"
}
|
gharchive/issue
|
Option to completely deactivate hitAreaMargins
The whole hitAreaMargins is quite tricky and compute heavy, as you state yourself in: https://github.com/bvaughn/react-resizable-panels/blob/638d0f6d3a9d1aeabae8333396ed725ecbeff513/packages/react-resizable-panels/src/PanelResizeHandleRegistry.ts#L169-L174
I propose we have an option to completely disable the behavior, getting back the old straight-forward behaviour.
My problems with hitAreaMargins:
click-panning in the margin area triggers movement of the panel but also of an element behind it (a map that gets panned). This feels bad, because a user would expect to pan either the panel or the map, but not both
setting hitAreaMargins={{ coarse: 0, fine: 0}} still uses the tricky calculations you do in your code, which is totally unnecessary and can lead to bugs. Reporting the bugs is one thing, but being able to disable it would help a lot.
I have some weird behaviour in combination with collapsible, which I cannot reproduce in codesandbox but I suspect it has to do with the way hitAreaMargins works.
What you're describing sounds like it's pretty specific to your website. If you can provide a Code Sandbox example, I'll take a look and see if I can recommend something to help. (Or if there is a bug in this library, that's also good to uncover.)
Generally speaking, I don't want to support two separate mechanisms for resize handling based on hitAreaMargins though, so this is not a change I'm interested in making.
This problem was related to https://github.com/bvaughn/react-resizable-panels/issues/342, so I am fine for the moment.
I still think we should have the option to disable the "TRICKY" functionality from hitAreaMargins 😃
If you'd like to remove that feature, I suggest just forking this library. The license is very permissive so as to allow that.
One of my problems ("clicking panning in the margin area triggers movement of the panel but also of a element behind it") should have actually been solved by https://github.com/bvaughn/react-resizable-panels/pull/338, correct?
I still have it and will try to create a sandbox. Maybe this is related to how google maps api is also kind of greedy for panning events.
For reference, here a screen recording:
https://github.com/bvaughn/react-resizable-panels/assets/45362676/5f6d9679-3c67-49ae-8f15-1b14e5e461cd
If you'd like to remove that feature, I suggest just forking this library. The license is very permissive so as to allow that.
Would you allow a PR?
No. This is not a change I'm interested in making.
It's possible Google maps is also listening at the root of the window and intercepting the events before this library is. I don't know.
|
2025-04-01T06:38:07.401890
| 2023-04-23T07:55:24
|
1679911003
|
{
"authors": [
"DeutscheMark",
"bwp91"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4404",
"repo": "bwp91/homebridge-meross",
"url": "https://github.com/bwp91/homebridge-meross/issues/519"
}
|
gharchive/issue
|
Add EU smart plug mini MSS315 (Matter)
{"uuid":"xxxxxxxxxxxxxxxxxxxxx","onlineStatus":1,"devName":"Reiskocher","devIconId":"device024","bindTime":1681137596,"deviceType":"mss315","subType":"eu","channels":[{}],"region":"eu","fmwareVersion":"9.3.26","hdwareVersion":"9.0.0","userDevIcon":"","iconType":1,"domain":"mqtt-eu-3.meross.com","reservedDomain":"mqtt-eu-3.meross.com","cluster":3,"hardwareCapabilities":[],"firmware":"9.3.26","hbDeviceId":"xxxxxxxxxxxxxxxxxxxxx","model":"MSS315"
Hi @DeutscheMark
Please install the beta version of the plugin
https://github.com/bwp91/homebridge-meross/wiki/Beta-Version
Hi @bwp91
Thanks for the fast response and the beta version with support for the MSS315.
All the plugs I use are now found by the plugin (hybrid with simple config) and I can see their current status in Home and turn them on/off. I also have some offline plugs that are ignored as intended.
Thank you. 😊
|
2025-04-01T06:38:07.405376
| 2018-03-05T14:30:20
|
302321001
|
{
"authors": [
"rennervo",
"tamazlykar"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4405",
"repo": "bwsw/cloudstack-ui",
"url": "https://github.com/bwsw/cloudstack-ui/issues/1012"
}
|
gharchive/issue
|
Buttons "Create", "Edit" and "Delete" for template tags are absent for user
Buttons "Create", "Edit" and "Delete" tags aren't available for role "User" in Images tab.
Steps:
Log in and go to Images;
Go to Tags tab of any Template or ISO
Actual result: Buttons for managing tags aren't available for the "User" role
Expected result: Buttons for managing tags are available for the "User" role
Connected feature: image_tag_create, image_tag_edit, image_tag_delete
Screenshot:
Test on:
tamazlykar/1012-template-tags-action-buttons
Regression:
image_tag_create, image_tag_edit, image_tag_delete
Tested on tamazlykar/1012-template-tags-action-buttons
|
2025-04-01T06:38:07.411540
| 2021-01-25T21:57:15
|
793755106
|
{
"authors": [
"shesek",
"tiero"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4406",
"repo": "bwt-dev/bwt-electrum-plugin",
"url": "https://github.com/bwt-dev/bwt-electrum-plugin/issues/2"
}
|
gharchive/issue
|
Use with Tails
Unfortunately Tails does not have a recent Electrum version, and since I cannot use pip I cannot install from source either.
The only option is to use the AppImage, but AFAIK this plugin cannot work with it. Is there any workaround for this?
(I cannot use the old Electrum, because my watch-only wallet has been created with a recent electrum version and is not compatible)
I'm afraid that I'm not aware of a workaround for using the AppImage. It's not just that bwt can't work with it, it cannot be used with external electrum plugins at all.
But I'll investigate some more and report back, I might have an idea that could work...
Okay, so it's actually pretty simple!
# Extract AppImage (to a subdirectory named 'squashfs-root')
$ ./electrum-x.y.z-x86_64.AppImage --appimage-extract
# Copy the bwt plugin directory
$ cp -r /path/to/bwt squashfs-root/usr/lib/python3.7/site-packages/electrum/plugins
# Start Electrum, setup bwt, then run again without --offline
./squashfs-root/AppRun --offline
(The --offline thing is unrelated to the AppImage, just a general recommendation to avoid accidentally connecting to public servers.)
I added some instructions and a small helper script to ease the setup.
Thanks for getting me to look into this again!
Reopening until you confirm this works for you.
It worked on various environments that I tried this on, closing.
|
2025-04-01T06:38:07.451391
| 2024-11-04T19:17:09
|
2633638339
|
{
"authors": [
"DevMC7",
"byteManiak"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4407",
"repo": "byteManiak/mecha",
"url": "https://github.com/byteManiak/mecha/issues/4"
}
|
gharchive/issue
|
Hitbox being offset and way too large
I tried to make a boss that has a custom hitbox using this mod, but the hitbox is really large and offset. I used the sample code from the repo's readme, but whenever I returned a list with more than one element the hitbox became like the one in the image. I've tried using extremely small numbers for the hitbox size, but it didn't work. I've also customized and completely removed the setDimensions() method for the entity, but that didn't work either
The entity type I'm using is a PathAwareEntity
Hi, can I see the code for your custom collider?
|
2025-04-01T06:38:07.463136
| 2023-11-29T01:18:17
|
2015614708
|
{
"authors": [
"divya-mohan0209",
"ricochet"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4408",
"repo": "bytecodealliance/governance",
"url": "https://github.com/bytecodealliance/governance/pull/59"
}
|
gharchive/pull-request
|
Add Divya as recognized contributor
I am nominating or self-nominating a Recognized Contributor.
Name: Divya Mohan
GitHub Username: @divya-mohan0209
Projects/SIGs:
SIG-Documentation
JCO
Nomination
Divya has made several contributions to documentation for component docs and JCO.
Optional: Endorsements
Bailey Hayes (@ricochet)
[ ] I have read and understood the qualifications for a Recognized Contributor
Thank you @ricochet :heart:
|
2025-04-01T06:38:07.494580
| 2023-05-30T10:53:07
|
1732008081
|
{
"authors": [
"wenyongh",
"yamt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4409",
"repo": "bytecodealliance/wasm-micro-runtime",
"url": "https://github.com/bytecodealliance/wasm-micro-runtime/pull/2244"
}
|
gharchive/pull-request
|
aot/jit native stack bound check improvement
summary:
Move the native stack overflow check from the caller to the callee because the former doesn't work for call_indirect and imported functions.
Make the stack usage estimation more accurate.
Instead of making a guess from the number of wasm locals in the function, use the LLVM's idea of the stack size of each MachineFunction. The former is inaccurate because a) it doesn't reflect optimization passes and b) wasm locals are not the only reason to use stack.
To use the post-compilation stack usage information without requiring 2-pass compilation or machine-code imm rewrites, introduce a global array to store the stack consumption of each function.
for JIT, use a custom IRCompiler with an extra pass to fill the array.
for AOT, use clang -fstack-usage equivalent instead because we support external llc.
Re-implement function call stack usage estimation to reflect the real calling conventions better.
(aot_estimate_stack_usage_for_function_call)
Re-implement stack estimation logic (--enable-memory-profiling) based on the new machinery.
discussions:
https://github.com/bytecodealliance/wasm-micro-runtime/issues/2105
todo/known issues/open questions:
implement 32-bit case
fill the stack_sizes array for jit (use something similar to https://github.com/bytecodealliance/wasm-micro-runtime/pull/2216)
fix jit tier up (or confirm it isn't broken); reading the code, i couldn't find anything broken.
ensure appropriate jit partitioning (ensure to compile the function body before executing the corresponding wrapper)
account caller-side stack consumption (cf https://github.com/bytecodealliance/wasm-micro-runtime/issues/2105#issuecomment-1543533575)
what to do for native function calls? do nothing special, at least within this PR
fix external llc. pass -fstack-usage to the external command?
re-implement enable_stack_estimation based on the new machinery
what to do for RtlAddFunctionTable?
it seems broken regardless of this PR. but this PR might break it further.
i'm not even sure how i can test it. is it for AddVectoredExceptionHandler?
see also: https://github.com/bytecodealliance/wasm-micro-runtime/issues/2242
references:
https://learn.microsoft.com/en-us/windows/win32/api/winnt/nf-winnt-rtladdfunctiontable
https://learn.microsoft.com/en-us/windows/win32/debug/pe-format?redirectedfrom=MSDN#the-pdata-section
test
this test module consumes about 8MB of stack (wamr aot with llvm 14, amd64) https://github.com/yamt/toywasm/blob/1cc6d551b0fcd10cc8c8b3516c48ba08015e6ad6/wat/many_stack.wat.jinja#L37-L38 worked as expected
investigate assertion failure seen with js.wasm
investigate app heap corruptions seen with aot https://github.com/bytecodealliance/wasm-micro-runtime/issues/2275
non x86 archs
benchmark. noinline can have severe implications for certain types of modules. however, as wasm is usually a compiler target, hopefully fundamental inlining has already been done before aot/jit compilation. https://github.com/bytecodealliance/wasm-micro-runtime/pull/2244#issuecomment-1588769538
remove/disable debug code
do something for func_ctx->debug_func. just disable for the wrapper func?
add missing error checks
reduce code dup. probably make aot_create_func_context use create_basic_func_context.
fix errors caused by empty function
look at x86_32 failure on the ci
Segv https://github.com/bytecodealliance/wasm-micro-runtime/pull/2260
sNaN related issue: an i32.reinterpret_f32 test in conversions.wast was failing. it's x87 flds/fstp which doesn't preserve sNaN. the problem is not specific to this PR. it seems to work on the main branch just by luck. (it would fail if you disable optimizations.) i don't think there's a simple way to fix it w/o changing the aot ABI. https://github.com/bytecodealliance/wasm-micro-runtime/pull/2269
a quick benchmark with coremark
# wamr versions
# base: 7ec77598dd5c62eafdbe03eca883bc42781f097e
# new: be166e8a4fdeef4b941a979d62ab81e62cdf4ddf (https://github.com/bytecodealliance/wasm-micro-runtime/pull/2244)
# wamrc options
# bc0: --bounds-checks=0
# bc1: --bounds-checks=1
# bc1-emp: --bounds-checks=1 --enable-memory-profiling
script: https://gist.github.com/yamt/78de859809694a893b7c7732a1025722
@yamt Do we need to add an extra option if we want to enable the new stack overflow check? Or do we just use it as normal, e.g. wamrc --target=i386 -o test.aot test.wasm?
the same usage as before.
Got it, thanks. I just had a quick review, it looks good but it is a little complex, I need to read more carefully about the aot_llvm.c and aot_emit_function.c, and do some tests.
while this works for x86, it doesn't seem to work well for xtensa. let me investigate a bit.
there are at least two problems:
with its windowed abi, tail call elimination is difficult. the xtensa version of llvm doesn't implement it.
it allocates the area of function call arguments as a part of the caller's stack frame. unlike x86, it seems that it's already included in the stack size reported by MFI->getStackSize().
Thanks, do you mean changing size += 16; to size = align_uint(size, 16) doesn't work for xtensa, or that this PR has an issue for xtensa?
this PR.
the approach with a wrapper function somehow assumes efficient tail call.
while this works for x86. it doesn't seem working well for xtensa. let me investigate a bit.
there are at least two problems:
* with its windowed abi, tail call elimination is difficult. the xtensa version of llvm doesn't implement it.
as a tail call in general seems impossible in xtensa windowed abi, i suspect there is no simple solution.
also, riscv seems to prevent tail call optimization in some cases. (when a function has too many parameters to pass via registers?)
a possible workaround is to tweak our aot abi to pass >N parameters via a pointer, like the following.
that way the stack consumption of the wrapper functions will not be too large even w/o tail call optimization. i feel it's a bit too intrusive though.
struct func1_stack_params {
    arg3
    arg4
};
func1(exec_env, arg1, arg2, struct func1_stack_params *)

caller()
{
    struct func1_stack_params params;
    func1(exec_env, arg1, arg2, &params);
}
* it allocates the area of function call arguments as a part of caller's stack frame. unlike x86, it seems that it's already included by the stack size reported by MFI->getStackSize().
this is just a matter of adding some target dependent code. (eg if (xtensa))
It is good to me if it is only for xtensa 32-bit. For xtensa 64-bit linux/macos, we can also use stack hw boundary check, right?
i fixed the xtensa case. it's still inefficient, but not broken.
while i haven't tested on real hardware yet, the wamrc output looks reasonable.
lightly tested on esp32-devkitc. it worked as expected so far.
OK, it seems there are no comments from other developers, let's merge this PR?
i have no problem with it
|
2025-04-01T06:38:07.498619
| 2024-11-09T01:47:38
|
2645465756
|
{
"authors": [
"lum1n0us",
"sjamesr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4410",
"repo": "bytecodealliance/wasm-micro-runtime",
"url": "https://github.com/bytecodealliance/wasm-micro-runtime/pull/3899"
}
|
gharchive/pull-request
|
GlobalValueSet was moved to IRPartitionLayer recently, but we have a …
…local definition anyway
This resolves a compilation error against recent revisions of LLVM
GlobalValueSet was moved to IRPartitionLayer recently
In which llvm release? WAMR is somewhat sensitive to the version of LLVM. Currently WAMR depends on LLVM 15.x
It was moved about a month ago in https://github.com/llvm/llvm-project/commit/04af63b267c391a4b0a0fb61060f724f8b5bc2be. Internally at Google we build WAMR against LLVM at approximately HEAD.
In the above change, GlobalValueSet is now defined as using GlobalValueSet = std::set<const GlobalValue *>; inside IRPartitionLayer.
I think my change is safe, to my non-expert eyes the definition of GlobalValueSet is the same before and after this change.
|
2025-04-01T06:38:07.516135
| 2024-06-26T20:50:40
|
2376259543
|
{
"authors": [
"alexcrichton",
"lann",
"lukewagner",
"tschneidereit"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4411",
"repo": "bytecodealliance/wasmtime",
"url": "https://github.com/bytecodealliance/wasmtime/issues/8878"
}
|
gharchive/issue
|
What should the default behavior of wasmtime serve be with scheme/authority?
My changes in https://github.com/bytecodealliance/wasmtime/pull/8861 introduced a change in the default behavior of wasmtime serve. Notably this program:
use wasi::http::types::*;
struct T;
wasi::http::proxy::export!(T);
impl wasi::exports::wasi::http::incoming_handler::Guest for T {
    fn handle(request: IncomingRequest, outparam: ResponseOutparam) {
        println!("request.method = {:?}", request.method());
        println!("request.scheme = {:?}", request.scheme());
        println!("request.authority = {:?}", request.authority());
        let resp = OutgoingResponse::new(Fields::new());
        ResponseOutparam::set(outparam, Ok(resp));
    }
}
(compiled component)
When run with wasmtime serve and hit with curl http://localhost:8080 it prints:
Serving HTTP on http://<IP_ADDRESS>:8080/
stdout [0] :: request.method = Method::Get
stdout [0] :: request.scheme = Some(Scheme::Http)
stdout [0] :: request.authority = Some("localhost:8080")
On main, however, it prints
Serving HTTP on http://<IP_ADDRESS>:8080/
stdout [0] :: request.method = Method::Get
stdout [0] :: request.scheme = None
stdout [0] :: request.authority = None
This regression is due to these changes because I didn't understand what they were doing.
Now why wasn't this caught by the test suite? I tried writing a test for this and it passed, but apparently it's due to our usage of
hyper::Request::builder().uri("http://localhost/") in the test suite. That creates an HTTP requests that looks like:
GET http://localhost/ HTTP/1.1
...
where using curl on the command line generates:
GET / HTTP/1.1
...
That leads me to this issue. What should scheme and authority report in these two cases for wasmtime serve by default? The previous behavior means that GET / could not be distinguished from GET http://localhost/ which naively seems like what scheme and authority are trying to map to.
Is the previous behavior of wasmtime serve buggy? Is the current behavior buggy? Should the spec be clarified?
cc @elliottt @pchickey
I'll note that the difference can be seen with:
$ curl -v --request-target http://localhost/ http://localhost:8080
vs
$ curl -v http://localhost:8080
in terms of how the headers are set. The former is basically what our test suite does while the latter is what the curl command line does by default.
The former (full URL in request) is incorrect; that form is only applicable to CONNECT methods.
The former (full URL in request) is incorrect; that form is only applicable to CONNECT methods.
Edit: looks like I might be wrong about it being incorrect per se; it might just be very uncommon.
That makes sense, and means we should probably update our tests, but I guess I'm also curious still what the behavior here should be. For example why do scheme and authority return an Option at the WIT level? Are they intended to map to this or is it expected that they're effectively always Some?
scheme is derived from out of band info: whether the request came in over TLS or not
my take is that we should have wasi:http either always provide an authority, or provide the host header. Otherwise there's no standard way for content to learn about the authority it's called under. I know that @lukewagner concluded that we must never provide the host header. If that stands, I conversely think that we must continue providing the authority for incoming requests.
In effect, that means that for incoming requests what content sees is always the absoluteURI form, which the RFC seems to indicate is the way forward, too:
To allow for transition to absoluteURIs in all requests in future
versions of HTTP, all HTTP/1.1 servers MUST accept the absoluteURI
form in requests, even though HTTP/1.1 clients will only generate
them in requests to proxies.
Don't pay too much attention to the 1.1 spec for future direction. HTTP/2 makes authority even more special by splitting it into a "pseudo-header".
Great question. First of all, from asking some HTTP folks, I believe it is the case that we could tighten the spec wording to say that for methods other than CONNECT and OPTIONS, there must always be an authority (i.e., the return value is some). Apparently, CONNECT and OPTIONS have an unfortunate * option that simply has no authority.
Next, my understanding from RFC 9110 is that the authority either comes from the :authority pseudo-header in H/2/3 or the Host header in H/1, and if both are present, they are not allowed to disagree (and this is Web-compatible). Thus, I think what WASI HTTP should say is that:
Host is in the definitely-forbidden headers list
The request.authority field is derived in a transport-dependent manner (and required to be present for non-CONNECT/OPTIONS)
This allows the host implementation to do the transport-appropriate thing for requests coming in or out over the wire.
Ok so for the use case of wasmtime serve specifically:
[method]incoming-request.scheme is always some(http) because we don't implement https yet
[method]incoming-request.authority uses the incoming URI's host if it's there (probably only for CONNECT and OPTIONS). Otherwise it uses Host, otherwise it returns .... None? "<IP_ADDRESS>"? The value of --addr?
This all indicates to me that new_incoming_request should take both a scheme and an authority argument rather than inferring these from the Request as well?
(sorry I'm really looking for guidance/second opinions here, I don't know why things were originally constructed the way they are or if they're just how things got shaken out)
otherwise it returns .... None?
Do we want/need to support HTTP/1.0? afaik host is mandatory for 1.1
Ah ok I didn't realize that was a 1.0 thing. Sounds like it should search for Host as a header in the host-side hyper::Request for the authority if it's not present in the URI. I'll try to make a PR with these changes tomorrow.
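For reference, a minimal sketch (not the actual wasmtime code) of that fallback, using the http crate types and a hypothetical derive_authority helper:
use http::{header, Request};

// Prefer the authority from an absolute-form request target, fall back to the
// Host header, and treat a missing value as an error per the discussion above.
fn derive_authority<B>(req: &Request<B>) -> Result<String, &'static str> {
    if let Some(authority) = req.uri().authority() {
        // e.g. "GET http://localhost/ HTTP/1.1" carries the authority in the URI
        return Ok(authority.to_string());
    }
    match req.headers().get(header::HOST) {
        Some(host) => host
            .to_str()
            .map(|s| s.to_owned())
            .map_err(|_| "Host header is not valid UTF-8"),
        // HTTP/1.0 clients may omit Host entirely
        None => Err("missing authority (no Host header)"),
    }
}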
The impression that I got talking to an HTTP server maintainer is that it's web-compatible to require the Host field (rejecting if it's absent in HTTP 1.0 or 1.1) and that allowing an empty authority in cases other than CONNECT/OPTIONS can transitively lead to security issues (random googling found this e.g.). Similarly, RFC 9110 (which also intends to be Web-compatible) says that there MUST be a Host header (when there is no :authority pseudo-header) without making an exception for HTTP 1.0. Thus, I'd suggest making it an error in wasmtime serve if there is no Host in HTTP 1.0, at least to start with, and see if anyone complains.
|
2025-04-01T06:38:07.520581
| 2024-01-21T11:14:41
|
2092538955
|
{
"authors": [
"CLAassistant",
"losfair"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4412",
"repo": "bytedance/monoio",
"url": "https://github.com/bytedance/monoio/pull/226"
}
|
gharchive/pull-request
|
fix: BufReader should not panic when used after cancellation
If a BufReader::fill_buf() call is cancelled when the internal buffer is held by .read(), re-allocate the buffer when used in the future.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it.
Closing this - simply re-allocating the buffer is incorrect because cancellation is asynchronous and this can result in lost data.
The error message could be improved though...
|
2025-04-01T06:38:07.522790
| 2023-06-29T05:21:00
|
1780151232
|
{
"authors": [
"AsterDY",
"chenzhuoyu",
"zacharytse"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4413",
"repo": "bytedance/sonic",
"url": "https://github.com/bytedance/sonic/issues/472"
}
|
gharchive/issue
|
Undefined symbols for architecture x86_64
Hello. When I use sonic version 1.9.2, I get the error 'Undefined symbols for architecture x86_64'. I tested this on Linux and Mac. My Go version is 1.17.11. I don't know how to solve this problem.
Please describe your question in more detail.
Maybe you mean decoder.SyntaxError, it works now on v1.10.0-rc
|
2025-04-01T06:38:07.548630
| 2017-04-30T16:09:31
|
225336601
|
{
"authors": [
"VisualFox",
"ianmjones"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4414",
"repo": "bytepixie/actordb-for-docker",
"url": "https://github.com/bytepixie/actordb-for-docker/pull/4"
}
|
gharchive/pull-request
|
Update to ActorDB 0.10.25.
Updated ActorDB to 0.10.25 and dumb-init to 1.20
Reviewed and tested, merging.
Thanks @VisualFox!
|
2025-04-01T06:38:07.601092
| 2022-11-16T21:24:36
|
1452292597
|
{
"authors": [
"callumrollo",
"hvdosser",
"jklymak",
"richardsc"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4416",
"repo": "c-proof/pyglider",
"url": "https://github.com/c-proof/pyglider/issues/128"
}
|
gharchive/issue
|
SeaExplorer delayed mode time series data loss
In testing the new code for this pull request, I found an issue with processing the delayed mode SeaExplorer time series data for missions for which certain sensors (oxygen in this case) are severely oversampled. These missions end up with delayed mode data files that contain fewer actual (non-nan) data points than the realtime files. In other words, we are losing data during the processing.
Currently, the dropna function is used to remove the oversampled oxygen data when converting the raw data. The dropna function is working correctly, however note that the resulting data has many nan values in it, for both the CTD and optics. These nan values will often not co-occur.
I think the problem in the processing is caused by using the GPCTD_TEMPERATURE as the default time base in seaexplorer.py. This variable contains nan values that are not all co-located with the nan values in the oxygen and optical variables. It's desirable to use the CTD as the time base, but we may need to do some interpolation to avoid losing data when the other variables are mapped onto this base.
We have had similar issues with our delayed mode datasets in the past. To get around it, we use NAV_LATITUDE as the timebase in all of our datasets, as this has a non-nan value for every line. Here's an example yml for one of our gliders with a GPCTD. https://github.com/voto-ocean-knowledge/deployment-yaml/blob/master/mission_yaml/SEA70_M15.yml
Would this solution work for you?
This also raises the broader point that perhaps we should have a test with delayed mode data, as currently all tests are run with nrt data
We use NAV_LATITUDE as timebase in conjunction with keep_variables that account for each of the sensors. In this way, all data pass the dropna function, but rows with no data from any of the sensors are dropped by the 'keep_variables' in ncvar block.
I'm not sure if this is related, but I had previously noticed that the timestamps recorded for sensors that are known to be outputting at 1Hz (according to the sensor) are never exactly 1Hz, likely due to the time that they arrive at the payload computer -- in other words, timing information from the sensor is not recorded by the PLD but it assigns its own. The result is that two different sensors sampling at the same rate (e.g. a CTD and an optical O2 sensor) end up with times that are different by microseconds from one sample to the next, and even though we expect them to be "simultaneous", in terms of the recorded time stamps, they aren't.
For our data, we have the situation where indeed "Nav" is written every heartbeat; certainly though there is no new data in Nav. Some heartbeats have no data at all, some have CTD data (all variables), some have Optical, some have O2. Sometimes these line up, and currently we only save when the other sensors line up with the CTD sensors.
I think what needs to happen is that for each instrument we need a timeseries, and then we need to align it with the timebase. For our SeaExplorers, that time base is most naturally the CTD. The other samples may be offset from that a bit, but the CTD is sampling at 2 Hz, and I don't think we care about the other sensors' slight phase errors at less than 2 Hz.
I haven't explored how to do this with polars, but in xarray you would just do an interp operation before filling it into the parent. So in pseudocode:
time, ctd_temp = decode('ctd_temp', drop_na=True)
ds['time'] = ('time', time)
ds['temperature'] = ctd_temp
# etc other ctd variables
time_O2, O2_sat = decode('O2_sat', drop_na=True)
ds['O2'] = np.interp(time, time_O2, O2_sat)
...
# etc other O2 variables
I don't think this step would slow things down very much, and I think linear interpolation should be pretty OK from a data point of view.
A second option would be just to save the three instruments as separate polars arrays as raw output and then merge as a second step. That would allow double checking the raw data. However, I think the raw SeaExplorer data is simple enough that it's pretty usable as-is for any debugging.
I'll ping about this. Is there a consensus about how to proceed?
I think what needs to happen is that for each instrument we need a timeseries, and then we need to align it with the timebase. For our SeaExplorers, that time base is most naturally the CTD. The other samples may be offset from that a it, but the CTD is sampling at 2 Hz, and I don't think we care about the other sensors slight phase errors at less than 2 Hz.
I agree with this approach, with the clarification (not really important for the decision or discussion) that I'm 99% sure the GP-CTD samples internally at a max of 1Hz. That's not true for a legato, which can be programmed to sample much faster (up to 16Hz), though the SeaExplorer PLD can only be configured to sample at 1Hz or "as fast as possible" (which IIRC is something like 20Hz)
@callumrollo any objections to this? I can take a crack at doing it the next few days.
@hvdosser, do we have a link to some data where this is problematic?
The delayed L0-timeseries (dfo-bb046-20200908_delayed.nc) for this mission is a nice example of the problem.
Is there a time, or ideally a set of raw files where this problem occurred? I couldn't replicate, though I couldn't get it to work at all with the setup in that directory.
Sorry for the delay on this, just got back from vacation. The idea of aligning timebases sounds good to me. Should we linearly interpolate, or do a nearest neighbour? I'm always a little cautious about interpolating data. Especially as it would lead to some strange results with integer data like raw counts for chlorophyll etc. If we want to do it at the raw to raw nc step I can look at implementing it in polars tomorrow
OK, I ran the whole data set, and can see the problem now.
@callumrollo if you want to do this, happy to let you. I'm less nervous about linear interpolation - nearest neighbour leads to phase errors. However, if there is a difference for other data types, maybe we need an entry in the yml metadata that says which interpolation is used?
The tests are currently failing on the main branch. Looks like this was caused by something in #129. That PR was only for slocum data, so I'll work from the last commit where all the tests cleared for this PR. I've downloaded the dataset that @hvdosser indicated and will use that as a test case for timestamp alignment
@callumrollo I think the tests should be OK on main now (I hope)
I think we need to do some work on the test infrastructure. In particular, we should force rewriting of the files we compare with. Right now the processing runs incrementally.
Thanks for working to get the tests passing again @jklymak. I'll start on resolving this Issue now.
I agree on forced reprocessing in the tests. would using the incremental=False flag suffice for this?
@hvdosser I think the keep_vars functionality present in pyglider already could solve this problem to first order. Is it something you're using?
I've put a demo together of the difference between using CTD as a timebase and using NAV_LATITUDE as a timebase then cutting down to the rows where at least one of the sensors has a sample. It's not perfect, but has been working pretty well for us so far.
https://github.com/callumrollo/keep_vars_experiment/blob/main/timebase_keep_experiment.ipynb
Sorry, it's a bit of a rushed Friday afternoon job!
OK, I forgot about this, and I'm not sure it's documented. What does this do exactly?
I haven't had a chance to look in detail yet, but the first thing I'd check is whether this works for a delayed-mode dataset. We didn't see much of an issue with the realtime data.
looking at it quickly it seems to keep the data if any of the listed sensors are present in a line.
I guess this is a philosophical thing - do we want all the raw data in a time series, which means any given sensor is riddled with NaN, or do we want time series where the sensors are time-aligned to one sensor's time base.
I guess I'll argue for the latter. If someone needs the time series with raw O2, for instance, they can rerun the processing with just that variable, and that variable as the timebase. Or they can load the raw parquet files. I think by the time we make the first time series netcdf, it should be time-aligned, and not full of NaN's.
This seems particularly apropos for the O2 and the optics on the SeaExplorer, which so far as I can tell are often ludicrously oversampled?
But happy to discuss further.
I agree we need a way to reduce the size of the final timeseries while keeping as much science data as possible.
I've been white-boarding this morning to represent the way pyglider currently does this for seaexplorer data, and what potential improvements could look like. Resultant files are big, particularly if one of the keep_vars has a high sampling rate.
This is the method we currently implement at VOTO, as the scientists we supply data to don't want any interpolation of data. Some of our delayed mode datasets are almost 10 GB though! Not ideal.
Potential improvements
timebase:ctd linear interpolation
| Time | nav | ctd | oxy | nitrate |
|------|-----|-----|-----|---------|
| 0    | X   | X   | I   | X       |
| 3    | X   | X   | I   | I       |
| 6    | X   | X   | I   | I       |
| 10   | X   | X   | I   | I       |
This solves the problem nicely for oversampling sensors like oxygen. However, this would be a severe distortion of e.g. a methane sensor that records a sample every 10 minutes and now has an apparent sample every second
timebase:ctd nearest neighbour
| Time | nav | ctd | oxy | nitrate |
|------|-----|-----|-----|---------|
| 0    | X   | X   | N   | X       |
| 3    | X   | X   | N   | -       |
| 6    | X   | X   | N   | -       |
| 10   | X   | X   | N   | N       |
This avoids over-interpolation of slow sampling sensors, but the downsampling of more fast sampling sensors may be less preferable. May also be more complex to implement.
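As a purely illustrative sketch of the difference between the two options (hypothetical numbers, numpy/pandas):

import numpy as np
import pandas as pd

time_ctd = np.array([0.0, 3.0, 6.0, 10.0])   # CTD timebase, as in the tables above
t_nitrate = np.array([0.0, 10.0])             # slow sensor: two real samples
nitrate = np.array([5.1, 5.4])

# Option 1: linear interpolation - every CTD timestamp gets a value ("I" cells)
linear = np.interp(time_ctd, t_nitrate, nitrate)

# Option 2: nearest neighbour with a tolerance - timestamps far from a real
# sample stay NaN ("-" cells), avoiding apparent oversampling of slow sensors
nearest = pd.Series(nitrate, index=t_nitrate).reindex(
    time_ctd, method="nearest", tolerance=2.0
)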
Whatever we decide to do going forward, I recommend that it is either controllable from the yaml, like the keep_vars solution, or operates on the l0 timeseries as an extra step, so that an end user can get the final l0 timeseries without any interpolation if they want.
We should also explain these various options in the documentation. Perhaps using diagrams like the ones in this comment.
Folks who don't want any interpolation or alignment of sensors have two options (that I agree should be documented)
I would argue that the raw merged parquet file is what the folks who don't want any interpolation are after. That, to my knowledge, doesn't drop any data or do any processing?
They can make a oxy.yaml that aligns everything to oxygen instead of the ctd, and then they have the best of both worlds - pure oxygen data, and interpolated quantities of everything else.
For interpolation options, totally fine with both linear and nearest. I'd never use nearest, but...
The original reason for using the CTD for our data sets was that indeed oxygen had much more data, but it was all repeated and was really being sampled at 1 Hz or 2 Hz or something, and seemed a silly error of Alseamars to be sampling it at 48 Hz or whatever they were doing. Of course if someone has a real data set that needs sampling at higher frequency than the CTD, they should be using that as the timebase.
This sounds like a good way to implement the interpolation. I've started working on a PR.
|
2025-04-01T06:38:07.607931
| 2021-06-22T13:34:24
|
927241606
|
{
"authors": [
"c-smile",
"zyxk"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4417",
"repo": "c-smile/sciter-js-sdk",
"url": "https://github.com/c-smile/sciter-js-sdk/issues/125"
}
|
gharchive/issue
|
sciter.js using console.log cannot exceed 11 lines
When debugging with usciter, console.log output cannot exceed 11 lines; anything beyond that is not displayed.
Fixed already.
|
2025-04-01T06:38:07.627151
| 2022-11-10T14:10:45
|
1443975313
|
{
"authors": [
"c0c0n3"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4418",
"repo": "c0c0n3/kitt4sme.live",
"url": "https://github.com/c0c0n3/kitt4sme.live/pull/184"
}
|
gharchive/pull-request
|
JSON Agent
This PR implements a fully-fledged JSON Agent deployment with HTTP and MQTT transports as #185 requested. Specifically, the following got implemented.
Manifests to run JSON Agent in a K8s cluster. The agent comes with an internal endpoint for provisioning and two external endpoints to get JSON device data in, one over HTTP, the other over MQTT---see #181 for our current MQTT setup.
Routing. We expose the HTTP endpoint at <kitt4sme-base-url>/jsonagent/ so devices can POST JSON data at <kitt4sme-base-url>/jsonagent/iot/json?k=<api-key>&i=<device-id>. To send JSON data over MQTT, devices use the MQTT WebSocket implemented in #181 and the topic: json/<api-key>/<device-id>/attrs.
Security. Istio handles TLS termination for the HTTP endpoint and delegates security decisions to the existing FIWARE OPA policy. Security over MQTT works as explained in #181.
In-memory configuration. The service pre-loads device mappings and other config in memory to speed up JSON to FIWARE data translation---see config.js.
Persistence. No Mongo DB backend. Data sits in memory (it isn't much actually) but it's loaded from K8s configmaps, so that's the persistence backend. (If the pod restarts, config map data gets loaded again in memory; all service pods share the same config map data.)
IaC management. The manifests of all agent-related resources are tied to an Argo CD app in the mesh infra project so we can easily manage deployments through a GUI too.
Simplified config model. All the agent config sits in our repo, including devices and mappings. Argo CD automatically deploys this data to the configmaps the service uses as a persistence backend---see point above. No need to fiddle with awkward service calls to figure out what devices have been defined, what their mappings are, or to add new devices. It's all defined in config.js in our repo. Look at that file to figure out the lay of the land at a glance. Edit it to add or modify device defs, mappings etc. Argo CD takes care of propagating your changes to the cluster.
Demo
Here's a sum up of how to demo most of what this PR implemented.
First off, build your own KITT4SME cluster in a Multipass VM as explained in the bootstrap procedure. Wait a bit until all services are ready. The commands below use the <IP_ADDRESS> IP address to reach the Multipass VM; replace <IP_ADDRESS> in each command with your VM's IP.
IaC
Log into Argo CD. There's a json-agent app among the platform infra services. Check out all the resources deployed and have a look at the logs. Open the config map, you should be able to see the exact same content as in the config.js included in this PR.
Provisioning
Provision a service with two devices, one sending data over MQTT and the other over HTTP. Notice it's best to do that in config.js, but we'll do that manually here to speed things up. So as a rule, there would be no need to expose JSON Agent's provisioning API. Since we're bending the rules here, port-forward the agent's provisioning port.
$ kubectl port-forward svc/jsonagent 4041:4041
On to the provisioning business. Create a service with an API key of gr33t, entity type of Greeting and HTTP endpoint of /iot/json.
$ curl -iX POST 'http://localhost:4041/iot/services' \
-H 'Content-Type: application/json' \
-H 'fiware-service: greeting' \
-H 'fiware-servicepath: /' \
-d '{
"services": [
{
"apikey": "gr33t",
"entity_type": "Greeting",
"resource": "/iot/json"
}
]
}'
Create two devices to send a greeting message. The message is in the JSON format: { "w": data }, where data is a greeting string the device sends. The corresponding NGSI entity has type Greeting and a words attribute holding the actual greeting. The first device has an ID of greeter001 and sends its data over MQTT, whereas the second has ID greeter002 and sends data over HTTP.
$ curl -iX POST \
'http://localhost:4041/iot/devices' \
-H 'Content-Type: application/json' \
-H 'fiware-service: greeting' \
-H 'fiware-servicepath: /' \
-d '{
"devices": [
{
"device_id": "greeter001",
"entity_name": "urn:ngsi-ld:Greeting:001",
"entity_type": "Greeting",
"protocol": "PDI-IoTA-JSON",
"transport": "MQTT",
"attributes": [
{ "object_id": "w", "name": "words", "type": "Text" }
]
}
]
}
'
$ curl -iX POST \
'http://localhost:4041/iot/devices' \
-H 'Content-Type: application/json' \
-H 'fiware-service: greeting' \
-H 'fiware-servicepath: /' \
-d '{
"devices": [
{
"device_id": "greeter002",
"entity_name": "urn:ngsi-ld:Greeting:002",
"entity_type": "Greeting",
"protocol": "PDI-IoTA-JSON",
"transport": "HTTP",
"attributes": [
{ "object_id": "w", "name": "words", "type": "Text" }
]
}
]
}
'
With this setup, greeter001 is expected to send its UL payload to the MQTT topic
json/gr33t/greeter001/attrs
whereas greeter002 is supposed to POST its JSON payload to the URL
http://<IP_ADDRESS>/jsonagent/iot/json?k=gr33t&i=greeter002
since our Istio config routes /jsonagent/<rest> to /<rest> on port 7896 of the jsonagent service.
Finally, check you can retrieve the service and devices you've just created.
$ curl 'http://localhost:4041/iot/services' \
-H 'fiware-service: greeting' \
-H 'fiware-servicepath: /'
{"count":1,"services":[{"apikey":"gr33t","resource":"/iot/json","service":"greeting","subservice":"/","_id":1,"creationDate":1668087518563,"entity_type":"Greeting"}]}
$ curl 'http://localhost:4041/iot/devices' \
-H 'fiware-service: greeting' \
-H 'fiware-servicepath: /'
{"count":2,"devices":[{"device_id":"greeter001","service":"greeting","service_path":"/","entity_name":"urn:ngsi-ld:Greeting:001","entity_type":"Greeting","transport":"MQTT","attributes":[{"object_id":"w","name":"words","type":"Text"}],"commands":[],"static_attributes":[],"protocol":"PDI-IoTA-JSON","explicitAttrs":false},{"device_id":"greeter002","service":"greeting","service_path":"/","entity_name":"urn:ngsi-ld:Greeting:002","entity_type":"Greeting","polling":true,"transport":"HTTP","attributes":[{"object_id":"w","name":"words","type":"Text"}],"commands":[],"static_attributes":[],"protocol":"PDI-IoTA-JSON","explicitAttrs":false}]}
Sending device data over MQTT
We're going to use an external WebSocket client to simulate device data coming in over MQTT.
Browse to http://www.emqx.io/online-mqtt-client. Hit the "New Connection" button and enter the following data: name=kitt4sme, client-id=tasty, host=ws://<IP_ADDRESS>, path=/mqtt/, port=80, username=iot. You also need to enter the "iot" user's password, which I can't type here obviously. Hit connect, then send the following JSON message to the json/gr33t/greeter001/attrs topic: { "w": "howzit!" }.
Check the "howzit!" greeting trekked all the way to Orion. It should be stored in the "Greeting" entity having an ID of urn:ngsi-ld:Greeting:001.
$ curl \
'http://<IP_ADDRESS>/orion/v2/entities/urn:ngsi-ld:Greeting:001/attrs/words/value' \
-H 'fiware-service: greeting' \
-H 'fiware-servicepath: /'
"howzit!"
Sending device data over HTTP
Let's also send a greeting from greeter002. This device sends its data over HTTP
$ curl -iX POST \
'http://<IP_ADDRESS>/jsonagent/iot/json?k=gr33t&i=greeter002' \
-H 'Content-Type: application/json' \
-d '{ "w": "ahoy, matey!" }'
HTTP/1.1 403 Forbidden
date: Thu, 10 Nov 2022 14:00:52 GMT
server: istio-envoy
content-length: 0
What?! Yep, that's right. Our FIWARE OPA policy checks you've got a valid JWT token; since we didn't have one, we got shown the door. How rude.
Repeat the POST with a valid JWT in the Authorization header and this time it goes through, and Orion gets our friendly greeting
$ curl \
'http://<IP_ADDRESS>/orion/v2/entities/urn:ngsi-ld:Greeting:002/attrs/words/value' \
-H 'fiware-service: greeting' \
-H 'fiware-servicepath: /'
"ahoy, matey!"
As expected, the greeting got stored in the "Greeting" entity having an ID of urn:ngsi-ld:Greeting:002.
Cool bananas!
|
2025-04-01T06:38:07.704314
| 2018-05-19T23:25:50
|
324668178
|
{
"authors": [
"samhocevar",
"yugr"
],
"license": "WTFPL",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4422",
"repo": "cacalabs/libcaca",
"url": "https://github.com/cacalabs/libcaca/pull/34"
}
|
gharchive/pull-request
|
Hide private symbols (issue #33).
Hi, this is a PR for patch in https://github.com/cacalabs/libcaca/issues/33
Thanks for the PR.
|
2025-04-01T06:38:07.721818
| 2015-01-06T23:53:41
|
53579515
|
{
"authors": [
"Crash--",
"GrahamCampbell",
"jbrooksuk",
"joecohens"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4426",
"repo": "cachethq/Cachet",
"url": "https://github.com/cachethq/Cachet/pull/317"
}
|
gharchive/pull-request
|
[WIP] Notifications
DO NOT MERGE THIS YET. IT'S TOTALLY UNREADY!
@Crash-- You've misused migrations here. That belongs in a seeder.
@GrahamCampbell OK, but I absolutely need these data in the DB. Without them we'll have a problem; this is why I put it into a migration, because migrations are mandatory and seeds are optional, right?
@Crash-- you can still put the data into tables, but you need to look doing that in a seeder, not the migration.
@jbrooksuk Yes, but in Cachet's documentation we can read "then you may want to seed the database with some example data." and in Laravel's: "with test data using seed classes". But in my case, this data is needed from the start.
I can ofc change this behavior, but I'll add more lines of code :(
@GrahamCampbell Can you close this PR? I'll open a new one with several fixes and improvements.
@Crash-- adding more lines of code isn't a problem if it's in the right place. If you're relying on default values to be inserted before the service can run, then you need to add error handling to only look for the service if the service is indeed setup.
This needs rebasing and needs all language files synced again.
Travis is failing because of a package not being able to download:
- Installing symfony/process (v2.5.8)
Cloning 62c77d834c6cbf9cafa294a864aeba3a6c985af3
Failed to download symfony/process from source: Failed to clone git@github.com:symfony/Process.git via git, https, ssh protocols, aborting.
Hmm. I see nothing on github/travis status pages about this.
Also I saw "Could not authenticate github.com"
Do we have to add an API key or something (I've never had to).
No, not at all.
Travis has just f***ed up somewhere.
Bizarre. They need to run Cachet and let us know about this kind of shenanigans!
Maybe we should tweet to them to let them know this is happening?
Bizarre. They need to run Cachet and let us know about this kind of shenanigans!
lol
Will do from the Cachet account.
Done :)
Needs rebasing.
Rebased but not squashed yet. @GrahamCampbell please tell me that I didn't !@#$ it up this time?
@GrahamCampbell are you ok continuing work on this one? If not, what's left?
@GrahamCampbell Maybe this will help https://github.com/dinkbit/notifyme
@joecohens how do you want to do this? Close this PR and implement fresh with notifyme, or modify this one?
I think more than half of this branch is reusable. I'll take it form here :)
@cachethq/owners Closing this for now, I'm starting fresh.
:+1:
|
2025-04-01T06:38:07.755587
| 2023-05-04T17:21:11
|
1696397086
|
{
"authors": [
"Lingxi-Li",
"mholt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4427",
"repo": "caddyserver/forwardproxy",
"url": "https://github.com/caddyserver/forwardproxy/issues/101"
}
|
gharchive/issue
|
Rationale of :443 in ":443, example.com"
The doc says
In the Caddyfile the addresses must start with :443 for the forward_proxy to work for proxy requests of all origins.
Could you help further clarify? I thought example.com alone should have both 80 and 443 covered. The magic :443, example.com looks like a self-contradiction to me.
It's not a contradiction. (But this is a good question.) A site block name in the Caddyfile serves three purposes (somewhat regrettably):
To tell the web server what port to listen on
To tell the web server what domain name(s) to manage certs for
To tell the web server how to route HTTP requests
In most cases, these correlate and align identically as long as we assume the default port(s) of 80/443: you can tell the server you have example.com and it will listen on 443, get a cert for example.com, and serve HTTP requests with a Host header of example.com accordingly.
But when you're running a forward proxy, the Host header can contain basically anything, so you need to listen on :443 to not black-hole those HTTP requests (no. 3). But without a domain name it can't get a cert (no. 2), so you need to tell which certificate to serve in the TLS handshake. Hence, both :443, example.com.
|
2025-04-01T06:38:07.770678
| 2015-01-06T20:09:02
|
53557099
|
{
"authors": [
"Turini",
"renanigt"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4428",
"repo": "caelum/vraptor4",
"url": "https://github.com/caelum/vraptor4/issues/922"
}
|
gharchive/issue
|
Deploy vraptor-site
Taking a look at the vraptor-site commits, if I'm not wrong vraptor-site hasn't been deployed since October, when the "articles and presentations" page was added.
The download page still shows VRaptor version 4.1.1.
Since we had some improvements on documentation, what about deploying it?
Done! (actually, I forgot that it's done manually!) Thanks
|
2025-04-01T06:38:07.774461
| 2017-03-15T01:01:12
|
214252053
|
{
"authors": [
"Yangqing",
"pietern",
"slayton58"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4429",
"repo": "caffe2/caffe2",
"url": "https://github.com/caffe2/caffe2/pull/202"
}
|
gharchive/pull-request
|
Explicitly pass CXX to NCCL Makefile
Necessary if CXX isn't set when cmake is called. The CXX variable will then be
empty which prevents make from using its own default.
cc Mr @slayton58 by the way - in case NV finds errors in other nccl clients.
@Yangqing Thanks! Good catch @pietern!
|
2025-04-01T06:38:07.800274
| 2016-06-15T11:49:58
|
160403061
|
{
"authors": [
"gep13",
"patriksvensson"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4430",
"repo": "cake-build/cake",
"url": "https://github.com/cake-build/cake/issues/988"
}
|
gharchive/issue
|
Fix broken Publish-GitHub-Release Task
For both the 0.12.0 and 0.13.0 releases of Cake, we were met with an error similar to this:
https://ci.appveyor.com/project/cakebuild/cake/build/0.12.0.build.2049#L506
It occurred when trying to publish the GitHub release. Initially, I thought this was an error with GitReleaseManager, but I have just run the following test:
And as you can see here, the asset got added correctly:
https://github.com/gep13/FakeRepository/releases/tag/untagged-694b79655a063bd1e6f7
So, if we get this https://github.com/cake-build/cake/issues/923 working, we can try to figure out what is going on. Looks like there is an issue in one of the parameters that are being passed in, but I can't replicate it on my test repository.
@gep13 Should be possible now when we can set the verbosity of the logger from the script.
In an update to this, it also failed for the 0.14.0 release:
https://ci.appveyor.com/project/cakebuild/cake/build/0.14.0.build.2320#L575
This time, with the aid of the additional logging, I have come to the conclusion that the problem is the password that is being passed into the command line. I think it must contain a " or similar, that is breaking the input to GitReleaseManager. Going to change this password for the next release, and assume that everything is going to work :smile: Will re-open if required.
Tested this locally using an newly generated personal access token, and it seems to work :+1:
|
2025-04-01T06:38:07.803822
| 2017-09-09T15:43:35
|
256446872
|
{
"authors": [
"Julien-Mialon",
"devlead",
"dnfclas"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4431",
"repo": "cake-build/cake",
"url": "https://github.com/cake-build/cake/pull/1793"
}
|
gharchive/pull-request
|
GH1625 Escape comma and semicolon in msbuild property values
Replaces commas and semicolons in MSBuild property values with their hex equivalents, to fix issue #1625.
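For illustration only (the actual change is in the Cake C# code), the escaping boils down to mapping the characters MSBuild treats as property-value separators to their hex escapes:

def escape_msbuild_property_value(value):
    # ';' and ',' would otherwise split the property value on the command line
    return value.replace(";", "%3B").replace(",", "%2C")

print(escape_msbuild_property_value("a;b,c"))   # -> a%3Bb%2Cc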
@Julien-Mialon,
Thanks for your contribution.
To ensure that the project team has proper rights to use your work, please complete the Contribution License Agreement at https://cla2.dotnetfoundation.org.
It will cover your contributions to all .NET Foundation-managed open source projects.
Thanks,
.NET Foundation Pull Request Bot
@Julien-Mialon, thanks for signing the contribution license agreement. We will now validate the agreement and then the pull request.
Thanks, .NET Foundation Pull Request Bot
@Julien-Mialon your changes have been merged, thanks for your contribution 👍
|
2025-04-01T06:38:07.868738
| 2019-08-28T07:57:29
|
486217982
|
{
"authors": [
"dereuromark",
"tarnagas"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4437",
"repo": "cakephp/phinx",
"url": "https://github.com/cakephp/phinx/issues/1597"
}
|
gharchive/issue
|
"Phinx\Console\Command\Init" cannot have an empty name (symfony/console >= 4.3.4)
PHP Fatal error: Uncaught Symfony\Component\Console\Exception\LogicException: The command defined in "Phinx\Console\Command\Init" cannot have an empty name. in /var/www/localhost/htdocs/vendor/symfony/console/Command/Command.php:453
Stack trace:
#0 /var/www/localhost/htdocs/vendor/robmorgan/phinx/src/Phinx/Console/Command/Init.php(47): Symfony\Component\Console\Command\Command->getName()
#1 /var/www/localhost/htdocs/vendor/symfony/console/Command/Command.php(77): Phinx\Console\Command\Init->configure()
#2 /var/www/localhost/htdocs/vendor/robmorgan/phinx/src/Phinx/Console/PhinxApplication.php(60): Symfony\Component\Console\Command\Command->__construct()
#3 /var/www/localhost/htdocs/vendor/nartex/nx-phinx/bin/nx-phinx(12): Phinx\Console\PhinxApplication->__construct()
#4 {main}
thrown in /var/www/localhost/htdocs/vendor/symfony/console/Command/Command.php on line 453
With composer json:
{
"require": {
"robmorgan/phinx": "~0.10"
},
}
composer info | grep console
symfony/console v4.3.4 Symfony Console C...
When symfony/console is downgraded to v4.3.3:
{
"require": {
"robmorgan/phinx": "~0.10",
"symfony/console": "=4.3.3"
},
}
It works. I guess symfony/console v4.3.4 introduced a breaking change that affects Phinx.
We released a fix today.
see: https://github.com/cakephp/phinx/pull/1596
|
2025-04-01T06:38:07.871818
| 2019-07-17T19:21:13
|
469400083
|
{
"authors": [
"MasterOdin",
"dereuromark",
"lorenzo",
"raph1mm"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4438",
"repo": "cakephp/phinx",
"url": "https://github.com/cakephp/phinx/pull/1575"
}
|
gharchive/pull-request
|
Cache names of created tables for exists check
Fixes #1569: because running under dry-run means the table is never actually created, subsequent checks for the table's existence would fail, and an extra invalid CREATE TABLE statement would end up being inserted into the dry-run log.
I'm not 100% satisfied with how this looks, but unsure of how might improve things and reduce the amount of copy/paste of things. It'll also still fail in the case where someone would directly call execute() with a create/drop/rename query.
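The idea, stripped of the Phinx/PHP specifics, is just a cache of names created during the dry run that the existence check consults before falling back to the real database; a rough Python sketch:

class DryRunAdapter:
    def __init__(self, real_adapter, log):
        self.real = real_adapter
        self.log = log
        self.created = set()   # tables "created" during the dry run

    def create_table(self, name, columns):
        self.created.add(name)                    # remember it, nothing is executed
        self.log.append(f"CREATE TABLE {name} (...);")

    def has_table(self, name):
        # consult the cache first, then the real database
        return name in self.created or self.real.has_table(name)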
I think we can't fix all cases, but this sure looks already like an improvement for most standard cases :+1:
Yeah, unless you put in a full SQL parser, the real complex cases around direct execute will never be fully captured. This will also not capture the case of doing, say, an insert and save, then a select over the table, and then using that to create additional queries. That case (assuming internal functions are used) could be captured by just caching the various inserts/updates, but that's probably a bridge to cross if people actually report it. Can also probably just put a note in the docs about dry-run not being able to fully generate all SQL for complex cases (e.g. using execute, or the above example) due to it not being hooked up to a real DB.
Thanks, this makes sense to me
Can also probably just put a note in the docs about dry-run not being able to fully generate all SQL for complex cases (e.g. using execute, or the above example) due to it not being hooked up to a real DB.
Good idea.
|
2025-04-01T06:38:07.905489
| 2017-03-23T13:19:56
|
216424762
|
{
"authors": [
"angelosarto",
"benjie",
"calebmer",
"yleigh"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4439",
"repo": "calebmer/postgraphql",
"url": "https://github.com/calebmer/postgraphql/issues/404"
}
|
gharchive/issue
|
JWT Token Verification using RS256
I was having some problems getting token verification working if the token was signed using RS256 instead of HS256.
In RS256 you need to pass in the public key in PEM format. My issues was that the PEM format is strict both in terms of encoding (base64) AND line breaking.
Download your pem file; it should look like this
-----BEGIN CERTIFICATE-----
MIIC+DC...
...j+NjK0Bjo=
-----END CERTIFICATE-----
Your PEM file needs to have a line break after -----BEGIN CERTIFICATE----- and then a line break every 64 characters. A quick way to do this on *nix/mac is: fold -64 filename.pem > filename-wrapped.pem
Since it's pretty hard to pass things with line breaks, you should set an environment variable from it like this (this folds it automatically; if it's already wrapped it will do nothing): export GRAPHQL_SECRET="$(cat filename.pem | fold -64)"
Then start up your graphql server passing --secret=${GRAPHQL_SECRET}
You still need to figure out how to pass audience until this issue closes
@angelosarto Nice auto-close to give documentation for others who may struggle in future 👍
It’s really awesome to know RS256 works without any other modifications! Does signing work fine?
Thanks @natejenkins; I've modified the issue description with your fixes 👍
I was having some problems getting token verification working if the token was signed using RS256 instead of HS256.
In RS256 you need to pass in the public key in PEM format. My issues was that the PEM format is strict both in terms of encoding (base64) AND line breaking.
Download your pem file; it should look like this
-----BEGIN CERTIFICATE-----
MIIC+DC...
...j+NjK0Bjo=
-----END CERTIFICATE-----
Your PEM file needs to have a line break after -----BEGIN CERTIFICATE----- and then a line break every 64 characters. A quick way to do this on *nix/mac is: fold -64 filename.pem > filename-wrapped.pem
Since it's pretty hard to pass things with line breaks, you should set an environment variable from it like this (this folds it automatically; if it's already wrapped it will do nothing):
export GRAPHQL_SECRET="$(cat filename.pem | fold -64)"
Then start up your graphql server passing --jwt-secret="${GRAPHQL_SECRET}"
You still need to figure out how to pass audience until this issue closes
Could you please explain how RS256 works in this case?
I'm using PostGraphile 4.6.0, with options set as:
createServer(
  postgraphile(env.DATABASE_URL, "public", {
    jwtSecret: "./publickey.pem",
    jwtVerifyAlgorithms: ["RS256"],
    jwtPgTypeIdentifier: "public.jwt_token",
    graphiql: true,
    enhanceGraphiql: true,
    graphqlRoute: env.POSTGRAPHILE_ROUTE + "/graphql",
    graphiqlRoute: env.POSTGRAPHILE_ROUTE + "/graphiql",
  })
).listen(port, () => {
  console.log("Listening at port:" + port);
});
But when I send the Authorization header in Postman with an RS256-signed JWT token, I always get 'errors: invalid algorithm'.
In fact, the JWT token I get back from this PostGraphile server is always HS256-signed.
Appears that RS256 doesn't take effect.
Could you show your working example of PostGraphile JWT verification with RS256?
It is treating the literal string you have passed it as the secret. Same as it would if you passed it "MY_SECRET_HERE". You need to read the file and send through the file contents.
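Purely as an illustration of the same point in another stack (PostGraphile itself is Node-based): RS256 verification takes the PEM contents, not the file path, as the key. For example, with Python's PyJWT:

import jwt  # PyJWT

with open("publickey.pem") as f:
    public_key = f.read()          # the key is the file *contents*

token = "<JWT from the Authorization header>"   # placeholder
claims = jwt.decode(
    token,
    public_key,
    algorithms=["RS256"],
    audience="my-audience",        # hypothetical audience value
)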
|
2025-04-01T06:38:07.908528
| 2017-11-03T17:16:46
|
271053887
|
{
"authors": [
"Tafkas",
"rsangole"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4440",
"repo": "calintat/minimal",
"url": "https://github.com/calintat/minimal/issues/36"
}
|
gharchive/issue
|
Did something break with netlify?
Hello,
This isn't an issue with your repo per se. I had a working website for the past many months (forked your code and modified it considerably for my own purposes).
But today, it looks really weird on all my browsers:
http://rsangole.netlify.com
Did netlify break? Why has this happened suddenly?
Rahul
The https://bootswatch.com/solar/bootstrap.css gives a 404 error.
Thanks @Tafkas. They have a version 3 and version 4 now, which broke the link.
|
2025-04-01T06:38:07.910622
| 2017-11-01T00:08:20
|
270156208
|
{
"authors": [
"oliviertassinari",
"t49tran"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4441",
"repo": "callemall/material-ui",
"url": "https://github.com/callemall/material-ui/pull/8930"
}
|
gharchive/pull-request
|
[Table Pagination] export LabelDisplayedRowArgs interface and improve label props types
Export LabelDisplayedRowsArgs from TablePagination.d.ts.
Change labelDisplayedRows return type to JSX.Element | string so it is aligned with the implementation.
Change labelRowsPerPage return type to JSX.Element | string.
Hi @oliviertassinari, do you have any idea how to address the differences in the Argos test? I have no clue, and the regression tests run successfully on my environment.
@t49tran thanks
|
2025-04-01T06:38:07.943949
| 2015-07-07T00:24:10
|
93406900
|
{
"authors": [
"IBPX",
"notolaf"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4443",
"repo": "cameron/squirt",
"url": "https://github.com/cameron/squirt/issues/174"
}
|
gharchive/issue
|
Damn!
This could be a powerful classroom tool for reading fluency - in fact, I found it on an educational website - but I would be very hesitant to use it in my third grade classroom when the word "damn" pops up every time they finish reading. Is there a way to get that changed?
Cathi Palmer
I agree, this should be changed.
@notolaf, this project is no longer maintained (thanks to legal threats from Spritz), so I don't think this will be changed.
If you're looking for something to use in class, jetzt works very well, although it's less polished.
Good luck.
|
2025-04-01T06:38:08.011655
| 2019-07-01T15:00:23
|
462763881
|
{
"authors": [
"rnelson0",
"rodjek"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4444",
"repo": "camptocamp/facterdb",
"url": "https://github.com/camptocamp/facterdb/issues/101"
}
|
gharchive/issue
|
New release
The Puppet Development Kit includes pinned gems and that includes facterdb 0.6.0. That is unfortunately a bit behind the times and leads to things like RHEL 6 and 7 not including the networking facts hash. It would be nice to see a newer version they could include in the next PDK release.
@rnelson0 I'm working on closing out some open issues and updating factsets to get ready for a release :)
|
2025-04-01T06:38:08.029082
| 2023-03-01T08:18:47
|
1604492702
|
{
"authors": [
"barmac",
"markfarkas-camunda",
"nikku"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4445",
"repo": "camunda/camunda-modeler",
"url": "https://github.com/camunda/camunda-modeler/issues/3483"
}
|
gharchive/issue
|
Shortcuts override the search input in template selection modal
Describe the bug
If you open the template selector popup window, the search input field has focus, and if you press the button "p", everything works as expected. But if you press any button which is a shortcut, for example "n" or "a", it does not search but executes the command bound to that shortcut. The biggest problem is backspace. Press "p", then backspace: it will remove the Task element, but won't close the template selector popup window. If you select the template after this, it will throw an error saying: Cannot read properties of undefined (reading 'children')
Steps to reproduce
Open editor
Create a new Task
Click on template selection button
Press "p" to start filtering the templates
Press backspace to delete the letter "p"
It will remove the Task
OR
Open editor
Create a new Task
Click on template selection button
Press "n" to start filtering the templates
It won't filter the templates as expected but it will open the Create element context menu
Expected behavior
When the search input field has the focus, we should disable shortcuts.
Environment
OS: Windows 11
Camunda Modeler Version: 5.9.0-nightly.20230227
Execution Platform: Camunda Platform
Installed plug-ins: none
Additional context
Related to SUPPORT-21053
I can reproduce the issue in 5.21.0 but cannot reproduce the backspace-related error anymore.
Closed via https://github.com/camunda/camunda-modeler/issues/4195.
|
2025-04-01T06:38:08.049294
| 2022-10-13T16:45:43
|
1408126176
|
{
"authors": [
"aabouzaid",
"vctrmn"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4446",
"repo": "camunda/camunda-platform-helm",
"url": "https://github.com/camunda/camunda-platform-helm/issues/443"
}
|
gharchive/issue
|
[BUG] Connection insecure on Operate with Keycloak enabled
Describe the bug:
Operate and Keycloak are exposed via secure https endpoints.
When I authenticate to Operate in the Keycloak UI, I get a "Connection insecure" warning.
Tested via Google Chrome and Mozilla Firefox.
Environment:
Platform: IBM Cloud Kubernetes Service
Chart version: 8.0.14
Values file:
# Default values for Camunda Platform helm.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# The values file follows helm best practices https://helm.sh/docs/chart_best_practices/values/
#
# This means:
# * Variable names should begin with a lowercase letter, and words should be separated with camelcase.
# * Every defined property in values.yaml should be documented. The documentation string should begin with the name of the property that it describes, and then give at least a one-sentence description
#
# Furthermore, we try to apply the following pattern: # [VarName] [conjunction] [definition]
#
# VarName:
#
# * In the documentation the variable name is started with a big letter, similar to kubernetes resource documentation.
# * If the variable is part of a subsection/object we use a json path expression (to make it more clear where the variable belongs to).
# The root (chart name) is omitted (e.g. zeebe). This is useful for using --set in helm.
#
# Conjunction:
# * [defines] for mandatory configuration
# * [can be used] for optional configuration
# * [if true] for toggles
# * [configuration] for section/group of variables
# Global configuration for variables which can be accessed by all sub charts
global:
# Annotations can be used to define common annotations, which should be applied to all deployments
annotations: {}
# Labels can be used to define common labels, which should be applied to all deployments
labels:
app: camunda-platform
# Image configuration to be used in each sub chart
image:
# Image.tag defines the tag / version which should be used in the chart
tag: 8.0.0
# Image.pullPolicy defines the image pull policy which should be used https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
pullPolicy: IfNotPresent
# Image.pullSecrets can be used to configure image pull secrets https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
pullSecrets: []
# Ingress configuration to configure the ingress resource
ingress:
# Ingress.enabled if true, an ingress resource is deployed. Only useful if an ingress controller is available, like Ingress-NGINX.
enabled: true
# Ingress.className defines the class or configuration of ingress which should be used by the controller
className: public-iks-k8s-nginx
# Ingress.annotations defines the ingress related annotations, consumed mostly by the ingress controller
annotations:
ingress.kubernetes.io/rewrite-target: "/"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
# Ingress.host can be used to define the host of the ingress rule. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
# If not specified the rules applies to all inbound http traffic, if specified the rule applies to that host.
host: cwa.xxxxxxxxxxxxxx.com
# Ingress.tls configuration for tls on the ingress resource https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
tls:
# Ingress.tls.enabled if true, then tls is configured on the ingress resource. If enabled the Ingress.host need to be defined.
enabled: true
# Ingress.tls.secretName defines the secret name which contains the TLS private key and certificate
secretName: xxxxxxxxxxxxxx.com
# Elasticsearch configuration which is shared between the sub charts
elasticsearch:
# Elasticsearch.disableExporter if true, disables the elastic exporter in zeebe
disableExporter: false
# Elasticsearch.url can be used to configure the URL to access elasticsearch, if not set services fallback to host and port configuration
url:
# Elasticsearch.host defines the elasticsearch host, ideally the service name inside the namespace
host: "elasticsearch-master"
# Elasticsearch.port defines the elasticsearch port, under which elasticsearch can be accessed
port: 9200
# Elasticsearch.clusterName defines the cluster name which is used by Elasticsearch
clusterName: "elasticsearch"
# Elasticsearch.prefix defines the prefix which is used by the Zeebe Elasticsearch Exporter to create Elasticsearch indexes
prefix: zeebe-record
# ZeebeClusterName defines the cluster name for the Zeebe cluster. All Zeebe pods get this prefix in their name and the brokers uses that as cluster name.
zeebeClusterName: "{{ .Release.Name }}-zeebe"
# ZeebePort defines the port which is used for the Zeebe Gateway. This port accepts the GRPC Client messages and forwards them to the Zeebe Brokers.
zeebePort: 26500
# Identity configuration to configure identity specifics on global level, which can be accessed by other sub-charts
identity:
keycloak:
# Identity.keycloak.fullname can be used to change the referenced Keycloak service name inside the sub-charts, like operate, optimize, etc.
# Subcharts can't access values from other sub-charts or the parent, global only.
# This is useful if the identity.keycloak.fullnameOverride is set, and specifies a different name for the Keycloak service.
fullname: ""
# Identity.auth configuration, to configure Identity authentication setup
auth:
# Identity.auth.enabled if true, enables the Identity authentication otherwise basic-auth will be used on all services.
enabled: true
# Identity.auth.publicIssuerUrl defines the token issuer (Keycloak) URL, where the services can request JWT tokens.
# Should be public accessible, per default we assume a port-forward to Keycloak (18080) is created before login.
# Can be overwritten if, ingress is in use and an external IP is available.
publicIssuerUrl: "https://keycloak.xxxxxxxxxxxxxx.com/auth/realms/camunda-platform"
# Identity.auth.operate configuration to configure Operate authentication specifics on global level, which can be accessed by other sub-charts
operate:
# Identity.auth.operate.existingSecret can be used to reference an existing secret. If not set, a random secret is generated.
# The existing secret should contain an `operate-secret` field, which will be used as secret for the Identity-Operate communication.
existingSecret:
# Identity.auth.operate.redirectUrl defines the redirect URL, which is used by Keycloak to access Operate.
# Should be public accessible, the default value works if port-forward to Operate is created to 8081.
# Can be overwritten if, ingress is in use and an external IP is available.
redirectUrl: "https://operate.xxxxxxxxxxxxxx.com"
# Identity.auth.tasklist configuration to configure Tasklist authentication specifics on global level, which can be accessed by other sub-charts
tasklist:
# Identity.auth.tasklist.existingSecret can be used to use an own existing secret. If not set a random secret is generated.
# The existing secret should contain an `tasklist-secret` field, which will be used as secret for the Identity-Tasklist communication.
existingSecret:
# Identity.auth.tasklist.redirectUrl defines the root (or redirect) URL, which is used by Keycloak to access Tasklist.
# Should be public accessible, the default value works if port-forward to Tasklist is created to 8082.
# Can be overwritten if, ingress is in use and an external IP is available.
redirectUrl: "https://tasklist.xxxxxxxxxxxxxx.com"
# Identity.auth.optimize configuration to configure Optimize authentication specifics on global level, which can be accessed by other sub-charts
optimize:
# Identity.auth.optimize.existingSecret can be used to use an own existing secret. If not set a random secret is generated.
# The existing secret should contain an `optimize-secret` field, which will be used as secret for the Identity-Optimize communication.
existingSecret:
# Identity.auth.optimize.redirectUrl defines the root (or redirect) URL, which is used by Keycloak to access Optimize.
# Should be public accessible, the default value works if port-forward to Optimize is created to 8082.
# Can be overwritten if, ingress is in use and an external IP is available.
redirectUrl: "https://optimize.xxxxxxxxxxxxxx.com"
# Zeebe configuration for the Zeebe sub chart. Contains configuration for the Zeebe broker and related resources.
zeebe:
# Enabled if true, all zeebe related resources are deployed via the helm release
enabled: true
# Image configuration to configure the zeebe image specifics
image:
# Image.repository defines which image repository to use
repository: camunda/zeebe
# Image.tag can be set to overwrite the global tag, which should be used in that chart
tag:
# Image.pullSecrets can be used to configure image pull secrets https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
pullSecrets: []
# ClusterSize defines the amount of brokers (=replicas), which are deployed via helm
clusterSize: "1"
# PartitionCount defines how many zeebe partitions are set up in the cluster
partitionCount: "1"
# ReplicationFactor defines how each partition is replicated, the value defines the number of nodes
replicationFactor: "1"
# Env can be used to set extra environment variables in each zeebe broker container
env:
- name: ZEEBE_BROKER_DATA_SNAPSHOTPERIOD
value: "5m"
- name: ZEEBE_BROKER_DATA_DISKUSAGECOMMANDWATERMARK
value: "0.85"
- name: ZEEBE_BROKER_DATA_DISKUSAGEREPLICATIONWATERMARK
value: "0.87"
# ConfigMap configuration which will be applied to the mounted config map.
configMap:
# ConfigMap.defaultMode can be used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
# See https://github.com/kubernetes/api/blob/master/core/v1/types.go#L1615-L1623
defaultMode: 0754
# Command can be used to override the default command provided by the container image. See https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
command: []
# LogLevel defines the log level which is used by the zeebe brokers
logLevel: info
# Log4j2 can be used to overwrite the log4j2 configuration of the zeebe brokers
log4j2: ''
# JavaOpts can be used to set java options for the zeebe brokers
javaOpts: >-
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/usr/local/zeebe/data
-XX:ErrorFile=/usr/local/zeebe/data/zeebe_error%p.log
-XX:+ExitOnOutOfMemoryError
# Service configuration for the broker service
service:
# Service.type defines the type of the service https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
type: ClusterIP
# Service.httpPort defines the port of the http endpoint, where for example metrics are provided
httpPort: 9600
# Service.httpName defines the name of the http endpoint, where for example metrics are provided
httpName: "http"
# Service.commandPort defines the port of the command api endpoint, where the broker commands are sent to
commandPort: 26501
# Service.commandName defines the name of the command api endpoint, where the broker commands are sent to
commandName: "command"
# Service.internalPort defines the port of the internal api endpoint, which is used for internal communication
internalPort: 26502
# Service.internalName defines the name of the internal api endpoint, which is used for internal communication
internalName: "internal"
# extraPorts can be used to expose any other ports which are required. Can be useful for exporters
extraPorts: []
# - name: hazelcast
# protocol: TCP
# port: 5701
# targetPort: 5701
# ServiceAccount configuration for the service account where the broker pods are assigned to
serviceAccount:
# ServiceAccount.enabled if true, enables the broker service account
enabled: true
# ServiceAccount.name can be used to set the name of the broker service account
name: ""
# ServiceAccount.annotations can be used to set the annotations of the broker service account
annotations: {}
# CpuThreadCount defines how many threads can be used for the processing on each broker pod
cpuThreadCount: "3"
# IoThreadCount defines how many threads can be used for the exporting on each broker pod
ioThreadCount: "3"
# Resources configuration to set request and limit configuration for the container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits
resources:
requests:
cpu: 800m
memory: 1200Mi
limits:
cpu: 960m
memory: 1920Mi
# PersistenceType defines the type of persistence which is used by Zeebe. Possible values are: disk, local and memory.
# disk - means a persistence volume claim is configured and used
# local - means the data is stored into the container, no volumeMount nor volume nor claim is configured
# memory - means zeebe uses a tmpfs for the data persistence, be aware that this takes the limits into account
persistenceType: disk
# PvcSize defines the persistent volume claim size, which is used by each broker pod https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
pvcSize: "16Gi"
# PvcAccessModes can be used to configure the persistent volume claim access mode https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
pvcAccessModes: ["ReadWriteOnce"]
# PvcStorageClassName can be used to set the storage class name which should be used by the persistent volume claim. It is recommended to use a storage class, which is backed with a SSD.
pvcStorageClassName: ''
# ExtraVolumes can be used to define extra volumes for the broker pods, useful for additional exporters
extraVolumes: []
# ExtraVolumeMounts can be used to mount extra volumes for the broker pods, useful for additional exporters
extraVolumeMounts: []
# ExtraInitContainers can be used to set up extra init containers for the broker pods, useful for additional exporters
extraInitContainers: []
# PodAnnotations can be used to define extra broker pod annotations
podAnnotations: {}
# PodLabels can be used to define extra broker pod labels
podLabels: {}
# PodDisruptionBudget configuration to configure a pod disruption budget for the broker pods https://kubernetes.io/docs/tasks/run-application/configure-pdb/
podDisruptionBudget:
# PodDisruptionBudget.enabled if true a pod disruption budget is defined for the brokers
enabled: false
# PodDisruptionBudget.minAvailable can be used to set how many pods should be available. Be aware that if minAvailable is set, maxUnavailable will not be set (they are mutually exclusive).
minAvailable:
# podDisruptionBudget.maxUnavailable can be used to set how many pods should be at max. unavailable
maxUnavailable: 1
# PodSecurityContext defines the security options the Zeebe broker pod should be run with
podSecurityContext: {}
# ContainerSecurityContext defines the security options the Zeebe broker container should be run with
containerSecurityContext: {}
# NodeSelector can be used to define on which nodes the broker pods should run
nodeSelector: {}
# Tolerations can be used to define pod toleration's https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
tolerations: []
# Affinity can be used to define pod affinity or anti-affinity https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
# The default defined PodAntiAffinity allows constraining on which nodes the Zeebe pods are scheduled on https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
# It uses a hard requirement for scheduling and works based on the Zeebe pod labels
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app.kubernetes.io/component"
operator: In
values:
- zeebe-broker
topologyKey: "kubernetes.io/hostname"
# PriorityClassName can be used to define the broker pods priority https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
priorityClassName: ""
# ReadinessProbe configuration for the zeebe broker readiness probe
readinessProbe:
# ReadinessProbe.probePath defines the readiness probe route used on the zeebe brokers
probePath: /ready
# ReadinessProbe.periodSeconds defines how often the probe is executed
periodSeconds: 10
# ReadinessProbe.successThreshold defines how often it needs to be true to be marked as ready, after failure
successThreshold: 1
# ReadinessProbe.timeoutSeconds defines the seconds after the probe times out
timeoutSeconds: 1
# Gateway configuration to define properties related to the standalone gateway
zeebe-gateway:
# Replicas defines how many standalone gateways are deployed
replicas: 1
# Image configuration to configure the zeebe-gateway image specifics
image:
# Image.repository defines which image repository to use
repository: camunda/zeebe
# Image.tag can be set to overwrite the global tag, which should be used in that chart
tag:
# Image.pullSecrets can be used to configure image pull secrets https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
pullSecrets: []
# PodAnnotations can be used to define extra gateway pod annotations
podAnnotations: {}
# PodLabels can be used to define extra gateway pod labels
podLabels: {}
# LogLevel defines the log level which is used by the gateway
logLevel: info
# Log4j2 can be used to overwrite the log4j2 configuration of the gateway
log4j2: ''
# JavaOpts can be used to set java options for the zeebe gateways
javaOpts: >-
-XX:+ExitOnOutOfMemoryError
# Env can be used to set extra environment variables in each gateway container
env: []
# ConfigMap configuration which will be applied to the mounted config map.
configMap:
# ConfigMap.defaultMode can be used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
# See https://github.com/kubernetes/api/blob/master/core/v1/types.go#L1615-L1623
defaultMode: 0744
# Command can be used to override the default command provided by the container image. See https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
command: []
# PodDisruptionBudget configuration to configure a pod disruption budget for the gateway pods https://kubernetes.io/docs/tasks/run-application/configure-pdb/
podDisruptionBudget:
# PodDisruptionBudget.enabled if true a pod disruption budget is defined for the gateways
enabled: false
# PodDisruptionBudget.minAvailable can be used to set how many pods should be available. Be aware that if minAvailable is set, maxUnavailable will not be set (they are mutually exclusive).
minAvailable: 1
# PodDisruptionBudget.maxUnavailable can be used to set how many pods should be at max. unavailable
maxUnavailable:
# Resources configuration to set request and limit configuration for the container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits
resources:
requests:
cpu: 400m
memory: 450Mi
limits:
cpu: 400m
memory: 450Mi
# PriorityClassName can be used to define the gateway pods priority https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
priorityClassName: ""
# PodSecurityContext defines the security options the gateway pod should be run with
podSecurityContext: {}
# ContainerSecurityContext defines the security options the gateway container should be run with
containerSecurityContext: {}
# NodeSelector can be used to define on which nodes the gateway pods should run
nodeSelector: {}
# Tolerations can be used to define pod toleration's https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
tolerations: []
# Affinity can be used to define pod affinity or anti-affinity https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
# The default defined PodAntiAffinity allows constraining on which nodes the Zeebe gateway pods are scheduled on https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
# It uses a hard requirement for scheduling and works based on the Zeebe gateway pod labels
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app.kubernetes.io/component"
operator: In
values:
- zeebe-gateway
topologyKey: "kubernetes.io/hostname"
# ExtraVolumeMounts can be used to mount extra volumes for the gateway pods, useful for enabling tls between gateway and broker
extraVolumeMounts: []
# ExtraVolumes can be used to define extra volumes for the gateway pods, useful for enabling tls between gateway and broker
extraVolumes: []
# ExtraInitContainers can be used to set up extra init containers for the gateway pods, useful for adding interceptors
extraInitContainers: []
# Service configuration for the gateway service
service:
# Service.type defines the type of the service https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
type: ClusterIP
# Service.loadBalancerIP defines public ip of the load balancer if the type is LoadBalancer
loadBalancerIP: ""
# Service.loadBalancerSourceRanges defines list of allowed source ip address ranges if the type is LoadBalancer
loadBalancerSourceRanges: []
# Service.httpPort defines the port of the http endpoint, where for example metrics are provided
httpPort: 9600
# Service.httpName defines the name of the http endpoint, where for example metrics are provided
httpName: "http"
# Service.gatewayPort defines the port of the gateway endpoint, where client commands (grpc) are sent to
gatewayPort: 26500
# Service.gatewayName defines the name of the gateway endpoint, where client commands (grpc) are sent to
gatewayName: "gateway"
# Service.internalPort defines the port of the internal api endpoint, which is used for internal communication
internalPort: 26502
# Service.internalName defines the name of the internal api endpoint, which is used for internal communication
internalName: "internal"
# Service.annotations can be used to define annotations, which will be applied to the zeebe-gateway service
annotations: {}
# ServiceAccount configuration for the service account where the gateway pods are assigned to
serviceAccount:
# ServiceAccount.enabled if true, enables the gateway service account
enabled: true
# ServiceAccount.name can be used to set the name of the gateway service account
name: ""
# ServiceAccount.annotations can be used to set the annotations of the gateway service account
annotations: {}
# Ingress configuration to configure the ingress resource
ingress:
# Ingress.enabled if true, an ingress resource is deployed with the Zeebe gateway deployment. Only useful if an ingress controller is available, like nginx.
enabled: true
# Ingress.className defines the class or configuration of ingress which should be used by the controller
className: public-iks-k8s-nginx
# Ingress.annotations defines the ingress related annotations, consumed mostly by the ingress controller
annotations:
ingress.kubernetes.io/rewrite-target: "/"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
# Ingress.path defines the path which is associated with the gateway service and port https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
path: /
# Ingress.host can be used to define the host of the ingress rule. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
# If not specified, the rule applies to all inbound http traffic; if specified, the rule applies only to that host.
host: zeebe.xxxxxxxxxxxxxx.com
# Ingress.tls configuration for tls on the ingress resource https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
tls:
# Ingress.tls.enabled if true, then tls is configured on the ingress resource. If enabled, the Ingress.host needs to be defined.
enabled: true
# Ingress.tls.secretName defines the secret name which contains the TLS private key and certificate
secretName: xxxxxxxxxxxxxx.com
# Operate configuration for the Operate sub chart.
operate:
# Enabled if true, the Operate deployment and its related resources are deployed via a helm release
enabled: true
# Image configuration to configure the Operate image specifics
image:
# Image.repository defines which image repository to use
repository: camunda/operate
# Image.tag can be set to overwrite the global tag, which should be used in that chart
tag:
# Image.pullSecrets can be used to configure image pull secrets https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
pullSecrets: []
# ContextPath can be used to make Operate web application works on a custom sub-path. This is mainly used to run Camunda Platform web applications under a single domain.
# contextPath: "/operate"
# PodAnnotations can be used to define extra Operate pod annotations
podAnnotations: {}
# PodLabels can be used to define extra Operate pod labels
podLabels: {}
# Logging configuration for the Operate logging. This template will be directly included in the Operate configuration yaml file
logging:
level:
ROOT: INFO
io.camunda.operate: DEBUG
# Service configuration to configure the Operate service.
service:
# Service.type defines the type of the service https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
type: ClusterIP
# Service.port defines the port of the service, where the Operate web application will be available
port: 80
# Service.annotations can be used to define annotations, which will be applied to the Operate service
annotations: {}
# Resources configuration to set request and limit configuration for the container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits
resources:
requests:
cpu: 600m
memory: 400Mi
limits:
cpu: 2000m
memory: 2Gi
# Env can be used to set extra environment variables in each Operate container
env: []
# ConfigMap configuration which will be applied to the mounted config map.
configMap:
# ConfigMap.defaultMode can be used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
# See https://github.com/kubernetes/api/blob/master/core/v1/types.go#L1615-L1623
defaultMode: 0744
# Command can be used to override the default command provided by the container image. See https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
command: []
# ExtraVolumes can be used to define extra volumes for the Operate pods, useful for tls and self-signed certificates
extraVolumes: []
# ExtraVolumeMounts can be used to mount extra volumes for the Operate pods, useful for tls and self-signed certificates
extraVolumeMounts: []
# ServiceAccount configuration for the service account where the Operate pods are assigned to
serviceAccount:
# ServiceAccount.enabled if true, enables the Operate service account
enabled: true
# ServiceAccount.name can be used to set the name of the Operate service account
name: ""
# ServiceAccount.annotations can be used to set the annotations of the Operate service account
annotations: {}
# Ingress configuration to configure the ingress resource
ingress:
# Ingress.enabled if true, an ingress resource is deployed with the Operate deployment. Only useful if an ingress controller is available, like nginx.
enabled: true
# Ingress.className defines the class or configuration of ingress which should be used by the controller
className: public-iks-k8s-nginx
# Ingress.annotations defines the ingress related annotations, consumed mostly by the ingress controller
annotations:
ingress.kubernetes.io/rewrite-target: "/"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
# Ingress.path defines the path which is associated with the Operate service and port https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
path: /
# Ingress.host can be used to define the host of the ingress rule. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
# If not specified, the rule applies to all inbound http traffic; if specified, the rule applies only to that host.
host: operate.xxxxxxxxxxxxxx.com
# Ingress.tls configuration for tls on the ingress resource https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
tls:
# Ingress.tls.enabled if true, then tls is configured on the ingress resource. If enabled, the Ingress.host needs to be defined.
enabled: true
# Ingress.tls.secretName defines the secret name which contains the TLS private key and certificate
secretName: xxxxxxxxxxxxxx.com
# PodSecurityContext defines the security options the Operate pod should be run with
podSecurityContext: {}
# ContainerSecurityContext defines the security options the Operate container should be run with
containerSecurityContext: {}
# NodeSelector can be used to define on which nodes the Operate pods should run
nodeSelector: {}
# Tolerations can be used to define pod tolerations https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
tolerations: []
# Affinity can be used to define pod affinity or anti-affinity https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
# Tasklist configuration for the tasklist sub chart.
tasklist:
# Enabled if true, the tasklist deployment and its related resources are deployed via a helm release
enabled: true
# Image configuration to configure the tasklist image specifics
image:
# Image.repository defines which image repository to use
repository: camunda/tasklist
# Image.tag can be set to overwrite the global tag, which should be used in that chart
tag:
# Image.pullSecrets can be used to configure image pull secrets https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
pullSecrets: []
# ContextPath can be used to make Tasklist web application works on a custom sub-path. This is mainly used to run Camunda Platform web applications under a single domain.
# contextPath: "/tasklist"
# Env can be used to set extra environment variables on each Tasklist container
env: []
# PodAnnotations can be used to define extra Tasklist pod annotations
podAnnotations: {}
# PodLabels can be used to define extra tasklist pod labels
podLabels: {}
# ConfigMap configuration which will be applied to the mounted config map.
configMap:
# ConfigMap.defaultMode can be used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
# See https://github.com/kubernetes/api/blob/master/core/v1/types.go#L1615-L1623
defaultMode: 0744
# Command can be used to override the default command provided by the container image. See https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
command: []
# Service configuration to configure the tasklist service.
service:
# Service.type defines the type of the service https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
type: ClusterIP
# Service.port defines the port of the service, where the tasklist web application will be available
port: 80
# GraphqlPlaygroundEnabled if true, enables the graphql playground
graphqlPlaygroundEnabled: ""
# GraphqlPlaygroundRequestCredentials can be set to include the credentials in each request; should be set to "include" if the graphql playground is enabled
graphqlPlaygroundRequestCredentials: ""
# ExtraVolumes can be used to define extra volumes for the Tasklist pods, useful for tls and self-signed certificates
extraVolumes: []
# ExtraVolumeMounts can be used to mount extra volumes for the Tasklist pods, useful for tls and self-signed certificates
extraVolumeMounts: []
# PodSecurityContext defines the security options the Tasklist pod should be run with
podSecurityContext: {}
# ContainerSecurityContext defines the security options the Tasklist container should be run with
containerSecurityContext: {}
# NodeSelector can be used to define on which nodes the Tasklist pods should run
nodeSelector: {}
# Tolerations can be used to define pod tolerations https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
tolerations: []
# Affinity can be used to define pod affinity or anti-affinity https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
# Resources configuration to set request and limit configuration for the container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits
resources:
requests:
cpu: 400m
memory: 1Gi
limits:
cpu: 1000m
memory: 2Gi
# Ingress configuration to configure the ingress resource
ingress:
# Ingress.enabled if true, an ingress resource is deployed with the tasklist deployment. Only useful if an ingress controller is available, like nginx.
enabled: true
# Ingress.className defines the class or configuration of ingress which should be used by the controller
className: public-iks-k8s-nginx
# Ingress.annotations defines the ingress related annotations, consumed mostly by the ingress controller
annotations:
ingress.kubernetes.io/rewrite-target: "/"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
# Ingress.path defines the path which is associated with the tasklist service and port https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
path: /
# Ingress.host can be used to define the host of the ingress rule. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
# If not specified, the rule applies to all inbound http traffic; if specified, the rule applies only to that host.
host: tasklist.xxxxxxxxxxxxxx.com
tls:
# Ingress.tls.enabled if true, then tls is configured on the ingress resource. If enabled, the Ingress.host needs to be defined.
enabled: true
# Ingress.tls.secretName defines the secret name which contains the TLS private key and certificate
secretName: xxxxxxxxxxxxxx.com
# Optimize configuration for the Optimize sub chart.
optimize:
# Enabled if true, the Optimize deployment and its related resources are deployed via a helm release
enabled: true
# Image configuration to configure the Optimize image specifics
image:
# Image.repository defines which image repository to use
repository: camunda/optimize
# Image.tag can be set to overwrite the global tag, which should be used in that chart
tag: 3.9.0-preview-2
# Image.pullSecrets can be used to configure image pull secrets https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
pullSecrets: []
# ContextPath can be used to make Optimize web application works on a custom sub-path. This is mainly used to run Camunda Platform web applications under a single domain.
# contextPath: "/optimize"
# PodAnnotations can be used to define extra Optimize pod annotations
podAnnotations: {}
# PodLabels can be used to define extra Optimize pod labels
podLabels: {}
# PartitionCount defines how many Zeebe partitions are set up in the cluster and which should be imported by Optimize
partitionCount: "1"
# Env can be used to set extra environment variables in each Optimize container
env: []
# Command can be used to override the default command provided by the container image. See https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
command: []
# ExtraVolumes can be used to define extra volumes for the Optimize pods, useful for tls and self-signed certificates
extraVolumes: []
# ExtraVolumeMounts can be used to mount extra volumes for the Optimize pods, useful for tls and self-signed certificates
extraVolumeMounts: []
# ServiceAccount configuration for the service account where the Optimize pods are assigned to
serviceAccount:
# ServiceAccount.enabled if true, enables the Optimize service account
enabled: true
# ServiceAccount.name can be used to set the name of the Optimize service account
name: ""
# ServiceAccount.annotations can be used to set the annotations of the Optimize service account
annotations: {}
# Service configuration to configure the Optimize service.
service:
# Service.type defines the type of the service https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
type: ClusterIP
# Service.port defines the port of the service, where the Optimize web application will be available
port: 80
# Service.annotations can be used to define annotations, which will be applied to the Optimize service
annotations: {}
# PodSecurityContext defines the security options the Optimize pod should be run with
podSecurityContext: {}
# ContainerSecurityContext defines the security options the Optimize container should be run with
containerSecurityContext: {}
# NodeSelector can be used to define on which nodes the Optimize pods should run
nodeSelector: {}
# Tolerations can be used to define pod tolerations https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
tolerations: []
# Affinity can be used to define pod affinity or anti-affinity https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
# Resources configuration to set request and limit configuration for the container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits
resources:
requests:
cpu: 600m
memory: 1Gi
limits:
cpu: 2000m
memory: 2Gi
# Ingress configuration to configure the ingress resource
ingress:
# Ingress.enabled if true, an ingress resource is deployed with the Optimize deployment. Only useful if an ingress controller is available, like nginx.
enabled: true
# Ingress.className defines the class or configuration of ingress which should be used by the controller
className: public-iks-k8s-nginx
# Ingress.annotations defines the ingress related annotations, consumed mostly by the ingress controller
annotations:
ingress.kubernetes.io/rewrite-target: "/"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
# Ingress.path defines the path which is associated with the optimize service and port https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
path: /
# Ingress.host can be used to define the host of the ingress rule. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
# If not specified, the rule applies to all inbound http traffic; if specified, the rule applies only to that host.
host: optimize.xxxxxxxxxxxxxx.com
# Ingress.tls configuration for tls on the ingress resource https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
tls:
# Ingress.tls.enabled if true, then tls is configured on the ingress resource. If enabled, the Ingress.host needs to be defined.
enabled: true
# Ingress.tls.secretName defines the secret name which contains the TLS private key and certificate
secretName: xxxxxxxxxxxxxx.com
# RetentionPolicy configuration to configure the elasticsearch index retention policies
retentionPolicy:
# RetentionPolicy.enabled if true, elasticsearch curator cronjob and configuration will be deployed.
enabled: false
# RetentionPolicy.schedule defines how often/when the curator should run
schedule: "0 0 * * *"
# RetentionPolicy.zeebeIndexTTL defines after how many days a zeebe index can be deleted
zeebeIndexTTL: 1
# RetentionPolicy.zeebeIndexMaxSize can be set to configure the maximum allowed zeebe index size in gigabytes.
# After reaching that size, curator will delete that corresponding index on the next run.
# To benefit from that configuration the schedule needs to be configured small enough, like every 15 minutes.
zeebeIndexMaxSize:
# RetentionPolicy.operateIndexTTL defines after how many days an operate index can be deleted
operateIndexTTL: 30
# RetentionPolicy.tasklistIndexTTL defines after how many days a tasklist index can be deleted
tasklistIndexTTL: 30
# Image configuration for the elasticsearch curator cronjob
image:
# Image.repository defines which image repository to use
repository: bitnami/elasticsearch-curator
# Image.tag defines the tag / version which should be used in the chart
tag: 5.8.4
# PrometheusServiceMonitor configuration to configure a prometheus service monitor
prometheusServiceMonitor:
# PrometheusServiceMonitor.enabled if true then a service monitor will be deployed, which allows an installed prometheus controller to scrape metrics from the deployed pods
enabled: false
# PrometheusServiceMonitor.labels can be set to configure extra labels, which will be added to the servicemonitor and can be used on the prometheus controller for selecting the servicemonitors
labels:
release: metrics
# PrometheusServiceMonitor.scrapeInterval can be set to configure the interval at which metrics should be scraped
# Should be *less* than 60s if the provided grafana dashboard is used, which can be found here https://github.com/camunda/zeebe/tree/main/monitor/grafana,
# otherwise it isn't able to show any metrics which are aggregated over 1 min.
scrapeInterval: 10s
# Identity configuration for the identity sub chart.
identity:
# Enabled if true, the identity deployment and its related resources are deployed via a helm release
#
# Note: Identity is required by Optimize. If Identity is disabled, then Optimize will be unusable.
# If you don't need Optimize, then make sure to disable both: set global.identity.auth.enabled=false AND optimize.enabled=false.
enabled: true
# FirstUser configuration to configure properties of the first Identity user, which can be used to access all
# web applications
firstUser:
# FirstUser.username defines the username of the first user, needed to log in into the web applications
username: demo
# FirstUser.password defines the password of the first user, needed to log in into the web applications
password: demo
# Image configuration to configure the identity image specifics
image:
# Image.repository defines which image repository to use
repository: camunda/identity
# Image.tag can be set to overwrite the global tag, which should be used in that chart
tag:
# Image.pullSecrets can be used to configure image pull secrets https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
pullSecrets: []
# FullURL can be used when Ingress is configured (for both multi and single domain setup).
# Note: If the `ContextPath` is configured, then value of `ContextPath` should be included in the URL too.
# fullURL: "https://camunda.example.com/identity"
# ContextPath can be used to make Identity web application works on a custom sub-path. This is mainly used to run Camunda Platform web applications under a single domain.
# contextPath: "/identity"
# PodAnnotations can be used to define extra Identity pod annotations
podAnnotations: {}
# Service configuration to configure the identity service.
service:
# Service.type defines the type of the service https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
type: ClusterIP
# Service.port defines the port of the service, where the identity web application will be available
port: 80
# Service.annotations can be used to define annotations, which will be applied to the identity service
annotations: {}
# PodSecurityContext defines the security options the Identity pod should be run with
podSecurityContext: {}
# ContainerSecurityContext defines the security options the Identity container should be run with
containerSecurityContext: {}
# NodeSelector can be used to define on which nodes the Identity pods should run
nodeSelector: {}
# Tolerations can be used to define pod tolerations https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
tolerations: []
# Affinity can be used to define pod affinity or anti-affinity https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
# Resources configuration to set request and limit configuration for the container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits
resources:
requests:
cpu: 600m
memory: 400Mi
limits:
cpu: 2000m
memory: 2Gi
# Env can be used to set extra environment variables in each identity container. See the documentation https://docs.camunda.io/docs/self-managed/identity/deployment/configuration-variables/ for more details.
env: []
# Command can be used to override the default command provided by the container image. See https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
command: []
# ExtraVolumes can be used to define extra volumes for the identity pods, useful for tls and self-signed certificates
extraVolumes: []
# ExtraVolumeMounts can be used to mount extra volumes for the identity pods, useful for tls and self-signed certificates
extraVolumeMounts: []
# Keycloak configuration, for the keycloak dependency chart which is used by identity. See the chart documentation https://github.com/bitnami/charts/tree/master/bitnami/keycloak#parameters for more details.
keycloak:
# Keycloak.service configuration, to configure the service which is deployed along with keycloak
service:
# Keycloak.service.type can be set to change the service type.
# We use clusterIP for keycloak service, since per default LoadBalancer is used, which is not supported on all cloud providers.
# This might prevent scheduling of the service.
type: ClusterIP
## Keycloak authentication parameters
## ref: https://github.com/bitnami/bitnami-docker-keycloak#admin-credentials
##
## Identity uses the secrets generated by keycloak, to access keycloak.
auth:
# Keycloak.auth.adminUser defines the keycloak administrator user
adminUser: admin
# Keycloak.auth.existingSecret can be used to reuse an existing secret containing authentication information.
# See https://docs.bitnami.com/kubernetes/apps/keycloak/configuration/manage-passwords/ for more details.
#
# Example:
#
# Keycloak.auth.existingSecret:
# name: mySecret
# keyMapping:
# admin-password: myPasswordKey
# management-password: myManagementPasswordKey
# tls-keystore-password: myTlsKeystorePasswordKey
# tls-truestore-password: myTlsTruestorePasswordKey
existingSecret: ""
# ServiceAccount configuration for the service account where the identity pods are assigned to
serviceAccount:
# ServiceAccount.enabled if true, enables the identity service account
enabled: true
# ServiceAccount.name can be used to set the name of the identity service account
name: ""
# ServiceAccount.annotations can be used to set the annotations of the identity service account
annotations: {}
# Ingress configuration to configure the ingress resource
ingress:
# Ingress.enabled if true, an ingress resource is deployed with the identity deployment. Only useful if an ingress controller is available, like nginx.
enabled: true
# Ingress.className defines the class or configuration of ingress which should be used by the controller
className: public-iks-k8s-nginx
# Ingress.annotations defines the ingress related annotations, consumed mostly by the ingress controller
annotations:
ingress.kubernetes.io/rewrite-target: "/"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
# Ingress.path defines the path which is associated with the identity service and port https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
path: /
# Ingress.host can be used to define the host of the ingress rule. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
# If not specified, the rule applies to all inbound http traffic; if specified, the rule applies only to that host.
host: identity.xxxxxxxxxxxxxx.com
# Ingress.tls configuration for tls on the ingress resource https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
tls:
# Ingress.tls.enabled if true, then tls is configured on the ingress resource. If enabled, the Ingress.host needs to be defined.
enabled: true
# Ingress.tls.secretName defines the secret name which contains the TLS private key and certificate
secretName: xxxxxxxxxxxxxx.com
elasticsearch:
enabled: true
extraEnvs:
- name: "xpack.security.enabled"
value: "false"
replicas: 1
persistence:
labels:
enabled: true
volumeClaimTemplate:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 16Gi
esJavaOpts: "-Xmx1g -Xms1g"
resources:
requests:
cpu: 1
memory: 1Gi
limits:
cpu: 2
memory: 2Gi
Hi @vctrmn
How did you set up the TLS in the ingress?
Also, the values file doesn't show how the Ingress is set up for Keycloak.
It should be under the identity key like this:
identity:
[...]
keycloak:
ingress:
enabled: true
ingressClassName: nginx
hostname: "keycloak.camunda.example.com"
extraEnvVars:
- name: KEYCLOAK_PROXY_ADDRESS_FORWARDING
value: "true"
- name: KEYCLOAK_FRONTEND_URL
value: "https://keycloak.camunda.example.com"
For more details on the setup, please take a look at the Ingress setup guide.
Hi @aabouzaid,
Thank you for your help! Your configuration effectively fixes the issue.
Would it be possible to add this configuration (at least as a comment) to the default values.yaml?
https://github.com/camunda/camunda-platform-helm/blob/main/charts/camunda-platform/values.yaml
Also, would it be possible to add the TLS configuration to the Keycloak ingress?
Below is the generated ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
meta.helm.sh/release-name: demo
meta.helm.sh/release-namespace: default
creationTimestamp: "2022-10-18T08:43:40Z"
generation: 1
labels:
app.kubernetes.io/component: keycloak
app.kubernetes.io/instance: demo
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: keycloak
helm.sh/chart: keycloak-7.1.6
name: demo-keycloak
namespace: default
resourceVersion: "356570"
uid: 466259b4-065c-471f-8cf7-3598deb09845
spec:
ingressClassName: public-iks-k8s-nginx
rules:
- host: keycloak.xxxxxxxxxxxxxxxxxxx.com
http:
paths:
- backend:
service:
name: demo-keycloak
port:
name: http
path: /
pathType: ImplementationSpecific
status:
loadBalancer:
ingress:
- hostname: xxxxxxxxxxxxxxxxxxxxxx
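For reference, a minimal sketch of how TLS could be requested for the Keycloak ingress through the identity sub-chart values. This is only a sketch: the tls/extraTls option names are assumed from the bundled Bitnami Keycloak chart and should be verified against that chart's values.yaml, and the hostname and secret name below are placeholders.
identity:
  keycloak:
    ingress:
      enabled: true
      ingressClassName: public-iks-k8s-nginx
      hostname: "keycloak.camunda.example.com"   # placeholder hostname
      annotations:
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
      # assumed Bitnami Keycloak chart options for terminating TLS at the ingress
      tls: true
      extraTls:
        - hosts:
            - "keycloak.camunda.example.com"
          secretName: keycloak-tls               # placeholder secret containing the TLS key and certificate
Combined with the KEYCLOAK_PROXY_ADDRESS_FORWARDING / KEYCLOAK_FRONTEND_URL variables shown earlier, this should terminate TLS at the ingress in the same way as the other web applications in the values file above.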
|
2025-04-01T06:38:08.092323
| 2022-10-04T01:34:52
|
1395546237
|
{
"authors": [
"obriensystems"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4447",
"repo": "canada-ca/accelerators_accelerateurs-gcp",
"url": "https://github.com/canada-ca/accelerators_accelerateurs-gcp/issues/51"
}
|
gharchive/issue
|
Add GR 09: restrict public IPs for VMs and SQL instances via organization policy
IAM organization policy - restrict SQL public IPs - https://console.cloud.google.com/iam-admin/orgpolicies/sql-restrictPublicIp?organizationId=743091813895&supportedpurview=project
AM organization policy - allowd external IPs for VMs - https://console.cloud.google.com/iam-admin/orgpolicies/compute-vmExternalIpAccess?organizationId=743091813895&supportedpurview=project
https://github.com/GoogleCloudPlatform/pubsec-declarative-toolkit/issues/155
https://github.com/GoogleCloudPlatform/pbmm-on-gcp-onboarding/issues/184
|
2025-04-01T06:38:08.201544
| 2024-04-18T01:02:16
|
2249527679
|
{
"authors": [
"yanksyoon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4448",
"repo": "canonical/github-runner-image-builder-operator",
"url": "https://github.com/canonical/github-runner-image-builder-operator/pull/2"
}
|
gharchive/pull-request
|
feat: initial charm
Applicable spec: ISD-143
Overview
A charm that periodically builds an Ubuntu image.
Supports relation to GitHub runner and provides the image ID on openstack for consumption.
Rationale
To support GitHub runner with ready-to-use images.
Juju Events Changes
On install: installs packages needed to build custom images.
On config changed: builds new image according to the config and resets cron if necessary.
On build-image action: juju action to manually trigger a new image build.
On cron-trigger: custom internal hook to enable rebuild with cron jobs, propagating new images to existing relations.
On image relation joined: provides any existing latest image to the relation if available.
Module Changes
builder: responsible for building images
charm: main charm event handlers
chroot: module for handling chroot environments
cron: module for handling cron hooks
image: observer module for handling image relation event handlers
openstack_manager: module responsible for communicating with openstack
state: the charm state
utils: provides retry
Library Changes
Uses operator libs linux.
Checklist
[x] The charm style guide was applied
[x] The contributing guide was applied
[x] The changes are compliant with ISD054 - Managing Charm Complexity
[x] The documentation is generated using src-docs
[ ] The documentation for charmhub is updated.
[x] The PR is tagged with appropriate label (urgent, trivial, complex)
@jdkandersson somehow the comments are left as comments not as conversation 😭, I just have cloud config modelling left!
|
2025-04-01T06:38:08.208066
| 2024-04-15T13:58:48
|
2243766354
|
{
"authors": [
"beliaev-maksim",
"gruyaume"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4449",
"repo": "canonical/httprequest-lego-k8s-operator",
"url": "https://github.com/canonical/httprequest-lego-k8s-operator/issues/123"
}
|
gharchive/issue
|
Charm is not in Blocked state when it cannot access Let's Encrypt
Describe the bug
I have deployed the charm, but I did not provide auth configs. However, juju shows my charm as green. Also, the firewall is not yet opened, so the charm cannot talk to the site.
I think the charm could not properly reconcile events after I started relating it with nginx.
To Reproduce
10 2024-04-15 10:39:23 juju deploy nginx-ingress-integrator
11 2024-04-15 10:39:31 juju status
12 2024-04-15 10:39:34 juju status
13 2024-04-15 10:39:47 juju status
14 2024-04-15 10:39:50 juju status
15 2024-04-15 10:39:58 juju status
16 2024-04-15 10:40:00 juju status
17 2024-04-15 10:40:04 juju status
18 2024-04-15 10:40:27 juju status
19 2024-04-15 10:40:34 juju status
20 2024-04-15 10:40:39 watch -c juju status
21 2024-04-15 10:40:51 juju status
22 2024-04-15 10:41:02 juju trust nginx-ingress-integrator --scope=cluster
23 2024-04-15 10:41:04 juju status
24 2024-04-15 10:41:16 juju relate charmed-cla-checker nginx-ingress-integrator
25 2024-04-15 10:41:19 juju status
26 2024-04-15 10:41:22 juju status
27 2024-04-15 12:17:49 juju deploy httprequest-lego-k8s
28 2024-04-15 12:19:12 juju status
29 2024-04-15 12:19:30 juju relate httprequest-lego-k8s nginx-ingress-integrator
30 2024-04-15 12:19:35 juju status
31 2024-04-15 12:19:42 juju status
32 2024-04-15 12:52:04 juju config httprequest-lego-k8s
33 2024-04-15 12:52:26 juju config httprequest-lego-k8s | grep httpreq_endpoint -a 5
34 2024-04-15 12:52:34 grep --help
35 2024-04-15 12:52:40 juju config httprequest-lego-k8s | grep httpreq_endpoint -A 5
36 2024-04-15 12:53:08 juju config httprequest-lego-k8s httpreq_endpoint='https://lego-certs.canonical.com'
37 2024-04-15 12:53:31 juju config httprequest-lego-k8s<EMAIL_ADDRESS>
38 2024-04-15 12:53:44 juju config httprequest-lego-k8s | grep username -A 5
39 2024-04-15 12:54:03 juju config httprequest-lego-k8s | grep timeout -A 5
40 2024-04-15 12:54:40 juju status
41 2024-04-15 12:55:17 juju config nginx-ingress-integrator | grep name -A 5
42 2024-04-15 12:57:40 juju config nginx-ingress-integrator service-hostname=cla-checker.canonical.com
43 2024-04-15 12:58:49 juju model-config juju-http-proxy = "http://squid.internal:3128"
44 2024-04-15 12:58:49 juju model-config juju-https-proxy = "http://squid.internal:3128"
45 2024-04-15 12:58:49 juju model-config juju-no-proxy = "<IP_ADDRESS>,localhost,::1,<IP_ADDRESS>/8,<IP_ADDRESS>/12,<IP_ADDRESS>/16,.canonical.com,.launchpad.net,.internal,.jujucharms.com"
46 2024-04-15 12:59:18 juju model-config juju-no-proxy="<IP_ADDRESS>,localhost,::1,<IP_ADDRESS>/8,<IP_ADDRESS>/12,<IP_ADDRESS>/16,.canonical.com,.launchpad.net,.internal,.jujucharms.com"
47 2024-04-15 12:59:48 juju model-config juju-http-proxy="http://squid.internal:3128"
48 2024-04-15 12:59:48 juju model-config juju-https-proxy="http://squid.internal:3128"
49 2024-04-15 12:59:49 juju model-config juju-no-proxy="<IP_ADDRESS>,localhost,::1,<IP_ADDRESS>/8,<IP_ADDRESS>/12,<IP_ADDRESS>/16,.canonical.com,.launchpad.net,.internal,.jujucharms.com"
50 2024-04-15 12:59:59 juju model-config juju-http-proxy
51 2024-04-15 13:50:27 juju config httprequest-lego-k8s httpreq_endpoint
52 2024-04-15 13:50:30 juju status
Expected behavior
charm should be blocked
Logs
prod-cla-checker@enterprise-engineering-bastion-ps6:~$ juju status
Model Controller Cloud/Region Version SLA Timestamp
prod-cla-checker prodstack-is k8s-prod-general/default 3.1.7 unsupported 13:57:36Z
App Version Status Scale Charm Channel Rev Address Exposed Message
charmed-cla-checker active 1 charmed-cla-checker edge 1 <IP_ADDRESS> no
httprequest-lego-k8s active 1 httprequest-lego-k8s stable 40 <IP_ADDRESS> no
nginx-ingress-integrator 24.2.0 active 1 nginx-ingress-integrator stable 84 <IP_ADDRESS> no Ingress IP(s): <IP_ADDRESS>
Unit Workload Agent Address Ports Message
charmed-cla-checker/0* active idle <IP_ADDRESS>
httprequest-lego-k8s/0* active idle <IP_ADDRESS>
nginx-ingress-integrator/0* active idle <IP_ADDRESS> Ingress IP(s): <IP_ADDRESS>
2024-04-15T12:53:38.012Z [container-agent] 2024-04-15 12:53:38 INFO juju-log Received Certificate Creation Request for domain charmed-cla-checker
2024-04-15T12:54:08.108Z [container-agent] 2024-04-15 12:54:08 ERROR juju-log Exited with code 1. Stderr:
2024-04-15T12:54:08.112Z [container-agent] 2024-04-15 12:54:08 ERROR juju-log 2024/04/15 12:53:38 No key found for account<EMAIL_ADDRESS> Generating a P256 key.
2024-04-15T12:54:08.116Z [container-agent] 2024-04-15 12:54:08 ERROR juju-log 2024/04/15 12:53:38 Saved key to /tmp/.lego/accounts/acme-v02.api.letsencrypt.org/is-admin<EMAIL_ADDRESS>
2024-04-15T12:54:08.120Z [container-agent] 2024-04-15 12:54:08 ERROR juju-log 2024/04/15 12:54:08 Could not create client: get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get "https://acme-v02.api.letsencrypt.org/directory": dial tcp <IP_ADDRESS>:443: i/o timeout
2024-04-15T12:54:08.124Z [container-agent] 2024-04-15 12:54:08 ERROR juju-log Failed to execute lego command
Additional context
another example. Server returns 500, but charm is green
ot pass JSON Schema validation
unit-nginx-ingress-integrator-0: 16:26:19 INFO juju.worker.uniter.operation ran "certificates-relation-changed" hook (via hook dispatching script: dispatch)
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: Exited with code 1. Stderr:
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] acme: Registering account for<EMAIL_ADDRESS>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] [cla-checker.canonical.com] acme: Obtaining bundled SAN certificate given a CSR
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] [cla-checker.canonical.com] AuthURL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/338846294087
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] [cla-checker.canonical.com] acme: Could not find solver for: tls-alpn-01
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] [cla-checker.canonical.com] acme: Could not find solver for: http-01
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] [cla-checker.canonical.com] acme: use dns-01 solver
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] [cla-checker.canonical.com] acme: Preparing to solve DNS-01
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:49 [INFO] [cla-checker.canonical.com] acme: Cleaning DNS-01 challenge
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:27:15 [WARN] [cla-checker.canonical.com] acme: cleaning up failed: httpreq: unexpected status code: [status code: 500] body: <!doctype html>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: <html lang="en">
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: <head>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: <title>Server Error (500)</title>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: </head>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: <body>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: <h1>Server Error (500)</h1><p></p>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: </body>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: </html>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:27:16 [INFO] Deactivating auth: https://acme-v02.api.letsencrypt.org/acme/authz-v3/338846294087
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:27:16 Could not obtain certificates:
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: error: one or more domains had a problem:
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: [cla-checker.canonical.com] [cla-checker.canonical.com] acme: error presenting token: httpreq: unable to communicate with the API server: error: Post "https://lego-certs.canonical.com/present": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Hello @beliaev-maksim, indeed the charm status does not depend on the endpoint being available. We simply validate that it is valid. I will re-classify this issue as a "request for enhancement".
@gruyaume I am not sure how this should be treated. From my perspective, I expect that if the charm is Active and green, then everything has worked out and I got my certs.
If everything is green, I should see TLS on my app. When I do not see it, I have to go and debug what is wrong. The charm status should be able to assist me in this.
Closing as this is the same concern as discussed in #154, the effort will be tracked there.
|
2025-04-01T06:38:08.238950
| 2024-09-10T11:17:21
|
2516153223
|
{
"authors": [
"AlanGriffiths",
"CharleeSF",
"Saviq",
"robert-ancell"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4450",
"repo": "canonical/iot-example-graphical-snap",
"url": "https://github.com/canonical/iot-example-graphical-snap/issues/34"
}
|
gharchive/issue
|
Flutter snaps with 3.24.1 shows inverted colors
Hey guys,
I know this is probably a flutter bug again, just like the one reported earlier in https://github.com/canonical/iot-example-graphical-snap/issues/31, but I think it should still be listed here, as this seems to be the only place where people talk about snapping flutter applications.
Indeed, on flutter versions >3.24.1 the vector shading issue reported in issue 31 is fixed. However, the colors displayed are not correct. It seems that the blue and red channels are swapped somehow.
I see this issue when I build the snap on branch 22/Flutter-demo and when I build my own flutter application (both running on Ubuntu Core and with frame-it on a system running an X session), and someone else reported it here. The last reporter also mentioned that the issue is fixed with core24, which makes me believe that there could be an issue in how flutter and the mesa libraries interact.
Do you have any suggestions on how to move forward with this? Should I make a bug on the flutter project? Or on ubuntu-frame?
Kind regards, Charlee
Hey @CharleeSF,
there's no point in reporting it as an issue with Frame, as the problem clearly has nothing to do with Frame.
As the issue seems to be related to Flutter changes I would guess that's most likely where the problem lies. (But I don't know the Flutter linux embedder code well enough to give an informed opinion.)
The best way forward would be try reproducing the problem on a standard 22.04 system (without snaps). If that also has colour issues, that's an easier scenario to report to the Flutter project.
BW,
Alan
The normal Linux build on 22.04 on my Intel machine has no color problems; they only appear when running it as a snap.
So there is something different between Ubuntu Desktop 22.04 and the way the flutter application is packaged in core22 snaps. Is there any way you can recommend to get my standard 22.04 system closer to the snaps to see if I can reproduce?
If I do this for the 22/Flutter-demo on a Ubuntu Desktop 22.04 amd64 system
snapcraft --verbose
sudo snap install iot-example-graphical-snap_0+git.dcf41bf_amd64.snap --dangerous
frame-it iot-example-graphical-snap
Then the application is pink, see image
2. Also runs without problems with correct colors, and the application is purple. (I know I have wayland by doing echo $WAYLAND_DISPLAY and it gives wayland-0)
If instead of
frame-it iot-example-graphical-snap
You run directly on your desktop:
iot-example-graphical-snap
That doesn't work for me..
I tried:
charlee@lpks0013-ubuntu:~/xs4$ iot-example-graphical-snap
(flutterdemo:352541): Gtk-WARNING **: 16:30:13.430: cannot open display:
this crashes, then I tried
charlee@lpks0013-ubuntu:~/xs4$ sudo iot-example-graphical-snap
Setting up watches.
Watches established.
This waits forever, then I tried setting WAYLAND_DISPLAY because I remember reading somewhere that that was necessary
charlee@lpks0013-ubuntu:~/xs4$ WAYLAND_DISPLAY=99 iot-example-graphical-snap
Setting up watches.
Watches established.
That waits forever as well.
I don't know why, from what I can see the display should be available:
charlee@lpks0013-ubuntu:~/xs4$ ls -l $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY
srwxrwxr-x 1 charlee charlee 0 sep 10 15:35 /run/user/1000/wayland-0
I have the interfaces connected:
charlee@lpks0013-ubuntu:~/xs4/iot-example-graphical-snap$ snap connections iot-example-graphical-snap
Interface Plug Slot Notes
content[graphics-core22] iot-example-graphical-snap:graphics-core22 mesa-core22:graphics-core22 -
opengl iot-example-graphical-snap:opengl :opengl -
wayland iot-example-graphical-snap:wayland ubuntu-frame:wayland manual
I also tried to use the systems wayland interface, by doing a disconnect on ubuntu-frame:wayland and sudo snap connect iot-example-graphical-snap:wayland :wayland, but this gives the same Watches established results.
I am probably missing something obvious here...
Sorry, missed
charlee@lpks0013-ubuntu:~/xs4$ iot-example-graphical-snap
(flutterdemo:352541): Gtk-WARNING **: 16:30:13.430: cannot open display:
That looks like it is failing to connect to X11. Try:
$ env -u DISPLAY iot-example-graphical-snap
Whoop that worked!!
AND the application is pink (with the wrong colors), like it is on Ubuntu Core..
I took the effort to update the iot-example-graphical-snap to core24, the snapcraft.yaml is here: https://github.com/CharleeSF/iot-example-graphical-snap/blob/24/Flutter-demo/snap/snapcraft.yaml
Indeed, on core24 the application is purple again.....
So summary: running on Ubuntu Desktop as snap without frame-it has wrong colors, but doing the same thing when building the snap on core24 has the correct colors.
Whoop that worked!!
AND the application is pink (with the wrong colors), like it is on Ubuntu Core..
Now try to eliminate snap: run the unsnapped version with env -u DISPLAY
That does not work...
charlee@lpks0013-ubuntu:~/xs4/iot-example-graphical-snap/flutterdemo$ env -u DISPLAY ./build/linux/x64/debug/bundle/flutterdemo
(flutterdemo:155030): Gtk-WARNING **: 09:47:23.814: cannot open display:
That does not work...
OK, so that build of flutter doesn't (or isn't configured to) support Wayland. Unfortunately, I don't know how to change that.
Anyway, we can eliminate Frame from the stack as you see the same behaviour on desktop.
I've hacked the demo (as follows) to run the core22 version on X11:
$ snap install --dangerous --devmode iot-example-graphical-snap_0+git.dcf41bf_amd64.snap
$ snap run --shell iot-example-graphical-snap
$ env -u WAYLAND_DISPLAY DISPLAY=:0 $SNAP/bin/flutterdemo
To summarise our current findings:
protocol | Unsnapped | Snapped/core22 | Snapped/core24
X11 | :white_check_mark: | :x: | n/a
Wayland | ?? | :x: | :white_check_mark:
That strongly suggests that there is something odd about the core22 snap builds.
Does the move to core24 unblock you? Or do you need to investigate further?
We'll be updating the tutorials to core24 in the coming weeks, so this might be a moot issue.
I have yet to figure out how to run our own snap on core24, but I will be working on this today/tomorrow, and then I will know if it unblocks me :) It is not ideal, as I think we want to wait with updating our whole system to core24, but it might at least be an MVP solution.
Where would be a good channel to get support on getting my flutter snap to work on core24? Just the snapcraft forum, or here?
It is not ideal as I think we want to wait with updating our whole system to core24
For what it's worth, you can mix different base:s on Ubuntu Core, regardless of version. UC24 really just means that the kernel and root filesystem (core24) are 24.04-based, but you can run older base:d snaps, and the other way around. It does add to the memory and disk usage, though.
Where would be a good channel to get support on getting my flutter snap to work on core24? Just the snapcraft forum, or here?
https://github.com/canonical/iot-example-graphical-snap/
I'll get the Flutter example updated in a bit. Actually let me move this issue there.
Where would be a good channel to get support on getting my flutter snap to work on core24? Just the snapcraft forum, or here?
I guess that depends on the nature of your problems. It sounds like you've already got a Snap recipe for the Flutter side of the move. If your dependencies are available for 24.04, then the rest should be trivial. If not, then it depends on why the dependency isn't available and the options for resolving that.
@CharleeSF I just pushed the 24 version of the example:
https://github.com/canonical/iot-example-graphical-snap/compare/22/Flutter-demo...24/Flutter-demo
That said, whether I build 22 or 24, env -u WAYLAND_DISPLAY or env -u DISPLAY, snapped or not, it's always purple for me…?
You're probably on a GLES platform?
I honestly don't know :') I know very little about graphics.
It is happening on my laptop, but also on a NUC-based device for which we are trying to build the application.
If you have any commands you want me to run on either device (bear in mind that on the target device we have Ubuntu Core available, although I could boot a live linux USB to try some commands) to get more details, let me know :smile:
You're probably on a GLES platform?
I'm running on my laptop (so EGL, but 24.04).
A further datapoint is that running the the snap content directly shows the correct colours:
env -u DISPLAY /snap/iot-example-graphical-snap/current/bin/flutterdemo
env -u WAYLAND_DISPLAY /snap/iot-example-graphical-snap/current/bin/flutterdemo
Even trying to load the mesa-core22 userspace gives the right result...
env -u DISPLAY __EGL_EXTERNAL_PLATFORM_CONFIG_DIRS="/snap/mesa-core22/current/usr/share/egl/egl_external_platform.d" __EGL_VENDOR_LIBRARY_DIRS="/snap/mesa-core22/current/usr/share/glvnd/egl_vendor.d" LIBGL_DRIVERS_PATH="/snap/mesa-core22/current/usr/lib/x86_64-linux-gnu/dri/:/snap/mesa-core22/current/usr/lib/i386-linux-gnu/dri/" LD_LIBRARY_PATH="/var/lib/snapd/lib/gl:/var/lib/snapd/lib/gl32:/var/lib/snapd/void:/snap/iot-example-graphical-snap/x25/usr/lib:/snap/iot-example-graphical-snap/x25/usr/lib/x86_64-linux-gnu:/snap/mesa-core22/current/usr/lib/x86_64-linux-gnu:/snap/mesa-core22/current/usr/lib/x86_64-linux-gnu/vdpau:/snap/mesa-core22/current/usr/lib/i386-linux-gnu:/snap/mesa-core22/current/usr/lib/i386-linux-gnu/vdpau" /snap/iot-example-graphical-snap/current/bin/flutterdemo
Ah! the snap has GDK_GL: gles
And that is what causes the colour problem
I think that worked!!!
That's amazing. Do you have any docs I can read to understand what happened?
I will now test this on my own snap and I will let you know if it fixes it, but I am assuming it will
Yessssssss it fixed my application!
Thanks sooooo much!
Do you have any docs I can read to understand what happened?
I can't point to any docs that cover the details. GLES is a somewhat dated graphics API, but one that is simpler and better supported by some embedded devices (typically ARM with bespoke graphics stacks).
We have that line in the Snap recipe because forcing use of the better-supported API helps on a lot of embedded systems and should be harmless on more capable systems.
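For context, this is roughly what that stanza looks like; a minimal sketch assuming the app entry is named as in the example recipe (it may differ in your own snapcraft.yaml):
apps:
  iot-example-graphical-snap:
    environment:
      # Forces the GTK/GDK backend onto GLES. Removing or overriding this line
      # is what restored the correct colours on desktop GL in this thread.
      GDK_GL: gles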
Another datapoint relevant to further progress: With the base: core22 example on RPi3/Ubuntu Core 22 GDK_GL=gles is needed to render at all...
# env -u GDK_GL $SNAP/bin/flutterdemo
(flutterdemo:5092): Gdk-CRITICAL **: 16:07:37.480: gdk_gl_context_make_current: assertion 'GDK_IS_GL_CONTEXT (context)' failed
** (flutterdemo:5092): WARNING **: 16:07:37.481: Failed to initialize GLArea: Unable to create a GL context
However, the colours are still wrong.
And a final datapoint: on the same setup, with the base: core24 example we no longer need GDK_GL=gles
Looking into the RGB/BGR logic in the Flutter Linux embedder (where I suspect a mismatch caused the color switch shown above) shows some questionable code. I've made https://github.com/flutter/engine/pull/55121 to make this consistent with the Windows embedder.
I wasn't able to reproduce the issue here, but if anyone can reproduce and try with the change in the PR that would be very helpful!
Interesting @robert-ancell, I would like to try this for you, but I am not sure how to bump the flutter engine to that specific commit?
We get flutter like this:
parts:
flutter-git:
source: https://github.com/flutter/flutter.git
source-tag: 3.24.1
source-depth: 1
plugin: nil
override-build: |
mkdir -p $CRAFT_PART_INSTALL/usr/bin
mkdir -p $CRAFT_PART_INSTALL/usr/libexec
cp -r $CRAFT_PART_SRC $CRAFT_PART_INSTALL/usr/libexec/flutter
ln -s $CRAFT_PART_INSTALL/usr/libexec/flutter/bin/flutter $CRAFT_PART_INSTALL/usr/bin/flutter
ln -s $SNAPCRAFT_PART_INSTALL/usr/libexec/flutter/bin/dart $SNAPCRAFT_PART_INSTALL/usr/bin/dart
$CRAFT_PART_INSTALL/usr/bin/flutter doctor
I am assuming there is a step I can add in the override-build to get the flutter engine to follow your commit? (I am still new to flutter and learning how all the parts click together)
@CharleeSF it is largely a mystery to me too, but I can offer some guidance. The above snippet is building the app (not Flutter) and Robert's changes are in the Flutter engine. Unfortunately building the engine locally isn't something I know well enough to integrate into a snap recipe.
But you shouldn't need to snap to verify the fix: On your Ubuntu 22.04, you should be able to reproduce the colour problem by prefixing the launch with GDK_GL=gles:
$ GDK_GL=gles ./build/linux/x64/debug/bundle/flutterdemo
If you succeed with building Robert's branch, then using that will, hopefully, fix the colours.
|
2025-04-01T06:38:08.245114
| 2024-10-15T08:09:11
|
2587986565
|
{
"authors": [
"Batalex",
"deusebio"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4451",
"repo": "canonical/kafka-k8s-operator",
"url": "https://github.com/canonical/kafka-k8s-operator/pull/143"
}
|
gharchive/pull-request
|
[DPE-5612] Update shared workflows version
This PR updates the shared workflows we use in our CI.
In addition, I also upgraded the GH actions checkout and download-artifact. This will prevent a nodejs version warning in the logs, and we were at risk of our CI breaking soon.
I would have separated the project update (not the CI workflow update) into a different PR
Yes, I need to be mindful of keeping things in scope. In that case however, I did not have much choice since the charm-lib dependencies group is required for the workflows, and scenario 6 breaks with ops 2.17
Oh I see. That's fine. Again, it was more of a nitpick but I'm happy to hear that we are on the same page :D
|
2025-04-01T06:38:08.258708
| 2022-03-18T11:09:55
|
1173467088
|
{
"authors": [
"Abuelodelanada",
"balbirthomas",
"mmanciop",
"rbarry82",
"simskij"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4452",
"repo": "canonical/loki-k8s-operator",
"url": "https://github.com/canonical/loki-k8s-operator/issues/112"
}
|
gharchive/issue
|
Every time the constructor is called, an HTTP request is sent to Loki asking it for it's version
Bug Description
Every time the constructor is called, an HTTP request is sent to Loki asking it for its version. This information is then logged into the debug logs, spamming them with unneeded log lines.
To Reproduce
juju deploy loki-k8s --channel edge
look in the debug logs
Environment
charm version: latest/edge
Relevant log output
N/A
Additional context
No response
The _provide_loki() method queries the Loki server version using an HTTP API call and raises different types of exceptions depending on the success or failure of this call. This method is only being used to set a status message by the Loki provider charm depending on whether the Loki server is active and reachable or not. The docstring of the LokiPushApiProvider object also recommends the use of such a _provide_loki() method in this manner. However, neither the docstring nor the Loki charm use the exceptions raised by _provide_loki() to conditionally instantiate the LokiPushApiProvider object only if the Loki server is active and reachable. The LokiPushApiProvider object does use the Loki HTTP API in its _check_alert_rules method. This method is invoked in response to any changes in the relation with the consumer object and in response to a pebble ready event (see PR 132). Hence instantiating the LokiPushApiProvider object can lead to a false negative result in _check_alert_rules just because the Loki HTTP API is not responsive. In view of these observations it is proposed that
The _provide_loki() method be renamed to something like _is_loki_active(). Instead of raising exceptions, this method may return a boolean indicating whether the Loki server HTTP API is responsive or not.
Instantiation of the LokiPushApiProvider be guarded by a check for _is_loki_active().
There may be a few concerns here. For example, how should a scenario such as the following be handled:
The Loki charm receives a relation changed event but its workload container is not yet alive, so this event is ignored (as per PR 132).
Subsequently, on pebble ready, the Loki charm fails to see an active Loki server because of the time lag between the workload container becoming active and the time it takes the Loki HTTP API to become responsive; hence the Loki charm does not instantiate the LokiPushApiProvider, and the pending relation changed event is still not handled. In fact it will never be handled.
What is missing here is the functionality of a "Readiness Probe" and the Loki charm being informed through a suitable event when the status of such a readiness probe changes. In the absence of such functionality, one approach to solving the problem is to use the periodic update-status event to check for the active status of the Loki server (i.e. the responsiveness of its HTTP API) and update all related consumers.
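A minimal sketch of what the renamed check could look like, assuming the provider keeps hostname/port attributes and that Loki's standard /ready endpoint is queried (names here are illustrative, not the actual implementation):

import logging
from urllib.error import URLError
from urllib.request import urlopen

logger = logging.getLogger(__name__)

def _is_loki_active(self) -> bool:
    """Return True if the Loki HTTP API responds on its readiness endpoint."""
    url = "http://{}:{}/ready".format(self.hostname, self.port)
    try:
        # Loki answers 200 on /ready once it is able to serve requests.
        return urlopen(url, timeout=2.0).status == 200
    except URLError:
        logger.debug("Loki is not reachable at %s", url)
        return False

A boolean like this can then guard both the LokiPushApiProvider instantiation and the status message, without exceptions leaking into the charm.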
I think renaming the method and guarding the instantiation of LokiPushApiProvider is a good idea.
Let's keep in mind that we will need to observe the on.loki_push_api_alert_rules_changed event inside the guard, for instance:
if self._is_loki_active():
    self.loki_provider = LokiPushApiProvider(
        self,
        address=external_url.hostname or "",
        port=external_url.port or self._port,
        scheme=external_url.scheme,
        path=f"{external_url.path}/loki/api/v1/push",
    )
    self.framework.observe(self.loki_provider.on.loki_push_api_alert_rules_changed, self._loki_push_api_alert_rules_changed)
As mentioned earlier, we have to be very, very careful with this.
The actual Loki charm does not listen to relation_* events, but they will still be emitted as it is part of the relation. If there are no observers, they will simply disappear.
In the case of, say, a bundle deployment, where a relation-joined event may occur before Loki is actually ready, this means that relation-joined simply disappears into the ether, never to be seen again. Any data which is expected to be set in relation_joined will not be.
The alternative to this is quite literally the "common exit hook" which we have a number of issue about moving away from. If the charm/library looks at every relation data bag on every event to compare the state of the charm, we should probably try to hash absolutely everything.
In order to protect the validity of relation-* events in this case, I'd suggest waiting until after container operations are moved into the charm itself, so the provider can manage relation data only.
I honestly do not see why we need to have Loki active in order to create the rules in the container. For that, we need only Pebble.
I am also very conflicted about providing the URL via LokiPushApiProvider ONLY if Loki is active, as relation changes have a lag to propagate to the other side. Interestingly enough, if Loki is not active, we do NOT remove the URL from LokiPushApiProvider, which is at the very least inconsistent.
Interestingly enough, if Loki is not active, we do NOT remove the URL from LokiPushApiProvider, which is at the very least inconsistent.
How can a Provider charm let the Consumer know that its workload endpoint is temporarily not in service (due to, say, an upgrade or maintenance)? This is to prevent errors in the Consumer charm should it try to connect to the workload endpoints of the Provider charm during this process. There was previously an attempt to cater to this issue using ready() and unready() methods in the Operator Framework through a relation management object that was removed as it was not being used as intended.
Closed PR https://github.com/canonical/loki-k8s-operator/pull/135 since fix is moved to https://github.com/canonical/loki-k8s-operator/pull/151
|
2025-04-01T06:38:08.284801
| 2023-03-17T12:53:22
|
1629265588
|
{
"authors": [
"paulomach"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4453",
"repo": "canonical/mysql-k8s-operator",
"url": "https://github.com/canonical/mysql-k8s-operator/pull/175"
}
|
gharchive/pull-request
|
Feature/tls runtime set
Issue
Slow and fragile TLS setup.
DPE-1447
Solution
Port TLS hot reload with lib from merged PR#142
Should we delete the rolling ops lib since we are no longer using it?
what a miss, thanks!
Done db310a1
|
2025-04-01T06:38:08.301169
| 2021-11-18T06:54:26
|
1057003499
|
{
"authors": [
"dstathis",
"sed-i"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4454",
"repo": "canonical/prometheus-operator",
"url": "https://github.com/canonical/prometheus-operator/pull/168"
}
|
gharchive/pull-request
|
support python 3.5 in libs
Note: @balbirthomas I converted JujuTopology from a dataclass to a regular class. Tests still pass but I wanted to bring that to your attention.
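For context: dataclasses only arrived in Python 3.7, so keeping 3.5 support means writing the class out by hand. A hypothetical before/after (field names are illustrative, not the actual JujuTopology definition):

# 3.7+ only:
#
# @dataclass
# class JujuTopology:
#     model: str
#     application: str

# 3.5-compatible equivalent:
class JujuTopology:
    def __init__(self, model, application):
        self.model = model
        self.application = application

    def __repr__(self):
        return "JujuTopology(model={!r}, application={!r})".format(
            self.model, self.application
        )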
Bump LIBPATCH?
manually merging so leon can continue work
|
2025-04-01T06:38:08.306621
| 2024-04-12T13:03:47
|
2240115656
|
{
"authors": [
"ghislainbourgeois",
"gruyaume"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4455",
"repo": "canonical/sdcore-nssf-k8s-operator",
"url": "https://github.com/canonical/sdcore-nssf-k8s-operator/pull/111"
}
|
gharchive/pull-request
|
chore: Use the rustup snap to install latest stable rust
Description
This changes the way Rust and Cargo are installed for building the charm. Instead of using the deb packages available for the base, it uses the rustup snap to install the latest stable Rust and Cargo releases.
This will solve build issues for modules like pydantic that now depend on newer releases of Rust to build.
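In practice the build environment change boils down to something like this (a sketch, assuming the rustup snap's usual classic confinement; the actual charmcraft part definition may differ):

# install the latest stable toolchain instead of the distro's rustc/cargo debs
sudo snap install rustup --classic
rustup default stable
rustc --version && cargo --version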
Checklist:
[x] My code follows the style guidelines of this project
[x] I have performed a self-review of my own code
[ ] I have made corresponding changes to the documentation
[ ] I have added tests that validate the behaviour of the software
[x] I validated that new and existing unit tests pass locally with my changes
[ ] Any dependent changes have been merged and published in downstream modules
[ ] I have bumped the version of the library
My feeling here was more or less "let's use what is in the latest Ubuntu". But I understand that if we really want the latest pydantic, we need this workaround.
|
2025-04-01T06:38:08.317563
| 2018-02-06T04:56:19
|
294632411
|
{
"authors": [
"coveralls",
"kunsingh"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4456",
"repo": "canoo/dolphin-platform",
"url": "https://github.com/canoo/dolphin-platform/pull/840"
}
|
gharchive/pull-request
|
Controller tests for parent child
This change is
https://github.com/canoo/dolphin-platform/issues/603
Coverage increased (+0.07%) to 57.736% when pulling 007da71d8ea45d03de3c493564522a4ee974a19d on ControllerTestsForParentChild into 5483d3fbcbb5f61ca999e0cba6f932f6b844d4e8 on master.
Review Comments Fixed.
|
2025-04-01T06:38:08.319329
| 2022-01-12T20:22:22
|
1100755625
|
{
"authors": [
"canozbey",
"izemlyanskiy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4457",
"repo": "canozbey/finite-state-rice-decoder",
"url": "https://github.com/canozbey/finite-state-rice-decoder/pull/2"
}
|
gharchive/pull-request
|
global refactoring and make possible to access longs
Hi! Sorry, there were long holidays in my country, that's why I kept silent all that time.
Thank you very much for the hints you gave me in #1. I successfully adapted your code to longs, and now I'm happy to share the results.
I made a minor refactoring, so now it's more like a library that, in theory, is ready for prod use.
I would be more than happy to hear any thoughts about this work.
Thank you once again.
It looks awesome! And very neat indeed. I once again understand how primitive I am at software engineering, particularly with respect to high-level design :) Thank you for your effort!
|
2025-04-01T06:38:08.339206
| 2016-01-03T20:16:08
|
124668952
|
{
"authors": [
"cantino",
"dsander"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4458",
"repo": "cantino/huginn",
"url": "https://github.com/cantino/huginn/pull/1205"
}
|
gharchive/pull-request
|
When the WebsiteAgent receives Events, we do not need to require that they contain a url keyword
I think it’s unfortunate that we wrote this Agent to use the url value from an event. The newer url_from_event is much better, and could just be set to {{ url }} to emulate the old behavior, but dropping this would be a breaking change. I’m not sure how to handle this, but it sucks that if an incoming event just contains a url keyword, even when you just wanted merging behavior, it changes the behavior of the Agent.
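To make that concrete, a WebsiteAgent options sketch that opts into the event-supplied URL explicitly might look like this (the extract block and values are placeholders, not from this issue):

{
  "url_from_event": "{{ url }}",
  "type": "html",
  "mode": "merge",
  "extract": {
    "title": { "css": "h1", "value": "string(.)" }
  }
}

With url_from_event absent, an incoming event that merely happens to carry a url key would silently change which page the Agent fetches, which is the surprising behavior described above.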
@dsander, I've added a migration and removed usage of the Event's url payload value. Can you or @knu think of any other way that this could break existing users who expect this behavior?
The migration looks good to me.
Thanks!
|
2025-04-01T06:38:08.343104
| 2022-09-14T06:57:55
|
1372461514
|
{
"authors": [
"StefH",
"canton7"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4459",
"repo": "canton7/RestEase",
"url": "https://github.com/canton7/RestEase/issues/235"
}
|
gharchive/issue
|
Question: Extend the RestEaseGeneratedType?
When using the RestEase.SourceGenerator, an internal class is generated: internal class Implementation_1_ITestApi : global::Test.Net.Client.ITestApi.
I want to extend this class like:
public class MyClient : RestEaseGeneratedTypes.Implementation_1_ITestApi, ITestClient
{
    public Task<Result> DoSomethingAsync(string request, CancellationToken cancellationToken = default)
    {
        throw new System.NotImplementedException();
    }
}
But I encounter several issues:
CS0060 Inconsistent accessibility: base class 'Implementation_1_ITestApi' is less accessible than class 'MyClient'
CS7036 There is no argument given that corresponds to the required formal parameter 'requester' of 'Implementation_1_ITestApi.Implementation_1_ITestApi(IRequester)'
Do you have a solution or plan to change this behavior in a next version?
No, that is explicitly not supported. Even if you declare your subclass internal and give it the right constructor, there's an [Obsolete] attribute which will cause a compiler error. If you manage to hack your way around that, RestClient.For doesn't know anything about your subclass and won't instantiate it.
The name of a generated type is not stable (it may change from build to build), and the constructor syntax may change from release to release.
The supported way to add extra methods is using extension methods, as documented here.
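For reference, the extension-method route looks roughly like this (a sketch with hypothetical names; Result and ITestApi stand in for your own types):

using System.Threading;
using System.Threading.Tasks;
using RestEase;

public interface ITestApi
{
    [Get("items/{id}")]
    Task<Result> GetItemAsync([Path] string id, CancellationToken cancellationToken = default);
}

public static class TestApiExtensions
{
    // Extra behaviour lives next to the interface instead of subclassing
    // the generated implementation.
    public static async Task<Result> DoSomethingAsync(
        this ITestApi api, string request, CancellationToken cancellationToken = default)
    {
        // Compose the generated methods however you need here.
        return await api.GetItemAsync(request, cancellationToken);
    }
}

RestClient.For<ITestApi>(...) keeps working unchanged, and callers simply see the extra method on the interface.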
Thank you.
|
2025-04-01T06:38:08.355333
| 2017-09-11T09:06:07
|
256631357
|
{
"authors": [
"caouecs",
"idhamperdameian"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4461",
"repo": "caouecs/Laravel-lang",
"url": "https://github.com/caouecs/Laravel-lang/issues/751"
}
|
gharchive/issue
|
Validation changed in original source.
I think you need to update it again, please look at the original source here
https://github.com/caouecs/Laravel-lang/blob/827aa1240855862582573495aa67b50835792bc5/script/en/validation.php#L44
:'(
Headache? 😺
I opened an issue for all languages (#752)
|
2025-04-01T06:38:08.356398
| 2018-04-26T20:10:31
|
318179531
|
{
"authors": [
"caouecs",
"mohsen4887"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4462",
"repo": "caouecs/Laravel-lang",
"url": "https://github.com/caouecs/Laravel-lang/pull/819"
}
|
gharchive/pull-request
|
not_regex translated and province attribute added
this is my first pull request 😄
i'll work on fa language and help to make it perfect.
Thank you
|
2025-04-01T06:38:08.364280
| 2019-08-28T11:55:32
|
486334814
|
{
"authors": [
"nnganesha"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4463",
"repo": "capergroup/bayou",
"url": "https://github.com/capergroup/bayou/issues/221"
}
|
gharchive/issue
|
Source files for the web front end
Any idea where we can see the source for the front end (askbayou.com)? I can see that there is a flask server running. Ports 8080, 8081 & 8084 are open. But 8084 seems to be accessible for API access only. Any documentation on how to call the same (http://localhost:8084/apisynthesis)?
Thanks a lot...
|
2025-04-01T06:38:08.399441
| 2018-09-26T20:34:54
|
364200663
|
{
"authors": [
"jberg"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4464",
"repo": "captbaritone/webamp",
"url": "https://github.com/captbaritone/webamp/pull/659"
}
|
gharchive/pull-request
|
Add MilkDrop title text animation
Text doesn't look great at the default size; not sure if there is something wrong or it's just small so it looks jaggy.
I'm working on fixing the blurriness of the text in butterchurn
the text should be less aliased now (we were rendering at a smaller resolution and scaling it up before)
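The usual fix for that kind of blurriness, as a generic sketch rather than the actual butterchurn change, is to back the text canvas with device pixels instead of CSS pixels before drawing:

function sizeTextCanvas(canvas, cssWidth, cssHeight) {
  const ratio = window.devicePixelRatio || 1;
  // Allocate enough backing pixels for the display density...
  canvas.width = cssWidth * ratio;
  canvas.height = cssHeight * ratio;
  // ...while keeping the on-screen size the same.
  canvas.style.width = cssWidth + "px";
  canvas.style.height = cssHeight + "px";
  const ctx = canvas.getContext("2d");
  ctx.scale(ratio, ratio);
  return ctx;
}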
|
2025-04-01T06:38:08.409217
| 2021-01-21T19:57:35
|
791450758
|
{
"authors": [
"RobertaJHahn",
"yellowdragonfly"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4465",
"repo": "carbon-design-system/carbon-for-ibm-dotcom",
"url": "https://github.com/carbon-design-system/carbon-for-ibm-dotcom/issues/4971"
}
|
gharchive/issue
|
menu item doesn't show active state in hamburger menu in masthead
Talked with Anna Wen on this issue. She was able to reproduce it on the current testing environment.
When setting the last menu item link in the masthead to be selected, the item does not have a selected state in the mobile nav, only on desktop
DESKTOP:
MOBILE MENU: (You can see the selected menu item in desktop does not have selected state in mobile)
@annawen1 Issue is ready to finish the work. Airtable updated, labels and release added.
|
2025-04-01T06:38:08.415609
| 2021-05-26T17:27:45
|
902719681
|
{
"authors": [
"oliviaflory"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4466",
"repo": "carbon-design-system/carbon-for-ibm-dotcom",
"url": "https://github.com/carbon-design-system/carbon-for-ibm-dotcom/issues/6200"
}
|
gharchive/issue
|
[Content group] without expressive, update to use Carbon core
Detailed description
Describe in detail the issue you're having.
With the expressive theme removed, Link and list within Content group should be updated to use the Link large and List large variants from Carbon core.
I am hoping that updating the Link and list within Content group will cascade to all instances of Content group simple, with image, etc.
Please let me know if we need additional issues to update the other stories individually
Assuming CTA will be updated through Link with icon in issue #6179
Is this a feature request (new component, new icon), a bug, or a general issue?
Is this issue related to a specific component?
What did you expect to happen? What happened instead? What would you like to see changed?
What browser are you working in?
What version of Carbon for IBM.com are you using?
What offering/product do you work on? Any pressing ship or release dates we should be aware of?
Additional information
PR removing expressive
Web components
React
Closing in favor of #6185 should be able to update both instances within utility fix.
|
2025-04-01T06:38:08.418061
| 2019-05-28T16:19:28
|
449361673
|
{
"authors": [
"ibmer20",
"jcharnetsky"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4467",
"repo": "carbon-design-system/carbon-tutorial",
"url": "https://github.com/carbon-design-system/carbon-tutorial/pull/63"
}
|
gharchive/pull-request
|
feat(tutorial): complete step 1
Closes #
{{short description}}
Changelog
New
{{new thing}}
Changed
{{change thing}}
Removed
{{removed thing}}
Congratulations! 🥇 You have successfully completed part 1.
|
2025-04-01T06:38:08.420524
| 2022-03-29T21:49:51
|
1185450703
|
{
"authors": [
"joshblack",
"laurenmrice",
"mbgower"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4468",
"repo": "carbon-design-system/carbon-website",
"url": "https://github.com/carbon-design-system/carbon-website/pull/2831"
}
|
gharchive/pull-request
|
Update usage.mdx
Removed screen reader info; usual editorial tweaks with Keyboard. Made "group label" the consistent name by removing "heading" (as discussed).
Most important change to scrutinize: the repositioning of the 2nd bullet of the label to the group label area. Took a guess at what was meant; may not be correct.
The other change to confirm: I moved the last bullet of Group labels to Checkbox labels and modified one word. It otherwise made no sense to me.
@mbgower Maybe this should be a discussion topic. Group labels should be concise and to the point. For other types of form inputs, after selecting something, you have the option to set it as a warning state in case you need to communicate more information based on the current selection, which could be something similar to what you are talking about. Checkbox does not currently have a warning state though.
@laurenmrice I've removed the 'instruction' text, and added it to the next agenda.
With that done, I think this is ready to go?
bump @aledavila or @dakahn when you get a sec!
|
2025-04-01T06:38:08.447508
| 2019-10-18T20:54:03
|
509305734
|
{
"authors": [
"oliviaflory"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4469",
"repo": "carbon-design-system/ibm-dotcom-library-design-kit",
"url": "https://github.com/carbon-design-system/ibm-dotcom-library-design-kit/issues/7"
}
|
gharchive/issue
|
Design kit: add icon cell variants
Carbon added additional icon cell variations to design kit
02 hover
03 selected
04 disabled
Acceptance criteria:
[ ] white theme
[ ] gray 10 theme
[ ] gray 90 theme
[ ] gray 100 theme
not stale
|
2025-04-01T06:38:08.450502
| 2023-11-29T19:07:01
|
2017235785
|
{
"authors": [
"elycheea",
"ljcarot"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4470",
"repo": "carbon-design-system/ibm-products",
"url": "https://github.com/carbon-design-system/ibm-products/issues/3863"
}
|
gharchive/issue
|
Clean up Datagrid stories
Reorganize Datagrid stories so that base component features are exposed top level and “extensions” are included as extensions.
The base component of the Datagrid includes the following features
– Table headers
– Clickable row items
– Row action buttons
– Batch actions
– Empty state
– Frozen columns (scrolling)
– Responsiveness (scrolling)
– Column alignment
– Infinite scrolling
– Resizable columns
There were some earlier proposals for reorganizing the stories, although batch actions may still better suit the base section for now.
Moving to Later for now since some of the previous cleanup broke docs. May want to revisit this after we do some Storybook improvements.
Done
|
2025-04-01T06:38:08.498914
| 2022-12-13T11:58:04
|
1494090408
|
{
"authors": [
"LukaKurnjek",
"rdlrt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4471",
"repo": "cardano-foundation/developer-portal",
"url": "https://github.com/cardano-foundation/developer-portal/issues/886"
}
|
gharchive/issue
|
Issue with building cardano-node: Fedora 37 official repos do not contain ncurses-compat-libs
I followed the procedure https://github.com/cardano-foundation/developer-portal/blob/staging/docs/get-started/installing-cardano-node.md on my Fedora 37 (newest version as of writing) and found out that the package ncurses-compat-libs that is needed to install the cardano-node is no longer contained in the official Fedora repositories. I tried to install the Fedora 36 version of this package on Fedora 37 and had dependency issues. I'm not sure whether the same problem also appears with the newest RHEL.
Can this be raised against the cardano-node repo? ncurses-compat-libs was dropped in RHEL/Fedora from ncurses-6.3.1, as the newer ABI has been live for 7 years. There are hacks around it (installing ncurses and creating a symlink /usr/lib64/libncurses.so.5 pointing to /usr/lib64/libncurses.so.6), but it should be easier to solve correctly
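For anyone needing the workaround right now, the symlink hack mentioned above amounts to roughly the following (sketch only; library paths differ per system, and this merely papers over the ABI gap):

sudo dnf install ncurses-libs
sudo ln -s /usr/lib64/libncurses.so.6 /usr/lib64/libncurses.so.5
# some builds may also look for the old tinfo soname
sudo ln -s /usr/lib64/libtinfo.so.6 /usr/lib64/libtinfo.so.5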
|
2025-04-01T06:38:08.501937
| 2022-08-12T00:10:11
|
1336611305
|
{
"authors": [
"falcon78921"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4472",
"repo": "cardinal-dev/Cardinal",
"url": "https://github.com/cardinal-dev/Cardinal/issues/180"
}
|
gharchive/issue
|
[Documentation] Fix Redundant Header in Compatibility Guide
See: https://cardinal-dev.github.io/Cardinal/pages/compatibility-guide/
Fixed in: https://github.com/cardinal-dev/Cardinal/pull/185
|
2025-04-01T06:38:08.505042
| 2024-10-30T00:32:49
|
2622587337
|
{
"authors": [
"habdelra",
"lukemelia"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4473",
"repo": "cardstack/boxel",
"url": "https://github.com/cardstack/boxel/pull/1735"
}
|
gharchive/pull-request
|
Only show finder filter scrollbar when hovered
This PR updates the cards grid such that the filter list scroll bar is only shown when hovered over.
Note that this update is hostile to touch devices like the iPad...
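A typical way to get this effect, as an illustrative sketch rather than the exact code in this PR (.filter-list is a placeholder class), is to hide the scrollbar until the list is hovered:

.filter-list {
  overflow-y: auto;
  scrollbar-width: none;            /* Firefox */
}
.filter-list::-webkit-scrollbar {
  display: none;                    /* Chrome / Safari */
}
.filter-list:hover {
  scrollbar-width: thin;
}
.filter-list:hover::-webkit-scrollbar {
  display: block;
  width: 6px;
}

Overlay-scrollbar platforms (e.g. macOS with a trackpad) add their own show-on-scroll behavior on top of this, which is what the videos below illustrate.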
I tried it on a MacBook. It works when connected to an external mouse but does not show if I just use the trackpad from the MacBook.
If an external mouse is connected
scroll-showing-on-macbook-with-mouse-connected.mov
Using Macbook trackpad only
It only shows until you scroll
scroll-not-showing-on-macbook-mousepad-only.mov
Thanks for that. My computer is an Ubuntu machine. @lukemelia is this what you want to see in the case that a user is on a MacBook and uses a trackpad?
I think this is OK. Let's live with it like this and see how it feels.
|