| added (string) | created (timestamp[us]) | id (string) | metadata (dict) | source (string, 2 classes) | text (string) |
|---|---|---|---|---|---|
2025-04-01T06:38:08.546566
| 2018-01-10T15:14:10
|
287468375
|
{
"authors": [
"eds89",
"germanros1987"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4474",
"repo": "carla-simulator/carla",
"url": "https://github.com/carla-simulator/carla/issues/125"
}
|
gharchive/issue
|
Loading new camera post-processing assets dynamically
Hello there.
How can I load a new camera postprocessing effect (uasset) without having to re-compile Carla? Is that possible at this stage?
This is now totally possible using CARLA API, so I am closing this issue.
|
2025-04-01T06:38:08.564069
| 2020-12-01T11:07:07
|
754298371
|
{
"authors": [
"HaoZhouGT",
"OmarAbdElNaser",
"Vaan5"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4475",
"repo": "carla-simulator/carla",
"url": "https://github.com/carla-simulator/carla/issues/3653"
}
|
gharchive/issue
|
[Camera RGB][synchronous mode]: sensor_tick not in synch with fixed_delta_seconds
Problem - short description
I have observed weird (at least to me) behavior when using the sensor_tick of the camera sensor (haven't tried other sensors yet) together with synchronous mode.
As soon as a value other than 0 is set, the sensor data is not transmitted during the second tick.
Additionally, the sensor data is not transmitted according to the least common multiple of sensor_tick and fixed_delta_seconds.
See Observations for more info.
Example script
This is a modified sensor_synchronization.py example:
import glob
import os
import sys
from queue import Queue
from queue import Empty

sys.path.insert(0, r'C:\workdir\installations\carla\<IP_ADDRESS>\CARLA_<IP_ADDRESS>\WindowsNoEditor\PythonAPI\carla\dist\carla-0.9.9-py3.7-win-amd64.egg')

import carla


def sensor_callback(sensor_data, sensor_queue, sensor_name):
    # just to make sure that something weird doesn't happen due to the queue
    print("RECEIVED sensor data for frame: {}".format(sensor_data.frame))
    sensor_queue.put((sensor_data.frame, sensor_name, sensor_data.timestamp))


def main():
    # We start creating the client
    client = carla.Client('localhost', 2000)
    client.set_timeout(2.0)
    world = client.get_world()

    try:
        original_settings = world.get_settings()
        settings = world.get_settings()

        # We set CARLA synchronous mode
        settings.fixed_delta_seconds = 0.02
        settings.synchronous_mode = True
        world.apply_settings(settings)

        sensor_queue = Queue()
        blueprint_library = world.get_blueprint_library()
        cam_bp = blueprint_library.find('sensor.camera.rgb')
        sensor_list = []

        ## 1. no sensor tick changes
        #cam_bp.set_attribute('sensor_tick', '0.0')   # 2. manually set to 0.0
        #cam_bp.set_attribute('sensor_tick', '0.02')  # 3. manually set to 0.02 (same as fixed_delta_seconds)
        #cam_bp.set_attribute('sensor_tick', '0.04')  # 4. manually set to 0.04 (double the value of fixed_delta_seconds)
        #cam_bp.set_attribute('sensor_tick', '0.03')  # 5. manually set to 0.03 (least common multiple is 0.06)

        cam01 = world.spawn_actor(cam_bp, carla.Transform())
        cam01.listen(lambda data: sensor_callback(data, sensor_queue, "camera01"))
        sensor_list.append(cam01)

        # Main loop
        for i in range(10):
            # Tick the server
            world.tick()
            w_frame = world.get_snapshot().frame
            w_time = world.get_snapshot().timestamp
            print("\nWorld's frame: %d timestamp: %f" % (w_frame, w_time.elapsed_seconds))
            try:
                for i in range(0, len(sensor_list)):
                    s_frame = sensor_queue.get(True, 1.0)
                    print("    Frame: %d   Sensor: %s   Timestamp: %f" % (s_frame[0], s_frame[1], s_frame[2]))
            except Empty:
                print("    Some of the sensor information is missed")
    finally:
        world.apply_settings(original_settings)
        for sensor in sensor_list:
            sensor.destroy()


if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print(' - Exited by user.')
In order to try it out on your machine:
modify sys.path accordingly, and
uncomment the sensor_tick setting that interests you (see Observations).
I didn't change the queue logic much; I just added a print to make sure that I don't miss any sensor data due to the nature of the example (queue and popping).
Observations
I use fixed_delta_seconds of 0.02 in all tests, and modify the sensor_tick of the camera to observe the following:
sensor_tick not modified -> image received every tick [EXPECTED]
World's frame: 178547 timestamp: 2195.663063
RECEIVED sensor data for frame: 178547
Frame: 178547 Sensor: camera01 Timestamp: 2195.663063
World's frame: 178548 timestamp: 2195.683063
RECEIVED sensor data for frame: 178548
Frame: 178548 Sensor: camera01 Timestamp: 2195.683063
World's frame: 178549 timestamp: 2195.703063
RECEIVED sensor data for frame: 178549
Frame: 178549 Sensor: camera01 Timestamp: 2195.703063
World's frame: 178550 timestamp: 2195.723063
RECEIVED sensor data for frame: 178550
Frame: 178550 Sensor: camera01 Timestamp: 2195.723063
World's frame: 178551 timestamp: 2195.743063
RECEIVED sensor data for frame: 178551
Frame: 178551 Sensor: camera01 Timestamp: 2195.743063
World's frame: 178552 timestamp: 2195.763063
RECEIVED sensor data for frame: 178552
Frame: 178552 Sensor: camera01 Timestamp: 2195.763063
World's frame: 178553 timestamp: 2195.783063
RECEIVED sensor data for frame: 178553
Frame: 178553 Sensor: camera01 Timestamp: 2195.783063
World's frame: 178554 timestamp: 2195.803063
RECEIVED sensor data for frame: 178554
Frame: 178554 Sensor: camera01 Timestamp: 2195.803063
World's frame: 178555 timestamp: 2195.823063
RECEIVED sensor data for frame: 178555
Frame: 178555 Sensor: camera01 Timestamp: 2195.823063
World's frame: 178556 timestamp: 2195.843063
RECEIVED sensor data for frame: 178556
Frame: 178556 Sensor: camera01 Timestamp: 2195.843063
sensor_tick manually set to 0.0 -> image received every tick [EXPECTED]
World's frame: 182146 timestamp: 2241.293400
RECEIVED sensor data for frame: 182146
Frame: 182146 Sensor: camera01 Timestamp: 2241.293400
World's frame: 182147 timestamp: 2241.313400
RECEIVED sensor data for frame: 182147
Frame: 182147 Sensor: camera01 Timestamp: 2241.313400
World's frame: 182148 timestamp: 2241.333400
RECEIVED sensor data for frame: 182148
Frame: 182148 Sensor: camera01 Timestamp: 2241.333400
World's frame: 182149 timestamp: 2241.353400
RECEIVED sensor data for frame: 182149
Frame: 182149 Sensor: camera01 Timestamp: 2241.353400
World's frame: 182150 timestamp: 2241.373400
RECEIVED sensor data for frame: 182150
Frame: 182150 Sensor: camera01 Timestamp: 2241.373400
World's frame: 182151 timestamp: 2241.393400
RECEIVED sensor data for frame: 182151
Frame: 182151 Sensor: camera01 Timestamp: 2241.393400
World's frame: 182152 timestamp: 2241.413400
RECEIVED sensor data for frame: 182152
Frame: 182152 Sensor: camera01 Timestamp: 2241.413400
World's frame: 182153 timestamp: 2241.433400
RECEIVED sensor data for frame: 182153
Frame: 182153 Sensor: camera01 Timestamp: 2241.433400
World's frame: 182154 timestamp: 2241.453400
RECEIVED sensor data for frame: 182154
Frame: 182154 Sensor: camera01 Timestamp: 2241.453400
World's frame: 182155 timestamp: 2241.473400
RECEIVED sensor data for frame: 182155
Frame: 182155 Sensor: camera01 Timestamp: 2241.473400
sensor_tick manually set to 0.02 -> image NOT received for the second tick, afterwards received every tick [NOT EXPECTED]
World's frame: 184174 timestamp: 2270.390377
RECEIVED sensor data for frame: 184174
Frame: 184174 Sensor: camera01 Timestamp: 2270.390377
World's frame: 184175 timestamp: 2270.410377
Some of the sensor information is missed <---- MISS ALWAYS on the second tick
World's frame: 184176 timestamp: 2270.430377
RECEIVED sensor data for frame: 184176
Frame: 184176 Sensor: camera01 Timestamp: 2270.430377
World's frame: 184177 timestamp: 2270.450377
RECEIVED sensor data for frame: 184177
Frame: 184177 Sensor: camera01 Timestamp: 2270.450377
World's frame: 184178 timestamp: 2270.470377
RECEIVED sensor data for frame: 184178
Frame: 184178 Sensor: camera01 Timestamp: 2270.470377
World's frame: 184179 timestamp: 2270.490377
RECEIVED sensor data for frame: 184179
Frame: 184179 Sensor: camera01 Timestamp: 2270.490377
World's frame: 184180 timestamp: 2270.510377
RECEIVED sensor data for frame: 184180
Frame: 184180 Sensor: camera01 Timestamp: 2270.510377
World's frame: 184181 timestamp: 2270.530377
RECEIVED sensor data for frame: 184181
Frame: 184181 Sensor: camera01 Timestamp: 2270.530377
World's frame: 184182 timestamp: 2270.550377
RECEIVED sensor data for frame: 184182
Frame: 184182 Sensor: camera01 Timestamp: 2270.550377
World's frame: 184183 timestamp: 2270.570377
RECEIVED sensor data for frame: 184183
Frame: 184183 Sensor: camera01 Timestamp: 2270.570377
sensor_tick manually set to 0.04 -> data not received for the second and third tick [NOT EXPECTED]; afterwards it is received every second tick [EXPECTED]
World's frame: 192157 timestamp: 2367.593661
RECEIVED sensor data for frame: 192157
Frame: 192157 Sensor: camera01 Timestamp: 2367.593661
World's frame: 192158 timestamp: 2367.613661
Some of the sensor information is missed <-- MISS
World's frame: 192159 timestamp: 2367.633661
Some of the sensor information is missed <-- MISS
World's frame: 192160 timestamp: 2367.653661
RECEIVED sensor data for frame: 192160
Frame: 192160 Sensor: camera01 Timestamp: 2367.653661
World's frame: 192161 timestamp: 2367.673661
Some of the sensor information is missed
World's frame: 192162 timestamp: 2367.693661
RECEIVED sensor data for frame: 192162
Frame: 192162 Sensor: camera01 Timestamp: 2367.693661
World's frame: 192163 timestamp: 2367.713661
Some of the sensor information is missed
World's frame: 192164 timestamp: 2367.733661
RECEIVED sensor data for frame: 192164
Frame: 192164 Sensor: camera01 Timestamp: 2367.733661
World's frame: 192165 timestamp: 2367.753661
Some of the sensor information is missed
World's frame: 192166 timestamp: 2367.773661
RECEIVED sensor data for frame: 192166
Frame: 192166 Sensor: camera01 Timestamp: 2367.773661
sensor_tick manually set to 0.03 -> weird sequence of HIT MISS HIT MISS at the beginning
World's frame: 205626 timestamp: 2531.280411
RECEIVED sensor data for frame: 205626
Frame: 205626 Sensor: camera01 Timestamp: 2531.280411
World's frame: 205627 timestamp: 2531.300411
Some of the sensor information is missed
World's frame: 205628 timestamp: 2531.320411
RECEIVED sensor data for frame: 205628
Frame: 205628 Sensor: camera01 Timestamp: 2531.320411
World's frame: 205629 timestamp: 2531.340411
Some of the sensor information is missed
World's frame: 205630 timestamp: 2531.360411
RECEIVED sensor data for frame: 205630
Frame: 205630 Sensor: camera01 Timestamp: 2531.360411
World's frame: 205631 timestamp: 2531.380411
RECEIVED sensor data for frame: 205631
Frame: 205631 Sensor: camera01 Timestamp: 2531.380411
World's frame: 205632 timestamp: 2531.400411
Some of the sensor information is missed
World's frame: 205633 timestamp: 2531.420411
RECEIVED sensor data for frame: 205633
Frame: 205633 Sensor: camera01 Timestamp: 2531.420411
World's frame: 205634 timestamp: 2531.440411
RECEIVED sensor data for frame: 205634
Frame: 205634 Sensor: camera01 Timestamp: 2531.440411
World's frame: 205635 timestamp: 2531.460411
Some of the sensor information is missed
Problems
Maybe some of the questions below are not bugs (you know better than I do), but based on the information available in the documentation, and without looking deeper into the workings of UE, I couldn't figure out the behavior. We can extend the docs if needed (I can make a PR as long as I understand the behavior, no problem).
Why is data always skipped on the second tick (case 3)?
Is it because of the SetActorTickInterval that is used?
In case 4, why don't we get data for the 2nd and 3rd tick? I would expect something like:
1. 0.00 -> 0.02 HIT (if I interpret this as an initial value)
2. 0.02 -> 0.04 MISS (0.04 seconds didn't pass from 0.02 -> 0.04)
3. 0.04 -> 0.06 HIT (at 0.06, 0.04 seconds passed: 0.02 -> 0.06)
4. 0.06 -> 0.08 MISS
5. 0.08 -> 0.10 HIT
6. 0.10 -> 0.12 MISS
7. 0.12 -> 0.14 HIT
8. 0.14 -> 0.16 MISS
In case 5, why do we have a sequence of HIT MISS HIT MISS at the beginning? I would expect something like:
1. 0.00 -> 0.02 HIT (if I interpret this as an initial value)
2. 0.02 -> 0.04 MISS (0.03 seconds didn't pass from 0.02 -> 0.04)
3. 0.04 -> 0.06 HIT (at 0.05, 0.03 seconds passed)
4. 0.06 -> 0.08 HIT (at 0.08 0.03 seconds passed again)
5. 0.08 -> 0.10 MISS
6. 0.10 -> 0.12 HIT
7. 0.12 -> 0.14 HIT
8. 0.14 -> 0.16 MISS
Could it be related to how the ticks are stored (double vs float)?
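One way to reason about the double-vs-float question is to model the sensor's elapsed-time accumulator explicitly. The model below is an assumption made for illustration, not CARLA's actual implementation:

```python
# Hypothetical model of a sensor's capture timer: accumulate simulated time
# each world tick and capture once at least `sensor_tick` seconds have
# elapsed. This is an assumption for illustration, not CARLA's actual code.
def simulate(fixed_delta, sensor_tick, ticks=8):
    elapsed = 0.0  # time since the last capture
    schedule = []
    for _ in range(ticks):
        elapsed += fixed_delta
        if elapsed >= sensor_tick:
            schedule.append("HIT")
            elapsed -= sensor_tick
        else:
            schedule.append("MISS")
    return schedule

# Because 0.02 and 0.03 are not exactly representable in binary floating
# point, comparisons like `elapsed >= sensor_tick` can flip either way near
# the boundary, producing irregular HIT/MISS patterns like those observed.
print(simulate(0.02, 0.04))
print(simulate(0.02, 0.03))
```

If the engine stores the accumulator as a 32-bit float while the tick length is a 64-bit double (or vice versa), the rounding in such a model would differ again, which could explain the off-by-one patterns at the start of the sequences.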
Related issues
#3385 and the related PR and testing issue
I couldn't find anything useful on the Discord channel either.
Environment
Platform: Windows 10
Python: Python 3.7.7
Carla: <IP_ADDRESS>, <IP_ADDRESS>, 0.9.10 (ignored due to the linked issue)
Is there a solution for this issue without waiting for the next release?
Hi, I found a similar issue in Carla 0.9.10: changing the sensor tick for the camera just doesn't work. The timestamps of the frames do not change accordingly after we set a different sensor tick.
Did you find any solution?
In the latest version, 0.9.11, they solved this issue, but I didn't try it yet.
Thanks. I also tried async mode, and it doesn't work either. Also, the real sensor tick depends on the rendering quality. Did you encounter a similar issue?
Sorry, I didn't go through this issue before, but I checked the bugs they solved in the latest version, and they mentioned it.
@OmarAbdElNaser Thanks. Where did they mention it's fixed? Can you point me to it? I saw something like "Fixed bug causing camera-based sensors to stop sending data", but I'm not sure it's related.
This is what I'm referring to: the camera was sending data without respecting the sensor tick, and they said they solved it.
@OmarAbdElNaser Great, I will test it with the new version and let you know if it's really fixed. Thanks.
@OmarAbdElNaser I tested 0.9.11; the sensor_tick functionality has been improved, at least in synchronous mode. The sensor tick setting is effective as long as you don't choose a very small value, which I guess is expected due to the performance limitations of our system.
This is not fixed. The problem still persists.
E.g. in case 4 from my description, we still get two misses one after the other.
Tried with 0.9.13.
Please reopen the issue.
|
2025-04-01T06:38:08.566889
| 2024-12-05T12:04:58
|
2720231774
|
{
"authors": [
"Blyron",
"Vincent318"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4476",
"repo": "carla-simulator/carla",
"url": "https://github.com/carla-simulator/carla/issues/8444"
}
|
gharchive/issue
|
How can I remove all things from a map
Hello, I want to build something in CarlaUE4 and want to do it in a map that is already there, for example Town01_Opt. I want to remove all other actors for this, but it doesn't work because some actors are greyed out in the World Outliner and I can neither click on them nor delete them. Does somebody know how I can remove all actors from a map (including the greyed-out ones)? Thank you for your help!
Hello! _Opt maps are composed of sublevels; you can just delete the sublevels and the actors will disappear:
https://dev.epicgames.com/documentation/en-us/unreal-engine/managing-multiple-levels?application_version=4.27
|
2025-04-01T06:38:08.568663
| 2020-09-29T16:34:50
|
711279495
|
{
"authors": [
"glopezdiest"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4477",
"repo": "carla-simulator/leaderboard",
"url": "https://github.com/carla-simulator/leaderboard/pull/64"
}
|
gharchive/pull-request
|
Removed carla and srunner from requirements
Both carla and srunner have been removed from the requirements.
Updated the README. This is heavily based on the first part of the leaderboard web. Adding @sergi-e
|
2025-04-01T06:38:08.572494
| 2019-02-05T15:08:19
|
406826630
|
{
"authors": [
"GetDarren"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4478",
"repo": "carla-simulator/scenario_runner",
"url": "https://github.com/carla-simulator/scenario_runner/issues/36"
}
|
gharchive/issue
|
ERROR: 'Vehicle' object has no attribute 'get_control'
(carla) cienet@cienet-desktop:~/scenario_runner$ python scenario_runner.py --scenario FollowLeadingVehicle
Preparing scenario: FollowLeadingVehicle
ScenarioManager: Running scenario FollowVehiclcd
Then I start a new terminal and try to run manual_control.py,
but I get an error like this:
ERROR: 'Vehicle' object has no attribute 'get_control'
Traceback (most recent call last):
File "manual_control.py", line 681, in main
game_loop(args)
File "manual_control.py", line 622, in game_loop
if not world.tick(clock):
File "manual_control.py", line 162, in tick
self.hud.tick(self, clock)
File "manual_control.py", line 281, in tick
c = world.vehicle.get_control()
AttributeError: 'Vehicle' object has no attribute 'get_control'
My friend and I have the same versions of the packages, but it works fine on his computer.
Please take a look.
You need to update to the recent CARLA release 0.9.3, which will resolve this issue.
I updated to 0.9.3 and then ran "python scenario_runner.py".
There is an import error:
Traceback (most recent call last):
  File "scenario_runner.py", line 25, in <module>
    from Scenarios.follow_leading_vehicle import *
  File "/home/cienet/scenario_runner/Scenarios/follow_leading_vehicle.py", line 24, in <module>
    from ScenarioManager.atomic_scenario_behavior import *
  File "/home/cienet/scenario_runner/ScenarioManager/atomic_scenario_behavior.py", line 21, in <module>
    from agents.navigation.basic_agent import *
  File "/home/cienet/CARLA_0.9.3/PythonAPI/agents/navigation/basic_agent.py", line 21, in <module>
    from agents.navigation.global_route_planner import GlobalRoutePlanner
  File "/home/cienet/CARLA_0.9.3/PythonAPI/agents/navigation/global_route_planner.py", line 15, in <module>
    from local_planner import RoadOption
ImportError: No module named 'local_planner'
But 'local_planner.py' is in the same folder as 'global_route_planner.py'.
I then commented out "from local_planner import RoadOption",
and scenario_runner.py and manual_control.py both worked.
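Commenting out the import hides the symptom, but the likely root cause is that global_route_planner.py uses a Python-2-style implicit relative import (`from local_planner import RoadOption`), which Python 3 rejects; the usual fix is an absolute import such as `from agents.navigation.local_planner import RoadOption`. A minimal, self-contained reproduction of the behavior (the package and module names here are hypothetical stand-ins):

```python
import os
import sys
import tempfile

# Build a throwaway package with two sibling modules, one using an implicit
# relative import (Python 2 style) and one using an absolute import.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "navigation")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "local_planner.py"), "w") as f:
    f.write("RoadOption = 'road-option'\n")
with open(os.path.join(pkg, "planner_implicit.py"), "w") as f:
    f.write("from local_planner import RoadOption\n")  # Python-2 style
with open(os.path.join(pkg, "planner_explicit.py"), "w") as f:
    f.write("from navigation.local_planner import RoadOption\n")  # absolute

sys.path.insert(0, root)
try:
    import navigation.planner_implicit  # fails on Python 3
except ImportError as e:
    print("implicit import failed:", e)

import navigation.planner_explicit  # absolute import works
print(navigation.planner_explicit.RoadOption)
```

Under Python 2 the implicit form worked because sibling modules were searched first; Python 3 treats `from local_planner import ...` as an absolute import, which is why the module "in the same folder" is not found.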
|
2025-04-01T06:38:08.576639
| 2020-09-02T07:42:06
|
690797139
|
{
"authors": [
"glopezdiest",
"r-snijders"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4479",
"repo": "carla-simulator/scenario_runner",
"url": "https://github.com/carla-simulator/scenario_runner/issues/626"
}
|
gharchive/issue
|
OpenSCENARIO support - PrivateActions
This issue is meant to track the support of PrivateActions within OSC 1.0:
[x] ActivateControllerAction (Activates CARLA's autopilot)
[x] ControllerAction
[x] LaneChangeAction (Some dynamics are still ignored)
[ ] LaneOffsetAction
[ ] LateralDistanceAction
[ ] LongitudinalDistanceAction
[x] SpeedAction
[ ] SynchronizeAction
[x] TeleportAction
[ ] VisibilityAction
[x] AcquirePositionAction
[x] AssignRouteAction
[ ] FollowTrajectoryAction
Thanks for working on improving support for OSC1.0! :+1:
I am especially interested in support for the LaneOffsetAction.
As of #628, AcquirePositionAction is now supported in the Story part
As of #689, SynchronizeAction is now supported
|
2025-04-01T06:38:08.582236
| 2023-09-07T01:52:18
|
1884977699
|
{
"authors": [
"carlk3"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4481",
"repo": "carlk3/no-OS-FatFS-SD-SPI-RPi-Pico",
"url": "https://github.com/carlk3/no-OS-FatFS-SD-SPI-RPi-Pico/pull/85"
}
|
gharchive/pull-request
|
Another interrupt handling bug fixed
Specifically, this line in spi_transfer:
// Clear the interrupt request.
dma_hw->ints0 = 1u << spi_p->rx_dma;
which must have been left over from an earlier version when only DMA_IRQ_0 was used.
Relevant to #74
|
2025-04-01T06:38:08.583832
| 2017-12-01T15:23:03
|
278499446
|
{
"authors": [
"m4sk1n"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4482",
"repo": "carlmjohnson/pomodoro",
"url": "https://github.com/carlmjohnson/pomodoro/issues/4"
}
|
gharchive/issue
|
Snap package
Pomodoro is now available as a Snap package!
https://github.com/m4sk1n/pomodoro
Maybe some info about it?
Closed, not fully working…
|
2025-04-01T06:38:08.593083
| 2021-05-20T22:51:27
|
897528354
|
{
"authors": [
"carlmjohnson",
"frontierpsycho"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4483",
"repo": "carlmjohnson/tumblr-importr",
"url": "https://github.com/carlmjohnson/tumblr-importr/issues/4"
}
|
gharchive/issue
|
Applying themes
I am a bit puzzled about how to apply themes to the resulting blogs. I ran the importer successfully, and also realized I probably need to copy over the layouts/ folder from this repository to my blog (it wasn't done automatically). But then, even though I've installed a theme and specified it in config.toml, it's not applied, my blog is plain.
Any idea if I did something wrong, and if not, what the correct method to theme the blog is?
Thanks for using this. It sounds like you didn't do anything wrong. Any normal Hugo template that you use with this importer will need to be partially rewritten to work with it. I'm not sure if you saw this "philosophy" section of the README, but it's good background on why:
When converting a Tumblr blog to Hugo, you may initially think you want all your content converted to Markdown files. For example, you may think you want your link posts to become something like ### Link: [$TITLE]($LINK)↵↵$CONTENT. The trouble with this approach is that converting to Markdown loses formatting information from Tumblr and locks you into a single representation of the data which cannot be easily changed later.
How tumblr-importr works instead is it reads the common post metadata out of the Tumblr API (title, URL, slug, date, etc.) and writes that in the format Hugo expects, then it makes all of the other data from Tumblr on the post available as a custom parameter. Now you can format your link posts using Hugo's templating language to make it look exactly how you want:
<h3>Link: <a href="{{ .Params.tumblr.url }}">{{ .Params.tumblr.title }}</a></h3>
{{ .Params.tumblr.description | safeHTML }}
If you decide the H3 should be an H2 or the content needs a wrapper <div class="content"> or you want to change "Link:" to be an emoji 🔗, all you need to do is change your Hugo theme, rather than going back and reformatting all your Markdown files. All of the information that Tumblr had on the post is available, making it possible to fully replicate a Tumblr theme in Hugo without any information loss.
The side effect of this philosophy is that you're going to need to change existing theme to make them work with this data. You can see some of the basics in the layouts directory here, but it's really just a suggested starting point. If I had more time, I would love to have a better sample theme or write some adaptations to show how it works.
If you look at the _default/single.html, the relevant section is here:
{{ .Render "content" }}
That means to render the content.html file according to the current page's type. A normal content.html looks like _defaults/content.html, which calls {{ .Content }}. In other words, it's just rendering the page's Markdown as is. This is what normal themes do because it's what you're expected to do by Hugo. What you'll need to do is to go into the theme you want and change {{ .Content }} to {{ .Render "content" }} and then add different content.html for the different Tumblr page types. So for example, for Tumblr's video page type, I have tumblr-video/content.html, which overrides the normal content.html and looks like this:
{{ range last 1 .Params.tumblr.player }}
<div class="vidblock">{{ .embed_code | safeHTML }}</div>
{{ end }}
<div class="caption">{{ .Params.tumblr.caption | safeHTML }}</div>
In other words, it says "if a page has type tumblr-video, instead of rendering Markdown (which isn't there), look at the .tumblr.player data and render the last embed code in that list, followed by the .tumblr.caption."
The other page types have similar content.html adaptations, such as showing the picture for a photo post or the link for a link post.
Hope that helps. Let me know if any part of that didn't make sense.
One more comment: It's possible that you'll find it easier to go from a custom Tumblr theme to a custom Hugo theme than vice versa. E.g. this snippet from Tumblr theme documentation:
<ol id="posts">
{block:Posts}{block:Text}
<li class="post text">
{block:Title}
<h3><a href="{Permalink}">{Title}</a></h3>
{/block:Title}{Body}
</li>
{/block:Text}{block:Photo}
would turn into something like this in a Hugo template:
<ol id="posts">
{{ range .Pages }}{{ if eq .Section "tumblr-text" }}
<li class="post text">
{{ if eq .Kind "page" }}
<h3><a href="{{ .Permalink }}">{{ .Title }}</a></h3>
{{ end }}
{{ .Params.tumblr.content | safeHTML }}
</li>
{{ end }}{{ if eq .Section "tumblr-photo" }}
Thanks for the detailed explanation. I guess I have to delve into Hugo themes then :)
I have one immediate question, though: let's say I manage to change an existing theme to render my tumblr posts the way I want it to. However, when I add new posts, I'm probably not going to follow the structure that the posts imported from tumblr have. So my guess is that any new blog posts I write will not work with the theme I created to accommodate my tumblr posts. Does that sound correct or am I misunderstanding something?
Yes, your edit is correct. That’s what I did with blog.carlmjohnson.net, where the old posts are Tumblr formatted but the new posts are normal Hugo files.
|
2025-04-01T06:38:08.601063
| 2024-08-16T09:05:51
|
2469842855
|
{
"authors": [
"Aalivexy",
"carloskiki"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4484",
"repo": "carloskiki/pulldown-latex",
"url": "https://github.com/carloskiki/pulldown-latex/issues/11"
}
|
gharchive/issue
|
Node.js binding/npm package
Currently, there seems to be no library on npm that converts LaTeX to MathML Core.
It would be great if we could provide a Node.js binding or a wasm-wasi-to-JavaScript binding!
I believe this could promote the popularity of MathML Core.
Yes, of course! This is one of the goals I have for this crate: having the library available as an npm package for JS.
I don't have much experience with publishing crates to npm, and I have never done it with a Rust lib. If you have any idea how it's done, I would gladly take the advice!
|
2025-04-01T06:38:08.606021
| 2023-11-20T18:34:12
|
2002779704
|
{
"authors": [
"LEFD",
"caro401",
"joethei"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4485",
"repo": "caro401/royal-velvet",
"url": "https://github.com/caro401/royal-velvet/issues/39"
}
|
gharchive/issue
|
Network usage is prohibited for themes
It appears that your theme uses network connections to load assets (e.g. fonts, icons, or images). This is prohibited by the official Obsidian developer policies because themes should function completely locally.
You can bundle an asset for local use by using data URLs. See this guide.
Please let us know if you have any questions. Any themes that use network connections will be removed from the official directory in the first week of January 2024.
The Obsidian team.
@LEFD what do you think is the best path forward for this?
Possibly the best thing is just not bundling the font anymore, so people can opt into using the fonts I like, but that's a breaking change. I'm not sure the best way to communicate to people that they would need to install fonts to keep the theme looking the same
@caro401 I think the best way forward would be to embed the fonts in the CSS file. That's a bit messy, but this way the theme would look as we intended out of the box.
I'm not sure the best way to communicate to people that they would need to install fonts to keep the theme looking the same
I don't think people read change logs for themes. (I don't). Installing fonts on mobile devices might be a problem as well.
Also, one always has the option to use a custom font if one desires to do so.
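The data-URL approach mentioned above can be sketched with a small script that base64-encodes a font file into an @font-face rule, so the theme ships the font inside its CSS instead of fetching it from the network. The file name and font family below are placeholders:

```python
import base64

# Sketch: produce a CSS @font-face rule with the font embedded as a data
# URL, so no network request is needed. File name, family, and format are
# illustrative placeholders, not the theme's actual assets.
def font_face_rule(path, family, fmt="woff2"):
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return (
        "@font-face {\n"
        f"  font-family: '{family}';\n"
        f"  src: url(data:font/{fmt};base64,{encoded}) format('{fmt}');\n"
        "}\n"
    )

# Example (hypothetical file):
# print(font_face_rule("RoyalVelvet.woff2", "Royal Velvet"))
```

The trade-off is file size: base64 inflates the font by about a third, and the rule lands in the theme's single CSS file, which is the "a bit messy" part noted above.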
|
2025-04-01T06:38:08.630753
| 2022-07-01T18:58:36
|
1291739052
|
{
"authors": [
"maneesha"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4486",
"repo": "carpentrycon/carpentrycon2022",
"url": "https://github.com/carpentrycon/carpentrycon2022/pull/77"
}
|
gharchive/pull-request
|
try bulleted view for schedule info
Changes the schedule details (what the asterisks and abbreviations mean) to a bulleted view rather than a paragraph view.
@acrall if you like it with the bullets, please merge.
If you'd prefer to keep it like it is, then you can just close this PR.
|
2025-04-01T06:38:08.667463
| 2017-01-08T19:12:24
|
199439572
|
{
"authors": [
"asztal",
"carstengehling",
"pjaeger16",
"slarti-b"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4487",
"repo": "carstengehling/jirastopwatch",
"url": "https://github.com/carstengehling/jirastopwatch/issues/47"
}
|
gharchive/issue
|
Improvement: Total worklog of today
In addition to the total of non-submitted worklogs in the time tracker, there should be a display of the user's total worklog time booked for today. It should be updated after submission of tracked time.
If you cannot get it via the API, an alternative would be to show only today's time tracked via Jira StopWatch, so you could (export the times to a local file and) add them up each time.
Thank you very much.
Hi Philip,
I cannot find anything in the Jira REST API that enables me to fetch all the worklogs that a single user has posted on a specific date. The only way is to poll through the issues, and that doesn't really work well if you remove an issue key from StopWatch during the day (which I do a lot, once I'm done with an issue).
You are welcome to fiddle with this yourself and if you come up with a solution, that you are happy about, I'll gladly accept a PR.
Could you make a local log instead?
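The polling approach described above, summing per-issue worklogs, could be sketched roughly like this. The JSON shape mirrors Jira's worklog payload (as returned by GET /rest/api/2/issue/{key}/worklog); the names and values are illustrative, and the HTTP fetching itself is left out:

```python
from datetime import date

# Sum the seconds a given user logged today across the worklog entries of
# the tracked issues. `worklogs` is a flat list of worklog dicts in Jira's
# payload shape; fetching them per issue is assumed to happen elsewhere.
def seconds_logged_today(worklogs, user, today=None):
    day = (today or date.today()).isoformat()
    total = 0
    for entry in worklogs:
        if entry["author"]["name"] == user and entry["started"].startswith(day):
            total += entry["timeSpentSeconds"]
    return total

# Illustrative data, not real worklogs:
logs = [
    {"author": {"name": "pjaeger16"},
     "started": "2017-03-20T09:00:00.000+0100", "timeSpentSeconds": 1800},
    {"author": {"name": "carsten"},
     "started": "2017-03-20T10:00:00.000+0100", "timeSpentSeconds": 900},
]
print(seconds_logged_today(logs, "pjaeger16", date(2017, 3, 20)))  # → 1800
```

As noted above, this breaks down once an issue key is removed from StopWatch, since its worklogs are no longer polled; a local log of submitted times avoids that gap.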
Possibly, yes. Then it would need to be reset automatically on change of the current date, so you should store the current date in the user config.
Could you develop this?
@pjaeger16: Not sure if you know about it already but we use a Jira plugin (https://marketplace.atlassian.com/plugins/org.everit.jira.timetracker.plugin/server/overview) which provides a nice page within Jira which lists the time logged by the current use per day, including total time. it's not bad (although I would like more flexibility to customize the columns in the view)
Personally, I think it's more appropriate there - the stopwatch is to help log time, not to audit or manage existing time logs in my view
I also use Everit's time tracker and it's pretty useful. It would be nice to see my previous work log entries in JIRA stopwatch somehow, for those times when I want to "fill in the blanks", but the Everit plugin actually works OK for that too.
Could you develop this?
I don't personally find the feature particularly useful, especially since it would require quite a lot to make it accurate. But if you could implement it yourself or know someone else who could, feel free to make a pull request.
|
2025-04-01T06:38:08.687962
| 2019-01-15T10:50:58
|
399289104
|
{
"authors": [
"MichaelGrupp",
"ftbmynameis",
"joekeo",
"jonra1993"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4488",
"repo": "cartographer-project/cartographer_ros",
"url": "https://github.com/cartographer-project/cartographer_ros/issues/1159"
}
|
gharchive/issue
|
Unable to match scans (i.e. submap list is empty and no map is being built)
Hi Cartographer team,
I am trying to run cartographer on a turtlebot 2 with a 360° rplidar using only the laser scan (as of now) without odometry or an imu. I understand this might not be optimal, however I think my results shouldn't be that wrong even without odom/imu.
I have managed to run cartographer, however it is not working and there has to be some major configuration issue, because not a single submap can be matched (in rviz, if I disable "All" under Submaps, the list of all submaps contains only one element, with the float in front of it increasing steadily). This also shows in the visualization, because no map is built: I only get a "random" scattering of laser scan points in grey color, while in other issues I have seen that a "white map with black edges" is supposed to be built.
My rosbag validate result:
Is there any documentation available for this tool? I can barely make sense of the output data. It doesn't seem wrong; however, I don't understand what the histogram/distribution is supposed to tell me.
https://gist.github.com/ftbmynameis/a6a51eba839f5906ecc971a7a4c99711
My branch with my configuration files:
Note: I have installed cartographer_ros using the official manual (i.e. installed it in a catkin workspace) and made my configuration changes in /install_isolated/share/cartographer_ros. Since the official repository seems to be located in the "src" folder I have copied my configuration files there (in a new branch "turtlebot_config") and pushed them to my github fork of the repository. While doing so I have noticed that configuration files in the install_isolated/.. and src/.. folders do not match exactly, which confuses me, but I have simply committed them hoping you guys have a better idea what is going on.
https://github.com/ftbmynameis/cartographer_ros/tree/turtlebot_config
My bag file is located here:
https://drive.google.com/open?id=1fs7C5IL_9VitraK0TFMOyb6UA18NkMbh
When running with the given configuration and bag file the point cloud in rviz starts sort of "spinning" which almost reminds me of a tumbling airplane going down and shows something major going wrong.
While I am here, I also have a question about the parameter TRAJECTORY_BUILDER_nD.num_accumulated_range_data: as I understand it, this is a sensor-dependent parameter (i.e. how many scans / how much data is provided by all my laser sensors); however, I couldn't figure out how to analyze my data to retrieve this value.
Thanks for your help and insights! If there is anything else required to be used I will try my best to provide it.
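On num_accumulated_range_data: as far as I understand it, this is the number of range-data messages Cartographer fuses into one unit before scan matching, not a hardware constant. A back-of-envelope sketch of how one might pick it (my own arithmetic, not a Cartographer API; for an RPLIDAR publishing one LaserScan message per full revolution, with num_subdivisions_per_laser_scan = 1, the usual starting point is 1):

```python
def num_accumulated_range_data(msgs_per_revolution, revolutions_to_accumulate=1):
    """How many range-data messages should make up one matching unit.

    If the driver splits a revolution into N messages (or Cartographer does,
    via num_subdivisions_per_laser_scan), you need N messages to cover one
    full revolution, times however many revolutions you want to average.
    """
    return msgs_per_revolution * revolutions_to_accumulate

# RPLIDAR: one LaserScan per revolution, no subdivisions
print(num_accumulated_range_data(1))   # 1
# e.g. a driver that emits 10 packets per revolution
print(num_accumulated_range_data(10))  # 10
```

Treat this as a heuristic sketch only; the value still needs validating against the actual message rate in the bag.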
I am using the same lidar in a custom robot, initially only with lidar, I had issues finding which bits to tune and how to do it. A first tuning approach that produced good results, my config.lua was as follows:
include "map_builder.lua"
include "trajectory_builder.lua"
options = {
map_builder = MAP_BUILDER,
trajectory_builder = TRAJECTORY_BUILDER,
map_frame = "map",
tracking_frame = "base_link",
published_frame = "base_link",
use_odometry = false,
provide_odom_frame = true,
odom_frame = "odom",
publish_frame_projected_to_2d = false,
use_pose_extrapolator = true,
use_nav_sat = false,
use_landmarks = false,
num_laser_scans = 1,
num_multi_echo_laser_scans = 0,
num_subdivisions_per_laser_scan = 1,
num_point_clouds = 0,
lookup_transform_timeout_sec = 0.2,
submap_publish_period_sec = 0.3,
pose_publish_period_sec = 5e-3,
trajectory_publish_period_sec = 30e-3,
rangefinder_sampling_ratio = 1.,
odometry_sampling_ratio = 1.,
fixed_frame_pose_sampling_ratio = 1.,
imu_sampling_ratio = 1.,
landmarks_sampling_ratio = 1.,
}
MAP_BUILDER.use_trajectory_builder_2d = true
--this one tries to match two laser scans together to estimate the position,
--I think if not on it will rely more on wheel odometry
TRAJECTORY_BUILDER_2D.use_online_correlative_scan_matching = true
-- tune this value to the amount of samples (I think revolutions) to average over
--before estimating the position of the walls and features in the environment
TRAJECTORY_BUILDER_2D.num_accumulated_range_data = 1
--use or not use IMU, if used, the tracking_frame should be set to the one that the IMU is on
TRAJECTORY_BUILDER_2D.use_imu_data = false
--bandpass filter for lidar distance measurements
TRAJECTORY_BUILDER_2D.min_range = 0.3
TRAJECTORY_BUILDER_2D.max_range = 8.
--This is the scan matcher and the weights to different assumptions
--occupied_space gives more weight to the 'previous' features detected.
TRAJECTORY_BUILDER_2D.ceres_scan_matcher.occupied_space_weight = 10.
TRAJECTORY_BUILDER_2D.ceres_scan_matcher.translation_weight = 10.
TRAJECTORY_BUILDER_2D.ceres_scan_matcher.rotation_weight = 40.
return options
might not be optimal, but it is working for the robot and its angular and linear speeds.
Hi @joekeo, I used your same configuration file without "use_pose_extrapolator", but the submap list is empty and there is no map. Do you know what the problem could be?
Closing for inactivity, we can't invest time to help you with your setup anymore unfortunately.
|
2025-04-01T06:38:08.714756
| 2022-08-18T23:57:46
|
1343765331
|
{
"authors": [
"Bruc3Stark",
"cofecatt",
"fernandolguevara",
"hsluoyz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4489",
"repo": "casdoor/casdoor",
"url": "https://github.com/casdoor/casdoor/issues/1031"
}
|
gharchive/issue
|
can not login with admin/123 on docker casbin/casdoor:latest
I'm trying to access the admin dashboard using admin/123 but casdoor throws this error
{
"status": "error",
"msg": "Unauthorized operation",
"sub": "",
"name": "",
"data": null,
"data2": null
}
the enforcer says built-in/admin isn't allowed
Check the database table 'permission_rule' to see whether there is any data
Check the DB table named casdoor.permission_rule to see whether there is any data; it may be a bug that occurs when starting with Docker.
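A minimal way to illustrate that emptiness check without opening a SQL client. The table here is simulated in-memory with SQLite and the column layout just mirrors Casbin's usual rule columns, so treat the schema as an assumption; against a real deployment you would run the same COUNT(*) on the casdoor database:

```python
import sqlite3

# Simulated stand-in for the casdoor.permission_rule table; the real check
# is the same COUNT(*) run against the actual database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE permission_rule (ptype TEXT, v0 TEXT, v1 TEXT, v2 TEXT)")

def permission_rule_count(conn):
    return conn.execute("SELECT COUNT(*) FROM permission_rule").fetchone()[0]

print(permission_rule_count(conn))  # 0 -> enforcer has no policies, so logins are refused
# Illustrative row only; the real rows are written by casdoor itself on init
conn.execute("INSERT INTO permission_rule VALUES ('p', 'built-in/admin', '*', '*')")
print(permission_rule_count(conn))  # 1
```

If the count is 0 on a fresh Docker start, that matches the "Unauthorized operation" symptom above: the enforcer has nothing to allow built-in/admin with.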
@hsluoyz can we have at least some test on the login methods to avoid this kind of issues on the future?
@fernandolguevara good idea, created here: https://github.com/casdoor/casdoor/issues/1036
@fernandolguevara I wonder how this bug was finally solved 😕 I met the same issue when I upgraded casdoor from v1.105.0 to v1.270.0, and now I cannot even reset the admin password 😡
|
2025-04-01T06:38:08.717840
| 2024-07-22T09:08:18
|
2422351926
|
{
"authors": [
"hsluoyz",
"yangyulele"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4490",
"repo": "casdoor/casdoor",
"url": "https://github.com/casdoor/casdoor/issues/3071"
}
|
gharchive/issue
|
sms login
Does Casdoor support custom applications using SMS verification for login (without CAPTCHA)? Why can't I find the corresponding interface or demo?
Hello, I will reply as soon as possible.
@yangyulele it's code login: https://door.casdoor.com/login . It can be verification code from Email or phone
|
2025-04-01T06:38:08.718683
| 2017-03-07T08:26:14
|
212361883
|
{
"authors": [
"casey"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4491",
"repo": "casey/just",
"url": "https://github.com/casey/just/issues/157"
}
|
gharchive/issue
|
Print justfile path on line above syntax errors
Possibly above run errors too.
Ehhhhhhh, I think not doing this is probably fine. Every hoopy frood knows where their justfile is.
|
2025-04-01T06:38:08.720948
| 2020-07-23T18:41:31
|
664688735
|
{
"authors": [
"yschimke"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4492",
"repo": "cashapp/certifikit",
"url": "https://github.com/cashapp/certifikit/pull/16"
}
|
gharchive/pull-request
|
Certificate.sha256Hash() method
As used in https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning
The HPKP policy specifies hashes of the subject public key info of one of the certificates in the website's authentic X.509 public key certificate chain (and at least one backup key) in pin-sha256 directives, and a period of time during which the user agent shall enforce public key pinning in max-age directive, optional includeSubDomains directive to include all subdomains (of the domain that sent the header) in pinning policy and optional report-uri directive with URL where to send pinning violation reports. At least one of the public keys of the certificates in the certificate chain needs to match a pinned public key in order for the chain to be considered valid by the user agent.
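For reference, the pin value HPKP talks about is just the base64 of a SHA-256 over the DER-encoded SubjectPublicKeyInfo. A small Python sketch of that hashing step, independent of the Kotlin API in this PR (the spki bytes below are dummy placeholders, not a real certificate):

```python
import base64
import hashlib

def pin_sha256(spki_der: bytes) -> str:
    """HPKP-style pin: base64(sha256(DER-encoded SubjectPublicKeyInfo))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Placeholder bytes standing in for a real SPKI structure
spki = b"\x30\x82\x01\x22dummy-subject-public-key-info"
pin = pin_sha256(spki)
print(f'pin-sha256="{pin}"')
assert len(pin) == 44  # base64 of a 32-byte digest is always 44 characters
```

The hard part in practice is extracting the SPKI bytes from the certificate, which is exactly what a Certificate-level helper saves callers from doing by hand.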
Failure makes no sense
<img width="810" alt="image" src="https://user-images.githubusercontent.com/231923/88431371-1701e000-cdf2-11ea-9e9a-f4fa4c38688b.png">
|
2025-04-01T06:38:08.722078
| 2023-02-27T10:30:12
|
1600903401
|
{
"authors": [
"JakeWharton",
"hfhbd"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4493",
"repo": "cashapp/licensee",
"url": "https://github.com/cashapp/licensee/issues/171"
}
|
gharchive/issue
|
Add because to allowUrl
Nice to have a comment in allowUrl too, symmetry to allowDependency.
Seems like it should just be on everything
|
2025-04-01T06:38:08.728120
| 2024-11-01T14:02:01
|
2629103722
|
{
"authors": [
"callebtc",
"prusnak",
"thesimplekid"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4494",
"repo": "cashubtc/nuts",
"url": "https://github.com/cashubtc/nuts/pull/185"
}
|
gharchive/pull-request
|
Add nix flake for prettier checks
just for you @callebtc
why do we need this unsolicited nix config flash again?
visits spec repo, sees this:
> why do we need this unsolicited nix config flash again?
This would be useful to me, as I don't have prettier installed globally to run the linting check, and it would be easier for me (and others on nix) to have a dev flake defined in the repo to drop into, rather than creating a custom dev shell each time. However, I do realize this is likely an issue for only me, so feel free to close.
Changing the CI only has the benefit that, since there is a flake defined, it ensures what is run locally is what runs in CI, though in such a simple CI there is likely no real benefit.
Closing then.
|
2025-04-01T06:38:08.739236
| 2015-09-16T23:27:44
|
106884760
|
{
"authors": [
"dereklwood",
"gokulavasan",
"wolf31o2"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4495",
"repo": "caskdata/hadoop_cookbook",
"url": "https://github.com/caskdata/hadoop_cookbook/pull/225"
}
|
gharchive/pull-request
|
COOK-72 Support later HDP 2.1 and HDP 2.2 updates on Ubuntu
Otherwise, we fail to find the correct repository path.
LGTM :+1:
:+1:
|
2025-04-01T06:38:08.741035
| 2015-07-30T16:07:08
|
98206396
|
{
"authors": [
"jawshooah",
"vitorgalvao"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4496",
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/issues/12903"
}
|
gharchive/issue
|
Unrelated commits included when auditing modified casks during Travis builds
The commit range provided by Travis in the environment variable TRAVIS_COMMIT_RANGE does not appear to include only those commits to be merged in a pull request, as evidenced by this spurious build failure (#12899).
We may want to hard-code the commit range in .travis.yml as refs/remotes/origin/HEAD..HEAD to ensure that only commits relevant to the pull request are considered when auditing modified Casks.
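If we do hard-code the range, the audit step boils down to one git invocation over that range. A tiny sketch of the command construction (the Casks path filter and the function shape are my own assumptions about how we'd scope it):

```python
def diff_command(commit_range="refs/remotes/origin/HEAD..HEAD", paths=("Casks",)):
    """Build the git command that lists cask files changed in the given range."""
    return ["git", "diff", "--name-only", commit_range, "--", *paths]

print(" ".join(diff_command()))
# git diff --name-only refs/remotes/origin/HEAD..HEAD -- Casks
```

Auditing would then run over exactly that file list, so unrelated commits from Travis's range can no longer pull extra casks into the build.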
Go for whatever solution you find best. You have been finding a lot of areas where our Travis checks can be improved, and have been fast and efficient in patching those. Feel free to proceed as you like, with Travis.
|
2025-04-01T06:38:08.743751
| 2017-07-24T11:57:40
|
245056865
|
{
"authors": [
"commitay",
"pareut"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4497",
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/issues/36922"
}
|
gharchive/issue
|
brew cask add tistory-editor.app
Cask details
Please fill out as much as possible. Before you do, note we cannot support Mac App Store-only apps.
Name: Tistory-editor.app
Homepage: https://joostory.github.io/tistory-editor/
Download URL: https://github.com/joostory/tistory-editor/releases/download/0.3.8/TistoryEditor-0.3.8-mac.zip
Description: This is the most used blog service management tool in Korea
I already created tistory-editor.rb and I have posted it on the pull request.
I've already run it
https://github.com/caskroom/homebrew-cask/issues/36923
|
2025-04-01T06:38:08.756432
| 2014-08-05T23:55:38
|
39577430
|
{
"authors": [
"adidalal",
"bcg62",
"chino",
"mtougeron",
"vitorgalvao"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4498",
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/issues/5667"
}
|
gharchive/issue
|
Support "sudo -A" so that multiple large cask installs can be scripted
When using tools like https://github.com/pivotal-sprout/sprout-wrap or https://github.com/kitchenplan/kitchenplan a large number of casks are defined to be installed via a script. This installation can frequently take longer than the sudo timeout. This means that when homebrew-cask calls sudo (https://github.com/caskroom/homebrew-cask/blob/master/lib/cask/system_command.rb#L49) there is no prompt for the user to enter their password. The error given is "sudo: no tty present and no askpass program specified"
It would be nice if https://github.com/caskroom/homebrew-cask/blob/master/lib/cask/system_command.rb#L49 could also support using "sudo -A" so that the SUDO_ASKPASS environment variable can be used. An example script of this would be something like https://gist.github.com/mtougeron/8dc9cd42c1dd9bd566a3 This would allow the sudo authentication to be handled without user input.
Thanks, Mike
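To make the proposal concrete, here is a rough sketch (in Python rather than the actual Ruby in system_command.rb) of the command construction; the flag handling is the point, the function shape itself is hypothetical:

```python
import os

def sudo_command(cmd, askpass=None):
    """Prefix `cmd` with sudo, adding -A when an askpass helper is available.

    With -A, sudo runs the program named by $SUDO_ASKPASS to obtain the
    password, so long unattended runs don't fail with
    "sudo: no tty present and no askpass program specified".
    """
    askpass = askpass or os.environ.get("SUDO_ASKPASS")
    if askpass:
        return ["sudo", "-A", "--"] + list(cmd), {"SUDO_ASKPASS": askpass}
    return ["sudo", "--"] + list(cmd), {}

argv, env = sudo_command(["installer", "-pkg", "Foo.pkg"],
                         askpass="/usr/local/bin/askpass.sh")
print(argv)  # ['sudo', '-A', '--', 'installer', '-pkg', 'Foo.pkg']
```

The env dict would be merged into the subprocess environment so sudo can find the helper; without an askpass program, behavior falls back to the current prompt-based sudo.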
Closing for lack of interest/implementation. This is not urgent and can be revisited at a later date. It concerns homebrew-cask being called by other tools and not homebrew-cask itself, but it should be revisited.
Why not mark an issue with a non-critical tag instead of closing a valid concern?
I believe from the initial request all that needs to be done is adding -A to sudo so it can optionally use the $SUDO_ASKPASS env var.
Why not mark an issue with a non-critical tag instead of closing a valid concern?
Because more tags is not a solution. An over-abundance of issues makes it difficult to go through them and focusing. Having issues open indefinitely is a very poor solution.
It’s not like issues are deleted. You were still able to find it, and if you really think it’s simple to do, you’re very welcome to submit a PR with a new discussion.
But closing an unresolved issue is a solution?
Tags exist to make an over-abundance of issues less difficult to go through, focus can be spent on the important ones.
Yes, its not deleted but its less relevant and accessible. Someone may have the same experience and odds are they're not searching though closed issues.
Keep in mind I tend to be somewhat blunt when writing something so long, and that can make my tone seem confrontational. Nothing could be farther from the truth: this is meant as an explanation, not a defence.
The message is clear. The door is not closed on this issue, and it is very clear it can be revisited at a later date.
Someone may have the same experience and odds are they're not searching though closed issues.
Good. If an issue was closed due to lack of interest and someone opens a new one, it means there’s interest again, and we can form a new issue (backed up by the knowledge of the old one) to try again to tackle it.
Tags exist to make an over-abundance of issues less difficult to go through, focus can be spent on the important ones.
With all due respect1, you’re not managing the project. I don’t mean this in a “your opinion is irrelevant” way (because it is relevant, just like any other user’s or maintainer’s) but in a “you haven’t experienced how bad it is”. Your solution is all well and good in theory; in practice, that is not what happens. You know why are close to all new issues labeled? Because one day, while managing the open ones I noticed how unbearable it had become and personally spent a stupid amount of hours going through all open issues, reading most in their entirety, and making decisions on labels and open status. We’re now keeping issues well labeled and even document how, but it has not always been like that.
Furthermore, many people don’t decide “let me work on this tag” and only pick that one, they look at every issue and decide what to work on based on the labels it has. In other words, the filtering is done visually after picking an issue, not the reverse.
No, we will not keep decrepit issues no one cares about or wants to work on and have minimal dubious benefit, open indefinitely. Their overhead is not worth it, and once again they can be revisited at a later time. This project was close to stagnation, you just couldn’t tell from the outside because casks were still being merged. Among the many other changes, organising issues is helping us move forward again, and that includes closing unimportant ones.
To put it bluntly, you can theorise all you want about the effectiveness and use of labels, I’ve experienced first hand what this specific project needs at this specific time. Right now, this is it.
1 I’d been wanting to use that, for a while. I just find the scene funny, I mean nothing more by it.
This affects ability to use it from tools like chef/puppet to manage workstations which is a big use case.
This affects ability to use it from tools like chef/puppet to manage workstations which is a big use case.
Irrelevant if no one works on it or shows interest in working on it. It is also absolutely irrelevant for the functioning of homebrew-cask, and we have big issues there that need addressing. If the core tool itself has issues, you can be damn sure those take precedence above making it play nicely with other tools. Why wouldn’t it?
Unless, of course, we get a PR. So please either submit one or lets end the conversation here. We’re all volunteers, and you do not get to pick how volunteers spend their time. For the last time, this feature is non-critical, and hence of low important and if no one works on it until then we can revisit it in the future.
Even though this is an old issue, FYI @mattbell87 posted a possible solution in https://github.com/caskroom/homebrew-cask/issues/19180#issuecomment-188522310
|
2025-04-01T06:38:08.758050
| 2015-11-26T16:38:40
|
119076266
|
{
"authors": [
"adityadalal924"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4499",
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/pull/15428"
}
|
gharchive/pull-request
|
remove qtox
Ref https://github.com/caskroom/homebrew-cask/issues/15420
Merged as 43bcf9fa32a4fbb367dcd1b537953c99484567af.
|
2025-04-01T06:38:08.763268
| 2017-02-20T11:34:08
|
208854462
|
{
"authors": [
"jbeagley52",
"vitorgalvao"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4500",
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/pull/30251"
}
|
gharchive/pull-request
|
OmegaT
If there’s a checkbox you can’t complete for any reason, that's okay, just explain in detail why you weren’t able to do so.
After making all changes to the cask:
[x] brew cask audit --download {{cask_file}} is error-free.
[x] brew cask style --fix {{cask_file}} reports no offenses.
[x] The commit message includes the cask’s name and version.
Additionally, if adding a new cask:
[x] Named the cask according to the token reference.
[x] brew cask install {{cask_file}} worked successfully.
[x] brew cask uninstall {{cask_file}} worked successfully.
[x] Checked there are no open pull requests for the same cask.
[x] Checked the cask was not already refused in closed issues.
[x] Checked the cask is submitted to the correct repo.
OmegaT Cask Installer
In the future, please submit separate pull requests for each cask, as more often than not this type of pull request brings problems with one or more of them, and it halts the inclusion of the other ones.
|
2025-04-01T06:38:08.764787
| 2017-04-27T00:29:05
|
224636375
|
{
"authors": [
"commitay",
"vitorgalvao"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4501",
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/pull/33021"
}
|
gharchive/pull-request
|
Update travis xcode to 8.3
I just noticed this and I'm not sure if it should be updated or not.
@vitorgalvao Should I do the other repos?
@commitay No need, thank you.
|
2025-04-01T06:38:08.766489
| 2017-12-11T14:48:22
|
281042721
|
{
"authors": [
"eenick",
"vitorgalvao"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4502",
"repo": "caskroom/homebrew-cask",
"url": "https://github.com/caskroom/homebrew-cask/pull/41831"
}
|
gharchive/pull-request
|
Update camtasia to 3.1.2
After making all changes to the cask:
[x] brew cask audit --download {{cask_file}} is error-free.
[x] brew cask style --fix {{cask_file}} left no offenses.
[x] The commit message includes the cask’s name and version.
Thank you, but this is a regression and has conflicts.
|
2025-04-01T06:38:08.773971
| 2018-02-19T00:04:47
|
298132034
|
{
"authors": [
"ckrooss"
],
"license": "unlicense",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4503",
"repo": "caskroom/homebrew-drivers",
"url": "https://github.com/caskroom/homebrew-drivers/pull/337"
}
|
gharchive/pull-request
|
Add Basler Pylon Camera Suite 5.0.5
After making all changes to the cask:
[x] brew cask audit --download {{cask_file}} is error-free.
[x] brew cask style --fix {{cask_file}} reports no offenses.
[x] The commit message includes the cask’s name and version.
Additionally, if adding a new cask:
[x] Named the cask according to the token reference.
[x] brew cask install {{cask_file}} worked successfully.
[x] brew cask uninstall {{cask_file}} worked successfully.
[x] Checked there are no open pull requests for the same cask.
[x] Checked the cask was not already refused in closed issues.
[x] Checked the cask is submitted to the correct repo.
The Package "Pylon 5.0.5 Camera Software Suite OS X" includes
pylon IP Configurator.app (com.baslerweb.pylon.util.ipconf)
pylon Programmer's Guide and API Reference.app (com.baslerweb.pylon.doc.cpp)
pylon Viewer.app (com.baslerweb.pylon.viewer)
Headers and Libraries (com.baslerweb.pylon.framework)
So I named the cask "pylon". Possible alternatives could be "pylon-suite" or "basler-pylon".
Moved from homebrew-cask PR #44127
I could not find clear rules on quotation marks:
brew cask create uses single quotes
Some packages in homebrew-drivers use double quotes
Single quotes don't work with variable expansion like 'package-#{version}.pkg' (afaik)
So I just stuck with double quotes for everything because I wanted to use variable expansion, I hope that's ok.
|
2025-04-01T06:38:08.777973
| 2021-12-01T10:10:37
|
1068211991
|
{
"authors": [
"piotr-dziubecki"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4505",
"repo": "casper-network/casper-node",
"url": "https://github.com/casper-network/casper-node/issues/2425"
}
|
gharchive/issue
|
InMemoryGlobalState hangs during 2nd run of create_domains entrypoint
Follow-up for #2346. When the smart contract from #2346 was run through InMemoryGlobalState, it worked the first time (it wrote new entries in the trie), but on the 2nd run, when the entries already existed, it hung seemingly forever. I couldn't observe similar behavior with LMDB. Running it through strace, it seems to hang on brk (memory allocations are out of hand?). I didn't try to debug it further than this. (edited)
fixed in https://github.com/casper-network/casper-node/pull/3146
|
2025-04-01T06:38:08.788201
| 2021-09-10T22:01:38
|
993629709
|
{
"authors": [
"ipopescu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4506",
"repo": "casper-network/gitcoin-hackathon",
"url": "https://github.com/casper-network/gitcoin-hackathon/issues/23"
}
|
gharchive/issue
|
QR codes for transactions
Transaction QR codes
Prize Bounty
23,000 (approx. 3,000 USDT) for each of the top 10 intermediate submissions
Challenge Description
Create QR codes for delegation or other transactions. Bring your creativity, ideas, and design to the table.
Winning contributors may receive funding or grants to continue this work beyond the hackathon.
Submission Requirements
Create a public GitHub repository for your project, then submit it using the Gitcoin UI.
Create a design document or a README file.
Find a way to explain your design and implementation. You can use a detailed written document with screenshots, or a video.
You can upload your document or the video here or on a platform of your choice. Use the following naming convention for your file:
Teamname_Title_Serial_YearMonthDate.*
Example: dAppsRUs_DAO_001_20210914.*
Add technical documentation and unit tests as appropriate.
All bounty submissions must be received no later than 11:59 PM EDT on October 11th, 2021, or earlier.
Judging Criteria
This entry will be judged based on the following:
Novelty, design, creativity, and complexity
Code correctness, unit tests, and technical documentation
Deployment of the project to the Testnet
A detailed design description and a plan for future possibilities and enhancements
A video or technical documentation explaining the design and the implementation
The submission will be compared to other advanced projects, and the top 5 submissions will receive a prize.
Winner Announcement Date
Projects will be evaluated within 2 weeks of the hackathon ending or earlier when possible. Winners will be announced the week of October 25th, and the payout will occur after the winners are announced.
CSPR vs USDT
If CSPR cannot be accepted in certain jurisdictions, winners will receive the equivalent amount in USDT. Gitcoin establishes the conversion rate on the day the bounty is issued.
Questions?
Join the Casper Hackathon Discord Server if you have any questions.
We are also holding live ask-me-anything sessions every weekday at 4 pm CEST.
I need to re-open this issue in a new category.
|
2025-04-01T06:38:08.817828
| 2024-03-25T17:18:19
|
2206284477
|
{
"authors": [
"artemisYo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4507",
"repo": "casperstorm/ferra",
"url": "https://github.com/casperstorm/ferra/issues/4"
}
|
gharchive/issue
|
Suggestion: Retouch colors and maybe add some more
TL;DR
Colors (sage in particular) seem off to me; I propose slight adjustments, maybe adding more colors too.
The following section Explanation is a bunch of writing, explaining my thoughts; I recommend skipping it if you're not interested.
Explanation
Motivation
Ferra has a rich earthy tone, partly because of the abundance of browns and oranges used in the scheme. However, this comes at the cost of seeming overly homogeneous. One way to combat it is to put slightly more attention towards the accentual colors present.
Another noticeable outcome of the lacking focus on accentual colors is that they do not fit in as well as they could.
Notes
One thing that is easy to notice is the general lack of consistent saturation or chroma in the accentual colors; while Mist is practically white, Ember and Honey stun with high chromaticity. It would also be beneficial to include more variants of said color, to be able to better cover the need for brighter and darker variations, this goes hand in hand with another easily apparent problem: the colors do not have a consistent luminosity either.
Solution
In the following, I have established 8 categories of accentual colors, most with 3 grades of luminosity. I tried matching the chroma within a category, so as to ensure cohesion.
The chroma between color categories has outliers, but also tries to stay within bounds.[^1]
The luminosity of the grades tries to stay consistent over categories. Furthermore, I propose shifting the base colors (Ash, Umber and Bark) to be slightly more chromatic, with a bigger lean to brown.
Additionally, I added a blue and purplish color to fit the common Base16 color schemes more; while I personally dislike said schemes, it could be beneficial to allow for more color variety.
I am still undecided on whether green should retain the chroma levels present in Sage or increase them.[^2]
[^1]: the orange, red and yellow categories having nearly the same chroma, with green and white being low-chroma and rose being somewhere in-between.
[^2]: approximately double the previous chroma
[^3]: Here the vivid and light orange are Coral and Blush respectively, with Blush chroma-adjusted to fit Coral. The vivid low chroma green is Sage without any changes. Vivid rose is Rose, dark red is Ember, light yellow is honey.
The Actual Adjustments
A picture of the adjusted colors in helix using the old Sage color; the change is minimal but perceptible:
The colors[^3] enumerated:
| Color | Dark | Vivid | Light |
| --- | --- | --- | --- |
| Orange | #d67751 | #ffa07a | #ffc49e |
| Green (low chroma) | #8b906f | #b1b695 | #d6dbba |
| Green (high chroma) | #8c9368 | #b2b98e | #d7deb3 |
| Red | #e06b75 | None | #ff919b |
| Rose | None | #f6b6c9 | #ffc8db |
| Yellow | #d8a442 | #e9b553 | #f5d76e |
| White | #90909d | #b8b8c6 | #d6d6e4 |
| Blue | #839ae3 | #9eb3ff | #bbd0ff |
| Purple | #ba87d8 | #d69eff | #f6b3ff |
The adjustments to the base colors are as follows:
| Color | Hex |
| --- | --- |
| Night | Unchanged |
| Ash | #3c3538 |
| Umber | #4c4245 |
| Bark | #6f5c5e |
Considering many other themes featuring light versions of their themes, it would also probably not hurt too much, to display colors for said light variant. Thus, iterating on this proposal, I have narrowed down the changes to only feature 2 shades of each color (per variant), merging rose and ember shades of one color.
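As a sanity check on the luminosity-consistency claim above, the WCAG relative luminance of the proposed shades can be computed in a few lines of Python (this sketch is my addition, not part of the original proposal; the formula is the standard WCAG 2.x one, not whatever tooling the author used):

```python
# WCAG relative luminance for the proposed shades, to check that the
# grades keep roughly consistent luminosity across color categories.

def _channel(c8: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG 2.x formula."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance in [0, 1] of a '#rrggbb' color string."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

# The "vivid" grade of a few categories from the table above:
for name, color in [("orange", "#ffa07a"), ("yellow", "#e9b553"), ("blue", "#9eb3ff")]:
    print(f"{name}: {relative_luminance(color):.3f}")
```

Running this over a whole grade column gives a quick numeric read on how tightly the shades cluster in luminosity.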
|
2025-04-01T06:38:08.840703
| 2019-02-10T16:20:22
|
408551203
|
{
"authors": [
"abbotware",
"jonorossi",
"matgrioni"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4508",
"repo": "castleproject/Windsor",
"url": "https://github.com/castleproject/Windsor/issues/465"
}
|
gharchive/issue
|
change Dependency.OnValue(object value) to (T Value)
currently, this overload of OnValue has this definition:
public static Property OnValue<TDependencyType>(object value);
wouldn't
public static Property OnValue<TDependencyType>(TDependencyType value);
make more sense? (and catch more bugs via the compiler)
Also ran into this issue recently when changing types in a constructor and forgetting these values were supplied via DI. Seems like it should be relatively straightforward; not sure what might be missing.
It might have been done this way on purpose to force specifying the generic type parameter since linting tools like ReSharper (and now Roslyn) suggest to remove redundant generic parameters.
If you had the following, and removed the generic parameter because it is redundant, then changed the thing variable to var it would pass a different type to Windsor:
IThing thing = new Thing();
Container.Register(
Component.For<X>().DependsOn(Dependency.OnValue<IThing>(thing))
);
Looks like a tradeoff both ways.
That a good point, since always supplying it would mean the type it supplied for DI would always be obvious.
Would a runtime check in the OnValue method be a reasonable addition to make sure the value is always a subtype of the provided generic type?
Would a runtime check in the OnValue method be a reasonable addition to make sure the value is always a subtype of the provided generic type?
Sounds like a reasonable addition, did you want to submit a pull request?
I think we are conflating two separate things:
1
IThing thing = new Thing();
Container.Register(
Component.For<X>().DependsOn(Dependency.OnValue<IThing>(thing))
);
vs
2
var thing = new NotAThing();
Container.Register(
Component.For<X>().DependsOn(Dependency.OnValue<IThing>(thing))
);
the second scenario compiles now! I am proposing changing this:
public static Property OnValue<TDependencyType>(TDependencyType value);
which would prevent 2 from even compiling
The problem with making this change in place is that it will more than likely break a lot of code... which is bad for package maintainers - instead the existing method needs to be marked obsolete and 2 new methods should be created:
[Obsolete("use OnObjectValue or OnTypedValue<> instead")]
public static Property OnValue<TDependencyType>(object value);
public static Property OnObjectValue (object value);
// This is the old method for those that really want to use 'object'
// note the lack of generics since it served no purpose in the first place
public static Property OnTypedValue<TDependencyType>(TDependencyType value);
// This is the new / better version with type safety
@abbotware I'm not sure you understood what I described above. I've repeated it with some context.
If you had the following (and OnValue accepted TDependencyType rather than object):
IThing thing = new Thing();
Container.Register(
Component.For<X>().DependsOn(Dependency.OnValue<IThing>(thing))
);
and removed the generic parameter because it is redundant:
then changed the thing variable to var it would pass a different type to Windsor
I'm not saying this is all done in the same step, but refactoring this code using linting tools like ReSharper and Roslyn will break your code.
I understand completely - I create read only/immutable interfaces for my objects and inject those into the container.
Refactoring tools are not a panacea - I believe the tool is even trying to warn you that something might be undesirable, since the "inferred type will change" warning appears - I use the var refactoring all the time and I have never seen that message in VS or ReSharper! I would wager that warning is not present in other scenarios for that refactoring feature when it doesn't change the type... so looks like user error :-)
I haven't verified it, but I think the solution I proposed might actually prevent refactoring tools from making a mistake since it is more strongly typed. Right now, anything you pass to OnValue is only an Object
if OnValue expected a parameter of TDependencyType instead, it would preserve type information as most FluentAPI's do when properly implemented.
public static Property OnObjectValue (object value);
// This is the old method for those that really want to use 'object'
// note the lack of generics since it served no purpose in the first place
@abbotware what do you mean by the generic type served no purpose? It is the key:
public static Property OnValue<TDependencyType>(object value)
{
return Property.ForKey<TDependencyType>().Eq(value);
}
whoops. I forgot about that!
In either case - it's not used to enforce the type of 'object value'. It seems like such a trivial change - however, it might have many unintended consequences - code that compiles today might stop compiling once this is changed.
Hence the reason I recommend marking it deprecated and introducing 2 new versions.
Or just provide the OnTypedValue variant, which is strongly typed, and hope the old one falls out of favor.
This has gone stale so I'm going to close it. I still think the unintended consequences are more important here than the compile time check. We could go with a runtime check in the future, or maybe even a chained API where the generic type parameter can't inferred so linting tools can't remove it being specified.
How is a compile time check not better?
How is a compile time check not better?
I've explained the unintended consequences multiple times in this issue, please reread the issue if you don't recall. I even proposed an idea for a compile time check which would avoid those unintended consequences in my last comment.
So why close this issue then?
Your idea makes no sense as stated,
maybe even a chained API where the generic type parameter can't inferred so linting tools can't remove it being specified.
NEW PROPOSAL
public static Property OnTypedValue<TDependencyType, TValueType>(TValueType value)
where TValueType : TDependencyType
{
return Property.OnValue<TDependencyType>(value);
}
With 2 type parameters, that are related via inheritance, I doubt any linting tool would recommend removal
Your idea makes no sense as stated,
maybe even a chained API where the generic type parameter can't inferred so linting tools can't remove it being specified.
Dependency.OnValue is implemented as a single line chained method call. Property.ForKey<> returns a PropertyKey and Eq returns a Property. My suggestion was pushing the compiler safety into the Property class, i.e. Property.ForKey<TKey> returning a PropertyKey<TKey> so PropertyKey.Eq would become generic and only accept the specified type.
NEW PROPOSAL
...
With 2 type parameters, that are related via inheritance, I doubt any linting tool would recommend removal
Your new proposal could work, but doesn't that mean you always have to specify the generic type parameter twice?
your new proposal could work, but doesn't that mean you always have to specify the generic type parameter twice?
yes, but that is the entire point of using this sort of technique
Specifying the types explicitly just makes it extremely obvious that something special is happening when using the OnValue/Dependency notation. Without that, it can be error prone due to the 'object' parameter (hence this issue being opened in the first place).
With this overload there would be zero chance this would be used incorrectly, by accident, or change existing behavior. Even in my PR I renamed the function so it wouldn't have broken anything, but with 2 type parameters (I think I prefer this now) it can keep the same name. I think with 2 types, the compiler won't guess or try to infer the type.
@abbotware any opinion about the first half of my comment...?
We've already got:
Property.ForKey<TKey>().Eq(value)
If we changed ForKey<TKey>() to return a new class PropertyKey<TKey> that only handled Type keys (and not string keys) you'd get the compiler type checking without double specifying the generic type parameter, and probably also get compiler type checking for the Is<T> method for service overrides at the same time.
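For illustration only, here is a TypeScript analogue of that chained-API idea (the project itself is C#, and all names here are hypothetical): making forKey<T>() return a key object whose eq only accepts T moves the check to compile time without double-specifying the type parameter.

```typescript
// Hypothetical TypeScript sketch of a strongly-typed chained API.
class Property<T> {
  constructor(public readonly key: string, public readonly value: T) {}
}

class TypedPropertyKey<T> {
  constructor(private readonly name: string) {}
  // `eq` only accepts values assignable to T, so the check happens at compile time.
  eq(value: T): Property<T> {
    return new Property<T>(this.name, value);
  }
}

function forKey<T>(name: string): TypedPropertyKey<T> {
  return new TypedPropertyKey<T>(name);
}

interface IThing { n: number; }
const dep = forKey<IThing>("thing").eq({ n: 1 });
// forKey<IThing>("thing").eq("not a thing");  // would be a compile error
console.log(dep.key, dep.value.n);
```

The same shape in C# would mean `Property.ForKey<TKey>()` returning a `PropertyKey<TKey>` whose `Eq(TKey value)` is typed, as described above.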
|
2025-04-01T06:38:08.843357
| 2017-05-10T20:30:05
|
227801198
|
{
"authors": [
"alinapopa",
"fir3pho3nixx",
"jonorossi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4509",
"repo": "castleproject/Windsor",
"url": "https://github.com/castleproject/Windsor/pull/233"
}
|
gharchive/pull-request
|
Deploy on tag
Add NuGet publishing, similarly to Castle.Core
Leaving this pull request for now until we've finished with https://github.com/castleproject/Core/pull/259.
Closing this as agreed here: https://github.com/castleproject/Windsor/issues/220#issuecomment-302530186
|
2025-04-01T06:38:08.863122
| 2023-08-09T14:12:58
|
1843339506
|
{
"authors": [
"jdangerx",
"zschira"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4510",
"repo": "catalyst-cooperative/ferc-xbrl-extractor",
"url": "https://github.com/catalyst-cooperative/ferc-xbrl-extractor/pull/113"
}
|
gharchive/pull-request
|
Lost facts fix
This PR adds an initial fix for FERC missing facts initially identified in this issue. The notebook in examples/lost_fact_exploration.ipynb demonstrates exploration of the problem, and outlines which missing facts are dealt with in this PR. There are still more cases that are identified in the notebook, but need more manual exploration before applying a fix. I will break out a separate issue for tracking the remaining missing facts.
Closed in favor of #118 .
|
2025-04-01T06:38:08.866322
| 2024-07-11T21:01:34
|
2404120708
|
{
"authors": [
"bendnorman",
"jdangerx"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4511",
"repo": "catalyst-cooperative/pudl",
"url": "https://github.com/catalyst-cooperative/pudl/pull/3715"
}
|
gharchive/pull-request
|
Superset deployment
Overview
This PR contains superset configuration and cloud deployment changes for our data exploration tool.
Testing
How did you make sure this worked? How can a reviewer verify this?
# To-do list
- [ ] If updating analyses or data processing functions: make sure to update or write data validation tests (e.g., `test_minmax_rows()`)
- [ ] Update the [release notes](../docs/release_notes.rst): reference the PR and related issues.
- [ ] Ensure docs build, unit & integration tests, and test coverage pass locally with `make pytest-coverage` (otherwise the merge queue may reject your PR)
- [ ] Review the PR yourself and call out any questions or issues you have
- [ ] For minor ETL changes or data additions, once `make pytest-coverage` passes, make sure you have a fresh full PUDL DB downloaded locally, materialize new/changed assets and all their downstream assets and [run relevant data validation tests](https://catalystcoop-pudl.readthedocs.io/en/latest/dev/testing.html#data-validation) using `pytest` and `--live-dbs`.
- [ ] For significant ETL, data coverage or analysis changes, once `make pytest-coverage` passes, ensure the full ETL runs locally and [run data validation tests](https://catalystcoop-pudl.readthedocs.io/en/latest/dev/testing.html#data-validation) using `make pytest-validate` (a ~10 hour run). If you can't run this locally, run the `build-deploy-pudl` GitHub Action (or ask someone with permissions to). Then, check the logs on the `#pudl-deployments` Slack channel or `gs://builds.catalyst.coop`.
Thanks for all the great feedback! I pinned the docker base image version, updated auth0 env var instructions and set docker compose env var defaults. I also created some draft issues that I'll flesh out tomorrow.
You're hitting a bunch of this error in CI:
ERROR test/integration/glue_test.py::test_unmapped_utils_eia - TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard'
Which is related to dagster / python version incompatibilities: https://github.com/dagster-io/dagster/issues/22985
|
2025-04-01T06:38:08.914349
| 2024-08-21T07:09:32
|
2477248779
|
{
"authors": [
"a7351220",
"kidneyweakx"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4512",
"repo": "cathayddt/bdk",
"url": "https://github.com/cathayddt/bdk/pull/109"
}
|
gharchive/pull-request
|
Improve/besu network update
PULL REQUEST
Before
[x] 遵守 Commit 規範 (follow commit convention)
[ ] 遵守 Contributing 規範 (follow contributing)
說明 (Description)
相關問題 (Linked Issues)
貢獻種類 (Type of change)
[ ] Bug fix (除錯 non-breaking change which fixes an issue)
[x] New feature (增加新功能 non-breaking change which adds functionality)
[ ] Breaking change (可能導致相容性問題 fix or feature that would cause existing functionality to not work as expected)
[ ] Doc change (需要更新文件 this change requires a documentation update)
測試環境 (Test Configuration):
OS:
NodeJS Version:
NPM Version:
Docker Version:
檢查清單 (Checklist):
[x] 我的程式碼遵從此專案的規範 (My code follows the style guidelines of this project)
[ ] 我有對於自己的程式碼進行測試檢查 (I have performed a self-review of my own code)
[ ] 我有在程式碼中提供必要的註解 (I have commented my code, particularly in hard-to-understand areas)
[ ] 我有在文件中進行必要的更動 (I have made corresponding changes to the documentation)
[ ] 我的程式碼更動沒有顯著增加錯誤數量 (My changes generate no new warnings)
[ ] 我有新增必要的單元測試 (I have added tests that prove my fix is effective or that my feature works)
[ ] 我有檢查並更正程式碼錯誤的拼字 (I have checked my code and corrected any misspellings)
我已完成以上清單,並且同意遵守 Code of Conduct
I have completed the checklist and agree to abide by the code of conduct.
[x] 同意 (I consent)
@johnny30678 you should review this PR.
Wrong lint, empty git commit user, and all failed CI.
I think you should separate the improvements:
move all files from quorum to eth, or I think evm would be even better
update the besu network config
update the kubernetes part
As it stands, it's hard to review and to find the issues
|
2025-04-01T06:38:08.916596
| 2024-01-30T04:55:35
|
2106952835
|
{
"authors": [
"GenShibe",
"backwardspy",
"uncenter"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4513",
"repo": "catppuccin/homebrew-tap",
"url": "https://github.com/catppuccin/homebrew-tap/issues/12"
}
|
gharchive/issue
|
catwalk cannot be installed via home-brew
Whenever I try to install it via brew, it gives me
Downloading https://ghcr.io/v2/catppuccin/tap/catwalk/manifests/1.2.0-1
curl: (22) The requested URL returned error: 401
and i have been advised by hammy to open an issue about this
What is your Homebrew version (brew --version)?
hey @GenShibe thanks for raising this! the taps were made private by mistake.
whiskers, catwalk, and mdbook-catppuccin are now public and i'm able to install all three successfully. let me know if you have any further trouble with them.
|
2025-04-01T06:38:08.920079
| 2024-11-28T12:04:00
|
2701954819
|
{
"authors": [
"Coffee2CodeNL",
"sgoudham"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4514",
"repo": "catppuccin/vscode-icons",
"url": "https://github.com/catppuccin/vscode-icons/issues/365"
}
|
gharchive/issue
|
PHP Typed Icons
Type
[X] File icon
[ ] Folder icon
Context and associations
Phpstorm provides different icons for PHP classes/interfaces/traits, the icon theme does not have these different icons
References
Abstract class, Classes, and an Interface:
Trait:
Readonly:
This is probably for downstream to use in JetBrains. Sorry, I still haven't quite gotten around to decoupling the icons repository from the vscode extension.
|
2025-04-01T06:38:08.931523
| 2016-10-20T16:49:14
|
184282947
|
{
"authors": [
"Githubtordl",
"causefx"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4515",
"repo": "causefx/iDashboard-PHP",
"url": "https://github.com/causefx/iDashboard-PHP/issues/16"
}
|
gharchive/issue
|
Refresh button
Hi,
Could you add a refresh button similar to how Muximux(https://github.com/Tenzinn3/Managethis) has integrated to their dashboard?
Currently if I want to reload a just one tab, I would have to hit the refresh button on the browser, which will take me to the default tab. It would be great if you could implement a refresh button that refreshes only the selected frame/tab.
Overall great dashboard, keep it up. Looking forward to what more you could add to it; if you need any suggestions, let me know.
Double click the tab name. Would love something like multiple user support.
Thanks for getting back to me.
Yea, I'm planning more stuff, I just need to find more time for it. Hopefully soon, keep the suggestions coming :)
|
2025-04-01T06:38:08.954279
| 2020-03-24T06:51:15
|
586735493
|
{
"authors": [
"cjxd-bot-test"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4518",
"repo": "cb-kubecd/bdd-nh-1585032040",
"url": "https://github.com/cb-kubecd/bdd-nh-1585032040/pull/1"
}
|
gharchive/pull-request
|
My First PR commit
PR comments
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: cjxd-bot-test
If they are not already assigned, you can assign the PR to them by writing /assign @cjxd-bot-test in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
OWNERS
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
:star: PR built and available in a preview environment cb-kubecd-bdd-nh-1585032040-pr-1 here
|
2025-04-01T06:38:08.984235
| 2020-02-18T08:54:49
|
566739106
|
{
"authors": [
"cjxd-bot-test"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4519",
"repo": "cb-kubecd/environment-pr-162-14-boot-vault-gke-production",
"url": "https://github.com/cb-kubecd/environment-pr-162-14-boot-vault-gke-production/pull/1"
}
|
gharchive/pull-request
|
chore: bdd-spring-1582015162 to 0.0.1
chore: Promote bdd-spring-1582015162 to version 0.0.1
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approvers:
If they are not already assigned, you can assign the PR to them by writing /assign in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
OWNERS
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
|
2025-04-01T06:38:08.988214
| 2020-02-29T20:35:48
|
573406889
|
{
"authors": [
"cjxd-bot-test"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4520",
"repo": "cb-kubecd/environment-pr-170-37-boot-gke-production",
"url": "https://github.com/cb-kubecd/environment-pr-170-37-boot-gke-production/pull/1"
}
|
gharchive/pull-request
|
chore: bdd-spring-1583007820 to 0.0.1
chore: Promote bdd-spring-1583007820 to version 0.0.1
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approvers:
If they are not already assigned, you can assign the PR to them by writing /assign in a comment when ready.
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Needs approval from an approver in each of these files:
OWNERS
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
|
2025-04-01T06:38:09.010931
| 2020-05-23T02:59:40
|
623567270
|
{
"authors": [
"cjxd-bot-test"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4521",
"repo": "cb-kubecd/environment-pr-230-12-gke-upgrade-staging",
"url": "https://github.com/cb-kubecd/environment-pr-230-12-gke-upgrade-staging/pull/1"
}
|
gharchive/pull-request
|
chore: bdd-spring-1590202430 to 0.0.1
chore: Promote bdd-spring-1590202430 to version 0.0.1
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
To complete the pull request process, please assign
You can assign the PR to them by writing /assign in a comment when ready.
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
OWNERS
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
|
2025-04-01T06:38:09.023503
| 2022-04-10T05:42:46
|
1198888963
|
{
"authors": [
"LameLad007-Sudo",
"runcros"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4522",
"repo": "cb-linux/breath",
"url": "https://github.com/cb-linux/breath/issues/154"
}
|
gharchive/issue
|
Audio Issue: Chromebook 3100, pre 2021.
So I've successfully got Ubuntu 20.04 installed on my Dell Chromebook 3100 and followed the setup procedures, yet nothing. Oddly, during the first part I noticed several "Mount Point not found" errors, at least before the sof-setup-audio step. What do I do?
Can you write this in terminal bash -x /usr/local/bin/setup-audio-skl 2>&1 | tee output.txt
Then upload the output.txt
Closing because no answer from author.
|
2025-04-01T06:38:09.035459
| 2021-05-24T07:52:20
|
899398267
|
{
"authors": [
"cbenakis"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4523",
"repo": "cbenakis/github-slideshow",
"url": "https://github.com/cbenakis/github-slideshow/pull/4"
}
|
gharchive/pull-request
|
_posts/0000-01-02-cbenakis.md
layout: slide
title: “Welcome to our second slide!”
Your test
Use the left arrow to go back!
How do I merge a pull request?
|
2025-04-01T06:38:09.053083
| 2017-11-05T07:42:36
|
271255335
|
{
"authors": [
"HipsterSloth",
"gb2111"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4524",
"repo": "cboulay/PSMoveService",
"url": "https://github.com/cboulay/PSMoveService/issues/477"
}
|
gharchive/issue
|
Create an installer for PSMoveService
Creating an issue to track the working being done on the installer by @gb2111.
I just checked in the following change related to Greg's efforts:
https://github.com/cboulay/PSMoveService/commit/4600f20f2eda3b2d6db3f53b757923d396b7f8e5
Files are now copied to ${ROOT_DIR}/install/${ARCH_LABEL}/ instead of ${ROOT_DIR}/${PSM_PROJECT_NAME}/${ARCH_LABEL}/ when running the "INSTALL" project
This will make it easier for the installer script to find the output build files
Fixed issue with BuildOfficialDistribution.bat script failing to find OpenCV cmake files
hi,
it can be tested on my fork:
https://github.com/gb2111/PSMoveService
let me know if there is a better way to let you test.
you need to download Inno Setup to compile the setup
https://github.com/gb2111/PSMoveService
the script is here
misc/installer/installer_win64.iss
the setup PSMoveService-Setup64.exe will be created in the 'installer' folder.
I created a few shortcuts, not sure if that is a good idea
this is a very basic version so if you think we need more please let me know.
Closing this issue since we now have an Inno Setup based installer
|
2025-04-01T06:38:09.059062
| 2017-06-18T17:13:23
|
236733581
|
{
"authors": [
"Daemon2017",
"cbovar",
"tzaeru"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4525",
"repo": "cbovar/ConvNetSharp",
"url": "https://github.com/cbovar/ConvNetSharp/issues/52"
}
|
gharchive/issue
|
Unpooling and deconvolution layers
Hello!
Is it possible to create a segmentation network using your library? As far as I know, for that I need unpooling and deconvolution layers.
Hi!
Currently there is no unpooling or deconvolution. It's something I have planned to do but it's low on my priority list.
PR welcome :)
Def interested in this too. I'm not sure if my own skill/time (the two tend to be related!) is enough to add this feature, but many of my use cases would need proper learning-enabled upsampling in the form of transposed convolution.
I think I will get cracking on it. It would be very useful for generative networks.
IIRC, the maths are very similar to convolutions (some indexes swapped) and cudnn already handles that.
Some useful links:
Convolution and Transposed Convolution algos: https://arxiv.org/pdf/1603.07285.pdf
Convolution, GPU double: https://github.com/cbovar/ConvNetSharp/blob/master/src/ConvNetSharp.Volume.GPU/Double/Volume.cs#L245
Convolution, GPU single: https://github.com/cbovar/ConvNetSharp/blob/master/src/ConvNetSharp.Volume.GPU/Single/Volume.cs#L246
Convolution, CPU double: https://github.com/cbovar/ConvNetSharp/blob/master/src/ConvNetSharp.Volume/Double/Volume.cs#L145
Convolution, CPU single: https://github.com/cbovar/ConvNetSharp/blob/master/src/ConvNetSharp.Volume/Single/Volume.cs#L142
Tests: https://github.com/cbovar/ConvNetSharp/blob/master/src/ConvNetSharp.Volume.Tests/VolumeTests.cs#L252
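To illustrate the "some indexes swapped" relationship mentioned above, here is a minimal NumPy sketch (my addition; ConvNetSharp itself is C#, and this mirrors the math rather than the library's actual implementation). The transposed convolution scatters each input value through the kernel, and is also the gradient of the forward convolution with respect to its input:

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2D convolution (no kernel flip, i.e. cross-correlation)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2d_transposed(y, k):
    """Transposed convolution: scatter each input value through the kernel.

    This upsamples (h, w) -> (h + kh - 1, w + kw - 1) and is the adjoint of
    conv2d_valid, which is exactly the 'indexes swapped' relationship.
    """
    h, w = y.shape
    kh, kw = k.shape
    out = np.zeros((h + kh - 1, w + kw - 1))
    for i in range(h):
        for j in range(w):
            out[i:i + kh, j:j + kw] += y[i, j] * k
    return out
```

The adjoint property (`<conv(x, k), g> == <x, conv_T(g, k)>`) is what makes the same kernel usable for both the forward pass of a deconvolution layer and the input gradient of a convolution layer.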
|
2025-04-01T06:38:09.351937
| 2024-09-16T03:02:17
|
2527404729
|
{
"authors": [
"YGuangye",
"diogomart"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4526",
"repo": "ccsb-scripps/AutoDock-Vina",
"url": "https://github.com/ccsb-scripps/AutoDock-Vina/issues/342"
}
|
gharchive/issue
|
Replace non-bonded interactions of specified atom pairs
Is it possible to cancel non-bonded interactions between specified atom pairs and introduce custom potential instead?
Unfortunately no. You could try smina (not developed by us) or the --modpair option in autodock-gpu, which allows limited customization, but maybe it is enough for your purposes. You can also use meeko to customize the atom typing with SMARTS.
https://github.com/ccsb-scripps/autoDock-gpu
https://github.com/forlilab/meeko
Thank you very much for your reply. Modifying parameters based on atom types might not be sufficient for my needs, as I intend to replace interactions between specific atoms identified by their indices, regardless of their atom types. To achieve this, would it be necessary to modify the source code?
Yes, it would be necessary to modify the source.
Thank you!
|
2025-04-01T06:38:09.364892
| 2022-08-03T20:39:09
|
1327764933
|
{
"authors": [
"abanuog",
"mmafe",
"tqureshi-uog"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4527",
"repo": "ccswbs/gus",
"url": "https://github.com/ccswbs/gus/pull/105"
}
|
gharchive/pull-request
|
Add Background Color option to Media and Text widget
Summary of changes
Add Background Color option to Media and Text widget so they can be used instead of custom-built YAML components
Frontend
add relevant background color fields to gatsby-node.js schema
add background and text color logic to mediaText.js
add data-title fields to widgets.js and sectionWidgets.js for reference purposes (unlike the html title attribute, data-title does not result in a tooltip on hover and is invisible to end users unless they view source code)
remove obsolete media links field
upgrade packages
Backend
add Background Color taxonomy with choices based on Bootstrap 5 classes
add Background field to Media and Text paragraph
remove obsolete links field from Media and Text paragraph
Test Plan
Go to https://tqtest.gatsbyjs.io/media-text-testing and ensure it's consistent with the original version on https://preview-ugconthub.gtsb.io/media-text-testing
Go to https://tqtest.gatsbyjs.io/media-text-test2 and ensure the colors display correctly (headings and text should be black for uog-blue-muted and light-gray backgrounds, white for dark-gray backgrounds, and default otherwise, i.e. dark text, red heading)
Do the same for https://tqtest.gatsbyjs.io/media-text-video
Review https://tqtest.gatsbyjs.io/bcomm/become-a-global-leader and ensure the two Future You media/text widgets have the uog-blue-muted background color
Review https://tqtest.gatsbyjs.io/study-in-canada and verify the Things You Should Know About the City of Guelph media and text widgets look and behave like the custom YAML widgets on https://www.uoguelph.ca/study-in-canada
Drupal multidev can be reviewed at https://medtxtbg-chug.pantheonsite.io/
The page https://tqtest.gatsbyjs.io/media-text2 doesn't seem to exist.
Sorry, typo - the link is https://tqtest.gatsbyjs.io/media-text-test2
I find the bg-light option so light that I can barely tell the difference between that and the white background. I think we should just limit the options to the light-blue (which will be lighter than it currently is for accessibility) and the dark grey.
Although I worry that we're opening a can of worms by even adding the dark grey option. I like the way it looks, however there may be additional accessibility issues with it. The red headings, for example, fail colour contrast.
Agree with Miranda that the bg-light option could be removed.
I think you have to use single quotes in the classNames utility
I think you have to use single quotes in the classNames utility
Changed the syntax but it didn't make a difference - the problem was due to another issue which should now be resolved.
If we can fix the primary-outline and info-outline colour contrast, that would be great. Other than that, not seeing roadblocks.
Done - see latest changes on https://tqtest.gatsbyjs.io/media-text-test2
|
2025-04-01T06:38:09.461141
| 2022-10-31T15:21:03
|
1430027647
|
{
"authors": [
"cdalvaro",
"ggmartins"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4528",
"repo": "cdalvaro/docker-salt-master",
"url": "https://github.com/cdalvaro/docker-salt-master/issues/171"
}
|
gharchive/issue
|
question
I had to downgrade the docker-compose file to version 3.3, like the file below. I'm getting connection refused for the ports, and /var/log/salt/master is logging the messages below. Tell me if this looks like a bug and I can report it as such (using the template). I've also noticed that there's no configuration in /home/salt/data/config/. I've added a master.conf with default ports but no luck. Thanks for any help.
version: '3.3'
volumes:
  roots:
  keys:
  logs:
services:
  master:
    container_name: salt_master_engage1
    image: ghcr.io/cdalvaro/docker-salt-master:latest
    restart: always
    volumes:
      - "roots/:/home/salt/data/srv"
      - "keys/:/home/salt/data/keys"
      - "logs/:/home/salt/data/logs"
    ports:
      - "4505:4505"
      - "4506:4506"
      ### salt-api port
      # - "8000:8000"
    healthcheck:
      test: ["CMD", "/usr/local/sbin/healthcheck"]
      #start_period: 30s
    environment:
      DEBUG: 'false'
      TZ: America/Chicago
      PUID: 1000
      PGID: 1000
      SALT_LOG_LEVEL: info
      ### salt-api settings
      # SALT_API_SERVICE_ENABLED: 'True'
      # SALT_API_USER: salt_api
      # SALT_API_USER_PASS: 4wesome-Pass0rd
2022-10-31 10:12:57,968 [salt.modules.network:2143][ERROR ][12952] Exception while creating a ThreadPoolExecutor for resolving FQDNs: can't start new thread
2022-10-31 10:12:57,986 [salt.modules.network:2143][ERROR ][12951] Exception while creating a ThreadPoolExecutor for resolving FQDNs: can't start new thread
2022-10-31 10:12:57,995 [salt.modules.network:2143][ERROR ][12950] Exception while creating a ThreadPoolExecutor for resolving FQDNs: can't start new thread
2022-10-31 10:12:57,996 [salt.modules.network:2143][ERROR ][12949] Exception while creating a ThreadPoolExecutor for resolving FQDNs: can't start new thread
2022-10-31 10:12:58,007 [salt.modules.network:2143][ERROR ][12948] Exception while creating a ThreadPoolExecutor for resolving FQDNs: can't start new thread
2022-10-31 10:12:58,066 [salt.utils.process:998 ][ERROR ][13192] An un-handled exception from the multiprocessing process 'FileserverUpdate' was caught:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/salt/utils/process.py", line 993, in wrapped_run_func
    return run_func()
  File "/usr/local/lib/python3.10/dist-packages/salt/master.py", line 508, in run
    self.update_threads[interval].start()
  File "/usr/lib/python3.10/threading.py", line 935, in start
    _start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
Hi!
Thank you very much for opening the issue!
The master.yml is automatically generated when the container starts.
If you need to add some specific configuration that you can't set using env variables, create a .conf file that fulfills your needs. (You can see the test directory for some help.)
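For instance, a minimal drop-in could look like this (a sketch: the option names come from the Salt master configuration reference, the values are just placeholders):

```yaml
# config/custom.conf — any option from the Salt master config reference can go here
timeout: 30
worker_threads: 5
```

Files in the mounted config directory are picked up alongside the generated master.yml, as described above.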
I'll try to reproduce your bug and see if I can figure out a solution.
Hi @ggmartins,
I've tried you compose file with the following tweaks and everything works fine:
version: '3.3'
volumes:
  roots:
  keys:
services:
  master:
    container_name: salt_master_engage1
    image: ghcr.io/cdalvaro/docker-salt-master:latest
    restart: always
    volumes:
      - "roots:/home/salt/data/srv/"
      - "keys:/home/salt/data/keys/"
      - "./logs/:/home/salt/data/logs/"
    ports:
      - "4505:4505"
      - "4506:4506"
      ### salt-api port
      # - "8000:8000"
    healthcheck:
      test: ["CMD", "/usr/local/sbin/healthcheck"]
      #start_period: 30s
    environment:
      DEBUG: 'false'
      TZ: America/Chicago
      PUID: 1000
      PGID: 1000
      SALT_LOG_LEVEL: info
      ### salt-api settings
      # SALT_API_SERVICE_ENABLED: 'True'
      # SALT_API_USER: salt_api
      # SALT_API_USER_PASS: 4wesome-Pass0rd
Please, could you test it??
If you are viewing salt logs inside the logs volume, you should set the [SALT_LEVEL_LOGFILE](https://docs.saltproject.io/en/latest/ref/configuration/master.html#log-level-logfile) to info as well since, SALT_LOG_LEVEL is for the standard output.
I'll make some changes to use SALT_LOG_LEVEL when SALT_LEVEL_LOGFILE is not defined in order to make it less confusing.
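Until that fallback lands, a sketch of setting both explicitly in the compose environment block (variable names as used in this thread):

```yaml
environment:
  SALT_LOG_LEVEL: info        # log level for standard output
  SALT_LEVEL_LOGFILE: info    # log level for the log file inside the logs volume
```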
ok, thanks, looks like your changes are working on one machine. The other server seems to be having problems connecting the minions, but we think the problem is with the server itself. Thank you so much for your help! Thanks for the tip on logging too.
|
2025-04-01T06:38:09.466367
| 2021-03-15T20:39:33
|
832174503
|
{
"authors": [
"cdalvaro",
"dmlambea"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4529",
"repo": "cdalvaro/docker-salt-master",
"url": "https://github.com/cdalvaro/docker-salt-master/issues/58"
}
|
gharchive/issue
|
Add support for configurating the reactor in master config
As per the documentation, there is a way for telling the master to sync custom types on minions' start. Please refer to: https://docs.saltproject.io/en/latest/topics/reactor/index.html#minion-start-reactor
It would be good to have a config option and directory mapping for configuring the reactor, much like other options are (mounting the keys, roots, etc).
An alternative could be enhancing the config system so a master config template (or zero or more small config files like Apache's conf.d/* files, for example) can be used. This way, future or yet unsupported options could be covered easily.
Hi @dmlambea! Thank you for opening this issue!
Right now, you can set your own reactor settings by creating a reactor.conf file inside the config directory and mounting it:
# config/reactor.conf
reactor:
  - 'salt/minion/*/start':
    - /srv/reactor/sync_grains.sls
/srv directory is symlinked to /home/salt/data/srv, but you can also specify /home/salt/data/srv/reactor/sync_grains.sls instead of /srv/reactor/sync_grains.sls.
# the config/ dir is mounted so reactor.conf is picked up
docker run --name salt_stack --detach \
  --publish 4505:4505 --publish 4506:4506 \
  --volume $(pwd)/roots/:/home/salt/data/srv/ \
  --volume $(pwd)/config/:/home/salt/data/config/ \
  cdalvaro/docker-salt-master:latest
This way, you can add your sync_grains.sls file to roots/reactor:
# roots/reactor/sync_grains.sls
sync_grains:
  local.saltutil.sync_grains:
    - tgt: {{ data['id'] }}
And that should do the trick. I use this method for my start.sls reactor.
Please let me know if this method does not fit your requirements, or if you see any way to improve this support.
Anyway, I will add this case to the documentation for better support.
Hi @cdalvaro
It worked, thank you!
|
2025-04-01T06:38:09.467519
| 2023-12-19T06:05:09
|
2047985523
|
{
"authors": [
"cdasilvasantos"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4530",
"repo": "cdasilvasantos/is218-group-project",
"url": "https://github.com/cdasilvasantos/is218-group-project/issues/104"
}
|
gharchive/issue
|
PDF Adding Info
Add important information to the PDF that is relevant to the assignment
Added parts of the assignment that is necessary for grading
|
2025-04-01T06:38:09.474964
| 2024-04-27T11:54:28
|
2266981774
|
{
"authors": [
"cakemanny",
"cdk8s-automation"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4531",
"repo": "cdk8s-team/cdk8s-cli",
"url": "https://github.com/cdk8s-team/cdk8s-cli/pull/2194"
}
|
gharchive/pull-request
|
fix(templates): tell jest to prefer ts files over compiled js files
The template project can run completely without emitting .js files; e.g. ts-node is used for synth and ts-jest for tests.
However, if js files are emitted via npm run compile, they are imported by test files instead of .ts files that may have been updated.
i.e. given this import in main.test.ts, it will import and test main.ts unless main.js exists, in which case main.js is imported instead.
import {MyChart} from './main';
This is rather confusing if e.g. some changes to main.ts have been pulled. Let's fix this!
Approach: per https://jestjs.io/docs/configuration#modulefileextensions-arraystring
We recommend placing the extensions most commonly used in your project on the left, so if you are using TypeScript, you may want to consider moving "ts" and/or "tsx" to the beginning of the array.
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 2194
Questions ?
Please refer to the Backport tool documentation
|
2025-04-01T06:38:09.477810
| 2022-07-30T02:50:02
|
1322851054
|
{
"authors": [
"cdk8s-automation"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4532",
"repo": "cdk8s-team/cdk8s-cli",
"url": "https://github.com/cdk8s-team/cdk8s-cli/pull/426"
}
|
gharchive/pull-request
|
chore(deps): upgrade dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-2.x" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 426
Questions ?
Please refer to the Backport tool documentation
|
2025-04-01T06:38:09.480702
| 2024-09-17T09:02:47
|
2530491441
|
{
"authors": [
"cdk8s-automation"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4533",
"repo": "cdk8s-team/cdk8s-core",
"url": "https://github.com/cdk8s-team/cdk8s-core/pull/2880"
}
|
gharchive/pull-request
|
chore(deps): upgrade dev dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-dev-dependencies-2.x" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 2880
Questions ?
Please refer to the Backport tool documentation
|
2025-04-01T06:38:09.483328
| 2022-08-20T02:46:47
|
1345035214
|
{
"authors": [
"cdk8s-automation"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4534",
"repo": "cdk8s-team/cdk8s-plus",
"url": "https://github.com/cdk8s-team/cdk8s-plus/pull/1060"
}
|
gharchive/pull-request
|
chore(deps): upgrade dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-k8s-24-main" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 1060
Questions ?
Please refer to the Backport tool documentation
|
2025-04-01T06:38:09.485883
| 2022-09-23T02:59:46
|
1383234519
|
{
"authors": [
"cdk8s-automation"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4535",
"repo": "cdk8s-team/cdk8s-plus",
"url": "https://github.com/cdk8s-team/cdk8s-plus/pull/1186"
}
|
gharchive/pull-request
|
chore(deps): upgrade dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-k8s-22-main" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 1186
Questions ?
Please refer to the Backport tool documentation
|
2025-04-01T06:38:09.488448
| 2023-10-05T09:06:20
|
1927768240
|
{
"authors": [
"cdk8s-automation"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4536",
"repo": "cdk8s-team/cdk8s-plus",
"url": "https://github.com/cdk8s-team/cdk8s-plus/pull/3041"
}
|
gharchive/pull-request
|
chore(deps): upgrade dev dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-dev-dependencies-k8s-27-main" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 3041
Questions ?
Please refer to the Backport tool documentation
|
2025-04-01T06:38:09.491138
| 2023-11-07T12:02:19
|
1981192701
|
{
"authors": [
"cdk8s-automation"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4537",
"repo": "cdk8s-team/cdk8s-plus",
"url": "https://github.com/cdk8s-team/cdk8s-plus/pull/3255"
}
|
gharchive/pull-request
|
chore(deps): upgrade compiler dependencies
Upgrades project dependencies. See details in workflow run.
Automatically created by projen via the "upgrade-compiler-dependencies-k8s-27-main" workflow
⚪ Backport skipped
The pull request was not backported as there were no branches to backport to. If this is a mistake, please apply the desired version labels or run the backport tool manually.
Manual backport
To create the backport manually run:
backport --pr 3255
Questions ?
Please refer to the Backport tool documentation
|
2025-04-01T06:38:09.552885
| 2021-08-30T21:05:10
|
983202762
|
{
"authors": [
"jsjoeio",
"senyai",
"tidux"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4538",
"repo": "cdr/code-server",
"url": "https://github.com/cdr/code-server/issues/4073"
}
|
gharchive/issue
|
Terminal font rendering bug in Firefox and Chrome.
OS/Web Information
Web Browser: Firefox, Google Chrome, Microsoft Edge
Local OS: Linux, Windows 10
Remote OS: Amazon Linux 2
Remote Architecture: x86_64
code-server --version: 3.11.1 c680aae973d83583e4a73dc0c422f44021f0140e
Steps to Reproduce
open code-server tab in Firefox
open terminal
type _______
Expected
The underscore characters are visible.
Actual
The underscore characters are not visible, leaving what appears to be blank spaces. In Chrome-based browsers, adjusting the terminal's line spacing to 1.1 makes the underscores visible, but this does not work in Firefox.
Logs
Console stdout/stderr:
[2021-08-30T20:58:55.509Z] info - Not serving HTTPS
[2021-08-30T21:02:15.733Z] debug forking vs code...
[2021-08-30T21:02:16.096Z] debug setting up vs code...
[2021-08-30T21:02:16.099Z] debug vscode got message from code-server {"type":"init"}
[2021-08-30T21:02:18.976Z] debug vscode got message from code-server {"type":"socket"}
[2021-08-30T21:02:18.979Z] debug protocol Initiating handshake... {"token":"aae87521-0bbb-4f11-9aac-5b37061ad123"}
[2021-08-30T21:02:19.040Z] debug protocol Handshake completed {"token":"aae87521-0bbb-4f11-9aac-5b37061ad123"}
[2021-08-30T21:02:19.041Z] debug management Connecting... {"token":"aae87521-0bbb-4f11-9aac-5b37061ad123"}
[2021-08-30T21:02:19.042Z] debug vscode 1 active management connection(s)
[2021-08-30T21:02:20.647Z] debug vscode got message from code-server {"type":"socket"}
[2021-08-30T21:02:20.647Z] debug protocol Initiating handshake... {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:20.884Z] debug got latest version {"latest":"3.11.1"}
[2021-08-30T21:02:20.884Z] debug comparing versions {"current":"3.11.1","latest":"3.11.1"}
[2021-08-30T21:02:20.919Z] debug protocol Handshake completed {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:20.920Z] debug exthost Connecting... {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:20.921Z] debug exthost Getting NLS configuration... {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:20.921Z] debug vscode 1 active exthost connection(s)
[2021-08-30T21:02:20.921Z] debug exthost Spawning extension host... {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:20.925Z] debug exthost Waiting for handshake... {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:21.328Z] debug exthost Handshake completed {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
[2021-08-30T21:02:21.328Z] debug exthost Sending socket {"token":"1e6492e6-71b6-4fda-bf1e-5a1e11924ca3"}
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
ERR [File Watcher (nsfw)] terminated unexpectedly and is restarted again...
terminate called after throwing an instance of 'Napi::Error'
what(): Inotify limit reached
IPC "File Watcher (nsfw)" crashed with exit code null and signal SIGABRT
ERR [File Watcher (nsfw)] failed to start after retrying for some time, giving up. Please report this as a bug report!
ERR [File Watcher (nsfw)] failed to start after retrying for some time, giving up. Please report this as a bug report!
^C[2021-08-30T21:02:59.332Z] debug child:72071 disposing {"code":"SIGINT"}
Screenshot
The "command not found" error was produced by typing several underscores and hitting enter.
Notes
This issue can be reproduced in VS Code: No
I can't reproduce this in Firefox + macOS. Are you using a custom font in your terminal by chance?
https://user-images.githubusercontent.com/3806031/131408401-dfb9fbd3-54b8-4cbe-b98f-5bd9ecaab881.mov
No, default options. Please test on a non-retina display.
Underscore is not visible in firefox on 133% zoom:
https://user-images.githubusercontent.com/76137/131497615-2398d771-046a-41f2-9b5b-490c2404c239.mp4
Underscore is not visible in firefox when it's on the last row:
https://user-images.githubusercontent.com/76137/131498038-4310f522-29db-4476-aeea-ab901e724c8f.mp4
Underscore is not visible in firefox on 133% zoom:
I can't reproduce this unfortunately. Are you using a custom font?
I can reproduce it on latest Linux Mint and Fedora. Changing the font to a custom one made the underscore visible all the time, and by visible I mean:
there's something wrong in how the underscore is rendered.
Got it!
"terminal.integrated.gpuAcceleration": "off"
fixes the issue:
|
2025-04-01T06:38:09.560847
| 2024-06-03T18:17:41
|
2331769434
|
{
"authors": [
"ben851",
"sastels"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4539",
"repo": "cds-snc/notification-terraform",
"url": "https://github.com/cds-snc/notification-terraform/pull/1349"
}
|
gharchive/pull-request
|
EKS Upgrade to 1.30 in Staging
Summary | Résumé
New version of EKS - 1.30
Verified that there are no deprecations we need to worry about.
Release notes: https://kubernetes.io/blog/2024/04/17/kubernetes-v1-30-release/
Upgraded and working in dev.
Related Issues | Cartes liées
Chore
Test instructions | Instructions pour tester la modification
Smoke test/perf test staging
Release Instructions | Instructions pour le déploiement
None.
Reviewer checklist | Liste de vérification du réviseur
[ ] This PR does not break existing functionality.
[x] This PR does not violate GCNotify's privacy policies.
[x] This PR does not raise new security concerns. Refer to our GC Notify Risk Register document on our Google drive.
[x] This PR does not significantly alter performance.
[x] Additional required documentation resulting of these changes is covered (such as the README, setup instructions, a related ADR or the technical documentation).
⚠ If boxes cannot be checked off before merging the PR, they should be moved to the "Release Instructions" section with appropriate steps required to verify before release. For example, changes to celery code may require tests on staging to verify that performance has not been affected.
tbh, I'd say that we can't really check off all these, at least "This PR does not break existing functionality." should be checked before release 👍
|
2025-04-01T06:38:09.637088
| 2024-07-24T15:28:48
|
2427883355
|
{
"authors": [
"cedricziel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4554",
"repo": "cedricziel/faro-shop",
"url": "https://github.com/cedricziel/faro-shop/pull/641"
}
|
gharchive/pull-request
|
chore(main): release 0.37.1
:robot: I have created a release beep boop
0.37.1 (2024-07-24)
Bug Fixes
bump guzzlehttp/guzzle from 7.9.1 to 7.9.2 (#639) (61de914)
bump laravel/framework from 11.16.0 to 11.17.0 (#640) (2c7a8c4)
bump league/commonmark from 2.5.0 to 2.5.1 (#638) (549978c)
This PR was generated with Release Please. See documentation.
:robot: Created releases:
0.37.1
:sunflower:
|
2025-04-01T06:38:09.638549
| 2015-09-03T13:27:35
|
104693360
|
{
"authors": [
"aredridel"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4555",
"repo": "ceejbot/numbat-collector",
"url": "https://github.com/ceejbot/numbat-collector/issues/7"
}
|
gharchive/issue
|
Startup time is high
numbat-collector is one of the larger chunks of the startup time of my module.
It clocks in at 800ms to require on my machine.
It's a dep of newww, and I was mechanically going through deps with require-time to see what's slow. It's got a 3+ second startup time.
Though looking deeper, it looks like it depends on it but never uses it.
|
2025-04-01T06:38:09.739211
| 2022-01-03T11:29:28
|
1092433355
|
{
"authors": [
"adlerjohn",
"liamsi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4556",
"repo": "celestiaorg/celestia-app",
"url": "https://github.com/celestiaorg/celestia-app/issues/168"
}
|
gharchive/issue
|
Decide on governance
Summary
Currently, we stripped the gov module entirely from celestia-app. We should decide on the gov mechanism if any.
Problem Definition
There is no way to vote on (parameter) changes currently.
Proposal
@musalbas proposed to only have signalling text governance proposals on chain.
The upgrade path must be clearly defined for various scenarios and what role governance should play in this if any.
E.g. should coin holders vote on a block size increase? On upgrading to a new software release? On other params? Only on non-binding signalling text proposals?
Action Items
[ ] Decide on if we want any governance in the sense of "coin-voting" at all
[ ] Decide on what kind of proposals governance can vote
[ ] Summarize findings and decision in a brief ADR
[ ] then: implement changes in app
[ ] add a more fine-grained document about various upgrade-paths including what can be voted on but also beyond what is covered by governance
Related:
https://www.figment.io/resources/cosmos-parameter-change-documentation, https://github.com/gavinly/CosmosParametersWiki/blob/master/param_index.md
https://github.com/celestiaorg/celestia-specs/issues/128
https://github.com/celestiaorg/celestia-specs/issues/171
some SDK discussions and issues to keep an eye on:
https://github.com/cosmos/cosmos-sdk/discussions/9066
https://github.com/cosmos/cosmos-sdk/discussions/9913
https://linktr.ee/cosmos_gov
code that (currently) handles text proposals: https://github.com/cosmos/cosmos-sdk/blob/5725659684fc93790a63981c653feee33ecf3225/x/gov/types/proposal.go#L249-L253, code that (currently) handles param changes: https://github.com/cosmos/cosmos-sdk/blob/58a6c4c00771e766f37f0f8e50adbbfe0bc7362d/x/params/proposal_handler.go#L26
The decision for now:
we will add back the full governance module for the next testnet
if we want to limit anything we should figure this out between testnet and mainnet
IMHO, we should simply use the full governance module at launch. If we ever want to move away from signalling/coin-voting, there either needs to be a governance proposal to do that, or a coordinated hard-fork. cc @musalbas @adlerjohn
Sounds good
|
2025-04-01T06:38:09.745155
| 2022-03-28T20:35:54
|
1183961588
|
{
"authors": [
"codecov-commenter",
"evan-forbes"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4557",
"repo": "celestiaorg/celestia-app",
"url": "https://github.com/celestiaorg/celestia-app/pull/255"
}
|
gharchive/pull-request
|
Orchestrator and relayer client
Description
This PR is the first of three PRs to add the MVP orchestrator and relayer. It contains the various rpc clients used by both the relayer and orchestrator to communicate with celestia-app, celestia-core, and ethereum.
part 1/3 of the orchestrator/relayer MVP
Codecov Report
:exclamation: No coverage uploaded for pull request base (qgb-integration@6d78b9b). Click here to learn what that means.
The diff coverage is n/a.
@@ Coverage Diff @@
## qgb-integration #255 +/- ##
==================================================
Coverage ? 14.77%
==================================================
Files ? 42
Lines ? 8576
Branches ? 0
==================================================
Hits ? 1267
Misses ? 7223
Partials ? 86
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6d78b9b...8a63b4e. Read the comment docs.
there's still a lot of unused code that will get used later, so the linter is failing
|
2025-04-01T06:38:09.748290
| 2023-10-07T15:10:21
|
1931386287
|
{
"authors": [
"codecov-commenter",
"rootulp"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4558",
"repo": "celestiaorg/celestia-app",
"url": "https://github.com/celestiaorg/celestia-app/pull/2626"
}
|
gharchive/pull-request
|
fix: specs for MaxDepositPeriod and VotingPeriod
Closes https://github.com/celestiaorg/celestia-app/issues/2624
Codecov Report
Merging #2626 (caa5c07) into main (44be82a) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #2626 +/- ##
=======================================
Coverage 20.63% 20.63%
=======================================
Files 133 133
Lines 15346 15346
=======================================
Hits 3166 3166
Misses 11877 11877
Partials 303 303
|
2025-04-01T06:38:09.751421
| 2022-03-07T18:51:24
|
1161795449
|
{
"authors": [
"jbowen93"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4559",
"repo": "celestiaorg/evmos",
"url": "https://github.com/celestiaorg/evmos/issues/27"
}
|
gharchive/issue
|
Cut a v0.1.0 Release
Since we have reached a state where we can send transactions, deploy contracts, and call contracts using Optimint as a backend, we should create a v0.1.0 release.
Depends on
[x] https://github.com/celestiaorg/ethermint/issues/3
[x] https://github.com/celestiaorg/optimint/issues/310
[x] https://github.com/celestiaorg/optimint/issues/323
[x] https://github.com/celestiaorg/evmos/pull/32
All pre tasks are done. A release can be cut.
|
2025-04-01T06:38:09.762240
| 2023-04-26T15:19:07
|
1685238805
|
{
"authors": [
"codecov-commenter",
"derrandz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4560",
"repo": "celestiaorg/go-header",
"url": "https://github.com/celestiaorg/go-header/pull/35"
}
|
gharchive/pull-request
|
WIP: Feat/bootstrap from previously seen peers
Overview
This PR contains the implementation of ADR-14
Checklist
[ ] New and updated code has appropriate documentation
[ ] New and updated code has new and/or updated testing
[ ] Required CI checks are passing
[ ] Visual proof for any user facing features like CLI or documentation updates
[ ] Linked issues closed with keywords
Codecov Report
Merging #35 (cc4d2b0) into main (4a93da2) will decrease coverage by 0.04%.
The diff coverage is 76.00%.
@@ Coverage Diff @@
## main #35 +/- ##
==========================================
- Coverage 66.22% 66.18% -0.04%
==========================================
Files 35 36 +1
Lines 2768 2827 +59
==========================================
+ Hits 1833 1871 +38
- Misses 785 801 +16
- Partials 150 155 +5
Impacted Files             | Coverage Δ
p2p/options.go             | 44.44% <50.00%> (+0.78%) :arrow_up:
p2p/exchange.go            | 78.86% <69.69%> (-2.39%) :arrow_down:
p2p/peer_tracker.go        | 77.39% <86.36%> (+1.20%) :arrow_up:
p2p/peerstore/peerstore.go | 100.00% <100.00%> (ø)
sync/sync_head.go          | 62.03% <100.00%> (ø)
... and 1 file with indirect coverage changes
Closed in favor of:
https://github.com/celestiaorg/go-header/pull/36
Incoming
|
2025-04-01T06:38:09.778850
| 2018-08-21T18:04:04
|
352649795
|
{
"authors": [
"3rwww1",
"EricHanLiu",
"michaelbourgatt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4561",
"repo": "celluloid-edu/celluloid",
"url": "https://github.com/celluloid-edu/celluloid/issues/39"
}
|
gharchive/issue
|
Setting up dev environment
I'm trying to get this app setup properly for a project I'm working on.
I've cloned the repo, run npm i (though it appears Lerna is the only dependency), then npm run build, which fails. Fixing one error only brings another one up, so I figured the fastest way forward would be to ask here what the proper setup steps are to get Celluloid working.
What packages am I missing / what steps should I take?
Thanks!
Why was this closed @3rwww1 ? I'd appreciate help on what dependencies are required for the project to run in development
Hi Eric,
Erwan, our IT developer, is about to answer you :-) We've talked about that
just this morning.
All the best,
Michaël
Le lun. 27 août 2018 à 14:54, Eric Han Liu<EMAIL_ADDRESS>a
écrit :
Why was this closed @3rwww1 https://github.com/3rwww1 ? I'd appreciate
help on what dependencies are required for the project to run in development
Hi @EricHanLiu !
Sorry for closing the issue, I was doing a bit of "housekeeping" on this repository, and for some reason I didn't see this new issue, nor was I notified by mail when you opened it, nor did I check I wasn't the author when closing it.
I'm in the process of writing an extensive README for the project, but in the meantime, here is an excerpt
Prerequisites
using a macOS or Linux operating system is highly recommended. With a bit of tweaking, Windows will work too, although you'll have to do a bit more searching on how to install and configure the following tools.
download Yarn and use it instead of NPM. The project is organized as a monorepo so it needs yarn to leverage Yarn workspace
install a local postgresql server, version 9.6 or later, for your environment, optionally using a docker image. Then, create a user for celluloid and then create a database owned by this user. You can follow this tutorial to get setup quickly.
finally, you'll need an SMTP server to send emails for account confirmation. For development purpose, you could use your email account SMTP credentials, for instance gmail, or a dedicated service, such as mailtrap
Configuration
create a .env file at the root of your repository, with the following contents:
```
NODE_ENV=development
CELLULOID_LISTEN_PORT=3001
CELLULOID_PG_HOST=celluloid-db-postgres.cticqmujyhft.eu-west-1.rds.amazonaws.com
CELLULOID_PG_PORT=5432
CELLULOID_PG_DATABASE=celluloid
CELLULOID_PG_USER=celluloid
CELLULOID_PG_PASSWORD=8H#Cjvp!ZmY#!h6p
CELLULOID_PG_MAX_POOL_SIZE=20
CELLULOID_PG_IDLE_TIMEOUT=30000
CELLULOID_JWT_SECRET="bhN63!4A^CAnn@xe53s7d8uD1jCXbMXPtU6H*a0*YZ%Z1#2!C95hgJv7i#53NU1d"
CELLULOID_SMTP_USER=AKIAILSQA6FHHYZ24IMA
CELLULOID_SMTP_PASSWORD='AjCjLRwqAMudLq62oKksUin6kscuHnP5bE+ieE/ssf3b'
CELLULOID_SMTP_HOST=email-smtp.eu-west-1.amazonaws.com
COOL_INFRA_PATH=../cool/infra/celluloid
```
## Installation
- setup a local database with postgresql and restore
|
2025-04-01T06:38:09.793800
| 2021-02-17T23:37:29
|
810620209
|
{
"authors": [
"aslawson",
"gastonponti",
"timmoreton"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4562",
"repo": "celo-org/celo-monorepo",
"url": "https://github.com/celo-org/celo-monorepo/issues/7158"
}
|
gharchive/issue
|
[cli] Prevent releasegold:withdraw if a ReleaseGold contract has a cEUR balance
Expected Behavior
We currently check to see if a RG contract would self-destruct and take a cUSD balance with it. We should add a similar check for cEUR.
Current Behavior
Someone can send cEUR to a RG contract, then withdraw all remaining CELO, and lose the cEUR.
@gastonponti is this still being worked on or complete?
If not started, I don't think we need to do this now. There's an open question around there being no way to get cEUR or any other token back from the RG contract now. Which means users would be blocked on withdrawing their CELO with this change
Yes, I've asked that in the cap channel
I didn't want to create the PR until I had a clearer answer.
It's just a few changes; I could add a branch to this issue with something like "to be merged when this is fixed"
|
2025-04-01T06:38:09.814653
| 2019-09-04T09:00:57
|
489032183
|
{
"authors": [
"bvwells",
"rjeczalik"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4563",
"repo": "census-instrumentation/opencensus-go",
"url": "https://github.com/census-instrumentation/opencensus-go/issues/1163"
}
|
gharchive/issue
|
wrong metric for ServerResponseCountByStatusCode view
https://github.com/census-instrumentation/opencensus-go/blob/6ddd4bcc9c808594ec82377ce4323c3f7913be6d/plugin/ochttp/stats.go#L263-L269
The ServerLatency measure seems wrong here, since there's no ServerResponseCount measure defined or used. Given that, I guess the view was meant to be ServerRequestCountByStatusCode.
Since a server-latency-by-status-code view doesn't seem all that useful, I was wondering whether ServerResponseCountByStatusCode should be replaced by ServerRequestCountByStatusCode:
```go
ServerRequestCountByStatusCode = &view.View{
    Name:        "opencensus.io/http/server/request_count_by_status_code",
    Description: "Server request count by status code",
    TagKeys:     []tag.Key{StatusCode},
    Measure:     ServerRequestCount,
    Aggregation: view.Count(),
}
```
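To make concrete what a view with `Aggregation: view.Count()` keyed by a status-code tag actually computes, here is a small Python sketch (illustrative only, not OpenCensus code; all names are hypothetical): each recorded request contributes one count to the bucket for its status code.

```python
from collections import Counter

def count_by_status_code(recorded_statuses):
    """Tally one count per recorded request, keyed by its StatusCode tag.

    This mirrors what a Count aggregation over a status-code tag does:
    the measure's value is ignored, only the number of records matters.
    """
    return dict(Counter(recorded_statuses))

# Each element stands for one recorded ServerRequestCount measurement,
# tagged with the response's status code.
print(count_by_status_code(["200", "200", "404", "200", "500"]))
```

This is why a "request count by status code" view is meaningful, whereas pairing the latency measure with a Count aggregation would just count records and discard the latency values entirely.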
Looks like the same issue as #995...
|
2025-04-01T06:38:09.822129
| 2019-03-05T23:02:26
|
417542443
|
{
"authors": [
"codecov-io",
"rghetia",
"songy23"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4564",
"repo": "census-instrumentation/opencensus-java",
"url": "https://github.com/census-instrumentation/opencensus-java/pull/1786"
}
|
gharchive/pull-request
|
Exporter/Metrics/OcAgent: Add integration test.
Metrics counterpart of https://github.com/census-instrumentation/opencensus-java/pull/1776.
Codecov Report
:exclamation: No coverage uploaded for pull request base (master@b552db4). Click here to learn what that means.
The diff coverage is n/a.
```
@@            Coverage Diff            @@
##             master    #1786   +/-   ##
=========================================
  Coverage          ?   83.89%
  Complexity        ?     2020
=========================================
  Files             ?      291
  Lines             ?     9219
  Branches          ?      890
=========================================
  Hits              ?     7734
  Misses            ?     1171
  Partials          ?      314
```
| Impacted Files | Coverage Δ | Complexity Δ |
|----------------|------------|--------------|
| ...porter/metrics/ocagent/OcAgentMetricsExporter.java | 69.04% <ø> (ø) | 4 <0> (?) |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b552db4...cd82544. Read the comment docs.
one nit. LGTM otherwise.
|
2025-04-01T06:38:09.829646
| 2017-06-29T20:56:24
|
239608595
|
{
"authors": [
"codecov-io",
"ubschmidt2"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4565",
"repo": "census-instrumentation/opencensus-java",
"url": "https://github.com/census-instrumentation/opencensus-java/pull/400"
}
|
gharchive/pull-request
|
Package the classes that need to be loaded by the bootstrap classes i…
…n a separate JAR file, bootstrap.jar, and bundle that with the agent's JAR file.
Codecov Report
Merging #400 into master will increase coverage by 0.04%.
The diff coverage is n/a.
```
@@             Coverage Diff              @@
##             master     #400      +/-   ##
============================================
+ Coverage     90.56%   90.61%   +0.04%
  Complexity      595      595
============================================
  Files           100      100
  Lines          2078     2078
  Branches        208      208
============================================
+ Hits           1882     1883       +1
  Misses          142      142
+ Partials         54       53       -1
```
| Impacted Files | Coverage Δ | Complexity Δ |
|----------------|------------|--------------|
| ...a/io/opencensus/trace/export/SpanExporterImpl.java | 91.66% <0%> (+1.66%) | 6% <0%> (ø) :arrow_down: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 286ab5a...f69cfce. Read the comment docs.
|
2025-04-01T06:38:09.879218
| 2017-10-05T12:24:24
|
263105544
|
{
"authors": [
"leseb"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4566",
"repo": "ceph/ceph-ansible",
"url": "https://github.com/ceph/ceph-ansible/pull/1995"
}
|
gharchive/pull-request
|
jewel: remove rbd check
The value of doing this is fairly low compared to the added complexity.
So we remove these tasks, if rbd pool on Jewel doesn't have the right PG
value you can always increase it.
Signed-off-by: Sébastien Han<EMAIL_ADDRESS>
jenkins test luminous-ansible2.3-centos7_cluster
|
2025-04-01T06:38:09.891626
| 2022-06-21T09:21:28
|
1278153992
|
{
"authors": [
"humblec",
"nixpanic",
"pkalever"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4567",
"repo": "ceph/ceph-csi",
"url": "https://github.com/ceph/ceph-csi/pull/3199"
}
|
gharchive/pull-request
|
rbd: have dummy attacher implementation
Previously, an attacher sidecar was required for CSI drivers, and the
sidecar provided a dummy mode of operation. However, the skipAttach
implementation has since stabilized, and the dummy mode of operation
is going to be removed from the external-attacher.
Because this driver relies on volumeattachment objects for NBD
use cases, we have to implement dummy ControllerPublishVolume and
ControllerUnpublishVolume handlers to keep our operations working
even after the dummy mode of operation is removed from the sidecar.
This commit makes ControllerPublishVolume and ControllerUnpublishVolume no-ops for the RBD driver.
The CephFS driver does not require the attacher and has already been
freed from attachment operations.
Ref# https://github.com/ceph/ceph-csi/pull/3149
Ref# https://github.com/kubernetes-csi/external-attacher/issues/226
Signed-off-by: Humble Chirammal<EMAIL_ADDRESS>
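The no-op behaviour described above can be sketched as follows. This is a hypothetical Python stand-in for illustration only; the real ceph-csi handlers are Go gRPC methods, and the response shapes here only loosely mirror the CSI spec.

```python
class NoopRBDControllerServer:
    """Stand-in controller that satisfies publish/unpublish calls
    without doing any real attach work."""

    def controller_publish_volume(self, request):
        # Acknowledge the attach so volumeattachment objects are
        # satisfied; an empty publish context is enough for a no-op.
        return {"publish_context": {}}

    def controller_unpublish_volume(self, request):
        # Nothing to detach for a no-op controller.
        return {}

server = NoopRBDControllerServer()
print(server.controller_publish_volume({"volume_id": "vol-1"}))
print(server.controller_unpublish_volume({"volume_id": "vol-1"}))
```

The point of the design is that the external-attacher still sees successful responses for both RPCs, so volumeattachment objects keep working even though no attach/detach happens server-side.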
Why?
/retest all
/retest all
Please don't do this for PRs that are not completely ready, or do not require exhaustive testing yet. The CI environment is rather busy already, re-testing like this just prevents other PRs from getting merged sooner.
/retest ci/centos/mini-e2e/k8s-1.22
/retest ci/centos/k8s-e2e-external-storage/1.23
/retest ci/centos/mini-e2e-helm/k8s-1.22
@nixpanic added the details to the commit and PR. Also tests are passing ! ptal.. thanks..
all the test failures are on:

```
Errors during downloading metadata for repository 'tcmu-runner':
Status code: 404 for https://3.chacra.ceph.com/r/tcmu-runner/master/245914c1446dddf07d5b58b0a7b2060b50fde4d7/centos/8/flavors/default/x86_64/repodata/repomd.xml (IP: <IP_ADDRESS>)
Error: Failed to download metadata for repo 'tcmu-runner': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Error: error building at STEP "RUN dnf -y install librados-devel librbd-devel /usr/bin/cc make git && dnf clean all && rm -rf /var/cache/yum && true": error while running runtime: exit status 1
make: *** [Makefile:233: image-cephcsi] Error 1
script returned exit code 2
```
@Mergifyio rebase
@nixpanic comments are addressed.. ptal.. thanks
/retest ci/centos/k8s-e2e-external-storage/1.22
/retest ci/centos/mini-e2e/k8s-1.23
/retest ci/centos/k8s-e2e-external-storage/1.22
@nixpanic can you revisit as I have addressed the changes requested here ?
Is there a reason to keep ControllerPublishVolume in internal/csi-common/controllerserver-default.go? Or does it make sense to move this dummy implementation there instead?
If ControllerPublishVolume is only needed for NBD support, it should probably be reported as a feature only for NBD backed volumes. But, I guess the controller does not report the capabilities depending on the parameters for the volume?
Is there a reason to keep ControllerPublishVolume in internal/csi-common/controllerserver-default.go? Or does it make sense to move this dummy implementation there instead?
in this case, it defaults to "not implemented" (inherited for CephFS) and only gets an implementation in the case of RBD.
If ControllerPublishVolume is only needed for NBD support, it should probably be reported as a feature only for NBD backed volumes. But, I guess the controller does not report the capabilities depending on the parameters for the volume?
Yeah, this falls under the get-capabilities call for the controller.
@nixpanic I have answered the queries.. ptal. thanks.
@nixpanic ptal
@nixpanic ptal. Thanks
@humblec is this a preparation for removing the registry.k8s.io/sig-storage/csi-attacher sidecar from RBD provisioner in the future?
@humblec is this a preparation for removing the registry.k8s.io/sig-storage/csi-attacher sidecar from RBD provisioner in the future?
This is not related to that @pkalever
@Mergifyio refresh
@Mergifyio rebase
@humblec is this a preparation for removing the registry.k8s.io/sig-storage/csi-attacher sidecar from RBD provisioner in the future?
This is not related to that @pkalever
Then I'm confused.
previously, it was a requirement to have attacher sidecar for CSI
drivers and there had an implementation of dummy mode of operation.
Well even now we have the csi-attacher sidecar, right?
Don't know what I'm missing here, could you please help me understand?
|
2025-04-01T06:38:10.057022
| 2022-05-30T12:32:52
|
1252676371
|
{
"authors": [
"guits"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4568",
"repo": "ceph/cephadm-ansible",
"url": "https://github.com/ceph/cephadm-ansible/pull/84"
}
|
gharchive/pull-request
|
playbooks: add cephadm-distribute-ssh-key.yml
This playbook helps distribute an SSH public key to hosts.
Signed-off-by: Guillaume Abrioux<EMAIL_ADDRESS>
jenkins test unittests
|
2025-04-01T06:38:10.112133
| 2016-05-12T19:17:29
|
154559965
|
{
"authors": [
"Iggnsthe",
"bbaugher",
"cchesser"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4569",
"repo": "cerner/cerner_tomcat",
"url": "https://github.com/cerner/cerner_tomcat/issues/13"
}
|
gharchive/issue
|
Allow specifying a mode for templates
Currently, there's no clean way inside of cerner_tomcat to specify the mode that a file from a template should be created with, forcing the user to either accept the default mode of 750 or use a file block to manually set the created file's mode later in the recipe.
This is duplicate of #4
Was this actually duplicated in #4? It appears that the 3.0.0 version of the cookbook, which includes the #4 changes, doesn't allow you to specify the mode, and rather sets the permissions to 600 (which was previously 750). If so, I can submit a PR to update the README to further clarify how to set this, but I wasn't seeing examples or documentation on how to control this (as this behavior changed from 2.x to 3.x).
To clarify, seeing 600 on files being managed for resources used with template (ex. landing in conf) and seeing 644 for remote files landing in the lib directory.
|
2025-04-01T06:38:10.119636
| 2019-11-13T14:47:11
|
522269501
|
{
"authors": [
"jeremyfuksa",
"neilpfeiffer",
"nramamurth"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4570",
"repo": "cerner/terra-framework",
"url": "https://github.com/cerner/terra-framework/issues/970"
}
|
gharchive/issue
|
[terra-tabs] Overflow in icon-only collapsible tabs.
Bug Report
Description
This issue occurs only when all tabs are icon-only. When some tabs need to be collapsed into a menu because space runs out, the container overflows: a few tabs that should have been hidden remain visible, pushing the "More" button to the right and causing an overflow.
Steps to Reproduce
Display, let's say, 50 icon-only tabs.
Additional Context / Screenshots
Repro
```jsx
import React from 'react';
import IconSearch from 'terra-icon/lib/icon/IconSearch';
import Tabs from 'terra-tabs';

const createTabPanes = () => {
  const tabPanes = [];
  for (let i = 0; i < 50; i += 1) {
    const tabPane = (
      <Tabs.Pane label={`Search${i}`} icon={<IconSearch />} isIconOnly key={`Search${i}`} id={`search${i}`} />
    );
    tabPanes.push(tabPane);
  }
  return tabPanes;
};

const IconOnlyTabs = () => (
  <Tabs id="icononlytabs" responsiveTo="none">
    {createTabPanes()}
  </Tabs>
);

export default IconOnlyTabs;
```
Expected Behavior
Horizontal scroll should not appear, 'More' button should not look cut off.
Possible Solution
The minimum width set on icon-only tabs affects the logic used to calculate the hideStateIndex, which decides how many tabs to show and how many to collapse into the menu. Increasing the min-width fixes this issue.
https://github.com/cerner/terra-framework/blob/6cf7f8722f57836915c06589297b30efd09fb1df/packages/terra-tabs/src/Tabs.module.scss#L58-L62
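A simplified model of why an underestimated tab width breaks the collapse calculation (hypothetical Python, not the actual terra-tabs source): if the resize logic assumes a smaller per-tab width than the tabs actually render at, it keeps too many tabs visible and the row overflows.

```python
def hide_index(tab_widths, container_width, more_button_width):
    """Return the index of the first tab to collapse into the 'More'
    menu, or len(tab_widths) if everything fits."""
    used = 0
    for i, width in enumerate(tab_widths):
        # Reserve room for the More button while more tabs remain.
        reserve = more_button_width if i < len(tab_widths) - 1 else 0
        if used + width + reserve > container_width:
            return i
        used += width
    return len(tab_widths)

# 50 icon-only tabs, a 400px container, an 80px More button.
assumed = hide_index([40] * 50, 400, 80)  # width the logic assumes per tab
actual = hide_index([60] * 50, 400, 80)   # width the tabs really take
print(assumed, actual)  # the logic keeps more tabs visible than truly fit
```

With the assumed 40px width the calculation keeps 8 tabs visible, but at the real 60px width only 5 fit, so the extra tabs plus the More button spill out of the container. Raising the min-width closes that gap.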
Environment
Component Name and Version: terra-tabs v6.18.0
Browser Name and Version: All browsers.
@ Mentions
@neilpfeiffer
@neilpfeiffer, thoughts?
Not fully understanding why min-width fixes the dynamic handleResize calculation, but I would be interested to see the change applied in a PR and evaluate it.
@neilpfeiffer
Draft PR - https://github.com/cerner/terra-framework/pull/984
Deployment - [WIP]
Not fully understanding why min-width fixes the dynamic handleResize calculation, but I would be interested to see the change applied in a PR, shown with a new test case, and evaluate it.
I don't understand how it affects that either. But it seems to work.
|
2025-04-01T06:38:10.121432
| 2018-11-05T22:42:23
|
377614544
|
{
"authors": [
"bjankord",
"mmalaker"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4571",
"repo": "cerner/terra-framework",
"url": "https://github.com/cerner/terra-framework/pull/327"
}
|
gharchive/pull-request
|
Update on click outside
Summary
Resolves #237
Deployed URL
https://terra-framework-deploye-pr-327.herokuapp.com/#/tests/terra-hookshot/hookshot/hookshot-close-behaviors
Functional verification completed. The hookshot popup examples display and function as expected.
|
2025-04-01T06:38:10.135213
| 2022-09-08T15:14:59
|
1366545191
|
{
"authors": [
"maelvls"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4572",
"repo": "cert-manager/website",
"url": "https://github.com/cert-manager/website/pull/1074"
}
|
gharchive/pull-request
|
fix the open graph tag "og:title"
The og:title tag is important because it's the first thing people see when someone shares a cert-manager.io link on Twitter or Slack. Today, all the pages have the same og:title:
cert-manager - Documentation
The title of the page that we set in each front matter is discarded. This change fixes that.
To reproduce the before and after commands below, first run:
./scripts/server-netlify
Before:
$ curl -sL localhost:8888/docs/tutorials/getting-started-with-cert-manager-on-google-kubernetes-engine-using-lets-encrypt-for-ingress-ssl | htmlq meta | grep og:title
<meta content="cert-manager - Documentation" property="og:title">
After:
$ curl -sL localhost:8888/docs/tutorials/getting-started-with-cert-manager-on-google-kubernetes-engine-using-lets-encrypt-for-ingress-ssl | htmlq meta | grep og:title
<meta content="Deploy cert-manager on Google Kubernetes Engine (GKE) and create SSL certificates for Ingress using Let's Encrypt" property="og:title">
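The same check can be scripted without htmlq; here is a small Python sketch using only the standard library (illustrative, not part of the site tooling), which pulls the og:title content out of an HTML snippet:

```python
from html.parser import HTMLParser

class OgTitleParser(HTMLParser):
    """Collects the content attribute of <meta property="og:title" ...>."""

    def __init__(self):
        super().__init__()
        self.og_title = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("property") == "og:title":
            self.og_title = attrs.get("content")

parser = OgTitleParser()
parser.feed('<meta content="cert-manager - Documentation" property="og:title">')
print(parser.og_title)
```

Running this over each page's `<head>` makes it easy to verify that every page now carries its own front-matter title rather than the shared default.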
Maybe we should append — cert-manager documentation to the titles?
|
2025-04-01T06:38:10.159421
| 2016-06-10T17:39:52
|
159685378
|
{
"authors": [
"cespare",
"jdoklovic"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4574",
"repo": "cespare/reflex",
"url": "https://github.com/cespare/reflex/pull/33"
}
|
gharchive/pull-request
|
Added global flag for watching chmod only changes
this fixes #32
As I mentioned on #32, I don't want a flag for this if we can avoid it. I'd still need to see the research I mentioned in https://github.com/cespare/reflex/issues/32#issuecomment-187435532.
|
2025-04-01T06:38:10.161370
| 2023-08-07T13:41:56
|
1839485575
|
{
"authors": [
"john-shepherdson"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4575",
"repo": "cessda/cessda.cdc.versions",
"url": "https://github.com/cessda/cessda.cdc.versions/issues/594"
}
|
gharchive/issue
|
Update About text for version 3.4.0
Change "This is the CESSDA Data Catalogue (CDC) version 3.2.0, released on 2022-12-08."
Incorporate updates made by Service Owner (see https://docs.google.com/document/d/1XzBC65fvuNqCNhctJgDicPKqNWh0DsPucVByly9kiU0/edit)
See https://github.com/cessda/cessda.cdc.searchkit/pull/154
|
2025-04-01T06:38:10.163324
| 2012-02-23T21:25:22
|
3363318
|
{
"authors": [
"chochos"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4576",
"repo": "ceylon/ceylon-web-ide-backend",
"url": "https://github.com/ceylon/ceylon-web-ide-backend/issues/9"
}
|
gharchive/issue
|
Button for code sharing
We need a button that somehow stores the code that is currently in the editor and creates a link to show that code again. This allows for on-the-fly sharing of examples.
maybe we could store the code as gists here in github, if the API's not too convoluted. Anyway this is nice too have but low priority
|
2025-04-01T06:38:10.169579
| 2016-06-14T16:43:26
|
160229132
|
{
"authors": [
"FroMage",
"bjansen",
"gavinking"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4577",
"repo": "ceylon/ceylon",
"url": "https://github.com/ceylon/ceylon/issues/6313"
}
|
gharchive/issue
|
createJavaObjectArray() is allowed without explicit type arguments
As discussed on gitter, it's possible to call createJavaObjectArray without explicit type arguments, which might lead to ClassCastExceptions at runtime:
typeParameters => createJavaObjectArray({}); // expected to return `ObjectArray<PsiTypeParameter>`
java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to [Lcom.intellij.psi.PsiTypeParameter;
at org.intellij.plugins.ceylon.ide.ceylonCode.lightpsi.CeyLightToplevelFunction.getTypeParameters(CeyLightToplevelFunction.ceylon:155)
Related to https://github.com/ceylon/ceylon/issues/6160
Perhaps related, when I have a createJavaObjectArray() that is in fact a createJavaObjectArray<CeyLightToplevelFunction|CeyLightClass>(), I also get a CCE:
java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to [Lcom.intellij.psi.PsiClass;
at org.intellij.plugins.ceylon.ide.ceylonCode.lightpsi.CeylonElementFinder.getClasses(CeylonElementFinder.ceylon:85)
Both CeyLightToplevelFunction and CeyLightClass satisfy PsiClass.
This is an SDK issue that was misfiled against the typechecker! Now https://github.com/ceylon/ceylon-sdk/issues/606.
I thought he wanted a type checker error, not a runtime one.
Well, I want a pony, but that doesn't mean you're going to buy me one...
I'm strongly against special casing stuff from ceylon.interop.java in the typechecker.
Even if I ship you 400g of Poney?
Here, have a Pony http://www.ponylang.org/
Even if I ship you 400g of Poney?
http://www.urbandictionary.com/define.php?term=poney
Is that what you mean?
Damnit. This is how we spell it in French :(
|
2025-04-01T06:38:10.216684
| 2016-10-19T11:43:14
|
183938616
|
{
"authors": [
"gavinking",
"lucaswerkmeister",
"quintesse",
"sgalles",
"tombentley"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4578",
"repo": "ceylon/ceylon",
"url": "https://github.com/ceylon/ceylon/issues/6622"
}
|
gharchive/issue
|
Java EE-friendly compiler mode
Thinking further about #6609 #6610, I now think we should go down a slightly different path.
The original mapping of Ceylon->Java was designed to be as "strict" as possible, allowing the minimum possible "misuse" of Ceylon APIs called from Java. More recently we weakened it slightly to make Ceylon objects serializable-by-default, which I still think was the right call.
However, the current mapping is still not perfect in the Java EE universe, where there are a bunch of patterns/rules that both our mapping, and our language defaults, get in the way of. For example,
Several Java EE specs won't let you define final members.
Some Java EE specs don't know what to do with ceylon.language.Integer, ceylon.language.String, and friends.
Some Java EE specs have problems with our late initialization checking.
JAX-RS likes having public default constructors. Grrr.
Therefore, I propose a separate mode where:
We never generate final on methods, even if they're not marked default.
We store Integer?, Float?, String?, Byte?, Boolean? fields as java.lang's Long, Double, String, Byte, Boolean.
We disable runtime assignment checking for late fields. (Hat tip: @jvasileff.)
We make the secret default constructor public instead of protected, since JAX-RS likes it that way.
Now, what triggers this mode? There's a number of possibilities I can think of. For example, it might be triggered by a persistence.xml or beans.xml in META-INF, or it might be triggered by an @Entity annotation.
On the other hand, there should probably also be a command line option, because "Java EE-friendly" is also probably "Spring EE-friendly".
I think this is better than what I had proposed earlier, and so I'm going to close the other issues.
@tombentley How much work is involved in what is outlined above?
Potential enhancement to this: we could even somehow mess with the mapping for collection-typed fields, to transform Ceylon collection-typed fields into Java collection types.
Actually we should definitely do this, though it's a little more work.
How much work is involved
1, 3 and 4 are relatively easy. 2 is going to be more fiddly. Maybe a week in total. It should be do-able for 1.3.2. Collection conversion is way harder.
We need to put more work into figuring out when to trigger this stuff. I don't think a compiler option is the right thing, because it doesn't apply to specific program elements at all, just to whatever elements are involved in a particular compilation.
I have the feeling this is something where we should think hard and long about the possible consequences of suddenly having several different mapping from Ceylon types to the types we finally generate. Right now it's two-pronged: boxed or unboxed, which already creates a lot of complexity. It would now become 3-pronged (while hopefully keeping compatibility with old code).
I'd almost be more inclined to just break and switch to using Java types for our boxed types. Except that would introduce the possibility of double boxing perhaps.
2 is going to be more fiddly
Just "more fiddly"? Seems like a pretty big change to me?
We need to put more work into figuring out when to trigger this stuff.
Yeah, let me do a little research on that. I have to re-familiarize myself with some of the specs in this area (some of which I helped write, but still don't remember).
In the meantime...
1, 3 and 4 are relatively easy.
Alright, could you make a start on this bit while I figure out some rules for how it gets turned on?
2 is going to be more fiddly.
Yes, of course. We don't have to deliver this whole issue in 1.3.1 though. So let's play it by ear. It would be great to have it in but we can live without it.
Collection conversion is way harder
I think it's a bit harder, for sure, but I don't think it's all that different from 2.
Yeah, OK maybe I was a little hasty. "Quite a lot more fiddly". It's actually an opportunity to improve the compiler backend if we can make this more pluggable and less hard-coded.
Aren't our fields always protected by getters/setters?
Aren't our fields always protected by getters/setters?
Ah, not quite because they're set directly from the constructor. Still, that's only three locations per field to worry about:
the constructor
the getter
the setter
I don't see how that's an enormous impact.
Quite a lot more fiddly
The English and their understatements! rolls eyes
I don't see how that's an enormous impact.
Famous last words ;)
I hope you're right but I doubt it. I'm sure we'll need changes in the model loaders for one.
I'm sure we'll need changes in the model loaders for one.
Not true. The metadata is read from the getter, not from the field.
Does this have any implications for libraries? Should we, for example, compile future SDK releases in EE mode?
Does this have any implications for libraries? Should we, for example, compile future SDK releases in Java EE mode?
No.
Still, that's only three locations per field to worry about:
I think we also do direct access whenever we know the getter/setter is/are final.
If it's true, I wasn't aware of that @quintesse.
@quintesse not that I have observed.
We disable runtime assignment checking for late fields.
Would we do that for all late fields, or just those where we would otherwise generate an initialized flag field?
we could even somehow mess with the mapping for collection-typed fields, to transform Ceylon collection-typed fields into Java collection types.
So I assume you mean something like:
c.l.Sequential (or c.l.List<c.l.String>?) → j.u.List
c.l.Map → j.u.Map
c.l.Set → j.u.Set
Obviously we'd lose identity if we're wrapping and unwrapping these things.
Would we do that for all late fields, or just those where we would otherwise generate an initialized flag field?
Great question ... I'm gunna go with .... um ... (flips coin) ... all of them? For now?
So I assume you mean something like:
List -> j.l.List
Set -> j.l.Set
Map -> j.l.Map
String -> j.l.String (as today)
And optional types:
String? -> j.l.String
Integer? -> j.l.Long
Float? -> j.l.Double
Byte? -> j.l.Byte
Boolean? -> j.l.Boolean
Character? -> j.l.Integer
Obviously we'd lose identity if we're wrapping and unwrapping these things.
Not a problem, since the containder tends to wrap and unwrap them anyway.
We make the secret default constructor public instead of protected, since JAX-RS likes it that way.
Even for a non-shared class (where the initializer constructor would be package access)?
Even for a non-shared class (where the initializer constructor would be package access)?
Not sure, we need to check what the JAX-RS spec says. Perhaps resources have to be public.
They must be public, FTR
They must be public, FTR
Then in that case it would only be necessary to do this for shared classes, it seems. And for un-shared classes we leave them package-access.
For model loading we've previously set the Declaration.isDefault() flag when the method mirror is non-final. But in EE mode all methods are non-final, which means we should fallback to using the @Default annotation, but only when loading EE-mode classes. Which means we need to know a class was compiled in EE mode. Annoying. Or we add a @Final annotation in lieu of final. I think I prefer the latter option.
Or we add a @Final annotation in lieu of final. I think I prefer the latter option.
I think that's reasonable. It will be more reusable if there are other cases in future where we need to suppress the final modifier.
I have items 1 – 4 pretty much working on the ee-mode branch @gavinking, if you want to try it out. You'll need --ee on your command line (or ee in your config).
I still have to do the collection mapping. Presumably we map c.l::List<c.l::Integer> → j.l.List<j.l.Long>? What about List<List<Integer>>?
@tombentley fantastic! That's great news.
@tombentley it did not appear to like the default argument.
Gavins-MacBook-Pro-2:ceylon-wildfly-swarm-jaxrs gavin$ ceylon compile --ee && ceylon swarm --provided-module javax:org.wildfly.swarm:jaxrs jaxrs.example && java -jar jaxrs.example-1.0.0-swarm.jar
source/jaxrs/example/entity/Employee.ceylon:15: error: Ceylon backend error: no suitable method found for valueOf(Integer)
shared entity class Employee(name, year=null) {
^
method Long.valueOf(String) is not applicable
(argument mismatch; Integer cannot be converted to String)
method Long.valueOf(long) is not applicable
(argument mismatch; Integer cannot be converted to long)
Note: Created module jaxrs.example/1.0.0
ceylon compile: Fatal error: The compiler exited abnormally (4) due to a bug in the compiler.
Please report it:
https://github.com/ceylon/ceylon/issues/new
Please include:
* the stacktrace printed below
* a description of what you were trying to compile.
Thank you!
com.redhat.ceylon.compiler.CompilerBugException: Bug
at com.redhat.ceylon.compiler.CeylonCompileTool.handleExitCode(CeylonCompileTool.java:668)
at com.redhat.ceylon.compiler.CeylonCompileTool.run(CeylonCompileTool.java:650)
at com.redhat.ceylon.common.tools.CeylonTool.run(CeylonTool.java:524)
at com.redhat.ceylon.common.tools.CeylonTool.execute(CeylonTool.java:405)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.redhat.ceylon.launcher.Launcher.runInJava7Checked(Launcher.java:115)
at com.redhat.ceylon.launcher.Launcher.run(Launcher.java:41)
at com.redhat.ceylon.launcher.Launcher.run(Launcher.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.redhat.ceylon.launcher.Bootstrap.runVersion(Bootstrap.java:162)
at com.redhat.ceylon.launcher.Bootstrap.runInternal(Bootstrap.java:117)
at com.redhat.ceylon.launcher.Bootstrap.run(Bootstrap.java:93)
at com.redhat.ceylon.launcher.Bootstrap.main(Bootstrap.java:85)
Caused by: java.lang.ClassCastException: com.redhat.ceylon.langtools.tools.javac.code.Symbol$ClassSymbol cannot be cast to com.redhat.ceylon.langtools.tools.javac.code.Symbol$MethodSymbol
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.visitApply(Gen.java:1847)
at com.redhat.ceylon.langtools.tools.javac.tree.JCTree$JCMethodInvocation.accept(JCTree.java:1464)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.genExpr(Gen.java:949)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.visitAssign(Gen.java:1988)
at com.redhat.ceylon.langtools.tools.javac.tree.JCTree$JCAssign.accept(JCTree.java:1685)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.genExpr(Gen.java:949)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.visitExec(Gen.java:1779)
at com.redhat.ceylon.langtools.tools.javac.tree.JCTree$JCExpressionStatement.accept(JCTree.java:1295)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.genDef(Gen.java:739)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.genStat(Gen.java:774)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.genStat(Gen.java:760)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.genStats(Gen.java:811)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.visitBlock(Gen.java:1159)
at com.redhat.ceylon.langtools.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:908)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.genDef(Gen.java:739)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.genStat(Gen.java:774)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.genMethod(Gen.java:1033)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.visitMethodDef(Gen.java:996)
at com.redhat.ceylon.langtools.tools.javac.tree.JCTree$JCMethodDecl.accept(JCTree.java:777)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.genDef(Gen.java:739)
at com.redhat.ceylon.langtools.tools.javac.jvm.Gen.genClass(Gen.java:2461)
at com.redhat.ceylon.compiler.java.tools.LanguageCompiler.genCodeUnlessError(LanguageCompiler.java:806)
at com.redhat.ceylon.compiler.java.tools.LanguageCompiler.genCode(LanguageCompiler.java:772)
at com.redhat.ceylon.langtools.tools.javac.main.JavaCompiler.generate(JavaCompiler.java:1575)
at com.redhat.ceylon.compiler.java.tools.LanguageCompiler.generate(LanguageCompiler.java:942)
at com.redhat.ceylon.langtools.tools.javac.main.JavaCompiler.generate(JavaCompiler.java:1539)
at com.redhat.ceylon.langtools.tools.javac.main.JavaCompiler.compile2(JavaCompiler.java:904)
at com.redhat.ceylon.langtools.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:862)
at com.redhat.ceylon.compiler.java.tools.LanguageCompiler.compile(LanguageCompiler.java:271)
at com.redhat.ceylon.compiler.java.launcher.Main.compile(Main.java:658)
at com.redhat.ceylon.compiler.java.launcher.Main.compile(Main.java:564)
at com.redhat.ceylon.compiler.java.launcher.Main.compile(Main.java:556)
at com.redhat.ceylon.compiler.java.launcher.Main.compile(Main.java:545)
at com.redhat.ceylon.compiler.CeylonCompileTool.run(CeylonCompileTool.java:649)
... 17 more
That's with this:
shared entity class Employee(name, year = null) {
generatedValue id
shared late Integer id;
column { length = 50; }
shared String name;
column
shared variable Integer? year;
}
@tombentley mind if I add an additional requirement at this point?
Store late primitive values using a Java wrapper class instead of an extra field. For example, late Integer id would be persisted as java.lang.Long id.
Well, wait, perhaps that's not what I want after all. The issue is this warning:
source/jaxrs/example/entity/Employee.ceylon:10: warning: the 'late' attribute 'id' cannot be properly initialised just by setting the field value because it is erased to a primitive type: depending on the semantics of 'generatedValue' consider annotating the JavaBean Property getter with generatedValue__GETTER or its setter with generatedValue__SETTER or making it non-'late'
generatedValue id
That's not quite right, since it can be properly initialized now that we got rid of the initialization checking. And I'm not sure if it merits a warning at all now.
figure out some rules for how it gets turned on?
So this isn't actually very easy, but here's a couple of heuristics that might work:
Enable it at the class level if the class is annotated entity or xmlAccessorType, or if it has a constructor, field, or method annotated inject.
Enable it at the package level if the package is annotated xmlAccessorType.
Enable it at the module level if the module imports javax.javaeeapi or maven:"javax:javaee-api", or if it has a beans.xml or persistence.xml file in WEB-INF or META-INF.
WDYT, @tombentley?
Perhaps also turn it on for stateless, stateful, messageDriven, and singleton.
Or perhaps we just autoenable it at the module level whenever the module directly imports:
Java EE,
JPA or Hibernate,
JAXB, or
JAX-RS.
That's much simpler.
Any others that should be on that list, DYT?
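That module-level rule could be sketched as a simple predicate (the module names below are illustrative placeholders, not real artifact coordinates — the actual check would live in the compiler/model loader):

```python
# Sketch: enable EE mode when the module *directly* imports one of the
# trigger frameworks discussed above. Names are placeholders.
EE_TRIGGER_IMPORTS = {
    "javax.javaeeapi",          # Java EE (Ceylon SDK module)
    "maven:javax:javaee-api",   # Java EE (Maven coordinates)
    "javax.persistence",        # JPA (placeholder name)
    "org.hibernate.core",       # Hibernate (placeholder name)
    "javax.xml.bind",           # JAXB (placeholder name)
    "javax.ws.rs",              # JAX-RS (placeholder name)
}

def ee_mode_enabled(direct_imports):
    """True if any direct module import should switch on EE mode."""
    return any(imp in EE_TRIGGER_IMPORTS for imp in direct_imports)
```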
I like the idea of enabling it on a per-class basis according to annotations present. That's because we could easily make the enabling annotations configurable, so if people want EE-mode for Spring or any other thing it's not difficult for them to do it. It also means that only the required classes get EE-mode, and the rest still benefit from the ("better") default transformations. Obviously when I say "make it configurable" we could have sensible defaults so that in the usual case it "just works".
I agree that would be nice, but after some research I don't think it's really workable. It works well for JPA because you have the entity annotation. Also works for EJB, since you have component-defining annotations. Doesn't really work well for CDI, nor for JAXB.
The issue is this warning ... That's not quite right, since it can be properly initialized now that we got rid of the initialization checking.
@tombentley, what's your reaction to this?
@gavinking I'll be pushing fixes to the branch shortly.
@gavinking I've updated the branch:
We now enable EE mode automatically when: javaee-api is a direct dependency of the module, or a class is annotated with any of various annotations.
The enabling module imports and annotations are configurable via --ee-import and --ee-annotation respectively. Setting these replaces (rather than adds to). --ee is still supported.
The warning about late won't be shown in EE mode.
Your other Employee example should now work.
Do we need to mention EE mode as a possible alternative solution in the annotated+late warning?
@tombentley can't we do something like --ee=import and --ee=annotation where --ee would be the same as specifying --ee=import,annotation? All these extra options seem to me will only muddle the help/docs.
Sorry forget that, I hadn't understood they took arguments.
@tombentley excellent, thanks, I will try it out!
@gavinking we now wrap List, Set and Map.
Alright! Fantastic 👍
So DYT this is finished now, @tombentley, or is there still additional work to do?
Perhaps if there is missing functionality, we should open separate issues.
I'm happy to close this if you are @gavinking.
The only thing I think we've not done which you mentioned is to activate EE-mode based on the presence of persistence.xml or beans.xml in META-INF. If you still want this, we can open a separate issue.
OK. So what precisely are the activation rules today? I just want to see them written down ;-)
Ah, that's great that this is actually documented :-)
However, you say:
EE mode is usually activated automatically, in the presence of certain annotations on classes, or certain imports
I would like to see these "certain" things enumerated on that page ;-)
@gavinking done
OK, thanks, that looks good. Thanks for your hard work on this!
@gavinking @tombentley the EE activation works for javax.persistence.Entity but should we also add javax.persistence.Embeddable?
@sgalles @tombentley yes, of course, +1.
|
2025-04-01T06:38:10.219699
| 2017-01-25T12:10:35
|
203086246
|
{
"authors": [
"tombentley"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4579",
"repo": "ceylon/ceylon",
"url": "https://github.com/ceylon/ceylon/issues/6892"
}
|
gharchive/issue
|
Wrong value when coercing an ObjectArray type literal to a j.l.Class
In 1.3.1, when coercing to a java.lang.Class, the metamodel literal `ObjectArray<String>` results in the value java.lang.Object[].class. It should have the value ceylon.language.String[].class.
Discovered during #6818
|
2025-04-01T06:38:10.229542
| 2020-10-14T13:00:28
|
721438179
|
{
"authors": [
"bdaniel7",
"cezarypiatek"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4580",
"repo": "cezarypiatek/MappingGenerator",
"url": "https://github.com/cezarypiatek/MappingGenerator/issues/142"
}
|
gharchive/issue
|
Inconsistent behaviour
Thanks for this life-saving NuGet! :))
I'm using the NuGet version with Rider, and in some situations it works and in some it doesn't.
I use Alt + Enter.
It's ok.
It should display the Initialize with local variables and use the properties of the class.
Also, it doesn't show the Implement Clone method when I select/set the cursor on the class name.
Ad1. I need to check that, can you provide a sample solution?
Ad2. "Implement Clone method" - I don't provide such functionality. You probably confused MappingGenerator with something else.
Ad2. In the project wiki there is a paragraph
Generate ICloneable interface implementation - with a gif.
I don't see any problem with "Implement clone" options, works fine in any "space" configuration.
Tested on Rider 2020.2.1
"Initialize with local values" - this action is using only local variables and method parameters. It's not using enclosing type members. This is the current behavior.
Ok, I'm getting the menu when I have the cursor on the class name. Previously I had the cursor just at the end of class name.
But when I try to invoke the action, the IDE freezes. I have to close it from Task Manager. I'm using Rider 2020.2.4
How large is your solution and how complex is the cloned class?
Some classes have around 750 lines, including curly braces and empty lines. Other classes have 45 or 75 lines, with just 2 to 10 properties.
does it hang on that sample solution too?
Nope.
So I guess this is due to solution size. This problem is already reported here https://github.com/cezarypiatek/MappingGenerator/issues/115
I tried to profile MappingGenerator several times but I wasn't able to find anything suspicious. Without a real example I'm not able to diagnose this problem.
Alright, in the meantime I will try a workaround.
@bdaniel7 might I ask you to test it once again using v1.19.452?
First, the compilation for the entire solution became waaay slower - when using dotnet watch run.
Even when just a file is modified.
Which makes the CPU and the fans go crazy...
Then, I got all kinds of warnings that I didn't get before, and that I had to disable with VSTHRD200,VSTHRD103,VSTHRD002,VSTHRD110,VSTHRD003.
Thanks for the quick reply. It's my mistake, the analyzers were referenced incorrectly, should be fixed in v1.19.454
I'm closing this issue. The case with performance can be tracked here #115
|
2025-04-01T06:38:10.245811
| 2015-09-16T14:09:42
|
106781507
|
{
"authors": [
"anselmbradford"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4581",
"repo": "cfpb/cfgov-refresh",
"url": "https://github.com/cfpb/cfgov-refresh/issues/986"
}
|
gharchive/issue
|
subscription form error
form-validation.js is throwing Uncaught TypeError: Cannot read property 'message' of undefined when subscribing via the form. Possibly an issue between Handlebars and webpack.
Fixed! Magic!
|
2025-04-01T06:38:10.249450
| 2015-05-12T15:00:49
|
75632244
|
{
"authors": [
"dpford",
"jimmynotjim",
"kurtw"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4582",
"repo": "cfpb/cfgov-refresh",
"url": "https://github.com/cfpb/cfgov-refresh/pull/530"
}
|
gharchive/pull-request
|
Fix sheer indexing error in views and fix some formatting
This resolves the linked issue and fixes some formatting issues in the View processor. Basically, a View post would have a Hero's slug saved in a custom field. That slug is used to look up and serve the content of the Hero, but when the Hero is deleted an indexing error occurs. That's because the slug used to do the lookup no longer exists, and the code did not account for this. Now it does.
@dpford @jimmynotjim @sebworks @anselmbradford @KimberlyMunoz
:+1:
This is going to need an update to fix the conflicts in CHANGELOG.md. Feel free to merge it as soon as it's fixed so it doesn't end up in a new conflict when another PR gets merged.
|
2025-04-01T06:38:10.255563
| 2021-04-21T15:03:59
|
863988727
|
{
"authors": [
"niqjohnson",
"willbarton"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4583",
"repo": "cfpb/consumerfinance.gov",
"url": "https://github.com/cfpb/consumerfinance.gov/pull/6403"
}
|
gharchive/pull-request
|
Ensure regulation section labels are unique within their version
This change validates that a section's label is unique to the version of the regulation it belongs to.
This means that instead of creating a section that then results in 500 errors, content editors will see an error like this:
Checklist
[x] PR has an informative and human-readable title
[x] Changes are limited to a single goal (no scope creep)
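For illustration, the kind of check this PR adds could be sketched in plain Python (names hypothetical — the real implementation is Django model validation in regulations3k):

```python
import re

# Labels require at least one alphanumeric character, then any number of
# alphanumerics and hyphens, with no spaces (mirrors the field help text).
LABEL_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9-]*$")

def validate_section_label(label, sibling_labels):
    """Raise ValueError if the label is malformed or already used by
    another section of the same regulation version."""
    if not LABEL_RE.match(label):
        raise ValueError(f"invalid label: {label!r}")
    if label in sibling_labels:
        raise ValueError(f"duplicate label in this version: {label!r}")
```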
@willbarton, is it worth mentioning the uniqueness constraint in the label field's help text (cfgov/regulations3k/models/django.py L224-L226)? Maybe something like:
help_text='Labels must be unique and always require at least 1 '
'alphanumeric character, then any number of alphanumeric '
'characters and hyphens, with no spaces.'
@niqjohnson excellent idea! Included in eec8619.
|
2025-04-01T06:38:10.265289
| 2022-06-23T15:36:39
|
1282601388
|
{
"authors": [
"baruva",
"willbarton"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4584",
"repo": "cfpb/consumerfinance.gov",
"url": "https://github.com/cfpb/consumerfinance.gov/pull/7117"
}
|
gharchive/pull-request
|
Inactive users cron job
Created a cron job for our inactive users audit. Scheduled to run weekly on Prod; it will never run locally, as it's not really needed for local development.
Additions
New Cronjob to audit inactive users, suspended locally and run weekly on prod
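A sketch of what such a manifest might look like — name, image, and command are all placeholders, and "weekly" is rendered here as Sundays at midnight:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: inactive-users-audit          # placeholder name
spec:
  schedule: "0 0 * * 0"               # weekly (Sunday 00:00)
  suspend: false                      # set to true locally, where the audit isn't needed
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: audit
              image: cfgov:latest     # placeholder image
              command: ["python", "manage.py", "inactive_users"]  # hypothetical command
```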
Removals
Changes
How to test this PR
Screenshots
Notes and todos
Checklist
[ ] PR has an informative and human-readable title
PR titles are used to generate the change log in releases; good ones make that easier to scan.
Consider prefixing, e.g., "Mega Menu: fix layout bug", or "Docs: Update Docker installation instructions".
[ ] Changes are limited to a single goal (no scope creep)
[ ] Code follows the standards laid out in the CFPB development guidelines
[ ] Future todos are captured in comments and/or tickets
[ ] Project documentation has been updated, potentially one or more of:
This repo’s docs (edit the files in the /docs folder) – for basic, close-to-the-code docs on working with this repo
CFGOV/platform wiki on GHE – for internal CFPB developer guidance
CFPB/hubcap wiki on GHE – for internal CFPB design and content guidance
Front-end testing
Browser testing
Visually tested in the following supported browsers:
[ ] Firefox
[ ] Chrome
[ ] Safari
[ ] Edge 18 (the last Edge prior to it switching to Chromium)
[ ] Internet Explorer 11 and 8 (via emulation in 11's dev tools)
[ ] Safari on iOS
[ ] Chrome on Android
Accessibility
[ ] Keyboard friendly (navigable with tab, space, enter, arrow keys, etc.)
[ ] Screen reader friendly
[ ] Does not introduce new errors or warnings in WAVE
Other
[ ] Is useable without CSS
[ ] Is useable without JS
[ ] Does not introduce new lint warnings
[ ] Flexible from small to large screens
I'm going to close this PR for now. We're moving to Single Sign-On, and our inactive user audit is only relevant for production, and we won't be in production in EKS for a while. If we're not on SSO by then, we'll revisit this.
|
2025-04-01T06:38:10.269375
| 2024-02-15T22:32:26
|
2137557566
|
{
"authors": [
"Michaeldremy",
"jmurphy-asurity"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4585",
"repo": "cfpb/hmda-combined-documentation",
"url": "https://github.com/cfpb/hmda-combined-documentation/issues/48"
}
|
gharchive/issue
|
Missing spaces in hmda 2023 fig
Many edit texts have some words that are run together without spaces, like invalidIncomewas in V654 and V655.
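Missing spaces like this are easy to flag mechanically; a hypothetical sweep over the FIG text might look like:

```python
import re

def find_run_together(text):
    """Flag tokens where a lowercase run collides with a capitalized
    word, e.g. 'invalidIncomewas' -> likely a missing space."""
    return re.findall(r"\b[a-z]+[A-Z][A-Za-z]*", text)
```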
Above issue has been fixed along with other found formatting issues: https://github.com/cfpb/hmda-frontend/issues/2263
|
2025-04-01T06:38:10.271601
| 2024-09-19T18:29:02
|
2537087009
|
{
"authors": [
"billhimmelsbach"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4586",
"repo": "cfpb/sbl-frontend",
"url": "https://github.com/cfpb/sbl-frontend/issues/947"
}
|
gharchive/issue
|
[Update your financial institution profile] Send full institution data with every request
Every submission to the mail API should include the full data of the institution, to minimize the amount of research required to address issues.
We'll be releasing this one after the bug bash next Thursday (November 7th 2024)
|
2025-04-01T06:38:10.363771
| 2023-01-15T18:42:08
|
1533950422
|
{
"authors": [
"kitblake",
"okpoEkpenyong"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4587",
"repo": "chaineresearch/datastreams",
"url": "https://github.com/chaineresearch/datastreams/issues/43"
}
|
gharchive/issue
|
In publish/2: keep the existing Sample File field and add a Datastream API Docs URL field
It was a mistake to use the Sample File field for the API docs URL. We should have kept it to be consistent with the Market, and added another field for the API docs. Easy to say in retrospect, but it'll be better functionality.
After adding the field we'll need to re-activate the Sample File field in the public view, which displays below the asset description.
The mockup will be updated to reflect these changes and specify the page layout.
Here's the updated mockup of publish/2:
https://htmlpreview.github.io/?https://github.com/chaineresearch/assets/blob/main/market_mockups/publish.html
Besides adding the original "Sample File" field back in, nothing else has changed.
Here's the updated mockup of the Preview (which contains the public page layout too):
https://htmlpreview.github.io/?https://github.com/chaineresearch/assets/blob/main/market_mockups/preview.html
Changes:
The original SAMPLE DATA element needs to be turned on again.
The DATASTREAM API DOCUMENTATION element has moved to under the SAMPLE DATA element.
The DATASTREAM API DOCUMENTATION element title has changed (to "DATASTREAM API DOCUMENTATION").
The content of the link to the API docs is no longer a URL. It's a text description, following the Ocean format.
No CSS adjustments should be needed; it's all default Ocean house style.
Noted
|
2025-04-01T06:38:10.366643
| 2022-08-24T23:26:16
|
1350124779
|
{
"authors": [
"SoMuchForSubtlety",
"kaniini"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4588",
"repo": "chainguard-dev/apko",
"url": "https://github.com/chainguard-dev/apko/issues/335"
}
|
gharchive/issue
|
Multi-arch build fails in GitHub action with /bin/busybox: Exec format error
config:
contents:
repositories:
- https://dl-cdn.alpinelinux.org/alpine/latest-stable/main
- https://dl-cdn.alpinelinux.org/alpine/latest-stable/community
packages:
- alpine-baselayout
- ffmpeg
accounts:
groups:
- groupname: svc
gid: 10000
users:
- username: svc
uid: 10000
run-as: svc
archs:
- amd64
- arm64
workflow file: https://github.com/MemeLabs/strims/actions/runs/2922854653/workflow
logs: https://github.com/MemeLabs/strims/runs/8005777547?check_suite_focus=true#step:4:244
relevant part:
Aug 24 23:06:05.423 [INFO] [arch:aarch64] creating group 10000(svc)
Aug 24 23:06:05.425 [INFO] [arch:aarch64] [cmd:/bin/busybox] [use-proot:false] [use-qemu:] running: /usr/sbin/chroot /tmp/apko-1875477164/aarch64 /bin/busybox --install -s
Aug 24 23:06:05.427 [DEBUG] [arch:aarch64] [cmd:/bin/busybox] [use-proot:false] [use-qemu:] chroot: can't execute '/bin/busybox': Exec format error
Error: failed to build layer image for "arm64": failed to install busybox symlinks: failed to install busybox symlinks: exit status 126
2022/08/24 23:07:02 error during command execution: failed to build layer image for "arm64": failed to install busybox symlinks: failed to install busybox symlinks: exit status 126
Running the same command locally works as expected.
You need to do the docker/setup-binfmt-action before using apko on github actions, or pass --arch x86_64.
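For reference — the action referred to is presumably docker/setup-qemu-action, which registers the binfmt handlers; in the workflow that would look roughly like (version tag assumed):

```yaml
steps:
  - uses: actions/checkout@v3
  # Register QEMU binfmt handlers so aarch64 binaries (like /bin/busybox
  # in the arm64 chroot) can execute on the amd64 runner.
  - uses: docker/setup-qemu-action@v2
  # ... apko build step unchanged ...
```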
|
2025-04-01T06:38:10.387329
| 2024-02-06T02:31:44
|
2119814343
|
{
"authors": [
"stormqueen1990"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4589",
"repo": "chainguard-images/images",
"url": "https://github.com/chainguard-images/images/pull/2168"
}
|
gharchive/pull-request
|
WIP: add logstash image
New Image Pull Request Template
Image Size
[ ] The Image is smaller in size than its common public counterpart.
[ ] The Image is larger in size than its common public counterpart (please explain in the notes).
Notes:
Image Vulnerabilities
[ ] The Grype vulnerability scan returned 0 CVE(s).
[ ] The Grype vulnerability scan returned > 0 CVE(s) (please explain in the notes).
Notes:
Image Tagging
[ ] The image is not tagged with version tags.
[ ] The image is tagged with :latest
[ ] The image is not tagged with :latest (please explain in the notes).
Notes:
Basic Testing - K8s cluster
[ ] The container image was successfully loaded into a kind cluster.
[ ] The container image could not be loaded into a kind cluster (please explain in the notes).
Notes:
Basic Testing - Package/Application
[ ] The application is accessible to the user/cluster/etc. after start-up.
[ ] The application is not accessible to the user/cluster/etc. after start-up. (please explain in the notes).
Notes:
Helm
[ ] A Helm chart has been provided and the container image can be used with the chart. If needed, please add a -compat package to close any gaps with the public helm chart.
[ ] A Helm chart has been provided and the container image is not working with the chart (please explain in the notes).
[ ] A Helm chart was not provided.
Notes:
Processor Architectures
[ ] The image was built and tested for x86_64.
[ ] The image could not be built for x86_64 (please explain in the notes).
[ ] The image was built and tested for aarch64.
[ ] The image could not be built for aarch64. (please explain in the notes).
Notes:
Functional Testing + Documentation
[ ] Functional tests have been included and the tests are passing. All tests have been documented in the notes section.
Notes:
Environment Testing + Documentation
[ ] There has not been a request and/or there is no indication that this image needs tested on a public cloud provider.
[ ] The container image has been tested successfully on a public cloud provider (AWS, GCP, Azure).
[ ] The container image has not been tested successfully on a public cloud provider (AWS, GCP, Azure) (please explain in the notes).
Notes:
Version
[ ] The package version is the latest version of the package. The latest tag points to this version.
[ ] The package version is the not the latest version of the package (please explain in the notes).
Notes:
Dev Tag Availability
[ ] There is a dev tag available that includes a shell and apk tools (by depending on 'wolfi-base')
[ ] There is not a dev tag available that includes a shell and apk tools (by depending on 'wolfi-base') (please explain in the notes).
Notes:
Access Control + Authentication
[ ] The image runs as nonroot and GID/UID are set to 65532 or upstream default
[ ] Alternatively the username and GID/UID may be a commonly used one from the ecosystem e.g: postgres
[ ] The image requires a non-standard username or non-standard GID/UID (please explain in the notes).
ENTRYPOINT
[ ] applications/servers/utilities set to call main program with no arguments e.g. [redis-server]
[ ] applications/servers/utilities not set to call main program with no arguments e.g. [redis-server] (please explain in the notes)
[ ] base images leave empty.
[ ] base image and not empty (please explain in the notes).
[ ] dev variants is set to entrypoint script that falls back to system.
[ ] dev variants is not set to entrypoint script that falls back to system (please explain in the notes).
CMD
[ ] For server applications give arguments to start in daemon mode (may be empty)
[ ] For utilities/tooling bring up help e.g. –help
[ ] For base images with a shell, call it e.g. [/bin/sh]
Environment Variables
[ ] Environment variables added.
[ ] Environment variables not added and not required.
SIGTERM
[ ] The image responds to SIGTERM (e.g., docker kill $(docker run -d --rm cgr.dev/chainguard/nginx))
Logs
[ ] Error logs write to stderr and normal logs to stdout. Logs DO NOT write to file.
Documentation - README
[ ] A README file has been provided and it follows the README template.
Superseded by chainguard-images/images#2197
|
2025-04-01T06:38:10.394119
| 2019-10-10T07:46:36
|
505096804
|
{
"authors": [
"icoxfog417",
"leventm"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4590",
"repo": "chakki-works/doccano",
"url": "https://github.com/chakki-works/doccano/issues/394"
}
|
gharchive/issue
|
Hi, the doccano localhost page doesn't open up once the machine is powered off and on again. Please help me out.
If you open a GitHub issue, here is our policy:
It must be a bug, a feature request, or a significant problem with documentation (for small docs fixes please send a PR instead).
The form below must be filled out.
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
Python version:
Describe the problem
Describe the problem clearly here. Be sure to convey here why it's a bug or a feature request.
Source code / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.
Please follow the ISSUE_TEMPLATE.
|
2025-04-01T06:38:10.399225
| 2023-09-08T10:39:18
|
1887406621
|
{
"authors": [
"rudeayelo",
"segunadebayo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4591",
"repo": "chakra-ui/panda",
"url": "https://github.com/chakra-ui/panda/issues/1337"
}
|
gharchive/issue
|
Missing strokeWidth from SystemProperties type
Description
I'm styling an SVG element and the strokeWidth property seems to be missing from the SystemProperties type definition that's generated by the codegen command:
Link to Reproduction
https://stackblitz.com/edit/vitejs-vite-eqkrvv?file=src/App.tsx&terminal=dev
Steps to reproduce
Write a strokeWidth property in a cva/css declaration and see the TS error popping up in the IDE.
I tried to reproduce in a StackBlitz container but TypeScript is not as picky there.
JS Framework
React (TS)
Panda CSS Version
0.9.0
Browser
Not relevant
Operating System
[X] macOS
[ ] Windows
[ ] Linux
Additional Information
No response
Hi @rudeayelo,
Kindly update to the latest version. SVG properties are supported there.
|
2025-04-01T06:38:10.413346
| 2023-11-28T10:44:59
|
2014150686
|
{
"authors": [
"Andarist",
"alvesvaren",
"ludovicm67"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:4592",
"repo": "changesets/changesets",
"url": "https://github.com/changesets/changesets/issues/1266"
}
|
gharchive/issue
|
pkg:cli missing ./bin.js in exports
Affected Packages
pkg:cli
Problem
I'm getting this error in GitHub Actions after upgrading @changesets/cli to 2.27.0:
Error: Error [ERR_PACKAGE_PATH_NOT_EXPORTED]: Package subpath './bin.js' is not defined by "exports" in<EMAIL_ADDRESS>Error: Package subpath './bin.js' is not defined by "exports" in<EMAIL_ADDRESS>
This is when changeset tag is called.
Proposed solution
Add ./bin.js in the exports in the package.json file.
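For illustration, the missing entry would look something like this in the CLI package's exports map (surrounding entries and exact paths assumed):

```json
{
  "exports": {
    ".": "./dist/index.js",
    "./bin.js": "./bin.js",
    "./package.json": "./package.json"
  }
}
```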
I get the same error
Thanks for the info - I located the problem to be located here.
Great, thanks for the quick fix! 👍
|