| added (string; 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us]; 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string; length 4 to 10) | metadata (dict) | source (string; 2 classes) | text (string; length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:38:18.588784
| 2020-09-15T22:12:20
|
702307413
|
{
"authors": [
"michaeljaltamirano"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5028",
"repo": "curology/radiance-ui",
"url": "https://github.com/curology/radiance-ui/pull/371"
}
|
gharchive/pull-request
|
[Accordion] Update Accordion.Container-based Styling
This PR sets out to make the following changes to the Accordion component:
Update the box-shadow property.
Add a default 4px border-radius.
Add the following border-radius behavior: only the top-most and bottom-most accordion elements should be rounded.
The first two updates are relatively trivial. The third required expanding the styling functionality of Accordion.Container, which now has additional CSS selectors that handle the border-radius and focus-outline requirements.
border-radius is set via long-hand properties border-top-left-radius, border-top-right-radius, border-bottom-left-radius, and border-bottom-right-radius.
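A minimal sketch of the idea described above (this is not the component's actual code; the child selector, the `ACCORDION_RADIUS` constant, and the plain template-string form are all illustrative assumptions):

```javascript
// Illustrative sketch only: round just the top corners of the first
// accordion item and just the bottom corners of the last one, using
// the long-hand border-radius properties named in the PR.
const ACCORDION_RADIUS = '4px'; // assumed default radius from the PR description

const containerStyles = `
  > *:first-of-type {
    border-top-left-radius: ${ACCORDION_RADIUS};
    border-top-right-radius: ${ACCORDION_RADIUS};
  }
  > *:last-of-type {
    border-bottom-left-radius: ${ACCORDION_RADIUS};
    border-bottom-right-radius: ${ACCORDION_RADIUS};
  }
`;
```

Scoping the rules to the container (rather than each item) is what lets only the outermost items pick up the rounding.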
This PR also updates all documentation instances of <Accordion> to be wrapped with <Accordion.Container>.
You can play with the Review App here: https://curology-radiance-pr-371.herokuapp.com/
Before:
After:
Need to refactor the styling since the box-shadows are not applying accurately anymore in TitleWrapper.
|
2025-04-01T06:38:18.592433
| 2014-08-13T18:04:47
|
40181823
|
{
"authors": [
"gbence"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5029",
"repo": "cursorinsight/ci-trap-web",
"url": "https://github.com/cursorinsight/ci-trap-web/issues/11"
}
|
gharchive/issue
|
Add tests with timed mouseMove events
In Karma tests we're unable (or I don't know how) to trigger mouseMove events with given timing information, which is essential in our case. :)
Currently, we have tests to cover events with synthetic timestamps (test/trap.time-property.test.js), and tests to cover various timeouts, including buffer timeouts and idle timeouts (test/trap.{buffer,idle}-timeout.test.js).
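One way to attach synthetic timing to a dispatched event is to shadow the read-only `timeStamp` getter with an own property on the event instance. A sketch (whether Karma's target browsers and the trap code under test honor the shadowed value downstream is an assumption; here the event is only constructed, not dispatched on a DOM element):

```javascript
// Sketch: build a mousemove-style event carrying a chosen timestamp.
// Event#timeStamp is normally assigned by the browser via a prototype
// getter; an own data property on the instance takes precedence.
function makeTimedEvent(type, timeStampMs) {
  const ev = new Event(type, { bubbles: true });
  Object.defineProperty(ev, 'timeStamp', {
    value: timeStampMs,
    configurable: true,
  });
  return ev;
}

const ev = makeTimedEvent('mousemove', 1234.5);
```

In a Karma test the returned event would then be dispatched on the element the trap listens to, so the handler sees the synthetic timing.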
|
2025-04-01T06:38:18.598164
| 2021-06-30T13:29:41
|
933711331
|
{
"authors": [
"AndreasArvidsson",
"pokey"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5030",
"repo": "cursorless-dev/cursorless",
"url": "https://github.com/cursorless-dev/cursorless/issues/51"
}
|
gharchive/issue
|
Support "every <scope>" / "<ordinal> <scope>"
The goal
For example:
"every line"
"every funk"
"every line in class"
"first funk"
"last line in funk"
See https://github.com/pokey/cursorless-vscode/wiki/Target-overhaul for many more examples
[x] Also support "past last", so eg "past last item air", "past last funk air". This would target from the scope containing the mark through the last instance of the scope in its iteration scope
[ ] Add expansion tests (see also #883)
Implementation
This implementation will rely on #210 and #69.
See https://github.com/cursorless-dev/cursorless/issues/797
Notes
This functionality subsumes today's every funk and first char / last word etc
Questions
How do we handle ranges such as today's first past third word or the future first past third funk?
This one will be great once we have full compositionality
Is there more work to be done on this?
Looks done to me
|
2025-04-01T06:38:18.605264
| 2024-06-25T15:06:46
|
2372939978
|
{
"authors": [
"1klap",
"jathayde",
"t3k4y"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5031",
"repo": "curtis/honeypot-captcha",
"url": "https://github.com/curtis/honeypot-captcha/issues/99"
}
|
gharchive/issue
|
Unknown action ':protect_from_spam' error on default Ruby on Rails 7.1 projects
Describe the bug
The gem causes Ruby on Rails projects with version 7.1.0 or higher to break for all routes if the setting
config.action_controller.raise_on_missing_callback_actions = true is present in an environment (default for development and test).
To Reproduce
Steps to reproduce the behavior:
run rails _<IP_ADDRESS>_ new example
cd example
add gem 'honeypot-captcha' to Gemfile
run bundle install
run bin/rails s
go to localhost:3000
see error below
Unknown action
The create action could not be found for the :protect_from_spam
callback on Rails::WelcomeController, but it is listed in the controller's
:only option.
Raising for missing callback actions is a new default in Rails 7.1, if you'd
like to turn this off you can delete the option from the environment configurations
or set `config.action_controller.raise_on_missing_callback_actions` to `false`.
Expected behavior
The gem should work out of the box with a RoR application. I didn't find a configuration option or documentation to avoid this error other than disabling the setting in Rails.
Screenshots
Screenshot with the error
Desktop:
OS: Ubuntu 22.04
Browser Firefox
Version 127.0.1
Smartphone:
Not applicable
Additional context
The setting can be found in the Rails project in config/environments/development.rb
+1 experiencing this. Can turn off with the development.rb and test.rb switches for missing callback actions.
Same here... @curtis did you find something to fix this?
|
2025-04-01T06:38:18.608935
| 2021-01-31T21:21:52
|
797815228
|
{
"authors": [
"alandtse",
"sleon76"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5032",
"repo": "custom-components/alexa_media_player",
"url": "https://github.com/custom-components/alexa_media_player/issues/1157"
}
|
gharchive/issue
|
Integration does not validate
Describe the bug
Is there a problem with the integration? Today it stopped working and does not validate the connection from my cell phone to Amazon.
Screenshots
System details
Home-assistant (2021.1.5):
Hassio (2021.01.7):
Logs
Logger: homeassistant
Source: runner.py:99
First occurred: 22:21:18 (2 occurrences)
Last logged: 22:21:18
Error doing job: Unclosed client session
Error doing job: Unclosed connector
Enable 2FA.
|
2025-04-01T06:38:18.674480
| 2020-09-03T05:52:40
|
691663710
|
{
"authors": [
"alandtse",
"patraRajesh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5033",
"repo": "custom-components/alexa_media_player",
"url": "https://github.com/custom-components/alexa_media_player/issues/902"
}
|
gharchive/issue
|
Unable to control echo dot
Describe the bug
Changes made on the Echo Dot are visible in Home Assistant, but commands sent from HA are not processed.
Screenshots
System details
Home-assistant (version): 0.114.2
Hassio (Yes/No): (Please note you may have to restart hassio 2-3 times to load the latest version of alexapy after an update. This looks like a HA bug).
alexa_media (version from const.py or HA startup): v2.10.6
alexapy (version from pip show alexapy or HA startup):
Logs
2020-09-03 11:14:14 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for hacs which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2020-09-03 11:14:14 WARNING (MainThread) [homeassistant.loader] You are using a custom integration for alexa_media which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant.
2020-09-03 11:14:33 DEBUG (MainThread) [custom_components.alexa_media] Nothing to import from configuration.yaml, loading from Integrations
2020-09-03 11:14:34 INFO (MainThread) [custom_components.alexa_media]
alexa_media
Version: 2.10.6
This is a custom component
If you have any issues with this you need to open an issue here:
https://github.com/custom-components/alexa_media_player/issues
2020-09-03 11:14:34 INFO (MainThread) [custom_components.alexa_media] Loaded alexapy==1.13.1
2020-09-03 11:14:34 DEBUG (MainThread) [alexapy.alexalogin] Trying to load pickled cookie from file<EMAIL_ADDRESS>
2020-09-03 11:14:35 DEBUG (MainThread) [alexapy.alexalogin] Trying to load aiohttpCookieJar to session
2020-09-03 11:14:35 DEBUG (MainThread) [alexapy.alexalogin] Loaded 8 cookies
2020-09-03 11:14:35 DEBUG (MainThread) [alexapy.alexalogin] Using cookies to log in
2020-09-03 11:14:38 DEBUG (MainThread) [alexapy.alexalogin] GET:
2020-09-03 11:14:38 DEBUG (MainThread) [alexapy.alexalogin] Logged in as @gmail.com with id: ********
2020-09-03 11:14:38 DEBUG (MainThread) [alexapy.alexalogin] Log in successful with cookies
2020-09-03 11:14:38 DEBUG (MainThread) [custom_components.alexa_media] Testing login status: {'login_successful': True}
2020-09-03 11:14:38 DEBUG (MainThread) [custom_components.alexa_media] Setting up Alexa devices for r1@gm
2020-09-03 11:14:38 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Websocket created: <alexapy.alexawebsocket.WebsocketEchoClient object at 0x6d65ecb8>
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Initating Async Handshake.
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Starting message parsing loop.
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Received WebSocket: 0x37a3b607 0x0000009c {"protocolName":"A:H","parameters":{"AlphaProtocolHandler.maxFragmentSize":"16000","AlphaProtocolHandler.receiveWindowSize":"16"}}TUNE
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Encoding WebSocket Handshake MSG.
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Encoding Gateway Handshake MSG.
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Encoding Gateway Register MSG.
2020-09-03 11:14:47 DEBUG (MainThread) [alexapy.alexawebsocket] Encoding PING.
2020-09-03 11:14:47 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Websocket succesfully connected
2020-09-03 11:14:47 DEBUG (MainThread) [custom_components.alexa_media] Creating coordinator
2020-09-03 11:14:47 DEBUG (MainThread) [custom_components.alexa_media] Refreshing coordinator
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/devices-v2/device returned 200:OK:application/json
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/dnd/device-status-list returned 200:OK:application/json
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/bluetooth?cached=false returned 200:OK:application/json
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexawebsocket] Received WebSocket: MSG 0x0000036...............END FABE
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexawebsocket] Received ACK MSG for Registration.
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/bootstrap returned 200:OK:application/json
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/device-preferences returned 200:OK:application/json
2020-09-03 11:14:48 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/notifications returned 200:OK:application/json
2020-09-03 11:14:48 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Found 3 devices, 3 bluetooth
2020-09-03 11:14:49 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/notifications returned 200:OK:application/json
2020-09-03 11:14:49 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Updated 0 notifications for 1 devices at 2020-09-03 11:14:49.460967+05:30
2020-09-03 11:14:50 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/activities?startTime=&size=10&offset=1 returned 200:OK:application/json
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Updated last_called: {'serialNumber': '0e33', 'timestamp':<PHONE_NUMBER>948}
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: last_called changed: to {'serialNumber': '0e33', 'timestamp':<PHONE_NUMBER>948}
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] raj's Echo Dot: Locale en-in timezone Asia/Kolkata
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] raj's Echo Dot: DND False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] This Device: Locale en-us timezone None
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] This Device: DND False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] raj's Alexa Apps: Locale en-us timezone None
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] raj's Alexa Apps: DND False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Existing: [] New: ["raj's Echo Dot", 'This Device', "raj's Alexa Apps"]; Filtered out by not being in include: [] or in exclude: []
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] Loading media_player
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media] Finished fetching alexa_media data in 2.791 seconds
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.helpers] alexa_media.notify.async_get_service: Trying with limit 5 delay 2 catch_exceptions True
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.notify] r1@gm: Media player G09F not loaded yet; delaying load
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.helpers] alexa_media.notify.async_get_service: Try: 1/5 after waiting 0 seconds result: False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.media_player] r1@gm: Refreshing This Device
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.media_player] This Device: Last_called check: self: 2712 reported:
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.helpers] r1@gm: Adding [<Entity raj's Echo Dot: unavailable>, , <Entity raj's Alexa Apps: unavailable>]
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Loading switches
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Found G09F dnd switch with status: False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Found G09F shuffle switch with status: False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Found G09F repeat switch with status: False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Found 2712 dnd switch with status: False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Skipping shuffle for 2712
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Skipping repeat for 2712
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Found 0e33 dnd switch with status: False
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Skipping shuffle for 0e33
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.switch] r1@gm: Skipping repeat for 0e33
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.helpers] r1@gm: Adding [<Entity raj's Echo Dot do not disturb switch: off>, <Entity raj's Echo Dot shuffle switch: off>, <Entity raj's Echo Dot repeat switch: off>, , <Entity raj's Alexa Apps do not disturb switch: off>]
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.sensor] r1@gm: Loading sensors
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.sensor] r1@gm: Found G09F Alarm sensor (0) with next: unavailable
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.sensor] r1@gm: Found G09F Timer sensor (0) with next: unavailable
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.sensor] r1@gm: Found G09F Reminder sensor (0) with next: unavailable
2020-09-03 11:14:50 DEBUG (MainThread) [custom_components.alexa_media.helpers] r1@gm: Adding [<Entity raj's Echo Dot next Alarm: unavailable>, <Entity raj's Echo Dot next Timer: unavailable>, <Entity raj's Echo Dot next Reminder: unavailable>]
2020-09-03 11:14:52 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/phoenix returned 200:OK:application/json
2020-09-03 11:14:52 DEBUG (MainThread) [custom_components.alexa_media.alarm_control_panel] r1@gm: No Alexa Guard entity found
2020-09-03 11:14:52 DEBUG (MainThread) [custom_components.alexa_media.alarm_control_panel] r1@gm: Skipping creation of uninitialized device:
2020-09-03 11:14:54 DEBUG (MainThread) [custom_components.alexa_media.helpers] alexa_media.notify.async_get_service: Try: 2/5 after waiting 4 seconds result: <custom_components.alexa_media.notify.AlexaNotificationService object at 0x69c29640>
2020-09-03 11:14:54 DEBUG (MainThread) [custom_components.alexa_media.helpers] alexa_media.notify.async_get_service: Trying with limit 5 delay 2 catch_exceptions True
2020-09-03 11:14:54 DEBUG (MainThread) [custom_components.alexa_media.helpers] alexa_media.notify.async_get_service: Try: 1/5 after waiting 0 seconds result: <custom_components.alexa_media.notify.AlexaNotificationService object at 0x6fe31b08>
2020-09-03 11:15:00 DEBUG (MainThread) [custom_components.alexa_media.media_player] Disabling polling for raj's Echo Dot
2020-09-03 11:15:00 DEBUG (MainThread) [custom_components.alexa_media.media_player] r1@gm: Refreshing This Device
2020-09-03 11:15:00 DEBUG (MainThread) [custom_components.alexa_media.media_player] This Device: Last_called check: self: 2712 reported: 0e33
2020-09-03 11:15:00 DEBUG (MainThread) [custom_components.alexa_media.media_player] Disabling polling for This Device
2020-09-03 11:15:00 DEBUG (MainThread) [custom_components.alexa_media.media_player] Disabling polling for raj's Alexa Apps
2020-09-03 11:24:51 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/dnd/device-status-list returned 200:OK:application/json
2020-09-03 11:24:51 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/notifications returned 200:OK:application/json
2020-09-03 11:24:51 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/bluetooth?cached=false returned 200:OK:application/json
2020-09-03 11:24:51 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/device-preferences returned 200:OK:application/json
2020-09-03 11:24:51 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/devices-v2/device returned 200:OK:application/json
2020-09-03 11:24:51 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Found 3 devices, 3 bluetooth
2020-09-03 11:24:51 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/notifications returned 200:OK:application/json
2020-09-03 11:24:51 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Updated 0 notifications for 1 devices at 2020-09-03 11:24:51.694741+05:30
2020-09-03 11:24:52 DEBUG (MainThread) [alexapy.alexaapi] static GET: https://alexa.amazon.com/api/activities?startTime=&size=10&offset=1 returned 200:OK:application/json
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Updated last_called: {'serialNumber': '0e33', 'timestamp':<PHONE_NUMBER>948}
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] raj's Echo Dot: Locale en-in timezone Asia/Kolkata
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] raj's Echo Dot: DND False
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] This Device: Locale en-us timezone None
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] This Device: DND False
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] raj's Alexa Apps: Locale en-us timezone None
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] raj's Alexa Apps: DND False
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Existing: [<Entity raj's Echo Dot: unavailable>, , <Entity raj's Alexa Apps: unavailable>] New: []; Filtered out by not being in include: [] or in exclude: []
2020-09-03 11:24:52 DEBUG (MainThread) [custom_components.alexa_media] Finished fetching alexa_media data in 2.809 seconds
2020-09-03 11:31:02 DEBUG (MainThread) [alexapy.alexawebsocket] Received WebSocket: MSG
2020-09-03 11:31:02 DEBUG (MainThread) [alexapy.alexawebsocket] Received Standard MSG.
2020-09-03 11:31:02 DEBUG (MainThread) [custom_components.alexa_media] r1@gm: Received websocket command: PUSH_EQUALIZER_STATE_CHANGE : {'destinationUserId': 'AS', 'dopplerId': {'deviceType': 'C', 'deviceSerialNumber': 'G***F'}, 'bass': 0, 'midrange': 0, 'treble': 0}
Additional context
Add any other context about the problem here.
Please confirm your region is amazon.com.
|
2025-04-01T06:38:18.683435
| 2019-07-12T09:44:43
|
467321944
|
{
"authors": [
"Chimestrike",
"eterpstra",
"gmkfak",
"ludeeus"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5034",
"repo": "custom-components/hacs",
"url": "https://github.com/custom-components/hacs/issues/262"
}
|
gharchive/issue
|
Integration not found: hacs
Hello,
I am currently on a fresh installation of HASS.IO on Docker. The path in Docker for hassio is /user/share/hassio/homeassistant/
I created the folder custom_components and downloaded the git files / hacs files.
hacs is placed into /custom_components.
In the configuration.yaml I placed this:
hacs:
  token: 4878f7a6265xxxxxxxxxxxxxxxxxxxxxxxxxf9a4
However, HA will not restart due to this error:
Integration not found: hacs
What am I doing wrong?
Restart HA one time before adding it to config.
I had a fully working install of HACS until about a week ago. I had to restart my machine, and when it came back up I no longer had the ingress menu and I received the "Invalid config" message in HA. I've reinstalled HACS from fresh and have also had two HA upgrades this week, and still no luck.
A few of the guys in our group have had similar issues, but the error shows as below, so this could be a separate issue and not just a single install error:
Fri Jul 26 2019 11:06:37 GMT+0100 (British Summer Time)
Error during setup of component hacs
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/setup.py", line 153, in _async_setup_component
hass, processed_config)
File "/config/custom_components/hacs/__init__.py", line 69, in async_setup
await configure_hacs(hass, config[DOMAIN], config_dir)
File "/config/custom_components/hacs/__init__.py", line 171, in configure_hacs
hacs.store.restore_values()
File "/config/custom_components/hacs/hacsbase/data.py", line 102, in restore_values
store = self.read()
File "/config/custom_components/hacs/hacsbase/data.py", line 41, in read
content = json.loads(content)
File "/usr/local/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.7/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 3426 column 13 (char 122870)
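The traceback points at a corrupted (truncated) JSON store file: a write cut off mid-string produces exactly this "Unterminated string" class of parse error. HACS itself is Python, but the failure mode and the defensive-read pattern are the same in any language; a JavaScript sketch (the sample data and fallback shape are illustrative, not HACS's actual store format):

```javascript
// Simulate a store file whose last write was cut off mid-string,
// producing the same "Unterminated string" class of JSON error.
const truncated = '{"repositories": {"name": "cust'; // missing closing quote and braces

// Defensive read: a corrupted store should not abort setup;
// fall back to a known-good default instead.
function readStore(raw, fallback) {
  try {
    return JSON.parse(raw);
  } catch (err) {
    return fallback;
  }
}

const store = readStore(truncated, { repositories: {} });
```

In practice the user-level fix reported in threads like this is to delete or restore the corrupted store file so it can be regenerated.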
Hi,
I had the same problem.
I removed
hacs:
  token: 4878f7a6265xxxxxxxxxxxxxxxxxxxxxxxxxf9a4
and rebooted, and found HACS in the Integrations section.
|
2025-04-01T06:38:18.695099
| 2023-08-17T11:00:49
|
1854772277
|
{
"authors": [
"swebe3qn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5035",
"repo": "cuttle-cards/cuttle",
"url": "https://github.com/cuttle-cards/cuttle/pull/515"
}
|
gharchive/pull-request
|
Issue #513: Implemented fallback value for color prop of BaseSnackbar…
… and removed the color prop from its consumers.
Issue number
Relevant issue number
Resolves #513
Please check the following
[x] Do the tests still pass? (see Run the Tests)
[x] Is the code formatted properly? (see Linting (Formatting))
For New Features:
[ ] Have tests been added to cover any new features or fixes?
[ ] Has the documentation been updated accordingly?
Please describe additional details for testing this change
Thanks for your feedback! Just pushed another commit.
Done.
|
2025-04-01T06:38:18.700241
| 2023-08-03T12:32:39
|
1834973492
|
{
"authors": [
"atomishcv",
"rpautrat"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5036",
"repo": "cvg/SOLD2",
"url": "https://github.com/cvg/SOLD2/issues/87"
}
|
gharchive/issue
|
about the line_match module
Hello, I want to change the matching module to something like the SuperGlue method. Is it possible to get a better match?
Hi, I am sorry, I can't understand your question.
Are you asking whether one could obtain better line matches by using techniques as in SuperGlue? If so, the answer is yes, and we already published a work on that: https://github.com/cvg/GlueStick.
Yes, I have the same idea. Thank you very much!
|
2025-04-01T06:38:18.702133
| 2016-03-31T21:27:17
|
145023106
|
{
"authors": [
"Unstinted",
"cviebrock"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5037",
"repo": "cviebrock/eloquent-sluggable",
"url": "https://github.com/cviebrock/eloquent-sluggable/issues/239"
}
|
gharchive/issue
|
Slug in URL issue
I want to display some details from a table (no foreign key) using a slug URL, but I am having an issue with it. Below is my code.
public function getView($slug) {
    $cstool = CsTool::where('slug', $slug)->first();
    if ($cstool) {
        return View::make('childict.softview')
            ->with('cstool', $cstool);
    }
}
What's the issue? That code looks fine to me (although you don't have any code to handle the case where there is no object with the given slug; maybe ->firstOrFail() would be better?).
|
2025-04-01T06:38:18.759026
| 2021-01-11T16:27:33
|
783516522
|
{
"authors": [
"BradleyBoutcher"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5039",
"repo": "cyberark/cloudfoundry-conjur-buildpack",
"url": "https://github.com/cyberark/cloudfoundry-conjur-buildpack/pull/107"
}
|
gharchive/pull-request
|
Bump version to 2.1.6
What does this PR do?
Bump version to 2.1.6 in preparation for release
What ticket does this PR close?
Resolves #73
Checklists
Change log
[X] The CHANGELOG has been updated, or
[ ] This PR does not include user-facing changes and doesn't require a CHANGELOG update
Test coverage
[ ] This PR includes new unit and integration tests to go with the code changes, or
[X] The changes in this PR do not require tests
Documentation
[ ] Docs (e.g. READMEs) were updated in this PR, and/or there is a follow-on issue to update docs, or
[X] This PR does not require updating any documentation
@izgeri Yes, the notices were updated as part of this commit:
https://github.com/cyberark/cloudfoundry-conjur-buildpack/commit/5ab18790822dd996c9ea8bdc975280c88c12436c
|
2025-04-01T06:38:18.765483
| 2023-01-27T15:41:04
|
1559973972
|
{
"authors": [
"john-odonnell"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5040",
"repo": "cyberark/secretless-broker",
"url": "https://github.com/cyberark/secretless-broker/pull/1484"
}
|
gharchive/pull-request
|
Run go mod tidy
Desired Outcome
Prepare for 1.7.16 release.
Implemented Changes
Update golang.org/x/crypto to v0.5.0
Update CyberArk packages to latest versions
Update NOTICES.txt
Connected Issue/Story
N/A
Definition of Done
At least 1 todo must be completed in the sections below for the PR to be
merged.
Changelog
[ ] The CHANGELOG has been updated, or
[x] This PR does not include user-facing changes and doesn't require a
CHANGELOG update
Test coverage
[ ] This PR includes new unit and integration tests to go with the code
changes, or
[x] The changes in this PR do not require tests
Documentation
[ ] Docs (e.g. READMEs) were updated in this PR
[ ] A follow-up issue to update official docs has been filed here: [insert issue ID]
[x] This PR does not require updating any documentation
Behavior
[ ] This PR changes product behavior and has been reviewed by a PO, or
[ ] These changes are part of a larger initiative that will be reviewed later, or
[x] No behavior was changed with this PR
Security
[ ] Security architect has reviewed the changes in this PR,
[ ] These changes are part of a larger initiative with a separate security review, or
[ ] There are no security aspects to these changes
Sorry, @szh - I pushed a commit fixing Changelog links right as you approved. Quick re-review?
|
2025-04-01T06:38:18.800450
| 2022-07-15T14:22:39
|
1306099203
|
{
"authors": [
"Prakharkarsh1",
"d-kuro",
"filiprafaj",
"masa213f",
"sachinsejwal",
"yamatcha",
"ymmt2005"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5041",
"repo": "cybozu-go/moco",
"url": "https://github.com/cybozu-go/moco/issues/427"
}
|
gharchive/issue
|
Backup to Google Cloud Storage
Hello, I am trying and failing to store backups to Google Cloud Storage.
I have set up Workload Identity to give the k8s service account the permissions to access the bucket.
I have also tried defining the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables.
Still I get
Error: failed to take a full dump: failed to put dump.tar: operation error S3: PutObject, https response error StatusCode: 403, RequestID: , HostID: , api error SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.
Is there anybody who made backups to GCS working?
Thank you!
Backup to GCS is not supported now.
It can be added by implementing Bucket interface for GCS in this package.
https://github.com/cybozu-go/moco/tree/main/pkg/bucket
We welcome pull requests for adding GCS support.
@filiprafaj
GCS supports the S3 compatibility API. (Sorry, I have not verified this.)
Could you please refer to the documentation and try again?
refs:
https://cloud.google.com/storage/docs/interoperability#xml_api
https://vamsiramakrishnan.medium.com/a-study-on-using-google-cloud-storage-with-the-s3-compatibility-api-324d31b8dfeb
If it still doesn't work, it would be helpful if you could report it again, including the definition of BackupPolicy 🙏
https://cybozu-go.github.io/moco/usage.html#backuppolicy
Hi @d-kuro , I have tried now with interoperability credentials and I am getting:
Error: failed to take a full dump: failed to put dump.tar: operation error S3: PutObject, https response error StatusCode: 403, RequestID: , HostID: , api error SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method. failed to take a full dump: failed to put dump.tar: operation error S3: PutObject, https response error StatusCode: 403, RequestID: , HostID: , api error SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.
The BackupPolicy manifest looks like this:
apiVersion: moco.cybozu.com/v1beta2
kind: BackupPolicy
metadata:
  namespace: default
  name: daily
spec:
  schedule: "@daily"
  jobConfig:
    serviceAccountName: moco-test-mysqlcluster
    env:
    - name: AWS_ACCESS_KEY_ID
      value: ***
    - name: AWS_SECRET_ACCESS_KEY
      value: ***
    bucketConfig:
      bucketName: ***
      endpointURL: https://storage.googleapis.com
    workVolume:
      emptyDir: {}
    memory: 1Gi
    maxMemory: 1Gi
    threads: 1
Hi @filiprafaj ,
This issue seems to be a good initiation of my journey towards contribution to FOSS projects.
Can you please assign this to me?
Hii @filiprafaj i want to contribute to this issue
@Prakharkarsh1
Hi,
Thank you for your intention to contribute to this project.
We will review your pull request when it's ready.
@Prakharkarsh1
MOCO uses aws-sdk-go-v2 to connect to s3-compatible storage.
However, aws-sdk-go-v2 is not compatible with third-party platforms and therefore cannot connect GCS.
https://github.com/aws/aws-sdk-go-v2/issues/1816
So it would be better to implement the Bucket interface for GCS in moco/pkg/bucket/gcs.
We use minio to test the S3 bucket implementation.
Likewise, we may use these tools to test a GCS bucket implementation.
https://github.com/oittaa/gcp-storage-emulator
https://github.com/fsouza/fake-gcs-server
@Prakharkarsh1
Hello,
Do you still want to contribute to this feature?
Released https://github.com/cybozu-go/moco/releases/tag/v0.16.1
|
2025-04-01T06:38:18.815483
| 2024-01-30T10:13:32
|
2107451935
|
{
"authors": [
"MyRaspberry",
"cyijun"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5042",
"repo": "cyijun/ESP32MQTTClient",
"url": "https://github.com/cyijun/ESP32MQTTClient/issues/7"
}
|
gharchive/issue
|
question: TLS possible
I currently use PubSubClient v2.8 for an ESP32-S3 Arduino IDE project
and want to use TLS with a HiveMQ free account.
Is that possible?
Hi, you can refer to the functions here.
|
2025-04-01T06:38:18.953890
| 2019-07-26T00:30:40
|
473124388
|
{
"authors": [
"ozars"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5044",
"repo": "cython/cython",
"url": "https://github.com/cython/cython/issues/3055"
}
|
gharchive/issue
|
Advancing iterator in range loop invalidates previously dereferenced values for input iterators in C++
I have a user-defined class implementing the C++ input iterator requirements. It skips an element in the beginning and prints the last element twice when I iterate over it with Cython's range-based loop. Looking into the produced code, I realized the iterator is incremented before the loop body gets executed. Incrementing is done right after the dereferenced iterator value is copied into a temporary. However, incrementing the iterator invalidates previously dereferenced values for input iterators. In my case, it was trying to parse the next node in a stream.
Here is a toy example for demonstration:
# distutils: language = c++
from cython.operator cimport dereference as deref, preincrement as preinc
cdef extern from *:
"""
//#define PRINT() std::cout << __PRETTY_FUNCTION__ << std::endl
#define PRINT()
#include<iostream>
struct CountDown {
struct Iterator {
CountDown* ptr;
Iterator() = default;
Iterator(CountDown* ptr) : ptr(ptr) {}
Iterator& operator++() { PRINT(); ptr->count--; return *this; }
Iterator& operator++(int) { PRINT(); ptr->count--; return *this; }
const int* operator*() { return &ptr->count; }
bool operator!=(const Iterator&) { PRINT(); return ptr->count > 0; }
};
int count;
CountDown() = default;
CountDown(int count) : count(count) {}
Iterator begin() { PRINT(); return Iterator(this); }
Iterator end() { PRINT(); return Iterator(); }
};
"""
cdef cppclass CountDown:
cppclass Iterator:
Iterator()
Iterator operator++()
Iterator operator++(int)
const int* operator*()
bint operator!=(Iterator)
CountDown()
CountDown(int count)
Iterator begin()
Iterator end()
cdef countdown_range():
cdef CountDown cd = CountDown(5)
cdef const int* num
for num in cd:
print(deref(num))
cdef countdown_expected():
cdef CountDown cd = CountDown(5)
cdef CountDown.Iterator it = cd.begin()
while it != cd.end():
print(deref(deref(it)))
it = preinc(it)
print("Actual output:")
countdown_range()
print("Expected output:")
countdown_expected()
Output:
~/tmp/cyissue python3 -c "import example"
Actual output:
4
3
2
1
0
Expected output:
5
4
3
2
1
Here is the related part in the produced code:
/* "example.pyx":42
* cdef CountDown cd = CountDown(5)
* cdef const int* num
* for num in cd: # <<<<<<<<<<<<<<
* print(deref(num))
*
*/
__pyx_t_1 = __pyx_v_cd.begin();
for (;;) {
if (!(__pyx_t_1 != __pyx_v_cd.end())) break;
__pyx_t_2 = *__pyx_t_1;
++__pyx_t_1;
__pyx_v_num = __pyx_t_2;
This isn't a crucial feature since it can be implemented without range-based loop as in the above example, yet this was quite surprising for me and it took some time to figure this out, so I decided to open this issue.
Below part is where this loop translation happens AFAICS:
https://github.com/cython/cython/blob/ac1c9fe47491d01fb80cdde3ccd3e61152a973c7/Cython/Compiler/Nodes.py#L6820-L6832
This translates for s in seq: loop_body expression to something like:
it = iter(seq)
while True:
s = next(it) or break loop
loop_body
whereas correct interpretation for C++ should have been something like:
it = seq.begin()
while True:
if it is seq.end() break loop
s = *it
loop_body
++it
So... It looks like this issue is caused by a subtle difference between the semantics of Python's next and C++'s iterators. In Python, next does what operator++ and operator* together do in C++. I'm not sure how this can be fixed (splitting NextNode into two pieces for C++? Perhaps introducing a ForNode to split the C++ implementation altogether?). Given that the current implementation works fine for the vast majority of cases, it may not be worth the effort, but it may be useful to note this quirk somewhere in the documentation at least.
It turns out I misunderstood the requirements of input iterators. Advancing invalidates any copies of the iterator itself, not copies of the values it points to. value_type v = *it; it++; do_something_with(v); is perfectly valid. Hence, there is nothing wrong with Cython's behavior. Closing this.
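The distinction can be made concrete in plain Python: a copy of the *value* taken before the increment must stay valid for a conforming input iterator, but the toy CountDown above hands out a pointer into state that operator++ mutates, so a copy-then-increment pattern only copies a view. A rough model (all names are illustrative):

```python
class CountDown:
    """Toy 'input iterator' whose dereference returns a view into
    mutable internal state, mimicking the const int* in the C++ toy."""
    def __init__(self, count):
        self.buf = [count]          # operator* hands out this shared buffer

    def deref(self):
        return self.buf             # a view, not a value copy

    def advance(self):
        self.buf[0] -= 1            # mutates what earlier derefs point at

    def done(self):
        return self.buf[0] <= 0

def loop_view_copy(n):
    """Mirrors the generated code when only the *view* is copied."""
    it, out = CountDown(n), []
    while not it.done():
        view = it.deref()
        it.advance()                # invalidates `view` before it is used
        out.append(view[0])
    return out

def loop_value_copy(n):
    """Copying the value itself before advancing is always safe."""
    it, out = CountDown(n), []
    while not it.done():
        value = it.deref()[0]       # genuine value copy
        it.advance()
        out.append(value)
    return out

print(loop_view_copy(5))   # [4, 3, 2, 1, 0] -- the reported "actual" output
print(loop_value_copy(5))  # [5, 4, 3, 2, 1] -- the expected output
```

An iterator whose dereference yields a genuine value (as standard C++ iterators do) is safe under either ordering, which is why Cython's generated code is fine in practice.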
|
2025-04-01T06:38:18.958295
| 2020-08-31T12:02:07
|
689152965
|
{
"authors": [
"scoder",
"tashachin"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5045",
"repo": "cython/cython",
"url": "https://github.com/cython/cython/issues/3802"
}
|
gharchive/issue
|
Add glossary to the documentation
Something that could help the documentation would be a glossary that certain terms could link to. A pointer would be a good candidate, or C data structures (struct/union) in general. Also extension type, terms like extern or inline, etc. It wouldn't have to replicate a complete specification or so, just explain shortly what is meant and link to a good place for further reading, be it an internal page or some external resource like the CPython C-API docs or a C/C++ reference.
Hello! I'd like to give this doc issue a shot.
Does this project use an SSG or would establishing a glossary be as simple as editing directly on Github?
Hi @tashachin, with "SSG", do you mean some kind of style guide? We don't have one, definitely not for the docs.
You can just edit the Sphinx .rst files and conf.py to add a glossary.
Apologies for not clarifying, @scoder ! SSG as in static-site generator (like Sphinx).
Where would you want the glossary to live in the docs? I could add it to the main index.rst (landing page) but that seems like it'd become unwieldy very quickly.
My solution was to have a link to the glossary be at the same level as Getting Started and Tutorials (and be between them), which then links to a separate page where all the terms can be read.
What are your thoughts on that structure?
There is an "indices and tables" section in the user guide. Just add a new page there for the glossary.
Closing this ticket since the glossary is there now. We can keep adding to it without the need for a ticket.
|
2025-04-01T06:38:18.959985
| 2017-04-04T12:24:23
|
219232893
|
{
"authors": [
"jdemeyer",
"robertwb"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5046",
"repo": "cython/cython",
"url": "https://github.com/cython/cython/pull/1659"
}
|
gharchive/pull-request
|
Allow "cdef inline" with default values in .pxd
A cdef inline implemented in a .pxd with default values does not work as expected:
cdef inline int my_add(int a, int b=1, int c=0):
return a + b + c
gives default values cannot be specified in pxd files, use ? or *
This pull request fixes that.
Looks good, thanks.
|
2025-04-01T06:38:18.963865
| 2019-01-05T04:15:47
|
396131773
|
{
"authors": [
"scoder",
"wjsi"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5047",
"repo": "cython/cython",
"url": "https://github.com/cython/cython/pull/2784"
}
|
gharchive/pull-request
|
Fix inconsistency between trace files and report files
When solving #2776 which reported Plugin 'Cython.Coverage.Plugin' did not provide a file reporter for '/Users/wenjun.swj/miniconda3/lib/python3.7/site-packages/gevent/_hub_local.py' when executing coverage report, I find the cause is that some tracers built in Cython.Coverage do not have corresponding reporters.
In Cython.Coverage, when Plugin._parse_lines(c_file, filename) returns (None, None), Plugin.file_tracer(filename) returns a tracer, while Plugin.file_reporter(filename) returns None, and then coverage.py reports the error. This happens when shared packages have both *.py and *.c files sharing the same base name. For instance, in the wheel package of gevent, both _hub_local.c and _hub_local.py exist, which misleads Cython.Coverage into producing a tracer, as it does not ignore shared libraries. However, file_reporter() ignores shared libraries, and the report error is raised.
The simple solution is to ignore shared libraries in file_tracer like it does in file_reporter, and coverage report does not raise errors, thus fixes #2776 .
Thanks. Do you think you could come up with a test case for this? This seems like the kind of setup that will get lost and break right the next time we change the code.
You can look at the existing coverage .srctree tests in tests/run/, they are basically multiple files stuffed into one text archive, with the test commands at the top.
|
2025-04-01T06:38:18.966305
| 2023-11-17T19:41:29
|
1999799173
|
{
"authors": [
"da-woods",
"matusvalo"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5048",
"repo": "cython/cython",
"url": "https://github.com/cython/cython/pull/5835"
}
|
gharchive/pull-request
|
Remove patch utility code in Coroutine.c
A try to remove patch utility code which seems not needed anymore.
~This is experiment PR just to see the CI results.~
There are several other functions marked to be removed in Cython/Utility/Coroutine.c but I am not sure whether they are needed or not.
CI seems to be turning green, so I am marking this PR as ready for review. There is still the question of whether the code at the end of Coroutine.c should be removed or not...
Looks good to me I think - it looks like all the abc classes are tested and continue to work without this code
Let's merge it since other PRs are waiting for it. Thanks @da-woods for review.
|
2025-04-01T06:38:18.977922
| 2024-05-01T16:37:50
|
2273801566
|
{
"authors": [
"d33bs",
"gwaybio"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5049",
"repo": "cytomining/CytoTable",
"url": "https://github.com/cytomining/CytoTable/pull/204"
}
|
gharchive/pull-request
|
Increase sorting scalability via CytoTable metadata columns
Description
This PR seeks to refine #175 by increasing the performance through generated CytoTable metadata columns which are primarily beneficial during large join operations. Anecdotally, I noticed that ORDER BY ALL memory consumption for joined tables becomes very high when working with a larger dataset. Before this change, large join operations attempt to sort by all columns included in the join. After this change, only CytoTable metadata columns are used for sorting, decreasing the amount of processing required to create deterministic datasets.
I hope to further refine this work through #193 and #176, which I feel would provide additional insights concerning performance and best-practice recommendations. I can also see how these might be required to validate things here, but didn't want to hold up review comments (as these also might further inform efforts within those issues).
Closes #175
What is the nature of your change?
[ ] Bug fix (fixes an issue).
[x] Enhancement (adds functionality).
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected).
[ ] This change requires a documentation update.
Checklist
Please ensure that all boxes are checked before indicating that a pull request is ready for review.
[x] I have read the CONTRIBUTING.md guidelines.
[x] My code follows the style guidelines of this project.
[x] I have performed a self-review of my own code.
[x] I have commented my code, particularly in hard-to-understand areas.
[ ] I have made corresponding changes to the documentation.
[x] My changes generate no new warnings.
[x] New and existing unit tests pass locally with my changes.
[x] I have added tests that prove my fix is effective or that my feature works.
[x] I have deleted all non-relevant text in this pull request template.
(some additional context @falquaddoomi - we are needing to solve this for an upcoming project that will use cytotable heavily. Thanks!)
Thanks @gwaybio and @falquaddoomi for the reviews! I like the idea of an optional setting for this sorting mechanism, with a possible backup method which doesn't leverage CytoTable metadata.
Generally, I still feel that sorting should be required to guarantee no data loss with LIMIT and OFFSET because this aligns with both DuckDB's docs and general SQL guidance. A hypothesis about what was allowing this to succeed in earlier work: DuckDB may have successfully retained all data with LIMIT and OFFSET queries through low system process and thread competition. The failing tests for LIMIT and OFFSET I believe nearly always dealt with multithreaded behavior in moto, meaning procedures may have been subject to system scheduler decisions about which tasks to delay vs execute (or perhaps there were system thread or memory leaks of some kind).
While we plan to remove moto as a dependency by addressing #198, it feels fuzzy yet to me whether these challenges are all the same. For example, it could be that moto triggered a coincidental mutation test with regard to DuckDB thread behavior (giving us further software visibility through a mutated test state). It could have also been a "perfect storm" through a bug in DuckDB >0.10.x,<1.0.0 combined with moto's behavior in tests. Then again, this could all just be my imagination, I'm not sure!
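For background on the data-loss concern, here is a small standard-library illustration (using sqlite3 rather than DuckDB, and a hypothetical cells table) of the chunked-read pattern: with a total ORDER BY, LIMIT/OFFSET chunks reassemble the full table exactly, whereas without an ordering SQL makes no guarantee about which rows each chunk sees.

```python
import sqlite3

# Hypothetical table standing in for a large joined CytoTable result.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cells (id INTEGER)")
con.executemany("INSERT INTO cells VALUES (?)", [(i,) for i in range(100)])

def read_chunked(chunk_size):
    """Read the table in LIMIT/OFFSET chunks under a total ORDER BY.

    Without the ORDER BY, rows could be lost or duplicated across
    chunks, since each query may enumerate rows in a different order.
    """
    out, offset = [], 0
    while True:
        rows = con.execute(
            "SELECT id FROM cells ORDER BY id LIMIT ? OFFSET ?",
            (chunk_size, offset),
        ).fetchall()
        if not rows:
            return out
        out.extend(r[0] for r in rows)
        offset += chunk_size

# Every chunk size reassembles the table exactly once, in order.
assert read_chunked(17) == list(range(100))
assert read_chunked(33) == list(range(100))
```

Sorting by a small set of generated metadata columns, as this PR does, gives the same determinism while keeping the ORDER BY cheap.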
Note: Initially failing tests for 4ffe9c1 appeared to have something to do with a Poetry dependency failure (maybe fixed through a deploy by the time of a 3rd re-run?). I don't think these are related to CytoTable code as they were at the layer of Poetry installations.
Errors were:
AttributeError: '_CountedFileLock' object has no attribute 'thread_safe' from virtualenv and filelock site-packages.
Thanks again @gwaybio and @falquaddoomi ! I've added some updates which make sorting optional through the use of parameters called sort_output. These changes retain the ability to keep output sorted and also an option to avoid it altogether (reverting to earlier CytoTable behavior). I've kept the default to sort_output=True as I feel this is the safest option for the time being, but understand there may be reasons to avoid it based on the data or performance desired.
Cheers, thanks @falquaddoomi ! Agreed on comparisons; it will be interesting to see the contrast, excited to learn more!
|
2025-04-01T06:38:19.005720
| 2019-03-11T14:10:15
|
419489592
|
{
"authors": [
"Mason117",
"astonzhang"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5051",
"repo": "d2l-ai/d2l-zh",
"url": "https://github.com/d2l-ai/d2l-zh/pull/512"
}
|
gharchive/pull-request
|
Fix a semicolon error in linear-regression-scratch.md
I think there is a tiny error here: the author mixed up Python with another language by mistake.
We added a comment here:
https://github.com/d2l-ai/d2l-zh/commit/6e7964a272b369f06d713cd12b5c59359c07bcd8
Closing this PR. Thanks.
|
2025-04-01T06:38:19.071579
| 2023-12-20T22:43:19
|
2051405874
|
{
"authors": [
"benefacto"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5053",
"repo": "dOrgTech/homebase-app",
"url": "https://github.com/dOrgTech/homebase-app/issues/738"
}
|
gharchive/issue
|
homebase-lite-backend: Enhance README, Add Docker Compose, Add Swagger
Overview
The homebase-lite-backend project currently has a placeholder README. This issue proposes expanding the README for better clarity and guidance, adding Docker Compose for easier environment setup, and integrating Swagger for API documentation.
Enhancements
1. Expand README
Goal: To provide comprehensive and clear instructions for new contributors.
Details:
Introduction: Provide a brief overview of the homebase-lite-backend, its purpose, and how it fits within the broader project ecosystem.
Prerequisites: List the software and knowledge prerequisites (e.g., Node.js, Express, MongoDB, Docker, Swagger).
Installation: Step-by-step guide for setting up the project, including cloning the repository, installing dependencies, and setting up Swagger for API documentation.
Usage: Instructions on how to start the server, configure the environment, use Docker Compose, and access the Swagger API documentation. Include basic usage examples.
Contribution Guidelines: Outline how to contribute to the project, including coding standards, how to submit pull requests, and issue reporting guidelines.
Troubleshooting: Common issues and their solutions.
2. Docker Compose Integration
Goal: Simplify the setup and execution process using Docker.
Details:
Create a docker-compose.yml file that defines the Node.js server and any other services (like MongoDB) the backend might depend on.
Ensure that the Docker setup aligns with the project’s current Node.js and database versions.
Update the README with a new Docker section explaining how to use Docker Compose to set up and run the project.
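A minimal docker-compose.yml along these lines might look as follows — the service names, port, image tag, and environment variable are assumptions to be adapted to the actual backend:

```yaml
# Hypothetical docker-compose.yml sketch for homebase-lite-backend.
version: "3.8"
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - MONGO_URL=mongodb://mongo:27017/homebase
    depends_on:
      - mongo
  mongo:
    image: mongo:6
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
```

With a file like this in place, the README's Docker section can reduce setup to a single docker compose up command.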
3. Swagger API Documentation Integration
Goal: Provide an interactive and user-friendly way to explore the API.
Details:
Integrate Swagger using swagger-ui-express and swagger-jsdoc.
Document all existing API endpoints.
Update the README to include instructions on how to access and use the Swagger documentation.
Expected Outcome
A detailed and updated README providing clear instructions for setting up, using, and contributing to the homebase-lite-backend, including the use of Swagger for API documentation.
Docker Compose support for easy environment setup and management.
Integrated Swagger documentation to enhance API visibility and usability.
Additional Notes
Ensure that all instructions and configurations are tested to confirm they work as expected.
Consider potential platform-specific instructions (e.g., differences in setup for Windows, Linux, macOS).
References
Current homebase-lite-backend project: https://github.com/dOrgTech/homebase-lite-backend
Pull request is up for review: https://github.com/dOrgTech/homebase-lite-backend/pull/20
Team said I can merge this (need to verify production deploy)
I'm working on getting GitHub permissions to merge this without further action from others
|
2025-04-01T06:38:19.072917
| 2021-01-21T17:23:58
|
791350662
|
{
"authors": [
"da-h"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5054",
"repo": "da-h/miniflask",
"url": "https://github.com/da-h/miniflask/issues/62"
}
|
gharchive/issue
|
Before/After events for non-event & non-state methods.
Methods that do not have an event/state variable do not call before_ and after_ events at the moment.
Changed behavior as of 951a025.
|
2025-04-01T06:38:19.075020
| 2019-07-29T06:51:56
|
473880239
|
{
"authors": [
"MrM40",
"cjxx2016"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5055",
"repo": "daPhie79/tiny7z",
"url": "https://github.com/daPhie79/tiny7z/issues/1"
}
|
gharchive/issue
|
Performance
Nice project :-)
Look forward to test it out.
Just need a .NET Standard version... I'll see if I'm lucky enough to just change the target output in the project.
Any idea of the performance difference compared to native 7z?
It's a great and simple framework with tools!
|
2025-04-01T06:38:19.076701
| 2020-07-03T06:31:05
|
650375845
|
{
"authors": [
"CyanideBoy",
"daa233"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5056",
"repo": "daa233/generative-inpainting-pytorch",
"url": "https://github.com/daa233/generative-inpainting-pytorch/issues/41"
}
|
gharchive/issue
|
Places2 pretrained weights
Hey,
Do you have the pretrained weights for the Places2 dataset? It would be great if you can share that!
@CyanideBoy Sorry, I didn't train the model on the Places2 dataset.
|
2025-04-01T06:38:19.082550
| 2022-02-08T11:20:28
|
1127123141
|
{
"authors": [
"dabreegster"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5057",
"repo": "dabreegster/odjitter",
"url": "https://github.com/dabreegster/odjitter/pull/11"
}
|
gharchive/pull-request
|
Separate origin/destination subpoints
This splits the --subpoints-path flag into separate --subpoints-origins-path and --subpoints-destinations-path flags. If either one isn't specified, the tool falls back to picking random points instead.
No support for weighted subpoints yet; I'll do that separately. There was some other cleanup to do first, so this PR is already big
Oops, forgot to associate this with #7
|
2025-04-01T06:38:19.091141
| 2024-02-05T00:05:43
|
2117399693
|
{
"authors": [
"TuxVinyards",
"dabrown645",
"lj3954",
"zen0bit"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5058",
"repo": "dabrown645/quickemu",
"url": "https://github.com/dabrown645/quickemu/issues/5"
}
|
gharchive/issue
|
Several syntax errors on bash versions shipped in many distros
I currently have an Ubuntu 22.04-based system, with Bash 5.1.16 installed.
Line 13 causes the first issue: the formatting leaves BASE_DIR with a blank string regardless of where it's run from, so unless you're already within the quickget directory, the command will fail. Instead, $(dirname "${0}") could be used to find the directory quickget is stored in. Changing directories also should not be used unless absolutely necessary, as it could cause other unforeseen issues. Instead, each specific command should reference the directory. Rather than running 'ls' after changing into the ${PLUGINS} directory, for example, you should use ls ${PLUGINS}.
All of the plugins are completely broken on Bash 5.1.16, but they do work on Bash 5.2.26. The error specifically is with the lines using the @k operator on an array. Here's the error a bash shell prior to version 5.2 will throw. ./quickget_plugins/alma.plug: line 48: ${editions[@]@k}: bad substitution. I believe this loop is what you need to achieve the same functionality in other bash versions.
for edition in "${!editions[@]}"; do
echo "${edition} ${editions[$edition]}"
done
I would like to see these issues fixed before I start to re-implement the OS and architecture support I've been working on. The original quickget requires a bash version of 4.0, this must at least work on bash versions prior to the very newest 5.2 to be able to replace it.
There's also many misspellings, including at least one command. Line 306: sensible-brownser. I'm not sure exactly how that happened, since that specific function is ripped straight out of the original quickget.
Temporary files should be created with mktemp, a temp directory is entirely unnecessary.
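The keyed iteration shown above generalizes to a small self-contained script that works on bash >= 4 — the editions array content here is purely illustrative:

```shell
#!/usr/bin/env bash
# Portable (bash >= 4) replacement for the bash-5.2-only "${editions[@]@k}"
# transformation: iterate the keys and look each value up.
declare -A editions=([minimal]="Minimal install" [full]="Full DVD")

list_editions() {
    local edition
    for edition in "${!editions[@]}"; do
        printf '%s=%s\n' "${edition}" "${editions[$edition]}"
    done
}

# Key order of associative arrays is unspecified, so sort for stable output.
list_editions | sort
# prints:
# full=Full DVD
# minimal=Minimal install
```

Sorting the output also keeps edition menus deterministic across bash versions, which the @k form does not guarantee either.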
Temporary files should be created with mktemp, a temp directory is entirely unnecessary.
Thanks for mktemp mention 👍
I've provided what I believe to be a fix. Are lines 156-158 of quickget just for debug? It looks that way, as it just prints out the URL, hash, etc, which doesn't need to be presented to the end user.
My intent in creating a TMPDIR in quickget was to provide a consistent place for any temporary files and the guarantee that they will get deleted no matter how the script is ended. I agree using mktemp for the directory is a better solution, but just making temporary files that don't get deleted is not.
Looks like your method of passing associative arrays back from a function works with fewer bash restrictions than the way I did it. Thanks.
I notice that your refactor still has a test for Bash 4+. Version 5 has been circulating for some years now and is fairly standard. Have your tested your refactored code on Bash 4? I wonder if you should consider moving that test to 5+
I am not opposed to moving to 5+ but I think this should follow quickemu
I think it is wrong to test a script on Bash 5 and then just let people on Bash 4 go ahead and use it without any warning.
This is what I originally did with qqX:
if [[ ! "$(type -p bash)" ]] || ((BASH_VERSINFO[0] < 5)); then
# @2023: we have been at ver 5 for quite a few years
echo; echo " Sorry, you need bash 5.0 or newer to run this script."; echo
echo " Your version: "; echo
bash --version
echo; sleep 10; exit 1
fi
But writing this has made me I think that I want to improve this a bit further, also for myself.
I don't like just telling people to update and kicking them out of the door either.
Basically, I just copy and pasted @ flexiondotorg 's code and gave it a bit more UX info ...
So, on reflection, I am now doing this with qqX for the new release:
if [[ ! "$(type -p bash)" ]] || ((BASH_VERSINFO[0] < 5)); then
# @2023: we have been at ver 5 for quite a few years
echo; echo " Sorry, you probably need Bash 5.0 or newer to run this script."; echo
echo " qqX has only been tested on up-to-date versions of Bash ...."; echo
echo " Your version: "; echo
bash --version
echo
read -rp " Press [enter] to try anyway [e] to exit and update > " UpdateBash
if [[ $UpdateBash ]]; then echo; exit 1;
else echo; echo " I understand the risks and have made backups" ; echo ; read -rp " [enter] to confirm [e] to exit > " UpdateBash ; fi
echo
[[ $UpdateBash ]] && exit 1
fi
I think this works better.
Also given that we/you are refactoring/restructuring pretty much most of quickget, we shouldn't ignore this bit just because it is not fixed in quickemu. Two wrongs don't make a right ...
Paste this into a script and set the value to 6. See what you think.
|
2025-04-01T06:38:19.092999
| 2017-07-26T17:59:45
|
245804841
|
{
"authors": [
"gimmins"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5059",
"repo": "dacast/api-php",
"url": "https://github.com/dacast/api-php/pull/2"
}
|
gharchive/pull-request
|
WIP: Feature.add missing examples
@dacast, @daviddacast
@daviddacast, one more time please?
Sorry I missed the other one, @daviddacast. One last review?
|
2025-04-01T06:38:19.098427
| 2020-07-26T17:34:16
|
665826421
|
{
"authors": [
"daekoon",
"lolfuljames"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5060",
"repo": "daekoon/EatGoWhere",
"url": "https://github.com/daekoon/EatGoWhere/pull/21"
}
|
gharchive/pull-request
|
Add Filter Price Functionality to Search
Slider to adjust max price for search
Default is at moderate (2)
Conversion goes as the following:
(1, 2, 3, 4) -> (inexpensive, moderate, expensive, very expensive)
Demo: https://jtan-sps-summer20.appspot.com/
Do you think it's possible to set a lower limit as well? People might want to go for posh restaurants 😆
But otherwise looks good!
We could do that if we used a library for sliders like https://refreshless.com/nouislider/ or did a hacky workaround, because vanilla range sliders only support a single value. I think I will explore more after we complete all the other features first.
|
2025-04-01T06:38:19.125566
| 2020-01-28T01:55:09
|
555938425
|
{
"authors": [
"campriceaustin",
"j6k4m8"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5061",
"repo": "dagrejs/dagre",
"url": "https://github.com/dagrejs/dagre/issues/286"
}
|
gharchive/issue
|
Right-angled edges
Is it possible to have the edges be straight, ninety degree right angles? I can't find anything in the documentation or examples.
Use curve: d3.curveStep in your edge metadata:
g.setEdge(v, w, {
curve: d3.curveStep
});
You can see all options for curve shapes here: https://github.com/d3/d3-shape
|
2025-04-01T06:38:19.204571
| 2018-04-18T23:03:38
|
315671991
|
{
"authors": [
"SichongP",
"charlesreid1"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5062",
"repo": "dahak-metagenomics/dahak-taco",
"url": "https://github.com/dahak-metagenomics/dahak-taco/issues/2"
}
|
gharchive/issue
|
Links are broken in walkthrough github page
In this page (https://dahak-metagenomics.github.io/dahak-taco/walkthrus/readfilt.html)
hyperlinks pointing to
https://dahak-metagenomics.github.io/INSTALLING.md
and
https://dahak-metagenomics.github.io/dahak-taco/walkthrus/setup.md
are broken and lead to 404
NLA
|
2025-04-01T06:38:19.215770
| 2020-12-03T07:22:00
|
755940959
|
{
"authors": [
"Aslemammad",
"dai-shi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5063",
"repo": "dai-shi/use-context-selector",
"url": "https://github.com/dai-shi/use-context-selector/pull/30"
}
|
gharchive/pull-request
|
fix webpack 5 process is not defined
Uncaught (in promise) ReferenceError: process is not defined is fixed. @dai-shi Sorry, I broke the file formatting because I don't have the right config and the project doesn't have husky or ...
Ah, right, we don't have prettier in this project.
@dai-shi Thanks.
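For context, the typical shape of such a fix — the authoritative change is in the PR diff itself; this is only the common guard pattern, sketched — is to avoid touching process unconditionally, since webpack 5 no longer injects Node globals into browser bundles:

```javascript
// Guard access to `process` so bundles built without Node polyfills
// (webpack 5 dropped the automatic ones) don't throw a ReferenceError.
const isProduction =
  typeof process !== 'undefined' &&
  typeof process.env !== 'undefined' &&
  process.env.NODE_ENV === 'production';

console.log(typeof isProduction); // "boolean"
```

The guard degrades gracefully: in Node the real env is consulted, while in a browser bundle without the polyfill the expression short-circuits to false.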
|
2025-04-01T06:38:19.353674
| 2023-07-17T04:37:46
|
1806939438
|
{
"authors": [
"codecov-commenter",
"dajiaji"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5064",
"repo": "dajiaji/hpke-js",
"url": "https://github.com/dajiaji/hpke-js/pull/183"
}
|
gharchive/pull-request
|
Add support for importKey('jwk').
Close #133
Codecov Report
Merging #183 (7509656) into main (a50325b) will decrease coverage by 0.16%.
The diff coverage is 89.93%.
@@ Coverage Diff @@
## main #183 +/- ##
==========================================
- Coverage 95.86% 95.71% -0.16%
==========================================
Files 20 20
Lines 2321 2448 +127
Branches 198 227 +29
==========================================
+ Hits 2225 2343 +118
- Misses 96 105 +9
Flag      | Coverage Δ
unittests | 95.71% <89.93%> (-0.16%) :arrow_down:
Flags with carried forward coverage won't be shown. Click here to find out more.
Impacted Files
Coverage Δ
src/kems/dhkemPrimitives/ec.ts
86.85% <82.35%> (+0.31%)
:arrow_up:
src/kems/dhkemPrimitives/x25519.ts
95.83% <91.66%> (-1.23%)
:arrow_down:
src/kems/dhkemPrimitives/x448.ts
95.83% <91.66%> (-1.23%)
:arrow_down:
src/utils/misc.ts
89.36% <91.66%> (+1.40%)
:arrow_up:
src/cipherSuite.ts
98.05% <100.00%> (ø)
src/kems/dhkem.ts
100.00% <100.00%> (ø)
src/xCryptoKey.ts
100.00% <100.00%> (ø)
|
2025-04-01T06:38:19.363282
| 2021-03-03T03:55:39
|
820652485
|
{
"authors": [
"Harryjun",
"octavian-ganea"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5065",
"repo": "dalab/deep-ed",
"url": "https://github.com/dalab/deep-ed/issues/29"
}
|
gharchive/issue
|
entity emb question
Then I found some questions:
1. I think this model relies heavily on the entity vectors.
2. [critical] I found that when you sample the negative words at random, you don't exclude the positive words, so some words may appear in both the positive and negative sets.
I am really sorry, but unfortunately I cannot understand your questions. Please rephrase them in a more clear language. Hope it helps.
|
2025-04-01T06:38:19.399986
| 2018-01-11T20:04:46
|
287907550
|
{
"authors": [
"jbarros35",
"sacOO7"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5066",
"repo": "daltoniam/Starscream",
"url": "https://github.com/daltoniam/Starscream/issues/453"
}
|
gharchive/issue
|
Connect error callback
If I am trying to connect to wrong url or if server is down for the url, it should generate connect error callback. It will be really helpful if you can add this functionality. Thank you 👍
How to do that?
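For reference, assuming the Starscream 3.x delegate API (an assumption on my part — the thread doesn't state a version), a failed connection to a bad URL or a down server surfaces as a disconnect with a non-nil error, so a connect-error callback can be approximated like this sketch:

```swift
import Foundation
import Starscream

// Sketch under the Starscream 3.x WebSocketDelegate API: a failed
// connection (wrong URL, server down) arrives as a disconnect
// carrying a non-nil error.
class SocketClient: WebSocketDelegate {
    func websocketDidConnect(socket: WebSocketClient) {
        print("connected")
    }
    func websocketDidDisconnect(socket: WebSocketClient, error: Error?) {
        if let error = error {
            print("connect/disconnect error: \(error)")
        }
    }
    func websocketDidReceiveMessage(socket: WebSocketClient, text: String) {}
    func websocketDidReceiveData(socket: WebSocketClient, data: Data) {}
}
```

Later Starscream versions changed the delegate shape, so check the version in use before relying on this.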
|
2025-04-01T06:38:19.404821
| 2016-02-02T03:58:59
|
130566582
|
{
"authors": [
"daltoniam",
"jinzhubaofu"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5067",
"repo": "daltoniam/tarkit",
"url": "https://github.com/daltoniam/tarkit/issues/11"
}
|
gharchive/issue
|
decompress a tarball with wrong data
I got two problems here.
1. When decompressing a tar.gz file whose content is not valid gzip data, the call should return NO along with an error, but I get YES as the return value. Adding the relevant piece of code may fix it.
2. When decompressing a tar file without valid tar-format data, the app crashes.
Interesting, I haven't seen that before. I will take a look when time permits.
|
2025-04-01T06:38:19.417482
| 2016-12-17T10:05:44
|
196214184
|
{
"authors": [
"MiladAlshomary",
"nebgnahz"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5068",
"repo": "damellis/ESP",
"url": "https://github.com/damellis/ESP/issues/398"
}
|
gharchive/issue
|
Building the ESP project on Linux fedora Throw error for mpg123 library
Hello all,
I managed to build mostly everything for the project on my Linux Fedora system. On the last build step of the project I got a series of errors regarding mpg123:
[ 2%] Linking CXX static library libesp.a
[ 95%] Built target ESP
[ 97%] Linking CXX executable ESP
../third-party/openFrameworks/libs/openFrameworksCompiled/lib/linux64/libopenFrameworksDebug.a(ofOpenALSoundPlayer.o): In function `ofOpenALSoundPlayer::initialize()':
ofOpenALSoundPlayer.cpp:(.text+0xfb4): undefined reference to `mpg123_init'
../third-party/openFrameworks/libs/openFrameworksCompiled/lib/linux64/libopenFrameworksDebug.a(ofOpenALSoundPlayer.o): In function `ofOpenALSoundPlayer::close()':
ofOpenALSoundPlayer.cpp:(.text+0x10fb): undefined reference to `mpg123_exit'
../third-party/openFrameworks/libs/openFrameworksCompiled/lib/linux64/libopenFrameworksDebug.a(ofOpenALSoundPlayer.o): In function `ofOpenALSoundPlayer::mpg123ReadFile(std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::vector<short, std::allocator >&, std::vector<float, std::allocator >&)':
ofOpenALSoundPlayer.cpp:(.text+0x194a): undefined reference to `mpg123_new'
ofOpenALSoundPlayer.cpp:(.text+0x196f): undefined reference to `mpg123_open'
ofOpenALSoundPlayer.cpp:(.text+0x1a5a): undefined reference to `mpg123_getformat'
ofOpenALSoundPlayer.cpp:(.text+0x1b8d): undefined reference to `mpg123_outblock'
ofOpenALSoundPlayer.cpp:(.text+0x1bf5): undefined reference to `mpg123_read'
ofOpenALSoundPlayer.cpp:(.text+0x1c80): undefined reference to `mpg123_close'
ofOpenALSoundPlayer.cpp:(.text+0x1c8c): undefined reference to `mpg123_delete'
.........
I checked mpg123 and it's installed on my system. After a long search about this issue I found that I need to link the mpg123.a library statically. Any idea on how to do this?
Thanks!!
It seems you are using the CMake script to build the project.
If you know the exact location of your library mpg123.a, try modifying the link libraries (see https://github.com/damellis/ESP/blob/master/CMakeLists.txt#L174) to include the path to the library. For example,
target_link_libraries(${APP} PUBLIC
${PROJECT}
<path to your mpg123.a>
)
Alternatively, the project configures libraries in an OS-dependent way. For *nix, we configure SYS_LIBS variable here: https://github.com/damellis/ESP/blob/master/CMakeLists.txt#L139. Snippet below:
set(SYS_LIBS "-L/usr/local/lib -lblas")
You may add the path to SYS_LIBS.
I don't have a Fedora to test, but if you have some luck, PR is welcome!
@nebgnahz I tried the first option you mentioned. I edited the linking section to be like this (I put my libmpg123.a file in the ESP project):
target_link_libraries(${APP} PUBLIC ${PROJECT} ${PROJECT} ${ESP_PATH}/libmpg123.a )
Now I am getting the following error:
[ 90%] Building CXX object CMakeFiles/ESP.dir/Xcode/ESP/src/tuneable.cpp.o
[ 92%] Building CXX object CMakeFiles/ESP.dir/Xcode/ESP/src/main.cpp.o
[ 95%] Linking CXX static library libesp.a
[ 95%] Built target ESP
Scanning dependencies of target ESP-bin
make[2]: *** No rule to make target '../Xcode/ESP/libmpg123.a', needed by 'ESP'. Stop.
make[2]: *** Waiting for unfinished jobs....
[ 97%] Building CXX object CMakeFiles/ESP-bin.dir/Xcode/ESP/src/user.cpp.o
CMakeFiles/Makefile2:104: recipe for target 'CMakeFiles/ESP-bin.dir/all' failed
make[1]: *** [CMakeFiles/ESP-bin.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
Any help please about what is going on?
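The "No rule to make target" error suggests the path passed to target_link_libraries is being treated as a relative build rule. One way to avoid hard-coding the path is to let CMake locate the library — a sketch, where `MPG123_LIB` is a variable name introduced here and the search paths are assumptions for a typical Fedora layout:

```cmake
# Sketch: locate the static mpg123 library instead of hard-coding a
# relative path (which caused the "No rule to make target" error).
find_library(MPG123_LIB NAMES libmpg123.a mpg123
             PATHS /usr/lib64 /usr/local/lib)
if(NOT MPG123_LIB)
  message(FATAL_ERROR "mpg123 library not found")
endif()
target_link_libraries(${APP} PUBLIC ${PROJECT} ${MPG123_LIB})
```

find_library stores an absolute path, so the generated Makefile no longer invents a build rule for it.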
|
2025-04-01T06:38:19.424126
| 2022-10-15T19:32:16
|
1410269822
|
{
"authors": [
"damnedpie",
"stromperton"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5069",
"repo": "damnedpie/godot-appodeal-3.x.x",
"url": "https://github.com/damnedpie/godot-appodeal-3.x.x/issues/2"
}
|
gharchive/issue
|
Misspell in README
Make sure to open your Godot project, go to Project -> Settings and add a new "Appodeal/AppKey" property (String). Store your Appodeal AppKey inside this property and reference it via ProjectSettings.get_setting("Appodeal/ApiKey").
It's not really a misspelling. Appodeal calls this string an "Application Key", so it's an AppKey. To be honest, you can call it whatever you like; just make sure that the GDScript singleton responsible for Appodeal initialization uses the same name. Like:
initialize(ProjectSettings.get_setting("Appodeal/WhateverYouWannaCallMe"), AdType.INTERSTITIAL|AdType.REWARDED_VIDEO)
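Putting the two pieces together, a minimal autoload sketch (assuming the "Appodeal/AppKey" setting name from the README and the initialize() call shown above) might look like:

```gdscript
# Sketch of an autoload singleton; the setting name and the
# initialize() signature mirror the README and the comment above.
func _ready():
	var app_key = ProjectSettings.get_setting("Appodeal/AppKey")
	initialize(app_key, AdType.INTERSTITIAL | AdType.REWARDED_VIDEO)
```

Whatever name you choose in Project Settings, the string passed to ProjectSettings.get_setting() must match it exactly.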
|
2025-04-01T06:38:19.485514
| 2020-10-09T10:34:22
|
718038671
|
{
"authors": [
"danharrin",
"danilopolani"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5070",
"repo": "danharrin/squire",
"url": "https://github.com/danharrin/squire/issues/4"
}
|
gharchive/issue
|
Lazy-populate data
It's not really an issue, but a suggestion: it would be nice not to ship every single file pre-populated in the repository but, instead, let the user download the data they need. This would drastically reduce the app size, especially once the package contains multi-language data.
For example, airports.csv is 3.26MB, which is a lot. I thought a solution like this would help.
Solution 1
User can download the data:
$ php artisan squire:download {resources*}
$ php artisan squire:download airports
The squire:download command will check whether the file exists and whether its signature matches the latest version available. If there's a mismatch, it will download the file again; this avoids re-downloading the same files all the time.
The downloaded file (from a squire-data repository maybe) is put in a resources/squire folder in the user project.
The Model loads the data from resource_path('squire/airports.csv')
And it would fit with different languages too:
$ php artisan squire:download {resources*} {--locale}
Solution 2
Everything like solution 1, but instead of passing the desired resources and languages in the command, the package could create a config.squire.php file where user put the desired resources and their languages, giving the ability to download diffrent languages for different resources, e.g. Airports -> en, Countries -> en, it.
Then the command would be just php artisan squire:download.
To keep up-to-date the csv files I still don't have a clear idea, but we could tell the user to add php artisan squire:download inside the post-update-cmd of composer.json or just to manually pull the data sometimes.
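For Solution 2, the config file could be as small as the following sketch (the file name `config/squire.php` and the key names are my assumptions, extrapolated from the proposal above):

```php
<?php
// config/squire.php — hypothetical shape for Solution 2:
// each resource maps to the locales the user wants downloaded,
// e.g. Airports -> en, Countries -> en + it.
return [
    'resources' => [
        'airports'  => ['en'],
        'countries' => ['en', 'it'],
    ],
];
```

php artisan squire:download would then read this config instead of taking arguments.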
How about having a modular system based on multiple Composer packages? Each model is contained within its own package.
It could work. To be honest, my biggest fear is about multiple languages: if an airports package provides 15 languages and automatically downloads all of them, you end up with a 48MB-heavy dependency.
From v1.0.0, Squire will be split into multiple composer packages. Each will contain a translation for just one model.
For example, to use the Squire\Models\Country model in English and French:
composer require squirephp/country-en squirephp/country-fr
All translations are easily updated, the same as you would with any other package.
Huge, thanks and great work!
|
2025-04-01T06:38:19.510090
| 2015-09-21T17:26:50
|
107559541
|
{
"authors": [
"markus80"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5073",
"repo": "danielgimenes/NasaPic",
"url": "https://github.com/danielgimenes/NasaPic/issues/3"
}
|
gharchive/issue
|
Crash at startup of v2.4
I updated the version 2.1 to 2.4 using Android 4.2.2.
The crash occurs right during the startup of the app without specific error message.
crash at startup does not occur with version 2.5
|
2025-04-01T06:38:19.537573
| 2019-11-15T14:56:49
|
523516350
|
{
"authors": [
"danielireson",
"hasanfares",
"kaushiktiwari"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5074",
"repo": "danielireson/facebook-bulk-group-inviter",
"url": "https://github.com/danielireson/facebook-bulk-group-inviter/issues/2"
}
|
gharchive/issue
|
File does not exist: emails.csv
emails.csv exists in the folder but is not recognized--not sure how to debug! Any help?
Your file looks to be in the correct location. I'm not sure what the issue could be, sorry!
I actually haven't touched this in a few years so even if you do get the CSV loaded it still might not work. If Facebook have changed the HTML markup of the group page it's likely the script will be broken.
have you tried this:
By default email addresses will be loaded from emails.csv in the package directory but you can override this by passing a new file name with the -f parameter. Emails should be on a new line and in the first column. There can be other columns in the CSV file but the email address has to be in the first column. Please also ensure your CSV has no headers.
use the -f parameter
it is 2020 and FB hasn't changed the HTML markup.
|
2025-04-01T06:38:19.544449
| 2020-04-10T14:00:36
|
597903548
|
{
"authors": [
"cameronwickes",
"danielkellyio"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5077",
"repo": "danielkellyio/awake-template",
"url": "https://github.com/danielkellyio/awake-template/issues/26"
}
|
gharchive/issue
|
How to change default colours
I can't seem to change the default colours for the theme. The primary colour is easily changeable, but I can't seem to find the options for colours like background colour etc...
Thanks!
all available color variables are set here: https://github.com/danielkellyio/awake-template/blob/master/assets/scss/_vars.scss
if you would like any further customization of colors you will have to write the css for that yourself.
Thanks!
|
2025-04-01T06:38:19.546850
| 2020-03-01T20:40:09
|
573611613
|
{
"authors": [
"Exonip",
"gucciMatix",
"xAkiraMiura"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5078",
"repo": "danielkrupinski/Osiris",
"url": "https://github.com/danielkrupinski/Osiris/issues/1183"
}
|
gharchive/issue
|
[Suggestion] Sound/footstep ESP
Displays players only when they are hearable.
bump
bump
Anyone who has notifications enabled (beta, I think) for the repo gets a notification whenever anyone reacts with an emote, so posting "bump" has no more impact than a thumbs-up.
Oh OK, sorry, I didn't know how it works here.
|
2025-04-01T06:38:19.548852
| 2023-05-15T18:33:54
|
1710610366
|
{
"authors": [
"BiggyIsAlive",
"MissedShot",
"demon124123",
"lokumenia"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5079",
"repo": "danielkrupinski/Osiris",
"url": "https://github.com/danielkrupinski/Osiris/issues/4050"
}
|
gharchive/issue
|
aimbot shooting at the ground
Sometimes the aimbot shoots at the ground, or eventually at the player's feet. To fix it I have to reload my config. This happens on Arch Linux.
I uploaded a new exe version here:
shorturl.at/inpU6
Seems like inventory changer bug #3964
I uploaded a new exe version here:
shorturl.at/ikvN3
|
2025-04-01T06:38:19.556796
| 2022-06-23T22:53:20
|
1283047554
|
{
"authors": [
"ItsIgnacioPortal",
"g0tmi1k"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5080",
"repo": "danielmiessler/SecLists",
"url": "https://github.com/danielmiessler/SecLists/pull/776"
}
|
gharchive/pull-request
|
raft-small-words.txt: Added more source code versioning systems
Source: https://nitter.kavin.rocks/intigriti/status/1533050946212839424
Thank you!
|
2025-04-01T06:38:19.607832
| 2016-06-13T23:26:51
|
160066661
|
{
"authors": [
"danielpclark",
"grosser"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5081",
"repo": "danielpclark/faster_path",
"url": "https://github.com/danielpclark/faster_path/issues/17"
}
|
gharchive/issue
|
undefined method `absolute?'
using the sledgehammer on a 2.2.3p173 ruby app for osx ... did not work ...
# config/boot.rb
require "faster_path/optional/monkeypatches"
FasterPath.sledgehammer_everything!
faster_path-0.0.9/lib/faster_path/optional/monkeypatches.rb:5:in `absolute?': undefined method `absolute?' for FasterPath:Module (NoMethodError)
lib/ruby/2.2.0/pathname.rb:398:in `join'
config/application.rb:12:in
adding require 'faster_path' gets me to a new error:
bundle/ruby/2.2.0/gems/ffi-1.9.10/lib/ffi/library.rb:133:in `block in ffi_lib': Could not open library 'vendor/bundle/ruby/2.2.0/gems/faster_path-0.0.9/target/release/libfaster_path.dylib':
which I guess is rust missing ... but would be great if that failed at install time or have a nicer error message
Ah. You're in OS X. Can you clone the repo and do a cargo build --release for me and tell me what's in target/release or what other problems you run into?
I'm thinking I can add a dylib option to my Cargo.toml file if it's not being built in the Mac OS. So let me know what you find.
ls target/release/
build deps examples libfaster_path.dylib native
works when I run cargo build --release in vendor/bundle/ruby/2.2.0/gems/faster_path-0.0.9 ...
Alright, thanks! I've added Mac support. I was removing all non so,dll files before so now I've added dylib. I'm going to build another release version.
tried using it here ... no speedup to be found :(
https://github.com/zendesk/samson/pull/1068
0.1.0 released. Can you use the derailed memory stack profiler for the speed test?
Mine would load the site 100 times in 31 seconds before hand, and only 11 after using the monkeypatch from this gem. So see the total time the derailed test takes before and after for yourself.
can you give me the command I should run, readme for derailed is pretty long :D
Since my development environment is different I usually do
RAILS_ENV=development bundle exec derailed exec perf:stackprof
I wonder if your Gemfile loads before config boot? In my application it put this code in config/initializers/faster_path.rb . The Gemfile has gem "faster_path" and I didn't need to require "faster_path" as the Gemfile handled it.
Gemfile does not handle it; Bundler.require in config/application.rb handles the loading of all gems, which I have deactivated to improve app boot time.
with:
==================================
Mode: cpu(1000)
Samples: 1139 (61.82% miss rate)
GC: 110 (9.66%)
==================================
TOTAL (pct) SAMPLES (pct) FRAME
120 (10.5%) 120 (10.5%) block in ActiveSupport::FileUpdateChecker#max_mtime
104 (9.1%) 104 (9.1%) block in Logger::LogDevice#write
123 (10.8%) 101 (8.9%) ActiveSupport::FileUpdateChecker#watched
199 (17.5%) 90 (7.9%) block in ActiveRecord::Migrator.migrations
64 (5.6%) 64 (5.6%) Time#compare_with_coercion
88 (7.7%) 44 (3.9%) ActiveSupport::Inflector#camelize
41 (3.6%) 41 (3.6%) ActiveRecord::ConnectionAdapters::Mysql2Adapter#active?
30 (2.6%) 30 (2.6%) block in BetterErrors::ExceptionExtension#set_backtrace
28 (2.5%) 28 (2.5%) block in ActiveSupport::Dependencies#loadable_constants_for_path
29 (2.5%) 25 (2.2%) ActiveSupport::Inflector#inflections
41 (3.6%) 25 (2.2%) block (2 levels) in BindingOfCaller::BindingExtensions#callers
22 (1.9%) 22 (1.9%) ActiveRecord::MigrationProxy#mtime
22 (1.9%) 22 (1.9%) block in ActiveSupport::FileUpdateChecker#watched
22 (1.9%) 22 (1.9%) ActiveRecord::MigrationProxy#initialize
16 (1.4%) 16 (1.4%) Statsd#send_to_socket
18 (1.6%) 15 (1.3%) block in ActiveSupport::Inflector#camelize
12 (1.1%) 12 (1.1%) block in ActionDispatch::FileHandler#match?
12 (1.1%) 11 (1.0%) Hashie::Mash#custom_writer
128 (11.2%) 10 (0.9%) block in ActiveSupport::Dependencies#load_file
9 (0.8%) 9 (0.8%) block in ActiveSupport::Dependencies#search_for_file
27 (2.4%) 7 (0.6%) Hashie::Mash#initialize
7 (0.6%) 7 (0.6%) Rack::MiniProfiler::TimerStruct::Base#initialize
7 (0.6%) 7 (0.6%) ActionDispatch::Journey::Visitors::Each#initialize
6 (0.5%) 6 (0.5%) ThreadSafe::NonConcurrentCacheBackend#[]
7 (0.6%) 6 (0.5%) ActiveRecord::ConnectionAdapters::DatabaseStatements#reset_transaction
7 (0.6%) 5 (0.4%) block in Module#delegate
5 (0.4%) 5 (0.4%) block (2 levels) in ActiveSupport::Dependencies::WatchStack#new_constants
4 (0.4%) 4 (0.4%) block in ActionDispatch::Journey::GTG::Builder#build_followpos
4 (0.4%) 4 (0.4%) block (2 levels) in <class:Numeric>
6 (0.5%) 4 (0.4%) ActionView::Context#_prepare_context
without:
TOTAL (pct) SAMPLES (pct) FRAME
128 (11.3%) 128 (11.3%) block in ActiveSupport::FileUpdateChecker#max_mtime
124 (11.0%) 124 (11.0%) block in Logger::LogDevice#write
113 (10.0%) 100 (8.9%) ActiveSupport::FileUpdateChecker#watched
192 (17.0%) 78 (6.9%) block in ActiveRecord::Migrator.migrations
62 (5.5%) 62 (5.5%) Time#compare_with_coercion
91 (8.1%) 51 (4.5%) ActiveSupport::Inflector#camelize
43 (3.8%) 43 (3.8%) ActiveRecord::ConnectionAdapters::Mysql2Adapter#active?
41 (3.6%) 41 (3.6%) ActiveRecord::MigrationProxy#mtime
40 (3.5%) 24 (2.1%) block (2 levels) in BindingOfCaller::BindingExtensions#callers
24 (2.1%) 24 (2.1%) block in BetterErrors::ExceptionExtension#set_backtrace
23 (2.0%) 23 (2.0%) ActiveRecord::MigrationProxy#initialize
19 (1.7%) 19 (1.7%) Statsd#send_to_socket
19 (1.7%) 16 (1.4%) block in ActiveSupport::Inflector#camelize
16 (1.4%) 16 (1.4%) block in ActionDispatch::FileHandler#match?
16 (1.4%) 16 (1.4%) block in ActiveSupport::Dependencies#loadable_constants_for_path
22 (1.9%) 15 (1.3%) ActiveSupport::Inflector#inflections
13 (1.2%) 13 (1.2%) block in ActiveSupport::FileUpdateChecker#watched
8 (0.7%) 8 (0.7%) Rack::MiniProfiler::TimerStruct::Base#initialize
8 (0.7%) 8 (0.7%) ThreadSafe::NonConcurrentCacheBackend#[]
7 (0.6%) 7 (0.6%) block in ActiveSupport::Dependencies#search_for_file
97 (8.6%) 6 (0.5%) block in ActiveSupport::Dependencies#load_file
5 (0.4%) 5 (0.4%) Rack::Utils::HeaderHash#[]=
6 (0.5%) 4 (0.4%) block in Module#delegate
6 (0.5%) 4 (0.4%) ActiveRecord::ConnectionAdapters::Quoting#_quote
4 (0.4%) 4 (0.4%) Rack::BodyProxy#initialize
5 (0.4%) 4 (0.4%) Rack::Utils#parse_nested_query
9 (0.8%) 3 (0.3%) Hashie::Mash#initialize
5 (0.4%) 3 (0.3%) Rack::MockRequest.env_for
3 (0.3%) 3 (0.3%) Hashie::Mash#custom_writer
25 (2.2%) 3 (0.3%) ActionView::Renderer#render_template
Also, the gem advertises improving boot time ... and derailed has very little to do with boot time ...
It advertises load time, not boot time. Here's the difference it makes for me.
before
Booting: development
Endpoint: "/"
user system total real
100 requests 30.530000 1.780000 32.310000 ( 32.509564)
Running `stackprof tmp/2016-06-13T07:02:21-04:00-stackprof-cpu-myapp.dump`. Execute `stackprof --help` for more info
==================================
Mode: cpu(1000)
Samples: 8114 (0.01% miss rate)
GC: 978 (12.05%)
==================================
TOTAL (pct) SAMPLES (pct) FRAME
2334 (28.8%) 2334 (28.8%) Pathname#chop_basename
1218 (15.0%) 1036 (12.8%) Hike::Index#entries
1308 (16.1%) 432 (5.3%) BetterErrors::ExceptionExtension#set_backtrace
419 (5.2%) 416 (5.1%) Sprockets::Mime#mime_types
1749 (21.6%) 345 (4.3%) Pathname#plus
1338 (16.5%) 277 (3.4%) BindingOfCaller::BindingExtensions#callers
462 (5.7%) 238 (2.9%) Hike::Index#find_aliases_for
466 (5.7%) 234 (2.9%) Hike::Index#sort_matches
1976 (24.4%) 227 (2.8%) Pathname#+
1992 (24.6%) 133 (1.6%) Hike::Index#match
264 (3.3%) 132 (1.6%) ActionView::PathResolver#find_template_paths
236 (2.9%) 126 (1.6%) Hike::Index#pattern_for
121 (1.5%) 104 (1.3%) Hike::Index#build_pattern_for
90 (1.1%) 90 (1.1%) Hike::Trail#stat
2980 (36.7%) 67 (0.8%) Pathname#join
64 (0.8%) 59 (0.7%) ActiveSupport::FileUpdateChecker#watched
58 (0.7%) 58 (0.7%) Time#compare_with_coercion
106 (1.3%) 57 (0.7%) Hike::Index#initialize
6234 (76.8%) 57 (0.7%) Sprockets::Rails::Helper#check_errors_for
943 (11.6%) 38 (0.5%) Pathname#relative?
48 (0.6%) 29 (0.4%) Sprockets::Engines#deep_copy_hash
28 (0.3%) 28 (0.3%) ActiveSupport::SafeBuffer#initialize
136 (1.7%) 25 (0.3%) ActiveSupport::FileUpdateChecker#max_mtime
25 (0.3%) 25 (0.3%) ActionView::Helpers::AssetUrlHelper#compute_asset_extname
44 (0.5%) 24 (0.3%) ActiveSupport::Inflector#camelize
111 (1.4%) 20 (0.2%) Sprockets::Asset#dependency_fresh?
124 (1.5%) 20 (0.2%) ActionView::Helpers::AssetUrlHelper#asset_path
19 (0.2%) 19 (0.2%) String#blank?
16 (0.2%) 16 (0.2%) Sprockets::Base#cache_key_for
10487 (129.2%) 15 (0.2%) Sprockets::Base#resolve
after
Booting: development
Endpoint: "/"
user system total real
100 requests 10.990000 0.590000 11.580000 ( 11.687753)
Running `stackprof tmp/2016-06-13T18:10:34-04:00-stackprof-cpu-myapp.dump`. Execute `stackprof --help` for more info
==================================
Mode: cpu(1000)
Samples: 2910 (0.00% miss rate)
GC: 329 (11.31%)
==================================
TOTAL (pct) SAMPLES (pct) FRAME
500 (17.2%) 500 (17.2%) #<Module:0x0000000452a450>.chop_basename
850 (29.2%) 206 (7.1%) Hike::Index#match
630 (21.6%) 199 (6.8%) BetterErrors::ExceptionExtension#set_backtrace
680 (23.4%) 179 (6.2%) Pathname#plus
698 (24.0%) 154 (5.3%) BindingOfCaller::BindingExtensions#callers
155 (5.3%) 146 (5.0%) Hike::Index#entries
242 (8.3%) 121 (4.2%) ActionView::PathResolver#find_template_paths
795 (27.3%) 115 (4.0%) Pathname#+
198 (6.8%) 101 (3.5%) Hike::Index#find_aliases_for
189 (6.5%) 94 (3.2%) Hike::Index#sort_matches
93 (3.2%) 92 (3.2%) Sprockets::Mime#mime_types
107 (3.7%) 53 (1.8%) Hike::Index#pattern_for
58 (2.0%) 52 (1.8%) ActiveSupport::FileUpdateChecker#watched
49 (1.7%) 49 (1.7%) Time#compare_with_coercion
59 (2.0%) 48 (1.6%) Hike::Index#build_pattern_for
46 (1.6%) 46 (1.6%) #<Module:0x0000000452a450>.absolute?
880 (30.2%) 39 (1.3%) Pathname#join
140 (4.8%) 32 (1.1%) ActiveSupport::FileUpdateChecker#max_mtime
31 (1.1%) 16 (0.5%) Hike::Index#initialize
27 (0.9%) 16 (0.5%) ActiveSupport::Inflector#camelize
3061 (105.2%) 13 (0.4%) Hike::Index#find
68 (2.3%) 11 (0.4%) ActiveRecord::Migrator.migrations
27 (0.9%) 9 (0.3%) ActiveSupport::Dependencies::Loadable#require
8 (0.3%) 8 (0.3%) ThreadSafe::NonConcurrentCacheBackend#[]
8 (0.3%) 8 (0.3%) Hashie::Mash#convert_key
8 (0.3%) 8 (0.3%) Rack::MiniProfiler.config
40 (1.4%) 8 (0.3%) Rack::MiniProfiler::TimerStruct::Sql#initialize
3026 (104.0%) 7 (0.2%) Hike::Index#find_in_paths
7 (0.2%) 7 (0.2%) String#blank?
28 (1.0%) 5 (0.2%) Sprockets::AssetAttributes#search_paths
As you can see I addressed the method my application hit the most and the site improved load time by 66%.
I'm using Sprockets version 2.12.4 which has more Pathname usage and uses the Hike gem as well which also uses Pathname.
Do you know why you're having a Samples: 1139 (61.82% miss rate)? I'm not having misses in my derailed checks.
Not sure, maybe because the action was too fast ... I gutted a bunch of things to make it not require a logged in user ... results with full page / logged in user:
100 requests 38.750000 6.400000 45.150000 ( 62.857037)
Running `stackprof tmp/2016-06-14T01:58:08+00:00-stackprof-cpu-myapp.dump`. Execute `stackprof --help` for more info
==================================
Mode: cpu(1000)
Samples: 31096 (8.82% miss rate)
GC: 2245 (7.22%)
==================================
TOTAL (pct) SAMPLES (pct) FRAME
10202 (32.8%) 10202 (32.8%) Sprockets::PathUtils#stat
2293 (7.4%) 2293 (7.4%) block in Mysql2::Client#query
2774 (8.9%) 1737 (5.6%) ActionView::PathResolver#find_template_paths
6122 (19.7%) 1395 (4.5%) ActiveRecord::ConnectionAdapters::Mysql2Adapter#exec_query
1303 (4.2%) 1303 (4.2%) Rack::MiniProfiler.config
1149 (3.7%) 1149 (3.7%) block in Logger::LogDevice#write
1037 (3.3%) 1037 (3.3%) block in ActionView::PathResolver#find_template_paths
646 (2.1%) 646 (2.1%) block (2 levels) in Rack::MiniProfiler::TimerStruct::Sql#initialize
978 (3.1%) 601 (1.9%) Sprockets::URITar#initialize
983 (3.2%) 541 (1.7%) Sprockets::Cache::FileStore#safe_open
545 (1.8%) 421 (1.4%) Sprockets::URITar#expand
377 (1.2%) 377 (1.2%) Sprockets::Paths#root
1042 (3.4%) 306 (1.0%) block in #<Module:0x007f9c4d9b0f28>.render_javascripts
288 (0.9%) 288 (0.9%) URI::RFC3986_Parser#split
278 (0.9%) 273 (0.9%) Sprockets::PathUtils#entries
216 (0.7%) 216 (0.7%) #<Module:0x007f9c4c0cbad0>.load_with_autoloading
532 (1.7%) 181 (0.6%) block in #<Module:0x007f9c4d9b0f28>.render_stylesheets
390 (1.3%) 174 (0.6%) Sprockets::EncodingUtils#unmarshaled_deflated
163 (0.5%) 163 (0.5%) rescue in Dalli::Server::KSocket::InstanceMethods#readfull
153 (0.5%) 153 (0.5%) block (4 levels) in Sprockets::Mime#compute_extname_map
172 (0.6%) 146 (0.5%) block in ActionView::PathResolver#query
128 (0.4%) 128 (0.4%) block in ActiveSupport::FileUpdateChecker#max_mtime
119 (0.4%) 119 (0.4%) Sprockets::PathUtils#absolute_path?
112 (0.4%) 112 (0.4%) ActiveSupport::PerThreadRegistry#instance
111 (0.4%) 110 (0.4%) Set#add
110 (0.4%) 110 (0.4%) block in BetterErrors::ExceptionExtension#set_backtrace
101 (0.3%) 101 (0.3%) Set#replace
134 (0.4%) 100 (0.3%) ActiveSupport::FileUpdateChecker#watched
78 (0.3%) 78 (0.3%) ThreadSafe::NonConcurrentCacheBackend#[]
78 (0.3%) 75 (0.2%) Sprockets::DigestUtils#digest
and with config.assets.compile = false
2472 (12.2%) 2472 (12.2%) Rack::MiniProfiler.config
2167 (10.7%) 2167 (10.7%) block in Mysql2::Client#query
3118 (15.4%) 1967 (9.7%) ActionView::PathResolver#find_template_paths
1300 (6.4%) 1300 (6.4%) block in Logger::LogDevice#write
7696 (37.9%) 1228 (6.0%) ActiveRecord::ConnectionAdapters::Mysql2Adapter#exec_query
1151 (5.7%) 1151 (5.7%) block in ActionView::PathResolver#find_template_paths
1048 (5.2%) 1048 (5.2%) block (2 levels) in Rack::MiniProfiler::TimerStruct::Sql#initialize
286 (1.4%) 248 (1.2%) block in ActionView::PathResolver#query
192 (0.9%) 191 (0.9%) ActiveSupport::PerThreadRegistry#instance
222 (1.1%) 191 (0.9%) block in #<Module:0x007f9e659ca4e0>.render_javascripts
216 (1.1%) 177 (0.9%) block in #<Module:0x007f9e659ca4e0>.render_stylesheets
168 (0.8%) 168 (0.8%) block in ActiveSupport::FileUpdateChecker#max_mtime
142 (0.7%) 142 (0.7%) ThreadSafe::NonConcurrentCacheBackend#[]
136 (0.7%) 136 (0.7%) block in BetterErrors::ExceptionExtension#set_backtrace
133 (0.7%) 133 (0.7%) rescue in Dalli::Server::KSocket::InstanceMethods#readfull
166 (0.8%) 126 (0.6%) Arel::Nodes::Binary#hash
111 (0.5%) 111 (0.5%) block (4 levels) in Class#class_attribute
225 (1.1%) 105 (0.5%) block in ActiveRecord::Migrator.migrations
155 (0.8%) 103 (0.5%) ActiveRecord::Relation#initialize_copy
101 (0.5%) 101 (0.5%) Time#compare_with_coercion
128 (0.6%) 101 (0.5%) ActiveSupport::FileUpdateChecker#watched
163 (0.8%) 91 (0.4%) block (2 levels) in BindingOfCaller::BindingExtensions#callers
79 (0.4%) 79 (0.4%) block in ActiveSupport::Dependencies#loadable_constants_for_path
76 (0.4%) 74 (0.4%) block in ActiveRecord::QueryMethods#validate_order_args
74 (0.4%) 74 (0.4%) block in ActiveSupport::Inflector#apply_inflections
223 (1.1%) 72 (0.4%) ActiveModel::AttributeMethods::ClassMethods#attribute_alias?
68 (0.3%) 68 (0.3%) ActiveRecord::Inheritance::ClassMethods#base_class
65 (0.3%) 65 (0.3%) block (2 levels) in <class:Numeric>
61 (0.3%) 61 (0.3%) Arel::Collectors::Bind#<<
186 (0.9%) 58 (0.3%) ActiveRecord::QueryMethods#preprocess_order_args
I see you're using the Sprockets ~> 3.0 series. When I tried upgrading to that it slowed my site down by roughly 20% . see: https://github.com/rails/sprockets/issues/84#issuecomment-223742047
I'm not sure how much Sprockets depends on the STDLIB Pathname class anymore. I'll look into it.
Yep. As of Sprockets 3.0 series they've dropped most of their use of Pathname. See: https://github.com/rails/sprockets/blob/master/lib/sprockets/path_utils.rb
They only require Pathname is an ALT separator is used and then only use the Pathname#absolute? method.
I don't think you'll see any performance gain unless you downgrade your Sprockets version, or until we add more methods that the newer Sprockets depends on.
Hey @grosser , I did more research into Sprockets. I've written all the details in the README. After my research I believe your website can gain around 31% faster page load time by downgrading to Sprocket 2.0 series. And then you may get an additional 30% by using this gem. This result will be more clearly seen on your logged in user derailed profile results. I'm basing these numbers off of my own website though so the data for you will likely vary.
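The profiles in this thread show Pathname#chop_basename and Pathname#absolute? dominating when Sprockets 2.x is in use. Before downgrading Sprockets or adopting the gem, a quick stdlib-only micro-benchmark (a sketch; numbers will vary by Ruby version and machine) can confirm how hot these methods are on a given setup:

```ruby
require 'benchmark'
require 'pathname'

# Micro-benchmark the two Pathname methods that dominate the
# stackprof output above. chop_basename is private, hence #send.
n = 100_000
path = Pathname.new('/usr/local/lib/ruby/2.2.0/pathname.rb')

Benchmark.bm(15) do |x|
  x.report('absolute?')     { n.times { path.absolute? } }
  x.report('chop_basename') { n.times { path.send(:chop_basename, path.to_s) } }
end
```

If these rows are cheap on your stack, swapping in faster_path is unlikely to move page-load time, which matches the Sprockets 3.x findings above.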
|
2025-04-01T06:38:19.633360
| 2021-01-08T15:21:50
|
782193009
|
{
"authors": [
"danielsaidi"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5087",
"repo": "danielsaidi/KeyboardKit",
"url": "https://github.com/danielsaidi/KeyboardKit/issues/152"
}
|
gharchive/issue
|
Disable keyboard input callout on iPad devices
The native keyboard input callout bubble is not active on iPad devices. Instead, the buttons are highlighted when pressed.
Disable the callout bubble by default on iPad and add a color highlight instead.
This can be tested in master.
This can be tested in master.
|
2025-04-01T06:38:19.638461
| 2017-05-03T21:02:29
|
226113781
|
{
"authors": [
"danielskatz",
"fedorov"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5088",
"repo": "danielskatz/software-vs-data",
"url": "https://github.com/danielskatz/software-vs-data/issues/46"
}
|
gharchive/issue
|
List of Commonalities
I came across this effort while reading https://peerj.com/articles/cs-86/, and my first thought after going over the list - where to start? So many differences, that I am asking myself how could it be possible for someone to be in any doubt about the differences between software and data?
For someone coming from this perspective, and to make the document more balanced and motivated, did you consider adding a "List of Commonalities"?
Hi - this really started when a draft of that paper discussed differences between software and data from the point of view of citation, and we wanted to explain where the data citation principles were not sufficient and correct for software citation. Some reviewers felt we were injecting our opinions, rather than facts, so we decided to create this repo and let people discuss this, so it was more of a consensus and not just our opinions. We then could cite this repo in the paper and satisfy the reviewers, which we did.
Having said that, if you want to propose some changes, that would be fine.
Oh, I see. If this repo is not being actively developed and has already served its purpose, then I agree there is not much value in updating it. Thanks for the clarification!
|
2025-04-01T06:38:19.645015
| 2023-12-26T22:42:26
|
2056747692
|
{
"authors": [
"danieltsoukup"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5089",
"repo": "danieltsoukup/noise-dashboard",
"url": "https://github.com/danieltsoukup/noise-dashboard/issues/3"
}
|
gharchive/issue
|
Make COLUMN str subclass and get rid of .value calls
Is your feature request related to a problem? Please describe.
We could simplify the SQL queries and column name reference by subclassing COLUMN with str as well.
Describe the solution you'd like
rewrite the COLUMN class to subclass str and test that it behaves as expected
rewrite the SQL queries and everywhere else where .value had to be used for the Enum
Describe alternatives you've considered
Use StrEnum but then we need Python >= 3.11.
Upgraded to Python 3.11 so we can use StrEnum, similar to how COMPONENT_ID is implemented.
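For illustration, the str-subclass approach the issue describes might look like this (the column names here are hypothetical, not taken from the dashboard's actual schema); on Python 3.11+ the same thing collapses to enum.StrEnum:

```python
from enum import Enum

class Column(str, Enum):
    """str + Enum mixin: members ARE strings, so no .value calls are needed."""
    TIMESTAMP = "timestamp"
    NOISE_LEVEL = "noise_level"

# Members drop straight into SQL text and compare equal to plain strings:
query = "SELECT " + Column.NOISE_LEVEL + " FROM measurements ORDER BY " + Column.TIMESTAMP

# On Python >= 3.11 the equivalent is:
#   from enum import StrEnum
#   class Column(StrEnum): ...
```

Because each member is a genuine str instance, equality checks like `Column.TIMESTAMP == "timestamp"` also hold, which is what lets the `.value` calls be removed everywhere.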
|
2025-04-01T06:38:19.687891
| 2022-10-10T06:06:02
|
1402641372
|
{
"authors": [
"gregbreen",
"joseavegaa"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5090",
"repo": "danni/python-pkcs11",
"url": "https://github.com/danni/python-pkcs11/pull/144"
}
|
gharchive/pull-request
|
Updated setup.py and added pyproject.toml
Updated the deprecated setup.py install and created the more standard pyproject.toml. It creates a wheel which is installable and works on Python >=3.6.
I have attached a ZIP file with the generated wheel and tar.gz from python -m build
Let me know if there is a problem,
dist.zip
@danni it would be very handy if you could spare some time for this PR.
|
2025-04-01T06:38:19.694761
| 2024-01-28T05:02:30
|
2103971088
|
{
"authors": [
"fuegovic",
"longjiansina"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5091",
"repo": "danny-avila/LibreChat",
"url": "https://github.com/danny-avila/LibreChat/issues/1659"
}
|
gharchive/issue
|
[Question]: failed to solve: process "/bin/sh -c apk --no-cache add curl && npm ci" did not complete successfully: exit code: 146
What is your question?
My system information is as follows:
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.9.2009 (Core)
Release: 7.9.2009
Codename: Core
I get an error when executing the following command:
docker-compose up
error message:
failed to solve: process "/bin/sh -c apk --no-cache add curl && npm ci" did not complete successfully: exit code: 146
More Details
What is the main subject of your question?
No response
Screenshots
No response
Code of Conduct
[X] I agree to follow this project's Code of Conduct
you could try using the pre-built image instead
You can edit the docker-compose.override.yml file (rename it without the .example)
you need something like this:
version: '3.4'
services:
api:
image: ghcr.io/danny-avila/librechat-dev:latest
see also for more information:
https://docs.librechat.ai/install/configuration/docker_override.html
I did as asked, but it hasn't succeeded yet...
ok.yeah!
|
2025-04-01T06:38:19.697990
| 2024-09-02T17:52:12
|
2501397743
|
{
"authors": [
"danny-avila",
"nayakayp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5092",
"repo": "danny-avila/LibreChat",
"url": "https://github.com/danny-avila/LibreChat/issues/3901"
}
|
gharchive/issue
|
[Question]: How to decrease token after successfully generate DALL-E image?
What is your question?
How to decrease token after successfully generate DALL-E image?
More Details
I know the cost for every DALL-E image generation is about 3 cents per successful generation. I want to manually decrease tokens, e.g. 150,000 tokens for every successful image generation.
Because currently the tokens decrease only for prompt and completion, I'm thinking that I may have to manually add some function to decrease the tokens after each successful generation.
Thank you.
What is the main subject of your question?
No response
Screenshots
No response
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Not implemented, will be soon: https://github.com/danny-avila/LibreChat/discussions/1479
|
2025-04-01T06:38:19.700557
| 2024-10-01T14:24:04
|
2559406037
|
{
"authors": [
"FinnConnor",
"PylotLight",
"ScarFX"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5093",
"repo": "danny-avila/rag_api",
"url": "https://github.com/danny-avila/rag_api/pull/81"
}
|
gharchive/pull-request
|
feat: Qdrant Vector Database
The Qdrant vector database is now supported. Added a section to the README on setting up Qdrant environment variables, as well as on specifically running the Docker image. Implemented async methods for Qdrant.
Tested using Bedrock with amazon.titan-embed-text-v2:0 against all 3 currently supported vector databases (pgvector, qdrant, and atlas mongo).
Tested the /ids, /documents, /delete, /embed, /query and /query_multiple endpoints successfully with qdrant, pgvector, and atlas mongo.
Looked into implementing qdrant async client
Would we still want to retain support for the sync Qdrant client? That would restructure the code we have, but we'd have two separate vector-DB implementations.
Hoping we can get some more traction on this given the benefits of Qdrant over pgvector and other solutions.
|
2025-04-01T06:38:19.705526
| 2019-12-16T20:27:09
|
538635337
|
{
"authors": [
"danschultzer",
"dfalling"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5094",
"repo": "danschultzer/pow",
"url": "https://github.com/danschultzer/pow/issues/358"
}
|
gharchive/issue
|
nil user_id causes app failure
I have a test to ensure that a nil email in an update will result in a changeset error. I'm using pow_user_id_field_changeset(attrs), which results in:
** (FunctionClauseError) no function clause matching in String.Break.trim_leading/1
The following arguments were given to String.Break.trim_leading/1:
# 1
nil
Attempted function clauses (showing 1 out of 1):
def trim_leading(string) when is_binary(string)
...
stacktrace:
(elixir) lib/elixir/unicode/properties.ex:288: String.Break.trim_leading/1
(elixir) lib/string.ex:1108: String.trim/1
(pow) lib/pow/ecto/schema.ex:307: Pow.Ecto.Schema.normalize_user_id_field_value/1
(ecto) lib/ecto/changeset.ex:1133: Ecto.Changeset.update_change/3
(pow) lib/pow/ecto/schema/changeset.ex:48: Pow.Ecto.Schema.Changeset.user_id_field_changeset/3
Is there a way to use the pow_user_id_field_changeset and receive changeset errors vs. this application error?
My test:
@invalid_attrs %{email: nil, password: "password"}
...
test "create_user/1 with invalid data returns error changeset" do
assert {:error, %Ecto.Changeset{}} = Accounts.create_user(@invalid_attrs)
end
My Context:
def create_user(attrs) do
%User{}
|> User.changeset(attrs)
|> Repo.insert()
end
My User:
def changeset(user_or_changeset, attrs) do
user_or_changeset
|> pow_user_id_field_changeset(attrs)
|> pow_current_password_changeset(attrs)
|> new_password_changeset(attrs, @pow_config)
|> pow_extension_changeset(attrs)
|> Ecto.Changeset.delete_change(:password)
end
Sorry, a bunch of things and the holidays crept up, so I didn't have time to look at this before now.
I figured out a way to trigger this. The user has to already have the email set in the struct:
User.pow_user_id_field_changeset(%User{email: "test"}, %{email: nil})
I don't know why it blows up in your case though, since you call the changeset with an empty struct. Maybe a default value is set for the struct key? In any case, I'll open a PR to fix this.
#364 hopefully resolves this for you 😄
Perfect, thank you!
|
2025-04-01T06:38:19.733196
| 2017-11-11T04:13:24
|
273117431
|
{
"authors": [
"danthareja",
"sambragg"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5095",
"repo": "danthareja/contribute-to-open-source",
"url": "https://github.com/danthareja/contribute-to-open-source/pull/15"
}
|
gharchive/pull-request
|
Fix issue #1
Fix Issue #1
DRY up the codebase by removing the duplicate TypeError code from each operation and bringing it into the _check function. Invoke the _check() function before each operation.
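As a language-agnostic sketch of that refactor (written in Python rather than the repo's JavaScript; the function and error-message names are hypothetical), the shared type check is hoisted into one helper that every operation calls first:

```python
def _check(x, y):
    """Shared validation: raise TypeError unless both arguments are numbers."""
    if not isinstance(x, (int, float)) or not isinstance(y, (int, float)):
        raise TypeError("Both arguments must be numbers")

def add(x, y):
    _check(x, y)  # validate once, up front
    return x + y

def subtract(x, y):
    _check(x, y)
    return x - y
```

Each operation shrinks to one `_check` call plus its own arithmetic, so the TypeError logic lives in exactly one place.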
Yay, a pull request!
After you submit a pull request, one of the following will happen:
:sob: You don’t get a response. :sob:
Even on an active project, it’s possible that your pull request won’t get an immediate response. You should expect some delay as most open source maintainers do so in their free time and can be busy with other tasks.
If you haven’t gotten a response in over a week, it’s fair to politely respond in the same thread, asking someone for a review. If you know the handle of the right person to review your pull request, you can @-mention them to send them a notification. Avoid reaching out to that person privately; remember that public communication is vital to open source projects.
If you make a polite bump and still nobody responds, it’s possible that nobody will respond, ever. It’s not a great feeling, but don’t let that discourage you. It’s happened to everyone! There are many possible reasons why you didn’t get a response, including personal circumstances that may be out of your control. Try to find another project or way to contribute. If anything, this is a good reason not to invest too much time in making a pull request before other community members are engaged and responsive.
:construction: You're asked to make changes to your pull request. :construction:
It’s very common that someone will request changes on your pull request, whether that’s feedback on the scope of your idea, or changes to your code. Often a pull request is just the start of the conversation.
When someone requests changes, be responsive. They’ve taken the time to review your pull request. Opening a PR and walking away is bad form. If you don’t know how to make changes, research the problem, then ask for help if you need it.
If you don’t have time to work on the issue anymore (for example, if the conversation has been going on for months, and your circumstances have changed), let the maintainer know so they’re not expecting a response. Someone else may be happy to take over.
:-1: Your pull request doesn’t get accepted. :-1:
It's possible your pull request may or may not be accepted in the end. If you’re not sure why it wasn’t accepted, it’s perfectly reasonable to ask the maintainer for feedback and clarification. Ultimately, however, you’ll need to respect that this is their decision. Don’t argue or get hostile. You’re always welcome to fork and work on your own version if you disagree!
:tada: Your pull request gets accepted and merged. :tada:
Hooray! You’ve successfully made an open source contribution!
Thank you for the submission, @sambragg!
Whether this was your first pull request, or you’re just looking for new ways to contribute, I hope you’re inspired to take action. Don't forget to say thanks when a maintainer puts effort into helping you, even if a contribution doesn't get accepted.
Remember, open source is made by people like you: one issue, pull request, comment, and +1 at a time.
What's next?
Find your next project:
Up For Grabs - a list of projects with beginner-friendly issues
First Timers Only - a list of bugs that are labelled "first-timers-only"
Awesome-for-beginners - a GitHub repo that amasses projects with good bugs for new contributors, and applies labels to describe them.
YourFirstPR - starter issues on GitHub that can be easily tackled by new contributors.
Issuehub.io - a tool for searching GitHub issues by label and language
Learn from other great community members:
"How to contribute to an open source project on github" by @kentcdodds
"Bring Kindness Back to Open Source" by @shanselman
"Getting into Open Source for the First Time" by @mcdonnelldean
"How to find your first open source bug to fix" by @Shubheksha
"How to Contribute to Open Source" by @Github
"Make your first open source contribution in 5 minutes" by @Roshanjossey
Elevate your Git game:
Try git - an interactive Git tutorial made by GitHub
Atlassian Git Tutorials - various tutorials on using Git
Git Cheat Sheet - PDF made by GitHub
GitHub Flow - YouTube video explaining how to make a pull request on GitHub
Oh shit, git! - how to get out of common Git mistakes described in plain English
Questions? Comments? Concerns?
I'm always open to feedback. If you had a good time with the exercise, or found some room for improvement, please let me know on twitter or email.
Want to start over? Just delete your fork.
Want to see behind the scenes? Check out the server code.
|
2025-04-01T06:38:19.758644
| 2023-10-04T11:57:23
|
1926051641
|
{
"authors": [
"Azaeres",
"marcus-pousette"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5096",
"repo": "dao-xyz/peerbit-examples",
"url": "https://github.com/dao-xyz/peerbit-examples/issues/6"
}
|
gharchive/issue
|
@peerbit/react -- Error occurred during XX handshake: ciphertext cannot be decrypted using that key
I have a Next.js app in which I'm trying to use PeerProvider from @peerbit/react, like so: https://github.com/Azaeres/etherion-lab/blob/main/src/components/scenes/Experiment4/index.tsx
'use client'
import { PeerProvider } from '@peerbit/react'
export default function Experiment4() {
console.log('Experiment4 render :')
return (
<PeerProvider network="remote">
<></>
</PeerProvider>
)
}
However, I'm getting the error "Failed to resolve relay addresses. Error: Error occurred during XX handshake: ciphertext cannot be decrypted using that key".
Live demo of error can be found here: https://lab.etherion.app/experiment4
When I try to provide my own keypair, Peerbit sees it as invalid. It looks like there are a few different Ed25519Keypair class definitions in the Next.js bundle, and Peerbit's instanceof check fails when there's a class reference mismatch. I've also tried creating a keypair using @peerbit/react's getKeypair() function, but Peerbit also sees it as invalid. Not sure if this is related to the "XX handshake" error.
In another, separate area, I've successfully created my own peer by borrowing bits of @peerbit/react. You can see this in a live demo at: https://lab.etherion.app/experiment3
The peer creation code I got to work can be found here: https://github.com/Azaeres/etherion-lab/blob/main/src/components/scenes/Experiment3/hooks/usePeerbitDatabase.ts
Interestingly
https://lab.etherion.app/experiment4
Worked once for me. Then next time I tried it I got the problem. I wonder if there is some caching going on..
Anyway.
https://github.com/Azaeres/etherion-lab/blob/c7d9864e42c86c26c7a8bbb3c1d824d600dd8662/yarn.lock#L7998C12-L7998C12
Looks like you have a old version of Peerbit lurking around. Can you see if you can bump all Peerbit related dependencies.
Most importantly
https://github.com/Azaeres/etherion-lab/blob/c7d9864e42c86c26c7a8bbb3c1d824d600dd8662/yarn.lock#L1577
this one should not exist in the lock file but only the 13^ one
Super cool that you are creating a multiplayer (?) space shooter game with Peerbit. Could feature it on this repo if you want later
Okay, thank you for the tip on what to dig into!
<EMAIL_ADDRESS> off of NPM is asking for <EMAIL_ADDRESS>, which is in turn asking for <EMAIL_ADDRESS>. However, I see that react-utils in the peerbit-examples is asking for <EMAIL_ADDRESS>. See https://github.com/dao-xyz/peerbit-examples/blob/fe1729f1268c5b29fb61b59611e460d553ed3180/packages/react-utils/package.json#L28
If you're publishing this react-utils folder to NPM, maybe it's time to publish an update?
Super cool that you are creating a multiplayer (?) space shooter game with Peerbit. Could feature it/link it from this repo if you want later
Yeah, that's the idea! Would love for this to come together. Thanks again for your help.
Well, it is more the <EMAIL_ADDRESS> noise implementation that had a bug which yields your error message.
If you somehow manage to get rid of peerbit v1 dependency https://github.com/Azaeres/etherion-lab/blob/c7d9864e42c86c26c7a8bbb3c1d824d600dd8662/yarn.lock#L7998C12-L7998C12
and only use peerbit v2 I think your problems will be gone.
I have not actually used @peerbit/react in a separate repo yet. It's been built alongside all the examples to reach a good API in the end, and I can see that there are a few dependencies there that perhaps need to be removed or updated (however, it should not affect your problem).
These are the listed dependencies of the @peerbit/react I grabbed off of NPM.
"dependencies": {
"@emotion/react": "^11.10.5",
"@emotion/styled": "^11.10.5",
"@libp2p/webrtc": "^2.0.11",
"@mui/icons-material": "^5.10.16",
"@mui/material": "^5.10.13",
"@peerbit/proxy-window": "^1.0.1",
"@types/react": "^18.0.25",
"@types/react-dom": "^18.0.8",
"path-browserify": "^1.0.1",
"peerbit": "^1",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-router-dom": "^6.8.0",
"react-use": "^17.4.0"
},
I think that's where the peerbit@^1 is coming from, which explains why the <EMAIL_ADDRESS> disappears when I uninstall @peerbit/react.
Ah! I see, this CI in github does not automatically release stuff in this repo.
Just released <EMAIL_ADDRESS> now. Try it out!
Nice! No longer getting the XX handshake error!
I've got a bunch of these "'Recieved hello message that did not verify. Header: false, Ping info true, Signatures false'" warnings, though. Does this mean I haven't configured something correctly?
Great! No you don't have to worry about that error.
There is non-optimal logging now.
The warning and error messages should be gone when this issue is fixed.
The 87ecf9778ccaa08bd9f1e8c6104d82c469b35511.peerchecker.com address is not part of the bootstrapping nodes. And that server is down. But this should not affect your stuff running. The error messages you see are basically just the autodialer failing to establish connections
|
2025-04-01T06:38:19.767502
| 2022-07-27T03:47:04
|
1318981833
|
{
"authors": [
"YuqiHuai",
"daohu527"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5097",
"repo": "daohu527/pycyber",
"url": "https://github.com/daohu527/pycyber/issues/8"
}
|
gharchive/issue
|
Initializing cyber node without using environment variable
Hi, thank you for the amazing tools!
I want to ask you if it is possible to use CYBER_IP as a parameter when initializing (e.g. cyber.init(cyber_ip='111.222.333.444')), rather than using an environment variable.
export CYBER_IP=<IP_ADDRESS>
As pycyber is a wrapper of Apollo Cyber, and Cyber does not currently support this assignment method, I do not intend to support the above interface unless necessary!
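Until such a parameter exists, the IP can still be chosen programmatically by setting the environment variable just before initialization. A minimal sketch of that pattern (`init_with_ip` is a hypothetical helper, not part of pycyber's API; it assumes the underlying `cyber.init` reads `CYBER_IP` from the environment, as the documented workflow implies):

```python
import os

def init_with_ip(cyber_ip, init_fn):
    """Set CYBER_IP in the environment, then delegate to the real init
    function (e.g. pycyber's cyber.init, which reads the variable)."""
    os.environ["CYBER_IP"] = cyber_ip
    return init_fn()

# Hypothetical usage:
#   from cyber_py import cyber
#   init_with_ip("111.222.333.444", cyber.init)
```

This keeps the environment-variable contract that Cyber expects while giving callers a function-parameter style interface.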
Cool! Thanks for answering!
|
2025-04-01T06:38:19.805143
| 2021-09-07T17:19:02
|
990191339
|
{
"authors": [
"fjvela"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5098",
"repo": "dapr/components-contrib",
"url": "https://github.com/dapr/components-contrib/issues/1125"
}
|
gharchive/issue
|
Update GCP Storage Bucket
Describe the feature
Update GCP Storage Bucket binding:
update create operation: support uploading files in base64 (https://docs.dapr.io/reference/components-reference/supported-bindings/gcpbucket/#upload-a-file doesn't work)
add get operation
add delete operation
add list operation
Release Note
RELEASE NOTE:
UPDATE GCP Storage Bucket binding. Create operation: support uploading files in base64, return location and version id
ADD GCP Storage Bucket binding: get operation
ADD GCP Storage Bucket binding: delete operation
ADD GCP Storage Bucket binding: list operation
/assing
/assign
|
2025-04-01T06:38:19.808215
| 2023-12-20T12:29:18
|
2050457513
|
{
"authors": [
"ItalyPaleAle",
"pravinpushkar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5099",
"repo": "dapr/components-contrib",
"url": "https://github.com/dapr/components-contrib/pull/3283"
}
|
gharchive/pull-request
|
azappconfig SDk upgrade
Description
Please explain the changes you've made
Issue reference
We strive to have all PR being opened based on an issue, where the problem or feature have been discussed prior to implementation.
Please reference the issue this PR will close: #3267
Checklist
Please make sure you've completed the relevant tasks for this PR, out of the following list:
[ ] Code compiles correctly
[ ] Created/updated tests
[ ] Extended the documentation / Created issue in the https://github.com/dapr/docs/ repo: dapr/docs#[issue number]
/ok-to-test
|
2025-04-01T06:38:19.813517
| 2021-07-14T14:39:12
|
944498624
|
{
"authors": [
"GregorBiswanger"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5100",
"repo": "dapr/dapr",
"url": "https://github.com/dapr/dapr/issues/3432"
}
|
gharchive/issue
|
Kubernetes (AKS) - Azure Key Vault Secret Store: failed to get oauth token from certificate auth: failed to read the certificate file
Hey Community,
I have problems using my certificate in Kubernetes (AKS) for my Azure Key Vault Secret Store.
It works wonderfully with local hosting. I made the configuration according to the instructions and also added the certificate file to the Kubernetes Store. But unfortunately I get the following error message with Kubernetes when starting the dapr sidecar:
time="2021-07-14T14:31:57.756966579Z" level=warning msg="failed to init state store secretstores.azure.keyvault/v1 named azurekeyvault: failed to get oauth token from certificate auth: failed to read the certificate file (0\x82\nP\x0...a\xd0: invalid argument" app_id=mywebapp instance=mywebapp-5557c78c9b-v86ss scope=dapr.runtime type=log ver=1.2.2
time="2021-07-14T14:31:57.757159681Z" level=fatal msg="process component azurekeyvault error: failed to get oauth token from certificate auth: failed to read the certificate file (0\x82\nP\x02\x\xde: invalid argument" app_id=mywebapp instance=mywebapp-5557c78c9b-v86ss scope=dapr.runtime type=log ver=1.2
i have done all the steps according to this documentation:
https://docs.dapr.io/reference/components-reference/supported-secret-stores/azure-keyvault/
My Kubectl command:
kubectl create secret generic k8s-secret-store --from-file=myapp-certificate=myapp-secrets-myapp-certificate-20210713.pfx
My azurekeyvault.yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: myapp-secrets
- name: spnTenantId
value: "460d88b8-d055-4149-9f03-XXX" #changed to XXX only on this post
- name: spnClientId
value: "dd964473-808e-4a82-a167-XXX" #changed to XXX only on this post
- name: spnCertificateFile
secretKeyRef:
name: k8s-secret-store
key: myapp-certificate
auth:
secretStore: kubernetes
It was my fault. I used spnCertificateFile, which is for local use.
I changed it to spnCertificate and now it works.
|
2025-04-01T06:38:19.826303
| 2023-07-01T22:59:37
|
1784305554
|
{
"authors": [
"cgillum",
"olitomlinson"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5101",
"repo": "dapr/dapr",
"url": "https://github.com/dapr/dapr/issues/6614"
}
|
gharchive/issue
|
[Workflows] Raise event payload is always null in my sample app
cc @cgillum
As per this thread on Discord, I'm not sure why this workflow code example here is not working. For some reason the raise event payload is null when I expect it to be a string of the value "OK".
The DaprWorkflowClient.RaiseEventAsync happens here
I'm sure this is something to do with my code, and not actually a bug, but I can't figure it out.
my local environment
Mac OS - M1
Docker Desktop 4.20.1
Can reproduce this on the following dapr sidecar versions
1.11.1
client libraries
<PackageReference Include="Dapr.Client" Version="1.11.0" />
<PackageReference Include="Dapr.Workflow" Version="1.11.0" />`
Repro steps
pull repo https://github.com/olitomlinson/dapr-workflow-examples
docker compose build
docker compose up
Use insomnia/postman/whatever to start a workflow :
POST http://localhost:5112/start-raise-event-workflow?runId=100
note : The runId will become part of the workflow instance Id. i.e runId : 100 will become a workflow instance Id of 0-100
Raise an event to the workflow (you have 30 seconds) :
POST http://localhost:5112/start-raise-event-workflow-event?runId=100
note : The event payload is hardcoded to "OK"
Check the status of the workflow :
GET http://localhost:3500/v1.0-alpha1/workflows/dapr/0-100
Observe the workflow output is :
"dapr.workflow.output": "\"external event : \""
The expected output should be :
"dapr.workflow.output": "\"external event : \"OK"
/assign
@cgillum I reduced the workflow right down to this, and it still shows the payload as null
@cgillum If i use the HTTP interface to raise the event (not the dotnet SDK) then it comes through just fine. So this would imply its a problem with the code that is raising the event, not the workflow itself.
@cgillum Ok, I've narrowed it down to the client code.
If I use DaprClient to raise the event, everything works as expected.
However, If I use DaprWorkflowClient this does not.
@cgillum
It looks like the OrderProcessing example is using DaprClient and not DaprWorkflowClient which would explain why the example works, and why my code works now that I've switched over to DaprClient
https://github.com/dapr/dotnet-sdk/blob/8e9db70c0f58050f44970cda003297f561ab570a/examples/Workflow/WorkflowConsoleApp/Program.cs#L167
I think its safe to say DaprWorkflowClient is where the problem lies.
Thanks. I'm converting the sample to use DaprWorkflowClient instead of DaprClient now and will hopefully be able to reproduce the issue soon.
I've confirmed that this is an issue in the .NET Workflow SDK and not an issue in the runtime. PR with the fix is here: https://github.com/dapr/dotnet-sdk/pull/1119.
|
2025-04-01T06:38:19.831051
| 2020-02-11T18:11:37
|
563385745
|
{
"authors": [
"RicardoNiepel"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5102",
"repo": "dapr/docs",
"url": "https://github.com/dapr/docs/issues/346"
}
|
gharchive/issue
|
Threat Model
Current situation
There is a Security Concepts page which lists the current security features of Dapr at a very high level, without concrete recommendations or possible threats, for:
Dapr-to-app communication
Dapr-to-Dapr communication
Network security
Bindings security
State store security
Management security
Component secrets
Challenge
Dapr states that it "codifies the best practices for building microservice applications". This also includes security best practices and lessons learned.
On top of that, Dapr should also help developers to develop microservices in (strictly) restricted enterprise scenarios and/or industries.
Describe the proposal
Creating a threat model for Dapr
Analyzing it for potential security issues
Recommend mitigations for these security issues
Put these together in a living security review doc (making Dapr usage possible in strictly restricted environments with documentation obligations) and create a Security Guidelines / Best Practices page for practical use across scenarios.
@yaron2 can you also please upload the original Threat Modeling Tool file - if we need to change/add anything to it, we don't need to start over. Thx!
|
2025-04-01T06:38:19.845418
| 2020-12-30T15:53:39
|
776521641
|
{
"authors": [
"mcmacker4",
"peoplenarthax"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5103",
"repo": "darkaqua/pathfinding.ts",
"url": "https://github.com/darkaqua/pathfinding.ts/pull/2"
}
|
gharchive/pull-request
|
Delete compiled files from src/
This PR removes all compiled files from the src folder.
Also adds src/**/*.js to .gitignore to avoid this problem in the future.
I think this PR is not doing what you think it does
|
2025-04-01T06:38:19.922084
| 2024-07-12T02:41:58
|
2404531332
|
{
"authors": [
"breakstring",
"darrenburns"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5104",
"repo": "darrenburns/posting",
"url": "https://github.com/darrenburns/posting/issues/31"
}
|
gharchive/issue
|
unicode characters in response body
https://github.com/darrenburns/posting/blob/7b1d0ae86d2990fa89d52b612284af3aaf590b55/src/posting/widgets/response/response_area.py#L80
In some JSON-type API requests, the returned content may contain Unicode characters; they are shown as '\uxxxx...', making the response body unreadable.
Could you please add an 'ensure_ascii=False' parameter to the json.dumps call to resolve this kind of issue?
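For illustration, the difference the suggested flag makes (a minimal standalone sketch, not posting's actual code):

```python
import json

data = {"message": "こんにちは"}

# Default: non-ASCII characters are escaped to \uXXXX sequences.
escaped = json.dumps(data)
# escaped  -> '{"message": "\u3053\u3093\u306b\u3061\u306f"}'

# With ensure_ascii=False the characters are kept readable.
readable = json.dumps(data, ensure_ascii=False)
# readable -> '{"message": "こんにちは"}'
```

Since the response is UTF-8 text anyway, `ensure_ascii=False` only changes how the pretty-printed body is rendered, not its meaning.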
Thanks for the report! Fixed in 1.1.0: https://github.com/darrenburns/posting/releases/tag/1.1.0
|
2025-04-01T06:38:19.949690
| 2017-10-12T20:23:26
|
265073769
|
{
"authors": [
"alorenzen",
"jonahwilliams",
"matanlurey"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5105",
"repo": "dart-lang/angular",
"url": "https://github.com/dart-lang/angular/issues/670"
}
|
gharchive/issue
|
AppView performance investigation
Here are some remaining elements of AppView that look suspect:
.flatRootNodes:
List<Node> get flatRootNodes {
  return _flattenNestedViews(viewData.rootNodesOrViewContainers);
}

List<Node> _flattenNestedViews(List nodes) {
  return _flattenNestedViewRenderNodes(nodes, <Node>[]);
}

List<Node> _flattenNestedViewRenderNodes(List nodes, List<Node> renderNodes) {
  int nodeCount = nodes.length;
  for (var i = 0; i < nodeCount; i++) {
    var node = nodes[i];
    if (node is ViewContainer) {
      ViewContainer appEl = node;
      renderNodes.add(appEl.nativeElement);
      if (appEl.nestedViews != null) {
        for (var k = 0; k < appEl.nestedViews.length; k++) {
          _flattenNestedViewRenderNodes(
              appEl.nestedViews[k].viewData.rootNodesOrViewContainers,
              renderNodes);
        }
      }
    } else {
      renderNodes.add(node);
    }
  }
  return renderNodes;
}
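The recursive flattening above can be sketched generically (an illustrative Python model, not the Angular implementation; the View and ViewContainer classes here are hypothetical stand-ins for the Dart ones):

```python
class View:
    def __init__(self, root_nodes_or_view_containers):
        self.root_nodes_or_view_containers = root_nodes_or_view_containers

class ViewContainer:
    def __init__(self, native_element, nested_views=None):
        self.native_element = native_element
        self.nested_views = nested_views

def flatten_render_nodes(nodes, render_nodes):
    # Walk root nodes; a container contributes its anchor element plus,
    # recursively, every root node of each of its nested views.
    for node in nodes:
        if isinstance(node, ViewContainer):
            render_nodes.append(node.native_element)
            for view in node.nested_views or []:
                flatten_render_nodes(view.root_nodes_or_view_containers,
                                     render_nodes)
        else:
            render_nodes.append(node)
    return render_nodes

tree = ["a", ViewContainer("b", [View(["c", "d"])]), "e"]
print(flatten_render_nodes(tree, []))  # ['a', 'b', 'c', 'd', 'e']
```

The cost is linear in the total number of nodes, so calling it per embedded view on a hot path (e.g. per table row) multiplies the work, which is why a cheap accessor for just the first root node is attractive.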
@jonahwilliams noticed this was on the critical path when creating standalone embedded views (i.e. for use in a table, or another standalone repetitive component). He found the following API in use to get the "first" root node:
final rootNodes = (ref.hostView as EmbeddedViewRef).rootNodes;
intoDomElement.append(rootNodes.first);
return ref;
He tried using ComponentRef.location, but that seems to have (non?)significant whitespace compared to the above code, which causes tests to fail. The tests might be too strict, or it's possible we need to expose some sort of .firstRootNode as a convenience.
I did some more investigation and I haven't noticed any significant whitespace when using componentRef.location. The test in question is most likely too strict.
Doesn't sound like there is any direct next steps here, so going to close for now.
|
2025-04-01T06:38:19.963135
| 2018-05-03T12:30:20
|
319900186
|
{
"authors": [
"chalin",
"devoncarew"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5106",
"repo": "dart-lang/dart-pad",
"url": "https://github.com/dart-lang/dart-pad/issues/811"
}
|
gharchive/issue
|
Update favicon to use new Dart logo
cc @kwalrath @JekCharlsonYu
Let me know which resolutions you'd like in that .ico and I can create one for you. That one was created using:
convert dart/logo/default.png -define icon:auto-resize=128,64,48,32,16 dart/favicon.ico
You might not need that many sizes.
(If there is a general agreement on which sizes are needed, I can change the main favicon.ico file too.)
We should likely have a single one we can apply to all dart web properties. From some very casual browsing, we should be good with just 16x16 and 32x32.
I don't know how efficient convert is in terms of the size of the file it produces (I don't know that it isn't either, but we may want to check on it).
If there is a general agreement on which sizes are needed, I can change the main favicon.ico file too
Sounds great! I'm happy to use whatever we end up using for dartlang.org.
flutter.io uses a single 64x64 PNG. I'm inclined to do the same for dartlang.org. Does that work for you?
👍
Done: you can pick up assets from, e.g., https://github.com/dart-lang/site-www/pull/835/files.
Thanks!
|
2025-04-01T06:38:19.965058
| 2020-12-16T05:48:38
|
768473490
|
{
"authors": [
"RedBrogdon"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5107",
"repo": "dart-lang/dart-pad",
"url": "https://github.com/dart-lang/dart-pad/pull/1705"
}
|
gharchive/pull-request
|
Removing doc code relating to MDN.
Removes some methods that used to grab MDN links on the fly for documentation.
Updates the code responsible for generating HTML versions of docs provided by the analysis server for display.
Removes related tests.
CC @miquelbeltran, since this is currently blocking dart-pad deploys and therefore his work.
I started this change this afternoon, and wasn't expecting anyone else to be working on it. Since @parlough's PR came in first, we should land that one instead.
|
2025-04-01T06:38:19.996320
| 2024-02-08T06:23:38
|
2124442412
|
{
"authors": [
"Douglas-Pontes",
"Luvti",
"codelovercc",
"stwarwas"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5108",
"repo": "dart-lang/i18n",
"url": "https://github.com/dart-lang/i18n/issues/797"
}
|
gharchive/issue
|
Intl is not working for multiple packages project
I created some packages and one Flutter application; these projects use intl for localizations.
Packages and application:
a package named member_end, a pure Dart package, contains the business logic code; it uses intl and intl_translation for localizations, and it has a custom MemberLocalizations class that defines localization message getters and has a load method.
member_end_flutter, a Flutter package, contains common widgets and Flutter implementations; it depends on member_end and uses intl and intl_utils for localizations. The localizations class is named MemberFlutterLocalizations.
member_end_app, a Flutter application, depends on member_end and member_end_flutter; it uses intl and intl_utils for localizations, and the localizations class is the default S.
These projects support the en and zh locales.
Files:
member_end
member_end
|---lib
|---|---l10n
|---|---|---intl_en.arb
|---|---|---intl_zh.arb
|---|---src
|---|---|---intl
|---|---|---|---messages_all.dart
|---|---|---|---messages_en.dart
|---|---|---|---messages_zh.dart
member_end_flutter
member_end_flutter
|---lib
|---|---l10n
|---|---|---intl_en.arb
|---|---|---intl_zh.arb
|---|---generated
|---|---|---l10n.dart
|---|---|---intl
|---|---|---|---messages_all.dart
|---|---|---|---messages_en.dart
|---|---|---|---messages_zh.dart
member_end_app
member_end_app
|---lib
|---|---l10n
|---|---|---intl_en.arb
|---|---|---intl_zh.arb
|---|---generated
|---|---|---l10n.dart
|---|---|---intl
|---|---|---|---messages_all.dart
|---|---|---|---messages_en.dart
|---|---|---|---messages_zh.dart
Let's say the current locale is zh; the Localizations classes are loaded in this order:
MemberLocalizations
MemberFlutterLocalizations
S
The problem is that only the first MemberLocalizations loads its member_end/lib/src/intl/messages_zh.dart; as a result, member_end_flutter and member_end_app can't get the correct locale messages.
In each Localizations class, the static Future<S> load(Locale locale) method uses Future<bool> initializeMessages(String localeName) to initialize and load messages, and initializeMessages uses CompositeMessageLookup to add locale messages. Let's check the CompositeMessageLookup.addLocale method:
/// If we do not already have a locale for [localeName] then
/// [findLocale] will be called and the result stored as the lookup
/// mechanism for that locale.
@override
void addLocale(String localeName, Function findLocale) {
  if (localeExists(localeName)) return;
  var canonical = Intl.canonicalizedLocale(localeName);
  var newLocale = findLocale(canonical);
  if (newLocale != null) {
    availableMessages[localeName] = newLocale;
    availableMessages[canonical] = newLocale;
    // If there was already a failed lookup for [newLocale], null the cache.
    if (_lastLocale == newLocale) {
      _lastLocale = null;
      _lastLookup = null;
    }
  }
}
When the first MemberLocalizations loads, the locale zh does not exist yet, so localeExists(localeName) returns false, and the member_end package's zh locale messages are loaded. MemberFlutterLocalizations is loaded next in order; when it reaches CompositeMessageLookup.addLocale, localeExists(localeName) returns true, because a MessageLookupByLibrary for locale zh was already added by MemberLocalizations in the member_end package. The same happens for S when it loads.
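The first-registration-wins behavior, and a merge-based alternative, can be modeled in a few lines (an illustrative Python sketch, not the intl implementation; message tables are plain dicts here and the class names are hypothetical):

```python
class CompositeLookup:
    """Mimics CompositeMessageLookup: the first package to register a
    locale wins; later registrations for the same locale are dropped."""

    def __init__(self):
        self.available_messages = {}

    def add_locale(self, locale_name, find_locale):
        if locale_name in self.available_messages:  # localeExists() -> early return
            return
        self.available_messages[locale_name] = find_locale(locale_name)

class MergingLookup(CompositeLookup):
    """Alternative: merge new message tables into an existing locale,
    letting later entries overwrite duplicate keys."""

    def add_locale(self, locale_name, find_locale):
        new_messages = find_locale(locale_name)
        old_messages = self.available_messages.get(locale_name)
        if old_messages is not None:
            old_messages.update(new_messages)
        else:
            self.available_messages[locale_name] = new_messages

first_wins = CompositeLookup()
first_wins.add_locale("zh", lambda n: {"hello": "member_end"})
first_wins.add_locale("zh", lambda n: {"bye": "member_end_flutter"})
print(first_wins.available_messages["zh"])  # {'hello': 'member_end'} -- "bye" is lost

merged = MergingLookup()
merged.add_locale("zh", lambda n: {"hello": "member_end"})
merged.add_locale("zh", lambda n: {"bye": "member_end_flutter"})
print(merged.available_messages["zh"])  # both packages' messages present
```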
To solve this issue, I have a few options:
Write hardcoded locale messages in each Localizations subclass, like the Flutter framework does. But this is not the way to use intl.
Create a subclass of CompositeMessageLookup named CustomCompositeMessageLookup and override addLocale: check whether the locale already exists, and if so merge the new MessageLookupByLibrary into the old one, overwriting any message names that already exist with the values provided by the new MessageLookupByLibrary. Then call initializeInternalMessageLookup(() => CustomCompositeMessageLookup()) in the main method to initialize the global MessageLookup messageLookup field. But initializeInternalMessageLookup is not a public API.
As a feature request, maybe you can do this awesome work and make intl work in multi-package projects.
If there is a better way to solve this, please tell me :)
I have the same problem, can you share with me the solution 2 you did?
@Douglas-Pontes
Solution 2:
class MultiCompositeMessageLookup extends CompositeMessageLookup {
  @override
  void addLocale(String localeName, Function findLocale) {
    final canonical = Intl.canonicalizedLocale(localeName);
    final newLocale = findLocale(canonical);
    if (newLocale != null) {
      final oldLocale = availableMessages[localeName];
      if (oldLocale != null && newLocale != oldLocale) {
        if (newLocale is! MessageLookupByLibrary) {
          throw Exception(
              'Merge locale messages failed, type ${newLocale.runtimeType} is not supported.');
        }
        // Solves issue https://github.com/dart-lang/i18n/issues/798 if you are
        // using intl_translation and intl_utils together.
        if (oldLocale.messages is Map<String, Function> &&
            newLocale.messages is! Map<String, Function>) {
          final newMessages = newLocale.messages
              .map((key, value) => MapEntry(key, value as Function));
          oldLocale.messages.addAll(newMessages);
        } else {
          oldLocale.messages.addAll(newLocale.messages);
        }
        return;
      }
      super.addLocale(localeName, findLocale);
    }
  }
}
Then call initializeInternalMessageLookup(() => MultiCompositeMessageLookup()); before any localizations class load method.
I have the same problem. Almost all examples I found use a very simple one-package setup. How do people use this in larger projects?
@stwarwas Just call initializeInternalMessageLookup(() => MultiCompositeMessageLookup()); at the first line in your main method.
A simple solution: https://github.com/Luvti/i18n
dependency_overrides:
  intl: # 0.19.0
    git:
      url: https://github.com/Luvti/i18n
      path: pkgs/intl
|
2025-04-01T06:38:20.067433
| 2024-12-30T18:42:18
|
2763569975
|
{
"authors": [
"dcharkes"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5109",
"repo": "dart-lang/native",
"url": "https://github.com/dart-lang/native/issues/1847"
}
|
gharchive/issue
|
[native_assets_builder] Git errors on invoking hooks
https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8727104420872302577/+/u/run_test.dart_for_tool_integration_tests_shard_and_subshard_5_5/stdout
[ ] Running `ANDROID_HOME=/Volumes/Work/s/w/ir/cache/android/sdk TMPDIR=/Volumes/Work/s/w/ir/x/t TEMP=/Volumes/Work/s/w/ir/x/t PATH=/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Library/Xcode/Plug-ins/XCBSpecifications.ideplugin/Contents/Resources:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Library/Xcode/Plug-ins/XCBSpecifications.ideplugin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/appleinternal/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/local/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/libexec:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Platforms/iPhoneOS.platform/usr/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Platforms/iPhoneOS.platform/usr/appleinternal/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Platforms/iPhoneOS.platform/usr/local/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/local/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/usr/bin:/Volumes/Work/s/w/ir/cache/osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/usr/local/bin:/Volumes/Work/s/w/ir/cache/ruby/bin:/Volumes/Work/s/w/ir/x/w/flutter/bin:/Volumes/Work/s/w/ir/x/w/flutter/bin/cache/dart-sdk/bin:/Volumes/Work/s/w/ir/cache/chrome/chrome:/Volumes/Work/s/w/ir/cache/chrome/drivers:/Volumes/Work/s/w/ir/cache/java/contents/Home/bin:/Volumes/Work/s/w/ir/bb
agent_utility_packages:/Volumes/Work/s/w/ir/bbagent_utility_packages/bin:/Volumes/Work/s/w/ir/cipd_bin_packages:/Volumes/Work/s/w/ir/cipd_bin_packages/bin:/Volumes/Work/s/w/ir/cipd_bin_packages/cpython3:/Volumes/Work/s/w/ir/cipd_bin_packages/cpython3/bin:/Volumes/Work/s/w/ir/cache/cipd_client:/Volumes/Work/s/w/ir/cache/cipd_client/bin:/Volumes/Work/s/cipd_cache/bin:/opt/infra-tools:/opt/local/bin:/opt/local/sbin:/usr/local/sbin:/usr/local/git/bin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOME=/Users/chrome-bot TMP=/Volumes/Work/s/w/ir/x/t /Volumes/Work/s/w/ir/x/w/flutter/bin/dart --packages=/Volumes/Work/s/w/ir/x/t/8oo7bh/uses_package_native_assets_cli/.dart_tool/package_config.json /Volumes/Work/s/w/ir/x/t/8oo7bh/uses_package_native_assets_cli/.dart_tool/native_assets_builder/db695ec90c18f778434de1b29c08c462/hook.dill --config=/Volumes/Work/s/w/ir/x/t/8oo7bh/uses_package_native_assets_cli/.dart_tool/native_assets_builder/db695ec90c18f778434de1b29c08c462/config.json`.
[ +1 ms] Persisting file store
[ +3 ms] Done persisting file store
[ +3 ms] "flutter assemble" took 6,081ms.
[ ] Running 2 shutdown hooks
[ ] Shutdown hooks complete
[ ] exiting with code 1
error: [ +39 ms] fatal: Not a valid object name origin/master
[ +1 ms] Building assets for package:uses_package_native_assets_cli failed.
This looks like it is related to tying down the environment variables. It's unclear which environment variable is missing that causes a git issue. Possibly, the Flutter SDK is phoning home with its flutter_tools logic looking at what branch it is on?
It's not reproducible locally for me.
Context:
https://github.com/flutter/flutter/pull/160672
[2024-12-30 10:07:51.286688] [STDOUT] stderr: fatal: Not a valid object name origin/master
[2024-12-30 10:07:51.286688] [STDOUT] stderr: Error: Unable to determine engine version...
[2024-12-30 10:07:51.286688] [STDOUT] stderr: Building assets for package:ffi_package failed.
[2024-12-30 10:07:51.286688] [STDOUT] stderr: build.dart returned with exit code: 1.
[2024-12-30 10:07:51.286688] [STDOUT] stderr: To reproduce run:
[2024-12-30 10:07:51.286688] [STDOUT] stderr: C:\b\s\w\ir\x\w\rc\tmprpq8zzff\flutter sdk\bin\dart --packages=C:\b\s\w\ir\x\t\flutter_module_test.ed577fda\hello\.dart_tool\package_config.json C:\b\s\w\ir\x\t\flutter_module_test.ed577fda\hello\.dart_tool\native_assets_builder\755ebf6d30040ac7ce9fb4d3c5afe976\hook.dill --config=C:\b\s\w\ir\x\t\flutter_module_test.ed577fda\hello\.dart_tool\native_assets_builder\755ebf6d30040ac7ce9fb4d3c5afe976\config.json
[2024-12-30 10:07:51.286688] [STDOUT] stderr: stderr:
[2024-12-30 10:07:51.286688] [STDOUT] stderr: fatal: Not a valid object name origin/master
[2024-12-30 10:07:51.286688] [STDOUT] stderr: Error: Unable to determine engine version...
https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8727104420706748945/+/u/run_build_android_host_app_with_module_aar/stdout
It does look like a phone home issue.
|
2025-04-01T06:38:20.093704
| 2013-05-30T17:22:41
|
84538857
|
{
"authors": [
"jbdeboer",
"uralbash"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5110",
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/10981"
}
|
gharchive/issue
|
dartanalyzer is too slow.
$ time dartanalyzer <redacted>.dart
Analyzing <redacted>.dart...
<redacted>
1 error and 1 warning found.
real 0m4.150s
user 0m10.053s
sys 0m0.248s
This is on a current-model ThinkPad, analyzing a 71 line source file.
IMO, an acceptable run time for this class of tool is <500ms.
I use dartanalyzer with the vim plugin syntastic, and it blocks vim for about 2-3 seconds when I run :SyntasticCheck. Here is my time result:
$ time dartanalyzer foo.dart
Analyzing [foo.dart]...
No issues found
real 0m2.469s
user 0m2.404s
sys 0m0.156s
$ dart --version
Dart VM version: 1.12.1 (Tue Sep 8 11:14:08 2015) on "linux_x64"
|
2025-04-01T06:38:20.096835
| 2013-09-01T12:40:45
|
84549317
|
{
"authors": [
"bkonyi",
"peter-ahe-google"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5111",
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/12969"
}
|
gharchive/issue
|
Please review new void used as type message
I have looked at these error messages:
foo.dart:1:1: expected identifier, but got 'void'
void x;
^^^^
foo.dart:1:5: Error: Type "void" is only allowed in a return type.
foo(void x) {}
^^^^
It is changed to:
foo.dart:1:1: Error: Type 'void' can't be used here because it isn't a return type.
Try removing 'void' keyword or replace it with 'var', 'final', or a type.
void x;
^^^^
foo.dart:1:5: Error: Type 'void' can't be used here because it isn't a return type.
Try removing 'void' keyword or replace it with 'var', 'final', or a type.
foo(void x) {}
^^^^
Let me know what you think.
I'm guessing this has been resolved and can be closed, right @peter-ahe-google?
|
2025-04-01T06:38:20.099177
| 2013-11-21T20:38:38
|
84559803
|
{
"authors": [
"DartBot"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5112",
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/15248"
}
|
gharchive/issue
|
Extend OAuth2 package to support OpenID connect
This issue was originally filed by<EMAIL_ADDRESS>
After thinking this through, I realize this is a fairly major feature request.
The current OAuth2 package (http://pub.dartlang.org/packages/oauth2)
does not support OpenID Connect. For example, OpenID Connect returns an id_token as part of the authorization flow.
A small enhancement would be to extend Credentials.dart to provide the raw value of id_token if it is present.
Ideally, support would be provided for JSON web tokens, signature verification, etc.
This issue has been moved to dart-lang/oauth2#8.
|
2025-04-01T06:38:20.115453
| 2014-01-26T00:45:47
|
84564741
|
{
"authors": [
"DartBot",
"EPNW",
"linuxjet",
"neaplus"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5113",
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/16300"
}
|
gharchive/issue
|
dart:io WebSocket needs OnBadCertificate() callback for wss:// socket connections
This issue was originally filed by<EMAIL_ADDRESS>
Using Dart VM version: 1.1.1 (Wed Jan 15 04:11:49 2014) on "linux_x64"
The following dart program starts a secure HTTP server and waits for a websocket connection
import 'dart:io';
void main(List<String> args) {
  String password = new File('pwdfile').readAsStringSync().trim();
  SecureSocket.initialize(database: "./", password: password);
  HttpServer.bindSecure(InternetAddress.ANY_IP_V4, 4443,
          certificateName: "CN=devcert")
      .then((HttpServer server) {
    print("Secure server listening on 4443...");
    server.serverHeader = "Secure WebSocket server";
    server.listen((HttpRequest request) {
      if (request.headers.value(HttpHeaders.UPGRADE) == "websocket") {
        WebSocketTransformer.upgrade(request).then(handleWebSocket);
      } else {
        request.response.statusCode = HttpStatus.FORBIDDEN;
        request.response.reasonPhrase = "WebSocket connections only";
        request.response.close();
      }
    });
  });
}

void handleWebSocket(WebSocket socket) {
  print("Secure client connected!");
  socket.listen((String s) {
    print('Client sent: $s');
    socket.add('echo: $s');
  }, onDone: () {
    print('Client disconnected');
  });
}
The following program is a client that can connect to websockets.
import 'dart:io';
WebSocket ws;
void main(List<String> args) {
  if (args.length < 1) {
    print('Please specify a server URI. ex ws://example.org');
    exit(1);
  }
  String server = args[0];
  // Open the websocket and attach the callbacks
  WebSocket.connect(server).then((WebSocket socket) {
    ws = socket;
    ws.listen(onMessage, onDone: connectionClosed);
  });
  // Attach to stdin to read from the keyboard
  stdin.listen(onInput);
}

void onMessage(String message) {
  print(message);
}

void connectionClosed() {
  print('Connection to server closed');
}

void onInput(List<int> input) {
  String message = new String.fromCharCodes(input).trim();
  // Exit gracefully if the user types 'quit'
  if (message == 'quit') {
    ws.close();
    exit(0);
  }
  ws.add(message);
}
What is the expected output? What do you see instead?
When I run this server using a self-signed cert and try to connect with a client, I get the following exception:
$ dart secureWebSocketClient.dart wss://localhost:4443
Uncaught Error: HandshakeException: Handshake error in client (OS Error: Issuer certificate is invalid., errno = -8156)
Unhandled exception:
HandshakeException: Handshake error in client (OS Error: Issuer certificate is invalid., errno = -8156)
0 _rootHandleUncaughtError.<anonymous closure>.<anonymous closure> (dart:async/zone.dart:677)
1 _asyncRunCallback (dart:async/schedule_microtask.dart:18)
2 _asyncRunCallback (dart:async/schedule_microtask.dart:21)
3 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:119)
However there is no way of indicating to the WebSocket class to ignore certificate errors.
The server works if I use a "plain" HTTP server, but not a secure server. It would appear that the WebSocket class should have an onBadCertificate(X509Certificate) callback like the SecureSocket classes.
Has this been addressed? I am having an issue where I need to accept a self-signed cert and this is holding me up.
After 5 years, this is still open. A solution is needed.
At least for the dart:io version of websockets, an onBadCertificate() would be great!
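For comparison, this is the kind of escape hatch other stacks already expose. For example, Python's standard ssl module lets a client accept self-signed certificates during development by relaxing verification on the client-side context (illustrative only; this is not a Dart API):

```python
import ssl

# Development-only: accept self-signed certificates by disabling
# hostname checking and certificate verification on the client context.
# check_hostname must be disabled before verify_mode can be relaxed.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# The context could then be handed to any TLS client, e.g.:
#   tls_sock = ctx.wrap_socket(raw_sock)
print(ctx.verify_mode == ssl.CERT_NONE)  # True
```

An onBadCertificate callback on WebSocket would serve the same role, but per-certificate rather than all-or-nothing.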
|
2025-04-01T06:38:20.124298
| 2014-09-17T00:20:46
|
84590415
|
{
"authors": [
"DartBot"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5114",
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/20978"
}
|
gharchive/issue
|
Comparison of String and dynamic (function) fails with uncaught error in core_ajax_dart.dart
This issue was originally filed by<EMAIL_ADDRESS>
What steps will reproduce the problem?
Fire a CoreAjax request with go()
What is the expected output? What do you see instead?
I get an uncaught error from the core_ajax_dart.dart.
The essential code snippet from core_ajax_dart.dart is:
if (!hasContentType && this.contentType) {
  headers['Content-Type'] = this.contentType;
}
whereby hasContentType is a function with a boolean return value and this.contentType is a String.
Exception: Uncaught Error: type 'String' is not a subtype of type 'bool' of 'boolean expression'.
Stack Trace:
#0 CoreAjax.go (package:core_elements/core_ajax_dart.dart:285:33)
#1 AelAjax.go (http://localhost:8080/components/ael-ajax/ael-ajax.dart:39:17)
#2 AelCtrl.getUserProfile (http://localhost:8080/app.dart:157:26)
#3 AelCtrl.AelCtrl (http://localhost:8080/app.dart:77:19)
#4 main.<anonymous closure>.<anonymous closure> (http://localhost:8080/app.dart:27:18)
#5 _RootZone.runUnary (dart:async/zone.dart:1082)
#6 _Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:488)
#7 _Future._propagateToListeners (dart:async/future_impl.dart:571)
#8 _Future._completeWithValue (dart:async/future_impl.dart:331)
#9 _Future._asyncComplete.<anonymous closure> (dart:async/future_impl.dart:393)
#10 _asyncRunCallbackLoop (dart:async/schedule_microtask.dart:41)
#11 _asyncRunCallback (dart:async/schedule_microtask.dart:48)
#12 _handleMutation (dart:html:39006)
What version of the product are you using?
core_elements 0.2.1+1
Dart 1.6.0
On what operating system?
Windows 7 64 bit
What browser (if applicable)?
Dartium 37.0.2062.76
Please provide any additional information below.
This issue has been moved to dart-lang/polymer-dart#301.
|
2025-04-01T06:38:20.127601
| 2015-08-13T17:20:39
|
100823041
|
{
"authors": [
"jbdeboer",
"srawlins"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5115",
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/24073"
}
|
gharchive/issue
|
Analyzer: When multiple imports provide the same symbol, one import should be marked as "unused"
e.g. In Angular, the package:angular/angular.dart file exports package:di/di.dart.
In the following file, the Module symbol is coming from di.dart, but also exported through angular.
import 'package:angular/angular.dart';
import 'package:di/di.dart';

class MyModule extends Module { ... }
Currently, the analyzer does not give any hints about unused imports.
However, I would expect angular.dart to be flagged as "unused". angular.dart is not used since Module is also available through di.dart.
Even a subset of this, examining just the names in show clauses, would be useful. I found some code with:
import 'package:a/a.dart';
import 'package:a/src/foo.dart' show foo;
because at one point, a.dart did not export foo. But now it does, so the second import is unnecessary. Not sure if one is easier to implement or faster to run than the other...
I'll close this in favor of the issue I've been referencing when landing changes. https://github.com/dart-lang/sdk/issues/44569
|
2025-04-01T06:38:20.137367
| 2015-08-18T23:14:06
|
101770654
|
{
"authors": [
"bwilkerson",
"hterkelsen",
"pq",
"sethladd"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5116",
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/24126"
}
|
gharchive/issue
|
dartanalyzer crashes when .packages target is empty
.packages:
foo:
stderr:
Bad state: No element
#0 List.last (dart:core-patch/growable_array.dart:212)
#1 startsWith (package:analyzer/src/generated/utilities_dart.dart:47:20)
#2 SourceFactory._getPackageMapping.<anonymous closure> (package:analyzer/src/generated/source.dart:786:13)
#3 _HashVMBase&MapMixin&&_LinkedHashMapMixin.forEach (dart:collection-patch/compact_hash.dart:340)
#4 MapView.forEach (dart:collection/maps.dart:194)
#5 SourceFactory._getPackageMapping (package:analyzer/src/generated/source.dart:784:23)
#6 SourceFactory.restoreUri (package:analyzer/src/generated/source.dart:762:32)
#7 Driver._computeLibrarySource (package:analyzer_cli/src/driver.dart:341:38)
#8 Driver._analyzeAll (package:analyzer_cli/src/driver.dart:137:23)
#9 Driver.start.<anonymous closure> (package:analyzer_cli/src/driver.dart:99:16)
#10 _BatchRunner.runAsBatch.<anonymous closure> (package:analyzer_cli/src/driver.dart:536:39)
#11 _RootZone.runUnaryGuarded (dart:async/zone.dart:1103)
#12 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:341)
#13 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:270)
#14 _SinkTransformerStreamSubscription._add (dart:async/stream_transformers.dart:67)
#15 _EventSinkWrapper.add (dart:async/stream_transformers.dart:14)
#16 _StringAdapterSink.add (dart:convert/string_conversion.dart:256)
#17 _LineSplitterSink._addLines (dart:convert/line_splitter.dart:127)
#18 _LineSplitterSink.addSlice (dart:convert/line_splitter.dart:102)
#19 StringConversionSinkMixin.add (dart:convert/string_conversion.dart:180)
#20 _ConverterStreamEventSink.add (dart:convert/chunked_conversion.dart:80)
#21 _SinkTransformerStreamSubscription._handleData (dart:async/stream_transformers.dart:119)
#22 _RootZone.runUnaryGuarded (dart:async/zone.dart:1103)
#23 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:341)
#24 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:270)
#25 _SinkTransformerStreamSubscription._add (dart:async/stream_transformers.dart:67)
#26 _EventSinkWrapper.add (dart:async/stream_transformers.dart:14)
#27 _StringAdapterSink.add (dart:convert/string_conversion.dart:256)
#28 _StringAdapterSink.addSlice (dart:convert/string_conversion.dart:260)
#29 _Utf8ConversionSink.addSlice (dart:convert/string_conversion.dart:336)
#30 _Utf8ConversionSink.add (dart:convert/string_conversion.dart:329)
#31 _ConverterStreamEventSink.add (dart:convert/chunked_conversion.dart:80)
#32 _SinkTransformerStreamSubscription._handleData (dart:async/stream_transformers.dart:119)
#33 _RootZone.runUnaryGuarded (dart:async/zone.dart:1103)
#34 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:341)
#35 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:270)
#36 _StreamController&&_SyncStreamControllerDispatch._sendData (dart:async/stream_controller.dart:744)
#37 _StreamController._add (dart:async/stream_controller.dart:616)
#38 _StreamController.add (dart:async/stream_controller.dart:562)
#39 _Socket._onData (dart:io-patch/socket_patch.dart:1793)
#40 _RootZone.runUnaryGuarded (dart:async/zone.dart:1103)
#41 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:341)
#42 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:270)
#43 _StreamController&&_SyncStreamControllerDispatch._sendData (dart:async/stream_controller.dart:744)
#44 _StreamController._add (dart:async/stream_controller.dart:616)
#45 _StreamController.add (dart:async/stream_controller.dart:562)
#46 _RawSocket._RawSocket.<anonymous closure> (dart:io-patch/socket_patch.dart:1344)
#47 _NativeSocket.issueReadEvent.issue (dart:io-patch/socket_patch.dart:728)
#48 _microtaskLoop (dart:async/schedule_microtask.dart:43)
#49 _microtaskLoopEntry (dart:async/schedule_microtask.dart:52)
#50 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:96)
#51 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:149)
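The crash class is easy to model: reverse-mapping a file path to a package: URI does prefix matching against each package root, and a .packages entry like `foo:` with no target yields an empty root that the matching code did not anticipate. A defensive sketch (the helper name and logic below are hypothetical, not the analyzer's actual code):

```python
def restore_uri(path, package_map):
    """Map an absolute file path back to a package: URI, skipping
    malformed entries whose root is empty."""
    for name, root in package_map.items():
        if root and path.startswith(root):  # guard against 'foo:' -> ''
            return "package:%s/%s" % (name, path[len(root):])
    return None

good = {"foo": "/p/foo/lib/"}
bad = {"foo": ""}  # what an empty '.packages' target parses to

print(restore_uri("/p/foo/lib/a.dart", good))  # package:foo/a.dart
print(restore_uri("/p/foo/lib/a.dart", bad))   # None, instead of crashing
```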
This looks like an invalid .packages file that should have been caught by the code that reads the .packages file.
We are asking Lasse what the appropriate thing to do is in this case:
https://codereview.chromium.org/1298323002/
On Tue, Aug 18, 2015, 4:50 PM Brian Wilkerson<EMAIL_ADDRESS>wrote:
This looks like an invalid .packages file that should have been caught by
the code that reads the .packages file.
—
Reply to this email directly or view it on GitHub
https://github.com/dart-lang/sdk/issues/24126#issuecomment-132393040.
We are asking Lasse what the appropriate thing to do is in this case:
https://codereview.chromium.org/1298323002/
Awesome. Please update this issue when we know where we're headed. It'd be easy enough to guard against on our end but probably better handled in package_config once and for all rather than in all the clients.
This crash looks like a "real" crash, and thus a candidate for fixing for 1.12.
This crash looks like a "real" crash, and thus a candidate for fixing for 1.12.
Agreed.
We should fix it in package_config. Feel free to open a bug there and assign it to me and I'll happily take a look.
We should fix it in package_config. Feel free to open a bug there and assign it to me and I'll happily take a look.
Actually, I'm less sure now. I'm looking into it.
https://codereview.chromium.org/1298393004/
Fixed with e11ce8ba87952ee2efeb7ed8211801f6cb6d9c9d.
Request to merge to dev filed here: https://github.com/dart-lang/sdk/issues/24138.
|
2025-04-01T06:38:20.145050
| 2018-08-24T16:54:03
|
353857000
|
{
"authors": [
"a-siva",
"sjindel-google"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5117",
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/34252"
}
|
gharchive/issue
|
BitTestImmediate crashes on Windows 32-bit in Dart 1 mode
This test started failing after landing 2beb05b8. There is a corresponding line in vm.status. Marking as P2 because it only occurs in Dart 1 mode.
https://dart-review.googlesource.com/c/sdk/+/78861 should fix this.
|
2025-04-01T06:38:20.158394
| 2022-03-23T15:07:51
|
1178276743
|
{
"authors": [
"DetachHead",
"parlough",
"scheglov",
"srawlins"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5118",
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/48650"
}
|
gharchive/issue
|
documentation for strong-mode rules
Describe the issue
there seems to be no documentation for the analyzer.strong-mode options. Though the implicit-casts option is mentioned briefly here, I can't seem to find any documentation for the other options.
I think there are 2 more (implicit-dynamic and declaration-casts)
raised #48651
@srawlins
The strong-mode rules are soft deprecated; soon to be for reals deprecated, so we will not be writing documentation for them.
I think this can be closed since the strong-mode rules were removed in Dart 3 and the strict language modes are documented in https://dart.dev/guides/language/analysis-options#enabling-additional-type-checks :D
I'm going to close this as the replacement strict language modes are documented at https://dart.dev/tools/analysis#enabling-additional-type-checks and https://github.com/dart-lang/sdk/issues/50679 is tracking removing the old strong-mode options.
Please open an issue on site-www if you'd like to see any further improvements to the docs. Thanks!
|
2025-04-01T06:38:20.166513
| 2024-04-14T17:55:09
|
2242287343
|
{
"authors": [
"a-siva",
"brianquinlan",
"devoncarew",
"lrhn",
"meowofficial"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5119",
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/55469"
}
|
gharchive/issue
|
gzip decodes only first line of the file
GZipCodec can't decode the attached file. It decodes only the first line of the file. I could decode it with the gzip shell command. I have also tried other libraries from pub.dev but with no luck.
void main(List<String> arguments) async {
final file = File('test.csv.gz');
final bytes = file.readAsBytesSync();
print(bytes.length); // prints 3796
print(gzip.decode(bytes).length); // prints 65
}
test.csv.gz
I suspect the archive itself is corrupt or somehow non-standard. On macos, I see:
Closing as I think the issue is with the archive and not the GZipCodec class. If on further investigation of the archive you believe its well-formed / something GZipCodec should parse, please re-open.
@devoncarew I updated the file. Now it opens with the standard macos archive utility. Could you check it again?
//cc @brianquinlan
I can repro. Python is able to decompress this file:
>>> s = open('test.csv.gz', 'rb').read()
>>> import gzip
>>> t = gzip.decompress(s)
>>> len(t)
68541
The bytes that Dart actually decodes are:
>>> x = [105, 100, 83, 117, 98, 67, 97, 109, 112, 97, 105, 103, 110, 84, 105, 116, 108, 101, 44, 105, 100, 83, 117, 98, 65, 100, 83, 101, 116, 84, 105, 116, 108, 101, 44, 105, 100, 83, 117, 98, 67, 97, 109, 112, 97, 105, 103, 110, 44, 105, 100, 67, 97, 109, 112, 97, 105, 103, 110, 84, 105, 116, 108, 101, 10]
>>> bytes(x)
b'idSubCampaignTitle,idSubAdSetTitle,idSubCampaign,idCampaignTitle\n'
Which is the first line of the file. If I understand correctly, the GZIP file format consists of concatenated compressed data sets. So maybe we are only decoding the first data set?
I get the same output as Dart when use the zpipe example after changing:
- ret = inflateInit(&strm);
+ ret = inflateInit2(&strm, 32 + 15);
The Python implementation looks very similar to ours.
If I extract the file and recompress it with gzip, both Dart and zpipe can decompress the file. How did you generate this archive?
@brianquinlan This archive is from raw data export API response of https://docs.tracker.my.com/api/export-api/raw/about
Seems related to https://github.com/dart-lang/sdk/issues/47244
Yep. OK, I missed that Python deals with gzip data starting in Python code:
https://github.com/python/cpython/blob/fc21c7f7a731d64f7e4f0e82469f78fa9c104bbd/Lib/gzip.py#L622
I also found an example on how to handle concatenated gzip streams in C from Mark Adler himself:
https://stackoverflow.com/questions/17820664/is-this-a-bug-in-this-gzip-inflate-method/17822217#17822217
I have a straightforward fix for this but it will take a while for me to convince myself that it always works.
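For illustration, concatenated gzip members can be handled by looping `zlib.decompressobj` over the remaining input, which is roughly what Python's gzip module does internally:

```python
import gzip
import zlib

def decompress_all_members(data: bytes) -> bytes:
    """Decompress every member of a concatenated gzip stream."""
    out = []
    while data:
        # wbits=47 (32 + 15) auto-detects a gzip or zlib header.
        d = zlib.decompressobj(wbits=47)
        out.append(d.decompress(data))
        data = d.unused_data  # bytes left after the current member's trailer
    return b"".join(out)

# Two independently-compressed members, concatenated.
blob = gzip.compress(b"first line\n") + gzip.compress(b"second line\n")
assert decompress_all_members(blob) == b"first line\nsecond line\n"
```

A single `gzip.decode`-style call stops at the first member's trailer, which matches the one-line output seen above.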
|
2025-04-01T06:38:20.167766
| 2023-01-29T06:01:27
|
1561140395
|
{
"authors": [
"anujcontractor",
"mraleph"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5120",
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/pull/51155"
}
|
gharchive/pull-request
|
issue 4433: updated web.dart & create_test.dart
added --webdev serve to add a small caveat
Spam.
|
2025-04-01T06:38:20.174474
| 2021-06-14T13:51:08
|
920435146
|
{
"authors": [
"hauketoenjes",
"jpelgrim",
"mit-mit"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5121",
"repo": "dart-lang/setup-dart",
"url": "https://github.com/dart-lang/setup-dart/issues/35"
}
|
gharchive/issue
|
Setting up dart on a self hosted runner using tool cache not working as expected
We are in the process of migrating from GitHub's macos-latest runners to self hosted runners running on Mac Minis. When we use the dart-lang/setup-dart action now, the first run is fine, but from the second run onwards we run into the following issue.
Installing Dart SDK version "2.13.3" from the stable channel on macos-x64
Downloading https://storage.googleapis.com/dart-archive/channels/stable/release/2.13.3/sdk/dartsdk-macos-x64-release.zip...
  % Total    % Received % Xferd  Average Speed   Time     Time     Time  Current
                                 Dload  Upload   Total    Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  --:--:-- --:--:--     0
  0  183M    0 20786    0     0  74501      0  0:42:56  --:--:--  0:42:56 74235
 26  183M   26 48.2M    0     0  34.8M      0  0:00:05  0:00:01  0:00:04 34.8M
 64  183M   64  118M    0     0  52.9M      0  0:00:03  0:00:02  0:00:01 52.8M
100  183M  100  183M    0     0  58.6M      0  0:00:03  0:00:03 --:--:--  58.6M
replace /Users/runner/actions-runner/_work/_tool/dart-sdk/bin/dart? [y]es, [n]o, [A]ll, [N]one, [r]ename: NULL
(EOF or read error, treating as "[N]one" ...)
Error: Download failed! Please check passed arguments.
Error: Process completed with exit code 1.
TL;DR How do we use the dart-lang/setup-dart properly, in combination with the tool cache and caching multiple dart versions. Maybe as a workaround we can just answer yes instead of NULL on the replace /Users/runner/actions-runner/_work/_tool/dart-sdk/bin/dart? question?
More detail
I see this in setup.sh
Unzipping dartsdk.zip into the RUNNER_TOOL_CACHE directory
# Download installation zip.
curl --connect-timeout 15 --retry 5 "$URL" > "${HOME}/dartsdk.zip"
unzip "${HOME}/dartsdk.zip" -d "${RUNNER_TOOL_CACHE}" > /dev/null
Then appending to the GITHUB_PATH
# Update paths.
echo "${HOME}/.pub-cache/bin" >> $GITHUB_PATH
echo "${RUNNER_TOOL_CACHE}/dart-sdk/bin" >> $GITHUB_PATH
So this action is not doing anything with versioning or checking if the requested version is already installed, like we see done in flutter for example
/Users/runner/actions-runner/_work/_tool runner$ ls -l flutter/
total 0
drwxr-xr-x 4 runner staff 128B Jun 14 12:52 ./
drwxr-xr-x 6 runner staff 192B Jun 14 14:26 ../
drwxr-xr-x 4 runner staff 128B Jun 14 12:52 2.0.3-stable/
drwxr-xr-x 4 runner staff 128B Jun 14 09:54 2.2.1-stable/
/Users/runner/actions-runner/_work/_tool runner$ ls -l dart-sdk
total 40
drwx------ 10 runner staff 320B Jun 9 13:02 ./
drwxr-xr-x 6 runner staff 192B Jun 14 14:26 ../
-rw-r--r-- 1 runner staff 1.5K Jun 7 13:14 LICENSE
-rw-r--r-- 1 runner staff 981B Jun 7 13:14 README
drwx------ 14 runner staff 448B Jun 10 10:05 bin/
-rw-r--r-- 1 runner staff 189B Jun 9 13:02 dartdoc_options.yaml
drwxr-xr-x 9 runner staff 288B Jun 9 13:02 include/
drwxr-xr-x 28 runner staff 896B Jun 9 13:19 lib/
-rw-r--r-- 1 runner staff 41B Jun 9 13:02 revision
-rw-r--r-- 1 runner staff 7B Jun 9 13:02 version
Anybody else running into this?
Update: The https://github.com/cedx/setup-dart action doesn't have this issue, so reverting to that for now.
The issue seems to be related to the unzip command in this line:
https://github.com/dart-lang/setup-dart/blob/ade92c2f32c026078e6297a030ec6b7933f71950/setup.sh#L80
A possible solution would be to pass -o to disable user input and force overriding of files like described in the man page here: https://linux.die.net/man/1/unzip .
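In patch form, that would be a one-line change to the unzip invocation quoted above:

```diff
-unzip "${HOME}/dartsdk.zip" -d "${RUNNER_TOOL_CACHE}" > /dev/null
+unzip -o "${HOME}/dartsdk.zip" -d "${RUNNER_TOOL_CACHE}" > /dev/null
```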
@hauketoenjes did you confirm that the -o option fixes the issue? If so, are you interested in sending a PR for that?
|
2025-04-01T06:38:20.241322
| 2019-06-23T21:34:19
|
459620389
|
{
"authors": [
"UdjinM6",
"bitfex"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5122",
"repo": "dashpay/dash",
"url": "https://github.com/dashpay/dash/issues/2996"
}
|
gharchive/issue
|
Wallet node stop syncing
Hello,
After 22.06.2019 wallet node stop syncing.
Dashd version: v<IP_ADDRESS> (from releases binary)
Machine specs:
OS: Ubuntu 18.04.2 LTS (Bionic Beaver)
CPU: Intel(R) Celeron(R) CPU J3355 @ 2.00GHz
RAM: 8Gb
Disk size: 500Gb
Disk Type (HD/SDD): HDD
debug.log attached: debug.log
Possible duplicate of #2995 , pls try the solution mentioned there i.e. reconsiderblock<PHONE_NUMBER>0000112e41e4b3afda8b233b8cc07c532d2eac5de097b68358c43e
Thank you, that seems to have helped (I needed to wait about 15-60 minutes before the node started syncing)
|
2025-04-01T06:38:20.258846
| 2024-02-21T15:33:35
|
2147055106
|
{
"authors": [
"codecov-commenter",
"lgray",
"martindurant"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5123",
"repo": "dask-contrib/dask-awkward",
"url": "https://github.com/dask-contrib/dask-awkward/pull/475"
}
|
gharchive/pull-request
|
feat: stage getitem calls
@lgray , this doesn't actually work, but I thought you would appreciate a glimpse of what I had in mind.
Yeah this is more or less what I did for the histograms in the end, so that makes sense. I guess I just don't see how to pull it to the end in the case of getitems.
don't see how to pull it to the end
What do you mean?
I am thinking that the with_field case is essentially identical, and instead of queueing a specific set of things (items to get) like this, we can have a small structure of stuff to do, where there can be a couple of specific sorts, and for each a single method says how to execute. Execution happens as soon as we see an operation that doesn't map into the queue (or ._dask, ._meta get accessed).
Oh - as in - starting from that entry point I don't see how to get it to a functioning implementation because my brain is occupied with other tasks. :-)
I'm sure I could see the whole way through in a more quiet moment. The initial direction makes a lot of sense though.
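The memoization idea under discussion can be sketched as follows (illustrative names only; the real implementation keys its cache with `dask.base.tokenize` over the function, arguments, and collection names):

```python
# Illustrative sketch: build each graph-layer description at most once per
# (collection, field) pair, returning the cached object on repeat access.
_layer_cache: dict = {}
_build_count = 0  # stands in for expensive meta/layer construction

def field_layer(collection_name: str, field: str) -> dict:
    """Return a (possibly cached) layer description for `arr[field]`."""
    global _build_count
    key = (collection_name, field)
    if key not in _layer_cache:
        _build_count += 1
        _layer_cache[key] = {"op": "getitem", "src": collection_name,
                             "field": field}
    return _layer_cache[key]

first = field_layer("arr", "a")
second = field_layer("arr", "a")
assert first is second and _build_count == 1
```

Repeated `arr.a.b`-style access then pays only a dictionary lookup, which is consistent with the roughly 2x timing improvement shown below.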
A couple of failures here to wrap my head around, perhaps because of mutation somewhere; but here are the timings
Post
In [1]: import dask_awkward as dak
In [2]: arr = dak.from_lists([{"a": {"b": [1, 2, 3]}}]*5)
In [3]: arr2 = arr.a.b
In [4]: %timeit arr2 = arr.a.b
85.9 µs ± 280 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
Pre
In [1]: import dask_awkward as dak
In [2]: arr = dak.from_lists([{"a": {"b": [1, 2, 3]}}]*5)
In [3]: arr2 = arr.a.b
In [4]: %timeit arr2 = arr.a.b
215 µs ± 3.12 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
Notes:
a typetracer does not have a deterministic token in cache; am using meta.typestr
small optimization in .fields that I'm pretty sure is harmless.
Yeah I went ahead and tried it - definitely a noticeable improvement!
It leaves fancy indexing as the last thing that's taking significant time.
However, this PR has some annoying rebasing issues with #477 so I can't compose it all together pleasantly. Can't quite yet see the full picture of what is left.
@lgray , merge done, back to the same three failures as before
OK, so the problem is, that the cache also contains the output divisions, which depend on the input divisions at the time of first call. If those divisions become known, the result would be different. Interestingly, one of the couple of failing tests has this at the start:
def test_single_int(daa: dak.Array, caa: ak.Array) -> None:
daa = dak.copy(daa)
daa.eager_compute_divisions()
because it wants known divisions, but doesn't want to mutate the object held by the fixture.
Shouldn't be too hard to keep track of the state of divisions somehow as well?
I should have translated: I know what's wrong, I can fix it.
Ah, notes rather than discussion, gotcha. No problem, and cool!
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 93.16%. Comparing base (8cb8994) to head (9e39611).
Report is 29 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #475 +/- ##
==========================================
+ Coverage 93.06% 93.16% +0.09%
==========================================
Files 23 23
Lines 3290 3322 +32
==========================================
+ Hits 3062 3095 +33
+ Misses 228 227 -1
:umbrella: View full report in Codecov by Sentry.
OK, that fixes it. @lgray , maybe a speed test would be nice - I got around the former problem by simply not caching the divisions, as I don't think this part was contributing a lot compared to making meta and layers. I could instead make the cache more complex, if necessary.
I perhaps ought to write a test looking at the contents of the cache? I'm not sure we need that if all passes and times are clearly faster.
Yes with the awkward _new patch + multifill + this we're getting all the speedup we've seen thus far.
one nitpick here:
tokenize(fn, *args, meta is not None and meta.typestr, **kwargs)
appears to be noticeable only due to the meta.typestr call (ok it's half a second but that's not small when we are down to 4 seconds). Particular str(self.type) over in awkward that this calls is costly when spammed.
Would be good savings if we can get around that.
The remaining place that may give us some time back after all these improvements appears to be:
I thought meta.typestr was faster than str(meta), which is what the previous version would do. It sounds like it doesn't matter. So question: should map_partitions be expected to produce the identical result whether or not meta is provided? If yes, it doesn't need to be in this tokenize call at all. If no, then it does, and I don't know of a faster way to get a unique identifier of it.
Also, I reckon output_divisions should probably have been in the tokenize, since that does change the nature of the layer produced.
For building the layers themselves it doesn't matter. But I'd like to ruminate on it for a bit.
Yeah I think my position is as follows:
the _meta only alters the outcome of evaluating typetracers, not graph structure
the from-uproot io layer, as an example, does not change its output keys when its columns are projected/optimized
this applies to any AwkwardInputLayer
likewise when mocking / optimizing we don't change the keys of layers based on the meta
but we do generate the key based on the meta which is inconsistent with typical meaning
similarly, the meta is not tokenized in dask.array.Array itself nor, after checking, in a few of its expensive algorithms
Therefore I agree with not tokenizing the meta.
Done. This is final once green, unless we can think of some test that might help.
Furthermore if a user is trying to manually overwrite keys they'll probably have found the cache in the first place and can manipulate it as they need to.
Agreed, I think the mapping of collection name to graph/meta is natural. I don't even think there's any particular documentation that should go with this, except that maybe the cache size should be configurable? That doesn't need to happen yet.
I'd motion for going ahead and merging this today and getting a release out, then a bunch of wheels can turn on the coffea side of things.
@martindurant can I go ahead and merge/release? You tend to do the honors on these PRs, but I'm happy to turn the cranks.
Go ahead
|
2025-04-01T06:38:20.266550
| 2023-09-19T14:30:34
|
1903129160
|
{
"authors": [
"charlesbluca",
"griffith-maker",
"qwebug"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5124",
"repo": "dask-contrib/dask-sql",
"url": "https://github.com/dask-contrib/dask-sql/issues/1226"
}
|
gharchive/issue
|
[BUG][GPU Logic Bug] "SELECT ()||(<column(decimal)>) FROM " brings Error
What happened:
"SELECT (<string>)||(<column(decimal)>) FROM <table>" brings different results, when using CPU and GPU.
What you expected to happen:
It is the same result, when using CPU and GPU.
Minimal Complete Verifiable Example:
import pandas as pd
import dask.dataframe as dd
from dask_sql import Context
c = Context()
df = pd.DataFrame({
'c0': [0.5113391810437729]
})
t1 = dd.from_pandas(df, npartitions=1)
c.create_table('t1', t1, gpu=False)
c.create_table('t1_gpu', t1, gpu=True)
print('CPU Result:')
result1= c.sql("SELECT ('A')||(t1.c0) FROM t1").compute()
print(result1)
print('GPU Result:')
result2= c.sql("SELECT ('A')||(t1_gpu.c0) FROM t1_gpu").compute()
print(result2)
Result:
CPU Result:
Utf8("A") || t1.c0
0 A0.5113391810437729
GPU Result:
Utf8("A") || t1_gpu.c0
0 A0.511339181
Anything else we need to know?:
Environment:
dask-sql version: 2023.6.0
Python version: Python 3.10.11
Operating System: Ubuntu22.04
Install method (conda, pip, source): Docker deploy by https://hub.docker.com/layers/rapidsai/rapidsai-dev/23.06-cuda11.8-devel-ubuntu22.04-py3.10/images/sha256-cfbb61fdf7227b090a435a2e758114f3f1c31872ed8dbd96e5e564bb5fd184a7?context=explore
Trying out your reproducer with latest main gives me an error 😕 looks like at some point between now and 2023.6.0 our logical plan has changed such that we skip the casting of the non-string column:
# 2023.6.0
Projection: Utf8("A") || CAST(t1.c0 AS Utf8)
TableScan: t1 projection=[c0]
# main
Projection: Utf8("A") || t1.c0
TableScan: t1 projection=[c0]
Leading to errors in the binary operation; cc @jdye64 if you have any capacity to look into this. As for the original issue, it seems like that generally comes down to difference in the behavior of cast operations on CPU/GPU, as the following shows the same issue:
print('CPU Result:')
result1= c.sql("SELECT CAST(c0 AS STRING) FROM t1").compute()
print(result1)
print('GPU Result:')
result2= c.sql("SELECT CAST(c0 AS STRING) FROM t1_gpu").compute()
print(result2)
Can look into that, would you mind modifying your issue description / title to reflect this?
Dask-sql version 2024.3.0 has fixed it.
|
2025-04-01T06:38:20.270993
| 2020-08-19T20:04:32
|
682142800
|
{
"authors": [
"TomAugspurger",
"wfondrie"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5125",
"repo": "dask/dask-ml",
"url": "https://github.com/dask/dask-ml/pull/728"
}
|
gharchive/pull-request
|
Add a 'decision_function()' method to the 'LogisticRegression' class.
I noticed that the dask_ml.linear_model.LogisticRegression class lacked a decision_function() method like is implemented in the corresponding scikit-learn API.
This PR adds a decision_function() method and updates one corresponding test.
Thanks. The CI failures are known and unrelated.
|
2025-04-01T06:38:20.282575
| 2019-05-02T18:08:42
|
439718980
|
{
"authors": [
"Timshel",
"andersy005",
"basnijholt",
"bocklund",
"kmpaul"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5126",
"repo": "dask/dask-mpi",
"url": "https://github.com/dask/dask-mpi/issues/30"
}
|
gharchive/issue
|
dask-mpi not working
I have this script dask_mpi_test.py:
from dask_mpi import initialize
initialize()
from distributed import Client
import dask
client = Client()
df = dask.datasets.timeseries()
print(df.groupby(['time', 'name']).mean().compute())
print(client)
When I try to run this script with:
mpirun -np 4 python dask_mpi_test.py
I get these errors:
~/workdir $ mpirun -np 4 python dask_mpi_test.py
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://xxxxxx:8786
distributed.scheduler - INFO - bokeh at: :8787
distributed.worker - INFO - Start worker at: tcp://xxxxx:44712
/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/bokeh/core.py:57: UserWarning:
Port 8789 is already in use.
Perhaps you already have a cluster running?
Hosting the diagnostics dashboard on a random port instead.
warnings.warn('\n' + msg)
distributed.worker - INFO - Start worker at: tcp://xxxxxx:36782
distributed.worker - INFO - Listening to: tcp://:44712
distributed.worker - INFO - bokeh at: :8789
distributed.worker - INFO - Listening to: tcp://:36782
distributed.worker - INFO - Waiting to connect to: tcp://xxxxxx:8786
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - bokeh at: :43876
distributed.worker - INFO - Waiting to connect to: tcp://xxxxx:8786
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 3.76 GB
distributed.worker - INFO - Memory: 3.76 GB
distributed.worker - INFO - Local Directory: /gpfs/fs1/scratch/abanihi/worker-uoz0vtci
distributed.worker - INFO - Local Directory: /gpfs/fs1/scratch/abanihi/worker-bb0u_737
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
Traceback (most recent call last):
File "dask_mpi_test.py", line 6, in <module>
client = Client()
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/client.py", line 640, in __init__
self.start(timeout=timeout)
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/client.py", line 763, in start
sync(self.loop, self._start, **kwargs)
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/utils.py", line 321, in sync
six.reraise(*error[0])
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/six.py", line 693, in reraise
raise value
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/utils.py", line 306, in f
result[0] = yield future
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/tornado/gen.py", line 1133, in run
value = future.result()
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/tornado/gen.py", line 1141, in run
yielded = self.gen.throw(*exc_info)
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/client.py", line 851, in _start
yield self._ensure_connected(timeout=timeout)
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/tornado/gen.py", line 1133, in run
value = future.result()
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/tornado/gen.py", line 1141, in run
yielded = self.gen.throw(*exc_info)
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/client.py", line 892, in _ensure_connected
self._update_scheduler_info())
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/tornado/gen.py", line 1133, in run
value = future.result()
tornado.util.Timeout
$ conda list dask
# packages in environment at /glade/work/abanihi/softwares/miniconda3/envs/analysis:
#
# Name Version Build Channel
dask 1.2.0 py_0 conda-forge
dask-core 1.2.0 py_0 conda-forge
dask-jobqueue 0.4.1+28.g5826abe pypi_0 pypi
dask-labextension 0.3.3 pypi_0 pypi
dask-mpi 1.0.2 py37_0 conda-forge
$ conda list tornado
# packages in environment at /glade/work/abanihi/softwares/miniconda3/envs/analysis:
#
# Name Version Build Channel
tornado 5.1.1 py37h14c3975_1000 conda-forge
$ conda list distributed
# packages in environment at /glade/work/abanihi/softwares/miniconda3/envs/analysis:
#
# Name Version Build Channel
distributed 1.27.0 py37_0 conda-forge
Is anyone aware of anything that must have happened in an update to dask or distributed to cause dask-mpi to break?
Ccing @kmpaul
The last CircleCI tests ran with dask=1.1.0 and distributed=1.25.2. However, I've tried to reproduce the same environment as was run in the last CircleCI test, and it fails on my laptop. ...Yet, rerunning the CircleCI test worked fine.
I can reproduce this with the following environment on macOS.
I am running
mpirun dask-mpi --scheduler-file my_scheduler.json --nthreads 1
python -c "from distributed import Client; c = Client(scheduler_file='my_scheduler.json')"
I see this issue with:
dask 1.2.0 py_0 conda-forge
dask-core 1.2.0 py_0 conda-forge
dask-mpi 1.0.2 py36_0 conda-forge
distributed 1.28.1 py36_0 conda-forge
tornado 6.0.2 py36h01d97ff_0 conda-forge
and (downgraded dask)
dask 1.1.5 py_0 conda-forge
dask-core 1.1.5 py_0 conda-forge
dask-mpi 1.0.2 py36_0 conda-forge
distributed 1.28.1 py36_0 conda-forge
tornado 6.0.2 py36h01d97ff_0 conda-forge
and (downgraded distributed to 1.27.1)
dask 1.2.0 py_0 conda-forge
dask-core 1.2.0 py_0 conda-forge
dask-mpi 1.0.2 py36_0 conda-forge
distributed 1.27.1 py36_0 conda-forge
tornado 6.0.2 py36h01d97ff_0 conda-forge
and
dask 1.1.5 py_0 conda-forge
dask-core 1.1.5 py_0 conda-forge
dask-mpi 1.0.2 py36_0 conda-forge
distributed 1.26.1 py36_0 conda-forge
tornado 6.0.2 py36h01d97ff_0 conda-forge
and
dask 1.1.1 py_0 conda-forge
dask-core 1.1.1 py_0 conda-forge
dask-mpi 1.0.2 py36_0 conda-forge
distributed 1.25.3 py36_0 conda-forge
tornado 6.0.2 py36h01d97ff_0 conda-forge
however, the following works!
dask 0.20.2 py_0 conda-forge
dask-core 0.20.2 py_0 conda-forge
dask-mpi 1.0.2 py36_0 conda-forge
distributed 1.24.2 py36_1000 conda-forge
tornado 6.0.2 py36h01d97ff_0 conda-forge
Downgrading distributed below 1.25 to 1.24 and dask to 0.20 (below 1.0) seems to work. Since they are coupled, I'm not sure where the issue is, but it's clearly upstream of dask-mpi.
I had the same timeout problem.
I was able to run my job while using dask-scheduler instead of dask-mpi to create the scheduler.
After some searching, it appears that the main difference in the dask-scheduler CLI is that it uses the current tornado IOLoop: https://github.com/dask/distributed/blob/1.28.1/distributed/cli/dask_scheduler.py#L197
Using the current loop instead of a new instance here: https://github.com/dask/dask-mpi/blob/master/dask_mpi/cli.py#L52 makes it run.
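As a sketch of the change being described (the exact code around cli.py#L52 may differ; the point is reusing the running loop rather than constructing a fresh one):

```diff
-    loop = IOLoop()
+    loop = IOLoop.current()
```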
@Timshel, @bocklund, thank you for chiming in. I am going to take a stab at a fix.
Moving forward, we may need to extend our testing environment to test different combinations of dask and distributed versions (or at least make sure that everything works with the latest versions).
I am getting the same problem with the latest versions of dask and distributed and running the example from the docs.
This is blocking https://github.com/basnijholt/adaptive-scheduler/pull/11.
Fixed with #33. Thank you @Timshel for the tip.
|
2025-04-01T06:38:20.314905
| 2020-05-30T23:53:07
|
627850616
|
{
"authors": [
"Hoeze",
"alexis-intellegens",
"dhirschfeld",
"jakirkham",
"jcrist",
"mrocklin"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5127",
"repo": "dask/dask",
"url": "https://github.com/dask/dask/issues/6267"
}
|
gharchive/issue
|
Accelerate intra-node IPC with shared memory
When implementing e.g. a data loading pipeline for machine learning with Dask, I can choose either:
threaded scheduler: Only fast, when GIL is released
forking scheduler: Only fast, when the data calcuation is very CPU intense compared to the result size.
I often face the issue that the threaded scheduler effectively uses only 150% CPU, no matter how many cores it gets, because of python code that does not parallelize.
The forking scheduler sometimes works better but only if the data loading is very CPU-intense.
Recently, I tried Ray and it could speed up some of my prediction models by 5-fold due to some reason.
I'm not 100% up to date with the latest development in Dask, but AFAIK Dask serializes all data when sending it between workers. That's why I assume the huge speed difference is due to the shared-memory object store Plasma that allows zero-copy transfers of Arrow arrays from the worker to Tensorflow.
=> I'd like to share two ideas how Plasma or Ray could be helpful for Dask:
Have a shared object cache between all threads/forks in dask/cachey
Shared memory communication:
Allow producer to calculate data and consumer to read it without (de)serialization or copying
Related issues:
Investigate using plasma
Investigate UNIX domain sockets
What is the workload?
Have you tried the dask.distributed scheduler? You can set up a system with sensible defaults by running the following:
from dask.distributed import Client
client = Client()
# then run your normal Dask code
https://docs.dask.org/en/latest/scheduling.html#dask-distributed-local
In general a system like Plasma will be useful when you want to do a lot of random access changes to a large data structure and you have to use many processes for some reason.
In my experience, the number of cases where this is true is very low. Unless you're doing something like a deep learning parameter server on one machine and can't use threads for some reason there is almost always a simpler solution.
When implementing e.g. a data loading pipeline for machine learning with Dask, I can choose either:
A data loading pipeline shouldn't really require any communication, and certainly not high speed random access modifications to a large data structure. It sounds like you just want a bunch of processes (because you have code that holds the GIL) and want to minimize data movement between those processes. The dask.distributed scheduler should have you covered there, you might want to add the threads_per_worker=1 (or 2) if you have a high core machine.
In addition to what Matt said, we have tended to keep Dask's dependencies pretty lightweight when possible. My guess is if we were to implement shared memory it would either involve multiprocessing.shared_memory (added in Python 3.8 with a backport package) or using UNIX domain sockets ( https://github.com/dask/distributed/issues/3630 ) (as noted above).
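As a small illustration of the `multiprocessing.shared_memory` route (Python 3.8+; shown within one process here, but a second process would attach by the same name):

```python
from multiprocessing import shared_memory

# Producer: create a named shared-memory block and write into it.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# Consumer: attach to the same block by name -- a zero-copy view,
# no serialization involved.
peer = shared_memory.SharedMemory(name=shm.name)
assert bytes(peer.buf[:5]) == b"hello"

peer.close()
shm.close()
shm.unlink()  # free the block once all handles are closed
```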
That said, if serialization is really a bottleneck for you, would suggest you take a closer look at what is being serialized. If it's not something that Dask serializes efficiently (like NumPy arrays), then it might just be you need to implement Dask serialization. If you have some simple Python classes consisting of things Dask already knows how to serialize efficiently, you might be able to just register those classes with Dask. It will then recurse through them and serialize them efficiently.
Additionally if you are Python with pickle protocol 5 support and a recent version of Dask, you can get efficient serialization with plain pickle thanks to out-of-band pickling ( https://github.com/dask/distributed/pull/3784 ). Though you would have to check and make sure you are meeting those requirements. This may also require some work on your end to ensure your objects use things that can be handled out-of-band by either wrapping them in PickleBuffers (like in the docs) or using NumPy arrays, which have builtin support.
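A minimal out-of-band pickling sketch (Python 3.8+), showing a large payload travelling outside the in-band pickle stream:

```python
import pickle

payload = bytearray(b"x" * 1024)
buffers = []
data = pickle.dumps(pickle.PickleBuffer(payload), protocol=5,
                    buffer_callback=buffers.append)
# Only opcodes/metadata are in `data`; the 1 KiB body sits in `buffers`
# and can be transported (or shared) without copying into the pickle.
restored = pickle.loads(data, buffers=buffers)
assert bytes(restored) == bytes(payload)
assert len(data) < len(payload)
```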
plasma might be ideally suited for e.g. shuffling operations, https://github.com/dask/dask/issues/6164
Maybe. We're not really bound by bandwidth there yet. Even if we were,
the people who are concerned about performance for dataframe shuffle
operations are only really concerned when we start talking about very large
datasets, for which single-node systems wouldn't be appropriate.
plasma might be ideally suited for e.g. shuffling operations, #6164
Though if you have thoughts on how plasma would help in that issue, please feel free to suggest over there. I'm sure people would be interested to hear 😉
In the context of distributed you could have a plasma store per node and instead of having workers communicating data directly, have them send the data to the plasma store on the receiving node and only send the guid / unique reference directly to the worker. All workers on that node would then have access to that data (by passing around the guid) without having to copy or deserialize the data.
I think that could have pretty big performance benefits for a number of workloads. IIUC that's basically what ray does.
To illustrate the benefits of Plasma, we demonstrate an 11x speedup (on a machine with 20 physical cores) for sorting a large pandas DataFrame (one billion entries). The baseline is the built-in pandas sort function, which sorts the DataFrame in 477 seconds. To leverage multiple cores, we implement the following standard distributed sorting scheme...
Anyway, it would be a very big piece of work, so not something I could invest time in. I thought I'd mention it as an option if people are considering big changes to improve performance.
Yeah, I think that having some sort of shuffling service makes sense (this
is also what Spark does). I'm not sure that we need all of the machinery
that comes along with Plasma though, which is a bit of a bear. My guess is
that a system that just stores data in normal vanilla RAM on each process
would do the trick.
I could totally be wrong though. It would be great if people wanted to run
experiments here and report back.
cc @rjzamora @madsbk (in case this is of interest)
Has there been any further discussion on the multiprocessing shared memory implementation? I also run dask on single machines with high core counts and have read-only data structures that I want shared.
@alexis-intellegens the ray developers created a Dask scheduler for this called dask-on-ray.
I'd recommend trying that one; it magically dropped my memory usage by an order of magnitude.
Note that you may need to use something like this:
# don't do this:
dask.compute(dask_fn(large_object))
# instead do this:
large_object_ref = ray.put(large_object)
dask.compute(dask_fn(large_object_ref))
ray will automatically de-reference the object for you.
Very interesting! I'll give it a go. Thanks @Hoeze
Out of curiosity, what were to happen if I made a shared memory object (via Python 3.8 multiprocessing) and tried to access it in dask workers? I'll try it later today.
Out of curiosity, what were to happen if I made a shared memory object (via Python 3.8 multiprocessing) and tried to access it in dask workers? I'll try it later today.
That should work, they'd pickle as references to the shared memory buffer and be remapped in the receiving process (provided all your workers are running on the same machine, otherwise you'd get an error). In general I think we're unlikely to add direct shared memory support in dask itself, but users are free to make use of it in custom workloads using e.g. dask.delayed. So if you have an object you want to share between workers, you can explicitly build this into your dask computations yourself (using either multiprocessing shared_memory or something more complicated like plasma).
As stated above, shared memory would make the most sense if you have objects that can be mapped to shared memory without copying (meaning they contain large buffers, like a numpy array) but also still hold the GIL. In practice this is rare - if you're using large buffers you also probably are doing something numeric (like numpy) in which case you release the GIL and threads work fine.
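As a rough sketch of that explicit pattern - stdlib multiprocessing.shared_memory plus NumPy; the helper names here are made up, and in a real Dask workload you would pass shm.name, the shape, and the dtype into dask.delayed tasks and attach inside each task:

```python
from multiprocessing import shared_memory

import numpy as np

def put_shared(arr):
    """Copy `arr` into a newly created shared-memory block."""
    shm = shared_memory.SharedMemory(create=True, size=arr.nbytes)
    view = np.ndarray(arr.shape, dtype=arr.dtype, buffer=shm.buf)
    view[:] = arr
    return shm

def attach_shared(name, shape, dtype):
    """What a worker task would do: attach by name -- no copy, no deserialization."""
    shm = shared_memory.SharedMemory(name=name)
    return shm, np.ndarray(shape, dtype=dtype, buffer=shm.buf)

data = np.arange(1_000_000, dtype="float64")
shm = put_shared(data)

# In a Dask workload you would pass only (shm.name, shape, dtype) into each
# dask.delayed task and call attach_shared there; here we attach in-process:
shm2, view = attach_shared(shm.name, data.shape, data.dtype)
total = float(view.sum())

shm2.close()
shm.close()
shm.unlink()
```

Note the receiving side must hold its SharedMemory handle open for as long as it uses the view, and exactly one owner should call unlink().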
Closing.
|
2025-04-01T06:38:20.327593
| 2020-11-02T19:48:02
|
734773731
|
{
"authors": [
"quasiben",
"steff456"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5128",
"repo": "dask/dask",
"url": "https://github.com/dask/dask/issues/6788"
}
|
gharchive/issue
|
Some methods of the DataFrame API Documentation are not in the summary table
What happened: In the summary table of the Dataframe API, https://docs.dask.org/en/latest/dataframe-api.html some methods are not present.
What you expected to happen: I will expect to have the following methods in the summary table,
[ ] DataFrame.abs
[ ] DataFrame.align
[ ] DataFrame.all
[ ] DataFrame.any
[ ] DataFrame.applymap
[ ] DataFrame.bfill
[ ] DataFrame.copy
[ ] DataFrame.diff
[ ] DataFrame.divide
[ ] DataFrame.eq
[ ] DataFrame.eval
[ ] DataFrame.ffill
[ ] DataFrame.first
[ ] DataFrame.ge
[ ] DataFrame.gt
[ ] DataFrame.idxmax
[ ] DataFrame.idxmin
[ ] DataFrame.info
[ ] DataFrame.isin
[ ] DataFrame.items
[ ] DataFrame.iteritems
[ ] DataFrame.last
[ ] DataFrame.le
[ ] DataFrame.lt
[ ] DataFrame.melt
[ ] DataFrame.mode
[ ] DataFrame.ne
[ ] DataFrame.nsmallest
[ ] DataFrame.pivot_table
[ ] DataFrame.resample
[ ] DataFrame.round
[ ] DataFrame.select_dtypes
[ ] DataFrame.sem
[ ] DataFrame.size
[ ] DataFrame.squeeze
[ ] DataFrame.to_html
[ ] DataFrame.to_string
[ ] DataFrame.to_timestamp
Minimal Complete Verifiable Example: For example, DataFrame.abs is not present in the summary table,
Anything else we need to know?: If I receive instructions of how can I help and add this methods in the documentation I will like to open the PR :)
Thank you @steff456 for the report! I think you can find those name can be added to the RST file here:
https://github.com/dask/dask/blob/master/docs/source/dataframe-api.rst
A PR would be most welcome!
Thanks for the quick response! I'll create the PR shortly 👍
|
2025-04-01T06:38:20.335217
| 2020-12-26T13:28:07
|
774874859
|
{
"authors": [
"jsignell",
"yohplala"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5129",
"repo": "dask/dask",
"url": "https://github.com/dask/dask/issues/7009"
}
|
gharchive/issue
|
Dask repartition / trouble with keeping bounds as returned by divisions
Hello,
Trying out and completing this example provided in the doc works.
import pandas as pd
from dask import dataframe as dd
df = pd.DataFrame(dict(a=list('aabbcc'), b=list(range(6))),index = pd.date_range(start='20100101', periods=6))
ddf = dd.from_pandas(df, npartitions=3)
ddf.divisions
ddf = ddf.repartition(partition_size="10MB")
ddf.divisions
First divisions returns
(Timestamp('2010-01-01 00:00:00', freq='D'),
Timestamp('2010-01-03 00:00:00', freq='D'),
Timestamp('2010-01-05 00:00:00', freq='D'),
Timestamp('2010-01-06 00:00:00', freq='D'))
Second one returns
(Timestamp('2010-01-01 00:00:00', freq='D'),
Timestamp('2010-01-06 00:00:00', freq='D'))
Now, trying on another example, the 2nd divisions fails this time.
from dask import dataframe as dd
import pandas as pd
import numpy as np
dti = pd.date_range(start='1/1/2018', end='1/08/2018', periods=100000)
df = pd.DataFrame(np.random.randint(100,size=(100000, 20)),columns=['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T'], index=dti)
ddf = dd.from_pandas(df, npartitions=1)
ddf.divisions
ddf = ddf.repartition(partition_size="10MB")
ddf.divisions
First divisions returns
(Timestamp('2018-01-01 00:00:00'), Timestamp('2018-01-08 00:00:00'))
Second one returns
(None, None, None)
Please, why is that so? Is there a bug somewhere?
Only displaying ddf shows me the index appears to have been lost after the repartition.
Dask Name: repartition, 6 tasks
Thanks for your help and support,
Bests,
Further comment.
Using npartitions instead of partition_size produces expected results.
from dask import dataframe as dd
import pandas as pd
import numpy as np
n_per=20*5000
dti = pd.date_range(start='1/1/2018', end='2/08/2019', periods=n_per)
col = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T']
df = pd.DataFrame(np.random.randint(100,size=(n_per, len(col))),columns=col, index=dti)
ddf = dd.from_pandas(df, npartitions=1)
#ddf = ddf.repartition(partition_size="10MB")
ddf = ddf.repartition(npartitions=2)
ddf.divisions
Produces
(Timestamp('2018-01-01 00:00:00'),
Timestamp('2018-07-21 12:00:00'),
Timestamp('2019-02-08 00:00:00'))
Seems related to issue #6362
I think https://github.com/dask/dask/issues/6362#issuecomment-652507357 describes the behavior that you are seeing. Note that divisions does not have to be set. It is perfectly fine to have unknown divisions. If you would like them to be set you can use ddf.reset_index().set_index("index")
|
2025-04-01T06:38:20.337099
| 2017-03-20T20:37:48
|
215547891
|
{
"authors": [
"jakirkham",
"mrocklin"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5130",
"repo": "dask/dask",
"url": "https://github.com/dask/dask/pull/2101"
}
|
gharchive/pull-request
|
Check and compare shapes in assert_eq
Add some checks for shapes in assert_eq. Particularly compare shapes before computing results and compare shapes between dask arrays and computed results.
Merging this soon if there are no further comments
LGTM
|
2025-04-01T06:38:20.339640
| 2017-05-24T17:28:19
|
231114790
|
{
"authors": [
"jakirkham",
"jcrist"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5131",
"repo": "dask/dask",
"url": "https://github.com/dask/dask/pull/2383"
}
|
gharchive/pull-request
|
Add token kwarg to da.map_blocks
Add the token kwarg to map_blocks, mirroring the token kwarg of atop. If provided, this is the prefix of the output key, but not the key itself.
Fixes #2380.
We may want to rethink these keyword names at some point. It'd be a bit of a pain to deprecate since this is public api, but the current keywords aren't the clearest (existing for historical reasons).
If I was to redo them I'd probably have key_name be for specifying the full key (name currently), and key_prefix for just the prefix (token currently). If we were to change them we'd probably want to mirror this convention in dask.dataframe and dask.bag as well.
LGTM. Thanks @jcrist.
|
2025-04-01T06:38:20.342040
| 2017-12-14T14:21:42
|
282116827
|
{
"authors": [
"TomAugspurger",
"jakirkham"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5132",
"repo": "dask/dask",
"url": "https://github.com/dask/dask/pull/2997"
}
|
gharchive/pull-request
|
COMPAT: Pandas 0.22.0 astype for categorical dtypes
Change in https://github.com/pandas-dev/pandas/pull/18710 caused a dask failure
when reading CSV files, as our .astype relied on the old (broken) behavior.
Closes https://github.com/dask/dask/issues/2996
All green. Since master is currently failing on this I'll merge this later today, but I'd appreciate it if someone could take a look.
Whoops, thanks.
Thanks @TomAugspurger.
|
2025-04-01T06:38:20.344959
| 2022-02-17T18:40:54
|
1141715202
|
{
"authors": [
"Dranaxel",
"GPUtester",
"jsignell"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5133",
"repo": "dask/dask",
"url": "https://github.com/dask/dask/pull/8734"
}
|
gharchive/pull-request
|
Added compute method to raise error on use
[ ] Closes #8695
[ ] Tests added / passed
[ ] Passes pre-commit run --all-files
First (very) naive implementation of compute method
Can one of the admins verify this patch?
ok to test
It would also be great if you could add a test for this. Probably you can just add a few lines to this test: https://github.com/dask/dask/blob/2ed45454bde5a3406a0df9f492bf2917e3d15b37/dask/dataframe/tests/test_groupby.py#L103-L123
Thanks for taking this on @Dranaxel! I think this will really help people :)
|
2025-04-01T06:38:20.348912
| 2023-02-07T09:32:32
|
1573986398
|
{
"authors": [
"fjetter",
"gjoseph92"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5134",
"repo": "dask/dask",
"url": "https://github.com/dask/dask/pull/9925"
}
|
gharchive/pull-request
|
Do not ravel array when slicing with same shape
I'm not very familiar with the array API so I might be missing something obvious. However, I encountered a troubling complex graph when doing an operation like array[array > 100]
import dask.array as da
arr = da.random.random((200,200,200), chunks=(20, 20, 20))
arr[arr > 1].dask
Under the hood, this ravels the arrays before indexing. Raveling is effectively a rechunking operation, which is relatively expensive - particularly for an operation that should be elemwise.
I'm wondering if I'm missing anything here or why there is a reason for this complexity
Three test cases failed. I assume I'm missing something. I'd be happy to be educated about the topic more. Maybe there is a way to get this working with minor modifications
I ran into this a bit ago, and I think I tried something like what you're doing here, but also found it didn't work.
I vaguely recall it had to do with the order of the elements not matching with NumPy if you just do it blockwise on multidimensional arrays. Kind of like this warning mentions: https://github.com/dask/dask/blob/834a19eaeb6a5d756ca4ea90b56ca9ac943cb051/dask/array/slicing.py#L1149-L1152
Because x[x > 100] produces a 1D array when x is N-D, if you do the operation elemwise, each chunk will be flattened. But if you just concatenate all those 1D arrays, the elements will not be overall row-major order like you'd get from NumPy. If the chunks are squares, say, chunk 0 will contain elements from multiple rows, then chunk 1 will contain elements from multiple rows. You'd expect all the elements from row 0 to come before elements in row 1.
So it kind of makes sense that rechunking is involved, since there isn't a 1:1 mapping between chunks in the input and chunks in the output.
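A tiny NumPy sketch of that ordering problem (the 2x2 chunking here is made up for illustration): naively masking each chunk and concatenating yields the right elements in the wrong order.

```python
import numpy as np

x = np.arange(16).reshape(4, 4)
mask = np.isin(x, [2, 4])       # 2 sits at (0, 2), 4 sits at (1, 0)

expected = x[mask]              # global row-major order: [2, 4]

# Naive "blockwise" masking: mask each 2x2 chunk, then concatenate the
# flattened pieces in chunk order (top-left, top-right, bottom-left, ...).
corners = [(0, 0), (0, 2), (2, 0), (2, 2)]
blockwise = np.concatenate(
    [x[i:i + 2, j:j + 2][mask[i:i + 2, j:j + 2]] for i, j in corners]
)                               # [4, 2] -- same elements, wrong order
```

Because 4 lives in the first chunk but in a later row than 2, the blockwise result swaps them, which is why a rechunk-like step is needed to restore global row-major order.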
|
2025-04-01T06:38:20.408336
| 2019-02-22T05:15:12
|
413246991
|
{
"authors": [
"choldgraf",
"franasa",
"jkuruzovich",
"vipasu",
"zednis"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5135",
"repo": "data-8/gofer_service",
"url": "https://github.com/data-8/gofer_service/issues/2"
}
|
gharchive/issue
|
Implementing Gofer suite of components as a third-party
Hi all,
I am a graduate student at RPI who is evaluating the gofer suite of auto-grading components as a possible architecture for implementing grading of Jupyter notebooks within courses at RPI.
I see this is a new project that hasn't been matured for use by third parties and I wanted to reach out to see if you can help me get this working as a third-party and in return I am willing to help mature the project (documentation, refactoring for generalization, etc.)
In looking through the code I have been able to update the gofer_nb.py script to work on my system, but I hit a wall when trying to figure out how to construct the docker container invoked by grade_lab.py.
Is there any existing documentation on how this docker image is created and how it can be customized for different courses and or labs?
Thanks,
Stephan
@yuvipanda
you probably want to ask @vipasu, who has been doing most of the work on the gofer grader + service!
@zednis I created a sample directory with a sample Dockerfile and 3 notebooks with various levels of correctness. Check it out and maybe it will be able to clarify some things. The binder directory has the docker file. https://github.com/RPI-DATA/submitty
Hi! Sorry for joining the party late, here. We actually have an public dockerfile here: https://github.com/data-8/materials-x18/blob/master/Dockerfile
As you can see, it's quite minimal. Apart from listing your packages, there is also a line that copies the tests/etc. for the course (contained in the repo) into the docker image. Because we have all of the assignments in the directory, we only need a single image rather than one per assignment (though this would also be a reasonable approach). Building currently happens manually since it shouldn't need to be rebuilt, but if assignments are changing frequently, then it might be worth automating the rebuild procedure.
Let me know if this helps and if you have additional questions!
Hi @zednis, hi everyone,
Did you get to implement the service for your courses at RPI?
We at Leuphana are trying to achieve something similar. I found gofer_service and gofer_submit and thought they sounded perfect for integrating into our JupyterHub deployment.
... and in return I am willing to help mature the project (documentation, refactoring for generalization, etc.)
I would also be happy to contribute in this regard if there are intentions to further develop this extension
|
2025-04-01T06:38:20.429862
| 2021-08-28T18:45:06
|
981902795
|
{
"authors": [
"Lenivaya",
"ajthinking"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5136",
"repo": "data-story-org/core",
"url": "https://github.com/data-story-org/core/pull/74"
}
|
gharchive/pull-request
|
Downloader nodes
Updates
Downloader nodes
It’s now possible to create nodes which will download resulting data in the browser.
Implementation details
DownloaderNode
Simple class that extends default Node type adding one extra property — downloadData, which is supposed to be filled by Node on run execution.
DownloadData
Class that holds:
data
mimeType of that data
fileName
fileExtension
This class has a pretty generic download method that can be reused across most common cases, so all the work required for each specific downloader node is to specify the right data, mimeType and fileName; everything about the downloading is handled by DownloadData.
The class also supports generics, so it's possible to specify the type of the data downloaded; for now it is just any
this.downloadData = new DownloadData<any>({
data: [],
mimeType: 'application/json',
fileName: fileName,
fileExtension: 'json',
});
Diagram run
To check what nodes are supposed to run with Diagram run method, we are simply doing a couple of checks
if (
// Check of whether node is downloader
node instanceof DownloaderNode &&
// Check of whether code runs in browser
// environment
(isBrowser || isJsDom)
) {
await node.downloadData.download();
}
DownloadJSON node
Can be run with dot-notated paths, so if we for example specify title as an attribute to download, we will get only those attributes from all features. The filename will be in this format:
[node_name] [date_downloaded].json
Example of usage
Things need to be added/fixed/considered
Tests
Find a good way of testing downloader nodes. Possibly it might be testing of right DownloadData creation for the nodes in the core, and e2e tests for the download functionality in the gui
Downloading in headless mode
For now the downloading functionality works only when the code runs in a browser environment; it may be a good feature to add separate downloading cases for both browser and node environments, so that in a node environment download saves data to some cross-platform data folder like dataDir/data-story/filename_date.json
Possibly, method DownloadData.download can be implemented in gui as a callback or similar instead since it is so heavily involved with browser (document, createElement etc) ? I see we have a guard clause to see if it is running in browser environment, but anyways might be a good separation to make
That's how I started implementing it at first, but that approach will require extra actions looping through the node list and applying callback on the gui side, so I decided to do all things in one place with just environment checking.
gui/"file-saver": "^2.0.5", - this can be removed. Nice to implement this without a package
Yes, current solution works pretty nice, though it may be a good idea to take a look on StreamSaver, which can handle bigger files creation asynchronously using streams directly to the file-system
Add pretty print option on json
I think that this must be an option in the form of a select parameter in the DownloadJSON node, so the user can choose whether they want to format the downloaded data or not.
When handling dot notation, we can make use of Obj.get helper
Yeah, it handles just what I've done manually, feels like nice possibility to decrease code verbosity
Might be another reason to move the actual downloading part to gui?
The problem here is by moving download function to the gui we'll still have to create the same testing workflow as with download isolated at the DataStory class in the core. So still e2e tests of downloading functionality in gui and tests of right data creation in core.
Updates
It's now possible to specify multiple attributes which then will be downloaded, so for example if we had such a config for a DownloadJSON node
We would get all attributes we specified downloaded
That's how I started implementing it at first, but that approach will require extra actions looping through the node list and applying callback on the gui side, so I decided to do all things in one place with just environment checking.
Ok that makes sense, lets keep it in core 👍
|
2025-04-01T06:38:20.454377
| 2016-01-28T10:27:29
|
129410044
|
{
"authors": [
"JoshRosen",
"codecov-io",
"emlyn"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5137",
"repo": "databricks/spark-redshift",
"url": "https://github.com/databricks/spark-redshift/pull/165"
}
|
gharchive/pull-request
|
Add CSV option
We've got to the stage where our jobs are big enough that Redshift loads are becoming a major bottleneck due to Redshift's AVRO performance problems. We're hoping Amazon will fix them, but in the meantime we've started running with a modified spark-redshift that allows us to import via CSV (Redshift COPYs are running at least 5 times faster with this for us).
It's probably not a good idea to merge this in at the moment - I'm not sure how robust it is to different data types / unusual characters in strings etc., and it would at least need some tests and documentation. And hopefully Amazon will soon fix AVRO import making it unnecessary anyway. But I thought I'd share my changes in case they are useful to anyone else.
Current coverage is 75.07%
Merging #165 into master will decrease coverage by -13.99% as of ed40281
@@ master #165 diff @@
======================================
Files 13 13
Stmts 649 662 +13
Branches 144 146 +2
Methods 0 0
======================================
- Hit 578 497 -81
Partial 0 0
- Missed 71 165 +94
Review entire Coverage Diff as of ed40281
Powered by Codecov. Updated on successful CI builds.
Thanks for sharing. I'm glad to see that this was a relatively small change.
I agree that it's probably best to wait and see if Amazon speeds up Avro ingest; let's wait a couple of months and re-assess this feature later if there's significant interest / demand.
|
2025-04-01T06:38:20.510863
| 2017-08-18T08:23:04
|
251174036
|
{
"authors": [
"Rodgelius",
"alexius2"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5138",
"repo": "dataegret/pgcompacttable",
"url": "https://github.com/dataegret/pgcompacttable/issues/13"
}
|
gharchive/issue
|
SQL Error: ОШИБКА: нет прав для изменения параметра "session_replication_role"
Hello, Maxim!
There is a problem (in the subject) that I can't understand.
Also I can't find the default mechanism for authentication without a password (under the 'postgres' user)
Here is listing:
root@someserver:~# perl pgcompacttable.pl -U someuser -W somepass -d somedb -t history -v
[Fri Aug 18 09:39:02 2017] (somedb) Connecting to database
[Fri Aug 18 09:39:02 2017] (somedb) Postgress backend pid: 8163
Wide character in print at pgcompacttable.pl line 187.
[Fri Aug 18 09:39:02 2017] (somedb) SQL Error: ОШИБКА: нет прав для изменения параметра "session_replication_role" (ERROR: permission denied to set parameter "session_replication_role")
[Fri Aug 18 09:39:02 2017] (somedb) Database handling interrupt.
[Fri Aug 18 09:39:02 2017] (somedb) Disconnecting from database
[Fri Aug 18 09:39:02 2017] Processing incomplete: 1 databases left.
Best regards,
Vladimir
There is a method to authenticate without a password using -h /path/to/unix/socket/dir (usually /tmp or something like /var/run/postgresql) under the postgres user. Not very convenient, I agree.
For changing session_replication_role you have to be a superuser - this setting is used to disable all triggers in the session, so the DB won't have to do additional (and potentially dangerous) work while pgcompacttable is performing fake updates.
Please add the following to your code, to prevent the 'Wide character in print at' error message.
#!/usr/bin/perl
use strict;
use utf8;
binmode(STDOUT,':utf8');
Alexius2,
Please explain what you mean here:
For changing session_replication_role you have to be superuser
Because I run my 'perl pgcompacttable.pl.....' as root. :)
My first message was "....root@someserver:~#...."
The default superuser in the DB is postgres, so to connect as superuser you need to switch to the postgres OS user and connect via the unix socket, or set a password for the postgres user and connect with -U postgres, or set the authentication method to trust in pg_hba if it's a local/test machine.
Dear Alexius2,
Thanks a lot, it works for me!
IMHO, this hint should be written in --man (about the -h parameter and socket path).
Also, I needed to enable pgstattuple, and it was the first time I had come across it, so please, author, if you're reading this - add some more info about how to use it.
Something like this:
If you're not sure whether pgstattuple is installed, do this:
su - postgres
psql
\c
create extension pgstattuple;
Best regards!
Vladimir
|
2025-04-01T06:38:20.516131
| 2024-09-26T17:32:02
|
2551170192
|
{
"authors": [
"dataesri"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5139",
"repo": "dataesr/openalex-affiliations",
"url": "https://github.com/dataesr/openalex-affiliations/issues/4688"
}
|
gharchive/issue
|
Correction for raw affiliation Canadian Rivers Institute, Fredericton, NB, Canada, E3B 5A3; School of Environment and Sustainability, University of Saskatchewan, Saskatoon, SK, Canada, S7N 5C8
Correction needed for raw affiliation Canadian Rivers Institute, Fredericton, NB, Canada, E3B 5A3; School of Environment and Sustainability, University of Saskatchewan, Saskatoon, SK, Canada, S7N 5C8
raw_affiliation_name: Canadian Rivers Institute, Fredericton, NB, Canada, E3B 5A3; School of Environment and Sustainability, University of Saskatchewan, Saskatoon, SK, Canada, S7N 5C8
new_rors: 010x8gc63;05nkf0n29
previous_rors: 010x8gc63
works_examples: W3035374088
contact: 96f5c8d7bcc1169187bc3130133af506:08c5533f @ ourresearch.org
This issue was accepted and ingested by the OpenAlex team on 2024-10-10. The new affiliations should be visible within the next 7 days.
|
2025-04-01T06:38:20.549711
| 2024-08-23T09:31:46
|
2482729728
|
{
"authors": [
"backkem",
"hozan23"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:5140",
"repo": "datafusion-contrib/datafusion-federation",
"url": "https://github.com/datafusion-contrib/datafusion-federation/issues/46"
}
|
gharchive/issue
|
Migrate SQLFederationOptimizerRule to OptimizerRule::rewrite
DataFusion 40 changed the OptimizerRule format, ref apache/datafusion#9954. We'll need to migrate over.
We did the migration in this PR #64
|