Dataset columns:
id: string, 4 to 10 chars
text: string, 4 chars to 2.14M chars
source: string, 2 classes
created: timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30
added: string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06
metadata: dict
940666592
Not installing styled-components While resolving: csnative@0.0.1 npm ERR! Found: react@16.13.1 npm ERR! node_modules/react npm ERR! react@"16.13.1" from the root project npm ERR! peer react@">= 16.8.0" from styled-components@5.3.0 npm ERR! node_modules/styled-components npm ERR! styled-components@"" from the root project npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer react@"17.0.2" from react-dom@17.0.2 npm ERR! node_modules/react-dom npm ERR! peer react-dom@">= 16.8.0" from styled-components@5.3.0 npm ERR! node_modules/styled-components npm ERR! styled-components@"" from the root project Version Details "react-native": "0.64.1", "react": "17.0.1", Something about this error message seems mangled and incomplete. I'm not quite sure what's going on, but if you still need help, consider going to Stack Overflow or describing what exactly you're doing in terms of the commands you execute, the package.json entries you have, and so on. For instance, I've never seen csnative, so if that's your package then something is definitely going wrong there.
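For readers hitting the same conflict: the log shows react-dom@17.0.2 requiring exactly react@17.0.2 while react@16.13.1 is installed. One hedged sketch of a fix, assuming react-dom is not actually needed in a React Native project, is to drop react-dom and keep react at the version react-native 0.64.1 expects; alternatively, npm 7+ accepts `npm install --legacy-peer-deps` to relax strict peer resolution. The package.json below is illustrative only:

```json
{
  "dependencies": {
    "react": "17.0.1",
    "react-native": "0.64.1",
    "styled-components": "^5.3.0"
  }
}
```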
gharchive/issue
2021-07-09T11:12:30
2025-04-01T06:45:53.811156
{ "authors": [ "kitten", "naxir" ], "repo": "styled-components/styled-components", "url": "https://github.com/styled-components/styled-components/issues/3530", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1590525605
libsvtav1 fast-decode=1 not supported? Also getting the Svt[warn]: Failed to set thread priority error. [libsvtav1 @ 0x55b234b77440] Error parsing option fast-decode: 3. Svt[warn]: Failed to set thread priority like 100 times. f"vspipe -c y4m inference_batch.py --arg source={f} - | ffmpeg -i pipe: -c:v libsvtav1 -preset 7 -crf 35 -svtav1-params fast-decode=1 {mux_path}" ) If you don't specify any svt-av1 params, does it work? Another error: it just randomly stops outputting. I guess I'll just avoid this for now and use something else. This is the error it stops with after a few minutes: Error: fwrite() call failed when writing frame: 1841, plane: 0, errno: 32 Output 1850 frames in 451.75 seconds (4.10 fps) ffmpeg -i test.mp4 -c:v libsvtav1 -preset 7 -crf 35 -svtav1-params fast-decode=1 test_out.mp4 does render a video.
Svt[info]: ------------------------------------------- Svt[info]: SVT [version]: SVT-AV1 Encoder Lib v1.4.1-64-gd5e1e2a5-dirty Svt[info]: SVT [build] : GCC 12.2.1 20230111 64 bit Svt[info]: LIB Build date: Feb 4 2023 00:54:10 Svt[info]: ------------------------------------------- Svt[info]: Number of logical cores available: 16 Svt[info]: Number of PPCS 136 Svt[info]: [asm level on system : up to avx2] Svt[info]: [asm level selected : up to avx2] Svt[info]: ------------------------------------------- Svt[info]: SVT [config]: main profile tier (auto) level (auto) Svt[info]: SVT [config]: width / height / fps numerator / fps denominator : 1920 / 1080 / 24 / 1 Svt[info]: SVT [config]: bit-depth / color format : 8 / YUV420 Svt[info]: SVT [config]: preset / tune / pred struct : 7 / PSNR / random access Svt[info]: SVT [config]: gop size / mini-gop size / key-frame type : 161 / 32 / key frame Svt[info]: SVT [config]: BRC mode / rate factor : CRF / 35 Svt[info]: ------------------------------------------- Svt[warn]: Failed to set thread priority Svt[warn]: Failed to set thread priority ... Seems like the newly compiled svt now spams Failed to set thread priority, but it produces a video. The random stopping is most likely related to hardware usage, like running out of RAM or a full drive. I hope just recompiling svt will solve that and won't write Svt[warn]: Failed to set thread priority.
the random stopping has happened for me on longer videos, I have plenty of space and RAM: 32GB RAM, i9 9900K, 2070S. Either way I will be sticking with x265 once the 10-bit gets fixed. Seems weird it would just stop. Got randomly exited on the new version, not sure what is causing it tbh, is it the encode or the upscale.
CPU is at like 70-90%, it's not even capping; GPU is maxed, but it still exited with no errors shown, just Error: fwrite() call failed when writing frame: 3389, plane: 1, errno: 32 Output 3398 frames in 717.94 seconds (4.73 fps) Fixed the Failed to set thread priority warning in https://github.com/styler00dollar/VSGAN-tensorrt-docker/commit/4bf410d02d74a2d88a848ffc0c9794d3e1d42f13. Dockerhub image will be updated soon. Closing.
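One clue worth noting: errno 32 in the fwrite() failure is EPIPE ("Broken pipe") on Linux, suggesting the downstream process reading the pipe (ffmpeg) had already exited while vspipe was still writing frames. A quick check:

```python
import errno
import os

# On Linux, errno 32 is EPIPE: the reader end of a pipe was closed
# while the writer was still sending data.
print(errno.EPIPE)  # 32
print(os.strerror(errno.EPIPE))
```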
gharchive/issue
2023-02-19T01:40:02
2025-04-01T06:45:53.860906
{ "authors": [ "BestRestUp3300", "banjaminicc", "styler00dollar" ], "repo": "styler00dollar/VSGAN-tensorrt-docker", "url": "https://github.com/styler00dollar/VSGAN-tensorrt-docker/issues/21", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2425793883
SDK-1843 update sms endpoints to accept autofill parameter Linear Ticket: SDK-1843 Changes: Adds an enableAutofill property to SMS parameters. The UI already marks code entry as one time code, so this should be all we need to do to enable SMS autofill for iOS (pending the backend metadata work) Checklist: [ ] I have verified that this change works in the relevant demo app, or N/A [ ] I have added or updated any tests relevant to this change, or N/A [ ] I have updated any relevant README files for this change, or N/A Changing base branch to the SDK migration branch since this depends on that!
gharchive/pull-request
2024-07-23T17:58:28
2025-04-01T06:45:53.863897
{ "authors": [ "jhaven-stytch" ], "repo": "stytchauth/stytch-ios", "url": "https://github.com/stytchauth/stytch-ios/pull/272", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1232468372
Ordering of items in the calendar When there are multiple states/events in a single day in the calendar view on the homepage, they should be ordered like this:
1. Events (so they always come first)
2. Upcoming states (this only applies to the current day; upcoming states should be at the top)
3. (Past) states
Items in each of these 'categories' should then be ordered chronologically. Fixed in #124.
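The ordering rule above can be sketched as a sort key; the record shape and category names here are hypothetical, not taken from the project:

```python
from datetime import datetime

# Hypothetical item records: (category, start_time, title).
# Rank 0: events, rank 1: upcoming states, rank 2: past states;
# within each rank, items sort chronologically.
def sort_key(item, now):
    category, start, _title = item
    if category == "event":
        rank = 0
    elif start >= now:
        rank = 1  # upcoming state (only meaningful for the current day)
    else:
        rank = 2  # past state
    return (rank, start)

now = datetime(2022, 5, 11, 12, 0)
items = [
    ("state", datetime(2022, 5, 11, 9, 0), "past state"),
    ("event", datetime(2022, 5, 11, 18, 0), "evening event"),
    ("state", datetime(2022, 5, 11, 15, 0), "upcoming state"),
]
ordered = sorted(items, key=lambda it: sort_key(it, now))
print([it[2] for it in ordered])
```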
gharchive/issue
2022-05-11T11:22:41
2025-04-01T06:45:53.865749
{ "authors": [ "ondryaso" ], "repo": "su-fit-vut/kachna-online", "url": "https://github.com/su-fit-vut/kachna-online/issues/120", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
27126958
Installer doesn't work in some cases Installation of DB components fails if the user has manipulated the oxarticle table. If you can't buy anything when the module is activated, you have to check whether you see a new product with artno. "elefunds-donation". If not, you can manually fix this by adding your fields to the SQL query in setup/sql/oxarticleslfndsdonationarticle.sql and importing it via the OXID Backend Service > Tools or your phpMyAdmin. The creation of the donation article will no longer get you into trouble. We now create them via the OXID API.
gharchive/issue
2014-02-07T10:54:18
2025-04-01T06:45:53.869325
{ "authors": [ "suabo" ], "repo": "suabo/elefunds-OXID", "url": "https://github.com/suabo/elefunds-OXID/issues/6", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2344293517
Added content on recursion Added more about recursion, provided quality content with sufficient examples that everyone can understand easily. @subhadipbhowmik review this, I have added everything about recursion. @subhadipbhowmik check the updated content, I have improved the structure and arrangement of the content.
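Since the chapter walks through recursion, here is a minimal illustration of the base-case/recursive-case structure such a chapter typically opens with (shown in Python for brevity; the repository's own examples are in C++):

```python
def factorial(n: int) -> int:
    # Base case stops the recursion; the recursive case shrinks the problem.
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120
```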
gharchive/pull-request
2024-06-10T16:00:20
2025-04-01T06:45:53.874214
{ "authors": [ "Sahil-Vaidya" ], "repo": "subhadipbhowmik/30-Days-Of-CPP", "url": "https://github.com/subhadipbhowmik/30-Days-Of-CPP/pull/262", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
226745944
Fullscreen methods for macOS added new methods for macOS: defos.toggle_fullscreen() defos.isFullScreen() Should probably be defos.is_fullscreen()? Yes, you are right. I will change it
gharchive/pull-request
2017-05-06T07:05:59
2025-04-01T06:45:53.954840
{ "authors": [ "AGulev", "subsoap" ], "repo": "subsoap/defos", "url": "https://github.com/subsoap/defos/pull/11", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
2062317426
Add DID Onchain Description This PR adds DID-related functionalities on-chain into the backend folder - sdk: Add Auto ID on-chain via the DID Registry contract deployed on Nova; check if an Auto ID is verified on-chain. Added corresponding examples as well. Also modified some functions, reducing their arguments by moving values into the constants.ts file. Merging after waiting too long. All looks OK, confirmed offline.
gharchive/pull-request
2024-01-02T11:56:37
2025-04-01T06:45:53.956649
{ "authors": [ "abhi3700" ], "repo": "subspace/auto-mobile", "url": "https://github.com/subspace/auto-mobile/pull/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
267138928
Invalid Journal - Errors Dear Team, First of all, thanks very much for the beautiful tool. I recently downloaded it using npm. When I am trying to process a valid Journal XML, the editor is unable to parse it. Instance 1: <fig id="F1" position="float" orientation="portrait"> <label>Figure 1</label> <caption> <title>Patients’ pre-operative concerns</title> </caption> <graphic xlink:href="cnp_v11_n3_9036_0001.jpg" position="float" orientation="portrait"/> </fig> The above structure is JATS valid; however, the structure defined in the skeleton XML (texture source code) says label is not valid and also that a <p> tag is necessary inside the title element. Instance 2: In the contributors section (affiliation) and reference section, string-name is not supported, and when I replace it with name then it works. Again, string-name is valid in both places as per NLM JATS standards. <string-name name-style="western"> <surname>...</surname> <given-names>...</given-names> </string-name> Instance 3: mixed-citation is not working and the code supports only element-citation for references. Also, how easily can this code be made to support BITS (DTD for books)? Thanks in advance, Praveen Hey @praveennair! Thanks for reporting and sorry for the late reply. With Texture we want to follow a stricter model for more consistent tagging (see JATS4R group and Texture issues tagged "requirements"). Another reason is that we can't provide visual user interfaces for any possible JATS tagging style. Instance 1: We always want to have a p element so there's a placeholder for editing. However, I think it makes sense to add a transformer to expand it on import and drop empty p's on export. Instance 2: We are working on that. However we will not support string-name and allow only surname, given-names tags. See: https://github.com/substance/texture/issues/239 Instance 3: Also here we will go for structured references and only support <element-citation> in Texture. See: #238 Dear @michael, Thanks for your reply.
We are currently using a CKEditor-based implementation for editing and managing XML content. We mostly process books using BITS.dtd. I am very much impressed by the work you guys did here and am planning to create a solution for Books as well by doing some tweaks in the Texture code base. Hope this is fine. Thanks, Praveen @praveennair sure! Please keep me updated on your work, maybe there's potential to make Texture support BITS out of the box in the future. I think we can close this. @praveennair please verify the latest spec and create an issue if a requirement isn't met. JATS4M Spec: https://github.com/substance/dar/blob/master/specs/JATS4M.md
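The import transformer idea from the reply (expand a bare <title> into one containing a <p> placeholder on import) can be sketched like this; the snippet is illustrative, not Texture's actual code:

```python
import xml.etree.ElementTree as ET

snippet = "<caption><title>Patients pre-operative concerns</title></caption>"
root = ET.fromstring(snippet)

# On import: if the title has no <p>, wrap its text in one so the
# editor always has a paragraph placeholder to edit.
title = root.find("title")
if title is not None and title.find("p") is None:
    p = ET.SubElement(title, "p")
    p.text = title.text or ""
    title.text = None

result = ET.tostring(root, encoding="unicode")
print(result)
```

The inverse step on export would simply drop any `<p>` elements that are still empty.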
gharchive/issue
2017-10-20T10:57:07
2025-04-01T06:45:53.986644
{ "authors": [ "michael", "praveennair" ], "repo": "substance/texture", "url": "https://github.com/substance/texture/issues/280", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1078414227
[Tutorial: Start a private network] Missing option --scheme in step to add the aura and grandpa secret key Content request The code to add the aura secret key in the Allow other participants to join section does not have the --scheme option. I get the error message that this option is required. So I had to add --scheme Sr25519 to the code to add the aura secret key and --scheme Ed25519 to the code to add the grandpa secret key. Original code:
./target/release/node-template key insert --base-path /tmp/node02 \
  --chain customSpecRaw.json \
  --suri <second-participant-secret-seed> \
  --password-interactive \
  --key-type aura
Fixed code:
./target/release/node-template key insert --base-path /tmp/node02 \
  --chain customSpecRaw.json \
  --scheme Sr25519 \
  --suri <second-participant-secret-seed> \
  --password-interactive \
  --key-type aura
Are you willing to help with this request? Yes! This issue has been fixed. For reference, see this issue: https://github.com/substrate-developer-hub/substrate-docs/issues/714
gharchive/issue
2021-12-13T11:28:26
2025-04-01T06:45:53.989862
{ "authors": [ "lisa-parity", "sanitu" ], "repo": "substrate-developer-hub/substrate-docs", "url": "https://github.com/substrate-developer-hub/substrate-docs/issues/685", "license": "0BSD", "license_type": "permissive", "license_source": "github-api" }
158640988
on subutai update management init process ~90% cpu load After executing subutai update management, the /sbin/init process CPU load increases to 90%~95%. The management update process runs for one hour or more, but sometimes such behavior is not observed: 1 of 23 peers updated in 10~15 minutes. OS bug
gharchive/issue
2016-06-06T09:55:56
2025-04-01T06:45:53.999978
{ "authors": [ "p1gmale0n" ], "repo": "subutai-io/base", "url": "https://github.com/subutai-io/base/issues/994", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
262044826
Getting "Signature verification failed" when sending signed file too often I did a manual test of this and found that gorjun starts to authorize normally approximately after one minute of idle. Figured out that this is a cache problem @crioto should work now Thank you, Timur. I will test and report Thanks, guys! Works perfectly! @crioto welcome
gharchive/issue
2017-10-02T10:39:53
2025-04-01T06:45:54.006628
{ "authors": [ "crioto", "tzununbekov" ], "repo": "subutai-io/gorjun", "url": "https://github.com/subutai-io/gorjun/issues/118", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
366275480
PeerOS not responding with the exact error message on unregister peer REST API CC uses the following PeerOS REST API for unregistering the peer. I have a peer registered to the bazaar. My peer has an environment on the bazaar. REST API /rest/v1/hub/unregister?sptoken= Current Behavior PeerOS does not respond with the exact error message if the peer has an environment on the bazaar. It just sends the response: Internal server error When I try to unregister via the PeerOS UI, it gives the exact error message. Expected Behavior PeerOS should respond with the exact error message. (This peer has environments on Bazaar. Please remove environments first to unregister peer.) The response body contains the error description Now PeerOS responds with the exact error message:
gharchive/issue
2018-10-03T10:39:12
2025-04-01T06:45:54.010489
{ "authors": [ "Dilshat", "jadilet" ], "repo": "subutai-io/peer-os", "url": "https://github.com/subutai-io/peer-os/issues/2673", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2549892054
Question: LLVM SP1 is claimed to be friendly for any LLVM-compiled language, so I wonder how I can use another language, like C++, to write an SP1 program. If I can't do that, it would be helpful if you told me how to adapt more LLVM-supported frontends (it may be hard, but it's helpful if you can share some insight) If it compiles to RISC-V, it should be possible. We have some syscalls we call at the end of the program to commit to the public values hash, and we use syscalls for precompiles and I/O.
gharchive/issue
2024-09-26T08:30:04
2025-04-01T06:45:54.011996
{ "authors": [ "UIZorrot", "ctian1" ], "repo": "succinctlabs/sp1", "url": "https://github.com/succinctlabs/sp1/issues/1545", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2555127211
Add Support for Spoiler Requesting the markdown for a spoiler box. This is best used for studying, homework, etc., when you don't want to see the answer right away. Discord uses the syntax: ||<text>|| Supernotes uses: !!<text>!! I personally like Discord's variant but I'm probably biased because Discord introduced me to spoilers. Also the pipes kinda look like curtains, which makes sense for hiding text :smile: https://github.com/element-hq/element-meta/issues/869#issuecomment-1353555407 could be useful. ||reason|spoiler|| seems like an interesting option as well, but I don't see many use cases for this in a note-taking application.
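For a sense of how such a syntax is typically handled, here is a rough sketch of a Discord-style spoiler transform; the regex and the output markup (the `spoiler` class name in particular) are illustrative assumptions, not the app's actual renderer:

```python
import re

# Replace ||hidden|| spans with a wrapper element the stylesheet can
# blur or black out until clicked.
def render_spoilers(text: str) -> str:
    return re.sub(r"\|\|(.+?)\|\|", r'<span class="spoiler">\1</span>', text)

print(render_spoilers("The answer is ||42||."))
```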
gharchive/issue
2024-09-29T19:52:17
2025-04-01T06:45:54.066919
{ "authors": [ "Docfips", "k8ieone" ], "repo": "suchnsuch/Tangent", "url": "https://github.com/suchnsuch/Tangent/issues/98", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
133862432
Persistent Resizing Save changes to the following attributes:
[x] Window size
[x] Whether the window is maximized
[ ] The size of each resizable column
Partially implemented in 676564fa1b3eefbf296ef3be5ee764d8410065e0.
gharchive/issue
2016-02-16T02:05:27
2025-04-01T06:45:54.076654
{ "authors": [ "sudiamanj" ], "repo": "sudiamanj/TuneJar", "url": "https://github.com/sudiamanj/TuneJar/issues/20", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2658369247
Support for 2FA I was wondering if there is support for 2FA in the current version? I put in my user I believe you should use App Passwords for these 3rd party tools, is there any reason not to use it? https://bsky.app/settings/app-passwords App passwords should work great; perhaps it should be documented somewhere that App Passwords are the intended way to log on?
gharchive/issue
2024-11-14T10:37:49
2025-04-01T06:45:54.118405
{ "authors": [ "Gamer-Kold", "kaning", "sugyan" ], "repo": "sugyan/tuisky", "url": "https://github.com/sugyan/tuisky/issues/39", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2454077668
Update versioned URL pattern when linking to previous versions and formatting citations The version URL pattern should be {druid}/version/{N}. The PURL link inserted into the autogenerated citation needs to have the updated pattern, in addition to the links to all PURL pages on the item show page.
gharchive/issue
2024-08-07T18:28:13
2025-04-01T06:45:54.144364
{ "authors": [ "aaron-collier", "amyehodge" ], "repo": "sul-dlss/happy-heron", "url": "https://github.com/sul-dlss/happy-heron/issues/3594", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
816174852
state differentiation to allow update_metadata while purl res background/async work happens Why was this change made? Addresses this issue discovered while trying to reproduce #1191. How was this change tested? Existing test suite for regressions. Will also deploy to stage in the morning to test whether it resolves the problem encountered there. Will also add unit tests to cover new state transitions. Which documentation and/or configurations were updated? The state machine diagram. Superseded by #1223 and its yet-to-be-implemented companion PR (the latter will more directly address what this PR does, by preventing editing of the work before the purl_reserved state is hit, saving us the added complexity of all the new state transitions of this PR).
gharchive/pull-request
2021-02-25T07:38:00
2025-04-01T06:45:54.146585
{ "authors": [ "jmartin-sul" ], "repo": "sul-dlss/happy-heron", "url": "https://github.com/sul-dlss/happy-heron/pull/1202", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2334416280
skip transmission for transmit_data_oclc_api_task when env is not production Blocked by #988 For the transmit_data_oclc_api_task, we're not calling the return_success_test_instance when the env is not production. https://github.com/sul-dlss/libsys-airflow/blob/8760131926c03da19ac19e6036819e8a550209f5/libsys_airflow/plugins/data_exports/transmission_tasks.py#L164-L183 We should add that once we have completed #988 Fixed by https://github.com/sul-dlss/libsys-airflow/pull/1380
gharchive/issue
2024-06-04T21:39:23
2025-04-01T06:45:54.148772
{ "authors": [ "jgreben", "shelleydoljack" ], "repo": "sul-dlss/libsys-airflow", "url": "https://github.com/sul-dlss/libsys-airflow/issues/1054", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1629962460
Removes modifying ITEM_TYPE. Closes #240 Maybe this whole test should be removed? https://github.com/sul-dlss/libsys-airflow/blob/bab5b5426bcd8205d2d46cd920b18152c2381805/plugins/tests/helpers/test_tsv.py#L106 I think we still want to test the BARCODE and CATKEY transforms.
gharchive/pull-request
2023-03-17T21:53:08
2025-04-01T06:45:54.151208
{ "authors": [ "jermnelson", "shelleydoljack" ], "repo": "sul-dlss/libsys-airflow", "url": "https://github.com/sul-dlss/libsys-airflow/pull/242", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
72445790
[FEATURE] Add clear cache command for the local image cache Simply renames the cache dir, deletes it as a whole and creates a new one. The command name is sulu:media:cache:clear Tasks: [ ] Add entry to CHANGELOG.md Information: Q A Tests pass? n/a Fixed tickets #808 BC Breaks none Doc none License MIT @alexander-schranz could you please take a look at this? @patkar @wachterjohannes I think every FormatCache should implement a clear method, so we should add the clear function to the FormatCacheInterface.php and implement this logic in the LocalFormatCache. I also would prefer to rename the command to sulu:media:format:cache:clear. When I implement FastProxy this command can also have an option for which format cache I want to clear (local or proxy). Ok, didn't know there was a dedicated `FormatCacheInterface`. Didn't search as well, 'cause you said cleaning the folder would be enough ;) @patkar yeah that's right (= but I also think that @alexander-schranz is right, we should move that into the "service"... Yes, in the service it's definitely better. First I thought about it, 'cause the symfony clear cache has such services too. yeah but it should not be coupled with the symfony cache anyway! so it's good to place it in the service! yeah but it should not be coupled with the symfony cache anyway! Of course! @alexander-schranz ping @patkar @alexander-schranz Is this PR dead? Eh sorry. So much work aside, forgot a little bit about it. Glad that you pinged me. What are the open TODOs? Only the interface? The interface has to be added for sure, I don't know what else @alexander-schranz wants. It's dead and alive xD There were some open questions in the code review from me. Yeah, wait for @alexander-schranz's answers, but it might take some time, because he is on holiday :smile: I hope I can finish it this week. @patkar @danrot ready to merge? Need rebase and feedback @danrot @chirimoya rebased. Didn't see the removal of the feedback label and thought it was missing feedback.
Such a long-running PR. Awkward :blush:
gharchive/pull-request
2015-05-01T14:41:17
2025-04-01T06:45:54.183185
{ "authors": [ "alexander-schranz", "chirimoya", "danrot", "patkar", "wachterjohannes" ], "repo": "sulu-io/sulu", "url": "https://github.com/sulu-io/sulu/pull/1052", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1157714241
🛑 SISTEMA ROYAL-HOLIDAY is down In f5db06e, SISTEMA ROYAL-HOLIDAY (https://www.royalholiday.com.ar/reservas/admin/) was down: HTTP code: 0 Response time: 0 ms Resolved: SISTEMA ROYAL-HOLIDAY is back up in bcda692.
gharchive/issue
2022-03-02T21:30:51
2025-04-01T06:45:54.195983
{ "authors": [ "sumito74" ], "repo": "sumito74/upptime", "url": "https://github.com/sumito74/upptime/issues/109", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2058219597
Performance Improvements on user opening The user opening currently takes five times longer than the grand sum opening and I assume this is due to the exponentiation necessary to obtain the user X coordinate. I think it's feasible to maintain a dictionary of pre-calculated powers of $\omega$ against an index and move this calculation outside of the benchmark. This is a new feature so I'll create a separate issue for that if you agree. Originally posted by @alxkzmn in https://github.com/summa-dev/summa-solvency/pull/232#pullrequestreview-1790325891 As we discussed today, an alternative to this approach is just to generate all the opening proofs at once. In the loop generating individual user proofs, the power of gamma is calculated by multiplying the previous power of gamma by gamma. @alxkzmn Do you think this issue is covered by the amortized KZG? Yes, this was solved.
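The pre-calculation idea in both suggestions is the same: instead of a fresh exponentiation per user, multiply the previous power by the base once per loop iteration. A toy sketch over a small prime field (the real code would use the proof system's scalar field, not these example values):

```python
p = 97      # toy prime modulus (assumption for illustration)
omega = 5   # toy base standing in for the root of unity

# Iterative powers: one field multiplication per user instead of
# one exponentiation per user.
powers = []
acc = 1
for _ in range(8):
    powers.append(acc)
    acc = acc * omega % p

# Matches the naive per-user exponentiation.
assert powers == [pow(omega, i, p) for i in range(8)]
print(powers[:4])  # [1, 5, 25, 28]
```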
gharchive/issue
2023-12-28T09:44:06
2025-04-01T06:45:54.198711
{ "authors": [ "alxkzmn", "enricobottazzi", "sifnoc" ], "repo": "summa-dev/summa-solvency", "url": "https://github.com/summa-dev/summa-solvency/issues/243", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1943735210
Write space and time complexity chapter This MR aims to solve issue https://github.com/sumn2u/learn-javascript/issues/112 Considering space and time complexity is a significant aspect of programming that ensures code efficiency, scalability, and maintainability. Having a brief introduction to these concepts, followed by examples, can greatly benefit JavaScript developers. @sumn2u Can you have a look at whether I am on the right path or not? Looking great. If possible can you describe O(log n) and O(n^2) too? Roger that! I will add code snippets for that. Been busy with some work. I will ping you with the updated changes Starting with the work from today. Thanks for being patient with me @sumn2u Hey @starvader13, today is the last day of Hacktoberfest. Do you think you can create a pull request by the end of the day? Should I divide it into two PRs, if possible? You can accept this change for now and the rest I can change in the next PR. Or if this is not possible we can move forward with this without considering it for Hacktoberfest and I will push the changes ASAP. If it's going to be easier for you to split, I suggest splitting. If not, it's ok to have it in a single PR. @starvader13 You can create another PR with the remaining contents.
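As a taste of the O(n) vs O(log n) comparison such a chapter covers, here is a small sketch (in Python for brevity; the book's own examples are in JavaScript): the linear scan does up to n comparisons, while binary search halves the range each step.

```python
def linear_search(xs, target):
    # O(n): may inspect every element.
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    # O(log n): requires xs sorted; halves the search range each step.
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

xs = list(range(0, 100, 2))  # sorted even numbers 0..98
print(linear_search(xs, 42), binary_search(xs, 42))  # 21 21
```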
gharchive/pull-request
2023-10-15T05:58:39
2025-04-01T06:45:54.240861
{ "authors": [ "starvader13", "sumn2u" ], "repo": "sumn2u/learn-javascript", "url": "https://github.com/sumn2u/learn-javascript/pull/114", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
825838685
Modifiy built-in libraries see https://github.com/sumneko/lua-language-server/issues/409#issuecomment-791601248 Done
gharchive/issue
2021-03-09T12:19:39
2025-04-01T06:45:54.242392
{ "authors": [ "sumneko" ], "repo": "sumneko/lua-language-server", "url": "https://github.com/sumneko/lua-language-server/issues/448", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1057389340
ngx.lua is too big with Sumneko 2.4.9 Describe the bug When opening Visual Studio Code, it prompts this error message: When looking at the file in the directory Is the maximum size allowed customizable? Provide logs Find the log at https://github.com/sumneko/lua-language-server/wiki/Default-log-path Find setting Lua.workspace.preloadFileSize This should be a bug, the inner meta files should ignore the size limit. I cannot reproduce this, please provide your log. Should be fixed.
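The setting named in the thread can be raised in VS Code's settings.json while waiting for a fix; the value below is only an example (check the extension's documentation for the unit and default):

```json
{
  "Lua.workspace.preloadFileSize": 5000
}
```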
gharchive/issue
2021-11-18T13:58:52
2025-04-01T06:45:54.248642
{ "authors": [ "slambon", "sumneko" ], "repo": "sumneko/lua-language-server", "url": "https://github.com/sumneko/lua-language-server/issues/808", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2467669699
Add padding to floatingHeader and floatingFooter When using floatingHeader and floatingFooter, it is important that the content is behind them but at the same time has the necessary margins when scrolled completely up or completely down. This can be achieved by adding padding to the scroll element. You need to pass this padding on to the child element. If possible, I would like to see an implementation where it would not be necessary to pass this padding to all child elements. For example, the child element could be wrapped in a Scaffold and the padding from the main screen would be passed there. floatingHeader and floatingFooter work this way intentionally. If you're looking for a way to make the child of the scaffold scroll behind the floatingHeader/footer, you need to set the scrollview clip to Clip.none and set the floatingHeader and floatingFooter to false. The scaffold is designed to render the child first and then the header and footer; this way you can achieve the scrolling behind the app bar. You can increase the cacheExtent by a few pixels to avoid the issue. Another alternative would be to have a fixed-size appbar, then insert your padding with the fixed size defined for the appbar into your listview. Or you can use NestedScrollView with SliverPersistentHeader or maybe CustomScrollView. Those two are the alternatives so far. In the meantime, I'm gonna try to figure out a way to pass the app bar layout data to the child component. I added ScaffoldHeaderPadding and ScaffoldFooterPadding which you can add to your ListView to add padding with a size depending on your header/footer size.
Can you give an example?

This is the rough example:

```dart
return Scaffold(
    floatingHeader: true,
    floatingFooter: true,
    headers: [
      AppBar(
        title: const Text('Counter App'),
        subtitle: const Text('A simple counter app'),
        surfaceOpacity: 0.2,
        leading: [
          GhostButton(
            onPressed: () {
              openDrawer(
                context: context,
                builder: (context) {
                  return Container(
                    alignment: Alignment.center,
                    constraints: const BoxConstraints(
                      maxWidth: 300,
                    ),
                    child: const Text('Drawer'),
                  );
                },
                position: OverlayPosition.left,
              );
            },
            density: ButtonDensity.icon,
            child: const Icon(Icons.menu),
          ),
        ],
        trailing: [
          GhostButton(
            density: ButtonDensity.icon,
            onPressed: () {
              openSheet(
                context: context,
                builder: (context) {
                  return Container(
                    alignment: Alignment.center,
                    constraints: const BoxConstraints(
                      maxWidth: 200,
                    ),
                    child: const Text('Sheet'),
                  );
                },
                position: OverlayPosition.right,
              );
            },
            child: const Icon(Icons.search),
          ),
        ],
      ),
      const Divider(),
    ],
    footers: [
      const Divider(),
      if (_selected == 1)
        Container(
          color: Colors.purple,
          height: 30,
        ),
      NavigationBar(
        surfaceOpacity: 0.2,
        onSelected: (i) {
          setState(() {
            _selected = i;
          });
        },
        index: _selected,
        children: [
          _buildButton('Home', Icons.home),
          _buildButton('Explore', Icons.explore),
          _buildButton('Library', Icons.library_music),
        ],
      ),
    ],
    child: Container(
      color: Colors.red,
      child: ListView(
        children: [
          const ScaffoldHeaderPadding(),
          for (int i = 0; i < 50; i++)
            OutlinedContainer(
              child: Text('Item $i'),
            ),
          const ScaffoldFooterPadding(),
        ],
      ),
    ));
```
So far I have achieved the desired effect using this code, but it seems to me that there is an easier option:

```dart
@override
Widget build(BuildContext context) {
  return CustomScrollView(
    slivers: [
      const SliverToBoxAdapter(
        child: ScaffoldHeaderPadding(),
      ),
      SliverPadding(
        padding: const EdgeInsets.all(12),
        sliver: SliverGrid(
          gridDelegate: const SliverGridDelegateWithFixedCrossAxisCount(
            crossAxisCount: 2,
            crossAxisSpacing: 10,
            mainAxisSpacing: 0,
          ),
          delegate: SliverChildBuilderDelegate(
            (BuildContext context, int index) {
              return CardImage(...);
            },
            childCount: 20,
          ),
        ),
      ),
      const SliverToBoxAdapter(
        child: ScaffoldFooterPadding(),
      ),
    ],
  );
}
```

I think this should be the final answer to your question, since a Scaffold with a floating header and footer cannot magically add padding to your scrollable viewport. ScaffoldHeaderPadding and ScaffoldFooterPadding listen to layout changes, so they resize automatically when your header/footer is resized. I'm not sure what else would be a better approach than this one.
gharchive/issue
2024-08-15T08:49:45
2025-04-01T06:45:54.263338
{ "authors": [ "goottime", "sunarya-thito" ], "repo": "sunarya-thito/shadcn_flutter", "url": "https://github.com/sunarya-thito/shadcn_flutter/issues/57", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
10665009
Implement tests with mock objects. This would help with issues like #9. Could use GHUnit or Kiwi with OCMock for testing. Begun in 92c90e7 with OHTTPStubs.
gharchive/issue
2013-02-05T20:01:20
2025-04-01T06:45:54.305311
{ "authors": [ "dcloud" ], "repo": "sunlightlabs/congress-ios", "url": "https://github.com/sunlightlabs/congress-ios/issues/44", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
2201861954
Cannot sign in to Microsoft Copilot

Describe the bug: Microsoft Copilot sign-in fails.

To Reproduce: After entering the account and password at login, the same screen keeps reappearing over and over.

Expected behavior: Successfully sign in to Microsoft Copilot.

Screenshots: No response.

OS and version: Windows 10.

ChatALL version: v1.63.96.

Network: VPN.

Additional context: No response.

This is not a ChatALL problem; it is caused by Copilot itself. You need a network proxy, and both your Microsoft account region and your current PC environment region must be regions Copilot supports, such as Singapore.

Thanks for the answer; it resolved a question that had puzzled me for a long time. Thank you very much.
gharchive/issue
2024-03-22T07:23:23
2025-04-01T06:45:54.319146
{ "authors": [ "tangzeyu7", "ttyy33", "xianerhello" ], "repo": "sunner/ChatALL", "url": "https://github.com/sunner/ChatALL/issues/765", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2467565991
chat: add configurable OpenAI API base and model

Add support for configuring OPENAI_API_BASE & OPENAI_MODEL via the local env.

Thanks @lroolle! Would you mind adding some docs for this in the apps/postgres-new README? We should also add a disclaimer saying that the current prompts/tools were designed around GPT-4o, so expect other models to behave differently and produce different results, and that you will need to choose a model that supports tool (function) calling.

Hi @gregnr, README updated.
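The PR only names the two variables. As a hedged sketch (only the variable names come from the PR description; the values below are placeholders, not from the PR), setting them in a shell before starting the app might look like:

```shell
# Hypothetical values; only the variable names OPENAI_API_BASE and
# OPENAI_MODEL come from the PR description.
export OPENAI_API_BASE="https://api.openai.com/v1"   # any OpenAI-compatible endpoint
export OPENAI_MODEL="gpt-4o"                         # pick a model that supports tool calling

echo "Using model $OPENAI_MODEL at $OPENAI_API_BASE"
```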
gharchive/pull-request
2024-08-15T07:30:10
2025-04-01T06:45:54.334621
{ "authors": [ "lroolle" ], "repo": "supabase-community/postgres-new", "url": "https://github.com/supabase-community/postgres-new/pull/75", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1083348809
Please introduce tags Please introduce tags and releases to this package. As you know Swift Playgrounds 4 got released, but it only supports SPM packages by version number, not by branch. Thus it is currently impossible to use this SDK in a Playgrounds project. I absolutely second this request. Versioning tags are missing for supabase, and from what I can see are the only thing stopping this from being usable in swift playgrounds. Further to this, no BAAS can be used because although they have swift packages, they are wrappers for C++ code, which cannot be used in swift playgrounds. This seems to be the only package solely written in swift. Admittedly not a big market here, but unless I am much mistaken, this should be 10 minute update for somebody to change. Thanks for the top work Tag created 0.0.1.
gharchive/issue
2021-12-17T15:06:48
2025-04-01T06:45:54.336793
{ "authors": [ "chFlorian", "grsouza", "samjproctor" ], "repo": "supabase-community/supabase-swift", "url": "https://github.com/supabase-community/supabase-swift/issues/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2418369050
Supabase gen types --db-url fails without Docker Desktop

Running the following command:

```
npx supabase gen types --lang=typescript --db-url postgresql://postgres:pass@localhost/webstudio
```

produces the error message:

```
Docker Desktop is a prerequisite for local development. Follow the official docs to install: https://docs.docker.com/desktop
```

Expected behavior: I am not using any Supabase services to generate types, so installing Docker just for type generation seems unnecessary.

Are you generating types for a local database or a remote project? It seems like local from your comment, but starting a local database relies on Docker anyway, so it's ok for type generation to rely on that. If you are generating for a remote project, you can use `npx supabase gen types --lang=typescript --project-id <your-project-ref>` instead.

I am using the CLI for https://github.com/supabase/postgrest-js, which can operate independently of Supabase dependencies. This means it doesn't require the entire Supabase stack to be installed via Docker. In my setup, the entire environment is placed inside a devcontainer, which is essentially a Docker container without internal Docker support. When a CLI tool depends on Docker, it often results in a substantial download of several gigabytes of various images, which can be quite costly in some locations. It would be greatly appreciated if the CLI could provide an estimate of the data it will download, especially for one-time executions.

That's a good idea.
What I will probably do is add a series of prompts when running supabase init for the services that users actually want. If you don't need any services, they can also be disabled via supabase/config.toml. That would save a lot of time and data downloading potentially unused images.
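The reply mentions disabling services via supabase/config.toml but doesn't show the syntax. As a hedged sketch (the section names and the `enabled` key are assumptions based on recent CLI versions, not taken from this thread; verify against your own generated config.toml), it might look like:

```toml
# Hypothetical fragment; check the section names against your CLI version.
[studio]
enabled = false

[realtime]
enabled = false

[analytics]
enabled = false
```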
gharchive/issue
2024-07-19T08:34:02
2025-04-01T06:45:54.341491
{ "authors": [ "istarkov", "sweatybridge" ], "repo": "supabase/cli", "url": "https://github.com/supabase/cli/issues/2536", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1228705703
Podman support

Feature request: it would be really nice to be able to use Podman instead of Docker. Podman is more lightweight and, from my understanding, can run containers rootless.

+1

Yes, you can! Just follow these instructions. (I'm using Linux Pop!_OS.)

Install podman, podman-docker, and docker-compose:

```
sudo apt-get -y install podman
sudo apt-get -y install podman-docker
sudo apt-get -y install docker-compose
```

Now clone the supabase repo and copy the example env vars:

```
# Get the code
git clone --depth 1 https://github.com/supabase/supabase

# Go to the docker folder
cd supabase/docker

# Copy the fake env vars
cp .env.example .env
```

Next, configure a Docker Hub mirror in /etc/containers/registries.conf:

```
sudo nano /etc/containers/registries.conf
```

Add these lines to the end of the file:

```
unqualified-search-registries = ['docker.io']

[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
prefix = "docker.io"
location = "docker.mirrors.ustc.edu.cn"
```

Now you are ready to start Supabase Studio:

```
sudo docker-compose up
```

Finally, visit http://localhost:3000 to start using Supabase Studio.

In case you face this error:

```
ERROR: for db  error preparing container 22e6838a1fd10267d63f69ef400000a84a1a857a5e46718003f8a5f63388bb0f for attach: cannot listen on the TCP port: listen tcp4 :5432: bind: address already in use
```

just change the POSTGRES_PORT in the .env file.

Yes, you can! https://dev.to/zakariachahboun/supabase-self-hosting-with-podman-1mhl

That's great, though personally I'm not looking to self-host, just running Supabase for local development.

Local development consumes a lot of resources! I don't know why, but it seems like there is an issue with the "authentication" part; it executes in a loop. Don't forget to tell us your experience with it!

I think the OP meant that when running a local dev instance using supabase start, the CLI could use podman to spin up the containers instead of docker.
The main reason for this is that, AFAIK, on many Linux systems (including mine) running docker requires elevated privileges, so any command that interacts with the containers (e.g. supabase start, supabase db remote commit, etc.) must be run using sudo. This can cause a variety of confusing errors, such as "unable to find credentials" because the developer ran supabase init and supabase login as non-root. Also, I think entering the root password every five minutes is probably not a pleasant experience.

I recently discovered that the CLI has the capability to use podman. It seems like, under the hood, the CLI uses docker compose to start and orchestrate the containers, so all we have to do is let docker compose know that we want to run our containers using podman. On Linux, the steps are similar to:

```
systemctl --user enable podman.socket
systemctl --user start podman.socket
systemctl --user status podman.socket
export DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock
```

Now the only problem I have is that when I run supabase start, it gives me Error: unable to upgrade to tcp, received 409. This seems like a permission issue on the podman side.

Instructions for running docker compose using podman are from here: https://fedoramagazine.org/use-docker-compose-with-podman-to-orchestrate-containers-on-fedora/

I think the biggest issue with running many docker containers on a local machine is the possibility of running out of memory. Podman is much more lightweight. Is there any other reason to use it instead of Docker?

The network errors are coming from here. Specifically, errdefs.IsConflict doesn't actually return true when a network with the same name already exists with Podman, which raises some questions:

Why does this presumably work with Docker but not Podman? I'm not entirely sure, but I would guess that errdefs.IsConflict incidentally returns true when using Docker but not Podman.
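Stepping back to the DOCKER_HOST setup mentioned earlier in the thread: you can sanity-check the rootless socket path before exporting it. A minimal sketch (it only assumes the standard systemd user-runtime directory layout shown above):

```shell
# Build the DOCKER_HOST value from the current user's UID,
# matching the rootless podman.socket path used in the steps above.
SOCKET_PATH="/run/user/$(id -u)/podman/podman.sock"
DOCKER_HOST="unix://${SOCKET_PATH}"
echo "$DOCKER_HOST"

# Warn (rather than fail) if the socket does not exist yet,
# e.g. because podman.socket has not been started.
if [ ! -S "$SOCKET_PATH" ]; then
  echo "note: $SOCKET_PATH does not exist yet; is podman.socket running?"
fi
```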
The errdefs package does warn users: "Packages should not reference these interfaces directly, only implement them."

Why don't I see any networks with podman network ls? Again, I'm not entirely sure, but DockerNetworkCreateIfNotExists is run once with the same network ID before the error occurs and doesn't run into any issues; it's only the second call that reports an error. My guess is there's some teardown code that deletes the network if it fails to start, but I didn't bother looking for that code.

How can this be fixed? Quite easily, actually. Just replace errdefs.IsConflict with a more suitable alternative, such as NetworkInspect, in internal/utils/docker.go:

```diff
 func DockerNetworkCreateIfNotExists(ctx context.Context, networkId string) error {
+	existing, err := Docker.NetworkInspect(
+		ctx,
+		networkId,
+		types.NetworkInspectOptions{},
+	)
+
+	// if network already exists, abort
+	if existing.ID != "" && err == nil {
+		return nil
+	}
+
 	_, err = Docker.NetworkCreate(
 		ctx,
 		networkId,
 		types.NetworkCreate{
 			CheckDuplicate: true,
 			Labels: map[string]string{
 				"com.supabase.cli.project":   Config.ProjectId,
 				"com.docker.compose.project": Config.ProjectId,
 			},
 		},
 	)
-	// if error is network already exists, no need to propagate to user
-	if errdefs.IsConflict(err) {
-		return nil
-	}
 	return err
 }
```

The following paragraph suggests that errdefs.IsConflict is the intended use case: "To check if a particular error implements one of these interfaces, there are helper functions provided (e.g. Is&lt;SomeError&gt;) which can be used rather than asserting the interfaces directly." I suspect the problem is that podman does not return the same HTTP status code as the docker daemon when there's a network name conflict. Do you mind checking with podman upstream if this is indeed the case? If so, would they accept PRs to address it?

I've released a fix for the podman network issue.
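The fix above boils down to an inspect-before-create pattern instead of relying on the daemon's conflict status code. A minimal sketch of that pattern, simulated here with marker files because no container engine is assumed to be available:

```shell
# Simulated "create network if not exists": checking for the resource first
# makes the operation idempotent regardless of which status code a daemon
# would return on a duplicate-name conflict.
STATE_DIR="$(mktemp -d)"

create_network_if_not_exists() {
  name="$1"
  if [ -e "$STATE_DIR/$name" ]; then
    echo "network $name already exists, skipping"
    return 0
  fi
  : > "$STATE_DIR/$name"
  echo "created network $name"
}

create_network_if_not_exists supabase_network_demo
create_network_if_not_exists supabase_network_demo
```

The first call prints a creation message; the second detects the existing marker and skips, mirroring what the patched Go function does with NetworkInspect.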
It's available on the beta release channel:

```
npx supabase@beta start
```

Let me know if there are other incompatibilities with podman that I can help iron out.

I tried npx supabase@beta start but got this error:

```
node:internal/process/promises:288
          triggerUncaughtException(err, true /* fromPromise */);
          ^

Error: getaddrinfo EAI_AGAIN supabase_db_website
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:107:26) {
  errno: -3001,
  code: 'EAI_AGAIN',
  syscall: 'getaddrinfo',
  hostname: 'supabase_db_website'
}

Node.js v18.16.0
error running container: exit 1
Try rerunning the command with --debug to troubleshoot the error.
```

I'm trying to run the Supabase CLI on Replit, which uses Nix and doesn't give the sudo access required to run Docker, so I'd also like Podman support.
Details: Need to install the following packages: supabase@1.99.6 Ok to proceed? (y) y Supabase CLI 1.99.6 Error response from daemon: error configuring network namespace for container 8f754093eeeea5228d757190ad56f343a9de3864746277dab1cbe97672550378: CNI network "supabase_network_MyProj" not found I then try and create the network manually: podman network create --label com.supabase.cli.project=MyProj --label com.docker.compose.project=MyProj supabase_network_MyProj Then trying to run again: Supabase CLI 1.99.6 Error response from daemon: the network name supabase_network_MyProj is already used I am on ubuntu with podman 3.4.4, which is a bit older at this point... so it is possible podman version is a variable as well. @addisonj I'm experiencing the same thing with 3.4.4 It seems it's been patched in recent versions. Depending on your setup, you could to build it from source or install from Kubic's repo (script). Though if you do this make sure to look at podman's install page first as it's not recommended. There were a few more errors after this. Something like CNI bridge not working after upgrading to podman 4. Fixed by installing containernetworking-plugins. Then there was database is not healthy when running supabase start (a more descriptive error message would be very helpful). docker logs -f supabase_db_project showed this was caused by pgsodium_root.key: Permission denied. I'm hoping there's another way to fix this but for now supabase starts with podman.service as root. pgsodium_root.key: Permission denied The permission error could be due to named volumes being mounted from host to container. If you know the path to podman volume directory, try giving it read / write permission to postgres user. I'm not even getting any errors when I attempt to start the containers. I just get service not healthy: [realtime-dev.supabase_realtime_sylvester supabase_pg_meta_sylvester supabase_studio_sylvester] Debug logging shows nothing useful beyond the standard output. 
The container logs for those three services are dumped but don't really show anything either: realtime-dev.supabase_realtime_sylvester: realtime-dev.supabase_realtime_sylvester container logs: 02:55:23.311 [info] == Running 20210706140551 Realtime.Repo.Migrations.CreateTenants.change/0 forward 02:55:23.315 [info] create table tenants 02:55:23.322 [info] create index tenants_external_id_index 02:55:23.328 [info] == Migrated 20210706140551 in 0.0s 02:55:23.390 [info] == Running 20220329161857 Realtime.Repo.Migrations.AddExtensionsTable.change/0 forward 02:55:23.390 [info] create table extensions 02:55:23.396 [info] create index extensions_tenant_external_id_type_index 02:55:23.401 [info] == Migrated 20220329161857 in 0.0s 02:55:23.408 [info] == Running 20220410212326 Realtime.Repo.Migrations.AddTenantMaxEps.up/0 forward 02:55:23.408 [info] alter table tenants 02:55:23.411 [info] == Migrated 20220410212326 in 0.0s 02:55:23.414 [info] == Running 20220506102948 Realtime.Repo.Migrations.RenamePollIntervalToPollIntervalMs.up/0 forward 02:55:23.420 [warning] Replica region not found, defaulting to Realtime.Repo 02:55:23.456 [debug] QUERY OK source="extensions" db=0.3ms SELECT e0."id", e0."type", e0."settings", e0."tenant_external_id", e0."inserted_at", e0."updated_at" FROM "extensions" AS e0 WHERE (e0."type" = $1) ["postgres_cdc_rls"] 02:55:23.456 [info] == Migrated 20220506102948 in 0.0s 02:55:23.462 [info] == Running 20220527210857 Realtime.Repo.Migrations.AddExternalIdUniqIndex.change/0 forward 02:55:23.462 [info] execute "alter table tenants add constraint uniq_external_id unique (external_id)" 02:55:23.465 [info] == Migrated 20220527210857 in 0.0s 02:55:23.468 [info] == Running 20220815211129 Realtime.Repo.Migrations.NewMaxEventsPerSecondDefault.change/0 forward 02:55:23.469 [info] alter table tenants 02:55:23.471 [info] == Migrated 20220815211129 in 0.0s 02:55:23.475 [info] == Running 20220815215024 Realtime.Repo.Migrations.SetCurrentMaxEventsPerSecond.change/0 
forward 02:55:23.475 [info] execute "update tenants set max_events_per_second = 1000" 02:55:23.481 [info] == Migrated 20220815215024 in 0.0s 02:55:23.490 [info] == Running 20220818141501 Realtime.Repo.Migrations.ChangeLimitsDefaults.change/0 forward 02:55:23.491 [info] alter table tenants 02:55:23.492 [info] == Migrated 20220818141501 in 0.0s 02:55:23.498 [info] == Running 20221018173709 Realtime.Repo.Migrations.AddCdcDefault.up/0 forward 02:55:23.498 [info] alter table tenants 02:55:23.499 [info] == Migrated 20221018173709 in 0.0s 02:55:23.502 [info] == Running 20221102172703 Realtime.Repo.Migrations.RenamePgType.up/0 forward 02:55:23.502 [info] execute "update extensions set type = 'postgres_cdc_rls'" 02:55:23.503 [info] == Migrated 20221102172703 in 0.0s 02:55:23.506 [info] == Running 20221223010058 Realtime.Repo.Migrations.DropTenantsUniqExternalIdIndex.change/0 forward 02:55:23.506 [info] execute "ALTER TABLE IF EXISTS tenants DROP CONSTRAINT IF EXISTS uniq_external_id" 02:55:23.508 [info] == Migrated 20221223010058 in 0.0s 02:55:23.513 [info] == Running 20230110180046 Realtime.Repo.Migrations.AddLimitsFieldsToTenants.change/0 forward 02:55:23.513 [info] alter table tenants 02:55:23.514 [info] == Migrated 20230110180046 in 0.0s 02:55:23.518 [info] == Running 20230810220907 Realtime.Repo.Migrations.AlterTenantsTableColumnsToText.change/0 forward 02:55:23.518 [info] alter table tenants 02:55:23.522 [info] == Migrated 20230810220907 in 0.0s 02:55:23.526 [info] == Running 20230810220924 Realtime.Repo.Migrations.AlterExtensionsTableColumnsToText.change/0 forward 02:55:23.526 [info] alter table extensions 02:55:23.529 [info] == Migrated 20230810220924 in 0.0s 02:55:23.532 [info] == Running 20231024094642 :"Elixir.Realtime.Repo.Migrations.Add-tenant-suspend-flag".change/0 forward 02:55:23.532 [info] alter table tenants 02:55:23.533 [info] == Migrated 20231024094642 in 0.0s 02:55:24.360 [debug] QUERY OK db=1.8ms queue=118.1ms idle=0.0ms begin [] 02:55:24.384 [debug] 
QUERY OK source="tenants" db=0.4ms SELECT t0."id", t0."name", t0."external_id", t0."jwt_secret", t0."postgres_cdc_default", t0."max_concurrent_users", t0."max_events_per_second", t0."max_bytes_per_second", t0."max_channels_per_client", t0."max_joins_per_second", t0."suspend", t0."inserted_at", t0."updated_at" FROM "tenants" AS t0 WHERE (t0."external_id" = $1) ["realtime-dev"] 02:55:24.439 [debug] QUERY OK db=1.7ms INSERT INTO "tenants" ("external_id","jwt_secret","max_bytes_per_second","max_channels_per_client","max_concurrent_users","max_events_per_second","max_joins_per_second","name","suspend","inserted_at","updated_at","id") VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12) ["realtime-dev", "iNjicxc4+llvc9wovDvqymwfnj9teWMlyOIbJ8Fh6j2WNU8CIJ2ZgjR6MUIKqSmeDmvpsKLsZ9jgXJmQPpwL8w==", 100000, 100, 200, 100, 100, "realtime-dev", false, ~N[2023-11-27 02:55:24], ~N[2023-11-27 02:55:24], <<165, 86, 32, 78, 29, 51, 75, 35, 174, 167, 248, 212, 42, 119, 216, 160>>] 02:55:24.447 [debug] QUERY OK db=5.7ms INSERT INTO "extensions" ("settings","tenant_external_id","type","inserted_at","updated_at","id") VALUES ($1,$2,$3,$4,$5,$6) [%{"db_host" => "f23Hm+RKKCxIK6ehAjN45GlQ0FDUt0uPXpwEKlJrfrg=", "db_name" => "sWBpZNdjggEPTQVlI52Zfw==", "db_password" => "sWBpZNdjggEPTQVlI52Zfw==", "db_port" => "+enMDFi1J/3IrrquHHwUmA==", "db_user" => "uxbEq/zz8DXVD53TOI1zmw==", "ip_version" => 4, "poll_interval_ms" => 100, "poll_max_changes" => 100, "poll_max_record_bytes" => 1048576, "publication" => "supabase_realtime", "region" => "us-east-1", "slot_name" => "supabase_realtime_replication_slot", "ssl_enforced" => false}, "realtime-dev", "postgres_cdc_rls", ~N[2023-11-27 02:55:24], ~N[2023-11-27 02:55:24], <<16, 110, 81, 172, 26, 192, 71, 186, 129, 66, 231, 218, 92, 169, 236, 206>>] 02:55:24.457 [debug] QUERY OK db=10.4ms commit [] 02:55:25.607 [info] Elixir.Realtime.SignalHandler is being initialized... 
02:55:25.607 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.Realtime.Tenants.Connect> 02:55:25.607 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.Realtime.Tenants.Connect> 02:55:25.608 [notice] SYN[realtime@127.0.0.1|registry<Elixir.Realtime.Tenants.Connect>] Discovering the cluster 02:55:25.608 [notice] SYN[realtime@127.0.0.1|pg<Elixir.Realtime.Tenants.Connect>] Discovering the cluster 02:55:25.608 [notice] SYN[realtime@127.0.0.1] Adding node to scope <users> 02:55:25.608 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <users> 02:55:25.608 [notice] SYN[realtime@127.0.0.1|registry<users>] Discovering the cluster 02:55:25.608 [notice] SYN[realtime@127.0.0.1|pg<users>] Discovering the cluster 02:55:25.608 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.RegionNodes> 02:55:25.608 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.RegionNodes> 02:55:25.608 [notice] SYN[realtime@127.0.0.1|registry<Elixir.RegionNodes>] Discovering the cluster 02:55:25.608 [notice] SYN[realtime@127.0.0.1|pg<Elixir.RegionNodes>] Discovering the cluster 02:55:25.608 [warning] Replica region not found, defaulting to Realtime.Repo 02:55:28.607 [debug] Tzdata polling for update. 02:55:28.813 [info] tzdata release in place is from a file last modified Fri, 22 Oct 2021 02:20:47 GMT. Release file on server was last modified Tue, 28 Mar 2023 20:25:39 GMT. 02:55:28.813 [debug] Tzdata downloading new data from https://data.iana.org/time-zones/tzdata-latest.tar.gz 02:55:28.907 [debug] Tzdata data downloaded. Release version 2023c. 
02:55:29.447 [info] Tzdata has updated the release from 2021e to 2023c 02:55:29.447 [debug] Tzdata deleting ETS table for version 2021e 02:55:29.449 [debug] Tzdata deleting ETS table file for version 2021e 02:55:34.980 [info] Running RealtimeWeb.Endpoint with cowboy 2.10.0 at :::4000 (http) 02:55:34.988 [info] Access RealtimeWeb.Endpoint at http://realtime.fly.dev 02:55:34.989 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.PostgresCdcStream> 02:55:34.989 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.PostgresCdcStream> 02:55:34.989 [notice] SYN[realtime@127.0.0.1|registry<Elixir.PostgresCdcStream>] Discovering the cluster 02:55:34.989 [notice] SYN[realtime@127.0.0.1|pg<Elixir.PostgresCdcStream>] Discovering the cluster 02:55:34.990 [notice] SYN[realtime@127.0.0.1] Adding node to scope <Elixir.Extensions.PostgresCdcRls> 02:55:34.990 [notice] SYN[realtime@127.0.0.1] Creating tables for scope <Elixir.Extensions.PostgresCdcRls> 02:55:34.990 [notice] SYN[realtime@127.0.0.1|registry<Elixir.Extensions.PostgresCdcRls>] Discovering the cluster 02:55:34.990 [notice] SYN[realtime@127.0.0.1|pg<Elixir.Extensions.PostgresCdcRls>] Discovering the cluster supabase_pg_meta_sylvester: supabase_pg_meta_sylvester container logs: > @supabase/postgres-meta@0.0.0-automated start > node dist/server/server.js (node:21) ExperimentalWarning: Importing JSON modules is an experimental feature. 
This feature could change at any time (Use `node --trace-warnings ...` to show where the warning was created) {"level":"info","time":"2023-11-27T02:55:24.680Z","pid":21,"hostname":"3a9950ac0e89","msg":"Server listening at http://0.0.0.0:8080"} {"level":"info","time":"2023-11-27T02:55:24.688Z","pid":21,"hostname":"3a9950ac0e89","msg":"Server listening at http://0.0.0.0:8081"} supabsae_studio_sylvester: supabase_studio_sylvester container logs: ▲ Next.js 13.5.3 - Local: http://localhost:3000 - Network: http://0.0.0.0:3000 ✓ Ready in 523ms This is with Podman 4 and after installing containernetworking-plugins Adding my notes to the conversation ... I had to remove the host.docker.internal:host-gateway option from supabase CLI because podman doesn't support this option. Instead they've opted to automatically add a host.docker.internal entry to every container's /etc/hosts by default. diff --git a/internal/db/start/start.go b/internal/db/start/start.go index dd7558f..ced03d5 100644 --- a/internal/db/start/start.go +++ b/internal/db/start/start.go @@ -95,7 +95,6 @@ func NewHostConfig() container.HostConfig { utils.DbId + ":/var/lib/postgresql/data", utils.ConfigId + ":/etc/postgresql-custom", }, - ExtraHosts: []string{"host.docker.internal:host-gateway"}, }) return hostConfig } diff --git a/internal/functions/serve/serve.go b/internal/functions/serve/serve.go index dc8fb66..66199ab 100644 --- a/internal/functions/serve/serve.go +++ b/internal/functions/serve/serve.go @@ -163,7 +163,6 @@ EOF }, start.WithSyslogConfig(container.HostConfig{ Binds: binds, - ExtraHosts: []string{"host.docker.internal:host-gateway"}, }), network.NetworkingConfig{ EndpointsConfig: map[string]*network.EndpointSettings{ Now I'm getting Error response from daemon: lsetxattr /Users/wryfi/src/github.com/wryfi/supaflut/supabase/functions: operation not supported. 
It appears that one of the containers is trying to set an extended attribute – which macOS doesn't support but podman doesn't prevent – on a volume-mounted folder. But I can't readily tell which container is responsible and I hit my time box for investigating further. Hope to see the supabase CLI fully functioning with podman soon! Also: for anyone just looking for a docker desktop alternative, colima is working for me with supabase with the docker runtime that's available in macports/homebrew. @wryfi Using your log, I was able to start the local containers using supabase start --ignore-health-check. Even though podman ps shows that realtime is unhappy: > podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 9f4fdb0f28ac public.ecr.aws/supabase/postgres:15.1.0.117 postgres -c confi... About a minute ago Up About a minute (healthy) 0.0.0.0:54322->5432/tcp supabase_db_buoj fd3d49376f2f public.ecr.aws/supabase/kong:2.8.1 About a minute ago Up About a minute (healthy) 0.0.0.0:54321->8000/tcp supabase_kong_buoj 414830bb3ffc public.ecr.aws/supabase/gotrue:v2.99.0 gotrue About a minute ago Up About a minute (healthy) supabase_auth_buoj 4d5a45a185b8 public.ecr.aws/supabase/inbucket:3.0.3 -logjson About a minute ago Up About a minute (healthy) 0.0.0.0:54324->9000/tcp supabase_inbucket_buoj 2d059c4c9cbc public.ecr.aws/supabase/realtime:v2.25.35 /bin/sh -c /app/b... About a minute ago Up About a minute (unhealthy) realtime-dev.supabase_realtime_buoj c2389193d94b public.ecr.aws/supabase/postgrest:v11.2.2 /bin/postgrest About a minute ago Up About a minute supabase_rest_buoj b387e3033dd3 public.ecr.aws/supabase/storage-api:v0.43.11 node dist/server.... 
About a minute ago Up About a minute (healthy) supabase_storage_buoj 14c75c57f04d public.ecr.aws/supabase/imgproxy:v3.8.0 imgproxy About a minute ago Up About a minute (healthy) storage_imgproxy_buoj 8b3f6dac3aa2 public.ecr.aws/supabase/edge-runtime:v1.23.0 About a minute ago Up About a minute supabase_edge_runtime_buoj f347e5c9df83 public.ecr.aws/supabase/postgres-meta:v0.75.0 npm run start About a minute ago Up About a minute (unhealthy) supabase_pg_meta_buoj 848ac49c7b0e public.ecr.aws/supabase/studio:20231123-64a766a node apps/studio/... About a minute ago Up About a minute (unhealthy) 0.0.0.0:54323->3000/tcp supabase_studio_buoj I wrote some quick tests to test realtime and realtime is still working. I inspected the logs using podman logs realtime-dev.supabase_realtime_buoj and I am not seeing anything suspicious. I think there may be some problems with the healthchecks. Will report back when I encounter any issues. My podman installation has netavark, aardvark-dns, and cni-plugins installed as well, no sure if that would make a difference. Also @wryfi your issue with lsexattr looks like containers/podman#13631. You may want to check relevant commits at the bottom of that issue to see if your version of podman contains the relevant fix. As of today, I was still unable to run supabase env on my M2 Mac. What I did: # install podman, initialize the machine sudo podman-mac-helper install podman machine stop podman machine set --rootful podman machine start export DOCKER_HOST="unix:///var/run/docker.sock" supabase start but I end up with: failed to start docker container: Error response from daemon: failed to create new hosts file: unable to replace "host-gateway" of host entry "host.docker.internal:host-gateway": host containers internal IP address is empty It looks to be working on my Linux machine with rootless podman running as my user, but only when I ignore the health checks. 
When I start with health checks - it starts, I can browse around in the web UI, then it disappears. Maybe the healthchecks are incorrectly being reported as not healthy, and then the command decides to shut down the containers, and ignoring the health checks makes it work? That is possible. Are there any logs you can share when start fails due to a health check? The podman health check issues have been fixed by https://github.com/supabase/cli/pull/2359 You can now use podman with the cli beta release. npx supabase@beta start Stable release will be updated next week once it's fully verified. I have addressed the other podman compatibility issues mentioned in https://github.com/supabase/cli/issues/265#issuecomment-1832282812. Please give the beta release a spin and let me know if anything is still broken. @sweatybridge Thanks, it's working for me, at least on Mac, but it might need some work. Health check fails for pg_meta and studio but this can be bypassed with --ignore-health-check so it's not critical. Also seems like studio is running just fine so no idea why the health check is failing. Also analytics has to be disabled or it won't start. Regardless thanks for getting this working. For the pg_meta health check, could you show the output of podman inspect --format '{{json .Config.Healthcheck}}' supabase_pg_meta_<id> | jq? Analytics will require https://github.com/supabase/cli/pull/2061 to be merged. @sweatybridge Hello! Thank you for the patches :) Unfortunately, I am still getting one of the errors above with npx supabase@1.178.2 start: Stopping containers...
failed to start docker container: Error response from daemon: failed to create new hosts file: unable to replace "host-gateway" of host entry "host.docker.internal:host-gateway": host containers internal IP address is empty Try rerunning the command with --debug to troubleshoot the error. Is there anything I should adjust with my configuration? @Hoolean which version of podman are you using? Based on upstream issue https://github.com/containers/podman/issues/14390#issuecomment-1693194203, it should be fixed in v4.7 and above. @sweatybridge Thanks for the speedy reply :) Podman appears to be up-to-date: $ podman --version podman version 5.1.1 An apology though: I may have had an environment issue yesterday anyway, as now the error message is different: $ npx supabase@1.178.2 start Stopping containers... failed to start docker container: Error response from daemon: setting up Pasta: pasta failed with exit code 1: Couldn't get any nameserver address Failed to open() /dev/net/tun: No such device Failed to set up tap device in namespace Try rerunning the command with --debug to troubleshoot the error. The problem still occurs on my Mac M-chip machine with podman v5. Completely fresh installation of Podman and Supabase CLI, everything runs except for the health checks. Starting with --ignore-health-check allows me to use everything. Two containers remain unhealthy, namely: supabase_pg_meta_supaflags container is not ready: unhealthy supabase_studio_supaflags container is not ready: unhealthy Studio logs: ▲ Next.js 14.2.3 - Local: http://localhost:3000 - Network: http://0.0.0.0:3000 ✓ Starting...
✓ Ready in 330ms pg logs: (node:1) ExperimentalWarning: Importing JSON modules is an experimental feature and might change at any time (Use `node --trace-warnings ...` to show where the warning was created) {"level":"info","time":"2024-08-21T12:43:05.758Z","pid":1,"hostname":"11787cc5ed12","msg":"Server listening at http://0.0.0.0:8080"} {"level":"info","time":"2024-08-21T12:43:05.763Z","pid":1,"hostname":"11787cc5ed12","msg":"Server listening at http://0.0.0.0:8081"} I didn't hit the network issue or the hosts issue, but I got an error about mounting the docker sock. What I have: export DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock supabase start What I got: WARNING: analytics requires mounting default docker socket: /var/run/docker.sock Stopping containers... failed to create docker container: Error response from daemon: make cli opts(): making volume mountpoint for volume /var/run/docker.sock: mkdir /var/run/docker.sock: permission denied Try rerunning the command with --debug to troubleshoot the error. The supabase start command did pull the docker images and ran a few containers, so DOCKER_HOST and podman are working. @sweatybridge please re-check this issue. Was having the same issue on mac m1 using podman v5.0.0, but upgrading the supabase package (v1.153.1 to v1.223.10) fixed the issue. I forgot that I had the --ignore-health-check flag on the initial supabase db start command that downloaded the container, so not sure if that flag was needed on the initial install, but restarting it again without the flag worked. I resolved the issue by removing /var/run/docker.sock and creating a new symbolic link to the socket at /run/user/$UID/podman/podman.sock. However, I consider this a hack, so I'll wait for better solutions and suggestions. Thank you.
sudo rm /var/run/docker.sock
ln -s /run/user/$(id -u)/podman/podman.sock docker.sock
supabase start --ignore-health-check
I followed the steps below as a workaround.
[!IMPORTANT] 1. Run podman's rootless socket:
systemctl --user enable podman.socket
systemctl --user start podman.socket
systemctl --user status podman.socket
[!IMPORTANT] 2. For compatibility with /var/run/docker.sock:
sudo vim /etc/systemd/system/create-docker-sock.service
[Unit]
Description=Create docker.sock symlink
After=podman.socket # or another podman-related service; make sure the podman socket is already up
[Service]
Type=oneshot
ExecStart=/bin/ln -sf /run/user/1000/podman/podman.sock /var/run/docker.sock
[Install]
WantedBy=multi-user.target
sudo systemctl enable create-docker-sock.service
sudo systemctl start create-docker-sock.service
sudo systemctl status create-docker-sock.service
[!IMPORTANT] 3. To restart the supabase cli (run via podman) when the server restarts:
sudo vim /usr/local/bin/supabase-cli-auto-start.sh
#!/bin/bash
# Change to the directory containing the Supabase config
cd /home/admin || exit
# Stop Supabase
/home/linuxbrew/.linuxbrew/bin/supabase stop --all
# Start Supabase
/home/linuxbrew/.linuxbrew/bin/supabase start
sudo chmod a+x /usr/local/bin/supabase-cli-auto-start.sh
3.5. Create another systemd service:
sudo vim /etc/systemd/system/supabase-cli-auto-start.service
[Unit]
Description=Supabase Stop and Start Service
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/supabase-cli-auto-start.sh
RemainAfterExit=true
[Install]
WantedBy=multi-user.target
Finally, make the service take effect:
sudo systemctl enable supabase-cli-auto-start.service
sudo systemctl start supabase-cli-auto-start.service
sudo systemctl status supabase-cli-auto-start.service
gharchive/issue
2022-05-07T19:31:50
2025-04-01T06:45:54.394472
{ "authors": [ "GZGavinZhao", "GanymedeIndustries", "Hoolean", "Nezteb", "RedChops", "Rocke1001feller", "Wakeful-Cloud", "addisonj", "amr3k", "andrew-w-ross", "arvalaan", "chiqors", "hnykda", "kaleocheng", "n3oney", "o1dnik", "perroboc", "probablykasper", "sameoldlab", "sweatybridge", "wryfi", "zakaria-chahboun" ], "repo": "supabase/cli", "url": "https://github.com/supabase/cli/issues/265", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2153732646
fix: update release process to create PR What kind of change does this PR introduce? Bug fix, feature, docs update, ... What is the current behavior? Please link any relevant issues here. What is the new behavior? Feel free to include screenshots if it includes visual changes. Additional context Add any other context or screenshots. Pull Request Test Coverage Report for Build 8046881118 Details 0 of 0 changed or added relevant lines in 0 files are covered. 5 unchanged lines in 1 file lost coverage. Overall coverage decreased (-0.03%) to 58.859% Files with Coverage Reduction New Missed Lines % internal/gen/keys/keys.go 5 11.76% Totals Change from base Build 8043803378: -0.03% Covered Lines: 6192 Relevant Lines: 10520 💛 - Coveralls
gharchive/pull-request
2024-02-26T09:53:36
2025-04-01T06:45:54.404932
{ "authors": [ "coveralls", "sweatybridge" ], "repo": "supabase/cli", "url": "https://github.com/supabase/cli/pull/1991", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2241494240
feat: storage bump v1.0.4 What kind of change does this PR introduce? Feature What is the new behavior? Bump storage to v1.0.4 with support for the S3 protocol supabase local development setup is running. API URL: http://127.0.0.1:54321 GraphQL URL: http://127.0.0.1:54321/graphql/v1 S3 Storage URL: http://127.0.0.1:54321/storage/v1/s3 DB URL: postgresql://postgres:postgres@127.0.0.1:54322/postgres Studio URL: http://127.0.0.1:54323 Inbucket URL: http://127.0.0.1:54324 JWT secret: super-secret-jwt-token-with-at-least-32-characters-long anon key: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6ImFub24iLCJleHAiOjE5ODM4MTI5OTZ9.CRXP1A7WOeoJeXxjNni43kdQwgnWNReilDMblYTn_I0 service_role key: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImV4cCI6MTk4MzgxMjk5Nn0.EGIM96RAZx35lJzdJsyH-qQwv8Hdp7fsn3W0YpN81IU S3 Access Key: 625729a08b95bf1b7ff351a663f3a23c S3 Secret Key: 850181e4652dd023b7a98c58ae0d2d34bd487ee0cc3254aed6eda37307425907 S3 Region: local Pull Request Test Coverage Report for Build 8680420430 Warning: This coverage report may be inaccurate. This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes. For more information on this, see Tracking coverage changes with pull request builds. To avoid this issue with future PRs, see these Recommended CI Configurations. For a quick fix, rebase this PR at GitHub. Your next report should be accurate. Details 26 of 26 (100.0%) changed or added relevant lines in 2 files are covered. 14 unchanged lines in 1 file lost coverage. Overall coverage increased (+0.08%) to 57.813% Files with Coverage Reduction New Missed Lines % internal/migration/repair/repair.go 14 83.12% Totals Change from base Build 8666347059: 0.08% Covered Lines: 6397 Relevant Lines: 11065 💛 - Coveralls
gharchive/pull-request
2024-04-13T10:24:04
2025-04-01T06:45:54.414443
{ "authors": [ "coveralls", "fenos" ], "repo": "supabase/cli", "url": "https://github.com/supabase/cli/pull/2150", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2527163670
Incorrect OnClicked_Events
Containers are not buttons, which means they don't have an OnClicked event. Replacing their root StackPanel with a Button would fix this. That said, the XAML code would most likely grow considerably, since the button styles would have to be rewritten from scratch so the buttons don't turn blue, don't draw an outline around themselves, and so on. At the moment the OnClicked event is stood in for by OnMouseLeftButtonUp, which in principle isn't so bad if we stick to the ideal of compact code. Alternatively, a custom OnClicked could be implemented for StackPanel, which would solve all of these problems at once.
I don't see a problem: during testing there was not a single false click, so I'm closing this issue, despite the overall absence of changes to the events.
gharchive/issue
2024-09-15T19:50:03
2025-04-01T06:45:54.424821
{ "authors": [ "supchyan" ], "repo": "supchyan/lolibar", "url": "https://github.com/supchyan/lolibar/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2572013845
Can't prove withdrawal transaction on https://blast.gfxlabs.io/ I can't prove a withdrawal transaction on https://blast.gfxlabs.io/ I got the error message below when calling the API https://api.superbridge.app/api/bridge/op_prove Error message: {"statusCode":500,"message":"Internal server error"} Could you reach out to support@superbridge.app with your wallet address? We'll take a closer look for you.
gharchive/issue
2024-10-08T04:23:13
2025-04-01T06:45:54.430097
{ "authors": [ "AlexBHarley", "tuannm91" ], "repo": "superbridgeapp/superbridge-app", "url": "https://github.com/superbridgeapp/superbridge-app/issues/144", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2205915695
TextSimilaritySpace issue UserWarning on CUDA initialization Describe the bug TextSimilaritySpace function returns user warning if no Cuda driver available on the machine. python3.10/site-packages/torch/cuda/__init__.py:141: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 803: system has unsupported display driver / cuda driver combination (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.) return torch._C._cuda_getDeviceCount() > 0 To Reproduce Just follow the boiler plate notebook https://github.com/superlinked/superlinked/blob/main/notebook/semantic_search_news.ipynb and you should get the warning when running TextSimilaritySpace in cell 10 Expected behavior Either the function can't run without cuda driver (and no GPU) and you should probably make that clear in the requirements, or it can and then why issue a warning? System Version: [e.g. 
3.10.0] List of package installed on the codeenv used: altair==5.2.0 annotated-types==0.6.0 attrs==23.2.0 backcall==0.2.0 beartype==0.17.2 certifi==2024.2.2 charset-normalizer==3.3.2 click==8.1.7 comm==0.2.2 decorator==5.1.1 filelock==3.13.1 fsspec==2024.3.1 furl==2.1.3 huggingface-hub==0.21.4 idna==3.6 ipykernel==4.8.2 ipython==7.34.0 ipython-genutils==0.2.0 ipywidgets==8.1.2 jedi==0.19.1 Jinja2==3.1.3 joblib==1.3.2 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 jupyter-client==6.1.12 jupyter_core==4.12.0 jupyterlab_widgets==3.0.10 MarkupSafe==2.1.5 matplotlib-inline==0.1.6 mpmath==1.3.0 networkx==3.2.1 nltk==3.8.1 numpy==1.22.4 nvidia-cublas-cu12==12.1.3.1 nvidia-cuda-cupti-cu12==12.1.105 nvidia-cuda-nvrtc-cu12==12.1.105 nvidia-cuda-runtime-cu12==12.1.105 nvidia-cudnn-cu12==8.9.2.26 nvidia-cufft-cu12==11.0.2.54 nvidia-curand-cu12==10.3.2.106 nvidia-cusolver-cu12==11.4.5.107 nvidia-cusparse-cu12==12.1.0.106 nvidia-nccl-cu12==2.19.3 nvidia-nvjitlink-cu12==12.4.99 nvidia-nvtx-cu12==12.1.105 orderedmultidict==1.0.1 packaging==24.0 pandas==2.2.1 parso==0.8.3 pexpect==4.8.0 pickleshare==0.7.5 pillow==10.2.0 prompt-toolkit==3.0.43 ptyprocess==0.7.0 pydantic==2.6.4 pydantic_core==2.16.3 Pygments==2.17.2 python-dateutil==2.9.0.post0 pytz==2024.1 PyYAML==6.0.1 pyzmq==23.2.1 referencing==0.34.0 regex==2023.12.25 requests==2.31.0 rpds-py==0.18.0 safetensors==0.4.2 scikit-learn==1.4.1.post1 scipy==1.12.0 sentence-transformers==2.2.2 sentencepiece==0.2.0 simplegeneric==0.8.1 six==1.16.0 superlinked==3.10.0 sympy==1.12 threadpoolctl==3.4.0 tokenizers==0.15.2 toolz==0.12.1 torch==2.2.1 torchvision==0.17.1 tornado==6.1 tqdm==4.66.2 traitlets==5.1.1 transformers==4.39.0 triton==2.2.0 typing_extensions==4.10.0 tzdata==2024.1 urllib3==2.2.1 wcwidth==0.2.13 widgetsnbextension==4.0.10 Did the code actually run correctly or you couldn't get it to run? 
The warning could make sense because of speed considerations - it's useful to realize that a CPU fallback has been triggered if you are running on a GPU-enabled kernel and you were hoping for a speedup. @jhk753 just wanted to check in here and see if you were able to get the code to run? @kembala was there a solution on this? Could we replicate the issue or can this be closed? To be able to replicate I would need more information on what notebook environment and machine was used. We are smoke testing these notebooks and by checking it on Google Colab (here) it seems to function well within the provided environment. Based on a quick search this could be some nvidia-specific issue connected to the environment it was run in. @jhk753 can you provide more details on what notebook environment you used for the experiment? As a quick fix I would suggest trying it out in Google Colab or Jupyter (or locally). @jhk753 closing this ticket for now but please feel free to reopen if you see the issue again
gharchive/issue
2024-03-25T14:37:33
2025-04-01T06:45:54.458922
{ "authors": [ "ClaireSuperlinked", "jhk753", "kembala", "svonava" ], "repo": "superlinked/superlinked", "url": "https://github.com/superlinked/superlinked/issues/43", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1853774052
🛑 Node - tr-2d50afca.yalaso.top is down In 4421836, Node - tr-2d50afca.yalaso.top (http://tr-2d50afca.yalaso.top/api/v1/ping) was down: HTTP code: 0 Response time: 0 ms Resolved: Node - tr-2d50afca.yalaso.top is back up in af56859.
gharchive/issue
2023-08-16T19:14:05
2025-04-01T06:45:54.481987
{ "authors": [ "RealYalaSo" ], "repo": "superrr-vpn/status", "url": "https://github.com/superrr-vpn/status/issues/676", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
863234241
Form.Submit cannot contain custom components Hello there, I am currently trying out surface. Rather enjoying the experience so far, although it takes a bit to shift my mental model from the SPA world I am used to :) I am running into an issue where the Form.Submit button cannot contain custom components in its label: <Surface.Components.Form.Submit class="bg-sahnee-pink-500 rounded hover:bg-sahnee-pink-400 p-2 text-white"> <Components.Icon icon="mdi-content-save" /> Save </Surface.Components.Form.Submit> I'd assume that this is due to it being passed down to Phoenix, which then in turn actually renders it. (Which uses the content_tag function that apparently cannot contain custom components.) ** (exit) an exception was raised: ** (ArgumentError) cannot convert component Sahnee.Business.Web.Components.Icon with id nil to HTML. A component must always be returned directly as part of a LiveView template. For example, this is not allowed: <%= content_tag :div do %> <%= live_component @socket, SomeComponent %> <% end %> Is there any workaround or can this be fixed on the Surface side? I was considering ditching the form wrappers altogether later on anyway, but am wary of doing so since I suspect that would also make me lose out on the built-in security features like CSRF. Thank you for your time and the great library! Hi @PatrickSachs! Assuming the underlying submit/2 function provided by phoenix just creates a <button type="submit">...</button>, I believe you could use Surface's own <Button> component, which is content_tag free. Something like this should work: <Button opts={{ type: "submit" }} ...> ... </Button> @miguel-s do you think we should reimplement <Submit> wrapping our <Button> instead of submit/2? Or maybe phoenix's implementation does something extra? Also, would a type prop be welcome in <Button>? @msaraiva just double checked Phoenix's submit/2 and it seems to be pretty straightforward to reimplement our own version, let's go for it.
We might also want to rename the <Button> component to something closer to what it actually does. The current name implies that this component builds a generic button, but actually it's based on Phoenix's button/2, which builds a very specialized button for submitting data to the server. This is what the docs say: Generates a button that uses a regular HTML form to submit to the given URL. Useful to ensure that links that change data are not triggered by search engines and other spidering software. @PatrickSachs in the meantime you should be OK using a regular html submit button <button type="submit" class="...">...</button> Thanks for the reply! I've gone ahead and done that. This is probably a second issue, but while doing that I've run into a separate issue: I've gone ahead and created my own button component that applies the correct classes, etc. defmodule MyProjectWeb.Components.Button do use MyProjectWeb, :component slot default def render(assigns) do ~H""" <button class="bg-sahnee-pink-500 text-white rounded p-2 text-xs"> <slot /> </button> """ end end This works well, however I'm really missing features like event/attribute forwarding, since now I'd like to use this button this way: <Components.Button :on-click="user_detail" phx-value-user={{user.id}}> <Components.Icon icon="mdi-content-save" /> Inspect user </Components.Button> Which doesn't work since I have not declared :on-click and phx-value-user on my button as props. I'd love to see a feature that would allow me to "forward" undeclared props to another component (e.g. <button class="…" @forward> - it would probably have to work differently to support static analysis by IDEs), since I don't want to declare every possible event/event value beforehand, which would also have to use a different syntax.
I've already done so in the past with class and style attributes on some other components, but I feel like such a feature would be more elegant and expressive (+ frameworks like Vue 2 also support this, they inherit undeclared props to the root DOM node) Just looked through the issues, I guess this essentially is #340? @PatrickSachs big improvements coming, stay tuned :) But you can already do something like that using the :attrs directive in your <Button> component, check out #344 to see an example with the new <Submit> component. If you have any questions drop by the Surface Slack channel. PR is merged, so closing this issue. We might also want to rename the component to something closer to what it actually does. The current name implies that this component builds a generic button, but actually it's based on Phoenix's button/2, which builds a very specialized button for submitting data to the server. @miguel-s Personally I think it's a feature (not a bug!) that these core Surface components merely wrap Phoenix.HTML's core functionality. Surface adds an extra layer of abstraction that you must understand in order to "get stuff done". In the development shop where I work I'm training people who are completely fresh to Phoenix, LiveView, and Surface. It's really useful to say, "this component is just a wrapper around Phoenix.HTML.Form.submit/3" or whatever it is. So if you understand that, you're 90% of the way to understanding this.
gharchive/issue
2021-04-20T20:55:29
2025-04-01T06:45:54.513450
{ "authors": [ "PatrickSachs", "miguel-s", "msaraiva", "paulstatezny" ], "repo": "surface-ui/surface", "url": "https://github.com/surface-ui/surface/issues/342", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
449778804
Remove unused dependencies Description Remove unused dependencies. org.apache.poi and itext are not used anymore in the backend @devnied : Can u plz fix the build ? :)
gharchive/pull-request
2019-05-29T12:29:04
2025-04-01T06:45:54.517933
{ "authors": [ "devnied", "yodamad" ], "repo": "suricate-io/suricate", "url": "https://github.com/suricate-io/suricate/pull/501", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2076019633
Address Rust String Null Byte Handling Description: Rust strings can freely contain null bytes. For example, "foo\0bar" is allowed in Rust and has a length of 7. Adding a null byte in Rust might result in a prefixed string. As an alternative, consider using something like 0b10111111. All Rust strings are valid UTF-8, and a UTF-8 character can never begin with a 10-bit pattern, as this is the starting pattern of a continuation byte. Utilizing 0b10111111 in Rust strings would ensure that no byte slice derived from a string serves as a prefix for another. Action Items: Evaluate the possibility of using 0b10111111 as an alternative to null bytes. Additional Context: This consideration aims to prevent unintended string prefixing when adding bytes in Rust. This was raised by @delskayn here closing in favor of https://github.com/surrealdb/tart/issues/13
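The UTF-8 property behind the 0b10111111 suggestion can be checked mechanically. A minimal sketch follows — in TypeScript purely for illustration, since the property belongs to UTF-8 itself rather than to Rust: no encoded character's first byte matches the 0b10xxxxxx continuation-byte pattern, so the byte 0b10111111 (0xBF) never begins a character and can serve as a terminator that no byte slice of a valid string collides with.

```typescript
// Sketch: 0b10xxxxxx bytes only ever appear as UTF-8 continuation bytes,
// so 0b10111111 (0xBF) can never start an encoded character.
function firstByte(s: string): number {
  return new TextEncoder().encode(s)[0];
}

function isContinuationByte(b: number): boolean {
  // Continuation bytes have the bit pattern 0b10xxxxxx.
  return (b & 0b11000000) === 0b10000000;
}

// Like Rust, JS strings may freely contain NUL: "foo\0bar" has length 7.
const sample = "foo\0bar";
```

Running the checks over 1-, 2-, 3-, and 4-byte characters confirms the claim, while 0xBF itself does match the continuation pattern.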
gharchive/issue
2024-01-11T08:16:33
2025-04-01T06:45:54.537152
{ "authors": [ "arriqaaq" ], "repo": "surrealdb/tart", "url": "https://github.com/surrealdb/tart/issues/12", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2302208910
Add rancher information to sriov charts This PR makes the chart more "Rancher manager" friendly for users that would start using this repo in v2.9 to consume sriov. Changes: It hides the sriov-crd chart as we are currently doing in rancher/charts and creates a "cattle" dependency in the sriov chart so that it installs the sriov-crd chart automatically Adds the icon Adds extra info like "experimental" Applying these changes makes it look and feel better: Compared to: LGTM but I think you skipped the index.html generation :) I thought I had run make index, was there another step? Let me RTFM! I wish all chart repos worked in the same way :P BTW, there was something not really working with the sriov-crd dependency but I found the reason. (nit) wondering if we might also want to add these annotations too? catalog.cattle.io/certified: rancher catalog.cattle.io/kube-version: '>= 1.16.0-0' The one about kube-version is so trivial that I decided not to add it. And the "certified: rancher", if I understood correctly, has to be added on the charts that are present in rancher/charts. For the rest I don't know, so I did not add it
gharchive/pull-request
2024-05-17T09:12:52
2025-04-01T06:45:54.568519
{ "authors": [ "e-minguez", "manuelbuil" ], "repo": "suse-edge/charts", "url": "https://github.com/suse-edge/charts/pull/129", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2065446791
Package Scope etc
Description: dependsOn ^dev
Possible Solution: No response
etc.: No response
No way to use dev with watchMode in turbo
gharchive/issue
2024-01-04T11:03:44
2025-04-01T06:45:54.574355
{ "authors": [ "manudeli" ], "repo": "suspensive/react", "url": "https://github.com/suspensive/react/issues/551", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2633370531
Update list of current NSF OSE related grants NSF has released a couple of other discipline-specific OSE grants, so I am adding both of these to the list. I think it might be worthwhile at some point converting this to a table to provide some top level info for each grant (amount, date/recurring, etc.) @RichardLitt what do you think about that? Also happy to create corresponding issues for each PR if you prefer that flow. I think this is good. If you wanted to narrow down https://github.com/ralphtheninja/open-funding/ or other funds into a separate type of data template here, that would also be useful, I think.
gharchive/pull-request
2024-11-04T17:11:16
2025-04-01T06:45:54.576028
{ "authors": [ "Nolski", "RichardLitt" ], "repo": "sustainers/academic-map", "url": "https://github.com/sustainers/academic-map/pull/113", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2237850260
Question: There is a way to access AutoIncrementCounter? Hello there, sorry to bother you with this, but I can't figure this out, do you know if there is a way to access this model via console, like: AutoIncrementCounter.all? I need to update all the counters to a new ID, but I can't find a way to do it using db migrations. I really appreciate any help you can provide. The counters are stored in a collection of their own. In theory you should be able to access this directly. Since this is not its original intended use, there's no interface exposed to reach directly into this collection. I'm curious if you are able to make it work, and would love to see how you ended up doing it 🙏 https://github.com/suweller/mongoid-autoinc/blob/30cfe694da15ddfa2709249fdfdd5d22b93e63c1/lib/autoinc/incrementor.rb#L37
gharchive/issue
2024-04-11T14:20:32
2025-04-01T06:45:54.578168
{ "authors": [ "niightly", "suweller" ], "repo": "suweller/mongoid-autoinc", "url": "https://github.com/suweller/mongoid-autoinc/issues/42", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
807618707
FAQ ja translation. Except 600 The FAQ is fully translated except for #600, because I don't know how to translate "scale" into Japanese. close #270 I haven't been able to start the review yet, but could you first leave a comment on #270? That way I'll be able to add you, @takoyaro, to the Assignees of #270! @takoyaro In VueJS it is rendered simply as スケールする, so for "Does Svelte scale?" something like Svelteはスケールしますか? wouldn't feel that unnatural either. If a free translation is fine rather than sticking closely to the original wording, it would come out as something like Svelteは大規模なアプリケーション開発にも耐えられますか? or Svelteは大規模開発も出来ますか? ("Can Svelte handle large-scale (application) development?"). @takoyaro Since almost everything is translated already, I'll just merge this at this stage! Let's handle the fixes and the translation of 600 in a separate Issue/PR! Sorry it's taking me so long — my Tokyo business trip is busier than I expected... No problem at all! If anything, I should apologize for not merging it until now even though it was almost fully translated!
gharchive/pull-request
2021-02-12T23:48:04
2025-04-01T06:45:54.600512
{ "authors": [ "myLifeAsaDog", "takoyaro", "tomoam" ], "repo": "svelte-jp/svelte-site-jp", "url": "https://github.com/svelte-jp/svelte-site-jp/pull/268", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1784230770
export const prerender = true not working Describe the bug I am trying to render a static route with: export const prerender = true; inserted inside page.ts. Reproduction Logs node:internal/event_target:1006 process.nextTick(() => { throw err; }); ^ Error: The following routes were marked as prerenderable, but were not prerendered because they were not found while crawling your app: - /terms See https://kit.svelte.dev/docs/page-options#prerender-troubleshooting for info on how to solve this at prerender (file:///C:/Backup/Projects/Crypto/tier10k/webapp/node_modules/@sveltejs/kit/src/core/postbuild/prerender.js:475:9) at async MessagePort.<anonymous> (file:///C:/Backup/Projects/Crypto/tier10k/webapp/node_modules/@sveltejs/kit/src/utils/fork.js:22:16) Emitted 'error' event on Worker instance at: at [kOnErrorMessage] (node:internal/worker:290:10) at [kOnMessage] (node:internal/worker:301:37) at MessagePort.<anonymous> (node:internal/worker:202:57) at [nodejs.internal.kHybridDispatch] (node:internal/event_target:731:20) at exports.emitMessage (node:internal/per_context/messageport:23:28) System Info System: OS: Windows 10 10.0.19045 CPU: (32) x64 AMD Ryzen 9 5950X 16-Core Processor Memory: 35.78 GB / 63.92 GB Binaries: Node: 18.12.1 - C:\Program Files\nodejs\node.EXE npm: 8.19.2 - C:\Program Files\nodejs\npm.CMD Browsers: Edge: Spartan (44.19041.1266.0), Chromium (114.0.1823.67) npmPackages: @sveltejs/adapter-node: ^1.2.4 => 1.2.4 @sveltejs/kit: ^1.20.4 => 1.21.0 svelte: ^4.0.0 => 4.0.1 vite: ^4.3.6 => 4.3.9 Severity serious, but I can work around it Additional Information No response Did you read the documentation linked to in the errors message? GitHub issues aren't the right place for support questions like this. Please ask on StackOverflow or in our Discord chat room. And when you do, provide a reproduction, or they won't be able to help you either.
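For readers hitting the same error: the troubleshooting page referenced in the log explains that a prerenderable route must either be linked from some other prerendered page (so the crawler can find it) or listed explicitly as an entry. A hedged sketch of the explicit-entry fix — the '/terms' entry mirrors the route named in the log above, and the rest of the config is assumed:

```javascript
// svelte.config.js — minimal sketch; only the prerender.entries part matters here.
/** @type {import('@sveltejs/kit').Config} */
const config = {
  kit: {
    prerender: {
      // '*' keeps the default crawl roots; '/terms' is added because no
      // crawled page links to it, so the crawler never discovers it.
      entries: ['*', '/terms']
    }
  }
};

export default config;
```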
gharchive/issue
2023-07-01T20:54:27
2025-04-01T06:45:54.604220
{ "authors": [ "Conduitry", "gigitalz" ], "repo": "sveltejs/kit", "url": "https://github.com/sveltejs/kit/issues/10302", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1401881611
Redirect in +layout.server.ts still leads to execution of load method in current +page.server.ts Describe the bug After login, the user is redirected to a specific route (in my case 'dashboard'). In the +layout.server.ts I am checking for specific conditions (user has a team) and redirecting the user to another page if no team is assigned. /** @type {import('./$types').LayoutServerLoad} */ export async function load({ locals }) { let redirectPath = null; const user: VerifyUser | null = locals.auth; if (!user) { redirectPath = '/auth/login'; } else { if (!user.team) { redirectPath = '/wizard'; } } if (redirectPath !== null) { throw redirect(302, redirectPath); } return { user }; } Even though the user is being redirected the load method within +page.server.ts of the original URL ('dashboard') is being called. I would expect that the redirect is happening without the load method of the page is being called. Reproduction create route test/+page.server.ts Add load method to the test/+page.server.ts create +layout.server.ts add redirect within +layout.server.ts call test within the browser user gets redirected, but the load method of test/+page.server.ts is being executed Logs No response System Info System: OS: Linux 5.15 Ubuntu 22.04.1 LTS 22.04.1 LTS (Jammy Jellyfish) CPU: (16) x64 Intel(R) Core(TM) i9-9900KS CPU @ 4.00GHz Memory: 2.41 GB / 15.54 GB Container: Yes Shell: 5.1.16 - /bin/bash Binaries: Node: 16.14.0 - ~/.nvm/versions/node/v16.14.0/bin/node Yarn: 1.22.19 - ~/.yarn/bin/yarn npm: 8.4.1 - ~/.nvm/versions/node/v16.14.0/bin/npm Browsers: Chrome: 105.0.5195.125 npmPackages: @sveltejs/adapter-node: 1.0.0-next.96 => 1.0.0-next.96 @sveltejs/kit: 1.0.0-next.511 => 1.0.0-next.511 svelte: ^3.50.1 => 3.50.1 vite: ^3.1.6 => 3.1.6 Severity annoyance Additional Information No response This is intended. For speed, all loads (for the layouts and for the page) are run in parallel. 
You can use await parent() in a child load if you want it to wait for its parent before proceeding. I believe there are other issues in this repo discussing potential features to provide another way to deal with this.
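The ordering that `await parent()` buys can be sketched outside the framework. Here `parent` is stubbed rather than injected by SvelteKit, and the data names (`user`, `greeting`) are purely illustrative:

```javascript
// Sketch of a +page.server.js load that defers to its layout via await parent().
// SvelteKit normally injects `parent`; it is stubbed here so the ordering can
// be demonstrated standalone.
function makeRun() {
  const order = [];
  const layoutLoad = async () => {
    order.push('layout');
    return { user: { name: 'ada' } }; // hypothetical layout data
  };
  const pageLoad = async ({ parent }) => {
    const { user } = await parent(); // page proceeds only after the layout load settles
    order.push('page');
    return { greeting: `hello ${user.name}` };
  };
  return pageLoad({ parent: layoutLoad }).then((data) => ({ order, data }));
}

makeRun().then(({ order, data }) => {
  console.log(order, data.greeting); // [ 'layout', 'page' ] hello ada
});
```

Without the `await parent()` call, both loads would start concurrently, which is the parallelism described above.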
gharchive/issue
2022-10-08T09:10:03
2025-04-01T06:45:54.609623
{ "authors": [ "Cluster2a", "Conduitry" ], "repo": "sveltejs/kit", "url": "https://github.com/sveltejs/kit/issues/7185", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1560707035
[Feature Request] Default/Easy Rate Limit strategy on all API routes

Describe the problem
Rate limiting is a basic security problem that is left up to the developer, but it really shouldn't be, particularly for a backend/fullstack framework. Not handling this by default makes SvelteKit apps with any significant API/backend calls extremely prone to Denial of Service/Denial of Wallet attacks out of the box.

Describe the proposed solution
It should be as easy as it is in comparable ecosystems:
- Rails
- Django

Alternatives considered
Do nothing; almost guarantee nobody is doing this well, if at all.

Importance: nice to have

Additional Information
The next obvious need will be to document/pave the path of how to do the rate limiting on a per-user or per-team basis while letting other users/teams proceed. I implemented this https://github.com/vercel/next.js/blob/canary/examples/api-routes-rate-limit/utils/rate-limit.ts but it should be default/declarative.

This is where the Railses and the Djangos of the world have an advantage, being properly coupled to a backend — the logic part is easy (would be trivial to implement as a handle hook, whether as a third party thing or exported from @sveltejs/kit/hooks), it's the storage that's difficult. In a serverless context we obviously can't just rely on an in-memory store. Any thoughts on how to approach that?
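For reference, the "trivial" logic part really is only a few lines as a handle hook. A minimal in-memory fixed-window sketch, fine for a single long-running server but not for serverless (which is exactly the storage problem being discussed); the window, limit, and key choice here are made up for illustration:

```javascript
// Minimal fixed-window rate limiter, shaped for use inside a SvelteKit handle hook.
// In-memory only: every serverless instance would get its own counters.
const WINDOW_MS = 60_000; // hypothetical: 60 requests per minute
const LIMIT = 60;
const hits = new Map(); // key -> { count, windowStart }

function isAllowed(key, now = Date.now()) {
  const entry = hits.get(key);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(key, { count: 1, windowStart: now }); // start a fresh window
    return true;
  }
  entry.count += 1;
  return entry.count <= LIMIT;
}

// Hypothetical hook wiring (not runnable outside SvelteKit):
// export const handle = async ({ event, resolve }) => {
//   if (!isAllowed(event.getClientAddress())) {
//     return new Response('Too Many Requests', { status: 429 });
//   }
//   return resolve(event);
// };
```

Swapping the `Map` for a KV/Redis-backed store is the part that needs a platform answer.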
(in case anyone reading this didn't know, in-memory is obviously janky but also works for low-concurrency, non-edge serverless https://twitter.com/swyx/status/1230731059903336448?s=20)

Yeah, so I acknowledge that the infra layer should give us more to work with here. Perhaps something that Vercel could push the envelope on internally that SvelteKit can then use? (This really is a pervasive flaw in fully serverless systems.) Also notable that simple storage APIs are built into Deno and Cloudflare Workers.

I was toying with some kind of etag-based timestamp hashing approach, but I don't think it'd lead to the right rate limiting experience :/ Anyway, I do feel strongly that backends calling any kind of API where potentially serious money is on the line (eyes my OpenAI bill) should have default rate limiting, so I figured I'd file this issue. It's understandable if we can't solve this for now, but it feels like a material thing that someone will have to solve at some point for serverless to be a thing.

This would probably be a good use-case for https://vercel.com/dashboard/stores, given the very little I know about them - though - I understand that this happens on the edge, so isn't a whole solution, and of course, it poses the age-old question about how we would emulate/test this locally. Then there's the even older question about implementing things which work across all platforms equally. We could provide adapter-specific config for this, but what would the syntax look like when we're trying not to build something specific to Vercel, or the edge.

Here's a rate limiter library that may be useful in the meantime: https://github.com/ciscoheat/sveltekit-rate-limiter

Ok so I have a proposal... I found this library called Unstorage https://github.com/unjs/unstorage which provides a uniform API across different types of KV stores. Can we modify @upstash's implementation of rate limit to work with this and then put it as a core utility in SvelteKit, so devs are free to use any type of storage they want, depending on their platform?

Is it ok to use this example https://vercel.com/guides/rate-limiting-edge-middleware-vercel-kv with hooks.server.js and target the '/api/*' routes?
I had a need for rate-limiting, but also coupled with an API key token system, and came up with this: https://www.npmjs.com/package/svelte-api-keys

Still working on tidy-up and a proper demo page, but the TL;DR is that it works similar to GitHub Fine-Grained Personal Access tokens:

- The generated API key is only displayed once / never stored
- Keys can be associated with multiple permissions or scopes
- Approval of each call is based on the key permissions, any expiry date, plus token-bucket rate-limiting
- Rate limits can apply globally for the entire API or per group or individual endpoints (and limits are based on key owner, not the key itself, so people could create and manage their own keys)
- Token bucket implementations are in-memory (OK-ish for dev / single server) or Redis Stack
- Key storage is a simple interface, with in-memory (suitable for dev / demo use only), Redis and Firestore implementations
- LRU cache with TTL saves on database hits for key info lookups
- You can layer on tier-based rate limits as well, again, similar to how GitHub works

@Rich-Harris Maybe you could implement it in-memory by default, with an easy way to plug in storage?

Cloudflare Workers has an official rate limiting API in open beta:

```toml
main = "src/index.js"

# The rate limiting API is in open beta.
[[unsafe.bindings]]
name = "MY_RATE_LIMITER"
type = "ratelimit"
# An identifier you define, that is unique to your Cloudflare account.
# Must be an integer.
namespace_id = "1001"
# Limit: the number of tokens allowed within a given period in a single
# Cloudflare location
# Period: the duration of the period, in seconds. Must be either 10 or 60
simple = { limit = 100, period = 60 }
```

Could this be utilized with adapter-cloudflare?
gharchive/issue
2023-01-28T05:55:46
2025-04-01T06:45:54.623805
{ "authors": [ "CaptainCodeman", "Rich-Harris", "antony", "ciscoheat", "hyunbinseo", "jesuscovam", "rnbokade", "sw-yx", "swyxio", "xl0" ], "repo": "sveltejs/kit", "url": "https://github.com/sveltejs/kit/issues/8764", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1611537148
Cloudflare adapter: confusing semantics for accessing $env/*

Describe the bug
tl;dr Using the Cloudflare adapter, are you supposed to always access environment variables via event.platform.env? Or can you still do it the regular way: import { FOO } from '$env/static/private'?

Not sure if this is a bug, a feature request, a documentation request, me doing it wrong, or just the way it has to be. Sorry.

I'm using Cloudflare. The environment variables are a mix of "normal" environment variables (database URLs, API keys, and so forth) and Cloudflare features, like KV namespaces...

```ts
// app.d.ts
declare global {
	namespace App {
		interface Platform {
			env: {
				FOO: string;
				BAR: import('@cloudflare/workers-types').KVNamespace;
			};
		}
	}
}
```

In dev, miniflare populates event.platform...

```ts
// hooks.server.ts
import { dev } from '$app/environment';
import type { Handle } from '@sveltejs/kit';

export const handle: Handle = async ({ event, resolve }) => {
	if (dev) {
		const { devMiniflare } = await import('./miniflare');
		event.platform = await devMiniflare(event.platform);
	}
	return await resolve(event);
};
```

Miniflare uses .dev.vars for the "normal" variables and wrangler.toml for the KV bindings and so forth.

Here's what's confusing me. At some point I also had a .env file with the same "normal" variables as in .dev.vars. I blithely deleted .env, assuming that SvelteKit or the adapter was using the type in app.d.ts to produce $env/*. That's not the case. The import { FOO } from '$env/static/private' method only worked in dev because of .env. For some reason I was always accessing the KV namespace (BAR in my example) via event.platform.env, so the issue of it not appearing in .env never came up.

AFAICT, in a deployed app, import { FOO } from '$env/static/private' still works. But don't quote me on that.

So, should I just entirely ignore $env/* and always access everything from the event? It's a minor inconvenience to pass the event around, but whatever.
If not, how should the dev environment be set up so the import statements work? Can $env/* get produced statically from app.d.ts rather than from the runtime, at least in these bespoke platform cases? (I'm not sure this is actually a good idea, just throwing it out there.) Thanks!

Reproduction
Minimal setup... https://github.com/cdcarson/sveltekit-kv-namespaces-repro (I committed .env and .dev.vars on purpose. Don't do this at home.)

Logs: No response

System Info
```
System:
  OS: macOS 13.2.1
  CPU: (8) arm64 Apple M1
  Memory: 37.31 MB / 8.00 GB
  Shell: 5.8.1 - /bin/zsh
Binaries:
  Node: 18.13.0 - ~/.nvm/versions/node/v18.13.0/bin/node
  npm: 8.19.3 - ~/.nvm/versions/node/v18.13.0/bin/npm
Browsers:
  Chrome: 110.0.5481.177
  Safari: 16.3
npmPackages:
  @sveltejs/adapter-auto: ^2.0.0 => 2.0.0
  @sveltejs/adapter-cloudflare: ^2.2.0 => 2.2.0
  @sveltejs/kit: ^1.5.0 => 1.10.0
  svelte: ^3.54.0 => 3.55.1
  vite: ^4.0.0 => 4.1.4
```

Severity: annoyance

Additional Information: No response

Honestly the whole env.MY_NAMESPACE thing (as opposed to, say, context.kv.MY_NAMESPACE) is something I've always found rather strange.

Anything in $env/static/[private|public] is populated at build time (from .env and/or process.env, i.e. it will respect any env vars you have configured in the Cloudflare Pages dashboard) and statically replaced — it doesn't depend on env in any way, so is guaranteed to work. This is the recommended way to use FOO since it will result in dead code elimination etc, unless for whatever reason you need to reuse the same worker with different env vars (not sure if that's even possible on Cloudflare Pages?).

$env/dynamic/[private|public] are populated from env: https://github.com/sveltejs/kit/blob/5dada256dc05674a18946f8f19db1f25ba91aeaf/packages/adapter-cloudflare/src/worker.js#L9-L11

This does result in a discrepancy between dev/preview and prod behaviour owing to the aforementioned strangeness — at dev/preview time, the SvelteKit-provided env objects...
```ts
import { env as privateEnv } from '$env/dynamic/private';
import { env as publicEnv } from '$env/dynamic/public';
```

...will be populated from .env and process.env, but in production they will also include KV namespaces.

I would recommend only using event.platform.env for KV namespaces, and preferring $env/static/* over $env/dynamic/* for everything else.
gharchive/issue
2023-03-06T14:21:57
2025-04-01T06:45:54.635269
{ "authors": [ "Rich-Harris", "cdcarson" ], "repo": "sveltejs/kit", "url": "https://github.com/sveltejs/kit/issues/9336", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1619288986
docs: mention the types without types feature

Please don't delete this checklist! Before submitting the PR, please make sure you do the following:

- [ ] It's really useful if your PR references an issue where it is discussed ahead of time. In many cases, features are absent for a reason. For large changes, please create an RFC: https://github.com/sveltejs/rfcs
- [ ] This message body should clearly illustrate what problems it solves.
- [ ] Ideally, include a test that fails without this PR but passes with it.

Tests
- [ ] Run the tests with pnpm test and lint the project with pnpm lint and pnpm check

Changesets
- [ ] If your PR makes a change that should be noted in one or more packages' changelogs, generate a changeset by running pnpm changeset and following the prompts. Changesets that add features should be minor and those that fix bugs should be patch. Please prefix changeset messages with feat:, fix:, or chore:.

I wonder if we should change the other docs and create-svelte to remove the usage of $types and then in the IDE section say you'll need to use $types if you want to use something other than VS Code?

I'm against this. In the docs these types provide valuable context to me. And since some IDEs - IntelliJ in particular - can't take advantage of it, it would be confusing for those who can't use it.
gharchive/pull-request
2023-03-10T16:53:33
2025-04-01T06:45:54.640981
{ "authors": [ "benmccann", "dummdidumm" ], "repo": "sveltejs/kit", "url": "https://github.com/sveltejs/kit/pull/9397", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1718449929
New rule - SV Constraint using array reduction method, check for casting

Is your feature request related to a new rule? Please describe.
SV constraints using array reduction methods may yield surprising results. From https://www.linkedin.com/posts/avidan-efody_constraintabrsolverabrtips-servingthenextbug-activity-7065262658182160384-VFXc?utm_source=share&utm_medium=member_desktop

```systemverilog
class bug_c;
  rand bit num_list[32];
  constraint cst_sum {
    num_list.sum() == 1;
  }
endclass
```

The sum function stores the sum in a variable. By default, this variable is the same size as the item, i.e. in this case 1 bit. If this variable wraps around (as is very likely to happen in this case), you'll get unexpected results.

Mihael Fugenfirov: a very common one. An array reduction method will return a result of the same type as the array element type (a single bit here).

Solution: cast into a variable that won't overflow:

```systemverilog
num_list.sum() with (6'(item)) == 1
```

Describe the solution you'd like
Add a lint rule to check for a missing cast in such cases.

PySlint: Violation: [FUNC_CNST_MISSING_CAST]: Potentially incorrect constraint expression! An expression involving array-reduction method sum() was found, but is missing an explicit cast. This can lead to strange results as array reduction methods return an expression of the size of its elements; check if you need a with (int'( cast around the following expression: num_list.sum()

Added rule and a test.
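The wrap-around itself can be shown without a SystemVerilog simulator. Truncating the running sum to the element width (1 bit here) means `sum() == 1` is satisfied by any assignment with an odd number of set bits, not by exactly one. A quick JavaScript model of the truncation (illustrative only, not pyslint code):

```javascript
// Model SV's array reduction: the accumulator has the element's width,
// so for 1-bit elements the sum is effectively computed modulo 2.
function truncatedSum(bits, widthBits) {
  const mask = (1 << widthBits) - 1;
  return bits.reduce((acc, b) => (acc + b) & mask, 0);
}

// With 1-bit elements, five set bits still "sum" to 1:
console.log(truncatedSum([1, 1, 1, 1, 1, 0, 0, 0], 1)); // 1, not 5

// Widening the accumulator (the `with (6'(item))` fix) gives the real count:
console.log(truncatedSum([1, 1, 1, 1, 1, 0, 0, 0], 6)); // 5
```

So without the cast, the solver is free to pick any odd-popcount solution, which is why the lint flags the missing widening.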
gharchive/issue
2023-05-21T10:21:59
2025-04-01T06:45:54.688737
{ "authors": [ "svenka3" ], "repo": "svenka3/pyslint", "url": "https://github.com/svenka3/pyslint/issues/10", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
718441203
Assets: Images for docs Images for documentation. Small 'icons' image logos:
gharchive/issue
2020-10-09T21:50:24
2025-04-01T06:45:54.728631
{ "authors": [ "Mathijs-Bakker" ], "repo": "svermeulen/Extenject", "url": "https://github.com/svermeulen/Extenject/issues/182", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
435314992
convertTransform not always converting matrix properly (version 1.2.2)

Attached source file and resulting file for reproduction: sourceAndTarget.zip

- Original transform is "matrix(-0.10443115234375 0 0 -0.10443115234375 182.15 61.15)"
- Resulting transform is "translate(182.15 61.15) scale(.10443)"
- It seems like "translate(182.15 61.15) rotate(180) scale(.10443)" is working.

Nice catch! Thanks for reporting. SVGO v1.3.0 with the fix is released.
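The decomposition can be checked numerically by applying the original matrix and each candidate transform list to a sample point (plain JavaScript, not SVGO code; SVG transform lists apply right-to-left to a point, and the sample point is arbitrary):

```javascript
const s = 0.10443115234375;
// SVG matrix(a b c d e f): p -> (a*x + c*y + e, b*x + d*y + f)
const apply = (m, [x, y]) => [m[0] * x + m[2] * y + m[4], m[1] * x + m[3] * y + m[5]];
const matrix = [-s, 0, 0, -s, 182.15, 61.15];

// translate(182.15 61.15) rotate(180) scale(s), applied right-to-left
const withRotate = ([x, y]) => {
  let p = [s * x, s * y];               // scale(s)
  p = [-p[0], -p[1]];                   // rotate(180)
  return [p[0] + 182.15, p[1] + 61.15]; // translate(182.15 61.15)
};

// SVGO 1.2.2's output: translate(...) scale(...) with rotate(180) dropped
const withoutRotate = ([x, y]) => [s * x + 182.15, s * y + 61.15];

const pt = [100, 40];
console.log(apply(matrix, pt), withRotate(pt), withoutRotate(pt));
```

Only the version with rotate(180) reproduces the matrix: the negative diagonal entries are a 180-degree rotation combined with a positive scale, so dropping the rotation flips the sign of the scaled coordinates.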
gharchive/issue
2019-04-19T21:53:29
2025-04-01T06:45:54.730906
{ "authors": [ "GreLI", "stevebrisebois" ], "repo": "svg/svgo", "url": "https://github.com/svg/svgo/issues/1101", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
177369845
transformsWithOnePath NaN

Input:
```xml
<svg xmlns="http://www.w3.org/2000/svg" width="1000" height="1000" viewBox="0 0 1000 1000"><path d="M791 541H541v250h-82V541H209v-82h250V209h82v250h250v82z"/></svg>
```

Config:
```js
transformsWithOnePath: {
  width: 16,
  height: 16,
}
```

Output:
```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1000 1000" width="16" height="16"><path d="M12.656 8.656HNaN NaNl0 4l-1.312 0VNaN NaNHNaN NaNl0-1.312l4 0VNaN NaNl1.312 0l0 4l4 0l0 1.312z"/></svg>
```

In addition, the viewBox is not resized. https://github.com/svg/svgo/issues/547
gharchive/issue
2016-09-16T08:24:28
2025-04-01T06:45:54.733462
{ "authors": [ "GreLI", "XhmikosR", "breeswish" ], "repo": "svg/svgo", "url": "https://github.com/svg/svgo/issues/599", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2015508289
fix(convertPathData): handle setting prev properly to fix path joining and #1855 I think the issue there is caused by the path joining acting weird because prev is set to a nonexistent (filtered out) command. This PR moves where prev is set to fix. Fix #1855 I was just looking at this, and came to the same conclusion/fix. Just went back to the issue to test with the original SVG, and see you'd already solved it an hour ago. ^-^' The only difference is between yours and what I was going to PR is that I added a test case: Should merge M and m, even when Z command is used between. === <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"> <path d="M1 1m1 1"/> <path fill="black" d="M8.5 12Zm0 8q3.35 0 5.675-2.325"/> </svg> @@@ <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"> <path d="M2 2"/> <path fill="black" d="M8.5 20q3.35 0 5.675-2.325"/> </svg> Mind adding this to yours, then I'd be happy to merge! Awesome, thank you very much for resolving this! Nice! Thanks for fixing it!
gharchive/pull-request
2023-11-28T23:38:45
2025-04-01T06:45:54.736481
{ "authors": [ "KTibow", "SethFalco", "cyberalien" ], "repo": "svg/svgo", "url": "https://github.com/svg/svgo/pull/1856", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
302623820
Fixed attribution of CSS Color Module 4 fix

Actually, I did that :P

Oh really, #755? Seems I had overlooked this. 👍 thanks! 😝
gharchive/pull-request
2018-03-06T09:29:01
2025-04-01T06:45:54.737630
{ "authors": [ "GreLI", "ydaniv" ], "repo": "svg/svgo", "url": "https://github.com/svg/svgo/pull/918", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1021858328
setup-java v2

v2 is the latest version of setup-java https://github.com/actions/setup-java/blob/main/README.md

@frantuma Obsolete. I will open a new PR
gharchive/pull-request
2021-10-10T02:14:50
2025-04-01T06:45:54.817391
{ "authors": [ "sullis" ], "repo": "swagger-api/swagger-core", "url": "https://github.com/swagger-api/swagger-core/pull/4038", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1055466512
$ sign in router path generates parse error

Describe the bug
Our API path includes $ signs, as they're widely used in the FHIR medical record format. Including the $ signs in the router comment leads to a swag error:

```
ParseComment error in file main.go :can not parse router comment "/objects$modifier [get]"
```

To Reproduce
Create a main.go file with the following content:

```go
package swag_test

// @id specialObjectList
// @Summary Special objects listing
// @Router /objects$modifier [get]
// @Description A special listing method...
// @Tags Test
func specialObjectList() {
}
```

Run swag init

Expected behavior
To have the /objects$modifier route correctly registered.

Your swag version: 1.7.4
Your go version: go1.17.2 darwin/amd64
Desktop: OS: macOS 11.3.1

Additional context
I believe #551 was a similar issue; I'd be happy to make a similar PR to add a $ sign into the regex if it would be accepted.

Try using version 1.6.7

@sam-hoffman I don't believe "/objects$modifier" is a valid route in the OpenAPI spec. This may be the reason we are not accepting the $ sign in route definitions.

@ubogdan Is there a part of the OpenAPI spec that specifically prohibits objects$modifier? It would be super helpful to add, as FHIR is a widely used specification for medical data that calls for $modifier

@sam-hoffman It's not necessarily about prohibiting; it's that you won't be able to use the generated definition with swagger-ui to test the API. Extracted from paths-and-operations:

> Swagger supports path templating, meaning you can use curly braces {} to mark parts of a URL as path parameters:
> /users/{id}
> /organizations/{orgId}/members/{memberId}
> /report.{format}

We actually do manually add $ signs into our generated definitions in order to have our paths display correctly in swagger-ui. We were hoping to avoid having to add them manually.

I see. According to rfc3986, the provided path "/objects$modifier" contains a special character.
I feel sorry for you, but this is not a valid issue, and we won't go forward with it.

What about #551, which also added a special character?

What about it?

It looks like it was accepted, and swag now allows a : in router paths.

We accepted that PR because it's a well-known practice (used in Google APIs) and we felt it was nice to have. I don't see any benefit to introducing another special character. Why can't you simply use ":"?

I can't use : because FHIR, a well known and widely used specification for medical data, uses $

Well, can you prove it?

Specifically, the FHIR spec defines operations which make use of the $ character. https://www.hl7.org/fhir/operations.html

In the end, this is an OSS project, feel free to contribute and break it :smile:.

Nice, thank you :) Will send a PR soon
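For what it's worth, "$" is actually legal in a URI path segment per RFC 3986: pchar = unreserved / pct-encoded / sub-delims / ":" / "@", and sub-delims explicitly includes "$". A quick pchar-style check (a sketch, not swag's actual routing regex):

```javascript
// RFC 3986 pchar: unreserved, percent-encoded, sub-delims, ":" or "@".
// sub-delims = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "="
const pchar = /^(?:[A-Za-z0-9\-._~!$&'()*+,;=:@]|%[0-9A-Fa-f]{2})+$/;

console.log(pchar.test('objects$modifier')); // true
console.log(pchar.test('objects modifier')); // false (a space is not a pchar)
```

So the RFC-level objection doesn't hold; the practical objection (swagger-ui path templating) is a separate question.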
gharchive/issue
2021-11-16T22:25:35
2025-04-01T06:45:54.848596
{ "authors": [ "EmuuGrass", "gmontagu", "sam-hoffman", "ubogdan" ], "repo": "swaggo/swag", "url": "https://github.com/swaggo/swag/issues/1053", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1365117217
WIP - PHI v2

Checklist
- [ ] I have read the CONTRIBUTING guide
- [ ] Good naming (as clear and simple as possible)
- [ ] Correct behavior if external API endpoints are down, return 404, 504 (or no answer), 401 errors (ddos simulation)
- [ ] I tested desktop/mobile resolution
- [ ] I tested light/dark theme
- [ ] I tested different languages
- [ ] I checked the functionality once again (AFFECT MONEY)
- [ ] I checked the work on the Testnet
- [ ] I checked the work on the Mainnet
- [ ] I checked the work in the plugin
- [ ] I checked the PR once again

Tests
Please start auto tests as follows:
- add a label swap test to start swap tests
- add a label withdraw test to start withdraw tests

You can skip these tests if you are completely sure that your changes aren't related to this functionality.

Original issue

Video / screenshot proof

Looks like it's coming along good. What would you say is the ETA on PHI Network v2 integration?

> Looks like it's coming along good. What would you say is the ETA on PHI Network v2 integration?

Will be ready by the end of this week

> Will be ready by the end of this week

Amazing, great news. Thanks for the update.
gharchive/pull-request
2022-09-07T19:54:13
2025-04-01T06:45:54.860297
{ "authors": [ "Phinetwork", "shendel" ], "repo": "swaponline/MultiCurrencyWallet", "url": "https://github.com/swaponline/MultiCurrencyWallet/pull/5129", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
113167110
Glowy canvas is very slow. Needs to be converted to either svg or canvas gg ez Really?
gharchive/issue
2015-10-24T15:26:33
2025-04-01T06:45:54.868813
{ "authors": [ "swaschan", "talhaat" ], "repo": "swaschan/boxer6_web", "url": "https://github.com/swaschan/boxer6_web/issues/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1419171065
Variables unnecessarily renamed

Describe the bug
Function parameters are unnecessarily renamed when a reference to an undefined variable is made, if that variable is the name of an enclosing object's field.

Input code
```js
const vals = {
  x: () => 0,
  y: (x) => {
    console.log(x);
  },
  z: () => {
    console.log(x);
  },
};
```

SWC output:
```js
const vals = {
    x: ()=>0,
    y: (x1)=>{
        console.log(x1);
    },
    z: ()=>{
        console.log(x);
    }
};
```

Note that the param of y is changed from x to x1, because x is not defined in z, but it exists as a field of the object.

Output from calling the functions:
```
> vals
{ x: [Function: x], y: [Function: y], z: [Function: z] }
> vals.x()
0
> vals.y()
undefined
undefined
> vals.z()
Uncaught ReferenceError: x is not defined
    at Object.z (REPL9:7:21)
```

Config: No response

Playground link
https://play.swc.rs/?version=1.3.9&code=H4sIAAAAAAAAA0vOzysuUShLzClWsFWo5lIAggorBQ1NBVs7BQMdML8SyK8AC0DkQSAZqC0%2FJ1UvJz8dKGcNFq%2BFKK%2BCaSeoutaaCwAleDfafwAAAA%3D%3D&config=H4sIAAAAAAAAA02OsQrDMAxE%2F0VzhpKhQ%2Bau%2FQjhKsHBtoykQI3xv9cOgXSTdO%2FuVGFXB0uFjKIkY9KSDL%2BwgJVM6sRngwlM%2B2nFoNT6grKRdYR0fsxzlwOz0gVMEH3yaxlhjmMWUr0lTFugvyjBpCtLHHSgDV15kWNB4%2F6NyUGtU5E%2Fx3DV86mz%2BAntLrrCvb4v8DT%2BAEx1eRDdAAAA

Expected behavior: x should not be renamed to x1 in the parameter of y
Actual behavior: x is renamed to x1 in the parameter of y
Version: No idea, latest used with Parcel
Additional context: No response

It's by design. I may change it in the future but I don't think it's an issue

It is an issue for my code. I use the names of function parameters to build a dataflow DAG, and when parameters are renamed, it breaks the graph. Only SWC does this; other tools do not. (Basically I am using parameter names to inject values automatically into functions, so the names cannot change.)

I keep running into another issue with variables being renamed.
If I copy code from "Sources" and paste it into the JavaScript console in the browser, a lot of the time the code won't run, because a variable has been internally renamed to have a 1 suffix. This is extremely annoying. The JavaScript variable shadowing semantics are perfectly well-defined. Variables should never be renamed in SWC if they are properly and appropriately shadowing some other definition from a parent scope.

However, this seems to happen even with locals in for-loops, for no apparent reason:

```js
for (const order of orderbook) { /* ... */ }
```

renames order to order1, and I can't see why it happens other than the fact that the same type of loop with the same loop variable name occurs twice in the same function. There is no global that is even being shadowed in this case.

I have a similar issue; consider this test case:

```rust
test!(
    Syntax::Typescript(TsConfig {
        tsx: true,
        ..Default::default()
    }),
    |_| as_folder(test_visitor(Engine::Emotion)),
    emotion,
    // Input codes
    r#"
import { TailwindStyle } from "stailwc";
export default function App({ Component, pageProps }) {
  return <>
    <TailwindStyle />
    <Component {...pageProps} />
  </>;
}
"#,
    // Output codes after transformed with plugin
    r#"
import { Global } from "@emotion/react";
export default function App({ Component, pageProps }) {
  return <>
    <Global styles={css``}/>
    <Component {...pageProps} />
  </>;
}
"#
);
```

This test passes; however, swc in a later pass renames the imported Global to Global0, causing a reference error.
Another similar test case:

```rust
test!(
    Syntax::Typescript(TsConfig {
        tsx: true,
        ..Default::default()
    }),
    |_| as_folder(test_visitor(Engine::StyledComponents)),
    styled_components,
    // Input codes
    r#"
import { TailwindStyle } from "stailwc";
export default function App({ Component, pageProps }) {
  return <>
    <TailwindStyle />
    <Component {...pageProps} />
  </>;
}
"#,
    // Output codes after transformed with plugin
    r#"
import { createGlobalStyle } from "styled-components";
export default function App({ Component, pageProps }) {
  return <>
    <Global />
    <Component {...pageProps} />
  </>;
}
const Global = createGlobalStyle(``);
"#
);
```

We transform the import, call createGlobalStyle, create a var Global, and try to use it, but similarly Global is renamed and causes a reference error.

Closing as fixed by https://github.com/swc-project/swc/pull/6670
gharchive/issue
2022-10-22T08:08:33
2025-04-01T06:45:54.925496
{ "authors": [ "arlyon", "hiroshi-yamamoto-dublr", "kdy1" ], "repo": "swc-project/swc", "url": "https://github.com/swc-project/swc/issues/6233", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1425080609
ReferenceError: Cannot access 'isServer' before initialization

Describe the bug
Hi kdy1: swc is a very awesome tool. Last year I forked Next.js v11 to modify it; the swc version used was 1.2.205. I added some custom code like below in some file:

```js
export const isServer = typeof window === 'undefined'
export const isClient = typeof window !== 'undefined'
```

Recently, when I worked in my monorepo project which includes the forked Next.js v11, packages compiled with swc had a symlink-of-dependency problem, so I checked the CHANGELOG on the swc GitHub and upgraded swc to 1.3.11; the symlink problem was resolved. My other project uses the modified Next.js as a meta framework. The project runs next dev well, but finally crashes at next build. The error looks like:

```
ReferenceError: Cannot access 'isServer' before initialization
```

Full example code at https://github.com/zackshen/swc-build-error-repro

Input code

src/index.js
```js
export * from "./error";
export * from './mod_a';

export const isServer = typeof window === 'undefined'
export const isClient = typeof window !== 'undefined'
```

src/mod_a.js
```js
import { mod_b } from "./mod_b";

export function mod_a() {
  mod_b()
}
```

src/mod_b.js
```js
import { isServer } from "./index";

export function mod_b() {
  if (isServer) {
    console.log('blablabla')
  }
}

function new_instance() {
  if (isServer) {
    return { server: true }
  }
  return { server: false }
}

export const instance = new_instance()
```

src/error.js
```js
export const Error = () => {}
```

Config
```json
{
  "jsc": {
    "target": "es2016",
    "parser": {
      "syntax": "typescript",
      "dynamicImport": true
    }
  },
  "module": {
    "type": "commonjs"
  }
}
```

Final compiled file dist/index.js (if the _export statement is moved to the end of the file, yarn test passes):
"use strict"; Object.defineProperty(exports, "__esModule", { value: true }); function _export(target, all) { for (var name in all) Object.defineProperty(target, name, { enumerable: true, get: all[name] }); } _export(exports, { isServer: () => isServer, isClient: () => isClient }); _exportStar(require("./error"), exports); // ======= Error ============ // ** Call chain ---> mod_a ---> mod_b ---> index // ======= Error ============ _exportStar(require("./mod_a"), exports); function _exportStar(from, to) { Object.keys(from).forEach(function (k) { if (k !== "default" && !Object.prototype.hasOwnProperty.call(to, k)) Object.defineProperty(to, k, { enumerable: true, get: function () { return from[k]; } }); }); return from; } const isServer = typeof window === 'undefined'; const isClient = typeof window !== 'undefined'; Playground link https://github.com/zackshen/swc-build-error-repro Expected behavior yarn test pass successfully. Actual behavior Run yarn test cause error: ReferenceError: Cannot access 'isServer' before initialization Version 1.3.11 Additional context No response Your code is wrong I did "type": "module" and patched imports/exports and did node src/index.js @kdy1 thanks,It's wrong of my code.
gharchive/issue
2022-10-27T06:03:29
2025-04-01T06:45:54.934220
{ "authors": [ "kdy1", "zackshen" ], "repo": "swc-project/swc", "url": "https://github.com/swc-project/swc/issues/6271", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
733485828
BREAKING(bundler): Change Hook API to get_import_meta_props() Addresses https://github.com/denoland/deno/issues/8088. I realised the visitor was only catching import.meta in member expressions, pushed a fix.
gharchive/pull-request
2020-10-30T20:12:23
2025-04-01T06:45:54.935783
{ "authors": [ "nayeemrmn" ], "repo": "swc-project/swc", "url": "https://github.com/swc-project/swc/pull/1195", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2523802318
Quantity Issue

I was testing the quantity part of the cards and I am able to type in a quantity that is over the amount shown in stock. Please correct this error so that a user cannot go over the number shown in stock.

Fixed the code for the issue
gharchive/issue
2024-09-13T03:30:47
2025-04-01T06:45:54.958868
{ "authors": [ "Faiza-Haque", "goodritu1" ], "repo": "swe-thinhnguyen1701/TeaLicious", "url": "https://github.com/swe-thinhnguyen1701/TeaLicious/issues/38", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1746935584
isRunning: Ability to see if the server instance is currently running or not. Hi! First of all, you've done an amazing job with this library. Thank you very much. I was wondering if there's a cleaner way to see whether HTTPServer is active or not. The current workaround seems to be either using a custom wrapper around HTTPServer or checking self.listeningAddress != nil. Let me know what you think. Thank you 🙏🏻 There is currently an internal HTTPServer.isListening property that I could make public? It will be true when the server is running and currently listening on the listening address.
gharchive/issue
2023-06-08T01:43:30
2025-04-01T06:45:55.008537
{ "authors": [ "Aayush9029", "swhitty" ], "repo": "swhitty/FlyingFox", "url": "https://github.com/swhitty/FlyingFox/issues/58", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2721290154
swiftc crashes on -emit-supported-features Description Locally built swiftc crashes when called with -emit-supported-features. Reproduction No input program needed, just run: $ build/Ninja-RelWithDebInfoAssert/swift-macosx-arm64/bin/swiftc -emit-supported-features Stack dump output: SwiftDriver/Driver.swift:2311: Fatal error: unhandled output mode option -emit-supported-features fish: Job 1, '/Volumes/UserData/Code/src-swif…' terminated by signal SIGTRAP (Trace or breakpoint trap) ### Expected behavior An appropriate error message or a list of supported features ### Environment Swift version 6.1-dev (LLVM 71fe56bbf73af5a, Swift 7d044bd62c4672a) Target: arm64-apple-macosx15.0 ### Additional information _No response_ @hamishknight This looks like it should be transferred to the driver repo.
gharchive/issue
2024-12-05T20:06:45
2025-04-01T06:45:55.020733
{ "authors": [ "AnthonyLatsis", "hnrklssn" ], "repo": "swiftlang/swift", "url": "https://github.com/swiftlang/swift/issues/77990", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2419278543
[cxx-interop] Fix the printing of types with generic arguments Previously, the code retrieved the declaration for types with generic arguments, and the printer used that declaration. This was a lossy operation: we printed the type with its generic parameters instead of its arguments. This patch makes sure we print the type with the arguments. Unfortunately, the code structure is not the clearest; type printing is currently inherently part of function signature printing. This code path needs to be factored out in the future to make the code easier to understand. rdar://130679337 @swift-ci please smoke test
gharchive/pull-request
2024-07-19T16:10:18
2025-04-01T06:45:55.022222
{ "authors": [ "Xazax-hun" ], "repo": "swiftlang/swift", "url": "https://github.com/swiftlang/swift/pull/75372", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
285192690
Automatic summary row based on a given function I'm submitting a ... (check one with "x") [ ] bug report => search github for a similar issue or PR before submitting [x] feature request [ ] support request => Please do not submit support request here, post on Stackoverflow or Gitter Current behavior Currently, the only way to make some kind of summary is to add manually calculated metrics to the footer of a table. Expected behavior I want to have a separate row for a summary like here: https://imgur.com/a/2VziI. The width of each cell should be the same as in the rest of the table. In addition, it should act as a regular row, and as a user, I want to define a function which will compute a value for each cell in such a row. It would be nice if I were able to set the position of the summary row: either the top or the bottom of a page. Reproduction of the problem What is the motivation / use case for changing the behavior? The current solution is unreadable in the case of a table with many columns. Please tell us about your environment: Arch Linux, Node 9.3.0, Angular 5 Table version: 11.1.7 Angular version: 5.0.3 Browser: [all] Language: [all] I want to offer help with writing that feature if you would like to have it. I was going to request this feature last week, so please consider implementing this! I would like the summary to sit above the pagination and stick, to allow the content to be scrolled normally. (e.g. each column should have a 'totals' cell that sits above the pagination bar as per the image: here) I would be willing to help where I can as well :) What if I don't want columns to be defined in TS like this and instead I need them inline in HTML?
columns = [
  { name: 'Description', prop: 'Item', flexGrow: 3, summaryFunc: () => null },
  { name: 'Expenditures', prop: 'Expenses', flexGrow: 1, summaryFunc: (cells) => this.avgAge(cells) },
  { name: 'Receipts', prop: 'Reciepts', flexGrow: 1, summaryFunc: (cells) => this.avgAge(cells) },
  { name: 'Accounting', prop: 'Accounting', flexGrow: 2, summaryFunc: () => null },
];
How can I apply an inline summaryFunc in the columns below:
<ngx-datatable-column name="Description" prop="Item" [flexGrow]="3" [resizeable]="false"></ngx-datatable-column>
<ngx-datatable-column name="Expenditures" prop="Expenses" [flexGrow]="1" [resizeable]="false"></ngx-datatable-column>
<ngx-datatable-column name="Receipts" prop="Reciepts" [flexGrow]="1" [resizeable]="false"></ngx-datatable-column>
<ngx-datatable-column name="Accounting" prop="Accounting" [flexGrow]="2" [resizeable]="false"></ngx-datatable-column>
So you want to have a table with a structure defined only in HTML, like that:
<ngx-datatable>
  <ngx-datatable-column></ngx-datatable-column>
  <ngx-datatable-column></ngx-datatable-column>
</ngx-datatable>
The current implementation doesn't cover such a case, but it definitely should. I'll try to add support for that syntax to the PR connected to that issue. Thank you for your feedback, I really appreciate that! @farhanmughal222 Good news! Summary row now works in inline HTML:
<ngx-datatable [summaryRow]="true" [rows]="rows">
  <ngx-datatable-column prop="prop1" [summaryFunc]="summaryForProp1"></ngx-datatable-column>
  <ngx-datatable-column name="Prop2" [summaryFunc]="summaryForProp2"></ngx-datatable-column>
</ngx-datatable>
https://github.com/swimlane/ngx-datatable/pull/1233/commits/ec98687d9f56b8097c08d4325275903b4e65ed81
@SirWojtek Thank you so much bro, you are a life saver. 👍 @SirWojtek When will this code be pushed to the master of swimlane/ngx-datatable?
Haha, you're welcome :smiley: According to https://github.com/swimlane/ngx-datatable/pull/1233#issuecomment-364030162 we are waiting for the final approval from @amcdnl. @SirWojtek [summaryTemplate]="nameSummaryCell" not working yet? No, I need more time to finish that. It should be available by the end of this week. @farhanmughal222 I've just finished summaryTemplate for the inline HTML table. I hope you'll enjoy it, and sorry for the delay! @SirWojtek Great job! I can't wait for your PR to be merged. Any hope to see it on master this week, maybe next week? Can't wait to see this in master! My team has been eagerly awaiting this! Hey @SirWojtek, thank you for your work, it's a nice and useful feature. But it seems that summaryRow with summaryPosition=bottom doesn't work when scrollbarV is turned on. There is an example: https://stackblitz.com/edit/angular-ngx-datatable-summary Hello @ESadouski, thank you for your help. I've created a separate issue to keep track of this problem: https://github.com/swimlane/ngx-datatable/issues/1417. Could you check if I wrote down all the information correctly? Everything is correct, except the Table and Angular versions. I've reproduced it with ngx-datatable v12.0.0 and Angular 5.0.0 Hi @SirWojtek Is the functionality to exclude the summary row from the vertical scrolling area available or still planned as a feature? Thanks!
gharchive/issue
2017-12-30T09:04:35
2025-04-01T06:45:55.063202
{ "authors": [ "Corona17", "ESadouski", "JohnathanCena", "SirWojtek", "farhanmughal222", "maciej-karpus", "rsnider19" ], "repo": "swimlane/ngx-datatable", "url": "https://github.com/swimlane/ngx-datatable/issues/1215", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2546375707
🛑 Integrations is down In f59ac73, Integrations ($STATUS_INTEGRATIONSURL) was down: HTTP code: 0 Response time: 0 ms Resolved: Integrations is back up in a44d30a after 6 minutes.
gharchive/issue
2024-09-24T20:52:39
2025-04-01T06:45:55.065896
{ "authors": [ "AntiAliasing" ], "repo": "swipload/status", "url": "https://github.com/swipload/status/issues/189", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
109980470
Not possible to browse pictures in gallery Gallery: It is not possible to browse pictures in lightbox. Reference: http://swiss.github.io/styleguide/content_modules_-_multimedia.html#a-photo-gallery-overview Duplicate of #402
gharchive/issue
2015-10-06T10:33:13
2025-04-01T06:45:55.075111
{ "authors": [ "GS-VBS-Webteam", "tonifisler" ], "repo": "swiss/styleguide", "url": "https://github.com/swiss/styleguide/issues/408", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1765183938
Checkbox and Radio: Adjust Focus State The grey border is currently not accessible due to insufficient contrast: https://www.figma.com/file/xZ0IW0MJO0vnFicmrHiKaY/Components-Post?type=design&node-id=7777%3A34266&t=b3kxhLmIteQxIkkq-1 https://www.figma.com/file/xZ0IW0MJO0vnFicmrHiKaY/Components-Post?type=design&node-id=7777%3A34196&t=b3kxhLmIteQxIkkq-1 Adjusted some colors: https://www.figma.com/file/ojCcgC5Zd12eUSzq6V5m24/Foundations-Post?type=design&node-id=2215%3A13427&mode=design&t=Quk03By6o0pMYVBW-1 @Cian77 I'm closing this as a duplicate of #2179 and #2187
gharchive/issue
2023-06-20T11:13:27
2025-04-01T06:45:55.077590
{ "authors": [ "gfellerph", "wueestd" ], "repo": "swisspost/design-system", "url": "https://github.com/swisspost/design-system/issues/1542", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1904238879
Inconsistent type spec for deliver(email, config) Describe the bug Based on the type spec for deliver(email, config), the return value type is @spec deliver_many(list(%Swoosh.Email{}), Keyword.t()) :: {:ok, term} | {:error, term} but it seems that the function only returns :ok on success when I test. Also, in the same module, the example code seems to suggest the same. Steps to Reproduce the Bug or Issue Set up the config for the Mailer and check the return value of the deliver function. Expected behavior Use the correct type spec for the deliver function. Your reproducible example No response Screenshots or Videos No response Platform OTP: Elixir: Additional context No response https://github.com/swoosh/swoosh/blob/main/lib/swoosh/adapters/brevo.ex#L77-L88 All adapters return {:ok, term} | {:error, term}. The doc is wrong. @princemaple Thanks for the quick fix. My app still returns only :ok; I will look into it. Which adapter is used?
gharchive/issue
2023-09-20T06:18:33
2025-04-01T06:45:55.114527
{ "authors": [ "princemaple", "ryukikikie" ], "repo": "swoosh/swoosh", "url": "https://github.com/swoosh/swoosh/issues/812", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1441043985
Author page should display author details Scenario Given an author named 'Alan Turing' When a user visits Turing's details page Then no error should occur Hints A 'show' page displays an author's details. In a test, visit the show page of @alan with visit author_path(@alan). Create a separate file for author show page feature tests. Make sure it ends with _spec.rb, so that RSpec recognizes it as a test and runs it. Test for existence of text on a page using the have_text matcher: expect(page).to have_text(...). Create a show action on the AuthorsController and a 'show' view (app/views/authors/show.html.erb). Here is the relevant section (5.7) of the Rails Guide. Tip: FactoryBot.create :author creates and saves a valid author object in your tests. If you un-comment the code in spec/factories/authors.rb, the author will be initialized with meaningful values. Error Got AbstractController::ActionNotFound: The action 'show' could not be found for AuthorsController 7 exercise tests have passed. There are 44 in total. You will solve multiple at once towards the end. Sorry, it's not working for me yet: Scenario Given an author named 'Alan Turing' When a user visits Turing's details page Then no error should occur Error Got ActionController::MissingExactTemplate: AuthorsController#show is missing a template for request formats: text/html If you have problems solving this task, please don't hesitate to contact the teaching team! Sorry, it's not working for me yet: Scenario Given an author named 'Alan Turing' When a user visits Turing's details page Then no error should occur Error Got AbstractController::ActionNotFound: The action 'show' could not be found for AuthorsController If you have problems solving this task, please don't hesitate to contact the teaching team!
Sorry, it's not working for me yet: Scenario Given an author named 'Alan Turing' When a user visits Turing's details page Then no error should occur Error Got ActionController::MissingExactTemplate: AuthorsController#show is missing a template for request formats: text/html If you have problems solving this task, please don't hesitate to contact the teaching team! Issue resolved successfully
gharchive/issue
2022-11-08T23:08:53
2025-04-01T06:45:55.128459
{ "authors": [ "johannaschlimme", "swt2public" ], "repo": "swt2-intro-exercise/rails-exercise-22-johannaschlimme", "url": "https://github.com/swt2-intro-exercise/rails-exercise-22-johannaschlimme/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1221808565
🛑 Random site is down In af673e0, Random site (https://www.southernjive.co.uk) was down: HTTP code: 0 Response time: 0 ms Resolved: Random site is back up in ea004b3.
gharchive/issue
2022-04-30T11:28:33
2025-04-01T06:45:55.131155
{ "authors": [ "sxa" ], "repo": "sxa/aoupptime", "url": "https://github.com/sxa/aoupptime/issues/1685", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1428490039
🛑 雙龍職棒SLB is down In ba63425, 雙龍職棒SLB (https://slbtw.cf/) was down: HTTP code: 0 Response time: 0 ms Resolved: 雙龍職棒SLB is back up in 6104072.
gharchive/issue
2022-10-29T22:47:17
2025-04-01T06:45:55.133753
{ "authors": [ "sxadxsx" ], "repo": "sxadxsx/upptime", "url": "https://github.com/sxadxsx/upptime/issues/162", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2304500921
feat: allow themes to add a custom style for file extensions Request for comments This is a draft PR, and I'm looking for feedback on the feature itself, the implementation, and the API. This adds a feature that allows (optionally) specifying a custom style for file extensions. By default, the extension is styled the same as the filename, so the user has to opt in to this feature. This can help to visually distinguish between different file types. An extension is defined as Rust defines it, i.e. the part of the file name after the last '.'. Directories don't have extensions, so they are not affected by this feature. (Possible in the future) Solving https://github.com/sxyazi/yazi/pull/1036: instead of displaying a single filename, we could now split the filename into two parts, the name (stem) and the extension, then truncate the stem to fit the screen. This PR can help apply search highlights to these two parts separately. (Possible in the future) Allowing different styling for different types of extensions. This PR can help apply search highlights while maintaining the custom style. Example: a custom style for file extensions Example: highlights are displayed correctly when highlighting searches Example: the way to configure the style in theme.toml
diff --git a/.config/yazi/theme.toml b/.config/yazi/theme.toml
index b6392db..63d24ed 100644
--- a/.config/yazi/theme.toml
+++ b/.config/yazi/theme.toml
@@ -7,6 +7,7 @@
 syntect_theme = "~/dotfiles/.config/bat/themes/catppuccin-bat/themes/Catppuccin Macchiato.tmTheme"
 cwd = { fg = "#8bd5ca" }
+extension = { fg = "#6e738d", italic = true }
 # Hovered
 hovered = { fg = "#24273a", bg = "#8aadf4" }
Open questions Do you think this feature is useful? I can finish it if you think it is. Currently, some multibyte Unicode characters position the extension highlight incorrectly. I think this is possible to solve, but I'm not sure how it might behave on different platforms.
I noticed searching uses regex on the byte level, so maybe this could be a good way to try to solve this. Sorry, I can't accept this PR. Its implementation is too complex and contradicts Yazi's design principles - features related to UI and insensitive to performance should be prioritized to be implemented on the Lua side, allowing users the ability to rewrite them to customize their own interactive experience. features related to UI and insensitive to performance should be prioritized to be implemented on the Lua side, allowing users the ability to rewrite them to customize their own interactive experience Do you think it would be a good idea to rework this idea a little bit, and allow plugins to add arbitrary highlights? I could see this enabling a lot of features for plugins to write. I actually started writing this in lua, but had the following issues: no testing framework is available for lua code very difficult to maintain without tests support for type checking is limited difficult to get the implementation right Do you think it would be a good idea to rework this idea a little bit, and allow plugins to add arbitrary highlights? I could see this enabling a lot of features for plugins to write. Yes, I absolutely love the idea. It's just that I don't like its current implementation - implementing it in Rust almost bids farewell to scalability. I actually started writing this in lua, but had the following issues: Yeah this is a long-standing issue with the plugin system. But I can't afford to spend too much time on it right now, there are more pressing matters at hand - I'm planning to redesign some of the plugin system's APIs, and these will all be breaking changes, so I need to get them in place by Yazi v0.3. Perhaps we can look into addressing them after Yazi v0.3, it should be soon.
gharchive/pull-request
2024-05-19T08:39:31
2025-04-01T06:45:55.161332
{ "authors": [ "mikavilpas", "sxyazi" ], "repo": "sxyazi/yazi", "url": "https://github.com/sxyazi/yazi/pull/1057", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1268858077
Chucker toast is RenderFlex overflowed on long api url Describe the bug Chucker toast is renderflex overflowed so I couldn't click the details button To Reproduce Steps to reproduce the behavior: Use long api url See error Expected behavior Screenshots Version (please complete the following information): [✓] Flutter (Channel stable, 2.10.2, on macOS 12.3.1 21E258 darwin-arm, locale en-ID) • Flutter version 2.10.2 at /Users/Tara/flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision 097d3313d8 (4 months ago), 2022-02-18 19:33:08 -0600 • Engine revision a83ed0e5e3 • Dart version 2.16.1 • DevTools version 2.9.2 [✓] Android toolchain - develop for Android devices (Android SDK version 32.1.0-rc1) • Android SDK at /Users/Tara/Library/Android/sdk • Platform android-32, build-tools 32.1.0-rc1 • Java binary at: /Applications/Android Studio.app/Contents/jre/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 11.0.11+0-b60-7772763) • All Android licenses accepted. [✓] Xcode - develop for iOS and macOS (Xcode 13.4.1) • Xcode at /Applications/Xcode.app/Contents/Developer • CocoaPods version 1.11.2 [✓] Chrome - develop for the web • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome [✓] Android Studio (version 2021.1) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 11.0.11+0-b60-7772763) [✓] VS Code (version 1.67.2) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension can be installed from: 🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter Hello, @yoseptara thank you for reporting this issue. I will surely looking into this in the very first available slot. Till then there, is a workaround to solve this issue. 
You can place Chucker Flutter's button anywhere in your app as a widget to access its main page.
SingleChildScrollView(
  padding: const EdgeInsets.all(16),
  child: Column(
    crossAxisAlignment: CrossAxisAlignment.stretch,
    children: [
      const SizedBox(height: 16),
      ChuckerFlutter.chuckerButton,
      const SizedBox(height: 16),
      ElevatedButton(
        onPressed: get,
        child: const Text('GET'),
      ),
    ],
  ),
),
gharchive/issue
2022-06-13T03:43:10
2025-04-01T06:45:55.176546
{ "authors": [ "syedmurtaza108", "yoseptara" ], "repo": "syedmurtaza108/chucker-flutter", "url": "https://github.com/syedmurtaza108/chucker-flutter/issues/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2068523693
🛑 Syfr is down In 8dfbd33, Syfr (https://syfr.app) was down: HTTP code: 0 Response time: 0 ms Resolved: Syfr is back up in 6edaded after 9 minutes.
gharchive/issue
2024-01-06T10:30:00
2025-04-01T06:45:55.179074
{ "authors": [ "jonathanstanley" ], "repo": "syfrtech/status", "url": "https://github.com/syfrtech/status/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1900784468
🛑 Sensitive Clean is down In 0a41a0f, Sensitive Clean (https://sensitive-clean.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Sensitive Clean is back up in 503fc54 after 36 minutes.
gharchive/issue
2023-09-18T12:01:55
2025-04-01T06:45:55.269722
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/11359", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1912410651
🛑 Absolute Landscape Solutions is down In a1dfd81, Absolute Landscape Solutions (https://absolutelandscapesolutions.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Absolute Landscape Solutions is back up in a4340c6 after 16 minutes.
gharchive/issue
2023-09-25T23:12:53
2025-04-01T06:45:55.272246
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/14000", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1920710737
🛑 Superior Surface is down In 3b98ba4, Superior Surface (https://superior-surface.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Superior Surface is back up in ff91266 after 40 minutes.
gharchive/issue
2023-10-01T11:16:04
2025-04-01T06:45:55.274702
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/15966", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1921204121
🛑 Concrete Tree is down In 288d343, Concrete Tree (https://concretetree.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Concrete Tree is back up in f4d511b after 45 minutes.
gharchive/issue
2023-10-02T03:55:29
2025-04-01T06:45:55.277063
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/16229", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1930637077
🛑 Earth Works Lawn Cares is down In 5ddc3ef, Earth Works Lawn Cares (https://earthworkslawncares.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Earth Works Lawn Cares is back up in a434664 after 38 minutes.
gharchive/issue
2023-10-06T17:08:15
2025-04-01T06:45:55.279625
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/18036", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1932157097
🛑 Bricktically is down In 213340c, Bricktically (https://bricktically.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Bricktically is back up in 31b3a40 after 1 hour, 37 minutes.
gharchive/issue
2023-10-09T01:20:04
2025-04-01T06:45:55.281990
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/19329", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1932288904
🛑 Scrubble Carpet Cleaners is down In 2d7e1bc, Scrubble Carpet Cleaners (https://srubblecarpetcleaners.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Scrubble Carpet Cleaners is back up in 2668b4f after 1 hour, 11 minutes.
gharchive/issue
2023-10-09T04:33:03
2025-04-01T06:45:55.284506
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/19455", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1943376874
🛑 Simply Fluent is down In 012ff5c, Simply Fluent (https://simply-fluent.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Simply Fluent is back up in f8e5784 after 47 minutes.
gharchive/issue
2023-10-14T16:14:08
2025-04-01T06:45:55.286883
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/24964", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1944034843
🛑 Tidy Cleaners is down In 2b9e962, Tidy Cleaners (https://tidycleanerstx.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Tidy Cleaners is back up in 9ed1170 after 59 minutes.
gharchive/issue
2023-10-15T19:55:03
2025-04-01T06:45:55.289249
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/25903", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1952142402
🛑 Clean Standards is down In 0530d6f, Clean Standards (https://clean-standards.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Clean Standards is back up in ddd5624 after 1 hour, 12 minutes.
gharchive/issue
2023-10-19T12:33:19
2025-04-01T06:45:55.291639
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/28720", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1200317928
🛑 Clean Wise is down In 4d5a278, Clean Wise (https://cleanwisenv.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Clean Wise is back up in 59c2108.
gharchive/issue
2022-04-11T17:44:16
2025-04-01T06:45:55.294212
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/2948", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1954708079
🛑 Speedy Drywall is down In 43df6cd, Speedy Drywall (https://speedy-drywall.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Speedy Drywall is back up in 001d833 after 38 minutes.
gharchive/issue
2023-10-20T17:00:43
2025-04-01T06:45:55.296632
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/29671", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1955797585
🛑 Crycket Pest Control is down In eeaa2ab, Crycket Pest Control (https://crycketpestcontrol.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Crycket Pest Control is back up in eb91504 after 9 minutes.
gharchive/issue
2023-10-22T07:38:38
2025-04-01T06:45:55.299022
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/30863", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1079412663
🛑 Patio Pros Elpaso is down In 7b7b8cf, Patio Pros Elpaso (https://patioproselpaso.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Patio Pros Elpaso is back up in 152d18b.
gharchive/issue
2021-12-14T07:38:15
2025-04-01T06:45:55.301426
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/34", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1966086528
🛑 Pest Busters is down In 7dfa5a2, Pest Busters (https://pestbustersnc.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Pest Busters is back up in 14ea715 after 1 hour, 10 minutes.
gharchive/issue
2023-10-27T19:35:52
2025-04-01T06:45:55.303782
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/35007", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1967624803
🛑 Precise Word AZ is down In 8ad4f3c, Precise Word AZ (https://precisewordaz.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Precise Word AZ is back up in e1b4bb3 after 57 minutes.
gharchive/issue
2023-10-30T06:46:31
2025-04-01T06:45:55.306327
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/36931", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1967904238
🛑 Well Repairman is down In 9dd6e6e, Well Repairman (https://wellrepairman.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Well Repairman is back up in a750a08 after 20 minutes.
gharchive/issue
2023-10-30T09:30:49
2025-04-01T06:45:55.308701
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/37036", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1970070976
🛑 Arborcare Ohio is down In 58d393f, Arborcare Ohio (https://arborcareohio.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Arborcare Ohio is back up in 8446b9f after 1 hour, 43 minutes.
gharchive/issue
2023-10-31T09:57:05
2025-04-01T06:45:55.311173
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/37831", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1711205135
🛑 Audiohive is down In 31a55fd, Audiohive (https://audiohivetx.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Audiohive is back up in 15dbccd.
gharchive/issue
2023-05-16T04:13:55
2025-04-01T06:45:55.313442
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/5697", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }