| added (string) | created (timestamp[us]) | id (string) | metadata (dict) | source (string) | text (string) |
|---|---|---|---|---|---|
2025-04-01T04:35:43.369327
| 2012-01-17T16:06:38
|
2870557
|
{
"authors": [
"mndrix",
"thoughtpolice"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11464",
"repo": "thoughtpolice/hs-cityhash",
"url": "https://github.com/thoughtpolice/hs-cityhash/issues/1"
}
|
gharchive/issue
|
upgrade to CityHash version 1.0.3
On October 6, 2011, CityHash version 1.0.3 was released with changes to the hash functions to improve hash quality.
Thanks, I'll get to this soon.
|
2025-04-01T04:35:43.386514
| 2023-07-28T22:31:19
|
1827152186
|
{
"authors": [
"brliron",
"saufall"
],
"license": "unlicense",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11465",
"repo": "thpatch/thcrap",
"url": "https://github.com/thpatch/thcrap/issues/228"
}
|
gharchive/issue
|
Touhou 13 crashes during the dialogue of the last boss at stage 2 in the Japanese version
I have Touhou 13 patched to the English language and applied vpatch.
But I wanted to play in Japanese, so I patched to the Japanese version (patch #304) through the outdated thcrap_configure.exe,
and ran the game windowed at 1280×960 resolution.
I then accidentally clicked the maximize button, so it became fullscreen. When I changed it back to windowed, it only had a small window (around 680×720 or so),
so I changed the resolution in vpatch.ini to 1280×960. It worked, but the game crashes at stage 2 when the last boss gives her dialogue before entering combat.
I tried it windowed and fullscreen. I updated thcrap and reinstalled the jpn pack through thcrap.exe. I made sure I installed only the jpn patch and that the English patch is not mixed with it. But it still crashes.
Is it possibly due to a bug, or a corrupted game file? Or something with vpatch?
"at stage 2 when the last boss gives her dialogue before entering combat" sounds like something that would be caused by thcrap.
The lang_jp patch tends not to be well maintained because, well, if someone wants to play in Japanese, they don't need to apply more Japanese over their Japanese game; it's already in Japanese. I don't even know why we have this patch.
I suggest re-running the configuration tool (the outdated one or the current one, it doesn't really matter) and selecting only the patch nmlgc/base_tsa.
Thank you.
Can I just use vpatch, as I don't need a language patch anyway? Does it run the game in the unaltered Japanese version if I simply run vpatch.exe?
Yes, if you don't use a thcrap patch, you don't really need thcrap. Vpatch works just fine without thcrap.
|
2025-04-01T04:35:43.418263
| 2021-03-19T19:58:37
|
836313350
|
{
"authors": [
"dtmelt",
"kvark"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11466",
"repo": "three-rs/three",
"url": "https://github.com/three-rs/three/issues/242"
}
|
gharchive/issue
|
Panicking on Rust 1.50.0
Building and running the following minimum example panics on Rust 1.50.0:
fn main() {
    let mut win = three::Window::new("test");
    let cam = win.factory.perspective_camera(60.0, 1.0..1000.0);
    cam.look_at([5.0, 5.0, 5.0], [0.0, 0.0, 0.0], None);
    while win.update() {
        win.render(&cam)
    }
}
The backtrace is:
thread 'main' panicked at 'attempted to leave type `platform::platform::x11::util::input::PointerState` uninitialized, which is invalid', /home/dtmelt/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/mem/mod.rs:659:9
stack backtrace:
0: rust_begin_unwind
at /rustc/cb75ad5db02783e8b0222fee363c5f63f7e2cf5b/library/std/src/panicking.rs:493:5
1: core::panicking::panic_fmt
at /rustc/cb75ad5db02783e8b0222fee363c5f63f7e2cf5b/library/core/src/panicking.rs:92:14
2: core::panicking::panic
at /rustc/cb75ad5db02783e8b0222fee363c5f63f7e2cf5b/library/core/src/panicking.rs:50:5
3: core::mem::uninitialized
at /home/dtmelt/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/mem/mod.rs:659:9
4: winit::platform::platform::x11::util::input::<impl winit::platform::platform::x11::xdisplay::XConnection>::query_pointer
at /home/dtmelt/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.18.1/src/platform/linux/x11/util/input.rs:94:51
5: winit::platform::platform::x11::EventsLoop::process_event
at /home/dtmelt/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.18.1/src/platform/linux/x11/mod.rs:884:45
6: winit::platform::platform::x11::EventsLoop::poll_events
at /home/dtmelt/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.18.1/src/platform/linux/x11/mod.rs:203:13
7: winit::platform::platform::EventsLoop::poll_events
at /home/dtmelt/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.18.1/src/platform/linux/mod.rs:505:44
8: winit::EventsLoop::poll_events
at /home/dtmelt/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.18.1/src/lib.rs:273:9
9: three::window::Window::update
at /home/dtmelt/.cargo/registry/src/github.com-1ecc6299db9ec823/three-0.4.0/src/window.rs:198:9
10: rsboids::main
at ./src/main.rs:25:11
11: core::ops::function::FnOnce::call_once
at /home/dtmelt/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
I believe this is happening due to a problem in the version of winit that three currently depends on.
The dependency gfx_window_glutin="0.28.0" has dependency glutin="0.19.0", which depends on an outdated version of winit="0.18.1".
The current version of gfx_window_glutin="0.31.0" has dependency glutin="0.21.2" and uses a slightly more recent version of winit="0.19.3" but I am not sure if this version (not the most recent) fixes the issue or not.
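The version chain described above maps to a small Cargo.toml change in three; this is a hedged sketch using only the versions quoted in the comment (whether winit 0.19.x actually fixes the panic was, as noted, unverified):

```toml
[dependencies]
# was: gfx_window_glutin = "0.28.0"  (pulls glutin 0.19.0 -> winit 0.18.1)
gfx_window_glutin = "0.31.0"         # pulls glutin 0.21.2 -> winit 0.19.3
```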
Indeed, looks like we'd need to update winit here.
The old gfx crates have all long been deprecated, though.
Are you currently using three and would like to continue using it in the future?
I suppose we would accept a PR against gfx's pre-ll branch that bumps the glutin/winit versions, and then publish it.
Hi, I actually came across three while trying to begin this tutorial:
https://blog.bitsacm.in/a-fistful-of-boids/
Do you know of a crate with similar abstraction levels to three that works with whichever crates replaced gfx?
If not I would be willing to take a swing at writing that PR.
That's a great tutorial indeed! I wish we had something like this on modern tooling...
You could take a look at Bevy engine - https://bevyengine.org/
Otherwise, please file PRs to https://github.com/gfx-rs/gfx/tree/pre-ll (pre-ll branch), and we'll get it published.
Thanks kvark. I will give Bevy a shot and circle back to those PRs if I am not successful.
|
2025-04-01T04:35:43.427633
| 2021-06-27T05:07:32
|
930849974
|
{
"authors": [
"DylanVerstraete",
"robvanmieghem"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11467",
"repo": "threefoldfoundation/tft",
"url": "https://github.com/threefoldfoundation/tft/issues/109"
}
|
gharchive/issue
|
Withdrawals from BSC to Stellar are not being processed
From the master logs:
INFO [06-26|14:00:23.123] Remembering withdraw event for txHash=2c322f…c3a5a1 height=8635075 network=stellar
INFO [06-26|14:19:11.190] Remembering withdraw event for txHash=b67d12…cd4f45 height=8635451 network=stellar
No further logs found for the processing of these withdrawals
going to try to reproduce on testnet once https://github.com/threefoldfoundation/tft/issues/111 is done
Restarted the master; after a while it started processing the withdrawals but could not reach the cosigner. Restarted the cosigner and they went through. Closing for now; hopefully the little bit of extra logging will make us a bit wiser.
|
2025-04-01T04:35:43.428684
| 2020-05-26T10:02:05
|
624761157
|
{
"authors": [
"Hamdy",
"Vilnite"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11468",
"repo": "threefoldfoundation/tfwebserver_projects_people",
"url": "https://github.com/threefoldfoundation/tfwebserver_projects_people/issues/70"
}
|
gharchive/issue
|
fix true carbon logo
does not look good
This one is fixed. I'll add logos in the correct size from now on.
|
2025-04-01T04:35:43.434532
| 2020-04-06T11:42:23
|
595043861
|
{
"authors": [
"alichaddad",
"ashraffouda"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11469",
"repo": "threefoldtech/js-ng",
"url": "https://github.com/threefoldtech/js-ng/issues/174"
}
|
gharchive/issue
|
Factories and child factories review
We need to make sure that a child factory object has a reference to its factory parent.
It is not clear from the flow whether this is implemented or not.
StoredFactory takes the parent name as an argument, but that's not the case with Factory.
verifying
|
2025-04-01T04:35:43.440731
| 2021-02-03T16:22:00
|
800485359
|
{
"authors": [
"abom",
"gmachtel",
"waleedhammam"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11470",
"repo": "threefoldtech/js-sdk",
"url": "https://github.com/threefoldtech/js-sdk/issues/2451"
}
|
gharchive/issue
|
Deployed wiki does not appear in My Workloads
I deployed a wiki in my eVDC and all went well. But the wiki instance does not appear in My workloads.
In the overview of Deployed Solutions it does appear.
Duplicate of #2375; it should be fixed now. When did you create this VDC, and on which network?
@waleedhammam Can a restart/update reload the latest changes? And is a "refresh" of the chatflows needed?
Note that previously deployed wikis will not be listed in "My Workloads", but they will be listed in "Deployed workloads" because they already took this name in current k8s deployment. Newer deployments should be working.
OK, the VDC was created yesterday. But indeed, a refresh of the VDC is needed, I think. As a user has paid for a full month, he doesn't want to throw his VDC away to get the latest update. And if workloads are running, you don't want to interrupt them.
@gmachtel There's a button called Update Dashboard under your name menu; this updates the deployed VDC dashboard to the latest version.
I pushed the 'Update Dashboard' button, but the wiki deployments still don't appear when I click on them afterwards.
For old workloads it may not appear, because the changes were in the deployment chart labels/chatflows, so they only apply to newly deployed workloads. For now we can get to them via the Deployed Solutions button.
@gmachtel Yes, I mentioned this at https://github.com/threefoldtech/js-sdk/issues/2451#issuecomment-772639869
Note that previously deployed wikis will not be listed in "My Workloads", but they will be listed in "Deployed workloads" because they already took this name in current k8s deployment. Newer deployments should be working.
If they are new deployment, please give us a screenshot from configuration (when you enter wiki URL) in the chatflow, and make sure to restart/refresh.
OK, a new wiki workload was deployed, and that one indeed appeared in My Workloads.
|
2025-04-01T04:35:43.447120
| 2020-06-05T08:10:06
|
631400493
|
{
"authors": [
"BolaNasr",
"Dina-Abd-Elrahman",
"MathiasDeWeerdt",
"Pishoy",
"grimpy",
"ranatrk",
"zaibon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11471",
"repo": "threefoldtech/jumpscaleX_core",
"url": "https://github.com/threefoldtech/jumpscaleX_core/issues/892"
}
|
gharchive/issue
|
Deleting a network doesn't really delete it
The network just pops back up if I restart the container.
Can you give more information, please? Reservation ID, network name...
I deleted the network from the admin panel and then I can't ping any solutions using this network.
So, it works correctly.
I already deleted the network, but it is still shown in the network tab in the chatflow, while the container is not reachable since the network is deleted. Below is the reservation id:
http://explorer.testnet.grid.tf/explorer/reservations/461633
jsx version
root@3bot:/sandbox/code/github/threefoldtech/jumpscaleX_threebot# git log -1
commit 48b9f9876e494526e2a9c1450805f77a203dd7d7 (HEAD -> development, origin/development, origin/HEAD)
Merge: 118975c a362fc7
Author: Bola E. Nasr <EMAIL_ADDRESS>
Date: Sun Jun 14 05:04:48 2020 -0700
Merge pull request #768 from threefoldtech/development_flist_check
check about md5 flist
@Pishoy check that the reservation ID is actually the same. Because of the way the admin panel deals with networks, you can have multiple reservations modifying the same network.
Sorry, the network reservation is 461631.
I mean check the reservation ID, then delete it. If it still shows, check the reservation ID again. Most probably it will not be the same.
works now!
@Pishoy What do you mean works now?
After I tried the network deletion (reservation id 461631) again, it worked.
Also, I did the same steps again and it worked now.
Not sure what the issue was; it seems like a connection issue.
threebot logs tmux
This seems weird, I'm re-opening, there are still things to figure out here.
https://github.com/threefoldtech/js-sdk/issues/48 relates; let's backport this fix.
The fix, as suggested and implemented, recursively links each updated network with its parent network by saving the parent reservation id in the metadata of the child. If the network is cancelled from the admin dashboard, all of its parent networks are cancelled as well, so the previous version of the network (in an older reservation) will not reappear.
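The mechanism in that fix can be sketched as a small walk up the reservation chain (a Python illustration; the function shape and the `parent_id` metadata key are hypothetical stand-ins for the actual implementation):

```python
def cancel_network(reservation_id, reservations, cancel):
    # Each updated network reservation stores its parent reservation id
    # in its metadata, so cancelling from the admin dashboard walks up
    # the chain and cancels every ancestor; an older version of the
    # network can then no longer pop back up on restart.
    while reservation_id is not None:
        cancel(reservation_id)
        meta = reservations[reservation_id].get("metadata", {})
        reservation_id = meta.get("parent_id")

# Toy chain using the reservation ids from this thread:
reservations = {
    461633: {"metadata": {"parent_id": 461631}},
    461631: {"metadata": {}},
}
cancelled = []
cancel_network(461633, reservations, cancelled.append)
print(cancelled)  # [461633, 461631]
```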
verified.
|
2025-04-01T04:35:43.452635
| 2024-02-20T14:06:44
|
2144462754
|
{
"authors": [
"khaledyoussef24",
"muhamadazmy"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11472",
"repo": "threefoldtech/tfgrid-sdk-ts",
"url": "https://github.com/threefoldtech/tfgrid-sdk-ts/issues/2214"
}
|
gharchive/issue
|
mycelium messaging subsystem mentioned in the docs is not working
Description
As mentioned in the docs, mycelium supports a message subsystem for communication between machines.
https://github.com/threefoldtech/mycelium/blob/master/docs/message.md
After installing mycelium on the machines and trying to make them contact each other using the mycelium address, it does not work.
Network used: devnet
Steps to reproduce
Make two machines and install mycelium on them.
Try to use the commands to send a message to the other machine using its address.
It did not work.
Even the message-receiving commands are not working.
I can see from the logs that the message send and receive are BOTH trying to connect to port 8989 on localhost. Which means there must be something listening on that port locally.
The message feature is a way to send an arbitrary message over the mycelium network, hence it requires mycelium to be running on both the sender and receiver side. When you run send, it actually connects to your local mycelium and pushes the message to the destination. Later the message is received by the remote mycelium and then pushed to the receive process.
I am sure you can do this locally, but it also means you need to have mycelium running before you can use send/receive.
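The flow described in that answer can be modeled with a toy sketch (Python, illustrative only; this is not the real mycelium API, and the address names are made up):

```python
from queue import Queue

overlay = {}  # address -> running mycelium node

class MyceliumNode:
    """Stands in for a running mycelium daemon on one machine."""
    def __init__(self, address):
        self.inbox = Queue()
        overlay[address] = self  # the node joins the overlay network

def send(local_node, dest_addr, msg):
    # The real CLI connects to the LOCAL node (e.g. localhost:8989),
    # which forwards the message over the overlay to the destination
    # node; with no node running at either end, there is nothing to
    # connect to and sending fails.
    overlay[dest_addr].inbox.put(msg)

def receive(local_node):
    # `receive` also talks to the local node, draining its inbox.
    return local_node.inbox.get()

a = MyceliumNode("sender-address")
b = MyceliumNode("receiver-address")
send(a, "receiver-address", "hello")
print(receive(b))  # hello
```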
|
2025-04-01T04:35:43.459328
| 2023-09-19T21:03:17
|
1903720290
|
{
"authors": [
"danielclough",
"ixxie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11473",
"repo": "threlte/threlte",
"url": "https://github.com/threlte/threlte/issues/634"
}
|
gharchive/issue
|
Steps for "Setting up dev environment" in CONTRIBUTING.md not correct.
There is no script dev in package.json as is required by the steps in CONTRIBUTING.md.
I tried pnpm preview -- --host but all routes are 404.
I've noticed the same thing but thought I was doing something wrong 😄
I'm going to look into it. For now, I can say that you can go into /apps/docs and run the dev server from there.
Okay, we discussed this and discovered that the original dev script was removed; we find it clearer to update the documentation to reflect this change.
Now merged with https://github.com/threlte/threlte/pull/687
|
2025-04-01T04:35:43.461145
| 2020-10-12T16:50:59
|
719520291
|
{
"authors": [
"critzo",
"jheretic",
"rudietuesdays"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11474",
"repo": "throneless-tech/murakami-viz",
"url": "https://github.com/throneless-tech/murakami-viz/issues/104"
}
|
gharchive/issue
|
custom dns servers field documentation and UI display
When adding a library network that has multiple custom DNS servers, the field help should state that the comma-separated list of DNS servers must not include spaces. A space after a comma causes the save step to fail. Additionally, the display of DNS servers is wrapped in curly braces in the UI, which should be removed when displaying the saved value.
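The two fixes can be sketched like this (Python for illustration; murakami-viz itself is JavaScript, and the function names here are hypothetical). Normalizing on save tolerates spaces after commas, and stripping the braces, which look like an array literal, cleans up the display:

```python
def normalize_dns_servers(raw: str) -> list:
    # Split on commas and strip whitespace, so "8.8.8.8, 1.1.1.1"
    # (with a space after the comma) saves the same as "8.8.8.8,1.1.1.1".
    return [part.strip() for part in raw.split(",") if part.strip()]

def display_dns_servers(stored: str) -> str:
    # Saved values can come back wrapped in curly braces (e.g.
    # "{8.8.8.8,1.1.1.1}"); strip the braces before showing the list.
    return ", ".join(normalize_dns_servers(stored.strip("{}")))

print(normalize_dns_servers("8.8.8.8, 1.1.1.1"))  # ['8.8.8.8', '1.1.1.1']
print(display_dns_servers("{8.8.8.8,1.1.1.1}"))   # 8.8.8.8, 1.1.1.1
```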
Looks like some of this code that had been done to address this was lost in a merge, adding back in now!
Fixed by #106
|
2025-04-01T04:35:43.494141
| 2022-10-25T04:06:51
|
1421811039
|
{
"authors": [
"FlameSky-S",
"IndustryZoe",
"MC-wither",
"cloveryz11"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11475",
"repo": "thuiar/MMSA",
"url": "https://github.com/thuiar/MMSA/issues/59"
}
|
gharchive/issue
|
BaiduYun Dead Link Issue
If the BaiduYun link is dead, please reply under this issue. We'll update it as soon as possible.
Hello, the BaiduYun link requires an extraction code. What is the extraction code?
The BaiduYun link is dead; please update it. Thanks!
|
2025-04-01T04:35:43.510675
| 2024-05-31T09:44:35
|
2327400142
|
{
"authors": [
"MillionsToOne",
"cketti",
"pwd-github"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11476",
"repo": "thunderbird/thunderbird-android",
"url": "https://github.com/thunderbird/thunderbird-android/issues/7892"
}
|
gharchive/issue
|
Download full emails by default
Checklist
[X] I have used the search function to see if someone else has already submitted the same bug report.
[X] I will describe the problem with as much detail as possible.
App version
6.901
Where did you get the app from?
Google Play
Android version
14
Device model
Pixel 8 Pro
Steps to reproduce
I received an email from Uber just now and I'm like, WTF? So I open the email and scroll to the bottom looking for the unsubscribe link, and it's not there; instead I see a "download full message" prompt. Huh? I'm confused as to what the reasoning is for not fetching full emails.
Expected behavior
Full emails should be fetched by default. If anything, there should be a preference for a minimalist, broken experience.
Actual behavior
I can't even unsubscribe from Uber emails because I can't see the bottom of the email.
Logs
No response
It's a feature. Not everybody has the bandwidth to download possibly tens of emails a day.
I don't think we'll change the default to always download complete messages. But we might change it to something larger than 32 KiB. We'll discuss this internally.
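The policy being discussed amounts to a size threshold; a minimal sketch, assuming the 32 KiB default mentioned above (the function name is hypothetical):

```python
DEFAULT_FETCH_LIMIT = 32 * 1024  # bytes; the current default discussed above

def fetch_plan(message_size, limit=DEFAULT_FETCH_LIMIT):
    # Messages up to the limit are downloaded fully; larger ones are
    # fetched partially, with the rest loaded via "download full message".
    return "full" if message_size <= limit else "partial"

print(fetch_plan(10 * 1024))   # full
print(fetch_plan(200 * 1024))  # partial
```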
|
2025-04-01T04:35:43.517021
| 2018-06-07T11:37:52
|
330233933
|
{
"authors": [
"Djfe",
"MadMattAu"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11477",
"repo": "thundernest/addons-server",
"url": "https://github.com/thundernest/addons-server/issues/2"
}
|
gharchive/issue
|
spellcheckers/dictionaries are almost invisible in Thunderbird (too hard to reach)
This is from mozilla/addons/issues/733:
Describe the problem and steps to reproduce it:
I opened Thunderbird > Add-ons > Explore Addons > addons.mozilla.org (over one of the links there) -> search "english" or "deutsch"
What happened?
The search returned no result or just normal add-ons
What did you expect to happen?
I clicked in the navigation on
more -> Dictionaries & Language Packs -> The language(s) I wanted.
Now suddenly german/english dictionaries appear in the search.
Before they didn't for some reason.
I don't have the issue in Firefox when I open the add-ons domain (dictionaries are always part of the search results when the search term fits),
but it happens every time I open the page in Thunderbird (until I visit more -> language packs).
They are also way too hidden, considering how useful they are when writing mails.
It would be nice to see them listed next to themes and apps on the local "explore add-ons" page.
Also there are no backwards/forwards buttons when browsing on addons.mozilla.org in Thunderbird.
Regards,
Djfe
Anything else we should know?
I know these are partly Thunderbird bugs, but they are all related to addons.mozilla.org, so you and the Thunderbird team probably have to join efforts anyway, so I would rather report it here :)
That is what I see above in the Thunderbird add-on manager, and I assume it is the basis of most of your comments. I click dictionaries and search, and all works well. We are trying to encourage all add-ons and dictionaries to be installed via the appropriate location in Thunderbird. Personally, I suggest you use the link in options.
Options > Composition > Spelling has a link to dictionaries and language packs, which I generally recommend as it is not a search: it opens https://addons.thunderbird.net/thunderbird/language-tools/ in a Thunderbird tab, from which install is two clicks away, one to select the dictionary and one to click the install button. Using the options method, each and every selection I make opens a new tab, negating any need for a back or forward button.
|
2025-04-01T04:35:43.538513
| 2021-06-22T09:33:30
|
927032521
|
{
"authors": [
"Mathew-Estafanous",
"parsn1psoup",
"thwidge"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11478",
"repo": "thwidge/pairing-bot",
"url": "https://github.com/thwidge/pairing-bot/pull/18"
}
|
gharchive/pull-request
|
Refactor with interfaces for DB and Zulip interaction
Interfaces will allow us to mock the DB and Zulip interaction, and have more unit tests.
This PR does the following:
Split up code into several files
Add interfaces `RecurserDB` for DB lookups of subscriber information, and `APIAuthDB` for DB lookups of API/authentication tokens. Move the existing implementation of DB queries into `FirestoreRecurserDB` and `FirestoreAPIAuthDB` types implementing these interfaces.
Add interfaces `userRequest` for handling user requests to the bot through the client (at the moment Zulip), and `userNotification` for sending notifications to users (e.g. for matches and offboarding). Move the existing implementation of client interaction into `zulipUserRequest` and `zulipUserNotification` types implementing these interfaces.
Add a `PairingLogic` type that holds the types for DB and client interaction.
Make the HTTP handler functions methods with `PairingLogic` as the receiver.
To do:
`context` was added, but not used in a meaningful way yet. It shouldn't do any harm, but maybe some `context.Background()` calls can be cleaned up even before merging this.
Error handling might not be super consistent yet (when to log? when to panic?), but that can be addressed separately, I guess.
This PR is pretty big and I'm not sure if it's reviewable or testable enough to be merged as is. Even if we don't end up merging it, it has been useful as a learning project and can maybe serve as a reference for future discussions.
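pairing-bot is written in Go, but the mock-through-interfaces idea this PR describes can be sketched in Python for illustration (the `RecurserDB` and `PairingLogic` names come from the PR; the method shapes and `greet` are hypothetical stand-ins):

```python
from typing import Protocol

class RecurserDB(Protocol):
    """Interface for subscriber lookups, so tests never touch Firestore."""
    def get_subscriber(self, user_id: str) -> dict: ...

class FakeRecurserDB:
    """In-memory fake that unit tests inject instead of Firestore."""
    def __init__(self, records):
        self._records = records
    def get_subscriber(self, user_id: str) -> dict:
        return self._records[user_id]

class PairingLogic:
    """Holds the DB (and, in the real PR, Zulip) interaction types."""
    def __init__(self, rdb: RecurserDB):
        self.rdb = rdb
    def greet(self, user_id: str) -> str:
        # Logic depends only on the interface, so any implementation
        # (real Firestore or the fake above) can be swapped in.
        return f"hello {self.rdb.get_subscriber(user_id)['name']}"

logic = PairingLogic(FakeRecurserDB({"42": {"name": "Ada"}}))
print(logic.greet("42"))  # hello Ada
```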
Thanks @rickypai, @nyghtowl, @Mathew-Estafanous and @ameliariely for advice and pairing sessions on this!
This is such a great contribution to pairing bot and I'm stoked to do whatever's necessary to get all these changes into production! @parsn1psoup and I are chatting out-of-band, and it sounds like we're gonna work on merging (and thus going to production) sometime this week.
Seconding @parsn1psoup -- huge thanks to all of you @rickypai @nyghtowl @Mathew-Estafanous @ameliariely 🙏
Sounds great! If you are ok with it, could I join you guys as you work on getting this update on to production?
@Mathew-Estafanous Yes absolutely! I'll start a group chat on zulip
Thank you for the comments @rickypai ! Should be resolved now.
|
2025-04-01T04:35:43.548300
| 2023-08-18T22:29:13
|
1857356847
|
{
"authors": [
"jagger"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11479",
"repo": "thycotic-ps/thycotic.secretserver",
"url": "https://github.com/thycotic-ps/thycotic.secretserver/issues/368"
}
|
gharchive/issue
|
Secret Field property "ExposeForDisplay" is an alias for "MustEncrypt" even though they have opposite connotations
https://github.com/thycotic-ps/thycotic.secretserver/blob/996427548cdefd482d1c0b14ec0824e7a4633e82/src/Thycotic.SecretServer.Types.ps1xml#L113
ExposeForDisplay is an alias for MustEncrypt so the values would be the same when pulled even though the meanings are opposite. A field that is encrypted is not exposed in the database.
PS > $xnine | Select-Object FieldSlugName, MustEncrypt, ExposeForDisplay
FieldSlugName MustEncrypt ExposeForDisplay
------------- ----------- ----------------
notes False False
PS > $xnine.ExposeForDisplay = $true
PS > $xnine | Select-Object FieldSlugName, MustEncrypt, ExposeForDisplay
FieldSlugName MustEncrypt ExposeForDisplay
------------- ----------- ----------------
notes True True
PS > $xnine.ExposeForDisplay = $false
PS > $xnine | Select-Object FieldSlugName, MustEncrypt, ExposeForDisplay
FieldSlugName MustEncrypt ExposeForDisplay
------------- ----------- ----------------
notes False False
PS > $xnine.MustEncrypt = $true
PS > $xnine | Select-Object FieldSlugName, MustEncrypt, ExposeForDisplay
FieldSlugName MustEncrypt ExposeForDisplay
------------- ----------- ----------------
notes True True
PS > $xnine.MustEncrypt = $false
PS > $xnine | Select-Object FieldSlugName, MustEncrypt, ExposeForDisplay
FieldSlugName MustEncrypt ExposeForDisplay
------------- ----------- ----------------
notes False False
Originally posted by @jagger in https://github.com/thycotic-ps/thycotic.secretserver/issues/367#issuecomment-1684493664
Probably the best way is to switch from ExposeForDisplay to NotExposedForDisplay, as this is more in line with Secret Server's presentation of this item, but without the confusion.
The best solution would be to change from ExposeForDisplay to NotExposedForDisplay, as it keeps the Secret Server wording but removes the backwards meaning.
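The hazard and the proposed rename can be illustrated with a small sketch (Python for illustration; the module itself is PowerShell, and this class is hypothetical):

```python
class SecretField:
    """Sketch of the proposed rename, not the module's real type."""
    def __init__(self, must_encrypt):
        self.must_encrypt = must_encrypt

    @property
    def not_exposed_for_display(self):
        # An encrypted field is not exposed in the database, so a
        # property that simply mirrors must_encrypt can keep an honest
        # name -- unlike the current ExposeForDisplay alias, which
        # returns the SAME value under an OPPOSITE-sounding name.
        return self.must_encrypt

field = SecretField(must_encrypt=True)
print(field.not_exposed_for_display)  # True
```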
|
2025-04-01T04:35:43.551777
| 2021-04-14T05:41:39
|
857540415
|
{
"authors": [
"hi-rustin",
"ran-huang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11480",
"repo": "ti-community-infra/configs",
"url": "https://github.com/ti-community-infra/configs/pull/254"
}
|
gharchive/pull-request
|
config: enable contribution for docs sig
Signed-off-by: Ran <EMAIL_ADDRESS>
Enable the contribution plugin, which will add contribution or first-time-contributor labels to PRs accordingly.
@yikeke PTAL.
I just found that ti-sre-bot hasn't added any first-time-contributor labels to PRs for a few weeks. We need to enable this tichi plugin.
/ok-to-test
/hold
/merge
/unhold
|
2025-04-01T04:35:43.560529
| 2020-10-29T16:56:48
|
732489699
|
{
"authors": [
"fprin",
"tiangolo",
"ycd"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11481",
"repo": "tiangolo/fastapi",
"url": "https://github.com/tiangolo/fastapi/issues/2264"
}
|
gharchive/issue
|
What is the equivalent of response_model_exclude_defaults in fastapi version 0.44.1?
First check
[X] I added a very descriptive title to this issue.
[X] I used the GitHub search to find a similar issue and didn't find it.
[X] I searched the FastAPI documentation, with the integrated search.
[X] I already searched in Google "How to X in FastAPI" and didn't find any information.
[X] I already read and followed all the tutorial in the docs and didn't find an answer.
[X] I already checked if it is not related to FastAPI but to Pydantic.
[X] I already checked if it is not related to FastAPI but to Swagger UI.
[X] I already checked if it is not related to FastAPI but to ReDoc.
[X] After submitting this, I commit to one of:
Read open issues with questions until I find 2 issues where I can help someone and add a comment to help there.
I already hit the "watch" button in this repository to receive notifications and I commit to help at least 2 people that ask questions in the future.
Implement a Pull Request for a confirmed bug.
These are the docs that I am reading: https://fastapi.tiangolo.com/tutorial/response-model/
They do not seem to apply to version 0.44.1 of FastAPI.
I just want to know how I could achieve this in version 0.44.1:
from typing import List, Optional
from fastapi import FastAPI
from pydantic import BaseModel
app = FastAPI()
class Item(BaseModel):
    name: str
    description: Optional[str] = None
    price: float
    tax: Optional[float] = None
    tags: List[str] = []

@router.get("/my_ip/{id}", response_model=Item, response_model_exclude_defaults=True)
async def my_ip(gid: str, fields: str = None):
    ....
I keep getting this error:
from routes import group, customer_service, categories, users
File "./routes/group.py", line 86, in <module>
@router.get("/my_ip/{id}", response_model=response_groups__gid, response_model_exclude_defaults=True)
TypeError: get() got an unexpected keyword argument 'response_model_exclude_defaults'
You can use response_model_skip_defaults.
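For context: `response_model_skip_defaults` is the 0.44-era name for what later releases renamed `response_model_exclude_unset` (with `response_model_exclude_defaults` added alongside), which is why the current docs don't match 0.44.1. The semantics can be modeled with plain dicts (an illustration of the behavior only, not FastAPI's implementation; FastAPI delegates the actual filtering to Pydantic):

```python
# Field defaults taken from the Item model in the question.
DEFAULTS = {"description": None, "tax": None, "tags": []}

def exclude_defaults(data):
    # Drop any field whose value still equals its declared default,
    # which is what the response_model_*_defaults flags achieve on the
    # serialized response.
    return {k: v for k, v in data.items()
            if k not in DEFAULTS or v != DEFAULTS[k]}

item = {"name": "Foo", "price": 35.0, "description": None, "tax": None, "tags": []}
print(exclude_defaults(item))  # {'name': 'Foo', 'price': 35.0}
```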
Thanks for the help @ycd! 🍰
Sorry for the long delay! 🙈 I wanted to personally address each issue/PR and they piled up through time, but now I'm checking each one in order.
|
2025-04-01T04:35:43.566847
| 2021-03-03T10:47:24
|
820988687
|
{
"authors": [
"MyHardWay",
"chbndrhnns",
"falkben",
"juntatalor",
"tiangolo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11482",
"repo": "tiangolo/fastapi",
"url": "https://github.com/tiangolo/fastapi/issues/2892"
}
|
gharchive/issue
|
Can app.state from middleware be cached or used on another request?
Hello!
I have auth middleware which sets client_id, username, and password on request.app.state:
class AuthMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        if request.headers.get("Authorization"):
            db_pool = DB.get_pool()
            query = "select user_id, enc_password from users where email = $1"
            security = HTTPBasic()
            security = await security(request)
            ##TODO cant find better decision.
            login, password = security.__dict__['username'], security.__dict__['password'].encode('utf-8')
            async with db_pool.acquire() as connection:
                results = await connection.fetch(query, login)
            if results:
                hash = results[0][1].encode('utf-8')
                is_password_same = bcrypt.checkpw(password, hash)
                if is_password_same:
                    request.app.state.client_id = results[0][0]
                    request.app.state.username = login
                    request.app.state.password = password
                else:
                    return JSONResponse(content={'result': 'error', 'descr': "00000000;BADLOGIN"})
            else:
                return JSONResponse(content={'result': 'error', 'descr': "00000000;BADLOGIN"})
        response = await call_next(request)
        return response
Later in the router I used app.state for further actions. Suddenly I had a very strange issue:
according to the nginx logs, a user with their own login and password ended up using the login and password of another user while FastAPI was handling the request.
Is it possible that FastAPI caches the state and uses it in another request? Or how could this happen?
Use request.state instead of request.app.state?
Also, it strikes me that this functionality should probably be a Depends function
@MyHardWay you can use Depends like this:
from pydantic import BaseModel
from fastapi import Request, Depends
from typing import Optional

class Client(BaseModel):
    client_id: int
    username: str
    password: str

async def get_client(request: Request) -> Optional[Client]:
    if request.headers.get("Authorization"):
        db_pool = DB.get_pool()
        query = "select user_id, enc_password from users where email = $1"
        security = HTTPBasic()
        security = await security(request)
        ##TODO cant find better decision.
        login, password = security.__dict__['username'], security.__dict__['password'].encode('utf-8')
        async with db_pool.acquire() as connection:
            results = await connection.fetch(query, login)
        if results:
            hash = results[0][1].encode('utf-8')
            is_password_same = bcrypt.checkpw(password, hash)
            if is_password_same:
                return Client(client_id=results[0][0], username=login, password=password)
            else:
                return None
        else:
            return None

@tracking_management_router.delete("/tracks", response_model=DeleteResponse)
async def remove_from_tracking(request: Request, args: DeleteReq, client: Optional[Client] = Depends(get_client)) -> List:
    if client is None:
        return JSONResponse(content={'result': 'error', 'descr': "00000000;BADLOGIN"})
    ... # whatever
More information on Depends here: https://fastapi.tiangolo.com/tutorial/dependencies/
@juntatalor Thank you for the example, but this way I need to use the same code in every router, while with middleware it can be done in one place.
Isn't it the case that you could use the dependency on the app and inject the variables into the request from there?
C.f. https://fastapi.tiangolo.com/tutorial/dependencies/global-dependencies/
Thanks for the help here everyone! 👏 🙇
You should probably not use a custom middleware, you will be losing all the typing information that checks that your code is correct, and all the integration with OpenAPI, the automatic docs, etc. Also, middlewares have some subtleties that become problematic. The sort-of official way of doing authentication in FastAPI would be indeed in dependencies.
If in the end, you solved the original problem, then you can close this issue ✔️
Sorry for the long delay! 🙈 I wanted to personally address each issue/PR and they piled up through time, but now I'm checking each one in order.
|
2025-04-01T04:35:43.573977
| 2021-04-15T18:51:35
|
859160639
|
{
"authors": [
"insomnes",
"postelrich",
"tiangolo",
"ycd"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11483",
"repo": "tiangolo/fastapi",
"url": "https://github.com/tiangolo/fastapi/issues/3087"
}
|
gharchive/issue
|
Add meta to responses
I have a response model with a meta field and a data field. The data field is what is returned from the wrapped function, and the meta field adds some metadata about the data. Right now the wrapped function has to return a dictionary with meta and data keys in order to work. I would like to be able to just return the raw data and have the response model constructed afterwards. I can't get this to work with root_validators and other attempts. This would let me use the wrapped functions elsewhere in the application without getting the data nested in a dictionary.
Example
from typing import List
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Widget(BaseModel):
    id: str

class WidgetResponse(BaseModel):
    meta: str
    data: List[Widget]

@app.get("/widgets", response_model=WidgetResponse)
def get_widgets():
    return {"meta": "meta", "data": [{"id": 1}, {"id": 2}]}
    # would like to just return
    return [{"id": 1}, {"id": 2}]

def some_widget_func():
    widgets = get_widgets()
    # process widgets
@postelrich hello!
Maybe I don't understand you clearly; sorry if so. First of all, you need to change your response or add the List[Widget] annotation to the data field.
If you have some field in model and you don't want it to appear in response, you can use response_model_include and response_model_exclude:
from typing import List
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Widget(BaseModel):
    id: str

class WidgetResponse(BaseModel):
    meta: str
    data: List[Widget]

@app.get("/widgets", response_model=WidgetResponse, response_model_exclude={"meta"})
def get_widgets():
    return {"meta": "meta", "data": [{"id": 1}, {"id": 2}]}
This will return:
{
    "data": [
        {
            "id": "1"
        },
        {
            "id": "2"
        }
    ]
}
To be clear: I want it so that when someone calls http://myapp.com/widget they get back {"meta": "meta", "data": [{"id": 1}, {"id": 2}]}, but when someone calls get_widgets directly they get back [{"id": 1}, {"id": 2}].
So is this a feature request, or a "how can I do X" type of question?
You can simply exclude it from being serialized/validated using Pydantic, but I'm actually against the idea of adding this kind of logic to FastAPI. It could be an external package, but it shouldn't be part of FastAPI by default.
exclude wouldn't work. I'm looking to add stuff to the result.
Thanks for the help @ycd! :bow: :coffee:
Yeah, that's currently not supported, and I don't see an easy way to support it as part of FastAPI in a way that is generalizable enough for other use cases.
Maybe there are tricks that could be done with validators in Pydantic models to generate the additional data from the returned value, but I suspect there would be many cases where the additional data requires using more than just the returned values, for example, getting more data from the database, from settings, etc.
And Pydantic would not be the right place to do that. So, the best approach would really be to keep returning the dict as you currently do. You could create a utility function that takes your return value and wraps it in the shape you need, adding the data you need. That way you wouldn't have to always return the whole data structure, but the return of a function call with your own data.
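A minimal sketch of that utility-function approach (the helper and function names here are hypothetical, not part of FastAPI): the endpoint returns the wrapped envelope, while other callers reuse the raw function.

```python
from typing import Any, List

def with_meta(data: Any, meta: str = "meta") -> dict:
    """Hypothetical helper: wrap a raw return value in the response envelope."""
    return {"meta": meta, "data": data}

def get_widgets_raw() -> List[dict]:
    # Reusable elsewhere in the application; returns just the data.
    return [{"id": 1}, {"id": 2}]

# The endpoint would return the wrapped shape...
endpoint_payload = with_meta(get_widgets_raw())
# ...while other callers use the raw list directly.
widgets = get_widgets_raw()
```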
Sorry for the long delay! 🙈 I wanted to personally address each issue/PR and they piled up through time, but now I'm checking each one in order.
|
2025-04-01T04:35:43.576999
| 2021-05-10T09:14:48
|
883796096
|
{
"authors": [
"laurence-lin",
"tiangolo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11484",
"repo": "tiangolo/fastapi",
"url": "https://github.com/tiangolo/fastapi/issues/3206"
}
|
gharchive/issue
|
How could I send an array of list as query parameters via POST method?
I have to send a list or array as parameters with POST method, like this:
@app.post('endpoint')
async def feedback(feed: List[int])
I expect to be able to send a list of integers as parameters and return the response.
However, in the docs UI, the endpoint couldn't receive request parameters and instead only showed return 0.
I'd looked for request body, and tried to do something like this:
class Item(BaseModel):
    feedback: int

async def feedback(feed: List[Item])
But that returns feedback: 0, and I still could not send a list of parameters.
How could I fulfill this task?
Thanks for the help here @hard-coders ! 👏 🙇
Thanks for reporting back and closing the issue @laurence-lin 👍
Sorry for the long delay! 🙈 I wanted to personally address each issue/PR and they piled up through time, but now I'm checking each one in order.
|
2025-04-01T04:35:43.583383
| 2021-06-11T02:07:12
|
918129299
|
{
"authors": [
"David-Lor",
"Garito",
"Kludex",
"nilansaha",
"raphaelauv"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11485",
"repo": "tiangolo/fastapi",
"url": "https://github.com/tiangolo/fastapi/issues/3363"
}
|
gharchive/issue
|
FastApi is not running every request on same endpoint in separate Thread
So, I think I understand all async def and def stuff. I have this piece of code and I am running it with uvicorn main:app
import time
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def root():
    print("Hitted Root")
    time.sleep(10)
    return {"message": "Hello World"}

@app.get("/hi")
def root_hi():
    print("Hitted Root Hi")
    time.sleep(10)
If I visit /hi and / at the same time, the print statements appear instantly and each request ends after roughly 10 seconds, which must mean they start at the same time in different threads.
However, if I open two requests to /hi, the first one ends and then the second one starts, i.e. I see the print statement from the first one and then, 10 seconds later, the print statement from the second one, which must mean they are not running on different threads.
I want to know why that is the case, and whether this is the default behaviour where requests to different endpoints run in different threads but requests to the same endpoint run one after the other. I also wonder if there is a way to make requests to the same endpoint run in different threads at the same time, without using multiple uvicorn workers.
time.sleep should be async (which is not) and you should await for it
You could use await asyncio.sleep(10) instead
await asyncio.sleep(10)
I am using normal def functions, so I shouldn't have to use await and asyncio. The FastAPI docs say that normal def functions are all run in a separate threadpool.
I can't reproduce your behavior. It works just fine here.
I think I understand all async def and def stuff
apparently no
I think I understand all async def and def stuff
apparently no
Care to elaborate? Or where I am wrong ?
@nilansaha How are you performing the requests? I found some time ago that (some) browsers (Firefox in my case) seem to avoid performing simultaneous requests to the same endpoint (or localhost?).
Maybe try this code to perform two concurrent requests using Python, I'd say your example would work as expected after trying it:
import threading
import requests

def req():
    requests.get("http://localhost:8000/hi")

threads = [threading.Thread(target=req, daemon=True) for _ in range(5)]
[th.start() for th in threads]
[th.join() for th in threads]
Output from fastapi server:
INFO: Started server process [33905]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://<IP_ADDRESS>:8000 (Press CTRL+C to quit)
Hitted Root Hi
Hitted Root Hi
Hitted Root Hi
Hitted Root Hi
Hitted Root Hi
INFO: <IP_ADDRESS>:52508 - "GET /hi HTTP/1.1" 200 OK
INFO: <IP_ADDRESS>:52506 - "GET /hi HTTP/1.1" 200 OK
INFO: <IP_ADDRESS>:52510 - "GET /hi HTTP/1.1" 200 OK
INFO: <IP_ADDRESS>:52504 - "GET /hi HTTP/1.1" 200 OK
INFO: <IP_ADDRESS>:52512 - "GET /hi HTTP/1.1" 200 OK
Indeed that is the case. Chrome was not allowing another request to run on the same endpoint until the first one finished, and that's why the confusion.
|
2025-04-01T04:35:43.593817
| 2022-03-17T15:29:59
|
1172472047
|
{
"authors": [
"charlax",
"jarviliam",
"ramilmsh",
"timofurrer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11486",
"repo": "tiangolo/fastapi",
"url": "https://github.com/tiangolo/fastapi/issues/4696"
}
|
gharchive/issue
|
ContextVar modified in async deps are not available in middleware (with structlog)
First Check
[X] I added a very descriptive title to this issue.
[X] I used the GitHub search to find a similar issue and didn't find it.
[X] I searched the FastAPI documentation, with the integrated search.
[X] I already searched in Google "How to X in FastAPI" and didn't find any information.
[X] I already read and followed all the tutorial in the docs and didn't find an answer.
[X] I already checked if it is not related to FastAPI but to Pydantic.
[X] I already checked if it is not related to FastAPI but to Swagger UI.
[X] I already checked if it is not related to FastAPI but to ReDoc.
Commit to Help
[X] I commit to help with one of those options 👆
Example Code
import logging
import uuid
from typing import Callable, Awaitable

import structlog
from fastapi import Depends, FastAPI, Request, Response

app = FastAPI()
Middleware = Callable[[Request], Awaitable[Response]]
logger = structlog.get_logger(__name__)

@app.middleware("http")
async def request_middleware(request: Request, call_next: Middleware) -> Response:
    structlog.contextvars.clear_contextvars()
    request_id = str(uuid.uuid4())
    structlog.contextvars.bind_contextvars(request_id=request_id)
    logger.info(
        "request received (in middlware)",
        method=request.method,
        path=request.url.path,
        client=request.client and request.client.host,
        ua=request.headers.get("User-Agent"),
    )
    response = await call_next(request)
    # THIS LINE WON'T INCLUDE in_dep="true"
    logger.info("request finished (in middleware)")
    response.headers["Request-ID"] = request_id
    return response

def setup_logging() -> None:
    """Configure logging."""
    logging.basicConfig(format="%(message)s", level="INFO")
    processors = [
        structlog.contextvars.merge_contextvars,
        structlog.stdlib.add_logger_name,
        structlog.stdlib.add_log_level,
        structlog.stdlib.PositionalArgumentsFormatter(),
        structlog.processors.TimeStamper(fmt="%Y-%m-%d %H:%M.%S"),
        structlog.processors.StackInfoRenderer(),
        structlog.processors.format_exc_info,
        structlog.dev.ConsoleRenderer(),
    ]
    structlog.configure(
        processors=processors,  # type: ignore
        wrapper_class=structlog.stdlib.BoundLogger,
        logger_factory=structlog.stdlib.LoggerFactory(),
        cache_logger_on_first_use=True,
    )

async def dep():
    logger.info("dep start")
    structlog.contextvars.bind_contextvars(in_dep="true")
    logger.info("dep end")
    return "foo"

@app.get("/")
async def home(value: str = Depends(dep)):
    logger.info("hello")
    return value

setup_logging()
Description
curl http://<IP_ADDRESS>:8001
The included code lets you reproduce the issue. This is what's logged:
2022-03-17 15:22.38 [info ] request received (in middlware) [app] client=<IP_ADDRESS> method=GET path=/ request_id=3cd6772f-4cb4-4cd4-9ace-44c2a3f19710 ua=curl/7.77.0
2022-03-17 15:22.38 [info ] dep start [app] request_id=3cd6772f-4cb4-4cd4-9ace-44c2a3f19710
2022-03-17 15:22.38 [info ] dep end [app] in_dep=true request_id=3cd6772f-4cb4-4cd4-9ace-44c2a3f19710
2022-03-17 15:22.38 [info ] hello [app] in_dep=true request_id=3cd6772f-4cb4-4cd4-9ace-44c2a3f19710
2022-03-17 15:22.38 [info ] request finished (in middleware) [app] request_id=3cd6772f-4cb4-4cd4-9ace-44c2a3f19710
In the last "request finished" line, I was expected in_dep=true to be kept, but it's not there.
It might be coming from structlog, but I doubt it since it's probably just using contextvars under the hood.
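The behavior is consistent with plain contextvars semantics: a value set inside a copied context (which is roughly what happens when the endpoint and its dependencies run in a separate task/context from the middleware) does not propagate back to the outer context. A standalone sketch, with no FastAPI or structlog involved:

```python
import contextvars

in_dep = contextvars.ContextVar("in_dep", default=None)

def bind_inside():
    in_dep.set("true")

# Run the mutation in a copied context, roughly what happens when the
# endpoint (and its dependencies) execute under a separate context.
ctx = contextvars.copy_context()
ctx.run(bind_inside)

print(in_dep.get())  # still None in the outer ("middleware") context
print(ctx[in_dep])   # "true" only inside the copied context
```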
Operating System
macOS
Operating System Details
No response
FastAPI Version
0.75.0
Python Version
3.9.10
Additional Context
Tickets that seem relevant to this issue:
https://github.com/tiangolo/fastapi/issues/953 (but that's only about deps, not middleware)
https://github.com/tiangolo/fastapi/issues/397
https://github.com/encode/starlette/issues/1081
I have read the following pages:
https://fastapi.tiangolo.com/async/
In the meantime I might be using https://github.com/snok/asgi-correlation-id
I have a similar case and for me it's not even related to Depends, but also happens if you simply use bind_contextvars() in the route handler.
Experiencing the same thing here on 0.78.0.
same here, cannot seem to come up with a fix that would allow me to use structlog with fastapi. Because of concurrency, cannot use threadlocal, but contextvars don't seem to work either
|
2025-04-01T04:35:43.600254
| 2019-12-29T15:55:51
|
543454375
|
{
"authors": [
"GeorgeFischhof",
"dmontagu",
"tiangolo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11487",
"repo": "tiangolo/fastapi",
"url": "https://github.com/tiangolo/fastapi/pull/824"
}
|
gharchive/pull-request
|
bugfix bug ID #821
[BUG] [DOC] None should be checked with is None in docs #821
Hi All
the library and the documentation are fantastic
I am a QA eng., and teach Python for my colleagues.
I saw in some examples that None is checked as it would be a boolean expression
for example:
https://fastapi.tiangolo.com/tutorial/body/
async def create_item(item_id: int, item: Item, q: str = None):
    result = {"item_id": item_id, **item.dict()}
    if q:
I saw that the documentation is written for beginners too.
I think checking None as if it were a boolean expression in the docs encourages beginners to use this pattern instead of checking with the is operator. And that is not good, because checking truthiness instead of is not None will then produce unexpected results.
These examples should be corrected.
BR,
George
Admittedly, this is a question of personal preference, but for performance reasons I would advise against the use of
q is not None and len(q) > 0
There is a relatively high variance, but it seems to run ~2-3x slower than just if q: for string inputs:
import time

N = 1_000_000
q = "abc"

# Warm up
for _ in range(N):
    pass

t0 = time.time()
for _ in range(N):
    if q:
        pass
t1 = time.time()
print(t1 - t0)
# 0.07553

t0 = time.time()
for _ in range(N):
    if q is not None and len(q) > 0:
        pass
t1 = time.time()
print(t1 - t0)
# 0.20973
If you want to preserve the semantics but make the null check explicit, a more performant (and in my opinion more pythonic) alternative would be:
if q is not None and q:
But this is still ~20% slower than just if q:.
In general, many linters in many languages will raise warnings for a check of the form if len(x) > 0 instead of if x.is_empty() (which is typically done via just if x in python), due to the overhead involved with grabbing the length and then comparing it to zero. Since checks like this can frequently happen inside relatively hot loops in python, I think it is worth being in the habit of thinking about performance (as long as your code remains pythonic, which I would argue this pattern is).
Admittedly for an endpoint like this, the overhead of the checks is not going to be significant. But since the whole point of the PR is to help expose beginners to better practices, I'm not sure that I would say this is a strict improvement on that front.
In this case though, it's not clear to me that the goal should really be to keep the semantics exactly the same -- if q="", I think it would be reasonable to perform the update.
So then it could make sense to change to if q is not None. I would totally agree that if q is not None is better than if q if that is a reasonable modification to the semantics.
I think if q is not None will be good; Pydantic will also check the string type. I will rewrite the PR accordingly.
Thanks for your contribution and interest @GeorgeFischhof :memo:
But I would prefer to keep it as is. As @dmontagu says, it's a bit more of a personal preference than best practices.
In fact, using q is not None would mean that q would be used even if it has an empty string "", which is probably not the intention. But thanks again for your time and interest! :rocket:
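The distinction is easy to see with plain Python: the two checks only diverge for the empty string, which is falsy but not None.

```python
# Compare "if q:" (truthiness) against "if q is not None:" for three inputs.
checks = []
for q in (None, "", "abc"):
    checks.append((repr(q), bool(q), q is not None))

# q       if q:   if q is not None:
# None    False   False
# ""      False   True   <- the two checks diverge here
# "abc"   True    True
```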
|
2025-04-01T04:35:43.602434
| 2023-12-22T01:36:37
|
2053248736
|
{
"authors": [
"MrMEEE",
"tiangolo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11488",
"repo": "tiangolo/poetry-version-plugin",
"url": "https://github.com/tiangolo/poetry-version-plugin/pull/49"
}
|
gharchive/pull-request
|
Extended the git tags functionality
Added functionality to format the version based on git tags with following commits and hashes
Primarily meant for autobuilding commits
Thanks for the great work
What does "would reformat 1 file" mean??
Thanks for the patience with my reply! :sweat_smile:
I just marked this project as deprecated, I'm currently not using it and I think these ideas can be achieved in better ways: https://github.com/tiangolo/poetry-version-plugin#-warning-deprecated-
Given that I'll close this one, but thanks for the interest! :coffee:
|
2025-04-01T04:35:43.608007
| 2022-07-13T22:07:19
|
1303993641
|
{
"authors": [
"Niccolum",
"Zaffer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11489",
"repo": "tiangolo/sqlmodel",
"url": "https://github.com/tiangolo/sqlmodel/issues/376"
}
|
gharchive/issue
|
.exec() with text() does not provide auto-suggestion methods
First Check
[X] I added a very descriptive title to this issue.
[X] I used the GitHub search to find a similar issue and didn't find it.
[X] I searched the SQLModel documentation, with the integrated search.
[X] I already searched in Google "How to X in SQLModel" and didn't find any information.
[X] I already read and followed all the tutorial in the docs and didn't find an answer.
[X] I already checked if it is not related to SQLModel but to Pydantic.
[X] I already checked if it is not related to SQLModel but to SQLAlchemy.
Commit to Help
[X] I commit to help with one of those options 👆
Example Code
from typing import Any, List

from fastapi import APIRouter, Depends
from sqlmodel import Session, select, text, SQLModel
from pydantic import BaseModel

from app.api.deps import get_session_legacy

router = APIRouter(prefix="/")

class Test(BaseModel):
    hello: str

@router.get("/data", response_model=List[Test])
async def get_data(
    db: Session = Depends(get_session_legacy)
):
    sql = text(
        """
        SELECT 'hi' AS "hello"
        """
    )
    result = db.exec(sql).all()
    for res in result:
        logger.info(f"response: {res}")
    return result
Description
When using db.exec(select(Hero)) you can use auto-suggestion in vscode to see the methods that belong to exec().
However when using db.exec(text("SELECT * FROM hero")) then there are no suggestions following the exec().
Wanted Solution
I want to see the suggestions: db.exec(text()).[one(), all(), scalars(), etc...]
Wanted Code
from typing import Any, List

from fastapi import APIRouter, Depends
from sqlmodel import Session, select, text, SQLModel
from pydantic import BaseModel

from app.api.deps import get_session_legacy

router = APIRouter(prefix="/")

class Test(BaseModel):
    hello: str

@router.get("/data", response_model=List[Test])
async def get_data(
    db: Session = Depends(get_session_legacy)
):
    sql = text(
        """
        SELECT 'hi' AS "hello"
        """
    )
    result = db.exec(sql).[all(), one(), scalars()... etc]
    for res in result:
        logger.info(f"response: {res}")
    return result
Alternatives
No response
Operating System
Linux
Operating System Details
wsl2 on windows, vscode
SQLModel Version
0.0.6
Python Version
3.10
Additional Context
No response
This issue is misplaced, I think, because of https://github.com/tiangolo/sqlmodel/blob/main/sqlmodel/__init__.py#L77: text is simply re-exported from SQLAlchemy there.
So you should report this to sqlalchemy or sqlalchemy_stubs.
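Since the typing for text() queries comes from SQLAlchemy, one workaround is to use SQLAlchemy's own Session.execute(), whose Result return type is what editors use for suggestions. A sketch assuming SQLAlchemy 1.4+ with an in-memory SQLite database:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")  # in-memory database, just for the sketch

with Session(engine) as session:
    # Session.execute() is typed by SQLAlchemy itself, so editors can
    # suggest .all(), .one(), .scalars(), ... on the returned Result.
    rows = session.execute(text("SELECT 'hi' AS hello")).all()
```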
|
2025-04-01T04:35:43.616711
| 2023-02-04T02:11:17
|
1570708919
|
{
"authors": [
"NikosAlexandris",
"austin-silk",
"jonaslb",
"lachaib",
"luispsantos",
"svlandeg",
"tiangolo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11490",
"repo": "tiangolo/typer",
"url": "https://github.com/tiangolo/typer/pull/548"
}
|
gharchive/pull-request
|
Fix #533: UnionType handling as with Union for Optional values
Fixes #533, which already had good details and a proposal for a solution, based on using get_args and get_origin from the typing module instead of the __args__ and __origin__ attributes. So I went and did that.
We also needed to actually handle the UnionType in addition to the Union from before.
Now there was a bit of a headache in terms of writing this in a backwards compatible way. What I've done is place the compatibility code into _compat_utils.py. But let me know if you'd prefer to keep it in main.py.
I also added a single test, just an adapted copy of another one with the x | None syntax. It is conditioned on Python >= 3.10, so it shouldn't fail on older Pythons.
It looks like mypy behaves differently in the different tests across versions. 3.6, 3.10 and 3.11 are ok, but 3.7, 3.8 and 3.9 are not. That seems confusing. I can maybe look at it another day again, otherwise if someone can see why it happens let me know.
Failures are down to mypy version I think, it works fine here with latest. I see some errors in other places though.
Also realized there is already another PR for this issue (just references other issue numbers, so I hadn't seen it): #522
Regardless, hope to see either PR merged. Ping @tiangolo
@tiangolo - can we merge this please?
Rebased on master and now all is green 👍
Still happy to make adaptations or answer questions.
@svlandeg I gather that #676 is the preferred solution to the UnionType situation? Should I just close this? Will we ever get anything on the topic merged - ie. what's blocking?
Hi @jonaslb, we've been going through the backlog of PRs to label them and connect related ones. We've already reviewed, adapted and merged a bunch for last week's releases 0.9.1 through 0.11.0. Open-source maintenance costs time and effort as I hope you can appreciate ;-) Thanks for your contributions (and patience) so far, we'll definitely get to this!
Open-source maintenance costs time and effort as I hope you can appreciate ;-)
Of course, and it's great to see that time is now being spent on typer also from the maintainer side ;-) I'm looking forward to engaging about details here if this PR turns out to be the preferred one out of the many submitted on the topic.
Ruff tries to automatically fix these Optional[str] in my parameters and I had to disable it, so I'm really looking forward to seeing this merged and released.
@tiangolo I understand you have other time commitments and it must be overwhelming, but I would like to have this PR merged when possible. I use Ruff with pyupgrade rules like @lachaib and I find lack of the modern union support a considerable hindrance when using Typer and Ruff together.
Great! Thank you for your contribution @jonaslb! :cake:
And in particular, thank you for your patience and your kindness. :hugs:
Thank you @svlandeg for the great job as always investigating, reviewing, and reporting everything. :bow:
This will be available in Typer 0.12.4, released in the next hours. :tada:
Does this "fix" not include support for X | Y | Z ?
This was always a fix for X | None only, i.e. #533. It looks like an error that it was not closed yet (likely because I didn't use the magic words in the description..). #461 looks like the issue for what you seek.
|
2025-04-01T04:35:43.628711
| 2016-07-20T20:57:58
|
2751550862
|
{
"authors": [
"tianocore-issues"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11491",
"repo": "tianocore/tianocore.github.io",
"url": "https://github.com/tianocore/tianocore.github.io/issues/23"
}
|
gharchive/issue
|
Website menus are not mobile friendly (Bugzilla Bug 46)
This issue was created automatically with bugzilla2github
Bugzilla Bug 46
Date: 2016-07-20T20:57:58+00:00
From: @mdkinney
To: Brian Richardson (Intel) <<brian.richardson>>
CC: yonghong.zhu
Last updated: 2017-07-11T17:29:44+00:00
Comment 1120
Date: 2016-12-06 19:37:07 +0000
From: Brian Richardson (Intel) <<brian.richardson>>
Working with contractor on new webdesign.
Comment 2422
Date: 2017-07-11 17:29:44 +0000
From: Brian Richardson (Intel) <<brian.richardson>>
Resolved with recent update of tianocore.org
Comment 669
Date: 2016-10-20 20:28:18 +0000
From: Yonghong Zhu <<yonghong.zhu>>
bug scrub: it is website.
Comment 71
Date: 2016-07-20 20:57:58 +0000
From: @mdkinney
Industry Specification: ---
Releases to Fix: EDK II Master
Target OS: ---
The menus sometimes render very small on mobile devices.
Comment 774
Date: 2016-10-27 20:18:00 +0000
From: Yonghong Zhu <<yonghong.zhu>>
Bug scrub: Assign to module owner
|
2025-04-01T04:35:43.639874
| 2020-04-19T07:13:58
|
602643832
|
{
"authors": [
"tiarebalbi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11492",
"repo": "tiarebalbi/lockdown-map",
"url": "https://github.com/tiarebalbi/lockdown-map/issues/2"
}
|
gharchive/issue
|
[DevOps] Add support to GitHub Actions - Test
Steps
For: [Pull Requests, Merges]
Run Install
Run Test
Completed
|
2025-04-01T04:35:43.750818
| 2023-05-06T15:12:14
|
1698668440
|
{
"authors": [
"sumkincpp",
"ties"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11494",
"repo": "ties/rpki-client-web",
"url": "https://github.com/ties/rpki-client-web/pull/99"
}
|
gharchive/pull-request
|
fix(parsing): Track correct hostname for .rrdp dirs
Closes #98
Thanks for this pull request!
I missed it as well. I thought the linter would run on pull requests, but apparently only on mine. Will take a look at the settings
|
2025-04-01T04:35:43.756139
| 2024-07-05T13:26:24
|
2392610693
|
{
"authors": [
"cb22",
"dyc3"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11495",
"repo": "tigerbeetle/tigerbeetle",
"url": "https://github.com/tigerbeetle/tigerbeetle/issues/2076"
}
|
gharchive/issue
|
Tigerbeetle docker image exits with superblock not found
I'm having a bit of trouble setting up tigerbeetle for my dev environment. All I'm doing is formatting the data file with format and then running the container.
This is my docker-compose.yml:
version: '3.8'
services:
  tigerbeetle:
    image: ghcr.io/coilhq/tigerbeetle:latest
    command: ["start", "--addresses=<IP_ADDRESS>:6001", "/data/0_0.tigerbeetle"]
    volumes:
      - ./data/tigerbeetle:/data
    ports:
      - "6001:6001"
    restart: unless-stopped
Full steps:
format data file
docker run -v $(pwd)/data/tigerbeetle:/data ghcr.io/tigerbeetle/tigerbeetle:latest format --cluster=0 --replica=0 --replica-count=1 /data/0_0.tigerbeetle
docker-compose up
See logs
tigerbeetle_1 | info(io): opening "0_0.tigerbeetle"...
tigerbeetle_1 | thread 1 panic: superblock not found
tigerbeetle_1 | /opt/beta-beetle/src/vsr/superblock.zig:0:0: 0x3606b7 in vsr.superblock.SuperBlockType(storage.Storage).read_sector_callback (tigerbeetle)
Hey @dyc3! Could you let me know where you found that example or the image address ghcr.io/coilhq/tigerbeetle?
That image is quite out of date (~2 years!) If you follow our Docker instructions here, it should work.
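For reference, the compose file from the original report with only the image swapped to the current address might look like this (a sketch based on the linked instructions; the addresses, ports, and paths are unchanged from the report and may need adjusting for your setup):

```yaml
services:
  tigerbeetle:
    image: ghcr.io/tigerbeetle/tigerbeetle:latest
    command: ["start", "--addresses=<IP_ADDRESS>:6001", "/data/0_0.tigerbeetle"]
    volumes:
      - ./data/tigerbeetle:/data
    ports:
      - "6001:6001"
    restart: unless-stopped
```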
To be honest, I don't remember. Thanks for pointing me in the right direction though.
|
2025-04-01T04:35:43.760494
| 2017-09-06T07:14:03
|
255503343
|
{
"authors": [
"DanielCoulbourne",
"the94air"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11496",
"repo": "tightenco/ziggy",
"url": "https://github.com/tightenco/ziggy/pull/71"
}
|
gharchive/pull-request
|
Recommend v0.3.0
For now lets recommend Ziggy v0.3.0 for #69
Update:
:sweat_smile: After thinking about it, this may not be a good idea, since this issue only affects me and @mgsmus
Feel free to reject this PR
Yeah I'm gonna close this one haha.
|
2025-04-01T04:35:43.771198
| 2022-10-21T09:25:34
|
1418037989
|
{
"authors": [
"pussuw"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11497",
"repo": "tiiuae/px4-firmware",
"url": "https://github.com/tiiuae/px4-firmware/pull/258"
}
|
gharchive/pull-request
|
PX4 NuttX kernel mode build part 0.1
Add a skeleton to enable building PX4 as a set of processes on NuttX with CONFIG_BUILD_KERNEL=y
The result is a bootable ROMFS which is placed into the kernel binary and mounted into /sbin by the board initialization logic. The kernel knows the mount point and starts the user space from /sbin/init, which is the nsh shell.
DO NOT MERGE; ONLY INTENDED TO HOLD THE PROGRESS IN A SAFE PLACE AND FOR COMMENTS.
Closing as obsolete
|
2025-04-01T04:35:43.772156
| 2023-07-21T05:19:00
|
1815170245
|
{
"authors": [
"gengliqi",
"purelind"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11498",
"repo": "tikv/client-c",
"url": "https://github.com/tikv/client-c/pull/151"
}
|
gharchive/pull-request
|
Support cop stream iterator
Support cop stream iterator.
Fall back to the cop RPC if cop stream is not implemented.
/run-all-tests
/run-all-tests
|
2025-04-01T04:35:43.794645
| 2021-10-11T12:36:43
|
1022653698
|
{
"authors": [
"tabokie",
"yuqi1129"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11499",
"repo": "tikv/raft-engine",
"url": "https://github.com/tikv/raft-engine/pull/115"
}
|
gharchive/pull-request
|
Add config to ignore all corrupted records when recover raft-log
see #114
Closing this in favor of #140
|
2025-04-01T04:35:43.800558
| 2022-04-12T05:59:09
|
1201141155
|
{
"authors": [
"glorv",
"youjiali1995"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11500",
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/pull/12346"
}
|
gharchive/pull-request
|
server: support dynamically change grpc-memory-pool-quota
Signed-off-by: glorv<EMAIL_ADDRESS>
What is changed and how it works?
Issue Number: Close #xxx
What's Changed:
Related changes
PR to update pingcap/docs/pingcap/docs-cn:
Need to cherry-pick to the release branch
Check List
Tests
Unit test
Integration test
Manual test (add detailed scripts or steps below)
No code
Side effects
Performance regression
Consumes more CPU
Consumes more MEM
Breaking backward compatibility
Release note
Please add a release note.
If you don't think this PR needs a release note then fill it with None.
If this PR will be picked to release branch, then a release note is probably required.
Depends on https://github.com/tikv/grpc-rs/pull/568.
/build
/test
/merge
|
2025-04-01T04:35:43.804489
| 2023-07-13T19:17:40
|
1803624343
|
{
"authors": [
"TonsnakeLin",
"overvenus",
"tabokie",
"v01dstar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11501",
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/pull/15119"
}
|
gharchive/pull-request
|
cdc: Exponentialize resolved ts scan backoff
What is changed and how it works?
Issue Number: Close #15112
What's Changed:
Make the scan task retry wait time exponential, so that fewer logs are produced.
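The change in words: a fixed retry interval logs once per attempt, while an exponential wait spreads later attempts out. A minimal sketch of the idea (parameter values here are illustrative, not TiKV's):

```python
def backoff_delays(base: float = 0.1, cap: float = 60.0, retries: int = 8):
    """Exponential backoff schedule: the wait doubles after each retry,
    bounded above by a cap so delays don't grow without limit."""
    delay = base
    delays = []
    for _ in range(retries):
        delays.append(delay)
        delay = min(cap, delay * 2)
    return delays
```

With base 1s and cap 8s, five retries wait 1, 2, 4, 8, 8 seconds, so later retries (and their log lines) become much less frequent than with a fixed short interval.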
Related changes
Check List
Tests
Unit test
Integration test
Side effects
N/A
Release note
None
/merge
Actually, it's not about CDC but resolved_ts, which is relied on by stale read and flashback. 😂
/cherry-pick release-7.1
|
2025-04-01T04:35:43.807852
| 2019-04-25T06:05:46
|
437023110
|
{
"authors": [
"lonng"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11502",
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/pull/4566"
}
|
gharchive/pull-request
|
import: use memory to cache split sst instead of disk file
Signed-off-by: Lonng<EMAIL_ADDRESS>
What have you changed? (mandatory)
Fix importer BUG
In the current version, the importer will return success even if some ranges fail.
Use memory to cache split SST files instead of disk files to reduce disk IO.
We need to split the engine file into small SST files to fit the region size. The current implementation requires several steps, which depend heavily on disk:
Read Engine file
Write split SST file to disk
Read small SST file to upload
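A rough sketch of the difference, assuming nothing about TiKV's actual code: the split chunks can be held in in-memory buffers instead of round-tripping through temporary files (the chunk size here is tiny, purely for illustration):

```python
import io

def split_in_memory(engine_bytes: bytes, region_size: int):
    """Yield one in-memory buffer per region-sized chunk, avoiding the
    write-to-disk / read-back steps of a file-based split."""
    for start in range(0, len(engine_bytes), region_size):
        yield io.BytesIO(engine_bytes[start:start + region_size])

# 10 bytes split into 4-byte "regions" -> three buffers
chunks = [buf.getvalue() for buf in split_in_memory(b"abcdefghij", 4)]
```

Each buffer can then be uploaded directly, replacing steps 2 and 3 above.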
@zhangjinpeng1987 @kennytm PTAL again.
|
2025-04-01T04:35:43.811315
| 2019-08-28T04:36:02
|
486155434
|
{
"authors": [
"breeswish",
"kennytm",
"overvenus",
"zhouqiang-cl"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11503",
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/pull/5350"
}
|
gharchive/pull-request
|
Revert "*: deny unknown fields when deserializing config TOML (#5190)"
What have you changed?
This reverts commit 034379605e8805c2ce2016903e21e3bd0521f64e (PR #5190).
What is the type of the changes?
Pick one of the following and delete the others:
Bugfix (a change which fixes an issue)
How is the PR tested?
n/a
Does this PR affect documentation (docs) or should it be mentioned in the release notes?
None
Does this PR affect tidb-ansible?
None
Refer to a related PR or issue link (optional)
#5286
Benchmark result if necessary (optional)
Any examples? (optional)
/run-all-tests
/merge
did this need to be cherry-picked to release-3.0?
@zhouqiang-cl No, reverted commits were not cherry-picked previously.
|
2025-04-01T04:35:43.814780
| 2020-02-11T08:31:41
|
563046111
|
{
"authors": [
"BusyJay",
"overvenus",
"siddontang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11504",
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/pull/6583"
}
|
gharchive/pull-request
|
raftstore: move raftstore to components
What have you changed?
Make raftstore a crate and move it to components.
What is the type of the changes?
Engineering (engineering change which doesn't change any feature or fix any issue)
How is the PR tested?
Unit test
Integration test
Does this PR affect documentation (docs) or should it be mentioned in the release notes?
No.
Does this PR affect tidb-ansible?
No.
Cool, what's the compile time difference between master and this branch?
PTAL, thanks!
Compile-time difference:
master: Finished dev [unoptimized + debuginfo] target(s) in 4m 26s
PR: Finished dev [unoptimized + debuginfo] target(s) in 3m 47s
Maybe we can track the release compile time every day.
@brson @zhouqiang-cl
|
2025-04-01T04:35:43.817809
| 2020-07-06T09:09:49
|
651376431
|
{
"authors": [
"Renkai"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11505",
"repo": "tikv/tikv",
"url": "https://github.com/tikv/tikv/pull/8197"
}
|
gharchive/pull-request
|
[WIP]Rpn expr
What problem does this PR solve?
Issue Number: part of https://github.com/tikv/tikv/issues/6281
Problem Summary:
This patch makes it possible for TiKV to receive RPN expressions when executing the selection coprocessor.
Check List
Tests
Side effects
Release note
new PR at https://github.com/tikv/tikv/pull/8238
|
2025-04-01T04:35:43.827403
| 2015-04-13T16:56:54
|
68143602
|
{
"authors": [
"matthewrobb",
"mmun",
"ryanhollister"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11506",
"repo": "tildeio/htmlbars",
"url": "https://github.com/tildeio/htmlbars/issues/326"
}
|
gharchive/issue
|
Using the less-than operator in a script tag is incorrectly parsed
Appears that htmlbars is incorrectly parsing javascript blocks when there is a '<' operator in the block:
http://emberjs.jsbin.com/jocafazaxa/1/edit?html,js,console
Expected behavior here would be that the sort function is available on the window object.
Note that changing the operator to a '>' works fine.
We don't have plans to support this.
Appreciate you taking the time to look. But to be clear for future reference, what is it you are not planning to support? Less than operators in script tags?
Sorry. I mean we only intended to support HTML that 1. Goes inside the tag 2. has no
What about this case? http://emberjs.jsbin.com/feyuse/1/edit?html,js,console
Yeah, that's a bug in https://github.com/tildeio/simple-html-tokenizer.
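The underlying parsing rule can be illustrated with a toy extractor (a hypothetical sketch, not simple-html-tokenizer's actual code): per the HTML spec, script is a raw-text element, so only the literal closing sequence ends its body; a tokenizer that treats every '<' as a tag start truncates the script.

```python
def naive_script_body(html: str) -> str:
    # Broken: stops at the first '<' after the opening tag, so a
    # less-than operator inside the JavaScript truncates the body.
    start = html.index("<script>") + len("<script>")
    return html[start:html.index("<", start)]

def rawtext_script_body(html: str) -> str:
    # Correct: <script> is a raw-text element, so only the literal
    # "</script>" sequence ends the body.
    start = html.index("<script>") + len("<script>")
    return html[start:html.index("</script>", start)]

doc = "<script>var ok = 1 < 2;</script>"
assert naive_script_body(doc) == "var ok = 1 "      # truncated at '<'
assert rawtext_script_body(doc) == "var ok = 1 < 2;"  # full body survives
```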
|
2025-04-01T04:35:43.829148
| 2017-04-14T00:17:06
|
221713714
|
{
"authors": [
"stefanpenner"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11507",
"repo": "tildeio/route-recognizer",
"url": "https://github.com/tildeio/route-recognizer/pull/135"
}
|
gharchive/pull-request
|
Avoid extra allocations
handler.{names,shouldDecodes} are often empty; we shouldn’t bother allocating them if they are.
These are very very often empty arrays, we shouldn't alloc/iterate them if we don't need to.
Tests are red; I still need to see if this is tenable.
Benchmarks seem very similar unfortunately. I did make sure the existing add benchmark was hitting this code-path. I hope not allocating so many arrays will be nice on the GC of a larger app...
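The idea, as a hypothetical Python sketch (route-recognizer itself is JavaScript): allocate the per-handler array only on first use, since most handlers never need it.

```python
class Handler:
    """Defers allocating `names` until something is actually added."""
    __slots__ = ("_names",)

    def __init__(self):
        self._names = None  # no list allocated for the common empty case

    def add_name(self, name):
        if self._names is None:
            self._names = []  # allocate lazily, only when needed
        self._names.append(name)

    @property
    def names(self):
        # Callers can iterate this; a shared empty tuple costs nothing.
        return self._names if self._names is not None else ()
```

Constructing a `Handler` allocates no list at all; only the first `add_name` call does, which is friendlier to the GC when most handlers stay empty.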
released as v0.3.3 🎉
|
2025-04-01T04:35:43.832710
| 2021-05-17T09:58:30
|
893161348
|
{
"authors": [
"matthewkayy",
"tiliadavid"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11508",
"repo": "tilialabs/tilia-phoenix-connect",
"url": "https://github.com/tilialabs/tilia-phoenix-connect/issues/10"
}
|
gharchive/issue
|
Flat products set to "One product per file" file handling are set to OnePerPage
When tilia Phoenix Connect creates a new product with "One product per file" file handling it gets defaulted to OnePerPage and is never set back to OnePerFile:
https://github.com/tilialabs/tilia-phoenix-connect/blob/52cadcdadc703afd2fbba4c46763ae91b57ad897/source.js#L1279
Created pull request to fix this issue: Fix 'One product per file' page handling #11
Merged in pull request
|
2025-04-01T04:35:43.854211
| 2023-02-19T09:01:35
|
1590613984
|
{
"authors": [
"LoranRendel",
"gchtr"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11509",
"repo": "timber/timber",
"url": "https://github.com/timber/timber/pull/2708"
}
|
gharchive/pull-request
|
Fix php warning
Fix php warning (Trying to access array offset on value of type int)
Hey @LoranRendel
I’m sorry that I have to reject this, but we won’t change the package name :).
If you want to use Twig v3, you should use Timber v2.
If you wanted that PHP error to be fixed, we would need more information about the change. It would be best if you filled out the pull request template with all the information requested. What was the input that caused the error in the first place? How did that object property that caused the error look like?
You can tell us if you need help with that.
|
2025-04-01T04:35:43.867087
| 2022-01-22T13:19:35
|
1111554780
|
{
"authors": [
"jhpratt",
"vincentdephily"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11510",
"repo": "time-rs/time",
"url": "https://github.com/time-rs/time/issues/436"
}
|
gharchive/issue
|
API mismatch between Parsed::with_offset_* and UtcOffset:{minutes_after_hour,seconds_after_minute}
Context: I'm parsing dates from the CLI, which are assumed to be local time unless a --utc flag has also been set. So I grab a UtcOffset at program start and initialize my Parsed with it.
Because of the i8 (UtcOffset) vs u8 (Parsed), the initialization looks like this:
let mut p = Parsed::new()
    .with_offset_hour(offset.whole_hours())
    .unwrap()
    .with_offset_minute(offset.minutes_past_hour().try_into().unwrap())
    .unwrap()
    .with_offset_second(offset.seconds_past_minute().try_into().unwrap())
    .unwrap();
This is inconvenient and error-prone (I nearly used offset.whole_minutes().try_into().unwrap()).
Also, it seems to me that the Parsed API is incapable of handling a sub-hour negative offset ? There's no such timezone in the real world, but it still feels dissatisfying.
I propose these new Parsed methods to fix this: with_offset(self, UtcOffset) -> Self, with_time(self, Time) -> Self, with_date(self, Date) -> Self, and the set_ equivalents. The date and time versions are here for consistency, but it's not hard to imagine use cases that would benefit from them. AFAICT these methods should be infallible, an added usability gain.
Also, it seems to me that the Parsed API is incapable of handling a sub-hour negative offset?
I just checked — this is unfortunately correct. I will introduce a field and relevant methods to set the sign explicitly. The format description would still require the offset hour to be present to set the sign, but it should realistically be present in any situation.
Adding methods to set the offset, date, and time seem reasonable due to the obvious benefit. They will be fallible, though, as future checks for consistency might be implemented (in this situation tzdb would be required for checks). This is similar to how set_month and set_weekday are fallible. Right now nothing, not even range validity, is actually checked on setting, but I want to keep that possibility open.
Case in point: the code above is buggy, correct one is .with_offset_minute(offset.minutes_past_hour().abs().try_into().unwrap()) or .with_offset_minute(offset.minutes_past_hour().abs() as u8) :sweat_smile:
I'll make a PR for the new set/with methods in the next few days - that sounds like a good first-time issue.
Coming back around to this, I'm going to fix the buggy behavior shortly. I've changed my mind on with_date and similar methods. Ultimately the Parsed struct is intended for parsing, and there's no component for a full date, time, or offset.
I've just pushed a commit that fixes this to the extent possible. The existing methods (getters, setters, and builders) have been deprecated and replaced with signed equivalents. The now-deprecated methods will attempt to convert to the signed value and fail if not possible.
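The sign pitfall generalizes beyond Rust. A hypothetical Python analogue of decomposing a signed UTC offset into components (the real API here is the Rust time crate's UtcOffset and Parsed):

```python
def decompose(offset_seconds: int):
    """Split a signed UTC offset into (hours, minutes, seconds), with
    every component carrying the sign, like UtcOffset's accessors."""
    sign = -1 if offset_seconds < 0 else 1
    hours, rem = divmod(abs(offset_seconds), 3600)
    minutes, seconds = divmod(rem, 60)
    return sign * hours, sign * minutes, sign * seconds

# -04:30 -> hour component -4, minute component -30; feeding -30 straight
# into an unsigned (u8) setter fails, which is why the corrected call
# takes abs() first.
assert decompose(-(4 * 3600 + 30 * 60)) == (-4, -30, 0)
# A sub-hour negative offset: the hour component is 0, so a scheme where
# only the hour carries the sign cannot represent it.
assert decompose(-(30 * 60)) == (0, -30, 0)
```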
|
2025-04-01T04:35:43.873370
| 2024-12-21T05:56:07
|
2753785109
|
{
"authors": [
"chenziliang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11511",
"repo": "timeplus-io/proton",
"url": "https://github.com/timeplus-io/proton/issues/886"
}
|
gharchive/issue
|
from_unix_timestamp64_milli(unix_timestamp)
Use case
We have the second-granularity version from_unix_timestamp64(unix_seconds), but we don't have a milliseconds version.
We can have all counterparts of
to_unix_timestamp
to_unix_timestamp64_milli
to_unix_timestamp64_micro
to_unix_timestamp64_nano
already there
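For reference, the semantics of the milli variant are straightforward; here is a hypothetical Python analogue (the real functions are Proton SQL, and the names below merely mirror them):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def from_unix_timestamp64_milli(ms: int) -> datetime:
    """Interpret an integer as milliseconds since the Unix epoch (UTC)."""
    return EPOCH + timedelta(milliseconds=ms)

def to_unix_timestamp64_milli(dt: datetime) -> int:
    """Inverse: UTC datetime back to milliseconds since the epoch."""
    return (dt - EPOCH) // timedelta(milliseconds=1)

# exact integer round-trip, no floating point involved
ms = 1_700_000_000_123
assert to_unix_timestamp64_milli(from_unix_timestamp64_milli(ms)) == ms
```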
|
2025-04-01T04:35:43.890442
| 2021-02-01T13:07:40
|
798334492
|
{
"authors": [
"VineethReddy02",
"callrua"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11512",
"repo": "timescale/promscale",
"url": "https://github.com/timescale/promscale/issues/467"
}
|
gharchive/issue
|
"failed to connect to host=localhost user=postgres database=timescale: dial error (dial tcp <IP_ADDRESS>:5432: connect: connection refused)"
docker-compose.yml.txt
[r.callaghan@host ~ monitoring]$ sudo podman logs 29e9b4ccadb3
unknown seccomp syscall `clock_adjtime64` ignored
unknown seccomp syscall `clock_getres_time64` ignored
unknown seccomp syscall `clock_gettime64` ignored
unknown seccomp syscall `clock_nanosleep_time64` ignored
unknown seccomp syscall `faccessat2` ignored
unknown seccomp syscall `openat2` ignored
unknown seccomp syscall `pidfd_getfd` ignored
unknown seccomp syscall `ppoll_time64` ignored
unknown seccomp syscall `pselect6_time64` ignored
unknown seccomp syscall `timer_gettime64` ignored
unknown seccomp syscall `timerfd_gettime64` ignored
unknown seccomp syscall `timerfd_settime64` ignored
unknown seccomp syscall `utimensat_time64` ignored
level=info ts=2021-02-01T12:57:16.000Z caller=runner.go:149 msg="Version:0.1.4; Commit Hash: "
level=info ts=2021-02-01T12:57:16.000Z caller=runner.go:150 config="&{ListenAddr::9201 PgmodelCfg:{Host:localhost Port:5432 User:postgres password:**** Database:timescale SslMode:allow DbConnectRetries:10 AsyncAcks:false ReportInterval:0 LabelsCacheSize:10000 MetricsCacheSize:10000 SeriesCacheSize:0 WriteConnectionsPerProc:4 MaxConnections:-1 UsesHA:false DbUri:} LogCfg:{Level:debug Format:logfmt} APICfg:{AllowedOrigin:^(?:.*)$ ReadOnly:false AdminAPIEnabled:false TelemetryPath:/metrics-text Auth:0xc00023a410} ConfigFile:config.yml TLSCertFile: TLSKeyFile: HaGroupLockID:0 PrometheusTimeout:-1ns ElectionInterval:5s Migrate:true StopAfterMigrate:false UseVersionLease:true InstallTimescaleDB:true}"
level=error ts=2021-02-01T12:57:16.005Z caller=runner.go:160 msg="aborting startup due to error" err="failed to connect to `host=localhost user=postgres database=timescale`: dial error (dial tcp <IP_ADDRESS>:5432: connect: connection refused)"
Seeing the above when attempting to spin up timescaledev/promscale-extension:latest-ts1-pg12 and timescale/promscale:latest.
Tried to get around the race condition with 'restart: on-failure' and have -db-ssl-mode=allow set. Connecting to the database looks OK, so I don't think it's a network problem.
[r.callaghan]$ sudo podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
40c40011b8f3 docker.io/timescaledev/promscale-extension:latest-ts1-pg12 -c hba_file=/var/... 3 minutes ago Up 3 minutes ago <IP_ADDRESS>:5432->5432/tcp timescale_db
8fa9a63286da docker.io/prom/node-exporter:latest 3 minutes ago Up 3 minutes ago <IP_ADDRESS>:9100->9100/tcp node_exporter
fd96417c0c8f docker.io/prom/prometheus:latest --web.enable-life... 3 minutes ago Up 3 minutes ago <IP_ADDRESS>:9090->9090/tcp prometheus
57a5f04ec414 docker.io/grafana/grafana:latest 3 minutes ago Up 3 minutes ago <IP_ADDRESS>:3000->3000/tcp grafana
29e9b4ccadb3 docker.io/timescale/promscale:latest -db-ssl-mode=allo... 3 minutes ago Exited (1) 3 minutes ago <IP_ADDRESS>:9201->9201/tcp timescale_promscale
[r.callaghan]$ psql -h localhost -p 5432 -U postgres -d timescale
Password for user postgres:
psql (10.15, server 12.4)
WARNING: psql major version 10, server major version 12.
Some psql features might not work.
Type "help" for help.
timescale=#
Am I doing something dumb? :)
Attached my docker-compose.yml file - the pg_hba.conf being mounted is just a default file with the following appended.
host all all all md5
Looks like I am just hitting the race condition and there's an issue with the restart on-failure taking effect, #468
Hi @callrua
There are a few issues in the docker-compose file you shared:
1. In the Promscale env variables, PROMSCALE_DB_HOST is set to localhost. Inside a container, localhost points to that container's own network, where the DB (TimescaleDB) is not available. This needs to be changed to timescale_db, the service name of TimescaleDB as defined in your compose file.
2. By default, Promscale tries to use timescaledb as the database name if one isn't explicitly provided, but the TimescaleDB image you are using doesn't have a database with that name by default, so either create one or set PROMSCALE_DB_NAME to postgres in the Promscale env variables.
3. I see ssl-mode configured twice in Promscale, i.e. PROMSCALE_DB_SSL_MODE: allow and command: -db-ssl-mode=allow; you can drop the command argument here. (But this isn't the reason for the Promscale dial error.)
For more details on setting up Promscale, TimescaleDB & Prometheus using docker-compose you can use this docker-compose file
I hope this answers your question & helps you in running Promscale.
Hey @VineethReddy02,
Thanks for taking a look at the compose file!
I noticed this too and have corrected it to match the container name.
Good to know.
Will remove the command for this.
After correcting 1) I am still seeing the issue with the container failing to start - I do suspect it's the race condition that has been mentioned in this repo before. I raised #468 for that - essentially, the restart: on-failure doesn't seem to be being applied.
I'm happy to close this one out and look at the race condition with compose under #468.
Cheers,
Ruairi
I don't think it's the race condition. I personally use docker-compose very often and have never faced this issue in the past. Did you update the database name to postgres as mentioned in point 2 of my previous comment?
Yep, I did.
[r.callaghan@monitoring]$ pod-up
Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Creating network "paas-monitoring_default" with the default driver
Creating timescale_db ... done
Creating prometheus ... done
Creating node_exporter ... done
Creating grafana ... done
Creating timescale_promscale ... done
[r.callaghan@monitoring]$ pod-ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3628ca599d36 docker.io/timescaledev/promscale-extension:latest-ts1-pg12 -csynchronous_com... About a minute ago Up About a minute ago <IP_ADDRESS>:5432->5432/tcp timescale_db
0cb8a90535ee docker.io/prom/node-exporter:latest About a minute ago Up About a minute ago <IP_ADDRESS>:9100->9100/tcp node_exporter
5f1d3c971717 docker.io/grafana/grafana:latest About a minute ago Up About a minute ago <IP_ADDRESS>:3000->3000/tcp grafana
377a3437135e docker.io/prom/prometheus:latest --web.enable-life... About a minute ago Up About a minute ago <IP_ADDRESS>:9090->9090/tcp prometheus
69989419f52c docker.io/timescale/promscale:latest About a minute ago Exited (1) About a minute ago <IP_ADDRESS>:9201->9201/tcp timescale_promscale
timescale_db:
  container_name: timescale_db
  image: docker.io/timescaledev/promscale-extension:latest-ts1-pg12
  #image: localhost/vela/timescale_db:latest
  command: -csynchronous_commit=off #-c hba_file=/var/lib/postgresql/docker_configs/pg_hba.conf
  ports:
    - "5432:5432/tcp"
  volumes:
    - ./config/timescale_db/:/var/lib/postgresql/docker_configs/
    - timescale_data:/var/lib/postgresql/data
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    POSTGRES_DB: postgres
timescale_promscale:
  container_name: timescale_promscale
  image: docker.io/timescale/promscale:latest
  ports:
    - "9201:9201/tcp"
  build:
    context: .
  restart: on-failure
  #command: -db-ssl-mode=allow #-db-password=postgres -db-name=postgres -db-host=timescale_db
  depends_on:
    - timescale_db
    - prometheus
  environment:
    PROMSCALE_LOG_LEVEL: debug
    PROMSCALE_DB_CONNECT_RETRIES: 10
    PROMSCALE_DB_HOST: timescale_db
    PROMSCALE_DB_NAME: postgres
    PROMSCALE_DB_PASSWORD: postgres
    PROMSCALE_WEB_TELEMETRY_PATH: /metrics-text
    PROMSCALE_DB_SSL_MODE: allow
Yep, I updated the db_name to postgres:
timescale_db:
  container_name: timescale_db
  image: docker.io/timescaledev/promscale-extension:latest-ts1-pg12
  #image: localhost/vela/timescale_db:latest
  command: -csynchronous_commit=off #-c hba_file=/var/lib/postgresql/docker_configs/pg_hba.conf
  ports:
    - "5432:5432/tcp"
  volumes:
    - ./config/timescale_db/:/var/lib/postgresql/docker_configs/
    - timescale_data:/var/lib/postgresql/data
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    POSTGRES_DB: timescale
timescale_promscale:
  container_name: timescale_promscale
  image: docker.io/timescale/promscale:latest
  ports:
    - "9201:9201/tcp"
  build:
    context: .
  restart: on-failure
  #command: -db-ssl-mode=allow #-db-password=postgres -db-name=postgres -db-host=timescale_db
  depends_on:
    - timescale_db
    - prometheus
  environment:
    PROMSCALE_LOG_LEVEL: debug
    PROMSCALE_DB_CONNECT_RETRIES: 10
    PROMSCALE_DB_HOST: timescale_db
    PROMSCALE_DB_NAME: postgres
    PROMSCALE_DB_PASSWORD: postgres
    PROMSCALE_WEB_TELEMETRY_PATH: /metrics-text
    PROMSCALE_DB_SSL_MODE: allow
Could the POSTGRES_DB variable on the timescale_db container being set to timescale cause a conflict there?
[r.callaghan@monitoring]$ pod-up
Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Creating network "paas-monitoring_default" with the default driver
Creating timescale_db ... done
Creating grafana ... done
Creating prometheus ... done
Creating node_exporter ... done
Creating timescale_promscale ... done
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8e2a747967eb docker.io/grafana/grafana:latest 9 seconds ago Up 9 seconds ago <IP_ADDRESS>:3000->3000/tcp grafana
0fe3d6e3ad3d docker.io/prom/prometheus:latest --web.enable-life... 9 seconds ago Up 9 seconds ago <IP_ADDRESS>:9090->9090/tcp prometheus
9934082bf5ca docker.io/prom/node-exporter:latest 9 seconds ago Up 9 seconds ago <IP_ADDRESS>:9100->9100/tcp node_exporter
e7156f86c7c8 docker.io/timescaledev/promscale-extension:latest-ts1-pg12 -csynchronous_com... 9 seconds ago Up 9 seconds ago <IP_ADDRESS>:5432->5432/tcp timescale_db
920053e5cf93 docker.io/timescale/promscale:latest 8 seconds ago Exited (1) 8 seconds ago <IP_ADDRESS>:9201->9201/tcp timescale_promscale
unknown seccomp syscall `clock_adjtime64` ignored
unknown seccomp syscall `clock_getres_time64` ignored
unknown seccomp syscall `clock_gettime64` ignored
unknown seccomp syscall `clock_nanosleep_time64` ignored
unknown seccomp syscall `faccessat2` ignored
unknown seccomp syscall `openat2` ignored
unknown seccomp syscall `pidfd_getfd` ignored
unknown seccomp syscall `ppoll_time64` ignored
unknown seccomp syscall `pselect6_time64` ignored
unknown seccomp syscall `timer_gettime64` ignored
unknown seccomp syscall `timerfd_gettime64` ignored
unknown seccomp syscall `timerfd_settime64` ignored
unknown seccomp syscall `utimensat_time64` ignored
level=info ts=2021-02-02T11:07:59.808Z caller=runner.go:149 msg="Version:0.1.4; Commit Hash: "
level=info ts=2021-02-02T11:07:59.808Z caller=runner.go:150 config="&{ListenAddr::9201 PgmodelCfg:{Host:timescale_db Port:5432 User:postgres password:**** Database:postgres SslMode:allow DbConnectRetries:10 AsyncAcks:false ReportInterval:0 LabelsCacheSize:10000 MetricsCacheSize:10000 SeriesCacheSize:0 WriteConnectionsPerProc:4 MaxConnections:-1 UsesHA:false DbUri:} LogCfg:{Level:debug Format:logfmt} APICfg:{AllowedOrigin:^(?:.*)$ ReadOnly:false AdminAPIEnabled:false TelemetryPath:/metrics-text Auth:0xc0000d31d0} ConfigFile:config.yml TLSCertFile: TLSKeyFile: HaGroupLockID:0 PrometheusTimeout:-1ns ElectionInterval:5s Migrate:true StopAfterMigrate:false UseVersionLease:true InstallTimescaleDB:true}"
level=error ts=2021-02-02T11:08:00.366Z caller=runner.go:160 msg="aborting startup due to error" err="failed to connect to `host=timescale_db user=postgres database=postgres`: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8e2a747967eb docker.io/grafana/grafana:latest 21 seconds ago Up 21 seconds ago <IP_ADDRESS>:3000->3000/tcp grafana
0fe3d6e3ad3d docker.io/prom/prometheus:latest --web.enable-life... 21 seconds ago Up 21 seconds ago <IP_ADDRESS>:9090->9090/tcp prometheus
9934082bf5ca docker.io/prom/node-exporter:latest 21 seconds ago Up 21 seconds ago <IP_ADDRESS>:9100->9100/tcp node_exporter
e7156f86c7c8 docker.io/timescaledev/promscale-extension:latest-ts1-pg12 -csynchronous_com... 21 seconds ago Up 21 seconds ago <IP_ADDRESS>:5432->5432/tcp timescale_db
920053e5cf93 docker.io/timescale/promscale:latest 20 seconds ago Exited (1) 20 seconds ago <IP_ADDRESS>:9201->9201/tcp timescale_promscale
[r.callaghan@monitoring]$ pod-up
Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
timescale_db is up-to-date
grafana is up-to-date
prometheus is up-to-date
node_exporter is up-to-date
Starting timescale_promscale ... done
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8e2a747967eb docker.io/grafana/grafana:latest 32 seconds ago Up 31 seconds ago <IP_ADDRESS>:3000->3000/tcp grafana
0fe3d6e3ad3d docker.io/prom/prometheus:latest --web.enable-life... 32 seconds ago Up 31 seconds ago <IP_ADDRESS>:9090->9090/tcp prometheus
9934082bf5ca docker.io/prom/node-exporter:latest 32 seconds ago Up 31 seconds ago <IP_ADDRESS>:9100->9100/tcp node_exporter
e7156f86c7c8 docker.io/timescaledev/promscale-extension:latest-ts1-pg12 -csynchronous_com... 32 seconds ago Up 31 seconds ago <IP_ADDRESS>:5432->5432/tcp timescale_db
920053e5cf93 docker.io/timescale/promscale:latest 31 seconds ago Up 2 seconds ago <IP_ADDRESS>:9201->9201/tcp timescale_promscale
[r.callaghan@monitoring]$ pod-log 920053e5cf93
unknown seccomp syscall `clock_adjtime64` ignored
unknown seccomp syscall `clock_getres_time64` ignored
unknown seccomp syscall `clock_gettime64` ignored
unknown seccomp syscall `clock_nanosleep_time64` ignored
unknown seccomp syscall `faccessat2` ignored
unknown seccomp syscall `openat2` ignored
unknown seccomp syscall `pidfd_getfd` ignored
unknown seccomp syscall `ppoll_time64` ignored
unknown seccomp syscall `pselect6_time64` ignored
unknown seccomp syscall `timer_gettime64` ignored
unknown seccomp syscall `timerfd_gettime64` ignored
unknown seccomp syscall `timerfd_settime64` ignored
unknown seccomp syscall `utimensat_time64` ignored
level=info ts=2021-02-02T11:07:59.808Z caller=runner.go:149 msg="Version:0.1.4; Commit Hash: "
level=info ts=2021-02-02T11:07:59.808Z caller=runner.go:150 config="&{ListenAddr::9201 PgmodelCfg:{Host:timescale_db Port:5432 User:postgres password:**** Database:postgres SslMode:allow DbConnectRetries:10 AsyncAcks:false ReportInterval:0 LabelsCacheSize:10000 MetricsCacheSize:10000 SeriesCacheSize:0 WriteConnectionsPerProc:4 MaxConnections:-1 UsesHA:false DbUri:} LogCfg:{Level:debug Format:logfmt} APICfg:{AllowedOrigin:^(?:.*)$ ReadOnly:false AdminAPIEnabled:false TelemetryPath:/metrics-text Auth:0xc0000d31d0} ConfigFile:config.yml TLSCertFile: TLSKeyFile: HaGroupLockID:0 PrometheusTimeout:-1ns ElectionInterval:5s Migrate:true StopAfterMigrate:false UseVersionLease:true InstallTimescaleDB:true}"
level=error ts=2021-02-02T11:08:00.366Z caller=runner.go:160 msg="aborting startup due to error" err="failed to connect to `host=timescale_db user=postgres database=postgres`: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"
unknown seccomp syscall `clock_adjtime64` ignored
unknown seccomp syscall `clock_getres_time64` ignored
unknown seccomp syscall `clock_gettime64` ignored
unknown seccomp syscall `clock_nanosleep_time64` ignored
unknown seccomp syscall `faccessat2` ignored
unknown seccomp syscall `openat2` ignored
unknown seccomp syscall `pidfd_getfd` ignored
unknown seccomp syscall `ppoll_time64` ignored
unknown seccomp syscall `pselect6_time64` ignored
unknown seccomp syscall `timer_gettime64` ignored
unknown seccomp syscall `timerfd_gettime64` ignored
unknown seccomp syscall `timerfd_settime64` ignored
unknown seccomp syscall `utimensat_time64` ignored
level=info ts=2021-02-02T11:08:28.139Z caller=runner.go:149 msg="Version:0.1.4; Commit Hash: "
level=info ts=2021-02-02T11:08:28.139Z caller=runner.go:150 config="&{ListenAddr::9201 PgmodelCfg:{Host:timescale_db Port:5432 User:postgres password:**** Database:postgres SslMode:allow DbConnectRetries:10 AsyncAcks:false ReportInterval:0 LabelsCacheSize:10000 MetricsCacheSize:10000 SeriesCacheSize:0 WriteConnectionsPerProc:4 MaxConnections:-1 UsesHA:false DbUri:} LogCfg:{Level:debug Format:logfmt} APICfg:{AllowedOrigin:^(?:.*)$ ReadOnly:false AdminAPIEnabled:false TelemetryPath:/metrics-text Auth:0xc000143540} ConfigFile:config.yml TLSCertFile: TLSKeyFile: HaGroupLockID:0 PrometheusTimeout:-1ns ElectionInterval:5s Migrate:true StopAfterMigrate:false UseVersionLease:true InstallTimescaleDB:true}"
level=warn ts=2021-02-02T11:08:28.173Z caller=runner.go:293 msg="No adapter leader election. Group lock id is not set. Possible duplicate write load if running multiple connectors"
level=warn ts=2021-02-02T11:08:28.180Z caller=config.go:144 msg="had to reduce the number of copiers due to connection limits: wanted 112, reduced to 25"
level=info ts=2021-02-02T11:08:28.181Z caller=client.go:91 msg="host=timescale_db port=5432 user=postgres dbname=postgres password='****' sslmode=allow connect_timeout=10" numCopiers=25 pool_max_conns=50 pool_min_conns=28
level=debug ts=2021-02-02T11:08:28.200Z caller=query_engine.go:17 msg="Lookback delta is zero, setting to default value" value=5m0s
level=info ts=2021-02-02T11:08:28.202Z caller=runner.go:172 msg="Starting up..."
level=info ts=2021-02-02T11:08:28.202Z caller=runner.go:173 msg=Listening addr=:9201
level=info ts=2021-02-02T11:08:29.392Z caller=write.go:90 msg="Samples write throughput" samples/sec=500
level=info ts=2021-02-02T11:08:30.713Z caller=write.go:90 msg="Samples write throughput" samples/sec=1000
level=info ts=2021-02-02T11:08:35.880Z caller=write.go:90 msg="Samples write throughput" samples/sec=6000
|
2025-04-01T04:35:43.900565
| 2023-12-13T11:43:12
|
2039536503
|
{
"authors": [
"lquenti",
"nbourdi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11513",
"repo": "timescale/tsbs",
"url": "https://github.com/timescale/tsbs/issues/247"
}
|
gharchive/issue
|
Unable to benchmark InfluxDB 2.x - Clarify supported versions
Loading the generated data for influx using tsbs_load / tsbs_load_influx doesn't seem to work on the latest images (2.x).
However, it works just fine on older versions (1.x).
Could you clarify if 2.x is indeed unsupported?
I can confirm that it is; tsbs is a fork of InfluxData's initial benchmarks from when they were on 1.x, but this repo unfortunately seems abandoned
|
2025-04-01T04:35:43.943292
| 2022-12-07T14:36:24
|
1482072494
|
{
"authors": [
"MatthewAlner",
"timhor"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11514",
"repo": "timhor/obsidian-editor-shortcuts",
"url": "https://github.com/timhor/obsidian-editor-shortcuts/pull/47"
}
|
gharchive/pull-request
|
Add toggle case command
Hey! Great plugin, it has nearly everything I want other than this and #44.
I'm a big JetBrains fan; they have a toggle case command which cycles the case of the selected text. This saves on remembering (and using up) 3 hotkeys.
I've implemented this here as an example, this code is a little hacky so feel free to bin this and just treat it as a feature request if you like.
Thanks!
Matt
https://user-images.githubusercontent.com/2782730/206207631-d9f5fb6e-b7c3-4e92-a7ff-717936a9703c.mp4
Heya! Thanks for the contribution, and sorry for the delay in getting to this – it's been a busy month!
Code looks good to me, but any chance you'd be able to add some unit tests to go along with it? Existing ones are located in src/__tests__/*.spec.ts 🙏
|
2025-04-01T04:35:43.950787
| 2024-04-04T05:07:56
|
2224508773
|
{
"authors": [
"CuB3y0nd",
"PxlSyl",
"timlrx"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11515",
"repo": "timlrx/tailwind-nextjs-starter-blog",
"url": "https://github.com/timlrx/tailwind-nextjs-starter-blog/issues/883"
}
|
gharchive/issue
|
Add support for blog Pin to Top
Add the priority attribute to blog posts and sort the blogs based on the value.
It is assumed that the lower the value, the higher the priority.
If empty, the default is to sort by release/modification time.
Implementation wise is relatively straightforward - add a new priority field to contentlayer and sort by priority and date.
Design wise is where it is tricky as I feel that there should be a change or notice to indicate that a particular post is pinned. Not sure if the current layout is the best for this. If you implement it on your end end, happy to add it to the list of blogs as a reference, but I don't think I will add it to the general repository.
Hi!
Another solution could be to add a featured section, with a "featured" field (instead of priority) set to true when needed.
If you want to implement this without making too many changes within the application, I think the best would be to create a "Featured" layout, and then import it into the desired page inside the app folder.
I think it is even possible not to show this new layout if there is no featured post, so the integration within the current application would be rather smooth.
Should be quite straightforward as well!
Sounds like a good idea, I'll try it!
Hi again, if you want to have a look, I just implemented this on my i18n template!
Woah, what a neat implementation! I will refer it, you really saved me a lot of trouble, thank you!
|
2025-04-01T04:35:43.957658
| 2017-06-02T17:13:11
|
233245801
|
{
"authors": [
"chary1n",
"timmyomahony"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11516",
"repo": "timmyomahony/django-pagedown",
"url": "https://github.com/timmyomahony/django-pagedown/issues/53"
}
|
gharchive/issue
|
pip install django-pagedown: RuntimeError maximum recursion depth exceeded
pip install django-pagedown:
I got RuntimeError: maximum recursion depth exceeded,
then I tried
pip install -e<EMAIL_ADDRESS>and still got the same error. I am using Python 2.7.
Thank you
Apologies for the very late reply, but I can't replicate this. I tried installing in a virtualenv on OSX with Python 2.7 using pip install django-pagedown and it worked OK:
Collecting django-pagedown
Downloading django-pagedown-0.1.3.tar.gz (80kB)
100% |████████████████████████████████| 81kB 1.2MB/s
Collecting Django>=1.3 (from django-pagedown)
Downloading Django-1.11.6-py2.py3-none-any.whl (6.9MB)
100% |████████████████████████████████| 7.0MB 177kB/s
Collecting pytz (from Django>=1.3->django-pagedown)
Using cached pytz-2017.2-py2.py3-none-any.whl
Building wheels for collected packages: django-pagedown
Running setup.py bdist_wheel for django-pagedown ... done
Stored in directory: /Users/timmyomahony/Library/Caches/pip/wheels/d0/65/f3/ee048e721f6234cbc2d75a09dd4a450f03b3885a378db7493e
Successfully built django-pagedown
Installing collected packages: pytz, Django, django-pagedown
Successfully installed Django-1.11.6 django-pagedown-0.1.3 pytz-2017.2
I'm closing this for the moment but if you can replicate the issues please let me know.
|
2025-04-01T04:35:43.960596
| 2012-01-04T08:20:06
|
2720433
|
{
"authors": [
"icambron",
"rockymeza",
"timrwood"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11517",
"repo": "timrwood/moment",
"url": "https://github.com/timrwood/moment/issues/122"
}
|
gharchive/issue
|
make test && make build
It is not obvious how to run the tests or how to build the site for moment. It would be nice to have some documentation on how to contribute to the project, how to run the tests from the command line, and how to build the site.
Additionally, it would be nice to have a familiar interface (via a Makefile) for testing and building the site.
It's all through Node.js, so you just do node build and node test from the root directory. I'll add docs on how to do this and which npm modules are needed.
I don't want to push the issue at all, but here's a good argument for using Makefiles: http://dailyjs.com/2011/08/11/framework-75/.
I think that node build and node test are totally acceptable, so long as there is documentation for it. It took me a while to get the devDependencies installed and to learn how to test and build by myself.
+1 for a makefile
Thanks for adding the Makefile, I'm going to close this issue. But I also added a pull request for some changes to the Makefile, #167.
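For reference, a minimal Makefile wrapping those node commands (illustrative only; the target names are assumptions, not necessarily what was merged, and recipe lines must be tab-indented) could look like:

```make
.PHONY: build test

build:
	node build

test:
	node test
```

This gives contributors the familiar make build / make test interface while keeping the existing Node.js scripts as the single source of truth.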
|
2025-04-01T04:35:43.977778
| 2016-01-10T06:35:11
|
125803871
|
{
"authors": [
"mithro"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11518",
"repo": "timvideos/HDMI2USB-litex-firmware",
"url": "https://github.com/timvideos/HDMI2USB-litex-firmware/issues/150"
}
|
gharchive/issue
|
Add PTZ support to uvc device
From @shenki on September 3, 2014 5:37
It appears that the v4l2 uvcvideo device can support PTZ controls, and the kernel has support for this: https://lkml.org/lkml/2014/7/8/837
We should add support to our uvc device, and make the fx2 firmware pass on the commands through to the serial board that does PTZ control.
Copied from original issue: timvideos/HDMI2USB-jahanzeb-firmware#95
This will be possible after https://github.com/timvideos/HDMI2USB-misoc-firmware/issues/149
This issue was moved to timvideos/HDMI2USB-fx2-firmware#15
|
2025-04-01T04:35:43.987507
| 2020-07-07T23:51:06
|
652751513
|
{
"authors": [
"kendallstrautman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11519",
"repo": "tinacms/tinacms",
"url": "https://github.com/tinacms/tinacms/pull/1318"
}
|
gharchive/pull-request
|
feat: Adds git:commit event to git client
Starts to address #1316.
You can optionally pass cms when you instantiate the git client.
const client = new GitClient('http://localhost:3000/___tina', this.cms)
If the cms is passed, a git:commit event will dispatch when commit is invoked, returning the response. Devs can then parse the response to handle various alerts.
React.useEffect(() => {
cms.events.subscribe("git:commit", function handleCommitAlerts(event) {
if (!event.response.ok) {
cms.alerts.error("Something went wrong! Changes weren't saved")
} else {
cms.alerts.info("Content saved successfully!")
}
})
}, [])
Not sure if this is the best way to go about it, looking for feedback before adding events to other actions (push etc.). Checkout demo-gatsby/src/components/bio.js for an example.
Ooo I just looked at the GitFile PR and maybe this event stuff would be better there
Closing to implement on GitFile instead of the client
|
2025-04-01T04:35:43.991369
| 2022-02-08T22:08:17
|
1127809824
|
{
"authors": [
"spbyrne"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11520",
"repo": "tinacms/tinacms",
"url": "https://github.com/tinacms/tinacms/pull/2563"
}
|
gharchive/pull-request
|
Big Block Selector
Work in progress -- posting draft code for feedback; there are still some rough edges.
How to provide image and (optional) category from the template schema:
const coolEightBlockSchema: TinaTemplate = {
name: "cooleight",
label: "Team",
ui: {
category: "Page Section",
previewSrc:
"https://res.cloudinary.com/dghp13dmb/image/upload/v1644345838/Block%20Picker/team_zgif9y.png",
},
fields: [
{
type: "string",
label: "String",
name: "string",
},
],
};
And how to enable the visual block picker:
...
fields: [
{
type: "object",
list: true,
name: "blocks",
label: "Sections",
ui: {
visualSelector: true,
},
...
I'm happy with how things are working at this point. My only complaint is that CSS columns are far from perfect, but I feel like adding something like Masonry would be overkill. Basically, blocks shouldn't be forced to have images of the same size, given that blocks come in all shapes and sizes, but this means we can't adhere to a grid layout; we need something more like 'Pinterest'. CSS columns work pretty well, but in cases where there are fewer blocks than columns, they'll often 'stick' to the first column.
Here's what it typically looks like:
And here's the column issue:
So not a huge issue, but just a UI quirk to be aware of.
Added to the cloud starter. This is what it looks like with no categories or search. The search only appears if there's more than 6 items, but we can tweak this value.
|
2025-04-01T04:35:44.018068
| 2022-04-23T17:43:26
|
1213371433
|
{
"authors": [
"chrisdoherty4"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11521",
"repo": "tinkerbell/hegel",
"url": "https://github.com/tinkerbell/hegel/pull/87"
}
|
gharchive/pull-request
|
Remove pkg level state
The primary purpose of this PR is to strip package-level state from the service. Global state contributes to application complexity and is considered the cause of many bugs. Package state is one notch behind global state.
Package state has been replaced with explicit dependency declarations in functions and, consequently, injected dependencies. This lays a foundation for further refactoring to bring about cleaner code that's easier to work with.
With the state removal some additional fixes and restructuring was done; specifics:
all http handlers are now high-order functions returning an http.Handler that leverages the injected dependencies.
metrics gathered by polling the health function of clients has been isolated in the metrics package and is launched as a distinct goroutine.
a series of bugs in the http health check handler have been fixed. Namely continuing on to write payloads after http.StatusInternalServerError codes are written.
removal of redundant Content-Type headers as the setter was called after WriteHeader() meaning it has no effect.
changed the version end-point to return a known JSON structure in contrast to the previous implementation that, presumably, relied on build time information being injected; the structure used is modeled after the health check endpoint and uses git_rev as the JSON key.
changed http-server package to http and grpc-server package to grpc; these changes have been isolated on their own commit for ease of reviewing.
Metric package state is untouched given it'll likely require a more thought out refactor than simply removing the state and injecting a dependency.
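As an illustration of the higher-order handler pattern described above, here is a minimal, self-contained sketch (hypothetical names; not Hegel's actual code): the constructor closes over its client dependency instead of reaching for package-level state.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// Client is a stand-in for whatever backend dependency the handler needs.
type Client interface {
	Name() string
}

type fakeClient struct{}

func (fakeClient) Name() string { return "fake" }

// healthHandler is a higher-order function returning an http.Handler
// that uses the injected client rather than any global.
func healthHandler(c Client) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "ok: %s", c.Name())
	})
}

// callHealth exercises the handler with an in-memory request.
func callHealth() string {
	rec := httptest.NewRecorder()
	req := httptest.NewRequest(http.MethodGet, "/healthz", nil)
	healthHandler(fakeClient{}).ServeHTTP(rec, req)
	return rec.Body.String()
}

func main() {
	fmt.Println(callHealth())
}
```

Because the client is injected, tests can pass a fake implementation without touching any shared state.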
@mmlb
Opened new PR because I can't reopen this after Github automatically closed it due to a rebase error.
|
2025-04-01T04:35:44.024092
| 2022-02-03T03:44:16
|
1122619988
|
{
"authors": [
"Mans22r",
"iqbalpb01"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11522",
"repo": "tinkerhubkmea/learnweb",
"url": "https://github.com/tinkerhubkmea/learnweb/pull/6"
}
|
gharchive/pull-request
|
created new branch
Created a new branch named "Mansoor" and also added a new readme file.
Make sure that the main branch stays clean @Mans22r
|
2025-04-01T04:35:44.063178
| 2023-12-03T02:30:06
|
2022297199
|
{
"authors": [
"chenyuxyz",
"imaolo",
"nullhook"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11523",
"repo": "tinygrad/tinygrad",
"url": "https://github.com/tinygrad/tinygrad/issues/2581"
}
|
gharchive/issue
|
Incorrect GPU buffer loading on M1
Noticed the problem when trying "GPU=1 python examples/stable_diffusion.py". I have tried loading a single buffer, and it does not reproduce. On M1 air:
from examples.stable_diffusion import StableDiffusion
from tinygrad.nn.state import torch_load, load_state_dict
from tinygrad.helpers import fetch
from tinygrad.device import Device
import numpy as np
def get_model_on_device(dev: str):
Device.DEFAULT = dev
model = StableDiffusion()
load_state_dict(model, torch_load(fetch('https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt', 'sd-v1-4.ckpt'))['state_dict'], strict=False)
Device[Device.DEFAULT].synchronize()
return model
gpu_model = get_model_on_device('GPU')
metal_model = get_model_on_device('METAL')
assert not np.array_equal(gpu_model.alphas_cumprod.numpy(), metal_model.alphas_cumprod.numpy())
My guess is a sync issue. If we synchronize on each CLDevice.copyin() the issue is resolved.
A more minimal test
from tinygrad.device import Device
from tinygrad.device import Buffer, BufferCopy
import numpy as np
DEVICE_TO_TEST = 'GPU'
TOTAL_MEM = int(4.26e9) # size of all the stable diffusion weights
NUM_BUFS = 1131 # the number of stable diffusion weights
BUF_SIZE = int(TOTAL_MEM/NUM_BUFS)
cpu_buf = Buffer.fromCPU('CPU', np.random.randint(0, high=255, size=BUF_SIZE, dtype=np.uint8))
gpu_bufs = [Buffer(DEVICE_TO_TEST, cpu_buf.size, cpu_buf.dtype) for _ in range(NUM_BUFS)]
for gb in gpu_bufs: BufferCopy([gb, cpu_buf], None)
Device[DEVICE_TO_TEST].synchronize()
for gb in gpu_bufs: assert np.array_equal(gb.toCPU(), cpu_buf.toCPU())
no assertion failure on mbp (m1 max), runs fine
no assertion failure on mbp (m1 max), runs fine
Thanks for testing.
Can you try increasing TOTAL_MEM? The reason being that the issue isn't necessarily the assert failure, but that we don't get a bad return status.
increased till 20gb and it still works
It must be an issue with apple's opencl binaries
opencl on macOS uses metal behind scenes btw. opencl buffers are same as metal's buffers.
Yes, their "binaries" are wrappers.
Maybe I need to update mine, but 20GB shouldn't work on METAL either.
closing as stale
|
2025-04-01T04:35:44.095759
| 2020-07-20T18:34:23
|
662131191
|
{
"authors": [
"Angelin01",
"tiredofit"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11524",
"repo": "tiredofit/docker-self-service-password",
"url": "https://github.com/tiredofit/docker-self-service-password/issues/34"
}
|
gharchive/issue
|
Environment variable configs with spaces get truncated by sed
I have been trying to set the "MAIL_FROM_NAME" variable for a bit now, however I can't for the life of me work around the sed replacements. With the signature, where I wanted new lines, I used a block and escaped the backslashes once for sed, and that was that. Docker Compose snippet:
environment:
MAIL_USE_LDAP: 'true'
MAIL_FROM_NAME: >-
Password Service
MAIL_SIGNATURE: >-
\\n\\n--\\n\\n
NOTIFY_ON_CHANGE: 'true'
The end result is always similar to $mail_from_name = "Password";.
I have tried using Password\ Service, however that leads to no change. With Password\\ Service I get $mail_from_name = "Password\";. Adding further slashes just increases the number of slashes in the output every other slash; it doesn't actually escape the space character.
Any help would be appreciated, I'd prefer to configure the entire thing through the environment variables instead of manually as it makes it easier to also configure sensitive variables like the email password.
Interesting! I just ran the script through shellcheck and quoted some more variables. Could you try the results on tiredofit/self-service-password:develop ?
Alternative you could set it up, then switch SETUP_TYPE=MANUAL which would keep the first generated configuration file available for you which would then let you make that hack.
If all else fails we can add custom script support to the image and you can have an extra exported volume that fires a bash script on each run to modify the field you wish to whatever you'd like. But lets see what happens with :develop.
Wonderful! Whatever variables you quoted fixed it perfectly. A simple MAIL_FROM_NAME: 'Password Service' works now. Thank you for the quick fix.
Great! - Tagged as tiredofit/self-service-password:5.1.2 and committed changes here
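The truncation seen above is consistent with plain shell word splitting: an unquoted variable expansion breaks on the space before sed ever runs. A minimal reproduction (illustrative; not the image's actual entrypoint script):

```shell
#!/bin/sh
# Hypothetical reproduction of the truncation: an unquoted expansion
# is word-split by the shell before sed ever sees it.
MAIL_FROM_NAME="Password Service"

# Unquoted: the shell splits on the space, so the sed expression that
# actually gets built stops at "Password".
set -- s/placeholder/$MAIL_FROM_NAME/
unquoted="$1"
echo "unquoted sed expression: $unquoted"

# Quoted: the space survives and the replacement works.
quoted=$(printf 'mail_from_name = "placeholder";\n' \
  | sed "s/placeholder/${MAIL_FROM_NAME}/")
echo "$quoted"
```

Quoting every expansion inside the sed expression, as the :develop fix did, is all that's needed to keep multi-word values intact.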
|
2025-04-01T04:35:44.098053
| 2018-03-18T03:43:14
|
306210684
|
{
"authors": [
"Horaddrim",
"havanagrawal"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11525",
"repo": "tirthajyoti/pydbgen",
"url": "https://github.com/tirthajyoti/pydbgen/issues/2"
}
|
gharchive/issue
|
Variable referenced before assignment in "realistic_email"
https://github.com/tirthajyoti/pydbgen/blob/master/pydbgen/pydbgen.py#L99
import pydbgen
myDB = pydbgen.pydb()
myDB.realistic_email("Peter Parker")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "...../pydbgen/pydbgen/pydbgen.py", line 99, in realistic_email
print(path)
UnboundLocalError: local variable 'path' referenced before assignment
Additionally, the dir path uses an OS specific separator, which causes it to fail on Mac.
path = dir_path + "\Domains.txt"
Do you think it's cool to use os.path.abspath too? I'm fixing it
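A portable fix along those lines (a sketch, not the merged patch) would combine os.path.abspath with os.path.join so the separator is correct on every OS:

```python
import os

def domains_path(module_file):
    """Locate Domains.txt next to the given module, portably."""
    # abspath + dirname resolves the package directory; join uses the
    # correct separator on every platform instead of a hard-coded "\".
    dir_path = os.path.dirname(os.path.abspath(module_file))
    return os.path.join(dir_path, "Domains.txt")

print(domains_path(__file__))
```

The same pattern applies to any other data file the package ships alongside its modules.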
|
2025-04-01T04:35:44.121820
| 2019-06-05T20:51:21
|
452711071
|
{
"authors": [
"8vius",
"AxelTheGerman",
"rylanb",
"sharkeyryan",
"titusfortner",
"twalpole",
"victorhazbun"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11527",
"repo": "titusfortner/webdrivers",
"url": "https://github.com/titusfortner/webdrivers/issues/126"
}
|
gharchive/issue
|
Gitlab CI fails when running Capybara tests
I get several different errors when trying to use selenium with the Gitlab CI, I have to retry several times to get my tests to pass. These are the errors I get:
Selenium::WebDriver::Error::WebDriverError: invalid session id
Errno::ECONNREFUSED: Failed to open TCP connection to <IP_ADDRESS>:9515
This is the config I have for my tests:
Webdrivers.cache_time = 86_400
if ENV["CI"]
Webdrivers::Chromedriver.required_version = "74.0.3729.6"
else
Webdrivers::Chromedriver.update
end
Capybara.register_driver :headless_chrome do |app|
options = ::Selenium::WebDriver::Chrome::Options.new
options.add_argument("--headless")
options.add_argument("--no-sandbox")
options.add_argument("--disable-gpu")
options.add_argument("window-size=2560x2560")
options.add_argument("disable-dev-shm-usage") if ENV['CI']
Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end
Capybara.javascript_driver = :headless_chrome
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
include Devise::Test::IntegrationHelpers
driven_by :headless_chrome, screen_size: [1280, 2400], options: {}
end
Please verify the version of Chrome being used. The most likely issue is that you are hard coding a chromedriver version for a system that has updated to Chrome v75.
@titusfortner thanks, I'll give it a check.
Just checked it's still on version 74.
I'm having the same issue after chrome being upgraded to 75 - but I forced chromedrivers as well
$ google-chrome --version
Google Chrome 75.0.3770.80
$ bundle exec rake webdrivers:chromedriver:update[75.0.3770.8]
The HashDiff constant used by this gem conflicts with another gem of a similar name. As of version 1.0 the HashDiff constant will be completely removed and replaced by Hashdiff. For more information see https://github.com/liufengyun/hashdiff/issues/45.
2019-06-05 22:00:03 INFO Webdrivers Updated to chromedriver 75.0.3770.8
Also:
why does the chromedriver version have to be the exact version string without the last digit - doesn't make any sense to me (but probably a chromedriver issue)
I assumed webdrivers:chromedriver:update would install the latest version - doesn't seem to be the case?
$ rake webdrivers:chromedriver:update[75.0.3770.8]
The HashDiff constant used by this gem conflicts with another gem of a similar name. As of version 1.0 the HashDiff constant will be completely removed and replaced by Hashdiff. For more information see https://github.com/liufengyun/hashdiff/issues/45.
2019-06-05 14:41:06 INFO Webdrivers Updated to chromedriver 75.0.3770.8
$ rake webdrivers:chromedriver:update
The HashDiff constant used by this gem conflicts with another gem of a similar name. As of version 1.0 the HashDiff constant will be completely removed and replaced by Hashdiff. For more information see https://github.com/liufengyun/hashdiff/issues/45.
2019-06-05 14:41:18 INFO Webdrivers Updated to chromedriver 2.41.578700
Validate with the log: Webdrivers::Logger.level = :debug to see what versions the gem is finding and comparing.
You can either specify the path of Chrome you want to use (Selenium::WebDriver::Chrome.path = '/path/to/chrome') or webdrivers gem will locate the Chrome browser that chromedriver will be using by default and reference that.
webdrivers:chromedriver:update will install the latest version that matches the version of Chrome you are using. It will use 2.41 if the version of Chrome it finds is less than v70 (https://github.com/titusfortner/webdrivers/blob/master/lib/webdrivers/chromedriver.rb#L32).
chromedriver version has to be the exact version string without the last digit
This is only if you are specifying it yourself. I think we could add some extra logic to find the latest driver for a specific major version, but this isn't something we are encouraging people to do, so if you want to go that route, you should be able to figure out exactly the one you want.
I'm getting failures sort of randomly starting yesterday around 1pm Mountain time. I don't know if they are related to Webdrivers but maybe a chrome/driver update instead?
Errors like:
Selenium::WebDriver::Error::ElementClickInterceptedError: element click intercepted: Element
and
Capybara::Ambiguous: Ambiguous match
When we haven't updated anything around the test gems or those tests in a long time.
Just putting in my two cents to see if there is someone who knows more about it.
Thanks for this gem, btw!
@rylanb this looks like a different issue. Chrome 75 was released yesterday and defaults to w3c: true, which is going to have some different behaviors to make it more cross-browser compatible. Those issues will not be webdrivers related.
@titusfortner yup, very likely! I was more providing another data point in case that helped anyone with troubleshooting things and also to gather any salient data like the Chrome 75 with w3c: true! Thanks!
After the auto update of Chrome in my gitlab ci to version 75.0.3770.8 I was running into the following error:
Selenium::WebDriver::Error::UnknownError: unknown error: Chrome failed to start: exited abnormally (unknown error: DevToolsActivePort file doesn't exist) (The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
I Included the following in my rails_helper.rb to override the use of the new chromedriver 75 and instead use the old 74.0.3729.6 version. Tests are now passing with Chrome 75 installed:
require "webdrivers/chromedriver" Webdrivers.cache_time = 1 Webdrivers::Chromedriver.required_version = '74.0.3729.6'
Hope this helps someone.
@sharkeyryan I was experimenting the same issue, now my TravisCI is passing. Thanks.
My setup:
require "selenium/webdriver"
require "webdrivers/chromedriver"
Webdrivers::Chromedriver.required_version = "74.0.3729.6"
Capybara.server = :puma, { Silent: true }
Capybara.register_driver :chrome do |app|
Capybara::Selenium::Driver.new(app, browser: :chrome)
end
Capybara.register_driver :headless_chrome do |app|
capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
chromeOptions: {
args: %w(no-sandbox headless disable-gpu window-size=1280,800),
},
)
Capybara::Selenium::Driver.new app,
browser: :chrome,
desired_capabilities: capabilities
end
Capybara.javascript_driver = :headless_chrome
Closing this since most people's issues appear to have been fixed and the others appear to be more about the chromedriver 75 update and not this gem. If anyone is still having an issue they believe is caused by this gem, please open a new issue with enough data to replicate the issue.
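The "drop the last digit" rule discussed above amounts to matching on the build version (the first three components of the version string). A sketch of that matching logic (illustrative; not the gem's actual implementation):

```ruby
# Match a chromedriver release to the installed browser by build version,
# i.e. everything except the trailing patch digit.
def build_version(full_version)
  full_version.split('.').first(3).join('.')
end

def matching_driver(browser_version, driver_versions)
  driver_versions.find { |v| build_version(v) == build_version(browser_version) }
end

puts matching_driver('75.0.3770.80', ['74.0.3729.6', '75.0.3770.8'])
```

This is why Chrome 75.0.3770.80 pairs with chromedriver 75.0.3770.8: only the final component differs.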
|
2025-04-01T04:35:44.146621
| 2016-05-22T16:37:21
|
156160494
|
{
"authors": [
"nathanfdunn",
"tjmehta"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11528",
"repo": "tjmehta/is-circular",
"url": "https://github.com/tjmehta/is-circular/pull/3"
}
|
gharchive/pull-request
|
Fixes algorithm and adds more tests
Fixes issue #2 so that the algorithm works in all cases
Thank you! This looks good, but I don't understand a few things (comments above)
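For context, one way such a fix can work (a sketch, not necessarily this PR's code) is to track only the current recursion path, so an object referenced twice without a cycle is not misreported as circular:

```javascript
// Minimal cycle detection that tracks the ancestors on the current
// recursion path. Re-seeing an ancestor means a true cycle; a shared
// sibling reference does not trigger a false positive.
function isCircular(value, path = new Set()) {
  if (value === null || typeof value !== "object") return false;
  if (path.has(value)) return true; // came back to an ancestor: cycle
  path.add(value);
  for (const key of Object.keys(value)) {
    if (isCircular(value[key], path)) return true;
  }
  path.delete(value); // leaving this branch of the traversal
  return false;
}

const a = {};
a.self = a;
console.log(isCircular(a)); // true

const shared = {};
console.log(isCircular({ x: shared, y: shared })); // false
```

The delete on the way back up is what distinguishes "on the current path" from "seen anywhere", which was the crux of issue #2.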
|
2025-04-01T04:35:44.171655
| 2020-10-06T04:41:06
|
715332843
|
{
"authors": [
"dominicduffin1",
"tkshill"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11530",
"repo": "tkshill/Quarto",
"url": "https://github.com/tkshill/Quarto/issues/22"
}
|
gharchive/issue
|
Add ARIA attributes for better Accessibility
Issue Context
Our main page elements do not have the appropriate Accessible Rich Internet Applications (ARIA) labels.
Suggested Solution
Add as many of the accessibility attributes in the Elm-ui region library as are applicable. In particular:
Add the navigation attribute msg to the row element containing the links in the Shared.elm view function.
Add the mainContent attribute msg to the column containing the page.body.
Add the announce attribute message to the top level column in the viewBoard function.
Alternatives Considered
Additional Resources
See here in the read me for how to run and install the application.
See here in contributing for the basics on forking and cloning the repository.
Hi! I'd like to take this on. Could you confirm - is the page.body also in Shared.elm, and is viewBoard the one in GamePage.elm?
Hey @dominicduffin1 ,
I've assigned you to the issue. The page.body in shared.elm is equivalent to the body of the top level record returned by view function in GamePage.elm.
So this page.body in Shared.elm
view :
{ page : Document msg, toMsg : Msg -> msg }
-> Model
-> Document msg
view { page } _ =
...
, column [ height fill, centerX ] page.body
...
is this body in GamePage.elm
view : Model -> Document Msg
view model =
{ title = "Quarto - Play"
, body =
[ column [ spacing 10, centerX ]
...
closed by #39
|
2025-04-01T04:35:44.196423
| 2023-06-03T10:51:34
|
1739405824
|
{
"authors": [
"freQuensy23-coder",
"nyck33"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11531",
"repo": "tloen/alpaca-lora",
"url": "https://github.com/tloen/alpaca-lora/issues/496"
}
|
gharchive/issue
|
What are the minimum system requirements?
I have a gaming laptop with 24GB of RAM and an Nvidia GeForce GTX 1650. I read on Tom's Hardware that this is not even close to being enough. Is this true? Can the Readme be updated to show this information?
To finetune a 7B model with QLoRA you need at least 16GB of GPU RAM, so you need something like a 3090 or greater.
To finetune 3B models you can use Google Colab with no problems.
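For a rough sense of why a small card falls short, here is a back-of-envelope sketch; every per-component number below is an assumption for illustration only, and real usage also depends on batch size, sequence length, and gradient checkpointing:

```python
# Back-of-envelope VRAM sketch for QLoRA finetuning of a 7B model.
# All per-component figures are illustrative assumptions.
def qlora_vram_estimate_gb(params_billion):
    base_weights = params_billion * 0.5   # 4-bit quantized weights: 0.5 bytes/param
    adapters_and_optimizer = 1.0          # LoRA adapters + optimizer states (assumed)
    activations_and_overhead = 4.0        # activations, buffers, CUDA context (assumed)
    return base_weights + adapters_and_optimizer + activations_and_overhead

print(qlora_vram_estimate_gb(7))  # ~8.5 GB before headroom
```

Even under these optimistic assumptions a 7B run lands well above the 4 GB of a GTX 1650, which is why a 16 GB card is recommended for comfortable headroom.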
|
2025-04-01T04:35:44.197928
| 2023-06-20T16:51:54
|
1765782654
|
{
"authors": [
"patosullivan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11532",
"repo": "tloncorp/landscape-apps",
"url": "https://github.com/tloncorp/landscape-apps/pull/2630"
}
|
gharchive/pull-request
|
diary: fix more cache issues
Fixes LAND-597 and another missing quips issue (i.e., the note in the cache was the wrong shape, did not include quips).
Also updates our optimistic update functionality to be in-line with the tanstack query docs on setQueryData: https://tanstack.com/query/v4/docs/react/reference/QueryClient#queryclientsetquerydata
Should fix LAND-589 as well.
|
2025-04-01T04:35:44.221605
| 2015-06-10T18:25:53
|
87061127
|
{
"authors": [
"domenic",
"totopia"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11534",
"repo": "tmpvar/jsdom",
"url": "https://github.com/tmpvar/jsdom/issues/1147"
}
|
gharchive/issue
|
question with doing $.ajax in jsdom@3
I am trying to run a mocha test against some ajax calls with jsdom. I am getting
statusText: 'TypeError: Cannot call method \'open\' of undefined', which refers to the following lines of code in jQuery, where options.xhr() returned undefined. If xhr is undefined, xhr.open() obviously will fail.
support.cors = !!xhrSupported && ( "withCredentials" in xhrSupported );
support.ajax = xhrSupported = !!xhrSupported;
jQuery.ajaxTransport(function( options ) {
var callback;
// Cross domain only allowed if supported through XMLHttpRequest
if ( support.cors || xhrSupported && !options.crossDomain ) {
return {
send: function( headers, complete ) {
var i,
xhr = options.xhr(),
id = ++xhrId;
xhr.open( options.type, options.url, options.async, options.username, options.password );
Here's how I am initializing jQuery in my test:
var jsdom = require("jsdom");
global.window = jsdom.jsdom().parentWindow;
global.$ = require('jquery/dist/jquery');
global.$.support.cors = true;
Am I missing something in order to use the ajax request with jsdom?
That is not the usual way of initializing a jQuery instance that you can use. Usually it is more like:
var jsdom = require("jsdom");
jsdom.env(
"",
[require.resolve("jquery")],
function (errors, window) {
// use window.$
}
);
The thing you are doing seems kind of crazy but also like it kind of might work so I am not sure exactly what the problem would be with it.
@domenic got it. the missing piece was the require("xmlhttprequest").XMLHttpRequest
Isn't xmlhttprequest part of jsdom by default?
Yes, it is, but it's on the jsdom window, not the global. Maybe jQuery just references XMLHttpRequest instead of window.XMLHttpRequest.
In general the approach in your OP seems pretty fragile unless you also copy over every property from window to global.
hmm...good call. maybe I don't need global namespace there, just use window.
|
2025-04-01T04:35:44.223638
| 2017-08-24T21:02:38
|
252730408
|
{
"authors": [
"domenic",
"laginha"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11535",
"repo": "tmpvar/jsdom",
"url": "https://github.com/tmpvar/jsdom/issues/1960"
}
|
gharchive/issue
|
Error: Cannot read property '_location' of null
Command: react-scripts test --env=jsdom (jest)
react-scripts version: 1.0.11
jsdom version: 9.8.3 (I later upgraded to 11.2.0 but the results were the same)
/code/node_modules/react-scripts/scripts/test.js:22
throw err;
^
TypeError: Cannot read property '_location' of null
at Window.get location [as location] (/code/node_modules/jest-environment-jsdom/node_modules/jsdom/lib/jsdom/browser/Window.js:148:79)
Any ideas on how to fix or avoid this error?
Sorry, this did not follow the issue template and its instructions on how to create a minimal reproduction case with no third-party dependencies. Let me close this for now, and you can reply when you have updated the OP to follow the issue template and we'll reopen.
|
2025-04-01T04:35:44.246350
| 2015-07-24T03:26:36
|
96954518
|
{
"authors": [
"Revenaunt",
"bradmontgomery"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11536",
"repo": "tndatacommons/android-app",
"url": "https://github.com/tndatacommons/android-app/pull/51"
}
|
gharchive/pull-request
|
Refactor how we store user-selected data
This PR contains work that
extracts the user-selected Category, Goal, Behavior, and Action objects from the CompassApplication into a container UserData class--a single instance of which is stored in the CompassApplication instead.
UserData should do a better job of keeping the hierarchy of content up-to-date when something is added or removed.
Adds a GetUserDataTask which pulls all of the user-selected data from the /api/users/ endpoint in one request
Simplifies the MainActivity by removing a bunch of unnecessary callbacks, and...
Simplifies the CategoryFragment and MyGoalsFragment classes by removing unnecessary listeners.
It looks good :+1:
|
2025-04-01T04:35:44.302374
| 2019-06-08T09:36:59
|
453776003
|
{
"authors": [
"1232154pp4749687",
"2025-sagar-1",
"Arrvindraa",
"DeiRep",
"Gahamelas",
"NOBHACKER120",
"Sonya1122",
"a1phat0ny",
"air-10",
"beratsumay",
"harshditu08",
"hilmiazizi",
"jayu27-sudo",
"kel50",
"kevin-weiss",
"kh3rad",
"mvajid",
"terpelajar",
"tnychn",
"yessure"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11537",
"repo": "tnychn/instascrape",
"url": "https://github.com/tnychn/instascrape/issues/2"
}
|
gharchive/issue
|
login not possible (csrftoken)
Unfortunately I cannot login:
Traceback (most recent call last):
File "$HOME/.virtualenvs/insta/bin/instascrape", line 11, in <module>
load_entry_point('instascrape-ax==1.1.1', 'console_scripts', 'instascrape')()
File "$HOME/.virtualenvs/insta/lib/python3.7/site-packages/instascrape_ax-1.1.1-py3.7.egg/instascrape/cli.py", line 649, in main
File "$HOME/.virtualenvs/insta/lib/python3.7/site-packages/instascrape_ax-1.1.1-py3.7.egg/instascrape/cli.py", line 217, in login
File "$HOME/.virtualenvs/insta/lib/python3.7/site-packages/instascrape_ax-1.1.1-py3.7.egg/instascrape/instascraper.py", line 118, in login
KeyError: 'csrftoken'
The problem appears also in v1.1.0. I tried turning off 2FA, but it makes no difference.
Please upgrade InstaScrape to v1.1.1 and try logging in with instascrape --debug login. This will print out some debug messages (including the http response of your login). They will help me figure out what caused the error!
Remember to censor your credentials!
Thanks!
instascrape --debug login
DEBUG: loading object from $HOME/.instascrape/insta.pkl...
DEBUG: $HOME/.instascrape/insta.pkl pickle file not found.
Choose Account
(1) + Login New Account
choice> 1
Username: $USER
Password:
DEBUG: Logging in...
DEBUG: trying to load cookie from $HOME/.instascrape/accounts/$USER.cookie...
DEBUG: cookie file for $USER not found
DEBUG: getting cookie by username and password
<instascrape.instascraper.InstaScraper object at 0x7f2b9e768e10>
Traceback (most recent call last):
File "$HOME/.virtualenvs/insta/bin/instascrape", line 11, in <module>
load_entry_point('instascrape-ax==1.1.1', 'console_scripts', 'instascrape')()
File "$HOME/.virtualenvs/insta/lib/python3.7/site-packages/instascrape_ax-1.1.1-py3.7.egg/instascrape/cli.py", line 649, in main
File "$HOME/.virtualenvs/insta/lib/python3.7/site-packages/instascrape_ax-1.1.1-py3.7.egg/instascrape/cli.py", line 217, in login
File "$HOME/.virtualenvs/insta/lib/python3.7/site-packages/instascrape_ax-1.1.1-py3.7.egg/instascrape/instascraper.py", line 120, in login
KeyError: 'csrftoken'
I got the same issue
same issue here
same issue here
[root@s2 ~]# instascrape --debug login -u saeed
DEBUG: loading object from /root/.instascrape/insta.pkl...
DEBUG: /root/.instascrape/insta.pkl pickle file not found.
* Cookie file does not exist yet. Getting credentials...
Password:
DEBUG: Logging in...
DEBUG: trying to load cookie from /root/.instascrape/accounts/saeed.cookie...
DEBUG: cookie file for saeed not found
DEBUG: getting cookie by username and password
Traceback (most recent call last):
File "/usr/local/bin/instascrape", line 10, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/site-packages/instascrape/cli.py", line 651, in main
args.func(args)
File "/usr/local/lib/python3.7/site-packages/instascrape/cli.py", line 217, in login
insta.login() # keep the logged in state and store it in the pickle
File "/usr/local/lib/python3.7/site-packages/instascrape/instascraper.py", line 118, in login
csrftoken = self._session.cookies.get_dict()["csrftoken"]
KeyError: 'csrftoken'
Instascrape v2.0.0 has just been released. Please try it out, and let me know if the problem persists.
Thanks!
I finally managed to test it with v2.0.1 and it seems that the problem persists:
instascrape --debug login
Saved Cookies
(1) + [Login New Account]
(1-1)choice> 1
Username: $USER
Password:
- Logging in...
getting cookie by username and password
ERROR KeyError: 'csrftoken'
File "$HOME/.virtualenvs/insta/bin/instascrape", line 10, in <module>
sys.exit(main())
File "$HOME/.virtualenvs/insta/lib/python3.7/site-packages/instascrape/__main__.py", line 196, in main
args.func(**vars(args))
File "$HOME/.virtualenvs/insta/lib/python3.7/site-packages/instascrape/commands/login.py", line 65, in login_handler
insta.login(username, password, load_cookie(cookie_name) if cookie_name else None)
File "$HOME/.virtualenvs/insta/lib/python3.7/site-packages/instascrape/instascrape.py", line 112, in login
session.headers.update({"X-CSRFToken": session.cookies.get_dict()["csrftoken"]})
Traceback (most recent call last):
File "$HOME/.virtualenvs/insta/bin/instascrape", line 10, in <module>
sys.exit(main())
File "$HOME/.virtualenvs/insta/lib/python3.7/site-packages/instascrape/__main__.py", line 196, in main
args.func(**vars(args))
File "$HOME/.virtualenvs/insta/lib/python3.7/site-packages/instascrape/commands/login.py", line 65, in login_handler
insta.login(username, password, load_cookie(cookie_name) if cookie_name else None)
File "$HOME/.virtualenvs/insta/lib/python3.7/site-packages/instascrape/instascrape.py", line 112, in login
session.headers.update({"X-CSRFToken": session.cookies.get_dict()["csrftoken"]})
KeyError: 'csrftoken'
It seems that the following lines of code are causing the problem.
https://github.com/a1phat0ny/instascrape/blob/92ba6e073790c0475bb51d5ce8d03b2f97cc6c2d/instascrape/instascrape.py#L110-L112
Since I failed to reproduce the error, I'm not sure if my presumption is correct.
I guess we should get the value of the csrftoken cookie from the response, rather than the cookie jar of the session itself. So I changed the code into the following lines:
https://github.com/a1phat0ny/instascrape/blob/ff2127d93bb0a0799bd45c33a8c3a9e01e046aa8/instascrape/instascrape.py#L110-L112
Upgrade instascrape to the lastest GitHub commit using pip:
$ pip install git+https://github.com/a1phat0ny/instascrape@master --upgrade
Please test it out and let me know if the problem persists.
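For anyone hacking on this locally, a defensive version of the token extraction might look like the sketch below. It returns None instead of raising KeyError, and the shared-data fallback is an assumption based on how Instagram embedded a "csrf_token" field in its page JSON at the time, not a confirmed API:

```python
import re


def extract_csrftoken(cookie_dict, page_html=""):
    """Return the CSRF token, or None instead of raising KeyError.

    Tries the response's cookie jar first, then falls back to scanning
    the page HTML for a "csrf_token" field in the embedded shared-data
    JSON blob (hypothetical fallback; may break whenever Instagram
    changes its page structure).
    """
    token = cookie_dict.get("csrftoken")
    if token:
        return token
    match = re.search(r'"csrf_token"\s*:\s*"([^"]+)"', page_html)
    return match.group(1) if match else None


# Usage against a requests response (sketch):
#   token = extract_csrftoken(resp.cookies.get_dict(), resp.text)
#   if token is None:
#       raise RuntimeError("no csrftoken found; Instagram may have changed")
```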
I just tested it with the current version. The error is basically the same, only the second last line differs:
session.headers.update({"X-CSRFToken": req.cookies.get_dict()["csrftoken"]})
KeyError: 'csrftoken'
`ERROR HTTPError: 400 Client Error: Bad Request for url: https://i.instagram.com/api/v1/users/21788670033/info/
File "/data/data/com.termux/files/usr/bin/instascrape", line 11, in
load_entry_point('instascraper==2.0.1', 'console_scripts', 'instascrape')()
File "/data/data/com.termux/files/usr/lib/python3.7/site-packages/instascrape/main.py", line 196, in main
args.func(**vars(args))
File "/data/data/com.termux/files/usr/lib/python3.7/site-packages/instascrape/commands/login.py", line 65, in login_handler
insta.login(username, password, load_cookie(cookie_name) if cookie_name else None)
File "/data/data/com.termux/files/usr/lib/python3.7/site-packages/instascrape/instascrape.py", line 139, in login
self._post_login()
File "/data/data/com.termux/files/usr/lib/python3.7/site-packages/instascrape/instascrape.py", line 207, in _post_login
self.my_username = get_username_from_userid(self.my_user_id)
File "/data/data/com.termux/files/usr/lib/python3.7/site-packages/instascrape/utils.py", line 60, in get_username_from_userid
resp.raise_for_status()
File "/data/data/com.termux/files/usr/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
Traceback (most recent call last):
File "/data/data/com.termux/files/usr/bin/instascrape", line 11, in
load_entry_point('instascraper==2.0.1', 'console_scripts', 'instascrape')()
File "/data/data/com.termux/files/usr/lib/python3.7/site-packages/instascrape/main.py", line 196, in main
args.func(**vars(args))
File "/data/data/com.termux/files/usr/lib/python3.7/site-packages/instascrape/commands/login.py", line 65, in login_handler
insta.login(username, password, load_cookie(cookie_name) if cookie_name else None)
File "/data/data/com.termux/files/usr/lib/python3.7/site-packages/instascrape/instascrape.py", line 139, in login
self._post_login()
File "/data/data/com.termux/files/usr/lib/python3.7/site-packages/instascrape/instascrape.py", line 207, in _post_login
self.my_username = get_username_from_userid(self.my_user_id)
File "/data/data/com.termux/files/usr/lib/python3.7/site-packages/instascrape/utils.py", line 60, in get_username_from_userid
resp.raise_for_status()
File "/data/data/com.termux/files/usr/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://i.instagram.com/api/v1/users/21788670033/info/`
Help
any updates ?
@terpelajar Instascrape v2.0.2 has just been released. This release introduces a fix to this problem. Please upgrade your instascrape. Thanks!
$ pip install instascraper --upgrade
- Logging in...
getting cookie by username and password
failed to find 'csrftoken' in first attempt, using endpoint '/web/__mid'
✖ cannot find 'csrftoken' from cookies
I have no idea how to fix this problem either... since I cannot reproduce this error. Maybe you should check the logfile at ~/.instascrape/instascrape.log and see what the JSON response from Instagram tells you.
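To pull just the relevant line out of that logfile without scrolling, a small filter like this can help (the "login response data" marker matches the debug line format shown in the log excerpt later in this thread; the path is the one mentioned above):

```python
from pathlib import Path


def find_login_responses(log_text):
    """Return only the log lines containing the raw login response data."""
    return [line for line in log_text.splitlines()
            if "login response data" in line]


log_path = Path.home() / ".instascrape" / "instascrape.log"
if log_path.exists():
    for line in find_login_responses(log_path.read_text(errors="replace")):
        print(line)
```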
My logs of the login action:
───────┬────────────────────────────────────────────────────────────────────────
│ File: /Users/tony/.instascrape/instascrape.log
───────┼────────────────────────────────────────────────────────────────────────
1 │ [17:35:29] [MainThread/INFO] (file='instascrape.py' line=87 func='login
│ '): Logging in...
2 │ [17:35:29] [MainThread/DEBUG] (file='instascrape.py' line=99 func='logi
│ n'): getting cookie by username and password
3 │ [17:35:32] [MainThread/DEBUG] (file='instascrape.py' line=115 func='log
│ in'): login response data -> {'authenticated': True, 'user': True, 'use
│ rId': 'censored', 'oneTapPrompt': False, 'fr': 'censored', 'status': 'ok'}
4 │ [17:35:32] [MainThread/DEBUG] (file='instascrape.py' line=198 func='_po
│ st_login'): Cookie: <RequestsCookieJar[<Cookie csrftoken=censored for .instagram.com/>,
| <Cookie ds_user_id=censored for .instagram.com/>, <Cookie mid=censored for .in
│ stagram.com/>, <Cookie rur=censored for .instagram.com/>, <Cookie sessionid=
│ censored for .instagram.com/>, <Cookie shbid=censored for .instagram.com/>, <Cookie
| shbts=censored for .instagram.com/>]>
5 │ [17:35:32] [MainThread/INFO] (file='instascrape.py' line=203 func='_pos
│ t_login'): Logged in -> @censored (censored)
───────┴────────────────────────────────────────────────────────────────────────
Same problem. Has somebody figured out how to fix it? Otherwise, I can't use this tool at all.
Mmmm...maybe I should try reading how other projects implement the login logic...
Mmmm...maybe I should try reading how other projects implement the login logic...
Does it work on your side or is it just us?
It works fine on my side... that's why it's hard for me to debug this issue as I can't reproduce the error on my machine.
It works fine on my side... that's why it's hard for me to debug this issue as I can't reproduce the error on my machine.
Is there something I can provide you with to help reproduce the error?
Also, what are you running? There may be some missing dependencies or something like that on our side...
It works fine on my side... that's why it's hard for me to debug this issue as I can't reproduce the error on my machine.
Ok, I'm tried to use this tool on my local machine running Linux Mint, no errors were faced. Logged in successfully.
But the error persists on my remote machine running Linux Debian 9.
I also have another question since now I'm able to generate cookies on my local machine, where do I get the cookie file on my local machine to (and where) put it on my remote machine?
The cookies are stored at ~/.instascrape/cookies/{username}.cookie. You can copy the file to the same path on your remote machine.
The cookies are stored at ~/.instascrape/cookies/{username}.cookie. You can copy the file to the same path on your remote machine.
Ty, do I need to copy the instasession.pickle aswell?
No I don’t think it’s needed as it would cause the same session exists in two different machines.
Hey, sorry to bother and offtop, is there any way to contact you? Like Telegram or email?
https://t.me/tnychn
Login not possible here too, got this on Mojave:
Traceback (most recent call last):
File "/opt/miniconda3/bin/instascrape", line 8, in <module>
sys.exit(main())
File "/opt/miniconda3/lib/python3.7/site-packages/instascrape/__main__.py", line 196, in main
args.func(**vars(args))
File "/opt/miniconda3/lib/python3.7/site-packages/instascrape/commands/login.py", line 65, in login_handler
insta.login(username, password, load_cookie(cookie_name) if cookie_name else None)
File "/opt/miniconda3/lib/python3.7/site-packages/instascrape/instascrape.py", line 114, in login
data = resp.json()
File "/opt/miniconda3/lib/python3.7/site-packages/requests/models.py", line 897, in json
return complexjson.loads(self.text, **kwargs)
File "/opt/miniconda3/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/opt/miniconda3/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/miniconda3/lib/python3.7/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Also cannot login..
Traceback (most recent call last):
File "/home/mohamad/.local/bin/instascrape", line 8, in
sys.exit(main())
File "/home/mohamad/.local/lib/python3.8/site-packages/instascrape/main.py", line 196, in main
args.func(**vars(args))
File "/home/mohamad/.local/lib/python3.8/site-packages/instascrape/commands/login.py", line 65, in login_handler
insta.login(username, password, load_cookie(cookie_name) if cookie_name else None)
File "/home/mohamad/.local/lib/python3.8/site-packages/instascrape/instascrape.py", line 114, in login
data = resp.json()
File "/usr/lib/python3/dist-packages/requests/models.py", line 897, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/lib/python3/dist-packages/simplejson/init.py", line 518, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
(1-1)choice> 1
Username: **********
Password:
Logging in...
getting cookie by username and password
ERROR JSONDecodeError: Expecting value: line 1 column 1 (char 0)
File "/home/mohamad/.local/bin/instascrape", line 8, in
sys.exit(main())
File "/home/mohamad/.local/lib/python3.8/site-packages/instascrape/main.py", line 196, in main
args.func(**vars(args))
File "/home/mohamad/.local/lib/python3.8/site-packages/instascrape/commands/login.py", line 65, in login_handler
insta.login(username, password, load_cookie(cookie_name) if cookie_name else None)
File "/home/mohamad/.local/lib/python3.8/site-packages/instascrape/instascrape.py", line 114, in login
data = resp.json()
File "/usr/lib/python3/dist-packages/requests/models.py", line 897, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/lib/python3/dist-packages/simplejson/init.py", line 518, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
I came up with a good workaround to save your saved posts without faulty instascrape:
add a chrome extention called "Save All Resources"
open your saved tab in a chrome browser (click on your userpic - then Saved) and scroll down all of your saved posts. The server cuts down every 2000 pics or so, so be patient and give it 10 min if the page refuses to scroll deeper at some point.
when you reach the end of the page open devtools, click ResourcesSaver tab there
leave all the settings as it is and click download
voila, you now got a zip file with all your saved posts
search there for ".jpg" with your file browser and move all of them in one folder
hope that helps. Worked for me.
I wouldn't trust this air-10 (it has "assholishness" written all over it).
This could be a possible solution. I fiddled a bit with beautifulsoup to get the csrftoken. It is quick and dirty by now, but it sure can be optimized.
I tested my approach in this fork. But just adding the csrf header doesn't seem to resolve the issue. Sending the login payload data resulted in an error 400. I guess I've got to dig a little deeper into it.
Traceback (most recent call last):
File "crack.py", line 108, in
sess = instabrute.Login(password)
File "crack.py", line 83, in Login
sess.headers.update({'X-CSRFToken' : r.cookies.get_dict()['csrftoken']})
KeyError: 'csrftoken' Pls help me guys😢😢😢😢
@terpelajar Instascrape v2.0.2 has just been released. This release introduces a fix to this problem. Please upgrade your instascrape. Thanks!
$ pip install instascraper --upgrade
Traceback (most recent call last): File "crack.py", line 108, in
sess = instabrute.Login(password)
File "crack.py", line 83, in Login sess.headers.update({'X-CSRFToken' : r.cookies.get_dict()['csrftoken']}) KeyError: 'csrftoken'
It's showing like this 😭😭😭Even i used this comment `$ pip install instascraper --upgrade
Traceback (most recent call last):
File "hackinsta.py", line 154, in
sess = instabrute.Login(password)
File "hackinsta.py", line 118, in Login
sess.headers.update({'X-CSRFToken' : r.cookies.get_dict()['csrftoken']})
KeyError: 'csrftoken'. Please help me
Traceback (most recent call last):
File "hackinsta.py", line 154, in
sess = instabrute.Login(password)
File "hackinsta.py", line 118, in Login
sess.headers.update({'X-CSRFToken' : r.cookies.get_dict()['csrftoken']})
KeyError: 'csrftoken'. Please help me
WARNING: You are using pip version 20.1.1; however, version 20.2.2 is available.
You should consider upgrading via the '/data/data/com.termux/files/usr/bin/python3 -m pip install --upgrade pip' command.
$ python hackinsta.py
Please enter a username: varun_sagar_2_17
[] 1315 Passwords loads successfully
[] Do you want to use proxy (y/n): n
[*] Please add delay between the bruteforce action (in seconds): 1
Traceback (most recent call last):
File "hackinsta.py", line 154, in
sess = instabrute.Login(password)
File "hackinsta.py", line 118, in Login
sess.headers.update({'X-CSRFToken' : r.cookies.get_dict()['csrftoken']})
KeyError: 'csrftoken'. Please help me please 🙏🙏🥺
Ok bro !! But plzzz teach me or show me ur article that how real hacking
works on Instagram.....I want learn coz my sister is getting some backmails
on her private photos by her Bf 😭😭😭 So plzzz tell me the real hacking
workings
On Aug 19, 2020 1:17 PM, "2025-sagar-1"<EMAIL_ADDRESS>wrote:
Traceback (most recent call last):
File "hackinsta.py", line 154, in
sess = instabrute.Login(password)
File "hackinsta.py", line 118, in Login
sess.headers.update({'X-CSRFToken' : r.cookies.get_dict()['csrftoken']})
KeyError: 'csrftoken'. Please help me
—
You are receiving this because you commented.
Reply to this email directly, view it on GitHub
https://github.com/tnychn/instascrape/issues/2#issuecomment-675911836,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/AQPDCKQE5BO5TSSEGXP5CF3SBN7RFANCNFSM4HWG4BDQ
.
sess.headers.update({'X-CSRFToken' : r.cookies.get_dict()['csrftoken']})
Traceback (most recent call last):
File "hackinsta.py", line 154, in
sess = instabrute.Login(password)
File "hackinsta.py", line 118, in Login
sess.headers.update({'X-CSRFToken' : r.cookies.get_dict()['csrftoken']})
KeyError: 'csrftoken'
$. Please help me please 🙏🥺🙏
pleease fix
@1232154pp4749687
Already attempted to fix this issue in c006b25485c3f00c85f546aae17d107ef613583b.
Please upgrade instascrape to v2.0.4 by using pip3 install instascraper --upgrade.
If you still have any problem logging in, please refer to #17.
Hi issue in instascrape (key error csrftoken)
Traceback (most recent call last):
File "/data/data/com.termux/files/home/alkrinsta/alkrinsta.py", line 146, in
sess = instabrute.Login(password)
File "/data/data/com.termux/files/home/alkrinsta/alkrinsta.py", line 110, in Login
sess.headers.update({'X-CSRFToken' : r.cookies.get_dict()['csrftoken']})
KeyError: 'csrftoken'
Traceback (most recent call last):
File "/data/data/com.termux/files/home/alkrinsta/alkrinsta.py", line 146, in
sess = instabrute.Login(password)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/home/alkrinsta/alkrinsta.py", line 110, in Login
sess.headers.update({'X-CSRFToken' : r.cookies.get_dict()['csrftoken']})
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
KeyError: 'csrftoken'
Please help
|
2025-04-01T04:35:44.317255
| 2019-03-09T17:03:54
|
419093029
|
{
"authors": [
"debsahu",
"dermodmaster"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11538",
"repo": "toblum/McLighting",
"url": "https://github.com/toblum/McLighting/issues/355"
}
|
gharchive/issue
|
More than 3 custom effects slots?
Hello, how can I add more buttons for custom effects?
There are only 4 slots for WS2812FX.
https://github.com/toblum/McLighting/blob/93cf8a919fccc262ac3effe59e83f229989d4640/Arduino/McLighting/McLighting.ino#L296
Call strip->setCustomMode(XXX, F("Custom1"), Function) 4 times. Where XXX is 0 or 1 or 2 or 3.
Oh well, thank you for the quick response. I asked because I reached the limit ^^ I thought it was a limit in your repo.
Found https://github.com/kitesurfer1404/WS2812FX/issues/132 where you can modify the library so you can add more slots 👍
THX for quick response!!!
Yes, 4 is a good number; be careful about managing memory with many animations.
I'll thx m8 x)
|
2025-04-01T04:35:44.386955
| 2021-05-02T10:30:00
|
873906503
|
{
"authors": [
"ChristianTacke",
"Remigrator",
"Taratect",
"TheThunderspy",
"WPFilmmaker",
"WittyNameHere",
"chill10n",
"marcdw1289",
"poiNt3D",
"rustforfuture",
"tobykurien"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11539",
"repo": "tobykurien/WebApps",
"url": "https://github.com/tobykurien/WebApps/issues/253"
}
|
gharchive/issue
|
Future of WebApps, Web and Android, and other philosophical issues
Moving the discussion from #249 into a new issue:
Unfortunately the bad news is that I will probably stop supporting this app this year, despite this app being a labour of love and one I'm proud of. The sandbox leaks mentioned in the README, combined with browser fingerprinting, supercookies, FLoC, and other hostile abuses of Web technology, have made me come to the conclusion that the Web is a lost cause for private browsing. Yes, WebApps offers only limited protection, and that protection will probably decrease every year. Something like Gemini (https://gemini.circumlunar.space/docs/faq.gmi) is probably the way forward for privacy minded geeks, and I for one am moving as much of my content creation/consumption there as possible.
Another factor is that Android is also increasingly hostile to the developer (you need at least 16Gb RAM to compile a 3Mb APK, every year devs are forced onto new APIs, Gradle builds break often, Google now wants your developer signing key, the tooling I had used is now deprecated, etc, etc, etc). Couple that with the spyware preloaded onto phones and the increasing difficulty of getting control over your device (e.g. my phone kills VLC after 30 mins and there's nothing I can do about it), and I'm ready to ditch Android and go back to good 'ol desktop computing (https://tobykurien.com/post-1618319359/).
@rustforfuture said:
Sorry for off topic and you probably know but did you tried dontkillmyapps.com? It solved most of my problems.
Yep I know about it and it didn't help, because this is something the manufacturer put into the kernel. Every new device I buy has similar problems of killing apps and services, and each time the hoops I have to jump through get worse or more obscure, or in my current case, impossible.
Also sorry again for off topic, but so you as a veteran android dev, think i as a completely noob and newbie shouldn't start android learning if i also can't upgrade my PC much or other problems you mentioned?
I wouldn't say you shouldn't start learning Android dev, but you should probably know what you're getting yourself into. Everything you learn today will be obsolete in a few years, and so will your hardware. Google is also generally hostile towards developers, just have a look at the /r/androiddev subreddit: https://libredd.it/r/androiddev/search?q=suspended&restrict_sr=on&sort=relevance&t=all - accounts are automatically suspended/deleted for no reason and there's no recourse. This is detrimental if your livelihood depends on it.
The web, in contrast, has none of these issues.
What about flutter? Or kotlin?
Flutter and Kotlin just proves the point about how everything is obsolete every few years when newer/shinier things come along. And Flutter needs even more resources as it is built on top of the already bloated SDK. Maintaining an app beyond a year or two is a nightmare because of how things used to be done, vs how they are done now, vs what will be mandated by Play store at the end of the year.
Thank you very much for all your great efforts and time and health you put in your apps.
Thanks, I put in the effort only because I myself rely on the app for my own needs. Or at least I did, until a month ago when I ditched my smartphone for a dumbphone.
Thank you very much for the very informative and kind reply to my off-topic questions.
I hope you stay safe and healthy, and good luck with your future :)
I'm sad that your app is approaching EoL. What alternatives do you recommend for moving on now?
Thank you for your support.
If I had to make a recommendation, probably Firefox for Android with NoScript or uBlock Origin addon or similar, though I haven't used Firefox on Android in a few years. Firefox (at least on desktop, maybe on Android too) has mitigations for CNAME cloaking and some of the other problems I described above.
@tobykurien Thank you for the hard work you put in WebApps, I hope you will change your mind and the project won't die.
You are right that big companies do everything to make development hard (sites as well, they always recommend you to download their app instead of using the browser etc.). Android is tied to Google and it sucks privacy-wise, but what's the alternative? Apple?
Yes, there are dumbphones, but in an interconnected world smartphones are becoming essential (even in poorer countries many don't have a PC but have a smartphone).
I would look into postmarketOS if you are interested in developing FLOSS apps. Linux is slowly entering the smartphone (postmarketOS wants to support devices for 10 years!) and I hope it will succeed (but more apps will be needed).
@WPFilmmaker thanks for the kind words. Indeed, I'll keep developing solutions and I am interested in developing FLOSS apps. I am keeping an eye on PinePhone and postmarketos for sure. I jumped into FirefoxOS back in the day and was devastated when they discontinued it. De-googled Android was fine for me until all the app killing (and other) shenanigans. I hope that instead of corporation controlled smartphones, the future will have more open and fully fledged devices for even poorer countries to use, preferably a full computer rather than the locked-down consumption-only devices of today.
My current "mobile computing" solution is a Raspberry Pi tablet in a couple of form factors that I built (see https://github.com/tobykurien/rpi_tablet_os ). Yes I know about CutiePi and all the other Pi projects out there, I keep a keen eye on them. When a Pi with power management becomes available, I hope that will be a game changer, but for now, this device gives me everything I need without any spyware or walled gardens. Sure, I need to boot it up and shut it down each time I use it, but that little friction helps prevent me reaching for it too often (which was a major reason I ditched the smartphone). This is a full mobile computer running desktop Linux that I can have today :)
I really, fully understand your concerns and issues with the way things are currently moving forward. My current Android device is LineageOS-based without any Google Play components (which means that some corporate/commercial apps do not work; so be it). Firefox on the PC has "Containers" (or whatever it's called today). That's missing from Firefox for Android. uBlock Origin seems to be available. Haven't yet tried it (will soon).
So that's where WebApps comes into the picture: I consider it like multiple instances of a "minimalistic just works" browser. I don't expect too many privacy features inside each "instance". I just expect that one instance doesn't see the other instances. The minimal uBlock Origin-like blocking is a well-received addition!
If Firefox for Android gains Container support, switching away from Webapps will be much easier. But until then, I would like to continue using Webapps!
Yes, it's really bad concerning privacy. Firefox is a leader in the decline of standards here, I think. As the market shares of the other browsers (from the two biggest data krakens) rise, Firefox's decline only seems to be slowing, probably also because Firefox to this day has not implemented a nice Miracast feature, so it loses its purpose with media-oriented customers. I think they get money to lack that feature. fx_cast, a Chromecast project for Firefox, can no longer even be found among the Firefox add-ons, and trying to have Firefox in your own flavour via about:config or settings is now a worthless effort on mobile devices. I am really frustrated. I guess the best way to go is just to get a Google-free ROM for your mobiles and send Google off to where it belongs --> the graveyard.
I just learned of this app today, and wanted to say it fills an important role as an app that streams HTTP video to devices like the Chromecast without the advertisements and closed source of the dozen similar apps in the Play Store.
As someone not particularly aware/interested in web privacy but just trying to watch local news on the TV off the web, a FOSS web-to-Chromecast app is a useful resource. Sad it was deprecated.
I found this program just a week ago. This is just what I need!
Modern life is so frustrating: app for this, app for that.
It all comes down to who can grab the most data on your phone: a calculator that requires internet access, a delivery app that needs to read your contacts! And a browser is not a solution; every site can see where you've been.
I will not allow this. Let them know only what they need to know.
What is the condition of this browser? Sadly, I only found out about this browser recently and really liked the concept.
I want to know if the app is still in working condition and whether there are any security concerns or bugs in general. I have some food delivery services and other miscellaneous stuff I would love to separate from my browser activity. I'm aware that it's not a complete sandbox, but I guess something is better than nothing. I want to use the app for that purpose. But I'm worried about the security and safety since the project is not maintained, as I'll have payment info stored on those services. So is it still safe to make transactions in this app?
It should still be functional, some of my family members still use it. There aren't any glaring security issues I am aware of and the sandboxing should still work, although I can't guarantee that changes to system WebView or Android versions haven't changed/broken things.
…
Thanks for letting me know. Also, can I assume the same for the GApps Browser too?
I still make heavy use of this app on my devices.
Along with up-to-date Mulch WebView I always feel confident using it for a lot of my needs.
I usually recommend folks try it out if I feel it may fit a use case.
I use your app all the time and it has always served me well. I can't imagine my life without it. Not even kidding!
|
2025-04-01T04:35:44.837575
| 2020-03-25T19:34:13
|
587940800
|
{
"authors": [
"ppannuto",
"teonaseverin"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11540",
"repo": "tock/libtock-c",
"url": "https://github.com/tock/libtock-c/pull/76"
}
|
gharchive/pull-request
|
User library for HD44780 LCD capsule
User library and example for HD44780 LCD capsule (https://github.com/tock/tock/pull/1715).
The other issue: currently this library just suppresses all possible errors from the underlying driver (the if (ret) return pattern). I'd assumed at first that the underlying commands couldn't error, but it looks like they can (e.g. EBUSY).
What's the rationale for not bubbling the error back up to the caller? Silently dropping errors seems like a real nightmare to debug.
|
2025-04-01T04:35:44.870813
| 2022-01-14T15:22:24
|
1103784858
|
{
"authors": [
"RafMazza",
"fgira",
"gyanverma2",
"k-yle"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11541",
"repo": "toddbluhm/env-cmd",
"url": "https://github.com/toddbluhm/env-cmd/issues/337"
}
|
gharchive/issue
|
process is not defined
Hi,
I have two .env files, one for dev and one for prod.
In package.json I have added:
"scripts": {
"start": "env-cmd -f ./environments/.dev.env yarn start:dev",
"start:dev": "set PORT=4200 && react-scripts start",
},
But in the browser console, I have this error message:
Hi @RafMazza,
I think your problem is the same as: process is not defined.
I upgraded to react-scripts 5.0.0 and it solved the problem.
If you want all the details and solutions for this problem, see this
If you are installing react-scripts, you don't have to install env-cmd.
Hey, this error is coming from create-react-app, not env-cmd. I would suggest joining the discussion at https://github.com/facebook/create-react-app/issues/11773
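As a side note (unrelated to the error above), the `set PORT=4200 &&` form only works in Windows cmd.exe, and the space before `&&` becomes part of the variable's value. A hypothetical cross-platform variant using the third-party cross-env package could look like this:

```json
{
  "scripts": {
    "start": "env-cmd -f ./environments/.dev.env yarn start:dev",
    "start:dev": "cross-env PORT=4200 react-scripts start"
  }
}
```

cross-env is an extra dev dependency and this is only a sketch; the original Windows-only script works as posted.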
|
2025-04-01T04:35:44.875915
| 2017-03-02T19:31:49
|
211492989
|
{
"authors": [
"BlackRayquaza",
"toddw123"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11542",
"repo": "toddw123/RotMG_Clientless",
"url": "https://github.com/toddw123/RotMG_Clientless/issues/31"
}
|
gharchive/issue
|
php script for your accounts.js to settings.xml
I was getting pretty annoyed with trying to swap out different accounts in my settings.xml file for testing; I was just looking at my accounts.js file from Muledump and then copy/pasting the info over. But it just gets annoying, especially if you are trying to set up more than 2 or 3 accounts!
Anyways, I actually do PHP programming for my job. I know some people like to give PHP a bad time and say it's awful, blah blah blah, but in all my years of programming, and all my years of programming PHP, I've never run into something that made me think PHP was complete garbage. Quite the opposite, really. Anytime I learn something new in PHP I'm usually pretty amazed at how versatile and all-inclusive PHP is. I also love that it was built using C/C++, which is then reflected in PHP's syntax. This is what made it so easy for me to pick up, since I already knew C/C++.
But let me get back on track here, sorry; I get distracted sometimes when talking about PHP, just because I can't understand the undeserved hate it seems to get from the web-developing community. So, PHP: I normally don't touch the stuff anymore unless I'm at work, but on some occasions when I need to put together a quick tool to do something I'll use PHP because it's so easy to just open up a basic text editor and write down a few lines of code and it's done. No need to go set up a huge project and all the build config. No need to find out which libraries I need to include to use functions x/y/z. No need to wait for it to compile every time I make a change. Etc etc, the list could go on.
I finally sat down and put together this quick little PHP script that will read in your Muledump accounts.js file and convert it to the proper settings.xml format. It's set up so that it also rotates the <Server> value (you could change that pretty easily if you didn't want it). So, without further ado, I give you the PHP script:
<?php
$accountsjs = "PATH_TO_ACCOUNTS_JS_FILE";
// Remove any servers you might not want used, or leave as is to use all of them
$servers = array( "USWest", "USWest2", "USWest3", "USMidWest", "USMidWest2", "USSouth", "USSouth2", "USSouth3", "USSouthWest", "USNorthWest", "USEast", "USEast2", "USEast3", "EUWest", "EUWest2", "EUEast", "EUNorth", "EUNorth2", "EUSouth", "EUSouthWest", "AsiaSouthEast", "AsiaEast" );
if ( file_exists( $accountsjs ) && is_readable( $accountsjs )) {
$js = file_get_contents( $accountsjs );
if(!preg_match_all( "/'([^'\n]+)'/", $js, $matches )) die( 'failed to find any account info' );
echo "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Config>\n\t<BuildVersion>27.7</BuildVersion>\n\t<MinorVersion>X11</MinorVersion>\n";
$email = '';
foreach ( $matches[1] as $i => $match) {
if ( stripos( $match, "@" ) !== FALSE ) {
$email = $match;
}
else if ( !empty( $email ))
{
echo <<<XML
<Client>
<Server>{$servers[ $i / 2 % count( $servers ) ]}</Server>
<GUID>{$email}</GUID>
<Password>{$match}</Password>
</Client>
XML;
$email = '';
}
}
echo "</Config>\n";
}
?>
I did it with regex: replace '(.+)': '(.+)', with <Client><Server>enter server</Server><GUID>$1</GUID><Password>$2</Password></Client>
Yeah, essentially what the script does is use regex on the accounts.js file and then output the XML. What did you use to do the regex? Do you have a command-line tool or something like that? The reason I used PHP was basically its ease of use.
Copy-pasted all the account lines from MD, then ran that replace in Notepad++.
Oh neat, I didn't know Notepad++ had that feature. I use Sublime Text 3 when I'm not doing C++ stuff; I'm sure it has something like that too, but I've never looked into it.
|
2025-04-01T04:35:44.890908
| 2017-03-10T15:46:05
|
213375582
|
{
"authors": [
"AlanBaumgartner",
"BlackRayquaza",
"VoOoLoX",
"Zeroeh",
"bluuman",
"toddw123"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11543",
"repo": "toddw123/RotMG_Clientless",
"url": "https://github.com/toddw123/RotMG_Clientless/issues/38"
}
|
gharchive/issue
|
GUI Client
I just started working on the GUI for the client and I'm wondering if anyone would like to help, or is it just a waste of time?
http://i.imgur.com/S2RQawF.gif
(Moving the player using the arrow keys)
What are you planning to make a GUI for? I don't really see the point of it, honestly. I guess you could have a listview that holds all the running clients and various information about each, but other than that there isn't much else a GUI would be useful for that I can think of.
But don't let me stop you if you want to do this; if you want to go and make something unique off this code I fully support that idea! If you have some idea in your mind that you want to make using this code, I would love to see what you put together! So don't feel like you are wasting your time if this is something you want; I can always make a new branch that you can add this to (apart from your own, of course) if you want it to be with the other stuff on this project 👍
My idea is to somewhat replicate current production client, render a basic preview of what you'd see in the actual client without all the features, since that would be too big project for me to manage and maintain
Gotcha. I have zero desire to make a full client, but don't let me stop you from doing it. I just don't see it being possible or making much sense when this program is intended to support multiple clients/accounts at a time. Not sure how that would be rendered, lol. I think there is at least one other person who watches this project that wanted to do a custom client, so you could always work with them if you decide to actually go that route.
I will say though, one thing I do recall about a couple of different bots for some older games I used to play, which I really liked, was that they would render a minimap in the GUI (that was the only graphics; the rest was just text values/buttons/etc). So instead of seeing the standard game, the bot would show just a black-and-white image of the current map (I guess this wouldn't work that well with RotMG since you don't get the full map right away) and then it would put a dot on the map to show you where it was in the game. And it would add other dots for mobs and whatnot. This made it slightly easier to keep track of where your bot was and what it was doing. I guess even copying the minimap style/system from the game client and having that in the GUI would be neat. But that's something I don't have any priority for right now, especially since I can't say that I would really want it in the end.
I also thought about trying to recreate the prod client in something other than Flash so that I could run the game better. My desktop's processor is very old and struggles to run Realm. Good thing I plan to get a Ryzen soon :^)
There is a lot in the client that I would rather not deal with making. In my opinion, the reason they used Flash to make the game instead of a better/faster language is that they wanted to do it quickly. The game was originally made during a game programming contest (https://forums.tigsource.com/index.php?topic=10310.0), and from the dates on the contest's starting post and ending post, they made it in about 2 months. So they went with Flash because you can create a game pretty quickly with it, whereas with other approaches you have to do a lot more work for the same exact output. Plus Flash can be played in a web browser directly from a website, and I think that was something they wanted, instead of having to download and install everything.
I know there have been attempts to make custom clients in the past; one of the resources I attempted to use when I started this project (but which ultimately ended up being useless for me) was Oryx Hates C. There was also something like jOryx, a custom client in Java that was actually very close to being finished before it stopped being worked on.
So while I have no plans to take this project in that direction, this code is free to use and I fully support anyone who wants to do something unique with it.
I'd like to recreate the client fully but it's way out of my skill set; even this simpler/basic client idea that I have will be hard to make. It's gonna be a client for a single account (pretty much a clone of the prod client but way simpler and with way fewer features... that is, if it all goes well). I've seen both Oryx Hates C and jOryx; I updated jOryx for the private server a while back but it was a total mess.
I didn't mean that in a way where I was asking you to do it; I meant it like I had similar thoughts and think it is a cool idea to try, although I would not want to do it myself.
i wouldnt mind helping @VoOoLoX
I wouldn't mind helping @VoOoLoX
See, there you go VoOoLoX. Told you there were a few watchers that wanted to go the route of a custom client.
And yes, I imagine it would be fairly easy to draw just a minimap for each client the program is running. I might eventually want something like this, but right now it's not on the list of things I am looking to add. In the future, maybe!
Adding a map gui is simple, just take the bmp data from the mapinfo packet and render it.
https://www.youtube.com/watch?v=Ys3PRMZsi28
Scellow or izilife made a really sick client, actually a few of them. Check this vid :)
I recall seeing that uploaded before. It looks like, from the comments, he made it in C#. Pretty neat.
I just don't have any desire to make and then maintain a custom client. A lot goes into that compared to just making a bot. With a bot I only have to worry about the packet structure and a few other things. With a full-on game client I'd have to do way more, and that's not what I started this project to do. I started this project to build a bot that would play the game for me, not make a client for me to continue playing the game 😃
If you guys do get into making a custom client: I've never really played with graphics much besides a few DirectX/OpenGL tutorials, but I've heard some decent things about Oxygine (http://oxygine.org/). It's a graphics engine in pure C++. Not sure what you used to make the GUI in the top post, whether it was just Win32 or something like SFML (which I've only looked at, never used).
Not too fond of game engines. I'd rather just use raw graphics buffers and manual rendering via opengl.
Why reinvent the wheel? Engines are built to make life easier.
I guess I'm just doing it for the learning aspect rather than making life easier.
Gotcha. Learning it is different, I think it's good to learn how to do it the hard way first, gives you more understanding of what's going on and appreciation later.
Mini progress update:
http://i.imgur.com/p1KqkpY.gif
Rendering wood floor tiles as pixels
(Gotta work on proper rendering, this is just for testing)
Neat. You never answered: what are you using to render the GUI/graphics right now?
Must have missed that in the text, I use SDL
Gotcha, I was looking at SDL the other day. And yes, you will probably have to modify movement to work differently; technically the bot only moves every 200ms, but that would be very choppy to watch.
You'd have to do frame interpolation to make it not so jumpy.
Closing this topic. If anyone wants to continue this discussion or ask for a GUI branch to be made or something just let me know.
|
2025-04-01T04:35:44.897577
| 2023-10-23T16:23:17
|
1957544046
|
{
"authors": [
"ajlittle",
"tofodroid"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11544",
"repo": "tofodroid/mimi-mod",
"url": "https://github.com/tofodroid/mimi-mod/issues/92"
}
|
gharchive/issue
|
[QUESTION] Looping Broadcasters
Question
Is there any way to loop Broadcaster blocks? Wasn't able to find any documentation on this
Hello! Apologies for the super late reply. Life got super crazy for a while and I had pretty much no time for the mod. I've just released Beta versions of a major 4.0.0 update for versions 1.19.2-1.20.4 which replaces Broadcasters with a new Server Transmitter block that supports looping!
Please let me know if you have any other issues!
|
2025-04-01T04:35:44.899590
| 2020-01-27T23:33:12
|
555897229
|
{
"authors": [
"rachelcgrant"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11545",
"repo": "tofugu/wanikani-knowledgebase",
"url": "https://github.com/tofugu/wanikani-knowledgebase/pull/33"
}
|
gharchive/pull-request
|
Added information to Audio section and created new section for credit card information
Changes proposed in this pull request:
The credentials to view the Netlify deploy is the following:
username: crabigator
password: googlebegone
Added audio gif with Kyoko and Kenichi's voices displayed.
|
2025-04-01T04:35:44.914028
| 2024-08-29T01:37:48
|
2493273463
|
{
"authors": [
"Zena-park",
"nguyenzung"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11546",
"repo": "tokamak-network/tokamak-thanos",
"url": "https://github.com/tokamak-network/tokamak-thanos/issues/231"
}
|
gharchive/issue
|
Change the bridges function's visibility to external
Since the bridge functions below make no internal calls, it is appropriate to change their visibility from public to external.
bridgeETH
https://github.com/tokamak-network/tokamak-thanos/blob/5ad9baac98217a0c1533969b00076d9a4443edba/packages/tokamak/contracts-bedrock/src/universal/StandardBridge.sol#L214-L215
function bridgeETH(uint256 _amount, uint32 _minGasLimit, bytes calldata _extraData) external payable onlyEOA {
bridgeETHTo
https://github.com/tokamak-network/tokamak-thanos/blob/5ad9baac98217a0c1533969b00076d9a4443edba/packages/tokamak/contracts-bedrock/src/universal/StandardBridge.sol#L230-L231
function bridgeETHTo(address _to, uint256 _amount, uint32 _minGasLimit, bytes calldata _extraData) external payable {
bridgeERC20
https://github.com/tokamak-network/tokamak-thanos/blob/5ad9baac98217a0c1533969b00076d9a4443edba/packages/tokamak/contracts-bedrock/src/universal/StandardBridge.sol#L245-L254
function bridgeERC20(
address _localToken,
address _remoteToken,
uint256 _amount,
uint32 _minGasLimit,
bytes calldata _extraData
)
external
onlyEOA
{
bridgeERC20To
https://github.com/tokamak-network/tokamak-thanos/blob/5ad9baac98217a0c1533969b00076d9a4443edba/packages/tokamak/contracts-bedrock/src/universal/StandardBridge.sol#L270-L279
function bridgeERC20To(
address _localToken,
address _remoteToken,
address _to,
uint256 _amount,
uint32 _minGasLimit,
bytes calldata _extraData
)
external
{
Hi @Zena-park
I think we should keep it. There are 2 reasons:
A public function can give us a better gas cost than an external function
Optimism uses public for bridging functions
I don't think there will be much difference in gas.
However, I think it is better to use external for functions that are only used in external calls, as a matter of basic convention.
Anyway, it is just a difference of opinion, not an error, so I will close the issue.
|
2025-04-01T04:35:44.916496
| 2017-07-02T02:28:08
|
239984206
|
{
"authors": [
"EFanZh",
"alexcrichton"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11547",
"repo": "tokio-rs/tokio-io",
"url": "https://github.com/tokio-rs/tokio-io/issues/56"
}
|
gharchive/issue
|
Is it possible for Copy to loop forever?
Here is some code from https://github.com/tokio-rs/tokio-io/blob/master/src/copy.rs#L73-L78:
while self.pos < self.cap {
let writer = self.writer.as_mut().unwrap();
let i = try_nb!(writer.write(&self.buf[self.pos..self.cap]));
self.pos += i;
self.amt += i as u64;
}
Is it possible for try_nb!(writer.write(&self.buf[self.pos..self.cap])) to return 0? If so, wouldn’t it cause the code to loop forever?
@EFanZh ah yes indeed! I think that this should explicitly handle the 0-write case and return an error. Do you want to send a PR?
@alexcrichton Sure. A pull request is submitted: #57.
Fixed in #57
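For illustration only (the real fix lives in the linked PR, and the names here are made up), a std-only sketch of the zero-write guard looks like this:

```rust
use std::io::{self, Write};

// A writer that always reports 0 bytes written, to simulate the
// pathological case discussed above.
struct ZeroWriter;

impl Write for ZeroWriter {
    fn write(&mut self, _buf: &[u8]) -> io::Result<usize> {
        Ok(0)
    }
    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

// Copy `buf` into `writer`, erroring out instead of spinning forever
// when the writer keeps returning Ok(0).
fn write_all_checked<W: Write>(writer: &mut W, buf: &[u8]) -> io::Result<u64> {
    let mut pos = 0;
    let mut amt = 0u64;
    while pos < buf.len() {
        let n = writer.write(&buf[pos..])?;
        if n == 0 {
            // Without this branch the loop would never terminate.
            return Err(io::Error::new(
                io::ErrorKind::WriteZero,
                "writer returned 0 bytes",
            ));
        }
        pos += n;
        amt += n as u64;
    }
    Ok(amt)
}

fn main() {
    let mut sink: Vec<u8> = Vec::new();
    // Writes all 5 bytes successfully.
    println!("{:?}", write_all_checked(&mut sink, b"hello"));
    // Fails with an error of kind WriteZero instead of looping forever.
    println!("{:?}", write_all_checked(&mut ZeroWriter, b"hello"));
}
```

This mirrors how `std::io::Write::write_all` treats a zero-length write as an error rather than retrying indefinitely.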
|
2025-04-01T04:35:44.932864
| 2022-12-08T02:00:08
|
1483367474
|
{
"authors": [
"bfreezie",
"tom5454"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11548",
"repo": "tom5454/CustomPlayerModels",
"url": "https://github.com/tom5454/CustomPlayerModels/issues/323"
}
|
gharchive/issue
|
Animation priority doesn't always apply to [Hidden by default] or the [Visible] button in the animation frame editor
I duplicated the player arms(including the held item) in order to have a set of arms that follows the player aim with the bow/crossbow, and then a set of arms that can do custom arm swing for the walking/running animation without holding the crossbow. The custom, non-additive walk animation is set to low priority with the appropriate arms toggled visible, and the non-additive crossbow holding animation is set to high with the appropriate arms set to visible.
Despite this, when I am holding the crossbow and start walking, the hidden arms of the low-priority walk animation appear alongside the high-priority arms of the crossbow hold, and the crossbow item itself jumps back to the position of the walking animation.
It's a known issue similar to this (#280). Make a 0x0x0-sized cube that controls the visible/hidden effect; animations that move parts can sometimes break the visibility.
This will be fixed when I clean up the project formats.
Thank you for the workaround
|
2025-04-01T04:35:44.934303
| 2020-07-26T20:48:36
|
665857486
|
{
"authors": [
"AK9Official",
"tom5454"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11549",
"repo": "tom5454/Toms-Storage",
"url": "https://github.com/tom5454/Toms-Storage/issues/10"
}
|
gharchive/issue
|
[1.16.1 Fabric] Liquids can't be placed with other mods added.
Duplicate of tom5454/Toms-Fabric-Lib#3
|
2025-04-01T04:35:44.938931
| 2015-10-25T10:18:00
|
113217874
|
{
"authors": [
"tomaka",
"tyoc213"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11550",
"repo": "tomaka/glium",
"url": "https://github.com/tomaka/glium/issues/1324"
}
|
gharchive/issue
|
Remove the dependency on "obj"
Because of dependency hell.
I just want a wavefront loading crate with no dependency at all. Just plain data. No vectors imported from somewhere else.
Doesn't it work in the toml with just obj = { version = "0.5", features = [], dev-dependencies = [] }? That way there is no genmesh in the final Cargo.lock (though I didn't understand it all: http://doc.crates.io/manifest.html#the-dev-dependencies-section and http://doc.crates.io/manifest.html#rules).
So in https://github.com/csherratt/obj/blob/master/Cargo.toml#L20, if you clone that repo and delete the optional dependencies, building it only generates obj without extra downloads in its .lock file (genmesh and cgmath, IIRC).
And searching for genmesh in the glium source, it only shows up in the examples https://github.com/tomaka/glium/blob/master/examples/support/mod.rs#L63 (and as a crate at the top).
If that is the change if you really need it I can make the PR, or you can do it directly ;).
If you remove genmesh from the Cargo.toml, then the code won't compile anymore. You can't use obj without using genmesh as well.
What I'd like to do is either make deep fixes in the obj library so that it doesn't need to depend on genmesh, or change the glium example code so that it uses a library other than obj.
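For context on why deleting lines from obj's Cargo.toml isn't enough: a dependency only becomes skippable when it is declared optional behind a feature and the code using it is cfg-gated. A hypothetical sketch (version number made up):

```toml
[dependencies]
genmesh = { version = "0.4", optional = true }

[features]
default = []
with-genmesh = ["genmesh"]
```

Since obj's public types use genmesh unconditionally, it would need this kind of gating (plus `#[cfg(feature = "with-genmesh")]` on the relevant code) before the dependency could really be dropped.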
How does this relate to #602 (because it uses a vec library https://github.com/simnalamburt/obj-rs/blob/master/Cargo.toml#L24)?
|
2025-04-01T04:35:44.943489
| 2017-06-17T06:50:52
|
236643711
|
{
"authors": [
"ZeGentzy",
"kvark",
"mitchmindtree"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11551",
"repo": "tomaka/glutin",
"url": "https://github.com/tomaka/glutin/issues/898"
}
|
gharchive/issue
|
Move glutin to the new glium organisation?
Recently, glium was moved to its own organisation in order to make it more obvious that glium is now moreso maintained by the community around it as @tomaka (the original maintainer) will be busy with other projects for the foreseeable future. See the discussion here.
I think it would be worth moving glutin to the glium organisation for the same reasons - to indicate that glutin is now moreso maintained by the community than by tomaka.
I think the glium organisation is suitable as their histories are intertwined, they were founded by the same author and although neither require the other, they were originally designed to work as building blocks towards a common purpose.
To be clear, I don't mean to suggest that glium requires/prioritises glutin or vice versa (e.g. I believe gfx use glutin without glium and some sdl2 users use glium without glutin), just that glium is probably the most reasonable community organisation that currently exists that could host glutin for community development. Similarly to how webrender is maintained under the servo organisation, despite not being exclusively useful to servo (e.g. this user is attempting to use it for game UI).
Alternative
Another organisation could be created specifically for glutin. However, this might require more work on behalf of tomaka in terms of setup, and more work for maintainers to keep track of both organisations.
I think it's a good move to take some load off @tomaka as they are no longer focused on GL. Moving glutin into the same group as glium doesn't sound too bad, as long as:
the group itself is not called glium, since that signifies heavy affiliation
winit is left out of the group
Closing in favor of discussion over at winit.
|
2025-04-01T04:35:44.944651
| 2017-10-11T08:54:12
|
264505579
|
{
"authors": [
"dariost",
"tomaka"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11552",
"repo": "tomaka/glutin",
"url": "https://github.com/tomaka/glutin/pull/944"
}
|
gharchive/pull-request
|
Add OpenGL 4.6 to GLX context creation
OpenGL 4.6 has been released; this PR adds the needed check when creating a GLX context.
Thanks
|
2025-04-01T04:35:44.950095
| 2017-10-20T13:44:17
|
267180572
|
{
"authors": [
"kryptan",
"tomaka"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11553",
"repo": "tomaka/winit",
"url": "https://github.com/tomaka/winit/pull/327"
}
|
gharchive/pull-request
|
Use EnumDisplayMonitors to enumerate monitors on Windows
This PR changes monitor enumeration on Windows to use EnumDisplayMonitors instead of EnumDisplayDevices. The difference between them is that EnumDisplayDevices returns raw information while EnumDisplayMonitors returns mapped and modified information that applications should actually use.
More specifically:
When there are two monitors which duplicate each other, EnumDisplayDevices will list both monitors while EnumDisplayMonitors will only list just one.
EnumDisplayMonitors will allow us to get the correct DPI value for a monitor using GetDpiForMonitor. EnumDisplayDevices also gives us DPI in the dmLogPixels field, but this is the raw DPI of the monitor, not the actual value that the user has set in the settings. This PR does not yet implement any support for DPI; I intend to implement all of the newly introduced high-DPI API for Windows in the next PR.
EnumDisplayDevices always gives the physical resolution of the monitor, EnumDisplayMonitors gives us the value in physical pixels if application is DPI aware and in logical pixels if it is not.
Other differences:
EnumDisplayDevices gives us a user-friendly name for each monitor. For me it always returns "Generic PnP Monitor". EnumDisplayMonitors doesn't give us any user friendly name at all, I resorted to using name of the adapter for MonitorId::get_name(), i.e. it will return something like "\\.\DISPLAY1" instead of "Generic PnP Monitor".
MonitorId::native_id() will now return name of the adapter instead of monitor, i.e. instead of "\\.\DISPLAY1\Monitor0" you will get "\\.\DISPLAY1".
I also added MonitorIdExt::hmonitor() method to return winapi::HMONITOR for each monitor.
Except for the difference in the value returned by MonitorId::get_name() I think all other changes are good.
Thanks, looks good.
I added requested changes
Is there anything else that needs to be done to merge this? I want to submit the next PR but it depends on this one.
Sorry, forgot about it.
I wish github had a button to automatically merge after CI passes.
|
2025-04-01T04:35:44.960882
| 2021-12-22T06:27:37
|
1086495498
|
{
"authors": [
"tomara-x"
],
"license": "WTFPL",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11554",
"repo": "tomara-x/witches",
"url": "https://github.com/tomara-x/witches/issues/15"
}
|
gharchive/issue
|
Basemath instance drift
When running 2 instances of uBasemath (same behavior must happen with Basemath) with one of them having steps of int lengths and the other with fractional steps
kl1[] fillarray 1, 1, .2,.2,.2,.2,.2,.2,.2,.2,.2,.2
kl2[] fillarray 1, 3
The first-step trigger on both of them should happen at the same time, since the lengths add up.
It does happen for a while, but the drift keeps increasing. (around 0.2 seconds after 4 minutes)
I'm thinking floating point inaccuracy from the time unit division:
#define TEMPO #128#
ktimeunit = 1/($TEMPO/60)
This is printed as "0.46875" but I'm not sure how csound handles it internally or how to control the variable bit size.
Imma try with time-unit=1 and see..
edit: nope, I was totally wrong about the floating point thing.
july, 27 2022 edit: okay maybe not totally wrong, kiddo
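As a quick sanity check on the floating-point theory (a hedged sketch in Python, outside Csound, so only illustrative): 1/(128/60) is exactly representable in binary, while repeatedly accumulating a step like 0.2 genuinely drifts.

```python
# Illustrative check of the two floating-point suspects discussed above.

# The ktimeunit computed for TEMPO=128: 0.46875 is 15/32,
# so it IS exactly representable in binary floating point.
time_unit = 1 / (128 / 60)
print(time_unit, time_unit == 0.46875)

# Accumulating a non-representable step like 0.2, however, drifts:
acc = 0.0
for _ in range(10):
    acc += 0.2
print(acc, acc == 2.0)  # the sum lands slightly off 2.0
```

So the time-unit division itself is exact here, which fits the "totally wrong" edit; any float-related drift would have to come from accumulation elsewhere.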
ooo even this is problematic:
kl1[] fillarray 2, 2
kl2[] fillarray 2, 1, 1
(still on tempo=128)
lmao even at time-unit=1 and integer lengths there's still a drift
kl1[] fillarray 2, 2
k1AS, kb1[] uBasemath 1, kl1
schedkwhen kb1[0], 0, 0, "test", 0, .05, 440*(2^4)
kl2[] fillarray 2, 1, 1
k2AS, kb2[] uBasemath 1, kl2
schedkwhen kb2[0], 0, 0, "test", 0, .05, 440*(2^3)
^ this is weird! it stops drifting after a bit
rtevent: T 4.001 TT 4.001 M: -3.05 -3.05
rtevent: T 4.002 TT 4.002 M: -7.96 -7.96
rtevent: T 8.001 TT 8.001 M: -1.97 -1.97
rtevent: T 12.000 TT 12.000 M: -3.05 -3.05
rtevent: T 12.001 TT 12.001 M: -7.96 -7.96
rtevent: T 15.999 TT 15.999 M: -1.97 -1.97
rtevent: T 16.001 TT 16.001 M: -7.96 -7.96
rtevent: T 19.998 TT 19.998 M: -2.38 -2.38
rtevent: T 20.001 TT 20.001 M: -7.96 -7.96
rtevent: T 23.997 TT 23.997 M: -2.26 -2.26
rtevent: T 24.001 TT 24.001 M: -7.96 -7.96
rtevent: T 27.996 TT 27.996 M: -2.02 -2.02
rtevent: T 28.001 TT 28.001 M: -7.96 -7.96
rtevent: T 31.995 TT 31.995 M: -2.84 -2.84
rtevent: T 32.001 TT 32.001 M: -7.96 -7.96
rtevent: T 35.994 TT 35.994 M: -1.95 -1.95
rtevent: T 36.001 TT 36.001 M: -7.96 -7.96
rtevent: T 39.993 TT 39.993 M: -2.52 -2.52
rtevent: T 40.001 TT 40.001 M: -7.96 -7.96
rtevent: T 43.993 TT 43.993 M: -2.15 -2.15
soooo I have no clue!
I hate to say it.. I very much hate to say it, but it must be the metro...
time for a manual dive
I don't wanna go the clock way! Give it a clock input that's as fast as you want the resolution to be and the lengths become counters (clock dividers). It would solve this since the two instances would run on the same clock. But we'd lose the fractional stuff and it's so cool and easy! You'd have to have a multiplied x15 clock on the outside and go:
15, 5,5,5, 15, 3,3,3,3,3 instead of the 1, 1/3, 1/3, 1/3, 1, 1/5, 1/5, 1/5, 1/5, 1/5
we lose the divisions too.. these are very sad times!
oh yeah! you can just * an array! so have fun with the fractions and then multiply it until it's just ints! (◠‿☆)
yeah, the divisions though...
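The multiply-until-ints idea above can be sketched like this (Python, with a hypothetical helper name, just to check the arithmetic):

```python
from fractions import Fraction
from math import lcm

def to_int_lengths(lengths):
    """Hypothetical helper: scale fractional step lengths by the LCM of
    their denominators so every length becomes an integer divider count."""
    fracs = [Fraction(x) for x in lengths]
    scale = lcm(*(f.denominator for f in fracs))
    return scale, [int(f * scale) for f in fracs]

# The example from above: 1, 1/3 x3 and 1, 1/5 x5 become an x15 clock.
scale, ints = to_int_lengths(
    [1, Fraction(1, 3), Fraction(1, 3), Fraction(1, 3), 1]
    + [Fraction(1, 5)] * 5
)
print(scale, ints)  # 15 [15, 5, 5, 5, 15, 3, 3, 3, 3, 3]
```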
ooo I already have a clock divider!
would this be fixed if instead of the metro, we use a phasor and reset the phase each step? isn't the metro already at phase 0 when it outputs a click though?
I'm hungry!
I haven't been working on the clocked versions, been playing around with FM and the numeric score (and pondering the orb of course)
I wanna investigate why this happens in the first place. Like, I get where the problem is (when we switch the frequency of the metro) but I don't understand why this is a problem.
It's kinda interesting so I'll leave doing this here, but I probably won't change the Basemathes so I'll switch it to an investigation, and move to a new issue where I track the implementation of the BasemathC and uBasemathC
tBasemath rules!
i'll have to move basma.orc to tbasma.orc cause will need an equal version?
still will have to do the flexible-active-step versions of all of them
a power-of-2 sampling rate and ksmps solves this
by the way, since this turned out to be a metro thing, not a basmath thing, using more than one metro to run different tBasmath would still have the possibility of drifting if under the same sr/kr conditions. the only way to keep them in sync would be to either do the sr thing or to run them using the same metro. (that's as far as i understand it now. maybe i'll figure out other ways to work around this, maybe syncphasor? something)
not a perfect solution, there are still some frequencies which will go out of sync (it's so much better though)
also i wanted to test the hrtf opcodes which depend on sr being 44.1, 48, or 96kHz so...
|
2025-04-01T04:35:44.966187
| 2015-03-24T20:29:38
|
64092318
|
{
"authors": [
"blakemcbride",
"tomas"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11555",
"repo": "tomas/needle",
"url": "https://github.com/tomas/needle/issues/113"
}
|
gharchive/issue
|
Post is displaying debug info
When I do a post, I am seeing:
Making request #1 { protocol: 'http:',
host: 'localhost',
port: '9200',
path: '/components-1427228775787',
method: 'post',
headers:
{ Accept: 'application/json',
Connection: 'close',
'User-Agent': 'Needle/0.8.2 (Node.js v0.12.0; linux x64)',
'Content-Type': 'application/json',
'Content-Length': 102,
Host: 'localhost:9200' } }
Got response { 'content-type': 'application/json; charset=UTF-8',
'content-length': '21' }
The post function is printing this to the console. Can it be turned off?
Thanks.
Blake McBride
You're probably setting a DEBUG env var. This will be removed once we replace the current in-house debugging system with the debug module itself.
blake@blake-Dell-M6800 ~ $ echo $DEBUG
blake@blake-Dell-M6800 ~ $
No DEBUG set. I am running it through WebStorm & node.
Thanks.
Blake
That's odd. What do you get with?
node -e "console.log(process.env.DEBUG)"
Anyway, I just pushed Needle v0.9.0 that replaces the current console.log with the debug module. You should no longer have this problem.
blake@blake-Dell-M6800 ~/Downloads $ node -e "console.log(process.env.DEBUG)"
undefined
blake@blake-Dell-M6800 ~/Downloads $
Then I guess there's some other module that's putting that env variable before Needle checks for its presence. Did you try 0.9.0?
It isn't there.
blake@blake-Dell-M6800 ~/ComponentServer $ npm update needle
<EMAIL_ADDRESS>node_modules/needle
└──<EMAIL_ADDRESS>blake@blake-Dell-M6800 ~/ComponentServer $
Thanks for all the help!
Try with:
npm install<EMAIL_ADDRESS>--save
That fixed everything. Thanks a lot!
|
2025-04-01T04:35:44.975265
| 2020-02-20T13:00:21
|
568283324
|
{
"authors": [
"AlirezaEbrahimkhani",
"mucan",
"tomastrajan",
"yousafnawaz"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11556",
"repo": "tomastrajan/angular-ngrx-material-starter",
"url": "https://github.com/tomastrajan/angular-ngrx-material-starter/issues/513"
}
|
gharchive/issue
|
Use Angular Material TestHarness in tests
It would be great to migrate tests to use new Angular Material Component Test Harness in the unit tests.
The migration should be mostly mechanical as it's basically replacing ad-hoc selectors with the selectors provided by the @angular/cdk.
Test can be run in watch mode using npm run watch...
Please write comment if you wanna migrate some component tests so that we can prevent duplicate effort
Hi! I would like to give it a try.
@mcanoglu thank you for great contribution!
For others, there are still many components that can be migrated to use TestHarness and now you even have great example of how to do it by @mcanoglu !
Hi @tomastrajan, I would like to migrate some components to use Angular Material Component Test Harness: https://material.angular.io/guide/using-component-harnesses
Hi @tomastrajan
I'm very happy to be able to help with this migration. I'm starting with crud.component.ts, and after finishing, I will let you know the next component I will work on ...
|
2025-04-01T04:35:44.991810
| 2022-06-16T14:53:45
|
1273675587
|
{
"authors": [
"Alexey-T",
"aguador",
"davidbannon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11557",
"repo": "tomboy-notes/tomboy-ng",
"url": "https://github.com/tomboy-notes/tomboy-ng/issues/266"
}
|
gharchive/issue
|
'about' dialog has weird wrapping of info text, Xubuntu 20
Davo, I just looked at the .po file and the first three lines are separate strings, which is why it has the odd breaks. Any way to make that a single string? Roy
From memory, the Mac had trouble rendering a single string, but that might have been on Carbon rather than Cocoa. And, over time, the content has changed there, explaining the different lengths of the strings.
Does it seem that important ?
Davo
I have reopened this issue because Alexey has raised a good point.
Davo
The other option is a single string with line breaks (br between <>) inserted manually. While that has a similar effect to three separate strings, it is easier to adjust manually.
Note that when translating one simply moves those breaks around to produce a similar format in the target language message.
OK, just looking at this now. I wonder if its time that message was shortened ? Drop the lecture about backups ?
tomboy-ng, a note tool built using FPC and Lazarus.
https://github.com/tomboy-notes/tomboy-ng
ver XXX
build date XXX
target XXX
That way there is no wrapping to worry about and two fewer strings to translate?
Davo
All for ending lectures. A shame to lose the note about the relationship with Tboy, but at this point it seems unnecessary. Tomboy is dead. Long live tomboy-ng! (Sorry, couldn't resist that.)
Yeah, pretty good.
According to South Park, it was 22 years before we could make jokes about AIDS, you have set a much shorter time here. Pretty good !
Davo
|
2025-04-01T04:35:44.992890
| 2022-10-16T13:01:08
|
1410482054
|
{
"authors": [
"tombulled"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11558",
"repo": "tombulled/param",
"url": "https://github.com/tombulled/param/issues/21"
}
|
gharchive/issue
|
Implement typing.BiConsumer
T_contra = TypeVar("T_contra", contravariant=True)
U_contra = TypeVar("U_contra", contravariant=True)
class BiConsumer(Protocol[T_contra, U_contra]):
    def __call__(self, t: T_contra, u: U_contra, /) -> None:
        ...
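For context, here's a self-contained sketch of how such a protocol could be used (the log_pair example is hypothetical, not part of the library):

```python
from typing import Protocol, TypeVar

T_contra = TypeVar("T_contra", contravariant=True)
U_contra = TypeVar("U_contra", contravariant=True)

class BiConsumer(Protocol[T_contra, U_contra]):
    def __call__(self, t: T_contra, u: U_contra, /) -> None:
        ...

# Hypothetical usage: any two-argument callable returning None conforms
# structurally, with no explicit subclassing needed.
def log_pair(key: str, value: int) -> None:
    print(f"{key}={value}")

consumer: BiConsumer[str, int] = log_pair
consumer("answer", 42)  # prints: answer=42
```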
Is this still needed? Can't figure out where/if this is required
|
2025-04-01T04:35:45.026522
| 2021-06-28T11:46:46
|
931483633
|
{
"authors": [
"Trolltrolling",
"khaihkd"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11559",
"repo": "tomochain/tokens",
"url": "https://github.com/tomochain/tokens/issues/311"
}
|
gharchive/issue
|
Made by Ra's al Ghul League
Token Address
e.g: https://scan.tomochain.com/address/0x
Token Symbol
e.g: TOMO, UFO
Token Name
e.g: TomoChain, UFO Ianau
Token Logo
PNG 256px by 256px
https://ufoinu.com/coin_logo/ufo_icon.png
Token Description
A clear and concise description of the token.
We will use a kind of artificial intelligence technology beyond the comprehensiveness of today humanity.
Website
Link to the website that will use the token.
http://ufoinu.com/
Social links
e.g Github, telegram, twitter
http://ufoinu.com/
#312
|
2025-04-01T04:35:45.033356
| 2019-04-03T09:57:39
|
428675179
|
{
"authors": [
"coveralls",
"pqv199x"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11560",
"repo": "tomochain/tomomaster",
"url": "https://github.com/tomochain/tomomaster/pull/587"
}
|
gharchive/pull-request
|
show candidate name and capacity of masternode in voter tx table
Pull Request Test Coverage Report for Build 1210
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 79.47%
Totals
Change from base Build 1209:
0.0%
Covered Lines:
92
Relevant Lines:
103
💛 - Coveralls
|
2025-04-01T04:35:45.047588
| 2024-10-01T20:49:04
|
2560185638
|
{
"authors": [
"cyberlord-coder-228",
"tompi"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11561",
"repo": "tompi/cheapino",
"url": "https://github.com/tompi/cheapino/issues/82"
}
|
gharchive/issue
|
Entire top row not working
Hi, I'm in the process of building my first cheapino, and I've noticed that the entire top row and the outermost thumb key on the left side of the board don't register.
I haven't touched the right side yet. Only soldered the diodes and MCU in, and flashed the MCU. (Haven't even clipped the diodes' legs off, as you can see in the picture))
Shorting keys in the bottom & middle rows and 2 of the 3 thumb keys gives correct key presses, but the top row does not register. I've tried shorting some pin pairs on the MCU directly, and I managed to get q, w, and e.
In the troubleshooting section it is mentioned that an entire row not working is probably due to bad connections on the MCU or rj-45. Since I only use one part of the board, I don't think it's rj-45 related. Didn't notice any shorts in its place either.
I've rechecked and redone all the MCU pins (and socket pins) several times, but the issue persists. Granted, the soldering job is very sloppy, as it is my first soldering project, but I couldn't detect any shorts or any other obvious issues with it.
All diodes are also directed "line to the square".
I didn't manage to get the vial website working, and just tested the keyboard by entering some text, if that's relevant.
So, what direction should I take to troubleshoot it further?
// Those are the best quality pics that my phone can make //
Btw, thanks for awesome project :3
The lone thumb key not working is probably a defective diode. Check with a multimeter? (Or just replace it.)
For the row: I suggest you test with the vial fw and use the matrix tester there.
Can you trigger the row by shorting 2 and 2 of the bridge pads (just 2 of the lowest pads, don't bridge them)?
Thank you for such quick reply :3
And thank you for all the advice
As for the thumb diode, I don't have tools right now, so I'll check more thoroughly tomorrow. But I noticed that shorting one of the closer circles + the circle part of the diode gives some output. Not on this key, though.
I'm not sure what you meant about vial. Should I install a native app and fiddle with that a bit?
Here's the matrix tester's result
And I'm completely not sure what you meant about shorting 2 and 2 of the pads.
I tried poking at pins and connections until I got some output) Two big square points, where the hotswaps are to be soldered to, give output. So do diagonal circles (that's how I tested them), and so do two bottom circles, or two top circles.
If you meant the parts that must be bridged on the right part of the board, then no, no combination that I've tried seems to give any kind of output. I did, however, get a few keys by shorting the rj-45 pins.
I played a bit with shorting pins on the MCU and (apart from pretty sparks on pins... wait, GND and 5U?... damn), I think I triggered reset, cause all of the keys stopped working.
I reflashed it, and 1) I noticed that my system did not automatically detect the MCU as a flash drive, as it did the first time when I connected it (the first time I dismissed it as some type of Linux magic, but now I think that something was shorted somewhere, and I fixed it when I resoldered the MCU pins; which is good, probably).
And 2) some more buttons don't work now.
It's... bizarre. But I think it reinforces that something is terribly wrong with the MCU
Okay, sparks are not a good sign :(
You should only short 2 and 2 data pins on the mcu, never ground or power...
But is sounds like the mcu is damaged...
Good thing you socketed the mcu.
I would take out the mcu and try shorting 2 and 2 of rows: gp27-gp29 and gp8, and columns: gp14,gp15,gp26
If you are not able to completely fill the vial test matrix doing this your mcu is not working...
Ohhhh, that's what you meant by 2 and 2...
I did not take out the MCU (I'm intimidated by desoldering it, I barely figured out how to solder that beast in), just tried that test as is, by poking (diode legs') pin pairs {27-29} and {14-26}. Everything works :|
But on the actual keys' pins everything is as it was after everything got worse)
// like that
Another interesting observation is that I cannot trigger the pins by touching solder "volcanoes", only diode legs. Not sure if that's intended behavior or not.
Oh and not sure if I overstated the sparks, that was just arcs, not New Year type performance
Ok. That's a good sign that you can trigger them. The mcu should be fine then.
You probably have some bad soldering, or maybe the legs that go into the socket are bent/not long enough? You did socket the mcu it seems, so it should pull right out.
Anyway, can you trigger the keys from the underside of the board as well? If not, it's the soldering of the socket or the socket pins that's the problem.
Try reflowing on both sides. Use plenty of flux and correct temp, and do not spend too much time...
Another possibility is that you burnt off or scratched off one column trace and one row trace...
Hey, just made a tool for debugging these cases, check it out if you haven't given up:
https://tompi.github.io/cheapino/doc/troubleshooting/routing.html
Should be easy to narrow down which joints to check.
|
2025-04-01T04:35:45.063800
| 2023-05-05T20:01:32
|
1698131767
|
{
"authors": [
"AirconLah",
"tomzorz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11562",
"repo": "tomzorz/Sodalite",
"url": "https://github.com/tomzorz/Sodalite/issues/11"
}
|
gharchive/issue
|
Notes become left-indent
Hi there, I noticed that all the notes became left-indented instead of the centered view (default) after I updated to your recent patch. May I know if this is on purpose?
@AirconLah oh huh, could you show me with a screenshot what you mean exactly?
Currently it looks like this:
Before the upgrade (this is not a Sodalite theme, but rather to demonstrate the differences):
This is before the update:
Ooooh I see what you mean now. The first picture is how it used to be in my theme until they reworked themes, which broke it. I now changed it back, but I see you prefer the original design with large paddings.
Hmm, I think I can provide you with a css snippet you can enable to restore that.
That would be fantastic because I've been used to the original design for a long time and would love to have the original version.
How can I obtain your CSS snippet? =)
I'll write it sometime soon™, upload it to the repo and I'll @ you when it's there :)
Thanks mate! You have been a great help! :)
@AirconLah sorry it took me a while, but this snippet should make the content 700px wide as your screenshot has it https://github.com/tomzorz/Sodalite/blob/main/snippets/content-max-width-700.css - I've also added a 900px and 1100px version
|
2025-04-01T04:35:45.065235
| 2022-11-20T11:20:14
|
1456865278
|
{
"authors": [
"KuznetsovNikita",
"shaharyakir"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11563",
"repo": "ton-blockchain/minter",
"url": "https://github.com/ton-blockchain/minter/issues/131"
}
|
gharchive/issue
|
Support OpenMask extension wallet
It would be nice to support OpenMask extension wallet!
The PR:
https://github.com/ton-blockchain/minter/pull/132
merged via #157
|
2025-04-01T04:35:45.067922
| 2024-02-25T17:20:49
|
2152859886
|
{
"authors": [
"gitjeet",
"markdrrr",
"samyarkd",
"tina1998612",
"zapletnev"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11564",
"repo": "ton-community/assets-sdk",
"url": "https://github.com/ton-community/assets-sdk/issues/11"
}
|
gharchive/issue
|
Error: Cannot find module 'boxen'
I try code transfer-jettons.ts and get this error.
npx ts-node .\transfer-jettons.ts
Error: Cannot find module 'boxen'
The same
@markdrrr temporarily you can install the boxen package globally:
npm install -g<EMAIL_ADDRESS>
I have this issue too; installing it globally did not work for me
same error
This worked for me: npm install -g<EMAIL_ADDRESS>. After this there was an issue with chalk;
this fixes it:
npm install -g<EMAIL_ADDRESS>
|
2025-04-01T04:35:45.084632
| 2022-12-10T18:01:05
|
1488758679
|
{
"authors": [
"AntoineKM",
"maddy020"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11565",
"repo": "tonightpass/kitchen",
"url": "https://github.com/tonightpass/kitchen/pull/80"
}
|
gharchive/pull-request
|
button.mdx file updated
Solved issue number #70
Button.mdx file updated
On each review, don't forget to click on the Resolve conversation button when you're done fixing it in order to be well organized! 😄
On each review, don't forget to click on the Resolve conversation button when you're done fixing it in order to be well organized! 😄
Ok Sure
I have done the changes
Please review and let me know if any changes required
There are still some reviews that remain unresolved; otherwise all is good, bravo!
Ok
Thanks for reviewing
Thank you @maddy020, for your contribution, you helped us a lot, don't hesitate to give us a star ⭐ it will surely help us to become more popular! 🚀
|
2025-04-01T04:35:45.097909
| 2019-04-29T19:29:36
|
438471113
|
{
"authors": [
"isaacsanders",
"jdemilledt",
"matehat",
"tony612"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11566",
"repo": "tony612/google-protos",
"url": "https://github.com/tony612/google-protos/pull/4"
}
|
gharchive/pull-request
|
Add Google.Protobuf.Any.Codec module
This new module provides pack/1 and unpack/2 functions to manipulate google.protobuf.Any structs. This follows the equivalent Python API.
This PR depends on https://github.com/tony612/protobuf-elixir/pull/54
@tony612 any chance this and tony612/protobuf-elixir#54 can be reviewed and merged?
I don't understand why you want to add this. You can write a module by yourself in your code.
Language implementations of the google.protobuf messages need to support the google.protobuf.Any message. This message is useless if you don't provide the canonical Pack() and Unpack() equivalents. It's what the google.protobuf.Any type is used for. If your library provides the structure, it needs to provide the code to manipulate it, don't you think?
I think this documentation is why @matehat made this PR: https://developers.google.com/protocol-buffers/docs/proto3#any
This says that libraries should provide this.
If you (@tony612) have your own use case for this software, and don't want to/can't devote time to supporting the use case that others have, I think there might be interest in helping to build/maintain new features.
@isaacsanders @tony612 yep, I'd be up for it. Our company uses protobuf extensively through Elixir, Python and Dart, and we'd definitely be able to sponsor work into this particular library when needs arise.
@matehat @isaacsanders As @demilletech has a use for Protobuf in Elixir, we have made a fork we plan to maintain. Feel free to reopen this PR there.
|
2025-04-01T04:35:45.102061
| 2024-03-06T03:22:41
|
2170546526
|
{
"authors": [
"alex-liupeng",
"tonydangblog"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:11567",
"repo": "tonydangblog/liveview-svelte-pwa",
"url": "https://github.com/tonydangblog/liveview-svelte-pwa/issues/1"
}
|
gharchive/issue
|
There is no UserStates module in the project.
↳ :elixir_compiler_1.FILE/1, at: priv/repo/seeds.exs:18
** (Protocol.UndefinedError) protocol Ecto.Queryable not implemented for LiveViewSvelteOfflineDemo.UserStates.UserState of type Atom, the given module does not exist. This protocol is implemented for the following type(s): Atom, BitString, Ecto.Query, Ecto.SubQuery, Tuple
(ecto 3.11.0) lib/ecto/queryable.ex:41: Ecto.Queryable.Atom.to_query/1
(ecto 3.11.0) lib/ecto/repo/queryable.ex:184: Ecto.Repo.Queryable.delete_all/3
priv/repo/seeds.exs:19: (file)
(elixir 1.16.1) lib/code.ex:1485: Code.require_file/2
(mix 1.16.1) lib/mix/tasks/run.ex:146: Mix.Tasks.Run.run/5
(mix 1.16.1) lib/mix/tasks/run.ex:85: Mix.Tasks.Run.run/1
(mix 1.16.1) lib/mix/task.ex:478: anonymous fn/3 in Mix.Task.run_task/5
(mix 1.16.1) lib/mix/task.ex:544: Mix.Task.run_alias/6
(mix 1.16.1) lib/mix/cli.ex:96: Mix.CLI.run_task/2
/home/alex/.asdf/installs/elixir/1.16.1-otp-26/bin/mix:2: (file)
Hi! Thanks for the catch. This seed script is outdated. There used to be a UserStates module in a previous version of the app when I had a hand-rolled CRDT implementation. However, after I switched to using Yjs as my CRDT library, this module was no longer used and I no longer had a seed script.
I went ahead and deleted the seed script so that there isn't any confusion!
|