id | text | source | values | created | added | metadata
|---|---|---|---|---|---|---|
344346498 | Feature/upload symbols to tpa action
Created a new action called upload_symbols_to_tpa_action which allows uploading only dSYM files directly to TPA.
This is useful for apps with bitcode enabled, where dSYM files need to be downloaded from App Store Connect.
@kringelbach, @bjarkehs please review again
Nice work! 🚀
Best test cases evah 🎉
We have already implemented it on LEGO Life.
| gharchive/pull-request | 2018-07-25T08:38:51 | 2025-04-01T04:33:08.784460 | {
"authors": [
"Fogh",
"Kumuluzz",
"mbogh"
],
"repo": "ThePerfectApp/fastlane-plugin-tpa",
"url": "https://github.com/ThePerfectApp/fastlane-plugin-tpa/pull/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1637079742 | Harpoon menu conflicts with LSPsaga popup
When I open Lspsaga lsp_finder, harpoon menu also shows up. Can anyone please help me with this, many thanks
Is this when you're using a default keymap? Or in any scenario?
If it's a keymap conflict and you're using a Lua configuration for Neovim, you can redefine the keymaps for either plugin:
Default lspsaga keymaps (use as an example to set your own): https://github.com/glepnir/lspsaga.nvim#example-configuration
Use harpoon's API with Neovim keymap settings: https://github.com/ThePrimeagen/harpoon#-harpooning, https://neovim.io/doc/user/lua-guide.html#lua-guide-mappings-set
@timothyis I don't think it's a keymap conflict
-- Harpoon
local ui = require("harpoon.ui")
vim.keymap.set("n", "<C-e>", ui.toggle_quick_menu, { silent = true, noremap = true })
-- LSP Saga
vim.keymap.set("n", "gh", "<cmd>Lspsaga lsp_finder<CR>", { silent = true })
Also experiencing the same problem. Have to switch back to native lsp hover rather than lsp saga
Same here.
My keymap for harpoon is vim.keymap.set("n", "<C-m>", ui.toggle_quick_menu). It's even weirder because the menu also opens when I press return...
Same issue, happens 90% of the time I use lspsaga lsp_finder.
Having the same issue. It's not conflicting with a keymap; it's probably something else. Here are my error and stack trace:
Error detected while processing CursorMoved Autocommands for "<buffer=10>":
Error executing lua callback: ...te/pack/packer/start/lspsaga.nvim/lua/lspsaga/finder.lua:497: Line number outside range
stack traceback:
	[C]: in function 'nvim_buf_add_highlight'
	...te/pack/packer/start/lspsaga.nvim/lua/lspsaga/finder.lua:497: in function <...te/pack/packer/start/lspsaga.nvim/lua/lspsaga/finder.lua:458>
Same issue
Possibly related?
https://github.com/nvimdev/lspsaga.nvim/issues/1070
Turns out this issue and the one I linked in my previous comment are related.
All of us (besides one) use <C-e> to open harpoon (this is the binding that Prime uses, and it seemed sane). However, Lspsaga seemingly inputs <C-e> when popup windows are created/rendered. I checked with @nguyenanhhao221 and @koyuncukaan and verified that they are also using <C-e>; however, I do not understand how @fcarvalhopacheco's issue with their <C-m> binding fits in. I do know that <C-m> is bound to <CR> by default in vim/linux/unix/dos, and maybe that's somehow causing the problem? Honestly stumped on that one.
I've alerted the lead dev of Lspsaga, and hopefully this gets fixed soon. For the time being, if you wish to continue using Lspsaga's call hierarchy/peek definition/lsp_finder, you cannot rebind <C-e> at all, as the default vim binding is required for Lspsaga to function.
Lastly, @LeVuMinhHuy could you please close this issue as it pertains to Lspsaga and not harpoon?
| gharchive/issue | 2023-03-23T08:38:04 | 2025-04-01T04:33:08.800615 | {
"authors": [
"JasonPanosso",
"LeVuMinhHuy",
"fcarvalhopacheco",
"koyuncukaan",
"nbili",
"nguyenanhhao221",
"timothyis"
],
"repo": "ThePrimeagen/harpoon",
"url": "https://github.com/ThePrimeagen/harpoon/issues/267",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2129262844 | Added new config variable API_BASE_URL
Added new config variable API_BASE_URL.
Removed old OPENAI_BASE_URL.
Minor fixes in show_messages.
Works brilliantly for me running ollama in a docker container with 0.0.0.0:11434->11434/tcp, :::11434->11434/tcp mapped. From the host running the docker container, API_BASE_URL=default finds it without issue, and from another device on the same LAN API_BASE_URL=http://<ipv4:port> finds it and similarly works without issue.
I saw you mention somewhere that you were looking for people to test, so this is just my confirmation. I'll probably try it out with the listener endpoints in text-generation-webui as well and can comment on that here.
Brilliant work. Been using sgpt for a while now and nice to slowly start moving to something fully locally hosted. :)
Been using sgpt for a while now and nice to slowly start moving to something fully locally hosted. :)
That is quite amazing @hrfried!
Are you planning to do RAG locally?
Question:
What hardware did you use to run your Ollama server?
Thanks @TheR1D for putting this amazing piece of software together!
My intention, and I'm taking baby steps right now, just educating myself:
Get the Ollama Docker to run on Hetzner VMs.
Experiment with the various LLM models.
See if it would be possible to put a REST-API before Ollama.
Query from our ERP software, for ex. from the Helpdesk module for ticket-responses.
Figure out tools and procedures of RAG, so as to improve the quality of generative outputs.
Sorry for digressing, but here seem to be likeminded folks.
Regards,
Ashant
Currently just my main desktop which has a ryzen 9 7900x, nvidia 4060-Ti 16-gig, and 64 gigs of DDR5. I've gotten small models to run okay on less powerful hardware but they weren't really super performant. Mostly just using LLMs for productivity and exploring the space, training some LORAs on codebases for work to see what's feasible, etc. Nothing crazy really.
Not super familiar with RAG tbh, but ollama is pretty simple to use. I hadn't used it until today, when I saw it was possible to set it as an endpoint in shell_gpt. I normally use other methods (e.g. textgen-webui) for running local LLMs. Just a docker pull and a docker run, honestly.
Don't know a whole lot about "true" cloud computing, but I imagine you could run an nginx (or similar) reverse proxy in front of ollama with a docker-compose workflow and it'd be pretty simple, at least to set up a test case. Not sure what kind of performance you'd get on shared servers though, especially if it's not GPU compute. Or whether you're locked into the cloud provider's networking tools or anything. A little out of my wheelhouse, ha.
This repo's issues section should probably not be polluted with off-topic discussion, so I'll just conclude here by posting a few pointers to the various topics touched upon.
Currently just my main desktop which has a ryzen 9 7900x, nvidia 4060-Ti 16-gig, and 64 gigs of DDR5.
That looks pretty good!
Unfortunately I'm running on a MacBook Air (a business device) and have yet to look into bare-metal options.
Still, I took a blind shot at installing the Ollama Docker on a 2x vCPU, 4 GB RAM VM on the latest stable Debian.
The install and run was surprisingly smooth (used a non-root /home user).
The outputs took forever to generate, like a few seconds per word!
And the first quick-test gave this output (after about an hour of compute, without any tuning/optimisations).
Been researching a few other topics, and here are some pointers for everyone's benefit:
Ollama Docker, code & readmes.
Embedchain, Open Source RAG Framework
Don't know a whole lot about "true" cloud computing but I imagine you could run an nginx (or similar) reverse proxy into ollama with a docker-compose workflow and it'd be pretty simple, at least to set up a test-case.
I'll probably stay away from cloud compute, due to the prohibitive costs, especially as we scale towards production.
I'm tending towards dedicated machines from Hetzner (no affiliation), which recycles its pre-used machines or offers shiny new ones.
Thanks @TheR1D and @hrfried for the great software and valuable inputs!
Ashant Chalasani
| gharchive/pull-request | 2024-02-12T00:55:44 | 2025-04-01T04:33:08.831667 | {
"authors": [
"TheR1D",
"euroblaze",
"hrfried"
],
"repo": "TheR1D/shell_gpt",
"url": "https://github.com/TheR1D/shell_gpt/pull/477",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2354349832 | Unlimited Free Dice Rolls in Monopoly Go using Monopoly Go Hack
Easy! 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go players can get free dice rolls using our free dice links updated daily. uTPoVm_@%48 Players can search for free dice roll links and get them instantly.
➤🔴 CLICK HERE TO GET GENERATOR NOW📺📱
If you're a 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go player, you know the thrill of rolling the dice to determine your fate in the game. Those dice rolls can make or break your strategy. But what if I told you that you could get unlimited free dice rolls to boost your chances of success? Yes, it's possible, and in this guide, we'll show you how to do just that.
Embrace the Power of 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go Cheats
The digital realm of 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go opens up new possibilities, and using cheats is one of them. Cheats can be a game-changer, giving you the upper hand and more dice rolls. Here's how to make it work:
Unlock Special Cheat Codes: Look for special cheat codes that are compatible with the 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go platform. These codes can give you additional rolls, making your game more exciting. Explore Forums: Join 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go communities and forums where players share cheat codes and strategies. You might stumble upon a hidden treasure of cheat codes that will help you roll those dice endlessly. Use Cheat Apps: Some dedicated apps are designed to provide cheats for various games, including 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go.one of these apps, and you may discover a feature that allows you to roll the dice to your heart's content. Stay In-the-Know: Game developers frequently release updates and patches to address cheat codes. Stay informed about these updates to keep your unlimited dice rolls going.
Level Up Your Game with 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go Strategies
While cheats can give you unlimited dice rolls, it's also essential to improve your game strategy to make the most of those rolls. Here are some strategies that can help:
Invest Wisely: Make shrewd decisions when buying properties. The more properties you own, the more dice rolls you get. So, invest in properties strategically to increase your chances of winning. Trade Smartly: Trading with other players can be a powerful tool. Make deals that benefit you, like acquiring properties that complete sets and give you more dice rolls. Upgrade Your Properties: Upgrading properties not only increases your chances of getting free dice rolls but also boosts your overall income. Make it a priority to upgrade when possible. Strategize Your Dice Rolls: Understand the game board and plan your moves accordingly. Some spots offer more dice rolls, so aim for those to maximize your advantage.
Join 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go Tournaments
Many 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go enthusiasts are unaware that the game often hosts tournaments. Participating in these tournaments can be a great way to earn free dice rolls.
Here's how to make the most of it:
Check for Upcoming Tournaments: Keep an eye on the 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go app or website for news about upcoming tournaments. These events usually come with special rewards, including free dice rolls. Practice Regularly: To succeed in a tournament, practice regularly to sharpen your skills. The more you practice, the better your chances of winning and earning those coveted extra dice rolls. Network with Fellow Players: Build connections with other 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go players. They might have tips, tricks, and strategies to share that can help you secure more free dice rolls.
With the right mix of cheats, strategies, and participation in tournaments, you can unlock a world of unlimited free dice rolls on 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go. So, why wait? Start implementing these tips, and watch your dice rolls multiply, taking your 𝚖𝚘𝚗𝚘𝚙𝚘𝚕𝚢 Go gaming experience to a whole new level!
This is top level secret information, only the pro people get to have this info, so keep it and enjoy your luck today bro!
✅🔴👉CLICK HERE TO GET FREE DICE ROLLS LINKS NOW✅🔴👉
| gharchive/issue | 2024-06-15T01:01:23 | 2025-04-01T04:33:08.842711 | {
"authors": [
"Fahim19712112",
"ghost"
],
"repo": "TheRealJoelmatic/RemoveAdblockThing",
"url": "https://github.com/TheRealJoelmatic/RemoveAdblockThing/issues/1364",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
353382426 | Allow URIs instead of pathnames
Currently, only pathnames are supported for any form of file.
As a user I want to address files at various locations, such as local files (file://), remote files (ssh://), S3 buckets (s3://), HDFS (hdfs://), Cassandra FS (cfs://), etc., for configuration files and directories (currently only local), data files (currently only on the execution host), and plugins (currently only local).
This could be implemented with Apache Commons VFS or similar libraries (e.g. see this StackOverflow post).
The metadata table is another file that would be useful to access remotely, like the other files. Currently, files are used to determine what to do (datasets etc.), and the metadata table is used like this too, but it does not allow remote files.
| gharchive/issue | 2018-08-23T13:31:22 | 2025-04-01T04:33:08.851637 | {
"authors": [
"vinjana"
],
"repo": "TheRoddyWMS/Roddy",
"url": "https://github.com/TheRoddyWMS/Roddy/issues/299",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2570831264 | 🛑 Trident API is down
In 715aa6b, Trident API (https://thetrident.one/api) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Trident API is back up in c5356f1 after 2 minutes.
| gharchive/issue | 2024-10-07T15:47:17 | 2025-04-01T04:33:08.886934 | {
"authors": [
"an-lee"
],
"repo": "TheTridentOne/upptime",
"url": "https://github.com/TheTridentOne/upptime/issues/1574",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2080123430 | Fix/performance issues
In this PR I add some limitations to the number of rendered emojis to prevent rendering emojis out of view.
Previously, when we opened the keyboard on the first category, even emojis from the last one were rendered. That was causing problems not only in the keyboard itself but also on the screen where it was used. There's still a small delay when changing categories, but it's significantly reduced. However, it may be caused by the development build, so I decided to go with it for now and look for more improvements in the future if needed.
Edit: as you can see from the commits, I was forced to also fix the workflows in this PR so as not to break them on main after merging this one.
closes https://github.com/TheWidlarzGroup/rn-emoji-keyboard/issues/148
good job 👍 tested on iPhone 15 simulator and Pixel 3a emulator, you can feel the difference on both. Looks much smoother than it used to
Nice! There is noticeably better performance. Unfortunately, it introduces an issue with the scroll indicator (screen recording below). Despite this, I think we could merge it and then think about a solution.
RPReplay_Final1705348348.MP4
I saw that too, but I didn't find a nice way to maintain a static scrollbar size. However, I thought that overall performance is more important than the scrollbar, which is usually hidden in most applications anyway. Thanks for checking, and I'm merging it.
| gharchive/pull-request | 2024-01-13T06:22:47 | 2025-04-01T04:33:08.897933 | {
"authors": [
"jan-kozinski",
"mateki0"
],
"repo": "TheWidlarzGroup/rn-emoji-keyboard",
"url": "https://github.com/TheWidlarzGroup/rn-emoji-keyboard/pull/163",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2068479918 | Add a plus button under the last sequence in the left panel
I realise that it's not convenient to navigate into the "Editor Settings" tab whenever you want to create a new sequence. A simple plus button which is always visible below the last sequence in the left panel would make this action a lot quicker.
Solved by 9522a3304de832f0ba2310298dd83be0d7355770.
| gharchive/issue | 2024-01-06T09:05:23 | 2025-04-01T04:33:08.898982 | {
"authors": [
"TheWilley"
],
"repo": "TheWilley/FruityDancitor",
"url": "https://github.com/TheWilley/FruityDancitor/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1174605679 | 🛑 Stream staging CMS is down
In b7728a5, Stream staging CMS (https://stream-cms.internal.therapy-box.co.uk/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Stream staging CMS is back up in 4da0aa6.
| gharchive/issue | 2022-03-20T16:31:08 | 2025-04-01T04:33:08.915096 | {
"authors": [
"TherapyBox"
],
"repo": "TherapyBox/upptime",
"url": "https://github.com/TherapyBox/upptime/issues/82",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
New replacement rules (Spotify 8.8.48.523)
It looks like Spotify has been linguistically creative once again.
Meta information:
SPOTIFY_VERSION = 8.8.48.523
[BEGIN ENTRIES]
res/values-de/strings.xml|artist_attribution_sheet_title
res/values-de/strings.xml|first_use_ts_dialog_message
res/values-de/strings.xml|ipl_volume_control_granted_for_participants
[END ENTRIES]
Language entries:
Copy this block (INCLUDING the BEGIN/END tags) into your reply, remove the gender asterisks, and submit the reply.
[BEGIN VALUES]
An diesem Song beteiligte Künstler*innen
Hier sind einige Regeln, die Creator*innen und Fans schützen sollen.
Alle Sessionteilnehmer*innen können die Lautstärke ändern.
[END VALUES]
A new PR with the changes to the replacement table will then be created.
[BEGIN VALUES]
An diesem Song beteiligte Künstler
Hier sind einige Regeln, die Creators und Fans schützen sollen.
Alle Sessionteilnehmer können die Lautstärke ändern.
[END VALUES]
| gharchive/issue | 2023-06-30T06:02:42 | 2025-04-01T04:33:08.919300 | {
"authors": [
"Theta-Dev",
"Thetamatic"
],
"repo": "Theta-Dev/Spotify-Gender-Ex",
"url": "https://github.com/Theta-Dev/Spotify-Gender-Ex/issues/132",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2371382176 | 1.21 Visual Enhancement to look modern.
I don't think I'll do everything, but I've already set an example with the battle axe, since everything looks too old. 1.7.10 was my first experience with this mod and I kind of want to upgrade it a bit. Anyone is free to contribute textures to just this area.
I'd like to keep the old textures in the base mod, but I am more than happy to share links to custom resource packs on the CurseForge, Modrinth and GitHub pages :)
Thanks. I will link my sources once it is finished.
Perfect, thank you very much :)
Just keep me updated and I will link your resource pack together with #6
I managed to create a resource pack and am still testing it out.
Will close this for now :)
If you are finished at some point and want me to link it here, just give me a heads up!
Will close this for now :) If you are finished at some point and want me to link it here, just give me a heads up!
Thanks for notifying me. But right now I'll be busy testing other mods, so it'll come sooner or later.
I'll be changing the knives now.
| gharchive/issue | 2024-06-25T00:55:30 | 2025-04-01T04:33:08.923796 | {
"authors": [
"ThexXTURBOXx",
"makumaku1974"
],
"repo": "ThexXTURBOXx/Balkons-WeaponMod-Legacy",
"url": "https://github.com/ThexXTURBOXx/Balkons-WeaponMod-Legacy/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1782746489 | 🛑 Linkedin is down
In 89e7c04, Linkedin (https://www.linkedin.com) was down:
HTTP code: 429
Response time: 33 ms
Resolved: Linkedin is back up in 9a0bf8e.
| gharchive/issue | 2023-06-30T15:39:32 | 2025-04-01T04:33:08.926158 | {
"authors": [
"ptoone"
],
"repo": "Thexyz/downly",
"url": "https://github.com/Thexyz/downly/issues/398",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
840357215 | Package installation fails on linux environment
I am not sure if this package is designed for use in linux environments. But i did a simple
pip install geomapviz -U
on my azure environment (docker running on linux). It fails with an error:
Building wheels for collected packages: cartopy
  Building wheel for cartopy (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /home/jovyan/envs/tacolight/bin/python3.8 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-sk2ibzss/cartopy_9244987cf4b54e6fba3f7704fed08818/setup.py'"'"'; __file__='"'"'/tmp/pip-install-sk2ibzss/cartopy_9244987cf4b54e6fba3f7704fed08818/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-8i2d_iie
       cwd: /tmp/pip-install-sk2ibzss/cartopy_9244987cf4b54e6fba3f7704fed08818/
Is this a known issue?
This is a cartopy installation error. Try installing from pre-built binaries first: conda install -c conda-forge cartopy. For installation on a Linux platform, some system dependencies are required; see the cartopy documentation for details.
Installation seems to work after a clean cartopy installation on both Windows and gdp envs, so this can be closed.
| gharchive/issue | 2021-03-24T23:53:45 | 2025-04-01T04:33:08.944373 | {
"authors": [
"ThomasBury",
"shashidhar26"
],
"repo": "ThomasBury/geomapviz",
"url": "https://github.com/ThomasBury/geomapviz/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2064846381 | 🛑 Nextcloud is down
In 89c05e3, Nextcloud (https://files.cond.dk) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Nextcloud is back up in f505f53 after 46 minutes.
| gharchive/issue | 2024-01-04T01:11:16 | 2025-04-01T04:33:08.946872 | {
"authors": [
"ThomasConrad"
],
"repo": "ThomasConrad/uptime_monitor",
"url": "https://github.com/ThomasConrad/uptime_monitor/issues/189",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
196785323 | Add "Not Started" as Possible Task Status
Added "Not Started" as a possible task status (this prevents a warning when trying to create a task with a status of "Not Started").
This PR is an enhancement.
| gharchive/pull-request | 2016-12-20T21:26:44 | 2025-04-01T04:33:08.959673 | {
"authors": [
"fhightower"
],
"repo": "ThreatConnect-Inc/threatconnect-javascript",
"url": "https://github.com/ThreatConnect-Inc/threatconnect-javascript/pull/1",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
67538950 | Disable Travis CI email notifications
Disable Travis CI email notifications
Currently it's not really helpful.
Yeah, OK for now. We'll re-enable it when the unit tests are all fixed. (We're down to 6 now, besides the ones that need updating for the comment fix.)
Sorry, I should have said "approved for pull."
No problem, I got it
| gharchive/pull-request | 2015-04-10T08:16:25 | 2025-04-01T04:33:08.971162 | {
"authors": [
"EBatTiVo",
"as3boyan"
],
"repo": "TiVo/intellij-haxe",
"url": "https://github.com/TiVo/intellij-haxe/pull/223",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2126971003 | 🛑 Melon Ticket is down
In 4de5bdd, Melon Ticket (https://ticket.melon.com) was down:
HTTP code: 406
Response time: 1032 ms
Resolved: Melon Ticket is back up in 6046393 after 32 minutes.
| gharchive/issue | 2024-02-09T11:44:20 | 2025-04-01T04:33:08.991635 | {
"authors": [
"SOLPLPARTY"
],
"repo": "TicketOpen/status",
"url": "https://github.com/TicketOpen/status/issues/1757",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1060480573 | Support file create and export
Usage:
from tiledb import cloud
created = cloud.create_file("demo", "s3://bucket/bubble.png", "s3://bucket/bubble_tdb")
print(created)
exported = cloud.export_file("demo", created.file_uuid, "s3://bucket/bubble.png")
print(exported)
Ref: @snagles
| gharchive/pull-request | 2021-11-22T18:43:01 | 2025-04-01T04:33:09.040180 | {
"authors": [
"antalakas"
],
"repo": "TileDB-Inc/TileDB-Cloud-Py",
"url": "https://github.com/TileDB-Inc/TileDB-Cloud-Py/pull/204",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1631480908 | Questions about the quantization implementation
Hi @TimDettmers
First of all, very impressive work! I do have some questions about the details:
If we set has_fp16_weights=False, the weights are pre-quantized into 8 bits beforehand, so we don't need to quantize the weights during inference. To calculate the outlier parts, we up-cast the corresponding sub-weights into bf16 and do the matmul. Is my understanding correct? And are the results in the paper obtained with has_fp16_weights=False?
Another alternative quantization approach is to keep the weights in 8 bits and always keep the activation values in fp16, so each time we only need to de-quantize the weight values. Compared to this approach, the gain of LLM.int8() is that part of the matmul is computed with an int8 GEMM, so it is faster for very big matmuls. Is that correct?
@zachzzc Hi, bro. You can read this paper for more details: LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale.
If I understand correctly, LLM.int8() uses an llm_int8_threshold (default is 6.0) to filter FP16 channels, as below.
The core insight is that outliers are limited to a few specific hidden dims. For those dims, activations and weights will be computed with an FP16 GEMM, while the remaining dims use an INT8 GEMM.
And the authors found that the number of outlier feature dimensions is not larger than 7 (|O| ≤ 7) for transformers up to 13B parameters, this INT8/FP16 GEMM decomposition operation only consumes about 0.1% additional memory.
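The decomposition described above can be sketched in a few lines of NumPy. This is a simplified illustration under stated assumptions, not the actual bitsandbytes kernels: fp32 stands in for the fp16 path, quantization is plain symmetric round-to-nearest with per-row scales for the activations and per-column scales for the weights, and degenerate all-zero rows/columns are not handled.

```python
import numpy as np

def mixed_precision_matmul(x, w, threshold=6.0):
    # Columns (hidden dims) of x whose max-abs activation exceeds the
    # threshold take the high-precision path; the rest go through int8.
    outlier = np.abs(x).max(axis=0) > threshold
    regular = ~outlier
    out = x[:, outlier] @ w[outlier, :]  # "fp16" path (fp32 here)
    if regular.any():
        # symmetric quantization: per-row scales for x, per-column for w
        xs = np.abs(x[:, regular]).max(axis=1, keepdims=True) / 127.0
        ws = np.abs(w[regular, :]).max(axis=0, keepdims=True) / 127.0
        xq = np.round(x[:, regular] / xs).astype(np.int8)
        wq = np.round(w[regular, :] / ws).astype(np.int8)
        # int32 accumulation of the int8 GEMM, then dequantize
        out = out + (xq.astype(np.int32) @ wq.astype(np.int32)) * xs * ws
    return out
```

With only a handful of outlier dims, almost all of the work happens in the int8 GEMM, which is where the speed and memory savings come from.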
These configs can be set with the BitsAndBytesConfig of the bitsandbytes lib.
default_config = {
# 4bits
"load_in_4bit": False,
"bnb_4bit_compute_dtype": "float32", # float16, bfloat16, float32
"bnb_4bit_quant_type": "fp4", # fp4, nf4
"bnb_4bit_use_double_quant": False,
# 8bits
"load_in_8bit": False,
"llm_int8_enable_fp32_cpu_offload": False,
"llm_int8_has_fp16_weight": False,
"llm_int8_skip_modules": None,
"llm_int8_threshold": 6.0, # In our work, we find that α = 6.0 is sufficient to reduce transformer performance degradation close to zero
}
| gharchive/issue | 2023-03-20T06:33:56 | 2025-04-01T04:33:09.054740 | {
"authors": [
"Shuai-Xie",
"zachzzc"
],
"repo": "TimDettmers/bitsandbytes",
"url": "https://github.com/TimDettmers/bitsandbytes/issues/206",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
401306394 | didnt updated domain - Cloudflare
GoDNS works correctly on subdomains like "www" or "test", but it seems that only subdomains get updated. Root domains like "example.com" are ignored. (Only tested using Cloudflare.)
Is this not working for cloudflare?
The example.com domain is ignored and your IP is not being updated.
[GoDNS] 2019/07/24 12:27:22 GoDNS started, entering main loop...
[GoDNS] 2019/07/24 12:27:23 Creating DNS handler with provider: Cloudflare
[GoDNS] 2019/07/24 12:27:24 Current IP is: 123.123.123.123
[GoDNS] 2019/07/24 12:27:24 Checking IP for domain example.com
[GoDNS] 2019/07/24 12:27:39 Skiping record: example.com
[GoDNS] 2019/07/24 12:27:39 Going to sleep, will start next checking in 300 seconds...
I'm using version 1.8
my config.json
{
"provider": "Cloudflare",
"email": "myemail@gmail.com",
"password": "786dasdasdas8d9as798s7s8cxz8x8xx",
"domains": [{
"domain_name": "example.com"
}
],
"ip_url": "https://myip.biturl.top",
"interval": 300,
"socks5_proxy": ""
}
Currently it only supports updating subdomains.
I understand. However, do you have any plans to add DuckDNS support?
The current options are very good and stable, but have a slightly high DNS propagation time.
@fabianskibr Do you mean https://www.duckdns.org?
Yes, exactly
Checked, it is possible to support.
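For reference, DuckDNS's update API is a single HTTP GET with domains, token and ip query parameters (leaving ip empty lets DuckDNS detect the caller's address). A minimal Python sketch of building that request URL, just to illustrate what a client like GoDNS has to send:

```python
from urllib.parse import urlencode

def duckdns_update_url(subdomain: str, token: str, ip: str = "") -> str:
    # Builds https://www.duckdns.org/update?domains=<sub>&token=<token>&ip=<ip>
    query = urlencode({"domains": subdomain, "token": token, "ip": ip})
    return "https://www.duckdns.org/update?" + query

print(duckdns_update_url("example", "your-token-here", "1.2.3.4"))
```

DuckDNS replies with a plain OK or KO body, so a client only needs to check that string to know whether the update succeeded.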
would be great
When it is completed, I will notify you.
ok thanks i will test
@fabianskibr , GoDNS V1.9 is released to support DuckDNS, you may try it: https://github.com/TimothyYe/godns/releases/tag/V1.9
@fabianskibr, GoDNS V1.9 is released to support DuckDNS, you may try it: https://github.com/TimothyYe/godns/releases/tag/V1.9
Seems not to be working on Windows; the godns.exe executable doesn't open.
I just added my token and my subdomain (it's only numbers).
my config:
{
"provider": "DuckDNS",
"password": "",
"login_token": "123123123132132",
"domains": [
{
"domain_name": "www.duckdns.org",
"sub_domains": [
"84714641"
]
}
],
"ip_url": "https://icanhazip.com",
"interval": 30,
"socks5_proxy": "",
}
I also tried
"domain_name": "duckdns.org",
"sub_domains": [
"84714641"
]
and
"domain_name": "84714641.duckdns.org",
"sub_domains": [
""
]
It's working on Linux. I just have to remove the comma at the end of this line:
"socks5_proxy": "",
I realize that it always updates the IP even if there are no changes. Is that how it should work?
@fabianskibr README already updated, the comma is removed.
It is suggested to set the interval as 300 seconds.
And the released is also updated, please download the latest version.
It will not always update the IP unless it is changed.
Everything works now, great!
@fabianskibr Thanks!
| gharchive/issue | 2019-01-21T11:18:33 | 2025-04-01T04:33:09.075121 | {
"authors": [
"Excel1",
"TimothyYe",
"fabianskibr"
],
"repo": "TimothyYe/godns",
"url": "https://github.com/TimothyYe/godns/issues/33",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1212088369 | How do you parse nano when the last price is returned?
What happened?
The market_data.get_last_prices() method returns units (the integer part) and nano (the fractional part).
The question is: how can you tell, say, 1 kopeck from 10 kopecks in nano, if the number is padded with zeros in the response?
{'units': 0, 'nano': 10000000}
I see: you need to drop 7 zeros and you get the exact figure.
You can try using https://github.com/Tinkoff/invest-python/blob/89cf3419d881c6aaa33377336c3fe24dfccb3f84/tinkoff/invest/utils.py#L32
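The linked helper boils down to units plus nano divided by 10**9, since nano holds billionths of a unit. A minimal re-implementation for illustration (in the library itself you would pass the whole Quotation object to the utils function rather than raw ints):

```python
from decimal import Decimal

def quotation_to_decimal(units: int, nano: int) -> Decimal:
    # nano is the fractional part in billionths (10**9 nano = 1 unit),
    # so 1 kopeck = 0.01 RUB = 10_000_000 nano, 10 kopecks = 100_000_000 nano
    return Decimal(units) + Decimal(nano) / Decimal(10**9)

print(quotation_to_decimal(0, 10_000_000))   # 0.01 (one kopeck)
print(quotation_to_decimal(0, 100_000_000))  # 0.1 (ten kopecks)
```

So {'units': 0, 'nano': 10000000} decodes to 0.01, and no manual zero-counting is needed.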
Thanks.
| gharchive/issue | 2022-04-22T09:36:32 | 2025-04-01T04:33:09.092003 | {
"authors": [
"cdies",
"irusland"
],
"repo": "Tinkoff/invest-python",
"url": "https://github.com/Tinkoff/invest-python/issues/45",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
748263827 | Howto refresh auth token?
I'm wondering how to use the refresh token to get a new access token in long-running applications. Which endpoint should I use?
Found it: refreshing works like this:
uri := fmt.Sprintf("%s/oauth2/%s/access_token", nissanAuthBaseURL, nissanRealm)
data := url.Values{
	"client_id":     []string{nissanClientID},
	"client_secret": []string{nissanClientSecret},
	"grant_type":    {"refresh_token"},
	"refresh_token": {v.tokens.RefreshToken},
}
uri += "?" + data.Encode()
req, err := request.New(http.MethodPost, uri, nil, request.URLEncoding)
if err == nil {
	var tokens oidc.Tokens
	// check the freshly decoded tokens, not the old v.tokens
	if err = v.DoJSON(req, &tokens); err == nil && tokens.AccessToken == "" {
		err = errors.New("missing access token")
	}
	fmt.Println(tokens)
}
@andig Great stuff! Thank you so much for the update!
| gharchive/issue | 2020-11-22T15:14:17 | 2025-04-01T04:33:09.107882 | {
"authors": [
"Tobiaswk",
"andig"
],
"repo": "Tobiaswk/dartnissanconnect",
"url": "https://github.com/Tobiaswk/dartnissanconnect/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2104298682 | DuckDB catalogs config option doesn't commit changes
Context: https://tobiko-data.slack.com/archives/C044BRE5W4S/p1706126317634699
Commit with repro: https://github.com/plaflamme/sqlmesh/commit/0b50c8bb877892053fcc1a4fe6585b316b6667b1
Summary: basically, running sqlmesh plan and immediately running sqlmesh fetchdf … fails when using the catalogs configuration (at least on some models).
@plaflamme This should now be fixed with 0.69.0: https://github.com/TobikoData/sqlmesh/releases/tag/v0.69.0
| gharchive/issue | 2024-01-28T18:44:15 | 2025-04-01T04:33:09.110099 | {
"authors": [
"eakmanrq"
],
"repo": "TobikoData/sqlmesh",
"url": "https://github.com/TobikoData/sqlmesh/issues/2037",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
954160753 | Change default date sorts to first_seen descending
Change default date sorts to first_seen descending
This has been made the default
| gharchive/issue | 2021-07-27T18:34:07 | 2025-04-01T04:33:09.110951 | {
"authors": [
"TobySalusky",
"WilliamSalusky"
],
"repo": "TobySalusky/cont3xt",
"url": "https://github.com/TobySalusky/cont3xt/issues/49",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2226763630 | 🛑 Autoservicio (Ciberseguridad) is down
In 2b85f44, Autoservicio (Ciberseguridad) (https://autoservicio.aysa.com.ar/authorization.do) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Autoservicio (Ciberseguridad) is back up in 9ff0b09 after 9 hours, 27 minutes.
| gharchive/issue | 2024-04-05T00:55:08 | 2025-04-01T04:33:09.114952 | {
"authors": [
"TogijorOK"
],
"repo": "TogijorOK/Monitor_WEB",
"url": "https://github.com/TogijorOK/Monitor_WEB/issues/179",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
276955097 | Drop vue-template-loader dependency in favor of SFC
vue-template-loader is causing more complexity than it solves. The main issue for now is that upstream included support for jest, but jest can't handle webpack loaders. It would require writing a jest transformer implementing the same features as vue-template-loader.
So we will stick to SFC files, but to keep template, style and script as 3 distinct files, such boilerplate will be introduced:
<template src=./HelloWorld.vue.html></template>
<style scoped src=./HelloWorld.vue.css></style>
<script>
import component from './HelloWorld.vue.ts'
export default component
</script>
This has to be merged : https://github.com/eddyerburgh/vue-jest/pull/29
| gharchive/issue | 2017-11-27T09:43:37 | 2025-04-01T04:33:09.129176 | {
"authors": [
"Toilal"
],
"repo": "Toilal/vue-webpack-template",
"url": "https://github.com/Toilal/vue-webpack-template/issues/35",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1246251222 | Revert "Add html5 doctype to static renderer (#486)"
This reverts commit 3081f5521a7bfc10e4b3087f5744887a1c7f4947 which causes failing snapshots.
The rest of the failing tests are from #483. Those are caused by an extra blank line after the title tag.
CC @AndrewBarba
| gharchive/pull-request | 2022-05-24T09:19:00 | 2025-04-01T04:33:09.134840 | {
"authors": [
"MaxDesiatov",
"ezraberch"
],
"repo": "TokamakUI/Tokamak",
"url": "https://github.com/TokamakUI/Tokamak/pull/487",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Add a feature
It would be more convenient to add an option to disable comments in the theme settings, so you wouldn't have to go into the WordPress settings to change it each time.
Thanks a lot for the suggestion, but if WordPress can already configure something, we generally won't consider re-implementing the same functionality.
| gharchive/issue | 2020-07-29T15:55:26 | 2025-04-01T04:33:09.135815 | {
"authors": [
"Tokinx",
"pyfn2030"
],
"repo": "Tokinx/Adams",
"url": "https://github.com/Tokinx/Adams/issues/74",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2630103203 | Profile Picture Cannot Be Displayed Correctly
Description
The profile picture cannot be displayed correctly in Simplified Chinese mode.
Reproduce
https://blog.thgirls.yt/zh-cn/关于少女们/
Tested Browsers
Arc
Safari
Can be closed, seems to be fixed at https://blog.thgirls.yt/zh-cn/about-us/
| gharchive/issue | 2024-11-02T01:24:16 | 2025-04-01T04:33:09.138468 | {
"authors": [
"CelDaemon",
"justapig9020"
],
"repo": "TokyoHackerGirls/TokyoHackerGirls.github.io",
"url": "https://github.com/TokyoHackerGirls/TokyoHackerGirls.github.io/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
640743791 | Add docs for the project (v0.1)
Currently the app has zero documentation and no description whatsoever. Add a description with images, and usage instructions so that it's all documented.
This issue is concerned only with the initial documentation for version 0.1. Docs for future versions will have separate tasks under the new 'Documentation' epic (#55).
| gharchive/issue | 2020-06-17T21:09:35 | 2025-04-01T04:33:09.139696 | {
"authors": [
"TolikPylypchuk"
],
"repo": "TolikPylypchuk/MovieList",
"url": "https://github.com/TolikPylypchuk/MovieList/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1841215975 | Alternative way to install it?
Hello, I installed/used itermocil years ago (v0.2.1). Sadly this version doesn't work well with my version of iTerm (3.4.5).
I have a MacBook (2015) with macOS 10.14, and the problem is I can't seem to upgrade it via brew.
I get a brew message saying I can't upgrade brew or install Xcode because of my macOS version.
1/ Is there a manual or alternative way to upgrade itermocil?
Or can I install an old version of itermocil? I can't figure out which ones are available on brew, for example: brew install itermocil@0.8
Any help please. I really loved itermocil and would love to use it again.
you can download any version from the releases and put the binary into one of your PATH folder and MAGIC!
thanks it works
welcome. enjoy.
| gharchive/issue | 2023-08-08T12:29:02 | 2025-04-01T04:33:09.142579 | {
"authors": [
"godbout",
"microSoftware"
],
"repo": "TomAnthony/itermocil",
"url": "https://github.com/TomAnthony/itermocil/issues/139",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
131293185 | Release jars after building jar file (copy of #45).
A re-opened version #45.
The issue should have been fixed as part of #51.
| gharchive/pull-request | 2016-02-04T09:46:37 | 2025-04-01T04:33:09.143424 | {
"authors": [
"TomDmitriev"
],
"repo": "TomDmitriev/gradle-bundle-plugin",
"url": "https://github.com/TomDmitriev/gradle-bundle-plugin/pull/47",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1504338048 | remove tool config files from release artifacts
analog https://github.com/TomasVotruba/unused-public/pull/2
Thanks :+1:
| gharchive/pull-request | 2022-12-20T10:34:00 | 2025-04-01T04:33:09.156230 | {
"authors": [
"TomasVotruba",
"staabm"
],
"repo": "TomasVotruba/type-coverage",
"url": "https://github.com/TomasVotruba/type-coverage/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2074152423 | Spdhg with stochastic sampler
Describe your changes
- Allow SPDHG to take a sampler, either from our sampler class or any class with a next(self) function.
- Deprecated prob from SPDHG, taking the probabilities instead from the sampler class or from a new argument prob_weights, choosing the default [1/num_subsets]*num_subsets if one is not provided in either place.
- Created two setters for the step sizes. set_step_sizes_from_ratio resets the step sizes if the user provides one/both/none of gamma and rho (note that this closes #1860). step_sizes_custom takes in one/both/none of sigma and tau, allowing the user to use a custom sigma and tau, with any not provided calculated from the defaults. Calculating sigma from tau probably needs checking with someone else.
- Added a check_convergence function that checks self._sigma[i] * self._tau * self.norms[i]**2 <= self.prob_weights[i] for all i. This probably needs checking with someone else.
- Deprecated the kwarg "norms", replaced by the set_norms method in BlockOperator (#1513: added a function to return a list of norms and the ability to set this list of norms).
- Unit tests for SPDHG setters and the convergence check.
- Sped up the current SPDHG unit tests.
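The convergence condition above can be sketched as a stand-alone function (purely illustrative; the actual CIL method operates on the algorithm's internal attributes):

```python
def check_convergence(sigma, tau, norms, prob_weights):
    """Return True if sigma[i] * tau * norms[i]**2 <= prob_weights[i] for all i."""
    return all(
        s * tau * n ** 2 <= p
        for s, n, p in zip(sigma, norms, prob_weights)
    )

# Example: 2 subsets with uniform probabilities and unit operator norms
print(check_convergence(sigma=[0.5, 0.5], tau=0.5, norms=[1.0, 1.0],
                        prob_weights=[0.5, 0.5]))  # -> True
```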
Describe any testing you have performed
Please add any demo scripts to CIL-Demos/misc/
Test with SPDHG https://github.com/TomographicImaging/CIL-Demos/blob/main/misc/testing_sampling_SPDHG.ipynb
Similar results gained for all samplers for SPDHG, with 10 subsets
With 80 subsets:
Link relevant issues
Part of the stochastic work plan. Closes #1575. Closes #1576. Closes #1500. Closes #1496
Checklist when you are ready to request a review
[x] I have performed a self-review of my code
[x] I have added docstrings in line with the guidance in the developer guide
[x] I have implemented unit tests that cover any new or modified functionality
[x] CHANGELOG.md has been updated with any functionality change
[x] Request review from all relevant developers
[x] Change pull request label to 'Waiting for review'
Contribution Notes
Please read and adhere to the developer guide and local patterns and conventions.
[x] The content of this Pull Request (the Contribution) is intentionally submitted for inclusion in CIL (the Work) under the terms and conditions of the Apache-2.0 License.
[x] I confirm that the contribution does not violate any intellectual property rights of third parties
For reference:
PDHG parameters:
f (Function) – A convex function with a “simple” proximal method of its conjugate.
g (Function) – A convex function with a “simple” proximal.
operator (LinearOperator) – A Linear Operator.
sigma (positive float, or np.ndarray, DataContainer, BlockDataContainer, optional, default=None) – Step size for the dual problem.
tau (positive float, or np.ndarray, DataContainer, BlockDataContainer, optional, default=None) – Step size for the primal problem.
initial (DataContainer, optional, default=None) – Initial point for the PDHG algorithm.
gamma_g (positive float, optional, default=None) – Strongly convex constant if the function g is strongly convex. Allows primal acceleration of the PDHG algorithm.
gamma_fconj (positive float, optional, default=None) – Strongly convex constant if the convex conjugate of f is strongly convex. Allows dual acceleration of the PDHG algorithm.
PDHG functions:
set_gamma_g (this is due to strong convexity and something we might consider for SPDHG in the future)
set_gamma_fconj (this is due to strong convexity and something we might consider for SPDHG in the future)
check_convergence
set_step_sizes
update_step_sizes (this is due to strong convexity and something we might consider for SPDHG in the future)
New SPDHG parameters:
f (BlockFunction) – Each must be a convex function with a “simple” proximal method of its conjugate
g (Function) – A convex function with a “simple” proximal
operator (BlockOperator) – BlockOperator must contain Linear Operators
tau positive float, optional, default=None) – Step size parameter for Primal problem
sigma (list of positive float, optional, default=None) – List of Step size parameters for Dual problem
initial (DataContainer, optional, default=None) – Initial point for the SPDHG algorithm
gamma (float) – parameter controlling the trade-off between the primal and dual step sizes
sampler (an instance of a cil.optimisation.utilities.Sampler class or another class with the function next(self) implemented outputting an integer from {1,…,len(operator)}.) – Method of selecting the next index for the SPDHG update. If None, a sampler will be created for random sampling with replacement and each index will have probability = 1/len(operator)
New SPDHG functions:
set_step_sizes_from_ratio
set_step_sizes
check_convergence
See also Gemma's comment https://github.com/TomographicImaging/CIL/issues/1496#issuecomment-2039494809
I am struggling with a test which compares PDHG and SPDHG where the number of subsets is one: #1863
As discussed in the software meeting today, we are resurrecting this PR. @paskino @jakobsj - I would be grateful for a brief look
| gharchive/pull-request | 2024-01-10T11:19:44 | 2025-04-01T04:33:09.221339 | {
"authors": [
"MargaretDuff"
],
"repo": "TomographicImaging/CIL",
"url": "https://github.com/TomographicImaging/CIL/pull/1644",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1102467098 | 🛑 VTP.XYZ is down
In a459151, VTP.XYZ (https://vtp.xyz/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: VTP.XYZ is back up in 7411d6f.
| gharchive/issue | 2022-01-13T22:11:50 | 2025-04-01T04:33:09.224272 | {
"authors": [
"TomsProject"
],
"repo": "TomsProject/uptime",
"url": "https://github.com/TomsProject/uptime/issues/1450",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2017/03/23: The Guokr Handpick API changed a bit, so Guokr Handpick can no longer fetch data.
Changing the id field in the inner class source_data of GuokrHandpickNews from int to String solves the problem. Hoping for an update!
Your code is great and I learned a lot from it. Thanks!
OK, I'll change it the next time I refactor. Thanks for the kind words; actually the code still has many shortcomings, hehe. Thanks for pointing out the problem 😀
@TokyoAndroid
Hi,
The API of guokr handpick was changed into another one. You can check the RetrofitService.java for more details.
If there is no other question, I will close this issue.
| gharchive/issue | 2017-03-23T14:27:28 | 2025-04-01T04:33:09.229229 | {
"authors": [
"TokyoAndroid",
"TonnyL"
],
"repo": "TonnyL/PaperPlane",
"url": "https://github.com/TonnyL/PaperPlane/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2115074014 | run the “python main.py configs=configs.yaml DATASET.target=bottle” command always wrong cause by datasets
Now I have prepared the dataset/dtd/images folder and the dataset/MVTec folder; dataset/MVTec contains bottle, tile, carpet, cable and other data downloaded from the MVTec website. But running the command "python main.py configs=configs.yaml DATASET.target=bottle" always fails (ValueError: num_samples should be a positive integer value, but got num_samples=0). Where does the code need correcting? In configs.yaml I changed "target=bottle" — does anything else need changing? So complicated.
@philchenup same error. how did you solve it?
How did you solve this, please?
| gharchive/issue | 2024-02-02T14:30:18 | 2025-04-01T04:33:09.238999 | {
"authors": [
"liuaohunter",
"philchenup",
"shellxiao"
],
"repo": "TooTouch/MemSeg",
"url": "https://github.com/TooTouch/MemSeg/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
124559584 | TTI LEAKED SOURCE CODE NOW HERE https://github.com/craigy109/Toontown-Infinite-Leak
TTI LEAKED SOURCE CODE NOW HERE https://github.com/craigy109/Toontown-Infinite-Leak
| gharchive/issue | 2016-01-01T20:06:39 | 2025-04-01T04:33:09.241434 | {
"authors": [
"craigy109"
],
"repo": "Toontown-Infinite/toffee",
"url": "https://github.com/Toontown-Infinite/toffee/issues/4",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1251777447 | How to ensemble several different classification models into a voter?
Hi,
I've just started learning neural networks and I want to create a voter for several different classification models trained on a dataset. It seems that this package is what I need. I found the sample code in the documentation https://ensemble-pytorch.readthedocs.io/en/latest/index.html.
model = VotingClassifier(estimator=MLP, n_estimators=10, cuda=True)
My naive understanding is that here we have 10 identical estimators. Can we pass a list of models instead of just MLP as the argument?
Thanks
Hi @jisutich, please refer to #49 for details on why heteregenous ensemble is hard to implement, thanks.
OK I got it. Thanks.
| gharchive/issue | 2022-05-29T03:34:02 | 2025-04-01T04:33:09.246491 | {
"authors": [
"jisutich",
"xuyxu"
],
"repo": "TorchEnsemble-Community/Ensemble-Pytorch",
"url": "https://github.com/TorchEnsemble-Community/Ensemble-Pytorch/issues/117",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1869705238 | Sync main to dev
Description
some fixes went directly to main
so now syncing them back
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.9 out of 10 committers have signed the CLA.:white_check_mark: Tarraann:white_check_mark: TransformerOptimus:white_check_mark: luciferlinx101:white_check_mark: boundless-asura:white_check_mark: Maverick-F35:white_check_mark: Fluder-Paradyne:white_check_mark: sayan1101:white_check_mark: rounak610:white_check_mark: jedan2506:x: Akki-jainYou have signed the CLA already but the status is still pending? Let us recheck it.
| gharchive/pull-request | 2023-08-28T12:51:40 | 2025-04-01T04:33:09.308372 | {
"authors": [
"CLAassistant",
"Fluder-Paradyne"
],
"repo": "TransformerOptimus/SuperAGI",
"url": "https://github.com/TransformerOptimus/SuperAGI/pull/1138",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
659911417 | Create safe and unsafe HTML files for emails
The safe version of an email will block remote resources for privacy and security but may display a crippled version of the email; the unsafe version will be an as-is copy of the email without CSP. Possible fix for https://github.com/TrashEmail/TrashEmail/issues/59
Keeping it on hold for a bit. I like the solution though. I'm a bit busy revamping the code to make it more pluggable; I think after that we are good to push with some minor enhancements. @dcRUSTy thanks for doing it.
@dcRUSTy You would need to rebase/restructure this with respect to the latest master. Since this is TG functionality, it will be added to the TG Connector plugin, and once that is done, please update this repo's master. Once all that is done, you will end up with 2 PRs.
| gharchive/pull-request | 2020-07-18T05:11:45 | 2025-04-01T04:33:09.311586 | {
"authors": [
"dcRUSTy",
"r0hi7"
],
"repo": "TrashEmail/TrashEmail",
"url": "https://github.com/TrashEmail/TrashEmail/pull/62",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2152837692 | 🛑 MeVuelo CRM API (production) is down
In c039955, MeVuelo CRM API (production) (https://api.crm.production.travelonux.com/healthcheck) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MeVuelo CRM API (production) is back up in 87bbd2b after 18 minutes.
| gharchive/issue | 2024-02-25T16:22:20 | 2025-04-01T04:33:09.317021 | {
"authors": [
"travelonux-dev"
],
"repo": "Travelonux/upptime",
"url": "https://github.com/Travelonux/upptime/issues/1485",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1290178627 | 🛑 Delta ARG/UY is down
In 1c7aed8, Delta ARG/UY (https://www.deltaargentinauruguay.com) was down:
HTTP code: 403
Response time: 126 ms
Resolved: Delta ARG/UY is back up in 27ccd6b.
| gharchive/issue | 2022-06-30T13:57:12 | 2025-04-01T04:33:09.319524 | {
"authors": [
"rodobarcaaa"
],
"repo": "Travelonux/upptime",
"url": "https://github.com/Travelonux/upptime/issues/186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1561015676 | many button colors too light. Implies buttons insensitive.
When I first started using the app, it seemed that many of the buttons were
"greyed out". The light green is pleasant but is too close to the color many
apps use to imply that a button cannot be clicked on.
So, I suggest that a different color be used.
This is tested with version 4.0.5 (prerelease)
Now that clicking on the camera icon performs better, you are less likely to think the icon's unresponsiveness meant the color was indicating an intentionally insensitive button. Let's close this.
| gharchive/issue | 2023-01-28T21:10:51 | 2025-04-01T04:33:09.330390 | {
"authors": [
"jgrenadier"
],
"repo": "TreeMama/iSeaTree-React-Prototype",
"url": "https://github.com/TreeMama/iSeaTree-React-Prototype/issues/433",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
2152129189 | Remove the California Black Oak
Students found that it looked quite unremarkable. It's also not Climate Action and not Drought Tolerant even though it is native. Also there's another stop pretty close to it.
Note that in the future we might add it back as a seasonal stop, because it might look better during the spring / autumn / summer (since it's not an evergreen tree).
Ref #27 to later make the removal seasonal
| gharchive/issue | 2024-02-24T05:40:29 | 2025-04-01T04:33:09.331915 | {
"authors": [
"MrCsabaToth"
],
"repo": "TreeWalks/TreeWalks.github.io",
"url": "https://github.com/TreeWalks/TreeWalks.github.io/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
713473447 | Column.display_name and filter form
When one declares a Column like so:
class MyTable(Table):
foo = Column(display_name='bar', filter__include=True, ...)
Then the label used for filter field will be foo (similar for the field name in the advanced search). I would argue it should be bar.
Failing that I would argue that the documentation for display_name should mention how to adjust those names.
A good scenario to keep in mind might be someone who writes his code in English but is trying to write a small business website/app that doesn't need full blown localisation (aka the website doesn't need to be translated into several languages), but should display every label / helptext and error message in forms in a language that isn't English.
Well, it works if "bar" has no spaces, but if it's "Foo bar" then we have a problem in the query language. So we'd have to at least convert it to "foo_bar" then for the advanced filtering. But given that I might agree...
But you can also do
bar = Column(attr='foo')
At least in the declarative style.. in the auto style there is no way to override the name I believe. What about this @jlubcke ?
Concerning:
bar = Column(attr='foo')
That is a little sad if you are writing your program in English but your labels are e.g. German. For what it is worth, with my limited experience so far I consider the auto style to be primarily a prototyping tool, and am not that sad if in that style I can't override the name. Well, if I wanted to be difficult I could say that one of iommi's selling points is the ease with which one can override behaviour in a minimal way. So maybe this indicates that the query system isn't yet fully fulfilling its potential wrt the iommi way ;-)
That all said, at least my experience with https://django-tables2.readthedocs.io/en/latest/ quickly led me to believe that eventually I will declare all columns anyway. So, all magic aside, it seems more important to me that one should be able to explicitly specify different names for each of the following concepts:
the name in the source
the display_name of the Column
the display_name of the Field in the simple query
the field name in the query language
We (or at least I) do want the auto feature to be more than a prototyping tool, even if it might not be ready for all cases just yet.
Hm.. thinking about this more I believe that iommi should:
read the verbose_name from your model and it should use that for display_name by default in the auto mode
use display_name.replace(' ', '_') as the user facing name in the advanced search language as you said
support that verbose_name on your model (and display_name when given explicitly) can be a LazyText instance for translation.
Does this seem reasonable to you?
Yes that sounds pretty good.
But in the end I fear one should still support explicit overrides. So that the user facing name in the advanced search language can be different from the labels of the column or field -- even if it is only to make it easy to include other information in the column or field label. E.g.
speed = Column(attr='engine__speed', filter__query_name='Geschwindigkeit', display_name='Geschwindigkeit (km/h)')
100% agree.
After a discussion with Johan our plan for translated query language is:
The advanced language can be localized by us doing a simple replace when rendering it. We also record in the URL which language the query was written in. This means that if you send a German URL to your French friend, we can see that the URL used a German advanced query, so we can translate that to the internal English and then render it out as French. If the friend then clicks "filter" again, they will get a URL with French names for stuff and a language identifier, so we can do the same translation back when they send their URL back to you.
I hope that made sense :P
Oh yea, and the plan is for sorting and simple filter request parameters to use the internal name still. So while you'd search for bilmodell=ford in the advanced search, the URL would be ....?car_model=ford. Would this be ok from a translation point of view for you? We think ideally users aren't really interacting with the URL so it doesn't need to be translated.
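The roundtrip described above could be sketched roughly like this (purely illustrative; the name tables and function are hypothetical, not iommi's actual implementation, which would derive the display names from translations):

```python
# Hypothetical per-language name tables mapping internal names to display names.
NAMES = {
    'en': {'car_model': 'car_model'},
    'sv': {'car_model': 'bilmodell'},  # name used in the Swedish advanced query
}

def translate_query(query: str, source_lang: str, target_lang: str) -> str:
    """Rewrite field names in a 'name=value' query from one language to another."""
    reverse = {v: k for k, v in NAMES[source_lang].items()}
    parts = []
    for clause in query.split('&'):
        name, _, value = clause.partition('=')
        internal = reverse.get(name, name)  # back to the internal English name
        parts.append(f"{NAMES[target_lang].get(internal, internal)}={value}")
    return '&'.join(parts)

print(translate_query('bilmodell=ford', 'sv', 'en'))  # -> car_model=ford
```

The language identifier stored in the URL tells the receiving side which source table to use for the reverse lookup.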
Agreed
Some notes from me to me on implementation:
>>> from django.utils.translation.trans_real import translation
>>> translation('sv')
<DjangoTranslation lang:sv>
>>> t = translation('sv')
>>> t.
t.BE_MAGIC t.charset( t.install( t.merge( t.set_output_charset(
t.LE_MAGIC t.domain t.language( t.ngettext( t.to_language(
t.VERSIONS t.gettext( t.lgettext( t.output_charset(
t.add_fallback( t.info( t.lngettext( t.plural(
>>> t._catalog
<django.utils.translation.trans_real.TranslationCatalog object at 0x1044942d0>
>>> t._catalog.
t._catalog.get( t._catalog.items( t._catalog.keys( t._catalog.plural( t._catalog.update(
>>> t._catalog.items()
<generator object TranslationCatalog.items at 0x104528450>
>>> list(t._catalog.keys())
Adding query_name as suggested above in #137
| gharchive/issue | 2020-10-02T09:15:25 | 2025-04-01T04:33:09.361956 | {
"authors": [
"bgrundmann",
"boxed"
],
"repo": "TriOptima/iommi",
"url": "https://github.com/TriOptima/iommi/issues/56",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1377974975 | [Feature Request] Allow creating a change log for a specific path
By modifying this line, we can support creating a changelog for a specific path (this can help build changelogs for projects under monorepos):
https://github.com/TriPSs/conventional-changelog-action/blob/78239f16d25a195e3c21374740c952ce611c2f7a/src/helpers/generateChangelog.js#L23
Would you be wiling to create a PR?
@TriPSs done :P
| gharchive/issue | 2022-09-19T13:37:31 | 2025-04-01T04:33:09.364236 | {
"authors": [
"AlmogBaku",
"TriPSs"
],
"repo": "TriPSs/conventional-changelog-action",
"url": "https://github.com/TriPSs/conventional-changelog-action/issues/178",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2013975489 | libgcc_s.so.1 must be installed for pthread_cancel to work
Device: Jetson AGX Xavier
OS: L4T 32.7.4 (Linux 18.04)
ROS Version: Melodic
Python version: 3.6.9
gcc Versions: 8.4.0 and 7.5.0 (Currently 8.4.0 in use)
g++ Version: 8.4.0
bazel Version: 5.2.0
Mediapipe Version: 0.8.9 (This one)
OpenCV Version: 4.8.1
Tensorflow Version: 2.7.0
scikit_learn Version: 0.24.2
Hello,
I am trying to use this package on a Jetson AGX Xavier. After solving the issue here, when I try to roslaunch ros_hand_gesture_recognition hand_sign.launch, I see the following output in my terminal:
process[hand_sign_recognition-1]: started with pid [22518]
libgcc_s.so.1 must be installed for pthread_cancel to work
[hand_sign_recognition-1] process has died [pid 22518, exit code -6, cmd /home/mericgeren/catkin_ws/src/ros_hand_gesture_recognition/src/hand_sign_recognition.py __name:=hand_sign_recognition __log:=/home/mericgeren/.ros/log/b02b8112-8dc7-11ee-9d36-00044bcc3306/hand_sign_recognition-1.log].
log file: /home/mericgeren/.ros/log/b02b8112-8dc7-11ee-9d36-00044bcc3306/hand_sign_recognition-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done
Then I tried the command export OMP_NUM_THREADS=1, after which I saw this output:
Traceback (most recent call last):
File "/home/mericgeren/catkin_ws/src/ros_hand_gesture_recognition/src/hand_sign_recognition.py", line 50, in <module>
hand_sign = HandSignRecognition()
File "/home/mericgeren/catkin_ws/src/ros_hand_gesture_recognition/src/hand_sign_recognition.py", line 25, in __init__
rospy.get_param("hand_sign_recognition/keypoint_classifier_model"))
File "/home/mericgeren/catkin_ws/src/ros_hand_gesture_recognition/src/gesture_recognition.py", line 32, in __init__
self.hands, self.keypoint_classifier, self.keypoint_classifier_labels = self.load_model()
File "/home/mericgeren/catkin_ws/src/ros_hand_gesture_recognition/src/gesture_recognition.py", line 41, in load_model
min_tracking_confidence=self.min_tracking_confidence,
File "/home/mericgeren/.local/lib/python3.6/site-packages/mediapipe/python/solutions/hands.py", line 129, in __init__
'multi_handedness'
File "/home/mericgeren/.local/lib/python3.6/site-packages/mediapipe/python/solution_base.py", line 262, in __init__
self._graph.start_run(self._input_side_packets)
RuntimeError: ; eglGetDisplay() returned error 0x3000ontext_egl.cc:156)
libgcc_s.so.1 must be installed for pthread_cancel to work
Could you offer your advice and guidance please?
Kindest regards.
I have tried switching back to version 7.5.0, which has its libgcc_s.so.1 file inside /lib/aarch64-linux-gnu, but the error persisted. I also tried this command:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/lib/aarch64-linux-gnu
But the nodes still shut themselves down and I still see the "libgcc_s.so.1 must be installed for pthread_cancel to work" message on the screen.
| gharchive/issue | 2023-11-28T09:09:19 | 2025-04-01T04:33:09.403643 | {
"authors": [
"mericgeren"
],
"repo": "TrinhNC/ros_hand_gesture_recognition",
"url": "https://github.com/TrinhNC/ros_hand_gesture_recognition/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2757181200 | 🛑 Navigator API is down
In 7969197, Navigator API (https://api.trocdigital.io/ping) was down:
HTTP code: 503
Response time: 427 ms
Resolved: Navigator API is back up in 64985d0 after 1 hour, 4 minutes.
| gharchive/issue | 2024-12-24T05:02:55 | 2025-04-01T04:33:09.506786 | {
"authors": [
"Oslan17"
],
"repo": "Trocdigital/Navigator-uptime",
"url": "https://github.com/Trocdigital/Navigator-uptime/issues/454",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
456772511 | Please Update PESO Logo to this new one [0x30fef258d2728f9d1edf038059c725faf785697e]
In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:
[x] Filename in the lowercase format
[x] Image format: png
[x] Image size: 256 by 256 px
Updated here https://github.com/TrustWallet/tokens/commit/8b25225289369e7a3b4535b3f9b6f6efe11eb718
| gharchive/pull-request | 2019-06-17T06:56:46 | 2025-04-01T04:33:09.512861 | {
"authors": [
"kolya182",
"pesotoken"
],
"repo": "TrustWallet/tokens",
"url": "https://github.com/TrustWallet/tokens/pull/1834",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
162605171 | CSS Sources?
Hi there,
I would like to use casper as a starting point to my own theme and I wonder, if there are Sass or LESS sources anywhere?
Greets Jan
Nope, it's all straight CSS :smile:
| gharchive/issue | 2016-06-28T05:27:42 | 2025-04-01T04:33:09.514041 | {
"authors": [
"acburdine",
"ausminternet"
],
"repo": "TryGhost/Casper",
"url": "https://github.com/TryGhost/Casper/issues/255",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
425019935 | Fixed should have a unique "key" prop error when using component
closes https://github.com/TryGhost/Ghost-SDK/issues/42
Why the error was occurring:
the tags(data, opts) helper from @tryghost/helpers returns an array
our use of the helper with <></> around the output of each option resulted in an array of <React.Fragment> elements with no keys set
we return the array of react fragment elements, React maps over this and sees that each top-level element in the list has no unique key
How it was fixed:
return <span> elements directly rather than wrapping in fragments so top-level elements have a key
define opts.separator as a getter so that a unique key can be generated each time it is accessed by the tags() helper
@rishabhgrg, @AileenCGN this is my first dip into react so it would be great to get a quick review to make sure I've understood the problem properly and the solution is valid 😄
| gharchive/pull-request | 2019-03-25T17:13:49 | 2025-04-01T04:33:09.517350 | {
"authors": [
"kevinansfield"
],
"repo": "TryGhost/Ghost-SDK",
"url": "https://github.com/TryGhost/Ghost-SDK/pull/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2549678976 | Add Estonian (et) translations
This pull request adds Estonian (et) language translations to Ghost.
Translated files:
comments.json
ghost.json
portal.json
search.json
signup-form.json
Changes:
Created new 'et' folder in the locales directory
Translated all strings from English to Estonian in the above files
I'm a native Estonian speaker, but if there are any questions about specific translations, please let me know.
Thank you for considering this contribution to make Ghost more accessible to Estonian-speaking users!
Thank you for submitting translations to Ghost! 🙏🏻 We really appreciate every string you’ve added. There might be a few new strings that still need translation in Estonian to keep your site fully localized. If you spot any untranslated or missing strings, let us know, and we’ll fix them ASAP! 🎉
| gharchive/pull-request | 2024-09-26T06:52:52 | 2025-04-01T04:33:09.520445 | {
"authors": [
"WindTheWhisperer",
"ronaldlangeveld"
],
"repo": "TryGhost/Ghost",
"url": "https://github.com/TryGhost/Ghost/pull/21129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
113202625 | Add nodejs v4.0.0 and up as usable engines.
chore(package.json): Add node ^4.0.0 as usable engines.
closes #6002
Adds node v4.0.0 and up as usable engines in package.json
Please see the discussion in #5821 for the prerequisites for support
| gharchive/pull-request | 2015-10-25T02:45:59 | 2025-04-01T04:33:09.522185 | {
"authors": [
"ErisDS",
"kaustavha"
],
"repo": "TryGhost/Ghost",
"url": "https://github.com/TryGhost/Ghost/pull/6001",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1063346644 | when build, ts error TS4032: 'LoadingBarInst' cannot be named
This function solves the problem
Exported variable '_sfc_main' has or is using name 'LoadingBarInst' from external module "D:/WorkSpace/ithinkdt-cloud/ithinkdt-frame-front/node_modules/naive-ui/lib/loading-bar/src/LoadingBarProvider" but cannot be named.
Expected API
fix it
Please provide a reproduction; I really can't guess from this alone.
Closing for now; this will be reopened once a reproduction is provided.
| gharchive/issue | 2021-11-25T09:31:40 | 2025-04-01T04:33:09.557312 | {
"authors": [
"07akioni",
"Talljack",
"liuzw2579"
],
"repo": "TuSimple/naive-ui",
"url": "https://github.com/TuSimple/naive-ui/issues/1679",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1266082719 | fix: set min height of alert
The alert component should have at least the height of the icon. Right now, the icon overlapses the container in case it's height is higher than the container itself.
The solution may not fix the problem.
If you want an empty alert to work, I think you can pass this to n-alert.
<template #header> </template>
@07akioni my problem is the other way around: I want bigger icons but since the icon is absolutely positioned the box doesn't take the icon's size into consideration when calculating the layout.
The design is intended to solve customization problem. Icon's position is customizable, so I used absolute position for icon.
If we want to make it take space we may need to figure out a way to keep icon's position customizable.
Hm, how's the icon's position customizable right now? Based on the code I can't see any possibility to move it anywhere else except for using specific custom styles.
iconMargin can be customized.
If it's just about the margin I still don't understand why position absolute is used. A simple flex layout should do the same and even respect the size of the icon.
Margin can place the icon in an arbitrary position.
Some designers in companies may request it be placed in some unusual position.
However I think this is a problem, it should be fixed in some way.
| gharchive/pull-request | 2022-06-09T12:49:39 | 2025-04-01T04:33:09.562362 | {
"authors": [
"07akioni",
"jaulz"
],
"repo": "TuSimple/naive-ui",
"url": "https://github.com/TuSimple/naive-ui/pull/3074",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2162718889 | 🛑 Content Delivery Network (S3 Bucket) is down
In d5711ea, Content Delivery Network (S3 Bucket) (https://cdn.tubnet.gg/minecraft-resourcepack/TubPack-production.zip) was down:
HTTP code: 522
Response time: 15496 ms
Resolved: Content Delivery Network (S3 Bucket) is back up in ed2273e after 15 minutes.
| gharchive/issue | 2024-03-01T06:41:36 | 2025-04-01T04:33:09.565043 | {
"authors": [
"PublicQualityAcc"
],
"repo": "Tubnom/tubnet-uptime",
"url": "https://github.com/Tubnom/tubnet-uptime/issues/1637",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1804248173 | What happened to v2 tags in docker hub?
It seems that after the v1.6.1 release (or v1.6.0), all Docker image tags with v2 were removed, so suddenly our image ref tufin/oasdiff:v2.0.1 is no longer valid.
Hi @aghajani
We decided to abandon v2 due to golang complexities (see discussion and issue for more info).
We're back on the v1 track now and v1.6.1 is the latest release.
Sorry about the confusion.
Reuven
By the way, docker pull tufin/oasdiff:stable should always give you the latest stable version.
Thanks for the update, I could guess and already adjusted our system to use the v1.6.1, but as a general note, deleting tags is not a good practice since many pipelines and tools are using pinned versions/tags for good reasons and a change/decision like that can easily break them all. The releases are supposed to be immutable ;)
| gharchive/issue | 2023-07-14T06:09:28 | 2025-04-01T04:33:09.570076 | {
"authors": [
"aghajani",
"reuvenharrison"
],
"repo": "Tufin/oasdiff",
"url": "https://github.com/Tufin/oasdiff/issues/324",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
859132336 | Integration of an auto-updater
We want to integrate an auto-updater to not only remind the user about a new version to download, but also remove the hassle to extract the files and so on.
Basically we want to stay with zip files and not change to an installer.
suggestions:
https://github.com/ravibpatel/AutoUpdater.NET
https://github.com/vslavik/winsparkle
https://github.com/synhershko/NAppUpdate
https://github.com/Squirrel/Squirrel.Windows
already sorted out:
https://github.com/NetSparkleUpdater/NetSparkle - no zip file support
Does anyone have any experience with any of them?
Which one would you suggest?
Do you have any other ideas?
Starting with version 2.2.0 not only a reminder to download the new version is shown, but the zip file will be conveniently downloaded and extracted to the program folder after update confirmation.
For that to work the zip files no longer contain the root folder. For now it also only works if the current user has write permission to the TumblThree folder.
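The update flow described here — download the zip and extract it over the program folder, with archives that no longer contain a root folder — can be sketched as follows. TumblThree itself is a C#/WPF application, so this Python version is purely illustrative, and the file names are made up:

```python
import os
import zipfile

def apply_update(zip_path, install_dir):
    """Extract an update archive over the program folder; returns files written.

    Because the archive has no root folder, entries land directly in the
    install directory. Entries that would escape it are rejected.
    """
    root = os.path.realpath(install_dir)
    written = []
    with zipfile.ZipFile(zip_path) as archive:
        for member in archive.namelist():
            target = os.path.realpath(os.path.join(install_dir, member))
            if target != root and not target.startswith(root + os.sep):
                raise ValueError("unsafe path in archive: " + member)
            archive.extract(member, install_dir)
            written.append(member)
    return written
```

As noted above, this only works if the current user has write permission to the install folder.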
| gharchive/issue | 2021-04-15T18:09:59 | 2025-04-01T04:33:09.576939 | {
"authors": [
"thomas694"
],
"repo": "TumblThreeApp/TumblThree",
"url": "https://github.com/TumblThreeApp/TumblThree/issues/143",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
72700532 | Fix Door Remove Cancell
Fixes /cancel removedoor bugging arenas.
again can you explain what you are attempting to do here? I'm not quite following
/z cancel removedoor glitches the full arena doors.
The problem here is that the doors are not reloaded on enable or reload, which glitches arenas.
@realmaster42 I made some changes, tell me if that looks similar to what you were trying to do
Yup.
i would send it to you if it worked 100% still some zombie spawning problems
| gharchive/pull-request | 2015-05-02T21:22:20 | 2025-04-01T04:33:09.584088 | {
"authors": [
"Turkey2349",
"realmaster42"
],
"repo": "Turkey2349/Call_Of_Minecraft-Zombies",
"url": "https://github.com/Turkey2349/Call_Of_Minecraft-Zombies/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1424887918 | 🛑 TurtleWiki is down
In 74c8206, TurtleWiki (https://turtlewiki.turtlecode84.repl.co/wiki) was down:
HTTP code: 0
Response time: 0 ms
Resolved: TurtleWiki is back up in 5778530.
| gharchive/issue | 2022-10-27T01:20:16 | 2025-04-01T04:33:09.592041 | {
"authors": [
"TurtleCode84"
],
"repo": "TurtleCode84/status",
"url": "https://github.com/TurtleCode84/status/issues/444",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1954051679 | Teams alert is not showing as expected
Describe the bug
I have integrated teams alerting system but the alert that I am getting is not as expected.
I am not getting the proper icon in the alert.
What do you see?
As you can see in this image, the icon is not getting resolved; I am getting the raw value "🚨"
What do you expect to see?
I want to see like this, the proper icon
List the steps that must be taken to reproduce this issue
configure teams alerting system
Version
latest
Additional information
@TwiN pls look into this bug
I see the same.
I've also experienced this problem, it started when I updated teams a few weeks ago 😞
I'm not using the "Try the new Teams" feature although the same issue exists when I turn it on
I think color code needs to be changed (https://github.com/TwiN/gatus/blob/de7256e671c3cc6106a154a8fd49df426f4ce56a/alerting/provider/teams/teams.go#L108), not sure
I don't know what side is wrong on this since other similar tools also got broken notifications...
This topic was recently reported at Microsoft at https://techcommunity.microsoft.com/t5/microsoft-teams/new-teams-unicode-hex-character-and-emoji/m-p/3963936
google also show up earlier similar issues...
Both flavours of Teams (traditional and "new") are affected according to my experience.
To be completely honest, that seems more like an issue with Microsoft Teams than with Gatus, however, it's an issue nonetheless.
If allowing the customization of the title in the message generated by the webhook payload sent by Gatus resolves the situation, then I'd be willing to add a new alerting.teams.title field.
There's already a precedent for this anyways: see #602, which was made for Discord's title to be customizable.
@TwiN sounds like a new feature. thanks
The issue is limited to the Microsoft Teams classic desktop client.
Gatus alert cards are rendered correctly on my phone Teams App and on the web client via Office 365.
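The proposed `alerting.teams.title` field boils down to making the card title overridable instead of hard-coding an emoji-prefixed default. A sketch of the payload construction — the MessageCard field names are real, but the default strings and colors here are illustrative, not Gatus's exact values:

```python
def build_teams_payload(endpoint, resolved, title=None):
    """Build a minimal Teams MessageCard with an overridable title."""
    if title is None:
        # Default titles embed emoji, which the classic desktop client
        # currently renders as raw escape sequences.
        title = "✅ Alert resolved" if resolved else "🚨 Alert triggered"
    return {
        "@type": "MessageCard",
        "@context": "http://schema.org/extensions",
        "themeColor": "#36A64F" if resolved else "#DD0000",
        "title": title,
        "text": "An alert for %s has been %s"
                % (endpoint, "resolved" if resolved else "triggered"),
    }
```

A user hitting the rendering bug could then configure a plain-text title and avoid the broken emoji entirely.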
| gharchive/issue | 2023-10-20T10:59:47 | 2025-04-01T04:33:09.612849 | {
"authors": [
"221bshashank",
"JamesHillyard",
"Rodri9o",
"TwiN",
"amai2012"
],
"repo": "TwiN/gatus",
"url": "https://github.com/TwiN/gatus/issues/598",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
865654566 | [Feature Request] Chat Replies
The chat replies feature has rolled out to all Twitch channels, would be nice to have it on the backlog (low priority) to send a message in chat as a reply to a thread.
More info: https://help.twitch.tv/s/article/chat-basics?language=en_US#replies
ctx.reply now exists as of bc65337ea2cbaefb0ce641a4acc9b51b548f9adc
| gharchive/issue | 2021-04-23T01:31:42 | 2025-04-01T04:33:09.630987 | {
"authors": [
"GavinEke",
"IAmTomahawkx"
],
"repo": "TwitchIO/TwitchIO",
"url": "https://github.com/TwitchIO/TwitchIO/issues/163",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1155839838 | feat: expose limit parameter of fetch_following
Pull request summary
This PR addresses the issue #216 exposing the parameter limit. It needs the PR #274 to work properly.
# test.py
import twitchio
import asyncio
async def main():
token = "<token>"
client = twitchio.Client(token=token)
try:
user = (await client.fetch_users(names=["s4"]))[0]
following = await user.fetch_following(limit=400)
names = set([f.to_user.name for f in following])
print(len(following), len(names)) # 400, 100
finally:
if client._http.session:
await client._http.session.close()
if __name__ == "__main__":
"""Run test."""
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.run_until_complete(main())
Checklist
[x] If code changes were made then they have been tested.
[ ] I have updated the documentation to reflect the changes.
[ ] I have updated the changelog with a quick recap of my changes.
[x] This PR fixes an issue.
[ ] This PR adds something new (e.g. new method or parameters).
[ ] This PR is a breaking change (e.g. methods or parameters removed/renamed)
[ ] This PR is not a code change (e.g. documentation, README, ...)
This is something that would have to be implemented on all pageable http methods for it to be considered
I converted it to draft for now. I will check how many pageable http methods there are later.
Hello, I'm closing this due to inactivity. If you wish to continue working on this, feel free to send a message or open a new PR. as it stands i believe this may be superseded by #359 so it might not be required
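The behaviour the PR exposes — keep requesting pages of up to 100 items until the caller's `limit` is reached or the cursor runs out — can be sketched generically. This is not TwitchIO's internals, just an illustration of cursor pagination with a limit, where `fetch_page(cursor, first)` stands in for one Helix-style HTTP call:

```python
def fetch_paginated(fetch_page, limit, page_size=100):
    """Collect up to `limit` items from a cursor-paginated endpoint.

    `fetch_page(cursor, first)` must return `(items, next_cursor)`;
    `next_cursor` is None when there are no more pages.
    """
    items, cursor = [], None
    while len(items) < limit:
        # Never ask for more than the remaining number of items.
        first = min(page_size, limit - len(items))
        page, cursor = fetch_page(cursor, first)
        items.extend(page)
        if not page or cursor is None:  # exhausted the results
            break
    return items[:limit]
```

With this shape, asking for `limit=400` when only 250 followers exist simply returns all 250, which matches the test script in the PR description.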
| gharchive/pull-request | 2022-03-01T21:32:11 | 2025-04-01T04:33:09.635270 | {
"authors": [
"IAmTomahawkx",
"tiagovla"
],
"repo": "TwitchIO/TwitchIO",
"url": "https://github.com/TwitchIO/TwitchIO/pull/276",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
596158477 | Fix Healthy Scout & Peak Form perks, implementation via model wrapping
Fixes https://github.com/Tyler-IN/MnB2-Bannerlord-CommunityPatch/issues/30
Yeah if another mod changes out the model it doesn't apply.
Might need to add a fallback patch to handle that scenario, but if a mod is replacing the model, it might not even want to take the default behavior into account...
I also don't know if this even works yet, haven't tested in game.
Well, I believe they definitely still want the perk fixes even with custom models. That's what I'm getting at. Swapping out a model breaking a perk fix doesn't feel right.
I like the approach, but agreed, I'd imagine lots of mods will be going down the custom models approach, which means they're losing out on the perk fixes themselves if they haven't implemented them. I think i'd prefer not checking if the models a custom one myself but I can see arguments for and against that.
Is it possible to add a check for whether TaleWorlds fixes this? That way, when they put a fix out (assuming they fix it in the default model, which seems sensible to me), we aren't doubling perk effects.
This needs an IL hash check
Prefer #60
| gharchive/pull-request | 2020-04-07T21:18:54 | 2025-04-01T04:33:09.687034 | {
"authors": [
"Skau",
"Tyler-IN",
"tynakuh"
],
"repo": "Tyler-IN/MnB2-Bannerlord-CommunityPatch",
"url": "https://github.com/Tyler-IN/MnB2-Bannerlord-CommunityPatch/pull/59",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1894977095 | The 'LockErr' option is missing from the 'Usage' section (the "help" output)
Hello,
The 'LockErr' option is missing from the 'Usage' section (the "help" output) of this GitHub repository (https://github.com/Tylous/Talon#usage-1).
The 'Usage' section looks like this:
But when running './Talon_3.1_linux_amd64 --help', it shows that '-LockErr' is an available option for version 3.1:
That's odd, let me look into it.
@s-miller-001, that was my bad, it was a pull request I made in the original that added that feature. I updated and made a pull request that will add it to the README.
| gharchive/issue | 2023-09-13T17:42:06 | 2025-04-01T04:33:09.693779 | {
"authors": [
"Tylous",
"ZerkerEOD",
"s-miller-001"
],
"repo": "Tylous/Talon",
"url": "https://github.com/Tylous/Talon/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
352073102 | Cannot use with lazy-loaded modules
When trying to import the module into a lazy loaded module (or shared module), this error appears on loading the module
Error: BrowserModule has already been loaded. If you need access to common directives such as NgIf and NgFor from a lazy loaded module, import CommonModule instead.
:tada: This issue has been resolved in version 1.1.2 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/issue | 2018-08-20T10:22:29 | 2025-04-01T04:33:09.699123 | {
"authors": [
"greg73",
"scttcper"
],
"repo": "TypeCtrl/ngx-rightclick",
"url": "https://github.com/TypeCtrl/ngx-rightclick/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
516013473 | Documentation for generated json file --json
Is there a place where the codes & structure of the --json output are documented?
I need this to create a tool that can consume it.
#936 adds types for the output, they don't all have doc comments, but it's a lot better than what's currently available.
Ow, that is a great job you're doing. Thanks.
When do you think this will get merged?
I'm just waiting for Anthony to have another look at the most recent changes, should be merged after that. I'm not sure when he'll have the time though.
v0.16.0 has a new export JSONOutput which types the outputted JSON.
The root symbol is JSONOutput.ProjectReflection
Thanks for this @Gerrit0, I've been following it for a few months so I'm happy to see it land.
A couple of questions I have after trying this out.
From what I understand, some reflections like those with kindStrings of "External module" and "Interfaces" have a property called children, but none of the exported reflection types have a property called children.
The DeclarationReflection model has a signatures property, but the exported DeclarationReflection type does not.
Please could you also clarify what type the root of the output JSON is. If I type it as JSONOutput.ProjectReflection I see a type error.
Conversion of type '{ "id": number; "name": string; "kind": number; "flags": {}; "originalName": string; "children": ({ "id": number; "name": string; "kind": number; "kindString": string; "flags": { "isExported": boolean; }; "originalName": string; "children": ({ ...; } | { ...; })[]; "groups": { ...; }[]; "sources": { ...; }[]; } | .....' to type 'ProjectReflection' may be a mistake because neither type sufficiently overlaps with the other. If this was intentional, convert the expression to 'unknown' first.
Types of property 'groups' are incompatible.
Type '{ "title": string; "kind": number; "children": number[]; }[]' is not comparable to type 'ReflectionGroup[]'.
Type '{ "title": string; "kind": number; "children": number[]; }' is missing the following properties from type 'ReflectionGroup': flags, name, idts(2352)
Thanks! I believe that was just missed, checking the types now. The error with groups also looks like a bug. I guess I should have added a test to ensure importing the JSON as the serialized type doesn't give an error. Fix later today :)
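For a tool consuming the `--json` output, the key point from this thread is that a reflection node can nest further reflections under both `children` and `signatures`. A minimal consumer sketch (in Python for illustration; node shapes follow `typedoc --json` output, but the helper itself is hypothetical):

```python
def collect_reflections(node, kind):
    """Walk a TypeDoc JSON reflection tree, collecting names of one kindString.

    Recurses through both `children` and `signatures` — the two containers
    a consumer has to handle.
    """
    found = []
    if node.get("kindString") == kind:
        found.append(node.get("name"))
    for key in ("children", "signatures"):
        for child in node.get(key) or []:
            found.extend(collect_reflections(child, kind))
    return found
```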
| gharchive/issue | 2019-11-01T09:48:42 | 2025-04-01T04:33:09.714392 | {
"authors": [
"Gerrit0",
"HHogg",
"pirix-gh"
],
"repo": "TypeStrong/typedoc",
"url": "https://github.com/TypeStrong/typedoc/issues/1125",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1512850765 | chore(DIST-1899): Add "yarn preview" to serve embed lib
Wait for approval on https://github.com/Typeform/typortal-chrome-extension/pull/33
[BOT] Preview available with hash 9198c262409ea0152b4725d9e187d2dfc171b567 here.
:tada: This PR is included in version @typeform/embed-v2.6.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2022-12-28T14:27:11 | 2025-04-01T04:33:09.718590 | {
"authors": [
"mathio",
"typeform-ops-gha"
],
"repo": "Typeform/embed",
"url": "https://github.com/Typeform/embed/pull/550",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2291791646 | 🛑 Web OPL is down
In 935f434, Web OPL (https://opl.uab.cat) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Web OPL is back up in 21bfae1 after 12 minutes.
| gharchive/issue | 2024-05-13T04:28:40 | 2025-04-01T04:33:09.740056 | {
"authors": [
"JordiRoman"
],
"repo": "UAB-OPL/opl-uab-monitoring",
"url": "https://github.com/UAB-OPL/opl-uab-monitoring/issues/1687",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2270467036 | 🛑 Web OPL is down
In 8023b2b, Web OPL (https://opl.uab.cat) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Web OPL is back up in 452401a after 8 minutes.
| gharchive/issue | 2024-04-30T04:16:29 | 2025-04-01T04:33:09.742370 | {
"authors": [
"JordiRoman"
],
"repo": "UAB-OPL/opl-uab-monitoring",
"url": "https://github.com/UAB-OPL/opl-uab-monitoring/issues/232",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2308599112 | 🛑 Web OPL is down
In 64c510a, Web OPL (https://opl.uab.cat) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Web OPL is back up in 351fed4 after 5 minutes.
| gharchive/issue | 2024-05-21T15:44:48 | 2025-04-01T04:33:09.744816 | {
"authors": [
"JordiRoman"
],
"repo": "UAB-OPL/opl-uab-monitoring",
"url": "https://github.com/UAB-OPL/opl-uab-monitoring/issues/2668",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1728623285 | [Documentation Update] Update Maintainers Section in README
Issue ‼️ :
Currently the Maintainer Section is like this :
How it should be ✅:
Update the Title from Maintainers to Project Admin
There should be a proper Profile Photo and name of the project admins
After Clicking it should Redirect to the Project admin's Social Profile.
Resources 📃:
Project Admins Social Profile links :
Siddhant Patil (Project Lead)
Github: https://github.com/Siddhant-Patil0203
LinkedIn: https://www.linkedin.com/in/sidd0203
Instagram: https://www.instagram.com/sidd.0203/
Twitter: https://twitter.com/Sidd0203
Naresh Chandanbatve
Github: https://github.com/Naresh-chandanbatve
LinkedIn: https://www.linkedin.com/in/naresh-chandanbatve
Instagram: https://www.instagram.com/naresh_chandanbatve/
Twitter: https://twitter.com/NareshChandanb1
Harshal Lade
Github: https://github.com/LadeHarshal
LinkedIn: https://www.linkedin.com/in/harshal-lade-08749a214/
Instagram: https://www.instagram.com/harshallade2/
Twitter: https://twitter.com/Sidd0203
Vishal Kesharwani
Github: https://github.com/vishal10kesharwani
LinkedIn: https://www.linkedin.com/in/vishal-kesharwani-1004vs/
Instagram: https://www.instagram.com/dev.vishalvsk/
Twitter: https://twitter.com/Vishal46255005
Saurabh Yadav
Github: https://github.com/Saurabb-coder
LinkedIn: https://www.linkedin.com/in/saurabh-yadav-469323208/
Instagram: https://www.instagram.com/saurabh739/
Twitter: https://twitter.com/Sidd0203
Note 📝: Search for the best Readme Admin Template from other repositories or the internet.
I want to work on this issue
Okay @Tushar98644
Go ahead
| gharchive/issue | 2023-05-27T10:10:51 | 2025-04-01T04:33:09.757048 | {
"authors": [
"Siddhant-Patil0203",
"Tushar98644"
],
"repo": "UBA-GCOEN/StichHub",
"url": "https://github.com/UBA-GCOEN/StichHub/issues/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1328724578 | how to connect the botnet.exe to the control panel
I have compiled the botnet and set up the control panel successfully, but I don't have any idea how to connect them together.
Note: after executing the bot (bot.exe), nothing happened.
Any help, please.
Did you update gate.h with your panel endpoint?
I'm working on my localhost, so I think I don't have to update anything.
What should the size of the agent be?
| gharchive/issue | 2022-08-04T14:29:39 | 2025-04-01T04:33:09.796553 | {
"authors": [
"AnaMazda",
"matricali"
],
"repo": "UBoat-Botnet/UBoat",
"url": "https://github.com/UBoat-Botnet/UBoat/issues/66",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1658200506 | login error
Fatal error: Uncaught PDOException: SQLSTATE[HY000] [1045] Access denied for user 'root'@'localhost' (using password: YES) in C:\xampp\htdocs\PHP\vendor\model.php:10 Stack trace: #0 C:\xampp\htdocs\PHP\vendor\model.php(10): PDO->__construct('mysql:host=loca...', 'root', Object(SensitiveParameterValue)) #1 C:\xampp\htdocs\PHP\vendor\controller.php(48): Model->__construct() #2 C:\xampp\htdocs\PHP\private\controllers\login.php(30): Controller->loadModel('user') #3 C:\xampp\htdocs\PHP\vendor\goat.php(90): Login->index() #4 C:\xampp\htdocs\PHP\index.php(22): goat->__construct(Array) #5 {main} thrown in C:\xampp\htdocs\PHP\vendor\model.php on line 10
https://imgur.com/a/EOnFxhC
You should change the database username and password in the config file (config.php)
https://github.com/UBoat-Botnet/UBoat/wiki/Panel-Setup#31-locate-panelsrcconfigconfigphp
| gharchive/issue | 2023-04-07T00:32:04 | 2025-04-01T04:33:09.799847 | {
"authors": [
"AhmedSakrr",
"matricali"
],
"repo": "UBoat-Botnet/UBoat",
"url": "https://github.com/UBoat-Botnet/UBoat/issues/67",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1508791000 | Test data
Are there no annotation labels for the test data?
Here is my version of the annotation test labels.
| gharchive/issue | 2022-12-23T02:20:29 | 2025-04-01T04:33:09.801035 | {
"authors": [
"dickymuhr"
],
"repo": "UCAS-GYX/YouTube-GDD",
"url": "https://github.com/UCAS-GYX/YouTube-GDD/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1696278973 | Write .h5 files every 10th run to save space. Some small bugfixes.
This is for issue #9 to reduce the total volume of the saved diagnostic beamformer output. We should save the beamformer outputs only every 10th time that we process.
Looks good to me
| gharchive/pull-request | 2023-05-04T15:48:39 | 2025-04-01T04:33:09.803793 | {
"authors": [
"danielczech",
"lacker"
],
"repo": "UCBerkeleySETI/commensal-automator",
"url": "https://github.com/UCBerkeleySETI/commensal-automator/pull/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2266452982 | 🛑 Student Media - Motley is down
In 0afc405, Student Media - Motley (https://motley.ie) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Student Media - Motley is back up in 8514d45 after 6 minutes.
| gharchive/issue | 2024-04-26T20:23:11 | 2025-04-01T04:33:09.806142 | {
"authors": [
"gal"
],
"repo": "UCCNetsoc/upptime",
"url": "https://github.com/UCCNetsoc/upptime/issues/1203",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2762493601 | Question about disabling the encryption in PicoQUIC
Hi @ntyunyayev,
I'm currently working on a paper and would like to disable encryption in PicoQUIC to perform some tests. In section 4.8 of the paper you mention "To remove the cost of encryption, we decided to replace the encrypt and decrypt operations with a call to the memcpy function."
I've already looked through several repositories*, but I haven't been able to find the code that removes the encryption. Can you explain where exactly you did this?
Thanks in advance!
* https://github.com/ntyunyayev/picoquic/tree/dev_pquic_dpdk , https://github.com/UCLouvain-ENSG/picoquic-dpdk-experiments , https://github.com/IPNetworkingLab/picoquic-dpdk
Hello !
Thanks for showing interest in our work :)
It has been a while since I looked at the code. If I remember correctly, the trick was to modify picotls directly. I think we modified the aead_encrypt/aead_decrypt functions.
You can find those functions here : https://github.com/h2o/picotls/blob/master/lib/picotls.c
I see that now there is a flag called PTLS_FUZZ_HANDSHAKE that compiles alternative versions of those functions which looks similar to what we used, you will probably need to modify them slightly, but the idea is here.
I hope it helps,
Best regards,
Nikita
Hi Nikita,
thanks for your quick reply. This is very helpful, I will try it!
This is great. I will test this and post here if there is anything special to consider, in case anyone else wants to replicate this.
Kind regards
David
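For anyone replicating this, the memcpy trick amounts to keeping the AEAD call sites intact while the "cipher" just copies bytes through. A Python sketch of the idea (the real change goes into picotls's C `aead_encrypt`/`aead_decrypt`; the dummy tag here is an assumption to keep packet sizes realistic):

```python
TAG_LEN = 16  # real AEADs append a 16-byte authentication tag

def null_aead_encrypt(plaintext, key, nonce, aad):
    """Stand-in for aead_encrypt: a plain copy plus a dummy tag.

    No cipher work happens — benchmarking only; the output is neither
    confidential nor authenticated.
    """
    return bytes(plaintext) + b"\x00" * TAG_LEN

def null_aead_decrypt(ciphertext, key, nonce, aad):
    """Stand-in for aead_decrypt: strip the dummy tag and copy the payload."""
    return bytes(ciphertext[:-TAG_LEN])
```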
| gharchive/issue | 2024-12-29T23:52:47 | 2025-04-01T04:33:09.813349 | {
"authors": [
"DavidoTek",
"ntyunyayev"
],
"repo": "UCLouvain-ENSG/picoquic-dpdk",
"url": "https://github.com/UCLouvain-ENSG/picoquic-dpdk/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
188595813 | Possible bug in missing value assertions
Just noticed that in my test script, there's an assertion that should fail but does not. development_tests.py#L96
Probably minor, but i don't have time to diagnose it right now and wanted to make a note.
We should convert to using a test runner and continuous integration. Add it to the todo list...
Right. We're using TravisCI and Coveralls generally. Need to get this hooked up.
| gharchive/issue | 2016-11-10T19:25:21 | 2025-04-01T04:33:09.822005 | {
"authors": [
"Eh2406",
"smmaurer",
"waddell"
],
"repo": "UDST/orca_test",
"url": "https://github.com/UDST/orca_test/issues/7",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
432921838 | lrts smp/multicore megacon build fails with undefined reference to `TraceTimerCommon'
Original author: Jim Phillips (@jcphill)
Original issue: https://charm.cs.illinois.edu/redmine/issues/1574
One example, on Bridges:
./build charm++ multicore-linux64 iccstatic --no-build-shared --enable-tracing --enable-tracing-commthread -optimize
cd multicore-linux64-iccstatic/tests/converse/megacon
make pgm
../../../bin/charmc -o pgm blkinhand.o megacon.o ringsimple.o ring.o fibobj.o fibthr.o broadc.o priotest.o deadlock.o vars.o nodenum.o specmsg.o bigmsg.o vecsend.o posixth.o future.o multicast.o multisend.o handler.o reduction.o -language converse++
icpc: warning #10237: -lcilkrts linked in dynamically, static library not available
../../../bin/../lib/libconv-cplus-y.a(machine.o): In function `AssembleDatagram':
machine.c:(.text+0x2084): undefined reference to `TraceTimerCommon'
../../../bin/../lib/libconv-cplus-y.a(machine.o): In function `AssembleReceivedDatagrams':
machine.c:(.text+0x16cd0): undefined reference to `TraceTimerCommon'
../../../bin/../lib/libconv-cplus-y.a(machine.o): In function `IntegrateMessageDatagram':
machine.c:(.text+0x19ebc): undefined reference to `TraceTimerCommon'
Fatal Error by charmc in directory /pylon5/mtsg4cp/jphillip/charm-6.8.0-proj-build-2017-May-25-21817-multicore-linux64-iccstatic/charm-6.8.0-beta2/multicore-linux64-iccstatic/tests/converse/megacon
Command icpc -static-intel -o pgm -L../../../bin/../lib -I../../../bin/../include blkinhand.o megacon.o ringsimple.o ring.o fibobj.o fibthr.o broadc.o priotest.o deadlock.o vars.o nodenum.o specmsg.o bigmsg.o vecsend.o posixth.o future.o multicast.o multisend.o handler.o reduction.o moduleinit8409.o -lmemory-default -lthreads-default -lconv-cplus-y -lconv-core -ltmgr -lconv-util -lconv-partition -ltrace-converse -lmemory-default -lthreads-default -lldb-rand -lconv-ldb -lpthread -lckqt -ldl -lmoduleNDMeshStreamer -lmodulecompletion -lm returned error code 1
Note that megatest (Charm++) works, as does NAMD.
The following NAMD builds have this issue, which is exactly the lrts layers with smp/multicore:
Beagle-smp.log
BlueGeneQ-lrts.log
Bridges-MPI-smp.log
Bridges-multicore.log
BW-smp.log
Cori-KNL-smp.log
Cori-smp.log
Edison-smp.log
Jetson.log
JYC-smp.log
Linux-KNL-multicore.log
Linux-x86_64-lrts-smp.log
Linux-x86_64-multicore-gcc.log
Linux-x86_64-multicore.log
Linux-x86_64-verbs-smp.log
MacOSX-x86_64.log
Stampede2.log
Stampede2-multicore.log
Stampede-verbs-smp.log
Taub-verbs-smp.log
Theta.log
Titan-smp.log
Win64.log
Win64-MPI-smp.log
Original date: 2017-05-26 17:17:51
OK, reproduced with ./build charm++ multicore-darwin-x86_64 --no-build-shared --enable-tracing --enable-tracing-commthread -optimize - no need for the Intel compilers to hit this. After looking at the relevant code and rebuilding accordingly, it's specifically --enable-tracing-commthread that triggers the failure.
The same issue arises with
./build charm++ netlrts-darwin-x86_64-smp --enable-tracing-commthread -j5 -g
cd netlrts-darwin-x86_64-smp/tests/converse/megacon
make
Original date: 2017-05-26 17:27:48
~~https://charm.cs.illinois.edu/gerrit/2558~~ https://github.com/UIUC-PPL/charm/commit/ef205bc39a5469b853c82e31cf5ed3513d13aaaa
| gharchive/issue | 2017-05-25T23:36:13 | 2025-04-01T04:33:09.852374 | {
"authors": [
"PhilMiller",
"pplimport"
],
"repo": "UIUC-PPL/charm",
"url": "https://github.com/UIUC-PPL/charm/issues/1574",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
436995715 | Add concept of exclusions to automated provisioning arguments
Original issue: https://charm.cs.illinois.edu/redmine/issues/1939
Sometimes it is desirable to leave a portion of a machine's resources unused by Charm++ so that noise sources like the OS kernel can execute on its own core (for example).
Some private planning documentation indicates this as a potential design for these arguments, though it is not clear what is meant by an exclusion:
Exclusions (low priority, can be solved implicitly)
++exPerHost
++exPerSocket
++exPerCore
Another possibility is:
++excludeSocketsPerHost
++excludeCoresPerHost
++excludePUsPerHost
++excludeCoresPerSocket
++excludePUsPerSocket
++excludePUsPerCore
Original date: 2018-07-18 19:13:13
I think the proposed ones look good:
++excludeSocketsPerHost
++excludeCoresPerHost
++excludePUsPerHost
++excludeCoresPerSocket
++excludePUsPerSocket
++excludePUsPerCore
The most common usage scenario is to leave one core or one PU idle per socket or host for the OS kernel to run uninterrupted on.
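One way to read the per-socket and per-core variants is as simple subtraction from the hardware topology before the runtime decides how many PEs to launch. A hypothetical sketch (the semantics here are an assumption for illustration, not the implemented behavior):

```python
def usable_pus(sockets_per_host, cores_per_socket, pus_per_core,
               exclude_sockets_per_host=0,
               exclude_cores_per_socket=0,
               exclude_pus_per_core=0):
    """Count the PUs left per host after applying hypothetical exclusions."""
    sockets = sockets_per_host - exclude_sockets_per_host
    cores = cores_per_socket - exclude_cores_per_socket
    pus = pus_per_core - exclude_pus_per_core
    return sockets * cores * pus

# A 2-socket, 16-core-per-socket, 2-way SMT host, with one core per
# socket left idle for the OS kernel to run uninterrupted on:
print(usable_pus(2, 16, 2))                              # 64
print(usable_pus(2, 16, 2, exclude_cores_per_socket=1))  # 60
```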
| gharchive/issue | 2018-06-21T21:22:08 | 2025-04-01T04:33:09.856591 | {
"authors": [
"evan-charmworks",
"stwhite91"
],
"repo": "UIUC-PPL/charm",
"url": "https://github.com/UIUC-PPL/charm/issues/1939",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1702994791 | Avoid padding in converse message header due to alignment issues
Reordering members of converse message header to fix alignment padding
If you run git grep CMK_MSG_HEADER, there are more files that could benefit from this change.
| gharchive/pull-request | 2023-05-10T02:10:48 | 2025-04-01T04:33:09.857884 | {
"authors": [
"adityapb",
"evan-charmworks"
],
"repo": "UIUC-PPL/charm",
"url": "https://github.com/UIUC-PPL/charm/pull/3733",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
122267415 | Enable using Build vars inside of the RCs
When deploying an RC to Kubernetes, it would be good if the $BUILD_ID var could be used, so we can deploy the RC with the image we've just built
In reality, what we want to add to the RCs is the tag of the image, so I've added that functionality.
You can do this:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - image: nginx:{{ .TAG }}
          name: nginx
          ports:
            - name: nginx
              containerPort: 8000
You need to define the tag in the drone file, the plugin will resolve that tag and set the value in the RC:
deploy:
  kubernetes:
    image: quay.io/ukhomeofficedigital/drone-kubernetes
    registry: quay.io
    replicationcontrollers:
      - kubernetes/app-rc.yaml
    services: []
    token: $$TOKEN
    apiserver: https://kubeapi-dev.dsp.notprod.homeoffice.gov.uk:6443
    namespace: default
    debug: false
    tag: v1.4.$$BUILD_NUMBER
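Conceptually, the resolution step is just a placeholder substitution over the manifest text before it is sent to the Kubernetes API. A minimal sketch (the real plugin is written in Go and uses Go templates; this is not its actual implementation):

```python
def resolve_tag(manifest_text, tag):
    # Substitute the resolved tag into every {{ .TAG }} placeholder
    # in the ReplicationController manifest.
    return manifest_text.replace("{{ .TAG }}", tag)

rc = "- image: nginx:{{ .TAG }}"
print(resolve_tag(rc, "v1.4.27"))  # - image: nginx:v1.4.27
```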
But be aware that you have to use the same tag in the publish section:
publish:
  docker:
    registry: quay.io
    username: $$QUAY_USER
    password: $$QUAY_PASSWORD
    email: $$QUAY_EMAIL
    repo: quay.io/ukhomeofficedigital/deep-ui
    storage_driver: vfs
    tag:
      - latest
      - v1.4.$$BUILD_NUMBER
    when:
      branch: master
| gharchive/issue | 2015-12-15T13:01:08 | 2025-04-01T04:33:09.865947 | {
"authors": [
"ipedrazas"
],
"repo": "UKHomeOffice/drone-kubernetes",
"url": "https://github.com/UKHomeOffice/drone-kubernetes/issues/2",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2065370186 | Won't you add UHV+ textures?
I don't know if such a question should be asked; I used machine translation, so the tone may not be very good.
I assume you're here for the 1.12 one? Since I'm following the GTCEu Modern and UHV+ items are not yet implemented in it, I will be focusing on the Items available first!
I will eventually make it tho, just not on priority!
It's 1.20.2
| gharchive/issue | 2024-01-04T10:15:04 | 2025-04-01T04:33:09.882246 | {
"authors": [
"3453890470",
"ULSTICK"
],
"repo": "ULSTICK/GregTechRefreshed",
"url": "https://github.com/ULSTICK/GregTechRefreshed/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2656223896 | Change controls for highlighting
Changed the controls for highlighting to Right click
added radio buttons to pick your highlight color
added a way to keep all of a cell's edges bold while the background stays white (a weird edge case that's still hard to describe)
made our highlight colors more pastel and have better contrast
fixed a weird websocket bug where, on ctrl-click, every edge would bold at once, then unbold, and then re-bold one edge at a time
and implemented e2e tests
For a possible future task, it might be worth having one of the radio buttons selected by default (and maybe adding control instructions, but I think that's a different story).
| gharchive/pull-request | 2024-11-13T17:22:04 | 2025-04-01T04:33:09.898352 | {
"authors": [
"0beyer",
"Bonesbhbhbh"
],
"repo": "UMM-CSci-3601-F24/it-3-mary-shellys-cool-1918-howard-frankendogs-football-team",
"url": "https://github.com/UMM-CSci-3601-F24/it-3-mary-shellys-cool-1918-howard-frankendogs-football-team/pull/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1667507972 | 🖥Smart Maps Documentation Sub-working group
Please comment in this thread or add yourself to Volunteers if you are interested in working with the Smart Maps sub-working group by 4/21 (4/20 PST)!!
Purpose
Who is the documentation for?
Recommended languages:
English
Japanese
Implementation
What tool would we like to use?
Recommendation
Material for MkDocs (https://squidfunk.github.io/mkdocs-material/)
Docusaurus
Existing Resources
Objective 7 has tried writing documents in Markdown and converting them into a website on GitHub Pages
UNVT Training list
Volunteers
@albertkun
@hfu
@ubukawa
Next steps
Have a list of volunteers above and schedule a meeting to decide on:
Target audience - who is this for?
Documentation authoring tool - how will we be building/maintaining this tool?
Smart Maps Content - what content should be included
Create a Use Case Catalog
@albertkun Thanks for your contribution!
Do you think it is a good idea to include other outreach items such as #88 and #115 in the Documentation Sub-working group, or should we think about them separately?
@hfu You are welcome! Thank you for providing a positive and welcoming space!
Personally, I think it depends if we want to use the Documentation as either:
a) Smart Maps Volunteer Hub that "documents the working group"
or
b) Smart Maps technical documentation that "documents technical aspects of the working group".
Under a) yes! It would make sense to pull together all outreach material, like #88 and #115, and share the process of how Smart Map does stickers and community cards.
With b) the documentation would be more limited to explaining and training on how anyone can use the deliverables that the DWG7 is working on, like UNVT or the Smart Map Bazaar.
We can also combine the two a) + b) but, I think a key for successful documentation is knowing who it is for, so we can target those people first.
Your input as the lead on DWG7 would be extremely valuable on how you think we should proceed!
Thank you, @albertkun. I tend to like writing documentation that only we can write, like a project document. In addition, I tend to focus on developer happiness rather than capacity building. So my initial idea is a) above.
On the other hand, I suppose @ubukawa is a good document writer for capacity building as in b). I see he is also a motivated writer to help and train others.
Another suggestion is that we can more casually document by sharing idea sketches and work logs so that we developers can motivate each other. I named this concept 伝習 (Denshu) but I would say that this approach was not a big success.
I personally like to follow a) where we write the document for existing and future DWG 7 participants.
@hfu Thank you for your thoughtful reply!
I like the idea of a) as above and would love to also include 伝習 as well for existing and future DWG 7 participants.
Even by going with a) we can still make b) a sub-section of the documentation.
Overall, this is a good start to focusing in on the content!
@hfu @yuiseki For the time being, as a proof of concept, can I copy some of the content over from the Wiki and Objective 7 repository?
@albertkun Of course, you are most welcome! Objective 7 repository is MIT License, so you can redistribute it freely!
@albertkun Of course there is no problem to make a copy and improve the Project document. I really appreciate your effort.
Meanwhile, I was thinking about using ChatGPT to summarize the Project document ;-)
@hfu @yuiseki Thank you for both your support! Should we create a new repository?
If so, may I suggest: https://github.com/UNopenGIS/7-docs?
I think we should definitely use ChatGPT where we can to streamline! 😄🤖
Yes, '7-docs' would be a good name.
I invited @albertkun to the 'unopengis' organization with an owner role.
@hfu Thank you very much! I'll get started just putting together existing resources so that when the Documentation sub group meets we can discuss how to improve it! 🙂
Dear @albertkun
Thank you for your effort!
I am interested in working with you.
(But, I am still at my transition period, and I am not sure about my next responsibility allows me to spare some time for it. Please register me as a volunteer with such consideration;D)
For your informaiton, I wrote my Qiita articles for the following audiences:
My UN colleagues (to explain the background of our vector tile tools)
My UN colleague who will take over my job responsibility of taking care of vector tiles
Myself (I often forget some command, so it is a kind of my technical note)
I think that the term "Smart Maps" can cover various things, and it is not practical to develop all documents for all the topics. As you already discussed, I think starting from the possible audiences and training would be a good idea.
I look forward to working with you!!
Thank you for your support and feedback @ubukawa !
No problem about your transition period! I agree that "smart maps" can mean various things, including the UN Vector Tile initiative! We should make sure to have a place to put your helpful training, so it is great that you will be helping us to work on the Documentation!
Excited to be working with you!!
For those interested, the documentation repository is:
https://github.com/UNopenGIS/7-docs
The current documentation (beta) is here:
https://unopengis.github.io/7-docs/
@hfu @ubukawa (@yuiseki?) could we schedule a working meeting next week for Documentation and explaining how to add content! I'm pretty flexible but would like to suggest Friday 4/28 at 1pm JST (4/27 at 9pm PST) time! Please give a thumbs up or comment with an alternate time/day!
I created a separate issue on the working meeting at #137.
Thank you for creating the separate issue @hfu! Will follow-up on it!
Forgive me for writing one-sidedly about my thoughts.
If you like, please use it as a reference.
What we are doing in this working group is in fact very diverse and complex, but it may be easier to understand if we divide it into the following layers, as in the OSI model, for example
Layer 1: GIS data format layer
Understand industry standards for what data formats geospatial information should be handled in computing
Also, an understanding of projection, coordinate systems, and geodesy is included here.
This layer is the most important because it is the foundation for all upper layers.
(personal opinion)
The difficulty with GIS is that, historically, data formats and their distribution and presentation have been closely linked and are still evolving every day
I personally feel that this makes GIS difficult to understand for many beginners
Layer 2: GIS data construction, persistence, and retrieval layer
Understand GIS databases like PostGIS
Also, understanding and contribution to open data such as OpenStreetMap should be included here
Layer 3: GIS data conversion and processing layer
Understand how to convert and process GIS data as needed
Certain GIS data formats may not be suitable for distribution or presentation
In that case, the GIS data format must be converted or processed into another GIS data format
Understanding the process of extracting only the required data from GIS data is also included in this layer
A vast array of Linux commands exist for this conversion and processing
Layer 4: GIS data distribution layer
Understand how to distribute GIS data so that end users can receive it
This layer requires an understanding of not only GIS, but also Internet communication protocols, various cloud infrastructures, and in some cases, server management and operations
Layer 5: GIS data presentation layer
Understand how to display the GIS data being delivered in a way that is easy for the end user to view and handle
This layer requires an understanding of HTML/CSS/Javascript for web applications
The official documentation and official wiki are the most organized and complete in terms of individual documentation for each layer.
And practical techniques are already available as technical articles on qiita and other web sites.
I think the point of writing our own documentation is to help the reader quickly understand each layer and the connections between these layers, to see the big picture of Smart Maps, and to be able to reach existing resources and explore them on their own.
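To make Layers 1 and 3 concrete, here is a small, self-contained sketch of one of the most common conversion steps: projecting WGS 84 longitude/latitude to Web Mercator, the projection most web map tiles use. In practice tools such as GDAL/ogr2ogr or PROJ handle this; the code below is just the standard spherical Web Mercator forward formula.

```python
import math

# Spherical Web Mercator (EPSG:3857) forward projection.
# Input: WGS 84 longitude/latitude in degrees; output: metres.
EARTH_RADIUS = 6378137.0  # WGS 84 semi-major axis

def lonlat_to_webmercator(lon, lat):
    x = math.radians(lon) * EARTH_RADIUS
    y = math.log(math.tan(math.pi / 4 + math.radians(lat) / 2)) * EARTH_RADIUS
    return x, y

# The map origin projects to (0, 0); the antimeridian lands at roughly
# +/-20037508.34 m, the familiar Web Mercator extent.
print(lonlat_to_webmercator(0.0, 0.0))
print(lonlat_to_webmercator(180.0, 0.0))
```

Understanding why such a formula exists (Layer 1) and when to apply it (Layer 3) is exactly the kind of layer-to-layer connection the documentation can teach.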
I'm obsessed with making large-scale language models contribute to humanity, so I developed something like this!
I would be happy if you were aware that this kind of thing is possible...
Introducing TRIDENT, a UN-dedicated interactive document exploration and humanity assistance system.
https://trident.yuiseki.net/
This system currently targets only 18 English-language documents written by @ubukawa san for exploration,
However, as an architecture, the target documents can be expanded as much as possible.
For example, it is also possible to input and explore all UN resolutions in this system.
This system does not write text freely the way ChatGPT does; instead, its capabilities are deliberately limited, so the chances of it lying are extremely low.
(However, this system can still give wrong answers.)
I release this system as free and open source software:
https://github.com/yuiseki/TRIDENT
Although not related to GIS, apart from TRIDENT, I have developed and operate a system called OPTIMIZER, which explores and summarises the welfare programs of the Tokyo Metropolitan Government's municipalities.
https://optimizer.yuiseki.net/
https://github.com/yuiseki/OPTIMIZER
This means that it is technically possible to not only search existing documents, but also to have them concisely summarised, depending on the user's requirements.
Key points:
The Markdown format is an excellent format that present-day large language models can understand accurately
By allowing Markdown files to be explored in a large-scale language model, learners can search for and learn even ambiguous terms
Looks like we can close this issue because we are in the next stage. First engine cut-off.
| gharchive/issue | 2023-04-14T04:57:49 | 2025-04-01T04:33:09.938649 | {
"authors": [
"albertkun",
"hfu",
"ubukawa",
"yuiseki"
],
"repo": "UNopenGIS/7",
"url": "https://github.com/UNopenGIS/7/issues/119",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1349892666 | Sprint 1 - Team 2 Task Ticket 1: Design low/medium- fidelity prototype, with basic UI and UX
Description
Task: Design low/medium- fidelity prototype, with basic UI and UX.
Feature: Achievement System
Design the layout and interaction methods of the achievement interface, draw prototypes and conduct user research, and further design according to the research results.
Dependencies
Milestones
[x] Conduct background research and design low-fidelity prototypes (Aug. 23)
[ ] Conduct user surveys to get feedback (Aug. 25)
[ ] Based on feedback, do further research, modify and finalize the prototype. (Aug. 26)
Completion Deadline: Aug. 26
Documentation
Main description of feature
JavaDoc
...
Member
Minrui Xu (@wsxmr1234)
Qicheng Chen (@Wayneecc)
Xinkai Tang (@Kai9613)
Zihao Xia (@zihao-xia)
Each of our team members researched the achievement system of existing games and found some essential elements of the achievement system interface. Usually each achievement has a title and a corresponding icon, and the icon is an abstract representation of the corresponding achievement information. In addition, achievements of different categories are usually displayed in different categories in the achievement system interface of existing games. In addition to these basic elements, the achievement interface in Overwatch also displays the achievement time and achievement rewards.
The figure below is the achievement interface of Genshin Impact. Mobile games usually display achievement information in a single column because of the small phone screen. Because computer monitors are usually much larger than phone screens, there is more space to display content, so a double-column layout is the more common form. Our achievement interface is therefore also designed with two columns.
This is my preliminary design of the achievement interface (low-fidelity prototype).
Another style of prototype:
Each cube with a dark background colour is an achievement. The purple eye icon at the bottom is a button: when you click it, a pop-up shows the conditions for earning the achievement, or details about it.
We also designed an 'Items Collection Achievement'; this achievement is displayed as a list to show users how many game items they have discovered. Clicking each item will also bring up its details.
| gharchive/issue | 2022-08-24T18:59:48 | 2025-04-01T04:33:09.975523 | {
"authors": [
"Wayneecc",
"wsxmr1234"
],
"repo": "UQdeco2800/2022-ext-studio-1",
"url": "https://github.com/UQdeco2800/2022-ext-studio-1/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2398841113 | Added userid and addDate in vwAnnualUnitData view
Ticket
https://github.com/US-EPA-CAMD/easey-ui/issues/6292
Changes:
Added userid and addDate in vwAnnualUnitData view
Code is compiled on dev database and data is visible in the view.
| gharchive/pull-request | 2024-07-09T18:09:43 | 2025-04-01T04:33:09.980687 | {
"authors": [
"usmanahmederg"
],
"repo": "US-EPA-CAMD/easey-db-scripts",
"url": "https://github.com/US-EPA-CAMD/easey-db-scripts/pull/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
943585921 | Finish ECMPS Responsiveness
Complete the design for ECMPS responsiveness
Developer Assets
Responsive Screens
Icons Used
Little Menus
ATTN @Qb10 @tdavydets @jwhitehead77
ECMPS Responsiveness and Layout Updates
Here are the screens for ECMPS responsiveness.
Notes
There are a couple of menus introduced in these updated medium-fi wireframes.
☝️ The first menu should use the Utility Menu design pattern from the CAMPD side
☝️ The second menu should use the Main Menu design pattern from the CAMPD side
☝️ The other two menus (Profile and Tabs) can use the new "Little Menu" pattern—which I tried to model off of the React Menu component.
Please let me know if you have any questions.
Best,
Alex
CC @JorjaComer @MoAdeyoju
| gharchive/issue | 2021-07-13T16:04:25 | 2025-04-01T04:33:09.987352 | {
"authors": [
"JorjaComer",
"cvp-alexrobinson"
],
"repo": "US-EPA-CAMD/easey-ui",
"url": "https://github.com/US-EPA-CAMD/easey-ui/issues/1680",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
173122225 | [SPARKLER 16] Output status code to log.debug
Outputs the status code to log.debug
the format is
STATUS CODE : XXX url
@thammegowda
Is this what you had in mind?
@rahulpalamuttam Looks good as of now.
We are planning to replace the fetcher code with Nutch's fetcher down the line.
Until that time, this works.
Thanks
| gharchive/pull-request | 2016-08-25T06:15:27 | 2025-04-01T04:33:10.001603 | {
"authors": [
"rahulpalamuttam",
"thammegowda"
],
"repo": "USCDataScience/sparkler",
"url": "https://github.com/USCDataScience/sparkler/pull/29",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
184991690 | Started Sparkler-UI & added Banana as a Git submodule
Linked to #23
Documentation will follow soon.
@thammegowda : Kindly review the PR.
Wiki for this has been added - https://github.com/USCDataScience/sparkler/wiki/Sparkler-Dashboard---Setup
Moving banana submodule inside sparkler-ui directory as it is more appropriate there.
Merged. Thanks @karanjeets @manishdwibedy
| gharchive/pull-request | 2016-10-25T00:50:32 | 2025-04-01T04:33:10.003933 | {
"authors": [
"karanjeets",
"thammegowda"
],
"repo": "USCDataScience/sparkler",
"url": "https://github.com/USCDataScience/sparkler/pull/39",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2303363912 | Color Palette Set Up
Added two new color palette related functions:
TADA_ColorPalette - creates colorblind accessible palette for use in TADA visualizations
TADA_ViewColorPalette - view palette color swatches with labels identifying their position in palette and hex code for easier future development (or if users want to design additional figures with a similar look)
Applied the new palette to the following functions:
TADA_Boxplot
TADA_Histogram
TADA_OverviewMap
TADA_FieldValuesPieChart
TADA_Scatterplot
TADA_TwoCharacteristicScatterplot
We may want to discuss some additional options for TADA_FieldValuesPieChart as the number of colors required can exceed the number in the palette (n = 9). Right now I am using color ramp to generate additional colors, but it doesn't always look great. Might be worth adding some additional colors to the palette in TADA_ColorPalette? And then only using color ramp if a really large number of colors is needed?
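One way to handle the n > 9 case, sketched here in Python rather than R and not TADA's actual implementation (which uses R's colour-ramp machinery), is to keep the hand-picked palette whenever it is large enough and only interpolate between neighbouring palette colours when more are requested:

```python
def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(rgb):
    return "#{:02x}{:02x}{:02x}".format(*rgb)

def expand_palette(palette, n):
    """Return the first n colours as-is when possible; otherwise
    linearly interpolate between neighbouring palette entries."""
    if n <= len(palette):
        return palette[:n]
    rgbs = [hex_to_rgb(c) for c in palette]
    out = []
    for i in range(n):
        t = i * (len(rgbs) - 1) / (n - 1)   # position along the ramp
        lo = int(t)
        hi = min(lo + 1, len(rgbs) - 1)
        f = t - lo
        out.append(rgb_to_hex(tuple(
            round(a + (b - a) * f) for a, b in zip(rgbs[lo], rgbs[hi]))))
    return out

# Hypothetical 3-colour palette expanded to 5 pie-chart slices.
print(expand_palette(["#005ea2", "#e66f0e", "#4d8055"], 5))
```

Because the original colours survive unchanged whenever n is small enough, the hand-tuned contrast and colourblind accessibility are only compromised in the genuinely large-n case.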
Merged depth profile functions with color palette setup. Depth profile functions include: TADA_IDDepthProfile, TADA_DepthProfilePlot, TADA_DepthCategory.Flag
@cristinamullin - this is ready for your review
@cristinamullin - thanks for catching that. I like your proposed solution and will make those updates.
@hillarymarler I noticed that the new columns generated by some of the depth functions are appended to the end of the dataframe (TADA.ConsolidatedDepth.Bottom, TADA.DepthCategory.Flag, TADA.DepthProfileAggregation.Flag & there may be more). Can you add any new columns that are created to the require.cols list at the top of RequiredCols.R. It looks like a few are there already under the ActivityDepth or ResultDepth sections but not all.
@hillarymarler I like your idea for addressing the pie chart colors: "adding some additional colors to the palette in TADA_ColorPalette and then only using color ramp if a really large number of colors is needed". Do you want to create a separate issue for that or include in this PR?
Recommendations for figure/map colors:
The orange outline for sites in the TADA_OverviewMap is a bit faint/hard to see. Should we use the same dark blue outline for the sites that is used for TADA_Scatterplot point outlines (is that #6 in the TADA palette)?
In TADA_TwoCharacteristicScatterplot the second char points are orange with a blue outline. Could these have a darker orange outline instead? So the inside would be #2 in the TADA palette and outline would be #7?
TADA_FieldValuesPie. For figures like this that use a lot of colors, I am wondering if the TADA_ColorPalette could specify by default which colors to use first and in which order? For example, if we always want to use #3 (blue) first followed by #2 (orange), can that specification be included in TADA_ColorPalette?
My thoughts so far:
...new columns generated by some of the depth functions are appended to the end of the dataframe (TADA.ConsolidatedDepth.Bottom, TADA.DepthCategory.Flag, TADA.DepthProfileAggregation.Flag & there may be more). Can you add any new columns that are created to the require.cols list at the top of RequiredCols.R.
Yes I will add those to require.cols in RequiredCols.R
"adding some additional colors to the palette in TADA_ColorPalette and then only using color ramp if a really large number of colors is needed". Do you want to create a separate issue for that or include in this PR?
This is not difficult technically, but I will do that separately as I will need seem time to figure out which colors to add so that the palette remains colorblind accessible
The orange outline for sites in the TADA_OverviewMap is a bit faint/hard to see. Should we use the same dark blue outline for the sites that is used for TADA_Scatterplot point outlines
Yes, I'll change this
In TADA_TwoCharacteristicScatterplot the second char points are orange with a blue outline. Could these have a darker orange outline instead? So the inside would be #2 in the TADA palette and outline would be #7?
Yes, I'll change this
TADA_FieldValuesPie. For figures like this that use a lot of colors, I am wondering if the TADA_ColorPalette could specify by default which colors to use first and in which order? For example, if we always want to use blue first followed by orange) can that specification be included in TADA_ColorPalette?
I have noticed that the order of the colors in the palette does not seem to determine which colors are selected first in TAD_FieldValuesPie. I think we should specify the order we want the colors used in, but it may need to happen within the plotly functions used to create the figure. I think there is a way to tell it to use the colors in order from the palette or specify the order within the function. I'll take a look at this and figure out the best place to do it.
@hillarymarler I tried running TADA_DepthProfilePlot with surfacevalue and bottomvalue null, which currently produces an error message. Recommendation: update this so that when these inputs are null, the function returns the figure without the black horizontal lines and text marking surface, middle, and bottom.
> TADA_DepthProfilePlot(Data_6Tribes_5y_Harmonized,
+ groups = c("TEMPERATURE_NA_NA_DEG C", "PH_NA_NA_NA", "DEPTH, SECCHI DISK DEPTH_NA_NA_M"),
+ location = "REDLAKE_WQX-ANKE",
+ activity_date = "2018-10-04",
+ surfacevalue = NULL,
+ bottomvalue = NULL)
[1] "TADA_DepthProfilePlot: Running TADA_DepthCategoryFlag function to add required columns to data frame"
[1] "TADA_DepthCategory.Flag: checking data set for depth values. 69516 results have depth values available."
[1] "TADA_DepthCategory.Flag: assigning depth categories."
[1] "TADA_DepthCategory.Flag: Grouping results by MonitoringLocationIdentifier, OrganizationIdentifier, CharacteristicName, and ActivityStartDate for aggregation for entire water column."
[1] "TADA_DepthCategory.Flag: No aggregation performed."
[1] "TADA_DepthProfilePlot: Depth unit in data set matches depth unit specified by user for plot. No conversion necessary."
[1] "TADA_DepthProfilePlot: Identifying available depth profile data."
[1] "TADA_DepthProfilePlot: Any results for DEPTH, SECCHI DISK DEPTH, DEPTH, SECCHI DISK DEPTH (CHOICE LIST), DEPTH, SECCHI DISK DEPTH REAPPEARS, DEPTH, DATA-LOGGER (NON-PORTED), DEPTH, DATA-LOGGER (PORTED), RBP STREAM DEPTH - RIFFLE, RBP STREAM DEPTH - RUN, THALWEG DEPTH match the depth unit selected for the figure."
Joining with `by = join_by(ActivityTypeCode, TADA.ActivityType.Flag, ActivityMediaName, TADA.ActivityMediaName,
ActivityMediaSubdivisionName, ResultSampleFractionText, TADA.ResultSampleFractionText, TADA.SampleFraction.Flag,
TADA.FractionAssumptions, CharacteristicName, TADA.CharacteristicName, TADA.CharacteristicNameAssumptions, SubjectTaxonomicName,
SampleTissueAnatomyName, MethodSpeciationName, TADA.MethodSpeciationName, TADA.MethodSpeciation.Flag,
TADA.SpeciationAssumptions, TADA.ComparableDataIdentifier, TADA.Harmonized.Flag, ActivityStartDate, ActivityStartTime.Time,
ActivityStartTime.TimeZoneCode, ActivityStartDateTime, ResultMeasureValue, TADA.ResultMeasureValue,
TADA.ResultMeasureValueDataTypes.Flag, ResultValueTypeName, TADA.ResultValueAboveUpperThreshold.Flag,
TADA.ResultValueBelowLowerThreshold.Flag, ResultMeasure.MeasureUnitCode, TADA.ResultMeasure.MeasureUnitCode,
TADA.WQXResultUnitConversion, TADA.ResultUnit.Flag, ResultDetectionConditionText, DetectionQuantitationLimitTypeName,
DetectionQuantitationLimitMeasure.MeasureValue, TADA.DetectionQuantitationLimitMeasure.MeasureValue,
TADA.DetectionQuantitationLimitMeasure.MeasureValueDataTypes.Flag, DetectionQuantitationLimitMeasure.MeasureUnitCode,
TADA.DetectionQuantitationLimitMeasure.MeasureUnitCode, TADA.CensoredData.Flag, TADA.CensoredMethod, TADA.ConsolidatedDepth,
TADA.ConsolidatedDepth.Unit, ResultDepthHeightMeasure.MeasureValue, TADA.ResultDepthHeightMeasure.MeasureValue,
TADA.ResultDepthHeightMeasure.MeasureValueDataTypes.Flag, ResultDepthHeightMeasure.MeasureUnitCode,
TADA.ResultDepthHeightMeasure.MeasureUnitCode, ResultDepthAltitudeReferencePointText, ActivityRelativeDepthName,
ActivityDepthHeightMeasure.MeasureValue, TADA.ActivityDepthHeightMeasure.MeasureValue,
TADA.ActivityDepthHeightMeasure.MeasureValueDataTypes.Flag, ActivityDepthHeightMeasure.MeasureUnitCode,
TADA.ActivityDepthHeightMeasure.MeasureUnitCode, ActivityTopDepthHeightMeasure.MeasureValue,
TADA.ActivityTopDepthHeightMeasure.MeasureValue, TADA.ActivityTopDepthHeightMeasure.MeasureValueDataTypes.Flag,
ActivityTopDepthHeightMeasure.MeasureUnitCode, TADA.ActivityTopDepthHeightMeasure.MeasureUnitCode,
ActivityBottomDepthHeightMeasure.MeasureValue, TADA.ActivityBottomDepthHeightMeasure.MeasureValue,
TADA.ActivityBottomDepthHeightMeasure.MeasureValueDataTypes.Flag, ActivityBottomDepthHeightMeasure.MeasureUnitCode,
TADA.ActivityBottomDepthHeightMeasure.MeasureUnitCode, ResultTimeBasisText, StatisticalBaseCode, ResultFileUrl,
ResultAnalyticalMethod.MethodName, ResultAnalyticalMethod.MethodDescriptionText, ResultAnalyticalMethod.MethodIdentifier,
ResultAnalyticalMethod.MethodIdentifierContext, ResultAnalyticalMethod.MethodUrl, TADA.AnalyticalMethod.Flag,
SampleCollectionMethod.MethodIdentifier, SampleCollectionMethod.MethodIdentifierContext, SampleCollectionMethod.MethodName,
SampleCollectionMethod.MethodDescriptionText, SampleCollectionEquipmentName, MeasureQualifierCode,
TADA.MeasureQualifierCode.Flag, TADA.MeasureQualifierCode.Def, ResultCommentText, ActivityCommentText, HydrologicCondition,
HydrologicEvent, DataQuality.PrecisionValue, DataQuality.BiasValue, DataQuality.ConfidenceIntervalValue,
DataQuality.UpperConfidenceLimitValue, DataQuality.LowerConfidenceLimitValue, SamplingDesignTypeCode, LaboratoryName,
ResultLaboratoryCommentText, ResultIdentifier, ActivityIdentifier, OrganizationIdentifier, OrganizationFormalName,
TADA.MultipleOrgDuplicate, TADA.MultipleOrgDupGroupID, TADA.ResultSelectedMultipleOrgs, TADA.SingleOrgDupGroupID,
TADA.SingleOrgDup.Flag, ProjectName, ProjectDescriptionText, ProjectIdentifier, ProjectFileUrl, QAPPApprovedIndicator,
QAPPApprovalAgencyName, CountryCode, StateCode, CountyCode, MonitoringLocationName, MonitoringLocationTypeName,
MonitoringLocationDescriptionText, LatitudeMeasure, TADA.LatitudeMeasure, LongitudeMeasure, TADA.LongitudeMeasure,
HorizontalCoordinateReferenceSystemDatumName, HUCEightDigitCode, MonitoringLocationIdentifier, TADA.NearbySiteGroups,
AquiferName, AquiferTypeName, LocalAqfrName, ConstructionDateText, WellDepthMeasure.MeasureValue,
WellDepthMeasure.MeasureUnitCode, WellHoleDepthMeasure.MeasureValue, WellHoleDepthMeasure.MeasureUnitCode,
ActivityDepthAltitudeReferencePointText, ActivityEndDate, ActivityEndTime.Time, ActivityEndTime.TimeZoneCode,
ActivityEndDateTime, ActivityConductingOrganizationText, SampleAquifer, ActivityLocation.LatitudeMeasure,
ActivityLocation.LongitudeMeasure, ResultStatusIdentifier, ResultWeightBasisText, ResultTemperatureBasisText,
ResultParticleSizeBasisText, USGSPCode, BinaryObjectFileName, BinaryObjectFileTypeCode, AnalysisStartDate,
ResultDetectionQuantitationLimitUrl, LabSamplePreparationUrl, timeZoneStart, timeZoneEnd, SourceMapScaleNumeric,
HorizontalAccuracyMeasure.MeasureValue, HorizontalAccuracyMeasure.MeasureUnitCode, HorizontalCollectionMethodName,
VerticalMeasure.MeasureValue, VerticalMeasure.MeasureUnitCode, VerticalAccuracyMeasure.MeasureValue,
VerticalAccuracyMeasure.MeasureUnitCode, VerticalCollectionMethodName, VerticalCoordinateReferenceSystemDatumName,
FormationTypeText, ProjectMonitoringLocationWeightingUrl, DrainageAreaMeasure.MeasureValue, DrainageAreaMeasure.MeasureUnitCode,
ContributingDrainageAreaMeasure.MeasureValue, ContributingDrainageAreaMeasure.MeasureUnitCode, ProviderName, LastUpdated,
TADA.ConsolidatedDepth.Bottom, TADA.DepthCategory.Flag, TADA.DepthProfileAggregation.Flag)`
[1] "TADA_DepthProfilePlot: Adding surface delination to figure."
Error: Must supply `x` and `y` attributes
@cristinamullin should "null" be the default for surfacevalue and bottomvalue in this function?
I like what you have now (consistent with the default used to the categories) but null would work too.
From: hillarymarler @.>
Sent: Thursday, May 23, 2024 3:23 PM
To: USEPA/TADA @.>
Cc: Mullin, Cristina (she/her/hers) @.>; Mention @.>
Subject: Re: [USEPA/TADA] Color Palette Set Up (PR #462)
@hillarymarler I tried running TADA_DepthProfilePlot with surfacevalue and bottomvalue set to NULL, which currently produces an error message. Recommendation: update this so that when these inputs are NULL, the function returns the figure without the black horizontal lines and text on the figure marking surface, middle, and bottom.
TADA_DepthProfilePlot(Data_6Tribes_5y_Harmonized,
groups = c("TEMPERATURE_NA_NA_DEG C", "PH_NA_NA_NA", "DEPTH, SECCHI DISK DEPTH_NA_NA_M"),
location = "REDLAKE_WQX-ANKE",
activity_date = "2018-10-04",
surfacevalue = NULL,
bottomvalue = NULL)
[1] "TADA_DepthProfilePlot: Running TADA_DepthCategoryFlag function to add required columns to data frame"
[1] "TADA_DepthCategory.Flag: checking data set for depth values. 69516 results have depth values available."
[1] "TADA_DepthCategory.Flag: assigning depth categories."
[1] "TADA_DepthCategory.Flag: Grouping results by MonitoringLocationIdentifier, OrganizationIdentifier, CharacteristicName, and ActivityStartDate for aggregation for entire water column."
[1] "TADA_DepthCategory.Flag: No aggregation performed."
[1] "TADA_DepthProfilePlot: Depth unit in data set matches depth unit specified by user for plot. No conversion necessary."
[1] "TADA_DepthProfilePlot: Identifying available depth profile data."
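The recommended NULL handling above — returning the figure without the surface/middle/bottom markers when those inputs are NULL — amounts to skipping any marker layer whose depth was not supplied. A minimal sketch of that logic (illustrative only: TADA itself is an R package, and the function and argument names here are hypothetical):

```python
def depth_marker_lines(surfacevalue=None, middlevalue=None, bottomvalue=None):
    """Build the horizontal marker lines for a depth profile figure,
    skipping any boundary the caller left as NULL/None."""
    markers = []
    for label, depth in (("Surface", surfacevalue),
                         ("Middle", middlevalue),
                         ("Bottom", bottomvalue)):
        if depth is not None:  # a missing boundary simply adds no line
            markers.append({"label": label, "y": depth})
    return markers
```

With all three inputs left as None this returns an empty list, so the plotting layer never receives the x/y-less annotations that trigger the "Must supply x and y attributes" error.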
| gharchive/pull-request | 2024-05-17T18:37:41 | 2025-04-01T04:33:10.038055 | {
"authors": [
"cristinamullin",
"hillarymarler"
],
"repo": "USEPA/TADA",
"url": "https://github.com/USEPA/TADA/pull/462",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1373571504 | Use CAMPD API instead of FACT
FACT API will be decommissioned
Will the CAMPD API include all of the same data endpoints as the FACT API?
Specifically I'm interested in the MATS_FLAG_CODE that indicates startup and shutdown (https://github.com/singularity-energy/open-grid-emissions/issues/155#issuecomment-1196114277)
| gharchive/issue | 2022-09-14T20:49:56 | 2025-04-01T04:33:10.041019 | {
"authors": [
"grgmiller",
"j-tafoya"
],
"repo": "USEPA/camd-eia-crosswalk",
"url": "https://github.com/USEPA/camd-eia-crosswalk/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2625890486 | 🧑💻: Add back to top button
Title
Adding a back to top button.
Enhancement Aim
The purpose of this enhancement is to improve user experience by adding a "Back to Top" button. This button allows users to quickly navigate back to the top of a page without manually scrolling, especially useful for pages with long content.
Changes
Add a "Back to Top" button that becomes visible when the user scrolls down a certain distance on the page.
Implement smooth scrolling for a better user experience.
Screenshots 📷
No response
Guidelines
[X] I have read the guidelines
[ ] I have the link to my latest merged PR
Full Name
Banasmita Jena
Participant Role
Hactoberfest, gssoc-extd
@banasmita24 there is no coded website for this project.
| gharchive/issue | 2024-10-31T03:27:51 | 2025-04-01T04:33:10.098145 | {
"authors": [
"UTSAVS26",
"banasmita24"
],
"repo": "UTSAVS26/PyVerse",
"url": "https://github.com/UTSAVS26/PyVerse/issues/965",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2593726488 | Add ARP Spoofing Detection Tool with GUI and Real-Time Monitoring
Pull Request for PyVerse 💡
Requesting to submit a pull request to the PyVerse repository.
Issue Title
Add ARP Spoofing Detection Tool with GUI and Real-Time Monitoring
[x] I have provided the issue title.
Info about the Related Issue
What's the goal of the project?
The goal of this project is to provide an ARP Spoofing Detection Tool that monitors the local network for ARP cache poisoning attacks in real time. The tool utilizes Scapy to scan the ARP table and checks for suspicious changes in IP-MAC mappings. It provides alerts via a user-friendly tkinter GUI and logs any detected spoofing attempts in a log file for further analysis.
Key Features:
Real-time ARP Spoofing Detection: Continuously scans the ARP table for IP-MAC mismatches, indicating possible spoofing attacks.
GUI for User Interaction: Start and stop monitoring through an intuitive tkinter-based GUI, with real-time alert notifications.
Logging: All potential ARP spoofing incidents are logged in arp_spoofing_log.txt with timestamps for analysis.
Threaded Monitoring: The ARP monitoring process runs on a separate thread to ensure the GUI remains responsive.
Added Files:
arp_spoofing_detection.py: Main script containing the tool's logic and GUI.
Added the README.md file with instructions.
This tool is ideal for detecting MITM attacks or ARP cache poisoning on local networks.
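The core check the PR describes — flagging an IP whose MAC mapping changes between ARP-table snapshots — can be sketched in a few lines of pure Python. The names below are illustrative, not the PR's actual code (which reads the live table via Scapy and alerts through tkinter):

```python
from datetime import datetime

def detect_spoofing(baseline, current):
    """Compare two IP -> MAC snapshots of the ARP table; a changed MAC
    for a known IP is a possible ARP cache poisoning event."""
    alerts = []
    for ip, mac in current.items():
        known = baseline.get(ip)
        if known is not None and known != mac:
            alerts.append((ip, known, mac))
    return alerts

def format_alert(ip, old_mac, new_mac, now=None):
    """Render one timestamped log line, similar to what the tool's
    arp_spoofing_log.txt would contain."""
    stamp = (now or datetime.now()).strftime("%Y-%m-%d %H:%M:%S")
    return f"[{stamp}] Possible ARP spoofing: {ip} changed {old_mac} -> {new_mac}"
```

Running `detect_spoofing` periodically on fresh snapshots (in a worker thread, as the PR does) keeps the GUI responsive while still catching IP-MAC mismatches.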
[x] I have described the aim of the project.
Name
Please mention your name.
Peroxide Paradox.
[x] I have provided my name.
GitHub ID
Please mention your GitHub ID.
github
[x] I have provided my GitHub ID.
Email ID
Please mention your email ID for further communication.
aromaticperoxide@gmail.com
[x] I have provided my email ID.
Identify Yourself
Mention in which program you are contributing (e.g., WoB, GSSOC, SSOC, SWOC).
GGSOC
[x] I have mentioned my participant role.
Closes
Enter the issue number that will be closed through this PR.
Closes: #644
[x] I have provided the issue number.
Describe the Add-ons or Changes You've Made
Give a clear description of what you have added or modified.
I have added an ARP Spoofing Detection Tool with the following features:
Detects ARP cache poisoning attempts.
Displays real-time alerts using a tkinter GUI.
Logs incidents in a file (arp_spoofing_log.txt) with timestamps.
Uses a separate thread for continuous monitoring to keep the GUI responsive.
[x] I have described my changes.
Type of Change
Select the type of change:
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Code style update (formatting, local variables)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
How Has This Been Tested?
Describe how your changes have been tested.
The tool has been tested by running it with administrative privileges to detect ARP spoofing attempts. The program runs smoothly, alerting the user via the GUI and logging details in the specified log file. It was tested on both Windows (with Npcap) and macOS (with root privileges).
[x] I have described my testing process.
Checklist
Please confirm the following:
[x] My code follows the guidelines of this project.
[x] I have performed a self-review of my own code.
[x] I have commented my code, particularly wherever it was hard to understand.
[x] I have made corresponding changes to the documentation.
[x] My changes generate no new warnings.
[x] I have added things that prove my fix is effective or that my feature works.
[x] Any dependent changes have been merged and published in downstream modules.
Hi @UTSAVS26 , @TheChaoticor , @shaansuraj
please review it
appreciate your time !
| gharchive/pull-request | 2024-10-17T06:13:11 | 2025-04-01T04:33:10.114431 | {
"authors": [
"PeroxideParadox"
],
"repo": "UTSAVS26/PyVerse",
"url": "https://github.com/UTSAVS26/PyVerse/pull/676",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2679088607 | Add JSON reader & extractor
Close #6. This proved more complicated than I first envisioned, because the example data I found happened to be structured as arrays within a dictionary. This solution can now tackle both types of data: single file with multiple documents, or a file per document.
Unfortunately, the pandas library needs to be a newer version than is available with Python 3.8 to flatten the JSON appropriately. Should we drop Python 3.8 support, seeing as it's beyond end of life now?
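For reference, recent pandas versions expose pd.json_normalize for this flattening; the two layouts described above can also be illustrated with a small dependency-free sketch (function names here are illustrative, not the package's actual API):

```python
def flatten(doc, parent="", sep="."):
    """Flatten nested dicts: {"meta": {"year": 2001}} -> {"meta.year": 2001}."""
    out = {}
    for key, value in doc.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            out.update(flatten(value, name, sep))
        else:
            out[name] = value
    return out

def extract_documents(data, record_key=None):
    """Return one flat dict per document, covering both layouts:
    a list of documents under one key, or a single document per file."""
    records = data[record_key] if record_key else [data]
    return [flatten(doc) for doc in records]
```

With a record_key this handles the arrays-within-a-dictionary case (multiple documents per file); without one it treats the whole file as a single document.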
Apologies for the late response, I hadn't realised this was ready for review.
Before I review this, can you add relevant documentation changes in /docs and README.md? I would not have approved this without a documentation update, and it makes the rest of the review easier.
@lukavdplas I included the json module in the mkdocs index, and updated the README to state that the package also defines a reader for JSON. I also just went ahead now and excluded the Python 3.8 test from the GitHub actions matrix, and edited the README to state that we don't support Python versions lower than 3.9.
| gharchive/pull-request | 2024-11-21T11:19:30 | 2025-04-01T04:33:10.119418 | {
"authors": [
"BeritJanssen",
"lukavdplas"
],
"repo": "UUDigitalHumanitieslab/ianalyzer-readers",
"url": "https://github.com/UUDigitalHumanitieslab/ianalyzer-readers/pull/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
965092619 | Report procedures cause errors and warnings in vsim batch mode
Expected Behaviour
I expect to be able to use report_alert_counters in vsim batch mode without errors and warnings.
Current Behaviour
Currently, each call of report_alert_counters causes 2 additional errors and 1 additional warning in the simulator's end-of-run summary (see Failure Logs). The same issue was observed for report_global_ctrl.
Context
We use UVVM Utility Library procedures in a framework launching vsim via the command line with -batch option with the aim of integrating this in a CI.
Failure Logs
A test bench ending the simulation with the following lines ...
report_alert_counters(FINAL);
finish;
... results in 2 reported errors and 1 warning caused by report_alert_counters:
# UVVM: ====================================================================================================================================================================
# UVVM: *** FINAL SUMMARY OF ALL ALERTS ***
# UVVM: ====================================================================================================================================================================
# UVVM: REGARDED EXPECTED IGNORED Comment?
# UVVM: NOTE : 0 0 0 ok
# UVVM: TB_NOTE : 0 0 0 ok
# UVVM: WARNING : 0 0 0 ok
# UVVM: TB_WARNING : 0 0 0 ok
# UVVM: MANUAL_CHECK : 0 0 0 ok
# UVVM: ERROR : 0 0 0 ok
# UVVM: TB_ERROR : 0 0 0 ok
# UVVM: FAILURE : 0 0 0 ok
# UVVM: TB_FAILURE : 0 0 0 ok
# UVVM: ====================================================================================================================================================================
# UVVM: >> Simulation SUCCESS: No mismatch between counted and expected serious alerts
# UVVM: ====================================================================================================================================================================
# UVVM:
# UVVM:
# End time: 13:18:00 on Aug 10,2021, Elapsed time: 0:00:02
# Errors: 2, Warnings: 1
Possible Cause
As this issue only occurs in vsim's batch mode (-batch), but neither in GUI nor command line (-c) mode, it seems to be related to the missing command line.
Possible Solution
Point out way how to properly use report procedures in vsim batch mode
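Until the root cause is known, a CI wrapper could at least flag affected batch runs by comparing UVVM's own summary against the simulator's end-of-run counts. A sketch of that check (the transcript patterns are taken from the log above; the helper name is hypothetical, not part of UVVM):

```python
import re

def batch_run_flagged(transcript):
    """Return True when UVVM's own summary reported success but the
    simulator's end-of-run line still counted errors or warnings."""
    uvvm_ok = "Simulation SUCCESS" in transcript
    m = re.search(r"Errors:\s*(\d+),\s*Warnings:\s*(\d+)", transcript)
    if m is None:
        return False
    errors, warnings = int(m.group(1)), int(m.group(2))
    return uvvm_ok and (errors > 0 or warnings > 0)
```

A CI job could fail (or at least warn) on flagged runs so the spurious vsim-batch errors are not silently mistaken for real test failures.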
Hi Nick,
Thank you for the report. I will have to test this and investigate the problem.
Br,
Marius
Hi @mariuselv,
is there any news regarding this issue?
Best regards,
Nick
Hi Nick,
I have started looking into it, but have not been able to determine what is causing the problem. The errors and warning you get from the simulator at the end of the transcript is somehow triggered by the report procedure. I will notify you when I have found a solution to the problem.
Br,
Marius
Hi, this problem no longer exists with any newer simulator, and we could not find the source of the problem in the previous version. Thus, as we suspect this is a simulator problem, we are closing this issue.
BR
UVVM Team
| gharchive/issue | 2021-08-10T15:16:58 | 2025-04-01T04:33:10.128894 | {
"authors": [
"UVVM",
"mariuselv",
"nfritzsc"
],
"repo": "UVVM/UVVM",
"url": "https://github.com/UVVM/UVVM/issues/160",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
114909157 | MUMUP-2123 part 1: Create basic frame page
Creates a basic <frame-page> directive that you can pass header information into. It utilizes <portlet-header> for the header and ng-transclude for the body. This should promote consistency across similar pages.
I rewrote the main.html page in frame as an example.
:+1:
:+1:
:+1:
| gharchive/pull-request | 2015-11-03T21:02:50 | 2025-04-01T04:33:10.131281 | {
"authors": [
"apetro",
"jhanstra",
"paulerickson",
"timlevett"
],
"repo": "UW-Madison-DoIT/uw-frame",
"url": "https://github.com/UW-Madison-DoIT/uw-frame/pull/59",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1754372231 | [Update] 7.2.0 Update credix ABI
Credix updated their main-net ABI, updating our IDL
Yoyo
woot?
| gharchive/pull-request | 2023-06-13T08:52:12 | 2025-04-01T04:33:10.134713 | {
"authors": [
"GregoryNEUT",
"uxd-vincent"
],
"repo": "UXDProtocol/uxd-client",
"url": "https://github.com/UXDProtocol/uxd-client/pull/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |