2025-04-01T06:39:27.900903
2015-01-18T16:03:14
54700717
{ "authors": [ "lukevenediger", "phillijw" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:7998", "repo": "lukevenediger/statsd.net", "url": "https://github.com/lukevenediger/statsd.net/issues/38" }
gharchive/issue
Installer fails

I'm having trouble installing this server on one of our Windows Server 2008 R2 (64-bit) servers. The server appears to only have .NET 4.0, which I'm assuming is the problem. Is there a way to get this app to work on .NET 4.0 instead of 4.5? Each install attempt logs the same failure:

  Running a transacted installation.
  Beginning the Install phase of the installation.
  See the contents of the log file for the E:\statsd.net-v<IP_ADDRESS>\statsdnet.exe assembly's progress. The file is located at E:\statsd.net-v<IP_ADDRESS>\statsdnet.InstallLog.
  An exception occurred during the Install phase.
  System.InvalidOperationException: Unable to get installer types in the E:\statsd.net-v<IP_ADDRESS>\statsdnet.exe assembly.
  The inner exception System.Reflection.ReflectionTypeLoadException was thrown with the following error message: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
  The Rollback phase of the installation is beginning.
  An exception occurred during the Rollback phase of the System.Configuration.Install.AssemblyInstaller installer.
  System.InvalidOperationException: Unable to get installer types in the E:\statsd.net-v<IP_ADDRESS>\statsdnet.exe assembly.
  The inner exception System.Reflection.ReflectionTypeLoadException was thrown with the following error message: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
  An exception occurred during the Rollback phase of the installation. This exception will be ignored and the rollback will continue. However, the machine might not fully revert to its initial state after the rollback is complete.
  The Rollback phase completed successfully.
  The transacted install has completed.

Hi Joe, sorry, but the service needs .NET 4.5 and upwards to run. Thanks, Luke. Sent from my iPhone.
2025-04-01T06:39:27.912864
2020-12-19T15:57:19
771405461
{ "authors": [ "dicksonkimeu", "igorocampos", "lukevp" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:7999", "repo": "lukevp/ESC-POS-.NET", "url": "https://github.com/lukevp/ESC-POS-.NET/issues/95" }
gharchive/issue
Support for Fiscal Printer

Does this support fiscal printers?

I'm not sure what a fiscal printer is, could you provide a reference to this please?

https://docs.microsoft.com/en-us/previous-versions/windows/embedded/ms884287(v=winembedded.4)

Hi @lukevp, for a thermal receipt you pass free text to the SDK. For a thermal fiscal receipt there are certain commands that should be passed to the SDK; they are still ESC/POS commands. How can I pass ESC/POS commands using the SDK?

LIST OF FISCAL COMMANDS - IN ASCENDING ORDER

  HEX  (DEC)  Function
  21h  (33)   Clear the display
  23h  (35)   Show text on lower line of display
  26h  (38)   Open non-fiscal receipt
  27h  (39)   Close non-fiscal receipt
  29h  (41)   Setting the memory switches
  2Ah  (42)   Printing non-fiscal free text
  2Bh  (43)   Set FOOTER and printing options
  2Ch  (44)   Advance paper
  2Dh  (45)   Paper cut
  2Eh  (46)   Set HEADER (Name and address)
  2Fh  (47)   Showing text on upper line of display
  30h  (48)   Open fiscal receipt
  31h  (49)   Register sale
  32h  (50)   Tax rates set during selected period
  33h  (51)   Subtotal
  34h  (52)   Register sale and show on display
  35h  (53)   Calculate TOTAL
  36h  (54)   Print free fiscal text
  38h  (56)   Close fiscal receipt
  3Ch  (60)   Cancel fiscal receipt
  3Dh  (61)   Set date and hour
  3Eh  (62)   Get current date and hour
  3Fh  (63)   Show date and hour on display
  40h  (64)   Info on last fiscal entry
  41h  (65)   Info on daily totals
  43h  (67)   Info on daily paid sums
  44h  (68)   Number of free fields in fiscal memory
  45h  (69)   Daily financial report with/without closure
  46h  (70)   Internal debiting/crediting
  47h  (71)   Print diagnostic info
  48h  (72)   Fiscalization
  49h  (73)   Detailed report of the fiscal memory selected by number of entry
  4Ah  (74)   Read statuses
  4Ch  (76)   Status of the fiscal transaction
  4Fh  (79)   Short report of the fiscal memory selected by date of entry
  50h  (80)   Sound signal
  53h  (83)   Set multiplier, decimals, currency name and disabled taxes
  54h  (84)   Print a bar code
  55h  (85)   Set additional payment names
  56h  (86)   Get last fiscal memory date
  59h  (89)   Program production test area
  5Ah  (90)   Return diagnostic info
  5Bh  (91)   Program serial number, country number and Fiscal memory number
  5Eh  (94)   Detailed report of fiscal memory (selected by date of entry)
  5Fh  (95)   Short report of fiscal memory (selected by entry number)
  60h  (96)   Set tax office text
  61h  (97)   Return tax rates
  62h  (98)   Set tax registration number
  63h  (99)   Return set tax registration number
  64h  (100)  Show free text on display
  65h  (101)  Set operator's password
  66h  (102)  Enter operator's name
  67h  (103)  Info on current receipt
  69h  (105)  Operator report
  6Ah  (106)  Drawer kick-out
  6Bh  (107)  Define items and items info
  6Ch  (108)  Detailed daily report
  6Dh  (109)  Print duplicate receipt
  6Eh  (110)  Additional daily info
  6Fh  (111)  Report on groups of items
  70h  (112)  Reading info on operator
  71h  (113)  Read the number of the last fiscal entry or period
  72h  (114)  Read info on fiscal entry or period
  73h  (115)  Program graphic logo
  74h  (116)  Read fiscal memory block
  76h  (118)  Register technical intervention
  78h  (120)  Electronic journal support
  79h  (121)  Read code memory (firmware)
  7Eh  (126)  Erase electronic journal
  7Fh  (127)  RAM reset

Hey @dicksonkimeu, this is not currently supported, but it would be implemented as a FiscalEmitter. We currently only have the Epson emitter. You would not need to use that Fiscal SDK, as that's a different interface (UPOS) versus directly interacting with the printer like this library does. Are you open to implementing the Fiscal support? I will assist you as I can, but I do not have any hardware that works with this, so we would have to collaborate on a PR.

I worked with fiscal printers here in Brazil 10 years ago (or so); however, I did not use direct command bytes, but rather each manufacturer's DLL. You will need the whole command set to implement it. @dicksonkimeu, the list that you provided is only a summary. You will probably need the parameter info as well; if you provide a handbook I can also assist in implementing the commands in a new emitter.

Send me a test email to <EMAIL_ADDRESS> and I will forward you the documentation for the Datecs FP300.
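For what it's worth, "passing raw commands" boils down to framing one of the command bytes above and writing the frame to the printer port. Below is a minimal Rust sketch, independent of the ESC-POS-.NET SDK; the preamble/length/checksum layout is purely hypothetical and would have to be replaced with the real Datecs framing from the handbook.

  use std::io::{self, Write};

  /// Wrap a fiscal command byte and its ASCII parameters in a frame and send it.
  /// NOTE: the 0x02 preamble, length field, and additive checksum here are
  /// assumptions for illustration; the real Datecs framing comes from the handbook.
  fn send_fiscal_command<W: Write>(port: &mut W, cmd: u8, params: &[u8]) -> io::Result<()> {
      let mut frame = Vec::with_capacity(params.len() + 4);
      frame.push(0x02);                     // assumed preamble (STX)
      frame.push((params.len() + 1) as u8); // assumed length field
      frame.push(cmd);                      // command byte from the table, e.g. 0x30 = open fiscal receipt
      frame.extend_from_slice(params);
      let checksum = frame.iter().fold(0u8, |acc, b| acc.wrapping_add(*b));
      frame.push(checksum);                 // assumed additive checksum
      port.write_all(&frame)
  }

  fn main() -> io::Result<()> {
      // Any io::Write works; a real integration would open the serial port instead.
      let mut port: Vec<u8> = Vec::new();
      send_fiscal_command(&mut port, 0x30, b"1,0000,1")?; // 30h: open fiscal receipt
      println!("{:02x?}", port);
      Ok(())
  }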
2025-04-01T06:39:27.924246
2022-11-23T13:16:26
1461747381
{ "authors": [ "lukka", "rlalik" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8000", "repo": "lukka/get-cmake", "url": "https://github.com/lukka/get-cmake/issues/60" }
gharchive/issue
Add 'latest' option for version

Hi, you provide two ways of setting up the version:

  # Using the 'latest' branch, the most recent CMake and Ninja are installed.
  uses: lukka/get-cmake@latest  # <--= THIS IS THE ONE LINER YOU NEED

and

  - name: Get specific version CMake, v3.24.3, and Ninja v1.11.1
    uses: lukka/get-cmake@latest
    with:
      cmakeVersion: 3.24.3  # <--= optional, overrides the _latest_ version of CMake
      ninjaVersion: 1.11.1  # <--= optional, overrides the _latest_ version of Ninja

However, this is a problem if I would like to use a matrix build for different CMake versions, especially if one of them is latest. I cannot make:

  jobs:
    strategy:
      matrix:
        cmake: [ 3.9.2, latest ]
    steps:
      - name: Get specific version CMake version 1
        uses: lukka/get-cmake@${{ matrix.cmake }}  # cannot use a matrix component in the action name
      - name: Get specific version CMake version 2
        uses: lukka/get-cmake@latest
        with:
          cmakeVersion: ${{ matrix.cmake }}  # 'latest' will not work here

Could you maybe add accepting latest as a valid version, falling back to the default?

@rlalik totally agreed, usage like you described should be supported.

@rlalik you may give a try to the version on PR: get-cmake@dev/any-version

Solved in https://github.com/lukka/get-cmake/releases/tag/v3.25.1
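The requested behaviour amounts to a fallback: treat latest as "resolve to the newest known release" rather than as a literal version string. A minimal sketch of that resolution logic, assuming a hypothetical sorted release catalog (the action itself is TypeScript; Rust is used here only for illustration):

  /// Resolve a requested version, treating "latest" as the newest known release.
  /// The release list is a stand-in for whatever catalog the action queries.
  fn resolve_version<'a>(requested: &'a str, releases: &'a [&'a str]) -> Option<&'a str> {
      if requested == "latest" {
          releases.last().copied() // assumes the catalog is sorted oldest -> newest
      } else {
          releases.iter().copied().find(|v| *v == requested)
      }
  }

  fn main() {
      let releases = ["3.9.2", "3.24.3", "3.25.1"];
      assert_eq!(resolve_version("latest", &releases), Some("3.25.1"));
      assert_eq!(resolve_version("3.9.2", &releases), Some("3.9.2"));
      assert_eq!(resolve_version("9.9.9", &releases), None);
  }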
2025-04-01T06:39:27.928180
2023-07-07T16:42:09
1793842591
{ "authors": [ "Hugoo", "fhildeb" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8001", "repo": "lukso-network/docs", "url": "https://github.com/lukso-network/docs/pull/574" }
gharchive/pull-request
feat: FAQ - Add Network and Nodes Chapter

Removes the old Network and Validator pages and sets up the new Network and Nodes folder, including the following pages: Blockchain Architecture, Network Configuration, Peer Connections, Node Setup, Validators, Security, Staking.

Thanks, Johann, for taking the time 🙏🏻😙 I applied your suggestions and clarified the answers. Also added redirects and modified the headings as in the other sections.

Sadly we now have a few conflicts because we merged the other PRs :/ Should be easy to fix anyway.
2025-04-01T06:39:27.933800
2019-07-16T11:50:52
468609478
{ "authors": [ "sylwiabr", "wdanilo" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8002", "repo": "luna/enso", "url": "https://github.com/luna/enso/issues/37" }
gharchive/issue
Parser special constructs implementation

Summary: Based on the parser framework, it's important to implement special constructs, like type definitions. These constructs could in the future be implemented using the meta-enso layer, so here we need to use the same layer, but internally.

Value: Correct parsing of all Enso constructs.

Acceptance Criteria & Test Cases: All language constructs should be parsed correctly.

Is there a PR, @wdanilo? If so, please connect it to the issue.

@wdanilo how can I test it?
2025-04-01T06:39:27.944469
2016-03-24T06:43:27
143167751
{ "authors": [ "lunixbochs" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8003", "repo": "lunixbochs/usercorn", "url": "https://github.com/lunixbochs/usercorn/issues/174" }
gharchive/issue
trace viewer

Almost certainly depends on #173.

- timeline view like the Sublime Text minimap, with memory/register activity highlighted in color
- colorized memory blocks based on the basic block that "owns" them, or which blocks touched them
- navigate memory based on which basic blocks touched it, kinda like xrefs in IDA but for memory access
- also select memory/syscalls and visually see the taint flow and highlight the blocks in a graph view
- time-decayed memory
- memory diffing
- text tracing should be a driver that parses the binary trace
- run the actual emulator from state - if syscalls are reduced to operational transforms, we can forward/rewind (see the sketch below)
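A toy sketch of the "syscalls as operational transforms" idea from the last item: if each recorded side effect carries both old and new bytes, the viewer can seek forward and backward through the trace. The MemWrite shape is invented for illustration; usercorn's actual trace format will differ.

  /// One recorded side effect: a memory write carrying both old and new bytes,
  /// so it can be applied (seek forward) or reverted (rewind).
  struct MemWrite {
      addr: usize,
      old: Vec<u8>,
      new: Vec<u8>,
  }

  impl MemWrite {
      fn apply(&self, mem: &mut [u8]) {
          mem[self.addr..self.addr + self.new.len()].copy_from_slice(&self.new);
      }
      fn revert(&self, mem: &mut [u8]) {
          mem[self.addr..self.addr + self.old.len()].copy_from_slice(&self.old);
      }
  }

  fn main() {
      let mut mem = vec![0u8; 8];
      let log = vec![
          MemWrite { addr: 0, old: vec![0, 0], new: vec![0xde, 0xad] },
          MemWrite { addr: 2, old: vec![0, 0], new: vec![0xbe, 0xef] },
      ];
      for w in &log { w.apply(&mut mem); }               // forward
      for w in log.iter().rev() { w.revert(&mut mem); }  // rewind
      assert_eq!(mem, vec![0u8; 8]);                     // back to the initial state
  }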
2025-04-01T06:39:27.966867
2023-08-29T18:33:40
1872193356
{ "authors": [ "huitseeker" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8004", "repo": "lurk-lab/lurk-rs", "url": "https://github.com/lurk-lab/lurk-rs/issues/639" }
gharchive/issue
On duplication in LEM integration

I think there are two factors of duplication in #629:

Misuse of genericity

This is essentially explained in a PR comment. That is, the types that depend on a generic C: Coprocessor<F> are redefined in #629 because the LEM logic cannot, for now, accommodate any coprocessor other than DummyCoproc<F>. However, for any GenericType<Foo, Bar>, there is nothing in Rust that prevents you from defining methods for the type GenericType<Foo, usize> (see the sketch after this thread). I have a branch that starts working out the deduplication in https://github.com/huitseeker/lurk-rs/tree/lem-integration-experiment

The reason I think it's important to resolve this is that the methods we'd define today on Foo<F, DummyCoproc<F>> will eventually be able to be defined on Foo<F, C: Coproc<F>>, which is going to be much easier to adapt to if they're not distinct types.

Not abstracting the MultiFrame, and failing to define provers in terms of that abstraction

The MultiFrame is the fundamental type that's defined starkly differently in LEM and in Lurk. This struct appears in the APIs of the prover:
https://github.com/lurk-lab/lurk-rs/blob/9aa8b75247066b70972f2806a1537c359f47c2c6/src/proof/mod.rs#L20
https://github.com/lurk-lab/lurk-rs/blob/9aa8b75247066b70972f2806a1537c359f47c2c6/src/proof/mod.rs#L33-L34
https://github.com/lurk-lab/lurk-rs/blob/9aa8b75247066b70972f2806a1537c359f47c2c6/src/proof/mod.rs#L107-L110
https://github.com/lurk-lab/lurk-rs/blob/9aa8b75247066b70972f2806a1537c359f47c2c6/src/proof/nova.rs#L121
https://github.com/lurk-lab/lurk-rs/blob/9aa8b75247066b70972f2806a1537c359f47c2c6/src/proof/nova.rs#L131-L141
https://github.com/lurk-lab/lurk-rs/blob/9aa8b75247066b70972f2806a1537c359f47c2c6/src/proof/nova.rs#L172-L181
https://github.com/lurk-lab/lurk-rs/blob/9aa8b75247066b70972f2806a1537c359f47c2c6/src/proof/nova.rs#L421-L428

Morally speaking, the only thing we need (that is, the only thing Nova requires) in the position of this MultiFrame in the prover is some instance of StepCircuit. I suspect we could implement:

- a trait MultiFrame that abstracts over both types of MultiFrame (LEM and Lurk). The main APIs to provide there are associated types fixing the local notions of Store and Ptr, plus blank, from_frames, and synthesize_frames (the latter being expressed in terms of those associated types):
https://github.com/lurk-lab/lurk-rs/blob/9aa8b75247066b70972f2806a1537c359f47c2c6/src/proof/nova.rs#L421-L428
https://github.com/lurk-lab/lurk-rs/blob/9aa8b75247066b70972f2806a1537c359f47c2c6/src/circuit/circuit_frame.rs#L183-L192
Note the apparent differences between the instances of this backend function are not key, since their only usage is in impl nova::StepCircuit for MultiFrame and impl bellpepper_core::Circuit for MultiFrame. Whatever abstraction allows implementing those two call sites on top of what the MultiFrame trait offers should be enough.
- an implementation of the Nova prover that works on top of the MultiFrame trait, rather than introspecting into the details of the two instances of their implementations above.

Summarizing the story:
- #629 was a meaty PR; we worked out how to prove through an interface in #642 and #633
- #663 committed isolated changes from #629
- the blocker was then a pattern of genericity + mutability of the store which could make interleaving evaluation and proving really hard
- we moved to an interior-mutability store in #680, which allowed for the sought-after genericity in #709
- this paved the way for #717 and #718, which offer apples-to-apples guarantees of feature parity for evaluation and proving

The present issue should close as soon as #717 and #718 are merged, because while we are indeed taking on some duplication with those PRs, this is better than the alternative (worked out in #729) due to its lower complexity. The missing pieces are NIVC support from #677 and #725, which should also resolve soon.

Closed with the merge of #717, #718.
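Both points above map onto standard Rust patterns: methods can be defined on a partially concrete instantiation of a generic type, and a trait with associated types lets the prover drive either MultiFrame without knowing which one it has. A schematic sketch follows; names and signatures are compressed stand-ins, not the real lurk-rs items.

  // Point 1: methods can be defined for a partially concrete generic type,
  // so Foo<F, DummyCoproc<F>> does not need to be a separate type from Foo<F, C>.
  struct Foo<F, C> { field: F, coproc: C }
  struct DummyCoproc<F>(std::marker::PhantomData<F>);

  impl<F: Clone> Foo<F, DummyCoproc<F>> {
      // Today's DummyCoproc-only logic; later generalized to `impl<F, C: Coprocessor<F>>`.
      fn step(&self) -> F { self.field.clone() }
  }

  // Point 2: a MultiFrame trait abstracting both variants, with associated types
  // fixing the local notions of Store and Ptr (schematic skeleton, not runnable logic).
  trait MultiFrame {
      type Store;
      type Ptr;
      fn blank() -> Self;
      fn from_frames(store: &Self::Store, frames: &[Self::Ptr]) -> Vec<Self> where Self: Sized;
      fn synthesize_frames(&self, store: &Self::Store); // stand-in for the bellpepper synthesis
  }

  // A prover written only against the trait, as suggested above.
  fn prove<M: MultiFrame>(store: &M::Store, frames: &[M::Ptr]) {
      for mf in M::from_frames(store, frames) {
          mf.synthesize_frames(store);
      }
  }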
2025-04-01T06:39:27.983009
2020-06-08T07:00:37
634260321
{ "authors": [ "handw-github", "lutzroeder", "oxygen-dioxide" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8005", "repo": "lutzroeder/netron", "url": "https://github.com/lutzroeder/netron/issues/513" }
gharchive/issue
auto update

How to disable automatic updates?

This is not supported.

You can install Netron with pip, which won't update automatically:

  pip install netron
2025-04-01T06:39:28.021978
2024-09-10T00:13:41
2515145204
{ "authors": [ "kozlov721", "sokovninn" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8006", "repo": "luxonis/luxonis-train", "url": "https://github.com/luxonis/luxonis-train/pull/70" }
gharchive/pull-request
Add DDRNet model for semantic segmentation

This PR integrates the DDRNet model to enhance semantic segmentation performance.

Comparison with Current Model (MicroNet + Segmentation Head)
- Parameters: DDRNet has 5.7M, compared to 2.1M for MicroNet.
- Training Time: Both models have similar single-epoch training times.

Metrics After 1 Epoch
- MicroNet + Segmentation Head: Jaccard Index 0.00873, F1 Score 0.69811
- DDRNet-23-slim: Jaccard Index 0.00952, F1 Score 0.70069

Possible improvements
- Compute the Jaccard Index every n epochs to increase training speed.
- Improve handling of auxiliary heads for export.
- Use pretrained weights.
- Add Online Hard Example Mining (OHEM) with Cross-Entropy Loss.
- Add EMA (Exponential Moving Average).

Would it make sense to have the auxiliary segmentation head be part of the backbone (if use_aux_head=True) so we could then easily remove it from the graph if needed (e.g. during export)? Because right now the user has to change use_aux_head=False in the config if they want to export optimally. CC: @kozlov721 for thoughts.

Yeah, I think this would be a better solution. We could later add some more general support for train-only heads, but I think there aren't going to be many use cases, and they will mostly be solvable by making the head part of the backbone anyway. Also, let's first merge #69, then sync this one and add a test case for the new predefined model to tests.integration.test_simple.test_predefined_models
2025-04-01T06:39:28.040366
2022-09-15T19:19:36
1374976741
{ "authors": [ "C47D", "MrMarteng", "Olfox59", "ammaree", "kisvegabor" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8007", "repo": "lvgl/lv_port_esp32", "url": "https://github.com/lvgl/lv_port_esp32/issues/320" }
gharchive/issue
LVGL v8 status? Animation with SquareLine Studio

We use GitHub issues for development related discussions. Please use the forum to ask questions.

Describe the issue
Hi all, I'm starting a new project based on the LilyGO T-Display ESP32-S3 and LVGL. I'm able to create a screen with the nice and useful tool SquareLine Studio, and I can display it on the devkit. I tried to add an animation to my image. I exported the UI from SquareLine Studio and tried to build it in VS Code, but a call to the function "" refers to an undefined function. After comparing v8 to v7, I saw that this function was not present in v7, so I need to go to v8 to benefit from it. I tried to create a blank project and add LVGL as a submodule to have v8 ready to use, but it seems that the Kconfig of lv_port_esp32 and the latest v8 LVGL are different; I don't find all the same options in menuconfig. So should I only modify the Kconfig and add the missing options present in the lv_port_esp32 version, maybe? Or are there other changes to make as well, and is it more complex than that? I saw the first ticket on this topic, which has existed for a year now but is still open. Thank you!

Code to reproduce the issue
Expected Results
Actual Results

ESP32 Chip version: ESP32-S3
ESP-IDF version: v4.4.2
Development kit used: LilyGO T-Display ESP32-S3
Development machine OS: Visual Studio Code

Compilation warnings/errors (if available):
  implicit declaration of function 'lv_obj_get_x_aligned'; did you mean 'lv_obj_set_align'? [-Werror=implicit-function-declaration]
  implicit declaration of function 'lv_obj_get_y_aligned'; did you mean 'lv_obj_set_align'? [-Werror=implicit-function-declaration]
  implicit declaration of function 'lv_anim_set_user_data'; did you mean 'lv_obj_set_user_data'? [-Werror=implicit-function-declaration]
If possible, copy the compilation log into a file and attach it here.

+1 for official LVGL v8.x support

+1 from me as well. Still trying to get LVGL running with lvgl and the regular drivers library.

I also got some customer requests for ESP32 + LVGL v8 support, and I would really like to have this and the lvgl_esp32_drivers repos updated. As v7 and v8 differ only in some minor API changes in the drivers, I think it's not that difficult to update them. Unfortunately, I don't have enough hardware for deep enough testing; however, you might have already seen our sponsorship program. From our donations I'd be happy to give 300 USD for updating these repos to v8. Does that sound like a fair offer? Would you be interested in it? cc @C47D

With some help from @sukesh-ak I have changed to use LGFX master with LVGL v8. Have a look at https://github.com/sukesh-ak/ESP32-TUX. So far that has worked very well with ESP-IDF v5.x, but I have only tried it with the ESP-WROVER-KIT v4.1 and the Makerfabs 16-bit parallel + touch devkits. The only changes I have had to make to get a clean compile are:
- lv_demo_stress.c: line 77, change the first %d to %lu
- lv_example_table_2.c: line 95, change the first %"LV_PRIu32" to %d
So the burning need from our side is gone and the $300 can be saved by using LGFX.

Hi @kisvegabor, thanks for CCing me, but I'm pretty busy with my day job these days. Happy to see y'all got it working!
2025-04-01T06:39:28.129970
2022-03-28T13:27:44
1183443629
{ "authors": [ "hadmut", "stgraber" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8008", "repo": "lxc/lxd", "url": "https://github.com/lxc/lxd/issues/10137" }
gharchive/issue
lxc remote add cannot take certificate fingerprint as command line argument

Security problem / feature request

Required information
Distribution: Ubuntu
Distribution version: 20.04
The output of "lxc info" or if that fails:
Kernel version:
LXC version:
LXD version: 4.24-c92c0b2
Storage backend in use: zfs

Issue description
It is not possible (or not documented how) to add a remote source in an automated and secure way. When doing

  lxc remote add someserver somelocation --public

lxc displays the certificate fingerprint and asks for confirmation. This is secure, but requires interaction. From within a script or automated installation system this is not possible. There you could do

  lxc remote add someserver somelocation --public --accept-certificate

which works unattended, but is insecure, since it accepts any certificate. There should be a command line argument for the fingerprint, to add a remote server in an automated, but still secure, way.

Steps to reproduce
1. Invite some evil hacker in your network or the open internet to fake the image server and pollute it with dirty images.
2. Run lxc remote add someserver somelocation --public --accept-certificate from within a script, e.g. an automated installation with cloud-init or similar.
3. Be doomed.

  stgraber@dakara:~$ lxc config trust add --name blah
  Client blah certificate add token:
  eyJjbGllbnRfbmFtZSI6ImJsYWgiLCJmaW5nZXJwcmludCI6IjQwMDI1MTc4N2Q2NzA0ZmY4OTdkYmZkOGQ0Mzg3OTcwYTJkOTVkOWRjOTA1MzAzYTI4OTM3MzE0YWE0YjhhODEiLCJhZGRyZXNzZXMiOlsiMTcyLjE3LjAuMjMyOjg0NDMiLCJbMjYwMjpmYzYyOmI6MTAwMDo1NDM2OjViMjU6NjRlNDpkODFhXTo4NDQzIiwiMTcyLjE3LjI1MC4xOjg0NDMiLCJbMjYwMjpmYzYyOmI6MjUwOjoxXTo4NDQzIiwiWzIwMDE6NDcwOmIyYjU6MTAzNzo6MTAwMF06ODQ0MyJdLCJzZWNyZXQiOiIzOTllY2M1MWMzYjdjZTc5Yjg1MTFmNGZiYzAxZTJjMjJjMGQyNzlkODA2NzZjYjUzMDM5M2JjNDMxNWE2MzFlIn0=
  stgraber@dakara:~$ lxc remote add foo eyJjbGllbnRfbmFtZSI6ImJsYWgiLCJmaW5nZXJwcmludCI6IjQwMDI1MTc4N2Q2NzA0ZmY4OTdkYmZkOGQ0Mzg3OTcwYTJkOTVkOWRjOTA1MzAzYTI4OTM3MzE0YWE0YjhhODEiLCJhZGRyZXNzZXMiOlsiMTcyLjE3LjAuMjMyOjg0NDMiLCJbMjYwMjpmYzYyOmI6MTAwMDo1NDM2OjViMjU6NjRlNDpkODFhXTo4NDQzIiwiMTcyLjE3LjI1MC4xOjg0NDMiLCJbMjYwMjpmYzYyOmI6MjUwOjoxXTo4NDQzIiwiWzIwMDE6NDcwOmIyYjU6MTAzNzo6MTAwMF06ODQ0MyJdLCJzZWNyZXQiOiIzOTllY2M1MWMzYjdjZTc5Yjg1MTFmNGZiYzAxZTJjMjJjMGQyNzlkODA2NzZjYjUzMDM5M2JjNDMxNWE2MzFlIn0=
2025-04-01T06:39:28.134380
2016-08-18T03:55:41
171812798
{ "authors": [ "stgraber" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8009", "repo": "lxc/lxd", "url": "https://github.com/lxc/lxd/issues/2294" }
gharchive/issue
Support operation cancellation in the client (and make more operations support it)

As evidenced by #2293, users rightly expect ctrl-c to cancel things. This is not the case with LXD, with most operations not being cancelable over the API anyway. We should investigate making the following cancelable, and having them canceled on user interrupt:
- Image transfer
- Container transfer
- Container publication

The rest of the time we probably should catch ctrl-c, show a warning message that this will NOT interrupt what's going on in the background, and after repeated ctrl-c (let's say 3 times), actually exit.

Per #3059, when fixing the daemon side of this issue we should add a hook that will automatically cancel all operations during daemon shutdown, ensuring a cleaner daemon shutdown and returning a clear message to clients about the daemon going down.

So I think it's time to look into this one again. I expect this is going to be a bit of ongoing work until we get all the existing operations to support cancellation. Looking at the current code in operations.go, it looks like all the bits are in place to support cancellation for all operation types. We "just" need to have them define an onCancel function. My guess is that the onCancel function will typically share a channel with the onRun function and will be limited to writing a value to the channel and then blocking on it. The onRun function can then select on that channel, and if it notices the cancellation, it can do whatever's needed to cancel and then close the channel to have LXD consider the operation cancelled (see the sketch below).

Anyway, I'd recommend you pick something easy to cancel and just try to get onCancel to behave, then test this manually through the API. No need to do client-side plumbing for this right now. The new client library has proper support for cancellation, so we should only do the client side of this once the port to the new library is complete (I'm really, really close now :)).

We've got the initial pass of the server side of this implemented. I'll take the issue over to get the initial pass of the client integration done. We should be able to get that done with a helper in lxc/utils.go that wraps around a lxd.Operation struct and effectively calls Wait() + supports cancellation through SIGHUP/SIGKILL.
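LXD itself is written in Go, where the shared-channel pattern described above is idiomatic; the small std-Rust sketch below is only meant to make the onRun/onCancel control flow concrete, with a dummy loop standing in for the real operation body.

  use std::sync::mpsc;
  use std::thread;
  use std::time::Duration;

  fn main() {
      // onCancel writes to this channel; onRun polls it between units of work.
      let (cancel_tx, cancel_rx) = mpsc::channel::<()>();

      let run = thread::spawn(move || {
          for step in 0..100 {
              if cancel_rx.try_recv().is_ok() {
                  println!("operation cancelled at step {step}, cleaning up");
                  return; // winding down here is what marks the operation cancelled
              }
              thread::sleep(Duration::from_millis(10)); // stand-in for real work
          }
          println!("operation completed");
      });

      thread::sleep(Duration::from_millis(50));
      cancel_tx.send(()).ok(); // the onCancel hook firing
      run.join().unwrap();
  }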
2025-04-01T06:39:28.142648
2018-12-11T21:01:35
389951946
{ "authors": [ "smibarber", "stgraber" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8010", "repo": "lxc/lxd", "url": "https://github.com/lxc/lxd/issues/5349" }
gharchive/issue
Config for avoiding rootfs shifts/mitigating failures with snapshots

I'd like to add a flag or config to avoid performing operations that would cause an id shift on the container rootfs. In other words, this would allow something like the following flow:

  lxc config set mycontainer raw.idmap "both 1000 1000"
  lxc start --disallow-id-shift mycontainer   # fails
  lxc snapshot mycontainer pre-shift-snapshot
  lxc start mycontainer                       # succeeds after remapping
  # some time later
  lxc delete mycontainer/pre-shift-snapshot
  # or, if the remapping was interrupted
  lxc restore mycontainer pre-shift-snapshot

AFAICS the same would be needed for lxc publish, since that may also perform shifts under the hood. I'd love to hear any thoughts on whether this seems sane, or if there's a better way to go about this before an in-kernel remapping solution is ready :) I'm happy to post PRs, of course.

Alternatives
One other idea I had was a config to have LXD internally perform a snapshot before shifting ids, and then remove the snapshot afterward. That felt a bit too much like it's moving policy into LXD, though, and it seems weird that a start or publish operation would also create a snapshot... On the other hand, I think this would be easier for other LXD users to utilize. The other idea was for Chrome OS's LXD control daemon tremplin to check the id maps itself via the volatile.* configs to know when to perform a snapshot, but that seems very brittle. IMO the concrete id map at runtime, aside from any manually mapped ids like "both 1000 1000", shouldn't be the business of anything except LXD.

Background
On Chrome OS it's not improbable that our VM will be inadvertently shut down while shifting the container's rootfs ids. This could be due to a power failure, a kernel panic in the VM guest or host, or a Chrome crash (yes, Chrome crashing will cause our VM/LXD instances to require shutdown :). We saw a few users hitting this in crbug.com/894299. Until there's a solution that leaves these ids unshifted on disk, we would like to ensure that we take a snapshot before performing id shifts. Our downstream tracking bug for this is crbug.com/912360.

I think it should actually be pretty simple for tremplin to determine whether a shift is going to happen by comparing volatile.last_state.idmap to volatile.idmap.next; if they differ, a shift will occur on startup (see the sketch below). In theory we could introduce a security.protection.shift config key which, if set to true, would prevent any shifting operation, requiring the user to set it to false temporarily as needed, though the check above may be enough until we get to shiftfs. @sforshee is currently working on porting the shiftfs patches to 4.19, and I'm expecting to have the initial LXD support for it done in January, with it shipping to our users as part of Ubuntu 19.04 in April. That will unfortunately not be mainline at that point, and it's still not completely clear what the mainline solution will end up being, whether that's just merging shiftfs or some VFS rework to make it possible to do this without being a filesystem; but shiftfs will be a clean standalone virtual filesystem, so backporting it and maintaining a kernel with it shouldn't need much effort.

If you're comfortable with programs other than LXD examining volatile.last_state.idmap and volatile.idmap.next, then I think that addresses my concerns with letting tremplin deal with that state. I could see the security.protection.shift config key being useful just to protect users who want to run lxc publish. But if this doesn't seem like a super useful addition for LXD, I can just update the reddit crostini wiki to encourage snapshotting and we can close this :)

security.protection.shift should be easy enough to implement that if you think it's worth it to prevent potential issues with lxc publish, it's probably fine to do so. Snapshots, when published, at least on btrfs and zfs, get copied read-write, then shifted, exported and discarded, so even if something bad happens mid-publish, the snapshot itself should be fine, and so the protection doesn't need to extend to them. It would effectively only control startup-time shifting of the container itself, and only be relevant on systems where shifting is used (so it will be ignored when shiftfs is available).
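As a sketch of the idmap comparison an external supervisor could perform before starting the container, here is an illustrative Rust version that shells out to the lxc CLI (tremplin itself is Go and talks to the LXD API directly; the Command-based approach is only for illustration):

  use std::process::Command;

  /// Returns the value of a container config key via the lxc CLI.
  fn lxc_config_get(container: &str, key: &str) -> std::io::Result<String> {
      let out = Command::new("lxc")
          .args(["config", "get", container, key])
          .output()?;
      Ok(String::from_utf8_lossy(&out.stdout).trim().to_string())
  }

  fn main() -> std::io::Result<()> {
      let last = lxc_config_get("mycontainer", "volatile.last_state.idmap")?;
      let next = lxc_config_get("mycontainer", "volatile.idmap.next")?;
      if last != next {
          // The maps differ, so the next start will shift the rootfs:
          // take a snapshot first, as discussed above.
          println!("idmap shift pending; snapshot before starting");
      }
      Ok(())
  }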
2025-04-01T06:39:28.145234
2023-02-03T08:40:06
1569441257
{ "authors": [ "gabrielmougard", "lxc-jenkins", "monstermunchkin" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8011", "repo": "lxc/lxd", "url": "https://github.com/lxc/lxd/pull/11328" }
gharchive/pull-request
lxd/backup: update the error msg when "backup/index.yaml" can't be found

Signed-off-by: Gabriel Mougard <EMAIL_ADDRESS>

This pull request didn't trigger Jenkins as its author isn't in the allow list. An organization member must perform one of the following:
- To have this branch tested by Jenkins, use the "ok to test" command.
- To have a one-time test done, use the "test this please" command.
Those commands are simple GitHub comments of the format: "jenkins: COMMAND"

jenkins: test this please
2025-04-01T06:39:28.146727
2018-05-03T08:40:01
319831969
{ "authors": [ "morphis", "stgraber" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8012", "repo": "lxc/lxd", "url": "https://github.com/lxc/lxd/pull/4529" }
gharchive/pull-request
xattr: Support empty values

Signed-off-by: Stéphane Graber <EMAIL_ADDRESS>

LGTM! Thanks for fixing it that quickly :-)

@brauner jenkins looks pretty happy with this
2025-04-01T06:39:28.148533
2021-01-31T16:51:11
797755890
{ "authors": [ "mazerty", "sparkiegeek" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8013", "repo": "lxc/pylxd", "url": "https://github.com/lxc/pylxd/issues/462" }
gharchive/issue
Update Ubuntu packages

Hi! The python3-pylxd packages for the Focal, Groovy and Hirsute versions of Ubuntu are still pointing to 2.2.10. Can you push the latest version to these repositories? Thanks :)

We don't maintain the packages in Ubuntu/Debian; you're probably best contacting the Debian maintainers: https://tracker.debian.org/pkg/python-pylxd
2025-04-01T06:39:28.184451
2018-09-01T02:25:34
356163789
{ "authors": [ "artem-zinnatullin", "satoshun" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8014", "repo": "lyft/domic", "url": "https://github.com/lyft/domic/pull/35" }
gharchive/pull-request
fix activity path of AndroidManifest

com.lyft.domic.samples.redux.rxredux.MainActivity is the wrong activity path. I fixed it to com.lyft.domic.samples.mvvm.MainActivity; it's an MVVM sample.

Ah, I guess refactoring touched that, thanks!
2025-04-01T06:39:28.214202
2021-01-15T20:43:13
787180864
{ "authors": [ "codecov-io", "schottra", "service-github-lyft-semantic-release" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8015", "repo": "lyft/flyteconsole", "url": "https://github.com/lyft/flyteconsole/pull/142" }
gharchive/pull-request
fix: failed data loading/refreshing in TaskExecutionDetails

TL;DR
Fixes a failure to load data in the TaskExecutionDetails view when coming from a page where the TaskExecution has already been loaded and is in a final state. Also fixes a refresh issue with child NodeExecutions on that page.

Type
[x] Bug Fix
[ ] Feature
[ ] Plugin

Complete description
The enabled flag on useQuery is meant to delay running a dependent query until data from the parent is available. We were abusing it a little bit by attempting to use it to return cached data from a query without running the query. A better implementation of this is to conditionally set the staleTime to Infinity and disable refetch in the case where we want to just use whatever cached data is available. This updates useConditionalQuery to follow that logic.

Also fixed an issue where the NodeExecutions list on the TaskExecutionDetails page was not refreshing when the parent generator task succeeded but the spawned NodeExecutions were still in progress. This required adding a refetchInterval to the query and fixing the logic in the shouldEnableQuery function.

Tracking Issue
https://github.com/lyft/flyte/issues/672

Codecov Report
Merging #142 (fcab137) into master (d8daf6c) will decrease coverage by 0.10%. The diff coverage is 68.42%.

  ## master   #142     +/-  ##
  Coverage   74.43%   74.32%   -0.11%
  Files         415      414       -1
  Lines        7310     7310
  Branches     1154     1159       +5
  Hits         5441     5433       -8
  Misses       1869     1877       +8

Impacted Files (Coverage Δ):
  src/components/App/App.tsx 85.71% <ø> (-0.50%) :arrow_down:
  src/components/data/QueryAuthorizationObserver.tsx 29.41% <ø> (ø)
  src/components/hooks/useFetchableData.ts 91.07% <ø> (-1.79%) :arrow_down:
  src/components/data/apiContext.ts 61.90% <20.00%> (-32.22%) :arrow_down:
  ...utions/TaskExecutionDetails/TaskExecutionNodes.tsx 57.57% <33.33%> (ø)
  src/components/Navigation/ProjectSelector.tsx 100.00% <100.00%> (ø)
  ...rc/components/Navigation/SearchableProjectList.tsx 100.00% <100.00%> (ø)
  src/components/hooks/useConditionalQuery.ts 100.00% <100.00%> (ø)

Continue to review the full report at Codecov. Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update dd62120...fcab137.

:tada: This PR is included in version 0.19.3 :tada: The release is available on GitHub release. Your semantic-release bot :package::rocket:
2025-04-01T06:39:28.217018
2020-12-29T06:59:12
775736118
{ "authors": [ "katrogan", "kumare3", "wild-endeavor" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8016", "repo": "lyft/flytekit", "url": "https://github.com/lyft/flytekit/pull/298" }
gharchive/pull-request
Switching from using model Metadata -> TaskMetadata

TaskMetadata will be maintained as a shadow and allows decoupling protocol buffer types from contributor code and user code. This allows more flexibility. Another prime motivation behind this change is that it allows making the interface less verbose. This makes it trivial to support default values for metadata.

I think I'm not seeing the bigger picture. At least I don't understand how this is a decoupling. Ultimately, at serialization time, the Python class will have to be converted into the model class, right? I feel like this is more delaying the coupling rather than decoupling. Which is fine, but I don't understand what this enables.

+1 if you want. I think it's okay to use the model sometimes, though; currently we use the dynamic job spec model, literal map and parameter map models, all the literal models, auth, labels, annotations, etc. I don't think we should create parallel classes for all those.

Can you take a look at the test failures?
2025-04-01T06:39:28.218068
2019-02-27T14:52:02
415163533
{ "authors": [ "akonradi" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8017", "repo": "lyft/protoc-gen-validate", "url": "https://github.com/lyft/protoc-gen-validate/pull/148" }
gharchive/pull-request
Update bazel-gazelle version The current bazel version complains about use of cfg="data" on an attribute in the old version of bazel-gazelle. The current release doesn't have this problem. @rodaine can you approve this to fix CI?
2025-04-01T06:39:28.220645
2023-08-21T09:57:34
1859025503
{ "authors": [ "BNHeadrick" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8018", "repo": "lynchjames/obsidian-day-planner", "url": "https://github.com/lynchjames/obsidian-day-planner/issues/217" }
gharchive/issue
Can't truly uninstall

When I disable and uninstall this plugin, even after restarting on all devices, it continues to create the Daily Planner directory and a new day planner file. This is driving me crazy; it's bad enough that I'm considering nuking my vault and apps entirely to remove it.

Well, I nuked the whole vault and made a new one. Now the old vault, with the same name, keeps getting recreated so that "Daily Planner" can be made with the daily note file underneath. This is crazy. I guess I could set up a cron job to delete the dir every day, but what do I have to do here? I fully reinstalled Obsidian and it still keeps happening.

Okay, I think I figured it out. I had moved on from my old vault on only 3 out of 4 devices. So the last one, which I forgot about, was still making that daily note constantly. Can probably mark as solved.
2025-04-01T06:39:28.224188
2020-04-15T07:51:33
600095483
{ "authors": [ "jcague", "vpoddubchak" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8019", "repo": "lynckia/licode", "url": "https://github.com/lynckia/licode/pull/1564" }
gharchive/pull-request
Adapt to new nICEr API

After https://github.com/lynckia/nrappkit/pull/4 and https://github.com/lynckia/nICEr/pull/2 are merged, licode needs to be updated to work with the changed nICEr API. This PR shows how I did it. I'm not sure it is 100% correct, but it works in our environment and is now in the testing stage. Note: the GIT_TAG for project_nicer in erizo/src/third_party/nicer.cmake needs to be changed to the new value.

This looks promising! Thanks a lot for the contribution. Why do you want to update nICEr? Are there new features?

Mostly because of stability issues: we often see disconnections with an "Ice failed" error. For some clients it happens during almost every session. On our side we still have the ability to switch between libnice and nICEr, and when we switch to the latest libnice it works more stably. But you decided to go with nICEr, which is why we tried the latest version of it. For now I can say it works better. During merging I saw these new features: changed ICE-restart logic, and mDNS support.

I merged these commits in another PR to fix a couple of cases, thanks for the contribution.
2025-04-01T06:39:28.248238
2023-01-03T10:27:29
1517196859
{ "authors": [ "codecov-commenter", "federicoisepponfincons" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8020", "repo": "lyne-design-system/lyne-components", "url": "https://github.com/lyne-design-system/lyne-components/pull/1516" }
gharchive/pull-request
fix(sbb-tag): fix styles

Preflight Checklist
[x] I have read the Contributing Guidelines for this project.
[x] I agree to follow the Code of Conduct that this project adheres to.
[x] I have searched the pull request tracker for a Pull Request (PR) that matches the one I want to submit, without success.

Issue
This PR fixes an issue reported by Aleksandar from team Lynx, related to both Storybook and his AEM environment: tag elements are not correctly displayed.

Pull request checklist
Please check if your PR fulfills the following requirements:
[ ] Tests for the changes have been added (for bug fixes / features)
[ ] Docs have been reviewed and added / updated if needed (for bug fixes / features)
See Review Guidelines for more information on what is checked during the review process.

Changes
Changes in this pull request:
- fixed tag element display (background, border, animations)

Browsers
I tested the build on the following browsers:
[x] Firefox Desktop
[x] Chrome Desktop
[ ] Edge Desktop
[x] Safari Desktop
[x] Chrome Mobile
[x] Safari Mobile

Screen readers
I tested the build on the following screen readers:
[ ] JAWS Firefox Desktop
[ ] JAWS Chrome Desktop
[ ] NVDA Firefox Desktop
[ ] NVDA Chrome Desktop
[ ] VoiceOver Safari Desktop
[ ] VoiceOver Chrome Desktop
[ ] VoiceOver Safari Mobile
[ ] Android Accessibility Suite Chrome Mobile

Pull request type
Please check the type of change your PR introduces:
[x] Bugfix
[ ] Feature
[ ] Code style update (formatting, renaming)
[x] Refactoring (no functional changes, no api changes)
[ ] Build related changes
[ ] Documentation content changes
[ ] Other (please describe):

Does this introduce a breaking change?
[ ] Yes
[x] No

Other information

The ellipsis seems to be broken; working on it.

Codecov Report
Merging #1516 (47821d4) into master (ab917b0) will decrease coverage by 1.62%. The diff coverage is 62.05%.

  ## master   #1516    +/-  ##
  Coverage   54.85%   53.22%   -1.63%
  Files          49       82      +33
  Lines        1659     3438    +1779
  Branches      406      958     +552
  Hits          910     1830     +920
  Misses        671     1474     +803
  Partials       78      134      +56

Impacted Files (Coverage Δ):
  ...mponents/sbb-accordion-item/sbb-accordion-item.tsx 0.00% <0.00%> (-37.21%) :arrow_down:
  src/components/sbb-clock/sbb-clock.tsx 0.00% <0.00%> (ø)
  src/components/sbb-link/sbb-link.tsx 59.57% <ø> (-31.86%) :arrow_down:
  src/components/sbb-logo/sbb-logo.tsx 0.00% <ø> (ø)
  src/components/sbb-menu-action/sbb-menu-action.tsx 61.90% <ø> (ø)
  src/components/sbb-menu/sbb-menu.tsx 25.89% <ø> (ø)
  ...ts/sbb-navigation-action/sbb-navigation-action.tsx 71.42% <ø> (ø)
  ...onents/sbb-navigation-list/sbb-navigation-list.tsx 84.00% <ø> (ø)
  ...ts/sbb-navigation-marker/sbb-navigation-marker.tsx 53.19% <ø> (ø)
  ...mponents/sbb-checkbox-group/sbb-checkbox-group.tsx 30.61% <30.61%> (ø)
  ... and 65 more

Pushed fixes for ellipsis + active state + focus outline + a minor bug on Firefox.
2025-04-01T06:39:28.255751
2015-08-11T19:55:58
100395468
{ "authors": [ "booleanbetrayal", "danielBlowingNose", "salmanasiddiqui", "toshimaru" ], "license": "WTFPL", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8021", "repo": "lynndylanhurley/devise_token_auth", "url": "https://github.com/lynndylanhurley/devise_token_auth/issues/333" }
gharchive/issue
NameError (uninitialized constant DeviseTokenAuth::Concerns::User::BCrypt)

I get this error when doing multiple API requests in my Rails application running this gem, which is deployed on Heroku using Puma (based on this article: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server). The error occurs especially when I have set the Puma parameter -w (defining the number of workers) to a value bigger than 1. Interestingly, the error does not occur when I have set the parameter -t (number of threads) to a value bigger than 1, so it is probably solely connected to the web concurrency defined by Puma workers. Please see the details of the error below:

  app[web.1]: NameError (uninitialized constant DeviseTokenAuth::Concerns::User::BCrypt):
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/devise_token_auth-0.1.30/app/models/devise_token_auth/concerns/user.rb:111:in token_is_current?'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/devise_token_auth-0.1.30/app/models/devise_token_auth/concerns/user.rb:86:in valid_token?'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/devise_token_auth-0.1.30/app/controllers/devise_token_auth/concerns/set_user_by_token.rb:40:in set_user_by_token'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/devise_token_auth-0.1.30/lib/devise_token_auth/controllers/helpers.rb:115:in current_user'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/devise_token_auth-0.1.30/lib/devise_token_auth/controllers/helpers.rb:103:in authenticate_user!'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/activesupport-4.1.9/lib/active_support/callbacks.rb:424:in block in make_lambda'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/activesupport-4.1.9/lib/active_support/callbacks.rb:143:in call'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/activesupport-4.1.9/lib/active_support/callbacks.rb:143:in block in halting_and_conditional'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/activesupport-4.1.9/lib/active_support/callbacks.rb:229:in call'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/activesupport-4.1.9/lib/active_support/callbacks.rb:229:in block in halting'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/activesupport-4.1.9/lib/active_support/callbacks.rb:166:in call'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/activesupport-4.1.9/lib/active_support/callbacks.rb:166:in block in halting'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/activesupport-4.1.9/lib/active_support/callbacks.rb:229:in call'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/activesupport-4.1.9/lib/active_support/callbacks.rb:229:in block in halting'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/activesupport-4.1.9/lib/active_support/callbacks.rb:86:in call'
  app[web.1]: vendor/bundle/ruby/2.0.0/gems/activesupport-4.1.9/lib/active_support/callbacks.rb:86:in run_callbacks'

I am having this issue as well.

Thanks @andersonbrandon. @booleanbetrayal, any plan to release a new gem version?

Would like to get a new gem version out in the next couple of days, but am waiting for one changeset to make it in. In the meantime, you could always point to master:

  gem 'devise_token_auth', :git => 'https://github.com/lynndylanhurley/devise_token_auth.git', :branch => 'master'

ok, thanks!!!
2025-04-01T06:39:28.258367
2022-07-01T16:14:07
1291593749
{ "authors": [ "leynier" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8022", "repo": "lynot/mesirve-status", "url": "https://github.com/lynot/mesirve-status/issues/44" }
gharchive/issue
⚠️ App has degraded performance

In aa42580, App (https://mesirve.app) experienced degraded performance:
HTTP code: 200
Response time: 395 ms

Resolved: App performance has improved in 45a4b08.
2025-04-01T06:39:28.264807
2023-05-23T21:21:51
1722803774
{ "authors": [ "Boyadjie", "CruuzAzul" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8023", "repo": "lyonjs/shortvid.io", "url": "https://github.com/lyonjs/shortvid.io/issues/330" }
gharchive/issue
💡 Implement React Testing Library

Feature Request 👨🏼‍💻
Implement React Testing Library for project testing 🧪

Use Case ✍🏼
As developers, we want to ensure the stability and reliability of our project by implementing a robust testing framework. By introducing React Testing Library, we aim to facilitate the testing process and improve the overall quality of our codebase.

Possible Solution 💡
- Install React Testing Library as a project dependency.
- Create separate folders for the tests.
- Integrate testing into the project's development workflow.
- List all current components that need to be tested (make US 🏷️)

Components To Test:
/App: Header, Footer, NavBar, LayerByMode, CopyUrlButton, Code, ActiveLink
/utils: encodedObjectValues, formatUrlWithQuery, loadFont
/hooks: useInputChange, useInputDateChange, useSelectedFont
/forms: colorInput, FontPicker, input, inputDate, selectInput
/remotion/molecules: AvatarWithCaption, IconWithCaption, TalkDetails
2025-04-01T06:39:28.275351
2024-01-24T09:17:52
2097799663
{ "authors": [ "ilia-chelak" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8024", "repo": "lzhnb/Primitive3D", "url": "https://github.com/lzhnb/Primitive3D/issues/5" }
gharchive/issue
Ray caster output tensors size

Hi @lzhnb, thanks for the library. I am trying to employ it to do some ray casting on meshes. The question I have is: what size should the depths, normals, and primitive_ids tensors have? I tried to initialize the tensors with torch zeros of size [n, 1], [n, 3], and [n, 1] accordingly, but get wrong outputs consisting of all 10s for depths, -1 for primitive ids, and 0 for normals. Thanks in advance.

Never mind, it seems my rays were just not pointing at the surface. Upon further inspection I found that everything works fine.
2025-04-01T06:39:28.309018
2024-03-24T10:58:23
2204293917
{ "authors": [ "m-avagyan", "rkmsnc" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8025", "repo": "m-avagyan/webpack-react-typescript-template", "url": "https://github.com/m-avagyan/webpack-react-typescript-template/pull/3" }
gharchive/pull-request
Update package.json

Hi @m-avagyan, the react-i18next package is missing from dependencies, and @types/react-i18next is unnecessary ("This is a stub types definition. react-i18next provides its own type definitions, so you do not need this installed.") here. Thanks

Hi @rkmsnc, thanks for the contribution 🙌
2025-04-01T06:39:28.351868
2017-11-04T04:02:17
271164565
{ "authors": [ "dlrobertson", "whitequark" ], "license": "0BSD", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8026", "repo": "m-labs/smoltcp", "url": "https://github.com/m-labs/smoltcp/pull/69" }
gharchive/pull-request
[WIP] Add ICMPv4 sockets

- Add support for ICMP sockets
- Add tests for ICMP sockets
- Rename proto-type features to socket-type
- Update documentation

Resolves: #64

See src/socket/icmp.rs:137-174 for the updated API using bind.

@dlrobertson The updated API doesn't look quite right. Shouldn't you be binding to the identifier field in the IP header rather than an IP endpoint? Imagine you are implementing a traceroute program: with the current API this is completely impossible.

"To send and receive ICMP messages that are not associated with a specific TCP/UDP port number (e.g., Echo, Echo Reply, Timestamp, Timestamp Reply, Information Request, Information Reply), the socket has to be bound to a specific ICMP identifier. The ICMP identifier is a 16-bit field present in bytes 5/6 in the header of these messages. Only messages containing the right identifier can be sent or received through a safe raw ICMP socket of this type."

So it isn't the identifier in the IP header (I was wrong in my previous comment), it is the identifier in the ICMP message. I used the IpEndpoint port value for consistency with other implementations and for the case where you're binding to ICMP error responses to a UDP port. Socket creation then takes an additional parameter that determines how the port value of the bound IpEndpoint is interpreted. As a result, the current API covers the following two cases.

Bind to ICMP error responses for UDP packets sent from port 53:

```rust
use smoltcp::socket::{Socket, IcmpSocket, IcmpSocketType};

// Created with type Udp. IpEndpoint::port is truly a port.
let mut icmp_socket = match IcmpSocket::new(rx_buffer, tx_buffer, IcmpSocketType::Udp) {
    Socket::Icmp(socket) => socket,
    _ => unreachable!()
};
icmp_socket.bind(53).unwrap();
```

Bind to ICMP messages with the identifier 0x1234:

```rust
use smoltcp::socket::{Socket, IcmpSocket, IcmpSocketType};

// Created with type Icmp. IpEndpoint::port is actually the 16-bit identifier.
let mut icmp_socket = match IcmpSocket::new(rx_buffer, tx_buffer, IcmpSocketType::Icmp) {
    Socket::Icmp(socket) => socket,
    _ => unreachable!()
};
icmp_socket.bind(0x1234).unwrap();
```

I chose to implement this using a parameter added to socket creation, but this could also be accomplished using an enum passed to bind. I'm starting to realize the use of IpEndpoint makes sense for C, but I think it would be more readable if I used some sort of enum.

Ah, I see. Yes, I think a dedicated enum IcmpEndpoint would be best.

How important is it to keep the standard bind API? E.g., how bad would it be if we ended up with something like:

```rust
let mut socket = ...
socket.bind(IcmpEndpoint::Identifier(0x1234))
```

When bind is only given a u16 it gets to be a bit ambiguous. In theory bind could keep the same signature. Then we could implement the setting of the socket's endpoint with something like the following:

```rust
match self.socket_type {
    SocketType::Udp => self.endpoint = IcmpEndpoint::Udp(endpoint.into()),
    SocketType::Icmp => {
        let tmp_endpoint: IpEndpoint = endpoint.into();
        self.endpoint = IcmpEndpoint::Icmp(tmp_endpoint.port);
    }
}
```

There's no "standard bind API", we do not try to implement POSIX. There aren't ICMP sockets in POSIX anyway. So, do the most clear thing.

Rebased on master and updated the ping example. I got it to work on my system, but otherwise I did very little testing of it.

After the comments above are fixed this is ready to be merged.

Thanks for the reviews. The final product was much, much better than the first few revisions.
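For reference, a self-contained sketch of the enum-based bind this thread converged on. The names are illustrative, not necessarily smoltcp's final API:

```rust
/// How an ICMP socket filters incoming messages (sketch only).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum IcmpEndpoint {
    /// Match ICMP messages carrying this 16-bit identifier.
    Ident(u16),
    /// Match ICMP errors for UDP traffic from this port.
    Udp(u16),
}

struct IcmpSocket {
    endpoint: Option<IcmpEndpoint>,
}

impl IcmpSocket {
    fn bind(&mut self, endpoint: IcmpEndpoint) -> Result<(), &'static str> {
        if self.endpoint.is_some() {
            return Err("already bound");
        }
        self.endpoint = Some(endpoint);
        Ok(())
    }
}

fn main() {
    let mut socket = IcmpSocket { endpoint: None };
    // Unambiguous at the call site, unlike a bare u16:
    socket.bind(IcmpEndpoint::Ident(0x1234)).unwrap();
    println!("{:?}", socket.endpoint);
}
```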
2025-04-01T06:39:28.361254
2020-03-06T23:56:59
577234715
{ "authors": [ "EbGu3", "JohnAllen", "MarCixn", "chancelier", "gwonglapierre", "m-rtijn", "romybompart", "sutanu86" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8027", "repo": "m-rtijn/mpu6050", "url": "https://github.com/m-rtijn/mpu6050/issues/29" }
gharchive/issue
ModuleNotFoundError when importing mpu6050

The smbus installation worked, and I was able to test the I2C connection, which works. I also got no errors when installing through pip install. However, as soon as I try to run a script in Python 3, it says:

```
Traceback (most recent call last):
  File "/home/pi/Documents/gyrompu.py", line 1, in <module>
    import mpu6050 as mpu6050
ModuleNotFoundError: No module named 'mpu6050'
```

I've read some threads about similar issues and it looks like it's all coming from using Python 3.

Seems like the issue is within Python, because it works when I'm using a script that's located in the same directory as mpu6050.py.

same issue

@sutanu86 Did you install the package? If so, how?

@chancelier did you use the same Python version for installing the package as for running it?

Hi, I just released a version that might help for different platforms: Jetson Nano, Raspberry Pi, BeagleBone, pyboard, ODROID and more. It is based on Mr. Tijn's package. https://github.com/romybompart/py_imu_mpu6050 https://pypi.org/project/py-imu-mpu6050/ Now that it is working on different platforms, I will add self_calibration functions and magnetometer support.

```
git clone https://github.com/Tijndagamer/mpu6050.git
cd mpu6050
sudo python setup.py install
```

Hello, you can check which directory the files were installed to when you install mpu6050. For example, in my case they were added to ./.local/lib/python2.7/site-packages... If that's the case for you, try running the script with python2 <filename.py>.

```
git clone https://github.com/Tijndagamer/mpu6050.git
cd mpu6050
sudo python setup.py install
```

This didn't work for me: "ModuleNotFoundError: No module named 'smbus'"
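A quick way to rule out the interpreter mismatch discussed above is to install and verify with the same interpreter you run scripts with. This is a sketch; adjust paths to your setup:

```sh
# From the cloned mpu6050 directory, install for the python3 interpreter:
python3 -m pip install --user .

# Verify the module resolves for that same interpreter:
python3 -c "import mpu6050; print(mpu6050.__file__)"
```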
2025-04-01T06:39:28.392278
2022-06-23T16:53:02
1282689345
{ "authors": [ "drelatgithub", "lmiq" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8028", "repo": "m3g/CellListMap.jl", "url": "https://github.com/m3g/CellListMap.jl/pull/61" }
gharchive/pull-request
fix: signature for pairwise mapping

Fixed signature for map_pairwise_serial! and map_pairwise_parallel! for the CellListPair type. Currently, the package fails for the following example:

```julia
using CellListMap

box = Box([100,100,100], 20)
b1 = [[100,100,100] .* rand(3) for _ in 1:10]
b2 = [[100,100,100] .* rand(3) for _ in 1:10]
cl = CellList(b1, b2, box)

CellListMap.map_pairwise_serial!((x,y,i,j,d2,output) -> output, nothing, box, cl)
```

because the keyword show_progress defaults to show_progress, which is not defined elsewhere.

Please let me know if you experience any other issue. Thanks again.
2025-04-01T06:39:28.439926
2021-02-19T23:29:15
812429926
{ "authors": [ "erikng", "macbm" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8029", "repo": "macadmins/nudge", "url": "https://github.com/macadmins/nudge/issues/121" }
gharchive/issue
Nudge app appears off center when it's opened on an external monitor, then the user disconnects from the monitor

When an update is past due, Nudge cannot be dismissed by the "Later" or "I understand" buttons (expected). However, if the user is working off an external monitor, when they unplug it and work on the main laptop screen, Nudge does not re-center itself. This prevents the user from fully seeing and interacting with Nudge.

Possible fix (a sketch of this call is below): https://developer.apple.com/documentation/appkit/nswindow/1419090-center

I think I have fixed it: https://github.com/macadmins/nudge/commit/ce198c101ff2e2758eed6e1c38be1d6d5a8fe87e

Please try this version: https://github.com/macadmins/nudge/releases/tag/v.<IP_ADDRESS>02021001333

It took two commits, but confirmed with @macbm that this fixes it. It will take Nudge 60 seconds to fix this, as it's tied to the nudgeRefreshCycle key that defaults to 60 seconds, but that's the best I can do without increasing CPU usage and polling this constantly (which I don't think is worth the effort).
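A minimal sketch of the NSWindow API the report points to, assuming access to the app's main window and running on the existing refresh cycle rather than polling:

```swift
import AppKit

// Re-center the window on the active screen; tying this to the
// existing nudgeRefreshCycle avoids constant polling.
if let window = NSApplication.shared.windows.first {
    window.center()
}
```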
2025-04-01T06:39:28.444601
2018-10-26T17:31:22
374478349
{ "authors": [ "nawatts" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8031", "repo": "macarthur-lab/gnomadjs", "url": "https://github.com/macarthur-lab/gnomadjs/issues/320" }
gharchive/issue
Searchbox showing incorrect options for some region queries

Type 1-1 in the search box and the first option shown is 1--19-21. It's suggesting the 20-base region around position 1 on chromosome 1, i.e. chromosome 1, start -19, end 21, even though a region start can never be negative.
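A one-line guard of the kind this needs, with hypothetical names (the real suggestion-builder may look different):

```typescript
// Clamp the window so a region around an early position stays valid.
const windowSize = 20
const start = Math.max(1, position - windowSize / 2)
const stop = position + windowSize / 2
const suggestion = `${chrom}-${start}-${stop}`
```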
2025-04-01T06:39:28.463211
2015-11-30T14:49:08
119507203
{ "authors": [ "SteveShaffer", "adiakritos", "avaragado", "chrisparton1991", "cudasteve" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8032", "repo": "machineboy2045/angular-selectize", "url": "https://github.com/machineboy2045/angular-selectize/issues/117" }
gharchive/issue
npm release is out of date https://www.npmjs.com/package/angular-selectize2 reports version 1.2.3. Please release an updated version for npm. It looks like you tried to update package.json to include the 3.0.1 version number but the v3.0.1 tag is below that commit. So when npm looks up the v3.0.1 tag, it gets the v3.0.1 code (I think) but the package.json file that gets downloaded still says v.1.2.3, which can lead to issues when using npm or other tools like npm-shrinkwrap cuz they'll get tripped up on the version number in that file. Perhaps just moving the git v3.0.1 tag a few commit forward would help. Or publishing a v3.0.2 with everything in sync? Agreed, I've created #151 to add a main entry to package.json, but it won't help if npm isn't being updated. I think you should be able to reference the GitHub repo directly in your package dependencies if npm isn't current. I wonder if this is the reason that I can't get it to load within my app.js file while using webpack...
2025-04-01T06:39:28.483569
2024-09-25T07:53:00
2547242741
{ "authors": [ "macmotp", "tamasori" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8033", "repo": "macmotp/locale", "url": "https://github.com/macmotp/locale/pull/1" }
gharchive/pull-request
Fix Hungarian default date format and add Hungarian translations Hey, I fixed the Hungarian default date format as it was incorrect, you can find more information here: https://en.wikipedia.org/wiki/Date_and_time_notation_in_Hungary#Date Also I added translations for the Hungarian language! Thank you @tamasori, since I am going to release a more stable version soon, I will keep this PR on hold and apply your changes directly. I will add you as contributor for reference Ok, thank you @macmotp All your changes have been included into the new release, thanks!
2025-04-01T06:39:28.575503
2015-05-04T18:05:29
73086124
{ "authors": [ "douglasdrumond", "jpetrie", "splhack" ], "license": "Vim", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8034", "repo": "macvim-dev/macvim", "url": "https://github.com/macvim-dev/macvim/issues/30" }
gharchive/issue
Bug displaying unicode characters above BMP with "syntax on"

From @GoogleCodeExporter on March 16, 2015 9:26

What steps will reproduce the problem? Set syntax on (latex for instance) and add a calligraphic A in math mode (U+1D49C) between parentheses, itself between dollars: $(A)$. I use the EversonMono font.

What is the expected output? What do you see instead? I expect it to be displayed properly, but sometimes a space is displayed after the A, shifting the rest of the line up to the $.

What version of MacVim and OS X are you using (see "MacVim->About MacVim" and "Apple Menu->About This Mac" menu items, e.g. "Snapshot 40, 10.5.6 Intel")? Snapshot 73

Please provide any additional information below. Attached the screenshots of the buggy display, normal display and syntax-off display.

Original issue reported on code.google.com by <EMAIL_ADDRESS> on 9 Jan 2015 at 2:09. Attachments: bug1.tiff bug2.tiff bug3.tiff

Copied from original issue: douglasdrumond/macvim#524

For what it's worth, I can still reproduce this as well as of snapshot 76:
1. Copy this character: 𝒜 (the Unicode character referenced in the original post; hopefully GitHub preserves it).
2. Launch MacVim, paste the character inside a set of parentheses and type some text after, so you have a line like "(𝒜) hello". Make sure there's a line below this text.
3. Exit insert mode, move the cursor to the start of the line.
4. Move left and right between the first and second lines and note how the display of the line shifts, sometimes visually truncating the last character in the line.

I can't reproduce it. @jpetrie could you write a more specific way to reproduce the issue? For example:
1. Open an empty MacVim window
2. Enter insert mode
3. Paste "(𝒜) hello." and press return.
4. Input "aaaa"
5. Exit insert mode
...

I can still see it by doing the following:
1. Make sure the Core Text renderer is enabled.
2. Launch MacVim or otherwise open a new, empty buffer.
3. Enter insert mode.
4. Paste the text inside quotes: " (𝒜) test" (that's space, open-paren, 𝒜, close-paren, space, 'test')
5. Exit insert mode
6. Use h and l to scrub the cursor along the line, particularly at the start of the line, and observe the rendering artifacts.

What are 'the rendering artifacts'?? On snapshot-81, 10.11, I can't see any issue. Could you record a screencast? @jpetrie are you using snapshot-80 or an earlier version? snapshot-81 has no problem.

Yeah, it's fixed in 81+.
2025-04-01T06:39:28.577480
2021-01-05T19:28:27
779464551
{ "authors": [ "macwille" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8035", "repo": "macwille/club-webstore", "url": "https://github.com/macwille/club-webstore/issues/1" }
gharchive/issue
Refresh causes unknown endpoint

The current use of React BrowserRouter cannot resolve routes correctly after a refresh.

Could be fixed with React HashRouter. Maybe redirect to the "/" url.

Fixed with a "/*" endpoint that returns index.html from the build (a sketch of that catch-all is below).
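A minimal Express catch-all of the kind described. This is a sketch, since the actual server code isn't shown in the issue:

```javascript
const path = require('path')
const express = require('express')

const app = express()

// Serve the compiled React app, then fall back to index.html for any
// unknown path so BrowserRouter can handle routing client-side.
app.use(express.static(path.join(__dirname, 'build')))
app.get('/*', (req, res) => {
  res.sendFile(path.join(__dirname, 'build', 'index.html'))
})

app.listen(3000)
```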
2025-04-01T06:39:28.590274
2015-07-23T09:11:41
96767878
{ "authors": [ "SgtOddball", "ToBe998", "ilyaporopudas", "raffij", "rnjailamba" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8036", "repo": "madebymany/sir-trevor-js", "url": "https://github.com/madebymany/sir-trevor-js/issues/370" }
gharchive/issue
Handle failed image upload in image block

If the image upload fails, Sir Trevor will still create an empty image block, which can cause rendering problems.

That's an interesting one: if I have a failed image upload I still end up seeing the image (now as a base64 file) in an image block, but it adds a warning to inform that the image failed. Although, thinking about it, I don't use the default image block; I've been using Image-extended, so that might have something to do with it. Admittedly, there could be a way of replacing the uploaded image block without first removing it, but that's more of a usability issue than anything else and isn't much of an issue at that.

Going through old issues as part of a block creation / config idea I'm working on. It'll address bad uploads and allow re-uploading without deleting the block. Also, you'll be able to say whether the image element is required, which will allow us to force re-upload, or the user will need to delete the block before being able to save and submit the data. As a side note, this would make image blocks easier to build, rather than replicating and editing each time someone wants to customise a field.

Coolbeans, it's not a critical end-of-the-world-as-we-know-it kind of issue, but it's nice to know it's being looked at.

@raffij Is this feature still in progress? This can also happen with a video block.

I thought the validation regex tweaks prevented that from happening on the vids? I'll see if I've got time this weekend and see if I can validate the video block. @SgtOddball it's something I need to fix anyway for one of our projects. The decision is whether an invalid video block should be ignored, or fail validation before a url is posted.

Ahh ok, I'd think that invalid blocks should be ignored, otherwise a video link that becomes invalid later on could cause issues, as it's already been accepted previously as valid (unless it gets re-evaluated each time the block is rendered).

That's my thinking too.

Just stumbled into this too. It seems that the current image block can no longer detect if an image upload failed in any way. The Promise will always trigger the success method, no matter what the server responded. So, it is completely possible that I read the code wrong, but the image block (and maybe other blocks using the uploader and/or cancellable-promise) is no longer able to react to any problems during upload.
2025-04-01T06:39:28.600398
2016-01-26T21:48:05
128953333
{ "authors": [ "Anahkiasen", "kelunik" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8037", "repo": "madewithlove/why-cant-we-have-nice-things", "url": "https://github.com/madewithlove/why-cant-we-have-nice-things/issues/79" }
gharchive/issue
Wrong RFC state

http://why-cant-we-have-nice-things.mwl.be/events shows https://wiki.php.net/rfc/invalid_strings_in_arithmetic as implemented, but the vote was canceled and it was put under discussion again. I'm guessing the code gives the votes priority over the status at some point, because from a votes standpoint it would have been implemented.

Will look into it.
2025-04-01T06:39:28.617464
2015-11-28T20:26:55
119316454
{ "authors": [ "chirino", "madoublet" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8038", "repo": "madoublet/respond", "url": "https://github.com/madoublet/respond/pull/360" }
gharchive/pull-request
Fixes render="publish" does not work when used with an expression based url #354. This is handy for rendering the main content of a page to get better SEO love. Perfect. Thanks!
2025-04-01T06:39:28.638550
2017-08-01T07:21:14
246977271
{ "authors": [ "cesar-tonnoir" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8039", "repo": "maestrano/impac-angular", "url": "https://github.com/maestrano/impac-angular/pull/374" }
gharchive/pull-request
[IMPAC-603] Create dashboard from template + designer mode

- "Create from template" feature
- Fix currencies drop-down when dashboard changed
- Fix alerts settings button

@xaun I think this is ready - can you please review?

@xaun: up please, the release is tomorrow... thanks.
2025-04-01T06:39:28.647760
2017-04-26T01:22:28
224316558
{ "authors": [ "magcius", "zmodem" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8040", "repo": "magcius/xplain", "url": "https://github.com/magcius/xplain/issues/27" }
gharchive/issue
Chromium link in regions.html is broken The link in "Some applications, like Chromium, actually still do this in their own window decorations." doesn't seem to work. The code is still there, but the code search interface doesn't seem to like the "rcl" parameter. Maybe a link to the git repo is more stable: https://chromium.googlesource.com/chromium/src/+/d7d447a09a95eb7ad399eb8512f677a66e127d69/ui/views/window/window_shape.cc#12 the wonder of the modern web Replaced with the URI you suggested.
2025-04-01T06:39:28.848134
2023-01-10T10:56:17
1527176192
{ "authors": [ "Ablu", "magiclen" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8041", "repo": "magiclen/words-count", "url": "https://github.com/magiclen/words-count/issues/2" }
gharchive/issue
Clarification request about the algorithm used here vs the Unicode text segmentation guidelines

Hi! First: Thanks for this handy crate! I compared the results of this crate against what https://unicode.org/reports/tr29/#Word_Boundaries mandates and was surprised to find some differences. I then checked the algorithm used by your crate and realized that it is a more pragmatic approach. I wonder whether this is done on purpose (in which case it might make sense to explain it in the README) or whether this is an oversight? The https://crates.io/crates/unicode-segmentation crate seems to claim to do it the "official" way (Disclaimer: I do not claim that I fully checked every rule and confirmed that it is correct) and comes to the conclusion that the example from the documentation at https://crates.io/crates/words-count should be 18 words long. What is your thought on this?

This crate doesn't follow the rules in Word Boundaries. In Chinese composition, I believe Chinese punctuation marks need to be counted as words. That's why Rust是由 Mozilla 主導開發的通用、編譯型程式語言。 counts 20 words (including 、 and 。). You can get the same result in LibreOffice Writer. Also, you can try "a-good-word".

```rust
use unicode_segmentation::UnicodeSegmentation;

fn main() {
    let s = "a-good-word";
    println!("{:?}", words_count::count(s));
    println!("{:?}", s.unicode_words().collect::<Vec<&str>>());
    println!("{:?}", s.split_word_bounds().collect::<Vec<&str>>());
}
```

This crate tells you a-good-word is a word, while the unicode_words function in unicode-segmentation returns ["a", "good", "word"]. In LibreOffice Writer, a-good-word is counted as one word.

Thanks for the clarification! I can totally understand that sometimes a Unicode standard may not exactly be what one would want. I have sent a suggestion for how to potentially clarify this a bit more in the README :). It might make sense to reference https://crates.io/crates/unicode-segmentation, but I left it out for now and will leave that up to you!
2025-04-01T06:39:28.877875
2020-06-14T05:56:08
638290480
{ "authors": [ "magnet", "nemosupremo" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8042", "repo": "magnet/metered-rs", "url": "https://github.com/magnet/metered-rs/issues/25" }
gharchive/issue
Is the only way to access metrics via serde?

Hi, maybe I'm not reading the documentation correctly, but I'm trying to access the metrics generated by the library (to format and print to the console) and it seems the only way to do so is to roundtrip via some serde serializer? I feel like I'm missing something.

Hi, generated metric registries are regular Rust structs, so you can access the different metrics directly, and do whatever reading or transforms you need. Metrics implement Serialize to hook various back-ends that require different formats, notably Prometheus, which is supported since 0.4.

Ok, it sounds like I'm using the library incorrectly? For example, using the library manually I have:

```rust
#[derive(Default, Debug)]
pub struct Perf {
    fps: Throughput,
}

fn main() {
    let metrics = Perf::default();
    let fps = &metrics.fps;
    loop {
        measure!(fps, expensive());
        // println!("{}", fps.mean())
    }
}
```

However, the internal Histogram isn't visible, so I can't call fps.mean(). Am I supposed to implement my own Throughput type?

Hmm, that's an oversight, I feel like we should expose the inner histogram. Would you like to open that PR?
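Until the inner histogram is exposed, the Serialize route mentioned above is the practical way to read values out. A sketch, assuming serde_json is in Cargo.toml and the registry derives Serialize (as metered's generated registries do):

```rust
// Serialize the whole metric registry and inspect it as JSON.
fn dump_metrics<T: serde::Serialize>(metrics: &T) {
    match serde_json::to_string_pretty(metrics) {
        Ok(json) => println!("{}", json),
        Err(e) => eprintln!("failed to serialize metrics: {}", e),
    }
}
```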
2025-04-01T06:39:28.961930
2017-11-26T20:35:48
276854359
{ "authors": [ "magneticstain" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8043", "repo": "magneticstain/Inquisition", "url": "https://github.com/magneticstain/Inquisition/issues/69" }
gharchive/issue
fork() Does Not Occur During Run Tests

When performing run tests in Travis CI, the processes are not being forked. This results in reduced test coverage of the logic performed after fork() runs, confirmed by reviewing Coveralls results. I suspect this is due to limitations with Travis CI in order to prevent fork bombs. What we'll do to fix this is check to see if the test_run CLI flag is set, and if so, continue running the contained logic in-process.

Found this exception when running coverage tests:

```
Traceback (most recent call last):
  File "inquisition.py", line 173, in <module>
    main()
  File "inquisition.py", line 150, in main
    anatomize.startAnatomizer()
  File "/home/travis/build/magneticstain/Inquisition/lib/anatomize/Anatomize.py", line 150, in startAnatomizer
    numLogsBetweenTrackingUpdate=numLogsBetweenTrackingUpdate)
  File "/home/travis/build/magneticstain/Inquisition/lib/anatomize/Parser.py", line 523, in pollLogFile
    every_n=numLogsBetweenTrackingUpdate):
  File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/pygtail/core.py", line 81, in __init__
    [int(line.strip()) for line in offset_fh]
ValueError: need more than 0 values to unpack
```

I found another bug in my last commit where new processes spawn even when in test mode. I fixed that, and that now seems to have fixed this exception as well. I'm still not sure how, but I suspect it was some sort of race condition between parser processes. Fixing this so that all logic runs in a single process during test mode has resolved it during local testing.

https://sentry.io/carlsonet/inquisition/issues/410604874/
https://sentry.io/carlsonet/inquisition/issues/410604750/
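A sketch of the test_run gating described above, with hypothetical names since the real CLI parsing isn't shown here:

```python
import os

def start_parsers(run_parser, test_run=False):
    """Run parser logic in-process during tests so coverage sees it;
    fork a worker process otherwise."""
    if test_run:
        run_parser()
        return
    pid = os.fork()
    if pid == 0:
        run_parser()
        os._exit(0)
```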
2025-04-01T06:39:28.980804
2023-04-25T02:54:40
1682344389
{ "authors": [ "ArjunKrishna3367", "mahalrs" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8044", "repo": "mahalrs/newsgen", "url": "https://github.com/mahalrs/newsgen/pull/21" }
gharchive/pull-request
Add tokenizer to encode input text and decode predicted image from logits

Tokenizer to encode text using BartTokenizer. It decodes the Bart decoder output of image tokens to an image using the VQGAN decoder.

Currently getting an error when trying the tokenizer: 'BartTokenizer' object has no attribute 'to', when running the line tokenizer.to(device). Not working with CPU nor CUDA. Trying to see if there's an easy fix.

Let me fix it
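For context, Hugging Face tokenizers are plain Python objects; only models and tensors move to a device. A sketch of the usual pattern (the checkpoint name is illustrative, not this project's):

```python
import torch
from transformers import BartTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
# Tokenize on CPU, then move the resulting tensors to the device.
inputs = tokenizer("some news text", return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
```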
2025-04-01T06:39:29.011347
2023-09-23T18:25:48
1909959745
{ "authors": [ "Dr4gon", "mai-soup" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8045", "repo": "mai-soup/porch-reads-club", "url": "https://github.com/mai-soup/porch-reads-club/issues/77" }
gharchive/issue
#CleanCode No inline js and use configuration variables

@mai-soup The thing with inline js is - just don't do it. It has so many drawbacks:
- A bad separation of concerns, because the view shouldn't know about the logic
- It's hard to read, and even harder to maintain, because it's inline instead of a dynamic environment setting linked in a configuration file
- Debugging hell, if something goes wrong...

LoansView.vue:

```pug
// if the loan is due in 7 days or less, show a button to extend the loan
button(v-if="(new Date(loan.returnDate) - new Date()) < 1000 * 60 * 60 * 24 * 7" @click="doExtend(loan)") Extend
```

The idea here would be using a function call from JS and configuring this value in your .env (see the sketch below).

instead of environment variables, decided to assign the loan-related durations on a per-library basis (#113)
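A sketch of what that refactor could look like, assuming a Vue CLI-style env setup with hypothetical names (the project ultimately went per-library in #113 instead):

```javascript
// config.js: read the window once from the environment, with a default.
const DAY_MS = 24 * 60 * 60 * 1000
export const EXTEND_WINDOW_MS =
  Number(process.env.VUE_APP_EXTEND_WINDOW_MS) || 7 * DAY_MS

// LoansView.vue <script>: move the logic out of the template.
import { EXTEND_WINDOW_MS } from './config'

export default {
  methods: {
    canExtend(loan) {
      return new Date(loan.returnDate) - Date.now() < EXTEND_WINDOW_MS
    },
  },
}
```

The template then shrinks to `button(v-if="canExtend(loan)" @click="doExtend(loan)") Extend`, which keeps the view free of logic.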
2025-04-01T06:39:29.015909
2017-12-21T16:15:31
283931148
{ "authors": [ "hunterlester", "maidsafe-highfive" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8046", "repo": "maidsafe/safe_examples", "url": "https://github.com/maidsafe/safe_examples/pull/339" }
gharchive/pull-request
fix/use promise to await for archiver to finalize before applying sha… …256sum r? @krishnaIndia (maidsafe_highfive has picked a reviewer for you, use r? to override) Draft based on this PR: https://github.com/maidsafe/safe_examples/releases/tag/untagged-437234bec79b2cf8b510
2025-04-01T06:39:29.078091
2015-03-01T02:04:28
59371552
{ "authors": [ "blakeperdue", "leemunroe" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8047", "repo": "mailgun/transactional-email-templates", "url": "https://github.com/mailgun/transactional-email-templates/pull/10" }
gharchive/pull-request
Fixing two iOS display issues

1. In the Mail app on iOS 8, the default body padding is 20px. This makes the template rather skinny. Most modern email templates have a smaller left and right body padding of 10px.
   - 20px padding: http://cl.ly/image/0x2b3e2P1i2b
   - 10px padding: http://cl.ly/image/1p2h1b2Z0m1E
2. In the Mail app on iOS 8, Helvetica's 600-weight font looks wonky: the letter-spacing and font-weight look off.
   - 600 weight: http://cl.ly/image/0x2b3e2P1i2b
   - 800 weight: http://cl.ly/image/1p2h1b2Z0m1E

Changed this in a few places: ad13d824800bcd968870e1714cc1dea465ce1c1c

Thanks
2025-04-01T06:39:29.085503
2019-03-19T08:26:13
422599216
{ "authors": [ "KoolPal", "OnkelTem", "peterdd" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8048", "repo": "mailhog/MailHog", "url": "https://github.com/mailhog/MailHog/issues/243" }
gharchive/issue
Feature request: Web browser notification

It would be nice to have MailHog capable of displaying notifications upon receiving messages. This would save clicking on the MailHog tab every time to see if a message has been received.

Which browser? Here's my experience on a Mac:
- Firefox: It seems to work. It even plays a bing sound.
- Safari: Shows the notification, but no sound (dunno why it's not playing sound).
- Chrome: No notification, no sound (dunno why).

Screenshot: MailHog is not the active tab, but receives the notification.

@peterdd Chrome, Linux, Version 72.0.3626.121 (Official Build) (64-bit)

Maybe I have the reason: https://developer.mozilla.org/en-US/docs/Web/API/notification, under "Secure context": "This feature is available only in secure contexts (HTTPS), in some or all supporting browsers."

So in the latest Firefox 67 it is not working anymore, unless I change dom.webnotifications.allowinsecure in about:config to false when you use MailHog just locally for testing without TLS (notifications work again, with sound).

In Chrome 74 I fiddled with chrome://flags: #unsafely-treat-insecure-origin-as-secure (enabled, with the address of your running MailHog inserted) and #enable-message-center-new-style-notification (enabled) to get them working for a local http IP address (http://10.0.0.x:8025). (Notifications shown, no sound, don't know why.)

Hi, with Firefox 74, even this does not seem to work. "So in the latest Firefox 67 it is not working anymore - unless I change in about:config the dom.webnotifications.allowinsecure to false when you use mailhog just local for testing without TLS." Any other way to make this work now?

Sorry, I meant set dom.webnotifications.allowinsecure to true.

With Firefox 74 I was able to get the notification alerts on Mac again. New seems to be the little speech-bubble icon in the address bar: click and allow it in the popup dialog and you get the other icon as shown in the screenshot.

@peterdd Perfect! Both changes done and it works on Windows 10 & Firefox 74! Thanks a lot!
2025-04-01T06:39:29.093156
2024-02-14T15:47:24
2134649440
{ "authors": [ "amosaxe", "maitrungduc1410" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8049", "repo": "maitrungduc1410/react-native-video-trim", "url": "https://github.com/maitrungduc1410/react-native-video-trim/issues/37" }
gharchive/issue
getting Reference to property 'player' in closure requires explicit use of 'self' to make capture semantics explicit

While running on iOS, we are getting the following error:

```
/Users/prachhhi/Documents/yes/app/AppV2/node_modules/react-native-video-trim/ios/VideoTrimmerViewController.swift:254:16 Reference to property 'player' in closure requires explicit use of 'self' to make capture semantics explicit
```

Which version of this library and RN are you using? Have you run pod install? Can you try this in a new project? Because with the above errors I'd definitely see them myself and never publish :)

RN version: 0.72.4
react-native-video-trim version: 1.0.10
Yes, we have run pod install.

Yes, correct, and apologies for not adding full context: the same code runs on another system, so we suspect something is wrong with the Xcode settings. But as this issue only appears with this library, I thought of asking for help here.

Can you click each of the errors, then screenshot and show me the exact location for each error?

This file contains all the errors

Let me push a fix for this. I think this is about backward compatibility; it's really fine on my end.

Sure, thanks a lot. Yes, I am also able to run on another system with the latest Xcode, so it looks like that's the cause.

Hey @maitrungduc1410, yep, this worked, thanks for this. Now I'm wondering why I didn't just patch it on my own and raise a PR, lol. Again, thanks for quickly responding and helping.
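For context, the usual fix for this Swift diagnostic is referencing self explicitly (often weakly) inside the escaping closure. A minimal sketch against a hypothetical AVPlayer-based controller, not the library's actual code:

```swift
import AVFoundation

final class TrimmerController {
    let player = AVPlayer()
    private var timeObserver: Any?

    func observe() {
        let interval = CMTime(seconds: 0.1, preferredTimescale: 600)
        // Older toolchains reject a bare `player` here; capture self explicitly.
        timeObserver = player.addPeriodicTimeObserver(forInterval: interval, queue: .main) { [weak self] time in
            guard let self = self else { return }
            print("rate:", self.player.rate, "at", time.seconds)
        }
    }
}
```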
2025-04-01T06:39:29.147984
2022-03-08T13:38:44
1162669660
{ "authors": [ "codecov-commenter", "makenowjust" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8050", "repo": "makenowjust-labs/recheck", "url": "https://github.com/makenowjust-labs/recheck/pull/379" }
gharchive/pull-request
Fix coreJVM/initialCommands to run

Changes
- Fix coreJVM/initialCommands to run

Codecov Report
Merging #379 (4d34971) into main (6246719) will not change coverage. The diff coverage is n/a.

```
@@            Coverage Diff            @@
##              main     #379    +/-  ##
=========================================
  Coverage   100.00%  100.00%
=========================================
  Files           66       56    -10
  Lines         2666     1848   -818
  Branches       241       88   -153
=========================================
- Hits          2666     1848   -818
```

Impacted Files (Coverage Δ):
- packages/recheck/src/lib/env.ts
- packages/recheck/src/browser.ts
- packages/recheck/src/lib/java.ts
- ...ges/eslint-plugin-redos/src/rules/no-vulnerable.ts
- packages/recheck/src/lib/pure.ts
- packages/recheck/src/lib/exe.ts
- packages/recheck/src/lib/native.ts
- packages/recheck/src/main.ts
- packages/recheck/src/lib/worker-pool.ts
- packages/recheck/src/lib/agent.ts

Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6246719...4d34971. Read the comment docs.
2025-04-01T06:39:29.181386
2021-05-27T08:53:12
903425615
{ "authors": [ "abhijeet141", "geekymeeky" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8051", "repo": "makesmatheasy/makesmatheasy", "url": "https://github.com/makesmatheasy/makesmatheasy/issues/3931" }
gharchive/issue
To add working steps in T shape calculator

Hi, I want to work on this issue. I will start working on it as soon as I get assigned!! I am a part of GSSoC'21. Kindly assign this issue to me!
Discord Username: Abhijeet Sinha(P)
Discord Tag: #4018

@abhijeet141 I'm already working on issue #3924.

Please close this issue
2025-04-01T06:39:29.188586
2016-01-26T11:21:13
128794142
{ "authors": [ "SBats", "camillemonchicourt" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8052", "repo": "makinacorpus/Geotrek-mobile", "url": "https://github.com/makinacorpus/Geotrek-mobile/issues/182" }
gharchive/issue
Static pages - Cible WEB and HIDDEN displayed in Mobile Just tried to create a static page with Cible = Web. It is displayed in Rando but also in Mobile. Just tried to create a HIDDEN static page and this one is also displayed in Geotrek-mobile. There's indeed an issue in the condition. It still tests the old value for target and should be updated.
2025-04-01T06:39:29.196561
2023-10-13T11:35:55
1941778458
{ "authors": [ "makspll", "zwazel" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8053", "repo": "makspll/bevy_mod_scripting", "url": "https://github.com/makspll/bevy_mod_scripting/pull/81" }
gharchive/pull-request
Fix removal of script contexts

This should fix #79.

Okay, I found the following issue: the system script_remove_synchronizer handles removing the contexts of the script.

https://github.com/makspll/bevy_mod_scripting/blob/32eba12bc5e2d87ab05cfbf4a2dd14063d3842a2/bevy_mod_scripting_core/src/systems.rs#L94-L103

The problem here is that .remove_context expects the script ID, as context_entities uses it as the key.

https://github.com/makspll/bevy_mod_scripting/blob/32eba12bc5e2d87ab05cfbf4a2dd14063d3842a2/bevy_mod_scripting_core/src/hosts.rs#L267-L271

This results in very unreliable removal of the scripts, especially once the entities have multiple scripts and not just one.

I have updated the script_remove_synchronizer system to loop through all context_entities and find those with a matching entity id. The problem I see with this solution is that we have to loop through everything, as we can't break after one found match, because each entity could have multiple scripts, so we need to loop through all of them to find every one.

https://github.com/makspll/bevy_mod_scripting/blob/745c20bb9e22d18761da7c50e96ee7f4766e790f/bevy_mod_scripting_core/src/systems.rs#L94-L112

I just added some changes to the context_entities. I would love to hear some feedback from you; I personally think it makes more sense to use the entity as a key, instead of the script ID.

Not sure how to fix the test, as it comes from a crate rust_out?

```
running 7 tests
test src/lib.rs - (line 252) ... ignored
test src/lib.rs - (line 65) ... ignored
test src/lib.rs - (line 268) - compile ... FAILED
test src/lib.rs - (line 208) ... ok
test src/lib.rs - (line 117) ... ok
test src/lib.rs - (line 139) ... ok
test src/lib.rs - (line 179) ... ok

failures:

---- src/lib.rs - (line 268) stdout ----
error[E0601]: `main` function not found in crate `rust_out`
  --> src/lib.rs:287:2
   |
21 | }
   | ^ consider adding a `main` function to `src/lib.rs`

error: aborting due to previous error

For more information about this error, try `rustc --explain E0601`.
Couldn't compile the test.

failures:
    src/lib.rs - (line 268)

test result: FAILED. 4 passed; 1 failed; 2 ignored; 0 measured; 0 filtered out; finished in 0.71s
```

I see, haven't considered that! In that case I'll quickly revert back to how it was before!

Merge request is back at only fixing the bug in removing script contexts.

No worries, much appreciated! Just one code style comment and I am happy!
2025-04-01T06:39:29.204581
2019-08-24T22:32:49
484868850
{ "authors": [ "ariel11", "bartvanandel", "fbeaudoincoveo", "lnqs" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8054", "repo": "maleck13/readline", "url": "https://github.com/maleck13/readline/issues/26" }
gharchive/issue
License clarification Hi, does license "BSD" mean the BSD-3-clause (https://opensource.org/licenses/BSD-3-Clause), or one of the other BSD licenses? Thanks! Good point. We're currently building an app and are working on proper attribution. So far, the only (indirect) dependency that we have that doesn't have an SPDX-compatible license id is readline. It would really help if your package were using one of the identifiers from the SPDX licenses list. I second that. We have automatic validations to ensure that all of our project dependencies have a license that matches one of those allowed by our legal department. Unfortunately, "BSD" is considered too vague to be added to that list, which means that unless it is clarified, we'll have to either switch to another similar library that has a clear license, or implement the features we need on our own. Would it be possible to use one of the SPDX identifiers instead? https://spdx.org/licenses/ Thanks! Same for us. We're totally willing to live with not having a valid SPDX identifier, but knowing what BSD flavor this package is meant to be published as would be great.
2025-04-01T06:39:29.219643
2024-11-10T11:54:41
2647168236
{ "authors": [ "BartaBlzs", "mallibone" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8055", "repo": "mallibone/MauiUI2022ProgressButton", "url": "https://github.com/mallibone/MauiUI2022ProgressButton/issues/1" }
gharchive/issue
Once completed, the system will not reset

```csharp
if (secondsRemaining == 0)
{
    _cancellationTokenSource.Cancel();
    return;
}
```

The return shouldn't be there, because it returns before you call the ResetView() method.

Thanks for pointing this out - and fixing it with PR #3 🥳
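A sketch of the corrected flow, assuming (as the report says) that ResetView() is meant to run right after this check:

```csharp
if (secondsRemaining == 0)
{
    _cancellationTokenSource.Cancel();
    // No early return here, so the view reset below still runs.
}

ResetView();
```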
2025-04-01T06:39:29.229332
2017-05-15T17:13:22
228782667
{ "authors": [ "TassosD", "alexlukelevy", "carlosafw", "justinmasse", "lotusms", "sarahshuffle" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8056", "repo": "malte-wessel/react-custom-scrollbars", "url": "https://github.com/malte-wessel/react-custom-scrollbars/issues/158" }
gharchive/issue
Scrollbars are rejected in MaterialUI Tables

When used in a Material UI TableBody, it seems that Material UI rejects anything that may alter its table's architecture through the DOM. Material UI forbids using anything other than TableHeader or TableBody as Table children, and anything else placed inside TableBody or TableHeader (per HTML5 standards, this is not permitted either) renders as a div. Is there a way to assign Scrollbars to the TableBody without wrapping it, so that the table architecture is unchanged? Here is the code:

```jsx
<TableBody
  displayRowCheckbox={this.state.showCheckboxes}
  deselectOnClickaway={this.state.deselectOnClickaway}
  showRowHover={this.state.showRowHover}>
  <Scrollbars // not permitted. it renders as a <div>
    autoHide={false}
    style={ScrollBarsStyle}>
    {tableData.map((row, index) => (
      <TableRow key={index}>
        <TableRowColumn>{row.name}</TableRowColumn>
        <TableRowColumn>{row.type}</TableRowColumn>
        <TableRowColumn>{row.owner}</TableRowColumn>
      </TableRow>
    ))}
  </Scrollbars>
</TableBody>
```

UPDATE: The only way it can work is by adding the entire table inside the Scrollbars tags, but that disables the fixedHeader.

This can be accomplished by using two Material UI tables, one for the header and one for the rows:

```jsx
<Table>
  <TableHeader>
    ...
  </TableHeader>
</Table>
<Scrollbars>
  <Table>
    <TableBody>
      ...
    </TableBody>
  </Table>
</Scrollbars>
```

Yup! That did it! Thank you!

But what if you need the columns to be the same width?

Then give both sets of columns a style property, e.g. style={{ width: '10%' }}.

Unideal solution. What if you want the header to be fixed on vertical scroll, but to scroll with the content on horizontal scroll? How would horizontal scrolling work in this case? @justinmasse did you get this to work?

@justinmasse just put both tables inside a horizontally scrollable div
2025-04-01T06:39:29.284618
2024-06-28T13:08:22
2380402370
{ "authors": [ "jhancock-taxa", "mamift" ], "license": "MS-PL", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8057", "repo": "mamift/LinqToXsdCore", "url": "https://github.com/mamift/LinqToXsdCore/issues/66" }
gharchive/issue
Support DataAnnotations?

I searched everywhere and I couldn't find any documentation on how to enable DataAnnotations being generated in the file. I see that it's generating XML documentation but not generating DataAnnotations. I was expecting, like with xsd.exe, to see MaxLength, RegularExpression, and Description attributes on everything. Is there a way to turn these on?

Sorry, but LinqToXsd does not support those. It was originally started as a strongly-typed wrapper API around the LINQ to XML API (think XDocument, XElement etc), and it's evolved that way. Adding that in is definitely possible, but the code generator does not generate these attributes at the moment. If System.ComponentModel.DataAnnotations are a hard requirement, I recommend using XmlSchemaClassGenerator: it generates code that's very close to what old-school xsd.exe gives you and also supports emitting DataAnnotations. LinqToXSD has its own validation mechanism that does not use DataAnnotations (it generates its own TypeValidator classes for validation).
2025-04-01T06:39:29.305850
2015-10-17T06:42:56
111947770
{ "authors": [ "adlerhsieh", "jhass" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8058", "repo": "manastech/crystal", "url": "https://github.com/manastech/crystal/pull/1759" }
gharchive/pull-request
Add Hash#key method Hash#key(value) returns the corresponding key with the given value. Updated. Added Hash#key? method. Updated. I followed the #fetch pattern and make #key block variant public for now; however, like you said, I'm not quite sure about this as it is not quite straightforward as a public API. What do others think about a public Hash#key(value, &block)? Updated. Maybe @asterite would have an idea about this? We are discussing if methods like Hash#fetch and Hash#key (with a block) should be public methods or not. The purpose of these methods are not for public API use but for DRYing related methods, thus their logic might not be as straightforward and may confuse developers at first sight. Well, I never questioned it for Hash#fetch, there it very well has legit public usages. Oops, I see. Well, no other opinions, then let's keep it public for now. Thanks! Thanks!
2025-04-01T06:39:29.308638
2015-06-27T19:13:58
91503986
{ "authors": [ "adam12", "asterite", "hugoabonizio" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8059", "repo": "manastech/crystal", "url": "https://github.com/manastech/crystal/pull/892" }
gharchive/pull-request
Fix HTTP::Request#keep_alive? method

Add the "upgrade" type of connection to fix the WebSocket handshake, which expects the "Connection" header to be "Upgrade".

I'm trying to run this:

```crystal
require "http/server"
require "http/server/handlers/websocket_handler"

handlers = [] of HTTP::Handler
handlers << HTTP::LogHandler.new
handlers << HTTP::WebSocketHandler.new do |req|
  puts "> #{req}"
end

server = HTTP::Server.new(3000, handlers)
server.listen
```

But in Chrome it gives me the error "WebSocket connection to 'ws://pandora-102353.nitrousapp.com:3000/' failed: Error during WebSocket handshake: 'Connection' header value must contain 'Upgrade'". Do you have a working example without changing the HTTP::Request#keep_alive? method?

This fix worked for me to get WebSockets upgrading properly, but had to be applied to src/http/common.cr.

@hugoabonizio @adam12 Sorry for the delay! Your solutions were perfect, but we couldn't merge it because the code moved.
2025-04-01T06:39:29.311910
2016-08-03T08:22:12
169074658
{ "authors": [ "amitkumarj441", "mandar2812" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8060", "repo": "mandar2812/DynaML", "url": "https://github.com/mandar2812/DynaML/issues/42" }
gharchive/issue
Implementing State Space Models and simulating the State Space

Updated tests #40 and #41. Processes like the Ornstein-Uhlenbeck process can be simulated by specifying the parameters of the process (#41), e.g. theta, the mean of the process.

@amitkumarj441 We already have an implementation of Gaussian Processes; within that, you can implement an OrnsteinUhlenbeckKernel class which calculates the Ornstein-Uhlenbeck covariance function for two points x and y (a small sketch of this covariance follows below).

Hey @mandar2812, I'm onto implementing the Ornstein-Uhlenbeck covariance function for two points x & y. I already initiated some needed PRs for the above implementation, and I'll soon initiate a PR for the same.

@amitkumarj441: Look at the RBFKernel class to see how to implement kernels in DynaML: RBFKernel

@mandar2812 Just initiated a PR to generate an Akka stream of Metropolis-Hastings states #50 with a Breeze implementation of a Markov chain.
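For reference, one common parameterization of the Ornstein-Uhlenbeck (exponential) covariance is C(x, y) = σ² exp(−|x − y| / θ), with θ the length scale. A minimal, self-contained Scala sketch that does not follow DynaML's actual kernel base class (which RBFKernel demonstrates):

```scala
// Ornstein-Uhlenbeck covariance: sigma^2 * exp(-|x - y| / theta)
class OrnsteinUhlenbeckKernel(theta: Double, sigma: Double = 1.0) {
  require(theta > 0.0, "length scale theta must be positive")

  def evaluate(x: Double, y: Double): Double =
    sigma * sigma * math.exp(-math.abs(x - y) / theta)
}

object OrnsteinUhlenbeckKernel extends App {
  val kernel = new OrnsteinUhlenbeckKernel(theta = 2.0)
  println(kernel.evaluate(0.0, 1.0)) // approx. 0.6065
}
```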
2025-04-01T06:39:29.316403
2023-05-26T13:54:01
1727676675
{ "authors": [ "HuskyHacks", "elevateman", "mr-tz" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8061", "repo": "mandiant/flare-vm", "url": "https://github.com/mandiant/flare-vm/issues/452" }
gharchive/issue
Check valid username in installer

Upon following this script:

1. Open a PowerShell prompt as administrator.
2. Download the installation script installer.ps1 to your desktop: (New-Object net.webclient).DownloadFile('https://raw.githubusercontent.com/mandiant/flare-vm/main/install.ps1',"$([Environment]::GetFolderPath("Desktop"))\install.ps1")
3. Unblock the installation script by running: Unblock-File .\install.ps1
4. Enable script execution by running: Set-ExecutionPolicy Unrestricted
   - If you receive an error saying the execution policy is overridden by a policy defined at a more specific scope, you may need to pass a scope in via Set-ExecutionPolicy Unrestricted -Scope CurrentUser. To view execution policies for all scopes, type Get-ExecutionPolicy -List
5. Finally, execute the installer script as follows: .\install.ps1

I get the following error (see image).

The solution is to build the Windows 10 VM with a one-word user name. I kept making the fresh Windows install with the user name JOHN DOE instead of a single-word user name. This resolved the issue: with a one-word user name, the paths where the packages are installed are free of blanks.

@Ana06 FYSA, adding this check in https://github.com/mandiant/flare-vm/pull/485

See #485, thanks @HuskyHacks!
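A sketch of the kind of guard #485 adds (illustrative only; see the PR for the real check):

```powershell
# Abort early if the current user name contains whitespace,
# since package install paths with spaces break several installers.
if ($env:UserName -match '\s') {
    Write-Error "Username '$env:UserName' contains whitespace. Re-create the VM with a single-word username."
    exit 1
}
```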
2025-04-01T06:39:29.323441
2016-05-09T04:02:28
153699606
{ "authors": [ "brian428", "manekinekko" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8062", "repo": "manekinekko/angular2-dependencies-graph", "url": "https://github.com/manekinekko/angular2-dependencies-graph/issues/16" }
gharchive/issue
Include all classes (not just components)? This is a great start. But could it be modified to include all classes? It looks like the tool does parse non-component classes (like services, models, etc.), but doesn't appear to include them in the graph output. I think being able to see all the dependencies would also be quite useful. @brian428 if you check this output, you can see the blue deps which are classes (services). Is that what you mean? Well, yes...partly. Though for some reason I have a number of services and other injected classes that aren't showing up as blue dependencies like you're showing above. Does the graph only pick up providers declared on a given class (rather than singleton (global) or inherited providers)? Beyond that, I think it would be useful to see other types as well (models, composed/aggregated classes and so on). I realize that could make the graph a lot bigger, so maybe it could be an option. But being able to essentially see the relationships for all imported classes (or at least all imported non-framework classes) could be quite useful to identify issues with module decoupling and organization. What do you think? Am I making sense? :-) Yes, the tool crawls only the providers (hence the dependencies feature). I am open to add this feature. However, I think this has to be behind flags. The developer should be able to choose what she/he wants to generate. However, my priority for now would be to update the tool so it can handle TS 2.0 Right, that's what I inferred (about providers). Given that, I'd say two key enhancements might be: To have it handle global/singleton providers that are configured in the application bootstrap. (Currently, it only seems to handle providers declared on individual components, right?) To use constructor arguments to determine a component's dependencies rather than relying only on the component's providers. What is shown right now, where only dependencies declared explicitly on the component as providers, is certainly useful. But in many cases, you're dealing with providers that are declared further up a component hierarchy (or bootstrapped at the app level), which are "lost" (to some degree) in the current dependency graph. I'd have no problem with these options being exposed via flags though. To be clear, I'm not disparaging what you've done...it's already very useful. As I said, I just think that being able to really see all of the dependencies, across the entire app, would also be very useful. In that light, the constructor params might be a more accurate way to determine the dependencies. Or to go even further, using the imports to truly visualize all of the dependencies. Thanks! Sure @brian428 I see your point ^^ Do you think you can send a PR so we can discuss more in details on the implementation? I will see if I can take a stab at doing this, but it may take some time. Partly because I'm obviously not familiar with how you're actually doing this. And partly because I'm on the hook already for some other PRs on other projects (namely, the angular2-seed project). :-)
2025-04-01T06:39:29.346444
2022-08-05T18:09:41
1330224827
{ "authors": [ "4cecoder" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8063", "repo": "manga-g/manga-g", "url": "https://github.com/manga-g/manga-g/issues/38" }
gharchive/issue
[Enhancement] Pdf functions needed for creation and adding images from list of image file names

Editing file app/pdf.go, using this package for pdf creation (create pdf in go video; use of this go package).

Just a quick example of some possible function names:

```go
func GetImageList() []string {
    var imageList []string
    // do some stuff here
    return imageList
}

// *pdf here is a placeholder for the PDF type of whichever package is chosen.
func ImagesToPdf(document *pdf, imageList []string) {
    // some code to add those images to the pdf
}
```

You can rework this idea however you see fit, with the use of flags, just as long as PDF output works. A runnable sketch follows below. @Yuno-obsessed
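A runnable sketch of the idea, assuming the package in question is jung-kurt/gofpdf (the linked tutorial suggests gofpdf; the function names here are mine, not the project's):

```go
package main

import "github.com/jung-kurt/gofpdf"

// ImagesToPdf adds each image on its own A4 page and writes the result.
func ImagesToPdf(imageList []string, outPath string) error {
	pdf := gofpdf.New("P", "mm", "A4", "")
	opts := gofpdf.ImageOptions{ImageType: "", ReadDpi: true}
	for _, name := range imageList {
		pdf.AddPage()
		// x=10mm, y=10mm, width=190mm; height 0 keeps the aspect ratio.
		pdf.ImageOptions(name, 10, 10, 190, 0, false, opts, 0, "")
	}
	return pdf.OutputFileAndClose(outPath)
}

func main() {
	if err := ImagesToPdf([]string{"page1.jpg", "page2.jpg"}, "manga.pdf"); err != nil {
		panic(err)
	}
}
```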
2025-04-01T06:39:29.359508
2016-01-29T05:03:29
129667642
{ "authors": [ "manodeep" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8066", "repo": "manodeep/Corrfunc", "url": "https://github.com/manodeep/Corrfunc/issues/14" }
gharchive/issue
Makefile crashes when running with make -j I think this happens since the io/utils files are specified in each of the required Makefiles, rather than being made once. Multiple threads try to create, e.g., io.o and ends up corrupting the object file. Probably use make -j4 just to be on the safe-side. This issue is now worse. Previously make would crash for make -j8 or similar; now even make -j2 is enough to crash the code. This will be the next issue to get fixed. Requires full rewrite of the Makefiles and using non-recursive make. While this is very unsatisfactory, overhauling all the Makefiles will require some thinking and re-arrangment. Since the user experience is not hampered, will shelve fixing till the next version.
2025-04-01T06:39:29.360847
2017-11-01T18:40:37
270409015
{ "authors": [ "lgarrison", "manodeep" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8067", "repo": "manodeep/Corrfunc", "url": "https://github.com/manodeep/Corrfunc/pull/142" }
gharchive/pull-request
Add checks for Numpy array endianness to the Python wrappers. Closes #140 and #101.

Thanks! I had two small questions - otherwise this PR is ready to merge.

Merging in. Thanks! 👍
2025-04-01T06:39:29.497816
2024-08-01T20:57:26
2443420073
{ "authors": [ "dvdkouril", "manzt" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8068", "repo": "manzt/quak", "url": "https://github.com/manzt/quak/pull/32" }
gharchive/pull-request
Add hovered value label for Histogram plot

The ValueCounts plot shows what the user is hovering over before making a selection. This is a quick implementation of a similar behavior for the histogram. There's a bunch of stuff that could be improved (I might give it a try before closing this PR):

[ ] Give the label a background to prevent collision with the min/max tick labels. AFAIK, since the labels are done through an SVG, you need to implement this through a rect that matches the text's bounding box.
[ ] Label following the cursor during selection dragging. Also a bit complicated due to the events captured by the mosaic interactor.
[ ] Better value formatting.

It seems like TODO 2 (mosaic capturing interactions) might be a little tricky to sort out, and I don't want to block this from merging. I need to think about 1 some more because I'm not totally sure how I'd do that either :) Maybe we append a rect to the tick group and set the background... we would need to resize the width depending on the width of the text (a DOM/div would be nice here). For 3, you can have a look at how I formatted the ticks, and we could pick something with fewer sigfigs but more detail than the axis bounds.

Label background should be done. Agree that doing the dragging effect the way we'd like might need some further investigation.
2025-04-01T06:39:29.636215
2015-08-21T15:31:00
102404523
{ "authors": [ "daniel-j-h", "pandabr", "stepthom" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8069", "repo": "mapbox/cncc", "url": "https://github.com/mapbox/cncc/issues/6" }
gharchive/issue
Error when running: "libclang.so: cannot open shared object file: No such file or directory"

First, I ran sudo apt-get install python-yaml and sudo pip install clang. Success. Then I tried:

```
[dev@ubuntu:~/cncc (master)] $ ls
cncc examples LICENSE MyClass.cpp MyClass.h README.md util
[dev@ubuntu:~/cncc (master)] $ git log -1
commit 4529cb3536c7cec20ea0bb850d0f95e80cded733
Author: Daniel J. Hofmann<EMAIL_ADDRESS>Date: Wed Aug 19 14:55:48 2015 +0200

    Respect global style file, closes #5
    Local style files do not seem to make a lot sense. What we could do is
    walk all parent directories like `clang-format` does, though.

[dev@ubuntu:~/cncc (master)] $ cat MyClass.h
class MyClass{
public:
    void init(int a, int b);
    int loopAlot();
private:
    int var1, var2;
};
[dev@ubuntu:~/cncc (master)] $ cat MyClass.cpp
#include "MyClass.h"
void MyClass::init(int a, int b){
    var1 = a;
    var2 = b;
}
int MyClass::loopAlot(){
    int res = this->var1 + var2;
    for(int i=0; i<this->var1; ++i){
        res = res + this->var2;
    }
    return res;
}
[dev@ubuntu:~/cncc (master)] $ ./cncc --style=examples/basic.style MyClass.cpp
Traceback (most recent call last):
  File "./cncc", line 22, in <module>
    index = I.create()
  File "/usr/local/lib/python2.7/dist-packages/clang/cindex.py", line 2119, in create
    return Index(conf.lib.clang_createIndex(excludeDecls, 0))
  File "/usr/local/lib/python2.7/dist-packages/clang/cindex.py", line 141, in __get__
    value = self.wrapped(instance)
  File "/usr/local/lib/python2.7/dist-packages/clang/cindex.py", line 3429, in lib
    lib = self.get_cindex_library()
  File "/usr/local/lib/python2.7/dist-packages/clang/cindex.py", line 3460, in get_cindex_library
    raise LibclangError(msg)
clang.cindex.LibclangError: libclang.so: cannot open shared object file: No such file or directory. To provide a path to libclang use Config.set_library_path() or Config.set_library_file().
```

Any idea what I'm doing wrong?

Sorry for not responding earlier, it seems I overlooked the notification. You do not have a matching libclang native shared library for the Python wrapper that python-clang is supposed to wrap. I found that PyPI does not provide wrappers compatible with every clang version. That is, check your clang --version, and then see if PyPI has a matching wrapper for it (I think the one for 3.6 was missing, and e.g. Ubuntu 15.04 comes with Clang 3.6). That's the main reason this issue was never resolved. The easiest way to install python-clang is by means of your package manager, e.g. aptitude on Debian-based systems.

Hi, for anyone who has this problem with python-clang: make a symbolic link to the native clang library of the same version as the Python binding:

```
cd /usr/lib/x86_64-linux-gnu/
sudo ln -s libclang-3.8.so.1 libclang.so
```

cheers []s
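An alternative to the symlink is the hook the error message itself mentions. A minimal sketch, assuming a Clang 3.8 install (the library path below is illustrative and should be adjusted to your system):

```python
from clang.cindex import Config, Index

# Point python-clang at the native library explicitly instead of relying
# on a bare libclang.so being on the loader path (path is illustrative).
Config.set_library_file('/usr/lib/x86_64-linux-gnu/libclang-3.8.so.1')

index = Index.create()  # now succeeds without the symlink
```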
2025-04-01T06:39:29.740286
2015-06-02T15:16:08
84067781
{ "authors": [ "BergWerkGIS", "bkfunk" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8070", "repo": "mapbox/mapbox-studio", "url": "https://github.com/mapbox/mapbox-studio/issues/1368" }
gharchive/issue
In Mapbox Studio, when adding a new layer, can't browse to network drives on Windows

When looking for source files to add as new layers, the browse window only allows you to go to the C: root, not to other drives.

Accessing other drives is in the works and should be available with the next release. In the meantime you could try to mount that drive into a folder on C: (Technet: Assign a mount point folder path to a drive).

The upcoming release will allow browsing to other drives.
2025-04-01T06:39:29.771965
2022-04-25T19:19:36
1214939059
{ "authors": [ "aleokdev", "bjorn" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8071", "repo": "mapeditor/rs-tiled", "url": "https://github.com/mapeditor/rs-tiled/issues/216" }
gharchive/issue
Renaming the 0.11 branch to next and master to current

We're changing the names of the branches to better represent what they're supposed to refer to, as well as to accommodate some structural changes: current (previously master) will be used for the latest version of the last major release, and next (previously 0.11) will be used for the next major release in the works. If you're directly referring to these branches in your project, we recommend you switch to the new names so as to prevent issues later on.

This has been done. :-) Also, branch protection rules have been expanded to cover all branches, which are currently just current and next.
2025-04-01T06:39:29.797021
2021-09-22T11:19:01
1004173732
{ "authors": [ "wang3702", "zxz0928" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8072", "repo": "maple-research-lab/AdCo", "url": "https://github.com/maple-research-lab/AdCo/issues/12" }
gharchive/issue
A question about memloss

```python
logits = torch.softmax(logits, dim=1)
batch_prob = torch.sum(logits[:, :logits.size(0)], dim=1)
batch_prob = torch.mean(batch_prob)
mem_losses.update(batch_prob.item(), logits.size(0))
```

Why is mem_losses calculated with the method above rather than with a cross-entropy loss? Thank you very much.

That's not a mem loss, we just use that to track the sum of in-batch probabilities.

Thank you for your reply. But I still can't understand why we need to track the sum of in-batch probabilities rather than a loss, and what change in the in-batch probability value indicates that the experimental result is correct.

That can reflect how much the memory bank negatives contribute to the loss optimization.
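For context, a minimal self-contained sketch of what this tracking computes, assuming (as the slicing above implies) the first batch_size columns of logits are the in-batch samples and the remaining columns are the memory-bank negatives:

```python
import torch

def in_batch_probability(logits: torch.Tensor) -> float:
    """Fraction of softmax mass assigned to the in-batch columns.

    Assumes logits has shape (batch, batch + memory): the first `batch`
    columns are in-batch samples, the rest are memory-bank negatives.
    """
    probs = torch.softmax(logits, dim=1)
    batch_mass = probs[:, :logits.size(0)].sum(dim=1)  # mass on in-batch columns
    return batch_mass.mean().item()

# A value near 1.0 means the memory-bank negatives contribute little to the
# softmax denominator (and hence to the loss); a lower value means they
# dominate it, which is what the tracked quantity is meant to reveal.
```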
2025-04-01T06:39:29.826292
2022-12-15T14:57:01
1498585960
{ "authors": [ "maxammann" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8073", "repo": "maplibre/maplibre-rs", "url": "https://github.com/maplibre/maplibre-rs/issues/229" }
gharchive/issue
Limit work done per frame

Even though this can be done by newcomers, this issue is HARD.

We need to do some work on the main thread: receive data from threads or WebWorkers, and upload data to the GPU. Here, for example, we process all the data we receive in the event loop: https://github.com/maxammann/maplibre-rs/blob/2b917e9e0850d95c876bfcf0f0f3c777f56b842d/maplibre/src/stages/populate_tile_store_stage.rs#L41-L43 We could limit this to a certain number of messages or to their total size.

🤔 Expected Behavior
No frames should be dropped.

😯 Current Behavior
Sometimes frames are dropped because of uploads.

💁 Possible Solution
Limit the time spent on uploading or other work. This can either be done by measuring time (low level), or by restricting it on a higher level, e.g. the number of work items (see the sketch below).

Steps for this issue:
Check where the most time is spent during the render loop using the Tracy profiler.
Reduce it by doing less work per frame.

Closing for now as this has no priority and we should only optimize when needed.
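Since the repository itself is Rust, the following is only a language-agnostic illustration in Python of the "restrict on a higher level" option; the queue, per-frame budget, and handler are illustrative and not maplibre-rs APIs:

```python
from collections import deque

MAX_MESSAGES_PER_FRAME = 16  # illustrative per-frame budget


def handle(message) -> None:
    # Placeholder for the per-message work (decode, upload to GPU, ...).
    pass


def process_incoming(queue: deque) -> None:
    """Drain at most MAX_MESSAGES_PER_FRAME messages per frame, leaving the
    rest for the next frame so uploads cannot starve rendering."""
    for _ in range(MAX_MESSAGES_PER_FRAME):
        if not queue:
            break
        handle(queue.popleft())
```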
2025-04-01T06:39:29.852528
2018-10-02T07:49:11
365778562
{ "authors": [ "devproof" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8074", "repo": "mapr-emea/mapr-ansible", "url": "https://github.com/mapr-emea/mapr-ansible/issues/74" }
gharchive/issue
MapR 6.1 compare ecosystem component configuration from Installer and add changes.

Done in branch mapr61, needs to be tested properly
2025-04-01T06:39:29.854354
2017-01-25T07:11:59
203028815
{ "authors": [ "mvexel", "rbovard" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8075", "repo": "maproulette/maproulette2", "url": "https://github.com/maproulette/maproulette2/issues/262" }
gharchive/issue
Cannot skip a point

After logging in, I cannot skip this point: http://maproulette.org/map/1434/1284898 KO: Invalid task status supplied. What is strange: if I log out I can skip it, but if I log in at the next point, I come back to this one and am blocked again.

Probably the same as #221
2025-04-01T06:39:29.869449
2019-06-21T09:04:29
459092630
{ "authors": [ "Smoke1987", "wf9a5m75" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8076", "repo": "mapsplugin/cordova-plugin-googlemaps", "url": "https://github.com/mapsplugin/cordova-plugin-googlemaps/issues/2646" }
gharchive/issue
Map not zooming with 'animateCamera'

I'm submitting a ... (check one with "x")
[ ] question
[x] any problem or bug report

OS: (check one with "x")
[ ] Android
[ ] iOS
[x] Browser

cordova information: (run $> cordova plugin list)

```
cordova-plugin-device 2.0.2 "Device"
cordova-plugin-geolocation 4.0.1 "Geolocation"
cordova-plugin-googlemaps 2.6.2 "cordova-plugin-googlemaps"
cordova-plugin-ionic-keyboard 2.1.3 "cordova-plugin-ionic-keyboard"
cordova-plugin-ionic-webview 4.0.1 "cordova-plugin-ionic-webview"
cordova-plugin-splashscreen 5.0.2 "Splashscreen"
cordova-plugin-statusbar 2.4.2 "StatusBar"
cordova-plugin-whitelist 1.3.3 "Whitelist"
```

If you use @ionic-native/google-maps, please tell the package.json (only @ionic-native/core and @ionic-native/google-maps are fine mostly)
@ionic-native/core: "4.20.0", @ionic-native/google-maps: "~4.20.0"

Current behavior:
When trying to animate the map with increased zoom, the map does not zoom in. However, the same logic with decreased zoom works correctly, and the map zooms out. (sorry for my english :) )

Expected behavior:
Animating the map should work for both increased and decreased zoom.

Screen capture or video record:

Related code, data or error log (please format your code or data):

Map settings:

```js
controls: {
  'compass': false,
  'myLocationButton': false,
  'myLocation': true, // (blue dot)
  'indoorPicker': false,
  'zoom': false, // android only
  'mapToolbar': false // android only
},
gestures: {
  scroll: true,
  tilt: false,
  zoom: false,
  rotate: false
},
preferences: {
  zoom: {
    minZoom: this.MapZoomLevelMin,
    maxZoom: this.MapZoomLevelMax
  }
},
```

Functions:

```ts
private zoomIn(zoom?) {
  let _zoom = this.map.getCameraZoom();
  console.log("HomePage/NATIVE @ zoomIn():: current =" + _zoom);
  let _cameraPosition: any = { target: this.map.getCameraPosition().target };
  if (zoom) {
    _zoom = zoom;
  } else {
    _zoom++;
  }
  if (_zoom >= this.MapZoomLevelMax) {
    console.log("HomePage/NATIVE @ zoomIn() -> zoom already maximum", { zoom, _zoom, max: this.MapZoomLevelMax });
    _zoom = this.MapZoomLevelMax;
  }
  _cameraPosition.zoom = _zoom;
  console.log("HomePage/NATIVE @ zoomIn():: ", { _cameraPosition, max: this.MapDisplayPositionZoomLevelMax, current: _zoom });
  this.map.animateCamera(_cameraPosition).then(() => {
    console.log("zoomed IN");
  });
}

private zoomOut(zoom?) {
  let _zoom = this.map.getCameraZoom();
  let _cameraPosition: any = { target: this.map.getCameraPosition().target };
  if (zoom) {
    _zoom = zoom;
  } else {
    _zoom--;
  }
  if (_zoom <= this.MapZoomLevelMin) {
    console.log("HomePage/NATIVE @ zoomOut() -> zoom already minimum", { zoom, _zoom, min: this.MapZoomLevelMin });
    _zoom = this.MapZoomLevelMin;
  }
  _cameraPosition.zoom = _zoom;
  console.log("HomePage/NATIVE @ zoomOut():: ", { _cameraPosition, min: this.MapZoomLevelMin, current: _zoom });
  this.map.animateCamera(_cameraPosition).then(() => {
    console.log("zoomed OUT");
  });
}
```

Support this plugin activity: I appreciate it if you give me a beer :beer: from here

On the Android platform it works correctly.

The browser platform (Google Maps JavaScript v3) does not support the feature.

So in the browser it is impossible to zoom the map in programmatically?)))

The animateCamera() behaves the same as moveCamera() on the browser platform.

animateCamera() works correctly (with animation) when the cameraPosition decreases the zoom (a.k.a. zoomOut)....
https://github.com/mapsplugin/cordova-plugin-googlemaps-doc/blob/master/v2.6.0/class/Map/animateCameraZoomOut/README.md
2025-04-01T06:39:29.901010
2016-07-29T23:36:56
168434006
{ "authors": [ "binx", "migurski", "souperneon" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8077", "repo": "mapzen/data-pages", "url": "https://github.com/mapzen/data-pages/pull/204" }
gharchive/pull-request
first pass at map labels on boxes

closes #200; different stylings at different zoom levels

@binx this is awesome! 👍 on the zoom levels at which the labels appear, too. Is there a way to make those labels links? So instead of the pop-up on click we can add a hover state to the text and make it link to the extract download page?

These look great!
2025-04-01T06:39:29.923930
2024-12-02T10:28:08
2711383115
{ "authors": [ "marc1404" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8078", "repo": "marc1404/gardener", "url": "https://github.com/marc1404/gardener/pull/7" }
gharchive/pull-request
fix(deps): update module github.com/gardener/cert-management to v0.17.1

This PR contains the following updates:

| Package | Type | Update | Change |
| --- | --- | --- | --- |
| github.com/gardener/cert-management | require | minor | v0.16.0 -> v0.17.1 |

Release Notes

gardener/cert-management (github.com/gardener/cert-management)

v0.17.1 (Compare Source)

[gardener/cert-management] 🐛 Bug Fixes
[OPERATOR] Fix panic if target issuer referenced but not allowed by @MartinWeindel [#371]

Helm Charts: cert-controller-manager: europe-docker.pkg.dev/gardener-project/releases/charts/cert-controller-manager:v0.17.1
Docker Images: cert-management: europe-docker.pkg.dev/gardener-project/releases/cert-controller-manager:v0.17.1

v0.17.0 (Compare Source)

[gardener/cert-management] ✨ New Features
[USER] Introduce the new Issuer type SelfSigned for creating self-signed certificates. by @RaphaelVogel [#228]
[USER] The certificate resource can now define a duration (the lifetime of the certificate). The issuer (especially Let's Encrypt) may ignore this field. by @marc1404 [#354]

🐛 Bug Fixes
[OPERATOR] Cleanup status for orphan pending certificate resources by @MartinWeindel [#367]

🏃 Others
[DEVELOPER] Use Pebble as an ACME server in the integration tests. by @marc1404 [#339]

Helm Charts: cert-controller-manager: europe-docker.pkg.dev/gardener-project/releases/charts/cert-controller-manager:v0.17.0
Docker Images: cert-management: europe-docker.pkg.dev/gardener-project/releases/cert-controller-manager:v0.17.0

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.

[ ] If you want to rebase/retry this PR, check this box

Release note: NONE

ℹ Artifact update notice

File name: go.mod

In order to perform the update(s) described in the table above, Renovate ran the go get command, which resulted in the following additional change(s): 8 additional dependencies were updated.

Details:

| Package | Change |
| --- | --- |
| github.com/gardener/etcd-druid | v0.24.1 -> v0.25.0 |
| istio.io/client-go | v1.23.2 -> v1.23.3 |
| k8s.io/kube-openapi | v0.0.0-20240808142205-8e686545bdb8 -> v0.0.0-20240903163716-9e1beecbcb38 |
| github.com/gorilla/websocket | v1.5.0 -> v1.5.1 |
| google.golang.org/genproto/googleapis/api | v0.0.0-20240903143218-8af14fe29dc1 -> v0.0.0-20241015192408-796eee8c2d53 |
| google.golang.org/genproto/googleapis/rpc | v0.0.0-20240903143218-8af14fe29dc1 -> v0.0.0-20241007155032-5fefd90f89a9 |
| google.golang.org/grpc | v1.66.2 -> v1.67.1 |
| k8s.io/gengo/v2 | v2.0.0-20240228010128-51d4e06bde70 -> v2.0.0-20240826214909-a7b603a56eb7 |
2025-04-01T06:39:29.953003
2016-05-13T10:10:53
154678410
{ "authors": [ "jsdw", "marcj" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8080", "repo": "marcj/css-element-queries", "url": "https://github.com/marcj/css-element-queries/pull/106" }
gharchive/pull-request
support removing specific attached events

This is a feature I needed to make ResizeSensor.js (as a standalone lib) useful for me at work; it's only a small addition and it seems like others would find it useful too :)

Basically, this allows you to pass the original event into the .detach methods as well as the element (fully optional) to remove only that event from the queue.

Thanks!
2025-04-01T06:39:29.960282
2022-09-02T15:47:59
1360362877
{ "authors": [ "marcmengel" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8081", "repo": "marcmengel/jobsub_lite", "url": "https://github.com/marcmengel/jobsub_lite/issues/109" }
gharchive/issue
local submit directories/sandbox transfer directories and cleanup

Currently jobsub_lite creates directories under $HOME/.jobsub_lite, with submit files, and that is the place where condor_transfer_data returns results; this works, but repeated use will eventually run the user out of quota in their $HOME area. As we move to jobsub_fetchlog, this becomes increasingly invisible to the users, so it will not occur to them to clean it up. We should revisit this, pick a "standard" location, and make sure files get cleaned up eventually.

Email discussion included this from Kevin:

After some more thought, while /run/user ($XDG_RUNTIME_DIR) would be convenient, our use would be against spec so we'd need to be careful, especially if dumping logs in there (true of any tmpfs solution). Interactive nodes give /run a few GB and generally have plenty of RAM (depending on what users are doing of course) so maybe it'd be OK...

tmpfs 5.8G 3.0M 5.8G 1% /run

$XDG_CACHE_HOME (default $HOME/.cache) or $XDG_STATE_HOME (default $HOME/.local/state) are probably more appropriate locations, but then we'd need to do cleanup. Maybe that's not so bad - clean up as we go, and maybe also have a routine run with every jobsub command that looks for old submission dirs that didn't get cleaned up. https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html#variables

Earlier comment from Kevin:

Maybe we should just delete the local sandbox after successful submission? Fetchlog will re-create the directory if it doesn't exist, and then I suppose it should delete the directory as well when done. Or just put them in some tmpfs location (/run/user, /run/jobsub?) so we get OS cleanup occasionally.

I think this would work fine -- if we make sure to include the submit file, etc. in the files we transfer to the job, so we get it back at the end. Otherwise the submit file disappears, which could make debugging difficult.

So after discussion I think we're looking at:

- making sure all the job files (i.e. the simple.cmd, etc.) get transferred to and from the job
- cleaning out the sandbox directory after submission
- copying submit files to the current directory, and/or a subdirectory of the current directory, if --no-submit
- cleaning out the sandbox directory after condor_transfer_data and copying elsewhere in jobsub_fetchlog

We could also have jobsub_submit and jobsub_fetchlog clean up any more-than-a-week-old submit directories in case jobsub_submit or jobsub_fetchlog were killed or whatever before they cleaned up (a minimal sketch of such a cleanup follows at the end of this thread). Then the user still ends up with the submit files, etc. if they do jobsub_fetchlog. I'm also leaning toward $HOME/.cache/jobsub_lite for the root of the area for these sandbox directories.

Fixed with #115. All the submit directory files are transferred to the schedd, and the submit directory is removed. The directory is recreated before calling condor_transfer_data in jobsub_fetchlog, and cleaned up again after jobsub_fetchlog.
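As referenced above, a minimal sketch of the age-based cleanup idea, assuming the sandbox root ends up at $HOME/.cache/jobsub_lite; the directory layout and the one-week threshold here are illustrative, not jobsub_lite's actual implementation:

```python
import shutil
import time
from pathlib import Path

SANDBOX_ROOT = Path.home() / ".cache" / "jobsub_lite"
MAX_AGE = 7 * 24 * 3600  # one week, in seconds

def cleanup_old_sandboxes() -> None:
    """Remove submit directories older than a week, e.g. leftovers from a
    jobsub_submit/jobsub_fetchlog that was killed before cleaning up."""
    if not SANDBOX_ROOT.is_dir():
        return
    now = time.time()
    for subdir in SANDBOX_ROOT.iterdir():
        if subdir.is_dir() and now - subdir.stat().st_mtime > MAX_AGE:
            shutil.rmtree(subdir, ignore_errors=True)
```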
2025-04-01T06:39:29.972995
2020-04-01T11:48:44
591854344
{ "authors": [ "disburden", "marcojakob" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8083", "repo": "marcojakob/dart-event-bus", "url": "https://github.com/marcojakob/dart-event-bus/issues/34" }
gharchive/issue
Are there plans to support flutter_web?

As per the title.

I haven't tried it. Does it not work like this? The event bus needs very little to work: Dart Streams and some generic types. See https://github.com/marcojakob/dart-event-bus/blob/master/lib/event_bus.dart

Yes, it doesn't work in a flutter_web project. I haven't even started using it; I just import it at the top of the file, and I get the following error message:

```
Unable to find modules for some sources, this is usually the result of either a bad import, a missing dependency in a package (or possibly a dev_dependency needs to move to a real dependency), or a build failure (if importing a generated file). Please check the following imports:
`import 'package:event_bus/event_bus.dart';` from ... PageWorks.dart at 8:1
Failed after 193ms
```

What does your pubspec.yaml look like? Here is how to install it: https://pub.dev/packages/event_bus#-installing-tab-

Ok, closing this as we didn't receive any further info.

Sorry, I forgot to reply due to the progress of the project in the past few days. My project uses a lot of other dependent libraries, and it seems that the configuration method is the same. I don't think there is a problem with the installation; otherwise my "import" statement should be the problem, rather than an error being reported at runtime. However, if other users do not encounter this problem, I think it may be related to my computer or environment. I will try again when I have time. Thank you for your reply!
2025-04-01T06:39:29.974233
2023-02-26T22:10:51
1600220709
{ "authors": [ "marcolivierarsenault" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8084", "repo": "marcolivierarsenault/moonraker-home-assistant", "url": "https://github.com/marcolivierarsenault/moonraker-home-assistant/issues/12" }
gharchive/issue
Fix environment

Address 2 comments from https://github.com/hacs/default/pull/1721

done in PR ☝️
2025-04-01T06:39:29.979870
2021-04-28T05:38:47
869515684
{ "authors": [ "marcotcr", "mayurka" ], "license": "bsd-2-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8085", "repo": "marcotcr/lime", "url": "https://github.com/marcotcr/lime/issues/597" }
gharchive/issue
Facing issue for explain_instance with a custom classifier function

exp = explainer.explain_instance(df_val_final.Description[idx], predproba_list, num_features=5, top_labels=2)

While executing explain_instance of LimeTextExplainer, the above statement keeps on executing continuously with the warning message below. Execution stops only if I interrupt the kernel:

```
C:\ProgramData\Anaconda3\lib\site-packages\fastai\torch_core.py:83: UserWarning: Tensor is int32: upgrading to int64; for better performance use int64 input
  warn('Tensor is int32: upgrading to int64; for better performance use int64 input')
```

(the warning repeats over and over)

I want to use my own custom classifier model and hence I wrote a classifier function, predproba_list, which returns a numpy array of predicted probabilities for the classes. Below is the function code:

```python
def predproba_list(test1):
    pred = learn_clf.predict(test1)
    return np.array(pred[2])
```

pred[2]'s value is tensor([0.1423, 0.2133, 0.6444]), which I then convert to a numpy array.

Can you please advise whether the return value of the function is as expected by explain_instance's classifier function, and what could be causing the code to keep on executing without any result? Thanks in advance.

Now I am getting the error below:

ValueError: Found input variables with inconsistent numbers of samples: [5000, 1].

5000 is the default value for the num_samples argument of explain_instance() if it is not explicitly defined. How is the value for num_samples determined if I need to set it explicitly?

The output should be a 2d array, where columns are prediction probabilities for different labels. If you only have one label, it should still be (n, 1).
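Building on the maintainer's answer, a hedged sketch of a classifier function with the shape LIME expects: it receives a list of N perturbed texts and must return an (N, num_classes) array. The single-text model (like the issue's learn_clf) and the pred[2] indexing are the reporter's own; the wrapper loop is illustrative:

```python
import numpy as np

def make_classifier_fn(model):
    """Wrap a model that predicts one text at a time for use with LIME."""
    def classifier_fn(texts):
        rows = []
        for text in texts:  # LIME passes num_samples perturbed texts at once
            pred = model.predict(text)
            rows.append(np.asarray(pred[2]))  # per-class probability tensor
        return np.vstack(rows)  # shape (N, num_classes); (N, 1) for one label
    return classifier_fn
```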
2025-04-01T06:39:29.985996
2021-03-10T04:24:51
827077992
{ "authors": [ "marcus7070" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8086", "repo": "marcus7070/cq-flake", "url": "https://github.com/marcus7070/cq-flake/issues/13" }
gharchive/issue
text segfaults

Making text with no base solid segfaults. Not sure if it's my fault or upstream yet. There are no tests in CadQuery that cover this behaviour.

Works now; this was probably CadQuery/cadquery#762.
2025-04-01T06:39:29.988137
2022-04-29T00:11:25
1219482497
{ "authors": [ "codyburleson", "marcusolsson" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8087", "repo": "marcusolsson/obsidian-plugin-docs", "url": "https://github.com/marcusolsson/obsidian-plugin-docs/pull/46" }
gharchive/pull-request
Update react.md code sample var name

Since Vault is itself an API object, it is confusing to name an instance of the App object "vault"; why not name it "app"?

Hey (and sorry for the late response)! Maybe I'm misunderstanding you here, but vault doesn't refer to the app instance here, but to the vault instance. The following code:

```ts
const { vault } = useApp();
```

is equivalent to:

```ts
const app = useApp();
const value = app.vault;
```

The { vault } syntax extracts a property from the object on the right hand side.
2025-04-01T06:39:30.038731
2018-08-23T05:04:30
353226016
{ "authors": [ "arun121cs", "marian-margeta" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8088", "repo": "marian-margeta/gait-recognition", "url": "https://github.com/marian-margeta/gait-recognition/issues/12" }
gharchive/issue
Not working properly

Hello, I created a dataset of 20 people at two different locations to test the algorithm, but unfortunately it is not working. I am taking the profile view and 100 frames per person. Can you tell me why it is not working? Also, at the second location, I am taking the opposite side of the person.

hi @arun121cs Did you use pre-trained models for pose or gait estimation? Or did you train one of them, or both, from scratch?

I can send you the dataset and code, please guide me.

The code might be helpful.

Ok, I will send it tomorrow morning. One more thing: I am taking profile images (video of walking people). Is that enough? Also, can you share some of the videos for testing it more? It would be very helpful. Thanks.

Yes, video of the profile view of walking persons should be enough, but they have to be centered and of appropriate quality. I used the TUM GAID dataset, which has a restricted license, so you must contact the TUM to acquire it.

I am taking 90 images for training the SVM and the same for testing. But I am not getting good accuracy.

Also, do you have the code to put the object in the center?

Unfortunately not. I only had scripts for that, which I incrementally changed, and now I am not sure if they work at this moment.

What about the quantity of images? As I told you, I am taking a minimum of 90 images for both training and testing (identification).
Code for loading models:

```python
import numpy as np
from scipy.misc import imresize, imread
import pandas as pd

from human_pose_nn import HumanPoseIRNetwork
from gait_nn import GaitNetwork

net_pose = HumanPoseIRNetwork()
net_gait = GaitNetwork(recurrent_unit='GRU', rnn_layers=1)

# Load pre-trained models
net_pose.restore('./models/Human3.6m.ckpt')
net_gait.restore('./models/H3.6m-GRU-1.ckpt')
```

Code for converting frames into a tensor (this code is for a single person):

```python
import glob
import cv2

images = []
images_path = '/home/administrator/Desktop/Video_to_image/images/01'
for image_path in glob.glob(images_path + '/*.jpg'):
    img = cv2.imread(image_path)
    images.append(cv2.resize(img, (299, 299)))

images_001 = np.array(images)
images_001.shape
```

Code for getting identification vectors:

```python
img_list = [images_001, images_004, images_005, images_006,
            images_007, images_008, images_009, images_010]

features_list = list()
for element in img_list:
    spatial_features = net_pose.feed_forward_features(element)
    identification_vector = net_gait.feed_forward(spatial_features)
    # df = pd.DataFrame.from_records(identification_vector)
    iv = list(identification_vector)
    iv.pop(1)
    features_list = features_list + iv
```

This is how I am getting identification vectors and then converting them into a dataframe for further classification.
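To close the loop on the "further classification" step, a hedged sketch of fitting an SVM on the identification vectors, as the reporter describes; the labels and usage are illustrative and not from the thread:

```python
import numpy as np
from sklearn.svm import SVC

def fit_gait_classifier(features_list, labels):
    """Fit an SVM on identification vectors (one vector per sequence).

    labels holds the person ID for each vector (illustrative; supplied
    by the user alongside features_list from the code above).
    """
    X = np.vstack(features_list)
    y = np.asarray(labels)
    clf = SVC(kernel="linear")
    clf.fit(X, y)
    return clf

# Usage (illustrative): clf = fit_gait_classifier(features_list, person_ids)
# then call clf.predict(...) on identification vectors held out from training.
```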
2025-04-01T06:39:30.113022
2020-11-08T18:23:28
738524692
{ "authors": [ "Joelius300", "LukeTOBrien", "PoisnFang", "axelroy", "kchristman54", "larschristensen20", "pgrimstrup" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8089", "repo": "mariusmuntean/ChartJs.Blazor", "url": "https://github.com/mariusmuntean/ChartJs.Blazor/issues/160" }
gharchive/issue
Difficult times

Finally, 2.0 is released. I'm sorry for the long wait. As a short disclaimer before you continue reading this: I'm very thankful for Marius creating this library and giving me the opportunity to maintain it. Although you might find some frustration in my statements, I don't want anyone to spread hate towards Marius. I will continue to support him and so should you.

Release 2.0

You'll notice that the 2.0 release isn't hosted on the official ChartJs.Blazor but instead on ChartJs.Blazor.Fork. The reason for that is simple: I cannot publish to ChartJs.Blazor. I pushed everything to the releases branch, but the release pipeline was paused quite a while ago and I don't have permission to enable it again. The same goes for the samples; thus https://www.iheartblazor.com/ will remain in an old state as long as Marius doesn't return to update it. Of course I tried to publish it normally, but Marius hasn't responded to my mail, so I thought I'd get it over with now.

Past year

Marius (@mariusmuntean), the owner of the project, has had very little involvement throughout the last year. Both the project maintenance and the development of version 2.0 were done by me since the end of 2019, and I really enjoyed it. However, single-handedly maintaining a library with ~80k downloads as an 18yo without real world experience was challenging at times. I was the one who put in some horrible features and made some terrible decisions (e.g. the covariant datasets) which made the library unpleasant to use. The goal of version 2.0 was to fix all of my prior mistakes and make the library ready for easy use in most projects. I believe version 2.0 accomplishes that goal, but that doesn't mean the library is now finished. We're still only supporting Chart.js 2.9, there are still missing features and bugs (e.g. support for gradients, issues with responsiveness) and we don't have any docs yet.

Future

I will continue to maintain this library by responding to issues and fixing urgent bugs. However, I won't actively develop new features. I'm still hoping Marius will return and either continue to work on the library or hand the ownership to someone else. I don't feel comfortable searching for new maintainers in the name of the library since I'm "just" a contributor. However, if you'd like to help develop this library and/or take it over, please tell us and we might get back to you. When Chart.js 3.0 releases, many people will want to upgrade and so will the people using ChartJs.Blazor. I encourage you to try working with Chart.js 3.0 if you need the new features or the insane performance improvements, but it will require some customization and you should probably use your own fork for it (and reference that directly). In the spirit of Open Source, I highly encourage you to contribute those changes back to the library :heart:

As a side-note, I have compiled some of my thoughts in a GitHub project. These are just some points that were important to me, and I thought I'd do something public instead of just telling Marius about it.

My journey

Back when I started to work on this library (June 2019), I had very little experience. I had only worked on a handful of terrible school projects, never seen or worked with Open Source, and had zero experience with Blazor. As such, most of my changes were bad, but for better or worse, they all found their way into this library and now about 80k projects have to suffer from my incompetence.

After some time, I got better at programming (of course I'm still nowhere) and started to understand and like Open Source. Suddenly I found myself being the maintainer of this fast growing library in the fast growing Blazor ecosystem. Although stressful (I'm sorry for all the poor issuers that received an unfriendly response), it was a really educational period. At some point I realized just how big this library got, because searching "blazor chart" on google shows this repo as the first result. This might be influenced, but on DuckDuckGo it's the 4th entry. Also, ChartJs.Blazor is used in a sample project of an official dotnet repo, along with multiple Microsoft employees opening issues on our repo. I thought this was absolutely insane, and being the only person actively working on the repo, I didn't want to make all these people use a version that never should've been considered 1.0 in my books. So I got to work on 2.0 and now we're here.

Unfortunately, I have continuously lost interest in this library and it was just about finishing 2.0. A big factor in this is probably that I don't use Blazor myself anymore, and I don't think I'll get back to it until .NET 6. Now that version 2.0 is released, I can peacefully slow down my activity here. As I said, I will continue to assist with issues because, at the very least, this library has a special place in my heart.

Closing

I'd like to thank everyone who supported me and this library, be it opening helpful issues, submitting pull requests or participating in discussions. Special thanks go to Marius (@mariusmuntean) who has made this all possible in the first place. Without him, ChartJs.Blazor wouldn't exist. I wish everyone the best ~Joel

Update 24.01.2021

Thank you all for this journey, it's been great. Now, it's time to say goodbye. The last semester of my apprenticeship is about to start and I'd like to move on from ChartJs.Blazor. I have contacted Marius multiple times about the state of the library, the 2.0 release and me leaving. I've not heard back from him. That being said, I wish ChartJs.Blazor the best and it makes me happy to see this community be so helpful and grateful. It's not a big community, but we've surpassed 100k downloads recently and I'll gladly look back on this achievement in the future. As suggested in the comments, I've also contacted Blazorise but I won't pursue that further (if you'd like to, please do). Goodbye ~Joel

Thank you for your continuous work during the past year @Joelius300. Question: I'm curious if you have been in contact with Microsoft about the project and its future at any point?

@larschristensen20 Thank you for using the library and being so cooperative with your issues :)

> Question: I'm curious if you have been in contact with Microsoft about the project and its future at any point?

No, I haven't. Marius started the project, I got into it but it never went beyond being a small project fueled by a few people's spare time. Why do you ask?

Well, as an 18yo this is a good thing to put on your CV and talk about in interviews, good job!

Had a train of thought about them maybe being interested in helping maintain it, but it might very well just have been wishful thinking :-) Also, Blazorise uses ChartJS, maybe you could take a look? 😉

> Well, as an 18yo this is a good thing to put on your CV and talk about in interviews, good job!

Thank you, I definitely will :)

> Had a train of thought about them maybe being interested in helping maintain it, but it might very well just have been wishful thinking :-)

I'd also say that's closer to wishful thinking; at least I wouldn't know about similar projects, and I also don't know how I would go about asking them. I could see them sponsoring such projects, but even that seems highly unlikely to me given that the project still isn't huge.

> Also, Blazorise uses ChartJS, maybe you could take a look? 😉

That's actually a really good pointer, thank you! Maybe they're interested in a collaboration, or they could actually supersede our library with theirs 🤔 Both seem like good options. They do have a lot more resources available and ChartJs.Blazor would definitely fit in their system.

+1 for this, I would love to see this project integrated into theirs

I'll ask them, why not :) In the end, Marius will have to decide what's going to happen to this library, but one of the great things about Open Source is that Blazorise can integrate our library into theirs (almost) however they wish.

Glad you find it helpful. BTW, [here is the poorly designed website](https://jollify.app/) for the app I'm working on.

Thank you so much for maintaining this repository, you've done an incredible job. The 2.0 version seems to fix all the bugs remaining in the application I'm currently on; it's really fantastic news. The migration procedure from 1.1 to 2.0 is also very precise and straightforward. 👍

I'm very thankful for your work, and although I do not program in Blazor very often, I will keep an eye on this project, and I might contribute in the future.

@Joelius300 Thank you so much for your commitment to this project!

Thank you all for the positive comments. I posted an update on the original issue. TL;DR Goodbye ❤️

@Joelius300

> I will continue to maintain this library by responding to issues and fixing urgent bugs. However, I won't actively develop new features.

I highly disagree with this. If the maintainer of this project has abandoned it, then someone should probably create a fork of this and actively maintain and update that fork as the new current version. Then this project should be marked as obsolete with a disclaimer that it's no longer being maintained, and put a link to the new project. There are plenty of cases where this has happened before on github.

@PoisnFang I thought it worked pretty well for the three months this "mode" was in action. Multiple bugs were fixed, and the alternative was leaving the repo for good (which I'm doing now), in which case I wouldn't have fixed any of these bugs. Now I'm leaving this library and yes, if you fork it and actively maintain it, feel free to post about it here in a short comment. But saying "someone" should create a fork and actively maintain that sadly doesn't do the trick. Do you think it would be better to more clearly highlight that it's unmaintained in the readme?

> But saying "someone" should create a fork and actively maintain that sadly doesn't do the trick.

I agree; if I decide to maintain it in the future then I will post.

> Do you think it would be better to more clearly highlight that it's unmaintained in the readme?

Yes, please do this.

@Joelius300 Please put it at the TOP of the Readme in big bold letters. I do plan on taking this project on to maintain, either in a fork or my own custom build of it.

@Joelius300 I have submitted a PR for some changes. I am also keen to take on a maintainer role for this repo. Is there a way you can provide me with write access to the repo? Please see the issue I have raised: #191

@PoisnFang Are you moving ahead with a fork? I have created a fork and have started making some bug fixes. As per your suggestion, this repo should be marked as non-maintained and point to either my fork or your fork. I can send you the changes I have submitted so far. I am actively using this library in several commercial and personal projects. I want it to continue. My plans are to build a v3 of this repo targeting v3 of ChartJS, and to create a .NET Core 5 branch. As everyone in the community has said so far, @Joelius300 @mariusmuntean thank you for your time and commitment to this project. You can rest now. I am able to stand on the shoulders of giants and continue forward.

@pgrimstrup No, I am not actively maintaining a fork of this any more. We should continue with your fork.

Just thought I'd let you all know of a new Blazor ChartJs implementation I encountered recently: https://github.com/erossini/BlazorChartjs Time will tell if it will be actively maintained, but to me that is more important than if the initial feature-set is less.
2025-04-01T06:39:30.128465
2021-03-07T13:39:38
823921099
{ "authors": [ "markbattistella" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8090", "repo": "markbattistella/docsify-charty", "url": "https://github.com/markbattistella/docsify-charty/issues/1" }
gharchive/issue
Moustache compatibility

Moustache doesn't work - the rendering seems to occur before moustache triggers the replacement.

It works - docsify-tabs causes docsify-moustache to fail in the rendering!
2025-04-01T06:39:30.194332
2023-04-04T02:51:49
1653076999
{ "authors": [ "Royhowtohack", "gera2ld" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8091", "repo": "markmap/markmap", "url": "https://github.com/markmap/markmap/issues/161" }
gharchive/issue
Is it possible to have the code work on GitHub pages

Thank you very much for this project. I was wondering, is it possible to run the code on my own website?

Yes, please follow the documentation and the demos there.
2025-04-01T06:39:30.198019
2024-06-21T02:15:41
2365561464
{ "authors": [ "flowerhaha", "gera2ld", "trry-hub", "wlk-menglan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8092", "repo": "markmap/markmap", "url": "https://github.com/markmap/markmap/issues/256" }
gharchive/issue
[BUG]

[ ] I have searched for existing issues that already reported this problem and found none
[x] The bug is present in REPL

Describe the bug

There is a TypeError when I generate a new markmap: Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'refreshHook'). I tried to find the reason, but this example also reports the same error: https://stackblitz.com/edit/markmap-react?file=src%2Fstyle.css How can I fix it? Please help me, thank you.

me too

me too

Your email has been received! Wishing you good health! Thank you for your email; good health and happiness!!

Thanks @Adonis0123

Thank you very much @Adonis0123
2025-04-01T06:39:30.242624
2015-12-04T00:21:50
120303582
{ "authors": [ "AccountingResearcher", "ChckNrrs", "Nishantkumark", "Piedjoo", "Rgui", "benrugg", "dragoon3", "gmanueltp", "hannahwen", "jcsouth", "katekol", "madelainegur", "marcosmercado", "marianna240296", "mrGreenbean", "rai1234", "samronsin", "vonBlasberg", "yauhenibankouski" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8093", "repo": "markolson/kickscraper", "url": "https://github.com/markolson/kickscraper/issues/35" }
gharchive/issue
Kickstarter csv

Hello guys, just looking to do a school paper - if possible, could someone upload the latest csv file? Noticed a few dropbox links here - though they seem to have disappeared. It would be super appreciated! Thanks

Here you go. I'll leave this link up for a bit: https://www.dropbox.com/s/75rxb32xdtipjbd/kickstarter projects.csv.zip?dl=0 One quick note... On Nov 10th of this year, Kickstarter changed their API and it broke part of the way I'm collecting data. So for any projects that have finished on that date or later, they are inaccurately marked in my data as "deleted" and their final backer count and pledged amount are just slightly off.

Hi Ben, that's super helpful! Do you by any chance have any csv data from just before Nov 10th? (Just wondering if the Nov 10th bug might be possible to fix in the future?)

Yeah, there is 2.5 years worth of data in that CSV. And only projects ending after Nov 10th have the slight inaccuracy. At some point I am hoping to make a workaround for the API change and fix the data, but I won't have time for a while, unfortunately.

Keep up the good work Ben! ☺ I'll keep an eye here in case there are developments. Have a good weekend,

Hey benrugg and mrGreenbean, my name is Richard, I'm a German student and I'm looking for a current data set of kickstarter projects for my master thesis. Actually I have one which ends in September 2015. @benrugg Here and in another thread I saw that you provide such data sets, but the download links do not work anymore. Would it be possible for you to upload the .csv from above again or, if you fixed the bug, a later one? This would help me so much! Best regards! Richard

Hey @ChckNrrs, sure thing... I just exported the projects (up to date) and put them here temporarily: https://www.dropbox.com/s/75rxb32xdtipjbd/kickstarter projects.csv.zip?dl=0 (see the note above about the slight inaccuracies for projects ending Nov 10th and later)

Thank you very much @benrugg!

Hello @benrugg and @mrGreenbean, like many in this post, I am also working on my master thesis about kickstarter. My bottleneck is the data and I would truly appreciate it if you could help me out with the csv file. I checked the other links but they won't open. Thanks a lot in advance! My best wishes for your projects, Guillermo

Now that this is coming up so often, I'll try to put some work into an automated export of this data, so I can just post it somewhere, and so we don't have to clog up the github threads... But for now @gmanueltp, here's the latest dump of project data: https://www.dropbox.com/s/75rxb32xdtipjbd/kickstarter projects.csv.zip?dl=0

Thanks a lot @benrugg, I really appreciate it! Keep up the good work!!

You're welcome :-) Let me know if you do something cool with it. I love seeing what everyone has been creating/researching with this kickstarter data.

I'll do it @benrugg, thanks!! I'll be working on it!

Hi @benrugg, I did this project with the help of your data: http://www.axplusb.com/search I'll update the data soon, thanks a lot for sharing this!

Wow @samronsin, this is really cool. I love the visualization of that data. Great project!

@benrugg thanks so much for the dataset you have uploaded. Can you provide me with the textual content of the data these campaigns provide? I am a student and would love to do some analysis.

You know what... The info in that dataset is the only content I have. I don't believe you can easily scrape the textual content from Kickstarter.
Hi everyone, do any of you know of a dataset on Kickstarter that also crawled the community data (meaning the country of origin and the number of backers) and/or the number of updates and comments? Or a dataset with the daily updates on the number of backers and the daily amount raised by a set of projects?

@Rgui - I've got data that shows some of what you're asking for. You can get it at the last dropbox link (above) that I posted.

Hello! Does anyone have an idea if there is a data set available which includes the comments (text) from all successful campaigns? Thanks!

@vonBlasberg - I don't have that data. If you end up finding it, let me know!

@benrugg Hello, I'm a year-two university business student from Asia. Recently I was doing a small class report about data analysis on Kickstarter. I've downloaded the previous dataset you've already uploaded (covering to 08/2016). I really appreciate your csv dataset, it helps me a lot to build some statistical models, but now I really want the latest csv dataset since there are some new Asian projects coming out. If you could upload the latest csv file or an automated export site, I would be very grateful. Thanks a lot in advance!

@dragoon3 Sure thing. I just uploaded a new version at that same link above.

@benrugg First of all, thanks a lot! I have recently conducted statistical analysis of this dataset and found many interesting patterns; I will upload my findings after finishing this class report. I'm really interested in the process of using this code to get data from the web and build a dataset transformed to csv. Could you briefly explain how this code works? (Since I'm not familiar with coding, I still don't have a clue how to run it.) In addition, can this code generate data including reward level, avg_price, min_price, %backed_two, %backed_ten, %backed_hundred, num_comments?
Let's say a project wants to raise $100K in debt and offers a 6% return on it. I would like to use the data to investigate the cost of capital related to crowdfunding. Thanks!

@Piedjoo ah, yeah, that would be great data. As far as I know, Kickstarter doesn't allow projects to offer debt or pay interest to their backers. I think a couple of the other crowdfunding sites are known for that. (Let me know if I'm wrong!)

Hi guys! @benrugg do you happen to have an updated dataset (in csv format) of the kickstarter projects? I'm writing my master thesis on crowdfunding and I desperately need data... Plus, I've tried to download and open the dropbox files you uploaded above but Excel crashes every time I try to import them; why so?

Hey guys! @benrugg I would be really happy if you could share the newest Kickstarter database. Also, do you think it is possible to scrape more info from Kickstarter? For example, I need to research whether the projects that have video, comments and updates outperform those that do not have them, as well as whether the number of friends of the founder affects the success of the campaign. And finally, make a cross-region comparison.

@katekol and @yauhenibankouski - I've updated the shared csv file with the latest data. If Excel is crashing, it's probably because it can't import such a large file. You should be able to find some way of importing only part of it, or use a different app to split the file into pieces. There isn't an easy way to get other data like video, comments or friends. Sorry!

Where can I find the most updated data set of the kickstarter projects? @benrugg I'll start doing my thesis on crowdfunding soon and I would really appreciate having the data available for running the analysis.

@marianna240296 - the most updated file is still at the dropbox link in this thread. It's sort of buried now, I guess, so I'll post it again: https://www.dropbox.com/s/75rxb32xdtipjbd/kickstarter projects.csv.zip?dl=0

Thank you so much for sharing these data @benrugg

@benrugg Your data set is super helpful. I'm interested in analyzing the stats just for journalism projects (I teach journalism in the US). When I filter the file you posted for journalism as a main or sub category, I find 845 projects -- which is a decent size set but far short of the 4,262 journalism projects that Kickstarter says have been posted (https://www.kickstarter.com/help/stats ... 917 successful, 3,345 failed). Any thoughts there?

Hey @jcsouth - I just looked into this... it turns out there is an answer, but it's an unsatisfying one. Apparently just a few months after I began collecting data, Kickstarter reworked all their categories. They expanded them to include many more sub categories than they had before (including niche sub categories like "bacon"). When they did that, they also changed the way their API returned data, and unfortunately we never updated our code to match it.
It looks like the vast majority of my data incorrectly has the main category where it should be the sub category. (For example, showing "Childrenswear" as the main category, when it should be "Fashion" as the main and "Childrenswear" as the sub.) This is especially problematic for a few categories, and journalism is one of them.

If you search Kickstarter's categories, you can see all main/sub categories. (Go to https://www.kickstarter.com/discover/categories/journalism and then click "Journalism" and scroll down, and you can expand each main category one by one.) Journalism has just a few sub categories: Audio, Photo, Print, Video, Web. So in theory you could search my data for projects with each of those in the main category field (and also include the ones with "Journalism" in the main category field), and then you'd have all Journalism projects.

The problem is that "Web" is also a sub category of "Technology". My data has 3787 projects listed as "Web". Some of those are Journalism/Web and the rest are Technology/Web. At this point with the data I have, I don't know of any way to accurately separate them out. (Also, one small additional source of discrepancy between my data and Kickstarter's is that I didn't start collecting info until June 2013.) Depending on your purposes, hopefully you can still use the data in some form.

Hi @benrugg, thanks for your hard work. I have a question. When I look at the projects between 2009 and 2013 (excluding 2013), I see only successful projects. Am I right? I just want to be sure about the data I have. Again, thanks a lot.

@AccountingResearcher - Yeah, I'm not sure I ever made that clear. Once the scraping system was active (spring or summer 2013), I started collecting all projects (so any data after that is comprehensive). But then I went back and tried to find any past Kickstarter projects and add them as well. The huge caveat is that it was only easy to collect successful (and notable/popular) projects from the past. So anything before that time is incomplete.

By the way @jcsouth, I'm working on updating my code to go back and gather accurate main and sub categories for all projects. I think within a day or two I'll have it all, and I'll post the new data.

Thanks, @benrugg. I've been noodling around with different ways to focus my research. The backdrop is that there's been a big shakeout in crowdsourcing platforms for journalism in recent years. A lot of start-ups (Spot.us, Beacon, Contributoria) have all folded, leaving Kickstarter as the default for journalists seeking funding. So that's why I'm interested in looking at funding trends for Kickstarter journalism projects by year since, say, 2013: projects launched, success rates, amounts sought, amounts raised... I wonder if there was an uptick in success over the past year with all the controversy over "fake news" and "alternative facts." (Might be too soon to measure, but worth a look.) Anyway -- thanks for whatever help you can provide on the data front. I'll credit you, of course, in any academic articles I might write off the data.

@jcsouth, that's a really cool project. I'd love to see what you end up finding. I'll let you know when the data is up-to-date. (It's taking longer because Kickstarter limits the speed at which I can request info.)

Ok, I've finished updating all the projects with their correct main/sub categories. @jcsouth, you should now be able to look at just the Journalism projects, and the numbers should be a lot closer to what Kickstarter reports. The csv is at the same link (above).
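(For reference, here is a minimal sketch in Python of the filter described above. The column names main_category/sub_category and the file name are assumptions, since the actual CSV headers aren't shown in this thread; adjust them to the real file.)

import csv

# After the category fix above, journalism projects can be selected
# directly by main category. Before the fix you'd have had to union the
# sub categories (Audio, Photo, Print, Video, Web) -- with "Web" left
# ambiguous, since it is also a Technology sub category.
count = 0
with open("kickstarter projects.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row.get("main_category") == "Journalism":
            count += 1

print(count, "journalism projects")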
Thanks again, @benrugg. Sorry for the delayed reply -- grading midterms, etc. I'll wade into it this week. Cheers.

Hi @benrugg! I am currently working on my thesis and would find it super helpful if you could re-post the csv file here again. I will mention you among the people who helped in the thesis :D

Hi @madelainegur - here's the latest version: https://www.dropbox.com/s/75rxb32xdtipjbd/kickstarter projects.csv.zip?dl=0

@benrugg @madelainegur Hi Ben, the column names are kind of confusing. Do we have any information about what each column represents?

@madelainegur I am working on Kickstarter data for my final project. I am planning to use R and apply machine learning to it. How are you planning to approach it?

Hi, @benrugg (@madelainegur and @Nishantkumark). Ben, I've been meaning to tell you something: I parsed the data that your scraper produced into a spreadsheet and hit a few snags. Apparently, some of the data in certain fields have quotation marks and commas -- and so those rows blow up; they won't parse correctly. (If you open the CSV file in Excel, you'll see what I mean.) Could you use a different delimiter -- like a pipe (|) or a tilde (~)? I think that would solve the problem and produce a file that would flow seamlessly into Excel or any DB manager.

@Nishantkumark - a fair amount of the column names are actually just from the project I use the data for - www.jumpkick.me. You can ignore those. The rest should be pretty self-explanatory Kickstarter-related fields.

@jcsouth - hmm, that's no good. Excel should be able to handle any kind of delimiting, because any commas or quotes in the values should be escaped. Here's a new copy delimited by a pipe. Hopefully that'll work in Excel... https://www.dropbox.com/s/1ya53uunkq0ge4d/kickstarter projects - pipe delimited.csv.zip?dl=0

@Nishantkumark Hi there, sorry for the late response. I am studying finance, so my initial plan was to find the factors that influence the success of a campaign. I only chose a specific category and added more data manually, but since I don't really know how to use R and also currently have no time, I will probably just do a logit regression model. (For now there is a strong correlation between many variables, so I will have to think of something else.) However, I'd like to see your results when you're done.

@madelainegur Hey, sure. We can discuss it a little further. Can you mail me at<EMAIL_ADDRESS>

@benrugg hi Ben, have you also scraped the data about the project creators? E.g., information about biography, backed_projects_count, created_projects_count, social as mentioned in the WIKI?
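(A minimal Python sketch of the parsing point above: a reader that honors CSV quoting handles commas and quotation marks inside quoted fields, which is where naive splitting blows up. The file names and the pipe delimiter come from the links above; everything else is an assumption.)

import csv

def count_rows(path, delimiter=","):
    """Stream the file row by row; quoted commas/quotes are handled by csv."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f, delimiter=delimiter)
        header = next(reader)
        n = 0
        for row in reader:
            assert len(row) == len(header)  # every row parses to the same width
            n += 1
        return n

# The comma-delimited original and the pipe-delimited copy should agree:
count_rows("kickstarter projects.csv")
count_rows("kickstarter projects - pipe delimited.csv", delimiter="|")

Streaming row by row also sidesteps the Excel crashes mentioned earlier, since the whole file never has to fit in memory at once.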
2025-04-01T06:39:30.330576
2015-04-09T14:01:33
67368799
{ "authors": [ "MrMorten" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8094", "repo": "markusgumbel/moduro", "url": "https://github.com/markusgumbel/moduro/issues/16" }
gharchive/issue
Metric Type gets doubled in list while editing

At some point there need to be some fx (layout) imports. In my opinion these should be in the controller class of the fxml classes (because there are fx imports anyway).

Fixed. Added some validation.
2025-04-01T06:39:30.404135
2024-11-28T02:40:16
2700508031
{ "authors": [ "AXSJ" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8095", "repo": "marsupialtail/quokka", "url": "https://github.com/marsupialtail/quokka/issues/62" }
gharchive/issue
Cannot Find Module - Monorepo

Issue: Quokka Not Recognizing Aliased Paths in Monorepo Setup

Description
I am experiencing an issue where Quokka is unable to resolve aliased paths in my TypeScript monorepo setup. Despite following various troubleshooting steps, the problem persists, and I am unable to import modules using the alias paths defined in my tsconfig.json.

Cannot find module '@/example-service/resources/schema'
Require stack:
- <rootDir>/quokka.js

Environment
Quokka Version: v1.0.671
IDE: Cursor 0.43.5
Operating System: macOS Sonoma
Node.js Version: 18.20

Project Setup
Monorepo Tool: Yarn, nx
TypeScript Version: 4.95

This is my tsconfig.json:

{
  "compilerOptions": {
    "baseUrl": "./src",
    "paths": {
      "@dataloaders/*": ["../dataloaders/*"],
      "@/*": ["*"],
      "@test/*": ["../__tests__/*"]
    },
    "module": "commonjs",
    "target": "es6",
    "moduleResolution": "node",
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true,
    "strict": true,
    "skipLibCheck": true
  }
}

Steps to Reproduce
1. Set up a monorepo using [Yarn Workspaces/Lerna].
2. Configure tsconfig.json with path aliases as shown above.
3. Attempt to run a Quokka file that imports a module using an alias path, e.g., import { myModule } from '@/myModule'.
4. Quokka throws an error: Cannot find module '@/myModule'.

What I've Tried
Installed tsconfig-paths: Ensured tsconfig-paths is installed in the project.
Quokka Configuration: Added the following configurations to .quokka. I've tried a million different permutations of the following; some configs I have tried are not included:

{
  "env": {
    "params": {
      "runner": "-r tsconfig-paths/register",
      "env": "NODE_PATH=./src"
    }
  }
}

{
  "ts": {
    "compilerOptions": {
      "baseUrl": "./src",
      "paths": {
        "@dataloaders/*": ["../dataloaders/*"],
        "@/*": ["*"],
        "@test/*": ["../__tests__/*"]
      }
    }
  },
  "env": {
    "params": {
      "runner": "-r tsconfig-paths/register"
    }
  }
}

{
  "ts": {
    "compilerOptions": {
      "baseUrl": "./src",
      "rootDir": "./src",
      "moduleResolution": "node",
      "paths": {
        "@dataloaders/*": ["../dataloaders/*"],
        "@/*": ["*"],
        "@test/*": ["../__tests__/*"]
      }
    }
  },
  "env": {
    "params": {
      "env": "NODE_PATH=./src"
    }
  }
}

Verified tsconfig.json: Double-checked that baseUrl and paths are correctly set relative to the tsconfig.json file location.
Checked Quokka File Location: Ensured the Quokka file is within the scope of the tsconfig.json.
Restarted Quokka and IDE: Restarted both Quokka and my IDE after making configuration changes.
Used Absolute Paths: Attempted to set NODE_PATH to the src directory.
Checked for Quokka Pro Features: Verified that my Quokka version supports the required features.

Expected Behavior
Quokka should resolve the aliased paths as defined in tsconfig.json and allow importing modules using these paths without errors.

Actual Behavior
Quokka throws an error indicating that it cannot find the module specified by the alias path.

Request for Assistance
I would appreciate any guidance or suggestions on how to resolve this issue. If there are any additional configurations or steps I might have missed, please let me know. Thank you for your assistance!

Note: Please let me know if you need any further information or if there are specific logs or files I should provide.

Apologies, wrong quokka repo.
2025-04-01T06:39:30.436303
2024-06-20T09:14:44
2363943495
{ "authors": [ "Cam396", "Zoobdude", "ixMarcel", "martinfekete10" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8096", "repo": "martinfekete10/Tuneful", "url": "https://github.com/martinfekete10/Tuneful/issues/92" }
gharchive/issue
App uses way too much CPU on M1 MacBook Air.

Can you share your Tuneful settings (menu bar, popover and mini player)? I am using an M1 Air as well but couldn't reproduce this CPU usage; it's typically near 1% CPU and 50 MB of RAM.

I have the same issue on a 2017 touchbar MBP; Tuneful is using over 50% CPU.

I'm having the same issue on my M2 Mac mini.

It is definitely caused by the scrolling song info. When I expand the song info width (so that the number of pixels is high enough for scrolling not to be needed), the CPU usage drops back to more reasonable levels (and increases when reducing it so that it has to start scrolling again).
2025-04-01T06:39:30.494761
2024-11-11T18:14:45
2650041801
{ "authors": [ "mcamou" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8097", "repo": "masa-finance/masa-oracle", "url": "https://github.com/masa-finance/masa-oracle/pull/626" }
gharchive/pull-request
chore(cleanup): Remove sentiment analysis since it's no longer called from anywhere

Description
The endpoint for sentiment analysis was removed some time ago, which left this as dead code. One of the config.GetInstance() calls remaining after #625 is from sentiment.go. Let's remove all that currently-dead code in a separate PR, so that we have easy access to it if we want to revive it.

Notes for Reviewers
Don't you just love deleting code?

Signed commits
[x] Yes, I signed my commits.

Force-push after rebasing on top of main after merging #625
2025-04-01T06:39:30.498820
2023-06-22T01:34:57
1768775733
{ "authors": [ "haykbaluyan", "masahiro331" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8098", "repo": "masahiro331/go-ext4-filesystem", "url": "https://github.com/masahiro331/go-ext4-filesystem/issues/8" }
gharchive/issue
failed to parse group descriptor: EOF

This is directly related to the issue raised in https://github.com/dsoprea/go-ext4/issues/7

Essentially, for ext4 filesystems for which featureInCompat64bit is false (the 64-bit feature is not set), the getGroupDescriptor call will result in the error below.

2023-06-22T05:26:59.664+0400 WARN Partition error: filesystem error: unexpected fs error: new ext4 filesystem error: failed to get group Descriptor: failed to parse group descriptor: EOF

Unfortunately I do not have the AMI from which the volume was created, but the fix is essentially what was articulated in the issue linked above. You need to read only half of that struct if the 64-bit feature is not set, and not use the rest of the fields. For the next entry you only increment the offset by 32, not 64. So the fix is literally to have a 32-byte equivalent of GroupDescriptor and read into it when the 64-bit feature is not set.

// GroupDescriptor32 is 32 bytes
type GroupDescriptor32 struct {
	BlockBitmapLo     uint32 `struc:"uint32,little"`
	InodeBitmapLo     uint32 `struc:"uint32,little"`
	InodeTableLo      uint32 `struc:"uint32,little"`
	FreeBlocksCountLo uint16 `struc:"uint16,little"`
	FreeInodesCountLo uint16 `struc:"uint16,little"`
	UsedDirsCountLo   uint16 `struc:"uint16,little"`
	Flags             uint16 `struc:"uint16,little"`
	ExcludeBitmapLo   uint32 `struc:"uint32,little"`
	BlockBitmapCsumLo uint16 `struc:"uint16,little"`
	InodeBitmapCsumLo uint16 `struc:"uint16,little"`
	ItableUnusedLo    uint16 `struc:"uint16,little"`
	Checksum          uint16 `struc:"uint16,little"`
}

Here is the dump from my ext4-formatted volume.

$ sudo dumpe2fs -f /dev/xvdf1
dumpe2fs 1.46.5 (30-Dec-2021)
Filesystem volume name: /
Last mounted on: /
Filesystem UUID: 8cd9967e-f9c0-438f-bebd-a0a7c5886ebc
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 524288
Block count: 2096635
Reserved block count: 20966
Overhead clusters: 70281
Free blocks: 1395708
Free inodes: 473043
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 511
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Mon Jan 15 18:42:02 2018
Last mount time: Wed Jun 21 23:10:14 2023
Last write time: Wed Jun 21 23:10:14 2023
Mount count: 4
Maximum mount count: -1
Last checked: Mon Jan 15 18:42:02 2018
Check interval: 0 (<none>)
Lifetime writes: 455 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 71949b1b-7e27-4206-a570-d27984b3cd37
Journal backup: inode blocks
Journal features: journal_incompat_revoke
Total journal size: 128M
Total journal blocks: 32768
Max transaction length: 32768
Fast commit length: 0
Journal sequence: 0x006fd961
Journal start: 1

@haykbaluyan Thank you for your issue. Could you provide how to make an ext4-filesystem-32bit image?
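(To make the 32-byte layout concrete, here is a small Python sketch of the same parsing logic. It is pure illustration, not the library's actual code; the field order and little-endian byte offsets follow the struct above.)

import struct

# 32-byte ext4 group descriptor (64-bit feature off), little-endian:
# 3x uint32, 4x uint16, 1x uint32, 4x uint16 = 12 + 8 + 4 + 8 = 32 bytes.
GD32 = struct.Struct("<IIIHHHHIHHHH")
assert GD32.size == 32

def parse_group_descriptors(table: bytes, count: int):
    """Step through the descriptor table 32 bytes at a time, not 64."""
    for i in range(count):
        (block_bitmap_lo, inode_bitmap_lo, inode_table_lo,
         free_blocks_lo, free_inodes_lo, used_dirs_lo, flags,
         exclude_bitmap_lo, bb_csum_lo, ib_csum_lo,
         itable_unused_lo, checksum) = GD32.unpack_from(table, i * 32)
        yield block_bitmap_lo, inode_bitmap_lo, inode_table_lo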
2025-04-01T06:39:30.506384
2021-05-10T16:59:42
884648260
{ "authors": [ "gizemsudekocarslan", "jghasemi44" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8099", "repo": "masashitsubaki/molecularGNN_smiles", "url": "https://github.com/masashitsubaki/molecularGNN_smiles/issues/7" }
gharchive/issue
label numbers

Hi, your dataset has 2 labels: 0, 1. My dataset has 185 labels [0, 1, ..., 184]. When I run your code with my dataset I get errors. Which part of the code should I change for 185 labels? Thanks.

Hi, you must use one-hot encoding.
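(For illustration, a minimal one-hot encoding sketch in Python/NumPy. This is not the repository's code; the class count 185 is taken from the question above.)

import numpy as np

def one_hot(labels, num_classes=185):
    """Map integer labels in [0, num_classes) to one-hot row vectors."""
    labels = np.asarray(labels)
    encoded = np.zeros((labels.size, num_classes), dtype=np.float32)
    encoded[np.arange(labels.size), labels] = 1.0
    return encoded

# Three samples labeled 0, 7, and 184 -> a (3, 185) matrix.
print(one_hot([0, 7, 184]).shape)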
2025-04-01T06:39:30.532247
2022-02-09T08:54:02
1128231360
{ "authors": [ "albi90", "efedericomedina", "maslick", "sesam" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8102", "repo": "maslick/koder", "url": "https://github.com/maslick/koder/issues/64" }
gharchive/issue
Barcode scanner doesn't work with Galaxy S10 / Galaxy A52s in Google Chrome

Confirm by changing [ ] to [x] below to ensure that it's a bug:
[x] I've gone through the README.md
[x] I've searched for previous similar issues and didn't find any solution

I tried to use the scanner on my Samsung Galaxy S10 and I couldn't make it work. I tested it on Google Chrome; the camera does not get to focus and the image gets blurry. I tried it out with Firefox, though, and it worked fine with the same phone. I have also tested the scanner on a Samsung Galaxy A52s and I had the same results with Google Chrome.

Steps to reproduce the behaviour using the demo page:
1- Open Chrome and go to https://qr.maslick.tech/
2- Start scanner
3- Aim camera at book's barcode.

Expected behavior
Scanned barcode text.

Device: [Galaxy S10][Galaxy A52s]
OS: [Android 12]
Browser: [Chrome]
Version: [98]

Did you try adding Koder to the Home screen (Add to Home Screen)? Does the camera start? Are there any errors/warnings in the console? Try opening Developer Tools on your laptop and connecting to your phone. Can you provide any screenshots? Can you post an example of the barcode you are trying to scan?

Hi,
Did you try adding Koder to Home screen (Add to Home Screen)? Yes, I tried that; the camera is blurry.
Does the camera start? Yes.
Are there any errors/warnings in the console? There are no errors in the console.
Can you provide any screenshots?

I guess your barcode is broken; I can see a couple of white dots on some thin bars. Try scanning this code.

Here's another one from a real book:

It worked with that code on Chrome. I checked with different books here, and if it's a normal book barcode then it does not work.

See my comment above. Which phone have you used to scan?

iPhone 11 Pro / Chrome Browser

So I presume it's the camera/image quality problem you're facing...

Yes, it is strange, though, that in Firefox the image is good and it works well, but not in Chrome :o

We've worked with barcode scanning for a few years, and noticed some strange things:
- Android phones have multiple logical cameras (even on some old phones with a single camera)
- Chrome may pick a different camera "by default" depending on whether you're running in "standalone" or "fullscreen" mode
- there's no way to reliably detect which camera is the non-blurry one

Typically we've seen a blurry camera getting chosen sporadically, and our best cure so far is to detect the list of cameras and keep an array of deviceIds of the cameras we've seen working out well. (The blurry one is typically a wide-angle camera.)

I have had similar issues on my Galaxy Z Flip 3. In my case I was making a VueJS component from this excellent implementation of zbar, and there are a few things I have found. You need to add support for camera selection: the default camera is not always the best one; in most cases it's a wide-angle camera, and this doesn't work well for decoding. The canvas size makes a big difference; in my case I made the canvas 600x300 and this gave me a good sweet spot for scanning.
My first test was on the main thread, and this worked really well with the exception of slowing down other operations. After moving this to a web worker on Android, my scans became super slow, like 3-4 sec. After a lot of debugging, and after adding the frame time to the postMessage from the main thread and passing this back after the detection, I could see the processing times for the frame and detection were fast, like 10 or 11 ms, but when comparing the original frame time to the time when the message is received back on the main thread, there was a delay of 3-4 sec. After a lot of playing around I found that if I only send a frame to the worker every 20 ms, then everything runs super smooth: full end-to-end detection of 11-12 ms.

@maslick thanks for this great implementation; with a bit of playing around it's extremely fast and quite accurate.

@maslick thanks, will try increasing the scan rate to see if it has much effect on my devices. You're correct, it's probably a bit fast at 50 fps; 250 ms seems slow to me, though. Were there any particular devices that required lower frame rates?

Closing for now...
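(The throttling idea above, sketched in Python purely for illustration; in the browser you would gate worker.postMessage the same way, and the 20 ms interval comes from the comment above.)

import time

MIN_INTERVAL = 0.020  # hand at most one frame to the worker every 20 ms
_last_dispatch = 0.0

def maybe_dispatch(frame, send_to_worker):
    """Drop frames arriving less than MIN_INTERVAL after the last dispatch."""
    global _last_dispatch
    now = time.monotonic()
    if now - _last_dispatch < MIN_INTERVAL:
        return False  # skip this frame; let the worker drain its queue
    _last_dispatch = now
    send_to_worker(frame)  # stands in for worker.postMessage(frame)
    return True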
2025-04-01T06:39:30.763396
2019-04-16T22:17:27
434009648
{ "authors": [ "Holus", "abhiomkar" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8108", "repo": "material-components/material-components-web-codelabs", "url": "https://github.com/material-components/material-components-web-codelabs/issues/87" }
gharchive/issue
GitHub 404 error node-sass

While trying to npm install gulp-sass it throws:

Cannot download "https://github.com/sass/node-sass/releases/download/v3.13.1/win32-x64-57_binding.node": HTTP error 404 Not Found.

I tried to google the link and it does not exist. I found the latest binding node, though, but I don't know how to install it locally.

Can you try npm install with a bit older version of Node to see if that works?
2025-04-01T06:39:30.774011
2022-05-11T19:42:35
1233105339
{ "authors": [ "albi005" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:8109", "repo": "material-foundation/material-color-utilities", "url": "https://github.com/material-foundation/material-color-utilities/issues/40" }
gharchive/issue
HCT produces unexpected RGB values

This is most noticeable when the seed color is yellow:

Background color is Primary 98.

I have made a graph that shows the RGB values, with chroma set to 100:

I included CAM16 (which also seems to be broken) and HSL as a reference. These images were made using my C# library, but it shouldn't have any differences, as most of it is just the Java version copy-pasted. I also added every test from the Dart library, all of which pass.

Setting the chroma to a lower value seems to alleviate the problem a bit:

This was made with Flutter; here is the code for that:

import 'package:flutter/material.dart';
import 'package:material_color_utilities/material_color_utilities.dart';

void main() {
  runApp(CustomPaint(
    painter: ColorsPainter(),
  ));
}

class ColorsPainter extends CustomPainter {
  @override
  void paint(Canvas canvas, Size size) {
    for (double x = 0; x < 360; x++) {
      for (double y = 0; y < 100; y++) {
        HctColor hct = HctColor.from(x, 100, y);
        canvas.drawCircle(Offset(x, y), 1, Paint()..color = Color(hct.toInt()));
      }
    }
  }

  @override
  bool shouldRepaint(ColorsPainter oldDelegate) {
    return false;
  }
}

Flutter code for the CAM16 graph (produces the same result as the C# above):

import 'package:flutter/material.dart';
import 'package:material_color_utilities/material_color_utilities.dart';

void main() {
  runApp(CustomPaint(
    painter: ColorsPainter(),
  ));
}

class ColorsPainter extends CustomPainter {
  @override
  void paint(Canvas canvas, Size size) {
    for (double x = 0; x < 360; x++) {
      for (double y = 0; y < 100; y++) {
        Cam16 cam = Cam16.fromJch(y, 100, x);
        canvas.drawCircle(
            Offset(x, y), 1, Paint()..color = Color(cam.viewedInSRgb));
      }
    }
  }

  @override
  bool shouldRepaint(ColorsPainter oldDelegate) {
    return false;
  }
}

After looking around for a while I found Okhsl, which produces very similar results, so this is most likely intended.
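(A rough Python illustration of why this is likely intended gamut mapping rather than a bug. This is not HCT's actual math, which is CAM16-based; it uses CIELAB chroma as a crude proxy to show that the maximum chroma representable in sRGB varies strongly with hue, so a fixed chroma-100 sweep must clamp at many hue/tone combinations. The matrices and formulas are the standard sRGB/D65 ones.)

import math

def srgb_to_lab(r, g, b):
    def lin(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

max_chroma = [0.0] * 36  # maximum in-gamut Lab chroma per 10-degree hue bin
for ri in range(0, 256, 8):
    for gi in range(0, 256, 8):
        for bi in range(0, 256, 8):
            L, a, b2 = srgb_to_lab(ri / 255, gi / 255, bi / 255)
            h = math.degrees(math.atan2(b2, a)) % 360
            max_chroma[int(h // 10)] = max(max_chroma[int(h // 10)], math.hypot(a, b2))

print(max_chroma)  # varies widely with hue: chroma 100 is unreachable in parts of the wheel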