Dataset schema (from the dataset viewer):
- id: string, length 4 to 10
- text: string, length 4 to 2.14M
- source: string, 2 classes
- created: timestamp[s] date, range 2001-05-16 21:05:09 to 2025-01-01 03:38:30
- added: string date, range 2025-04-01 04:05:38 to 2025-04-01 07:14:06
- metadata: dict
2185007352
⚠️ Spryr has degraded performance

In 95c3f49, Spryr (https://spryr.com) experienced degraded performance: HTTP code: 200, response time: 9648 ms.

Resolved: Spryr performance has improved in 78f8eea after 7 minutes.
gharchive/issue
2024-03-13T22:36:24
2025-04-01T06:40:50.475363
{ "authors": [ "geahaad" ], "repo": "vcsphere/upptime", "url": "https://github.com/vcsphere/upptime/issues/43", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1760117324
ERROR: failed to solve: process "/bin/sh -c pip3 install --upgrade awscli" did not complete successfully: exit code: 1

New version 1.1 removes the need for AWSCLI altogether. Please try V1.1 and let me know if any issues remain.
gharchive/issue
2023-06-16T07:46:11
2025-04-01T06:40:50.555150
{ "authors": [ "ofermend", "sunddytwo" ], "repo": "vectara/vectara-answer", "url": "https://github.com/vectara/vectara-answer/issues/24", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1554417549
Add OpenAPI specification plugin

This PR adds a first cut at an OpenAPI specification and a playground for using the OAS via the docs website. I added @cjcenizal as a reviewer as well here.
gharchive/pull-request
2023-01-24T06:46:46
2025-04-01T06:40:50.556071
{ "authors": [ "eskibars" ], "repo": "vectara/vectara-docs", "url": "https://github.com/vectara/vectara-docs/pull/17", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
411871500
Android app icon shows one unread message permanently

Hi, I have an Android 8 device where the Riot app icon (Riot 0.8.21 installed via Play Store) reports one unread message although there aren't any unread messages. Edit: Maybe related to https://github.com/vector-im/riot-web/issues/6617 ?

same issue on my android 6

Is it better with Riot 0.8.26?

@bmarty No, still present with 0.8.28a

riot web has the problem as well.

@bjonnh Does reinstalling Riot solve the problem? Was the case for one of my homeserver users.

yes

DoctorSubtilis notifications@github.com writes: yes

No, in my case, I had to go into the list of channels, find the old Riot Android channel and click on it to read the message, and then it disappeared (on all platforms). However, the channel itself still comes back from time to time on Riot Android as an "Empty room" and sometimes on the web version as the real named one. I did "leave room" in both, but it still tends to reappear and then disappear by itself. Really weird; it is as if it was still subscribed on my Matrix account somewhere.

I think I have a similar issue. Even after deleting Element's app data and reinstalling, the one unread message is shown permanently, even before logging in. I suspect old data from Riot is still installed on the phone, but as it is not rooted I cannot verify. How can I check for the existence (and get rid) of the old Riot data?
gharchive/issue
2019-02-19T11:14:39
2025-04-01T06:40:50.594387
{ "authors": [ "DoctorSubtilis", "bjonnh", "bmarty", "r4dh4l", "tycho-kirchner" ], "repo": "vector-im/riot-android", "url": "https://github.com/vector-im/riot-android/issues/2969", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
256547345
Error on Windows with 64-bit version generators

Setting a Win64 version of the Visual Studio CMake generators gives errors.

Expected result: The extension calls the generator properly.

Actual condition: The extension shows an MSBuild error message about wrong parameters. Will put proper output tomorrow.

Workaround: Using the official GUI or the command line, everything works fine.

Probably the same problem as here.

Sounds like it - duplicate?

Could be related to the linked issue, but the sub-toolsets thing has always been a real pain.

Can anyone reproduce this issue using the new Kits features?

Eh, closing for inactivity.

@vector-of-bool sorry, didn't work here for a while; will double-check tonight (I suppose).

I didn't test extensively, but I was able to reconfigure 2 projects with Win64 compilers. It was hard due to the previously generated configuration; after some cleanup with VSCode closed, it worked well.
gharchive/issue
2017-09-10T23:04:15
2025-04-01T06:40:50.602711
{ "authors": [ "FoxDevilsWild", "ItachiSan", "vector-of-bool" ], "repo": "vector-of-bool/vscode-cmake-tools", "url": "https://github.com/vector-of-bool/vscode-cmake-tools/issues/229", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
792675544
Puncturing convolutional code at rate 1/2 to get rate 3/4

Hi, I am trying to execute the readme.md example for convolutional coding at code rate 3/4. The generator matrix [5, 7] for R=1/2 is used and punctured with [[1,0,1],[1,1,0]] to get R=3/4. But the BER values of viterbi decoded outputs are higher than that of uncoded output. Also the size of the coded bits is as per 1/2 rate and not 3/4. Is it an issue with the 'conv_encode' function or am I missing any steps in between? Also is there any method to get the coding rate 3/4 using a generator matrix?

import numpy as np
import commpy.channelcoding.convcode as cc
import commpy.modulation as modulation

def BER_calc(a, b):
    num_ber = np.sum(np.abs(a - b))
    ber = np.mean(np.abs(a - b))
    return int(num_ber), ber

N = 100  # number of symbols per the frame
message_bits = np.random.randint(0, 2, N)  # message

M = 2  # modulation order (BPSK)
k = np.log2(M)  # number of bit per modulation symbol
modem = modulation.PSKModem(M)  # M-PSK modem initialization

generator_matrix = np.array([[5, 7]])  # generator branches
trellis = cc.Trellis(np.array([2]), generator_matrix)  # Trellis structure
punctureMatrix = np.array([[1, 0, 1], [1, 1, 0]])

rate = 3/4  # code rate
L = 7  # constraint length
m = np.array([L-1])  # number of delay elements
tb_depth = 5*(m.sum() + 1)  # traceback depth

EbNo = 5  # energy per bit to noise power spectral density ratio (in dB)
snrdB = EbNo + 10*np.log10(k*rate)  # Signal-to-Noise ratio (in dB)
noiseVar = 10**(-snrdB/10)  # noise variance (power)

N_c = 10  # number of trials

BER_soft = np.zeros(N_c)
BER_hard = np.zeros(N_c)
BER_uncoded = np.zeros(N_c)

for cntr in range(N_c):
    message_bits = np.random.randint(0, 2, N)  # message
    coded_bits = cc.conv_encode(message_bits, trellis, puncture_matrix=punctureMatrix)  # encoding

    modulated = modem.modulate(coded_bits)  # modulation
    modulated_uncoded = modem.modulate(message_bits)  # modulation (uncoded case)

    Es = np.mean(np.abs(modulated)**2)  # symbol energy
    No = Es/((10**(EbNo/10))*np.log2(M))  # noise spectrum density

    noisy = modulated + np.sqrt(No/2)*\
        (np.random.randn(modulated.shape[0])+\
         1j*np.random.randn(modulated.shape[0]))  # AWGN

    noisy_uncoded = modulated_uncoded + np.sqrt(No/2)*\
        (np.random.randn(modulated_uncoded.shape[0])+\
         1j*np.random.randn(modulated_uncoded.shape[0]))  # AWGN (uncoded case)

    demodulated_soft = modem.demodulate(noisy, demod_type='soft', noise_var=noiseVar)  # demodulation (soft output)
    demodulated_hard = modem.demodulate(noisy, demod_type='hard')  # demodulation (hard output)
    demodulated_uncoded = modem.demodulate(noisy_uncoded, demod_type='hard')  # demodulation (uncoded case)

    decoded_soft = cc.viterbi_decode(demodulated_soft, trellis, tb_depth, decoding_type='unquantized')  # decoding (soft decision)
    decoded_hard = cc.viterbi_decode(demodulated_hard, trellis, tb_depth, decoding_type='hard')  # decoding (hard decision)

    NumErr, BER_soft[cntr] = BER_calc(message_bits, decoded_soft[:message_bits.size])  # bit-error ratio (soft decision)
    NumErr, BER_hard[cntr] = BER_calc(message_bits, decoded_hard[:message_bits.size])  # bit-error ratio (hard decision)
    NumErr, BER_uncoded[cntr] = BER_calc(message_bits, demodulated_uncoded[:message_bits.size])  # bit-error ratio (uncoded case)

mean_BER_soft = BER_soft.mean()  # averaged bit-error ratio (soft decision)
mean_BER_hard = BER_hard.mean()  # averaged bit-error ratio (hard decision)
mean_BER_uncoded = BER_uncoded.mean()  # averaged bit-error ratio (uncoded case)

print("Soft decision:\n{}\n".format(mean_BER_soft))
print("Hard decision:\n{}\n".format(mean_BER_hard))
print("Uncoded message:\n{}\n".format(mean_BER_uncoded))

PS: The package and libraries are from the GitHub cloned version.

did you find the issue in the library itself or what?
gharchive/issue
2021-01-23T23:36:12
2025-04-01T06:40:50.618321
{ "authors": [ "SalmaEasa", "mpaparna" ], "repo": "veeresht/CommPy", "url": "https://github.com/veeresht/CommPy/issues/98", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
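The rate change that puncturing should produce in the issue above can be sanity-checked without CommPy. The following is a minimal, hypothetical sketch (it is not CommPy's conv_encode implementation, and the bit ordering of a real encoder may differ): it applies the puncture matrix [[1,0,1],[1,1,0]] to a rate-1/2 coded stream and confirms the surviving-bit count corresponds to rate 3/4.

```python
# Hypothetical sketch of puncturing, independent of CommPy's internals.
# A rate-1/2 encoder emits 2 output bits per input bit; the puncture
# matrix has 2 rows (one per encoder output) and 3 columns (the pattern
# period). A 0 in the matrix means "drop this bit before transmission".

def puncture(coded_bits, puncture_matrix):
    rows = len(puncture_matrix)       # bits per encoder step (here 2)
    cols = len(puncture_matrix[0])    # pattern period (here 3)
    kept = []
    for i, bit in enumerate(coded_bits):
        step = i // rows              # which encoder step this bit belongs to
        row = i % rows                # which encoder output produced it
        if puncture_matrix[row][step % cols] == 1:
            kept.append(bit)
    return kept

pm = [[1, 0, 1], [1, 1, 0]]           # the matrix from the issue
message_len = 6                       # 6 input bits -> 12 rate-1/2 coded bits
coded = list(range(2 * message_len))  # stand-in values, used only to track positions
sent = puncture(coded, pm)

# 4 of every 6 coded bits survive, so 6 input bits map to 8 sent bits: rate 3/4.
print(len(coded), len(sent), message_len / len(sent))  # 12 8 0.75
```

If conv_encode returned output sized for rate 1/2 as reported, a check like this against the encoder's output length would make the discrepancy obvious.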
1954720319
Create size slider

Story: As a user I want an order size slider so that I can size my order without manually typing out numbers.

Acceptance Criteria
[ ] I can set the size I want on a slider (sketch is stolen from Binance; ignore, use the one on the leverage)
[ ] When I set my size on the slider, the written size in the ticket updates on screen
[ ] If I set my size manually in written text, the slider moves to reflect this
[ ] In isolated margin mode:
    [ ] When I have no open position, the max size on the slider is the position size that would use up all my remaining general account as margin, e.g. MAX = balance in general account / margin factor
    [ ] When I am making an existing position larger, same approach as above
    [ ] When I am flipping an existing position, the max is the amount needed to use all my balance on the other side, e.g. MAX = (current margin that would be returned to you + balance in general account) / margin factor
[ ] In cross mode ... do not show the size slider for now (work being done in Core to simplify this, THEN we can show it)

Tasks
[x] UX (if needed)
[x] Design (if needed)
[x] Team and stakeholder review
[x] Specs reviewed and created or adjusted
[ ] Implementation
[ ] Testing (unit and/or e2e)
[ ] Code review
[ ] QA review

Sketch

Additional details / background info: Moved the cross margin onto #5963
gharchive/issue
2023-10-20T17:10:27
2025-04-01T06:40:50.670446
{ "authors": [ "JonRay15" ], "repo": "vegaprotocol/frontend-monorepo", "url": "https://github.com/vegaprotocol/frontend-monorepo/issues/5102", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
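The MAX formulas in the acceptance criteria above can be sketched as plain arithmetic. This is a hypothetical illustration only; the function and parameter names are invented for this sketch and are not from the frontend-monorepo code:

```python
# Hypothetical sketch of the max-slider-size arithmetic from the
# acceptance criteria (isolated margin mode); names are invented.

def max_order_size(general_balance, margin_factor,
                   flipping=False, returned_margin=0.0):
    """Max position size the slider should allow in isolated margin mode."""
    if flipping:
        # Flipping an existing position: the margin that would be returned
        # from the old position can also back the new one.
        return (returned_margin + general_balance) / margin_factor
    # No open position, or enlarging an existing one.
    return general_balance / margin_factor

# A balance of 100 at a 0.25 margin factor supports a position of 400.
print(max_order_size(100.0, 0.25))  # 400.0
# Flipping with 20 of margin to be returned: (20 + 100) / 0.25 = 480.
print(max_order_size(100.0, 0.25, flipping=True, returned_margin=20.0))  # 480.0
```

The same arithmetic would drive both directions of the sync: slider position to written size, and written size back to slider position as a fraction of MAX.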
1111331574
Added Polish translations

Pull request type. Please check the type of change your PR introduces:
[ ] Bugfix
[ ] Feature
[ ] Code style update (formatting, renaming)
[ ] Refactoring (no functional changes, no api changes)
[ ] Build related changes
[ ] Documentation content changes
[X] Internationalization and localization
[ ] Other (please describe):

What is the current behavior? Issue Number: N/A

What is the new behavior? Full Polish translation.

Quality check. Before creating this PR, have you:
[X] Followed the code style guideline as described in CONTRIBUTING.md
[X] Verified that the changes work in Release build configuration
[X] Checked all unit tests pass

Just one question: in the translation files I saw texts for a "Lorem Ipsum generator". Why are they there, when no generator is accessible from the UI? :)
gharchive/pull-request
2022-01-22T08:08:42
2025-04-01T06:40:50.685879
{ "authors": [ "Andret2344" ], "repo": "veler/DevToys", "url": "https://github.com/veler/DevToys/pull/195", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1167328833
Update Japanese translation.

Pull request type. Please check the type of change your PR introduces:
[ ] Bugfix
[ ] Feature
[ ] UI change (please include screenshot!)
[ ] Code style update (formatting, renaming)
[ ] Refactoring (no functional changes, no api changes)
[ ] Build related changes
[ ] Documentation content changes
[x] Internationalization and localization
[ ] Other (please describe):

What is the current behavior? Issue Number: N/A

What is the new behavior? Update Japanese translation.

Other information

Quality check. Before creating this PR, have you:
[x] Followed the code style guideline as described in CONTRIBUTING.md
[x] Verified that the changes work in Release build configuration
[x] Checked all unit tests pass

Thank you for this. :)
gharchive/pull-request
2022-03-12T16:11:11
2025-04-01T06:40:50.690876
{ "authors": [ "niyari", "veler" ], "repo": "veler/DevToys", "url": "https://github.com/veler/DevToys/pull/444", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
746650533
[SPM] The problem with incorrect interaction with Swift Package Manager

When we have Synx and Swift PM in our project, every time the utility runs it affects the SPM dependencies: Synx renames and moves them. This is the first problem, because every time SPM reverts all the modifications, as they were not applied correctly, and then Synx renames them again, and so on in a circle, every time. The second problem is that there is no way to exclude SPM from Synx's processing, which makes this utility inapplicable to projects that use SPM.

Steps: Open the project; SPM updates the dependencies and names them correctly. Run Synx; Synx renames the SPM dependencies.

Xcode 12.1, macOS Catalina 10.15.7

To work around this problem, you can install a newer version of xcodeproj. I'm using this in a project:

gem 'xcodeproj', github: 'CocoaPods/Xcodeproj', ref: 'c8ab614079b338e38e987671e1e74319168bf61f'

Works like a charm for me :) Also see this PR: https://github.com/CocoaPods/Xcodeproj/pull/799 Unfortunately, there hasn't been a new release of Xcodeproj yet, so you'll have to install this by ref.
gharchive/issue
2020-11-19T14:49:15
2025-04-01T06:40:50.705877
{ "authors": [ "JanNash", "stolyarenkokswing" ], "repo": "venmo/synx", "url": "https://github.com/venmo/synx/issues/148", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1080124041
main: Fix CLI quickstarts

See https://github.com/veracruz-project/veracruz/pull/309 for more info. I've cherry-picked these commits onto main, but am not in a position to test on Nitro at the moment. I think these should eventually be moved to Linux and added to CI, but in the meantime this PR gets them into a better state.

+1+1 = +1 reached, merging.
gharchive/pull-request
2021-12-14T19:16:58
2025-04-01T06:40:50.717009
{ "authors": [ "dominic-mulligan-arm", "geky" ], "repo": "veracruz-project/veracruz", "url": "https://github.com/veracruz-project/veracruz/pull/310", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
988286631
Provider: Crystallize Headless eCommerce provider: https://crystallize.com/ Hey there! Thank you for opening this issue. We have decided to take Next.js Commerce in a new direction and will be closing out current PRs and issues due to this change. Please see this PR for more details: https://github.com/vercel/commerce/pull/966
gharchive/issue
2021-09-04T12:46:03
2025-04-01T06:40:50.727922
{ "authors": [ "TommySorensen", "leerob" ], "repo": "vercel/commerce", "url": "https://github.com/vercel/commerce/issues/470", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2485849971
Remove numpy from nextjs-flask to fix broken builds

Description: pip fails to install this dependency on Python 3.12. Stacktrace: https://gist.github.com/sup/f0ff6eb90c03fbe7bd4329b0202dbbb8 Issue: https://github.com/vercel/examples/issues/946

While fixing the root cause of the error is certainly one way of going about it, numpy is not even used for the hello-world Flask app this ships with. To simplify things, this commit removes numpy as a dependency entirely.

Demo URL: https://automaton-seven.vercel.app/

Type of Change
[ ] New Example
[x] Example updates (Bug fixes, new features, etc.)
[ ] Other (changes to the codebase, but not to examples)

New Example Checklist: Not applicable
[ ] 🛫 npm run new-example was used to create the example
[ ] 📚 The template wasn't used but I carefully read the Adding a new example steps and implemented them in the example
[ ] 📱 Is it responsive? Are mobile and tablets considered?

@dferber90 It's a bit quiet around here, but can I interest you in a review of a simple fix for a broken example? 👀

Thank you @sup! I was out of office and just came back 👍
gharchive/pull-request
2024-08-26T04:24:49
2025-04-01T06:40:50.732998
{ "authors": [ "dferber90", "sup" ], "repo": "vercel/examples", "url": "https://github.com/vercel/examples/pull/947", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1181074323
feat(account page): update user full name

Feature. This feature was requested in issue #28. https://user-images.githubusercontent.com/1596614/160174916-ee6d9b5a-25da-47f3-9cfb-c3e72625961e.mov Closes #28

Could you review this PR? @leerob @thorwebdev

Hi @tdeitz, I don't get this error. This error shows up when you call a function and pass it a parameter that doesn't match any of its specified overloads. I will update my branch to solve these conflicts; they updated the Supabase dependencies. Hope this helps.

@irvile thanks very much for the reply. I'm a TypeScript amateur (at best), so my debugging ability is pretty limited for now. Appreciate the branch update, thanks again :)

@tdeitz done! The error that you mentioned shows up because the new version of Supabase, v2, doesn't accept a type parameter as the previous version did (supabase.from<UserDetails>).

Welcome! It's a pleasure to help. Let me know if you have any other issue. I tested just now and Supabase and Stripe work fine.

@irvile thanks so much, really appreciate your efforts, can't wait to have a look! :)

Hello, is there any blocker for this PR to be merged?
gharchive/pull-request
2022-03-25T17:51:27
2025-04-01T06:40:51.645361
{ "authors": [ "adrientiburce", "irvile", "tdeitz" ], "repo": "vercel/nextjs-subscription-payments", "url": "https://github.com/vercel/nextjs-subscription-payments/pull/115", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1042986463
Remove options.fetcher default value

Refs: https://github.com/vercel/swr-site/pull/174#issuecomment-956166833

📝 Changed routes: /docs/options.en-US, /docs/options.es-ES, /docs/options.ja, /docs/options.ko, /docs/options.ru, /docs/options.zh-CN

Commit 43865f1e20d5247671c7413c451016cf621bfd17 (https://swr-site-gn6wbqo03.vercel.sh).

Thank you!
gharchive/pull-request
2021-11-03T01:17:57
2025-04-01T06:40:51.655581
{ "authors": [ "leader22", "shuding", "vercel-team" ], "repo": "vercel/swr-site", "url": "https://github.com/vercel/swr-site/pull/176", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1491750065
Preload docs - Correction to code. In the react example, the dependency array passed to the useEffect had useId instead of userId . Description [ ] Adding new page [x] Updating existing documentation [ ] Other updates @Rajarshi07 Good catch, thank you! Could you update all other languages as well? Thank You. I'm on it. I'll submit a PR once done.
gharchive/pull-request
2022-12-12T12:23:25
2025-04-01T06:40:51.657785
{ "authors": [ "Rajarshi07", "koba04" ], "repo": "vercel/swr-site", "url": "https://github.com/vercel/swr-site/pull/395", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2144530520
chore(docs): update github actions versions to support node 20 Description This pull request updates the versions of GitHub Actions in the documentation, as well as in the example workflows. Node 16 is deprecated, and #7224 is missing the update of actions in the docs. Hopefully, nothing else was overlooked here. Allow CI Workflow Run [ ] approve CI run for commit: e5cf01d14e74844e31cf7629d98b50400d7f2f15 Note: this should only be enabled once the PR is ready to go and can only be enabled by a maintainer
gharchive/pull-request
2024-02-20T14:37:19
2025-04-01T06:40:51.659820
{ "authors": [ "ijjk", "robinsmith-source" ], "repo": "vercel/turbo", "url": "https://github.com/vercel/turbo/pull/7434", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2373701245
Remove nohash-hasher dependency Description Testing Instructions [!WARNING] This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite. Learn more #8605 👈 #8604 main This stack of pull requests is managed by Graphite. Learn more about stacking. Join @bgw and the rest of your teammates on Graphite Merge activity Jun 26, 3:19 AM EDT: Graphite rebased this pull request as part of a merge.
gharchive/pull-request
2024-06-25T21:34:54
2025-04-01T06:40:51.665226
{ "authors": [ "bgw" ], "repo": "vercel/turbo", "url": "https://github.com/vercel/turbo/pull/8605", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1207480366
Cache broken symlinks Describe the feature you'd like to request turbo should cache broken symlinks. Just because a target doesn't exist yet doesn't mean we should cache the pointer that someone created. If nothing else, it will aid in debugging why the link is broken, rather than missing. Describe the solution you'd like The walk of files to cache should not exempt broken symlinks Describe alternatives you've considered I think this is implemented now.
gharchive/issue
2022-04-18T21:41:16
2025-04-01T06:40:51.666918
{ "authors": [ "gsoltis" ], "repo": "vercel/turborepo", "url": "https://github.com/vercel/turborepo/issues/1070", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
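The distinction the issue above turns on is which existence check a file walk uses: a broken symlink is a real directory entry even though its target is missing. A standard-library sketch (illustrative only, unrelated to turbo's actual implementation):

```python
import os
import tempfile

# A broken symlink "exists" as a directory entry even though its target
# does not. A cache walk that uses os.path.exists() (which follows the
# link) would silently skip it; os.path.lexists() sees the link itself.
with tempfile.TemporaryDirectory() as d:
    link = os.path.join(d, "dangling")
    os.symlink(os.path.join(d, "missing-target"), link)

    print(os.path.exists(link))    # False: follows the link, target is gone
    print(os.path.lexists(link))   # True: the link entry itself is there
    print(os.path.islink(link))    # True: and it is a symlink
```

Caching the link itself (target path and all) preserves the pointer someone created, which is exactly the debugging aid the issue asks for: a broken link is more informative than a missing one.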
785138667
Add support for Storyblok CMS.

This adds support for using Storyblok as a CMS with Storyblok's GraphQL API.

Nice! This looks awesome @lisilinhart 🙌 Will you let me know once you've signed the CLA? Could you share a read-only env var I could add here to test this with the preview URL?

Hi @leerob, I signed and sent the CLA just now. I also added two small commits since yesterday: The first was just an additional query on the speakers to display their talk. The second one is a link in the README to directly duplicate the example space, so the space is already set up in Storyblok. This should make getting the project running pretty easy. You can try it with this link if you have a Storyblok account: Duplicate Virtual Event Space. Finally, the public token to test with your Preview URL: X8vjZHTJiZq71qr5roMiHAtt

@cla-bot check
gharchive/pull-request
2021-01-13T14:12:44
2025-04-01T06:40:51.679375
{ "authors": [ "leerob", "lisilinhart", "matheuss" ], "repo": "vercel/virtual-event-starter-kit", "url": "https://github.com/vercel/virtual-event-starter-kit/pull/25", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
345026414
[WIP] Implement Tor Tor running on device. Might change the Tor dependency... WE FINALLY DID IT! <3
gharchive/pull-request
2018-07-26T22:24:01
2025-04-01T06:40:51.682418
{ "authors": [ "SwenVanZanten", "marpme" ], "repo": "vergecurrency/vIOS", "url": "https://github.com/vergecurrency/vIOS/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
820429238
trial_specification_demo.ipynb notebook test failure

Expected Behavior: Run the GitHub action "Test Metis Python package and notebooks" on unchanged code; the action succeeds.

Actual Behavior: https://github.com/verilylifesciences/site-selection-tool/runs/2017133803?check_suite_focus=true Please note that a PR was just merged, which triggered the GitHub Action, but the code is unchanged since the prior run of the GitHub Action, which was successful for the PR. https://github.com/verilylifesciences/site-selection-tool/actions/runs/579750192

Steps to Reproduce the Problem:
1. Navigate to https://github.com/verilylifesciences/site-selection-tool/actions?query=workflow%3A"Test+Metis+Python+package+and+notebooks"
2. Click on "Run Workflow"
3. Choose branch "main"

I can reproduce with the steps given, but not locally. I tried:
- Running jupyter notebook and "run all cells". It works as expected.
- Running jupyter nbconvert --to notebook --execute trial_specification_demo.ipynb. It runs without incident.
- Running the workflow, which fails at main (f3c805a), but also at the previous commit (b0f0ad7) where it previously passed.

This suggests that something in the outside world has changed (as with #25, the OpenCovid issue we ran into previously), but if that were the case I'd expect to be able to reproduce locally. I tried reinstalling the requirements packages on the off chance that something in those packages had changed, but still could not reproduce locally. Any suggestions for how to debug further? At this point it seems like the path forward is to iteratively push changes to a branch and run the workflow :-/

The problem is also reproducible on Terra. If you like, you can make a clone of https://app.terra.bio/#workspaces/verily-metis/Site-selection-tool-for-vaccine-trial-planning and use %debug to step through the code.

Thanks! I can reproduce on Terra.
gharchive/issue
2021-03-02T22:05:26
2025-04-01T06:40:51.700027
{ "authors": [ "deflaux", "geraschenko" ], "repo": "verilylifesciences/site-selection-tool", "url": "https://github.com/verilylifesciences/site-selection-tool/issues/32", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
380243127
Same clientId connecting to two topics is losing messages

Summary: When the same client connects to two different topics, it doesn't receive all the published messages anymore.

Environment: VerneMQ Version: erlio/docker-vernemq:latest, id: 55a857f7b481. OS: macOS. VerneMQ configuration (vernemq.conf) or the changes from the default: None, using default.

Expected behavior: If the clientId is subscribed to two or more topics, it should receive all the messages for both of them.

Actual behaviour: Note: keep in mind I am using mosquitto_sub and mosquitto_pub to test this. First, I create two mosquitto_sub processes:

$ mosquitto_sub -i cid -h localhost -p 1883 -t test -v

And, in a second terminal:

$ mosquitto_sub -i cid -h localhost -p 1883 -t testtest -v

So, now I have a client id subscribed to two different topics: test and testtest. I have a small shell script that publishes messages to test:

for i in {1..2}; do
  for j in {1..10}; do
    mosquitto_pub -i `date | md5` -h localhost -p 1883 -t test -m "$i-$j"
  done
done

When I run it, the output in the test topic changes every time and, very often, doesn't contain all the published messages: test 1-7 test 1-8 test 1-9 test 1-10 test 2-1 test 2-2 test 2-3 test 2-4 test 2-5 test 2-6 test 2-7 test 2-8 test 2-9 test 2-10

From the moment I kill the second terminal, the client starts receiving all the messages: test 1-1 test 1-2 test 1-3 test 1-4 test 1-5 test 1-6 test 1-7 test 1-8 test 1-9 test 1-10 test 2-1 test 2-2 test 2-3 test 2-4 test 2-5 test 2-6 test 2-7 test 2-8 test 2-9 test 2-10

So, here are my questions:
1. Why are two processes with the same ID allowed to connect? I would assume that this is just not possible.
2. OK, so we have two processes running under the same clientId. How are the connections being handled by the cluster? I would expect the cluster to send the messages to the connection that is actually listening to that topic.
3. If, in another terminal, I run exactly the same command as the first one (meaning mosquitto_sub -i cid -h localhost -p 1883 -t test -v, same clientId and topic), I see that the messages are actually being split between the two processes. Why? Given that it's not a shared topic, I would expect them both to receive all the messages. Or does the split happen exactly because they share the same clientId?
4. Given that I would expect a second process with the same clientId to fail to connect, are you aware of some mosquitto_sub or mosquitto_pub weirdness that I should take into account when testing?

Thanks

Hi. According to the MQTT spec, two different clients are not allowed to connect with the same client-id. So in this case VerneMQ will disconnect the client that was already connected. You can see this behaviour if you add the -d flag to the mosquitto_sub commands: each client is disconnected because the other connected, and then it reconnects and disconnects the other one. The issue here is that unless you use the -d flag, you don't see that mosquitto_sub actually reconnected.

Interesting. I didn't know about this option in mosquitto_sub. Thank you for enlightening me. :) So, actually, the reason it doesn't get all the messages is merely that once one connects, the other disconnects. I should have guessed, but wasn't aware of the -d option. I was definitely expecting mosquitto_sub to just fail with an error message or so. Lesson here is: check the documentation of the tool you are using. :) Now it is clear. Thanks! 👍
gharchive/issue
2018-11-13T14:08:28
2025-04-01T06:40:51.717568
{ "authors": [ "larshesel", "ricardoatsouza" ], "repo": "vernemq/vernemq", "url": "https://github.com/vernemq/vernemq/issues/954", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
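The takeover behaviour described in the answer above (a broker disconnects the session already registered under a connecting client-id, per MQTT 3.1.1 [MQTT-3.1.4-2]) can be modelled in a few lines. This is a toy model of the rule, not VerneMQ's actual session handling:

```python
# Toy model of MQTT client-id takeover: when a client connects with a
# client-id that is already in use, the broker disconnects the existing
# session. At any moment, at most one session holds a given client-id,
# which is why the two mosquitto_sub processes split the messages.

class Broker:
    def __init__(self):
        self.sessions = {}   # client_id -> session label
        self.log = []

    def connect(self, client_id, label):
        old = self.sessions.get(client_id)
        if old is not None:
            self.log.append(f"disconnected {old} (client-id taken over)")
        self.sessions[client_id] = label

broker = Broker()
broker.connect("cid", "sub-on-test")       # first mosquitto_sub
broker.connect("cid", "sub-on-testtest")   # second sub with the same -i cid
broker.connect("cid", "sub-on-test")       # first one auto-reconnects

print(broker.log)
# ['disconnected sub-on-test (client-id taken over)',
#  'disconnected sub-on-testtest (client-id taken over)']
```

The reconnect loop in the last step is what mosquitto_sub does silently without -d, which is why the symptom looks like lost messages rather than a disconnect.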
199412253
Cleaner OSGi support

Would like to see better/cleaner support for OSGi: separation of API and implementation bundles.

Thanks @vietj for your group post requesting suggestions: https://groups.google.com/forum/#!topic/vertx/duFVIcSR0zg

Separation of API and implementation is a topic that has popped up a few times; one thing to be careful about is that this could break the semver API. This requires good thinking on how to do it with minimal impact, if applicable, to the 3.x code base.

@pmlopes I agree, it is not an easy task. It's a good practice for non-OSGi projects too; I found it a useful practice on other platforms and languages.

Related to #139

Due to a lack of maintainers and proper testing, OSGi metadata has been removed from all modules on master, which makes this issue not fixable.
gharchive/issue
2017-01-08T09:22:27
2025-04-01T06:40:51.758636
{ "authors": [ "gadieichhorn", "pmlopes" ], "repo": "vert-x3/issues", "url": "https://github.com/vert-x3/issues/issues/227", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2226624094
Avoid duplicated qids/skolemids in datatype height axioms

I was doing some experiments with QI, and I noticed that all datatype height axioms share the same :qid prelude_datatype_height and :skolemid skolem_prelude_datatype_height. Having duplicated qids for different quantifiers complicates the analysis a bit, so I am proposing an update here.

This looks fine. Do you also want to include the type name in case multiple types have the same field?

@Chris-Hawblitzel Thanks for bringing that up. I am not sure if I am reading it correctly, but it seems like the datatype name is prepended to the field when calling datatype_height_axioms: https://github.com/verus-lang/verus/blob/eb988fe4a8600e012822fa0d6b0b5d94a14f0c23/source/vir/src/datatype_to_air.rs#L463

As a result, the current quantifier (on this branch) looks like the following:

(assert (forall ((x adts!Vehicle2.)) (!
    (=> (is-adts!Vehicle2./Car x)
        (height_lt (height (Poly%adts!Car. (adts!Vehicle2./Car/0 x)))
                   (height (Poly%adts!Vehicle2. x))))
    :pattern ((height (Poly%adts!Car. (adts!Vehicle2./Car/0 x))))
    :qid prelude_datatype_height_adts!Vehicle2./Car/0
    :skolemid skolem_prelude_datatype_height_adts!Vehicle2./Car/0
)))
gharchive/pull-request
2024-04-04T22:27:25
2025-04-01T06:40:51.776727
{ "authors": [ "yizhou7" ], "repo": "verus-lang/verus", "url": "https://github.com/verus-lang/verus/pull/1059", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1564161881
How can I fix System.Numerics.Vectors conflict? I am currently using CSharpMath 0.3.0 in my Xamarin.Forms application, because when I try to update it to 0.5.1 there is a System.Numerics.Vectors conflict and the LaTeX won't render. I don't know if these two things are related, but regardless, can I do anything to fix this (to get it to render the LaTeX)? I seem to have the same issue. Would be nice to know the fix for this. I have also noticed that it makes the place for the render but nothing shows up. It's the basic Xamarin template with CSharpMath 0.5.1. If anyone is wondering: if you are using CSharpMath above 0.4.0, you also need to add SkiaSharp as a NuGet package to render LaTeX.
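Based on the fix above, one way to pull in the rendering dependency is via NuGet package references in the app's .csproj; the exact package names and version numbers below are illustrative assumptions, so check NuGet for the versions that match your CSharpMath release:

```xml
<ItemGroup>
  <!-- CSharpMath >= 0.4.0 renders through SkiaSharp, so both packages are needed -->
  <PackageReference Include="CSharpMath.SkiaSharp" Version="0.5.1" />
  <PackageReference Include="SkiaSharp.Views.Forms" Version="2.88.0" />
</ItemGroup>
```

Alternatively, `Install-Package SkiaSharp` from the NuGet Package Manager console should have the same effect.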
gharchive/issue
2023-01-31T11:50:41
2025-04-01T06:40:51.779720
{ "authors": [ "NGC224-Andromeda", "ViDanMaster" ], "repo": "verybadcat/CSharpMath", "url": "https://github.com/verybadcat/CSharpMath/issues/221", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
253638594
Reduce number of worker threads from default (500) Reducing the number of worker threads will have a positive impact on memory consumption. This changes the default. Is that considered OK also for the large installations we already have? It should be more than enough as long as the queries are not very expensive (<100 ms). It's not uncommon that we tell people to increase the number of threads in this pool, which indicates that reducing it will cause problems, so I don't think we can do this. I think we could change to prestart just 100 of the 500 threads though. We need to introduce a new setting for that. @bratseth this changes the number of threads for the config server only, nothing else. Ah, ok nm then :-)
gharchive/pull-request
2017-08-29T12:26:13
2025-04-01T06:40:51.789941
{ "authors": [ "bjorncs", "bratseth", "hmusum" ], "repo": "vespa-engine/vespa", "url": "https://github.com/vespa-engine/vespa/pull/3251", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
254320170
Show log output during build Stop using travis_wait as it hides log output during build Nice workaround
gharchive/pull-request
2017-08-31T12:38:51
2025-04-01T06:40:51.790788
{ "authors": [ "bjorncs", "gjoranv" ], "repo": "vespa-engine/vespa", "url": "https://github.com/vespa-engine/vespa/pull/3291", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1934761662
fix: Missed context argument in resources.CloudConfigSecret.delete() call

```
2023-10-10 04:06:21.169 7 ERROR oslo.service.loopingcall [-] Fixed interval looping call 'magnum.service.periodic.ClusterUpdateJob.update_status' failed: TypeError: __init__() missing 1 required positional argument: 'cluster'
2023-10-10 04:06:21.169 7 ERROR oslo.service.loopingcall Traceback (most recent call last):
2023-10-10 04:06:21.169 7 ERROR oslo.service.loopingcall   File "/var/lib/kolla/venv/lib/python3.9/site-packages/oslo_service/loopingcall.py", line 150, in _run_loop
2023-10-10 04:06:21.169 7 ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
2023-10-10 04:06:21.169 7 ERROR oslo.service.loopingcall   File "/var/lib/kolla/venv/lib/python3.9/site-packages/magnum/service/periodic.py", line 72, in update_status
2023-10-10 04:06:21.169 7 ERROR oslo.service.loopingcall     cdriver.update_cluster_status(self.ctx, self.cluster)
2023-10-10 04:06:21.169 7 ERROR oslo.service.loopingcall   File "/var/lib/kolla/venv/lib/python3.9/site-packages/magnum_cluster_api/driver.py", line 161, in update_cluster_status
2023-10-10 04:06:21.169 7 ERROR oslo.service.loopingcall     resources.CloudConfigSecret(self.k8s_api, cluster).delete()
2023-10-10 04:06:21.169 7 ERROR oslo.service.loopingcall TypeError: __init__() missing 1 required positional argument: 'cluster'
```

hey @MagnusR -- thanks for your contribution, do you think you could help checking out the black fix? If not, I can follow up and merge it. Thanks!
gharchive/pull-request
2023-10-10T08:33:00
2025-04-01T06:40:51.797921
{ "authors": [ "MagnusR", "mnaser" ], "repo": "vexxhost/magnum-cluster-api", "url": "https://github.com/vexxhost/magnum-cluster-api/pull/233", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
181578231
Fix cookie problem between main and blog When the domain is not set to the same value, the cookie is lost. `common` is a bit confusing… it should probably be renamed to something like `cookie/domain`. Hmm, it is a bit… but I was thinking of it as the shared part… and actually the port isn't really used for anything…
gharchive/pull-request
2016-10-07T03:31:49
2025-04-01T06:40:51.799884
{ "authors": [ "MascoSkray", "vfleaking" ], "repo": "vfleaking/uoj", "url": "https://github.com/vfleaking/uoj/pull/27", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
154892245
previous link of Google Java Style Guide is dead The link to the Google Java Style Guide, http://google-styleguide.googlecode.com/svn/trunk/javaguide.html, is broken. After googling I found this link, https://google.github.io/styleguide/javaguide.html, which means they moved this guide from SVN to GitHub. Thanks!
gharchive/pull-request
2016-05-15T05:50:36
2025-04-01T06:40:51.854212
{ "authors": [ "ghishadow", "vhf" ], "repo": "vhf/free-programming-books", "url": "https://github.com/vhf/free-programming-books/pull/1935", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
111534259
Can't get user access token I assume I'm doing something wrong, but I can't work out how to get a user access token. I'm making a request with curl https://graph.facebook.com/oauth/access_token?client_id=MYAPPID&client_secret=MYAPPSECRET&grant_type=client_credentials and using the result as the access token, but then requests are getting OAuth errors. Might also have my FB app set up incorrectly. @mortocks here's the url for getting an access token https://github.com/vhpoet/facebook-cli/blob/daa51715708ee968e5a6cb874981fba9e1dc5f22/auth.js#L26-L30
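Two things are easy to trip over in the request above. First, an unquoted `&` in a URL makes the shell background the command and silently drop the remaining query parameters, so the URL must be quoted. Second, the `client_credentials` grant returns an *app* access token, not a *user* token, which would explain OAuth errors on user-scoped endpoints. A quoted version of the same call (placeholders kept exactly as in the original):

```shell
# Quote the URL: an unquoted '&' makes the shell background the command
# and drop client_secret and grant_type entirely.
url='https://graph.facebook.com/oauth/access_token?client_id=MYAPPID&client_secret=MYAPPSECRET&grant_type=client_credentials'
echo "$url"
# curl "$url"   # note: client_credentials yields an APP access token, not a USER token
```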
gharchive/issue
2015-10-15T02:44:53
2025-04-01T06:40:51.855839
{ "authors": [ "mortocks", "vhpoet" ], "repo": "vhpoet/facebook-cli", "url": "https://github.com/vhpoet/facebook-cli/issues/3", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2440170994
Add support for QRR type in structured remittance information Addresses #9 Looks good to me and we were able to test it successfully
gharchive/pull-request
2024-07-31T14:13:34
2025-04-01T06:40:51.856918
{ "authors": [ "andre-m-dev", "tobischo" ], "repo": "viafintech/sps_king", "url": "https://github.com/viafintech/sps_king/pull/10", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2330632321
Question about operating multiple phones at the same time Hi, I'd like to ask: if I plug several phones into my computer at the same time, is it possible to batch-synchronize operations, i.e. a single click action runs on multiple phones simultaneously? Sorry, group control is not supported at the moment. Sorry, group control is not supported at the moment. Boss, are there any plans to support a group-control feature? 💕💕💕
gharchive/issue
2024-06-03T09:25:34
2025-04-01T06:40:51.862478
{ "authors": [ "onlyWYY", "viarotel", "xiamo008" ], "repo": "viarotel-org/escrcpy", "url": "https://github.com/viarotel-org/escrcpy/issues/187", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1086359886
fast-syntax-highlighting error when install Installing zdharma/fast-syntax-highlighting@v1.2... Error! Activate logging and try again. Thank you!
gharchive/pull-request
2021-12-22T01:21:30
2025-04-01T06:40:51.863475
{ "authors": [ "0xdeface", "popstas" ], "repo": "viasite-ansible/ansible-role-zsh", "url": "https://github.com/viasite-ansible/ansible-role-zsh/pull/57", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2021357435
Unable to Expose to 0.0.0.0 Hey there, Thank you so much for this project!! 4 little instructions to install, WAY simpler than the native way! Currently, I am trying to host this in a container for my local network; however, the way this is set up natively, 127.0.0.1:8888, doesn't allow it to be exposed to my network. Changing the config file to 0.0.0.0:8888 and 0.0.0.0 port 8888 allows me to get to the User Manager through the /uc/admin page, but going to ipaddress:8888 in my web browser, where Outline should appear, shows a blank dark-themed screen. Is there another way to expose this application to the network? Thank you! Update: changing to my local IP works locally (instead of 0.0.0.0:8888, use localip:8888, in my case 10.1.10.100:8888); however, I am trying to assign this to a domain at notes.mydomain.com, and while I can load this on my local net, trying to load it on notes.mydomain.com loads the blank page again, while the UC Admin can be accessed just fine. I've got an NGINX proxy manager on a different server that is pointing back to this instance. I'm trying to do exactly the same. Have you found a solution to this?
Thanks Bump
I ran into this problem too. But the nginx proxy has issues: it is reachable from outside, yet I can't bind a domain name to it.
I found how to fix this, combined with the resolution in here of adding `user: 0:0` (// add this line <---------------------------- This line does not exist, add it to wk-outline) in the Docker Compose, and what I'm about to post here; this should work for you all, too.

FULL STEPS OF HOW I SET THIS UP:
- Create an Ubuntu / Debian Container / VM
- Install Updates and Upgrades
- Install Docker, Make, and Nano
- Clone the Git
- Clone the config.sh from scripts/config.sh.example
- Nano the config.sh
- Then, "Make Install"

Commands:

Start Off:
```
cd /
mkdir outlineserverfolder
apt-get update && apt-get upgrade -y
apt install make && apt install nano
```

Install Docker:
```
apt-get install ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose
```

Install Outline from GitHub (outline-docker-compose-master.zip):
```
cd directory/where/outlinedata/is/stored   # /outlineserverfolder in this case
git clone https://github.com/vicalloy/outline-docker-compose.git
cd outline-docker-compose
cp /scripts/config.sh.sample scripts/config.sh
nano scripts/config.sh
```

OutlineConfig:
- URL= (PublicFacingURL:PublicFacingPort [PORT REQUIRED!]) notes.mydomain.com:443
- PORT= 3000

NginxConfig:
- URL=0.0.0.0 (ties the Nginx container to the host IP)
- PORT = 8888

CTRL + O to save, CTRL + X to exit nano.

Run the Makefile to install and set up the server: `make install`

Other Commands:
- `docker ps` ~ shows Docker containers
- `docker exec -it /bin/bash` ~ executes /bin/bash on a container; CTRL+P+Q to exit
- `make install` ~ installs and sets up the server
- `make start` ~ starts all associated containers
- `make stop` ~ stops all associated containers
- `make clean` ~ clears data from all containers
- `make clean-data` ~ clears all container .env variables and all container data; also deletes containers

Notes regarding the Outline Config: URL = PublicFacingURL:PublicFacingPort is the URL and domain that you are using to host this, with the port of the site.
Essentially, if you host this at notes.yourdomain.com via Cloudflare and you're pointing it back to an NGINX proxy, the service itself will host on the NginxConfig host, 0.0.0.0, and is accessible via port 8888. So redirect from notes.yourdomain.com to the IP address of the Outline host on port 8888, ensure 443 and 80 are open in your firewall, and in the configuration, when it asks for the OutlineConfig URL, ensure that you list both the domain URL and the port utilized (if using HTTP, port 80; if HTTPS, port 443). In the Outline Config, the port is a required element.

How this is set up:
- Docker Host: 192.168.1.100
- Config File: OutlineSection ~ URL: notes.mydomain.com:443, PORT: 3000; NginxSection ~ URL: 0.0.0.0, PORT: 8888; leave the rest alone
- Nginx Proxy Manager: add an HTTP redirection host to 192.168.1.100:8888 from notes.mydomain.com; WebSocket Support + Block Common Exploits + Cache; SSL > Request New SSL Certificate
- Firewall(s): NAT forward port 80/443 to Nginx Proxy Manager; if there are multiple firewalls / routers / layers in the network, push the port through each until the host is reached

Access:
- For internal usage / testing, 192.168.1.100:8888 will get to the site
- For public access / usage, notes.mydomain.com should get to the site
- For user management, 192.168.1.100:8888/uc/admin/auth/user/

NOTE: USER MUST HAVE A LISTED EMAIL ADDRESS OR SIGN-IN WILL FAIL. In this current state, photos will fail to upload anywhere.
Within the docker-compose.yml:

```yaml
wk-outline:
  image: outlinewiki/outline:${OUTLINE_VERSION}
  command: sh -c "yarn db:migrate --env production-ssl-disabled && yarn start"
  environment:
    - DATABASE_URL=postgres://user:pass@wk-postgres:5432/outline
    - DATABASE_URL_TEST=postgres://user:pass@wk-postgres:5432/outline-test
    - REDIS_URL=redis://wk-redis:6379
    - AWS_S3_UPLOAD_BUCKET_NAME=outline-bucket
  env_file:
    - ./env.outline
    - ./env.oidc
  volumes:
    - ./data/outline:/var/lib/outline/data
  user: 0:0 # add this line <---------------------------- This line does not exist, add it to wk-outline
  restart: unless-stopped
  depends_on:
    - wk-postgres
    - wk-redis
    ##BEGIN MINIO
    - wk-minio
    ##END
```

I wrote an entire document on Outline for how to host Outline; sorry if the formatting isn't great, but this is basically every step I took. What you'll need to do is set the config.sh outline URL to the PUBLIC FACING URL:443 or :80, depending on whether you use HTTP or HTTPS; if using LetsEncrypt with your NGINX Proxy Manager, set it to your notes.yourdomain.com:443. Leave the PORT option at 3000. Then, for the NGINX config, I have it hosted on 0.0.0.0, to ensure that if you navigate to the local domain, 192.168.1.100:8888, it still works and still redirects fine, so that the 127.0.0.1 cannot-access issue is gone. With the outline URL set to notes.yourdomain.com:443 and PORT 3000, you should be able to redirect through Proxy Manager to it just fine without needing advanced flags or location flags. The only thing I am now working on is blocking access to the /uc/admin site through proxy manager, and though I am struggling to figure that out, this is not an NGINX Proxy Manager GitHub, so I'll figure that one out. @giovannipollo @Codename-11 @hardingCheng Please ping me if anyone has issues with understanding that; I just copy-pasted from my Outline page. Thanks, it works! Following the documents of @7ekhed, I did the following things.
clone git

```
git clone https://github.com/vicalloy/outline-docker-compose.git
cd outline-docker-compose
cp scripts/config.sh.sample scripts/config.sh
```

Change configuration: nano scripts/config.sh

```
URL=<my_url>:8888
HTTP_IP=0.0.0.0
```

Change configuration - 2: nano scripts/templates/docker-compose.yml

```yaml
volumes:
  - ./data/outline:/var/lib/outline/data
user: 0:0 # add this line <---------------------------- This line does not exist, add it to wk-outline
restart: unless-stopped
```

Install and run: make install

Has anyone tried this approach with Traefik? I tried a number of configurations to use Traefik as the reverse proxy, but it would only work temporarily; then nginx would end with worker process 23 exited with code 0.

Thank you all, especially @7ekhed & @sheepvs5! I spent several hours trying to fix it. Got it to work now. 🤝🏼

Mine is still not working. I have no problem opening it with the local ip:8888, http://192.168.1.227:8888. Then I set up an nginx reverse proxy with my own domain name pointing to my local IP and port 8888; the nginx proxy manager is running on a different server on my local network. The config.sh file URL and HTTP_IP were set as below. But when I go to https://docs.stonelab.me I can open the page, and it brings up the following page with a button "Using OpenID to continue", then nothing works from there. I think it might be an authentication issue?
gharchive/issue
2023-12-01T18:07:15
2025-04-01T06:40:51.893336
{ "authors": [ "7ekhed", "Codename-11", "geekqq", "giovannipollo", "hardingCheng", "pathavyer", "sheepvs5", "y3sp3r" ], "repo": "vicalloy/outline-docker-compose", "url": "https://github.com/vicalloy/outline-docker-compose/issues/72", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
174320702
Add an option for displaying the track envelope For track shapes that are very different from a triangle, I'd like to know the geometric envelope of my track. (The envelope is like a giant rubber band wrapped around the track.) Would it be possible to add an option for displaying the envelope and its length? For example, a small button with a lasso in it, below the other green buttons in the top-right corner of the map, would show/hide the envelope on the map, and a message box somewhere would show the envelope length and its opening length. (The opening length is the distance between the start and end waypoints.) Not sure what you mean exactly? Do you have an example / known OSS implementation? Here's an example of a flight track, where I manually added the envelope in green. What is the purpose? The purpose is to get an evaluation of the track length that is more realistic than with the standard geometric shapes now used for distance scoring. With a fixed number of sides (3 for FAI triangles, or even 4 as were allowed in the past by French CFD rules), many "interesting" tracks (those with many extrema, or with convex shapes like a circle) get widely under-evaluated. The example image above shows that well on such a track, with the green envelope much longer than the blue FAI triangle. On the other hand, tracks with just 3 points will often have an envelope almost identical to their FAI triangle. This is the case for most of the top tracks in the CFD this year. I proposed that idea publicly last spring in a letter that was published in issue #166 of Parapente Mag. Then last week, I was happily surprised to learn that a paramotor association had adopted it and implemented it for their own distance scoring. See their new rule here: [http://cfdm.forumperso.com/t141-reglement-de-la-cfdm] And here's an example of a paramotor track with the envelope shown.
[http://cfdm.forumperso.com/t130-0001-13-08-16-julien-heyl-79-6-km-homologue] So now my request is to add that capability to VisuGps as an optional feature. People who don't care will see no change, and people interested can visualize their envelopes, and get the envelope length, by clicking a button. Note that I've heard that many graphics libraries already have functions for drawing envelopes. If you use such a library, this may be relatively easy to implement; otherwise it'll be more work. I'm a developer, so I volunteer to contribute if needed. Looking at your example, your previous reasoning does not seem valid. If the question is how to better reflect "real" distance for recreational pilots, maybe there are better options? (i.e. more segments). I'll probably be quite busy for the 2 coming weeks but happy to discuss this after.
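The "giant rubber band" described above is the convex hull of the track points. As a rough illustration of how a viewer could compute it (not VisuGps3's actual code, and using planar coordinates rather than proper geodesic distances), Andrew's monotone-chain algorithm gives the hull, and its perimeter is the envelope length:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means a clockwise or collinear turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def envelope_length(points):
    """Perimeter of the convex hull of the track points."""
    hull = convex_hull(points)
    return sum(((hull[i][0] - hull[i - 1][0]) ** 2 +
                (hull[i][1] - hull[i - 1][1]) ** 2) ** 0.5
               for i in range(len(hull)))

# toy track: two interior points, hull is the triangle (0,0)-(4,0)-(2,3)
track = [(0, 0), (4, 0), (2, 3), (2, 1), (3, 1)]
print(envelope_length(track))  # → 11.211102550927978, i.e. 4 + 2*sqrt(13)
```

A real implementation would first project the lat/lon fixes to a local plane (or use a geodesic hull) before measuring edge lengths.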
gharchive/issue
2016-08-31T16:36:13
2025-04-01T06:40:51.900372
{ "authors": [ "JFLarvoire", "vicb" ], "repo": "vicb/VisuGps3", "url": "https://github.com/vicb/VisuGps3/issues/10", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1432406788
Image alt attributes needed Images are not labeled with alt attributes Resolved
gharchive/issue
2022-11-02T03:10:26
2025-04-01T06:40:51.902095
{ "authors": [ "vicearus" ], "repo": "vicearus/Module1", "url": "https://github.com/vicearus/Module1/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1419634873
🛑 SC storefront productend is down In 55d0fc5, SC storefront productend (https://shop.line.me/@024azltq/product/319384970) was down: HTTP code: 0 Response time: 0 ms Resolved: SC storefront productend is back up in f7e9822.
gharchive/issue
2022-10-23T04:41:18
2025-04-01T06:40:51.904833
{ "authors": [ "vichaos" ], "repo": "vichaos/upptime", "url": "https://github.com/vichaos/upptime/issues/266", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1614155450
Code link isn't saved https://arxiv.org/abs/1707.05821v1 Fixed in #160
gharchive/issue
2023-03-07T20:24:44
2025-04-01T06:40:51.905811
{ "authors": [ "vict0rsch" ], "repo": "vict0rsch/PaperMemory", "url": "https://github.com/vict0rsch/PaperMemory/issues/145", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
717445327
Results in NeuralQA inconsistent with same model running on HF I've tested a model that I've deployed on NeuralQa vs one deployed on HF and noticed that the same inputs are yielding different outputs even though it's using the exact same model. This can of course be attributed to a few things but I can't seem to identify the culprit. Here's the context: Question: Are your handsets locked or unlocked? Corpus: ['No, all our handsets are unlocked.','Since your SIM isn’t working in your handset while other SIM cards are, it might be an issue with your handset provider; or the mobile phone could be locked, meaning it only accepts SIM cards from a particular service provider. Please contact the handset dealer for more assistance.'] The following returns 'unlocked' which is the correct response: See Demo on HuggingFace I've configured the exact same model in NeuralQA (with relsnip disabled) and the result is 'locked' even though I'm feeding exactly the same inputs. Here my log: 0:No, all our handsets are unlocked. [{'answer': 'unlocked', 'took': 0.35032129287719727, 'start_probability': '0.92030567', 'end_probability': '0.00026586326', 'probability': '0.460418697912246', 'question': 'Are your handsets locked or unlocked?', 'context': 'no, all our handsets are unlocked '}] 1:Since your SIM isn’t working in your handset while other SIM cards are, it might be an issue with your handset provider; or the mobile phone could be locked, meaning it only accepts SIM cards from a particular service provider. Please contact the handset dealer for more assistance. 
[{'answer': 'locked', 'took': 0.5319299697875977, 'start_probability': '0.9462091', 'end_probability': '0.007203659', 'probability': '0.48030819557607174', 'question': 'Are your handsets locked or unlocked?', 'context': 'since your sim isn ’ t working in your handset while other sim cards are, it might be an issue with your handset provider ; or the mobile phone could be locked , meaning it only accepts sim cards from a particular service provider. please contact the handset dealer for more assistance'}] As you can see the 2nd answer gets a higher probability but that doesn't really make sense as it's exactly the same model. The main difference is that the NeuralQA model is feeding the corpus content independently while in the HF example, we're feeding the entire corpus. Any ideas on why this is happening? Could this be related to #39 @jvence , Yup, it is definitely related to #39 .The solution will be to rewrite that piece using the HF approach. Its part of some work to convert the entire lib to use pytorch. See #53 . Hoping to have some updates in the coming week or so. Yes further testing with multiple models does confirm that the results given by NeuralQA are way off the ones returned by HF face model. Hope this can be resolved soon as it's critical to us. Thank you Hi @victordibia, just checking in to see if there's any update on this? Seems like a pretty critical issue. Thanks Noticed something interesting. Running the following through a model Sentence: My name is Jean Question: what is your name? answer= {'answer': '', 'took': 1.0392448902130127, 'start_probability': '0.9280809', 'end_probability': '1.2582249e-06', 'probability': '0.46404171642723213'} The start probability is very high but the actual probability is only 0.46. Is this normal? @victordibia Is this project still maintained? We have not heard from you for a while. Hope everything is ok. @victordibia It's a shame that this is no longer maintained. What are you plans vis-a-vis this project?
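One detail worth noticing in the logs above: the reported 'probability' field appears to roughly match the arithmetic mean of the start and end probabilities. A toy illustration (not NeuralQA's or Hugging Face's actual code) of why that combination can be misleading, answering the "Is this normal?" question for that scoring scheme:

```python
def avg_score(start_p, end_p):
    """Arithmetic mean of start/end probabilities - what the logged
    'probability' values above appear to roughly correspond to."""
    return (start_p + end_p) / 2

def joint_score(start_p, end_p):
    """Product of start/end probabilities - closer in spirit to scoring
    the joint (start, end) span."""
    return start_p * end_p

# numbers from the "My name is Jean" log entry above
start_p, end_p = 0.9280809, 1.2582249e-06
print(avg_score(start_p, end_p))    # ≈ 0.464 - looks moderately confident
print(joint_score(start_p, end_p))  # ≈ 1.2e-06 - reveals the span is essentially implausible
```

Averaging lets a span whose end probability is near zero still report ~0.46 "confidence", which is why the empty answer above carries a seemingly reasonable score.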
gharchive/issue
2020-10-08T15:24:45
2025-04-01T06:40:51.921356
{ "authors": [ "jvence", "victordibia" ], "repo": "victordibia/neuralqa", "url": "https://github.com/victordibia/neuralqa/issues/60", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
678677339
Fix for issue 224 Removed "singularize" from menu in layouts/partials/nav.html Super!
gharchive/pull-request
2020-08-13T19:17:47
2025-04-01T06:40:51.925275
{ "authors": [ "thegreatluke", "victoriadrake" ], "repo": "victoriadrake/hugo-theme-introduction", "url": "https://github.com/victoriadrake/hugo-theme-introduction/pull/241", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2686890346
Update zio, zio-streams, zio-test, ... to 2.1.13

About this PR 📦 Updates dev.zio:zio dev.zio:zio-streams dev.zio:zio-test dev.zio:zio-test-magnolia dev.zio:zio-test-sbt from 2.1.12 to 2.1.13 📜 GitHub Release Notes - Version Diff

Usage ✅ Please merge! I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala!

⚙ Adjust future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency:

```
updates.ignore = [ { groupId = "dev.zio" } ]
```

Or, add this to slow down future updates of this dependency:

```
dependencyOverrides = [{ pullRequests = { frequency = "30 days" }, dependency = { groupId = "dev.zio" } }]
```

labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1

Codecov Report All modified and coverable lines are covered by tests :white_check_mark: Project coverage is 56.67%. Comparing base (7e37845) to head (5c00a5c). Report is 123 commits behind head on master. Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master     #533      +/-   ##
==========================================
- Coverage   56.84%   56.67%   -0.18%
==========================================
  Files          35       38       +3
  Lines        1752     1775      +23
  Branches      233      237       +4
==========================================
+ Hits          996     1006      +10
- Misses        756      769      +13
```

:umbrella: View full report in Codecov by Sentry.

Superseded by #540.
gharchive/pull-request
2024-11-24T02:22:56
2025-04-01T06:40:51.983877
{ "authors": [ "codecov-commenter", "scala-steward" ], "repo": "vigoo/desert", "url": "https://github.com/vigoo/desert/pull/533", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
243632619
Using TableViewCell as Subview Hi, I use Storyboards, and I have a prototype cell defined in a TableView. It's a header cell that summarises the rest of the table. I want to use this prototype cell as the subview of a SCLAlertView, i.e. so that it shows the summary of the table on the view underneath the alertView. I tried using:

```swift
let cell = tableView.dequeueReusableCell(withIdentifier: "header") as! PayHeaderTVC
```

and then modifying the cell's properties:

```swift
cell.lblItemsMarked.text = "\(itemsMarkedCount)"
cell.lblTotalMarking.text = markingTotal.Currency()
```

and then finally calling `alertView.customSubview = cell`, but when the alertView shows, the subview does not. Any ideas? Cheers Saf

UITableViewCell is not supposed to be used outside of a UITableView. You can create your own custom view and use it as a subview of your cell's contentView and as alertView.customSubview (not the same instance of course, two different instances).
gharchive/issue
2017-07-18T08:05:33
2025-04-01T06:40:51.991065
{ "authors": [ "algrid", "sarfrazb" ], "repo": "vikmeup/SCLAlertView-Swift", "url": "https://github.com/vikmeup/SCLAlertView-Swift/issues/341", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
340994380
Add button to get ABI while viewing an account card Next to the account history button. Hi, can I clone the full code to rework it for another blockchain?
gharchive/issue
2018-07-13T12:06:42
2025-04-01T06:40:51.994298
{ "authors": [ "YudaAdiPratama", "vikxx" ], "repo": "vikxx/eos1bot", "url": "https://github.com/vikxx/eos1bot/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1880962000
🛑 Booking is down In 1ff0739, Booking ($SITE_URL) was down: HTTP code: 0 Response time: 0 ms Resolved: Booking is back up in 2395935 after 7 minutes.
gharchive/issue
2023-09-04T23:36:47
2025-04-01T06:40:51.999054
{ "authors": [ "herschel666" ], "repo": "villekulla-nma/monitor-booking", "url": "https://github.com/villekulla-nma/monitor-booking/issues/242", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1590433293
[vim9script] Weird class X not found on interface X error

Steps to reproduce To reproduce, source this script:

```vim
vim9script

class Parent
  public this.value = 0
endclass

def Test_get_parent_member()
  var parent = Parent.new(9)
  assert_equal(9, parent.value)
enddef

class Child extends Parent
endclass

Test_get_parent_member()
```

Running the above script results in class Parent not found on interface Parent. If the definition of the Child class is removed, then the code runs fine.

Expected behaviour The code should run fine and the test should not fail.

Version of Vim 9.0.1321

Environment macOS 13.2 Apple Terminal xterm-256color ZSH 5.8.1

Logs and stack traces No response

I think this issue is fixed by https://github.com/vim/vim/commit/74cc13cc402fa4df9033fbbc0643b1b403525950. Can you try to reproduce this issue with the latest Vim version? Thanks, indeed the issue appears to be fixed!
gharchive/issue
2023-02-18T18:40:07
2025-04-01T06:40:52.006356
{ "authors": [ "lifepillar", "yegappan" ], "repo": "vim/vim", "url": "https://github.com/vim/vim/issues/12023", "license": "Vim", "license_type": "permissive", "license_source": "github-api" }
545725240
Add eob to 'fillchars' Is your feature request related something that is currently hard to do? Please describe. I want to hide the tildes after the last line of buffers. Setting EndOfBuffer highlight to show fg and bg in same color is a way, but in some terminal emulators with background transparency which is not applied to texts (e.g. iTerm2), the tildes come back visible. This was discussed in neovim/neovim#2067, and now NeoVim has option eob for 'fillchars'. I want Vim to have same ability. Describe the solution you'd like Add eob option to 'fillchars' to change the tildes, that is compatible with NeoVim. Describe alternatives you've considered Adding option to hide tildes? I'm verry sorry this is duplicate of #3820. I've only searched with listchars. closing.
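For reference, this is the Neovim-compatible usage the request asks Vim to adopt, alongside the highlight-based workaround already described above:

```vim
" Replace the end-of-buffer tildes with a space
" (works in Neovim today; note the backslash-escaped trailing space)
set fillchars=eob:\ 

" Workaround without 'eob': hide the tildes via highlighting.
" As noted above, this breaks under terminal background transparency.
highlight EndOfBuffer ctermfg=bg guifg=bg
```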
gharchive/issue
2020-01-06T13:25:21
2025-04-01T06:40:52.009376
{ "authors": [ "Bakudankun" ], "repo": "vim/vim", "url": "https://github.com/vim/vim/issues/5445", "license": "Vim", "license_type": "permissive", "license_source": "github-api" }
1301493341
WIP: Update menu translations for Italian / Italiano Developers who know the language are now needed to help complete this PR. The menu translation items template is generated by https://github.com/adaext/vim-menutrans-helper. I've scanned all the menu items and created a fairly complete template. It hasn't been updated for years; many new items would need to be added beyond the items in menu.vim. @chrisbra this is already translated This seems to be machine translated, with some errors. I am not including this. Closing.
gharchive/pull-request
2022-07-12T03:42:36
2025-04-01T06:40:52.011229
{ "authors": [ "adaext", "chrisbra", "ronin49" ], "repo": "vim/vim", "url": "https://github.com/vim/vim/pull/10711", "license": "Vim", "license_type": "permissive", "license_source": "github-api" }
1977643179
Make autoload/dist/vim.vim work properly when lacking vim9script support :return cannot be used outside of :function in older Vims lacking Vim9script support or in Neovim, even when evaluation is being skipped in the dead :else branch. Instead, use the pattern described in :h vim9-mix, which uses :finish to end script processing before it reaches the Vim9script stuff. yeah, thanks. Makes sense.
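For reference, the guard pattern from `:h vim9-mix` that this PR applies looks roughly like this (a sketch of the documented pattern, not the exact hunk from the change):

```vim
" Legacy Vims and Neovim take the if-branch and hit :finish, so the
" Vim9 section below is never parsed on them.
if !has('vim9script')
  " legacy script fallback goes here
  finish
endif

" Only reached when Vim9 script is supported.
vim9script
def DoSomething()
enddef
```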
gharchive/pull-request
2023-11-05T05:42:29
2025-04-01T06:40:52.012809
{ "authors": [ "chrisbra", "seandewar" ], "repo": "vim/vim", "url": "https://github.com/vim/vim/pull/13487", "license": "Vim", "license_type": "permissive", "license_source": "github-api" }
1406736110
Update span create or get from ctx

Make sure that the span can be changed in user code by getting the active span from the context. All changes are applied based on the origin of the span (ctx or new).

Not going to push this code since I've changed it in a commit to main directly: 035f3c4a37e3a84adeb357b343a7ae3a4f0ade6a
gharchive/pull-request
2022-10-12T19:58:41
2025-04-01T06:40:52.091288
{ "authors": [ "vincentfree" ], "repo": "vincentfree/opentelemetry-http", "url": "https://github.com/vincentfree/opentelemetry-http/pull/15", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
346490389
Importing Application Error Hi Vincent, Thought I would give this a try and am struggling to import the application into my environment. I get the following error: Regards Hi Vincent : I re-downloaded the APEX SQL file and now seems to be working. Very odd. Apologies
gharchive/issue
2018-08-01T08:01:59
2025-04-01T06:40:52.094036
{ "authors": [ "Kopite4Ever" ], "repo": "vincentmorneau/apex-pwa", "url": "https://github.com/vincentmorneau/apex-pwa/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
754835147
[RFC-15][HUDI-1325] Merge updates of unsynced instants to metadata table

What is the purpose of the pull request

The dataset timeline and the metadata table timeline can become out of sync. When reading from the metadata table while the timelines are out of sync, you get incorrect values from getAllFilesInPartition and getAllPartitionPaths. This change provides a way to overcome that scenario by reading the unsynced timeline instants and merging them with the existing metadata table records to get the most up-to-date state of the file system.

JIRA: https://issues.apache.org/jira/browse/HUDI-1325

Brief change log

- The logic for converting timeline metadata to metadata table records was directly tied to the commit phase in FSBackedMetadataWriter. Refactored this logic into a utility class, HoodieTableMetadataTimelineUtil.
- Created a scanner, HoodieMetadataMergedInstantRecordScanner, which handles conversion of timeline instants to metadata records and merges the results.
- Added a third step in FSBackedTableMetadata.getMergedRecordByKey which uses the new scanner mentioned above to fetch the HoodieRecord associated with the desired key from the unsynced timeline instants and merge it with the record from the metadata table.
- When converting a rollback operation to metadata table records, there was logic that re-read from the metadata table to ensure that any files being deleted as part of the rollback existed:

  // Rollbacks deletes instants from timeline. The instant being rolled-back may not have been synced to the
  // metadata table. Hence, the deleted filed need to be checked against the metadata.

  This doesn't make sense, since all instants are processed in serial order, so there would never be a case where a rollback was written before an earlier instant on the timeline had already been synced.
Removed this logic because it created a circular dependency when implementing timeline merging.

- Changed the validate-metadata step in tests to use the metadata reader FSBackedTableMetadata. By default, when the metadata writer FSBackedTableMetadataWriter is initialized, it syncs all instants to the metadata table; by using the reader we can simulate the metadata table being out of sync.
- Modified initMetaClient in the test base class to allow the table type to be passed in, since the table type was always set to COPY_ON_WRITE when using this method to initialize the meta client.

@rmpifer I was looking for this in apache/hudi :) and totally missed that it's here. Can we retarget this to apache/hudi/rfc-15?

Codecov Report: Merging #6 (75a3352) into rfc-15 (7f84b12) will decrease coverage by 0.06%. The diff coverage is 0.00%.
```
@@             Coverage Diff              @@
##             rfc-15        #6     +/-  ##
============================================
- Coverage     43.79%    43.73%   -0.07%
  Complexity     3379      3379
============================================
  Files           573       575       +2
  Lines         24400     24438      +38
  Branches       2445      2449       +4
============================================
  Hits          10687     10687
- Misses        12692     12730      +38
  Partials       1021      1021
```

| Flag | Coverage Δ | Complexity Δ |
|---|---|---|
| hudicli | 27.48% <0.00%> (+0.35%) | 0.00 <0.00> (ø) |
| hudiclient | 24.74% <0.00%> (+0.41%) | 0.00 <0.00> (ø) |
| hudicommon | 51.64% <0.00%> (-0.95%) | 0.00 <0.00> (ø) |
| hudihadoopmr | 33.05% <ø> (ø) | 0.00 <ø> (ø) |
| hudispark | 67.19% <ø> (ø) | 0.00 <ø> (ø) |
| huditimelineservice | 64.43% <ø> (ø) | 0.00 <ø> (ø) |
| hudiutilities | 69.38% <ø> (ø) | 0.00 <ø> (ø) |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ | Complexity Δ |
|---|---|---|
| ...hudi/metadata/HoodieBackedTableMetadataWriter.java | 4.01% <0.00%> (+1.34%) | 4.00 <0.00> (ø) |
| ...pache/hudi/metadata/HoodieBackedTableMetadata.java | 0.00% <0.00%> (ø) | 0.00 <0.00> (ø) |
| ...data/HoodieMetadataMergedInstantRecordScanner.java | 0.00% <0.00%> (ø) | 0.00 <0.00> (?) |
| ...hudi/metadata/HoodieTableMetadataTimelineUtil.java | 0.00% <0.00%> (ø) | 0.00 <0.00> (?) |
gharchive/pull-request
2020-12-02T00:41:38
2025-04-01T06:40:52.149787
{ "authors": [ "codecov-io", "rmpifer", "vinothchandar" ], "repo": "vinothchandar/hudi", "url": "https://github.com/vinothchandar/hudi/pull/6", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
154666950
Unable to run pypdfocr.exe ver 0.9.0 Hi, I am unable to run pypdfocr.exe 0.9.0 on Windows 7 x64. This is the error message that I get: This version of pypdfocr.exe is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need a x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher. I have tested on windows 7 x64 and it runs fine for me. Are you sure you're on 64-bit? Have you tried an older version of pypdfocr (like 0.8.3) and does that also not work? Closed, unable to reproduce
gharchive/issue
2016-05-13T09:10:42
2025-04-01T06:40:52.175904
{ "authors": [ "violetsolutions", "virantha" ], "repo": "virantha/pypdfocr", "url": "https://github.com/virantha/pypdfocr/issues/45", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
445733823
Request For Cardboard Button On Player Screen

Description

Hello Viro team, I would like to ask for a feature that already exists in Google VR Video (and in YouTube). There you will see that there is a cardboard button (a cardboard icon ^_^ ). The cardboard button/icon is displayed when the video plays in mono mode (still 360, with users moving their phone around to look); users click on it to switch to VR mode. Does that sound promising? Or do you think that tapping on the screen to switch modes is good enough? Thanks.

Hello @dthian, what do you think about this request?

> The cardboard button/icon will display when the video play in mono mode (still 360 and users move their phone around to see), user click on it to switch to VR mode. Does it sound promising?

The above sounds like an awesome video application that developers can build. Sure, you can use the Viro platform SDK to build this experience; we should be able to support this case.

> What do you think about this request?

Unfortunately, Viro is not a dev shop at the moment. Our main focus is to add support and fix issues that are found on the platform; we are not an "applications building team", but more like a "frameworks team". One thing you can try is to reach out to other devs in our Slack channel and see if they might be willing to build such an application for you.

Thank you @dthian for your information.
gharchive/issue
2019-05-18T16:53:24
2025-04-01T06:40:52.179341
{ "authors": [ "dthian", "tuananhcwrs" ], "repo": "viromedia/viro", "url": "https://github.com/viromedia/viro/issues/631", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1871467980
Change CLI backend option to ValueEnum

Change the CLI to receive backends as a positional argument with values listed in a ValueEnum.

Current '--help' output:

```
A virtio-sound device using the vhost-user protocol.

Usage: vhost-user-sound --socket <SOCKET> --backend <BACKEND>

Options:
      --socket <SOCKET>    vhost-user Unix domain socket path
      --backend <BACKEND>  audio backend to be used [possible values: null, pipewire, alsa]
  -h, --help               Print help
  -V, --version            Print version
```

If a wrong backend is given, it gives hints:

```
$ cargo run -- --socket /tmp/sound.sock --backend nul
error: invalid value 'nul' for '<BACKEND>'
  [possible values: null, pipewire, alsa]

  tip: a similar value exists: 'null'
```

Added a test to verify (minimally) the backend argument.

The last commit adds the rstest crate to allow multiple cases in a parameterized test. Its test output:

```
running 4 tests
test tests::test_cli_backend_arg::case_2_pipewire ... ok
test tests::test_cli_backend_arg::case_3_alsa ... ok
test tests::test_cli_backend_arg::case_1_null_backend ... ok
test tests::test_sound_config_setup ... ok

test result: ok. 4 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

But if the dependency is not desired, the commit can be omitted.

Fixed the build issue by tweaking the conditional compilation of the cfg attribute for the match block:

```rust
#[cfg(not(all(
    feature = "null-backend",
    feature = "pw-backend",
    feature = "alsa-backend"
)))]
_ => Err(Error::AudioBackendNotSupported),
```

Does not look great, but I couldn't come up with a cleaner correct option. Hopefully it is ok though :)

> Fixed the build issue by tweaking the conditional compilation of the cfg attribute for the match block:
>
> ```rust
> #[cfg(not(all(
>     feature = "null-backend",
>     feature = "pw-backend",
>     feature = "alsa-backend"
> )))]
> _ => Err(Error::AudioBackendNotSupported),
> ```
>
> Does not look great, but I couldn't come up with a cleaner correct option.
> Hopefully it is ok though :)

What about as it was before this series? I mean applying this patch:

```diff
diff --git a/crates/sound/src/audio_backends.rs b/crates/sound/src/audio_backends.rs
index bd13c37..dea74f9 100644
--- a/crates/sound/src/audio_backends.rs
+++ b/crates/sound/src/audio_backends.rs
@@ -17,13 +17,7 @@ use self::alsa::AlsaBackend;
 use self::null::NullBackend;
 #[cfg(feature = "pw-backend")]
 use self::pipewire::PwBackend;
-use crate::{device::ControlMessage, stream::Stream, BackendType, Result};
-#[cfg(not(all(
-    feature = "null-backend",
-    feature = "pw-backend",
-    feature = "alsa-backend"
-)))]
-use crate::Error;
+use crate::{device::ControlMessage, stream::Stream, BackendType, Error, Result};
 
 pub trait AudioBackend {
     fn write(&self, stream_id: u32) -> Result<()>;
@@ -64,11 +58,6 @@ pub fn alloc_audio_backend(
         Pipewire => Ok(Box::new(PwBackend::new(streams))),
         #[cfg(feature = "alsa-backend")]
         Alsa => Ok(Box::new(AlsaBackend::new(streams))),
-        #[cfg(not(all(
-            feature = "null-backend",
-            feature = "pw-backend",
-            feature = "alsa-backend"
-        )))]
         _ => Err(Error::AudioBackendNotSupported),
     }
 }
```

Okay, in this case we will have a warning: unreachable pattern. So I think your solution is the only one available, or we should suppress the warn(unreachable_patterns)
I think suppressing the warning is best for this case, it is justified and will make it more maintainable. Let's go for that! Another option now that I think of it, is to have the enum variants conditional in the declaration. Another option now that I think of it, is to have the enum variants conditional in the declaration. Nah, it complicates the no-features case. But it may be a nice change if we ever tweak the compiler to use at least the Null backend in any case (if it is possible). I'll leave it as is. Another option now that I think of it, is to have the enum variants conditional in the declaration. Do you mean in pub enum BackendType ? It could make sense. Should we also avoid the default? Do you mean in pub enum BackendType ? It could make sense. Should we also avoid the default? Yes, the BackendType. And yes, we would have to avoid the default. That would've been fine, but also it would leave the enum with no values with no-features case. And that makes some parts of the code unreachable... Kindof a mess to fix. But also proves that is not a good idea to compile with no features I guess :) Let's keep it in the back of our heads for the future. Do you mean in pub enum BackendType ? It could make sense. Should we also avoid the default? Yes, the BackendType. And yes, we would have to avoid the default. That would've been fine, but also it would leave the enum with no values with no-features case. And that makes some parts of the code unreachable... Kindof a mess to fix. But also proves that is not a good idea to compile with no features I guess :) I'm start thinking the same :-) So maybe we should always compile the null backend. Disable it will save just few bytes, nothing more since it doesn't have any dependency. Let's keep it in the back of our heads for the future. Sure, feel free to open an issue here for tracking it. Another option now that I think of it, is to have the enum variants conditional in the declaration. 
Nah, it complicates the no-features case. But it may be a nice change if we ever tweak the compiler to use at least the Null backend in any case (if it is possible). I'll leave it as is. However, I think it makes sense, partly because now we print all as possible values, even if not enabled. $ cargo build --no-default-features --features=null-backend $ target/debug/vhost-user-sound --socket /tmp/sock --backend gstreamer error: invalid value 'gstreamer' for '--backend <BACKEND>' [possible values: null, pipewire, alsa] For more information, try '--help'. But I agree that we can do it later by removing the feature to disable null-backend to simplify the code. However, I think it makes sense, partly because now we print all as possible values, even if not enabled. Right, that is a good point. But I agree that we can do it later by removing the feature to disable null-backend to simplify the code. I don't mind handling this myself in a follow-up PR :)
gharchive/pull-request
2023-08-29T11:27:22
2025-04-01T06:40:52.203847
{ "authors": [ "aesteve-rh", "stefano-garzarella" ], "repo": "virtio-sound/vhost-device", "url": "https://github.com/virtio-sound/vhost-device/pull/24", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2203297016
Viosock: Remove Coinstaller Stuff from the INF File The driver is installed in the Driver Store (DIRID 13) instead of the System Drivers directory (DIRID 12). Is the protocol not registered? [VirtioSocket_Device_CoInstaller_AddReg] HKR,,CoInstallers32,0x00010000,"viosocklib.dll,ViosockCoInstaller" ==> The old INFs will register the protocol through the ViosockCoInstaller interface. [VirtioSocket_Device_CoInstaller_AddReg] HKR,,CoInstallers32,0x00010000,"viosocklib.dll,ViosockCoInstaller" ==> The old INFs will register the protocol through the ViosockCoInstaller interface. Ah, OK, I missed that. I will improve the PR. @MartinDrab The build failed C:\EWDK11\Program Files\Microsoft Visual Studio\2019\BuildTools\MSBuild\Microsoft\VC\v160\Microsoft.CppBuild.targets(436,5): error MSB8013: This project doesn't contain the Configuration and Platform combination of Debug|Win32. [C:\workspace\VirtIO-EWDK-11-21H2-SDV\viosock\installer\viosock-installer.vcxproj] @MartinDrab https://learn.microsoft.com/en-us/windows-hardware/drivers/install/using-an-extension-inf-file did you try to check "Extension INFs"? @MartinDrab https://learn.microsoft.com/en-us/windows-hardware/drivers/install/using-an-extension-inf-file did you try to check "Extension INFs"? I came across them several days ago when working on another (unrelated) issue. I did not have time to look at them in more detail and possibly use them, however, I hope to get to it shortly. @MartinDrab We merge https://github.com/virtio-win/kvm-guest-drivers-windows/pull/1087 to switch to the new EWDK. For now, we just disable viosock build for Win11. @MartinDrab We merge #1087 to switch to the new EWDK. For now, we just disable viosock build for Win11. OK. I hope to work on this PR shortly and finish it. I apologize for this inconvenience. @MartinDrab We merge #1087 to switch to the new EWDK. For now, we just disable viosock build for Win11. OK. I hope to work on this PR shortly and finish it. I apologize for this inconvenience. 
Not a problem. Disabling the build is not a complicated task. This is more problem for companies that release viosock. @MartinDrab We merge #1087 to switch to the new EWDK. For now, we just disable viosock build for Win11. OK. I hope to work on this PR shortly and finish it. I apologize for this inconvenience. Not a problem. Disabling the build is not a complicated task. This is more problem for companies that release viosock. Hello, I hope I overcame issues regarding the co-installer and the socket WSP installation from the INF file. In the end, I decided to separate the WSP installation into a special service since it is possible to install and start it from the INF file. Other possibilities seem to be problematic: WDK complains about the co-installer even when it is used only on old versions of Windows 10, the AddSoftware directive is not supported on old versions of Windows 10. I hope this should finally pass the tests and build successfully also with new (E)WDKs. @MartinDrab Thanks for your work. Unfortunately, the build failed again. Please also revert https://github.com/virtio-win/kvm-guest-drivers-windows/commit/f7646006430f40014373cc747bb75a0d7f2cf1c2 and update buildAll.bat (https://github.com/virtio-win/kvm-guest-drivers-windows/commit/ad1aed4601ba937625263c54720fbf316685a3e5 disabled Win11 for viosock) add rem NO WIN11 build for viosock for now if errorlevel 1 goto :fail call tools\build.bat viosock\sys\viosock.vcxproj "Win10_SDV Win11_SDV" %* if errorlevel 1 goto :fail call tools\build.bat viosock\wsk\wsk.vcxproj "Win10_SDV Win11_SDV" %* if errorlevel 1 goto :fail call tools\build.bat viosock\viosock-wsk-test\viosock-wsk-test.vcxproj "Win10_SDV Win11_SDV" %*
gharchive/pull-request
2024-03-22T20:49:03
2025-04-01T06:40:52.214939
{ "authors": [ "MartinDrab", "YanVugenfirer", "kostyanf14", "xuehuihui" ], "repo": "virtio-win/kvm-guest-drivers-windows", "url": "https://github.com/virtio-win/kvm-guest-drivers-windows/pull/1064", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
405961364
Did not import style as expected

I think it is a bug of https://github.com/ant-design/babel-plugin-import .

Import with alias:

```js
import { TimePicker as ATimePicker } from 'ant-design-vue'
```

What it generated is below. It was not processed by babel-plugin-import, so it is not loaded on demand and no style is loaded:

```js
__webpack_require__.r(__webpack_exports__);
/* harmony import */ var ant_design_vue__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(/*! ant-design-vue */ "./node_modules/ant-design-vue/dist/antd.min.js");
/* harmony import */ var ant_design_vue__WEBPACK_IMPORTED_MODULE_0___default = /*#__PURE__*/__webpack_require__.n(ant_design_vue__WEBPACK_IMPORTED_MODULE_0__);
const autoImports = []
autoImports.push({fullName: 'AButton', tag: 'a-button', component: ant_design_vue__WEBPACK_IMPORTED_MODULE_0__["Button"]})
autoImports.push({fullName: 'APagination', tag: 'a-pagination', component: ant_design_vue__WEBPACK_IMPORTED_MODULE_0__["Pagination"]})
autoImports.push({fullName: 'AAffix', tag: 'a-affix', component: ant_design_vue__WEBPACK_IMPORTED_MODULE_0__["Affix"]})
autoImports.push({fullName: 'ABreadcrumb', tag: 'a-breadcrumb', component: ant_design_vue__WEBPACK_IMPORTED_MODULE_0__["Breadcrumb"]})
autoImports.push({fullName: 'AAutoComplete', tag: 'a-auto-complete', component: ant_design_vue__WEBPACK_IMPORTED_MODULE_0__["AutoComplete"]})
autoImports.push({fullName: 'AInputNumber', tag: 'a-input-number', component: ant_design_vue__WEBPACK_IMPORTED_MODULE_0__["InputNumber"]})
autoImports.push({fullName: 'ADatePicker', tag: 'a-date-picker', component: ant_design_vue__WEBPACK_IMPORTED_MODULE_0__["DatePicker"]})
autoImports.push({fullName: 'ATimePicker', tag: 'a-time-picker', component: ant_design_vue__WEBPACK_IMPORTED_MODULE_0__["TimePicker"]})
autoImports.push({fullName: 'ACalendar', tag: 'a-calendar', component: ant_design_vue__WEBPACK_IMPORTED_MODULE_0__["Calendar"]})
autoImports.push({fullName: 'AXyz', tag: 'a-xyz', component: ant_design_vue__WEBPACK_IMPORTED_MODULE_0__["Xyz"]})
autoImports.push({fullName: 'AAbc', tag: 'a-abc', component: ant_design_vue__WEBPACK_IMPORTED_MODULE_0__["Abc"]})
/* harmony default export */ __webpack_exports__["default"] = (function (Component) {
  const c = Component.options.components
  autoImports.forEach(i => { if (c[i.fullName] == null) c[i.fullName] = i.component })
});
```

Because the generated file was not processed by babel-plugin-import, it is not their bug. The intended output looks like this:

```js
const autoImports = []
import { Button as AButton } from 'ant-design-vue'
autoImports.push({fullName: 'AButton', tag: 'a-button', component: AButton})
import { Pagination as APagination } from 'ant-design-vue'
autoImports.push({fullName: 'APagination', tag: 'a-pagination', component: APagination})
import { Affix as AAffix } from 'ant-design-vue'
autoImports.push({fullName: 'AAffix', tag: 'a-affix', component: AAffix})
import { Breadcrumb as ABreadcrumb } from 'ant-design-vue'
autoImports.push({fullName: 'ABreadcrumb', tag: 'a-breadcrumb', component: ABreadcrumb})
import { AutoComplete as AAutoComplete } from 'ant-design-vue'
autoImports.push({fullName: 'AAutoComplete', tag: 'a-auto-complete', component: AAutoComplete})
import { InputNumber as AInputNumber } from 'ant-design-vue'
autoImports.push({fullName: 'AInputNumber', tag: 'a-input-number', component: AInputNumber})
import { DatePicker as ADatePicker } from 'ant-design-vue'
autoImports.push({fullName: 'ADatePicker', tag: 'a-date-picker', component: ADatePicker})
import { TimePicker as ATimePicker } from 'ant-design-vue'
autoImports.push({fullName: 'ATimePicker', tag: 'a-time-picker', component: ATimePicker})
import { Calendar as ACalendar } from 'ant-design-vue'
autoImports.push({fullName: 'ACalendar', tag: 'a-calendar', component: ACalendar})
import { Xyz as AXyz } from 'ant-design-vue'
autoImports.push({fullName: 'AXyz', tag: 'a-xyz', component: AXyz})
import { Abc as AAbc } from 'ant-design-vue'
autoImports.push({fullName: 'AAbc', tag: 'a-abc', component: AAbc})
export default function (Component) {
  const c = Component.options.components
  autoImports.forEach(i => { if (c[i.fullName] == null) c[i.fullName] = i.component })
}
```
gharchive/issue
2019-02-02T08:17:23
2025-04-01T06:40:52.247006
{ "authors": [ "viruscamp" ], "repo": "viruscamp/vue-cli-plugin-auto-import-tag", "url": "https://github.com/viruscamp/vue-cli-plugin-auto-import-tag/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1261134753
NetworkTransform with clientAuthority set true cannot be moved by server when no client has authority

When you have a game object with a NetworkTransform with clientAuthority set to true, it can only be moved by clients with authority. When no client has authority (i.e. the server has authority), it cannot be moved by the server. Is this intentional behaviour? I would expect the server to be able to move the object when no client has authority. This was the behaviour in previous versions of Mirror.

I believe the issue is in NetworkTransformBase, where the serverBuffer is used when clientAuthority is set and the current client has no authority, on this line.

Demonstration Project

I've set up a simple project with the smallest amount of code. There are only empty player objects and a cube that moves vertically when the server has authority. Start up a build as the client and the Unity editor as host. The cube is moved automatically by the server (when the connection to client is null). When clientAuthority on the Network Transform is checked, the cube stops because the server is not able to move the cube (that's the bug). Uncheck clientAuthority and the cube moves.

https://user-images.githubusercontent.com/1922279/172066366-6160c7be-ac2b-482f-afac-6551ccd1ad4c.mp4

Desktop (please complete the following information):

- OS: Windows
- Build target: Windows
- Unity version: 2021.1.25f
- Mirror branch: release version 66

Many thanks in advance!

FYI I don't need this answered anymore, but leaving it for posterity.

Hello, I updated Mirror in a project and am having that problem now too. I also figured out that moving objects around on the multiple clients has problems. It seems that they have different last states that replace the object when authority is lost. That leads to the problem that objects are not in the same position on the different devices. @brainwipe May I ask how you solved the problem?

Would it make sense to turn clientAuthority on and off on all clients + the server at runtime when needed?

Mirror Version 66.0.9
Unity 2021.3.1f1
Tested devices: MacBook, Windows 10, Oculus Quest 2
Would it make sense to turn on and off the clientAuthority on all clients+server at runtime when needed? Mirror Version 66.0.9 Unity 2021.3.1f1 Tested devices: Mac Book, Windows 10, Oculus Quest 2
gharchive/issue
2022-06-05T19:04:25
2025-04-01T06:40:52.253920
{ "authors": [ "ChrisEL20", "brainwipe" ], "repo": "vis2k/Mirror", "url": "https://github.com/vis2k/Mirror/issues/3174", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
425218743
NetworkTransform should sync Scale Seeing as how NetworkTransform is meant as an easy way to sync the Transform properties of a GameObject, it seems odd that Position and Rotation were included, but Scale was left out. I will need this in my game eventually. I'll try to push it with a ClientRPC. Can probably use existing interpolation pretty easy. bump for this 👍 @Reelix @davoodkharmanzar should be doable. please submit a pull request if you want that change and we will merge it :)
gharchive/issue
2019-03-26T04:13:50
2025-04-01T06:40:52.255668
{ "authors": [ "AnthonE", "Reelix", "davoodkharmanzar", "vis2k" ], "repo": "vis2k/Mirror", "url": "https://github.com/vis2k/Mirror/issues/657", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
502128167
Allow multiple host checks This is based on @Forever-Young's work in https://github.com/vishnubob/wait-for-it/pull/22. I just rebased on top of latest. Hey @bwolfe You can use the Wait4X, It's already supported multiple host checking. Example: wait4x tcp 1.1.1.1:80 1.1.1.1:53 --log-level debug
gharchive/pull-request
2019-10-03T15:12:17
2025-04-01T06:40:52.266733
{ "authors": [ "atkrad", "bwolfe" ], "repo": "vishnubob/wait-for-it", "url": "https://github.com/vishnubob/wait-for-it/pull/79", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1488669236
model not removed from aimodels-lock.json when removed I think in this case we remove the model, and the credentials are independent and would remain confirmed
gharchive/issue
2022-12-10T16:50:18
2025-04-01T06:40:52.269807
{ "authors": [ "neural-loop" ], "repo": "visioninit/aimm", "url": "https://github.com/visioninit/aimm/issues/38", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2025751260
TypeError: Cannot read private member from an object whose class did not declare it when I use setData() <template> <div class="crc-vis-box h-full flex justify-center items-center"> <div id="viz" ref="canvas" class="vis-canvas border"> </div> </div> </template> <script setup lang="ts"> import * as vis from 'vis-network' import type { Network } from 'vis-network' import { dataList } from "@/composables/vis/vis-data"; import { options } from "@/composables/vis/vis-options"; import { nextTick, onMounted, ref, type Ref } from "vue"; const canvas: Ref<Network | undefined> = ref() const networkData = ref(dataList) onMounted(async () => { const container = document.getElementById('viz'); if (container) { canvas.value = new vis.Network(container, networkData.value, options) const newData = { nodes: [{ id: 1, label: 'Node 1' }], edges: [], }; canvas.value?.setData(newData); } else { console.error('Container element not found.'); } }) </script> I use vue3 + vis-network. When I use on to register the event, I hope I can setData when the event is triggered. This problem occurs. 
I tried adding a nextTick-based approach:

```ts
canvas.value?.setData(newData);
```

↓

```ts
canvas.value?.on('click', (params) => {
  console.log(params);
  nextTick(() => {
    canvas.value?.setData(newData);
  });
})
```

The error message went from

```
Uncaught (in promise) TypeError: Cannot read private member from an object whose class did not declare it
    at __classPrivateFieldGet (weak-map.js:1:18)
    at Proxy.clear2 (selection-accumulator.ts:138:5)
    at Proxy.unselectAll (SelectionHandler.js:369:32)
    at Network.setData (Network.js:398:25)
    at VisDDD.vue:27:23
    at runtime-core.esm-bundler.js:2679:88
    at callWithErrorHandling (runtime-core.esm-bundler.js:158:18)
    at callWithAsyncErrorHandling (runtime-core.esm-bundler.js:166:17)
    at hook.__weh.hook.__weh (runtime-core.esm-bundler.js:2659:19)
    at flushPostFlushCbs (runtime-core.esm-bundler.js:325:40)
```

to

```
Uncaught (in promise) TypeError: Cannot read private member from an object whose class did not declare it
    at __classPrivateFieldGet (weak-map.js:1:18)
    at Proxy.clear2 (selection-accumulator.ts:138:5)
    at Proxy.unselectAll (SelectionHandler.js:369:32)
    at Network.setData (Network.js:398:25)
    at VisDDD.vue:30:31
```

I also found what looks like the same question on Stack Overflow: https://stackoverflow.com/questions/76961106/is-visjs-network-supported-by-vue3

I'm facing a bit of a problem and I'm hoping someone can lend me a hand. Would really appreciate your help. Thanks a ton!

Do not use ref; simply define the variable using let or const, and there will be no error accessing private member variables or methods.
gharchive/issue
2023-12-05T09:30:41
2025-04-01T06:40:52.287054
{ "authors": [ "Wuwuyiaewu", "humingxian" ], "repo": "visjs/vis-network", "url": "https://github.com/visjs/vis-network/issues/2092", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1823227623
Check ports in TrackingServer app

In v0.9.8, the Layers app is not getting a connection to the tracking server --> maybe due to port / protocol issues?

The issue was related to a missing `address` property in the request to start a broadcast --> fixed in the Layers app.
gharchive/issue
2023-07-26T21:42:39
2025-04-01T06:40:52.306249
{ "authors": [ "reworc" ], "repo": "visualengineers/reflex", "url": "https://github.com/visualengineers/reflex/issues/30", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2284765150
ensure status in Draft|Published

Ensure only Draft|Published are shown, WRT https://github.com/zazuko/cube-creator/wiki/LINDAS-Specifics#needed-attributes-that-a-cube-shows-up-on-visualizeadminch

There are rare cases with a different status, i.e. https://s.zazuko.com/m6iuJB, which shouldn't be shown.

nb: this PR also makes results more consistent with https://github.com/visualize-admin/visualization-tool/blob/eef59af39594123bacc535484df6654550830fa1/app/rdf/queries.ts#L69-L74

cc @bprusinowski

Thanks @Rdataflow, LGTM! Side note: I think this situation technically shouldn't happen, looking at the below screenshot? Maybe it could be possible with some custom pipeline? 🤔

@bprusinowski it's likely due to a combination of a legacy project and code in C-C that doesn't expire deprecated cubes properly :+1:
gharchive/pull-request
2024-05-08T06:14:27
2025-04-01T06:40:52.309381
{ "authors": [ "Rdataflow", "bprusinowski" ], "repo": "visualize-admin/visualization-tool", "url": "https://github.com/visualize-admin/visualization-tool/pull/1510", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
665737880
Allow overriding task data and taskrc file on the command line

Currently, the taskrc file location is overridable in vit's config.ini file (and indirectly via the TASKRC envvar):

```ini
[taskwarrior]
# Full path to the Taskwarrior configuration file. Tilde will be expanded to the user's home directory.
# NOTE: This setting is overridden by the TASKRC environment variable.
#taskrc = ~/.taskrc
```

TaskWarrior gives two ways to override the config file and data location, via command line or environment:

|  | Config file | Data location |
| --- | --- | --- |
| Command-line argument | `rc:config_file` | `rc.data.location:data_directory` |
| Environment variable | `TASKRC=config_file` | `TASKDATA=data_directory` |

See the CONFIGURATION FILE AND OVERRIDE OPTIONS section in the task(1) manpage for details.

Manually overriding the environment via TASKRC and TASKDATA (used by TaskWarrior) before launching vit currently works, and the shown data / used config are correct.

What would be useful is to be able to specify those via the vit command line, as task does:

```
task rc.data.location:alternateDataLocationDirectory rc:alternateTaskrcConfFile ...
```

This would allow, for instance, completely separate environments, without having to rely on contexts (where the separation is soft).

Linked issues: #257 and #235

See https://github.com/scottkosty/vit/issues/257 for the suggestion there, perhaps a single --default-args switch?

> See #257 for the suggestion there, perhaps a single --default-args switch?

Yes, a --default-args would do, to pass to every taskwarrior command. Care should be taken to respect the precedence of options, so they are applied in order of increasing precedence:

config.ini -> environment variable -> command line option

E.g. taskrc option in config.ini -> TASKRC -> --default-args rc:...

I've abandoned the idea of a --default-args approach, too complicated. I've also abandoned the idea of any VIT-specific overrides for task data, see https://github.com/vit-project/vit/issues/257#issuecomment-691721253 for more.
I'd still consider adding a CLI arg that would allow overriding the task data location -- this is reasonably straightforward, compliments the existing config file option, and allows for more flexibility for those using more complex setups.
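The precedence discussed above (config.ini, then the TASKRC environment variable, then a command-line override) can be sketched as a small resolver. This is a hypothetical helper for illustration only — VIT's actual config loading lives in its own loader module:

```python
import os

def resolve_taskrc(cli_override=None, config_ini_value=None, default="~/.taskrc"):
    """Resolve the taskrc path with increasing precedence:
    built-in default -> config.ini -> TASKRC env var -> command-line override.
    (Illustrative sketch; parameter names are made up for this example.)"""
    value = default
    if config_ini_value:
        value = config_ini_value
    if os.environ.get("TASKRC"):
        value = os.environ["TASKRC"]
    if cli_override:
        value = cli_override
    return os.path.expanduser(value)
```

Each later source simply overwrites the earlier one, which matches the "applied in order of increasing precedence" rule from the discussion.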
gharchive/issue
2020-07-26T08:45:47
2025-04-01T06:40:52.318015
{ "authors": [ "fdela", "thehunmonkgroup" ], "repo": "vit-project/vit", "url": "https://github.com/vit-project/vit/issues/258", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
15206523
I have jQuery, Bootstrap, jQuery-ui — which one should I use?

Hi, with jQuery + Bootstrap + jQuery-ui (all latest versions at time of writing), the following errors were encountered for each package used:

- Bootstrap error: popover is not defined
- jQuery error: $.fn.editableutils is undefined
- jQuery-ui error: $.fn.editableutils is undefined

Can you kindly advise which package I should use? The initialization used is very simple:

```js
$('#edit').editable();
```

P.S. it works when I remove jQuery-ui, I have no idea why?

I just had the same error. The solution was to move the wysihtml5 files (meaning - and Basically all CSS should be loaded first. Then the JavaScript libraries. Not mixed. And - see jQuery - the jquery.js has to be loaded first, then jquery-ui. Same goes for Bootstrap.
gharchive/issue
2013-06-06T05:24:44
2025-04-01T06:40:52.321149
{ "authors": [ "BlueManLine", "clw", "osworx" ], "repo": "vitalets/x-editable", "url": "https://github.com/vitalets/x-editable/issues/260", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1801095521
Manifest file is not put in the correct location

I am using Laravel + Vue to build an application. In order to get the service worker to generate in the correct location, I am using the following configuration:

```js
{
  injectRegister: 'null',
  outDir: 'public/',
  scope: '/',
  base: '/',
  buildBase: '/',
  workbox: {
    globPatterns: ['**/*.{js,css}'],
    navigateFallback: null,
  },
}
```

However, this leaves the manifest.webmanifest file in /public/build/manifest.webmanifest. I tried adding the manifest.publicPath option to the config, but it doesn't work, and when doing a global search of the source code I don't see any usages of it.

How would I either get the manifest.webmanifest file to be put in /public/ instead of /public/build/, or get the file URL in the sw.js to use /build/manifest.webmanifest instead of /manifest.webmanifest?

@userquin I do see that there are some weird settings in the Laravel plugin, but it seems like the manifest file is getting put in the wrong location. Do you have any suggestions?

@likeadeckofcards the Laravel plugin is setting the Vite outDir to that folder, check my comment in this issue https://github.com/vite-pwa/vite-plugin-pwa/issues/467#issuecomment-1427998051
gharchive/issue
2023-07-12T14:24:18
2025-04-01T06:40:52.334780
{ "authors": [ "likeadeckofcards", "userquin" ], "repo": "vite-pwa/vite-plugin-pwa", "url": "https://github.com/vite-pwa/vite-plugin-pwa/issues/547", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
823773021
feat: upgrade mdx

This gives us type safety for MDX options. Not sure if we want to wait until MDX 2.0 is officially released?

Whoops, I forgot to check on the status of #6 :)
gharchive/pull-request
2021-03-06T23:04:45
2025-04-01T06:40:52.336131
{ "authors": [ "aleclarson" ], "repo": "vitejs/vite-plugin-react-pages", "url": "https://github.com/vitejs/vite-plugin-react-pages/pull/13", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
777663500
Automated releases

Automated releases save us the hassle of manually cutting releases and updating a changelog, and users get the latest changes as soon as they're merged. More info: https://intuit.github.io/auto/

All change logs are already automated. Publishing each package is as simple as running `yarn release`. I don't trust automated releases.

> All change logs are already automated. Publishing each package is as simple as running `yarn release`. I don't trust automated releases.

What's there to not trust? Opening a terminal, navigating to your vite clone, and running `yarn release` is enough friction to discourage you from doing it every time you merge a PR. And what if a maintainer without publish privileges merges a PR while you're focused on other projects? Ideally, fixes and features are immediately available once merged, so users don't have to wait arbitrary amounts of time.

> What's there to not trust? Opening a terminal, navigating to your vite clone, and running `yarn release` is enough friction to discourage you from doing it every time you merge a PR. And what if a maintainer without publish privileges merges a PR while you're focused on other projects? Ideally, fixes and features are immediately available once merged, so users don't have to wait arbitrary amounts of time.

That's exactly what I don't like: the fact that things can be released without me being aware of it.
gharchive/issue
2021-01-03T15:46:45
2025-04-01T06:40:52.339931
{ "authors": [ "aleclarson", "yyx990803" ], "repo": "vitejs/vite", "url": "https://github.com/vitejs/vite/issues/1324", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
915217276
PostCSS dependencies are not registered correctly in some cases

~Describe the bug~

~It's hard to explain this one without referring to the reproduction: https://github.com/bradlc/vite-module-bug~

~In this example main.js is registered as a PostCSS dependency, but editing it does not trigger a CSS rebuild. I am not too familiar with the vite codebase but the issue seems to be related to this section of code:~

```js
const depModules = new Set(
  [...deps].map((file) => moduleGraph.createFileOnlyEntry(file))
)
```

~Should this be checking for an existing module, something like this?~

```js
const depModules = new Set(
  [...deps].map((file) =>
    moduleGraph.getModuleById(file) ?? moduleGraph.createFileOnlyEntry(file)
  )
)
```

~Again, I am not familiar with the code so I might be way off here, but this change seemed to help in my testing.~

Describe the bug

The above example seems to have been fixed by e048114, but the issue is still present for `.vue` files. When registering a `.vue` file as a PostCSS dependency, the CSS is not rebuilt when that file changes.

Reproduction

https://github.com/bradlc/vite-vue-bug

System Info

Output of `npx envinfo --system --npmPackages vite,@vitejs/plugin-vue --binaries --browsers`:

```
System:
  OS: macOS 11.2.3
  CPU: (16) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
  Memory: 489.48 MB / 32.00 GB
  Shell: 5.8 - /bin/zsh
Binaries:
  Node: 16.0.0 - /var/folders/qw/ffpg9q8n6sgdyqlt84m2pxmr0000gn/T/fnm-shell-8843510/bin/node
  Yarn: 1.22.10 - /var/folders/qw/ffpg9q8n6sgdyqlt84m2pxmr0000gn/T/fnm-shell-8843510/bin/yarn
  npm: 7.10.0 - /var/folders/qw/ffpg9q8n6sgdyqlt84m2pxmr0000gn/T/fnm-shell-8843510/bin/npm
Browsers:
  Chrome: 91.0.4472.77
  Chrome Canary: 93.0.4536.0
  Firefox: 88.0.1
  Safari: 14.0.3
  Safari Technology Preview: 14.2
npmPackages:
  vite: ^2.3.7 => 2.3.7
```

Used package manager: npm

Quick follow-up after a bit more testing: the same issue occurs when registering `.vue` files as dependencies, but the small change I made (`moduleGraph.getModuleById(file)`) did not help in that case. Perhaps Vue modules are more complex?

I believe this same issue is present for svelte files in SvelteKit. Having issues getting JIT working with TailwindCSS in SvelteKit.

> I believe this same issue is present for svelte files in SvelteKit. Having issues getting JIT working with TailwindCSS in SvelteKit.

@joshpierce The problem has been logged in tailwindcss's documentation: use TAILWIND_MODE=watch while using JIT mode. Hope this helps.

The updated repro works fine with the latest dep versions, and actual Tailwind JIT also seems to be working just fine with Vue files. Closing.
gharchive/issue
2021-06-08T16:10:48
2025-04-01T06:40:52.347348
{ "authors": [ "beetaa", "bradlc", "joshpierce", "yyx990803" ], "repo": "vitejs/vite", "url": "https://github.com/vitejs/vite/issues/3717", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
933270920
builtin-modules doesn't include fs/promises causing vite failing to recognize builtin module

Describe the bug

fs/promises isn't included in the array given by the builtin-modules package, but it is in `import { builtinModules } from 'module';`. This causes Vite to give this error:

```
Failed to resolve import "fs/promises" from "src\routes\api\get-file-content.ts". Does the file exist?
Error: Failed to resolve import "fs/promises" from "src\routes\api\get-file-content.ts". Does the file exist?
    at formatError (C:\workspace\m7d\vite\packages\vite\dist\node\server\pluginContainer.js:173:46)
    at TransformContext.error (C:\workspace\m7d\vite\packages\vite\dist\node\server\pluginContainer.js:169:19)
    at normalizeUrl (C:\workspace\m7d\vite\packages\vite\dist\node\plugins\importAnalysis.js:126:26)
    at async TransformContext.transform (C:\workspace\m7d\vite\packages\vite\dist\node\plugins\importAnalysis.js:259:57)
    at async Object.transform (C:\workspace\m7d\vite\packages\vite\dist\node\server\pluginContainer.js:374:30)
    at async Object.transformRequest (C:\workspace\m7d\vite\packages\vite\dist\node\server\transformRequest.js:122:29)
    at async instantiateModule (C:\workspace\m7d\vite\packages\vite\dist\node\ssr\ssrModuleLoader.js:44:10)
```

Reproduction

Use `import "fs/promises"`. I'll provide a repo in the next days.

System Info

```
System:
  OS: Windows 10 10.0.19042
  CPU: (16) x64 AMD Ryzen 9 4900H with Radeon Graphics
  Memory: 17.22 GB / 31.42 GB
Binaries:
  Node: 16.2.0 - C:\Program Files\nodejs\node.EXE
  Yarn: 1.22.10 - ~\AppData\Roaming\npm\yarn.CMD
  npm: 7.13.0 - C:\Program Files\nodejs\npm.CMD
Browsers:
  Edge: Spartan (44.19041.1023.0), Chromium (91.0.864.59)
  Internet Explorer: 11.0.19041.1
```

Used Package Manager: npm

Logs

```
Failed to resolve import "fs/promises" from "src\routes\api\get-file-content.ts". Does the file exist?
Error: Failed to resolve import "fs/promises" from "src\routes\api\get-file-content.ts". Does the file exist?
    at formatError (C:\workspace\m7d\vite\packages\vite\dist\node\server\pluginContainer.js:173:46)
    at normalizeUrl (C:\workspace\m7d\vite\packages\vite\dist\node\plugins\importAnalysis.js:126:26)
    at async TransformContext.transform (C:\workspace\m7d\vite\packages\vite\dist\node\plugins\importAnalysis.js:259:57)
    at async Object.transform (C:\workspace\m7d\vite\packages\vite\dist\node\server\pluginContainer.js:374:30)
    at async Object.transformRequest (C:\workspace\m7d\vite\packages\vite\dist\node\server\transformRequest.js:122:29)
    at async instantiateModule (C:\workspace\m7d\vite\packages\vite\dist\node\ssr\ssrModuleLoader.js:44:10)
```

Validations

- [x] Follow our Code of Conduct
- [x] Read the Contributing Guidelines.
- [x] Read the docs.
- [x] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
- [x] Make sure this is a Vite issue and not a framework-specific issue. For example, if it's a Vue SFC related bug, it should likely be reported to https://github.com/vuejs/vue-next instead.
- [x] Check that this is a concrete bug. For Q&A open a GitHub Discussion or join our Discord Chat Server.

My temporary workaround is to specify fs/promises as external in the rollup options:

```js
// vite.config.js
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    rollupOptions: {
      external: ["fs/promises"],
    },
  },
});
```

chuanqisun's workaround doesn't seem to work with current SvelteKit, i.e. adding that config to svelte.config.js > kit > vite — it still cannot import fs/promises without Svite throwing a JSON error.

@sveltejs/kit 1.0.0-next.1
svelte 3.42.6
gharchive/issue
2021-06-30T03:29:13
2025-04-01T06:40:52.355147
{ "authors": [ "chuanqisun", "drschwabe", "raythurnevoid" ], "repo": "vitejs/vite", "url": "https://github.com/vitejs/vite/issues/4037", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2427227519
feat: environment api config options rework

Description

Building on top of: https://github.com/vitejs/vite/pull/17753

For reference, before #17753, we had:

- `environment.config` is the top level config (i.e. `environment.config.root`)
- `environment.options` is the `ResolvedEnvironmentOptions`, equivalent to `environment.config.environments[environment.name]` (i.e. `environment.options.resolve.conditions`)

The motivation for #17753 is that `environment.config` being the top level config (the shared config instance that has the default values) is error prone — `environment.config.resolve.conditions` should never be used. We discussed deprecating these defaults from `ResolvedConfig` and then removing them, but that will take a while. We could make the type of `environment.config` more strict even if the default options are in the object, but there are other issues.

Having the top level config as `environment.getTopLevelConfig()` brought two things to the spotlight:

- Most of the access to the config is for `root` and `base`. It would be good to have a more ergonomic way to access these instead of `environment.getTopLevelConfig().root`.
- We may make other config options per-environment in the future. Every time we do it, users will need to move from `config.flag` to `environment.options.flag`.

This PR leaves `environment.getTopLevelConfig()` for when the shared instance is needed, and removes `environment.options` in favor of `environment.config` that has type `ResolvedConfig & ResolvedEnvironmentOptions` (maybe the type could be improved). It is currently implemented as:

```ts
this.config = new Proxy(
  options as ResolvedConfig & ResolvedEnvironmentOptions,
  {
    get: (target, prop: keyof ResolvedConfig) => {
      if (prop === 'logger') {
        return this.logger
      }
      if (prop in target) {
        return this._options[prop as keyof ResolvedEnvironmentOptions]
      }
      return this._topLevelConfig[prop]
    },
  },
)
```

This solves the two issues above and avoids confusion. `environment.config` always returns the configuration for this environment (it doesn't matter if the options are per-environment or shared). There are no longer issues with users accessing the defaults by mistake.

Notes: The PR also changes the `ssr` flag for `EnvironmentOptions` introduced at https://github.com/vitejs/vite/pull/16471/commits/90185f793247023e5d2464ff38fe8929582acf28 because it collides with the `ssr` object in `ResolvedConfig`. This was confusing in that commit already, but I couldn't come up with a better name. We discussed with @sheremet-va and settled on renaming it to `consumer: 'client' | 'server'` for now.

Proxy looks good to me. If `environment.config.xxx` would suffice for most of the cases, when would users need to go `environment.getTopLevelConfig().xxx`?

We still have many internal APIs that take a `ResolvedConfig` instead of an environment. So in those cases you can use `environment.getTopLevelConfig()` to get the shared instance. I think a lot of these APIs did that just because it was a more comfortable way to access `root` and `base` though, and probably later on they could be reworked.

We have some other cases where we use the config as the key of a cache (the fs tree cache for example). `environment.getTopLevelConfig().xxx` would not be a pattern we see used.
gharchive/pull-request
2024-07-24T10:41:57
2025-04-01T06:40:52.363410
{ "authors": [ "patak-dev" ], "repo": "vitejs/vite", "url": "https://github.com/vitejs/vite/pull/17756", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1225120277
test(css): fix postcss cleanid test

Description

Fixes the test of #7827. It was failing because #7807 minifies the content and comments were removed.

Additional context

What is the purpose of this pull request?

- [ ] Bug fix
- [ ] New Feature
- [ ] Documentation update
- [x] Other

Before submitting the PR, please make sure you do the following

- [x] Read the Contributing Guidelines.
- [x] Read the Pull Request Guidelines and follow the Commit Convention.
- [x] Check that there isn't already a PR that solves the problem the same way to avoid creating a duplicate.
- [x] Provide a description in this PR that addresses what the PR is solving, or reference the issue that it solves (e.g. fixes #123).
- [ ] Ideally, include relevant tests that fail without this PR but pass with it.

Thanks for the quick fix! ❤️
gharchive/pull-request
2022-05-04T09:20:56
2025-04-01T06:40:52.368334
{ "authors": [ "patak-dev", "sapphi-red" ], "repo": "vitejs/vite", "url": "https://github.com/vitejs/vite/pull/8013", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2012516806
JSDOM 23.0.0 atob endless recursion

Describe the bug

We have a test case that uses JSDOM and tests a function that calls `atob`. Since the upgrade to JSDOM 23.0.0, this test hangs.

Reproduction

A very short way to reproduce this is:

```js
import {JSDOM} from "jsdom";
import {populateGlobal} from "vitest/environments";

const jsdom = new JSDOM()
populateGlobal(global, jsdom.window)
atob("dGVzdAo=")
```

It yields the error

```
[DOMException [InvalidCharacterError]: The string to be decoded contains invalid characters.]
```

which is a bit misleading, as JSDOM throws this whenever the global `atob` throws any error. Adding a `console.log` statement to JSDOM's `atob` implementation shows that `atob` calls itself, resulting in an endless recursion.

System Info

```
System:
  OS: macOS 13.6
  CPU: (10) arm64 Apple M1 Pro
  Memory: 2.64 GB / 32.00 GB
  Shell: 5.9 - /bin/zsh
Binaries:
  Node: 18.16.0 - ~/.asdf/installs/nodejs/18.16.0/bin/node
  Yarn: 1.22.19 - ~/.asdf/installs/nodejs/18.16.0/bin/yarn
  npm: 9.5.1 - ~/.asdf/plugins/nodejs/shims/npm
  bun: 0.6.7 - ~/.bun/bin/bun
Browsers:
  Brave Browser: 118.1.59.117
  Chrome: 119.0.6045.159
  Safari: 17.0
npmPackages:
  @vitest/ui: 0.34.6 => 0.34.6
  vite: 5.0.2 => 5.0.2
  vitest: 0.34.6 => 0.34.6
```

Used Package Manager: npm

Validations

- [x] Follow our Code of Conduct
- [x] Read the Contributing Guidelines.
- [x] Read the docs.
- [x] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
- [x] Check that this is a concrete bug. For Q&A open a GitHub Discussion or join our Discord Chat Server.
Work-around for now is to provide an `atob` global in test.setupFiles:

```ts
import { atob as NodeAtob } from "buffer";
globalThis.atob = NodeAtob;
```

Looks like JSDOM now relies on Node's atob: https://github.com/jsdom/jsdom/pull/3625/files#diff-b5cd5c96785357dc930f47c18b45d1626b467e8c16068720a31c0cfc0d8344d3L18

There is now a related issue at JSDOM: https://github.com/jsdom/jsdom/pull/3625

This is an issue with how Vitest overrides globals, so I would expect it to be fixed on our side. happy-dom also had a similar problem with setTimeout at one point.

As a workaround, you can also do this in your setup file:

```ts
globalThis.atob = (b64Str: string) => Buffer.from(b64Str, `base64`).toString(`binary`);
```

Reference

> import { atob as NodeAtob } from "buffer";
> globalThis.atob = NodeAtob;

This worked for me, but I had to put the lines into a beforeAll block.

Fixed in da7949dcd056a00c39ed6a163562cedd463a6ca8
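A quick sanity check (plain Node, no Vitest or JSDOM needed) that the Buffer-based fallback decodes the same base64 string used in the reproduction above:

```javascript
// The Buffer round-trip the workaround relies on: base64 -> binary string.
const atobFallback = (b64) => Buffer.from(b64, "base64").toString("binary");

const decoded = atobFallback("dGVzdAo="); // the base64 string from the repro
console.log(JSON.stringify(decoded));     // "test\n"
```

This is what `atob` is expected to return for that input, so the fallback is a drop-in replacement for the broken global.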
gharchive/issue
2023-11-27T15:20:10
2025-04-01T06:40:52.380280
{ "authors": [ "AriPerkkio", "DercilioFontes", "nils4cosee", "sheremet-va" ], "repo": "vitest-dev/vitest", "url": "https://github.com/vitest-dev/vitest/issues/4602", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2384180366
fix(vitest): allow testing unandled rejection/exception

Description

Fixes #5796

Please don't delete this checklist! Before submitting the PR, please make sure you do the following:

- [ ] It's really useful if your PR references an issue where it is discussed ahead of time. If the feature is substantial or introduces breaking changes without a discussion, PR might be closed.
- [ ] Ideally, include a test that fails without this PR but passes with it.
- [ ] Please, don't make changes to pnpm-lock.yaml unless you introduce a new test example.

Tests

- [ ] Run the tests with pnpm test:ci.

Documentation

- [ ] If you introduce new functionality, document it. You can run documentation with pnpm run docs command.

Changesets

- [ ] Changes in changelog are generated from PR name. Please, make sure that it explains your changes in an understandable manner. Please, prefix changeset messages with feat:, fix:, perf:, docs:, or chore:.

Just in case anyone else comes across this looking for an example of how to test for an unhandled rejection or exception, some examples from the test file:

```ts
test('can test unhandled rejection', async () => {
  const fn = vi.fn()
  const promise = new Promise<void>((resolve) => {
    process.on('unhandledRejection', () => {
      fn()
      resolve()
    })
  })
  Promise.resolve().then(() => {
    throw new Error('unhandled rejection')
  })
  await promise
  expect(fn).toHaveBeenCalledTimes(1)
})

test('can test unhandled exception', async () => {
  const fn = vi.fn()
  const promise = new Promise<void>((resolve) => {
    process.on('uncaughtException', () => {
      fn()
      resolve()
    })
  })
  nextTick(() => {
    throw new Error('unhandled exception')
  })
  await promise
  expect(fn).toHaveBeenCalledTimes(1)
})
```
gharchive/pull-request
2024-07-01T15:47:55
2025-04-01T06:40:52.385350
{ "authors": [ "longzheng", "sheremet-va" ], "repo": "vitest-dev/vitest", "url": "https://github.com/vitest-dev/vitest/pull/6016", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1160600915
feat: loading animation for beforeEach, beforeAll, afterEach, afterAll

fix: #338

Loading animation for beforeEach, beforeAll, afterEach, afterAll:

- `interface Task`: add `hooks`
- use hooks to save the suite/test hook execution state

Seems good, thanks!
gharchive/pull-request
2022-03-06T12:41:59
2025-04-01T06:40:52.387179
{ "authors": [ "AlaaZorkane", "poyoho" ], "repo": "vitest-dev/vitest", "url": "https://github.com/vitest-dev/vitest/pull/898", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2267438440
Add a warning about cluster_dns and service_cidr in examples

As mentioned in #351. There is already a note about that in the README, but I thought it might help to have it prominently in the sample configuration as well.

Ah, you did it already and it's just text, so I can merge now :) Thanks!
gharchive/pull-request
2024-04-28T08:56:34
2025-04-01T06:40:52.389488
{ "authors": [ "jpetazzo", "vitobotta" ], "repo": "vitobotta/hetzner-k3s", "url": "https://github.com/vitobotta/hetzner-k3s/pull/352", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
35279556
Reputation system

Refactor User Profile, add an integer field to store the user's reputation:

- Accepted answer: +10 reputation
- Liked article: +5 reputation
- Favorited question: +3 reputation
- Liked feed: +1 reputation
- Answer up vote: +1 reputation

Display reputation on the user's profile page and on feed/question pages.

This issue is too old now, closing it and adding it to #66
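The point values above can be sketched as a tiny scoring function. This is an illustrative sketch only — the event names here are made up for the example and are not Bootcamp's actual model fields:

```python
# Reputation values taken from the issue; unknown events score zero.
REPUTATION_POINTS = {
    "accepted_answer": 10,
    "liked_article": 5,
    "favorited_question": 3,
    "liked_feed": 1,
    "answer_upvote": 1,
}

def compute_reputation(events):
    """Sum reputation points for an iterable of event names."""
    return sum(REPUTATION_POINTS.get(event, 0) for event in events)
```

Storing only the integer total on the profile (as the issue proposes) means this function would run whenever one of the listed events fires, incrementing the stored field.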
gharchive/issue
2014-06-09T12:07:40
2025-04-01T06:40:52.391290
{ "authors": [ "sebastian-code", "vitorfs" ], "repo": "vitorfs/bootcamp", "url": "https://github.com/vitorfs/bootcamp/issues/19", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
952136741
No connectin with MySQL

Hi, we are trying to install, but it says "No connectin with MySQL". We use VestaCP with MySQL. phpMyAdmin works, and connecting with remote software works, but not HopeBilling — with either localhost or the public IP. Thanks for helping.

Now it shows:

```
Warning: Illegal string offset 'db_host' in
```
gharchive/issue
2021-07-24T19:42:03
2025-04-01T06:40:52.402674
{ "authors": [ "Gregory-Hepicloud" ], "repo": "vityabond/hopebilling", "url": "https://github.com/vityabond/hopebilling/issues/60", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1446431269
Ugly way to find out if running in deferred task?

Currently (in viur-core >= 3.3.0) this is the only valid way to find out if code is running within a deferred task:

```python
getattr(utils.currentRequest.get(), "DEFERRED_TASK_CALLED", False)
```

This is because `DEFERRED_TASK_CALLED` is only a member of the Request when it was invoked from a deferred task (`@CallDeferred`). In viur-core < 3.3.0, the variable `DEFERRED_TASK_CALLED` was also misspelled `DEFERED_TASK_CALLED`, which was fixed by #508.

This issue is a request for providing a flag `is_deferred` (similar to `isDevelopmentServer`) in the current Request object, to decide whether code is executed within a deferred call or not.

Resolved by #556
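The requested flag amounts to wrapping the `getattr` dance in a property with a safe default. A minimal sketch (the class and attribute wiring here are hypothetical stand-ins — the real object is viur-core's current request, and `@CallDeferred` is what would set the marker):

```python
class Request:
    """Stand-in for viur-core's request object, for illustration only."""

    @property
    def is_deferred(self):
        # Defaults to False when the marker attribute was never set,
        # replacing getattr(request, "DEFERRED_TASK_CALLED", False) at call sites.
        return getattr(self, "DEFERRED_TASK_CALLED", False)

req = Request()
normal = req.is_deferred           # False: marker never set
req.DEFERRED_TASK_CALLED = True    # what a deferred invocation would set
deferred = req.is_deferred         # True
```

Call sites then read `request.is_deferred` without caring whether the marker attribute exists.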
gharchive/issue
2022-11-12T12:28:13
2025-04-01T06:40:52.410163
{ "authors": [ "phorward" ], "repo": "viur-framework/viur-core", "url": "https://github.com/viur-framework/viur-core/issues/550", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
174044474
datamodel: update documentation for getPropType

The documentation indicates that `getPropType()` returns a name (string), but it actually returns the type instance (or None). Update the documentation to reflect this.

Feel free to close this issue and open a bug that the documentation was correct and the implementation wrong, and I'll take a stab at fixing that. Returning the name (a string) is as easy as:

```diff
-return self.getDataType( pdef[1].get('ptype') )
+return self.getDataType( pdef[1].get('ptype') ).name
```
gharchive/pull-request
2016-08-30T15:04:48
2025-04-01T06:40:52.418874
{ "authors": [ "williballenthin" ], "repo": "vivisect/synapse", "url": "https://github.com/vivisect/synapse/pull/43", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
329591389
Build on macOS High Sierra

I did everything as described in the readme.

```
brew install dub dmd curl openssl mpv
```

No problems with this step.

```
brew install homebrew/dupes/ncurses
Error: homebrew/dupes was deprecated. This tap is now empty as all its formulae were migrated.
```

But this works:

```
brew install ncurses
brew doctor
brew link ncurses -force
```

Also no real problems here, I only added this to .bash_profile:

```
export PATH="/usr/local/opt/ncurses/bin:/usr/local/sbin:$PATH"
```

```
git clone https://github.com/vk-cli/vk
cd vk
git checkout VER
dub build
```

The first time:

```
dub build
Running pre-generate commands for vk...
version string: 0.7.6
Performing "debug" build using dmd for x86_64.
ncurses ~master: target for configuration "minimal" is up to date.
vk ~master: building configuration "debug"...
source/app.d(378,84): Deprecation: integral promotion not done for -to(1), use '-transition=intpromote' switch or -cast(int)(to(1))
Reticulating splines...
Linking...
Undefined symbols for architecture x86_64:
  "__D9vkversion14currentVersionxAya", referenced from:
      __Dmain in vk.o
      __D5utils7initdbmFZv in vk.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Error: linker exited with status 1
dmd failed with exit code 1.
```

If I run dub build again:

```
Running pre-generate commands for vk...
version string: 0.7.6
Performing "debug" build using dmd for x86_64.
ncurses ~master: target for configuration "minimal" is up to date.
vk ~master: building configuration "debug"...
source/app.d(378,84): Deprecation: integral promotion not done for -to(1), use '-transition=intpromote' switch or -cast(int)(to(1))
Reticulating splines...
Linking...
To force a rebuild of up-to-date targets, run again with --force.
```

Trying to run it:

```
object.Exception@std/process.d(3339): Environment variable not found: LANG
4   vk              0x0000000102db0fda pure @safe void std.exception.bailOut!(Exception).bailOut(immutable(char)[], ulong, const(char[])) + 126
5   vk              0x0000000102db0f56 pure @safe bool std.exception.enforce!().enforce!(bool).enforce(bool, lazy const(char)[], immutable(char)[], ulong) + 106
6   vk              0x0000000102eb09dd @safe immutable(char)[] std.process.environment.opIndex(const(char[])) + 89
7   vk              0x0000000102e20f5f void localization.setEnvLanguage() + 27
8   vk              0x0000000102daebc0 void app.init() + 52
9   vk              0x0000000102da3f39 _Dmain + 769
10  vk              0x0000000102e6e12f void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])).runAll().__lambda1() + 39
11  vk              0x0000000102e6dfbf void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])).tryExec(scope void delegate()) + 31
12  vk              0x0000000102e6e09a void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])).runAll() + 138
13  vk              0x0000000102e6dfbf void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])).tryExec(scope void delegate()) + 31
14  vk              0x0000000102e6df2d _d_run_main + 485
15  vk              0x0000000102da4341 main + 33
16  libdyld.dylib   0x00007fff55a83014 start + 0
17  ???             0x0000000000000000 0x0 + 0
```

Try LANG="en_US.UTF-8" ./vk ?

> LANG="en_US.UTF-8" ./vk

It works :D
gharchive/issue
2018-06-05T19:21:45
2025-04-01T06:40:52.433484
{ "authors": [ "dimkaxdx", "x0wllaar" ], "repo": "vk-cli/vk", "url": "https://github.com/vk-cli/vk/issues/118", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
367083152
Function "Hello" does not presented in serverless.yml
When I try to execute the step functions offline, I get the error 'Function "Hello" does not presented in serverless.yml'. Below is the console output:

```
Serverless: Preparing....
Serverless: Trying to find state "hellostepfunc" in serverless.yml
Serverless: Building StepWorkFlow
Serverless: Function "Hello" does not presented in serverless.yml
```

My serverless.yml is below:

```yaml
package:
  exclude:
    - node_modules/**

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-southeast-2

functions:
  hello:
    handler: hello.hello
  bye:
    handler: bye.bye

stepFunctions:
  stateMachines:
    hellostepfunc:
      events:
        - http:
            path: hello
            method: POST
      definition:
        Comment: "An example app using step-functions and api gateway"
        StartAt: Hello
        States:
          Hello:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:${self:service}-${opt:stage}-hello"
            Next: Bye
          Bye:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:${self:service}-${opt:stage}-bye"
            End: true

custom:
  stepFunctionsOffline:
    stepOne: hello
    stepTwo: bye

plugins:
  - serverless-step-functions
  - serverless-pseudo-parameters
  - serverless-offline
  - serverless-step-functions-offline
```

Hi @anandrajj. Sorry for the long response. Yes, this happens because your settings are not specified correctly for the plugin. In the stepFunctionsOffline section you need to specify an object like [name of step function state]: [name of function in serverless.yml]. In your case it should be:

```yaml
custom:
  stepFunctionsOffline:
    Hello: hello
    Bye: bye
```
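The error itself is just a name-lookup failure: the plugin resolves each state entry under stepFunctionsOffline against the functions section of serverless.yml. A hypothetical Python sketch of that check (names taken from the thread; this is not the plugin's actual code, and it echoes the plugin's error message verbatim, grammar included):

```python
functions = {"hello": "hello.hello", "bye": "bye.bye"}     # from serverless.yml
step_functions_offline = {"Hello": "hello", "Bye": "bye"}  # state name -> function name

def resolve(states: dict, funcs: dict) -> list:
    """Fail the same way the plugin does when a state maps to an unknown function."""
    missing = [fn for fn in states.values() if fn not in funcs]
    if missing:
        # message copied verbatim from the plugin's output
        raise KeyError(f'Function "{missing[0]}" does not presented in serverless.yml')
    return list(states)

print(resolve(step_functions_offline, functions))  # ['Hello', 'Bye']
```

With the original mapping (stepOne: hello) the state names Hello and Bye never resolve, which is exactly the failure in the console output above.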
gharchive/issue
2018-10-05T06:57:12
2025-04-01T06:40:52.447744
{ "authors": [ "anandrajj", "vkkis93" ], "repo": "vkkis93/serverless-step-functions-offline", "url": "https://github.com/vkkis93/serverless-step-functions-offline/issues/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
558541392
[Umbrella Issue] Add support for io-related methods
This is an umbrella issue for IO support. The following examples show what the implementation could look like:

Read input

```go
var val string = e.IO.ReadIn("prompt")
```

File

```go
var file File = e.File.Open("path")
```

Alias

```go
val := e.ReadIn(<prompt>)
var lines []string = e.Cat(<file path>)
var file File = e.Open("path")
```

Write output

```go
e.IO.Write("string")       // writes to stdout
e.IO.Write("string", path) // writes to the file at path
```

File output

```go
var file File = e.File.Open("path")
file.Write("data")
```

Implemented in release https://github.com/vladimirvivien/gexe/releases/tag/v0.1.0
gharchive/issue
2020-02-01T13:30:38
2025-04-01T06:40:52.470398
{ "authors": [ "vladimirvivien" ], "repo": "vladimirvivien/gexe", "url": "https://github.com/vladimirvivien/gexe/issues/12", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
562494570
Improve logging of array parameters in BasicBinder Before: binding parameter [3] as [ARRAY] - [[Lcom.xxx.yyy.zzz.xxx.api.SampleTriggers;@5c3ef336] Now: binding parameter [3] as [ARRAY] - [[PURCHASE_CONFIRMATION]::SampleTriggers[]] Thanks for the Pull Request. I'll review it when I have some time. I reopened it because I thought the issue was about truncating the array. I'll integrate it without truncation. The purpose of truncation was to avoid possible megabytes of data logged as nobody can know the array lengths in general. But you are right, it was mainly not about truncation. Thanks, I merged it.
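The "Now" format above (the element values plus the logical element type, instead of the JVM default Object.toString() output of class name and identity hash) can be reproduced with a simple formatter. This Python sketch only illustrates the idea behind the change; it is not the Hibernate BasicBinder code:

```python
def format_array_param(values, element_type: str) -> str:
    """Render an array bind parameter as its values and logical type,
    e.g. [PURCHASE_CONFIRMATION]::SampleTriggers[] rather than
    [Lcom.xxx.SampleTriggers;@5c3ef336 (class name + identity hash)."""
    return f"[{','.join(str(v) for v in values)}]::{element_type}[]"

print(format_array_param(["PURCHASE_CONFIRMATION"], "SampleTriggers"))
# [PURCHASE_CONFIRMATION]::SampleTriggers[]
```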
gharchive/pull-request
2020-02-10T11:19:53
2025-04-01T06:40:52.472621
{ "authors": [ "dpolivaev", "vladmihalcea" ], "repo": "vladmihalcea/hibernate-types", "url": "https://github.com/vladmihalcea/hibernate-types/pull/182", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1998188569
using vllm to answer in Qwen-7B-Chat, there is a recurring issue of answers being repeated multiple times, When using vllm to answer in Qwen-7B-Chat, there is a recurring issue of answers being repeated multiple times, which is not present when not using vllm. GPU: 2 T4 MODEL: Qwen-7B-Chat PROMPT: <指令>根据已知信息,简洁和专业的来回答问题。如果无法从中得到答案,请说 \n"根据已知信息无法回答该问题,请使用随手拍进行提问,随手拍使用路径:手机移动办公-工作台-数字直通车"。不允许在答案中添加编造成分,答案请使用中文,结果以markdown的形式输出。</指令>\n\n<已知信息>问题:是否可以先开户、后面再补上门核实和法人开户意愿核实工作? 答案:上门核实可以后补,法人开户意愿核实需要提前或同步完成。\n问题:客户签约财资需要哪几个步骤(大步骤,具体的步骤可以参考操作文档) 答案:1、客户确认签约财资的方案;\n2、上级单位与下级单位完成账户使用授权(如无下级单位或关联单位,则跳过这一步);\n3、在前台签署相关协议,录入系统;\n4、经办行引导客户登录财资,并协助客户配置相关设置及参数;\n问题:营业执照刚刚注册好,工商系统还查不到,能开户吗? 答案:在风险可控的情况下,确定客户开户信息真实,可以开户。</已知信息>\n\n<问题>企业开户流程</问题> INFO 11-20 08:54:05 async_llm_engine.py:370] Received request 8b2eab1b928747abb9d544a6b2a5a37a: prompt: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<指令>根据已知信息,简洁和专业的来回答问题。如果无法从中得到答案,请说 \n"根据已知信息 无法回答该问题,请使用随手拍进行提问,随手拍使用路径:手机移动办公-工作台-数字直通车"。不允许在答案中添加编造成分,答案请使用中文,结果以markdown的形式输出。</指令>\n\n<已知信息>问题:是否可以先开户、后面再补上门核实和法人开户意愿核实工作? 答案:上门核实可以后补,法人开户意愿核实需要提前或同步完成。\n问题:客户签约财资需要哪几个步骤(大步骤,具体的步骤可以参考操作文档) 答案:1、客户确认签约财资的方案;\n2、上级单位与下级单位完成账户使用授权(如无下级单位或关联单位,则跳过这一步);\n3、在前台签署相关协议,录入系统;\n4、经办行引导客户登录财资,并协助客户配置相关设置及参数;\n问题:营业执照刚刚注册好,工商系统还查不到,能开户吗? 
答案:在风险可控的情况下,确定客户开户信息真实,可以开户。</已知信息>\n\n<问题>企业开户流程</问题><|im_end|>\n<|im_start|>assistant\n', sampling params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=-1, min_p=0.0, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=['<|im_start|>', '<|endoftext|>', '<|im_end|>'], ignore_eos=False, max_tokens=7908, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True), prompt token ids: None.2023-11-20 08:54:05 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK" INFO 11-20 08:54:05 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 9.4%, CPU KV cache usage: 0.0% INFO 11-20 08:54:10 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 27.2 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 13.6%, CPU KV cache usage: 0.0% INFO 11-20 08:54:15 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.9 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 18.3%, CPU KV cache usage: 0.0% INFO 11-20 08:54:20 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 22.5%, CPU KV cache usage: 0.0% INFO 11-20 08:54:25 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 27.2%, CPU KV cache usage: 0.0% INFO 11-20 08:54:30 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 31.4%, CPU KV cache usage: 0.0% 2023-11-20 08:54:31 
| INFO | model_worker | Send heart beat. Models: ['Qwen-7B-Chat']. Semaphore: Semaphore(value=4, locked=False). call_ct: 1. worker_id: bc0bc8a3. 2023-11-20 08:54:31 | INFO | controller | Receive heart beat. http://127.0.0.1:20002 2023-11-20 08:54:31 | INFO | stdout | INFO: 127.0.0.1:57204 - "POST /receive_heart_beat HTTP/1.1" 200 OK INFO 11-20 08:54:36 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 35.6%, CPU KV cache usage: 0.0% INFO 11-20 08:54:41 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.4 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 40.3%, CPU KV cache usage: 0.0% INFO 11-20 08:54:46 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.8 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 44.5%, CPU KV cache usage: 0.0% INFO 11-20 08:54:51 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 48.7%, CPU KV cache usage: 0.0% INFO 11-20 08:54:56 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.3 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 53.4%, CPU KV cache usage: 0.0% INFO 11-20 08:55:01 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.9 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 57.6%, CPU KV cache usage: 0.0% INFO 11-20 08:55:06 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.9 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 61.8%, CPU KV cache usage: 0.0% INFO 11-20 08:55:11 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.8 tokens/s, 
Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 66.0%, CPU KV cache usage: 0.0% INFO 11-20 08:55:16 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 70.2%, CPU KV cache usage: 0.0% 2023-11-20 08:55:16 | INFO | model_worker | Send heart beat. Models: ['Qwen-7B-Chat']. Semaphore: Semaphore(value=4, locked=False). call_ct: 1. worker_id: bc0bc8a3. 2023-11-20 08:55:16 | INFO | controller | Receive heart beat. http://127.0.0.1:20002 2023-11-20 08:55:16 | INFO | stdout | INFO: 127.0.0.1:57220 - "POST /receive_heart_beat HTTP/1.1" 200 OK INFO 11-20 08:55:21 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 74.3%, CPU KV cache usage: 0.0% INFO 11-20 08:55:26 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 78.5%, CPU KV cache usage: 0.0% INFO 11-20 08:55:31 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.6 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 82.7%, CPU KV cache usage: 0.0% INFO 11-20 08:55:36 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 86.9%, CPU KV cache usage: 0.0% INFO 11-20 08:55:41 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.2 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 91.1%, CPU KV cache usage: 0.0% INFO 11-20 08:55:46 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.1 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 95.3%, CPU KV cache usage: 
0.0% INFO 11-20 08:55:51 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 99.5%, CPU KV cache usage: 0.0% same problem
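The request log above shows SamplingParams(... repetition_penalty=1.0 ...), i.e. no repetition penalty at all, which makes repeated answers more likely. As a hedged sketch of what raising the penalty does (the CTRL/HF-style rule applied to already-generated tokens; not vLLM's actual kernel):

```python
def apply_repetition_penalty(logits, generated_ids, penalty):
    """CTRL-style repetition penalty: for tokens that already appeared,
    divide positive logits by the penalty and multiply negative ones by it,
    making re-emission of those tokens less likely."""
    out = list(logits)
    for tok in set(generated_ids):
        out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

logits = [2.0, -1.0, 0.5]
print(apply_repetition_penalty(logits, [0, 1], 1.0))  # penalty 1.0 is a no-op
print(apply_repetition_penalty(logits, [0, 1], 1.2))  # token 0 damped, token 1 pushed further down
```

Setting repetition_penalty slightly above 1.0 (or a nonzero frequency_penalty) in the request is a common mitigation to try for this symptom.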
gharchive/issue
2023-11-17T03:10:58
2025-04-01T06:40:52.516217
{ "authors": [ "shujun1992", "zhanpengjie" ], "repo": "vllm-project/vllm", "url": "https://github.com/vllm-project/vllm/issues/1694", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2048341361
vllm.engine.async_llm_engine.AsyncEngineDeadError 2023-12-19 18:11:16 | ERROR | stderr | 2023-12-19 18:11:16 | ERROR | stderr | The above exception was the direct cause of the following exception: 2023-12-19 18:11:16 | ERROR | stderr | 2023-12-19 18:11:16 | ERROR | stderr | Traceback (most recent call last): 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi 2023-12-19 18:11:16 | ERROR | stderr | result = await app( # type: ignore[func-returns-value] 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in call 2023-12-19 18:11:16 | ERROR | stderr | return await self.app(scope, receive, send) 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/fastapi/applications.py", line 1106, in call 2023-12-19 18:11:16 | ERROR | stderr | await super().call(scope, receive, send) 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/applications.py", line 122, in call 2023-12-19 18:11:16 | ERROR | stderr | await self.middleware_stack(scope, receive, send) 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in call 2023-12-19 18:11:16 | ERROR | stderr | raise exc 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in call 2023-12-19 18:11:16 | ERROR | stderr | await self.app(scope, receive, _send) 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in call 2023-12-19 18:11:16 | ERROR | stderr | raise exc 2023-12-19 18:11:16 | ERROR | stderr | File 
"/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in call 2023-12-19 18:11:16 | ERROR | stderr | await self.app(scope, receive, sender) 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in call 2023-12-19 18:11:16 | ERROR | stderr | raise e 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in call 2023-12-19 18:11:16 | ERROR | stderr | await self.app(scope, receive, send) 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/routing.py", line 718, in call 2023-12-19 18:11:16 | ERROR | stderr | await route.handle(scope, receive, send) 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle 2023-12-19 18:11:16 | ERROR | stderr | await self.app(scope, receive, send) 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/routing.py", line 69, in app 2023-12-19 18:11:16 | ERROR | stderr | await response(scope, receive, send) 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/responses.py", line 270, in call 2023-12-19 18:11:16 | ERROR | stderr | async with anyio.create_task_group() as task_group: 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 597, in aexit 2023-12-19 18:11:16 | ERROR | stderr | raise exceptions[0] 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/responses.py", line 273, in wrap 2023-12-19 18:11:16 | ERROR | stderr | await func() 2023-12-19 18:11:16 | ERROR | 
stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/responses.py", line 262, in stream_response 2023-12-19 18:11:16 | ERROR | stderr | async for chunk in self.body_iterator: 2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/fastchat/serve/vllm_worker.py", line 113, in generate_stream 2023-12-19 18:11:16 | ERROR | stderr | async for request_output in results_generator: 2023-12-19 18:11:16 | ERROR | stderr | File "/root/vllm-gptq/vllm/engine/async_llm_engine.py", line 435, in generate 2023-12-19 18:11:16 | ERROR | stderr | raise e 2023-12-19 18:11:16 | ERROR | stderr | File "/root/vllm-gptq/vllm/engine/async_llm_engine.py", line 429, in generate 2023-12-19 18:11:16 | ERROR | stderr | async for request_output in stream: 2023-12-19 18:11:16 | ERROR | stderr | File "/root/vllm-gptq/vllm/engine/async_llm_engine.py", line 70, in anext 2023-12-19 18:11:16 | ERROR | stderr | raise result 2023-12-19 18:11:16 | ERROR | stderr | File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run 2023-12-19 18:11:16 | ERROR | stderr | File "/root/vllm-gptq/vllm/engine/async_llm_engine.py", line 37, in _raise_exception_on_finish 2023-12-19 18:11:16 | ERROR | stderr | raise exc 2023-12-19 18:11:16 | ERROR | stderr | File "/root/vllm-gptq/vllm/engine/async_llm_engine.py", line 32, in _raise_exception_on_finish 2023-12-19 18:11:16 | ERROR | stderr | raise AsyncEngineDeadError( 2023-12-19 18:11:16 | ERROR | stderr | vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause. https://github.com/vllm-project/vllm/issues/2239 duplicated
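The AsyncEngineDeadError in this trace is raised by a done-callback on the background engine task (_raise_exception_on_finish in the frames above): when the engine loop dies, every pending request stream re-raises the failure. A minimal asyncio sketch of that pattern, using simplified stand-in names rather than vLLM's actual classes:

```python
import asyncio

class EngineDeadError(RuntimeError):
    """Stand-in for vllm.engine.async_llm_engine.AsyncEngineDeadError."""

async def engine_loop():
    await asyncio.sleep(0)               # pretend to run one engine step
    raise RuntimeError("worker crashed")  # the underlying failure

def raise_on_finish(task: asyncio.Task):
    """Mirror the done-callback: surface the background failure to callers."""
    if task.exception() is not None:
        raise EngineDeadError("Task finished unexpectedly.") from task.exception()

async def main():
    task = asyncio.create_task(engine_loop())
    await asyncio.sleep(0.01)            # let the loop fail in the background
    try:
        raise_on_finish(task)
    except EngineDeadError as err:
        return type(err.__cause__).__name__

print(asyncio.run(main()))  # RuntimeError
```

This is why the error message points at "stack trace above for the actual cause": the real defect is whatever killed the engine loop, and the dead-engine error is only the messenger.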
gharchive/issue
2023-12-19T10:15:14
2025-04-01T06:40:52.534189
{ "authors": [ "joindn", "yhyu13" ], "repo": "vllm-project/vllm", "url": "https://github.com/vllm-project/vllm/issues/2197", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2055967713
Could we support Fuyu-8B, a multimodal LLM?
Hi, Fuyu-8B is a multimodal LLM; could we support it in vLLM? https://www.adept.ai/blog/fuyu-8b
It seems to me that vLLM currently only supports pure text, so how could we handle this kind of multimodal model that mixes in images? Thx~
Now that we have added support for LLaVA, this is welcomed!
I would like to work on this model. But it seems that Persimmon, the language model used inside Fuyu-8B, isn't supported yet. Maybe we can support it first?
gharchive/issue
2023-12-26T02:28:57
2025-04-01T06:40:52.536736
{ "authors": [ "Isotr0py", "leiwen83", "simon-mo" ], "repo": "vllm-project/vllm", "url": "https://github.com/vllm-project/vllm/issues/2262", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2193088256
[New Model]: Request to support xai-org/grok-1 (314B parameters with MoE architecture)
The model to consider.
https://huggingface.co/xai-org/grok-1
With int8 quantization, this model can be hosted on 8 GPUs with 80GB memory, i.e. a node of H100s or A100s. After a high-level look at the code, I see that xAI has the model architecture implemented in JAX, and its code couples the model architecture with implementation details such as int8 quantization and sharding across GPUs. I saw a Twitter post about the tricky implementation differences in Gemma's implementations. So I wonder if someone familiar with JAX is planning to port it to PyTorch and validate it, so that it can be integrated with vLLM, with additional optimization for the MoE architecture.
The closest model vllm already supports.
Mixtral 8x7B.
What's your difficulty of supporting the model you want?
- Its source code is in JAX, instead of PyTorch.
- It requires quantization; otherwise, it won't work on most GPUs, including H100/A100. Here, I assume CPU offloading is not under consideration, to avoid a notable impact on efficiency.
- Its MoE component requires additional optimization for inference efficiency.
HF version: https://huggingface.co/keyfan/grok-1-hf (untested, wasn't able to run it on 8xA10).
Porting to PyTorch is the first step!
I saw that grok-1 already has a torch version (https://huggingface.co/hpcai-tech/grok-1), which is also said to be available on ModelScope. I wonder when vLLM will support it?
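The 8x80GB claim can be sanity-checked with back-of-envelope arithmetic (ignoring KV cache, activations, and runtime overhead, and treating GB as GiB for a rough bound):

```python
params = 314e9                       # grok-1 parameter count
weight_gib = params * 1 / 1024**3    # int8: one byte per parameter
budget_gib = 8 * 80                  # eight 80 GB GPUs
print(f"int8 weights: {weight_gib:.0f} GiB of {budget_gib} GiB")  # ~292 of 640
```

At fp16 (two bytes per parameter) the weights alone would need roughly twice that, which is why int8 is assumed above.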
gharchive/issue
2024-03-18T19:27:03
2025-04-01T06:40:52.541486
{ "authors": [ "ai-jz", "nivibilla", "uRENu" ], "repo": "vllm-project/vllm", "url": "https://github.com/vllm-project/vllm/issues/3472", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2310437234
[Usage]: How to start vLLM on a particular GPU? Your current environment Collecting environment information... PyTorch version: 2.3.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: version 3.29.3 Libc version: glibc-2.31 Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.0-1056-azure-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 11.8.89 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe GPU 1: NVIDIA A100 80GB PCIe Nvidia driver version: 545.23.08 cuDNN version: Probably one of the following: /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn.so.8.7.0 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.7.0 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.7.0 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.7.0 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.7.0 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.7.0 /usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.7.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 48 bits physical, 48 bits virtual CPU(s): 48 On-line CPU(s) list: 0-47 Thread(s) per core: 1 Core(s) per socket: 48 Socket(s): 1 NUMA node(s): 2 Vendor ID: AuthenticAMD CPU family: 25 Model: 1 Model name: AMD EPYC 7V13 64-Core Processor Stepping: 1 CPU MHz: 2445.437 BogoMIPS: 4890.87 Hypervisor vendor: Microsoft Virtualization type: full L1d cache: 1.5 MiB L1i cache: 1.5 MiB L2 cache: 24 MiB L3 cache: 192 MiB NUMA node0 CPU(s): 0-23 NUMA node1 CPU(s): 24-47 Vulnerability Gather data sampling: Not affected 
Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr rdpru arat umip vaes vpclmulqdq rdpid fsrm Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] nvidia-nccl-cu12==2.20.5 [pip3] torch==2.3.0 [pip3] triton==2.3.0 [pip3] vllm_nccl_cu12==2.18.1.0.4.0 [conda] numpy 1.26.4 pypi_0 pypi [conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi [conda] torch 2.3.0 pypi_0 pypi [conda] triton 2.3.0 pypi_0 pypi [conda] vllm-nccl-cu12 2.18.1.0.4.0 pypi_0 pypiROCM Version: Could not collect Neuron SDK Version: N/A vLLM Version: 0.4.2 vLLM Build Flags: CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled GPU Topology: GPU0 GPU1 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID GPU0 X NV12 SYS 0-23 0 N/A GPU1 NV12 X SYS 24-47 1 N/A NIC0 SYS SYS X Legend: X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = 
Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = Connection traversing a bonded set of # NVLinks NIC Legend: NIC0: mlx5_0
How would you like to use vllm
I have two GPUs in my VM... I am already using vLLM on one of the GPUs and the other one is vacant. How can I start a second vLLM instance on my second GPU? I tried --device cuda | --device auto | --device cuda:1, but they don't seem to work as I was expecting... Could you please tell me what I am missing here? Regards!
You can use the CUDA_VISIBLE_DEVICES environment variable when running the command.
I changed CUDA_VISIBLE_DEVICES, and when I deleted CUDA_VISIBLE_DEVICES to load another model, I got an error: CUDA error: invalid device ordinal.
It appears that if you set the CUDA_VISIBLE_DEVICES environment variable, for example os.environ["CUDA_VISIBLE_DEVICES"] = "2,3", then in your code the device indices will start from 0. That is, cuda:0 corresponds to the actual cuda:2, and cuda:1 corresponds to the actual cuda:3.
> I changed CUDA_VISIBLE_DEVICES, and when I deleted CUDA_VISIBLE_DEVICES to load another model, I got an error: CUDA error: invalid device ordinal. Can you show the commands (including env variables) which you used to run vLLM?
> It appears that if you set the CUDA_VISIBLE_DEVICES environment variable, for example os.environ["CUDA_VISIBLE_DEVICES"] = "2,3", then in your code the device indices will start from 0. That is, cuda:0 corresponds to the actual cuda:2, and cuda:1 corresponds to the actual cuda:3.
Usually, I set environment variables on the command line rather than in Python, e.g.:
CUDA_VISIBLE_DEVICES=0,1 python -m <command>
This is because the environment variable needs to be updated before PyTorch is imported for it to take effect correctly, which is hard to rely on otherwise.
I have several models and GPUs, so I have to set CUDA_VISIBLE_DEVICES several times, and I get the error. Setting CUDA_VISIBLE_DEVICES is not a good way; I think when people have several models and GPUs, they need a device parameter. I have decided not to use vLLM. vLLM has a DeviceConfig configuration, but the KV cache does not use it and always uses cuda:0. This is too messy.
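The index remapping described above (with CUDA_VISIBLE_DEVICES="2,3", cuda:0 is really physical GPU 2) can be made explicit with a small helper. This is only an illustration of how CUDA interprets the variable, not vLLM code:

```python
def visible_device_map(cuda_visible_devices: str) -> dict:
    """Map logical device indices (what the process sees as cuda:N)
    to physical GPU ids, the way CUDA interprets CUDA_VISIBLE_DEVICES."""
    physical = [int(d) for d in cuda_visible_devices.split(",") if d.strip()]
    return dict(enumerate(physical))

print(visible_device_map("2,3"))  # {0: 2, 1: 3}
print(visible_device_map("1"))    # {0: 1}
```

This also explains the "invalid device ordinal" error in the thread: once CUDA_VISIBLE_DEVICES is removed or changed, an index that was valid under the old mapping may no longer name an existing logical device.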
gharchive/issue
2024-05-22T12:41:56
2025-04-01T06:40:52.551223
{ "authors": [ "DarkLight1337", "fengshansi", "kstyagi23" ], "repo": "vllm-project/vllm", "url": "https://github.com/vllm-project/vllm/issues/4981", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
723088012
Fix issue in UDP/SCTP Service flow in Antrea Proxy
Set the IP protocol number according to the Service protocol type in the "learn" flow. Add integration test cases to ensure the flow is realized as expected. Fixes #1395
Codecov Report
Merging #1398 into master will decrease coverage by 16.77%. The diff coverage is 90.90%.

```
@@            Coverage Diff             @@
##           master   #1398       +/-   ##
==========================================
- Coverage   64.35%  47.57%   -16.78%
==========================================
  Files         159      74       -85
  Lines       12685    5597     -7088
==========================================
- Hits         8163    2663     -5500
+ Misses       3665    2650     -1015
+ Partials      857     284      -573

Flag                Coverage Δ
#integration-tests  47.57% <90.90%> (+2.74%) :arrow_up:
#kind-e2e-tests     ?
#unit-tests         ?

Flags with carried forward coverage won't be shown.

Impacted Files                                   Coverage Δ
pkg/ovs/openflow/ofctrl_action.go                88.29% <90.90%> (-1.56%) :arrow_down:
pkg/agent/proxy/service.go                        0.00% <0.00%> (-100.00%) :arrow_down:
pkg/ovs/ovsconfig/default.go                      0.00% <0.00%> (-100.00%) :arrow_down:
pkg/agent/proxy/types/groupcounter.go             0.00% <0.00%> (-95.00%) :arrow_down:
pkg/agent/proxy/endpoints.go                      0.00% <0.00%> (-88.58%) :arrow_down:
pkg/agent/querier/querier.go                      0.00% <0.00%> (-87.50%) :arrow_down:
pkg/apis/controlplane/register.go                 0.00% <0.00%> (-85.72%) :arrow_down:
pkg/agent/proxy/types/types.go                    0.00% <0.00%> (-84.62%) :arrow_down:
pkg/agent/flowexporter/connections/conntrack.go   0.00% <0.00%> (-84.22%) :arrow_down:
pkg/agent/types/networkpolicy.go                  5.88% <0.00%> (-82.36%) :arrow_down:
... and 125 more
```

/test-all
/test-e2e
/test-conformance
/test-networkpolicy
/test-all
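The fix boils down to writing the correct IP protocol number into the learn action for each Service protocol, instead of a hard-coded one. The IANA-assigned numbers involved, as a small illustrative lookup (not Antrea's actual Go code):

```python
# IANA-assigned IP protocol numbers relevant to Kubernetes Service traffic
IP_PROTO = {"tcp": 6, "udp": 17, "sctp": 132}

def learn_flow_nw_proto(service_protocol: str) -> int:
    """Pick the nw_proto value for the learned flow from the Service protocol."""
    return IP_PROTO[service_protocol.lower()]

print(learn_flow_nw_proto("SCTP"))  # 132
```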
gharchive/pull-request
2020-10-16T10:09:27
2025-04-01T06:40:52.707362
{ "authors": [ "codecov-io", "lzhecheng", "wenyingd" ], "repo": "vmware-tanzu/antrea", "url": "https://github.com/vmware-tanzu/antrea/pull/1398", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
779448620
Change flow export interval mechanism in Flow Exporter We have one flow export interval for all flows at Antrea Agent; this leads to one burst of flow records at flow collector. Instead we could have granular flow export intervals for individual flows by using active flow export timeout and inactive flow export time out. As part of this change, the flow export frequency config parameter is removed. Fixes #1637 Remaining part is adding tests. Codecov Report Merging #1714 (e35692e) into master (9d3d10b) will decrease coverage by 10.88%. The diff coverage is 51.82%. @@ Coverage Diff @@ ## master #1714 +/- ## =========================================== - Coverage 63.31% 52.43% -10.89% =========================================== Files 170 184 +14 Lines 14250 15799 +1549 =========================================== - Hits 9023 8284 -739 - Misses 4292 6593 +2301 + Partials 935 922 -13 Flag Coverage Δ kind-e2e-tests 37.12% <38.48%> (-18.27%) :arrow_down: unit-tests 40.31% <35.52%> (-0.96%) :arrow_down: Flags with carried forward coverage won't be shown. Click here to find out more. Impacted Files Coverage Δ cmd/antrea-agent/agent.go 0.00% <0.00%> (ø) .../agent/apiserver/handlers/networkpolicy/handler.go 58.33% <ø> (ø) ...gent/controller/noderoute/node_route_controller.go 45.83% <0.00%> (-0.64%) :arrow_down: pkg/agent/proxy/proxier_linux.go 0.00% <0.00%> (-25.00%) :arrow_down: pkg/agent/proxy/types/groupcounter.go 0.00% <0.00%> (-95.00%) :arrow_down: pkg/agent/proxy/types/types.go 0.00% <0.00%> (-84.62%) :arrow_down: pkg/agent/stats/collector.go 97.72% <ø> (ø) pkg/agent/types/networkpolicy.go 37.50% <ø> (-45.84%) :arrow_down: pkg/antctl/antctl.go 100.00% <ø> (ø) pkg/antctl/command_definition.go 54.65% <ø> (+14.24%) :arrow_up: ... and 136 more Codecov Report Merging #1714 (e35692e) into master (9d3d10b) will decrease coverage by 10.88%. The diff coverage is 51.82%. 
@@ Coverage Diff @@ ## master #1714 +/- ## =========================================== - Coverage 63.31% 52.43% -10.89% =========================================== Files 170 184 +14 Lines 14250 15799 +1549 =========================================== - Hits 9023 8284 -739 - Misses 4292 6593 +2301 + Partials 935 922 -13 Flag Coverage Δ kind-e2e-tests 37.12% <38.48%> (-18.27%) :arrow_down: unit-tests 40.31% <35.52%> (-0.96%) :arrow_down: Flags with carried forward coverage won't be shown. Click here to find out more. Impacted Files Coverage Δ cmd/antrea-agent/agent.go 0.00% <0.00%> (ø) .../agent/apiserver/handlers/networkpolicy/handler.go 58.33% <ø> (ø) ...gent/controller/noderoute/node_route_controller.go 45.83% <0.00%> (-0.64%) :arrow_down: pkg/agent/proxy/proxier_linux.go 0.00% <0.00%> (-25.00%) :arrow_down: pkg/agent/proxy/types/groupcounter.go 0.00% <0.00%> (-95.00%) :arrow_down: pkg/agent/proxy/types/types.go 0.00% <0.00%> (-84.62%) :arrow_down: pkg/agent/stats/collector.go 97.72% <ø> (ø) pkg/agent/types/networkpolicy.go 37.50% <ø> (-45.84%) :arrow_down: pkg/antctl/antctl.go 100.00% <ø> (ø) pkg/antctl/command_definition.go 54.65% <ø> (+14.24%) :arrow_up: ... and 136 more This is what I get from RFC 5102: 5.11.1. flowActiveTimeout Description: The number of seconds after which an active Flow is timed out anyway, even if there is still a continuous flow of packets. Abstract Data Type: unsigned16 ElementId: 36 Status: current Units: seconds 5.11.2. flowIdleTimeout Description: A Flow is considered to be timed out if no packets belonging to the Flow have been observed for the number of seconds specified by this field. Abstract Data Type: unsigned16 ElementId: 37 Status: current Units: seconds 5.11.3. flowEndReason Description: The reason for Flow termination. The range of values includes the following: 0x01: idle timeout The Flow was terminated because it was considered to be idle. 
0x02: active timeout
The Flow was terminated for reporting purposes while it was still active, for example, after the maximum lifetime of unreported Flows was reached.
0x03: end of Flow detected
The Flow was terminated because the Metering Process detected signals indicating the end of the Flow, for example, the TCP FIN flag.

It seems that in this PR you ignore case 0x01: idle timeout altogether. However, I believe that when we talk about "inactive_timeout", this is the case that actually matters the most. Actually I feel we have 2 solutions:

- use "inactive_timeout" for both 0x01: idle timeout and 0x03: end of Flow detected
- use "inactive_timeout" for 0x01: idle timeout only (in which case I suggest renaming it to "idle_timeout") and export a flow record immediately when a connection is actually terminated

My preference would definitely be the second solution. I feel like it's more consistent with RFC 6645 and RFC 5470.

Thanks for the comments, Antonin. Yes, 0x01: idle timeout is being ignored. As we are relying on conntrack to time out the flow in the conntrack table with the DYING flag (both UDP and TCP) to treat the flow as inactive, I thought we can consider the conntrack timeout as the detection signal to treat it as 0x03: end of Flow detected.

My read for 0x01: idle timeout is that, for example, if a flow record's packet data and throughput data do not change for a given time period, we could time out the flow and send the 0x01: idle timeout as the reason. I consider the flows where there is only TCP-SYN (CONFIRMED) or TCP-SYN+TCP-SYN-ACK (SEEN_REPLY) as inactive flows and trigger the inactive_timeout with reason "flow end detected". It can definitely be argued that they fall in the bucket of 0x01: idle timeout.

> use "inactive_timeout" for both 0x01: idle timeout and 0x03: end of Flow detected

Maybe we could use both, depending on whether the status is DYING or TCP-SYN (CONFIRMED) / TCP-SYN+TCP-SYN-ACK (SEEN_REPLY). What do you think?
> My read for 0x01: idle timeout is that for example if flow records' packet data and throughput data do not change for a given time period, we could time out the flow and send the 0x01: idle timeout as the reason.

That's the idea.

> I consider the flows, where there is only TCP-SYN (CONFIRMED) or TCP-SYN+TCP-SYN-ACK (SEEN_REPLY), as inactive flows and trigger the inactive_timeout and reason as flow end detected. It can definitely be argued that they fall in the bucket of 0x01: idle timeout.

I don't see how this is compatible with the above. If you have a SEEN_REPLY connection (for me this is basically an "established" connection, but am I wrong?), I think the following logic should apply while it is in that state:

- if packet counts don't change for a 15s window, send a record with "idle timeout" end flow reason
- otherwise, after 60s, always send a record with "active timeout" end flow reason

I would say the same goes for a CONFIRMED connection.

Regarding DYING: I don't think this is a good signal for "end of Flow detected" (0x03). The RFC text above explicitly lists TCP FIN as a possible signal. Between TCP FIN and the connection going to DYING state, there will be 120s (TIME_WAIT state).

As you see, I have carefully avoided the terminology "inactive timeout".
Either it means "idle timeout", in which case we should use "idle timeout" as it is not as ambiguous. Or it means something else, but in that case I am not sure what exactly or what should be implemented.
Yes, you are right about status flag DYING which is only set when conntrack entry is deleted--made sure from Linux conntrack code. Agreed that FIN_WAIT state cannot be captured with DYING. I consider the flows, where there is only TCP-SYN (CONFIRMED) or TCP-SYN+TCP-SYN-ACK(SEEN_REPLY), as inactive flows and trigger the inactive_timeout and reason as flow end detected. It can definitely be argued that they fall in the bucket of 0x01: idle timeout. I don't see how this is compatible with the above. If you have a SEEN_REPLY connection (for me this is basically an "established" connection, but am I wrong), I think the following logic should apply while it is in that state: I was considering following status flags: (SEEN_REPLY & !ASSURED) and (CONFIRMED & !ASSURED). Yes, SEEN_REPLY status flag can be there for connection with the "established" state as well. I think we should use TCP states: SYN_SENT and SYN_RECV rather than status flags. if packet counts don't change for a 15s window, send a record with "idle timeout" end flow reason otherwise, after 60s, always send a record with "active timeout" end flow reason I would say the same goes for a CONFIRMED connection. Agree with the approach to have only "idle_timeout" and "active_timeout" following RFC 5470 and RFC 6645. However, I have a question for long-standing connections in the ESTABLISHED state (the default time out for this state is 5days). If there are no packets for 15s and we timeout the connection to export the flow records, then I am thinking of the following approach following this example. When this situation happens, we will delete the connection from the flow record map after expiring the flow with "idle_timeout". We will retain the same connection in the connection map but create the flow record again. Any comments on this policy? I don't think this is a good signal for "end of Flow detected" (0x03). The RFC text above explicitly lists TCP FIN as a possible signal. 
I think this approach kind of makes sense to me. So in case of idle_timeout, we essentially clear the cache (flow record map), which means that if activity resumes for this flow, we will use a new flowStart timestamp and reset counters? That would at least be consistent with network devices which do sampling... Do you think it will work well with external flow collectors?

BTW, what made you decide to use 2 separate maps (one for the connection store and one for the flow records) instead of a single one? After all, the maps are indexed by the same key. Maybe the commit message / PR description should have a clear description of the purpose of each one.

I don't think you have included any changes with regard to leveraging TCP FIN to terminate flow records ("end of Flow detected" case) and filtering out connections in the TIME_WAIT state?

Yes, I did not add the scenario with the state corresponding to the TCP FIN flag and the flow end reason "end of flow detected". TIME_WAIT state filtering was also not done. Will take it up in a future PR.
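The per-flow export policy discussed above (15s idle window, 60s active window, expire-and-rebuild on idle) can be sketched roughly as follows. This is an illustrative sketch only, not Antrea's actual Go implementation; the constants and function shape are assumptions.

```python
import time

# Illustrative values taken from the discussion above (assumed, not final).
ACTIVE_FLOW_TIMEOUT = 60  # seconds: export an active flow at least this often
IDLE_FLOW_TIMEOUT = 15    # seconds: expire a flow with no new packets

def export_reason(now, last_export_time, last_packet_time):
    """Return the IPFIX flowEndReason to use, or None if no export is due.

    Idle timeout (0x01) takes priority: a flow with no recent packets is
    expired (record deleted, connection kept) rather than merely re-exported.
    """
    if now - last_packet_time >= IDLE_FLOW_TIMEOUT:
        return "idle timeout"    # 0x01: export, then expire the flow record
    if now - last_export_time >= ACTIVE_FLOW_TIMEOUT:
        return "active timeout"  # 0x02: export, keep the flow record
    return None

now = time.time()
print(export_reason(now, now - 70, now - 5))   # flow still active, 60s elapsed
print(export_reason(now, now - 10, now - 20))  # no packets for 20s
print(export_reason(now, now - 10, now - 5))   # nothing due yet
```

A real exporter would evaluate this per flow on each tick, which is what gives the granular per-flow intervals instead of one burst at a global interval.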
I went with separate maps to reduce contention when doing polling and exporting. With the current export timeout implementation, where the Export function is called every second, separate data structures become more important. Will add their description and purpose in the PR and commit message.

/test-all

/test-all

Do you think we can change the FlowRecord struct so that the embedded Connection object is a value and not a pointer? So this:

type FlowRecord struct {
    Conn Connection
    ...
}

Instead of that:

type FlowRecord struct {
    Conn *Connection
    ...
}

I want to make it obvious that the Connection stored in the FlowRecord is a copy of what's stored in the ConnectionStore. It will remove that ambiguity and there is no downside IMO. It also makes it more obvious that the separation of connections & flows helps reduce contention.

Thanks for the comment, Antonin. Yes, agreed that there will be no downside in moving to the value type, because we take in the Connection object by value as the argument in the callback function signature:

type ConnectionMapCallBack func(key ConnectionKey, conn Connection) error

There is scope for improvement by changing the signature to take a pointer to the Connection object. For now, I will change from the pointer to embed the Connection struct directly in FlowRecord. As part of performance improvements with the perf unit test, we can take that up. Hope that's ok.

I think that from a design perspective, it is a bit surprising that the FlowExporter is in charge of deleting connections from the ConnectionStore. I understand why: we need to make sure the connection information is preserved until the last record can be sent. But maybe it is worth a comment.

Added the comment.
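The value-vs-pointer point above is really about snapshot semantics: the flow record should hold a copy of the connection so that later polling updates don't mutate an already-built record. A small language-neutral illustration of that idea (names and keys here are hypothetical, not the actual Antrea Go code):

```python
import copy
from dataclasses import dataclass

@dataclass
class Connection:
    packets: int
    bytes: int

# Two maps indexed by the same key, as in the discussion above.
key = ("10.0.0.1", "10.0.0.2", 6, 80)  # hypothetical 4-tuple key
connection_store = {key: Connection(packets=5, bytes=500)}

# Building a flow record takes a *copy* (value semantics), not a reference.
flow_records = {k: copy.deepcopy(conn) for k, conn in connection_store.items()}

# The poller keeps updating the live connection...
connection_store[key].packets += 4

# ...but the record still holds the snapshot taken at record-creation time.
print(flow_records[key].packets)      # snapshot value
print(connection_store[key].packets)  # live value
```

Storing a reference instead would make the record silently track the live connection, which is exactly the ambiguity the value-type change removes.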
/test-all

/test-all

> There is scope for improvement by changing the signature to take a pointer to the Connection object.

I think it is better to keep it as "pass-by-value" for now. It makes it clear that the function intends to make a copy and makes the code more readable IMO. Even if there may be a small performance penalty. Did you ever get a chance to benchmark the exporter code?

Yes, some time back last year. At that time, the memory consumption of the Antrea Agent increased by 10MB when we moved from 300 to 1K flows in the Antrea connection store. Agree that the code now warrants a performance unit test using the conntrack dumper interface.

/test-all

/test-all

LGTM, looking forward to a follow-up PR with support for TIME_WAIT state transitions & the "end of Flow detected" end-of-flow reason (0x03).

Thanks for the review. Yes, that flowEndReason will be added in a follow-up PR.
gharchive/pull-request
2021-01-05T19:14:17
2025-04-01T06:40:52.771112
{ "authors": [ "antoninbas", "codecov-io", "srikartati" ], "repo": "vmware-tanzu/antrea", "url": "https://github.com/vmware-tanzu/antrea/pull/1714", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1165864835
🐦 Please tweet about v0.26.0

So I can RT it for https://github.com/adriens/chocolatey-vendir/issues/16 🙏

Sorry @adriens I was OOO since Thursday. Thanks!

No worries @microwavables 💟
gharchive/issue
2022-03-11T00:22:47
2025-04-01T06:40:52.773897
{ "authors": [ "adriens", "microwavables" ], "repo": "vmware-tanzu/carvel-vendir", "url": "https://github.com/vmware-tanzu/carvel-vendir/issues/142", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1208530041
Tanzu CLI update from 10.1 to 11 failed with - Error: could not read artifact "artifacts/pinniped-auth/v0.11.4/tanzu-pinniped-auth-linux_amd64": storage: object doesn't exist

Bug Report

Tanzu CLI update from the command line is not working at the moment and fails with:

Error: could not read artifact "artifacts/pinniped-auth/v0.11.4/tanzu-pinniped-auth-linux_amd64": storage: object doesn't exist

Steps to Reproduce the Bug

user@k8s-mgmt:~$ tanzu version
version: v0.10.1
buildDate: 2022-02-14
sha: 401d55b

user@k8s-mgmt:~$ tanzu update
the following updates will take place:
    core                 v0.10.1 → v0.11.4
    package              v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
    pinniped-auth        v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
    secret               v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
    cluster              v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
    kubernetes-release   v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
    login                v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
    management-cluster   v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
would you like to continue? [y/n] y

Environment Details

Build version (tanzu version): version: v0.10.1, buildDate: 2022-02-14, sha: 401d55b
Deployment (Managed/Unmanaged cluster): Managed
Infrastructure Provider (Docker/AWS/Azure/vSphere): vSphere
Operating System (client): Ubuntu 20.04.4 LTS, x86_64

I get the same error but with a different plugin?

would you like to continue? [y/n] y
Error: could not read artifact "artifacts/conformance/v0.11.0/tanzu-conformance-linux_amd64": storage: object doesn't exist
✖ could not read artifact "artifacts/conformance/v0.11.0/tanzu-conformance-linux_amd64": storage: object doesn't exist
gharchive/issue
2022-04-19T15:32:16
2025-04-01T06:40:52.781877
{ "authors": [ "Aradiv", "trantor1" ], "repo": "vmware-tanzu/community-edition", "url": "https://github.com/vmware-tanzu/community-edition/issues/4102", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1826832321
site css: images on resource page should fit the grid

Images on the resources page should resize to fit the grid, much like embedded YouTube videos.

Reviewed this older PR to see that we do commit the built CSS files. Rebasing.
gharchive/pull-request
2023-07-28T17:47:15
2025-04-01T06:40:52.801223
{ "authors": [ "benjaminapetersen" ], "repo": "vmware-tanzu/pinniped", "url": "https://github.com/vmware-tanzu/pinniped/pull/1595", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
239263091
Update Craft Library symbols

[X] bug
[ ] feature request
[ ] enhancement

Expected behavior

[ ] Sync symbols with correct resizing attributes
[ ] update Design Resources section with new version number and download

Actual behavior

Some symbols have broken resizing attributes. InVision Craft Library plugin recently fixed (v1.0.38) the bug where this happened.

confirmed bug has been fixed

Environment details

Clarity Craft Library version: 0.9.9

Updated and tested
gharchive/issue
2017-06-28T19:01:26
2025-04-01T06:40:52.826399
{ "authors": [ "reddolan" ], "repo": "vmware/clarity", "url": "https://github.com/vmware/clarity/issues/1125", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
194218072
Webpack hangs when ClarityModule imported on project created with angular-cli

Select one ... (check one with "x")
[x] bug - More of a question/request for guidance
[ ] feature request
[ ] enhancement

Expected behavior

angular-cli app, added Clarity, Webpack should complete and bundle should be valid(?)

Time: 1734ms
chunk {0} main.bundle.js, main.bundle.map (main) 4.64 kB {3} [initial]
chunk {1} styles.bundle.js, styles.bundle.map (styles) 730 kB {4} [initial]
chunk {2} scripts.bundle.js, scripts.bundle.map (scripts) 55.8 kB {4} [initial]
chunk {3} vendor.bundle.js, vendor.bundle.map (vendor) 2.22 MB [initial]
chunk {4} inline.bundle.js, inline.bundle.map (inline) 0 bytes [entry]
webpack: bundle is now VALID.

Actual behavior

I'm new to web development so apologies in advance as I don't fully understand all the technologies at play here (e.g. webpack). After following the instructions for adding Clarity to an app created with angular-cli, webpack seems to hang when building modules.

21% building modules 96/96 modules 0 active

When actually trying to connect to the new site when webpack is in this state, I receive this on the console:

webpack: wait until bundle finished: /

If I do not include ClarityModule, webpack completes and I get the nice little 'app works!' message generated by the angular-cli project:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';
import { ClarityModule } from 'clarity-angular';
import { AppComponent } from './app.component';
...
imports: [
  BrowserModule,
  FormsModule,
  HttpModule,
],

But with ClarityModule:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';
import { ClarityModule } from 'clarity-angular';
import { AppComponent } from './app.component';
...
imports: [
  BrowserModule,
  ClarityModule,
  FormsModule,
  HttpModule,
],

Webpack hangs.

This is on a brand new project created with the angular-cli (e.g. ng new myproject).

Reproduction of behavior

1. Create new project with angular-cli: ng new myproj
2. Follow instructions for adding clarity; add css and js to the angular-cli.json file under the styles and scripts sections.
3. npm start to confirm that you receive the "app works!" message from your browser
4. Import module in app.modules.ts
5. npm start seems to hang at webpack preparation.

Environment details

$ ng --version
angular-cli: 1.0.0-beta.22-1
node: 6.2.1
os: linux x64

"dependencies": {
  "@angular/common": "2.2.3",
  "@angular/compiler": "2.2.3",
  "@angular/core": "2.2.3",
  "@angular/forms": "2.2.3",
  "@angular/http": "2.2.3",
  "@angular/platform-browser": "2.2.3",
  "@angular/platform-browser-dynamic": "2.2.3",
  "@angular/router": "3.2.3",
  "@webcomponents/custom-elements": "^1.0.0-alpha.3",
  "clarity-angular": "^0.7.3",
  "clarity-icons": "^0.7.3",
  "clarity-ui": "^0.7.3",
  "core-js": "^2.4.1",
  "mutationobserver-shim": "^0.3.2",
  "rxjs": "5.0.0-beta.12",
  "sass-loader": "^4.0.2",
  "ts-helpers": "^1.1.1",
  "zone.js": "^0.6.23"

Angular version: 2.0.X
Clarity version:
OS and version: Ubuntu 16.04
Browser: [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView] N/A

@RainTomassi people are experiencing the errors with the latest angular-cli 3426 too. We are currently investigating the possible solutions for this issue and will get back to you soon.

Thank you for the update. Appreciate the quick response and apologies, I didn't see the other ticket :-)

I can confirm the following: if you remove the clarity dependency as a module and do "ng serve" and then update that file by adding that dependency and let webpack do the build job, it works!
So it must be something at the beginning that makes the compiler not find ClarityModule. Maybe some include PATHs?

angular-cli@1.0.0-beta.22 is still experimental and forces AOT compilation. See https://github.com/angular/angular-cli/issues/3354 and https://github.com/angular/angular-cli/issues/3368 on the angular-cli project. It's a pretty heated topic right now, but all we can do on our side is become AOT-compliant as soon as possible. So I'm closing this as a duplicate of #62.
gharchive/issue
2016-12-08T00:49:24
2025-04-01T06:40:52.843124
{ "authors": [ "RainTomassi", "Shijir", "dragosrusu", "youdz" ], "repo": "vmware/clarity", "url": "https://github.com/vmware/clarity/issues/204", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
919127070
fix: hide card doc from website sidebar nav

Signed-off-by: stsogoo stsogoo@vmware.com

PR Checklist

Please check if your PR fulfills the following requirements:

[ ] Tests for the changes have been added (for bug fixes / features)
[ ] Docs have been added / updated (for bug fixes / features)
[ ] If applicable, have a visual design approval

PR Type

What kind of change does this PR introduce?

[x] Bugfix
[ ] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[ ] Build related changes
[ ] CI related changes
[ ] Documentation content changes
[ ] clarity.design website / infrastructure changes
[ ] Other... Please describe:

What is the current behavior?

Issue Number: N/A

What is the new behavior?

Does this PR introduce a breaking change?

[ ] Yes
[ ] No

Other information

Preview: https://60c3becc62e97e0a9f9e8497--vmware-clarity.netlify.app/

I was thinking to make a tool next week to look for @beta tags and discard the component docs for components in beta. What do you think?
gharchive/pull-request
2021-06-11T19:26:52
2025-04-01T06:40:52.849406
{ "authors": [ "Shijir", "bbogdanov" ], "repo": "vmware/clarity", "url": "https://github.com/vmware/clarity/pull/6052", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2308576372
CLN/ENH: Decouple Segments from Sound

I think we want to decouple Segments from Sound, so that a Segments instance does not have an attribute that is a Sound instance. Instead we should just associate a sample rate with segments and remove the need for Sound.

This will also mean that we remove the Segment class. We just use a Segments instance with Sound.segment to get back a new list of Sounds.

Some nuance to this: I think I have found corner cases that reveal another bug in the original ava segmentation algorithm. I only found this because of the pre-condition for Segments, where we require that the offset of the last segment not be greater than the length of the Sound in samples. So we probably want to keep that condition, to rule out the class of errors that could occur if we're not checking.

[ ] To do so without requiring that the Sound be around, we can solve this the same way we're keeping the start_times / stop_times, etc.: add an attribute n_samples that we get from the sound, just like we get samplerate so we can convert from sample number to time. Then we will save n_samples in the JSON file representing the segments, just like we will save the samplerate. I need to think more about the name of the attribute, but I think this solution generally will work.
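A minimal sketch of what a decoupled Segments could look like, keeping samplerate and an n_samples attribute instead of a Sound reference. Attribute and property names here are assumptions for illustration, not the final vocalpy API.

```python
from dataclasses import dataclass

@dataclass
class Segments:
    """Sketch: segment boundaries in samples, with no reference to a Sound."""
    onsets_sample: list
    offsets_sample: list
    samplerate: int
    n_samples: int  # copied from the Sound, so we can validate without it

    def __post_init__(self):
        # Pre-condition from the issue: the offset of the last segment must
        # not be greater than the length of the sound in samples.
        if self.offsets_sample and self.offsets_sample[-1] > self.n_samples:
            raise ValueError(
                "offset of last segment is greater than length of sound in samples"
            )

    @property
    def onsets_s(self):
        """Onset times in seconds, converted using the saved samplerate."""
        return [s / self.samplerate for s in self.onsets_sample]

segs = Segments(onsets_sample=[0, 16000], offsets_sample=[8000, 24000],
                samplerate=16000, n_samples=32000)
print(segs.onsets_s)  # onsets converted from samples to seconds
```

Since samplerate and n_samples are plain numbers, both can be written into the JSON file alongside the onset/offset arrays, so round-tripping segments no longer requires the Sound.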
gharchive/issue
2024-05-21T15:32:09
2025-04-01T06:40:52.888968
{ "authors": [ "NickleDave" ], "repo": "vocalpy/vocalpy", "url": "https://github.com/vocalpy/vocalpy/issues/154", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }