Dataset schema:
id: string (4 to 10 chars)
text: string (4 chars to 2.14M chars)
source: string (2 classes)
created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: timestamp (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict
1529651205
🛑 NWTFV is down In 0c2d60f, NWTFV (https://nwtfv.com) was down: HTTP code: 0 Response time: 0 ms Resolved: NWTFV is back up in a6ed039.
gharchive/issue
2023-01-11T20:30:14
2025-04-01T04:33:32.426363
{ "authors": [ "arnef" ], "repo": "arnef/status", "url": "https://github.com/arnef/status/issues/1012", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1533492487
🛑 NWTFV is down In ff6d3f5, NWTFV (https://nwtfv.com) was down: HTTP code: 0 Response time: 0 ms Resolved: NWTFV is back up in a0504eb.
gharchive/issue
2023-01-14T21:12:43
2025-04-01T04:33:32.428458
{ "authors": [ "arnef" ], "repo": "arnef/status", "url": "https://github.com/arnef/status/issues/1074", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1153736337
Add new CSS-effect template Issue Type: [ ] Bug Report [ ] Feature Request [ ] Documentation Describe the bug A clear and concise description of what the bug is. Possible solution Describe the solution you thought of. Screenshots If applicable, add screenshots to help explain your problem. Hi, I am a GSSOC'22 participant. I would like to add a new template. Could you please assign me? Hey, I'm a GSSOC'22 participant. I would like to add a new CSS-effect template. @arpit456jain please assign me
gharchive/issue
2022-02-28T07:13:41
2025-04-01T04:33:32.443921
{ "authors": [ "adi2002vicky", "sudojarvis" ], "repo": "arpit456jain/Amazing-Css-Effects", "url": "https://github.com/arpit456jain/Amazing-Css-Effects/issues/253", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1158373192
Website Template for Bootcamp Issue Type: [x] Feature Request Describe A ready-to-use website template for a BootCamp, made with pure Bootstrap. The content can easily be changed so it can be used for any type of Bootcamp. Please assign me this issue as a GSSoC'22 participant. Add screenshots or a video!
gharchive/issue
2022-03-03T12:52:09
2025-04-01T04:33:32.445113
{ "authors": [ "Rutikab12", "astha2412" ], "repo": "arpit456jain/Cool-Front-End-Templates", "url": "https://github.com/arpit456jain/Cool-Front-End-Templates/issues/295", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
495916310
The optional arguments for warp_affine and warp_perspective are not active These two functions don't utilize their optional arguments. kornia.geometry.transform.mgwarp.warp_affine: flags (which probably should be named mode) is not passed down or used; the same goes for padding_mode. kornia.geometry.transform.mgwarp.warp_perspective: its optional arguments are not mentioned in the documentation! flags (which probably should be named mode) is not passed down or used; the same goes for border_mode and border_value. @jasam-sheja @CoinCheung I mark this as a feature request for v0.1.4
gharchive/issue
2019-09-19T16:52:41
2025-04-01T04:33:32.450316
{ "authors": [ "edgarriba", "jasam-sheja" ], "repo": "arraiyopensource/kornia", "url": "https://github.com/arraiyopensource/kornia/issues/218", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
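The kornia issue above describes a wrapper that accepts optional arguments (`flags`/`mode`, `padding_mode`) but never forwards them to the underlying implementation. The pattern can be sketched in plain Python; this is an illustration of the bug and the fix, not the actual kornia code, and all function names and defaults here are hypothetical:

```python
# Hypothetical stand-in for the low-level sampling routine.
def _grid_sample(image, grid, mode="bilinear", padding_mode="zeros"):
    # A real implementation would sample `image` at `grid` locations;
    # here we only record which options were actually in effect.
    return {"mode": mode, "padding_mode": padding_mode}

# Buggy wrapper: accepts the optional arguments but silently drops them,
# so the caller's choices never reach the implementation.
def warp_affine_buggy(image, matrix, mode="nearest", padding_mode="border"):
    grid = object()  # placeholder for the computed affine sampling grid
    return _grid_sample(image, grid)  # mode/padding_mode not passed down

# Fixed wrapper: forwards every optional argument it accepts.
def warp_affine_fixed(image, matrix, mode="nearest", padding_mode="border"):
    grid = object()
    return _grid_sample(image, grid, mode=mode, padding_mode=padding_mode)

print(warp_affine_buggy(None, None, mode="nearest"))  # caller's options ignored
print(warp_affine_fixed(None, None, mode="nearest"))  # caller's options honored
```

The buggy version quietly falls back to the inner defaults, which is exactly why the reporter noticed the arguments were "not active".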
1264419433
🛑 Nitter is down In 6aa6f70, Nitter (https://tw.artemislena.eu) was down: HTTP code: 0 Response time: 0 ms Resolved: Nitter is back up in 6eae6fb.
gharchive/issue
2022-06-08T08:56:54
2025-04-01T04:33:32.465028
{ "authors": [ "artemislena" ], "repo": "artemislena/upptime", "url": "https://github.com/artemislena/upptime/issues/76", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1823352459
Write objects code Add missing objects code generation (e.g. gdjs-evtsext__panelspritebutton__panelspritebutton.js for the PanelSpriteButton extension) At some point, I should fully recreate this project in a way that extracts all this code from the GDevelop source code automatically. In the meantime, looks good to me, thanks! Are there other such changes you are planning? Currently not; my goal was to make the exporter work with the latest editor version on an empty project with some basic extensions. It works for now, and I will continue to fix issues as they arise. Alright, pushing an update now 👍
gharchive/pull-request
2023-07-26T23:29:29
2025-04-01T04:33:32.468978
{ "authors": [ "arthuro555", "malec-palec" ], "repo": "arthuro555/gdcore-tools", "url": "https://github.com/arthuro555/gdcore-tools/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1243610980
kaisred i want hacks for arsenal What kind of tricks?
gharchive/issue
2022-05-20T20:49:48
2025-04-01T04:33:32.469528
{ "authors": [ "kaisred", "manhattancafeA" ], "repo": "arthurspk/guiadevbrasil", "url": "https://github.com/arthurspk/guiadevbrasil/issues/125", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1725326905
OverflowError: out of range integral type conversion attempted while running python qlora.py

python qlora.py --model_name_or_path decapoda-research/llama-13b-hf (I have updated tokenizer_config.json and config.json as per the various discussions here: tokenizer_class: LlamaTokenizer and architectures: LlamaForCausalLM)

==================================================================================
adding LoRA modules...
trainable params: 125173760.0 || all params: 6922327040 || trainable: 1.8082612866554193
loaded model
Using pad_token, but it is not set yet.
Traceback (most recent call last):
  File "qlora.py", line 758, in <module>
    train()
  File "qlora.py", line 620, in train
    "unk_token": tokenizer.convert_ids_to_tokens(model.config.pad_token_id),
  File "/home/envs/qlora_env/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 307, in convert_ids_to_tokens
    return self._tokenizer.id_to_token(ids)
OverflowError: out of range integral type conversion attempted

Running on a Tesla V100 32GB GPU.

same issue

Please check if this PR fixes your issue. The PR was designed to fix something else but will also bypass token conversion if the tokenizer already contains the special tokens. https://github.com/artidoro/qlora/pull/20

same issue

Please check if this PR fixes your issue. The PR was designed to fix something else but will also bypass token conversion if the tokenizer already contains the special tokens. #20

This solved my issue, but now I am getting a maximum recursion depth error:

  File "/home/atilla/miniconda3/envs/qlora/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 257, in _convert_token_to_id_with_added_voc
    return self.unk_token_id
  File "/home/atilla/miniconda3/envs/qlora/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1142, in unk_token_id
    return self.convert_tokens_to_ids(self.unk_token)
  File "/home/atilla/miniconda3/envs/qlora/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 250, in convert_tokens_to_ids
    return self._convert_token_to_id_with_added_voc(tokens)
  (the three frames above repeat until the limit is reached)
RecursionError: maximum recursion depth exceeded

same...

same issue

update: Changing model.config.pad_token_id to 0 should fix this problem, but may harm training:

tokenizer.add_special_tokens(
    {
        "eos_token": tokenizer.convert_ids_to_tokens(model.config.eos_token_id),
        "bos_token": tokenizer.convert_ids_to_tokens(model.config.bos_token_id),
        "unk_token": tokenizer.convert_ids_to_tokens(0),
    }
)

I wonder what exactly this change provides?

same issue

@amdnsr, based on the error message you shared, it appears that there is an "OverflowError: out of range integral type conversion attempted" when converting token IDs during tokenization. To address this issue, we recommend updating your code as follows:

from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
# Rest of your code...

By using the LlamaTokenizer from the transformers library and adding the [PAD] token as a special token, you can resolve the "out of range integral type conversion" error. Best regards, @hemangjoshi37a
gharchive/issue
2023-05-25T08:20:06
2025-04-01T04:33:32.478858
{ "authors": [ "LIO-H-ZEN", "Qubitium", "amdnsr", "atillabasaran", "hemangjoshi37a", "mofanv", "ricksun2023" ], "repo": "artidoro/qlora", "url": "https://github.com/artidoro/qlora/issues/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
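The failure mode in the thread above — calling `convert_ids_to_tokens` with a pad token id that is outside the tokenizer's range — and the suggested `[PAD]` workaround can be sketched with a toy tokenizer. This is an illustration of the guard only, not the actual transformers or qlora code; `ToyTokenizer` and `set_pad_token` are hypothetical names:

```python
class ToyTokenizer:
    """Minimal stand-in for a fast tokenizer with a tiny fixed vocabulary."""
    def __init__(self):
        self.id_to_token = {0: "<unk>", 1: "<s>", 2: "</s>"}
        self.special_tokens = {}

    def convert_ids_to_tokens(self, token_id):
        if token_id not in self.id_to_token:
            # Mirrors the OverflowError raised when the id is out of range
            # (e.g. model.config.pad_token_id is None or -1).
            raise OverflowError("out of range integral type conversion attempted")
        return self.id_to_token[token_id]

    def add_special_tokens(self, mapping):
        self.special_tokens.update(mapping)

def set_pad_token(tokenizer, pad_token_id):
    """Guarded version of the failing line: only convert the id if it is
    actually in range; otherwise register an explicit [PAD] token, which is
    the workaround suggested in the thread."""
    try:
        pad = tokenizer.convert_ids_to_tokens(pad_token_id)
    except OverflowError:
        pad = "[PAD]"
    tokenizer.add_special_tokens({"pad_token": pad})
    return pad

print(set_pad_token(ToyTokenizer(), -1))  # out of range -> falls back to "[PAD]"
print(set_pad_token(ToyTokenizer(), 0))   # in range -> reuses "<unk>"
```

The point of the guard is that an invalid pad id never reaches the conversion call, so neither the OverflowError nor the later unk-token recursion can be triggered from this path.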
58164628
Use expect instead of assert for testing Aligns with ember-cli commit: https://github.com/ember-cli/ember-cli/commit/24356790ba1a6aead425c8bddfd96f6cb06ab1cb Looking good - thank you again!
gharchive/pull-request
2015-02-19T03:46:50
2025-04-01T04:33:32.491891
{ "authors": [ "Globegitter", "IanVS" ], "repo": "artificialio/sane", "url": "https://github.com/artificialio/sane/pull/80", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
635282849
StorageLock.java:39-40: Consider waiting for the lock to... The puzzle 230-d8ecbde2 from #230 has to be resolved: https://github.com/artipie/rpm-adapter/blob/c5b426bf46ca004352abdd1fc8ff4aaf62245abf/src/main/java/com/artipie/rpm/misc/StorageLock.java#L39-L40 The puzzle was created by olenagerasimova on 09-Jun-20. Estimate: 30 minutes, role: DEV. If you have any technical questions, don't ask me, submit new tickets instead. The task will be "done" when the problem is fixed and the text of the puzzle is removed from the source code. Here is more about PDD and about me. @g4s8/z, @olenagerasimova/z please, pay attention to this issue The puzzle 230-d8ecbde2 has disappeared from the source code, that's why I closed this issue. Job gh:artipie/rpm-adapter#245 is not assigned, can't get performer
gharchive/issue
2020-06-09T09:49:45
2025-04-01T04:33:32.500927
{ "authors": [ "0crat", "0pdd" ], "repo": "artipie/rpm-adapter", "url": "https://github.com/artipie/rpm-adapter/issues/245", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
663243547
Updated http and asto version Updated http and asto version, used new features where possible. @victornoel thanks, done @g4s8 please approve/merge Job was finished in 17 hours, bonus for fast delivery is possible (see §36) Job audit: performer @victornoel/z/z didn't make CR comments
gharchive/pull-request
2020-07-21T19:08:52
2025-04-01T04:33:32.502429
{ "authors": [ "0crat", "olenagerasimova" ], "repo": "artipie/rpm-adapter", "url": "https://github.com/artipie/rpm-adapter/pull/331", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
297483592
Advice on compiler errors sought Please see the attached image. I am only seeing these errors after adding the Signals pod to a new Cocoa Touch Framework Swift 4.0 project. What do you think is the best way of dealing with these, Tuomas? You seem to be pulling an older version, one that isn't compatible with Swift 4. Please update your Podfile to pull version 5.0. oh. ok. dunno how I managed that. thanks Tuomas
gharchive/issue
2018-02-15T15:19:48
2025-04-01T04:33:32.505447
{ "authors": [ "artman", "josephbeuysmum" ], "repo": "artman/Signals", "url": "https://github.com/artman/Signals/issues/55", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
57245772
No bidder number/pin shown after placing a bid under reserve The number/pin screen does not show up; you see the screen that says "Higher Bid Needed" with buttons that say "Place Higher Bid" or "Continue". When you go to "Place Higher Bid", you can place it without entering a bidder number and pin, but you still never get to the page with the bidder number/pin. We only show the bidder number and pin if the user was newly created. Is it possible that you were testing with an existing user? Wait, the PAF auction doesn't have reserves, does it? Gonna move this back to the 4.0 milestone. Correct, PAF does not have starting bids :) Fixed!
gharchive/issue
2015-02-10T22:12:27
2025-04-01T04:33:32.511852
{ "authors": [ "alessandra-castillo", "ashfurrow" ], "repo": "artsy/eidolon", "url": "https://github.com/artsy/eidolon/issues/369", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
87338224
[Push Notifications] Device token submitted to Artsy API at wrong time. Currently we try to submit the device token to the Artsy API as soon as possible, but it’s very possible that by then the user has not yet signed-in. Will we ever want to spam all users that have the app installed, even if they are not registered users? (e.g. to do more general announcements) Keep in mind that this is not just a matter of a decision, it will require eigen and gravity side work which may or may not be trivial. if not, then we can just defer registering for push notifications until the user has signed-in, or if they’re signed-in perform it at startup as is currently the case @katarinabatina @orta I could see us wanting to push people in the fair context who aren't signed in Ok, I’m gonna do some work on Gravity’s end to support push notifications for trial users.
gharchive/issue
2015-06-11T13:00:10
2025-04-01T04:33:32.513338
{ "authors": [ "alloy", "katarinabatina" ], "repo": "artsy/eigen", "url": "https://github.com/artsy/eigen/issues/518", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2747890001
fix: restore image fallbacks This PR resolves [] Description Fixes an error causing auctions not to render, as well as a few other images I missed in the migration. This restores some behavior we had with the old opaqueImageView: if a null or undefined url is passed, a gray box is rendered as a fallback. It looks like this: Screenshot PR Checklist [ ] I have tested my changes on iOS and Android. [x] I hid my changes behind a feature flag, or they don't need one. [x] I have included screenshots or videos, or I have not changed the UI. [x] I have added tests, or my changes don't require any. [x] I added an app state migration, or my changes do not require one. [x] I have documented any follow-up work that this PR will require, or it does not require any. [x] I have added a changelog entry below, or my changes do not require one. To the reviewers 👀 [ ] I would like at least one of the reviewers to run this PR on the simulator or device. Changelog updates Cross-platform user-facing changes fix images not rendering when url is null - brian iOS user-facing changes Android user-facing changes Dev changes Need help with something? Have a look at our docs, or get in touch with us. Thank you 🙏 This PR contains the following changes: Cross-platform user-facing changes (fix images not rendering when url is null - brian - brainbicycle) Generated by :no_entry_sign: dangerJS against 9dc961ae51ac17c6152b1e2bb2bb7d5559f26285
gharchive/pull-request
2024-12-18T13:56:26
2025-04-01T04:33:32.518857
{ "authors": [ "ArtsyOpenSource", "MounirDhahri", "brainbicycle" ], "repo": "artsy/eigen", "url": "https://github.com/artsy/eigen/pull/11307", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
617194796
Bump reaction bump Fails :no_entry_sign: Danger failed to run dangerfile.ts.

Error: GraphQLError Syntax Error: Cannot parse the unexpected character "<".

GraphQLError: Syntax Error: Cannot parse the unexpected character "<".
    at syntaxError (/home/circleci/project/node_modules/graphql/error/syntaxError.js:24:10)
    at readToken (/home/circleci/project/node_modules/graphql/language/lexer.js:302:32)
    at Object.lookahead (/home/circleci/project/node_modules/graphql/language/lexer.js:61:43)
    at Object.advanceLexer [as advance] (/home/circleci/project/node_modules/graphql/language/lexer.js:52:33)
    at expect (/home/circleci/project/node_modules/graphql/language/parser.js:1296:11)
    at parseDocument (/home/circleci/project/node_modules/graphql/language/parser.js:107:3)
    at parse (/home/circleci/project/node_modules/graphql/language/parser.js:38:10)
    at buildSchema (/home/circleci/project/node_modules/graphql/utilities/buildASTSchema.js:447:43)
    at _callee3$ (/home/circleci/project/scripts/validateSchemas.js:154:31)
    at tryCatch (/home/circleci/project/node_modules/regenerator-runtime/runtime.js:45:40)

Dangerfile
19| const changes: string[] = await getBreakingChanges(versionToCheck, 2)
20|
21| // There are breaking changes with the schema
22| if (changes.length) {
23|   if (isReleasePR) {
------------^
24|     const deployURL = "https://github.com/artsy/metaphysics#deployment"
25|     fail(
26|       `You need to promote Metaphysics from staging to production. The schema used in Reaction is further ahead than the version in Metaphysics production. This could cause your queries to fail. You can see how to [deploy here](${deployURL}).`
27|     )

Generated by :no_entry_sign: dangerJS against ff6e7490c53f40533396ff339a38b46e00561f37
gharchive/pull-request
2020-05-13T06:58:43
2025-04-01T04:33:32.520938
{ "authors": [ "ArtsyOpenSource", "damassi" ], "repo": "artsy/force", "url": "https://github.com/artsy/force/pull/5564", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
205687524
@craigspaeth: replace google search with elasticsearch Switches our 'search results' page over from Google to Elasticsearch. Closely mirrors the corresponding artsy/force#410 PR. Uses our existing SearchResult model and collection, deleting the GoogleSearchResult equivalents. Adds description formatting logic for shows/fair booths. Previously this was returned by the Google API, but microgravity now needs to construct it. I've also tested this functionally locally and it looks fine in terms of result rendering. Looks like some related route spec failures, otherwise no blocking comments on my end. Feel free to address and merge when 💚 Thanks! Solid work!

1) Search routes #index success makes the appropriate request and removes accents:
   TypeError: Cannot read property 'should' of undefined
   at Context.<anonymous> (apps/search/test/routes.coffee:25:40)

2) Search routes #index error doesnt swallow the error:
   Uncaught AssertionError: expected false to be true
   at Assertion.fail (node_modules/should/cjs/should.js:205:17)
   at Assertion.value (node_modules/should/cjs/should.js:277:19)
   at apps/search/test/routes.coffee:55:11
   at Timeout._onTimeout (node_modules/underscore/underscore.js:768:19)

thanks @craigspaeth - I had difficulty testing the error callback in the route (as I had changed the scheme from a .fetch.then.catch promise-based scheme to a .fetch with success and error params), so I had to delete that spec in the end.
gharchive/pull-request
2017-02-06T19:44:25
2025-04-01T04:33:32.524196
{ "authors": [ "cavvia", "craigspaeth" ], "repo": "artsy/microgravity", "url": "https://github.com/artsy/microgravity/pull/115", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1808121906
Added a fix for submission details page changes not committing new submissions This PR adds a fix for functionality that broke due to LeetCode website changes. @arunbhardwaj Please let me know if you have any comments. I tested the extension locally, and it works fine now. Fixes issue: https://github.com/arunbhardwaj/LeetHub-2.0/issues/12 I find that this approach is more of a "forced submission" approach. I've detailed an alternative solution that I feel handles the new issue from the LeetCode UI changes in this PR: https://github.com/arunbhardwaj/LeetHub-2.0/pull/18 I'd love to confirm that this works for you @Mouli-Sanketh-Maturi
gharchive/pull-request
2023-07-17T16:21:45
2025-04-01T04:33:32.535436
{ "authors": [ "Mouli-Sanketh-Maturi", "arunbhardwaj" ], "repo": "arunbhardwaj/LeetHub-2.0", "url": "https://github.com/arunbhardwaj/LeetHub-2.0/pull/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
98836764
Renamed files I've renamed files so that the wrapper scripts and configuration files all match, with the exception of the file extension. This is done to group each wrapper with its configuration file when listed alphabetically, so it's clear which scripts go together. The one other thing I would do is split this directory into Wrappers, Configuration Files, and Graphing directories, but I will leave that decision up to you. I've left all wrapper scripts ending in .sh and all corresponding configuration files ending in .conf. I've also renamed the conversion R script and common configuration files to be more descriptive. Below is a list of before and after:

Before -> After
2dsfs_allo.symp.conf -> consolidated with 2dsfs_example.conf
2dsfs_example.conf -> 2D_Site_Frequency_Spectrum.conf
ANGSD_2DSFS.sh -> 2D_Site_Frequency_Spectrum.sh
abbababa.conf -> ABBA_BABA.conf
ANGSD_ABBA_BABA.sh -> ABBA_BABA.sh
ngsadmix.conf -> Admixture.conf
NGS_Admix.sh -> Admixture.sh
anc.conf -> Ancestral_Sequence.conf
ANGSD_ANC_SEQ.sh -> Ancestral_Sequence.sh
common.conf -> Common_Variables.conf
ngsFST.conf -> Fst.conf
NGS_FST.sh -> Fst.sh
genotype_example.conf -> Genotype_Likelihoods.conf
ANGSD_genotypes.sh -> Genotype_Likelihoods.sh
ngsF.conf -> Inbreeding_Coefficients_Estimator.conf
NGS_F.sh -> Inbreeding_Coefficients_Estimator.sh
ANGSD_PCA.sh -> PCA.sh
convertSFS.R -> SFS_to_FST.R
sfs_example.conf -> Site_Frequency_Spectrum.conf
ANGSD_SFS -> Site_Frequency_Spectrum.sh
thetas_example.conf -> Thetas_Calculator.conf
ANGSD_Thetas.sh -> Thetas_Calculator.sh
init.sh -> unchanged
utils.sh -> unchanged, but could be removed by using the if constructs that bash provides

I have not changed anything in the shiny directory. Thanks Paul!
gharchive/pull-request
2015-08-03T20:54:35
2025-04-01T04:33:32.541707
{ "authors": [ "arundurvasula", "mojaveazure" ], "repo": "arundurvasula/angsd-wrapper", "url": "https://github.com/arundurvasula/angsd-wrapper/pull/89", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
475821369
Button in expanded panel Wondering if there is a special way to handle a button in the expanded panel; right now it does not work and sometimes leads to a blank screen, even though I am just trying to run a print statement. Please provide an example to reproduce the problem. I would love to see this one too
gharchive/issue
2019-08-01T18:15:39
2025-04-01T04:33:32.549766
{ "authors": [ "abbadabbatech", "aryzhov", "taufiqridha" ], "repo": "aryzhov/flutter-expandable", "url": "https://github.com/aryzhov/flutter-expandable/issues/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
473498157
stick on top hi, thanks a lot for releasing this useful library. I have a simple question: how can I know when the stopper sticks to the top, such as at the appBar? Currently, the only place where you can see the current position of the stopper is the builder. I guess I could expose the state such that you can attach a listener. Can you describe your use case scenario? @aryzhov I think the stop value in the builder can provide that, is this right? Yes, that's correct.
gharchive/issue
2019-07-26T19:13:11
2025-04-01T04:33:32.550866
{ "authors": [ "MahdiPishguy", "aryzhov" ], "repo": "aryzhov/flutter-stopper", "url": "https://github.com/aryzhov/flutter-stopper/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1599568738
about './Model Checkpoints/weights.hdf5' Hello, author! I can't find the file "weights.hdf5". Where does this file come from? @hgtt has got the Google Drive link. Closing this issue
gharchive/issue
2023-02-25T05:51:23
2025-04-01T04:33:32.551492
{ "authors": [ "as791", "hgtt" ], "repo": "as791/Multimodal-Brain-Tumor-Segmentation", "url": "https://github.com/as791/Multimodal-Brain-Tumor-Segmentation/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
73338090
Mc event pane @DanielJMaher, @ssontag55 Update asset management page, with event pane. TODO: Consolidate duplicate code into a function; just wanted to get this bit in here asap. @FBRTMaka how long would it take to consolidate that code? Another hour or so, should be ready by noon @DanielJMaher good to go! @ssontag55 Can you review please?
gharchive/pull-request
2015-05-05T14:58:05
2025-04-01T04:33:32.560265
{ "authors": [ "DanielJMaher", "FBRTMaka" ], "repo": "asascience-open/ooi-ui", "url": "https://github.com/asascience-open/ooi-ui/pull/320", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
306941965
BUG (?) validating required fields in a list of structs Validator fails to validate elements (structs) of a list when they have required fields. I have the following structs:

type CustomerOption struct {
    Name  string `valid:"required" json:"name"`
    Value string `json:"value"`
}

// CustomerOptions a set of customer options (name, value pairs)
// swagger:model CustomerOptions
type CustomerOptions struct {
    CustomerOptions []CustomerOption `valid:"required" json:"List"`
}

func (custOpt *CustomerOption) Validate() error {
    validation.AddCustomValidators()
    if _, err := govalidator.ValidateStruct(custOpt); err != nil {
        return oshandlers.NewValidationError().Add(err)
    }
    return nil
}

func (custOpts *CustomerOptions) Validate() error {
    validation.AddCustomValidators()
    if _, err := govalidator.ValidateStruct(custOpts); err != nil {
        return oshandlers.NewValidationError().Add(err)
    }
    return nil
}

Context("CustomerOptions.Validate()", func() {
    // The following test passes:
    It("should error if no options", func() {
        custOpts := types.CustomerOptions{
            CustomerOptions: nil,
        }
        err := custOpts.Validate()
        Expect(err).To(MatchError(oshandlers.NewValidationError().Field("List", "required")))
    })
    // But this one fails, complaining that 'List' is required, which is wrong since the List (CustomerOptions) is provided. It should have failed validating CustomerOption. It used to work correctly before (I had an older version).
    It("should error if no name", func() {
        custOpts := types.CustomerOptions{
            CustomerOptions: []types.CustomerOption{
                types.CustomerOption{
                    Name: "",
                },
            },
        }
        err := custOpts.Validate()
        Expect(err).To(MatchError(oshandlers.NewValidationError().Field("name", "required")))
    })
})

Hello guys! I forked this package cause the owner disappeared. Hope he will be back, but it would be easier to merge these changes back if he is. Link to my repo: create issue there and we'll discuss it.
gharchive/issue
2018-03-20T16:29:01
2025-04-01T04:33:32.563578
{ "authors": [ "ahavriluk", "sergeyglazyrindev" ], "repo": "asaskevich/govalidator", "url": "https://github.com/asaskevich/govalidator/issues/267", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
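The bug report above concerns a validator that checks the outer struct's `required` tag but never recurses into the elements of a slice. The expected recursive behavior can be sketched in a few lines; this is written in Python for brevity and is not govalidator's actual implementation — the field names simply follow the report's example:

```python
def validate(obj, schema):
    """Validate a dict against a schema mapping field -> rule.

    A rule is either "required" or a (rule, element_schema) pair; in the
    latter case each element of the list is validated recursively, which
    is the step the reporter found to be missing."""
    errors = []
    for field, rule in schema.items():
        value = obj.get(field)
        if isinstance(rule, tuple):
            rule_name, element_schema = rule
            if rule_name == "required" and not value:
                errors.append(f"{field}: required")
                continue
            for element in value or []:
                errors.extend(validate(element, element_schema))
        elif rule == "required" and not value:
            errors.append(f"{field}: required")
    return errors

option_schema = {"name": "required"}
options_schema = {"List": ("required", option_schema)}

# Outer field missing -> the outer "required" rule fires.
print(validate({}, options_schema))
# Outer field present but an element is missing "name" -> the element-level
# error the reporter expected, not a spurious "List: required".
print(validate({"List": [{"name": ""}]}, options_schema))
```

With this shape, a provided but invalid list yields errors from its elements rather than a misleading error on the list field itself.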
2042660526
Update Gradle and SDK version. Also fixes a segfault. With these changes, Vinyl Cast now builds under the current Android Studio (hedgehog). Gradle 8.5 targetSdkVersion 34 Ah, I meant to send this as multiple pull requests.
gharchive/pull-request
2023-12-14T23:36:39
2025-04-01T04:33:32.565346
{ "authors": [ "kyleatmakrs" ], "repo": "aschober/vinyl-cast", "url": "https://github.com/aschober/vinyl-cast/pull/25", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
257233467
Multiple math mode glitches It looks like the \[\] equation environment is very glitchy if it is placed in the middle of a line, and sometimes weird + signs will appear. Looks like an incorrectly inserted new line to me. Another glitch is that a closing square bracket can end up outside the math if $...$ is used for math mode, but this is not a problem with \(\). Hi, could you provide me with examples? That would help a great deal. Thanks so much! I will have to review this. The problem is the following: what is inside \[ ... \] has to be protected from alteration by other parts of Asciidoctor. In the regex that handles this, I assume that \[ and \] are at the beginning of the line. Hence the problem. I'll see if there is a better solution that doesn't mess something else up. Using regexes to do all this work is inherently problematic; this will be solved once there is a proper parser for asciidoc. Example of the second bug: $[a],b$ incorrectly results in the output [a,b] \([a],b\) correctly results in the output [a],b
gharchive/issue
2017-09-13T01:51:07
2025-04-01T04:33:32.569879
{ "authors": [ "jxxcarlson", "xsrvmy" ], "repo": "asciidoctor/asciidoctor-latex", "url": "https://github.com/asciidoctor/asciidoctor-latex/issues/72", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
584803168
Variables and declarations: change "assignment" to "initial value" URL: https://github.com/asciidwango/js-primer/blob/master/source/basic/variables/README.md "There are also restrictions on the values that can be assigned" :memo: https://github.com/asciidwango/js-primer/blob/master/source/basic/variables/README.md#L233 Since this is about const, "initial value" is better than "assignment". But writing "there are no restrictions on the initial value either" would be awkward: it could read as though omitting the initial value is OK. The meaning gets across as it is, so let's hold off on this for now.
gharchive/issue
2020-03-20T02:20:04
2025-04-01T04:33:32.573485
{ "authors": [ "azu" ], "repo": "asciidwango/js-primer", "url": "https://github.com/asciidwango/js-primer/issues/1097", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1674806780
[Traefik Plugin Catalog] Plugin Analyzer has detected a problem. The plugin was not imported into Traefik Plugin Catalog. Cause: failed to load readme: failed to get the readme file: GET https://api.github.com/repos/ascpikmin/keepcookie/readme?ref=v0.1.0: 404 Not Found [] Traefik Plugin Analyzer will restart when you will close this issue. If you believe there is a problem with the Analyzer or this issue is the result of a false positive, please contact us. Added README file
gharchive/issue
2023-04-19T12:30:13
2025-04-01T04:33:32.576844
{ "authors": [ "ascpikmin", "traefiker" ], "repo": "ascpikmin/keepcookie", "url": "https://github.com/ascpikmin/keepcookie/issues/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2321544482
Change asdf-pandoc to be maintained by StephanMeijer and Fbrisset

asdf-pandoc does not seem to have been claimed as a community repository. I would like to do so so that I can work on my changes as a community effort. @Fbrisset is the current maintainer of the repository present in the index. CC: @sys9kdr

ya, I know Feberist/asdf-pandoc is in the index. But Feberist/asdf-pandoc isn't good, IMHO. It doesn't meet basic requirements, and we can only download a few versions of Pandoc with this plugin. see https://github.com/asdf-vm/asdf-plugins/issues/995 I am against putting the current Feberist/asdf-pandoc that doesn't implement basic features into the Community. If sys9kdr/asdf-pandoc is added to the official index, I will add it to the community. see https://github.com/asdf-vm/asdf-plugins/pull/996

> and we can only download a few versions of Pandoc with this plugin

Correct. I put out a pull request for that.

> I am against putting the current Feberist/asdf-pandoc that doesn't implement basic features into the Community.

I'm proposing to put my fork out.

> If sys9kdr/asdf-pandoc is added to the official index, I will add it to the community.

I'm not against that. Either is fine for me, to be honest.

See also:
https://github.com/Fbrisset/asdf-pandoc/pull/6
https://github.com/jgm/pandoc/issues/9810

I saw @StephanMeijer's PR, but there are still some issues https://github.com/Fbrisset/asdf-pandoc/pull/6 . I'll create issues and send PRs to @Fbrisset later.

@sys9kdr @Fbrisset What do you think about going forward making contributions as a team of three? effectively merging all our efforts?

I think it's a good idea. We can only get better by being together.

@StephanMeijer @Fbrisset okay. I'll close my PRs in asdf-vm.

@smorimoto I confirmed that the quality was good enough. Please merge it.
gharchive/pull-request
2024-05-28T17:04:08
2025-04-01T04:33:32.581039
{ "authors": [ "Fbrisset", "StephanMeijer", "smorimoto", "sys9kdr" ], "repo": "asdf-community/infrastructure", "url": "https://github.com/asdf-community/infrastructure/pull/183", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
93634061
Drag spontaneously cancelled after moving a short distance

In one mode of my mapview, it's set up with a single draggable annotation with a custom view. After the drag starts and the annotation view has moved a few pixels (like 5-10), something is causing the mapview to set its drag state to MKAnnotationViewDragStateEnding. Also, the touch that was dragging the annotation starts scrolling the map instead. If I change my mapview from a TSClusterMapView subclass to a plain MKMapView, this no longer happens.

The stack trace at the time seems to only show execution of the mapview's gesture recognizers. It looks like at this point the touch has already been taken over by the pan gesture recognizer for scrolling, and that this itself is causing the drag to end.

* frame #0: 0x000000010750b6f6 Wundrit`-[PendingQuestionAnnotationView setDragState:animated:](self=0x00007ffcb9950620, _cmd=0x0000000116f54852, newDragState=MKAnnotationViewDragStateEnding, animated=false) + 342 at MainMapViewController.m:1485
  frame #1: 0x000000010a60cb81 MapKit`-[MKAnnotationContainerView _dropDraggingAnnotationViewAnimated:] + 280
  frame #2: 0x000000010a55cb95 MapKit`-[MKMapView _dropDraggingAnnotationView:] + 90
  frame #3: 0x000000010a55e10a MapKit`-[MKMapView _willChangeRegionAnimated:] + 87
  frame #4: 0x00000001169c2b29 VectorKit`-[VKMapCameraController startPanningAtPoint:panAtStartPoint:] + 89
  frame #5: 0x000000010a59ac21 MapKit`-[MKMapGestureController handlePan:] + 384
  frame #6: 0x000000010ab4c656 UIKit`_UIGestureRecognizerSendActions + 262
  frame #7: 0x000000010ab4b2f9 UIKit`-[UIGestureRecognizer _updateGestureWithEvent:buttonEvent:] + 532

So what could be causing the scrolling recognizer to take over the touch that started out belonging to the annotation drag? I was thinking the UIPanGestureRecognizer that's set up in initHelpers was somehow to blame (what is the point of that anyway?), but I commented out the creation of that recognizer and the problem is still happening. Any ideas for what this could be or how to debug this?

Looks like my problem was caused by this:

- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    return YES;
}

I've removed it and everything to do with the added pan gesture recognizer, and dragging works fine.

@jpmhouston Yes, sorry, that was carried over from the project which spawned this library. I will make that piece optional/removed in the next update, as it could result in undesirable effects as you have found.
gharchive/issue
2015-07-07T21:32:17
2025-04-01T04:33:32.595484
{ "authors": [ "ashare80", "jpmhouston" ], "repo": "ashare80/TSClusterMapView", "url": "https://github.com/ashare80/TSClusterMapView/issues/21", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1659346611
RuntimeError: OpenGL 4.4

Description
When I run the DMTet finetune, this error happens:

self.cpp_wrapper = _get_plugin(gl=True).RasterizeGLStateWrapper(output_db, mode == 'automatic', cuda_device_idx)
RuntimeError: OpenGL 4.4 or later is required

Steps to Reproduce
python main.py -O --text "A ripe strawberry." --workspace trial_dmtet_a_ripe --dmtet --iters 5000 --init_ckpt trial_A_ripe_strawberry/checkpoints/df.pth

Expected Behavior
fix it

Environment
pytorch1.20.1
ubuntu20.04

This is a problem resulting from the NVIDIA driver version.
gharchive/issue
2023-04-08T02:45:45
2025-04-01T04:33:32.597008
{ "authors": [ "lingtengqiu" ], "repo": "ashawkey/stable-dreamfusion", "url": "https://github.com/ashawkey/stable-dreamfusion/issues/193", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2178272784
🛑 Home is down In 0bfeea9, Home (http://home.ashishkhokhar.com/$TEST_URL) was down: HTTP code: 0 Response time: 0 ms Resolved: Home is back up in 6e76d83 after 8 minutes.
gharchive/issue
2024-03-11T05:39:13
2025-04-01T04:33:32.600149
{ "authors": [ "ashish-khokhar" ], "repo": "ashish-khokhar/services-status", "url": "https://github.com/ashish-khokhar/services-status/issues/166", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1924467719
Golang

I write in Golang and it would be awesome if Golang worked. Btw, I use Wazero as a Wasm runner. Wazero is portable to anything: servers, desktops and mobiles. Just mentioning this as it might be useful to this project. It's written in Golang itself.

Is there a reason golang would not currently work?

Hey @gedw99 thanks for the suggestion. I see a few improvements to Godot Wasm that could help with Golang fans (I include myself in this group). I'm going to include some context here for posterity.

Technically, any language targeting Wasm is supported by this project. That said, typically, compiling Golang to a Wasm module actually targets WASI. There is currently no Go equivalent of Rust's wasm32-unknown-unknown target. Despite this, Godot Wasm should support modules built for WASI, although there may be a few hoops to jump through.

A simple Go program compiled to Wasm via TinyGo (tinygo build -o tinygo.wasm -target wasi main.go) produces a Wasm module with the following signature:

Type: wasm
Size: 68.6 KB
Imports:
  Functions:
    "wasi_snapshot_preview1"."fd_write": [I32, I32, I32, I32] -> [I32]
  Memories:
  Tables:
  Globals:
Exports:
  Functions:
    "malloc": [I32] -> [I32]
    "free": [I32] -> []
    "calloc": [I32, I32] -> [I32]
    "realloc": [I32, I32] -> [I32]
    "_start": [] -> []
    "hello": [] -> []
    "asyncify_start_unwind": [I32] -> []
    "asyncify_stop_unwind": [] -> []
    "asyncify_start_rewind": [I32] -> []
    "asyncify_stop_rewind": [] -> []
    "asyncify_get_state": [] -> [I32]
  Memories:
    "memory": not shared (2 pages..)
  Tables:
  Globals:

The same program targeting WASI via Go (actually gotip) (GOOS=wasip1 GOARCH=wasm gotip build -o go.wasm) produces a module with the following signature:

Type: wasm
Size: 1.4 MB
Imports:
  Functions:
    "wasi_snapshot_preview1"."sched_yield": [] -> [I32]
    "wasi_snapshot_preview1"."proc_exit": [I32] -> []
    "wasi_snapshot_preview1"."args_get": [I32, I32] -> [I32]
    "wasi_snapshot_preview1"."args_sizes_get": [I32, I32] -> [I32]
    "wasi_snapshot_preview1"."clock_time_get": [I32, I64, I32] -> [I32]
    "wasi_snapshot_preview1"."environ_get": [I32, I32] -> [I32]
    "wasi_snapshot_preview1"."environ_sizes_get": [I32, I32] -> [I32]
    "wasi_snapshot_preview1"."fd_write": [I32, I32, I32, I32] -> [I32]
    "wasi_snapshot_preview1"."random_get": [I32, I32] -> [I32]
    "wasi_snapshot_preview1"."poll_oneoff": [I32, I32, I32, I32] -> [I32]
  Memories:
  Tables:
  Globals:
Exports:
  Functions:
    "_start": [] -> []
  Memories:
    "memory": not shared (266 pages..)
  Tables:
  Globals:

Godot Wasm provides default implementations for the following small subset of WASI import functions according to the WASI spec:

wasi_snapshot_preview1.clock_time_get
wasi_snapshot_preview1.args_sizes_get
wasi_snapshot_preview1.args_get
wasi_snapshot_preview1.environ_sizes_get (stubbed out)
wasi_snapshot_preview1.environ_get (stubbed out)
wasi_snapshot_preview1.random_get

With the background info out of the way, there are a few options to enable/ease Golang support:

1. Extend default WASI support. At the very least, wasi_snapshot_preview1.sched_yield and wasi_snapshot_preview1.poll_oneoff would need to be provided by default. If you can provide the signature (e.g. wasmer inspect go.wasm) of modules you've compiled from Go, perhaps we can find other required functions.

2. Implement the required functions in GDScript. This can be done immediately to enable support for modules with unmet WASI requirements. Import functions can be provided in a dictionary of the shape { "functions": { "wasi_snapshot_preview1.sched_yield": [self, "sched_yield"] } }. One of the major disadvantages to this approach would be a massive decrease in speed compared to a native implementation if the import function is called with any frequency.

3. Godot Wasm forwards the runtime's default WASI implementation to the module. I'm not entirely sure what this would look like without a little more research. One downside to this approach would be the assumptions made about the Wasm runtime used. Currently, the only assumption/requirement is that the runtime abides by the Wasm C API, which makes porting support very simple.

As for Wazero support, it seems as though this runtime uses a bespoke API. Supporting Wazero would be quite an undertaking. Rather, it may make sense to petition for Wazero to adopt the Wasm C API, as this is as close as we've got to a universally adopted Wasm runtime API standard.

I forgot to mention it explicitly in my comment above, but the TinyGo example is supported by this project out of the box as it requires fewer WASI import functions.
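As a rough illustration of option 2 above: only the import-dictionary shape comes from the comment; the surrounding class, the `load_wasm_module` helper, and its arguments are assumptions for illustration, not the real Godot Wasm API.

```gdscript
extends Node

# Host-side stand-in for the missing WASI import.
# Signature per the module dump above: [] -> [I32]; 0 is the WASI success errno.
func sched_yield() -> int:
    return 0

func _ready() -> void:
    # Dictionary shape as described in the text; everything else is hypothetical.
    var imports = {
        "functions": {
            "wasi_snapshot_preview1.sched_yield": [self, "sched_yield"],
        },
    }
    load_wasm_module("res://go.wasm", imports)  # hypothetical helper, for illustration
```

As noted above, an import implemented this way incurs a GDScript call on every invocation, so frequently-called imports would be slow compared to a native implementation.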
gharchive/issue
2023-10-03T16:06:28
2025-04-01T04:33:32.607023
{ "authors": [ "Trey2k", "ashtonmeuser", "gedw99" ], "repo": "ashtonmeuser/godot-wasm", "url": "https://github.com/ashtonmeuser/godot-wasm/issues/55", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
600825996
Added StoreCashBoxes project.

The aim of the PR is to create a project for simulating the work of cash boxes in the store.

Added source files of the StoreCashBoxes project.
Added xUnit tests.
Updated xUnit tests of previous projects.
Added a simple CI workflow.

Very cool that you added a CI workflow 👍
gharchive/pull-request
2020-04-16T07:55:12
2025-04-01T04:33:32.613965
{ "authors": [ "MikhailMasny", "aslamovyura" ], "repo": "aslamovyura/TMS-DotNet-aslm", "url": "https://github.com/aslamovyura/TMS-DotNet-aslm/pull/10", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
250755966
Client Connectivity

I'm currently attempting to get kube-monkey up and running on a Kubernetes cluster, and I keep getting an error stating:

panic: Unable to verify client connectivity to Kubernetes server

Any ideas or suggestions on what's going wrong and how to fix it? Thanks.

Are you using a VIP in front of your API server? I've noticed this as well, and I think the solution might be to allow manually specifying the location for the apiserver, and not using the one from the in-cluster config file.

I did some more research & testing, and seem to have figured out a solution. First, I changed the error check after the inClusterConfig call in kubernetes.go to print the error, to try and see what exactly was failing. Examining the logs after running, I received:

panic: Get https://[cluster IP]/version: x509: cannot validate certificate for [cluster IP] because it doesn't contain any IP SANs

Here, the problem was that the client was attempting to authenticate on the IP, which is invalid, as opposed to the cluster DNS, which is valid. Examining the actual inClusterConfig function in client-go revealed that the config is populated with the IP, and even has a comment saying: TODO: switch to cluster DNS. To fix this, I manually changed the Host field of the config after the function call to match the cluster DNS, and now it correctly authenticates.

Yeah, I figured this would be causing it. I'm currently working on a commit to allow you to override the host field of InClusterConfig via a kube-monkey configuration. I'll re-open this issue for now. Thanks for digging into it.

@valtsi You should be able to configure kube-monkey to use a different host URL now by adding the following to the config file:

[kubernetes]
host="https://your-apiserver-url.com"

On start-up, kube-monkey will log a message indicating that the API server has been overridden.
gharchive/issue
2017-08-16T20:50:56
2025-04-01T04:33:32.617551
{ "authors": [ "asobti", "valtsi" ], "repo": "asobti/kube-monkey", "url": "https://github.com/asobti/kube-monkey/issues/4", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
493771308
Fix for Python 3.8: Use stdlib importlib.metadata

Flake8-2020 doesn't work on Python 3.8 because importlib_metadata is a backport of importlib.metadata from the standard library in 3.8. It fails like this (running tox on 3.8-dev, Python 3.8.0b4+):

tests/flake8_2020_test.py:5: in <module>
    from flake8_2020 import Plugin
flake8_2020.py:9: in <module>
    import importlib_metadata
.tox/py38/lib/python3.8/site-packages/importlib_metadata/__init__.py:532: in <module>
    __version__ = version(__name__)
.tox/py38/lib/python3.8/site-packages/importlib_metadata/__init__.py:494: in version
    return distribution(distribution_name).version
.tox/py38/lib/python3.8/site-packages/importlib_metadata/__init__.py:467: in distribution
    return Distribution.from_name(distribution_name)
.tox/py38/lib/python3.8/site-packages/importlib_metadata/__init__.py:181: in from_name
    dists = resolver(DistributionFinder.Context(name=name))
<frozen importlib._bootstrap_external>:1385: in find_distributions
    ???
.tox/py38/lib/python3.8/re.py:275: in escape
    pattern = str(pattern, 'latin1')
E   TypeError: decoding to str: need a bytes-like object, Context found

https://travis-ci.org/hugovk/flake8-2020/builds/585264854

With this fix, here's an example of it (almost) passing: https://travis-ci.org/hugovk/flake8-2020/builds/585272513

It fails because of coverage's --fail-under 100. Should that be reduced to allow for this branching, or made to ignore it?

it works fine for me using nightly -- I'm going to wait until python3.8 is released before merging this change

Thanks for following up upstream at https://gitlab.com/python-devs/importlib_metadata/issues/86, this now works on 3.8-dev.
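The standard compatibility shim for this situation is a conditional import. The sketch below illustrates the pattern; it is not necessarily the exact patch in this PR:

```python
import sys

# On Python 3.8+, importlib.metadata ships in the standard library; the
# importlib_metadata backport is only needed on older interpreters.
if sys.version_info >= (3, 8):
    import importlib.metadata as importlib_metadata
else:
    import importlib_metadata

# Either branch exposes the same API surface, e.g. version():
get_version = importlib_metadata.version
```

Code elsewhere in the plugin can then keep calling `importlib_metadata.version(...)` unchanged on both sides of the 3.8 boundary.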
gharchive/pull-request
2019-09-15T18:38:21
2025-04-01T04:33:32.626562
{ "authors": [ "asottile", "hugovk" ], "repo": "asottile/flake8-2020", "url": "https://github.com/asottile/flake8-2020/pull/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1798048314
[Question]: Best way to add an executable tool (like curl) to js_binary sandbox

I have a 3rd party node module which attempts to spawn curl under the hood. However, using --incompatible_strict_action_env removes the system's PATH and therefore curl is not found. What is the best way to make curl (or any other executable) available to the build actions that need it?

My current thoughts:

1. Set a specific PATH environment variable on the js_binary rule in question. (This seems wrong though, because it's OS-specific and may not work well with RBE.)
2. Build curl within the repo itself. (Seems like a lot to try to get this to work, but may be a good option?)
3. Create a toolchain that downloads curl for the OS and pass it into the js_binary rule (somehow).

Any thoughts? And if going with #3, is the best thing to do to create a wrapper rule around js_binary_lib.create_launcher(), or could js_binary itself accept an attribute that uses the toolchain?

I think you've got a good handle on the options. There's also 3b, which doesn't give a hermetic toolchain; rather it just asserts that curl is on the $PATH and exposes the absolute path to it for you to stitch into your action. Here's a similar thing I was working on for tar: https://github.com/aspect-build/bazel-lib/commit/0417e15c4cd2d88a3d46a7338c5e72a7dec00bb6

I think we'll probably close this since we'd rather not use the issue tracker for user support, and there's no bug or FR for rules_js here. Come chat on Bazel slack in the #aspect channel?

Thanks for that @alexeagle. I'll def join the slack :) As far as this one goes though: is it possible for js_binary to accept a toolchains attribute for a list of toolchains to provide in the sandbox? Having this attribute may alleviate the need for rules like cypress_module_test, which seems to simply be a wrapper around js_test that provides the Cypress toolchain, but if I need more toolchains than just Cypress, then my only option seems to be writing a custom rule.
A toolchains attribute for js_binary may be my feature request here :)
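For illustration only, a hypothetical shape such an attribute might take; js_binary has no such tool-exposing attribute today, and the labels below are made up:

```starlark
# BUILD.bazel (hypothetical sketch, not a real rules_js API)
js_binary(
    name = "needs_curl",
    entry_point = "main.js",
    # Made-up attribute: resolved tool binaries made available to the launched action.
    toolchains = ["//tools/curl:resolved_curl"],
)
```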
gharchive/issue
2023-07-11T04:03:19
2025-04-01T04:33:32.632879
{ "authors": [ "alexeagle", "gregjacobs" ], "repo": "aspect-build/rules_js", "url": "https://github.com/aspect-build/rules_js/issues/1153", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2075188992
[FR]: formatter's --mode check shouldn't bail on the first error

What is the current behavior?

I've defined //:format as a multi_formatter_binary that supports Go, Starlark, and Java. I have a Go file and a Java file which are both incorrectly formatted. Running bazel run //:format -- --mode check prints the following output:

Formatting Starlark with Buildifier...
Formatting Java with java-format...
java/com/.../SomeJavaFile.java
java/com/.../SomeJavaFile.java
FAILED: A formatter tool exited with code 123
Try running 'bazel run @@//bazel/lint:format' to fix this.

Notice that the check didn't reach the Go file; it bailed after the first error.

Describe the feature

It would be helpful for usability reasons if the --mode check invocation of the formatter checked all the file types and reported an exhaustive list of unformatted files.

Sounds good to me - probably at the same time it makes sense to fix a TODO in there to run the formatter tools in parallel as well (both in check and fix mode)
gharchive/issue
2024-01-10T20:55:03
2025-04-01T04:33:32.634622
{ "authors": [ "alexeagle", "ciarand" ], "repo": "aspect-build/rules_lint", "url": "https://github.com/aspect-build/rules_lint/issues/97", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2659070531
[FR]: Add ability to pass additional user defined flags to interpreter

What is the current behavior?

Currently the bootstrap script doesn't allow passing custom interpreter flags to the Python interpreter: https://github.com/aspect-build/rules_py/blob/main/py/private/run.tmpl.sh#L61

Two flags are added by the rules_py implementation (https://github.com/aspect-build/rules_py/blob/main/py/private/py_semantics.bzl#L6-L14), but I didn't find an option to add more user-defined flags to the interpreter. The args argument for a py_binary target allows passing custom arguments to the invoked script, but not to the interpreter itself.

Describe the feature

I would like to have the ability to pass custom user flags to the Python interpreter used by rules_py. The main motivation is to allow running py_binary targets under the debugpy debugger via bazel.

Instead of having the following line in the bootstrap file:

exec "{{EXEC_PYTHON_BIN}}" {{INTERPRETER_FLAGS}} "$(rlocation {{ENTRYPOINT}})" "$@"

I want to have something like:

exec "{{EXEC_PYTHON_BIN}}" {{INTERPRETER_FLAGS}} $RULES_PY_USER_INTERPRETER_FLAGS "$(rlocation {{ENTRYPOINT}})" "$@"

Example usage would be:

RULES_PY_USER_INTERPRETER_FLAGS="-m debugpy --listen 12345 --wait-for-client" bazel run //path/to/rules_py/py_binary/target

I've seen and tried the proposed way to debug via creating a venv from https://github.com/bazelbuild/rules_python/issues/1401#issuecomment-2389768034. This way doesn't work for us in our monorepo setting for a couple of reasons:

1. We use a custom pytest_runner.
2. We have our own VSCode extension which uses bazel to run tests and debug in the Test Explorer, among other things, and we would prefer to keep interaction with tests and the codebase through bazel, not via a Python interpreter from a rules_py venv.
3. It creates one extra step for our users to be able to interact with the IDE, compared with our current setup.

Another alternative I was thinking of is to use the run_under flag (https://bazel.build/reference/command-line-reference#flag--run_under), but it seems to be not feasible for the debugpy use case, as the bootstrap script is shell and not Python. If there is a way to specify a custom toolchain for this use case I am happy to learn how to do it as well! Thanks!

Created an example PR with the proposed change and described how I tested it: https://github.com/aspect-build/rules_py/pull/442

I think we should explore allowing the interpreter flags to be set on the toolchain and target (like node_interpreter_args from rules_js). The DX of just the env var isn't great. We also need to validate the flags passed, perhaps, as there are interpreter flags that rules_py relies on, and we don't want users to override these.

For the debug case, I'd propose this, rather than modifying the template too much:

py_binary(
    name = "foo",
    srcs = [...],
    deps = [...],
    env = select({
        "//:is_dbg": {"PYDEVD_RESOLVE_SYMLINKS": "1"},
        "//conditions:default": {},
    }),
    interpreter_args = select({
        "//:is_dbg": ["-m", "debugpy", "--listen", "12345", "--wait-for-client"],
        "//conditions:default": [],
    }),
)

Then the bazel invocation is bazel run -c dbg //:foo for debugging (in the debug compilation mode). This allows the flags to be defined in the BUILD file; users don't have to remember them, or the magic env var to set.

Yes, that would be great for us if we could just add flags and env settings as more py_binary arguments. Running with -c dbg is also preferable, as we planned to use it anyway to add the debugpy requirement as an optional dependency.

@mattem I tried out your idea, it works. Updated the PR with the new proposed change. Not sure about the validating-flags part. If you could share some specifics here, I would try to add this as well.

@mattem do you want me to take this one over, or can you still continue as design reviewer?

Implemented in https://github.com/aspect-build/rules_py/pull/442
gharchive/issue
2024-11-14T14:49:30
2025-04-01T04:33:32.641218
{ "authors": [ "alexeagle", "mattem", "muravev-vasilii" ], "repo": "aspect-build/rules_py", "url": "https://github.com/aspect-build/rules_py/issues/436", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1897759944
SIH_HACKTHON_PROJECT We have developed a website for tackling the problem of lack of linguistic knowledge among children We have developed a website to tackle the problem of lack of linguistic knowledge among children.
gharchive/pull-request
2023-09-15T05:59:54
2025-04-01T04:33:32.642080
{ "authors": [ "Sundaram-rathor" ], "repo": "aspiringsecurity/Water-ICM", "url": "https://github.com/aspiringsecurity/Water-ICM/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
535141055
Could not load file or assembly... when generating scaffold

On macOS 10.15 the following command fails (RazorPagesMovie is the current folder, % is a shell prompt):

RazorPagesMovie % dotnet aspnet-codegenerator razorpage -m Movie -dc RazorPagesMovieContext -udl -outDir Pages/Movies --referenceScriptLibraries
Building project ...
Unhandled exception. System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.VisualStudio.Web.CodeGeneration.Utils, Version=3.1.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'. The system cannot find the file specified.
File name: 'Microsoft.VisualStudio.Web.CodeGeneration.Utils, Version=3.1.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'

Document Details

⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 6719f08e-3bd7-dc1a-71df-f2ef9fbca9d8
Version Independent ID: 7096fdb3-612e-9e00-bd0b-8ea4886a09ce
Content: Add a model to a Razor Pages app in ASP.NET Core
Content Source: aspnetcore/tutorials/razor-pages/model.md
Product: aspnet-core
Technology: aspnetcore-tutorials
GitHub Login: @Rick-Anderson
Microsoft Alias: riande

Did you:

dotnet tool install --global dotnet-ef
dotnet tool install --global dotnet-aspnet-codegenerator
dotnet add package Microsoft.EntityFrameworkCore.SQLite
dotnet add package Microsoft.VisualStudio.Web.CodeGeneration.Design
dotnet add package Microsoft.EntityFrameworkCore.Design
dotnet add package Microsoft.EntityFrameworkCore.SqlServer

RazorPagesMovies.csproj had target framework netcoreapp3.0 (see my other issue on the previous tutorial step). I changed it to netcoreapp3.1, removed the obj and bin folders with the rm command, and rebuilt the project with "dotnet build". After this the scaffold generated successfully. Thanks!
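For reference, the retarget described above amounts to a one-line change in the project file; a minimal sketch of the relevant fragment (other properties elided):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <!-- was netcoreapp3.0; the 3.1.0 code generation packages expect 3.1 -->
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>
</Project>
```

followed by removing the bin and obj folders and running dotnet build, as in the comment above.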
gharchive/issue
2019-12-09T18:49:17
2025-04-01T04:33:32.649787
{ "authors": [ "Rick-Anderson", "ionyshch" ], "repo": "aspnet/AspNetCore.Docs", "url": "https://github.com/aspnet/AspNetCore.Docs/issues/16097", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
505317595
updating second snippet to call ReadToEndAsync

[EDIT by guardrex to add the issue link] Fixes #14979

Instead of that, let's change the docs above the comment to explain how we can do async deserialization directly with the new JSON serializer.

Fixes 14979

@davidfowl should we add another section, or should we remove the one that I updated? Another question: for the first snippet, should we alter and dispose the StreamReader?

OK let's do this: keep this change and add a 3rd snippet of DO This with the fully async and non-buffered request body.

Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. :x: eddynaka sign now

You have signed the CLA already but the status is still pending? Let us recheck it.

@davidfowl just added the third as you said before. I added the namespace in each line, because the snippet in the docs doesn't show the namespaces used, so we wouldn't know what nuget/library we are using to deserialize. :smile:

Now ... it throws the CLA!

@Rick-Anderson that system is still 🐞-y as heck. Yikes!

@eddynaka Please sign the CLA if you agree with its terms. ☝️ It's supposed to be signed before we even look at a community PR. However, that system has been a pain point for years here. It throws the CLA notice after making PR changes for readers and the team regularly, so it ends up breaking our process quite a bit.

@eddynaka your branch was behind master so I had to merge them. Can you check my commits? Thanks, Rick

Everything looks good!

Fixes 14990
gharchive/pull-request
2019-10-10T14:40:35
2025-04-01T04:33:32.655308
{ "authors": [ "Rick-Anderson", "davidfowl", "dnfclas", "eddynaka", "guardrex" ], "repo": "aspnet/AspNetCore.Docs", "url": "https://github.com/aspnet/AspNetCore.Docs/pull/14978", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
498427627
blazor (client-side) template disappeared from VS 2019 16.2.0 Preview 2

This issue has been moved from a ticket on Developer Community.

Hello, this page (https://docs.microsoft.com/en-us/aspnet/core/blazor/get-started?view=aspnetcore-3.0&tabs=visual-studio) says that "For a Blazor client-side experience, choose the Blazor (client-side) template." However, I can see only the Blazor (server-side) template in the list. I tested it with 2 different installations of VS 16.2.0 Preview 2 and the template is missing in both of them. It seems that in earlier versions of VS 2019 there was this template. However, I can still create a new Blazor client-side project using the dotnet command line tool (.NET Core 3.0 Preview 6). Would you mind clarifying whether the disappearance of that template from VS is an intended change or a bug? Thank you, Alexander.

Original Comments

Visual Studio Feedback System on 6/24/2019, 02:52 AM: We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.

Visual Studio Feedback System on 6/26/2019, 01:40 AM: Thank you for sharing your feedback! Our teams prioritize action on product issues with broad customer impact. See details at: https://docs.microsoft.com/en-us/visualstudio/ide/report-a-problem?view=vs-2017#faq. In case you need answers to common questions or need assisted support, be sure to use https://visualstudio.microsoft.com/vs/support/. We'll keep you posted on any updates to this feedback.

Marc W. George on 7/1/2019, 11:35 AM: I have the same issue in Preview 3.

Byrd, Joel on 7/25/2019, 05:16 AM: Same issue in Visual Studio 16.3.0 Preview 1.0. I ran dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.0.0-preview7.19365.7 in the console, and it lists all the Blazor templates, including both server-side and client-side, but when I go to create a new project and search for Blazor, only the server-side version shows up.

Brad Wood on 8/14/2019, 04:56 PM: Using VS2019 version 16.2.2, after running dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.0.0-preview8.19405.7 (as it reads in the current blazor doc), I don't get any templates at all.

Matthew Tamm on 8/21/2019, 06:36 PM: VS 16.3.0 Preview 2 and dotnet 3.0.100-preview8-013656. The template was also missing here. Installed the preview 8 templates with the following: dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.0.0-preview8.19405.7. Restarted VS, all good now. Taken from the below. There is a note in there about uninstalling any previous version templates; this might be affecting others, YMMV. https://docs.microsoft.com/en-us/aspnet/core/blazor/get-started?view=aspnetcore-3.0&tabs=visual-studio

Original Solutions

(no solutions)

Thanks for contacting us. Seems this is resolved already.
gharchive/issue
2019-09-25T17:39:02
2025-04-01T04:33:32.660327
{ "authors": [ "mkArtakMSFT", "vsfeedback" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/issues/14429", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
501248411
Identity scaffold template missing encoding when verifying via login page

Describe the bug
The Identity scaffold template has several ways to trigger the email verification, but one is missing encoding.

.../V3/Account/Login.cshtml.cs OnPostSendVerificationEmailAsync()

...
var code = await _userManager.GenerateEmailConfirmationTokenAsync(user);
//Missing
code = WebEncoders.Base64UrlEncode(Encoding.UTF8.GetBytes(code));
//End Missing
var callbackUrl = Url.Page(

Verification process will always decode, so un-encoded links will fail.

To Reproduce
Steps to reproduce the behavior:
1. Using this version of ASP.NET Core 3
2. Create project with local account storage
3. Scaffold Identity
4. Implement IEmailSender
5. Create Account
6. Logout
7. Use Resend Verification Email from login page
8. Follow link in second email - it will fail. Follow link in first email - it will succeed.

Expected behavior
All email verification links to be encoded for email, then decoded on submit.

This is somewhat related to https://github.com/aspnet/AspNetCore/issues/13462, we can discuss at triage today

Using this bug to track removing all vestiges of resend email in 3.1, it will be brought back in better form in 5.0

Removed the feature in 3.1 https://github.com/aspnet/AspNetCore/pull/14901

scaffolding side of the removal is here https://github.com/aspnet/Scaffolding/pull/1117
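The fix described above can be sketched as follows. This is a hypothetical reconstruction of the scaffolded handler, not the shipped template: the fields (_userManager, _emailSender, Input) and the ConfirmEmail page name follow the usual Identity scaffold, but exact signatures vary by version. WebEncoders.Base64UrlEncode is the real API the report names.

```csharp
// Areas/Identity/Pages/Account/Login.cshtml.cs (sketch, assumed shape)
using System.Text;
using Microsoft.AspNetCore.WebUtilities;

public async Task<IActionResult> OnPostSendVerificationEmailAsync()
{
    var user = await _userManager.FindByEmailAsync(Input.Email);
    if (user == null)
    {
        return Page();
    }

    var code = await _userManager.GenerateEmailConfirmationTokenAsync(user);
    // The token must be Base64Url-encoded before being placed in the link,
    // because the ConfirmEmail page always Base64Url-decodes it on submit.
    code = WebEncoders.Base64UrlEncode(Encoding.UTF8.GetBytes(code));

    var callbackUrl = Url.Page(
        "/Account/ConfirmEmail",
        pageHandler: null,
        values: new { userId = user.Id, code },
        protocol: Request.Scheme);

    await _emailSender.SendEmailAsync(Input.Email, "Confirm your email",
        $"Please confirm your account by <a href='{callbackUrl}'>clicking here</a>.");
    return Page();
}
```

Without the Base64Url step, tokens containing characters such as '+' or '/' break once decoded, which is why only this one resend path failed.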
gharchive/issue
2019-10-02T03:02:43
2025-04-01T04:33:32.666863
{ "authors": [ "HaoK", "StickMick" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/issues/14659", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
390515047
Config support for SNI In 2.1 preview2 we added support for SNI: https://github.com/aspnet/KestrelHttpServer/issues/2357 This is a code only approach and isn't supported by the new config system also introduced in 2.1. Proposal: See if there's sufficient user demand for supporting SNI from config. Recommend revisit in 2.2. Backlogging until there is more customer demand. Closing as a duplicate of #15144 since that one actually represents a customer.
gharchive/issue
2018-04-04T21:10:51
2025-04-01T04:33:32.669267
{ "authors": [ "Tratcher", "anurse" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/issues/4749", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
394444360
EventedStreamReader not really emitting OnCompleteLine() per each line received Describe the bug EventedStreamReader's Run() method sometimes reads chunks containing multiple \n, but it only cares about the first \n it sees, and buffers the rest. The problem is when the chunk is the last output coming of npm during npm run serve, for a while. The remaining buffer is not notified (OnCompleteLine() is not called). The risk is WaitFor(Regex regex) randomly missing the opportunity to finish, because it might miss some lines, as it listens to onReceivedLineHandler, and not to chunk events. To Reproduce Steps to reproduce the behavior: ASP.Net Core 2.2 with Spa extensions; tricky to reproduce: the app reads from npm's stdout a stream chunk that contains multiple \n characters in it; notice how only the first chunk (until the first \n) gets notified to the event listeners, the rest doesn't. Expected behavior I would expect one call to OnCompleteLine() per actual line. Thanks for contacting us, @dgaspar. Would you be interested in sending us a PR for this? We'll happily consider it. Thanks for your contribution, @dgaspar👍
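A minimal sketch of the fix being suggested: loop over every newline in the received chunk instead of stopping at the first one. The names here (the StringBuilder line buffer, the per-line callback standing in for OnCompleteLine()) mirror the issue's description of EventedStreamReader, but this is an illustration, not the actual source:

```csharp
using System;
using System.Text;

static class ChunkLineSplitter
{
    // Emit one callback per completed line in the chunk, and buffer any
    // trailing partial line until the next chunk arrives.
    public static void ProcessChunk(StringBuilder lineBuffer, string chunk,
        Action<string> onCompleteLine)
    {
        var start = 0;
        int newlineIndex;
        while ((newlineIndex = chunk.IndexOf('\n', start)) >= 0)
        {
            // Include the '\n' in the emitted line, matching the original behavior.
            lineBuffer.Append(chunk, start, newlineIndex - start + 1);
            onCompleteLine(lineBuffer.ToString());
            lineBuffer.Clear();
            start = newlineIndex + 1;
        }
        // No more '\n' in this chunk: keep the remainder buffered.
        lineBuffer.Append(chunk, start, chunk.Length - start);
    }
}
```

With this shape, a WaitFor(Regex) listener attached to the line event sees every line from a multi-line chunk, including the final burst npm writes before going quiet.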
gharchive/issue
2018-12-27T17:02:27
2025-04-01T04:33:32.673607
{ "authors": [ "dgaspar", "mkArtakMSFT" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/issues/6146", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
407768962
Hanging SignalR hub method causes memory leak Describe the bug After a SignalR client invokes a hub method that hangs indefinitely, this client's buffers get leaked, and the server invocations continue to be written into them even after the client disconnects. To Reproduce using System; using System.Threading.Tasks; using Microsoft.AspNetCore.Builder; using Microsoft.AspNetCore.Hosting; using Microsoft.AspNetCore.SignalR; using Microsoft.AspNetCore.SignalR.Client; using Microsoft.Extensions.DependencyInjection; class Program { static void Main(string[] args) { var url = "http://localhost:5000"; var host = new WebHostBuilder() .UseKestrel() .UseStartup<Startup>() .UseUrls(url) .Build(); var hubContext = host.Services.GetRequiredService<IHubContext<HangingHub>>(); Task.Run(() => { var methodName = new string('0', 100_000_000); while (true) { hubContext.Clients.All.SendAsync(methodName).Wait(); } }); var task = host.RunAsync(); var connection = new HubConnectionBuilder() .WithUrl($"{url}/hanging") .Build(); connection.StartAsync().Wait(); connection.InvokeAsync("Hang"); connection.DisposeAsync().Wait(); task.Wait(); } public class HangingHub : Hub { public Task Hang() => Task.Delay(-1); } public class Startup : IStartup { public IServiceProvider ConfigureServices(IServiceCollection services) { services.AddSignalR(); return services.BuildServiceProvider(); } public void Configure(IApplicationBuilder app) { app.UseWebSockets().UseSignalR(routes => { routes.MapHub<HangingHub>("/hanging"); }); } } } Expected behavior After the client disconnects, the data structures holding its invocations should not accumulate new invocations and be disposed. When we initially discussed this behavior, we decided that it was an application bug and the leak more obvious since you have some bad code not cleaning it the way it's supposed to. Leaving it hanging around means that you can easily see it in a memory profiler and correct the issue. 
I'm still not convinced we should be covering up your application bugs, but if you ran into this in a more legitimate way, I'd be open to hearing that.
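For the more legitimate cases, the usual pattern is to tie long-running hub work to the connection lifetime rather than hanging forever. A hedged sketch — Context.ConnectionAborted is a real hub property; the infinite delay here stands in for arbitrary long-running work:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class WellBehavedHub : Hub
{
    // The delay is cancelled when the client disconnects, so the pending
    // invocation faults and is cleaned up instead of outliving the connection.
    public Task LongRunning()
        => Task.Delay(Timeout.Infinite, Context.ConnectionAborted);
}
```

Passing the connection's cancellation token means the repro's Hang() scenario ends as soon as DisposeAsync runs on the client.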
gharchive/issue
2019-02-07T15:54:06
2025-04-01T04:33:32.676628
{ "authors": [ "davidfowl", "mbashov" ], "repo": "aspnet/AspNetCore", "url": "https://github.com/aspnet/AspNetCore/issues/7349", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
220301696
Azure App Sites siteextension work Add siteextension project to the repo Build with CI cc @glennc https://github.com/glennc/site_extension Consider putting it in the https://github.com/aspnet/MetaPackages repo (or its own repo) instead, as the site extension primarily depends on the output of the package cache metapackage. The site extension does not directly depend on any of the components in this repo. Name: Microsoft.AspNetCore.AzureAppServices.SiteExtension Docs: https://azure.microsoft.com/en-us/blog/azure-web-sites-extensions/ https://azure.microsoft.com/en-us/blog/writing-a-site-extension-for-azure-websites/ Initial checkin and packaging complete. Xdt.Extensions needs tests. Will add tests in preview2. @muratg this can move to preview3, we're not updating the extension for preview2. Remaining work for 2.0.0: Tests.
gharchive/issue
2017-04-07T19:06:34
2025-04-01T04:33:32.680867
{ "authors": [ "Tratcher", "glennc", "muratg" ], "repo": "aspnet/AzureIntegration", "url": "https://github.com/aspnet/AzureIntegration/issues/21", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
238994090
An element with the same key but a different value already exists. Key: NETStandard.Library.NETFramework

I'm running Windows 10 and I've encountered the following issue while cloning this repo (just letting you guys know):

1. Clone the configuration repo to my local drive with git clone <repo url>
2. Open the solution with Visual Studio 2017 Enterprise
3. A non-fatal error occurs and a dialog with the following message pops up:

The project system has encountered an error. An element with the same key but a different value already exists. Key: NETStandard.Library.NETFramework A diagnostic log has been written to the following location: "C:\Users<MyUserAccount>\AppData\Local\Temp\VsProjectFault_.failure.txt".

The content of the diagnostic log (the same content was appended to the file multiple times):

===================== 27-6-2017 23:45:06 Crippling System.AggregateException: One or more errors occurred. ---> System.ArgumentException: An element with the same key but a different value already exists.
Key: NETStandard.Library.NETFramework at System.Collections.Immutable.ImmutableDictionary`2.HashBucket.Add(TKey key, TValue value, IEqualityComparer`1 keyOnlyComparer, IEqualityComparer`1 valueComparer, KeyCollisionBehavior behavior, OperationResult& result) at System.Collections.Immutable.ImmutableDictionary`2.AddRange(IEnumerable`1 items, MutationInput origin, KeyCollisionBehavior collisionBehavior) at System.Collections.Immutable.ImmutableDictionary`2.AddRange(IEnumerable`1 pairs, Boolean avoidToHashMap) at System.Collections.Immutable.ImmutableDictionary`2.AddRange(IEnumerable`1 pairs) at Microsoft.VisualStudio.ProjectSystem.Designers.ProjectBuildSubscriptionService.UpdateRuleSnapshotCore(IProjectVersionedValue`1 source, String ruleName, IRule rule, ImmutableDictionary`2 items, ImmutableDictionary`2 properties, ImmutableList`1 recentItemRenames, IProjectChangeDiff& diff) at Microsoft.VisualStudio.ProjectSystem.Designers.ProjectRuleSubscriptionServiceBase`1.UpdateSnapshotCoreAsync(TSource source, IImmutableSet`1 subscribedElements, IProjectVersionedValue`1 previousResult) at Microsoft.VisualStudio.ProjectSystem.Designers.CustomizableBlockSubscriberBase`3.<UpdateSnapshotAsync>d__28.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.VisualStudio.ProjectSystem.Designers.CustomizableBlockSubscriberBase`3.<>c__DisplayClass27_0.<<Initialize>b__1>d.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.VisualStudio.Threading.JoinableTask.<JoinAsync>d__76.MoveNext() --- End of stack trace from previous location where exception was 
thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.VisualStudio.Threading.JoinableTask`1.<JoinAsync>d__3.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.VisualStudio.ProjectSystem.Designers.CustomizableBlockSubscriberBase`3.<<Initialize>b__27_0>d.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.VisualStudio.ProjectSystem.DataflowExtensions.<>c__DisplayClass23_0`2.<<CreateSelfFilteringTransformBlock>b__0>d.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at Microsoft.VisualStudio.ProjectSystem.CommonProjectSystemTools.Rethrow(Exception ex) at Microsoft.VisualStudio.ProjectSystem.ProjectErrorReporting.<>c__DisplayClass6_0.<SubmitErrorReport>b__0() at Microsoft.VisualStudio.ProjectSystem.ExceptionFilter.<>c__DisplayClass2_0.<Guard>b__0() at GuardMethodClass.GuardMethod(Func`1 , Func`2 , Func`2 ) --- End of stack trace from previous location where exception was thrown --- at Microsoft.VisualStudio.Telemetry.WindowsErrorReporting.WatsonReport.GetClrWatsonExceptionInfo(Exception exceptionObject) --- End of inner exception stack trace --- ---> (Inner Exception #0) System.ArgumentException: An element with the same key but a different value already exists. 
Key: NETStandard.Library.NETFramework at System.Collections.Immutable.ImmutableDictionary`2.HashBucket.Add(TKey key, TValue value, IEqualityComparer`1 keyOnlyComparer, IEqualityComparer`1 valueComparer, KeyCollisionBehavior behavior, OperationResult& result) at System.Collections.Immutable.ImmutableDictionary`2.AddRange(IEnumerable`1 items, MutationInput origin, KeyCollisionBehavior collisionBehavior) at System.Collections.Immutable.ImmutableDictionary`2.AddRange(IEnumerable`1 pairs, Boolean avoidToHashMap) at System.Collections.Immutable.ImmutableDictionary`2.AddRange(IEnumerable`1 pairs) at Microsoft.VisualStudio.ProjectSystem.Designers.ProjectBuildSubscriptionService.UpdateRuleSnapshotCore(IProjectVersionedValue`1 source, String ruleName, IRule rule, ImmutableDictionary`2 items, ImmutableDictionary`2 properties, ImmutableList`1 recentItemRenames, IProjectChangeDiff& diff) at Microsoft.VisualStudio.ProjectSystem.Designers.ProjectRuleSubscriptionServiceBase`1.UpdateSnapshotCoreAsync(TSource source, IImmutableSet`1 subscribedElements, IProjectVersionedValue`1 previousResult) at Microsoft.VisualStudio.ProjectSystem.Designers.CustomizableBlockSubscriberBase`3.<UpdateSnapshotAsync>d__28.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.VisualStudio.ProjectSystem.Designers.CustomizableBlockSubscriberBase`3.<>c__DisplayClass27_0.<<Initialize>b__1>d.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.VisualStudio.Threading.JoinableTask.<JoinAsync>d__76.MoveNext() --- End of stack trace from previous location where exception was 
thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.VisualStudio.Threading.JoinableTask`1.<JoinAsync>d__3.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.VisualStudio.ProjectSystem.Designers.CustomizableBlockSubscriberBase`3.<<Initialize>b__27_0>d.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.VisualStudio.ProjectSystem.DataflowExtensions.<>c__DisplayClass23_0`2.<<CreateSelfFilteringTransformBlock>b__0>d.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at Microsoft.VisualStudio.ProjectSystem.CommonProjectSystemTools.Rethrow(Exception ex) at Microsoft.VisualStudio.ProjectSystem.ProjectErrorReporting.<>c__DisplayClass6_0.<SubmitErrorReport>b__0() at Microsoft.VisualStudio.ProjectSystem.ExceptionFilter.<>c__DisplayClass2_0.<Guard>b__0() at GuardMethodClass.GuardMethod(Func`1 , Func`2 , Func`2 ) --- End of stack trace from previous location where exception was thrown --- at Microsoft.VisualStudio.Telemetry.WindowsErrorReporting.WatsonReport.GetClrWatsonExceptionInfo(Exception exceptionObject)<--- Sounds like a point in time thing. Can you try again now?
gharchive/issue
2017-06-27T21:52:43
2025-04-01T04:33:32.692460
{ "authors": [ "QuantumHive", "shirhatti" ], "repo": "aspnet/Configuration", "url": "https://github.com/aspnet/Configuration/issues/684", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
346925596
Missing examples for configuration in ConfigureServices This paragraph:

> Since options are typically loaded from configuration, configuration can be used in startup in both Configure and ConfigureServices. For examples of using configuration during startup, see the Application startup topic.

It links to a page that doesn't actually show how to get an IOptions<> in the Configure or ConfigureServices methods. https://docs.microsoft.com/en-us/aspnet/core/fundamentals/startup?view=aspnetcore-2.1

Document Details ⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking. ID: ed39cb0e-aa85-8bba-ffb4-859b56a60723 Version Independent ID: dc7e3449-650b-d117-5eac-69e5ccb250cb Content: Options pattern in ASP.NET Core Content Source: aspnetcore/fundamentals/configuration/options.md Product: aspnet-core GitHub Login: @guardrex Microsoft Alias: riande

This sounds contradictory:

> If a service provider is built in ConfigureServices to access options, it wouldn't contain any options configurations provided after the service provider is built. ... Since options are typically loaded from configuration, configuration can be used in startup in both Configure and ConfigureServices.

Hello @StevenLiekens ... Thanks for commenting. Yes, I agree that the paragraph needs work. I'll attempt to clarify.

Nice, thanks :)

@Allann Configure the providers with ConfigureAppConfiguration. See ... https://docs.microsoft.com/aspnet/core/fundamentals/configuration#providers ... and then you can access config in Startup.ConfigureServices ... https://docs.microsoft.com/aspnet/core/fundamentals/configuration#access-configuration-during-startup
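The pattern the closing answer describes can be sketched like this. MyOptions and the section names are placeholders, and this assumes the generic-host setup where IConfiguration is injected into Startup:

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        // Providers were registered earlier via ConfigureAppConfiguration,
        // so the built configuration is simply injected here.
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Raw configuration is readable here, before any service provider is built...
        var batchSize = Configuration.GetValue<int>("Batching:Size");

        // ...and it can be bound into the options system, so that
        // IOptions<MyOptions> is injectable everywhere else.
        services.Configure<MyOptions>(Configuration.GetSection("MyOptions"));
    }
}

public class MyOptions
{
    public string Name { get; set; }
}
```

This sidesteps the contradiction quoted above: no service provider is built inside ConfigureServices; the configuration object itself is used directly.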
gharchive/issue
2018-08-02T09:11:55
2025-04-01T04:33:32.699561
{ "authors": [ "StevenLiekens", "guardrex" ], "repo": "aspnet/Docs", "url": "https://github.com/aspnet/Docs/issues/7927", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
212824288
Update Interpolated Strings link Link to newer version of the documentation. This seems like a small (but important) contribution, so no Contribution License Agreement is required at this point. We will now review your pull request. Thanks, .NET Foundation Pull Request Bot @rolandkopka Thanks! I removed /en-us which was also in the original.
gharchive/pull-request
2017-03-08T18:57:43
2025-04-01T04:33:32.701343
{ "authors": [ "Rick-Anderson", "dnfclas", "rolandkopka" ], "repo": "aspnet/Docs", "url": "https://github.com/aspnet/Docs/pull/2922", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
130418021
Updating the SecretManager package name Issue: https://github.com/aspnet/Docs/issues/943 Here is the manual for installing the SecretManager http://docs.asp.net/en/latest/security/app-secrets.html#installing-the-secret-manager-tool Here is the issue about the old name and NullReferenceException: https://github.com/aspnet/UserSecrets/issues/34 Hi @sguryev, I'm your friendly neighborhood .NET Foundation Pull Request Bot (You can call me DNFBOT). Thanks for your contribution! This seems like a small (but important) contribution, so no Contribution License Agreement is required at this point. Real humans will now evaluate your PR. TTYL, DNFBOT; Thanks! 195ce020865bc46e83dedca1183395885d671a32 Thank you for the great job!
gharchive/pull-request
2016-02-01T16:57:52
2025-04-01T04:33:32.704831
{ "authors": [ "danroth27", "dnfclas", "sguryev" ], "repo": "aspnet/Docs", "url": "https://github.com/aspnet/Docs/pull/944", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
52269440
Consider how code-based provider-specific configuration should look Should all configuration for every provider be at the top level, or should making a call to UseSqlServer (or similar) drop into a set of APIs specific to SQL Server? The former is what we have now; the latter would help IntelliSense by not showing everything at the top level and by only showing things that can be used with SQL Server at the second level. It would also avoid the potential naming clashes where multiple providers want to define the same extension method. For example, if I want to set the batch size for SQL Server I would do:
protected override void OnConfiguring(DbContextOptions options)
{
    options.UseSqlServer().MaxBatchSize(20);
}
But I would not be able to do:
protected override void OnConfiguring(DbContextOptions options)
{
    options.UseSqlServer();
    options.MaxBatchSize(20);
}
Also, consider whether there should be relational configuration that will be used for any relational provider or just configuration for each provider. No longer relevant.
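One way to get the second-level shape without chaining off the return value is a nested builder passed as a lambda. This is a hypothetical API sketch of the idea under discussion (note it also assumes a DbContextOptionsBuilder parameter and a connection string, neither of which appears in the issue's snippet), not the API that existed at the time:

```csharp
protected override void OnConfiguring(DbContextOptionsBuilder options)
{
    // SQL Server-specific calls are scoped inside the UseSqlServer lambda,
    // so IntelliSense only offers MaxBatchSize (etc.) where it applies, and
    // two providers can define same-named options without clashing.
    options.UseSqlServer(
        "Server=(localdb)\\mssqllocaldb;Database=Demo;Trusted_Connection=True;",
        sqlServerOptions => sqlServerOptions.MaxBatchSize(20));
}
```

The lambda scoping makes the invalid second form from the issue unrepresentable: MaxBatchSize simply is not visible on the top-level builder.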
gharchive/issue
2014-12-17T17:43:55
2025-04-01T04:33:32.706810
{ "authors": [ "ajcvickers" ], "repo": "aspnet/EntityFramework", "url": "https://github.com/aspnet/EntityFramework/issues/1307", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
64860689
RevEng: Don't include ModelBuilder.Entity calls for types with no config If there is a type that doesn't need any configuration (i.e. everything is detected by convention) we still include a section in the OnModelCreating method for it. This isn't needed since we include DbSets for all types in the model. modelBuilder.Entity<AspNetUsers>(entity => { } ); Fixed by fix for https://github.com/aspnet/EntityFramework/pull/2078.
gharchive/issue
2015-03-27T22:08:45
2025-04-01T04:33:32.708291
{ "authors": [ "lajones", "rowanmiller" ], "repo": "aspnet/EntityFramework", "url": "https://github.com/aspnet/EntityFramework/issues/1931", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
97216366
Script-Migration broken I just installed the beta7-13841 and ran the `Script-Migration` command. It was working with the previous release; now I receive the following error:
PM> Script-Migration
Specified cast is not valid.
Update: same behavior with Add-Migration
I'm getting the error The name 'SqlServerIdentityStrategy' does not exist in the current context.
Same here on beta7-12274
E:\Projects\HelloMvc\Migrations\PersonDBContextModelSnapshot.cs(16,66): DNX,Version=v4.5.1 error CS0103: The name 'SqlServerIdentityStrategy' does not exist in the current context
E:\Projects\HelloMvc\Migrations\PersonDBContextModelSnapshot.cs(22,74): DNX,Version=v4.5.1 error CS0103: The name 'SqlServerIdentityStrategy' does not exist in the current context
E:\Projects\HelloMvc\Migrations\20150726005437_MyFirstMigration.cs(17,74): DNX,Version=v4.5.1 error CS0103: The name 'SqlServerIdentityStrategy' does not exist in the current context
E:\Projects\HelloMvc\Migrations\20150726005437_MyFirstMigration.Designer.cs(26,66): DNX,Version=v4.5.1 error CS0103: The name 'SqlServerIdentityStrategy' does not exist in the current context
E:\Projects\HelloMvc\Migrations\20150726005437_MyFirstMigration.Designer.cs(32,74): DNX,Version=v4.5.1 error CS0103: The name 'SqlServerIdentityStrategy' does not exist in the current context
Also persists on dnx beta7-12275 with entityframework beta7-13844
To clarify, you generated migration classes using an older version of EF (e.g. Beta 6) and you're trying to run Script-Migration on Beta 7? If you delete your migrations (and model snapshot) and add a new one, does Script-Migration succeed?
I delete the migration files for each update of entity framework. The new migration files are missing a using directive for the assembly which includes SqlServerIdentityStrategy.
+1 same for me
With 7.0.0-beta7-13854 I cleared out all migrations and ran the following updated command: dnx .
ef migrations add MyFirstMigration --context PersonDBContext
I received these errors when compiling:
E:\Projects\HelloMvc\Migrations\PersonDBContextModelSnapshot.cs(16,66): DNX,Version=v4.5.1 error CS0103: The name 'SqlServerIdentityStrategy' does not exist in the current context
E:\Projects\HelloMvc\Migrations\PersonDBContextModelSnapshot.cs(22,74): DNX,Version=v4.5.1 error CS0103: The name 'SqlServerIdentityStrategy' does not exist in the current context
E:\Projects\HelloMvc\Migrations\20150730033506_MyFirstMigration.cs(17,74): DNX,Version=v4.5.1 error CS0103: The name 'SqlServerIdentityStrategy' does not exist in the current context
E:\Projects\HelloMvc\Migrations\20150730033506_MyFirstMigration.Designer.cs(26,66): DNX,Version=v4.5.1 error CS0103: The name 'SqlServerIdentityStrategy' does not exist in the current context
E:\Projects\HelloMvc\Migrations\20150730033506_MyFirstMigration.Designer.cs(32,74): DNX,Version=v4.5.1 error CS0103: The name 'SqlServerIdentityStrategy' does not exist in the current context
Doing a little digging, I found 'SqlServerIdentityStrategy' in Microsoft.Data.Entity.SqlServer.Metadata, so I added this to all three migration files: using Microsoft.Data.Entity.SqlServer.Metadata;
Now when I compile I get this error:
E:\Projects\HelloMvc\Migrations\20150730033506_MyFirstMigration.cs(18,74): DNX,Version=v4.5.1 error CS1503: Argument 2: cannot convert from 'Microsoft.Data.Entity.SqlServer.Metadata.SqlServerIdentityStrategy' to 'string'
Noral, exactly. I have added the ToString() for each SqlServerIdentityStrategy.IdentityColumn because IdentityColumn is an enum type. Compilation is all right, but when I applied the migration with dnx .
ef migration apply initial + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (System.InvalidC...est pas valide.:String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError … Microsoft.Data.Entity.SqlServer.SqlServerMigrationSqlGenerator.ColumnDefinition(String schema, String table, String name, String type, Boolean nullable, Object defaultValue, String defau ltValueSql, String computedColumnSql, IAnnotatable annotatable, IModel model, SqlBatchBuilder builder) … Microsoft.Data.Entity.Migrations.Sql.MigrationSqlGenerator.ColumnDefinition(AddColumnOperation operation, IModel model, SqlBatchBuilder builder) … Microsoft.Data.Entity.Migrations.Sql.MigrationSqlGenerator.Generate(CreateTableOperation operation, IModel model, SqlBatchBuilder builder) … CallSite.Target(Closure , CallSite , Object , Object , IModel , SqlBatchBuilder ) … System.Dynamic.UpdateDelegates.UpdateAndExecuteVoid4[T0,T1,T2,T3](CallSite site, T0 arg0, T1 arg1, T2 arg2, T3 arg3) … Microsoft.Data.Entity.Migrations.Sql.MigrationSqlGenerator.Generate(IReadOnlyList`1 operations, IModel model) … Microsoft.Data.Entity.Migrations.Migrator.ApplyMigration(Migration migration) … Microsoft.Data.Entity.Migrations.Migrator.ApplyMigrations(String targetMigration) … Microsoft.Data.Entity.Commands.MigrationTool.ApplyMigration(String migrationName, String contextTypeName, String startupAssemblyName) … Microsoft.Data.Entity.Commands.Program.<>c__DisplayClass13_0.<ApplyMigration>b__0() … Microsoft.Data.Entity.Commands.Program.Execute(String startupProject, Func`1 invoke) … Microsoft.Data.Entity.Commands.Program.ApplyMigration(String migration, String context, String startupProject) … Microsoft.Data.Entity.Commands.Program.<>c__DisplayClass10_3.<Main>b__9() … Microsoft.Framework.Runtime.Common.CommandLine.CommandLineApplication.Execute(String[] args) … Microsoft.Data.Entity.Commands.Program.Main(String[] args) --- Fin de la trace de la pile … partir de l'emplacement pr‚c‚dent au niveau duquel 
l'exception a ‚t‚ lev‚e --- … System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() … Microsoft.Framework.Runtime.Common.EntryPointExecutor.Execute(Assembly assembly, String[] args, IServiceProvider serviceProvider) … Microsoft.Framework.ApplicationHost.Program.ExecuteMain(DefaultHost host, String applicationName, String[] args) … Microsoft.Framework.ApplicationHost.Program.Main(String[] args) --- Fin de la trace de la pile … partir de l'emplacement pr‚c‚dent au niveau duquel l'exception a ‚t‚ lev‚e --- … System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() … Microsoft.Framework.Runtime.Common.EntryPointExecutor.Execute(Assembly assembly, String[] args, IServiceProvider serviceProvider) … dnx.host.Bootstrapper.RunAsync(List`1 args, IRuntimeEnvironment env, FrameworkName targetFramework) … dnx.host.RuntimeBootstrapper.ExecuteAsync(String[] args, FrameworkName targetFramework) … dnx.host.RuntimeBootstrapper.Execute(String[] args, FrameworkName targetFramework) @branciat I added the ToString() for the ServerIdentityStrategy.IdentityColumn. However, the command to apply the migration has changed. The new command is: dnx . ef database update --context PersonDBContext the error I am getting now is: E:\Projects\HelloMvc>dnx . ef database update --context PersonDBContext Using context 'PersonDBContext'. Using database 'aspnet5-BaseMVC-9adc29c4-988e-411c-96d3-15ad2e5a39e7' on server 'WIN81-ASPNET\SQLEXPRESS'. System.InvalidCastException: Specified cast is not valid. 
at Microsoft.Data.Entity.SqlServer.SqlServerMigrationSqlGenerator.ColumnDefinition(String schema, String table, String name, String type, Boolean nullable, Object defaultValue, String defaultValueSql, String computedColumnSql, IAnnotatable annotatable, IModel model, SqlBatchBuilder builder) at Microsoft.Data.Entity.Migrations.Sql.MigrationSqlGenerator.ColumnDefinition(AddColumnOperation operation, IModel model, SqlBatchBuilder builder) at Microsoft.Data.Entity.Migrations.Sql.MigrationSqlGenerator.Generate(CreateTableOperation operation, IModel model, SqlBatchBuilder builder) at System.Dynamic.UpdateDelegates.UpdateAndExecuteVoid4[T0,T1,T2,T3](CallSite site, T0 arg0, T1 arg1, T2 arg2, T3 arg3) at Microsoft.Data.Entity.Migrations.Sql.MigrationSqlGenerator.Generate(IReadOnlyList`1 operations, IModel model) at Microsoft.Data.Entity.Migrations.Migrator.ApplyMigrations(String targetMigration) at Microsoft.Data.Entity.Commands.MigrationTool.ApplyMigration(String migrationName, String contextTypeName, String startupAssemblyName) at Microsoft.Data.Entity.Commands.Program.Execute(String startupProject, Action invoke) at Microsoft.Dnx.Runtime.Common.CommandLine.CommandLineUtilsExtensions.<>c__DisplayClass0_0.<OnExecute>b__0() at Microsoft.Dnx.Runtime.Common.CommandLine.CommandLineApplication.Execute(String[] args) --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at Microsoft.Dnx.Runtime.Common.EntryPointExecutor.Execute(Assembly assembly, String[] args, IServiceProvider serviceProvider) at Microsoft.Dnx.ApplicationHost.Program.ExecuteMain(DefaultHost host, String applicationName, String[] args) at Microsoft.Dnx.ApplicationHost.Program.Main(String[] args) --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at Microsoft.Dnx.Runtime.Common.EntryPointExecutor.Execute(Assembly assembly, String[] 
args, IServiceProvider serviceProvider) at Microsoft.Dnx.Host.Bootstrapper.RunAsync(List`1 args, IRuntimeEnvironment env, FrameworkName targetFramework) at Microsoft.Dnx.Host.RuntimeBootstrapper.ExecuteAsync(String[] args, FrameworkName targetFramework) at Microsoft.Dnx.Host.RuntimeBootstrapper.Execute(String[] args, FrameworkName targetFramework)

The cast error is #2545

I investigated deeper into the issue. It seems that with commit https://github.com/aspnet/EntityFramework/commit/baba8936d2db859f5f3fc2f95c72b1578b789239#diff-cd68493e81362e882059c78d0aa9ab01R288 we changed annotations in snapshot/migration files from string to the enum type where one exists. We have not included a using directive for that. This gives the error The name 'SqlServerIdentityStrategy' does not exist in the current context. This error occurs in generated Migration & ModelSnapshot files. To fix this we can add a using directive for the enum SqlServerIdentityStrategy.

Now the OperationBuilder<TOperation> expects the Annotation value to be a string: https://github.com/aspnet/EntityFramework/blob/dev/src/EntityFramework.Relational/Migrations/Builders/OperationBuilder.cs#L25. This gives the error in the migration file Argument 2: cannot convert from 'Microsoft.Data.Entity.SqlServer.Metadata.SqlServerIdentityStrategy' to 'string'. This error occurs in generated Migration files. To fix this the enum SqlServerIdentityStrategy needs to be converted to a string using the ToString() method.

The commit https://github.com/aspnet/EntityFramework/commit/ea658e95e47aebf2efd0889888e536bcc811458a#diff-4ae577e6726dfbb2a31b20bd07e1af96L271 cast the annotation value, which is of string type, to the enum SqlServerIdentityStrategy without using TryParse. This gives the error Specified cast is not valid. This error is thrown at Microsoft.Data.Entity.SqlServer.SqlServerMigrationSqlGenerator.ColumnDefinition
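Putting the three findings in this thread together, a fixed generated snapshot fragment would look roughly like this. The annotation name, the b.Property line, and the entity shape are assumptions for illustration — only the using directive, the ToString() call, and the TryParse remark come from the discussion:

```csharp
using Microsoft.Data.Entity.SqlServer.Metadata; // (1) bring the enum into scope

// Fragment assumed to sit inside the generated BuildModel(...) body:
b.Property<int>("Id")
    // (2) OperationBuilder annotations take strings, so the enum value is
    //     stringified by the generator;
    // (3) the SQL generator should then parse it back (e.g. Enum.TryParse)
    //     rather than hard-casting, which is what raises
    //     "Specified cast is not valid."
    .Annotation("SqlServer:ValueGenerationStrategy",
        SqlServerIdentityStrategy.IdentityColumn.ToString());
```

This is a non-runnable sketch of the shape being discussed, not the exact code the Beta 7 tooling emitted.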
gharchive/issue
2015-07-25T10:36:05
2025-04-01T04:33:32.719718
{ "authors": [ "Bidou44", "NoralK", "branciat", "bricelam", "smitpatel", "tinganho" ], "repo": "aspnet/EntityFramework", "url": "https://github.com/aspnet/EntityFramework/issues/2715", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
123712344
Migration under VB.Net
Hi all developers, I use EF7 RC1 Final. Migration under VB is not possible. I created a new project, added the SQLServer and Commands NuGet packages, added the classes as in the wiki article, created an initial migration, and ran update-database from the Package Manager Console. In my VB project, *.cs files were created. Then I added the property "Comment" to the Post class and created a second migration, but it again generated a complete migration like the initial one. And after an update-database, there is no column named "Comment" in my DB. Sorry for my bad English; my main language is German. MigrationTest.zip
OK, sorry. I have found this: https://github.com/aspnet/EntityFramework/issues/4064 — this can be deleted.
gharchive/issue
2015-12-23T19:32:47
2025-04-01T04:33:32.723292
{ "authors": [ "NoFear23m" ], "repo": "aspnet/EntityFramework", "url": "https://github.com/aspnet/EntityFramework/issues/4169", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
201747605
using EF, C# data doesn't match SQL Server db result From @JayGuAtGitHub on January 10, 2017 2:34 Steps to reproduce you can use the following sql script to generate the test data(the database Saron is set as default) EF bug.sql.txt using (var db = new SaronEntities()) { var q = from l in db.MiscTableCatalog join r in db.MiscTableRelatedFile on l.Id equals r.MiscId into rt from r in rt.DefaultIfEmpty() where l.TableName != null && l.TableName != "" select new { MiscTableRelatedFile = r, MiscTableCatalog = l }; var x= q.ToList(); } watch q.ToString(), the sql is as below: SELECT 1 AS [C1], [Extent2].[MiscId] AS [MiscId], [Extent2].[FileName] AS [FileName], [Extent2].[File] AS [File], [Extent1].[Id] AS [Id], [Extent1].[NameCn] AS [NameCn], [Extent1].[NameEn] AS [NameEn], [Extent1].[ParentId] AS [ParentId], [Extent1].[SeqNbr] AS [SeqNbr], [Extent1].[TableName] AS [TableName], [Extent1].[DataSourceEn] AS [DataSourceEn], [Extent1].[DataSourceCn] AS [DataSourceCn], [Extent1].[DefaultCheck] AS [DefaultCheck] FROM (SELECT [MiscTableCatalog].[Id] AS [Id], [MiscTableCatalog].[NameCn] AS [NameCn], [MiscTableCatalog].[NameEn] AS [NameEn], [MiscTableCatalog].[ParentId] AS [ParentId], [MiscTableCatalog].[SeqNbr] AS [SeqNbr], [MiscTableCatalog].[TableName] AS [TableName], [MiscTableCatalog].[DataSourceEn] AS [DataSourceEn], [MiscTableCatalog].[DataSourceCn] AS [DataSourceCn], [MiscTableCatalog].[DefaultCheck] AS [DefaultCheck] FROM [dbo].[MiscTableCatalog] AS [MiscTableCatalog]) AS [Extent1] LEFT OUTER JOIN [dbo].[MiscTableRelatedFile] AS [Extent2] ON [Extent1].[Id] = [Extent2].[MiscId] WHERE ([Extent1].[TableName] IS NOT NULL) AND ( NOT ((N'' = [Extent1].[TableName]) AND ([Extent1].[TableName] IS NOT NULL))) watch the result: first line: C1 MiscId FileName File Id NameCn NameEn 1 NULL NULL NULL 9451af71-37ab-4f1e-9288-0ba3b76898ef 河北 Hebei the 20th line: C1 MiscId FileName File Id NameCn NameEn 1 NULL NULL NULL a7871184-1827-435d-82f4-4cdd1eae8924 河北 Hebei watch the x in IDE: 
first line:
Id "9451af71-37ab-4f1e-9288-0ba3b76898ef" string
NameCn "河北" string
NameEn "Hebei" string
20th line:
Id "9451af71-37ab-4f1e-9288-0ba3b76898ef" string
NameCn "河北" string
NameEn "Hebei" string
And it can also be reproduced in my production environment (IIS, Windows Server 2008, same EF config, SQL Server 2014). Besides, if I remove the "duplicated" data (either 9451af71-37ab-4f1e-9288-0ba3b76898ef, Hebei or a7871184-1827-435d-82f4-4cdd1eae8924, Hebei), it will be fine.
Further technical details
EF Core version:
Database Provider: see in config section
Database: SQL Server 2014
Operating system: Windows 7; Windows Server 2008
IDE: Visual Studio 2013
Config:
Copied from original issue: aspnet/EntityFramework#7382
Looking at the table creation script, it could very well be that you are missing a PK on the MiscTableCatalog table. The moment EF can't find or use a PK and can't define something correct/unique on its own, you will see duplicate data being returned in the DbSet. We have seen this problem mainly when we used Views in EDMX.
It seems the answer from @flts is right; after I added a primary key, the error was gone. However, will this be fixed in the future? I'm afraid that if I ran the SQL with ADO.NET and converted it to an object with reflection, it wouldn't behave like that. Do you still need that code? I think I missed my EF version — it is 6.1.3.
@JayGuAtGitHub The issue @flts is referring to is a limit of the EF Designer that we're not planning to change. However, it shouldn't be blocking, since changing the model to define an appropriate primary key is not too difficult. Beyond this, EF requires that a unique column or combination of columns must be defined as a key. This is not something that will change.
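For reference, a minimal sketch of the suggested fix in EF6's fluent API. The class shape and key type here are assumptions (the thread only shows a string/guid-valued Id column), not the reporter's actual model:

```csharp
using System.Data.Entity; // EF6

public class MiscTableCatalog
{
    public string Id { get; set; }      // assumed key column from the thread's data
    public string NameCn { get; set; }
    public string NameEn { get; set; }
}

public class SaronContext : DbContext
{
    public DbSet<MiscTableCatalog> MiscTableCatalog { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // An explicit key lets EF distinguish rows during materialization;
        // without a usable key (e.g. a keyless view), EF can hand back the
        // first matching tracked entity for every row, producing "duplicates".
        modelBuilder.Entity<MiscTableCatalog>().HasKey(e => e.Id);
    }
}
```

For database-first/EDMX models, adding the primary key on the database side and re-importing the model achieves the same result.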
gharchive/issue
2017-01-19T02:17:01
2025-04-01T04:33:32.736670
{ "authors": [ "JayGuAtGitHub", "ajcvickers", "flts", "maumar" ], "repo": "aspnet/EntityFramework6", "url": "https://github.com/aspnet/EntityFramework6/issues/175", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
288071026
Recommended context factory injection method?
I'm looking to run some queries in parallel and understand that I'll need to create multiple context instances for this. As such, I'm looking to inject a factory rather than a single instance. Would you be able to give a little guidance please as to a recommended approach? It looks like runtime context factories aren't something built-in, and the current DI helper:
https://github.com/aspnet/EntityFrameworkCore/blob/03bcb5122e3f577a84498545fcf130ba79a3d987/src/Microsoft.EntityFrameworkCore/EntityFrameworkServiceCollectionExtensions.cs#L142
...just registers the context type, of course. Should I just implement my own IEfContextFactory<TContext> type which depends on DbContextOptionsFactory<TContext>? Is there a method you guys use for this scenario? Thanks
Sam, indeed I know I can create a factory (though I wouldn't have the consuming component read config — that is kind of the point of the factory, to take away those details).
@kierenj Yes, I'm at the same place. Honestly, I need this ability to do something temporary, but I still need it. Having to refer to new DbContext() is challenging because we rely on various other configuration-specific steps to get a properly-configured context in production cloud, dev, and test environments, such as Azure keys and such. I really don't want to rebuild all that system-specific configuration for this temporary component.
One solution for ASP.NET Core DI: you can inject a simple model which holds the provider you have in mind, using Microsoft.Extensions.DependencyInjection.IServiceCollection.ServiceCollectionServiceExtensions.Add*, e.g.
public override void Configure(IServiceCollection services)
{
    services.AddSingleton(f => new MyProvider { Factory = () => f.GetRequiredService<DbContext>() });
}
I think I've done this before, but I didn't verify. Sorry! Inject your provider and you have your factory.
I think the only simplification missing is the ability to inject delegates directly via .NET Core DI, not sure.
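One way to get the factory behavior discussed above without re-reading configuration in the consuming component is to resolve the already-registered DbContextOptions<TContext> from DI. This is only a hedged sketch — MyContext, MyContextFactory, and the connection string are hypothetical names, not part of the thread:

```csharp
public class MyContextFactory
{
    private readonly DbContextOptions<MyContext> _options;

    // AddDbContext<MyContext>(...) also registers DbContextOptions<MyContext>,
    // so the factory inherits whatever provider/connection setup was configured there.
    public MyContextFactory(DbContextOptions<MyContext> options) => _options = options;

    public MyContext Create() => new MyContext(_options);
}

// Registration sketch (ConfigureServices):
// services.AddDbContext<MyContext>(o => o.UseSqlServer(connectionString));
// services.AddSingleton<MyContextFactory>();
```

Each parallel query can then call Create() for an independent context instance and dispose of it when done.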
gharchive/issue
2018-01-12T10:19:22
2025-04-01T04:33:32.741654
{ "authors": [ "kierenj", "syndicatedshannon" ], "repo": "aspnet/EntityFrameworkCore", "url": "https://github.com/aspnet/EntityFrameworkCore/issues/10694", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
383165709
Numerous Issues passing a geometry parameter to a query Sample code: var geometryFactory = new OgcCompliantGeometryFactory(new PrecisionModel(PrecisionModels.Floating), 27700); var extentGeometry = new WKTReader(geometryFactory).Read(extent); var results = _Context.Cases .Where(c => c.GridReference != null && c.GridReference.Within(extentGeometry)) ... The geometry parameter is of type varbinary so queries such as: extentGeometry.Contains(c.GridReference) throw a SQL exception The varbinary sent actually has the X/Y co-ordinates backwards for example: WTK in extent string: POLYGON((371162.5815447777 102934.05565338745, 671089.3393572777 102934.05565338745, 671089.3393572777 242741.18455963745, 371162.5815447777 242741.18455963745, 371162.5815447777 102934.05565338745)) extentGeometry.ToString(): {POLYGON((371162.5815447777 102934.05565338745, 671089.33935727773 102934.05565338745, 671089.33935727773 242741.18455963745, 371162.58154477773 242741.18455963745, 371162.58154477773 102934.05565338745))} Intercepting the SQL sent to SQLServer exec sp_executesql N' SELECT CAST(@__extentGeometry_0 AS Geometry).STAsText(); SELECT [c].[Id], [c].[Address], [c].[GridReference], [c].[Ordered], [c].[PostCode], [c].[PropertyType], [c].[Source], [c].[SourceId] FROM [Cases] AS [c] WHERE ([c].[GridReference] IS NOT NULL AND ([c].[GridReference].STWithin(@__extentGeometry_0) = 1))', N'@__extentGeometry_0 varbinary(112),@__cutoffDate_1 datetime2(7)',@__extentGeometry_0=0x346C000001040500000070CEF4E36021F940667980536AA7164170CEF4E36021F940B33CC0ADE27A24413867FA79A9A10D41B33CC0ADE27A24413867FA79A9A10D41667980536AA7164170CEF4E36021F940667980536AA7164101000000020000000001000000FFFFFFFF0000000003,@__cutoffDate_1='2008-11-21 00:00:00' SELECT CAST(@__extentGeometry_0 AS Geometry).STAsText(): POLYGON ((102934.05565338745 371162.58154477773, 102934.05565338745 671089.33935727773, 242741.18455963745 671089.33935727773, 242741.18455963745 371162.58154477773, 102934.05565338745 
371162.58154477773)) ### Further technical details EF Core version: 2.2.0-preview3-35497 Database Provider: Microsoft.EntityFrameworkCore.SqlServer @lawrencephillips There isn't enough information here for us to be able to reproduce what you are seeing. Could you post a runnable project/solution or complete code listing that demonstrates the issue? EF Team Triage: Closing this issue as the requested additional details have not been provided and we have been unable to reproduce it.
gharchive/issue
2018-11-21T15:18:11
2025-04-01T04:33:32.745686
{ "authors": [ "ajcvickers", "divega", "lawrencephillips" ], "repo": "aspnet/EntityFrameworkCore", "url": "https://github.com/aspnet/EntityFrameworkCore/issues/13992", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
491417097
how to count specific column in entity framework core StackOverflow issue here. When I use left join to perform a grouped query of two tables, the generated count statement is always count(*), I want to generate the count of the specified column. For example: count(InstrumentCode). Framework: EF Core 3.0-preview9 var resultQuery = from t1 in query join t2 in orgCrmQuery on t1.CUSTOMER equals t2._ID join t3 in orgQuery on t2.NAME equals t3.ORGANIZATION_NAME into t4 from t5 in t4.DefaultIfEmpty() join t6 in instrumentQuery on t5.ORGANIZATION_ID equals t6.ORGANIZATION_ID into t7 from t8 in t7.DefaultIfEmpty() where t2.RECORD_TYPE == dto.RecordType && t1.MEETING_NO == dto.MeetingNo group t8.INSTRUMENT_CODE by new { t1.PERSON_NAME, t1.SEX, t1.POSTION, t1.PHONE, t1.IS_CIZHU_KEHU, t2.NAME, t2.ORG_LEVEL, } into g orderby g.Key.NAME select new { g.Key.PERSON_NAME, g.Key.SEX, g.Key.POSTION, g.Key.PHONE, g.Key.IS_CIZHU_KEHU, g.Key.NAME, g.Key.ORG_LEVEL, INSTRUMENT_COUNT = g.Count() }; Sql statement generated now: SELECT [t].[PERSON_NAME], [t].[SEX], [t].[POSTION], [t].[PHONE], [t].[IS_CIZHU_KEHU], [t0].[NAME], [t0].[ORG_LEVEL], COUNT ( * ) AS [INSTRUMENT_COUNT] FROM [dbo].[TB_MEETING_PERSONAL] AS [t] WITH ( NOLOCK ) INNER JOIN [dbo].[TB_ORG_CRM] AS [t0] WITH ( NOLOCK ) ON [t].[CUSTOMER] = [t0].[_ID] LEFT JOIN [dbo].[MS_ORGANIZATION] AS [m] WITH ( NOLOCK ) ON [t0].[NAME] = [m].[ORGANIZATION_NAME] LEFT JOIN [dbo].[MS_INSTRUMENT] AS [m0] WITH ( NOLOCK ) ON [m].[ORGANIZATION_ID] = [m0].[ORGANIZATION_ID] WHERE ( ( ( [t0].[RECORD_TYPE] = @__dto_RecordType_0 ) AND ( [t0].[RECORD_TYPE] IS NOT NULL AND @__dto_RecordType_0 IS NOT NULL ) ) OR ( [t0].[RECORD_TYPE] IS NULL AND @__dto_RecordType_0 IS NULL ) ) AND ( ( ( [t].[MEETING_NO] = @__dto_MeetingNo_1 ) AND ( [t].[MEETING_NO] IS NOT NULL AND @__dto_MeetingNo_1 IS NOT NULL ) ) OR ( [t].[MEETING_NO] IS NULL AND @__dto_MeetingNo_1 IS NULL ) ) GROUP BY [t].[PERSON_NAME], [t].[SEX], [t].[POSTION], [t].[PHONE], [t].[IS_CIZHU_KEHU], 
[t0].[NAME], [t0].[ORG_LEVEL]
ORDER BY [t0].[NAME]
OFFSET 0 ROWS FETCH NEXT 100 ROWS ONLY
I checked some information, but I didn't find a solution.
How to write such a query has been answered on Stack Overflow already. For the translation, this is a duplicate of #17376.
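For reference, the usual LINQ way to express COUNT(column) semantics — counting only non-null values — is to pass a predicate to Count(). A LINQ-to-objects sketch with made-up names (whether this translates to COUNT([column]) or a COUNT(CASE WHEN ...) expression server-side depends on the provider and EF version):

```csharp
using System.Linq;

record Row(string Name, string InstrumentCode);

var rows = new[]
{
    new Row("A", "I1"),
    new Row("A", null),   // not counted, mirroring SQL's COUNT(INSTRUMENT_CODE)
    new Row("B", "I2"),
};

var result = rows
    .GroupBy(r => r.Name)
    .Select(g => new
    {
        Name = g.Key,
        InstrumentCount = g.Count(r => r.InstrumentCode != null),
    })
    .ToList();
// result: A -> 1, B -> 1
```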
gharchive/issue
2019-09-10T02:34:36
2025-04-01T04:33:32.749115
{ "authors": [ "netcore-jroger", "smitpatel" ], "repo": "aspnet/EntityFrameworkCore", "url": "https://github.com/aspnet/EntityFrameworkCore/issues/17736", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
150238653
Query :: customers.Include(c => c.Orders).SingleOrDefault() reads orders for the second customer, which is redundant
For the scenario in question we create two queries — the outer, querying for the customers:
SELECT TOP(2) [c].[CustomerID], [c].[Address], [c].[City], [c].[CompanyName], [c].[ContactName], [c].[ContactTitle], [c].[Country], [c].[Fax], [c].[Phone], [c].[PostalCode], [c].[Region]
FROM [Customers] AS [c]
ORDER BY [c].[CustomerID]
and the inner, querying for corresponding orders:
SELECT [o].[OrderID], [o].[CustomerID], [o].[EmployeeID], [o].[OrderDate]
FROM [Orders] AS [o]
INNER JOIN ( SELECT DISTINCT TOP(2) [c].[CustomerID] FROM [Customers] AS [c] ORDER BY [c].[CustomerID] ) AS [c0] ON [o].[CustomerID] = [c0].[CustomerID]
ORDER BY [c0].[CustomerID]
When executing the query we:
- materialize customer1 from the outer query,
- materialize orders for customer1 from the inner query,
- materialize customer2 from the outer query,
- materialize orders for customer2 from the inner query,
- perform client-side SingleOrDefault(), which throws an exception.
We could throw earlier, after reading the second customer; we don't need to also read its corresponding orders. That would probably require a custom _SingleOrDefault() method, rather than relying on the Enumerable version.
Duplicate of #2182
gharchive/issue
2016-04-22T01:30:02
2025-04-01T04:33:32.752130
{ "authors": [ "maumar", "smitpatel" ], "repo": "aspnet/EntityFrameworkCore", "url": "https://github.com/aspnet/EntityFrameworkCore/issues/5148", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
251387940
HTTP Error 502.5 - Process Failure ASP.Net Core2 after update to Microsoft.EntityFrameworkCore -Version 2.0.0 #7462
From @embryologist on August 17, 2017 2:20
Steps to reproduce
ARTSQLCORE.zip
`dotnet --info` output:
ARTNetCore> It was not possible to find any compatible framework version
ARTNetCore>
ARTNetCore> The specified framework 'Microsoft.NETCore.App', version '2.0.0' was not found.
ARTNetCore>
ARTNetCore> - Check application dependencies and target a framework version installed at:
ARTNetCore>
ARTNetCore> \
ARTNetCore>
ARTNetCore> - Alternatively, install the framework version '2.0.0'.
ARTNetCore>
ARTNetCore> It was not possible to find any compatible framework version
ARTNetCore>
ARTNetCore> The specified framework 'Microsoft.NETCore.App', version '2.0.0' was not found.
ARTNetCore>
ARTNetCore> - Check application dependencies and target a framework version installed at:
ARTNetCore>
ARTNetCore> \
ARTNetCore>
ARTNetCore> - Alternatively, install the framework version '2.0.0'.
ARTNetCore>
ARTNetCore> It was not possible to find any compatible framework version
ARTNetCore>
ARTNetCore> The specified framework 'Microsoft.NETCore.App', version '2.0.0' was not found.
ARTNetCore>
ARTNetCore> - Check application dependencies and target a framework version installed at:
ARTNetCore>
ARTNetCore> \
ARTNetCore>
ARTNetCore> - Alternatively, install the framework version '2.0.0'.
ARTNetCore>
ARTNetCore> It was not possible to find any compatible framework version
ARTNetCore>
ARTNetCore> The specified framework 'Microsoft.NETCore.App', version '2.0.0' was not found.
ARTNetCore>
ARTNetCore> - Check application dependencies and target a framework version installed at:
ARTNetCore>
ARTNetCore> \
ARTNetCore>
ARTNetCore> - Alternatively, install the framework version '2.0.0'.
I installed Microsoft.NETCore.App -Version 2.0.0, yet when I run the app I get this message:
HTTP Error 502.5 - Process Failure
Common causes of this issue:
The application process failed to start
The application process started but then stopped
The application process started but failed to listen on the configured port
Troubleshooting steps:
Check the system event log for error messages
Enable logging the application process' stdout messages
Attach a debugger to the application process and inspect
Copied from original issue: dotnet/corefx#23320
From @karelz on August 17, 2017 21:59
Do you have a minimal repro? The attached project is rather large and hard to debug. Should this get moved to https://github.com/aspnet?
From @embryologist on August 18, 2017 2:30
@karelz, how can I provide you with a minimal one?
how can I provide you with a minimal one?
you can zip it and attach it to github here or put it on OneDrive and send a link
From @karelz on August 18, 2017 16:19
Or make it so minimal that the code modifications to the default template fit here as source code ... that would be ideal
From @embryologist on August 19, 2017 0:25
I believe it's <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />. I think updating it to version 2 with the current ASP.NET Core preview creates a conflict. Downgrading solved the problem.
@tarekgh, thanks so much. Please close the issue. To run Core 2 version 2 I updated my Visual Studio and the SDK, and everything returned to normal. Please close the issue.
gharchive/issue
2017-08-19T00:38:05
2025-04-01T04:33:32.760324
{ "authors": [ "embryologist", "tarekgh" ], "repo": "aspnet/EntityFrameworkCore", "url": "https://github.com/aspnet/EntityFrameworkCore/issues/9480", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
316365595
Query: Improve alias-ing of projection in SelectExpression
Issue: When we try to find the name of the projection, we go through Convert/NullableExpression nodes. But when updating the alias, we just wrap it inside a new AliasExpression. If the inner expression had an AliasExpression giving it a name, we would end up with multiple "AS" clauses in the SQL.
Fix: When assigning a new alias to a projection, we run a visitor that tries to replace the old name with the new name, to avoid multiple aliases.
Resolves #11757
Approved for RC1
gharchive/pull-request
2018-04-20T18:20:41
2025-04-01T04:33:32.762101
{ "authors": [ "ajcvickers", "smitpatel" ], "repo": "aspnet/EntityFrameworkCore", "url": "https://github.com/aspnet/EntityFrameworkCore/pull/11760", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
486115324
Add null propagation/protection logic for InMemory provider.
When we bind to a non-nullable property on an entity that can be nullable (e.g. due to a left join), we modify the result to be nullable, to avoid "nullable object must have a value" errors. This nullability is then propagated further. We have a few blockers:
- predicates (Where, Any, First, Count, etc.): always need to be of type bool. When necessary we add "== true".
- conditional expression Test: needs to be bool, same as above.
- method call arguments: we can't reliably rewrite a method call when the argument types change (generic methods specifically), so we convert arguments back to their original types if they were changed to nullable versions.
- method call caller: if the caller was changed from non-nullable to nullable, we still need to call the method with the original type, but we add a null check before it: caller.Method(args) -> nullable_caller == null ? null : (resultType?)caller.Method(args)
- selectors (Select, Max, etc.): we need to preserve the original result type; we use Convert.
- anonymous type, array init: we need to preserve the original type; we use Convert.
Also enable GearsOfWar and ComplexNavigation tests for InMemory.
Also update InMemoryComplianceTest
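A conceptual illustration (not the actual visitor code) of the null-protection rewrite for member access described above, using made-up types:

```csharp
class Inner { public string Name; }
class Row { public Inner Inner; }

static class NullProtectionExample
{
    // Before the rewrite, row.Inner.Name.Length throws a NullReferenceException
    // when Inner is null after a left join. The rewrite guards the access and
    // makes the result nullable instead:
    static int? SafeNameLength(Row row)
        => row.Inner == null ? (int?)null : row.Inner.Name.Length;
}
```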
gharchive/pull-request
2019-08-28T01:41:06
2025-04-01T04:33:32.764996
{ "authors": [ "maumar", "smitpatel" ], "repo": "aspnet/EntityFrameworkCore", "url": "https://github.com/aspnet/EntityFrameworkCore/pull/17465", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
140742302
Error with dotnet restore Mac OS X, ASP.NET Core Debugger
Hello, I want to use the experimental .NET Core Debugger in my ASP.NET Core project, but after dotnet restore I get the following error:
project.json
Microsoft.Win32.Registry 4.0.0-rc3-23909 provides a compile-time reference assembly for Microsoft.Win32.Registry on DNXCore,Version=v5.0, but there is no run-time assembly compatible with osx.10.11-x64.
Some packages are not compatible with DNXCore,Version=v5.0 (osx.10.11-x64).
The project.json is:
{
  "version": "1.0.0-",
  "compilationOptions": {
    "emitEntryPoint": true
  },
  "tooling": {
    "defaultNamespace": "mchomerBlog"
  },
  "dependencies": {
    "EntityFramework.Commands": "7.0.0-rc1-final",
    "EntityFramework7.Npgsql": "3.1.0-rc1-3",
    "Microsoft.AspNet.Diagnostics.Entity": "7.0.0-rc1-final",
    "Microsoft.AspNet.Identity.EntityFramework": "3.0.0-rc1-final",
    "Microsoft.AspNet.Diagnostics": "1.0.0-rc1-final",
    "Microsoft.AspNet.IISPlatformHandler": "1.0.0-rc1-final",
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
    "Microsoft.AspNet.Mvc.TagHelpers": "6.0.0-rc1-final",
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final",
    "Microsoft.AspNet.StaticFiles": "1.0.0-rc1-final",
    "Microsoft.AspNet.Tooling.Razor": "1.0.0-rc1-final",
    "Microsoft.AspNet.Session": "1.0.0-rc1-final",
    "Microsoft.AspNet.Http": "1.0.0-rc1-final",
    "Microsoft.Extensions.Configuration.FileProviderExtensions": "1.0.0-rc1-final",
    "Microsoft.Extensions.Configuration.Json": "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging": "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging.Console": "1.0.0-rc1-final",
    "Microsoft.Extensions.Logging.Debug": "1.0.0-rc1-final",
    "Microsoft.Dnx.Watcher": "1.0.0-rc1-final",
    "Newtonsoft.Json": "8.0.2",
    "Microsoft.jQuery.Unobtrusive.Validation": "3.2.3"
  },
  "commands": {
    "web": "Microsoft.AspNet.Server.Kestrel --ASPNET_ENV development",
    "ef": "EntityFramework.Commands"
  },
  "frameworks": {
    "dnx451": {},
    "dnxcore50": {
      "dependencies": {
        "NETStandard.Library": "1.0.0-rc3-"
      },
      "imports": "portable-net45+win8"
    }
  },
  "exclude": [ "wwwroot", "node_modules", "bower_components" ],
  "publishExclude": [ "node_modules", "bower_components", ".xproj", ".user", "**.vspscc" ],
  "scripts": {
    "prepublish": [ "npm install", "bower install", "gulp clean", "gulp min" ]
  }
}
I am using VS Code (newest one) and Mac OS X El Capitan. Sadly, the cli-samples also do not work on my Mac. There seems to be some trouble with packages and Mac OS. Do you have any suggestions or advice on how I can get the .NET Core Debugger for VS Code running in an ASP.NET project on Mac OS?
This issue is being closed because it has not been updated in 3 months. We apologize if this causes any inconvenience. We ask that if you are still encountering this issue, please log a new issue with updated information and we will investigate.
gharchive/issue
2016-03-14T17:31:55
2025-04-01T04:33:32.770778
{ "authors": [ "aspnet-hello", "mchomer" ], "repo": "aspnet/Home", "url": "https://github.com/aspnet/Home/issues/1354", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
125029867
Share StreamCopyOperation implementations https://github.com/aspnet/HttpAbstractions/blob/dev/src/Microsoft.AspNet.Http.Extensions/StreamCopyOperation.cs https://github.com/aspnet/StaticFiles/blob/dev/src/Microsoft.AspNet.StaticFiles/StreamCopyOperation.cs If StaticFiles already uses Http.Extensions then we should just use that location, otherwise find somewhere else for it to live. Internal helper in common? We can make it public in Http.Extensions since StaticFiles already depends on that and then we'll copy Ben's changes to the StaticFiles one over to Http.Extensions Fixed via 3f84e992f48f52d86404732915ebe86bfdd2a610
gharchive/issue
2016-01-05T19:06:29
2025-04-01T04:33:32.773088
{ "authors": [ "BrennanConroy", "Tratcher", "davidfowl", "glennc" ], "repo": "aspnet/HttpAbstractions", "url": "https://github.com/aspnet/HttpAbstractions/issues/528", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
136295992
Correct test attributes #654
[IPv6SupportedCondition] requires [ConditionalFact] or [ConditionalTheory] /cc @stephentoub
Mostly waiting for CI builds to complete.
:tada: two clean Travis builds! :tada: Well, one AppVeyor build was unhappy but with tests I didn't change. @stephentoub is the following a known issue?
info: commandline: test --configuration Debug
info: workingdir: test\Microsoft.AspNetCore.Server.KestrelTests
xUnit.net DNX Runner (64-bit win7-x64)
Discovering: Microsoft.AspNetCore.Server.KestrelTests
Discovered: Microsoft.AspNetCore.Server.KestrelTests
Starting: Microsoft.AspNetCore.Server.KestrelTests
Vector.IsHardwareAccelerated == True
Vector<byte>.Count == 32
Microsoft.AspNetCore.Server.KestrelTests.MultipleLoopTests.ServerPipeListenForConnections [SKIP] Test needs to be fixed (UvException: Error -4082 EBUSY resource busy or locked from loop_close)
Microsoft.AspNetCore.Server.KestrelTests.MultipleLoopTests.ServerPipeDispatchConnections [SKIP] Test needs to be fixed (UvException: Error -4088 EAGAIN resource temporarily unavailable from accept)
Unhandled Exception: System.Diagnostics.Debug+DebugAssertException: Block being garbage collected instead of returned to pool
at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
at System.Environment.get_StackTrace()
at System.Diagnostics.Debug.Assert(Boolean condition, String message, String detailMessage)
at Microsoft.AspNetCore.Server.Kestrel.Infrastructure.MemoryPoolBlock2.Finalize() in C:\projects\kestrelhttpserver\src\Microsoft.AspNetCore.Server.Kestrel\Infrastructure\MemoryPoolBlock2.cs:line 80
at System.Diagnostics.Debug.WindowsDebugLogger.ShowAssertDialog(String stackTrace, String message, String detailMessage)
at Microsoft.AspNetCore.Server.Kestrel.Infrastructure.MemoryPoolBlock2.Finalize()
fail: Exit code -532462766 from dotnet
3705551
Merged because this was causing a build break and should be uncontroversial.
@dougbu The DebugAssertException is caused by a new assertion that was only enabled last week after I thought I fixed the leaking memory pool blocks. The assertion is to show that the blocks don't leak. At the time, we were having build issues on the AppVeyor CI servers, so I didn't see that the assertion was failing there. I'm investigating this issue now.
gharchive/pull-request
2016-02-25T06:36:00
2025-04-01T04:33:32.776502
{ "authors": [ "dougbu", "halter73" ], "repo": "aspnet/KestrelHttpServer", "url": "https://github.com/aspnet/KestrelHttpServer/pull/655", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
179067045
Remove Abstractions Linq dependency
Don't need to reference Linq for a tiny function, and it's a 125 KB dependency. (As well, casting through object, to an IEnumerator<object>, then Select-ing out the nulls back to an IEnumerator<object>, then calling Join(String separator, IEnumerable<object> values) rather than Join(String separator, IEnumerable<String> values) is not the greatest code path.)
LGTM, there is a question about dropping deps and how that relates to breaking changes, but this looks fine /cc @Eilon
There are extra deps brought in for a single error message resource string, and some which aren't used but are used by sub-packages:
Also CultureInfo.InvariantCulture => (IFormatter)null
Which would mean the following additional deps could be removed (also brought in by ResourceManager):
"System.Diagnostics.Debug": "4.0.11-*"
"System.Globalization": "4.0.11-*"
"System.Reflection": "4.1.0-*"
"System.Resources.ResourceManager": "4.0.1-*"
"System.Runtime.Extensions": "4.1.0-*"
But that is a behavioural change, and they mostly exist in mscorlib anyway, so I don't think that would gain much 😞
We're going to add standard library references to everything anyway, so even if the refs are temporarily dropped from this project.json, they'll be effectively added back once we add the standard lib. The bigger question is just whether this change is valuable. And considering that, as I mentioned, the dependency won't actually end up getting dropped, LINQ will be available no matter what. And almost all apps reference LINQ eventually anyway, so most apps won't be getting any savings in practice.
We're going to add standard library references to everything anyway,
Whoa! Really? So if I'm building a standalone app its xcopy size has just jumped?
And almost all apps reference LINQ eventually anyway,
I'd say under 50% of NuGet libraries use Linq https://apisof.net/catalog/System.Linq and under 8% use XmlDoc, which is another that the standard library brings in https://apisof.net/catalog/System.Xml.XmlDocument
Though Linq and Xml aren't the end of the world dependency-wise; however, the code change is still valid as an improvement.
@benaadams there's a "dependency trimming" feature that is being worked on that will significantly help mitigate the "extraneous dependencies" problem. Also, my comment was that apps end up using LINQ, not libraries. Ultimately apps are the unit of publishing, so it's only their dependencies that matter for that.
@Eilon Do you have a link for that issue?
@herecydev sorry, I don't. It's something I overheard that a team was working on. I was spying on them 😄 Perhaps @davidfowl knows more about it?
Closing this PR as we've pulled in netstandard.library
gharchive/pull-request
2016-09-25T04:26:46
2025-04-01T04:33:32.783846
{ "authors": [ "Eilon", "benaadams", "davidfowl", "herecydev" ], "repo": "aspnet/Logging", "url": "https://github.com/aspnet/Logging/pull/492", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
262716690
Improve SqliteDataReader.GetSchemaTable() DataType Note that column and table names cannot be bound using parameters (I enclosed them in " to make SQL injection attacks less likely). Addresses #436 Rebased on top of current dev.
gharchive/pull-request
2017-10-04T09:11:25
2025-04-01T04:33:32.785243
{ "authors": [ "AlexanderTaeschner" ], "repo": "aspnet/Microsoft.Data.Sqlite", "url": "https://github.com/aspnet/Microsoft.Data.Sqlite/pull/437", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
58797861
With certain errors, razor precompilation throws an exception when trying to report them
I'm currently trying to track down a Razor precompilation issue on Mono in beta3. The error with vanilla beta3 is:
System.ArgumentOutOfRangeException: Argument is out of range. Parameter name: start
at Microsoft.CodeAnalysis.Text.TextSpan..ctor (Int32 start, Int32 length) [0x00000] in <filename unknown>:0
at Microsoft.AspNet.Mvc.Razor.RazorErrorExtensions.ToDiagnostics (Microsoft.AspNet.Razor.Parser.SyntaxTree.RazorError error, System.String filePath) [0x00000] in <filename unknown>:0
at Microsoft.AspNet.Mvc.Razor.RazorPreCompiler+<>c__DisplayClass21_0.<GetCacheEntry>b__0 (Microsoft.AspNet.Razor.Parser.SyntaxTree.RazorError error) [0x00000] in <filename unknown>:0
at System.Linq.Enumerable+<CreateSelectIterator>c__Iterator10`2[Microsoft.AspNet.Razor.Parser.SyntaxTree.RazorError,Microsoft.CodeAnalysis.Diagnostic].MoveNext () [0x00000] in <filename unknown>:0
at System.Collections.Generic.List`1[Microsoft.CodeAnalysis.Diagnostic]..ctor (IEnumerable`1 collection) [0x00000] in <filename unknown>:0
at System.Linq.Enumerable.ToList[Diagnostic] (IEnumerable`1 source) [0x00000] in <filename unknown>:0
at Microsoft.AspNet.Mvc.Razor.RazorPreCompiler.GetCacheEntry (Microsoft.AspNet.Mvc.Razor.RelativeFileInfo fileInfo) [0x00000] in <filename unknown>:0
at Microsoft.AspNet.Mvc.Razor.RazorPreCompiler.OnCacheMiss (ICacheSetContext cacheSetContext) [0x00000] in <filename unknown>:0
at Microsoft.Framework.Cache.Memory.CacheExtensions+<>c__DisplayClass7_0`1[Microsoft.AspNet.Mvc.Razor.PrecompilationCacheEntry].<Set>b__0 (ICacheSetContext context) [0x00000] in <filename unknown>:0
at Microsoft.Framework.Cache.Memory.MemoryCache.Set (System.String key, IEntryLink link, System.Object state, System.Func`2 create) [0x00000] in <filename unknown>:0
at Microsoft.Framework.Cache.Memory.CacheExtensions.Set (IMemoryCache cache, System.String key, System.Object state, System.Func`2 create) [0x00000] in <filename unknown>:0
at Microsoft.Framework.Cache.Memory.CacheExtensions.Set[PrecompilationCacheEntry] (IMemoryCache cache, System.String key, System.Object state, System.Func`2 create) [0x00000] in <filename unknown>:0
at Microsoft.Framework.Cache.Memory.CacheExtensions.GetOrSet[PrecompilationCacheEntry] (IMemoryCache cache, System.String key, System.Object state, System.Func`2 create) [0x00000] in <filename unknown>:0
at Microsoft.AspNet.Mvc.Razor.RazorPreCompiler+<>c__DisplayClass17_0.<CreateCompilationDescriptors>b__0 (Int32 index) [0x00000] in <filename unknown>:0
at System.Threading.Tasks.Parallel+<ForWorker>c__AnonStorey3`1[System.Object].<>m__1 () [0x00000] in <filename unknown>:0
I have WriteLine'ed the parameters as follows:
error.Location.AbsoluteIndex: -1
error.Length: 1
filePath: [path to my view]
error.Message: Cannot resolve TagHelper containing assembly 'Microsoft.AspNet.Mvc.TagHelpers'. Error: Could not load file or assembly 'Microsoft.AspNet.Mvc.TagHelpers' or one of its dependencies. The system cannot find the file specified.
This parameter combination probably shouldn't give a stack trace that hides the actual error.
We should see what the current behavior is now that precompilation and taghelpers work nicely together. @NTaylorMullen Does this also repro on Windows?
@pranavkm fixed this issue via https://github.com/aspnet/Mvc/commit/57f5b19f25950ab90669fe9a37bab7941f39892d
gharchive/issue
2015-02-24T20:20:08
2025-04-01T04:33:32.789553
{ "authors": [ "NTaylorMullen", "Tragetaschen", "danroth27" ], "repo": "aspnet/Mvc", "url": "https://github.com/aspnet/Mvc/issues/2074", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
213791797
View helper @Json.Serialize does not HTML escape by default The JsonHelper does not HTML escape values by default and just uses whatever global JsonOutputFormatter options were configured. As of today, it is possible to create a new 1.1 project and write the following code on a view, which ends up in the alert being displayed:

@{
    var model = new {
        prop1 = 1,
        prop2 = "</script><script>alert('hello world');</script>",
        prop3 = false
    };
}
<script>
    var foo = @Json.Serialize(model);
</script>

This can be mitigated by explicitly passing the formatting options to that helper, setting JSON.Net's StringEscapeHandling option as EscapeHtml:

var foo = @Json.Serialize(model, new JsonSerializerSettings
{
    StringEscapeHandling = StringEscapeHandling.EscapeHtml
});

It could also be set as the default Json formatting options, but then that would apply to any JSON included in the response of an API, which might not be what you want.

services.AddMvc().AddJsonOptions(opts =>
    opts.SerializerSettings.StringEscapeHandling = StringEscapeHandling.EscapeHtml)

Could this be the default behavior of JsonHelper regardless of the default options? As if it would take the default options and merge them with StringEscapeHandling as EscapeHtml. Thanks, this was fixed in the recent ASP.NET Core patch releases from May 2017.
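The hazard and its fix are not specific to MVC. As a language-neutral illustration (a sketch, not the JSON.Net implementation), here is the same mitigation in Python: escape the characters that can break out of a <script> block while keeping the payload valid JSON.

```python
import json

def escape_json_for_script(obj):
    """Serialize obj to JSON that is safe to embed inside a <script> block.

    Replacing &, < and > with their \\uXXXX escapes keeps the output valid
    JSON while preventing a string value from terminating the script tag.
    """
    return (
        json.dumps(obj)
        .replace("&", "\\u0026")
        .replace("<", "\\u003c")
        .replace(">", "\\u003e")
    )

payload = {"prop2": "</script><script>alert('hello world');</script>"}
safe = escape_json_for_script(payload)
# The literal "</script>" no longer appears, yet the JSON still round-trips.
```

JSON.Net's StringEscapeHandling.EscapeHtml applies essentially the same idea during serialization.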
gharchive/issue
2017-03-13T14:54:15
2025-04-01T04:33:32.793473
{ "authors": [ "DaniJG", "Eilon" ], "repo": "aspnet/Mvc", "url": "https://github.com/aspnet/Mvc/issues/5949", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
303081492
Apply Input Formatter at Action level? I note from the docs that in order to use a custom formatter you have to apply it globally:

services.AddMvc(options =>
{
    options.InputFormatters.Insert(0, new VcardInputFormatter());
});

However I didn't notice any option to apply a Formatter for specific Action methods; can this be done? Hi @dazinator. You can use the ProducesAttribute and ConsumesAttribute on action level, to enforce certain formatting. You can read more about formatting with those attributes here. @mkArtakMSFT - Thanks - I found information on that page about the ProducesAttribute but nothing about the ConsumesAttribute. Should this be added to the docs? ConsumesAttribute is covered under model binding: https://docs.microsoft.com/en-us/aspnet/core/mvc/models/model-binding#bind-formatted-data-from-the-request-body
gharchive/issue
2018-03-07T12:35:50
2025-04-01T04:33:32.796568
{ "authors": [ "danroth27", "dazinator", "mkArtakMSFT" ], "repo": "aspnet/Mvc", "url": "https://github.com/aspnet/Mvc/issues/7450", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
358261407
Templates show up on some projects and not others. Why? I am writing a NopCommerce plugin. In NopCommerce's Visual Studio 2017 solution file, there are several standard plugins created as independent NetCore 2.1 library projects. If I right-click on any of their projects and select "new item from template", I am able to see and select MVC template items. For my project, however, I don't see these templates, but I did see standard and WPF ones. I carefully compared my project configuration with their plugins and I couldn't find any differences. After an hour of research, I discovered that by changing the project SDK the MVC templates showed up:

<Project SDK="Microsoft.NET.Sdk.Razor">
...
</Project>

instead of:

<Project SDK="Microsoft.NET.Sdk">
...
</Project>

This is clearly a bug. All of the other plugins were recognized as MVC projects even though they are not configured as Microsoft.NET.Sdk.Razor. Is Visual Studio caching the configuration information, or is it stored in some cryptic location? Why are you guys constantly solving complexity by adding more complexity? Developing on the .NET platform is officially unmanageable. Developers are embracing alternative development strategies because of small, constant and time-consuming issues like this. Get your act together or find another line of business, like growing turnips! @Joebeazelman, since this is specific to VS experience, this feedback needs to go through the VS Feedback tool. You can follow the instructions in this link: https://docs.microsoft.com/en-us/visualstudio/ide/how-to-report-a-problem-with-visual-studio-2017?view=vs-2017
gharchive/issue
2018-09-08T03:53:15
2025-04-01T04:33:32.800228
{ "authors": [ "Joebeazelman", "mkArtakMSFT" ], "repo": "aspnet/Mvc", "url": "https://github.com/aspnet/Mvc/issues/8427", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
168501069
Disabling client validation breaks TH validation From customer DISQUS comment: https://docs.asp.net/en/latest/tutorials/first-mvc-app/validation.html If I disable Client Validation using

services.AddMvc().AddViewOptions(options =>
{
    options.HtmlHelperOptions.ClientValidationEnabled = false;
});

Server validation errors are not displayed with Tag Helper validation. Switching to HTML Helper validation, errors are displayed. @Rick-Anderson could you please file this issue in Mvc? Doesn't seem to be a Razor related issue since the MVC TagHelpers are the ones responsible for dealing with validation logic. This issue was moved to aspnet/Mvc#5102
gharchive/issue
2016-07-31T04:34:38
2025-04-01T04:33:32.802357
{ "authors": [ "Eilon", "NTaylorMullen", "Rick-Anderson" ], "repo": "aspnet/Razor", "url": "https://github.com/aspnet/Razor/issues/812", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
816492826
Tagging package Hi, we are currently using tequila as part of quantum chemistry circuit generation work. I am hoping it may be possible to periodically tag versions of the repository with a given set of changes. Right now we have a specific merge commit hash as the chosen version we use, but would prefer tags, e.g. git checkout v0.1.0, or similar. Is this something that would be possible? Happy to discuss our use of this package over a call if that would help. I have to admit that I didn't have the best discipline with git tags until now. I wanted to wait until our overview article is out and start with v1.0.0. This is basically now, so I could finally do this. Thanks for the push. Feel free to let us know if there are any other problems/inconveniences with the package. Great, thanks a lot --- this should help us quite a bit. We must say that it has been very useful having access to this package, so great work on everything. I will close this issue.
gharchive/issue
2021-02-25T14:34:00
2025-04-01T04:33:32.827724
{ "authors": [ "kottmanj", "mlxd" ], "repo": "aspuru-guzik-group/tequila", "url": "https://github.com/aspuru-guzik-group/tequila/issues/131", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
214304264
Version 4.14 fails to compile. As the title says. Is this the editor version or a source build? Neither the source build nor the editor version compiles. You can try the demo_firstperson_umg branch first, because some things need to be changed by hand; for the details, see https://github.com/asqbtcupid/unreal.lua/wiki/1.Installation. OK, I downloaded the umg branch and it runs under 4.14, nice work! 4.15 involves C++ compilation optimizations, and the build currently reports errors. I took a rough look at the structure; would it be more flexible in the future to pull LuaPlugin out of the game project (a LuaPlugin project plus a UEToLuaGlueCodeLib project)? @cty41 Thanks for the interest. Only the most basic things are implemented so far; it is more a validation that Lua development is feasible, and it still falls short for real projects. I plan to study Unreal 4 in more depth for a while and then rewrite it, so that Lua integrates more tightly with Unreal and is more comfortable to use. That was a fast reply, hahaha. I have the same plan; I originally wanted to learn to write a Lua plugin myself, but I first found this one, also written by a Chinese developer, and you are clearly well ahead of me :) I would like to help polish this plugin together, what do you think? Sure. Recently I tried using unreal.lua to build a small demo of my own, found some rough spots and made a few changes; we can start polishing from that version. I am at home right now, so I will upload it tonight. My QQ is 285291946; add me if convenient so we can communicate more easily.
gharchive/issue
2017-03-15T07:27:21
2025-04-01T04:33:32.830566
{ "authors": [ "albus0x01", "asqbtcupid", "cty41" ], "repo": "asqbtcupid/unreal.lua", "url": "https://github.com/asqbtcupid/unreal.lua/issues/3", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
334024194
Sodium with travis Since we recommend using libsodium, I added this as the default and added it to the Travis CI job. Codecov Report Merging #36 into master will increase coverage by 1.17%. The diff coverage is n/a.

@@            Coverage Diff            @@
##           master      #36     +/-   ##
=========================================
+ Coverage   94.53%    95.7%   +1.17%
=========================================
  Files          23       21       -2
  Lines        2779     2375     -404
=========================================
- Hits         2627     2273     -354
+ Misses        152      102      -50

Impacted Files                                         Coverage Δ
tests/salt_mock.c                                      95.12% <0%> (-0.26%) :arrow_down:
...c/external/tweetnacl_modified/tweetnacl_modified.c
examples/randombytes_linux.c
...al/tweetnacl_modified/tweetnacl_modified_wrapper.c
src/external/libsodium/libsodium_wrapper.c             96.87% <0%> (ø)
src/salti_handshake.c                                  93.3% <0%> (+0.26%) :arrow_up:
tests/salt_write.c                                     100% <0%> (+1.04%) :arrow_up:
src/salt.c                                             95.25% <0%> (+1.35%) :arrow_up:
tests/salt_read.c                                      89.47% <0%> (+1.54%) :arrow_up:
tests/sign_test.c                                      100% <0%> (+24.32%) :arrow_up:

Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update 81f2004...7417ebc. Read the comment docs.
gharchive/pull-request
2018-06-20T10:51:15
2025-04-01T04:33:32.843450
{ "authors": [ "codecov-io", "sijohans" ], "repo": "assaabloy-ppi/salt-channel-c", "url": "https://github.com/assaabloy-ppi/salt-channel-c/pull/36", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
167478245
XMLSerializer support Does Zombie support XMLSerializer? I'm seeing the following error:

ReferenceError: XMLSerializer is not defined

Is there an alternative for this? If anyone needs it, this is how I resolved it in my case.

const xmlserializer = require("xmlserializer");
const DOMParser = require("xmldom").DOMParser;

class XMLSerializer {
    serializeToString(doc) {
        return xmlserializer.serializeToString(doc);
    }
};

this.browser = new Zombie();
this.browser.on("loaded", () => {
    this.browser.window.XMLSerializer = XMLSerializer;
    this.browser.window.DOMParser = DOMParser;
});

this.browser.visit(this.url, () => {
    console.log(this.browser.success);
});
gharchive/issue
2016-07-25T22:12:50
2025-04-01T04:33:32.845223
{ "authors": [ "rafaelmaiolla" ], "repo": "assaf/zombie", "url": "https://github.com/assaf/zombie/issues/1061", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
63729771
"Did not get to load all resources on this page" When using Zombie on a long running process (24+ hours) I occasionally run into this error:

Error: Timeout: did not get to load all resources on this page
    at /node_modules/zombie/lib/index.js:1383:14
    at done (/node_modules/zombie/lib/eventloop.js:554:7)
    at timeout (/node_modules/zombie/lib/eventloop.js:543:31)
    at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)

Once this error appears, all subsequent Zombie requests fail. Restarting the server resolves the issue. Here's our implementation of Zombie:

Zombie 3.1.0
Node 0.10.31

Zombie.visit(url, {
    runScripts: true,
    loadCSS: false,
    silent: true,
    headers: { bot: true }
}, function(err, browser, status) {
    if (err) return next(err);
    var html = browser.html();
});

Any ideas? Look at CPU utilization; if it jumps to 100% and stays there, that would explain why everything times out. That's usually a sign of a memory leak. Memory is at ~10% as the errors occur... doesn't seem to go much above that. CPU might spike up temporarily during the 2-3 seconds it takes Zombie to error out, but it goes right back down to ~1% after the error. Node has a memory limit of 1GB (see https://github.com/joyent/node/wiki/FAQ). Memory was about 450mb at time of error (10% machine usage). FWIW, we've been running 2.5.1 for a while which has been stable. Really appreciate the fast reply. I can't think of anything other than a memory leak exhibiting "time bomb" behavior. I was giving Zombie a first try and I received this error, too. Unfortunately, I can't get past this first step. I tried to visit a few different pages and finally settled on google.com simply for debugging purposes but no luck. I'm running it as a mocha test using zombie 3.1.1, node 0.12.0, mocha 2.2.1. As far as I can tell there's no memory spike, cpu spike, etc... It just fails with this error and that's it.
I've seen this too on 3.1.1, and will run into 100% cpu utilization when I set the waitDuration to something pretty high. This doesn't happen on 2.5.1. I'm just leaving a "me too" on the error. I found this thread while hunting for the error message. I don't have any more data to add on the cause though, except to say that I am getting it after running a pretty good while and processing a few hundred HTTP requests. Also experiencing the same issue. Happens for any visit call to any domain. Running Debian 64x, io.js 1.6.4, npm 2.7.5, zombie 4.0.5. Error:

"unhandled rejection Error: Timeout: did not get to load all resources on this page
    at timeout (/home/zackiles/workspace/modern-scanner/node_modules/zombie/lib/eventloop.js:545:36)
    at Timer.listOnTimeout (timers.js:89:15)"

I have the same problem. I tried changing the timeout value without success. I'm having this problem as well with 3.1.1. Doesn't happen all the time, but most. I'd say something like 60 percent of the time I get this error when I call:

var browser = new Browser({
    maxWait: 10000,
    loadCSS: false
});

browser.visit(url, function (e) {
    if (e) {
        throw e;
    }
    // wait for new page to be loaded then fire callback function
    browser.wait().then(function() {
        return callback(null);
    });
});

I ended up using spookyJS, it works perfectly if you don't want to wait for this issue to be solved. Well I'm using zombiejs to log in to a page, then navigate around and download stuff, and it looks like that would be a nightmare with spookyJS. I wish this could just get a little attention because this is the only real issue that is keeping me from using this module. I use it for the same purpose and I don't think it is a nightmare. But if Zombie is working in your environment it is certainly better and faster.
I get this same error every time:

$ DEBUG=zombie node server.js
Robot listening at http://0.0.0.0:8080
Executing gallery item id 13123
  zombie Opened window http://studio.code.org/c/42489384/edit +0ms
  zombie GET http://studio.code.org/c/42489384/edit => 200 +1s
  zombie Loaded document http://studio.code.org/c/42489384/edit +111ms
  zombie GET http://studio.code.org/shared/js/client_api.js => 200 +620ms
  zombie GET http://studio.code.org/shared/js/initApp.js => 200 +111ms

server.js:18
    throw e;
          ^
Error: Timeout: did not get to load all resources on this page
    at timeout (node_modules/zombie/lib/eventloop.js:543:36)
    at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)

Code at https://github.com/ottok/code-org-robot/blob/master/server.js If I disable throwing an error from the first browser.visit it will continue to load more resources, but at no point the complete page. Fiddling around with different uses of browser.wait or traditional setTimeout can get a few steps longer, but eventually it still fails with the same error:

  zombie Opened window http://studio.code.org/c/42489384/edit +19ms
  zombie GET http://studio.code.org/c/42489384/edit => 200 +1s
  zombie Loaded document http://studio.code.org/c/42489384/edit +76ms
  zombie GET http://studio.code.org/shared/js/client_api.js => 200 +643ms
  zombie GET http://studio.code.org/shared/js/initApp.js => 200 +101ms
Code.org - Click 'Run' to see my program in action
  zombie GET http://studio.code.org/assets/application-de3a2cfca2d3211ac5ee15e95b385684.js => 200 +3s
  zombie Fired setTimeout after 1ms delay +181ms
  zombie GET http://www.google-analytics.com/analytics.js => 200 +895ms
  zombie GET http://js-agent.newrelic.com/nr-632.min.js => 200 +29ms
Unhandled rejection Error: Timeout: did not get to load all resources on this page
    at timeout (node_modules/zombie/lib/eventloop.js:543:36)
    at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)
  zombie GET http://studio.code.org/blockly/js/blockly.js => 200 +5s

:+1: yep, having the same error. My page loads a google tag manager container; I'm wondering if it's hanging waiting to retrieve that? Same here, increasing wait time doesn't resolve the issue either, major let down for me :unamused: Just a suggestion: try to open your page in Chrome with the console open to see if there are any errors popping up. I know that zombie reports an error if anything goes wrong in the page. @mikegleasonjr but there should be a way to ignore such errors if I'm not interested in them! something like

zombie.ignoreDroppedRequests = true;

there are no errors, like I said this works fine when you browse from a normal browser, but when you automate the process through zombie this happens. I've had this problem, but not in a long running process, but because of a tag in google tag manager. It was loading a javascript file, that included another js file, which finally loaded an iframe. Something about that code was causing zombie to reliably throw this error. Here's the actual tag if it's helpful:

<script>
__reach_config = {
    pid: '<removed>',
    url: window.location.protocol + "//" + window.location.hostname + window.location.pathname,
    reach_tracking: false
};
(function(){
    var s = document.createElement('script');
    s.async = true;
    s.type = 'text/javascript';
    s.src = document.location.protocol + '//d8rk54i4mohrb.cloudfront.net/js/reach.js';
    (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(s);
})();
</script>

@sjparkinson that's my exact scenario (using GTM) and I get the same resources error that is being talked about here. Did you ever solve or get around it? Unfortunately not. We're currently reviewing the testsuite as it's a bit flakey. My "fix" is to disable GTM on the development environment. I experience the same problem, and am unable to fix it by removing google analytics code from the webpage.
Okay, so by searching the repo for 'maxWait' I found some piece of a changelog which points out that the option is now called waitDuration. By setting waitDuration to 30 * 1000 in

var browser = new zombie({waitDuration: 30*1000})

I was able to fix this issue. I must point out that the documentation of this project is terrible. I had this problem when trying to use 127.0.0.1 instead of localhost in the line:

zombie.localhost('localhost', 5000);

even with waitDuration: 30*1000 +1

zombie.waitDuration = '30s';

fixed it... before it was maxWait, and waitFor as ms... now there's a new one??? This is working for me too. Same problem here, setting the waitDuration doesn't solve it in my case. I've since switched to serializable views (i.e. React) and Jest snapshots. Edit: I just increased the waitDuration to make it go away for me:

var zombie = require('zombie');
zombie.waitDuration = '30s';

had the same problem, setting a larger waitduration solved it. thanks
gharchive/issue
2015-03-23T14:00:36
2025-04-01T04:33:32.863675
{ "authors": [ "AndersonZacharyT", "DasithKuruppu", "GuillaumeLeclerc", "JulianKingman", "a7madgamal", "assaf", "berliner", "bf", "blubbll", "chadfurman", "chapma26", "dlbrown06", "icodeforlove", "mikegleasonjr", "ottok", "paton", "rfphill", "rmfitzpatrick", "sjparkinson", "thomaslsimpson", "zackiles" ], "repo": "assaf/zombie", "url": "https://github.com/assaf/zombie/issues/882", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1588677572
[feature] add thrift support https://docs.rs/tree-sitter-thrift/latest/tree_sitter_thrift/ Example usage: find fields without comments.

id: no-field-comment
message: field should have comment.
severity: error
language: Thrift
rule:
  kind: field
  not:
    follows:
      kind: comment
gharchive/issue
2023-02-17T02:59:34
2025-04-01T04:33:32.880509
{ "authors": [ "HerringtonDarkholme" ], "repo": "ast-grep/ast-grep", "url": "https://github.com/ast-grep/ast-grep/issues/255", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2400603912
Add -> None automatically to all pytest tests When running mypy on a larger codebase with many tests, it will report missing -> None annotations on tests in strict mode. I suggest adding a rule that adds this return type to all tests discovered by pytest. Hi, thanks for the feature request! Have you tried enabling the autofixes for ANN201, ANN202, ANN204, ANN205 and ANN206? They might already do what you want here. I just did seconds ago and was about to close the issue due to this ;) Thanks much You're very welcome, glad to hear it! :D
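For readers landing here, this is the shape of the change those annotation rules automate; the function names are made up for illustration:

```python
# Before: under mypy --strict this test is flagged as missing a return
# type annotation.
def test_total_before():
    assert sum([1, 2, 3]) == 6

# After the ANN201-style fix: pytest discovers it the same way, and mypy
# is satisfied because test functions return nothing.
def test_total_after() -> None:
    assert sum([1, 2, 3]) == 6
```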
gharchive/issue
2024-07-10T12:16:22
2025-04-01T04:33:32.978063
{ "authors": [ "AlexWaygood", "chbndrhnns" ], "repo": "astral-sh/ruff", "url": "https://github.com/astral-sh/ruff/issues/12274", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1830428550
select E not include Exxx? $ ruff check c.py --select E shows no error, while

$ ruff check c.py --select E231
c.py:1:7: E231 [*] Missing whitespace after ','
Found 1 error.
[*] 1 potentially fixable with the --fix option.

The same result if I use pyproject.toml to set select. version: ruff 0.0.280 or main

c.py:

a = (1,2)

I'm not sure if it's a bug or I don't use ruff in the right way. I see, E231 seems to still be in the Nursery.
gharchive/issue
2023-08-01T04:13:15
2025-04-01T04:33:32.980320
{ "authors": [ "OmmyZhang" ], "repo": "astral-sh/ruff", "url": "https://github.com/astral-sh/ruff/issues/6222", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2062737103
About format imports Why is it that imports used to be formatted into three groups, but now suddenly into two? I'm not sure what I set up to cause this. The current format is shown below: Most likely it can't find core (assuming core is a local module, and not a third-party module). Could you share your rough project structure, and your pyproject.toml or ruff.toml file? And where is the pyproject.toml relative to those files? Where is the .ruff.toml being stored? (The above example works fine for me without any .ruff.toml file, so I want to make sure I'm giving you the correct answer.) Yeah you can -- you can add something like this to your settings.json in VS Code:

{
    "ruff.lint.args": ["--extend-select", "I"]
}

Awesome, so glad it's resolved! Sorry for the trouble.
gharchive/issue
2024-01-02T17:35:45
2025-04-01T04:33:32.983820
{ "authors": [ "Byxs20", "charliermarsh" ], "repo": "astral-sh/ruff", "url": "https://github.com/astral-sh/ruff/issues/9361", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2130125882
PYI049: false positive for private inner class The following code results in a PYI049 error even though _B is very much used:

from typing import TypedDict


class A:
    class _B(TypedDict):
        pass

    def f(self) -> None:
        print(A._B())


A().f()

$ ruff check --select=PYI test.py
test.py:5:11: PYI049 Private TypedDict `_B` is never used
Found 1 error.

$ ruff --version
ruff 0.2.1

We should avoid flagging these when they're defined in a class scope. @AlexWaygood just because I see you on this issue, do you want to fix? Otherwise I can. @AlexWaygood just because I see you on this issue, do you want to fix? Otherwise I can. I'm super busy this week at work -- I have a lot of handover stuff to be getting on with :( If you could take a look, that would be great! No prob, on it.
gharchive/issue
2024-02-12T13:19:46
2025-04-01T04:33:32.986221
{ "authors": [ "AlexWaygood", "arvidfm", "charliermarsh" ], "repo": "astral-sh/ruff", "url": "https://github.com/astral-sh/ruff/issues/9948", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1942126533
Insert newline after nested function or class statements Summary Insert a newline after function and class definitions in other suite kinds like black does. I'm not entirely sure about the exact conditions we need to test for, but it fixes all cases that I tested. Current dependencies on/for this PR: main PR #7944 PR #7946 👈 This comment was auto-generated by Graphite. This is more complex than I expected, leaving it for now:

def fakehttp():
    class FakeHTTPConnection:
        if mock_close:
            def close(self):
                pass

    FakeHTTPConnection.fakedata = fakedata

Here we must not insert an empty line after close because the class already does. I think we need to switch strategies to always check the last nested child for each but the last element in a suite.
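The "check the last nested child" strategy reads roughly like this toy sketch, using plain dicts in place of the formatter's real CST nodes (all names here are hypothetical):

```python
def deepest_last_child(stmt):
    """Follow the trailing statement of each nested suite to the innermost one."""
    while stmt.get("body"):
        stmt = stmt["body"][-1]
    return stmt

def needs_empty_line_after(stmt):
    """True when the statement's innermost trailing child is a def or class,
    i.e. when an empty line must separate it from the next sibling."""
    return deepest_last_child(stmt)["kind"] in {"function_def", "class_def"}

# Mirrors the fakehttp example: class -> if -> def close
close_def = {"kind": "function_def", "body": []}
if_stmt = {"kind": "if", "body": [close_def]}
class_stmt = {"kind": "class_def", "body": [if_stmt]}
```

Applied to each element of a suite except the last, this avoids doubling the blank line that the enclosing class already emits after close.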
gharchive/pull-request
2023-10-13T15:01:51
2025-04-01T04:33:32.990281
{ "authors": [ "konstin" ], "repo": "astral-sh/ruff", "url": "https://github.com/astral-sh/ruff/pull/7946", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1224769586
Add missing docstrings for some of the Triggers We don't have docstrings for the following Triggers

astronomer/providers/core/triggers/external_task.py:14:class TaskStateTrigger(BaseTrigger):  # noqa: D101
astronomer/providers/core/triggers/external_task.py:72:class DagStateTrigger(BaseTrigger):  # noqa: D101
astronomer/providers/databricks/triggers/databricks.py:9:class DatabricksTrigger(BaseTrigger):  # noqa: D101

We should fix this and get it in 1.3.0 - @phanikumv
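The fix is mechanical; a sketch of the kind of docstring that clears D101, with BaseTrigger stubbed out here (the wording below is illustrative, not the text that was merged):

```python
class BaseTrigger:  # stand-in for airflow.triggers.base.BaseTrigger
    pass

class TaskStateTrigger(BaseTrigger):
    """Wait asynchronously for a task in another DAG to reach one of the
    allowed states, polling instead of occupying a worker slot."""

class DagStateTrigger(BaseTrigger):
    """Wait asynchronously for a whole DAG run to reach one of the
    allowed states."""
```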
gharchive/issue
2022-05-03T23:37:08
2025-04-01T04:33:32.998645
{ "authors": [ "kaxil" ], "repo": "astronomer/astronomer-providers", "url": "https://github.com/astronomer/astronomer-providers/issues/304", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2055789025
Make hipscat conda installable See https://github.com/conda-forge/staged-recipes/pull/24833 for hipscat and associated packages being made conda-installable. This ticket is to capture this work and progress from the hipscat side. As of v0.2.1, the released tarball contains the .git_archival.txt file. https://github.com/astronomy-commons/hipscat/releases/tag/v0.2.1 Here's the conda-forge PR: https://github.com/conda-forge/staged-recipes/pull/25115 I'm working on adding the running of pytest tests; that may trigger some small PRs to hipscat, hipscat-import and LSDB. Done! https://anaconda.org/conda-forge/hipscat
gharchive/issue
2023-12-25T17:12:15
2025-04-01T04:33:33.012597
{ "authors": [ "delucchi-cmu", "mjuric", "nevencaplar" ], "repo": "astronomy-commons/hipscat", "url": "https://github.com/astronomy-commons/hipscat/issues/188", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
169377436
Docs change theme to default astropy @cdeil - I acknowledge that there are issues with the default theme, but it's rather nice to have a shared look for the packages. The theme is hosted in astropy-helpers, so any suggestions for improvements are, I think, welcome there. I'm strongly 👎 on this change. The docs theme https://github.com/snide/sphinx_rtd_theme has over 1000 stars on github, is used by 1000s of projects and several astropy-affiliated packages. That has a good reason ... a lot of work went into making it, it looks nice on desktop, tablets and phones. Aside from my personal opinion that the Astropy theme doesn't "look good", it does not work as well and is not as well maintained. So if anything: please stop changing packages to the Astropy theme, and instead let's change more to a better theme if you think a "shared look for packages" is important. Coverage remained the same at 63.243% when pulling 7fd62fa0a2dae7bef0a0f5f390706812f3f93419 on bsipocz:docs_change_theme into 9c07e37ddcb229486fc21bc28c26f3539c338ee2 on astropy:master. So if anything: please stop changing packages to the Astropy theme, and instead let's change more to a better theme if you think "shared look for packages" is important. I switched only a small number of packages as it was included in the upgrade to use the latest package template (solving quite a lot of other issues). Obviously accepting these changes is optional. Also many of the astropy maintained packages were already using the theme for quite a long time now. Also, my view is that it's good to have a different look than the other 1000x packages out there that use sphinx_rtd. Until someone takes the time to figure out how to customize it, and whether it is customizable enough, I would rather see packages share that of astropy to emphasize the connection, rather than just look ok on a phone. Anyway, it may be worth raising this at the helpers and having a discussion there with others who worked on the theme.
@cdeil could you open the discussion on theming on package-template? I agree with @bsipocz 's logic that we should have a common rtfd theme or look for astropy-affiliates, and so I support the general approach of enforcing that conformity. However, if there are problems with the theme (....there are), we should be fixing them across the board. I like sphinx-rtd-theme but I also like the idea of having customized themes to make our packages more recognizable. So I think it would be great at some point to fork sphinx-rtd-theme and maybe just change the color theme and add the logo. @keflavich wrote: @cdeil could you open the discussion on theming on package-template? I agree with @bsipocz 's logic that we should have a common rtfd theme or look for astropy-affiliates, and so I support the general approach of enforcing that conformity. However, if there are problems with the theme (....there are), we should be fixing them across the board. As I said before, I disagree. I'm certainly 👎 on enforcing conformity or even to try and convert projects like this one with nice and perfectly working Sphinx docs to astropy-helpers. I'd rather not start a discussion issue on astropy-helpers on that topic. We could start a thread on astropy-dev, or just delay that discussion to the next Python in astronomy and have a face-to-face discussion, immediately followed up by sprinting on improving things. IMO that would be better than a GH issue with discussions and (my guess) no action. Just to give some background where I'm coming from: when astropy-helpers was introduced in 2014 I asked for one page of documentation on how it works. In https://github.com/astropy/astropy-helpers/issues/56#issuecomment-49770563 @eteq commented that he agrees and even that some docs explaining how astropy-helpers works are high priority.
It's 2016 and I still don't know how the astropy-helpers Sphinx stuff works; every time there's an issue or I try to configure or extend something, it's very frustrating and takes me a lot of time. @astrofrog @cdeil - What do you think: is figuring out an astropy theme that builds on sphinx_rtd_theme more like a hackday project, or somewhat longer, so it could be part of a gsoc project? E.g. do you plan to add the gammapy web dev project again this year? If yes, would you mind having this piggyback on it? Side comment: For Gammapy, we're doing two face-to-face meetings in the coming weeks (see here) to grow the user and contributor base, and we're competing with Gammapy as CTA science tools this year against Gammalib/ctools. So I probably won't have time to mentor for GSoC at all, unless it's with someone very good that's already contributing. Apart from that, I'm not sure there's a good GSoC project here. I mean, the first step would be to discuss if Astropy and affiliated packages want a new theme at all, no? To me it's not clear that more than a few people really care enough about improving things at the moment, and in what direction to go (improve the current theme, or start a new one based on sphinx_rtd_theme). So my suggestion here would actually be to close this PR for now, and then to maybe bring up this question at some future f2f meeting where many Astropy devs / users are present. What do we do here? It doesn't help to leave this PR open without a decision. I prefer the RTD theme we have now (see https://pyregion.readthedocs.io/) and @bsipocz prefers the Astropy theme and this PR makes the change. Maybe other pyregion devs can vote (cc @leejjoon @keflavich @sargas @astrofrog) and then we decide? I would prefer to defer this until we know what the long-term future of this package is given the regions package. I don't have strong feelings about this repo, so happy to close the PR without merge.
However I still strongly feel that having a uniformish look for (some) of the affiliates is very valuable. We didn't get anywhere at pyastro about customizing the sphinx-theme, but I still have some hopes that @Cadair et al. will manage to get something done (or explore the options) as part of the sunpy web GSoC.
gharchive/pull-request
2016-08-04T13:43:11
2025-04-01T04:33:33.131859
{ "authors": [ "astrofrog", "bsipocz", "cdeil", "coveralls", "keflavich" ], "repo": "astropy/pyregion", "url": "https://github.com/astropy/pyregion/pull/96", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
101967674
Change from pycurl to Requests requests seems like a better dependency, plus far easier to install on the Raspberry Pi (I think.) Will attempt to migrate to it soon. Fixed! Requests is even easier than pycurl.
gharchive/issue
2015-08-19T18:38:07
2025-04-01T04:33:33.140274
{ "authors": [ "astrowonk" ], "repo": "astrowonk/DC-Metro-Pi-Speaker", "url": "https://github.com/astrowonk/DC-Metro-Pi-Speaker/issues/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1368808112
ISSUE #57 :: Data Processing & Training Scripts, Converted Models for TPCx-AI Fraud Dataset [PLEASE DO NOT MERGE] Running Tests for Scale Factors 1, 3, 10, 30. Below are the dataset file sizes for each Scale Factor. 555M financial_transactions_train_SF1.csv (~7.3M Rows. Please scale the rest of the dataset rows accordingly.) 1.2G financial_transactions_train_SF3.csv 4.2G financial_transactions_train_SF10.csv 9.9G financial_transactions_train_SF30.csv Is it better to compress the model files? Is it better to compress the model files? Yes, that would make sense to reduce the number of files and file sizes. I added the git commands to be automatically pushed after running the required tasks, so I didn't notice the number of files being pushed. I will try to do that today or tomorrow.
gharchive/pull-request
2022-09-11T02:56:21
2025-04-01T04:33:33.143258
{ "authors": [ "hguan6", "venkate5hgunda" ], "repo": "asu-cactus/netsdb", "url": "https://github.com/asu-cactus/netsdb/pull/61", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1146369286
working cross-platform version Several changes were conducted to make the C++ source code cross-platform (Windows, Linux, macOS). add explicit conversions, mainly for min and max functions, to make the source code pass compilation using g++ and clang++; remove the fully qualified name on max functions (in DTA.h only), which would cause compiler error C2589 using MSVC; include essential headers and using-declarations where they are needed; remove the extern keyword before class NetworkForSP in DTA.h to make it a forward declaration; otherwise, the compilation will fail using g++ (both MSVC and clang++ will skip this keyword); declare using __int64 = long long for compilers that are not MSVC; fix a typo on _int64 by replacing it with __int64; fix a typo on fopen_s() by replacing it with fopen_ss(); remove ad-hoc #define BUILD_EXE in multiple files; update CMakeLists.txt to include simulation.cpp; rename the source code directories; update directory info in DTALite.sln accordingly; remove temporary files under release and src/cpp; add .gitignore; apply minor cosmetic changes to multiple files. many thanks, reviewed and merged into the main branch! Thanks for the changes.
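The non-MSVC integer alias described in the change list above is small but easy to get wrong, so here is a minimal sketch of how such a guard typically looks. The function name is illustrative only and not taken from the DTALite codebase:

```cpp
#include <cstdint>

// MSVC provides __int64 as a built-in keyword; g++ and clang++ do not,
// so the alias is introduced only for non-MSVC compilers. Under MSVC the
// #ifndef branch is skipped and the built-in keyword is used instead.
#ifndef _MSC_VER
using __int64 = long long;
#endif

static_assert(sizeof(__int64) == 8, "__int64 is expected to be 8 bytes");

// Illustrative use: code written against MSVC's __int64 now compiles
// unchanged under g++ and clang++ as well.
__int64 add64(__int64 a, __int64 b) { return a + b; }
```

With a guard like this in a shared header, the same translation units build under MSVC, g++, and clang++ without scattering per-compiler typedefs through the sources.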
gharchive/pull-request
2022-02-22T01:55:35
2025-04-01T04:33:33.147502
{ "authors": [ "jdlph", "xzhou99" ], "repo": "asu-trans-ai-lab/DLSim-MRM", "url": "https://github.com/asu-trans-ai-lab/DLSim-MRM/pull/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
791507358
Escape does not close parameter hints or problems popup Hey, Thanks for putting together this great extension. This is what got me into vim/neovim a few days ago, and I'm loving it. I've noticed that I cannot close the popup with parameter hints or the problems popup, which is usually done using the Escape key. For example: Parameter hints (triggered using gh): Problems popup (triggered using Alt + F8): I assume this is because the neovim extension handles the Escape key itself and calls something along the lines of preventDefault on the event, which does not let VSCode handle it too. I changed the when condition for the Escape key keybinding to: editorTextFocus && neovim.init && !markersNavigationVisible && !parameterHintsVisible (essentially, I added && !markersNavigationVisible && !parameterHintsVisible) and the Escape key started working after that. I would happily contribute a fix to the extension if you're ok with that 🙂 I wanted to report it first and let you handle it yourself if you'd like Nice solution! This was driving me crazy, but your solution works perfectly. I'd love to see this integrated into the extension, since I'm sure we're not the only ones. I was very happy to find this solution, but after modifying the when condition leaving insert mode took 2-3 Esc taps sometimes, so now I'm back to shift+Esc. Might be something with my setup? I didn't encounter such problems myself, so looks like something on your end may be capturing the Esc keys. You could try disabling all extensions aside from neovim and see if that helps, and then enable extensions one by one to see which one causes the problem I was very happy to find this solution, but after modifying the when condition leaving insert mode took 2-3 Esc taps sometimes, so now I'm back to shift+Esc. Might be something with my setup? 
I didn't encounter such problems myself, so looks like something on your end may be capturing the Esc keys. You could try disabling all extensions aside from neovim and see if that helps, and then enable extensions one by one to see which one causes the problem Haven't happened since, so the solution seems fine to me 👌🏻 I edited mine so it would also work with diff windows. Thank you for this. If you feel the same way, update your PR and let's hope they merge it; this is a must :) "when": "inDiffEditor && editorTextFocus && neovim.init && !markersNavigationVisible && !parameterHintsVisible"
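For anyone wanting to apply the workaround from this thread, the override goes in the user's keybindings.json. A hypothetical sketch follows; the "key" and "command" values are assumptions (check the extension's contributed commands in your VSCode), while the "when" clauses are the ones quoted verbatim in the comments above:

```jsonc
// Hypothetical keybindings.json entries (JSONC). The command id below is
// an assumption; only the "when" clauses come from this thread.
[
  {
    "key": "escape",
    "command": "vscode-neovim.escape",
    "when": "editorTextFocus && neovim.init && !markersNavigationVisible && !parameterHintsVisible"
  },
  {
    "key": "escape",
    "command": "vscode-neovim.escape",
    "when": "inDiffEditor && editorTextFocus && neovim.init && !markersNavigationVisible && !parameterHintsVisible"
  }
]
```

The added `!markersNavigationVisible && !parameterHintsVisible` conditions let VSCode handle Escape itself while either popup is open, instead of the extension consuming the key.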
gharchive/issue
2021-01-21T21:29:38
2025-04-01T04:33:33.158848
{ "authors": [ "Gelio", "b1m1nd", "jose-elias-alvarez", "thekarel" ], "repo": "asvetliakov/vscode-neovim", "url": "https://github.com/asvetliakov/vscode-neovim/issues/534", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
988615494
autocmd BufWritePost I'm not sure if it's a supported feature, but some of the autocmd BufWritePost directives I have configured to run whenever a file is saved are not working if I save the file in VSCode. Saving the files in nvim works as expected. E.g.: autocmd BufWritePost ~/.local/src/my-repo/{*.c,*.h} !cd ~/.local/src/my-repo/; sudo make install This extension currently uses a wrapper called AlterCommand for some commands, so there are quirks. See https://github.com/asvetliakov/vscode-neovim/issues/136#issuecomment-581085268 for details and future ideas.
gharchive/issue
2021-09-05T22:46:22
2025-04-01T04:33:33.161252
{ "authors": [ "diogogmt", "justinmk" ], "repo": "asvetliakov/vscode-neovim", "url": "https://github.com/asvetliakov/vscode-neovim/issues/743", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1440463307
ci: remove not needed workflow This PR removes a Golang workflow that is not needed. This change will reduce GitHub Actions runtime. Thanks to this line, the file I just removed will not be recreated here by the bot. /rtm
gharchive/pull-request
2022-11-08T15:53:58
2025-04-01T04:33:33.163039
{ "authors": [ "codingtenshi", "magicmatatjahu" ], "repo": "asyncapi/generator", "url": "https://github.com/asyncapi/generator/pull/862", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1124250696
Unable to reuse Server Objects defined in Components.servers field Hello everybody, While implementing 2.3 support into tooling I've noticed a couple of inconsistencies/errors in the specification. As AsyncAPI 2.3.0 introduced the new Components.servers field, there is no way Server Objects defined in this map can be used later in the definition. Usage in Servers Object I assume the first intended use case was to reference a Server Object defined in the Components.servers field from the Servers Object. This intention is clear from the release article: What the article is showing is not possible according to the spec, as the Servers Object pattern field only allows Server Object and not Server Object | Reference Object. Usage in Channel Item Object.servers field With this I'm not 100% sure of the intended usage. The specification says the following: This limits referencing Server Objects to ones defined in the Servers Object (by name). Should we allow referencing Server Objects from the Components.servers field as well? I'll provide a PR in a couple of minutes. Good catch! This limits referencing Server Objects to ones defined in the Servers Object (by name). Should we allow referencing Server Objects from the Components.servers field as well? That would be a new feature, as regardless of what you define in your components sections, servers are not "active" until referenced under servers. They are nothing but reusable chunks. So string still makes sense to use for Channel Item Object.servers. 🙂 Right, I assumed it was like you explained but I'd rather ask ;] Having said that, maybe it warrants clarification in the spec? But maybe not...not sure. 
Right, I assumed it was like you explained but I'd rather ask ;] Having said that, maybe it warrants clarification in the spec? But maybe not...not sure. 
That would be a new feature, as regardless of what you define in your components sections, servers are not "active" until referenced under servers. They are nothing but reusable chunks. So string still makes sense to use for Channel Item Object.servers. 🙂 Right, I assumed it was like you explained but I'd rather ask ;] Having said that, maybe it warrants clarification in the spec? But maybe not...not sure. Could definitely be, now that you can define servers in multiple places 🤔 Maybe another issue for discussing that further? Yeah, I think it's worth discussing further. If agreed, I'll extract the part of this issue regarding servers to a new one. This issue can now be closed as it got merged and released. @char0n @smoya thanks, closing.
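To make the two usages discussed in this thread concrete, here is a minimal hypothetical AsyncAPI 2.3.0 sketch; the server names, URLs, and channel are illustrative only. The channel-level `servers` field names an entry of the root Servers Object by plain string, while a `$ref` from `servers` into `components.servers` is what the release article showed but the 2.3.0 schema did not actually allow:

```yaml
asyncapi: '2.3.0'
info:
  title: Example            # illustrative document, not from the spec
  version: '1.0.0'

components:
  servers:
    reusableBroker:         # reusable chunk; not "active" until referenced
      url: mqtt://broker.example.com
      protocol: mqtt

servers:
  production:               # must be a full Server Object in 2.3.0;
    url: mqtt://broker.example.com   # a Reference Object such as
    protocol: mqtt                   # { $ref: '#/components/servers/reusableBroker' }
                                     # was not permitted by the 2.3.0 schema
channels:
  user/signedup:
    servers:
      - production          # plain string naming an entry under the root servers
```

This illustrates why the discussion above distinguishes between reusable chunks under `components.servers` and "active" servers under the root `servers` key.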
gharchive/issue
2022-02-04T14:14:28
2025-04-01T04:33:33.170498
{ "authors": [ "char0n", "jonaslagoni", "smoya" ], "repo": "asyncapi/spec", "url": "https://github.com/asyncapi/spec/issues/705", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1651969564
Where to put JMS specific headers? The documentation for version 2.6.0 of the AsyncAPI spec seems to contradict itself regarding protocol-specific headers for JMS. It says here: https://www.asyncapi.com/docs/reference/specification/v2.6.0#messageObject Schema definition of the application headers. Schema MUST be of type "object". It MUST NOT define the protocol headers. However, if I follow the documentation through: A Message object may have a traits property. A Message Trait object may have a bindings property. A Message Bindings object may have a jms property. The JMS Message Binding object does not permit any properties. If I cannot put the protocol-specific headers in the Message or Message Trait, and if I can't put them in the JMS Message Binding... where can I put them? @adamretter in AsyncAPI all protocol-specific semantics are delegated to AsyncAPI bindings. You can also have Message-level bindings. The problem is just that nobody from the JMS community has yet taken the time to contribute what these should be and volunteered to maintain it in the future. There was some enquiry in the past https://github.com/asyncapi/bindings/issues/87 but it did not go too far. Does that help? @derberg That is helpful, thanks. So I guess we should put things in the JMS Message Binding object of our AsyncAPI schema. We are not experts in JMS, but we can try and help; would you be happy to receive a PR against https://github.com/asyncapi/bindings/tree/master/jms from us? Yes please!!! It is hard to find someone to take the first step. You might not be experts, but taking the first step will enable us to share your PR with the rest of the community to drag them in with feedback. Thanks 🙏 Once you open up a PR we will close this issue. For now leaving it open as a reminder Thanks @derberg I have sent a PR for a set of JMS bindings to here - https://github.com/asyncapi/bindings/pull/193
gharchive/issue
2023-04-03T12:25:07
2025-04-01T04:33:33.178078
{ "authors": [ "adamretter", "derberg" ], "repo": "asyncapi/spec", "url": "https://github.com/asyncapi/spec/issues/919", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1498793483
chore: improve json schema for .asyncapi-tool We have this schema to validate .asyncapi-tool files. We need to make sure ☝🏼 it is as detailed as possible to: enable better validation enable better autocompletion (we want to push this schema to Schema Store so in VSCode people get autocompletion) with this PR I'm just adding as many details as possible so we can push it to Schema Store. As a followup we can think of a somewhat automated way to update this JSON Schema, as at the moment enum information is duplicated there manually. /rtm
gharchive/pull-request
2022-12-15T17:06:46
2025-04-01T04:33:33.180447
{ "authors": [ "derberg" ], "repo": "asyncapi/website", "url": "https://github.com/asyncapi/website/pull/1161", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1561789902
fix: addition of Prev/Next buttons in build time Description This PR rewrites the logic for automating the Prev and UpNext buttons of the docs in the scripts folder. Hence, the entire implementation of the buttons is written and executed at build time, rather than at run time, thus improving the efficiency of the website. Also, note that the structure of posts.json has been changed from an array of all posts to a JSON object having key-value pairs of docs, blog, about, jobs and docsTree. Related issue(s) Extends #1188 @derberg @alequetzalli @magicmatatjahu Kindly review this PR and do point out the errors you find in the implementation. In case you don't understand any part of the implementation, feel free to ask. hmm this feels a little weird.. what do you think @akshatnema? 🙃 Yeah, this feels a little weird for everyone, but if we want to show the rootElement titles too, this one has the rootElement title and page title the same. And I don't want to hardcode the prev/next buttons specifically for this case. yeah i hear you, i guess it will be ok 👍 /rtm
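A sketch of the posts.json restructuring described in the PR text above may help. The top-level keys are taken from the PR description; the value shapes (arrays of posts, a tree object) are assumptions for illustration:

```json
{
  "docs": [],
  "blog": [],
  "about": [],
  "jobs": [],
  "docsTree": {}
}
```

Previously the file was a single flat array of all posts; with this change, consumers index by section, and the assumed docsTree object would carry the hierarchy used to compute the Prev/Next links at build time rather than at run time.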
gharchive/pull-request
2023-01-30T05:49:06
2025-04-01T04:33:33.183968
{ "authors": [ "akshatnema", "alequetzalli" ], "repo": "asyncapi/website", "url": "https://github.com/asyncapi/website/pull/1273", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1818220467
Support micronaut-r2dbc We aim to support the smooth transition of micronaut-r2dbc from dev.miku to io.asyncer:r2dbc-mysql by actively resolving the related issues. link: https://github.com/micronaut-projects/micronaut-r2dbc/issues/357 I found the root cause; will submit a PR soon
gharchive/issue
2023-07-24T11:25:51
2025-04-01T04:33:33.185528
{ "authors": [ "jchrys" ], "repo": "asyncer-io/r2dbc-mysql", "url": "https://github.com/asyncer-io/r2dbc-mysql/issues/139", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }