added: string (date), ranging from 2025-04-01 04:05:38 to 2025-04-01 07:14:06
created: timestamp[us] (date), ranging from 2001-10-09 16:19:16 to 2025-01-01 03:51:31
id: string, lengths 4 to 10
metadata: dict
source: string, 2 classes
text: string, lengths 0 to 1.61M
2025-04-01T06:38:24.423949
2023-01-03T18:32:43
1517771453
{ "authors": [ "dawnwages" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5353", "repo": "djangonaut-space/wagtail-indymeet", "url": "https://github.com/djangonaut-space/wagtail-indymeet/issues/35" }
gharchive/issue
Integrate testing (playwright) + documentation Blocking: https://github.com/dawnwages/wagtail-indymeet/issues/3 This issue must be done first. This may be a good place to ask Ed Rivas or Zan Anderle for their opinion on Django + Playwright, or Andrew Knight or Debbie O'Brien. It doesn't have to be Playwright. https://youtu.be/_tAhD-OCuN8 https://github.com/RachellCalhoun/blog-posts
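As a rough sketch of what such a browser test could look like — assuming the Python flavour of Playwright and a Django dev server running at http://localhost:8000, neither of which is specified in the thread — a minimal smoke test might be:

```python
# Hedged sketch, not from the issue. Assumes `pip install playwright` and
# `playwright install chromium`, plus a Django dev server on localhost:8000.
from playwright.sync_api import sync_playwright

def test_homepage_renders():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("http://localhost:8000/")  # hypothetical local URL
        assert page.title() != ""            # the page rendered something
        browser.close()
```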
2025-04-01T06:38:24.437471
2021-03-30T19:02:42
845078844
{ "authors": [ "djc", "marcbowes" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5354", "repo": "djc/bb8", "url": "https://github.com/djc/bb8/pull/104" }
gharchive/pull-request
Implement non-retriable errors ManageConnection::should_retry can now be implemented to prevent bb8 from retrying connection attempts. This serves two purposes: The caller of pool.get will now receive the underlying exception if a "catastrophic" error is hit. Previously, the caller would get a TimedOut with no cause. The error can be surfaced quickly, rather than going through several connection attempts. This is particularly useful for interactive use of bb8, such as allowing a user to directly input a connection to a program such as a CLI. If the connection is incorrect, it is desirable to fail fast with a useful error message. The tests will fail in the postgres/redis libraries because I haven't propagated the changes to make Error implement Clone. I thought we should discuss that first. It seems like there are two paths forward: If you get a catastrophic error, the Sink does not get a copy of the actual error. Instead, it could get a CatastrophicError or something else. This seems reasonable to me. Make Error: Clone as I've shown. This is not backwards compatible and is a fairly annoying bound to have. Yes, I think your understanding of (1) is correct, and I do think it might solve your specific problem. I had forgotten about this exact feature, or I would have mentioned it sooner as potentially helpful to your problem. The initial spawning of connections can be executed asynchronously while returning the first error directly. Note, to make that work you'll need to configure min_idle to something other than 0 (the default). Given the lack of follow-up, going to close this for now. Feel free to reopen if you think it's still relevant. My bad for losing track of this. It didn't solve my problem actually. I ended up just making an initial health check call myself, outside of bb8. I passed the client into bb8 so the initial (http) connection goes into the pool.
2025-04-01T06:38:24.438476
2018-12-08T23:48:55
388970732
{ "authors": [ "Ralith" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5355", "repo": "djc/quinn", "url": "https://github.com/djc/quinn/pull/113" }
gharchive/pull-request
Fix travis nightly rustfmt Let's see if this helps. On review, we weren't using rustfmt in the nightly build anyway for obvious reasons.
2025-04-01T06:38:24.445616
2020-11-03T17:41:37
735516546
{ "authors": [ "chubchenko", "djezzzl" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5356", "repo": "djezzzl/database_consistency", "url": "https://github.com/djezzzl/database_consistency/issues/72" }
gharchive/issue
Reserved word and uniqueness validation First of all, I want to say thank you for this gem!!! We found an issue where the column name is equal to a reserved word and we have to add an index to make it unique. RSpec.describe DatabaseConsistency::Checkers::MissingUniqueIndexChecker do subject(:checker) { described_class.new(model, attribute, validator) } let(:model) { klass } let(:attribute) { :from } let(:validator) { klass.validators.first } context 'with postgresql database' do include_context 'postgresql database context' context 'when uniqueness validation has case sensitive option turned off' do context 'when the column name is equal to a reserved word' do let(:klass) { define_class { |klass| klass.validates :from, uniqueness: { case_sensitive: false } } } before do define_database_with_entity do |table| table.string :from table.index "lower('from')", unique: true end end specify do expect(checker.report).to have_attributes( checker_name: 'MissingUniqueIndexChecker', table_or_model_name: klass.name, column_or_attribute_name: 'lower(from)', status: :ok, message: nil ) end end end end end It looks like the problem occurs during the following comparison: extract_index_columns(index.columns).sort == sorted_index_columns which equals ["lower('from'::text)"] == ["lower(from)"] Hey @chubchenko, thank you for pointing this out! And for using the gem, of course! I'm not sure how smart we should be here, but I guess it would be fine if we compare assuming that the value can be cast to any type. So, I'll try to fix it ASAP 👍 The fix is here: https://github.com/djezzzl/database_consistency/pull/73. I'll release it probably today (with some other improvements). Hey, I just released 0.8.9, please try it out and let me know if it didn't help. Feel free to reopen the issue if needed. P.S. Sorry it took me so long to release.
2025-04-01T06:38:24.484134
2015-03-15T17:52:11
61874605
{ "authors": [ "Bautistax", "anthonymonori", "dkhamsing" ], "license": "cc0-1.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5361", "repo": "dkhamsing/open-source-ios-apps", "url": "https://github.com/dkhamsing/open-source-ios-apps/issues/16" }
gharchive/issue
Merge Apps visitBCN https://github.com/maurovc/visitBCN https://itunes.apple.com/us/app/visitbcn/id904676442?l=es&ls=1&mt=8 a menjar https://github.com/maurovc/aMenjar https://itunes.apple.com/us/app/a-menjar!/id816473131?l=es&ls=1&mt=8 Color Blur https://github.com/maurovc/ColorBlur https://itunes.apple.com/us/app/id928863510 iGrades https://github.com/maurovc/iGrades https://itunes.apple.com/us/app/id816987574 Peggsite https://github.com/jenduf/GenericSocialApp https://itunes.apple.com/us/app/peggsite/id938445951?mt=8 It would be really helpful if you would create a pull request instead of suggesting new ones as an issue. It's ok, all contributions are welcome :-)
2025-04-01T06:38:24.524892
2016-05-28T15:58:03
157345052
{ "authors": [ "CyberShadow", "JackStouffer", "aG0aep6G", "andralex", "jmh530", "schuetzm", "wilzbach" ], "license": "BSL-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5362", "repo": "dlang/dlang.org", "url": "https://github.com/dlang/dlang.org/pull/1314" }
gharchive/pull-request
Areas of D usage As the first five minutes matter, I thought it would be nice if we have an overview of areas in which D is used. It is different to the great Overview page as it tries for every area to inform about applications written in D key reasons/selling points That being said, this document is a draft and your feedback & review is highlighy appreciated. If I left out an area or you miss a great application - please let me know! Also note that I propose to put this on dlang.org, because in contrast to the wiki we can go through the very helpful peer-review stage and have the official branding. Can someone have a look at the content? I beliebe providing such an overview easily accessible from the front page is part of our mission to make D more mainstream! Agreed that this is important. I wouldn't put "Bare metal" at the top of the list, though. It's not D's strongest point currently. Also, "Compilers" are not really bare metal applications. Agreed that this is important. I wouldn't put "Bare metal" at the top of the list, though. How would you structure it? It's not D's strongest point currently. Also, "Compilers" are not really bare metal applications. Renamed the category to System programming, fixed all the links and went over the text again. I'd say by "popularity" of the usage area. For example "Numerical computing / Data science" seems to be a hot topic, judging from the forum activity. On the other hand, "Game development" is sexy. But we probably shouldn't make a science out of it. I'd say by "popularity" of the usage area. For example "Numerical computing / Data science" seems to be a hot topic, judging from the forum activity. On the other hand, "Game development" is sexy I put academia on purpose on the bottom to avoid the impression it's an "academic language". Maybe we can create a new grouping? We could split academia into research and teaching, but then teaching would be alone. Has someone else a better idea? This one is still broken, need to use LINK2 as below... Rebased & fixed. The spacing on GPU programming looks wonky due to it being justified. Also, I wouldn't necessarily group Numerical Computing, GPU Computing, and Data Science in to Academia. Maybe split them in to their own category. I might also mention how chaining range operations is like dplyr or something like that in the data science section. This should be linked from somewhere. Documentation→Articles? Resources? I see that overview.html is apparently only linked from the Learn tour. That's not exactly great. The spacing on GPU programming looks wonky due to it being justified. Also, I wouldn't necessarily group Numerical Computing, GPU Computing, and Data Science in to Academia. Maybe split them in to their own category. It's now named cutting-edge research. I might also mention how chaining range operations is like dplyr or something like that in the data science section. On it - will come soon. Thanks for the idea. This should be linked from somewhere. I know added overview and this article to the bottom of dlang.org - I think it fits, because the heading is "Why D?". However the wording isn't the best yet. I see that overview.html is apparently only linked from the Learn tour. That's not exactly great. I know (I made many PRs to the tour) - we are highly "underlinked"! Resources? We have to be careful the current number of menu items is already too much. Can we somehow create a new group? Articles Could be an idea, but probably not the first place I search for an overview. Documentation. 
We should merge http://dlang.org/comparison.html (linked from documentation) with http://dlang.org/overview.html I had another look through the text and fixed a couple of points. Btw that's how I propose to list the overview on the front page: How should we proceed? More nitpicks? Hey I went through the text again (minor revisions), but I added fancy icons - they are not optimal yet. The usual CSS hacks (responsive, nice alignment) are missing, however I think you could already check whether some icons don't fit overview sections See all This looks really cool but I think should be approved by W/A. Normally this would probably belong on the wiki, but I really like how it's laid out. Some weird wrapping here, maybe just use a constant margin for the whole section: The icon currently used for "Industry" seems more associated with academy, I think. Last week I tried to make the icons more responsive. It's not perfect, but should look a lot better than before. Moreover I add two nitpick runs over the text - can someone take 5-10 minutes to give the text a final nitpick round, s.t. we can ship an initial version? :) I think using this for the industry icon would make more sense. Why is the industry icon on the other side of the page? I would remove the word "perfect" from the industry section. Your aim as a programmer is never to make perfect code as you know that's impossible. I think using this for the industry icon would make more sense. It isn't part of FontAwesome 4.2 (see also #1371) - I guess to whether icon we decide there, I will just use the same here. Why is the industry icon on the other side of the page? The idea was to have indent the subcategories, so I put for the categories the icons on the right side. Is this too weird? Better ideas? "few scientific programmers care about" sounds negative. "few scientific programs need to worry about" sounds better IMO. Also I would mention parallelism somewhere in that section. "Last but least" -> "Last but not least" I would link to the orgs using D page somewhere in the industry section Is this too weird? Better ideas? Yeah, it's kind of jarring. This effect is already achieved via the typography of the section headers. Yeah, it's kind of jarring. This effect is already achieved via the typography of the section headers. Alrighty :) Also I would mention parallelism somewhere in that section. I added: " allows to easily parallelize your algorithm and pipelines," I would link to the orgs using D page somewhere in the industry section I added: "D has been used in many, diverse domains. A short selection will be presented below. For a full overview you can browse the list of reported $(LINK2 $(ROOT_DIR)orgs-using-d.html, organizations which use the D Language). Thanks @JackStouffer :) Do we need more eyes or is this ready for the first version? I guess we're ready. Should we take away the funny but aged D mascot? Please squash commits I guess we're ready. Should we take away the funny but aged D mascot? There is a related discussion for the DLang Tour: https://github.com/stonemaster/dlang-tour/pull/248 Please squash commits With pleasure :) Engage!
2025-04-01T06:38:24.553159
2014-06-25T13:48:23
2709133707
{ "authors": [ "dlangBugzillaToGithub" ], "license": "BSL-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5363", "repo": "dlang/phobos", "url": "https://github.com/dlang/phobos/issues/9403" }
gharchive/issue
utf8 string not read/written to windows console sum.proxy reported this on 2014-06-25T13:48:23Z Transfered from https://issues.dlang.org/show_bug.cgi?id=12990 CC List aldacron bugzilla (@WalterBright) dlang-bugzilla (@CyberShadow) Description import std.stdio; void main() { string s = stdin.readln(); write(s); } The code above should write a unicode (specifically cyrillic) string to output to a windows console (with cp set to 65001), but the string comes out empty. The same code works correctly when run through windows debugger windbg.exe, so hopefully it will be an easy fix. sum.proxy commented on 2014-06-27T15:27:51Z I still see no output in the regular console (no exception indication either). However, when I run it with windbg.exe it throws some exception (can't tell which one exactly, couldn't figure out how to load debug symbols). Appears like a write problem to me.. dfj1esp02 commented on 2014-06-27T18:56:55Z Then try write(cast(ubyte[])s); dfj1esp02 commented on 2014-06-27T15:02:18Z import std.stdio, std.utf; void main() { string s = stdin.readln(); validate(s); write(s); } Check if validation passes. dfj1esp02 commented on 2014-06-27T14:59:07Z This bug is probably better to split. It either read an invalid utf-8 string, or couldn't write a valid utf-8 string. sum.proxy commented on 2014-06-28T07:12:22Z This time it returned an empty array ([]). Thanks. sum.proxy commented on 2014-10-25T10:41:22Z I tried the new version of the compiler with the issue you referred to, but alas - no luck. Please see https://issues.dlang.org/show_bug.cgi?id=1448#c12 SetConsoleCP(65001) and SetConsoleOutputCP(65001) didn't help either. Thanks. sum.proxy commented on 2014-08-15T11:34:25Z Sorry, any feedback on this one? dlang-bugzilla (@CyberShadow) commented on 2014-10-25T02:10:16Z Try calling SetConsoleCP(65001) and SetConsoleOutputCP(65001). dfj1esp02 commented on 2014-07-07T09:00:29Z An empty array means no input rather than no output. Did it wait for the input? Do you compile it for console or GUI subsystem? echo 000 | yourprogram.exe Does this work? sum.proxy commented on 2014-07-07T09:38:18Z Yes, it does wait for the input, but the output is empty. It's a console application and sending the input through pipe seems to work correctly. dlang-bugzilla (@CyberShadow) commented on 2014-10-25T13:51:34Z Indeed. Happens with both DMC and MSVC runtime. sum.proxy commented on 2014-07-03T08:02:07Z I also tried it on a 32-bit windows system and the behavior is the same - no output. sum.proxy commented on 2014-10-25T20:25:45Z From what I know this program will work incorrectly for any non-ascii unicode input, which I have confirmed through simple tests. scanf and strlen rely on '\0' to indicate string termination, but I don't think this goes well with unicode strings. I believe the right way to do something similar (without buffer length) is this: #include <stdio.h> #include <fcntl.h> #include <io.h> int main( void ) { wchar_t buf[1024]; _setmode( _fileno( stdin ), _O_U16TEXT ); _setmode( _fileno( stdout ), _O_U16TEXT ); wscanf( L"%ls", buf ); wprintf( L"%s", buf ); } For further info please refer to http://www.siao2.com/2008/03/18/8306597.aspx and http://msdn.microsoft.com/en-us/library/tw4k6df8%28v=vs.120%29.aspx HTH, Thanks. sum.proxy commented on 2014-10-25T14:35:30Z Do you find it necessary to report the issue elsewhere, or the guys in charge of https://issues.dlang.org/show_bug.cgi?id=1448 will do it? dlang-bugzilla (@CyberShadow) commented on 2014-10-25T13:53:32Z "scanf" misbehaves in the same way. 
Not a D bug, I think. sum.proxy commented on 2014-10-28T12:28:55Z Or perhaps "the right" way would be to stick to UTF-16, since it's default for Unicode in Windows. dlang-bugzilla (@CyberShadow) commented on 2014-10-25T14:42:32Z Report it where? To Microsoft? Figuring out why scanf is failing would probably be the next step to resolving this. sum.proxy commented on 2014-10-28T11:32:14Z I believe the problem is that default internal representation of Unicode in Windows is UTF-16, which implies that some sort of conversion would be necessary here. I haven't found a way to do it right yet. dlang-bugzilla (@CyberShadow) commented on 2014-10-26T00:35:23Z (In reply to Sum Proxy from comment #18) > scanf and strlen rely on '\0' to indicate string termination, but I don't > think this goes well with unicode strings. Not true. At least, not true with UTF-8, which is what we set the CP to. > I believe the right way to do something similar (without buffer length) is > this: I would not say that's the "right" way. That's the way to read wchar_t text, but we need UTF-8 text. sum.proxy commented on 2014-10-25T14:50:12Z Are you referring to C's scanf? Is it consistently reproducible in a small chunk of C code? dlang-bugzilla (@CyberShadow) commented on 2014-10-25T15:01:25Z Yep: /////////// test.c /////////// void main() { char buf[1024]; SetConsoleCP(65001); SetConsoleOutputCP(65001); scanf("%s", buf); printf("%d", strlen(buf)); } ////////////////////////////// bugzilla (@WalterBright) commented on 2023-06-22T08:10:47Z (In reply to Sum Proxy from comment #22) > SetConsoleCP(1200); 1200 utf-16 Unicode UTF-16, little endian byte order (BMP of ISO 10646); available only to managed applications https://learn.microsoft.com/en-us/windows/win32/intl/code-page-identifiers sum.proxy commented on 2014-10-28T12:53:37Z This actually works on my system: ///////////// test.d ////////////// import std.stdio; import std.c.windows.windows; extern(Windows) BOOL SetConsoleCP( UINT ); void main() { SetConsoleCP(1200); string s = stdin.readln(); write(s); } ///////////////////////////////////
2025-04-01T06:38:24.570268
2019-10-11T17:50:35
505980105
{ "authors": [ "GeorgLink", "valeriocos" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5364", "repo": "dlumbrer/kbn_radar", "url": "https://github.com/dlumbrer/kbn_radar/issues/11" }
gharchive/issue
Port Radar Visualization to Open Distro This issue is an idea for the Open Distro Hack Day at All Things Open on October 14. The idea is to port this visualization to Open Distro and submit it as an official plugin for Open Distro for Kibana. related to https://github.com/chaoss/grimoirelab/issues/219
2025-04-01T06:38:24.592969
2018-10-14T12:05:32
369895933
{ "authors": [ "MalloZup", "kjenney" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5365", "repo": "dmacvicar/terraform-provider-libvirt", "url": "https://github.com/dmacvicar/terraform-provider-libvirt/issues/449" }
gharchive/issue
Wait for Cloud-Init To Complete Version Reports: Distro version of host: cat /etc/*release | grep PRETTY_NAME PRETTY_NAME="Ubuntu 18.04.1 LTS" Terraform Version Report terraform --version Terraform v0.11.8 + provider.libvirt (unversioned) + provider.template v1.0.0 Libvirt version virsh --version 4.0.0 terraform-provider-libvirt plugin version (git-hash) 0.5.0 (Downloaded binary from releases) Description of Issue/Question I've got cloud-init set up to install an Ansible playbook on the VM. Is there any way to wait until the cloud-init script has completed before outputting values? Additional Infos: SELinux is disabled, ufw is inactive. Nothing special about my config. Hi! Check this out: https://github.com/hashicorp/packer/issues/2639. Adding this would make the codebase rely on unstable third-party changes, and would significantly impact the codebase. You can solve this in your workflow, e.g. with a remote-exec and looping. @kjenney are you satisfied with the answer more or less? To me we can close this because, realistically, you can use remote-exec and wait for it, or build another workflow around the core. On the terraform-libvirt side I see this as really far from a future implementation (it is not what I would prioritize as high, imho). Feel free to ask for any additional info. Closing; for additional questions feel free to join the gitter chat! Thx
2025-04-01T06:38:24.621234
2023-08-23T14:37:30
1863472564
{ "authors": [ "anthonypuppo", "dmitry-brazhenko" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5366", "repo": "dmitry-brazhenko/SharpToken", "url": "https://github.com/dmitry-brazhenko/SharpToken/pull/20" }
gharchive/pull-request
Add gpt-3.5-turbo-16k model to encoding mapping support Fixes #19 Thanks a lot for your contribution!
2025-04-01T06:38:24.629683
2015-11-10T08:23:52
116060117
{ "authors": [ "unityunreal", "winstywang" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5367", "repo": "dmlc/mxnet", "url": "https://github.com/dmlc/mxnet/issues/530" }
gharchive/issue
build error:threadediter.h:326:37: error: no matching function for call to ‘std::condition_variable::notify_all() const’ After execute make on terminal, such error happens: g++ -std=c++0x -c -DMSHADOW_FORCE_STREAM -Wall -O3 -I./mshadow/ -I/usr/local/opt/openblas/include -I./dmlc-core/include -fPIC -Iinclude -msse3 -funroll-loops -Wno-unused-parameter -Wno-unknown-pragmas -DMSHADOW_USE_CUDA=0 -DMSHADOW_USE_CBLAS=1 -DMSHADOW_USE_MKL=0 -DMSHADOW_RABIT_PS=0 -DMSHADOW_DIST_PS=0 -DMXNET_USE_OPENCV=1 `pkg-config --cflags opencv` -fopenmp -c src/io/io.cc -o build/io/io.o In file included from src/io/./iter_prefetcher.h:13:0, from src/io/io.cc:8: ./dmlc-core/include/dmlc/threadediter.h: In lambda function: ./dmlc-core/include/dmlc/threadediter.h:326:37: error: no matching function for call to ‘std::condition_variable::notify_all() const’ consumer_cond_.notify_all(); ^ ./dmlc-core/include/dmlc/threadediter.h:326:37: note: candidate is: In file included from ./dmlc-core/include/dmlc/threadediter.h:15:0, from src/io/./iter_prefetcher.h:13, from src/io/io.cc:8: /home/rescape/lib/include/c++/4.8.0/condition_variable:83:5: note: void std::condition_variable::notify_all() <near match> notify_all() noexcept; ^ /home/rescape/lib/include/c++/4.8.0/condition_variable:83:5: note: no known conversion for implicit ‘this’ parameter from ‘const std::condition_variable*’ to ‘std::condition_variable*’ In file included from src/io/./iter_prefetcher.h:13:0, from src/io/io.cc:8: ./dmlc-core/include/dmlc/threadediter.h:333:37: error: no matching function for call to ‘std::condition_variable::notify_all() const’ consumer_cond_.notify_all(); ^ ./dmlc-core/include/dmlc/threadediter.h:333:37: note: candidate is: In file included from ./dmlc-core/include/dmlc/threadediter.h:15:0, from src/io/./iter_prefetcher.h:13, from src/io/io.cc:8: /home/rescape/lib/include/c++/4.8.0/condition_variable:83:5: note: void std::condition_variable::notify_all() <near match> notify_all() noexcept; ^ /home/rescape/lib/include/c++/4.8.0/condition_variable:83:5: note: no known conversion for implicit ‘this’ parameter from ‘const std::condition_variable*’ to ‘std::condition_variable*’ In file included from src/io/./iter_prefetcher.h:13:0, from src/io/io.cc:8: ./dmlc-core/include/dmlc/threadediter.h:352:45: error: no matching function for call to ‘std::condition_variable::notify_all() const’ if (notify) consumer_cond_.notify_all(); Any suggestions of how to solve this? Thanks! Perhaps my gcc versions causes? my gcc version is gcc version 4.8.0 (GCC) A similar issue is reported in previous project with GCC 4.8.0: https://github.com/dmlc/cxxnet/issues/221 Could you please try to upgrade it? At least 4.8.4 is good to go. Yes, I upgrade gcc version and this message disappear, perhaps the minimum version of gcc of the requirement to build mxnet should be updated on the doc site. Happy to hear that. Thanks for the advice.
2025-04-01T06:38:24.636990
2017-03-05T18:05:18
211970648
{ "authors": [ "jmerkow", "piiswrong" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5368", "repo": "dmlc/mxnet", "url": "https://github.com/dmlc/mxnet/pull/5261" }
gharchive/pull-request
[WIP] Proposal: Extend imdecode to uint16 I am working with image data that does not fit well into uint8. Given that I am working with 100k-1M images, I need a data iterator that is fast, such as the ImageIter. I have been looking into extending the ImageIter to use other types besides uint8 for reading data. It appears that after the initial read, there is no problem with container type. I've tried writing a new iter in python (using python reads), and they are still slow. So I've turned my attention to extending mxnet's imdecode function to work with uint16 in addition to uint8. This PR is a sketch; if there is interest I will continue. There are a number of issues remaining, such as mshadow having no uint16 type (to replace mshadow::kUint8). I am open to comments on my approach. An alternative would be to allow I/O from CV mat files, which can store any format. How about adding a dtype parameter to allow uint8, int32, and float32? I don't think there would be much difference speed-wise between int16 and int32. Also since you are going to convert it to float later anyway, why not directly convert to float in imdecode? The idea behind uint16 is that it can be stored in TIFFs, which I know opencv can read; I am not sure about int32 and imdecode. In the end, what image formats should this thing read? We could make a new templated version of mxnet::io::Imdecode, then the original (and exposed) mxnet::io::Imdecode would contain a switch, which chooses the correct templated function based on the dtype param. So you are suggesting that we drop mshadow::kUint8 and replace it with mshadow::float16? I have not looked in depth at mxnet and mshadow, but I assume most processing is done on singles (float16). Yes, something like mx.io.imdecode(..., dtype='float32') should work. We don't need to make imdecode templated. We only need to do a conversion at the end with MSHADOW_DTYPE_SWITCH like the other ops. I went back and re-read the opencv docs on imencode and imdecode. opencv only supports uint8 and uint16 formats for all image encoding and decoding. If we wanted an alternative image io format, one that can use a larger variety of dtypes, I think we need to look outside opencv. NDArrays load very fast, but the files are quite large (unsure if there is a way to change this), and do not offer compression? Pillow supports a lot of formats, but it is python only, and I would want to test what formats+dtypes it can actually save/read. Given that imagerecio can be handled mostly in python now, python only might not be a problem (for the python platform). I would want the final result of all this to be fast I/O such as that offered by RecordIO. It doesn't matter what format cudnn supports. You can use opencv to decode into a temporary buffer of whatever type and convert it to the target output format mxnet supports. I'm saying that if you want to save/read dtypes that are not uint8 or uint16, you cannot use opencv for img encoding or decoding. Can one of the admins verify this patch? We only support images that opencv can read. Ok. Let's close this. The effort is not worth the one uncommon new format. I now use a simpler approach to pack images with other dtypes.
2025-04-01T06:38:24.639717
2017-01-03T07:45:55
198423879
{ "authors": [ "xlvector" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5369", "repo": "dmlc/ps-lite", "url": "https://github.com/dmlc/ps-lite/pull/69" }
gharchive/pull-request
solve mxnet docker build error by adding --no-same-owner to tar in make/deps.mk If I build ps-lite in the mxnet docker, I get the following error when tarring some deps in make/deps.mk: Cannot change ownership to uid xxxx, gid xxxx: Permission denied According to this post, https://www.krenger.ch/blog/linux-tar-cannot-change-ownership-to-permission-denied/ I fixed this and tested OK when building ps-lite in the mxnet docker build. I cannot understand the travis error...
2025-04-01T06:38:24.644244
2018-11-30T20:57:02
386347471
{ "authors": [ "ajtulloch", "tqchen" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5370", "repo": "dmlc/tvm", "url": "https://github.com/dmlc/tvm/pull/2208" }
gharchive/pull-request
[SCHEDULE] Fix code lowering when loop condition depends on outer axis. Fixes #2207 cc @ajtulloch Oh wow, thanks @tqchen.
2025-04-01T06:38:24.647198
2016-10-19T05:32:09
183869300
{ "authors": [ "formath", "yichenpan" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5371", "repo": "dmlc/xgboost", "url": "https://github.com/dmlc/xgboost/issues/1681" }
gharchive/issue
How to tune memory parameters when training large data set on yarn I have a large data set stored on hdfs where training data is about 800G and valid data is about 90G. The data format is libsvm and each instance has 308 features at most. When running training process via yarn, I always get the error INFO dmlc.ApplicationMaster: Diagnostics., num_tasks100, finished=20, failed=80 [DMLC] Task 26 killed because of exceeding allocated physical memory The job is submitted via ../dmlc-core/tracker/dmlc-submit \ --cluster yarn \ --num-workers 100 \ --num-servers 10 \ --worker-memory 20g \ --server-memory 20g \ --queue ${my_queue} \ --ship-libcxx /opt/gcc-4.8.2/lib64/ \ ../xgboost train.conf train.conf # General Parameters, see comment for each definition # choose the booster, can be gbtree or gblinear booster = gbtree # choose logistic regression loss function for binary classification objective = binary:logistic # Tree Booster Parameters # step size shrinkage eta = 1.0 # minimum loss reduction required to make a further partition gamma = 1.0 # minimum sum of instance weight(hessian) needed in a child min_child_weight = 1 # maximum depth of a tree max_depth = 4 # instance sampling subsample = 0.8 # feature sampling when splitting colsample_bylevel = 0.8 # Task Parameters # the number of round to do boosting num_round = 2 #0 means do not save any model except the final round model save_period = 0 # The path of training data data = "hdfs://.../train" # The path of validation data, used to monitor training process, here [test] sets name of the validation set eval[test] = "hdfs://.../valid" # evaluate on training data as well each round eval_train = 1 # The path of model out model_out = "hdfs://.../model/" Please help me tuning the memory parameters so the job can run successfully. @tqchen I have tested that xgboost can not handle such a large data. So do down sampling before training. @formath Hi, sorry for posting at this closed issue, but I have encountered the same problem as you had (You can check the details here) I'm wondering what's the largest data size you have tested on YARN version xgboost? Cause in my case, it cannot handle even 10GB data (I allocated about 40GB physical memory). Thank you!
2025-04-01T06:38:24.650604
2018-09-25T21:15:55
363769562
{ "authors": [ "dwy904", "hcho3" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5372", "repo": "dmlc/xgboost", "url": "https://github.com/dmlc/xgboost/issues/3722" }
gharchive/issue
how can I extract the feature weights of the gblinear booster I am trying to extract the weights of my input features from a gblinear booster. But it seems like it's impossible to do it in python. I am wondering if there's any way to extract them. @dwy904 Did you try using get_dump() or dump_model()? Here is what I got. model_ranke. get_dump() ['bias:\n9.55492e+08\nweight:\n0.288463\n0.151129\n0.00716773\n31.5678\n-1.86324\n0.160523\n2.6101\n-0.0675516\n-7.74334e-10\n0.240564\n0.518223\n-2.97623\n-1.05936\n1.29177\n-0.597921\n1755.24\n-3.80193\n-3.40322\n372.426\n0.0564985\n0.0504407\n0.772678\n1.65475\n1.73482\n0.0058949\n-0.0397106\n-22.6941\n-0.82854\n712.713\n-0.138361\n1.21099\n-0.597615\n-1.56131\n-2.138\n0.13256\n-0.0376063\n-2.89459\n-2.41698\n0.653052\n']``` One more question, is the sequence of the model_rank.get_dump() command exactly the same as the sequence of the command, model_rank.feature_names? except for the bias term. I think so. Thank you so much!
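Given the gblinear dump format shown above (a single string with a bias: section followed by a weight: section), a minimal parsing sketch — assuming a trained, single-output Booster named bst and that the weight order matches bst.feature_names, as discussed in the thread — could look like this:

```python
# Hedged sketch: parse a gblinear dump into named feature weights.
# Assumes a non-multiclass booster `bst` trained with booster='gblinear'.
dump = bst.get_dump()[0]                        # one string for gblinear
bias_text, weight_text = dump.split("weight:")
bias = float(bias_text.replace("bias:", "").strip())
weights = [float(w) for w in weight_text.split()]
named_weights = dict(zip(bst.feature_names, weights))
```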
2025-04-01T06:38:24.661201
2018-02-12T02:52:00
296254220
{ "authors": [ "codecov-io", "khotilov" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5373", "repo": "dmlc/xgboost", "url": "https://github.com/dmlc/xgboost/pull/3109" }
gharchive/pull-request
[core] fix slow predict-caching with many classes Addresses the issue of the O(num_class^2) prediction cache behavior #1689, #2926 where cache updates within CommitModel may start dominating with many classes. The reason was that the cache was updated for each single class group commit in a boosting round. E.g., with set.seed(1) n <- 1e4 num_feat <- 200 num_class <- 150 y <- apply(rmultinom(n, 1, rep(1, num_class)), 2, function(yy) which(yy != 0)) - 1 dtr <- xgb.DMatrix(matrix(rnorm(n*num_feat), n, num_feat), label = y) param <- list(objective='multi:softprob', num_class=num_class, debug_verbose=1, tree_method='hist', subsample=0.6) bst <- xgb.train(param, data=dtr, nrounds=1) rm(bst) gc() the timing result before (note that no watchlist was used, and using it would make caching even slower): [16:21:50] ======== Monitor: GBTree ======== [16:21:50] BoostNewTrees: 6.436368s [16:21:50] CommitModel: 10.707612s and after the fix: [16:22:51] ======== Monitor: GBTree ======== [16:22:51] BoostNewTrees: 6.497372s [16:22:51] CommitModel: 0.109006s Some minor changes: remove redundant 'if' in cpu_predictor get rid of compiler warnings make R on windows not to complain about configure script by providing an empty configure.win workaround for R v3.4.3 bug #3081 Codecov Report Merging #3109 into master will decrease coverage by <.01%. The diff coverage is 10.52%. @@ Coverage Diff @@ ## master #3109 +/- ## ============================================ - Coverage 43.79% 43.79% -0.01% Complexity 228 228 ============================================ Files 159 159 Lines 12507 12507 Branches 466 466 ============================================ - Hits 5478 5477 -1 - Misses 6837 6838 +1 Partials 192 192 Impacted Files Coverage Δ Complexity Δ src/gbm/gbtree.cc 17.95% <0%> (-0.1%) 0 <0> (ø) src/objective/regression_obj.cc 84% <100%> (ø) 0 <0> (ø) :arrow_down: src/predictor/cpu_predictor.cc 68.71% <50%> (+0.19%) 0 <0> (ø) :arrow_down: Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update 375d753...c414b00. Read the comment docs.
2025-04-01T06:38:24.699402
2017-11-15T09:59:21
274097084
{ "authors": [ "amaltaro", "bbockelm", "hufnagel", "thongonary", "vlimant" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5374", "repo": "dmwm/WMCore", "url": "https://github.com/dmwm/WMCore/issues/8331" }
gharchive/issue
Improve resource requirements for utilitarian jobs For cleanup, logcollect and merge jobs. By default, they use 1 core, request 1GB of RAM and have a MaxRSS watchdog set to ~2.3GB. We should check ES data and maybe lower these requirements for a better usage of the resources. Might not be a very good idea... I've just found a merge job for the TaskChain_Relval_Multicore template that had a performance failure: PerformanceError PerformanceKill (Exit Code: 50660) Error in CMSSW step cmsRun1 Number of Cores: None Job has exceeded maxRSS: 2355.2 Job has RSS: 2425 Weird. Merge jobs should all use fast-copy of baskets which is fast and should use little memory. Might be worthwhile to get a log of that job and figure out what went wrong... Are you volunteering yourself to look at it? :) #8451 maybe a duplicate You mean the other way around :) from #8451 make sure you update also what goes in htcondor when you rework this So... how straightforward it is to increase the threshold to some higher value, say, 4GB? You don't want to do this for all such jobs. Requesting 4GB for standard merge, cleanup, logcollect etc jobs means you have less resources that can run them (you wait longer to run them and can run less of them) and you leave less resources available for other jobs. If special types of utility jobs (i.e. NANOAOD merges that aren't really standard merges) need more memory, we should request more memory just for these special types of jobs. Cleanup and LogCollect could probably be reduced though. If special types of utility jobs (i.e. NANOAOD merges that aren't really standard merges) need more memory, we should request more memory just for these special types of jobs. Thanks! That's what we want. For the record, several of the Task getters/setters methods don't touch "utilitarian" jobs. Right now we cannot change resource requirements for such jobs and if we want to support updates to those tasks too, that's going to be tricky and likely ugly for the assigner/unified side (the only way I see memory updates working without causing issues in other tasks would be specificifying every single tasks and its Memory requirement). Hi, Note that the NanoAOD merge issues are really a ROOT bug -- and affect how well these files can be effectively read by users. See: https://github.com/cms-sw/cmssw/pull/22445 For the other merge jobs - are we really seeing memory limits, or are we simply snapshotting cmsRun when it forks? The watchdog should be using PSS, not RSS, in the end. Brian
2025-04-01T06:38:24.730667
2020-04-12T01:26:53
598386409
{ "authors": [ "Yuan-Meng", "ashwinidathatri", "terhorst" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5376", "repo": "dnanhkhoa/nb_black", "url": "https://github.com/dnanhkhoa/nb_black/issues/22" }
gharchive/issue
ERROR:root:__call__() missing 1 required positional argument: 'value' When I ran %reload_ext lab_black in my JupyterLab, I got an error message: ERROR:root:__call__() missing 1 required positional argument: 'value' Traceback (most recent call last): File "/opt/anaconda3/lib/python3.7/site-packages/lab_black.py", line 218, in format_cell formatted_code = _format_code(cell) File "/opt/anaconda3/lib/python3.7/site-packages/lab_black.py", line 29, in _format_code return format_str(src_contents=code, mode=FileMode()) TypeError: __call__() missing 1 required positional argument: 'value' Here's line 29: Here's line 218: Which function misses what argument? Also, I just updated JupyterLab to 2.1.0, so I don't know if there are weird incompatibility issues. Did anyone encounter this? Thanks!! I have been receiving the same error as well. I doubt it has anything to do with version compatibility. Anyway, currently I am using JupyterLab version 1.2.16. Any suggestions on this would be helpful. This is caused by having an old version of black. pip install -U black fixed it for me.
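The failing call is black's format_str, whose signature changed across black releases; a quick way to confirm that the installed black behaves the way lab_black expects (a sketch, assuming a reasonably recent black version) is:

```python
# Hedged sketch: verify that black's format_str accepts a FileMode instance,
# which is what lab_black calls internally.
import black

src = "x=[1,2 ,3]"
print(black.format_str(src, mode=black.FileMode()))  # -> x = [1, 2, 3]
```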
2025-04-01T06:38:24.738081
2018-06-11T09:51:36
331119221
{ "authors": [ "tpluscode" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5377", "repo": "dnnsoftware/Dnn.Platform", "url": "https://github.com/dnnsoftware/Dnn.Platform/issues/2108" }
gharchive/issue
"See More Results" when searching produces an error when searching in Firefox Description While on Firefox, if the user tries to search and use "See More Results" at the bottom of the search results, it produces an error and does not open the results page. Expected result Jump to search result page. This works correctly on google chrome. Current result Error in console: TypeError: access to strict mode caller function is censored Affected version [x] 9.2 Fixed by #2039 Please close @dnnsoftware/tag is you have no further comment
2025-04-01T06:38:24.743740
2018-06-12T15:03:09
331628703
{ "authors": [ "Tychodewaard", "nickcrisp", "tpluscode", "valadas" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5378", "repo": "dnnsoftware/Dnn.Platform", "url": "https://github.com/dnnsoftware/Dnn.Platform/issues/2115" }
gharchive/issue
Cannot create new user with email address as username Description Going to Persona Bar > Security > Member Accounts tab > Registration Settings, and marking Use Email Address as Username, will generate an error upon login, displaying message "A critical error has occurred. Please check the Event Viewer for further details." Steps to reproduce Set up a clean DNN 9.2 install. Go to Persona Bar > Manage > Users. Click the "Add User" button. Fill details for new user, setting an email address as the username. Click "Save" button. An error message is displayed saying: "The username specified is invalid. Please specify a valid username." Current result After upgrading to DNN 9.2, customer is no longer able to add user with their email addresses as username. The following Error Message is displayed: "The username specified is invalid. Please specify a valid username." Expected result Being able to add email address as a valid user name in the default user creation form. Affected version [x] 9.2 This has been fixed in PR #2070 and has been caused by #1972 There is also a companion PR dnnsoftware/Dnn.AdminExperience.Extensions#537 which applies the same fix in Persona Bar and removes some code duplication Still occurs in 9.3.2 @nickcrisp I cannot reproduce in 9.3.2 using the above steps to reproduce... I can not reproduce either. @valadas Close? Actually, this one IS closed :) @nickcrisp can you please open a new issue with new steps to reproduce if you can still make it happen on 9.3.2 Thanks.
2025-04-01T06:38:24.762061
2017-03-05T10:43:16
211944694
{ "authors": [ "JeremiahHsu" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5379", "repo": "dnschneid/crouton", "url": "https://github.com/dnschneid/crouton/issues/3123" }
gharchive/issue
Failed to acquire CRAS Please paste the output of the following command here: sudo edit-chroot -all chronos@localhost / $ sudo edit-chroot -all name: precise encrypted: no Entering /mnt/stateful_partition/crouton/chroots/precise... crouton: version 1-20170228132702~master:21cb695b release: precise architecture: amd64 targets: kde host: version 9000.91.0 (Official Build) stable-channel swanky kernel: Linux localhost 3.10.18 #1 SMP Wed Feb 22 23:32:43 PST 2017 x86_64 x86_64 x86_64 GNU/Linux freon: yes Unmounting /mnt/stateful_partition/crouton/chroots/precise... If known, describe the steps to reproduce the issue: I was longing to install Ubuntu on my Toshiba CB35, and after a really long list of something, the system started to acquire CRAS. "Connecting to chromium.googlesource.com (chromium.googlesource.com)|2404:6800:4008:c06::52|:443... failed: Network is unreachable." and for an unknown reason, the process was terminated and all the effort seemed gone. I typed sudo startkde as a hail Mary. Unlike previous attempts, the system came back with "the chroot may not be fully set up, would you like to finish it?"; it was like a ray of hope that broke the darkness. With joy and excitement I typed yes. Yet the system still cannot acquire CRAS persistently. I copied the link and found it's accessible in Chrome, and the file has been successfully downloaded. Anyhow, I tried to boot the system again but this time it came back with "UID 1000 not found in precise", as my last ray of hope faded away. I have established a proxy link in System Settings due to the censorship in mainland China. However it doesn't solve my issue. Thank you for reading and I hope God be with you. I found a modification that requires a mandatory SSL link in order to ensure the proxy is involved on Chinese websites. I tried with that and it came back with "Proxy tunneling failed: Proxy Authentication Required Unable to establish SSL connection." I have deployed Shadowsocks on another computer and used it as a proxy server via LAN; it works fine until now. On other websites no security issue was discovered. That computer uses AES-256 to establish a link with some server (you guys call it VPS, right?) I just missed the -P option for assigning a proxy. Sorry for being a noob. Have a nice day guys!
2025-04-01T06:38:24.768984
2019-05-26T14:56:36
448575376
{ "authors": [ "brizzbane", "rmartin16" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5380", "repo": "dnschneid/crouton", "url": "https://github.com/dnschneid/crouton/issues/4069" }
gharchive/issue
Is there a way to make a crouton chroot use Chrome OS VPN (that is setup by an Android Play Store app)? name: stretch encrypted: no Entering /mnt/stateful_partition/crouton/chroots/stretch... crouton: version 1-20190403182822~master:174af0eb release: stretch architecture: amd64 xmethod: xorg targets: xorg,gnome,xiwi,keyboard,core,extension host: version 12105.42.0 (Official Build) beta-channel cave kernel: Linux localhost 3.18.0-19339-g3e4e496860da #1 SMP PREEMPT Fri May 17 02:20:05 PDT 2019 x86_64 GNU/Linux freon: yes Please describe your issue: I have a VPN running on Chrome OS that is setup by an Android Play Store app, https://play.google.com/store/apps/details?id=com.fast.free.unblock.thunder.vpn&hl=en_US, as an example. The chroot does not use this VPN connection. (A curl request inside the chroot to http://api.ipify.org, shows that it is using the original, non-VPN IP address). If known, describe the steps to reproduce the issue: I have seen https://github.com/dnschneid/crouton/wiki/VPNC. I do not have the 'settings' for the setup/config of vpnc.conf, and ideally would like to be able to use any store VPN app. In the wiki I read: some VPNs requires client features and configurations that are not available. For those networks, it is possible to establish a VPN connection from a chroot that is usable from both within the chroot and Chromium OS. Which made me hopeful when I first started reading it. However, at configuration, it asks for settings, which I do not have access to since it is an android app. If it is not possible, I get it. If it is possible, or there is some way that it could be possible, if anyone could point me in the right direction, or offer any help at all, it would be much appreciated! I have read, and seen the solutions (on Reddit) for using OpenVPN or Private Internet Access VPN (You would have knowledge of actual settings to use to be able to setup). I am wanting/trying to make this work with any Play Store VPN that sets up a VPN in ChromeOS. I was actually kind of surprised when I first checked the IP that it was NOT going through the VPN. I figured that "that" would be a problem people may have been trying to figure out (how to get chroot to NOT go through ChromeOS VPN). Anyways... Thank you. I think this is much more a function of the VPN application creating the tunnel. Some VPN applications seem to configure the tunnel such that everything goes through it...others not...it may also be a function of chromeos version or even hardware. Google searching suggests this is certainly a varying topic. For me anyway, running the TorGuard VPN android app on my samsung pro routes Android, ChromeOS, and crouton traffic through the VPN. also...this might help...mine's been set to "default" for a while now though... within chrome://flags, there's Enable ARC VPN integration. ("Allow Android VPN clients to tunnel Chrome traffic. – Chrome OS")
2025-04-01T06:38:24.780100
2015-01-12T11:32:17
54043771
{ "authors": [ "cribalik", "dnschneid", "drinkcat", "stsquad", "tedm" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5381", "repo": "dnschneid/crouton", "url": "https://github.com/dnschneid/crouton/pull/1339" }
gharchive/pull-request
Kiwi: Send keycodes instead of keysyms This is a partial fix for #1275 (but still much better than current situation, so it can be merged as-is). The protocol now sends X11 key codes instead of key symbols. Keycodes (mostly) correspond to physical keyboard keys, so that the chroot can translate it, depending on the keyboard layout set in crouton. For that purpose, I had to update protocol version from VF1 to VF2. The extension is backward compatible, though. What works: Setting US keyboard in Chromium OS (or any keyboard layout without dead keys), any keyboard layout in crouton should work fine. Extension can connect to old chroot (VF1), in that case keysyms are sent. What still does not work: Setting a keyboard layout in Chromium OS with dead keys (e.g. US Intl) leads to lost key events (I think it's fixable, as the key code appears when the key is lifted). Basic framework for reverting Search+key mapping is there, if the user wants Search+Left to mean Super+Left and not Home (see #1324). Not implemented yet (we need another configuration check box). Other things to be implemented in other PR: Warn the user that it is connecting to an old chroot. Eventually, drop support for VF1 (this requires extensive rework of error handling in kiwi) What I don't think can be fixed: Switching Ctrl/Search/Alt in Chromium OS also results in switched positions in crouton. Search+numbers appears the same as Search+fn keys (F1-F10). I don't think we can distinguish between the 2, to support Super+numbers (see #1324). (untested on freon, my peppy refuses to switch to dev channel...) This works well on Freon. I'm inclined to merge it as-is, since my comments are only nits. I was having trouble testing this with the current extension. Does the new extension get rolled out automatically? @stsquad : The extension auto-updates, when no chroot is running (version in chrome://extensions should now be 2.1.0). You also need to update your chroot. @drinkcat \o/ with the updated extension I just tested master and I have ~# and `¬ working ;-) \o/ with the updated extension I just tested master and I have ~# and `¬ working ;-) Good to hear! See https://code.google.com/p/chromium/issues/detail?id=425156#c16. Both "What still does not work", and "What I don't think can be fixed" are magically fixed with freon! Now there is still a slight bug as "OSLeft"/Super_L event are sometimes discarded. Happy not to have to implement another horrible hack... Awesome! I think freon will be generally a good thing for crouton. Just out of curiosity, is there any reason to still keep the fix in this commit when using Freon? It seems to be the culprit of #1558. @cribalik can you confirm that #1275 doesn't re-emerge if you revert this? @dnschneid I confirmed it with Swedish, Dvorak US and Spanish keyboards in Chrome; the keys recieved from X11 were the same regardless of the layout in Chrome. Switching around the Alt/Search/Ctrl keys also did not have any impact on X11 keycodes. Search + LeftArrow is received as OSLeft + LeftArrow, and not OSLeft + Home (which is what the fix was for I believe). This seems to be Freon's doing that suddenly the correct keycodes are sent. Do we have to keep backwards compatability with non-Freon users? If not, I propose reverting the aforementioned fix. Many users are still on non-freon systems, so a straight-out revert is not an option (yet). 
We'll need to either have two code paths based on whether Freon is present or not (which probably means having the chroot tell the extension one way or the other), or wait until Freon is in stable on all devices to revert. My device (2012 Samsung ARM / stable channel) isn't on Freon yet, but is very stable with precise, and I wouldn't mind losing the GUI for awhile. I've got a blog platform underneath that is very stable (open ssh, mysql, apache2, php5, myphpadmin, wordpress, etc.). I'm willing to move to trusty at some point, but precise has been extremely stable the past few months on the ARM. I mostly run the chroot with enter-chroot and utilize the server processes.
2025-04-01T06:38:24.789156
2020-07-18T20:22:33
660359980
{ "authors": [ "doamatto" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5383", "repo": "doamatto/southnode", "url": "https://github.com/doamatto/southnode/issues/1" }
gharchive/issue
Better error handling With better error handling, I can help people resolve issues easily. Needs to keep aligned with the privacy statement on the README. Added in recent slew of commits → https://github.com/doamatto/southnode/compare/1f4703cf154d...90e23efb57dd
2025-04-01T06:38:24.791031
2019-12-05T00:01:29
533022084
{ "authors": [ "cricket007", "doanduyhai" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5384", "repo": "doanduyhai/Achilles", "url": "https://github.com/doanduyhai/Achilles/issues/366" }
gharchive/issue
[feature] Java 11 Builds Any plans to add Java 11 build tests to Travis? Updating the Travis build file is easy. Making Achilles work for Java 11 is another story: a massive update of all maven plugins because of the new module system introduced in Java 9, and maybe some internal JDK classes used by Achilles to circumvent some defects of APT are no longer available/have been moved. You are welcome to contribute to this migration. I found the Cassandra JIRA that Java 11 would be targeted for 4.0, so I assume changes here would be dependent on that
2025-04-01T06:38:24.811364
2016-01-29T08:20:34
129699481
{ "authors": [ "carletes", "oyvindio" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5385", "repo": "docker-library/httpd", "url": "https://github.com/docker-library/httpd/pull/13" }
gharchive/pull-request
Build mod_proxy_*.so by default I think it would be useful if mod_proxy and friends were available by default. +1 (I asked for this some time ago in #6, for Apache 2.2)
2025-04-01T06:38:24.822525
2024-06-29T14:07:32
2381771794
{ "authors": [ "cfis", "johnstarxx" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5386", "repo": "docker-mailserver/docker-mailserver-helm", "url": "https://github.com/docker-mailserver/docker-mailserver-helm/pull/125" }
gharchive/pull-request
Added initContainers Added initContainers (with extraVolumes and extraVolumeMounts fields). Also closes: #114. These fields are needed if, for example, one uses a custom CA authority (Step Certificates) and needs a custom certificate stored in /etc/ssl/certs/. Using an init container, one can mount the certificates into the given folder and run the update-ca-certificates command. For extra volumes, could we just use the already existing persistence key? Should we update the persistence key to work for this use case? I'm still not loving the idea of having two ways of defining volumes. Also, would it be worth splitting this into two MRs - one for init containers and then a separate one for the updated volume support? Yes, I think it might work. The difference between the two is that in PR #117 the extraVolume(s/Mounts) are in .Values, whereas in this PR they are in .Values.deployment like in other charts (e.g. argo or cert-manager). There are also charts that use them directly from .Values (e.g. prometheus-adapter) if they know that there will be only one deployment. I see. Thanks!
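For illustration, a values.yaml sketch of the custom-CA use case described above, using the deployment.initContainers / extraVolumes / extraVolumeMounts fields added here — the image, secret name and paths are assumptions, not part of this PR:

deployment:
  extraVolumes:
    - name: custom-ca                  # the CA certificate, e.g. from a pre-created Secret
      secret:
        secretName: my-root-ca
    - name: ca-bundle                  # shared volume that receives the regenerated bundle
      emptyDir: {}
  extraVolumeMounts:
    - name: ca-bundle
      mountPath: /etc/ssl/certs
      readOnly: true
  initContainers:
    - name: add-custom-ca
      image: ghcr.io/docker-mailserver/docker-mailserver:latest   # any image that ships update-ca-certificates
      command:
        - sh
        - -c
        - >
          cp /tmp/ca/my-root-ca.crt /usr/local/share/ca-certificates/ &&
          update-ca-certificates &&
          cp -rL /etc/ssl/certs/. /bundle/
      volumeMounts:
        - name: custom-ca
          mountPath: /tmp/ca
          readOnly: true
        - name: ca-bundle
          mountPath: /bundle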
2025-04-01T06:38:24.898748
2020-08-12T15:24:13
677781486
{ "authors": [ "craig-osterhout", "datamattsson" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5387", "repo": "docker/docker.github.io", "url": "https://github.com/docker/docker.github.io/issues/11239" }
gharchive/issue
Nimble has moved File: engine/extend/legacy_plugins.md Can we please link the Nimble DVP to https://scod.hpedev.io/docker_volume_plugins/hpe_nimble_storage/index.html Created a new issue in the upstream repo that contains this file. Closing this issue in favor of the new one. https://github.com/docker/cli/issues/3589
2025-04-01T06:38:24.904276
2015-07-24T12:24:43
97041354
{ "authors": [ "sanmai-NL", "sysopcorner", "unclejack" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5388", "repo": "docker/docker", "url": "https://github.com/docker/docker/issues/14946" }
gharchive/issue
Docker daemon unresponsive and gigabytes of memory when full BGP table added to system (650k routes) Description of problem: We have a server which is connected to a full BGP feed. That means the system has 650k routes in its route table, which in fact is quite normal. After starting the docker daemon it allocates gigabytes of memory and never stops. All docker commands (e.g. "docker ps") hang. After removing the routes everything goes back to normal. Can be reproduced on virtual servers too. How reproducible: Insert thousands of routes into the routing table and restart the docker daemon.
Steps to Reproduce:
add a dummy interface: ip link add name dummy0 type dummy && ip link set up dummy0
add multiple routes into the system (it takes a few minutes): for a in {2..10}; do for b in {1..253}; do for c in {1..253}; do ip route add 1.$a.$b.$c/32 dev dummy0; done ; done; done
start the docker daemon: docker -D -d
DEBU[0000] Registering OPTIONS,
DEBU[0000] Registering GET, /events
DEBU[0000] Registering GET, /images/json
DEBU[0000] Registering GET, /containers/json
DEBU[0000] Registering GET, /containers/{name:.*}/export
DEBU[0000] Registering GET, /containers/{name:.*}/logs
DEBU[0000] Registering GET, /containers/{name:.*}/stats
DEBU[0000] Registering GET, /containers/{name:.*}/attach/ws
DEBU[0000] Registering GET, /info
DEBU[0000] Registering GET, /version
DEBU[0000] Registering GET, /images/search
DEBU[0000] Registering GET, /images/get
DEBU[0000] Registering GET, /images/{name:.*}/json
DEBU[0000] Registering GET, /containers/{name:.*}/top
DEBU[0000] Registering GET, /containers/ps
DEBU[0000] Registering GET, /containers/{name:.*}/changes
DEBU[0000] Registering GET, /exec/{id:.*}/json
DEBU[0000] Registering GET, /_ping
DEBU[0000] Registering GET, /images/{name:.*}/get
DEBU[0000] Registering GET, /images/{name:.*}/history
DEBU[0000] Registering GET, /containers/{name:.*}/json
DEBU[0000] Registering POST, /containers/{name:.*}/start
DEBU[0000] Registering POST, /containers/{name:.*}/stop
DEBU[0000] Registering POST, /exec/{name:.*}/start
DEBU[0000] Registering POST, /images/create
DEBU[0000] Registering POST, /images/load
DEBU[0000] Registering POST, /containers/{name:.*}/kill
DEBU[0000] Registering POST, /containers/{name:.*}/unpause
DEBU[0000] Registering POST, /exec/{name:.*}/resize
DEBU[0000] Registering POST, /containers/{name:.*}/rename
DEBU[0000] Registering POST, /auth
DEBU[0000] Registering POST, /containers/create
DEBU[0000] Registering POST, /containers/{name:.*}/wait
DEBU[0000] Registering POST, /containers/{name:.*}/resize
DEBU[0000] Registering POST, /containers/{name:.*}/exec
DEBU[0000] Registering POST, /build
DEBU[0000] Registering POST, /images/{name:.*}/tag
DEBU[0000] Registering POST, /containers/{name:.*}/restart
DEBU[0000] Registering POST, /containers/{name:.*}/attach
DEBU[0000] Registering POST, /commit
DEBU[0000] Registering POST, /images/{name:.*}/push
DEBU[0000] Registering POST, /containers/{name:.*}/pause
DEBU[0000] Registering POST, /containers/{name:.*}/copy
DEBU[0000] Registering DELETE, /containers/{name:.*}
DEBU[0000] Registering DELETE, /images/{name:.*}
DEBU[0000] docker group found. gid: 999
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
Actual Results:
ip route show | wc -l
576088
ps aux --sort=-vsz,-rss | grep docker
root 17145 100 8.8 11867512 11623548 pts/2 Sl+ 14:17 1:21 docker -D -d
Expected Results: The docker daemon can start and be responsive with a full BGP route table. I'll look into this because I've investigated some related code with similar problems in the past.
ping
2025-04-01T06:38:24.912821
2015-12-09T17:33:36
121298978
{ "authors": [ "GordonTheTurtle", "marius311", "sam-thibault", "teetrinkers", "thaJeztah" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5389", "repo": "docker/docker", "url": "https://github.com/docker/docker/issues/18542" }
gharchive/issue
Can ping but not connect to container running on the same network If I create a network and put a mysql server on it: $ docker network create net $ docker run -itd --net=net --name mysql -e MYSQL_ROOT_PASSWORD=password mysql:5.6.25 I can ping this container, $ docker run --net=net mysql:5.6.25 ping mysql PING mysql (<IP_ADDRESS>): 48 data bytes 56 bytes from <IP_ADDRESS>: icmp_seq=0 ttl=64 time=0.144 ms ... but not connect to the server, $ docker run --net=net mysql:5.6.25 mysql -h mysql --password=password #hangs indefinitely... The logs show the server is running, and in fact I can exec into the mysql container and connect to the server from its own container, so that's all fine. An important point is I've got two machines with identical versions of Docker (docker version and docker info are identical) but this happens on only one of them, so it must be some other settings on this machine interacting with Docker. Here's the info for the one where the problem happens: Info Linux darkenergy 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux Client: Version: 1.9.1 API version: 1.21 Go version: go1.4.2 Git commit: a34a1d5 Built: Fri Nov 20 13:12:04 UTC 2015 OS/Arch: linux/amd64 Server: Version: 1.9.1 API version: 1.21 Go version: go1.4.2 Git commit: a34a1d5 Built: Fri Nov 20 13:12:04 UTC 2015 OS/Arch: linux/amd64 Containers: 7 Images: 316 Server Version: 1.9.1 Storage Driver: aufs Root Dir: /home/boincadm/docker/aufs Backing Filesystem: extfs Dirs: 334 Dirperm1 Supported: true Execution Driver: native-0.2 Logging Driver: json-file Kernel Version: 3.19.0-31-generic Operating System: Ubuntu 14.04.3 LTS CPUs: 32 Total Memory: 251.9 GiB Name: darkenergy ID: HFOU:WMAG:6J3D:JLDB:LXBS:62C4:A7PY:RL5A:CP3V:ARO3:BGVC:DNXX WARNING: No swap limit support Hi! Please read this important information about creating issues. If you are reporting a new issue, make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead. If you suspect your issue is a bug, please edit your issue description to include the BUG REPORT INFORMATION shown below. If you fail to provide this information within 7 days, we cannot debug your issue and will close it. We will, however, reopen it if you later provide the information. This is an automated, informational response. Thank you. For more information about reporting issues, see https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues BUG REPORT INFORMATION Use the commands below to provide key information from your environment: docker version: docker info: uname -a: Provide additional environment details (AWS, VirtualBox, physical, etc.): List the steps to reproduce the issue: 1. 2. 3. Describe the results you received: Describe the results you expected: Provide additional info you think is important: ----------END REPORT --------- #ENEEDMOREINFO I've seen this behaviour when ufw was running. @marius311 can you check if the problem disappears if ufw is disabled? This is an old issue. I will close it as stale.
2025-04-01T06:38:24.922950
2017-02-16T18:48:27
208211171
{ "authors": [ "BastienAr", "cpuguy83", "thaJeztah", "zh99998" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5390", "repo": "docker/docker", "url": "https://github.com/docker/docker/issues/31095" }
gharchive/issue
Docker 1.13 overwrite/remove named volume when stack deploy/update Currently there is no option when updating a stack (docker stack deploy) to overwrite a named volume created by this stack. In the same way, there is no option (such as -v) to remove the named volumes associated with a stack when you remove the stack, other than removing the volumes with docker volume rm. Here is a use case. Given this file:
version: '3'
services:
  config:
    image: test/config-hoster:1.3
    volumes:
      - volume_config:/datas
    deploy:
      mode: global
      restart_policy:
        condition: none
  nginx:
    image: nginx
    volumes:
      - volume_config:/datas:ro
    depends_on:
      - config
volumes:
  volume_config:
where config-hoster is a from-scratch image exposing a volume with the config files in it, and nginx is a service used just for testing the correct mapping. When I first deploy this stack, everything is fine: the volume is created and filled with the config files of the config service. But if I change the version (tag) of the config service, when I update the stack the volume is not deleted, so the data inside is obviously not overwritten. Same behavior if I rm and redeploy the stack. Since this is a named volume, not directly bound to a host directory, and not created externally, I see no problem with erasing or overwriting this volume when the stack is removed or updated. Hmm, this would actually be a pretty big problem if implemented. The reason it's doing this is the volume exists already and is populated with data when you update the config. I think the right thing here is to actually support config objects (like we do secrets). I've just checked the secret objects in docker. This is great, and actually solves a problem we were facing with ssh keys. Indeed, supporting config objects in the same way would perfectly match the previous use case. I'm not sure I understand why and what would be the problem of deleting volumes created inside a stack, if you're sure of what you're doing (hence the -v option). Could you clarify this for me please? @BastienAr Volumes are meant for persistent storage. If you are telling a service to use a named volume, this is making a statement on the retention of this volume. For instance if you use an anonymous volume (don't specify a name), this would be cleaned up. I'm using NFS as volume storage backend and.... I can't find any way to change its options except stop containers, remove volumes and deploy again. @cpuguy83 Ok, I get the philosophy. So we definitely need a way to start a service requiring a file that could be modified (like config files) on any swarm node. Currently if you have to change configuration, either you rebuild the image with the correct config (dirty) or you bind-mount a volume and become host-dependent. NFS is also a solution, but I find it cumbersome for this purpose. @BastienAr I feel like volumes are not a good fit for injecting configuration, even though it may be the only way to do it right now. One option that is currently available is to inject the config as a secret. It's a little hacky but works well in the short term. Yeah, that could be a trick, but it's still inconvenient due to the fixed path of secrets (/run/secret/<secret_name>). And the encryption of the file could add an unnecessary process (config files rarely need encryption). I think that overhead can be neglected; the encryption/decryption is part of the raft store, and should not cause a noticeable performance issue. W.r.t.
the paths (defaulting to /run/secret/<name>), the <name> part can be overridden when using the secret (--secret src=my-little-secret,target=config.json), but the path inside the container currently cannot be (we kept this possibility open for a future enhancement, but it needs additional discussion). You can create a symlink inside the container if needed. Okay, this solution is quite simple and operational. Still, I think this could be cleaner if shared files that are not secrets were separated from the term secret, even if under the hood it is the exact same process. Btw, is there any chance we could talk about this subject at the next DockerCon? Some guys from our company (including me) will be there. Resolved in 17.06 by the introduction of Docker configs. I'll close the issue.
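For anyone finding this later, a minimal sketch of the configs approach that resolved this (compose file format 3.3+; service and config names are illustrative):
version: "3.3"
services:
  nginx:
    image: nginx
    configs:
      - source: app_config
        target: /etc/app/config.json   # configs may be mounted at an arbitrary path, unlike secrets
configs:
  app_config:
    file: ./config.json                # configs are immutable; changing content means deploying a new config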
2025-04-01T06:38:24.925325
2015-02-02T14:40:37
56244257
{ "authors": [ "SvenDowideit", "chenhanxiao" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5391", "repo": "docker/docker", "url": "https://github.com/docker/docker/pull/10509" }
gharchive/pull-request
docs: change events --since to fit RFC3339Nano PR6931 changed the time format to RFC3339Nano, but the example in cli.md was not changed. Signed-off-by: Chen Hanxiao<EMAIL_ADDRESS> @tiborvass LGTM - @fredlf @jamtur01
2025-04-01T06:38:24.927630
2015-03-07T12:48:10
60205032
{ "authors": [ "coolljt0725", "dmp42", "duglin", "moxiegirl" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5392", "repo": "docker/docker", "url": "https://github.com/docker/docker/pull/11227" }
gharchive/pull-request
Fix docker start help message Signed-off-by: Lei Jitang<EMAIL_ADDRESS> Docker start can start multiple containers, but the help message shows "Restart a stopped container", which is not correct. ping @estesp @jfrazelle LGTM ping LGTM but would like a doc maintainer to make sure we didn't miss any docs. ping @moxiegirl @SvenDowideit @fredlf LGTM
2025-04-01T06:38:24.929233
2015-07-22T22:20:11
96679101
{ "authors": [ "icecrime", "jfrazelle", "vdemeester" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5393", "repo": "docker/docker", "url": "https://github.com/docker/docker/pull/14878" }
gharchive/pull-request
Enable validate-lint as part of CI Yes, I'm ashamed. Yes, I hope it passes. Don't judge me. lol LGTM, if it passes Also had to fix pkg/chrootarchive/diff_windows.go because of #14862. :stuck_out_tongue_closed_eyes:
2025-04-01T06:38:24.930266
2016-01-20T22:53:25
127803605
{ "authors": [ "LK4D4" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5394", "repo": "docker/docker", "url": "https://github.com/docker/docker/pull/19520" }
gharchive/pull-request
Use sync.Pool for io.Copy buffers A small ioutils.Copy function uses buffers from a sync.Pool instead of allocating them on each io.Copy. The buffer size is chosen to match the default size in io.Copy. I used it only in the overlay copy function because it was a major memory eater. I think I'll wait with this change and will use bufio.Reader instead.
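For context, the idea in this PR — reusing io.Copy-sized buffers through a sync.Pool — looks roughly like this (a sketch, not the actual moby code):

package ioutils

import (
	"io"
	"sync"
)

// 32 KiB matches the default buffer size io.Copy allocates internally.
var copyPool = sync.Pool{
	New: func() interface{} { return make([]byte, 32*1024) },
}

// Copy behaves like io.Copy but reuses buffers from a pool instead of
// allocating a fresh slice for every call.
func Copy(dst io.Writer, src io.Reader) (int64, error) {
	buf := copyPool.Get().([]byte)
	defer copyPool.Put(buf)
	return io.CopyBuffer(dst, src, buf)
}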
2025-04-01T06:38:25.022170
2017-07-03T05:21:10
240069691
{ "authors": [ "AkihiroSuda", "ishidawataru" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5395", "repo": "docker/go-connections", "url": "https://github.com/docker/go-connections/pull/41" }
gharchive/pull-request
Support parsing SCTP port mapping please see https://github.com/moby/moby/pull/33922 Signed-off-by: Wataru Ishida<EMAIL_ADDRESS> CI failure after rebase is unrelated opened https://github.com/docker/go-connections/pull/47 rebased
2025-04-01T06:38:25.036896
2016-02-24T05:08:28
135957299
{ "authors": [ "FrenchBen", "arvind114", "nathanleclaire" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5396", "repo": "docker/kitematic", "url": "https://github.com/docker/kitematic/issues/1494" }
gharchive/issue
Error installing tensorflow - Mac OSX 10.11.3 Error while pulling image: Get https://index.docker.io/v1/repositories/drunkar/anaconda-tensorflow-gpu/images: dial tcp: lookup index.docker.io on <IP_ADDRESS>:53: read udp <IP_ADDRESS>:40707-><IP_ADDRESS>:53: read: connection refused Downloading has been active for > 1hr. No network issue, but the download completion status is not visible. When you get an error like this, does docker-machine restart fix it? Where is your DNS resolver pointing? E.g. is it going through a proxy? The restart should fix this; feel free to add more comments here if this doesn't solve it.
2025-04-01T06:38:25.039987
2016-12-18T20:08:36
196296321
{ "authors": [ "FrenchBen", "praveenprabharavindran" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5397", "repo": "docker/kitematic", "url": "https://github.com/docker/kitematic/issues/2211" }
gharchive/issue
Unable to pull docker pull microsoft/aspnet:1.0.0-rc1-update1-core Expected behavior The image "microsoft/aspnet:1.0.0-rc1-update1-core" gets installed correctly Actual behavior I am getting the following error: Network timed out while trying to connect to https://index.docker.io/v1/repositories/raduporumb/aspnetcore-rc1-update1/images. You may want to check your internet connection or if you are behind a proxy. Information about the Issue docker pull microsoft/aspnet:1.0.0-rc1-update1-core Steps to reproduce the behavior ... ... Please make sure that you're on a stable connection and use a solid DNS such as Google DNS.
2025-04-01T06:38:25.050399
2016-03-03T05:58:48
138083349
{ "authors": [ "F21", "vdemeester" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5398", "repo": "docker/libcompose", "url": "https://github.com/docker/libcompose/issues/169" }
gharchive/issue
Default listener is writing directly to stdout In https://github.com/docker/libcompose/blob/master/project/listener.go#L71, the default listener writes directly to stdout (tested on Windows and Ubuntu). I am writing a CLI app that uses libcompose to talk to the docker daemon, and the output from the listener pollutes the output of my CLI app. Is it possible to redirect the output of the listener into memory so that I can write the output of the listener to stdout on my terms? Hi @F21, thanks for the report. It's on its way; there is some refactoring to come in this area :stuck_out_tongue_closed_eyes:. @vdemeester Is there any way I can help get the ball rolling on this one? Maybe by adding a Listener field to the context struct passed to NewProject(). If there is a listener passed in, it uses it; otherwise it creates a DefaultListener. Let me know what you think! I just noticed NewDefaultListener() requires an instance of project, which causes a chicken-and-egg problem.
2025-04-01T06:38:25.051945
2017-05-26T21:14:41
231736438
{ "authors": [ "aaronlehmann", "fcrisciani", "mavenugo" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5399", "repo": "docker/libnetwork", "url": "https://github.com/docker/libnetwork/pull/1781" }
gharchive/pull-request
Removed printfs Changed some prints into proper logging; they were also missing the \n at the end. Signed-off-by: Flavio Crisciani<EMAIL_ADDRESS> Not a maintainer, but this looks good to me. LGTM
2025-04-01T06:38:25.054686
2015-10-28T02:33:24
113730681
{ "authors": [ "GordonTheTurtle", "IanLee1521", "dmp42" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5400", "repo": "docker/machine", "url": "https://github.com/docker/machine/pull/2105" }
gharchive/pull-request
Fixed typo Fixed minor typo. Signed-off-by: Ian Lee<EMAIL_ADDRESS> Please sign your commits following these rules: https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work The easiest way to do this is to amend the last commit: $ git clone -b "patch-1" git@github.com:IanLee1521/machine.git somewhere $ cd somewhere $ git commit --amend -s --no-edit $ git push -f Amending updates the existing PR. You DO NOT need to open a new one. LGTM Thanks!
2025-04-01T06:38:25.071031
2016-05-19T19:36:23
155817424
{ "authors": [ "GordonTheTurtle", "andrewhsu", "cyli", "docker-jenkins", "endophage" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5401", "repo": "docker/notary", "url": "https://github.com/docker/notary/pull/753" }
gharchive/pull-request
force cgo resolver for name resolution By default, the pure Go resolver is used, which makes direct DNS requests first to resolve a hostname before checking /etc/hosts. If a host on the network has the same name as the linked container, the host on the network will be used instead of the linked container. I had a machine on the network with hostname mysql, and bringing up the containers with docker-compose had the server and signer linked containers trying to connect to it... $ sudo docker-compose up ... mysql_1 | 2016-05-19 19:33:29<PHONE_NUMBER>05664 [Note] mysqld: ready for connections. mysql_1 | Version: '10.1.10-MariaDB-1~jessie' socket: '/var/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution server_1 | waiting for notarymysql to come up. signer_1 | waiting for notarymysql to come up. server_1 | waiting for notarymysql to come up. signer_1 | waiting for notarymysql to come up. server_1 | waiting for notarymysql to come up. ... Forcing the cgo resolver will use C library routines that honor values in /etc/hosts first and then fall back on DNS. From the docs: https://golang.org/pkg/net/#hdr-Name_Resolution Signed-off-by: Andrew Hsu<EMAIL_ADDRESS>(github: andrewhsu) Please sign your commits following these rules: https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work The easiest way to do this is to amend the last commit: $ git clone -b "compose-resolv" git@github.com:andrewhsu/notary.git somewhere $ cd somewhere $ git commit --amend -s --no-edit $ git push -f Amending updates the existing PR. You DO NOT need to open a new one. Can one of the admins verify this patch? Sorry about the multiple force pushes...trying to get the signature requirement squared away. I think it's good to go now. jenkins, test this please @cyli umm...i think jenkins is a bot. that's all he can say: https://github.com/docker-jenkins?tab=activity @andrewhsu Yes it is :). We gate PRs on code coverage results (which wait for results from Jenkins and CircleCI) as well as CI results - this Jenkins server runs our yubikey tests, but doesn't automatically run them for PRs from authors not in the org. The comment I left basically lets it know that it's ok to run tests for this PR, else the codecov check would never finish. We were wondering if converting the compose file to v2 and using a network definition would be sufficient? That would not put anything in /etc/hosts, and the DNS resolution would hopefully work correctly. @andrewhsu it's our jenkins bot and @cyli's comment is what we use to instruct it to run its tests. It will only pay attention to the maintainers on this project :-) @cyli I tried, as you suggested, to convert the docker compose files to v2, but it still had the same issue. If I add the environment variable GODEBUG=netdns=cgo then everything works. So, I created #755 as an alternative to this PR with the docker compose v2 format change. @andrewhsu I may be misunderstanding the docs, but they don't appear to indicate the behaviour you describe.
We think what you might be encountering is a compose issue (previously described here: https://github.com/docker/distribution/issues/1362 ) and while it can be solved with the cgo change, we really don't want to go that route of hacking every other project to work around an issue elsewhere. If you think it's the same issue could you file a ticket on the docker-compose repository? @endophage I understand. I've adjusted #755 to remove the netdns workaround. I'll close out this PR. @andrewhsu Just checking as a possible reason for the issue - did you happen to have more than 2 DNS servers on your host when the other machine took precedence over the linked mysql container? @cyli no, I did not have 2 DNS servers on the host at the time. Outside of docker containers, on the host I had entries something like this in /etc/resolv.conf: search internal.example.com nameserver <IP_ADDRESS> nameserver <IP_ADDRESS> And running our own DNS servers resolves hostnames like this: bash$ host mysql mysql.internal.example.com has address <IP_ADDRESS> In any case, I tried to replicate the issue again after moving my entire development environment from Kitematic to the new Docker for Mac OS X 1.12.0-rc2-beta16 and I was not able to get it to fail like before, so I'm not going to spend any more time on it. Stuff works. Ah ok, thanks for clarifying and trying to replicate!
2025-04-01T06:38:25.073364
2015-07-29T01:35:00
97836327
{ "authors": [ "aluzzardi", "jimmyxian" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5402", "repo": "docker/swarm", "url": "https://github.com/docker/swarm/pull/1099" }
gharchive/pull-request
Small cleanup of Cluster.createContainer Signed-off-by: Andrea Luzzardi<EMAIL_ADDRESS> /cc @jimmyxian @vieux @aluzzardi This cleanup will save the soft-image-affinity in ContainerConfig 4/ Retry with a soft-affinity (but don't store it in the ContainerConfig) @jimmyxian You are right :)
2025-04-01T06:38:25.079090
2018-07-07T23:14:42
339182075
{ "authors": [ "dperny", "thaJeztah" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5403", "repo": "docker/swarmkit", "url": "https://github.com/docker/swarmkit/pull/2694" }
gharchive/pull-request
[18.03] backport reaper fixes Backports for 18.03 of; https://github.com/docker/swarmkit/pull/2526 [manager/orchestrator/task_reaper] Fix task reaper test to also set the desired state on tasks to prevent reconciliation races https://github.com/docker/swarmkit/pull/2591 [manager/orchestrator/taskreaper] Move task_reaper_test to orchestrator/taskreaper https://github.com/docker/swarmkit/pull/2666 [orchestrator/task reaper] Clean up tasks in dirty list for which the service has been deleted https://github.com/docker/swarmkit/pull/2675 [manager/orchestrator/reaper] Clean out the task reaper dirty set at the end of tick() https://github.com/docker/swarmkit/pull/2669 Fix task reaper batching git cherry-pick -s -S -x a388cad309edddb9880899fe8927afbe4717a18e git cherry-pick -s -S -x 8cfb337920a6658b302643f27074ca3d669176ec git cherry-pick -s -S -x 592e8eddfa43ec5fbd6e34da5ad6890dfa9313fb git cherry-pick -s -S -x 1a43a3b612d8c775db8a44c8399844e1f7e4aed2 git cherry-pick -s -S -x 5291c7a7b45773a4fe18720a54485ee2dde0af3d Conflict when cherry-picking https://github.com/docker/swarmkit/commit/8cfb337920a6658b302643f27074ca3d669176ec, likely because things were cherry-picked out of order; On branch 18.03-backport_reaper_2 You are currently cherry-picking commit 8cfb3379. (fix conflicts and run "git cherry-pick --continue") (use "git cherry-pick --abort" to cancel the cherry-pick operation) Changes to be committed: modified: manager/orchestrator/taskreaper/task_reaper_test.go Unmerged paths: (use "git add/rm <file>..." as appropriate to mark resolution) deleted by them: manager/orchestrator/replicated/task_reaper_test.go Resolved by git rm manager/orchestrator/replicated/task_reaper_test.go to mark resolution To verify everything looked ok after merging, I compared the directories with master; git diff master manager/orchestrator/taskreaper/ git diff master manager/orchestrator/ Which produced an empty diff, so the master/orchestrator directory is fully up to date with master This supersedes https://github.com/docker/swarmkit/pull/2668 ping @anshulpundir @dperny @cyli PTAL if this LGTY LGTM, let's do it.
2025-04-01T06:38:25.131731
2019-08-16T18:15:21
481726191
{ "authors": [ "agduncan94", "denis-yuen", "garyluu" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5404", "repo": "dockstore/dockstore", "url": "https://github.com/dockstore/dockstore/issues/2766" }
gharchive/issue
WDL parsing check for recursive imports catches cases that are not recursive See https://github.com/gevro/gatk4-exome-analysis-pipeline-flat If you try to register it with the new dockstore WDL 1.0 parsing code, it will say there might be recursive imports. I checked the import structure and this is not true; it is a flaw in our code. Our current WDL parsing code cannot parse these 1.0 files either, so users cannot publish their valid 1.0 workflows. From https://discuss.dockstore.org/t/unable-to-get-publish-button-to-become-active/1972 ┆Issue is synchronized with this Jira Story ┆Issue Type: Story ┆Fix Versions: Dockstore 1.7 ┆Sprint: Seabright Sprint 17 Raft ┆Issue Number: DOCK-911 @agduncan94 taking a look, let me know if you have code for this already @denis-yuen I have the code, it is in https://github.com/dockstore/dockstore/tree/feature/2766/recursive-imports I still have to write tests; I was going to use the one from the discourse discussion. Ah, too late. I can now register https://raw.githubusercontent.com/gevro/gatk4-exome-analysis-pipeline-flat/master/ExomeGermlineSingleSample.wdl
2025-04-01T06:38:25.139105
2017-06-13T16:51:00
235619104
{ "authors": [ "Welliton309", "bethsheets", "denis-yuen", "keiranmraine", "wdesouza" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5405", "repo": "dockstore/dockstore", "url": "https://github.com/dockstore/dockstore/issues/770" }
gharchive/issue
Dockstore-based workflows with registered tools Feature Request Is it possible to create a Workflow using Tools registered in Dockstore? I noticed that many available workflows in Dockstore provide all tools (CWL files) locally. I want to write Workflow files (CWL) using tools that I previously registered in Dockstore. Any idea? Desired behaviour A CWL Workflow example:
# ...
steps:
  quality_report:
    run: registry.hub.docker.com/welliton/rqc # latest version
    in: {}
    out: {}
  quality_control:
    run: registry.hub.docker.com/welliton/trimgalore:v0.4.4 # with tag
    in: {}
    out: {}
# ...
┆Issue is synchronized with this Jira Story ┆Fix Versions: Dockstore 2.X ┆Issue Number: DOCK-490 ┆Sprint: Backlog ┆Issue Type: Story Hi, I think the approach that comes to mind to try is: Multiple tools are built from multiple Dockerfiles+CWL files in one repo. These are then added to Dockstore. Subsequently, a workflow is added to the same repo that uses those tools. We haven't tried sharing multiple tools between multiple repos yet, however. It might depend on how to reference a CWL step in a different repo in CWL. This is something we'd also like to be able to do. To some extent we're also thinking about 'custom' CWL which isn't in a repo but uses registered tools to allow experimentation or user-centric plug-and-play. One thought I had: I have the impression you may be able to import workflow steps via URL http://www.commonwl.org/v1.0/SchemaSalad.html#Import This is something that I haven't tried myself yet, but if it works, it might be nice to work out a pattern based on this and support it in Dockstore explicitly. Thank you @denis-yuen for your suggestion. It worked! I used the URL for the raw Dockstore.cwl file from GitHub. See the example below.
cwlVersion: v1.0
class: Workflow
inputs:
  files: File[]
  groups: string[]
outputs:
  qc_report:
    type: File
    outputSource: qc_raw/qc_report
steps:
  qc_raw:
    run: https://raw.githubusercontent.com/labbcb/dockstore-rqc/v3.5/Dockstore.cwl
    in:
      files: files
      groups: groups
    out: [qc_report]
Input example:
files:
  - class: File
    path: ERR127302_1_subset.fastq.gz
  - class: File
    path: ERR127302_2_subset.fastq.gz
groups: [None, None]
I tested using cwltool. Another way is to download CWL files via dockstore tool cwl. For example: dockstore tool cwl --entry registry.hub.docker.com/welliton/rqc:v3.5 > Rqc.cwl
2025-04-01T06:38:25.171198
2015-07-16T21:35:00
95535333
{ "authors": [ "dossorio", "jmikola", "malarzm" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5406", "repo": "doctrine/mongodb-odm", "url": "https://github.com/doctrine/mongodb-odm/issues/1172" }
gharchive/issue
Inverse-side PersistentCollections should be immutable This issue came out of #1086, where we discussed several edge cases when working with inverse-side PersistentCollections that have been modified. @jmikola: Should we consider documenting that inverse collections are read-only, or perhaps enforce that with a special PersistentCollection sub-class? AFAIK, changes to such a collection would not be reflected in the changeset (so they are already read-only from ODM's perspective). If users actually want to modify the collection (so their data model doesn't have to care), I suppose they can manually unwrap it in a lifecycle event and work with an ArrayCollection. @alcaeus: I agree with having an ImmutablePersistentCollection class which forbids any write operations (add(), clear(), remove(), removeKey(), set()) in order to preserve the intended behavior. The only thing is, while it might be that we're just strictly enforcing something that was an unwritten rule before, we're technically breaking BC here. I suppose they can manually unwrap it in a lifecycle event and work with an ArrayCollection. What about having an additional parameter in the mapping to unwrap it automatically? A lifecycle event seems to be too much compared to @ReferenceMany(..., immutable="false"). Anyway, I still have in mind introducing custom collections, so I'd say create a (really) Big Red Warning in the docs (not the current notice, a Big Red Warning) so as not to complicate things in the future? For the record from IRC: @alcaeus: @Ocramius had some input on that one as well. Basically, the recommendation at the time was to have PersistentCollection be mutable to allow people to add items to inverse collections before persisting to the database - well knowing that some counts may be off if they do. ORM has the same issue we do. Personally, I don't like removing functionality from a class, which is why I could live with leaving PersistentCollection fully mutable as long as we document this somehow. +1 for flexibility and freedom of immutable="false"! I guess in bi-directional relations like Task <-> Tag, where tags are reusable among the Tasks, it is ok, but in (for example) Task <-> Comment, I would like to delete comments from inside the Task's comments collection. As you were discussing, it's quite confusing that you can call remove() on a PersistentCollection and it is not persisted, because it is the inverse side of the relation. Anyway, I hope I understood everything well! It took me a while and I feel I don't know even half of it! :) Thanks!! Custom collections have been implemented for a while now; whoever wants immutability for the inverse side of references can employ them :)
2025-04-01T06:38:25.199775
2019-06-14T17:30:28
456357570
{ "authors": [ "LarryKlugerDS", "acooper4960", "by12380", "shierro" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5407", "repo": "docusign/docusign-node-client", "url": "https://github.com/docusign/docusign-node-client/issues/136" }
gharchive/issue
envelopesApi.update: Error: INVALID_REQUEST_BODY In an attempt to resend a document using the Node API client, the code below throws an error with status 400 - INVALID REQUEST BODY. await envelopesApi.update(dsJwtAuth.accountId, envelopeId, { resendEnvelope: true }) However, doing the following works: await envelopesApi.update(dsJwtAuth.accountId, envelopeId, { resendEnvelope: true, envelope: {} }) Fix suggestions: Update the node-client-api documentation for update(accountId, envelopeId, opts, callback) to say opts.envelope is required OR Update api/EnvelopesApi.js, line 3861 to be var postBody = opts['envelope'] || {} ; Thank you for the bug report. I have filed internal report DCM-3369. Hi @by12380 This issue has been resolved starting in versions 4.10.1 and 5.8.1. Can confirm that this is working, thanks! It might be nice to have this as an example in the examples project.
2025-04-01T06:38:25.249113
2022-05-14T18:44:26
1236095850
{ "authors": [ "AronGahagan", "japaweiss" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5408", "repo": "doidotech/TBM", "url": "https://github.com/doidotech/TBM/issues/2" }
gharchive/issue
Option to dim / disable display brightness As a user I want to be able to dim / disable the display brightness because in some situations it is too bright for the room. Agree. Would also like to turn it off at night, say between 11 PM and 5 AM.
2025-04-01T06:38:25.268478
2021-10-01T12:02:56
1013263088
{ "authors": [ "doitsujin", "gardotd426" ], "license": "Zlib", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5409", "repo": "doitsujin/dxvk", "url": "https://github.com/doitsujin/dxvk/issues/2321" }
gharchive/issue
86148ec0: "Implement DXVK pieces required for DX11 DLSS support" Breaks Origin Software information Origin crashes using current master of DXVK. 1.9.2 works. After bisecting I found that "86148ec070628f5a89fbb0a91603bae2ce89529a: Implement DXVK pieces required for DX11 DLSS support" is the offending commit. Reverting it fixes the issue System information GPU: Nvidia RTX 3090 Driver: 470.74 Wine version: All wine versions after 6.6 DXVK version: Current master Apitrace file(s) Origin.4.trace.tar.gz Log files d3d9.log: Origin_d3d9.log d3d11.log: Origin_d3d11.log dxgi.log: Origin_dxgi.log Do other 32-bit apps work or is it only Origin? Battle.Net works. Not sure if Ubisoft Connect is 32-bit or not, but it works as well. I know Battle.Net is 32-bit, though, and it works fine. Fixed on master.
2025-04-01T06:38:25.278606
2021-12-22T18:35:53
1087081811
{ "authors": [ "Alexithymia2014", "Blisto91", "doitsujin" ], "license": "Zlib", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5410", "repo": "doitsujin/dxvk", "url": "https://github.com/doitsujin/dxvk/issues/2410" }
gharchive/issue
Ghost Recon Advanced Warfighter 2 does not render shadows at all Ghost Recon Advanced Warfighter 2 and its predecessor (GRAW) do not render shadows with DXVK or WineD3D. There is an old wine bug that could be relevant, https://bugs.winehq.org/show_bug.cgi?id=38015 something to do with cascaded shadows? Software information Ghost Recon Advanced Warfighter 2 downloaded from Amazon Games since it's no longer on another online store. I've maxed out the settings but I've also tried disabling Dynamic Shadows or enabling them at low, medium, and high. System information GPU: AMD Raven Ridge Vega 10 Driver: Mesa 21.3.2 Wine version: Tested on 6.21-devel and 7.0-rc2 DXVK version: Tested on 1.9.2 and latest master (as of two days ago) Fedora 35 Apitrace file(s) Apitrace Log files d3d9.log: https://drive.google.com/file/d/1CIp1-ECqnLKarAl59Ka_RON9mvaTMNZI/view?usp=sharing d3d11.log: N/A dxgi.log: https://drive.google.com/file/d/1L-Sok3HzlLzLmVHJiwU50ozKUmiwKf81/view?usp=sharing Looks like all your google drive links require a password. My apologies, I just changed the permissions. Can you try now? I got around to installing Windows 11 and the shadows do render when I play. When I drop in the DXVK DLLs next to the game exe, the shadows no longer render. @Alexithymia2014 Can I get you to test this issue again? Your trace is weird when I replay it with either wined3d or dxvk. The world doesn't seem to load in at all with both once you load the game in your trace. I'll try to recreate the trace for you. Hi @Blisto91, the shadows work now in 1.10.2! Out of curiosity, was there a specific fix for this? Not as far as I know. But there have been some general dx9 fixes which might affect a bunch of games. Glad to hear it's working! 🙂
2025-04-01T06:38:25.281122
2024-07-24T17:52:26
2428159185
{ "authors": [ "doitsujin", "esullivan-nvidia" ], "license": "Zlib", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5411", "repo": "doitsujin/dxvk", "url": "https://github.com/doitsujin/dxvk/pull/4166" }
gharchive/pull-request
Update shouldSubmit to correctly handle descriptorPoolOverallocation Currently shouldSubmit will force the dxvk context to be flushed when too many descriptor pools have been allocated. This heuristic does not work when VK_NV_descriptor_pool_overallocation is in use because there will only ever be a single pool. This change updates the heuristic to use the number of allocated sets when VK_NV_descriptor_pool_overallocation is in use. Resolves the issue described in https://github.com/ValveSoftware/Proton/issues/7862 For the bug I included in the description the application ends up repeatedly calling vkAllocateDescriptorSets so the driver will eventually hit an OOM if the pool isn't reset. The game doesn't use d3d for presentation so in this case the descriptor pool ends up growing indefinitely. alright, I was kind of expecting pool memory to still be limited in some way, good to know.
2025-04-01T06:38:25.286431
2024-03-11T04:35:03
2178209915
{ "authors": [ "glihm", "kariy" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5412", "repo": "dojoengine/dojo", "url": "https://github.com/dojoengine/dojo/pull/1648" }
gharchive/pull-request
fix(sozo): ensure warnings don't stop tests build Closes DOJ-252, #1646. Hmm, I'm still getting the reported error even with this change. With which project did you try? Do you have only warnings? I'll add tests tomorrow, that's a good point. But I've tested on spawn-and-move and I have the expected output: Ah, nvm, my mistake.. it's working lol. As discussed this morning, the rework of sozo ops and commands will address related tests.
2025-04-01T06:38:25.288941
2023-09-08T18:29:20
1888142035
{ "authors": [ "dokar3", "mahmoud-abdallah863" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5413", "repo": "dokar3/ChipTextField", "url": "https://github.com/dokar3/ChipTextField/issues/95" }
gharchive/issue
Add support for material 3 Can't use this library with Material 3 currently. In my app I am using Material 3 only; I don't want to mix both Material 3 and 2. Thanks for the suggestion, will ship an M3 artifact when I have time. Great. I'm willing to help with this. v0.6.0 is out with Material 3 support: https://github.com/dokar3/ChipTextField/releases/tag/v0.6.0
2025-04-01T06:38:25.313144
2015-01-09T21:24:03
53916481
{ "authors": [ "dokterdok", "mbain108" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5414", "repo": "dokterdok/Continuity-Activation-Tool", "url": "https://github.com/dokterdok/Continuity-Activation-Tool/issues/126" }
gharchive/issue
THANK YOU TEAM! This worked perfectly for me! This worked perfectly for me! I purchased the IOGEAR Bluetooth 4.0 USB Micro Adapter (GBU521) hopes to utilize Apple's handoff and airdrop features. I have the MacBook Pro (15-inch, Mid 2010) running OS X Yosemite. I installed 'Continuity Activation Tool'. It was seamless. There were no hiccups and all is working like a charm! THANK YOU TEAM! Thank you for your support @mbain108 !
2025-04-01T06:38:25.345922
2023-12-16T17:00:51
2044859758
{ "authors": [ "Scyye", "dolfies" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5415", "repo": "dolfies/discord.py-self", "url": "https://github.com/dolfies/discord.py-self/issues/630" }
gharchive/issue
Event Loop Is Closed crash Summary Event Loop Is Closed prevents starting bot. Reproduction Steps Use the current version Start your bot Code bot.run(token=token) Expected Results The bot starts up Actual Results this error: Traceback (most recent call last): File "K:\coding\Other\notificationBot\sb.py", line 117, in bot.run(token=token) File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\client.py", line 938, in run asyncio.run(runner()) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\runners.py", line 44, in run return loop.run_until_complete(main) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 647, in run_until_complete return future.result() File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\client.py", line 927, in runner await self.start(token, reconnect=reconnect) File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\client.py", line 857, in start await self.login(token) File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\client.py", line 698, in login data = await state.http.static_login(token.strip()) File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\http.py", line 991, in static_login await self.startup() File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\http.py", line 562, in startup self.super_properties, self.encoded_super_properties = sp, _ = await utils._get_info(session) File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\utils.py", line 1446, in _get_info bn = await _get_build_number(session) File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\utils.py", line 1474, in _get_build_number build_url = 'https://discord.com/assets/' + re.compile(r'assets/+([a-z0-9]+).js').findall(login_page)[-2] + '.js' IndexError: list index out of range Exception ignored in: <function _ProactorBasePipeTransport.del at 0x000001FF7C6C38B0> Traceback (most recent call last): File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in del self.close() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close self._loop.call_soon(self._call_connection_lost, None) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 751, in call_soon self._check_closed() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 515, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed Exception ignored in: <function _ProactorBasePipeTransport.del at 0x000001FF7C6C38B0> Traceback (most recent call last): File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in del self.close() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close self._loop.call_soon(self._call_connection_lost, None) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 751, in call_soon self._check_closed() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 515, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed Exception ignored in: <function _ProactorBasePipeTransport.del at 0x000001FF7C6C38B0> Traceback (most recent call last): File 
"C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in del self.close() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close self._loop.call_soon(self._call_connection_lost, None) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 751, in call_soon self._check_closed() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 515, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed Exception ignored in: <function _ProactorBasePipeTransport.del at 0x000001FF7C6C38B0> Traceback (most recent call last): File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in del self.close() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close self._loop.call_soon(self._call_connection_lost, None) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 751, in call_soon self._check_closed() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 515, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed Exception ignored in: <function _ProactorBasePipeTransport.del at 0x000001FF7C6C38B0> Traceback (most recent call last): File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in del self.close() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close self._loop.call_soon(self._call_connection_lost, None) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 751, in call_soon self._check_closed() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 515, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed System Information - Python v3.9.13-final - discord.py-self v2.0.0-final - aiohttp v3.9.1 - system info: Windows 10 10.0.19045 Checklist [X] I have searched the open issues for duplicates. [X] I have shared the entire traceback. [X] I am using a user token (and it isn't visible in the code). Additional Information No response Duplicate of #619
2025-04-01T06:38:25.353082
2022-03-22T19:17:43
1177203577
{ "authors": [ "coffeegoddd" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5416", "repo": "dolthub/bounties", "url": "https://github.com/dolthub/bounties/pull/636" }
gharchive/pull-request
[auto-bump] dependency by zachmu :coffee: An Automated Dependency Version Bump PR :crown: Initial Changes The initial changes contained in this PR were produced by go getting the dependency. $ cd ./go $ go get github.com/dolthub/<dependency>/go@<commit> $ go mod tidy Before Merging This PR must have passing CI and a review before merging. After Merging An automatic PR will be opened against the LD repo that bumps the bounties version there. This PR has been superseded by https://github.com/dolthub/bounties/pull/637
2025-04-01T06:38:25.363386
2019-03-07T12:43:57
418290052
{ "authors": [ "IrickNcqa", "artfulsage", "domaindrivendev", "slahabar", "spfaeffli" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5417", "repo": "domaindrivendev/Swashbuckle.AspNetCore", "url": "https://github.com/domaindrivendev/Swashbuckle.AspNetCore/issues/1064" }
gharchive/issue
JsonObject(ItemRequired = Required.AllowNull) not recognized Hi. I suppose there is a bug in recognizing required properties of class marked with [JsonObject(ItemRequired = Required.AllowNull)] while properties marked with [JsonRequiredAttribute] and [JsonProperty(Required = Required.AllowNull)] recognized well. Repro sample: public class Program { public static void Main(string[] args) => WebHost.CreateDefaultBuilder(args).UseStartup<Startup>().Build().Run(); } public class Startup { public Startup(IConfiguration configuration) => Configuration = configuration; public IConfiguration Configuration { get; } public void ConfigureServices(IServiceCollection services) { services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2); services.AddSwaggerGen(c => c.SwaggerDoc("1.0", new Info())); } public void Configure(IApplicationBuilder app, IHostingEnvironment env) => app.UseSwagger() .UseSwaggerUI(c => { c.SwaggerEndpoint($"/swagger/1.0/swagger.json", "API"); c.RoutePrefix = string.Empty; }) .UseMvc(); } [JsonObject(ItemRequired = Required.AllowNull)] public class Item { public int? Id { get; set; } public string Name { get; set; } } public class Item2 { [JsonRequired] public int? Id { get; set; } [JsonProperty(Required = Required.AllowNull)] public string Name { get; set; } } [Route("api/[controller]")] [ApiController] public class ItemsController : ControllerBase { [HttpGet("default-item")] public ActionResult<Item> GetDefaultItem() => new Item(); [HttpGet("default-item2")] public ActionResult<Item2> GetDefaultItem2() => new Item2(); } I am running into issues with below. [JsonProperty(Required = Required.AllowNull)] tags it as required field but only allows non-null values. Currently the value for JsonProperty.Required only determines if the value is required - it does not allow you to indicate that a value may or may not be null. Example: [JsonProperty(Required = Required.AllowNull)] public string ValAllowNull { get; set; } [JsonProperty(Required = Required.Always)] public string ValAlways { get; set; } [JsonProperty(Required = Required.Default)] public string ValDefault { get; set; } [JsonProperty(Required = Required.DisallowNull)] public string ValDisallowNull { get; set; } [JsonProperty(Required = Required.Always)] public int? ValNullable { get; set; } Produces this output: "RequiredTest": { "required": [ "valAllowNull", "valAlways", "valNullable" ], "type": "object", "properties": { "valAllowNull": { "type": "string" }, "valAlways": { "type": "string" }, "valDefault": { "type": "string" }, "valDisallowNull": { "type": "string" }, "valNullable": { "type": "integer", "format": "int32", "nullable": true } }, "additionalProperties": false }, As you can see, nullable is only set for nullable types. @domaindrivendev : Would you accept a PR that sets the value for nullable based on JsonProperty.Required? If so, I would be happy to implement this. Seems reasonable - a PR would be great! Guys what about original issue with [JsonObject(ItemRequired = Required.AllowNull)]? This is a problem for me too. Optional strings being required? Nothing like hitting deserialization errors in NSwagStudio auto-generated code because optional strings are null, yet marked as DisallowNull :( @artfulsage @slahabar @IrickNcqa - thanks to some inspiring work from @spfaeffli, I now think I have all of your issues resolved. This is the last remaining issue preventing me from moving toward an official 5.0.0 release. 
So, it would be very helpful if you could pull down the latest preview from myget (5.0.0-rc3-preview-0952 at the time of writing) and confirm everything is now working as expected. Here's the relevant commit - c89876fbe77f65d25b1e768361a71ba7b9ef4462 @spfaeffli @domaindrivendev, thanks for the update! I'd like to test it, but it seems there are a lot of breaking changes since version 4. Is there any guide for upgrading? @artfulsage There is no upgrade guide yet (#1262) - however you could take a look at the release notes to see what changed since v4.
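As a rough illustration of the Required-to-nullable mapping discussed above, here is a minimal sketch of a custom schema filter that marks a member as nullable when it carries [JsonProperty(Required = Required.AllowNull)]. The filter name, the camelCase key lookup, and the exact mapping rules are assumptions for illustration only, not the implementation that shipped in 5.0.0:

using System.Reflection;
using Microsoft.OpenApi.Models;
using Newtonsoft.Json;
using Swashbuckle.AspNetCore.SwaggerGen;

// Hypothetical filter: flags schema properties as nullable when the CLR member
// is annotated with Required.AllowNull (or Required.Default, which also tolerates null).
public class NullableFromRequiredSchemaFilter : ISchemaFilter
{
    public void Apply(OpenApiSchema schema, SchemaFilterContext context)
    {
        if (schema.Properties == null || context.Type == null) return;

        foreach (var member in context.Type.GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            var jsonProperty = member.GetCustomAttribute<JsonPropertyAttribute>();
            if (jsonProperty == null) continue;

            // Assumes the default camelCase naming; adjust if a different naming policy is configured.
            var key = char.ToLowerInvariant(member.Name[0]) + member.Name.Substring(1);
            if (!schema.Properties.TryGetValue(key, out var propertySchema)) continue;

            propertySchema.Nullable = jsonProperty.Required == Required.AllowNull
                || jsonProperty.Required == Required.Default;
        }
    }
}

// Registration in ConfigureServices:
// services.AddSwaggerGen(c => c.SchemaFilter<NullableFromRequiredSchemaFilter>());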
2025-04-01T06:38:25.374937
2021-11-05T08:14:20
1045553886
{ "authors": [ "Hoopou", "LeoJHarris", "MayueCif", "adrianstovall71", "captainsafia", "ch-lee", "dnperfors", "farlop", "feO2x" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5418", "repo": "domaindrivendev/Swashbuckle.AspNetCore", "url": "https://github.com/domaindrivendev/Swashbuckle.AspNetCore/issues/2267" }
gharchive/issue
Include Descriptions from XML Comments For Minimal API Does Not Work I created a .NET 6 Minimal API project, but the comments are not displayed in the Swagger HTML. The pattern comes from: https://github.com/domaindrivendev/Swashbuckle.AspNetCore#include-descriptions-from-xml-comments Code: app.MapGet("Test", Handler.Test).WithName("Test"); public static class Handler { /// <summary> /// Test Comment /// </summary> public static string Test() { return "Test Comment"; } } The generated XML documentation file is correct. Isn't it supported yet for Minimal APIs? I see the same thing. Playing with Microsoft's "ToDo" examples for .NET Core and I see the XML comments for the ToDoItemDTO type show up in the documentation, but not for the app.MapGet entries. Seems like classes do get comments added to their documentation. Methods - the things mapped to HTTP verbs - have documentation generated, but without the comments associated with them. Yes, it does not work for Minimal APIs; at present I can't seem to find a solution. Maybe this workaround can help: https://github.com/dotnet/aspnetcore/issues/37906#issuecomment-954494599 Hi folks! Yes, this isn't supported yet. We've actually got an issue tracking adding support for this over on the aspnetcore repo (https://github.com/dotnet/aspnetcore/issues/39927). If you'd like to see this happen, give it a thumbs-up and it'll help us with prioritization. I was able to get it running by using a structure like the following with Swashbuckle.AspNetCore 6.3.0: namespace MyAwesomeWebApi; public static class GetContactsEndpoint { public static WebApplication MapGetContactsEndpoint(this WebApplication app) { app.MapGet("/api/contacts", GetContacts) .Produces<ContactsListDto>() .Produces(StatusCodes.Status400BadRequest) .Produces(StatusCodes.Status500InternalServerError); return app; } /// <summary> /// Gets the contacts as a paged result. /// </summary> /// <param name="skip">The number of contacts to skip (optional). Default value is 0. Must be greater than or equal to 0.</param> /// <param name="take">The number of contacts to take (optional). Default value is 30. Must be between 1 and 100.</param> /// <param name="searchTerm">The search term (optional). White space at the front and back will be trimmed automatically. Contacts whose name start with the search term will be found.</param> /// <response code="400">Bad Request: the paging parameters are invalid.</response> public static async Task<IResult> GetContacts(int skip = 0, int take = 30, string? searchTerm = null) { if (skip < 0 || take is <= 0 or > 100) return Result.BadRequest(); // Open a database session here, load the paged contacts and return them return Result.Ok(contactListDto); } } You can then map this endpoint in Program.cs like so: app.MapGetContactsEndpoint(); This worked!
On top of configuring Swashbuckle, I needed to add this extra part: builder.Services.AddSwaggerGen(opts => { var xmlFilename = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml"; opts.IncludeXmlComments(Path.Combine(AppContext.BaseDirectory, xmlFilename)); }); @ch-lee I can confirm this worked: In my case I still need to add this to the csproj file: <GenerateDocumentationFile>true</GenerateDocumentationFile> https://stackoverflow.com/questions/69790435/swagger-asp-net-core-minimal-api-include-xml-comments-files @ch-lee I can confirm this worked: In my case I still need to add this to the csproj file: <GenerateDocumentationFile>true</GenerateDocumentationFile> https://stackoverflow.com/questions/69790435/swagger-asp-net-core-minimal-api-include-xml-comments-files I just created a blank API in .NET 6 and I needed to add it too! Thanks On .NET 7 I was able to get this working, just as @feO2x describes; however, examples on elements don't work. All descriptions are correctly shown.
2025-04-01T06:38:25.399926
2022-09-16T22:09:28
1376539023
{ "authors": [ "dominikh" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5419", "repo": "dominikh/gotraceui", "url": "https://github.com/dominikh/gotraceui/issues/23" }
gharchive/issue
Provide menu for changing UI settings Implement a popup windows that can be used to customize UI settings. e.g. instead of needing different shortcuts for toggling labels, compact mode, tooltips etc, have one shortcut that opens a menu that allows toggling these features. We have a traditional application menu for this now. We'll have to see if UI settings get changed often enough to justify a more custom approach.
2025-04-01T06:38:25.459149
2016-10-04T04:03:43
180801551
{ "authors": [ "doomspork", "nscyclone", "pragmaticivan", "ruan-brandao" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5420", "repo": "doomspork/elixir-school", "url": "https://github.com/doomspork/elixir-school/pull/699" }
gharchive/pull-request
[pt] translation for comprehensions Related to #355 This lesson is not in the issue's list of pages to translate, but it doesn't have a translation. Feel free to give me your feedback @pragmaticivan and anyone who wants to revise this translation. Thank you @ruan-brandao! I'll give @pragmaticivan a chance to comment and then we can merge 👍 Ops, still reviewing. BTW, I noticed that the lessons for Strings and Custom Mix Tasks need translation too. Should I translate them in this PR or open one PR per lesson? @ruan-brandao individual PRs are usually easier for reviewers since they can be tackled in small chunks. @doomspork ready! Thanks, @ruan-brandao and @pragmaticivan!
2025-04-01T06:38:25.463101
2016-11-11T02:46:21
188672561
{ "authors": [ "doowb", "skleest" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5421", "repo": "doowb/firebase-cron", "url": "https://github.com/doowb/firebase-cron/issues/5" }
gharchive/issue
Horizontal scaling? Hello, Thanks for the component! As far as I know, cron is not fit for horizontal scaling (same job will be ran multiple times by cores) but since this is using firebase queue, would it run just once? Thank you. This was not designed to be horizontally scaled since there is a possibility that a job could be picked up by multiple servers. I think this can be implemented though and use logic similar to firebase-queue to ensure that a job is only handled by a single server. However, unless you have a lot of schedule jobs, you probably don't need to scale this horizontally since this library can be run as a separate process (e.g. not in the same process as your queues) and only adds data to a firebase-queue's task list. From there, the firebase-queue handles scaling the actual execution of tasks. I'm open to PRs though since polling for the next jobs could be handled better.
2025-04-01T06:38:25.549652
2019-11-04T13:58:20
517169997
{ "authors": [ "aaronclong", "dotMorten" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5422", "repo": "dotMorten/DotNetOMDGenerator", "url": "https://github.com/dotMorten/DotNetOMDGenerator/pull/24" }
gharchive/pull-request
Upgrade to DotNet Core 3.0 DotNet Core 3.0 is a major release, requiring many projects to upgrade. This project currently fails on project running on dotnet core 3.0. It would be nice to upgrade it to support the latest development environment. Ran the current published tool against .NET Core 3.0 and 3.1 assemblies, and found no issues.
2025-04-01T06:38:25.553059
2020-02-08T11:53:01
562009187
{ "authors": [ "Redskinsjo", "ardatan", "knixer" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5423", "repo": "dotansimha/graphql-yoga", "url": "https://github.com/dotansimha/graphql-yoga/issues/619" }
gharchive/issue
How do I add headers to a server response? Not every response should have these headers. I only want to add some headers when dealing with tokens. It depends on where you want to manipulate the headers; let's say you want to access the response object in the resolvers in a Node env; (root, args, context, info) => { context.res // You can find `ServerResponse` object here } If you want to do it at the middleware level; const http = require('http'); const yoga = require('@graphql-yoga/node'); const yogaApp = yoga.createServer(..); const server = http.createServer((req, res) => { // You can manipulate `res` here yogaApp.requestListener(req, res); }) I have many 405 Method Not Allowed errors when building my NextJs14 app. I have set cors: { methods: ["POST", "OPTIONS", "GET"], }, but it doesn't work. If the context is the right way to set the allowed methods, what is the best way to set the allowed methods for every request? Also, I don't find any res field on YogaInitialContext. You can create a plugin that changes the headers; https://the-guild.dev/graphql/yoga-server/docs/features/envelop-plugins#onresponse My application stack is quite large; how do you suggest I make a simple reproduction without first identifying the origin of the issue?
2025-04-01T06:38:25.564524
2018-02-23T22:10:14
299866351
{ "authors": [ "AlexKotov", "ArthurRuxton-DY", "Cierpliwy", "CyxouD", "LeonidVeremchuk", "LingboTang", "bntzio", "gilador", "nmurashi", "originalix", "rafkhan" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5424", "repo": "dotintent/react-native-ble-plx", "url": "https://github.com/dotintent/react-native-ble-plx/issues/230" }
gharchive/issue
Turn off or refresh device caching on iOS I'm trying to scan some nRF52 UART devices, and I have no problem scanning them. However, when I change the device name and then do a scan using ble-plx again, it still displays the old name, which I believe is cached by the iOS system. I confirmed this behaviour with the Nordic nRF Connect app. What's different is that the Nordic app will refresh the device name after a few seconds or after establishing a connection between my iPhone and my device. I assume ble-plx has similar functions to refresh the cache, but I didn't see it explicitly in the documentation. Can someone help? +1 +1 iOS uses the value of https://www.bluetooth.com/specifications/gatt/viewer?attributeXmlFile=org.bluetooth.characteristic.gap.device_name.xml in its caches, so yes, it is possible that when the device name is changed and a new connection wasn't established, scanning can show the old name. Unfortunately I'm not aware of any specific API on iOS to make this process faster. We are currently using https://developer.apple.com/documentation/corebluetooth/cbperipheral/1519029-name. +1 +1 Still not implemented, but I can only say for sure that it happens on Android. I think the way to refresh the gatt cache is by passing the right option parameter: connectToDevice(deviceIdentifier: DeviceId, options?: ConnectionOptions): Promise<Device> where ConnectionOptions has refreshGatt?: RefreshGattMoment:
2025-04-01T06:38:25.577580
2021-05-17T14:28:42
893392303
{ "authors": [ "guardrex", "rahul7720" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5425", "repo": "dotnet/AspNetCore.Docs", "url": "https://github.com/dotnet/AspNetCore.Docs/issues/22332" }
gharchive/issue
Blazor Wasm .Net 5 - Implement both Individual User Accounts and Azure AD Authentication The Microsoft documentation below explains how to protect a Blazor WASM hosted app using two different authentication approaches. 1. Individual User JWT Authorization (IdentityServer) 2. Azure AD Authentication There is a need to provide end-users with both options by combining both authentication mechanisms in a single app. The user should be able to choose one of the options from the login page. The Azure AD option is just to give end-users an SSO experience and besides that, all authorization logic will be handled locally using individual user accounts. Once the user is authenticated using the Azure AD option, there should be a way to link the user with a local ID to handle authorization logic etc. I did a lot of online research but I couldn't find a guide or tutorial to implement this. I tried to implement this by combining the code but I'm stuck with: Enabling both Local/AzureAd login options in the Blazor client login page Linking the Azure AD user with the local user in the server Blazor Client Code public class Program { public static async Task Main(string[] args) { var builder = WebAssemblyHostBuilder.CreateDefault(args); builder.RootComponents.Add<App>("#app"); builder.Services.AddHttpClient("BlazorWasmIndvAuth.ServerAPI", client => client.BaseAddress = new Uri(builder.HostEnvironment.BaseAddress)) .AddHttpMessageHandler<BaseAddressAuthorizationMessageHandler>(); builder.Services.AddScoped(sp => sp.GetRequiredService<IHttpClientFactory>().CreateClient("BlazorWasmIndvAuth.ServerAPI")); //OPTION 1 //Azure Ad Authentication builder.Services.AddMsalAuthentication(options => { builder.Configuration.Bind("AzureAd", options.ProviderOptions.Authentication); options.ProviderOptions.DefaultAccessTokenScopes.Add("api://123456/Api.Access"); }); //OPTION 2 //Individual User JWT authentication builder.Services.AddApiAuthorization(); await builder.Build().RunAsync(); } } Hello @rahul7720 ... This scenario may likely be considered an "advanced scenario" left to developers and the community to support. There are only so many use cases that we can cover and maintain. As you can see by the number of issues that we have to keep up with (Blazor Project) and the yearly .NET and ASP.NET Core/Blazor new features, there just isn't a lot of time to present a number of advanced scenarios. I understand that you searched for solutions, but also note that product support is available on public support forums. We often recommend the usual spots to ask ... Stack Overflow (tag: blazor) ASP.NET Core Slack Team Blazor Gitter However ... of course ... let's get a ruling on it. Doc issues are worked based on the PU's priority scenarios for coverage. Ping @mkArtakMSFT to take a look. If we keep this issue as a work item, he'll let us know what priority this should take. I assume tho that it would be P2 or lower (for 2022 probably) given that we'll need to wrap up the UE passes on https://github.com/dotnet/AspNetCore.Docs/issues/19286 and get through all of the new .NET 6 coverage on https://github.com/dotnet/AspNetCore.Docs/issues/22045. The current workload is fairly heavy at this time. 🗻⛏️😅 @rahul7720 ... I received guidance from management on this subject. They say that by default moving forward we probably won't document anything related to Identity Server that falls outside of the scenarios described by the Blazor WASM IdS topic. 
For product support for your scenario, work with various public and private IdS support channels and the usual public Blazor support channels that we recommend ... Stack Overflow (tag: blazor) ASP.NET Core Slack Team Blazor Gitter
2025-04-01T06:38:25.607133
2021-06-28T02:04:29
931091329
{ "authors": [ "NTaylorMullen" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5426", "repo": "dotnet/aspnetcore-tooling", "url": "https://github.com/dotnet/aspnetcore-tooling/pull/3863" }
gharchive/pull-request
Add better HTML & Razor editing, commenting & smart indent support Removed C# components that that would asynchronously auto-insert bits. This operation is now synchronous. Updated TagHelperCompletion to not provide component completion at <!- (beginning of an HTML comment) Enables: < => <|>, NOTE: This is one of the largest changes here, ultimately it makes the editing more seamless and significantly faster <!-- =><!----> @*|*@ On Enter => @* | *@ <tagName>|</tagName> On Enter => <tagName> | </tagName> Block commenting highlighting for Razor comments Before (The key takeaway here is that it's significantly slower) After Found issues: Brace navigation on @* or *@ doesn't work. Issue Cascading auto-completes does not work. Issue OnEnterRule Specification Fixes dotnet/aspnetcore#33897 /cc @ToddGrun @jimmylewis Lots of language-configuration.json goodness here. One interesting thing to call out is the new way to create HTML tags. Upon typing < in an HTML context you get put at <|>. I found this to feel quite natural, faster and also reduced number of stray errors. Was there a reason this wasn't done in the older editor that I'm unaware of?
2025-04-01T06:38:25.609920
2021-07-07T07:12:58
938567533
{ "authors": [ "NTaylorMullen", "allisonchou" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5427", "repo": "dotnet/aspnetcore-tooling", "url": "https://github.com/dotnet/aspnetcore-tooling/pull/3927" }
gharchive/pull-request
Update .editorconfig naming for instance fields I ran into this many times when working on https://github.com/dotnet/aspnetcore-tooling/pull/3808. Before, the below example would try to generate the field errorReporter instead of _errorReporter. Now it generates the correct naming style: Ooo and this would probably be encompassed by https://github.com/dotnet/aspnetcore/issues/23812 ? @NTaylorMullen yep! Although currently this is only for instance fields. I'm not sure what the intended naming style is for static fields, although Roslyn currently prefixes them with an s_
2025-04-01T06:38:25.766872
2019-05-08T20:33:14
441919189
{ "authors": [ "Eilon", "TW8B", "davidfowl", "karelz" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5428", "repo": "dotnet/core", "url": "https://github.com/dotnet/core/issues/2695" }
gharchive/issue
SignalR JS client example doesn't work. (solution within) aspnetcore/signalr/javascript-client/sample/wwwroot/js/chat.js requires an iife around start to start. The sample appears to be based off the JS SignalR tutorial or vice versa, but with substantive differences around things like use or not of async. I'm happy to submit a pull request once I'm on a machine that isn't locked down. (async function start() { try { await connection.start(); console.log("connected"); } catch (err) { console.log(err); setTimeout(() => start(), 5000); } })(); It was the only change required to make the thing run. There were some other issues with things like bootstrap being missing, but I didn't dig into that. Again, happy to have a poke. I'm using VS 2019 latest. Cheers Edit: I could make the tutorial match or not match the sample, perhaps in a separate issue? Edit: I could make the tutorial match or not match the sample, perhaps in a separate issue? Our in repo samples aren't really samples so no they shouldn't match. They are mostly for the team's own testing, more like a sandbox while the devs write code. We have a separate repository for samples and docs that we point at them. @davidfowl so what is the action here? Close the issue? Or are there any follow ups/fixes/changes/investigation we should do first? I tried to move it to asp.net but no go. Move it to aspnet Which aspnet repo? I am using @Eilon's tool for cross-org moves btw: https://hubbup.io/Mover (Transfer issue is only in-org :() the AspNetCore one. This issue was moved to aspnet/AspNetCore#10902
2025-04-01T06:38:25.771107
2019-06-21T03:49:23
458994382
{ "authors": [ "carlossanlop", "ericstj", "groogiam" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5429", "repo": "dotnet/core", "url": "https://github.com/dotnet/core/issues/2916" }
gharchive/issue
Is it possible to create WPF or WinForms Class Library I have a couple of .NET Framework class library projects which provide reusable functionality for WPF and Winforms. These projects currently do so by referencing the .NET Framework assemblies like WindowsBase.dll or System.Windows.Forms. I cannot figure out how to replicate this using .NET Core 3 Preview 6. I tried referencing the Microsoft.WindowsDesktop.App package but got the following error. Is this supported? It seems like it should be. NU1202 Package Microsoft.WindowsDesktop.App 3.0.0-preview6-27804-01 is not compatible with netstandard2.0 (.NETStandard,Version=v2.0). Package Microsoft.WindowsDesktop.App 3.0.0-preview6-27804-01 supports: netcoreapp3.0 (.NETCoreApp,Version=v3.0) @ericstj @joperezr is this an issue you can help with? It's related to .NET version. Let me know if we should instead transfer the issue to the wpf or winforms repos. You can create a class library that uses the WindowsDesktop SDK. <Project Sdk="Microsoft.NET.Sdk.WindowsDesktop"> <PropertyGroup> <TargetFramework>netcoreapp3.0</TargetFramework> <!-- use one of the following or both --> <UseWindowsForms>true</UseWindowsForms> <UseWpf>true</UseWpf> </PropertyGroup> </Project> Essentially it's the same as dotnet new winforms or dotnet new wpf with <OutputType>WinExe</OutputType> removed. @diverdan92 do you know if class-library templates are planned for WPF and WinForms? That seems to do the trick. Thanks for the help. This issue seems to be resolved, so I'm closing it. @groogiam if you need more assistance, feel free to create a new issue, we'll be happy to help.
2025-04-01T06:38:25.783209
2023-09-05T13:03:06
1881939430
{ "authors": [ "buyaa-n", "durgambigai" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5430", "repo": "dotnet/core", "url": "https://github.com/dotnet/core/issues/8738" }
gharchive/issue
error CS8802: Only one compilation unit can have top-level statements. [C:\Users\aaa\MyApp\MyApp.csproj] The build failed. Fix the build errors and run again. Problem encountered on https://dotnet.microsoft.com/en-us/learn/dotnet/hello-world-tutorial/run Operating System: windows error CS8802: Only one compilation unit can have top-level statements. [C:\Users\aaa\MyApp\MyApp.csproj] The build failed. Fix the build errors and run again. That typically happens when you run the dotnet new command twice in different folders. Maybe try starting over?
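To illustrate what the answer above describes: CS8802 is raised when more than one file in the same project ends up with top-level statements, which is easy to do if dotnet new console is run again into a folder the project already includes. A minimal sketch follows; the file names are made up for illustration and assume the default .NET 6+ console template with implicit usings:

// Program.cs
Console.WriteLine("Hello, World!");

// SecondProgram.cs — a hypothetical leftover file that also uses top-level statements.
// Uncommenting the line below in a second file makes the build fail with:
//   error CS8802: Only one compilation unit can have top-level statements.
// Console.WriteLine("Hello again!");
//
// Fix: keep top-level statements in a single file, e.g. delete the extra file
// or move its code into Program.cs.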
2025-04-01T06:38:26.024129
2018-08-06T18:42:10
348038136
{ "authors": [ "JeremyKuhne", "benaadams", "gfoidl", "jaredpar", "jkotas", "verelpode" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5431", "repo": "dotnet/corefxlab", "url": "https://github.com/dotnet/corefxlab/issues/2417" }
gharchive/issue
ArrayElementReference variant of the new Span / Memory Hello, here's an interesting idea. The new "Span" and "Memory" in C# 7.2 could potentially be used to solve the problem of the garbage collection cost of creating a very large quantity of objects. An app could manage GC cost by allocating objects in groups, that is arrays. We can already make an array of structs, but it is rather limited because we cannot make a field that contains a reference to one of these structs in an array. So, what if Memory is used to make such a reference? Memory internally contains _object, _index, _length, but if it is reference to a single struct in an array, then _length == 1. I suggest making a variation of Memory like this: public readonly struct ArrayElementReference<T> { private readonly T[] _array; private readonly int _index; } or like this: public readonly struct ArrayElementReference<T> { private readonly T[] _array; private readonly System.UIntPtr _offsetInBytes; } Likewise, Span contains _pointer and _length and in this case, we don't need _length because it is always 1. Thus make a corresponding variation of Span like this: public readonly ref struct FastArrayElementReference<T> { private readonly ref T _pointer; } or like this: public readonly ref struct FastArrayElementReference<T> { private readonly System.UIntPtr _pointer; } Thus the next version of C# could give us the ability to define a field (in a class or struct) that contains a reference to a struct element in an array, and this is implemented like ArrayElementReference as shown above. When this field is copied to a local variable or method parameter on the stack, then it is converted to FastArrayElementReference as shown above -- the same idea as how Span is the fast version of Memory. Thanks for considering it. Expose the special ByReference<T> runtime type that Span<T> uses? /cc @jkotas ByReference<T> is a workaround for https://github.com/dotnet/csharplang/issues/1147. To clarify the syntactic sugar, C# would allow you to write like the following example of a kind of large tree that uses struct-based nodes allocated in arrays (instead of thousands of individual objects). 
struct ExampleNode { ref ExampleNode Parent; // note ref keyword here ref ExampleNode LeftChild; ref ExampleNode RightChild; int SomePayload1, SomePayload2; } void TestMethod() { ExampleNode[] page1 = new ExampleNode[20000]; ref ExampleNode nodeA = page1[0]; // note ref keyword here ref ExampleNode nodeB = page1[1]; ref ExampleNode nodeC = page1[2]; nodeA.LeftChild = nodeB; nodeA.RightChild = nodeC; nodeB.Parent = nodeA; nodeC.Parent = nodeA; } For the internal implementation, the C# compiler would translate the above syntactic sugar to the following: struct ExampleNode { ArrayElementReference<ExampleNode> Parent; // or Memory<ExampleNode> ArrayElementReference<ExampleNode> LeftChild; ArrayElementReference<ExampleNode> RightChild; int SomePayload1, SomePayload2; } void TestMethod() { ExampleNode[] page1 = new ExampleNode[20000]; FastArrayElementReference<ExampleNode> nodeA = page1[0]; // or Span<ExampleNode> FastArrayElementReference<ExampleNode> nodeB = page1[1]; FastArrayElementReference<ExampleNode> nodeC = page1[2]; nodeA.LeftChild = nodeB; nodeA.RightChild = nodeC; nodeB.Parent = nodeA; nodeC.Parent = nodeA; } Ideally, the Span/FastArrayElementReference optimization would be implemented, but if this optimization is impossible, then the proposed syntactic sugar would still be quite useful even if it only uses Memory/ArrayElementReference without ever being optimized like Span. If it is unoptimized, then the C# compiler generates: struct ExampleNode { ArrayElementReference<ExampleNode> Parent; // or Memory<ExampleNode> ArrayElementReference<ExampleNode> LeftChild; ArrayElementReference<ExampleNode> RightChild; int SomePayload1, SomePayload2; } void TestMethod() { ExampleNode[] page1 = new ExampleNode[20000]; ArrayElementReference<ExampleNode> nodeA = page1[0]; // or Memory<ExampleNode> ArrayElementReference<ExampleNode> nodeB = page1[1]; ArrayElementReference<ExampleNode> nodeC = page1[2]; nodeA._array[nodeA._index].LeftChild = nodeB; nodeA._array[nodeA._index].RightChild = nodeC; nodeB._array[nodeB._index].Parent = nodeA; nodeC._array[nodeC._index].Parent = nodeA; } I said "unoptimized" but actually the above "unoptimized" version is still much faster than garbage-collecting 20 thousand individual objects (when ExampleNode is class instead of struct). Thus it's interesting to note that the proposed syntactic sugar is still very good even if it isn't optimized as well as the Memory + Span combo. And if it is optimized as well as Span, then I think it would be a killer new feature. Thanks jkotas for linking to dotnet/csharplang#1147. Unity is a good example because they want to avoid game frame rate stalls caused by GC, and they do this by using the unsafe keyword, but understandably they want to use safe code. I've proposed a solution that eliminates the unsafe code without triggering the problem of frame rate stalls / high GC cost. @benaadams -- I think you already know what I'm about to write, so this message is actually for other readers. ByReference<T> alone would be insufficient. Two special structs (ArrayElementReference and FastArrayElementReference) are required for the same reason why Memory<T> and Span<T> cannot be merged together into a single struct definition. However, if the "unoptimized" (actually the less optimized) version of my proposal is implemented, then only Memory/ArrayElementReference<T> is needed, and Span/ByReference/FastArrayElementReference<T> is unused. 
If my proposal is implemented using only one special struct, then that struct must be like Memory<T> not like Span/ByReference<T>, but ideally (if possible) my proposal would be optimized as well as the Memory<T> + Span<T> combo meaning 2 special structs. It is important to understand that in the fully-optimized scenario, the C# compiler generates different IL code for the same syntax "ref ExampleNode x" when... when x is a field in a struct or class in the heap, versus when x is a local variable in a method, a variable on the stack. However if the less optimized solution is implemented, then x as field is the same as x as local variable, meaning akin to Memory<T> in both places. One thing to consider whenever we discuss allowing for explicit ref fields in structs is this part of the span safety rules: https://github.com/dotnet/csharplang/blob/master/proposals/csharp-7.2/span-safety.md#length-one-span-over-ref-values Essentially the span safety rules were designed on the idea that a struct could never contain a ref field. If that can happen then the compiler needs to consider any ref parameter as potentially escaping by-ref to any other ref struct parameter or return. This has a fairly devastating effoct on ref struct APIs ref struct S { internal void M(ref int p) { ... } } At this point the compiler must consider that the p parameter can escape by-ref into this. Hence at the callsite both must have the exact same lifetime.: void Method(ref S s) { int i = 42; s.M(ref i); // Error!!! } That example may seem a bit contrived but it's essentially every Read / Write API that we have on ref struct. Overall it makes the system unusable. As a result we ended up adding this language to the spec in order to make these APIs doable. If we wanted to add ref fields in the future we'd need to account for it by doing one of the following: Safety rules would need to distinguish between ref struct that have ref fields and those that don't. Add some notation for disallowing certain parameters / values from being captured. Essentially a way to mark p above as "don't allow capture by ref". These are all doable but it's work and needs quite a bit of working out. CC @JeremyKuhne as I know he's interested in (1) above. @jkotas ByReference is a workaround for dotnet/csharplang#1147. C# also doesn't allow ref fields because there really isn't a way to define them in IL. If we did them it would likely be via emitting ByReference<T>. C# also doesn't allow ref fields because there really isn't a way to define them in IL. This feature would work on new runtimes only. We can choose how to make this feature work on the new runtimes. I think relaxing the restriction on byref fields in IL would be the most natural design. This feature would work on new runtimes only. Definitely. If we added support for ref fields then I think it has to be CoreClr only due to the GC tracking issues. The desktop GC wouldn't treat such fields as a strong reference and hence allowing it there would open up fun GC holes. Correct? I think relaxing the restriction on byref fields in IL would be the most natural design. What would happen to Span<T> then? Would it's implementation just be changed to be essentially: ref struct Span<T> { ref T Data; int Length; } CC @JeremyKuhne as I know he's interested in (1) above. I'm more interested in (2) actually. :) Add some notation for disallowing certain parameters / values from being captured. Essentially a way to mark p above as "don't allow capture by ref". 
For me the driving scenario is passing Span<byte> buffer = stackalloc byte[64] to ref struct methods so they can manipulate/access the parameter without risk of capturing it. @JeremyKuhne I'm more interested in (2) actually. :) Yes of course 😦. This is the downside of using the "all 1." scheme for numbered lists. What would happen to Span then? Would it's implementation just be changed to be essentially: ref struct Span { ref T Data; int Length; } That makes good sense in my mind, because, honestly, from my perspective, although Span and Memory are great, personally I view references-to-structs as being fundamentally more important and more intrinsic than spans/ranges/extents/substrings, therefore I think it makes good sense if the REAL/core feature is ref T x fields and Span<T> becomes only a thin extension to ref fields that merely adds the length, and the real magic would be done in the implementation of ref T x fields and not in (or for) Span<T>. This design appears to be a clean and logical layering of functionality. Considering that references are such a very fundamental thing, it also makes good sense in my mind that ref fields would be emitted as byref fields in IL directly, not emitted as ByReference<T> or other special struct, but if necessary, emitting a special struct is also a workable solution. Ideally I'd favor having IL directly support this feature, but I'm not the expert in CLR. I think it would be an impressively big win for C# if 20000 instances can be garbage-collected with the same cost as 1 object (or rather 1 array). For another example of potential wins, have a look at System.Collections.Generic.SortedSet<T> and SortedDictionary<TKey,TValue> and you can see they contain an internal Node class: internal class Node { public bool IsRed; public T Item; public Node Left; public Node Right; } This Node class could be changed to a struct: internal struct Node { public bool IsRed; public T Item; public ref Node Left; public ref Node Right; } Even if SortedSet and SortedDictionary don't bother adjusting the page/array length dynamically, even if they simply use a constant page length of 10, the number of objects would be reduced to one-tenth of the current implementation with class Node. For another example, have a look at the internal struct MS.Internal.Xml.Cache.XPathNode. Although it is already optimized to a struct instead of class, imagine how much easier, simpler, and cleaner it would have been to write XPathNode if C# allowed XPathNode to contain ref XPathNode x; fields. I can think of numerous examples that would benefit from this feature. C# if 20000 instances can be garbage-collected with the same cost as 1 object (or rather 1 array) The garbage collection cost is proportional to the bytes allocate and collected. The number of objects matters much less. System.Collections.Generic.SortedSet<T> and SortedDictionary<TKey,TValue> MS.Internal.Xml.Cache.XPathNode The ref fields would be allowed in ref-like types only. I do not think converting the structs in these two examples would work in practice. The ref fields would be allowed in ref-like types only. I do not think converting the structs to ref-like struct in these two examples would work in practice. I agree, converting those two examples to stack-only/ref-like structs would not work, but in my proposal, I meant that a NORMAL struct and a normal class would be able to contain a field that contains a reference to some other struct instance stored in an array. 
My proposal is already possible to do today in the current C#, except that: The syntax is cumbersome. It would be better with syntactic sugar and/or direct support in IL. It could potentially be optimized further, but this optimization is not mandatory. It already works without use of ref struct, ByReference<T>, and Span<T>. It doesn't need any magic structs or special new IL, except if you want to optimize it. For example, consider the SortedSet<T>.ReplaceChildOfNodeOrRoot method. Following I've rewritten this method to use struct Node instead of class Node. The following is how it looks with regular C#, without any new syntactic sugar. As you can see, it already works but the syntax is cumbersome -- it would be better with syntactic sugar and/or direct support in IL. struct ArrayElementReference<T> { readonly T[] Array; readonly int Index; public static bool operator == (ArrayElementReference<T> a, ArrayElementReference<T> b) { return a.Index == b.Index && a.Array == b.Array; } } // struct ArrayElementReference<T> class SortedSet<T> { ArrayElementReference<Node> root; struct Node // NORMAL struct not "ref struct". { bool IsRed; T Item; ArrayElementReference<Node> Left; ArrayElementReference<Node> Right; } void ReplaceChildOfNodeOrRoot(ArrayElementReference<Node> parent, ArrayElementReference<Node> child, ArrayElementReference<Node> newChild) { if (parent.Array != null) { if (parent.Array[parent.Index].Left == child) parent.Array[parent.Index].Left = newChild; else parent.Array[parent.Index].Right = newChild; } else { this.root = newChild; } } static void TestSetLeft(ArrayElementReference<Node> parent, ArrayElementReference<Node> newChild) { parent.Array[parent.Index].Left = newChild; } } // class SortedSet<T> I compiled it using the existing C# compiler and it emits the following IL for the TestSetLeft method: .method static void TestSetLeft ( valuetype ArrayElementReference`1<valuetype Node<!T>> parent, valuetype ArrayElementReference`1<valuetype Node<!T>> newChild ) cil managed { .maxstack 8 ldarg.0 ldfld !0[] valuetype ArrayElementReference`1<valuetype Node<!T>>::Array ldarg.0 ldfld int32 valuetype ArrayElementReference`1<valuetype Node<!T>>::Index ldelema valuetype Node<!T> ldarg.1 stfld valuetype ArrayElementReference`1<valuetype Node<!0>> valuetype Node<!T>::Left ret } So it already works, but how about making some syntactic sugar and/or better IL? Wouldn't it be excellent if the C# compiler allowed us to write the same thing using the following simple syntax? class SortedSet<T> { ref Node root; struct Node // STILL NORMAL struct not "ref struct". { bool IsRed; T Item; ref Node Left; ref Node Right; } void ReplaceChildOfNodeOrRoot(ref Node parent, ref Node child, ref Node newChild) { if (parent != null) { if (parent.Left == child) parent.Left = newChild; else parent.Right = newChild; } else { this.root = newChild; } } static void TestSetLeft(ref Node parent, ref Node newChild) { parent.Left = newChild; } } // class SortedSet<T> The above syntax could produce the same IL as already supported (the IL above). Alternatively, if desired, it could be optimized to produce IL similar to the following, if you're willing to extend IL to support the following "arrayelemref" or similar. 
.method static void TestSetLeft ( arrayelemref Node<!T> parent, arrayelemref Node<!T> newChild ) cil managed { ldarg.0 ldarg.1 stfld arrayelemref Node<!T> valuetype Node<!T>::Left ret } The following IL shows the struct parameters passed by reference (IL &), but this fails because these parameters pass only a pointer/address, but the method TestSetLeft needs to know both ArrayElementReference.Array and ArrayElementReference.Index in order to set the field Node.Left because Node.Left needs to store ArrayElementReference<Node> not only a pointer/address, because struct Node is a normal struct not ref struct Node {...}. .method static void TestSetLeft ( valuetype Node<!T>& parent, valuetype Node<!T>& newChild ) cil managed { ldarg.0 ldarg.1 stfld Node<!T> Node<!T>::Left // Incorrect. ret } readonly T[] Array; readonly int Index; This is turning one pointer into pointer + index. In practice, it will be two pointers due to alignment. So this would result into more bytes being allocated on GC heap. It is very unlikely to make anything faster. We do not want to allow storing refs on GC heap because of they are very expensive to deal with during garbage collection. If we have allowed it and people started using them, we would have a big problem with GC pause times. It is very unlikely to make anything faster. If it doesn't make anything faster, then why do Dictionary<TKey,TValue> and XPathNode and other examples do it? Dictionary<TKey,TValue>.Entry is a struct. It would be easier and simpler to write Entry as a class, but the cost would be excessive, so the authors of Dictionary wrote it as a struct. Entry.next is meant to be a reference to another instance of struct Entry, but C# or the CLR doesn't support it. I'm not saying Dictionary should be rewritten, rather I just mean it's one example of reducing cost by using structs instead of classes, but currently it is often a headache to use struct instead of class because structs cannot contain references to each other. My proposal gives structs the power of classes but with lower cost, doesn't it? Don't you think SortedSet<T>.Node would be lower cost if it was changed to a struct akin to what Dictionary does? We do not want to allow storing refs on GC heap because of they are very expensive to deal with during garbage collection. But in my proposal, most of these references are self-references -- they point to themselves. Most instances of Node.Left.Array and Node.Right.Array point to the same array object that contains the Node instance. When traversing a graph for GC purposes, if you are currently at object X1, and X1 contains a field that points to X1 (points to itself), then you don't follow or traverse into this link -- you ignore this link. Ignoring a reference isn't expensive, right? Or am I mistaken? If it doesn't make anything faster, then why do Dictionary<TKey,TValue> and XPathNode and other examples do it? Right, they use indices. Indices are cheap both for storage and GC. Ignoring a reference isn't expensive, right? Or am I mistaken? Finding the object that the ref points to is the expensive part. This would need to be done before you can check that the ref is safe to ignore because of it points into the same array. Damn. Would you like the idea better or worse if ArrayElementReference.Array had a way of indicating self-reference without actually storing the address of the self-array? For example, if the address 1 or (UIntPtr)-1 was interpreted to mean self. 
Alternatively, when ArrayElementReference.Index is negative and ArrayElementReference.Array is null, then it could mean self. I haven't benchmarked the above idea, but I did do a benchmark without the above idea, and my proposal was faster than class Node, but underwhelming and insufficiently compelling :-( Apparently my proposal needs to be adjusted in some way before it could be sufficiently compelling. Hey, I just noticed this: Try compiling the following short program in VS 15.7.6 and when you run it, it throws System.TypeLoadException! Who is the best person to inform about this bug? Seems like a serious bug. class Program { static void Main(string[] args) { Test(); } static SNode Test() { return new SNode(); } struct ArrayElementReference<T> { public T[] Array; public int Index; } struct SNode { ArrayElementReference<SNode> x; } } Unhandled Exception: System.TypeLoadException: Could not load type 'TestConsoleApp.Program+SNode' from assembly 'TestConsoleApp, Version=<IP_ADDRESS>, Culture=neutral, PublicKeyToken=null'. at TestConsoleApp.Program.Main(String[] args) Know issue: https://github.com/dotnet/coreclr/issues/7957 On second thought, let me tell you the exact benchmark results, and you decide for yourself whether it is a worthwhile improvement. I said "underwhelming" because I was expecting my proposal to be at least 10x faster but it was 3x to 4x faster in this test. Is that worthwhile? More importantly, would it solve the problem for the Unity game engine? Can my proposal be further improved or optimized somehow? I allocated 51_000_000 nodes. Each node contains 2 node references and 48 bytes of integers. When using class Node, it ran for 9.3 seconds. When using struct Node with my proposal, it ran for 2.5 seconds (3.7 times faster). I made no attempt to optimize self-references. The times include the time taken to run: System.GC.Collect(System.GC.MaxGeneration, System.GCCollectionMode.Forced, blocking: true, compacting: true); In both tests, System.GC.GetTotalMemory returns approx 3891 megabytes before GC.Collect. Would anyone like me to email the source code or upload somewhere? Interesting -- a significant part of the reason why my proposal is faster is revealed by System.GC.CollectionCount. After running the benchmark with class Node: Gen 0 collected 649 times. Gen 1 collected 338 times. Gen 2 collected 9 times. After running the benchmark with struct Node: Gen 0 collected 2 times. Gen 1 collected 2 times. Gen 2 collected 2 times. When I reduced the page/array length to 10_000 nodes (causing more arrays to be allocated), the time increased slightly to 2.8 seconds and the number of collections increased slightly: Gen 0 collected 6 times. Gen 1 collected 6 times. Gen 2 collected 6 times. Thus my proposal might solve the problem for the Unity game engine because my proposal causes garbage collection to run much less often. Admittedly it's less beneficial than I presumed it would be. Is it worthwhile? I've brainstormed a few alternative solutions that are also quite interesting to think about: What if a class can have an attribute applied that says that the class should use automatic-reference-counting (like in .NET Native or like in the latest version of MS C++ for UWP apps) instead of the normal CLR GC? MS C++ automatic-reference-counting still suffers the problem of cyclic references, doesn't it? Therefore it would ONLY be used for classes that explicitly request it via an attribute. 
This would allow people to create particular classes that don't cause garbage collections to run, without resorting to unsafe code. Another alternative: If we could somehow allocate a group of objects inside a "GC compartment/container", and the objects inside a compartment are GC'd on an all-or-nothing basis instead of individually. Any reference to such an object causes all objects in the compartment to be kept alive. Possibly a compartment remains ineligible for GC until the app explicitly/manually marks it as eligible, such as in a IDisposable.Dispose method or in a finalizer. What if the CLR supported an ability to allocate a read-only array of class instances where each element of the array is immediately non-null and cannot be changed to any other reference? System.Array.IsReadOnly == true. These class instances (array elements) may be GC'd on an all-or-nothing basis similar to an array of structs. This feature might only be compatible with classes that have a particular attribute applied, and perhaps such classes can ONLY be allocated in this array manner, never individually. [System.ArrayElementClass] // Means class is restricted to existing as an array element. class MyTest1 { int TestFieldA; MyTest1 Parent; } MyTest1[] ary = new MyTest1[10000]; ary[0].TestFieldA = 123; // Wouldn't cause NullReferenceException. ary[0] = xxxx; // Throws ReadOnlyException. bool b = ary.IsReadOnly; // Returns true. ary[0].Parent = ary[5]; // Supported, unlike if MyTest1 was struct. MyTest1 individualInstance = new MyTest1(); // Disallowed. Every instance of a ArrayElementClass class would contain a CLR-internal read-only field/header that stores the array index or byte offset of the instance/element. Thus if you have a pointer to an instance of an ArrayElementClass, then you can instantly calculate a pointer to the array that contains the instance of the class. i.e. you can instantly recover the array pointer from an element pointer. The GC wouldn't track/determine the reachability of each element/instance of ArrayElementClass, rather it would determine the reachability of the array, akin to how it treats an array of structs. If we could somehow allocate a group of objects inside a "GC compartment/container", and the objects inside a compartment are GC'd on an all-or-nothing basis instead of individually Similar to https://github.com/dotnet/corefx/issues/31643
2025-04-01T06:38:26.032033
2019-12-19T20:31:33
540546594
{ "authors": [ "pgovind", "zHaytam" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5432", "repo": "dotnet/corefxlab", "url": "https://github.com/dotnet/corefxlab/pull/2807" }
gharchive/pull-request
Add Applymethod to PrimitiveDataFrameColumn For issue #2805. This PR adds an Apply<TResult> method to PrimitiveDataFrameColumn that takes a Func<T?, TResult?> and returns a new column with the new type. Example usage (taken from the written unit test): int[] values = { 1, 2, 3, 4, 5 }; var col = new PrimitiveDataFrameColumn<int>("Ints", values); PrimitiveDataFrameColumn<double> newCol = col.Apply(i => i + 0.5d); Squash and merging this. Thank you @zHaytam for the patch!
2025-04-01T06:38:26.033270
2017-11-03T05:10:04
270880267
{ "authors": [ "morganbr" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5433", "repo": "dotnet/corert", "url": "https://github.com/dotnet/corert/issues/4863" }
gharchive/issue
String.get_Length returns 0 String.get_Length started returning 0 after #4808 , which breaks printing strings and might indicate other problems. It looks like pinning a string and printing it character-by-character still works, so it could be an issue in the frozen string's length or in reading instance fields. I've verified that the frozen string's length is in the right place by pinning it and working backward from the pointer. This is probably an issue calling instance methods or reading instance fields.
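For context, a minimal sketch of the kind of character-by-character workaround mentioned above (not the actual repro code from the investigation); it assumes an unsafe context and relies on strings being null-terminated in memory, so it prints without ever calling String.get_Length:

using System;

internal static class StringDump
{
    // Prints a string by pinning it and walking characters until the null terminator,
    // avoiding String.get_Length entirely (truncates if the string contains an embedded '\0').
    public static unsafe void Print(string s)
    {
        fixed (char* start = s)
        {
            for (char* c = start; *c != '\0'; c++)
            {
                Console.Write(*c);
            }
        }
        Console.WriteLine();
    }
}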
2025-04-01T06:38:26.034358
2018-09-11T02:37:25
358864197
{ "authors": [ "MichalStrehovsky", "morganbr" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5434", "repo": "dotnet/corert", "url": "https://github.com/dotnet/corert/issues/6316" }
gharchive/issue
CoreCLR R2R testing building against CoreRT framework While testing R2R on an IJW assembly, I saw a failure due to a missing framework method (Marshal.GetExceptionPointers). That method is in the CoreCLR framework, but not in CoreRT. While they should be very similar, we should really test against the CoreCLR framework to catch discrepancies. Simon fixed this.
2025-04-01T06:38:26.095905
2019-04-05T20:59:39
429932966
{ "authors": [ "bartlomiej-dawidow", "tdykstra" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5435", "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/issues/11697" }
gharchive/issue
The two sentences contradict each other. Document Details ⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking. ID: d8948cf5-0aec-308c-8046-965cc15763b2 Version Independent ID: a30a0be5-b741-2a9c-e997-55d6d9960ca9 Content: Fundamentals of garbage collection Content Source: docs/standard/garbage-collection/fundamentals.md Product: dotnet Technology: dotnet-standard GitHub Login: @rpetrusha Microsoft Alias: ronpet @bartlomiej-dawidow Thanks for reporting that -- I agree the language was confusing and I've submitted PR #14320 to clarify it.
2025-04-01T06:38:26.099248
2016-12-15T16:47:44
195855775
{ "authors": [ "blackdwarf", "cartermp", "dmccaffery", "mairaw", "sbaid" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5436", "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/issues/1339" }
gharchive/issue
Add more documentation for dotnet test and dotnet vstest for Preview 3+ bits There are certain features that do not exist in the dotnet test and dotnet vstest docs that need to be added, such as the ability to use .runsettings files, how filtering of tests works, etc. /cc @mairaw is this partly covered here: https://aka.ms/vstest-filtering? I also noticed that we don't have dotnet vstest documented. I'll add a new issue for that. Is code coverage supported outside of Windows yet? That's another good thing to know. @dmccaffery No, I don't believe it is. @cartermp :feelsgood: The https://aka.ms/vstest-filtering link now points to a doc page /cc @samadala Closing this one in favor of #2382 that has specific action items for dotnet test docs.
2025-04-01T06:38:26.107358
2019-12-28T13:33:59
543167612
{ "authors": [ "BillWagner", "Youssef1313", "viniciusvw22" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5437", "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/issues/16430" }
gharchive/issue
is it possible to declare a variable static and initialize it later on? Hi to all, is it possible to first declare a variable static and then, on the following lines of code, to initialize this variable? I don't know if this is the proper place for this question, but I'm asking because I'd like to initialize the variable to 0, and every time the procedure is called, the value gets back to 0. Since the procedure will be called many times, I don't want to redeclare and reinitialize the variable so that I'll be preventing overhead on my application. Thanks Document Details ⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking. ID: 0b593e2c-d31c-394b-bffd-92ffdb6d6a1e Version Independent ID: 6b4ed15d-ff3a-6392-0204-6c2436603302 Content: Static - Visual Basic Content Source: docs/visual-basic/language-reference/modifiers/static.md Product: dotnet-visualbasic GitHub Login: @KathleenDollard Microsoft Alias: kdollard Why just declare it normally, without the Static keyword ? Dim yourVariable As Integer = 0 I want to declare static because so that the variable doesn't have to be redeclared and reinitialized on every procedure call (there will be many). It will do be collected by the GC according to this article (https://docs.microsoft.com/en-us/dotnet/visual-basic/programming-guide/language-features/declared-elements/lifetime) - because the procedure is not Shared. I still feel that the first approach without Static is much better. But anyways, I've shown the two options. You may need to ask in Stackoverflow on which option is better and why, or wait for someone here who can explain that in deep details. Well, I want to do this: Static myVariable As Integer myVariable = 0 ' block using myVariable ' end block ' variable has to get back to 0 on the next procedure call. I prefer using my option if it no difference on the application performance. Thank you for your reply. @KathleenDollard Can you provide the deeper details on how VB manages storage for local variables declared static? My thought is that there is no real performance improvement in this case, because the type of Counter is a value type. Is that analysis correct?
2025-04-01T06:38:26.127549
2018-10-22T09:26:02
372450106
{ "authors": [ "Thraka", "kwlin", "mairaw", "mikkelbu" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5438", "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/issues/8545" }
gharchive/issue
Which element should be used in the config file according to the example? In the example the opening element is system.identityModel but the closing tag is microsoft.identityModel. Which one should be used? Document Details ⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking. ID: 0889facb-5306-2161-9186-9d138512ce48 Version Independent ID: 2315afd9-eb82-5974-8654-37b9914cb394 Content: <claimsAuthenticationManager> Content Source: docs/framework/configure-apps/file-schema/windows-identity-foundation/claimsauthenticationmanager.md Product: dotnet-framework GitHub Login: @BrucePerlerMS Microsoft Alias: dotnetcontent Looks like a duplicate of #8544 Closing as duplicate. Added the duplicate label.
2025-04-01T06:38:26.129607
2017-07-29T01:16:42
246489962
{ "authors": [ "mairaw", "rpetrusha" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5439", "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/pull/2780" }
gharchive/pull-request
move serialization docs Fixes Part 1 of #2770 @rpetrusha good catch on those links! Even though the comments were not really related to moving the docs, I made the fixes. Please review. I've done some global fixes, so the number of files impacted is probably bigger than the number of files you've given feedback to. Thanks for making the additional changes, @mairaw. This looks really good. It's ready to merge when you want to. I noticed I'd missed one of the serializer msdn links (my VS Code was super slow yesterday). Fixed that one and I'll merge if the builds look good. Thanks @rpetrusha!
2025-04-01T06:38:26.137767
2022-10-08T15:25:26
1401979289
{ "authors": [ "BillWagner", "IEvangelist", "RikkiGibson", "SunnieShine", "Youssef1313" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5440", "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/pull/31660" }
gharchive/pull-request
Move file keyword into contextual keyword list C# 11 declares a new feature using a new keyword file to restrict accessibility. It should be a contextual keyword, not a predefined keyword. Page: C# Keywords Content source: docs/csharp/language-reference/keywords/index.md This is my first PR. If there is something wrong with my operation or modification, please tell me :D I think this is an access modifier, like private or public, and it scopes the access to the type within the file itself. Therefore, I believe this belongs next to those other access modifiers. @BillWagner would know for sure. The speclet doesn't explicitly say if file is a keyword or a contextual keyword. I think it's a contextual keyword. Tagging @RikkiGibson to make sure. file is a contextual keyword, but types named file are blocked from being declared in the language. We think it's important for people to still be able to declare a variable like Stream file = ...; hi everybody :) I found this: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/file This passage mentions that file is a contextual keyword. Beginning with C# 11, the file contextual keyword is a type access modifier. I believe the Roslyn team has always avoided saying that file is an access modifier. @RikkiGibson Could you confirm, please? Thanks! Yeah, we use the term file-local instead of referring to file accessibility. We also avoid saying file scope because we don't want the feature to be compared to file-scoped namespaces. I would like for us to revisit our language around accessibility; it seems like what we've come up with really doesn't match how users think of it, but that decision should be made by the LDM, and for now the docs should probably reflect the existing design.
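A minimal C# 11 sketch of the distinction discussed in this thread — file acting as a type access modifier on a declaration while remaining an ordinary identifier elsewhere. The type and member names here are hypothetical, not from the PR:

```csharp
using System.IO;

// "file" is a type access modifier here: Widget is visible only to code
// in this same source file (a "file-local" type).
file class Widget
{
    // Elsewhere, "file" is still just an identifier, so declarations such as
    // a parameter or local named "file" keep compiling.
    public void Save(Stream file)
    {
        file.WriteByte(0);
    }
}
```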
2025-04-01T06:38:26.140221
2022-12-05T23:38:30
1477711442
{ "authors": [ "QuintinWillison", "gewarren" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5441", "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/pull/32887" }
gharchive/pull-request
Type discriminator order Fixes #32789. It's only this morning that I've come across this page in the documentation and given it a read through... I find it really surprising and disappointing that the words "at the start of the JSON object" were conceived and approved -- and, furthermore, that this polymorphic feature was ever released in the fragile form it has been! A JSON object is an unordered collection of name/value pairs (see json.org). This feature is not standards compliant and cannot be guaranteed to survive intermediary processing that uses JSON I/O. I can't imagine how many software systems are going to be badly architected by those trusting the underlying data encodings because this was "rubber stamped" by the official .NET / Microsoft team. 🤯
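For reference, a minimal sketch of the polymorphic serialization being debated, using the System.Text.Json attributes available in .NET 7+; the type names are hypothetical. It illustrates why the discriminator's position matters: the serializer writes "$type" as the first property, and, as the comment above objects to, the deserializer expects to find it at the start of the object, so intermediaries that reorder properties can break round-tripping.

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

[JsonPolymorphic(TypeDiscriminatorPropertyName = "$type")]
[JsonDerivedType(typeof(Circle), typeDiscriminator: "circle")]
public abstract class Shape { }

public sealed class Circle : Shape
{
    public double Radius { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        Shape shape = new Circle { Radius = 2.5 };

        // Serialization emits the discriminator first:
        // {"$type":"circle","Radius":2.5}
        Console.WriteLine(JsonSerializer.Serialize(shape));

        // Deserialization looks for "$type" at the start of the JSON object,
        // which is the ordering requirement discussed in the thread.
        Shape roundTripped = JsonSerializer.Deserialize<Shape>(
            "{\"$type\":\"circle\",\"Radius\":2.5}")!;
        Console.WriteLine(roundTripped is Circle); // True
    }
}
```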
2025-04-01T06:38:26.207734
2021-06-23T19:46:24
928584470
{ "authors": [ "jander-msft", "shirhatti" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5442", "repo": "dotnet/dotnet-monitor", "url": "https://github.com/dotnet/dotnet-monitor/issues/491" }
gharchive/issue
Diagnostic port connect mode may not work in Kubernetes Mounting the /tmp path between containers and using connect mode (the default mode) may not discover processes correctly. I validated with the customer that /tmp was mounted and that the event pipe socket from the application container was visible to dotnet-monitor. This target application is .NET 5; we were able to avoid the issue by using listen mode. Was there anything interesting about the volume? Was it an emptyDir or something else? It was an emptyDir if I recall correctly. I'm going to try to reproduce the issue myself and see what happens.
2025-04-01T06:38:26.249284
2021-02-19T00:34:21
811584135
{ "authors": [ "beccamc", "vzhuqin" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5443", "repo": "dotnet/machinelearning-modelbuilder", "url": "https://github.com/dotnet/machinelearning-modelbuilder/issues/1245" }
gharchive/issue
New Advanced data options dialog Label shouldn't be selectable twice Alignment is off in this control :blush: Alignment fixed in https://github.com/dotnet/machinelearning-tools/pull/913 This issue can be reproduced in this environment: Windows 10 Enterprise, Version 20H2 ML.Net Model Builder (Preview): <IP_ADDRESS>5505 Microsoft Visual Studio Enterprise 2019: 16.9.4 .Net: 5.0.202 Dataset: https://testpass.blob.core.windows.net/test-pass-data/issues.tsv.txt This issue can't be reproduced on: Windows 10 Enterprise, Version 20H2 ML.Net Model Builder (Preview): <IP_ADDRESS>2301 Microsoft Visual Studio Enterprise 2019: 16.9.4 .Net: 5.0.202 Main branch: https://privategallery.blob.core.windows.net/gallery/refs/heads/main/atom.xml Steps: Create a new C# console app with .Net 5.0; Add Model Builder by right-clicking on the project; Navigate to the Data page after selecting the scenario and environment; Open the Advanced data options dialog; the "Save" button remains disabled if you select the "Label" purpose for multiple columns.
2025-04-01T06:38:26.267443
2021-11-13T01:13:34
1052507372
{ "authors": [ "beccamc", "johndohoneyjr", "zewditu" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5444", "repo": "dotnet/machinelearning-modelbuilder", "url": "https://github.com/dotnet/machinelearning-modelbuilder/issues/1915" }
gharchive/issue
ML Sentiment Model does not populate score Microsoft Visual Studio Community 2022 Version 17.1.0 Preview 1.0 VisualStudio.17.Preview/17.1.0-pre.1.0+31903.286 Microsoft .NET Framework Version 4.8.04161 Installed Version: Community .NET Core Debugging with WSL 1.0 .NET Core Debugging with WSL ADL Tools Service Provider 1.0 This package contains services used by Data Lake tools ASA Service Provider 1.0 ASP.NET and Web Tools 2019 <IP_ADDRESS>55 ASP.NET and Web Tools 2019 ASP.NET Web Frameworks and Tools 2019 <IP_ADDRESS>55 For additional information, visit https://www.asp.net/ Azure App Service Tools v3.0.0 <IP_ADDRESS>55 Azure App Service Tools v3.0.0 Azure Data Lake Tools for Visual Studio 2.6.4000.0 Microsoft Azure Data Lake Tools for Visual Studio Azure Functions and Web Jobs Tools <IP_ADDRESS>55 Azure Functions and Web Jobs Tools Azure Stream Analytics Tools for Visual Studio 2.6.4000.0 Microsoft Azure Stream Analytics Tools for Visual Studio C# Tools 4.1.0-1.21551.6+e4419d6f6792da7011c8589ba118f59d830ca72f C# components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used. Common Azure Tools 1.10 Provides common services for use by Azure Mobile Services and Microsoft Azure Tools. Cookiecutter 17.0.21295.2 Provides tools for finding, instantiating and customizing templates in cookiecutter format. Fabric.DiagnosticEvents 1.0 Fabric Diagnostic Events Microsoft Azure Hive Query Language Service 2.6.4000.0 Language service for Hive query Microsoft Azure Service Fabric Tools for Visual Studio 17.0 Microsoft Azure Service Fabric Tools for Visual Studio Microsoft Azure Stream Analytics Language Service 2.6.4000.0 Language service for Azure Stream Analytics Microsoft Azure Tools for Visual Studio 2.9 Support for Azure Cloud Services projects Microsoft JVM Debugger 1.0 Provides support for connecting the Visual Studio debugger to JDWP compatible Java Virtual Machines Microsoft Library Manager 2.1.134+45632ee938.RR Install client-side libraries easily to any web project Microsoft MI-Based Debugger 1.0 Provides support for connecting Visual Studio to MI compatible debuggers Microsoft Visual Studio Tools for Containers 1.2 Develop, run, validate your ASP.NET Core applications in the target environment. F5 your application directly into a container with debugging, or CTRL + F5 to edit & refresh your app without having to rebuild the container. Node.js Tools 1.5.31027.1 Commit Hash:dac60d9b246a1d6a5daf23d223c933dbe1518465 Adds support for developing and debugging Node.js apps in Visual Studio NuGet Package Manager 6.1.0 NuGet Package Manager in Visual Studio. For more information about NuGet, visit https://docs.nuget.org/ ProjectServicesPackage Extension 1.0 ProjectServicesPackage Visual Studio Extension Detailed Info Python - Profiling support 17.0.21295.2 Profiling support for Python projects. Python with Pylance 17.0.21295.2 Provides IntelliSense, projects, templates, debugging, interactive windows, and other support for Python developers. Razor (ASP.NET Core) <IP_ADDRESS>2601+724154d925d7d9d26ebf8a73a66d5219aa320400 Provides languages services for ASP.NET Core Razor. SQL Server Data Tools 17.0.62110.20190 Microsoft SQL Server Data Tools ToolWindowHostedEditor 1.0 Hosting json editor into a tool window TypeScript Tools 17.0.1029.2001 TypeScript Tools for Microsoft Visual Studio Visual Basic Tools 4.1.0-1.21551.6+e4419d6f6792da7011c8589ba118f59d830ca72f Visual Basic components used in the IDE. 
Depending on your project type and settings, a different version of the compiler may be used. Visual F# Tools 17.1.0-beta.21525.2+46af4a248255f4af2284883f48983fe7dd07a760 Microsoft Visual F# Tools Visual Studio Code Debug Adapter Host Package 1.0 Interop layer for hosting Visual Studio Code debug adapters in Visual Studio Describe the bug The Sentiment score is not being populated Console.WriteLine($"Data to Analyze: ", sampleData.Col0); Console.WriteLine($"\n\nPredicted Sentiment: {predictionResult.Prediction}\n\n"); Console.WriteLine($"\n\nPredicted score: {predictionResult.Score}\n\n"); Debugger shows: "\n\nPredicted score: System.Single[]\n\n" To Reproduce Steps to reproduce the behavior: Run the demo as is, behavior does not change with sentiment Expected behavior A sentiment floating point score between {0.0..1.0} Screenshots Sorry you hit this issue. For data classification we only show the predicted class; if you want to see the probability of each label for a sample, you can look at the evaluate page. Do you want to see the probability of each class in Program.cs? @briacht what is your thought? @zewditu Was this a bug in code gen that has been fixed? yes, it is already fixed This should be fixed in the latest release. Please ping me if you are still having problems. Thanks!
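For anyone seeing the same output: the symptom in the report is simply how C# formats an array in an interpolated string — predictionResult.Score is a float[], so its type name is printed. A sketch of the relevant Program.cs lines, adjusted to print the values; the sampleData and predictionResult names are taken from the report and assumed to match the generated project:

```csharp
// Note: the original first line passed sampleData.Col0 as a second argument
// instead of interpolating it, so it was silently dropped from the output.
Console.WriteLine($"Data to Analyze: {sampleData.Col0}");
Console.WriteLine($"Predicted Sentiment: {predictionResult.Prediction}");

// Score is a float[] (one probability per class); join it so the numbers
// show up instead of "System.Single[]".
Console.WriteLine($"Predicted scores: {string.Join(", ", predictionResult.Score)}");
```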
2025-04-01T06:38:26.290312
2023-11-01T05:12:52
1971715430
{ "authors": [ "ViktorHofer", "carlossanlop" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5445", "repo": "dotnet/maintenance-packages", "url": "https://github.com/dotnet/maintenance-packages/pull/18" }
gharchive/pull-request
Migrate Microsoft.Bcl.HashCode Includes history from the old release/3.1 branch. The latest available package in nuget.org is https://www.nuget.org/packages/Microsoft.Bcl.HashCode/1.1.1 and I was able to confirm that my local build successfully generated package 1.1.2. By the way, NuGet.config is still using the dotnet6 transport. Should I update it to dotnet9? Need this merged first: https://github.com/dotnet/maintenance-packages/pull/19 . Once merged, I will rebase this PR. This PR should now be unblocked. No squash merge! No squash merge! I don't have permission to merge-commit, unfortunately. Even with elevation.
2025-04-01T06:38:26.431505
2024-07-03T20:44:23
2389437998
{ "authors": [ "marcarro", "phil-allen-msft", "ryzngard" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5446", "repo": "dotnet/razor", "url": "https://github.com/dotnet/razor/pull/10578" }
gharchive/pull-request
Added basic extract to component functionality on cursor over html tag ### Summary of the changes Part of the implementation of the Extract To Component code action. Functional in one of the two cases, when the user is not selecting a certain range of a Razor component, but rather when the cursor is on either the opening or closing tag. Holding off approval just because you'll have to merge in the main branch and switch to System.Text.Json in a few places. Sorry about that! I just pushed the main changes into the feature branch. @marcarro you can just do these commands in your local branch and then handle merge commits. I can help you with that today or Friday: git fetch upstream features/extract-to-component:features/extract-to-component then git merge features/extract-to-component @dotnet/razor-compiler, PTAL (going into a feature branch)
2025-04-01T06:38:26.432741
2024-07-18T21:31:44
2417454430
{ "authors": [ "333fred" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5447", "repo": "dotnet/razor", "url": "https://github.com/dotnet/razor/pull/10646" }
gharchive/pull-request
Turn off trailing whitespace trimming in strings We have tests with baselines that have trailing whitespace. Our trimTrailingWhitespace setting means that those will get modified automatically, breaking those tests. To fix that, I implemented a vscode feature to avoid trimming inside regex and strings, so let's use that. @dotnet/razor-compiler for an extremely small review.
2025-04-01T06:38:26.453713
2017-02-01T07:04:04
204511426
{ "authors": [ "abpiskunov", "davkean", "fubar-coder" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5448", "repo": "dotnet/roslyn-project-system", "url": "https://github.com/dotnet/roslyn-project-system/issues/1414" }
gharchive/issue
"SDKs" show up as packages when they are indirect I think this SDK thing is a leaky abstraction, for example, .NETStandard.Library shows up as SDK when it top level, but when it's a indirect, it shows up a package: i think everything is as expected. in your screenshot we have an SDK X and it's dependencies, which include an SDK Y. It might have a different icon though when displayed as dependency. Btw, i just added to my project and it is not marked as implicit at all , and is resolved as package - what did you mean "it is SDK when it is top level"? <PackageReference Include="NETStandard.Library" Version="1.6.0" /> That's my point how we marked SDK'd as SDKs is leaky - it only works if all of them are marked implicit, here .NETStandard.LIbrary is pulled it because of it's referenced by a implicit package. @abpiskunov When you use a console app, it shows up as child of "Microsoft.NETCore.App". But when you use a class library, it shows up directly under "SDK", thus it shows up at top level. dupe of RTM approved issue https://github.com/dotnet/roslyn-project-system/issues/1456 , we will be hiding all default implicit packages and show them as SDK only
2025-04-01T06:38:27.037452
2020-04-17T16:19:49
602089492
{ "authors": [ "ChrisProlls", "davidfowl", "rynowak" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5449", "repo": "dotnet/tye", "url": "https://github.com/dotnet/tye/issues/374" }
gharchive/issue
Win32Exception with Dapr Describe the bug An exception occurs when we launch tye run inside the sample repo for Dapr To Reproduce Download the sample for Dapr here, and run tye run (I just followed the instructions in the readme) Further technical details tye --version dapr --version dotnet --version Looks like it can't find the daprd process? cc @rynowak @ChrisProlls - do you have dapr on your path? Can you run daprd -version? I reinstalled dapr and everything seems to work now; daprd -version is working and so is tye run. Thanks!
2025-04-01T06:38:27.041339
2020-05-29T06:17:30
627026819
{ "authors": [ "razfriman" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5450", "repo": "dotnet/tye", "url": "https://github.com/dotnet/tye/issues/512" }
gharchive/issue
Logging Extension - Seq What should we add or change to make your life better? Create an extension to push logs to Seq This would be similar to the existing extension built into Tye: Tye can push logs to Elastic stack easily without the need for any SDKs or code changes in your services. Would be declared in tye.yaml like this: extensions: - name: seq Why is this important to you? I have integrated Seq into my microservice project; however, seeing how simple it was to integrate logging with the built-in Tye method was eye opening. Would be great to build a lot more extensions like this, e.g. Grafana, Prometheus, Jaeger. For context -- I have used this library in the past and it made configuring logging very simple with json-like settings to magically determine where to send logs. This could be used as a reference for implementing this feature: https://github.com/convey-stack/Convey.Logging https://github.com/convey-stack/Convey.Logging/blob/master/src/Convey.Logging/Extensions.cs Updated with a (not functional) attempt to implement this. It is definitely a good starting point though Updated with an implementation + docs
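For context on what such an extension would automate, here is a sketch of the manual wiring a service typically needs today to push logs to Seq, using Serilog and its Seq sink. This is only one possible approach, and the idea that the extension would supply the endpoint via configuration is an assumption:

```csharp
using Serilog;

// Requires the Serilog and Serilog.Sinks.Seq packages.
// The Seq URL would presumably be injected by the Tye extension
// (e.g. through configuration or an environment variable) rather than hard-coded.
Log.Logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .WriteTo.Seq("http://localhost:5341")
    .CreateLogger();

Log.Information("Service started");
```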
2025-04-01T06:38:27.070241
2021-12-10T08:58:23
1076592763
{ "authors": [ "CEbbinghaus", "JoeRobich", "OdisBy", "PsychoNineSix", "TanayParikh", "jesperkristensen", "mauve", "tabish121" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5451", "repo": "dotnet/vscode-csharp", "url": "https://github.com/dotnet/vscode-csharp/issues/4944" }
gharchive/issue
Command Select Project fails with extension manifest validation error Issue Description Steps to Reproduce Ctrl-Shift-P: "OmniSharp: Select project" Error is shown Expected Behavior Opens up the project picker. Actual Behavior Command 'OmniSharp: Select Project' resulted in an error (Extension 'ms-dotnettools.csharp' CANNOT use API proposal: quickPickSeparators. Its package.json#enabledApiProposals-property declares: [] but NOT quickPickSeparators. The missing proposal must be added... Logs OmniSharp log Post the output from Output-->OmniSharp log here C# log Post the output from Output-->C# here Environment information VSCode version: 1.63.0 C# Extension: 1.23.17 Mono Information OmniSharp using built-in mono Dotnet Information .NET SDK (reflecting any global.json): Version: 6.0.100 Commit: 9e8b04bbff Runtime Environment: OS Name: ubuntu OS Version: 20.04 OS Platform: Linux RID: ubuntu.20.04-x64 Base Path: /usr/share/dotnet/sdk/6.0.100/ Host (useful for support): Version: 6.0.0 Commit: 4822e3c3aa .NET SDKs installed: 2.1.818 [/usr/share/dotnet/sdk] 3.1.415 [/usr/share/dotnet/sdk] 5.0.403 [/usr/share/dotnet/sdk] 6.0.100-rc.1.21458.32 [/usr/share/dotnet/sdk] 6.0.100 [/usr/share/dotnet/sdk] .NET runtimes installed: Microsoft.AspNetCore.All 2.1.30 [/usr/share/dotnet/shared/Microsoft.AspNetCore.All] Microsoft.AspNetCore.App 2.1.30 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.1.21 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 5.0.12 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 6.0.0-rc.1.21452.15 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 6.0.0 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App] Microsoft.NETCore.App 2.1.30 [/usr/share/dotnet/shared/Microsoft.NETCore.App] Microsoft.NETCore.App 3.1.21 [/usr/share/dotnet/shared/Microsoft.NETCore.App] Microsoft.NETCore.App 5.0.12 [/usr/share/dotnet/shared/Microsoft.NETCore.App] Microsoft.NETCore.App 6.0.0-rc.1.21451.13 [/usr/share/dotnet/shared/Microsoft.NETCore.App] Microsoft.NETCore.App 6.0.0 [/usr/share/dotnet/shared/Microsoft.NETCore.App] To install additional .NET runtimes or SDKs: https://aka.ms/dotnet-download Visual Studio Code Extensions Extension Author Version azure-account ms-vscode 0.9.11 azure-pipelines ms-azure-devops 1.195.0 cmake twxs 0.0.17 cmake-tools ms-vscode 1.9.2 cpptools ms-vscode 1.8.0-insiders2 csharp ms-dotnettools 1.23.17 dotnet-test-explorer formulahendry 0.7.7 EditorConfig EditorConfig 0.16.4 jupyter ms-toolsai 2021.11.1001550889 jupyter-renderers ms-toolsai 1.0.4 live-server ms-vscode 0.2.11 LiveServer ritwickdey 5.6.1 prettier-vscode esbenp 9.0.0 python ms-python 2021.12.1559732655 sass-indented syler 1.8.18 test-adapter-converter ms-vscode 0.1.4 vetur octref 0.35.0 vscode-azureappservice ms-azuretools 0.23.0 vscode-azurefunctions ms-azuretools 1.6.0 vscode-azureresourcegroups ms-azuretools 0.4.0 vscode-azurestaticwebapps ms-azuretools 0.9.0 vscode-bicep ms-azuretools 0.4.1008 vscode-docker ms-azuretools 1.18.0 vscode-dotnet-runtime ms-dotnettools 1.5.0 vscode-eslint dbaeumer 2.2.2 vscode-kubernetes-tools ms-kubernetes-tools 1.3.4 vscode-pylance ms-python 2021.12.1 vscode-test-explorer hbenl 2.21.1 vscode-yaml redhat 1.2.2 vsliveshare ms-vsliveshare 1.0.5196 xml DotJoshJohnson 2.5.1 Getting the same error: Command 'OmniSharp: Select Project' resulted in an error (Extension 'ms-dotnettools.csharp' CANNOT use API proposal: quickPickSeparators. 
Its package.json#enabledApiProposals-property declares: [] but NOT quickPickSeparators. The missing proposal MUST be added and you must start in extension development mode or use the following command line switch: --enable-proposed-api ms-dotnettools.csharp) Issue was fixed in https://github.com/OmniSharp/omnisharp-vscode/pull/4914. Until a new release is ready you can install this prerelease that includes the fix https://github.com/OmniSharp/omnisharp-vscode/releases/tag/v1.23.18-beta2 Does anyone know how to install an old working version until a fix is released? My VS Code is basically bricked now. (Installing a prerelease sounds a bit scary) Old versions of OmniSharp won't work; I believe it was broken by vscode 1.63, so you either move back to 1.62 or install the pre-release of OmniSharp. OmniSharp can't be downgraded, but by downgrading vscode you resolve this problem. https://code.visualstudio.com/updates/v1_62 Even after installing the latest prerelease this issue still occurs. Are there prerelease versions which don't include the fix, and do I need to download exactly 1.23.18? I found that the solution was to completely uninstall the old version and then choose the extension vsix file for my platform from the beta release page and install that: https://github.com/OmniSharp/omnisharp-vscode/releases/tag/v1.23.18-beta2 There have been lots of improvements since this issue was opened. Please open a new issue with logs if you are still running into this.
2025-04-01T06:38:27.072810
2023-10-04T00:41:02
1925179520
{ "authors": [ "dibarbet" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5452", "repo": "dotnet/vscode-csharp", "url": "https://github.com/dotnet/vscode-csharp/pull/6481" }
gharchive/pull-request
Always build release and prerelease VSIXs and allow overriding build number so that we can ship from any branch Test builds: https://dnceng.visualstudio.com/internal/_build/results?buildId=2283050&view=results https://dnceng.visualstudio.com/internal/_build/results?buildId=2283875&view=results https://dnceng.visualstudio.com/internal/_build/results?buildId=2283873&view=results This should not merge until we're ready to do a new release out of the release branch.
2025-04-01T06:38:27.083630
2024-01-31T11:47:01
2132786673
{ "authors": [ "HusanjonDeveloper", "mairaw" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5453", "repo": "dotnet/website-feedback", "url": "https://github.com/dotnet/website-feedback/issues/22" }
gharchive/issue
Operating System: windows Problem encountered on https://dotnet.microsoft.com/en-us/learn/aspnet/blazor-tutorial/install Operating System: windows Provide details about the problem you're experiencing. Include your operating system version, exact error message, code sample, and anything else that is relevant. This issue was closed because there was no response to a request for more information for 10 days.
2025-04-01T06:38:27.137029
2024-06-08T19:47:16
2341807690
{ "authors": [ "dotsi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5454", "repo": "dotsi/aduptime", "url": "https://github.com/dotsi/aduptime/issues/516" }
gharchive/issue
⚠️ Adacta-fintech.com has degraded performance In 98cf739, Adacta-fintech.com (https://www.adacta-fintech.com) experienced degraded performance: HTTP code: 200 Response time: 5929 ms Resolved: Adacta-fintech.com performance has improved in 5953693 after 7 minutes.
2025-04-01T06:38:27.148458
2018-05-11T17:32:19
322375823
{ "authors": [ "flgw", "robbmcleod" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5455", "repo": "dougalsutherland/cyflann", "url": "https://github.com/dougalsutherland/cyflann/issues/28" }
gharchive/issue
nn_radius() returns no more than 32 neighbors nn_radius only gives me 32 neighbors max, despite setting max_nn=-1 or 1024; params.max_neighbors doesn't work either. Apparently FLANNIndex.params['checks'] is also a limiting factor. It defaults to 32, and increasing it increases the maximum for max_nn.
2025-04-01T06:38:27.186385
2023-01-28T02:01:14
1560637497
{ "authors": [ "TheMetalCode" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5456", "repo": "doximity/omniauth-doximity-oauth2", "url": "https://github.com/doximity/omniauth-doximity-oauth2/pull/14" }
gharchive/pull-request
Add infra-automation as codeowner for CircleCI and GitHub Actions configs Jira link: https://doximity.atlassian.net/browse/IA-997 Overview The infra automation team would like more visibility for CI config changes as they occur. Hence, adding ourselves as CODEOWNERS for: .circleci directory, aka CircleCI configs .github/workflows directory, aka GitHub Actions workflows .github/actions directory, aka GitHub Actions actions This PR was automatically generated with sourcegraph. Created by Sourcegraph batch change TheMetalCode/circleci-and-gh-actions-codeowners-2. doxbot codereview doxbot codereview --party
2025-04-01T06:38:27.233048
2015-07-23T15:18:30
96838141
{ "authors": [ "AudreyAltman", "anarchivist", "no-reply" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5457", "repo": "dpla/KriKri", "url": "https://github.com/dpla/KriKri/pull/180" }
gharchive/pull-request
check for existence of provider_name This fixes an error that was causing the QA interface to crash if it tried to call Krikri::Provider.name on an indexed provider that was missing a value for provider_name. Thanks, @AudreyAltman - minor comment, expanding on above - what will get returned and rendered in the nav if a Krikri::Provider's name is nil? I addressed the comment from @anarchivist and :squash:ed :clap:
2025-04-01T06:38:27.256709
2024-10-29T15:34:05
2621625437
{ "authors": [ "Flope", "dpryan79" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:5458", "repo": "dpryan79/MethylDackel", "url": "https://github.com/dpryan79/MethylDackel/issues/163" }
gharchive/issue
Confused by --nOT and --nOB and the examples provided in the issues section Hi, I am confused by the definition of --nOT and --nOB and some examples provided by Devon here. My starting reads are 150bp. I trimmed the reads before alignment using pretty standard trimming to remove possible adapter sequences and low-quality bases. The average length of the reads is now ~135bp. So, for MethylDackel extract, I was planning to use --nOT and --nOB to define the inclusion bases. --nOT INT,INT,INT,INT Like --OT, but always exclude INT bases from a given end from inclusion, regardless of the length of an alignment. This is useful in cases where reads may have already been trimmed to different lengths but still nonetheless contain a certain length bias at one or more ends. My understanding of these options (and as somebody else mentioned in another thread) is that if I want to exclude 5bp from each end of the reads, I would use this: --nOT 5,5,5,5 However, in #102, Devon mentions: “Assuming you want to exclude the first 10 bases produced by the sequencer, the --nOT 10,0,0,140 --nOB 10,0,0,140 would do it (presuming you originally had 150 base reads).” Was Devon confused with --OT and --OB, or am I not understanding how --nOT and --nOB work? Thanks! I did a bad job when writing the documentation for this. What I wrote in #102 is correct, I'll make a mental note to clarify this in the documentation.