2430094081
Generate question with answers explanations 10

- Create models for the server (QuestionRequest and QuestionResponse).
- Create a function to generate the prompt text for OpenAI.
- Call the OpenAI API to get the response.
- Return the response from OpenAI.
- Add tests to cover both successful and failure cases (e.g., wrong or missing fields).
- Run pytest to ensure all tests pass.
- Check the dev branch to ensure there are no conflicts.
- Open a pull request.

I will unify the response type in the server
gharchive/pull-request
2024-07-25T14:17:54
2025-04-01T04:32:44.870763
{ "authors": [ "muhammadser1" ], "repo": "Makes-Innovation-Hub/bit-mentor-server", "url": "https://github.com/Makes-Innovation-Hub/bit-mentor-server/pull/33", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1503606266
V4.y dependency update

Why

I'd like to be able to get the updated spec.add_dependency 'http', '>= 3.0', '< 6.0' that is currently on master, but with backwards compatibility for the v4.y version.

What

- update the relevant dependencies
- make the minimum required version of ruby 2.7 in the gemspec
- drop two EOL ruby versions from github actions
- get rubocop running/green again

RuboCop approach

I tried running autofix on the codebase and manually fixing what couldn't be autofixed. This resulted in some failing tests and a broad diff throughout the codebase. I wasn't comfortable with this approach and abandoned that branch (though I still have it locally) in favor of this PR. In this PR, I'm optimizing for keeping the diff small and reviewable. I have disabled every cop that has an active offence in the codebase and moved them under an explanatory comment. If we want to bring the codebase back into compliance, I think it would be better to handle that in separate Pull Requests to make reviewing easier and to split the work out. Some offences are easy but tedious to fix while others require a refactor and I'm not sure what the level of effort would be on those yet. Also, I have added all new cops that need to be configured under a separate comment. This gets rid of a warning in the command line, and could be configured as desired in a follow up. Let me know what you think! Thank you! Thanks! Wow, I didn't expect this to become tricky, I forgot about the ruby version affecting rubocop. In theory, you might be able to leave 2.5 or 2.6 "supported" — people using ruby 2.5 will automatically get http 4.y.z (as the latest that supports their ruby), while people with 2.6+ will be able to enjoy http 5. But, that's up to you. I was reluctant for a long while to drop any ruby support from kubeclient (prior to major version bump), but I agree it's OK, no need to support long-EOL versions if it holds us back.
If we want to bring the codebase back into compliance, I think it would be better to handle that in separate Pull Requests to make reviewing easier and to split the work out. I agree 100% :+1: This being a backport/maintenance branch, I don't think we care much about stylistic nitpicks, reducing code divergence is more valuable. CI hasn't run, says: Error when evaluating 'runs-on' for job 'build'. .github/workflows/actions.yml (Line: 14, Col: 14): Unexpected type of value '', expected type: OneOf. :confused: Maybe the include for rubocop should also have os_and_command: structure? So far all changes LGTM :+1: released 4.11.0
gharchive/pull-request
2022-12-19T21:10:36
2025-04-01T04:32:44.900117
{ "authors": [ "cben", "dhstewart" ], "repo": "ManageIQ/kubeclient", "url": "https://github.com/ManageIQ/kubeclient/pull/589", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
387755989
[HAMMER] Switch zh-Hans-CN translation identification string to delimiting dashes fixes https://bugzilla.redhat.com/show_bug.cgi?id=1655584 Looks like when we pull down translations, dashes (-) are changing to underscores (_), which breaks this translation, but no longer! Though something to watch out for in the future... Aah, good catch but the wrong place to fix it. The problem is that rails is using a different dash/underscore convention than .. everybody else, so we're manually transforming the locale string coming from rails. And that code likely never expected 2 of those in the same locale. (Or, to put it differently, zh_CN is a valid locale, zh-Hans-CN is something.) https://github.com/ManageIQ/manageiq-ui-service/commit/2f356c85db60604ac26ccc711f8687b03d91ce83 The problem:

jq 'keys' client/gettext/json/manageiq-ui-service.json > locales
jq 'keys' client/gettext/json/available_languages.json > available
diff -Naur locales available
--- locales	2018-12-05 13:45:39.914921612 +0000
+++ available	2018-12-05 13:45:43.606947052 +0000
@@ -5,8 +5,8 @@
   "ja",
   "ko",
   "nl",
+  "pt-BR",
   "pt-PT",
-  "pt_BR",
-  "zh-TW",
-  "zh_Hans_CN"
+  "zh-Hans-CN",
+  "zh-TW"
 ]

(Whereas on master, the diff is empty.) So we just need to update available_languages.json, correct? And client/gettext/json/available_languages.json is an auto-generated file... config/available-languages.js. So.. you should be able to run yarn run available-languages to get this fixed. Oh, darn, saw the bz comment before this, doing doing... hmmm running yarn available-languages generates {"es-ES":"","es":"locale_name","fr":"Français","ja":"locale_name","ko":"","nl":"","pt_BR":"Português (Brasil)","pt-PT":"","zh_Hans_CN":"locale_name","zh-TW":""} which for sure isn't gonna work Oh well :(. @mzazrivec will have to fix that. Every single locale must have a locale_name string that translates to the name of that locale. That's used by the script to generate those.
Seems like half the languages don't have that string at all, and the other half translates it as locale_name. This is because translators didn't include the locale_name in the translation files, most of the time locale_name was entered as the translation 😭 Yup, exactly :( I just updated / fixed locale_name in Zanata for es fr ja pt-BR zh-Hans-CN locales (hammer branch). So I think what needs to be done is to download the latest po catalogs from Zanata (with my changes), compile them (into json format) and update available_languages.json. closing in favor of #1509
gharchive/pull-request
2018-12-05T13:26:19
2025-04-01T04:32:44.927617
{ "authors": [ "AllenBW", "himdel", "mzazrivec" ], "repo": "ManageIQ/manageiq-ui-service", "url": "https://github.com/ManageIQ/manageiq-ui-service/pull/1508", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
110618694
Infra Host smartstate goes for a very long time Here is a place getting the last logon of a user, by basically traversing /var/log/wtmp and getting users' last logon e.g. my file has 200k records and this goes through all of them https://github.com/ManageIQ/manageiq/blob/master/gems/pending/metadata/linux/LinuxUsers.rb#L235 and this goes through a subset of them for each user https://github.com/ManageIQ/manageiq/blob/master/gems/pending/metadata/linux/LinuxUsers.rb#L205 so we have like 60+ users; if one would take just a minute, it's over an hour. I just shut it off after half an hour. :-) @blomquisg @Fryguy @mcornea ^ as @mcornea pointed out, this last -f /var/log/wtmp will give us a list of users and their last login. Is there a particular need to parse the log instead? Anyway, first I am optimizing it, so it goes through that log only once, so we will get it down to minutes. fix merged
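The single-pass optimization described above can be sketched like this (plain Ruby; the record layout is a made-up stand-in for the parsed wtmp entries, not the actual LinuxUsers.rb code):

```ruby
# Illustrative sketch: walk the wtmp records once and remember the most
# recent logon per user, instead of re-scanning all 200k records for each
# of the 60+ users (O(records) instead of O(users * records)).
records = [
  { user: "alice", time: Time.utc(2015, 10, 1) },
  { user: "bob",   time: Time.utc(2015, 10, 5) },
  { user: "alice", time: Time.utc(2015, 10, 8) },
]

last_logon = {}
records.each do |rec|
  seen = last_logon[rec[:user]]
  last_logon[rec[:user]] = rec[:time] if seen.nil? || rec[:time] > seen
end
```

After the single pass, each user's last logon is a hash lookup rather than another scan of the file.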
gharchive/issue
2015-10-09T09:13:07
2025-04-01T04:32:44.930941
{ "authors": [ "Ladas" ], "repo": "ManageIQ/manageiq", "url": "https://github.com/ManageIQ/manageiq/issues/4763", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
171952553
Fix erroneous reference to 'display_name' in OSP volume creation Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1366777 by replacing 'display_name' with 'name' when creating the MIQ volume object. This was causing volumes in MIQ to not have an assigned name until after they were refreshed. @dclarizio @tzumainn Thanks for the fix! I think it's possible that this would fix the issue for Cinder v2, while breaking it for Cinder v1. The former uses 'name', while the latter uses 'display_name'. It's possible that what you need is something like what the refresh_parser uses: https://github.com/ManageIQ/manageiq/blob/master/app/models/manageiq/providers/openstack/cloud_manager/refresh_parser.rb#L382 @tzumainn Thanks for pointing that out, I'll adjust this accordingly. :+1: Looks good to me now!
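The version-tolerant lookup suggested by the refresh_parser reference might look roughly like this (the helper name is made up; the attribute names come from the Cinder APIs as described in the thread):

```ruby
# Hypothetical helper: Cinder v2 reports the volume name as "name",
# Cinder v1 as "display_name", so fall back from one to the other
# rather than hard-coding either.
def volume_name(raw_volume)
  raw_volume["name"] || raw_volume["display_name"]
end
```

This keeps v2 volumes named correctly without breaking v1, which is the concern raised in the review.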
gharchive/pull-request
2016-08-18T17:08:55
2025-04-01T04:32:44.933712
{ "authors": [ "mansam", "tzumainn" ], "repo": "ManageIQ/manageiq", "url": "https://github.com/ManageIQ/manageiq/pull/10606", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
188726279
Custom supports methods shall not crash UI SupportsFeatureMixin is used in many places in the application. For dynamic features CustomFeatureMixin allows blocks like the following.

supports :x do
  provider.api_version.abilities.include? :x
end

The problem occurs when an exception happens in the given block. As it does in https://bugzilla.redhat.com/show_bug.cgi?id=1394021 The supports methods are used in the UI and they are not expected to raise exceptions. The above mentioned bug crashes the vm details page. Given our direction wrt code decentralization and pluggable features, I am afraid our chances to keep custom supports blocks sane are deteriorating. That's why I advise for rescue. @miq-bot add_label bug, ui, euwe/yes, blocker /cc @durandom Euwe Backport details:

$ git log -1
commit 28d4810a569134f1f79bd02b180c2785fd0c8fa4
Author: Gregg Tanzillo <gtanzill@redhat.com>
Date:   Fri Nov 11 09:14:52 2016 -0500

    Merge pull request #12574 from isimluk/rhbz#1394021

    Custom supports methods shall not crash UI
    (cherry picked from commit bbb8543ff1df2c6129922bea0bf0b21b4ec4d7f0)

https://bugzilla.redhat.com/show_bug.cgi?id=1394021 please note that this does not fix the root cause of the BZ @borod108 can you have a look at the BZ and its stacktrace? It seems your cache sometimes returns nil. Vm.first.supports_migrate?
Vm Load (1.2ms) SELECT "vms".* FROM "vms" WHERE "vms"."type" IN ('Vm', 'ManageIQ::Providers::CloudManager::Vm', 'ManageIQ::Providers::InfraManager::Vm', 'VmServer', 'ManageIQ::Providers::Google::CloudManager::Vm', 'ManageIQ::Providers::Openstack::CloudManager::Vm', 'ManageIQ::Providers::Vmware::CloudManager::Vm', 'ManageIQ::Providers::Amazon::CloudManager::Vm', 'ManageIQ::Providers::Azure::CloudManager::Vm', 'ManageIQ::Providers::Microsoft::InfraManager::Vm', 'ManageIQ::Providers::Redhat::InfraManager::Vm', 'ManageIQ::Providers::Vmware::InfraManager::Vm', 'VmXen') AND "vms"."template" = $1 ORDER BY "vms"."id" ASC LIMIT $2 [["template", false], ["LIMIT", 1]]
Vm Inst Including Associations (0.3ms - 1rows)
ExtManagementSystem Load (3.2ms) SELECT "ext_management_systems".* FROM "ext_management_systems" WHERE "ext_management_systems"."id" = $1 LIMIT $2 [["id", 1000000000001], ["LIMIT", 1]]
ExtManagementSystem Inst Including Associations (1743.3ms - 1rows)
NoMethodError: undefined method `collect' for nil:NilClass
from /Users/yrudman/work/rh/manageiq/app/models/manageiq/providers/redhat/infra_manager/api_integration.rb:11:in `supported_features'
from /Users/yrudman/work/rh/manageiq/app/models/manageiq/providers/redhat/infra_manager/api_integration.rb:237:in `block (2 levels) in process_api_features_support'

@durandom Hi, so after looking at this, the thing is that according to the log, we get "FATAL -- : Error caught: [OvirtSDK4::Error] Can't send request: Couldn't connect to server" so the thing is that if the server is not reachable and the results from it were not cached or are invalid, we do not know what api version it supports, so we do not know what features it supports. In a deeper sense - I think we should not try to continue an operation if we know the ems is unreachable, so I am wondering what is the right thing to do. It's easy to make it return an empty array instead of nil, so in a sense, if the ems is not reachable, it does not support any features...
Would you say that is the right approach? Or maybe we should somehow raise an exception and force flows that require a connection with the provider to fail at this stage, when it is already clear the provider is not reachable? Of course the second option might require much more coding and refactoring... it just feels strange returning an empty array without making it clear in some way that the problem is connectivity... It's easy to make it return an empty array instead of nil, so in a sense, if the ems is not reachable, it does not support any features... I think this makes sense. And then add the reason 'ems not reachable' or something like this.
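The two fixes discussed above — treating an unknown feature list as empty rather than nil, and rescuing inside the dynamic supports check so a provider error reads as "unsupported" instead of crashing the UI — can be sketched like this (all names illustrative, not the actual MIQ implementation):

```ruby
# If the cached feature list is nil (provider unreachable, cold cache),
# report no features, so callers never hit NoMethodError on collect/include?.
def supported_features(cached)
  cached || []
end

# A crashing dynamic check must never take the vm details page down;
# a failure simply means the feature is not supported.
def supports_x?(cached)
  supported_features(cached).include?(:x)
rescue StandardError
  false
end
```

With this shape the UI can call the supports method unconditionally; connectivity problems degrade to "feature unavailable" rather than an exception.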
gharchive/pull-request
2016-11-11T10:07:58
2025-04-01T04:32:44.939642
{ "authors": [ "borod108", "chessbyte", "durandom", "isimluk" ], "repo": "ManageIQ/manageiq", "url": "https://github.com/ManageIQ/manageiq/pull/12574", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
324985047
[WIP] Improve snapshot_parent custom save block Improve the snapshot_parent custom save block to reduce the number of queries to find the children and parents from once per snapshot to once for the whole collection. WIP until I test the provider specs
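The shape of that optimization — one lookup for the whole collection instead of one query per snapshot — might look like this (plain hashes stand in for the real inventory objects; names are illustrative):

```ruby
snapshots = [
  { id: 1, parent_uid: "a" },
  { id: 2, parent_uid: "b" },
  { id: 3, parent_uid: "a" },
]

# One batched lookup for the whole collection (in the real code this would
# be a single query), instead of a query per snapshot.
parents_by_uid = { "a" => { id: 10 }, "b" => { id: 11 } }

snapshots.each do |snap|
  snap[:parent_id] = parents_by_uid.fetch(snap[:parent_uid])[:id]
end
```

Prefetching the parents into a hash turns N round trips into one, which is the point of the change.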
gharchive/pull-request
2018-05-21T16:48:55
2025-04-01T04:32:44.941365
{ "authors": [ "agrare" ], "repo": "ManageIQ/manageiq", "url": "https://github.com/ManageIQ/manageiq/pull/17455", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
66671431
Fixed the function name to include _host https://bugzilla.redhat.com/show_bug.cgi?id=1209015 Added a spec and fixed the function name @gmcculloug Please review @gmcculloug @Fryguy Made the recommended changes
gharchive/pull-request
2015-04-06T18:23:23
2025-04-01T04:32:44.943063
{ "authors": [ "mkanoor" ], "repo": "ManageIQ/manageiq", "url": "https://github.com/ManageIQ/manageiq/pull/2530", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
156154889
Add support for substring in topology search Allow search for substrings and not only exact string match in the topology search. Please note this is case sensitive. @miq-bot add_label enhancement, ui, topology cc @simon3z @Ladas @pilhuhn @Jiri-Kremser @ammendonca :+1: looks good :+1:
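The behavioral change — exact match becoming a case-sensitive substring match — boils down to something like this (illustrative, not the actual topology search code):

```ruby
names = ["web-server", "db-server", "Cache"]

exact     = names.select { |n| n == "server" }        # old behavior: exact match only
substring = names.select { |n| n.include?("server") } # new behavior: substring match

# Note the case sensitivity called out in the PR: "cache" does not match "Cache".
cache_hits = names.select { |n| n.include?("cache") }
```

String#include? is case sensitive in Ruby, which matches the caveat in the PR description.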
gharchive/pull-request
2016-05-22T14:33:17
2025-04-01T04:32:44.945127
{ "authors": [ "Ladas", "abonas", "simon3z" ], "repo": "ManageIQ/manageiq", "url": "https://github.com/ManageIQ/manageiq/pull/8881", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
161920295
container compliance ui bugfix expand all button Bug fix for BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1347194 Another solution could be to backport the new tree support. I understood that a temporary solution for darga is preferable. NOTICE: this is a PR to manageiq:darga (not manageiq:master) cc @simon3z @martinpovolny @miq-bot add_label bug @miq-bot add_label bug, darga/yes Another solution could be to backport the new tree support. I understood that a temporary solution for darga is preferable. Correct, patching this on darga is better (rather than backporting a large ui change). :+1: @miq-bot assign chessbyte Please note that this is already against the darga branch. This is ready, but I don't think we want to merge it yet (up to you @chessbyte)
gharchive/pull-request
2016-06-23T13:16:48
2025-04-01T04:32:44.948415
{ "authors": [ "enoodle", "simon3z" ], "repo": "ManageIQ/manageiq", "url": "https://github.com/ManageIQ/manageiq/pull/9396", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
166275540
indicate a partial refresh error by raising an exception In EmsRefresh a failure will only be logged in the refresh context of the log. See error handling in ems_refresher_mixin.rb A queue message will still be delivered with the state ok. See error handling in miq_queue.rb This is the relevant log line. [----] I, [2016-03-16T10:18:06.067938 #3002:90798c] INFO -- : MIQ(MiqQueue#delivered) Message id: [2000001124178], State: [ok], Delivered in [157.827267344] seconds While for me this indicates that the delivery of the message was successful but not necessarily the result of the invoked method, it seems that users think differently. I don't like that I have to use an Exception for this, because right now the refresh is designed to not raise exceptions, but log the errors in ems.last_refresh_error So, should we close this as NOTABUG ? Should we use m_callback to log something ? - Sounds like a dirty workaround and misuse of this BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1319324 @jrafanie @gtanzillo @Fryguy thoughts? @miq-bot add_labels core Ohhh I like this approach. Can we then get rid of https://github.com/ManageIQ/manageiq/blob/680033c00276a65b2d47f3c491945cfe4101c524/app/models/ems_refresh.rb#L21-L24 and the various callers? Ohhh I like this approach. Ok, so you are saying raising this PartialError exception is the way to go? @Fryguy Before I continue to work on this, do you want to raise immediately, when an error occurs, like debug_failures was doing, or at the end, if an error occurred? @durandom It makes sense to let all refreshes complete and if any failed, mark the whole thing as failed. That's far better than catching it, logging it and then setting the ems as failed and marking the whole message as ok. I'd like to get that error bubbled up so it's more obvious in both the queue message error and in the ems last refresh. @Fryguy @jrafanie It makes sense to let all refreshes complete and if any failed, mark the whole thing as failed.
Just to be 100% sure: we want to collect caught exceptions during the refresh and raise at the end if there were any during the collection, as opposed to raising immediately when an error occurs, like debug_failures was doing. The reason why I'm asking back: I find it more intuitive and straightforward not to collect the errors and just raise straight away. But this strategy might fit better for the skeletor refresh :skull: I guess I'm confused what the current code does. Basically, blowing an Exception all the way out will bomb out the worker, which is what this is meant to prevent. An Exception in the worker will cause the worker to shut down and another to spin up. On the flip side, when you do a refresh in console you probably want it to bomb out. If the current proposal still handles both of those use cases, then it's good... from my casual review I thought it did, but maybe I'm wrong? Basically, blowing an Exception all the way out will bomb out the worker, which is what this is meant to prevent As I understand this, the exception will be caught by miq_queue I'll try the bomb out early approach, which is basically like a --fail-fast, and see how it goes. And my gut feeling is that we probably don't want this in the soon scheduled darga release :smile: to have some more time to test it out. I simulated a failure by putting a raise into amazon/cloud_manager/refresh_parser.rb and triggered a scheduled refresh and a manual refresh. It does not kill the worker, but the message is delivered in an error state. The downside is, this would not do a partial refresh. If you prefer to not re-raise the exception immediately, I can revert the last commit and raise at the end of the refresh.
As I said, I prefer this approach as it's now, in this commit :bomb: Logs:

[----] E, [2016-07-20T14:34:10.312326 #16489:2b29ee6c797c] ERROR -- : MIQ(ManageIQ::Providers::Amazon::CloudManager::Refresher#refresh) --- ManageIQ::Providers::Amazon::CloudManager [sadf] id [3]
[----] E, [2016-07-20T14:34:10.350747 #16489:2b29ee6c797c] ERROR -- : MIQ(MiqQueue#deliver) Message id: [660], Error: [ooooooh nooooo]
[----] E, [2016-07-20T14:34:10.350914 #16489:2b29ee6c797c] ERROR -- : [RuntimeError]: ooooooh nooooo Method:[rescue in deliver]
[----] E, [2016-07-20T14:34:10.350993 #16489:2b29ee6c797c] ERROR -- :
/home/hild/src/manageiq/providers/amazon/app/models/manageiq/providers/amazon/cloud_manager/refresh_parser.rb:29:in `ems_inv_to_hashes'
/home/hild/src/manageiq/providers/amazon/app/models/manageiq/providers/amazon/refresh_helper_methods.rb:51:in `ems_inv_to_hashes'
/home/hild/src/manageiq/providers/amazon/app/models/manageiq/providers/amazon/cloud_manager/refresher.rb:5:in `parse_legacy_inventory'
/home/hild/src/manageiq/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:115:in `block in parse_targeted_inventory'
/home/hild/src/manageiq/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/home/hild/src/manageiq/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/home/hild/src/manageiq/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:115:in `parse_targeted_inventory'
/home/hild/src/manageiq/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:80:in `block in refresh_targets_for_ems'
/home/hild/src/manageiq/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/home/hild/src/manageiq/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/home/hild/src/manageiq/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:79:in `refresh_targets_for_ems'
/home/hild/src/manageiq/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:21:in `block (2 levels) in refresh'
/home/hild/src/manageiq/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/home/hild/src/manageiq/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/home/hild/src/manageiq/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:21:in `block in refresh'
/home/hild/src/manageiq/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:11:in `each'
/home/hild/src/manageiq/app/models/ems_refresh/refreshers/ems_refresher_mixin.rb:11:in `refresh'
/home/hild/src/manageiq/app/models/manageiq/providers/base_manager/refresher.rb:10:in `refresh'
/home/hild/src/manageiq/app/models/ems_refresh.rb:72:in `block in refresh'
/home/hild/src/manageiq/app/models/ems_refresh.rb:71:in `each'
/home/hild/src/manageiq/app/models/ems_refresh.rb:71:in `refresh'
/home/hild/src/manageiq/app/models/miq_queue.rb:347:in `block in deliver'
/home/hild/.rbenv/versions/2.2.5/lib/ruby/2.2.0/timeout.rb:88:in `block in timeout'
/home/hild/.rbenv/versions/2.2.5/lib/ruby/2.2.0/timeout.rb:32:in `block in catch'
/home/hild/.rbenv/versions/2.2.5/lib/ruby/2.2.0/timeout.rb:32:in `catch'
/home/hild/.rbenv/versions/2.2.5/lib/ruby/2.2.0/timeout.rb:32:in `catch'
/home/hild/.rbenv/versions/2.2.5/lib/ruby/2.2.0/timeout.rb:103:in `timeout'
/home/hild/src/manageiq/app/models/miq_queue.rb:343:in `deliver'
/home/hild/src/manageiq/app/models/miq_queue_worker_base/runner.rb:106:in `deliver_queue_message'
/home/hild/src/manageiq/app/models/miq_queue_worker_base/runner.rb:134:in `deliver_message'
/home/hild/src/manageiq/app/models/miq_queue_worker_base/runner.rb:152:in `block in do_work'
/home/hild/src/manageiq/app/models/miq_queue_worker_base/runner.rb:146:in `loop'
/home/hild/src/manageiq/app/models/miq_queue_worker_base/runner.rb:146:in `do_work'
/home/hild/src/manageiq/app/models/miq_worker/runner.rb:334:in `block in do_work_loop'
/home/hild/src/manageiq/app/models/miq_worker/runner.rb:331:in `loop'
/home/hild/src/manageiq/app/models/miq_worker/runner.rb:331:in `do_work_loop'
/home/hild/src/manageiq/app/models/miq_worker/runner.rb:153:in `run'
/home/hild/src/manageiq/app/models/miq_worker/runner.rb:128:in `start'
/home/hild/src/manageiq/app/models/miq_worker/runner.rb:21:in `start_worker'
/home/hild/src/manageiq/app/models/miq_worker.rb:342:in `block in start'
/home/hild/.rbenv/versions/2.2.5/lib/ruby/gems/2.2.0/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/home/hild/.rbenv/versions/2.2.5/lib/ruby/gems/2.2.0/gems/nakayoshi_fork-0.0.3/lib/nakayoshi_fork.rb:24:in `fork'
/home/hild/src/manageiq/app/models/miq_worker.rb:340:in `start'
/home/hild/src/manageiq/app/models/miq_worker.rb:270:in `start_worker'
/home/hild/src/manageiq/app/models/mixins/per_ems_worker_mixin.rb:68:in `start_worker_for_ems'
/home/hild/src/manageiq/app/models/mixins/per_ems_worker_mixin.rb:46:in `block in sync_workers'
/home/hild/src/manageiq/app/models/mixins/per_ems_worker_mixin.rb:45:in `each'
/home/hild/src/manageiq/app/models/mixins/per_ems_worker_mixin.rb:45:in `sync_workers'
/home/hild/src/manageiq/app/models/miq_server/worker_management/monitor.rb:52:in `block in sync_workers'
/home/hild/src/manageiq/app/models/miq_server/worker_management/monitor.rb:50:in `each'
/home/hild/src/manageiq/app/models/miq_server/worker_management/monitor.rb:50:in `sync_workers'
/home/hild/src/manageiq/app/models/miq_server/worker_management/monitor.rb:22:in `monitor_workers'
/home/hild/src/manageiq/app/models/miq_server.rb:350:in `block in monitor'
/home/hild/src/manageiq/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/home/hild/src/manageiq/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/home/hild/src/manageiq/app/models/miq_server.rb:350:in `monitor'
/home/hild/src/manageiq/app/models/miq_server.rb:372:in `block (2 levels) in monitor_loop'
/home/hild/src/manageiq/gems/pending/util/extensions/miq-benchmark.rb:11:in `realtime_store'
/home/hild/src/manageiq/gems/pending/util/extensions/miq-benchmark.rb:30:in `realtime_block'
/home/hild/src/manageiq/app/models/miq_server.rb:372:in `block in monitor_loop'
/home/hild/src/manageiq/app/models/miq_server.rb:371:in `loop'
/home/hild/src/manageiq/app/models/miq_server.rb:371:in `monitor_loop'
/home/hild/src/manageiq/app/models/miq_server.rb:254:in `start'
/home/hild/src/manageiq/lib/workers/evm_server.rb:65:in `start'
/home/hild/src/manageiq/lib/workers/evm_server.rb:92:in `start'
/home/hild/src/manageiq/lib/workers/bin/evm_server.rb:4:in `<main>'
[----] I, [2016-07-20T14:34:10.351121 #16489:2b29ee6c797c] INFO -- : MIQ(MiqQueue#delivered) Message id: [660], State: [error], Delivered in [0.851910189] seconds
[----] I, [2016-07-20T14:34:16.560804 #15656:2b29ee6c797c] INFO -- : MIQ(MiqServer#heartbeat) Heartbeat [2016-07-20 12:34:16 UTC]...
[----] I, [2016-07-20T14:34:16.576755 #15656:2b29ee6c797c] INFO -- : MIQ(MiqServer#heartbeat) Heartbeat [2016-07-20 12:34:16 UTC]...Complete

The downside is, this would not do a partial refresh. If you prefer to not re-raise the exception immediately, I can revert the last commit and raise at the end of the refresh. As I said, I prefer this approach as it's now, in this commit 💣 Why do you prefer blowing up like this? Is it because it's more obvious that there was a problem? If that's the case, can't we get the same information with last refresh status and providing the error information in the queue message status message? we get the same information with last refresh status and providing the error information in the queue message status message exactly, either way we would see it. The question is, do we want to raise early or collect the errors and raise at the end. I think the code is easier with raising early and it does not need the debug_refresh toggle to enable raising early. And I guess the probability of other errors occurring after the first one is pretty high? To keep the changes in this PR small, I reverted to the first approach, raising collected errors at the end of the refresh.
this addresses the linked BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1319324 @Fryguy or @jrafanie I'm fine with this, merge? LGTM other than the message length comment above, what do you think @Fryguy? @Fryguy ping @miq-bot add_label darga/yes @Fryguy @blomquisg can I have your :+1: or :-1: as this is also for the upcoming darga-3? Darga Backport details:

$ git log
commit 0d8a024df74969088e8d741b67ccacfed307f04a
Author: Jason Frey <fryguy9@gmail.com>
Date:   Tue Aug 30 17:35:44 2016 -0400

    Merge pull request #9907 from durandom/refresh_error_state

    indicate a partial refresh error by raising an exception
    (cherry picked from commit 37cb777854c79a76258a4f9098fa9e61a90a1cdb)
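The final approach merged above — complete the refresh for every target, collect per-target errors, and raise once at the end so the queue message lands in the error state — can be sketched like this (illustrative only; the real change lives in ems_refresher_mixin.rb):

```ruby
class PartialRefreshError < StandardError; end

def refresh_targets(targets)
  errors = []
  refreshed = []
  targets.each do |target|
    begin
      raise "connection failed" if target[:unreachable] # stand-in for a provider error
      refreshed << target[:name]
    rescue StandardError => e
      errors << "#{target[:name]}: #{e.message}"        # keep going, remember the failure
    end
  end
  # Raising here flips the queue message to State: [error], while the
  # healthy targets above were still refreshed (the "partial" refresh).
  raise PartialRefreshError, errors.join("; ") unless errors.empty?
  refreshed
end
```

This keeps the worker alive (the exception is caught by the queue delivery code) while making the failure visible in both the queue message state and the ems last refresh error.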
gharchive/pull-request
2016-07-19T08:30:52
2025-04-01T04:32:44.964916
{ "authors": [ "Fryguy", "chessbyte", "durandom", "jrafanie" ], "repo": "ManageIQ/manageiq", "url": "https://github.com/ManageIQ/manageiq/pull/9907", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2438071066
Axes and the unit_size Description of bug / unexpected behavior

class test(Scene):
    def construct(self):
        self.graph_plane_config = {
            "x_range": [0, 4.3 * PI, PI],
            "y_range": [-2, 2, 1],
            "x_axis_config": {
                "unit_size": 2,  # try another number and it's similar
            },
            "y_axis_config": {
                "unit_size": 1,  # try another number and it's similar
            },
        }
        a = Axes(**self.graph_plane_config)
        self.add(a)

In the class Axes(), in the "x_axis_config" and "y_axis_config", when I change "unit_size" or "length", the result doesn't change. Just set the lengths of the axes in the main object's parameters. The axis_config are mostly meant for cosmetic changes to the looks of the axes, not to their scaling.

class test(Scene):
    def construct(self):
        self.graph_plane_config = {
            "x_range": [0, 4.1 * PI, PI],
            "y_range": [-2, 2, 1],
            "x_length": 4.1 * PI,
            "y_length": 4,
            "tips": False,
            "x_axis_config": {
                "decimal_number_config": {"num_decimal_places": 2},
            },
        }
        a = Axes(**self.graph_plane_config).add_coordinates()
        self.add(a)

Thank you
gharchive/issue
2024-07-30T15:06:08
2025-04-01T04:32:45.007573
{ "authors": [ "TueNguyen2006", "uwezi" ], "repo": "ManimCommunity/manim", "url": "https://github.com/ManimCommunity/manim/issues/3892", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1070670873
Fixed screeninfo.get_monitors returning empty on osx Overview: What does this pull request change? Fixes screeninfo.get_monitors returning empty on OSX Issue: https://github.com/ManimCommunity/manim/issues/2356 Motivation and Explanation: Why and how do your changes improve the library? At the moment, the quick start example does not work for --renderer=opengl, this fixes it Links to added or changed documentation pages Further Information and Comments The problem is also present in manimgl: https://github.com/3b1b/manim/issues/1394 Seems it * should * be handled by https://github.com/rr-/screeninfo , but some probably related issues have been open since mid-2020. The get_monitors function seems to try all possible platforms when one is not supplied explicitly: https://github.com/rr-/screeninfo/blob/master/screeninfo/screeninfo.py#L20-L35 But you can see in these lines:

for enumerator in Enumerator:
    try:
        return _get_monitors(enumerator)
    except Exception:
        pass

it tries the next one only if the previous raised an exception; I suspect some enumerator that is tested before osx is returning successfully when it shouldn't. I will investigate and try a PR on screeninfo, maybe I'll make it check and fail also for empty lists, but even if it makes it through, the last release there was almost a year ago, so it will take some time until the fix reaches us.

Reviewer Checklist
[ ] The PR title is descriptive enough for the changelog, and the PR is labeled correctly
[ ] If applicable: newly added non-private functions and classes have a docstring including a short summary and a PARAMETERS section
[ ] If applicable: newly added functions and classes are tested

@ryanmccauley211 sorry I'm late, holiday season is quite busy. The update makes opengl work fine in macos.
I did got a failed test though, but also tried in main and it's there as well, so don't know what's of it: pytest output manim/utils/testing/frames_comparison.py:128: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ manim/utils/testing/frames_comparison.py:218: in real_test frames_tester.check_frame(-1, scene_tested.renderer.get_frame()) manim/utils/testing/_frames_testers.py:52: in check_frame raise e _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <manim.utils.testing._frames_testers._FramesTester object at 0x12cd4f6d0>, frame_number = -1 frame = array([[[ 0, 0, 0, 255], [ 0, 0, 0, 255], [ 0, 0, 0, 255], ..., [ 0, ..., ..., [ 0, 0, 0, 255], [ 0, 0, 0, 255], [ 0, 0, 0, 255]]], dtype=uint8) def check_frame(self, frame_number: int, frame: np.ndarray): assert frame_number < self._number_frames, ( f"The tested scene is at frame number {frame_number} " f"when there are {self._number_frames} control frames." ) try: > np.testing.assert_allclose( frame, self._frames[frame_number], atol=1.01, err_msg=f"Frame no {frame_number}. You can use --show_diff to visually show the difference.", verbose=False, ) E AssertionError: E Not equal to tolerance rtol=1e-07, atol=1.01 E Frame no -1. You can use --show_diff to visually show the difference. 
E Mismatched elements: 6 / 1639680 (0.000366%) E Max absolute difference: 255 E Max relative difference: 1.94656489 manim/utils/testing/_frames_testers.py:41: AssertionError ------------------------------------------------ Captured stdout call ------------------------------------------------ (1, 480, 854, 4) ================================================== warnings summary ================================================== tests/test_value_tracker.py: 16 warnings tests/test_graphical_units/test_axes.py: 5 warnings tests/test_graphical_units/test_mobjects.py: 5 warnings tests/test_graphical_units/test_threed.py: 75 warnings tests/test_graphical_units/test_updaters.py: 1 warning /Users/brendon.faccion/Desktop/meera/manim/manim/mobject/value_tracker.py:72: DeprecationWarning: This method is not guaranteed to stay around. Please prefer setting the attribute normally or with Mobject.set(). self.set_points(np.zeros((1, 3))) tests/test_value_tracker.py: 1 warning tests/test_graphical_units/test_threed.py: 24 warnings /Users/brendon.faccion/Desktop/meera/manim/manim/mobject/value_tracker.py:133: DeprecationWarning: This method is not guaranteed to stay around. Please prefer setting the attribute normally or with Mobject.set(). self.set_points(path_func(mobject1.points, mobject2.points, alpha)) tests/opengl/test_axes_shift_opengl.py: 4 warnings tests/opengl/test_coordinate_system_opengl.py: 68 warnings tests/opengl/test_opengl_vectorized_mobject.py: 1 warning tests/opengl/test_ticks_opengl.py: 12 warnings /Users/brendon.faccion/Desktop/meera/manim/manim/mobject/types/opengl_vectorized_mobject.py:923: DeprecationWarning: Calling np.sum(generator) is deprecated, and in the future will give a different result. Use np.sum(np.fromiter(generator)) or the python sum builtin instead. 
target_length = alpha * np.sum(length for _, length in curves_and_lengths) tests/test_graphical_units/test_axes.py::test_get_area tests/test_graphical_units/test_axes.py::test_get_area tests/test_graphical_units/test_axes.py::test_get_area tests/test_graphical_units/test_axes.py::test_get_area tests/test_graphical_units/test_axes.py::test_get_area tests/test_graphical_units/test_axes.py::test_get_area tests/test_graphical_units/test_axes.py::test_get_area /Users/brendon.faccion/Desktop/meera/manim/manim/mobject/number_line.py:424: DeprecationWarning: This method is not guaranteed to stay around. Please prefer getting the attribute normally. num_mob.shift(num_mob[0].get_width() * LEFT / 2) tests/test_graphical_units/test_specialized.py::test_Broadcast tests/test_graphical_units/test_specialized.py::test_Broadcast tests/test_graphical_units/test_specialized.py::test_Broadcast tests/test_graphical_units/test_specialized.py::test_Broadcast tests/test_graphical_units/test_specialized.py::test_Broadcast /Users/brendon.faccion/Desktop/meera/manim/manim/animation/specialized.py:85: DeprecationWarning: This method is not guaranteed to stay around. Please prefer setting the attribute normally or with Mobject.set(). mob.set_width(self.initial_width) -- Docs: https://docs.pytest.org/en/stable/warnings.html ============================================== short test summary info =============================================== FAILED tests/test_graphical_units/test_threed.py::test_Axes - AssertionError: ============== 1 failed, 559 passed, 2 skipped, 7 xfailed, 2 xpassed, 224 warnings in 183.41s (0:03:03) ============== @ryanmccauley211 I don't get it, it failed before reaching the tests, did my PR changed something on the bootstrap config? O.o, I can't see how ps: please note I rebased with this repository main branch before my last commit, is it something related to that? it failed before reaching the tests, did my PR changed something on the bootstrap config? 
O.o, I can't see how No, the failures seem to be unrelated. I'll re-run the pipeline.
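For reference, the fallback loop quoted in the PR description can be sketched with the proposed fix applied — treating an empty result as a failure too, so the next enumerator gets tried instead of silently returning an empty list. This is an illustration only; the function names are placeholders and do not match the actual screeninfo API:

```python
# Sketch of the enumerator-fallback strategy described above, with the
# proposed fix: an *empty* monitor list counts as a failure, so the next
# platform enumerator is tried. Names are illustrative, not screeninfo's.

def get_monitors_with_fallback(enumerators):
    """Try each enumerator in order; skip ones that raise OR return nothing."""
    for enumerate_monitors in enumerators:
        try:
            monitors = enumerate_monitors()
        except Exception:
            continue  # enumerator not available on this platform
        if monitors:  # proposed fix: empty list is treated as a failure too
            return monitors
    raise RuntimeError("No enumerator returned any monitors")
```

With this change, an enumerator that "succeeds" with an empty list (the suspected OSX symptom) no longer masks a later enumerator that would actually find the display.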
gharchive/pull-request
2021-12-03T14:52:39
2025-04-01T04:32:45.016603
{ "authors": [ "behackl", "betafcc" ], "repo": "ManimCommunity/manim", "url": "https://github.com/ManimCommunity/manim/pull/2357", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2512588173
Added ConvexHull and LabeledPolygram Overview: What does this pull request change? This PR introduces four additional features and two utilities. manim.mobject.geometry.polygram.ConvexHull which constructs a ConvexHull through a set of points. manim.mobject.three_d.polyhedra.ConvexHull3D which constructs a ConvexHull through a set of points. manim.mobject.geometry.labeled.Label which constructs a boxed label. manim.mobject.geometry.labeled.LabeledPolygram which constructs a polygram containing a label box at its pole of inaccessibility. This is the point inside a polygon that is farthest from its edges. manim.utilities.qhull which constructs a ConvexHull through a set of points. It extends to arbitrary dimensions. manim.utilities.polylabel which computes the pole of inaccessibility for any complex polygon to a given precision. Notably, none of these introduce breaking changes nor require additional dependencies. Motivation and Explanation: Why and how do your changes improve the library? The first two are useful in scenarios where the user has a collection of points but does not know what order they should be plotted. In 2D this reduces to sorting by angle, but in 3D the task is more complex and can be handled automagically. The fourth is useful for labeling polygons. 
Links to added or changed documentation pages I am not sure how to format this, but here is what the doc pages look like locally: manim/docs/build/html/reference/manim.mobject.geometry.labeled.LabeledPolygram.html manim/docs/build/html/reference/manim.mobject.geometry.labeled.Label.html manim/docs/build/html/reference/manim.mobject.geometry.polygram.ConvexHull.html manim/docs/build/html/reference/manim.mobject.three_d.polyhedra.ConvexHull3D.html Further Information and Comments Reviewer Checklist [x] The PR title is descriptive enough for the changelog, and the PR is labeled correctly [x] If applicable: newly added non-private functions and classes have a docstring including a short summary and a PARAMETERS section [x] If applicable: newly added functions and classes are tested Hello there! I just added the PointND and PointND_Array type aliases. Hello there! Apologies for the delay here. Do let me know if any additional changes are needed. Also, not sure how to handle the pre-commit typing issue with self.color in shape_matchers.py since I didn't make that change.
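For readers unfamiliar with the underlying computation: what ConvexHull automates (recovering a plotting order from an unordered point set) can be done in 2D with Andrew's monotone chain. The sketch below is purely illustrative — it is not the manim.utilities.qhull implementation, which extends to arbitrary dimensions:

```python
def convex_hull_2d(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order.

    Illustrative only -- not manim's qhull utility, which handles N dimensions.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means clockwise or collinear
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # drop the duplicated endpoints where the two chains meet
    return lower[:-1] + upper[:-1]
```

Interior points are discarded, which is exactly why the mobject frees the user from knowing the plotting order in advance.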
gharchive/pull-request
2024-09-08T21:46:48
2025-04-01T04:32:45.023200
{ "authors": [ "JayGupta797", "chopan050" ], "repo": "ManimCommunity/manim", "url": "https://github.com/ManimCommunity/manim/pull/3933", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1376469702
StreetPass Find Mii "Unable to read beyond the end of the stream" Error. effect.zip When you try to open this file, the program gives an error saying it can't read past the stream. (The .bcres is inside the .zip) That's interesting. Is this version known already? Or is it some obscure thing that nobody's noticed until now? Looking at this again, the shapes are using a different structure due to being particle effects. Currently emitter data isn't even supported, so I don't think this will be supported.
gharchive/issue
2022-09-16T21:02:38
2025-04-01T04:32:45.048962
{ "authors": [ "KillzXGaming", "SuperHackio" ], "repo": "MapStudioProject/CTR-Studio", "url": "https://github.com/MapStudioProject/CTR-Studio/issues/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1941695729
BruTile iOS build version issue Mapsui Version 4.0.0 Mapsui Platform Xamarin Device iOS Describe the bug In my project I have included BruTile.MbTiles v5.0.5. The same version is also included in Mapsui packages. When building the iOS app with linker settings set to "None" the build is successful. Building with "Framework SDK's only" and adding explicitly to preserve BruTile assembly I get the following error: MTOUCH : error MT2101: Can't resolve the reference 'System.Void BruTile.Web.HttpTileSource::.ctor(BruTile.ITileSchema,System.String,System.Collections.Generic.IEnumerable`1<System.String>,System.String,System.String,BruTile.Cache.IPersistentCache`1<System.Byte[]>,System.Func`2<System.Uri,System.Threading.Tasks.Task`1<System.Byte[]>>,BruTile.Attribution,System.String)', referenced from the method 'BruTile.Web.HttpTileSource Mapsui.Tiling.OpenStreetMap::CreateTileSource(System.String)' in 'BruTile, Version=5.0.4.0, Culture=neutral, PublicKeyToken=null'. [/Users/admin/BuildAgent/13/work/fbe61631739df768/Name1.Name2/Name1.Name2.iOS/Name1.Name2.iOS.csproj] In my project I don't have a reference to BruTile v5.0.4. It seems that the issue is coming from Mapsui packages. I saw a similar issue in 2019 https://github.com/Mapsui/Mapsui/issues/703 ai, that is my mistake. I introduced a breaking change in BruTile 5.0.5. I added an optional parameter to that method, thinking since it is optional no one will notice, but if you do not compile against that code you get the error above. Mapsui expects the exact same signature for that method. I should have introduced a separate method for that purpose instead. My plan is currently not to fix it since there are solutions. Use Mapsui 4.1.0 (released an hour ago). Use BruTile 5.0.4 in your own project. Tell me if this fixes your problem. That worked
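For the second workaround, a hypothetical project-file fragment (SDK-style csproj; the exact version-range syntax should be checked against your project format) that pins BruTile so NuGet does not resolve 5.0.5:

```xml
<!-- Hypothetical fragment: pin BruTile to the version Mapsui 4.0.0 was
     compiled against, so mtouch sees the matching ctor signature. -->
<ItemGroup>
  <PackageReference Include="BruTile" Version="[5.0.4]" />
</ItemGroup>
```

The bracketed `[5.0.4]` is NuGet's exact-version syntax; upgrading to Mapsui 4.1.0 removes the need for the pin.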
gharchive/issue
2023-10-13T10:37:20
2025-04-01T04:32:45.070355
{ "authors": [ "FilipNemec", "pauldendulk" ], "repo": "Mapsui/Mapsui", "url": "https://github.com/Mapsui/Mapsui/issues/2217", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
848797435
Fixed strings Added AppCenterSecrets file Thanks for your pull request
gharchive/pull-request
2021-04-01T21:21:19
2025-04-01T04:32:45.075369
{ "authors": [ "MarcAnt01", "yaichenbaum" ], "repo": "MarcAnt01/Fluent-Screen-Recorder", "url": "https://github.com/MarcAnt01/Fluent-Screen-Recorder/pull/54", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1561501766
support touch screen (mobile devices) please support a version for touch screen devices too (for godot 4), thanks for your contribution. Thanks for the suggestion but this is currently not planned. Adding touch support would make this project so much more complicated because of the needed UI controls.
gharchive/issue
2023-01-29T23:14:09
2025-04-01T04:32:45.083197
{ "authors": [ "MarcPhi", "pooyapower" ], "repo": "MarcPhi/godot-free-look-camera", "url": "https://github.com/MarcPhi/godot-free-look-camera/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
605590825
add acejump AceLineAction jl add acejump AceLineAction jl - i use this a lot This looks great! :rocket: Just one thing. I tried to keep key bindings alphabetically ordered. Is it a problem for you to move the key binding around line 19? :) Thanks, could you also update KEYBINDINGS docs? Great, thank you so much 🥳
gharchive/pull-request
2020-04-23T14:34:14
2025-04-01T04:32:45.089763
{ "authors": [ "MarcoIeni", "mangosmoothie" ], "repo": "MarcoIeni/intellimacs", "url": "https://github.com/MarcoIeni/intellimacs/pull/12", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
372345148
Auto relist all items on transferlist - show how many times it has been listed and not sold READ BEFORE YOU ADD AN ISSUE: Do not request autobuyer features, because they are considered as cheating. Issues regarding autobuyers will be closed immediately. Expected behavior: Give a detailed explanation of the expected behavior. Current behavior: Give a detailed explanation of the current behavior. Metadata: Script version: Browser: OS: Duplicate of #42 Automatically relisting is unwanted behavior because it could be considered as 'botting'.
gharchive/issue
2018-10-21T18:53:06
2025-04-01T04:32:45.102245
{ "authors": [ "Mardaneus86", "Mit010" ], "repo": "Mardaneus86/futwebapp-tampermonkey", "url": "https://github.com/Mardaneus86/futwebapp-tampermonkey/issues/143", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
225376438
Issue #78 Updates typescript to 2.3.2, adds vs code launch configs Overview This adds two new launch configs and a compound config for debugging the client typescript. There is a LOT going on with VS Code right now, and there has been a lot of instability in general around both .NET core and Typescript debugging. Make sure your C# extension is up to date, and if you have trouble consider installing a pre-release if one is available. Either way though, the debugging experience when you try to mix typescript and .net core debugging is fairly rough right now, but a lot of effort is going into making that work much smoother over the next few months. launch configs "[Development] Debug TypeScript" launches a browser for debugging via the chrome debug extensions. Assumes the server is already running. The key here seems to be specifying the sourceMapPathOverrides setting instead of letting it use the default paths. needs testing with linux / osx; I'm not sure the sourceMapPathOverrides is quite right for other OSes. "[Development] Launch Server (no browser)" launches the asp.net core server, but does not open a browser. Intended for use with multi-target debugging in VS Code. Pretty much the same as the existing one, just doesn't open a browser automatically "[Development] Debug Server & Client" this is a compound debugger configuration that starts both of the above configurations in parallel. Start multi-target debugging manually: In VS Code, select the "[Development] Launch Server (no browser)" config and start debugging. Wait until the server has had a few seconds to spin up Switch VS Code to the "[Development] Debug TypeScript" configuration and start debugging The debug bar in VS Code will show two debugger configs and give you a drop down to switch between them. You can set breakpoints for server or client code. Using the compound config The "[Development] Debug Server & Client" config starts debuggers for both the server and client configs at the same time. 
This is not the smoothest experience since the client config will try to open the browser before the server process is running. Keep refreshing the browser window and it should bring up the page as soon as the server is online. I would like some guidance on naming of the configs, especially if you want to leave the compound config in there... can be confusing. The compound config is a really poor experience, especially for devs without much previous experience with vs code... might be better to leave it out. Either way, a bit of readme might help people get started; I can write that up but I wanted to wait until the code gets a review first. This looks great, let me take a look and run through it later today or tomorrow! Thank you @StephenRedd 💯
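For orientation, a trimmed sketch of what such a .vscode/launch.json could look like — the URL, port, paths and DLL name below are placeholders, not the PR's actual values; only the three configuration names come from the description above:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "[Development] Debug TypeScript",
      "type": "chrome",
      "request": "launch",
      "url": "http://localhost:5000",
      "webRoot": "${workspaceRoot}/wwwroot",
      "sourceMapPathOverrides": {
        "webpack:///./*": "${workspaceRoot}/*"
      }
    },
    {
      "name": "[Development] Launch Server (no browser)",
      "type": "coreclr",
      "request": "launch",
      "program": "${workspaceRoot}/bin/Debug/netcoreapp1.1/WebApplication.dll",
      "launchBrowser": { "enabled": false }
    }
  ],
  "compounds": [
    {
      "name": "[Development] Debug Server & Client",
      "configurations": [
        "[Development] Launch Server (no browser)",
        "[Development] Debug TypeScript"
      ]
    }
  ]
}
```

The compound entry simply starts both named configurations in parallel, which is why the browser can race the server on startup as noted above.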
gharchive/pull-request
2017-05-01T02:53:49
2025-04-01T04:32:45.208556
{ "authors": [ "MarkPieszak", "StephenRedd" ], "repo": "MarkPieszak/aspnetcore-angular2-universal", "url": "https://github.com/MarkPieszak/aspnetcore-angular2-universal/pull/228", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
683146908
possible validation issue when marshalling data between Automated Testing page & autotester I'm setting up an autograder test suite and upon saving I get the following error: Traceback (most recent call last): File "/data/markus/software/markus-autotesting-1.9.0/venv/bin/markus_autotester", line 11, in load_entry_point('markus-autotester', 'console_scripts', 'markus_autotester')() File "/data/markus/software/markus-autotesting-1.9.0/src/autotester/cli.py", line 257, in cli COMMANDS[args.command](**kwargs) File "/data/markus/software/markus-autotesting-1.9.0/src/autotester/cli.py", line 98, in wrapper return func(*args, **kwargs) File "/data/markus/software/markus-autotesting-1.9.0/src/autotester/cli.py", line 164, in update_specs raise error jsonschema.exceptions.ValidationError: {'test_suite_name': 'Q1', 'script_file': 'parta_tests.rkt'} is not of type 'string' Failed validating 'type' in schema[0]['properties']['test_data']['items']['properties']['script_files']['items']: {'enum': ['parta_tests.rkt', 'partc_tests.rkt', 'partb_tests.rkt'], 'type': 'string'} On instance['test_data'][0]['script_files'][0]: {'script_file': 'parta_tests.rkt', 'test_suite_name': 'Q1'} As a user, I'm not exactly sure how to parse this error, but my impression is that it's expecting one of ['parta_tests.rkt', 'partc_tests.rkt', 'partb_tests.rkt'] but instead was passed a dictionary containing the script filename but also the test suite name. Seems like this is all internal to Markus; I populate these fields from the GUI and don't have any control over the underlying type. If so, was a change in either MarkUs or autograder perhaps not applied to the other? The error message is generated by a dependency and it looks like they have some bug that reports errors from the wrong part of the form. The actual issue is that you need to select a category (admin or student) for your Q10 group. To be honest, I just looked through the page and looked for missing values in inputs.
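A minimal stand-in for the failing check (plain Python, not the autotester's real jsonschema call) shows why a dict per script_files entry trips a {'type': 'string', 'enum': [...]} schema:

```python
# Illustration of the mismatch reported above: the schema expects each entry
# of `script_files` to be a plain string drawn from an enum, but the form is
# sending a dict per entry. Pure-Python stand-in, not the actual validation.

SCRIPT_FILE_SCHEMA = {
    "type": "string",
    "enum": ["parta_tests.rkt", "partb_tests.rkt", "partc_tests.rkt"],
}

def validate_script_file(instance, schema=SCRIPT_FILE_SCHEMA):
    """Return a list of error messages, empty when the instance is valid."""
    errors = []
    if not isinstance(instance, str):
        errors.append(f"{instance!r} is not of type {schema['type']!r}")
    elif instance not in schema["enum"]:
        errors.append(f"{instance!r} is not one of {schema['enum']!r}")
    return errors

# What the GUI apparently sent vs. what the schema wants:
bad = {"test_suite_name": "Q1", "script_file": "parta_tests.rkt"}
good = "parta_tests.rkt"
```

The type check fails before the enum check ever runs, which matches the "is not of type 'string'" wording in the traceback even though the filename inside the dict is one of the allowed values.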
gharchive/issue
2020-08-20T23:05:49
2025-04-01T04:32:45.211380
{ "authors": [ "dijkstracula", "mishaschwartz" ], "repo": "MarkUsProject/Markus", "url": "https://github.com/MarkUsProject/Markus/issues/4823", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
192295956
Feature Request: Auto-playlists Would it be possible to implement the ability to generate smart playlists in the app. Possibly by integrating this chrome extension: https://github.com/simon-weber/Autoplaylists-for-Google-Music Seems to be open-source, so shouldn't be any legal issues. If someone ports that to not require extension API's and makes a PR I would probably accept it. I am very wary of spoofing GPM rest calls though (which is how that extension works). I can see it ending badly if google changes their rest URL's at some point. TLDR: If someone does it I'll take it, but I'm not going to look into it myself
gharchive/issue
2016-11-29T14:28:24
2025-04-01T04:32:45.303796
{ "authors": [ "MarshallOfSound", "frenchiejnr" ], "repo": "MarshallOfSound/Google-Play-Music-Desktop-Player-UNOFFICIAL-", "url": "https://github.com/MarshallOfSound/Google-Play-Music-Desktop-Player-UNOFFICIAL-/issues/1967", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
774085366
Play inbuilt patterns forever until long press It would be nice for the inbuilts to play forever until long press, as the patterns all have different play times which is hard to keep track of. Or make it possible to save MIW files for inbuilts. Yeah, they are kind of a hodge-podge. Perhaps running forever would be best. After some thought, I agree. I'll remove the timers from the built-ins. Unlike BMP files, the built-ins don't really have a clear ending point, so they might as well run until they are told to stop. Thanks for the idea. I have changed the built-ins to run forever for the ones that had timers. Some of them ran just once normally, I have left those running just one time, but they can of course be repeated by setting the repeat option in the repeat settings menu. Try them and let me know how they work for you. cyclone, meteor, one dot, rainbow pulse, two dots, wedge run only once. We either need an indicator in the menu for single running or forever running patterns, or they all should run forever. Also the single runs do not show a progress bar or time left. Any of those can of course be made to repeat by setting the repeat count. Still, you raise some valid issues. I'll work on it. There are actually two kinds of built-in patterns. Some don't have a natural stopping point, like checkerboard or barber pole, etc. These make sense to run forever. There should be a message on the screen to use long-press to stop. I had put a timer on these to make them stop eventually, the long press is fine to stop these. Some patterns have a natural stopping point, like wedge or running a dot from one end to the other. I think it makes sense to only allow these to run once. They can always be set to repeat with the repeat menu entry. If you always have them self repeating then the natural stopping point doesn't mean anything, and if you wanted to stop at that point it will be very difficult to do with the long press. 
So after some thought I think we need to have both kinds, but I agree that it should be made more clear. I could add something to the name that indicates once or forever. An ∞ symbol would be nice but the TFT fonts don't have one. I could probably create my own though. I've never done that but I have read that it is possible. BTW, it isn't "cyclone", it is Cylon. It's a reference to the old movie/TV series Battlestar Galactica. It was also used for the Kitt car in the Knight Rider TV series. They were both created by Glen Larson (I think that was his name without looking it up).
gharchive/issue
2020-12-23T23:04:54
2025-04-01T04:32:45.312719
{ "authors": [ "MartinNohr", "dirkessl" ], "repo": "MartinNohr/MagicImageWand", "url": "https://github.com/MartinNohr/MagicImageWand/issues/25", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
226655586
Determine Key Languages for MVP [x] Select key languages for MVP. [x] Determine who will be doing the translations. Spanish, Mandarin, Cantonese
gharchive/issue
2017-05-05T18:30:52
2025-04-01T04:32:45.353898
{ "authors": [ "komizutama" ], "repo": "MasbiaSoupKitchenNetwork/open_pantry", "url": "https://github.com/MasbiaSoupKitchenNetwork/open_pantry/issues/80", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
436556039
Manual overwrite It would be great to have the ability to manually enter a time - for example if you forget to 'start' your time logging as soon as you arrive in the office. Latest release (v0.9.0) allows you to edit time entries after clicking the button to create them. Does that meet your requirements? 😄 Released in v0.14.0
gharchive/issue
2019-04-24T08:11:19
2025-04-01T04:32:45.357443
{ "authors": [ "annie-birchall", "matthew-shaw" ], "repo": "MashSoftware/the-button", "url": "https://github.com/MashSoftware/the-button/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
176937073
Admin API should support apikeys When adding Key Authentication on top of the Admin API, it's currently not possible to query entities because the system fails with: GET kong:8000/admin/consumers/?apikey=hello_world { "apikey": "unknown field" } This check should only be disabled on the Admin API, but not at the DAO level. Or better, if the admin API supported some form of Basic Auth we wouldn't need to use the apikey workaround. You can get around this by making an "Admin" api and applying the proper plugins to the new api. Then you can disable outside access to port 8001 forcing all admin request to go through the gateway with proper creds. Kong Enterprise now supports RBAC, obviating the need to cover over the Admin API with a separate Kong instance running key authentication. In addition, the hide_credentials configuration option of key-auth solves this. Closing as such!
gharchive/issue
2016-09-14T15:21:35
2025-04-01T04:32:45.360542
{ "authors": [ "halliej", "p0pr0ck5", "thefosk" ], "repo": "Mashape/kong", "url": "https://github.com/Mashape/kong/issues/1630", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
187990661
500 Internal Server Error: /usr/local/share/lua/5.1/kong/dao/schemas/plugins.lua:55: attempt to index local 'plugin_t' (a nil value) Summary I'm using Kongfig to sync old cluster to new cluster, as part of a blue/green deployment process. I must initiate the sync process kongfig apply several times because at various points during that process Kong returns a 500 internal server error. When I retry the sync, it will pick up where it threw an error (after first verifying all of the previous calls to the admin api was successful) and have no issues. That is until it eventually errors again. We have over 200 apis using several plugins each. I can open a ticket with Kongfig, as well, but since this is Kong giving a 500 error, it makes sense to start here first. Thanks! Steps To Reproduce This is difficult to tell you how to reproduce it as it happens rather randomly. And maybe most uses of kong won't have as many apis. Additional Details & Logs Kong version 0.9.4 Kong debug-level startup logs 2016/11/08 13:07:18 [verbose] Kong: 0.9.4 2016/11/08 13:07:18 [debug] ngx_lua: 10005 2016/11/08 13:07:18 [debug] nginx: 1009015 2016/11/08 13:07:18 [debug] Lua: LuaJIT 2.1.0-beta2 2016/11/08 13:07:18 [debug] PRNG seed: 175164123917 2016/11/08 13:07:18 [verbose] no config file found at /etc/kong.conf 2016/11/08 13:07:18 [verbose] reading config file at /etc/kong/kong.conf 2016/11/08 13:07:18 [debug] admin_listen = "0.0.0.0:8001" 2016/11/08 13:07:18 [debug] anonymous_reports = true 2016/11/08 13:07:18 [debug] cassandra_consistency = "ONE" 2016/11/08 13:07:18 [debug] cassandra_contact_points = {"10.3.3.10"} 2016/11/08 13:07:18 [debug] cassandra_data_centers = {"dc1:2","dc2:3"} 2016/11/08 13:07:18 [debug] cassandra_keyspace = "kong" 2016/11/08 13:07:18 [debug] cassandra_password = "******" 2016/11/08 13:07:18 [debug] cassandra_port = 9042 2016/11/08 13:07:18 [debug] cassandra_repl_factor = 2 2016/11/08 13:07:18 [debug] cassandra_repl_strategy = "SimpleStrategy" 2016/11/08 13:07:18 
[debug] cassandra_ssl = false 2016/11/08 13:07:18 [debug] cassandra_ssl_verify = false 2016/11/08 13:07:18 [debug] cassandra_timeout = 5000 2016/11/08 13:07:18 [debug] cassandra_username = "kong" 2016/11/08 13:07:18 [debug] cluster_listen = "0.0.0.0:7946" 2016/11/08 13:07:18 [debug] cluster_listen_rpc = "127.0.0.1:7373" 2016/11/08 13:07:18 [debug] cluster_profile = "wan" 2016/11/08 13:07:18 [debug] cluster_ttl_on_failure = 120 2016/11/08 13:07:18 [debug] custom_plugins = {} 2016/11/08 13:07:18 [debug] database = "cassandra" 2016/11/08 13:07:18 [debug] dns_resolver = "8.8.8.8" 2016/11/08 13:07:18 [debug] dnsmasq = false 2016/11/08 13:07:18 [debug] dnsmasq_port = 8053 2016/11/08 13:07:18 [debug] log_level = "notice" 2016/11/08 13:07:18 [debug] lua_code_cache = "on" 2016/11/08 13:07:18 [debug] lua_package_cpath = "" 2016/11/08 13:07:18 [debug] lua_package_path = "?/init.lua;./kong/?.lua" 2016/11/08 13:07:18 [debug] lua_ssl_verify_depth = 1 2016/11/08 13:07:18 [debug] mem_cache_size = "4096m" 2016/11/08 13:07:18 [debug] nginx_daemon = "on" 2016/11/08 13:07:18 [debug] nginx_optimizations = true 2016/11/08 13:07:18 [debug] nginx_worker_processes = "auto" 2016/11/08 13:07:18 [debug] pg_database = "kong" 2016/11/08 13:07:18 [debug] pg_host = "127.0.0.1" 2016/11/08 13:07:18 [debug] pg_port = 5432 2016/11/08 13:07:18 [debug] pg_ssl = false 2016/11/08 13:07:18 [debug] pg_ssl_verify = false 2016/11/08 13:07:18 [debug] pg_user = "kong" 2016/11/08 13:07:18 [debug] prefix = "/usr/local/kong/" 2016/11/08 13:07:18 [debug] proxy_listen = "0.0.0.0:8000" 2016/11/08 13:07:18 [debug] proxy_listen_ssl = "0.0.0.0:8433" 2016/11/08 13:07:18 [debug] serf_path = "serf" 2016/11/08 13:07:18 [debug] ssl = true 2016/11/08 13:07:18 [verbose] prefix in use: /usr/local/kong 2016/11/08 13:07:18 [verbose] preparing nginx prefix directory at /usr/local/kong 2016/11/08 13:07:18 [verbose] saving serf identifier to /usr/local/kong/serf/serf.id 2016/11/08 13:07:18 [debug] searching for OpenResty 'resty' 
executable 2016/11/08 13:07:18 [debug] /usr/local/openresty/bin/resty -V: 'nginx version: openresty/1.9.15.1' 2016/11/08 13:07:18 [debug] found OpenResty 'resty' executable at /usr/local/openresty/bin/resty 2016/11/08 13:07:18 [verbose] saving serf shell script handler to /usr/local/kong/serf/serf_event.sh 2016/11/08 13:07:18 [verbose] SSL enabled, no custom certificate set: using default certificate 2016/11/08 13:07:18 [verbose] default SSL certificate found at /usr/local/kong/ssl/kong-default.crt 2016/11/08 13:07:18 [verbose] running datastore migrations 2016/11/08 13:07:18 [verbose] migrations up to date 2016/11/08 13:07:18 [verbose] serf agent not running, deleting /usr/local/kong/pids/serf.pid 2016/11/08 13:07:18 [debug] checking 'serf' executable from 'serf_path' config setting 2016/11/08 13:07:18 [debug] serf version: 'Serf v0.7.0' 2016/11/08 13:07:18 [debug] starting serf agent: nohup serf agent -profile 'wan' -rpc-addr '127.0.0.1:7373' -event-handler 'member-join,member-leave,member-failed,member-update,member-reap,user:kong=/usr/local/kong/serf/serf_event.sh' -bind '0.0.0.0:7946' -node '10.3.3.10_0.0.0.0:7946_8376f88077484a8dac0cd9776bd71075' -log-level 'err' > /usr/local/kong/logs/serf.log 2>&1 & echo $! 
> /usr/local/kong/pids/serf.pid 2016/11/08 13:07:18 [verbose] waiting for serf agent to be running 2016/11/08 13:07:18 [debug] sending signal to pid at: /usr/local/kong/pids/serf.pid 2016/11/08 13:07:18 [debug] kill -0 `cat /usr/local/kong/pids/serf.pid` >/dev/null 2>&1 2016/11/08 13:07:18 [verbose] serf agent started 2016/11/08 13:07:18 [verbose] auto-joining serf cluster 2016/11/08 13:07:18 [verbose] successfully joined cluster at 10.3.3.130:7946 2016/11/08 13:07:18 [verbose] registering serf node in datastore 2016/11/08 13:07:18 [verbose] cluster joined and node registered in datastore 2016/11/08 13:07:18 [debug] searching for OpenResty 'nginx' executable 2016/11/08 13:07:18 [debug] /usr/local/openresty/nginx/sbin/nginx -v: 'nginx version: openresty/1.9.15.1' 2016/11/08 13:07:18 [debug] found OpenResty 'nginx' executable at /usr/local/openresty/nginx/sbin/nginx 2016/11/08 13:07:18 [debug] starting nginx: /usr/local/openresty/nginx/sbin/nginx -p /usr/local/kong -c nginx.conf 2016/11/08 13:07:18 [debug] nginx started 2016/11/08 13:07:18 [info] Kong started Kong error logs (<KONG_PREFIX>/logs/error.log) 2016/11/08 12:25:19 [notice] 8538#0: using the "epoll" event method 2016/11/08 12:25:19 [notice] 8538#0: openresty/1.9.15.1 2016/11/08 12:25:19 [notice] 8538#0: built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) 2016/11/08 12:25:19 [notice] 8538#0: OS: Linux 3.13.0-91-generic 2016/11/08 12:25:19 [notice] 8538#0: getrlimit(RLIMIT_NOFILE): 1024:4096 2016/11/08 12:25:19 [notice] 8545#0: start worker processes 2016/11/08 12:25:19 [notice] 8545#0: start worker process 8546 2016/11/08 12:25:19 [notice] 8545#0: start worker process 8547 2016/11/08 12:25:20 [notice] 8547#0: signal 17 (SIGCHLD) received 2016/11/08 12:25:20 [notice] 8547#0: signal 17 (SIGCHLD) received 2016/11/08 12:25:20 [notice] 8547#0: signal 17 (SIGCHLD) received 2016/11/08 12:25:20 [notice] 8547#0: signal 17 (SIGCHLD) received 2016/11/08 12:25:21 [notice] 8545#0: signal 15 (SIGTERM) received, exiting 
2016/11/08 12:25:21 [notice] 8546#0: exiting
2016/11/08 12:25:21 [notice] 8546#0: exit
2016/11/08 12:25:21 [notice] 8545#0: signal 17 (SIGCHLD) received
2016/11/08 12:25:21 [notice] 8545#0: worker process 8546 exited with code 0
2016/11/08 12:25:21 [notice] 8545#0: signal 29 (SIGIO) received
2016/11/08 12:25:21 [notice] 8547#0: exiting
2016/11/08 12:25:21 [notice] 8547#0: exit
2016/11/08 12:25:21 [notice] 8545#0: signal 17 (SIGCHLD) received
2016/11/08 12:25:21 [notice] 8545#0: worker process 8547 exited with code 0
2016/11/08 12:25:21 [notice] 8545#0: exit
2016/11/08 12:25:21 [notice] 8654#0: using the "epoll" event method
2016/11/08 12:25:21 [notice] 8654#0: openresty/1.9.15.1
2016/11/08 12:25:21 [notice] 8654#0: built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
2016/11/08 12:25:21 [notice] 8654#0: OS: Linux 3.13.0-91-generic
2016/11/08 12:25:21 [notice] 8654#0: getrlimit(RLIMIT_NOFILE): 1024:4096
2016/11/08 12:25:21 [notice] 8662#0: start worker processes
2016/11/08 12:25:21 [notice] 8662#0: start worker process 8664
2016/11/08 12:25:21 [notice] 8662#0: start worker process 8665
2016/11/08 12:25:22 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:25:22 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:25:22 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:25:22 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:25:22 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:25:22 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:25:22 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:25:22 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:25:24 [error] 8665#0: [lua] cluster.lua:20: log(): [cluster] could not increment number of auto-join retries in 'kong' shm: not found, context: ngx.timer
2016/11/08 12:25:24 [error] 8664#0: [lua] cluster.lua:20: log(): [cluster] could not increment number of auto-join retries in 'kong' shm: not found, context: ngx.timer
2016/11/08 12:25:51 [notice]
8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:26:21 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:26:51 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:27:21 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:27:51 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:28:21 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:28:51 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:29:21 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:29:51 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:30:21 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:30:51 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:31:21 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:31:51 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:32:21 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:32:51 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:33:21 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:33:51 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:34:21 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:34:51 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:35:21 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:35:51 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:36:21 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:36:51 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:13 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:14 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:15 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:16 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:16 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:17 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:18 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:19 [notice] 8664#0: signal 17 
(SIGCHLD) received 2016/11/08 12:37:20 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:21 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:21 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:22 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:23 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:24 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:25 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:26 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:27 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:27 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:28 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:29 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:30 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:31 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:32 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:33 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:34 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:35 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:36 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:37 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:38 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:39 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:40 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:41 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:42 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:43 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:44 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:45 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:46 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:47 [notice] 8665#0: signal 17 (SIGCHLD) received 
2016/11/08 12:37:48 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:49 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:50 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:51 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:51 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:52 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:53 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:54 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:55 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:56 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:57 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:37:59 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:00 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:01 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:02 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:03 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:04 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:05 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:06 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:07 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:08 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:09 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:10 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:11 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:12 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:13 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:14 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:16 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:17 [notice] 8665#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:18 [notice] 8664#0: signal 17 (SIGCHLD) received 2016/11/08 12:38:19 
[notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:38:20 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:38:21 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:38:21 [error] 8665#0: *1260 lua coroutine: runtime error: /usr/local/share/lua/5.1/kong/dao/schemas/apis.lua:169: attempt to index local 't' (a nil value)
stack traceback:
coroutine 0:
	/usr/local/share/lua/5.1/kong/dao/schemas/apis.lua: in function 'marshall_event'
	/usr/local/share/lua/5.1/kong/dao/dao.lua:64: in function 'event'
	/usr/local/share/lua/5.1/kong/dao/dao.lua:128: in function 'insert'
	/usr/local/share/lua/5.1/kong/api/crud_helpers.lua:100: in function 'post'
	/usr/local/share/lua/5.1/kong/api/routes/apis.lua:15: in function </usr/local/share/lua/5.1/kong/api/routes/apis.lua:14>
coroutine 1:
	[C]: in function 'resume'
	/usr/local/share/lua/5.1/lapis/application.lua:393: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:130: in function 'resolve'
	/usr/local/share/lua/5.1/kong/api/init.lua:50: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:130: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:163: in function </usr/local/share/lua/5.1/lapis/application.lua:159>
	[C]: in function 'xpcall'
	/usr/local/share/lua/5.1/lapis/application.lua:159: in function 'dispatch'
	/usr/local/share/lua/5.1/lapis/nginx.lua:214: in function 'serve'
	content_by_lua(nginx-kong.conf:108):9: in function <content_by_lua(nginx-kong.conf:108):1>, client: 10.40.0.138, server: kong_admin, request: "POST /apis HTTP/1.1", host: "10.3.3.10:8001"
2016/11/08 12:38:21 [error] 8665#0: *1260 [lua] init.lua:62: handle_error(): /usr/local/share/lua/5.1/lapis/application.lua:396: /usr/local/share/lua/5.1/kong/dao/schemas/apis.lua:169: attempt to index local 't' (a nil value)
stack traceback:
	/usr/local/share/lua/5.1/kong/dao/schemas/apis.lua: in function 'marshall_event'
	/usr/local/share/lua/5.1/kong/dao/dao.lua:64: in function 'event'
	/usr/local/share/lua/5.1/kong/dao/dao.lua:128: in function 'insert'
	/usr/local/share/lua/5.1/kong/api/crud_helpers.lua:100: in function 'post'
	/usr/local/share/lua/5.1/kong/api/routes/apis.lua:15: in function </usr/local/share/lua/5.1/kong/api/routes/apis.lua:14>
stack traceback:
	[C]: in function 'error'
	/usr/local/share/lua/5.1/lapis/application.lua:396: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:130: in function 'resolve'
	/usr/local/share/lua/5.1/kong/api/init.lua:50: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:130: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:163: in function </usr/local/share/lua/5.1/lapis/application.lua:159>
	[C]: in function 'xpcall'
	/usr/local/share/lua/5.1/lapis/application.lua:159: in function 'dispatch'
	/usr/local/share/lua/5.1/lapis/nginx.lua:214: in function 'serve'
	content_by_lua(nginx-kong.conf:108):9: in function <content_by_lua(nginx-kong.conf:108):1>, client: 10.40.0.138, server: kong_admin, request: "POST /apis HTTP/1.1", host: "10.3.3.10:8001"
2016/11/08 12:38:51 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:39:21 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:39:51 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:40:04 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:40:05 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:40:07 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:40:08 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:40:09 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:40:10 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:40:11 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:40:12 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:40:14 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:40:15 [error] 8665#0: *1573 lua coroutine: runtime error: /usr/local/share/lua/5.1/kong/dao/schemas/plugins.lua:55: attempt to index local 'plugin_t' (a nil value)
stack traceback:
coroutine 0:
	/usr/local/share/lua/5.1/kong/dao/schemas/plugins.lua: in function 'marshall_event'
	/usr/local/share/lua/5.1/kong/dao/dao.lua:64: in function 'event'
	/usr/local/share/lua/5.1/kong/dao/dao.lua:128: in function 'insert'
	/usr/local/share/lua/5.1/kong/api/crud_helpers.lua:100: in function 'post'
	/usr/local/share/lua/5.1/kong/api/routes/apis.lua:48: in function </usr/local/share/lua/5.1/kong/api/routes/apis.lua:47>
coroutine 1:
	[C]: in function 'resume'
	/usr/local/share/lua/5.1/lapis/application.lua:393: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:130: in function 'resolve'
	/usr/local/share/lua/5.1/kong/api/init.lua:50: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:130: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:163: in function </usr/local/share/lua/5.1/lapis/application.lua:159>
	[C]: in function 'xpcall'
	/usr/local/share/lua/5.1/lapis/application.lua:159: in function 'dispatch'
	/usr/local/share/lua/5.1/lapis/nginx.lua:214: in function 'serve'
	content_by_lua(nginx-kong.conf:108):9: in function <content_by_lua(nginx-kong.conf:108):1>, client: 10.40.0.138, server: kong_admin, request: "POST /apis/8baa43ee-0677-4dba-a399-f4e0184e49db/plugins HTTP/1.1", host: "10.3.3.10:8001"
2016/11/08 12:40:15 [alert] 8665#0: send() failed (90: Message too long)
2016/11/08 12:40:15 [error] 8665#0: *1573 [lua] init.lua:62: handle_error(): /usr/local/share/lua/5.1/lapis/application.lua:396: /usr/local/share/lua/5.1/kong/dao/schemas/plugins.lua:55: attempt to index local 'plugin_t' (a nil value)
stack traceback:
	/usr/local/share/lua/5.1/kong/dao/schemas/plugins.lua: in function 'marshall_event'
	/usr/local/share/lua/5.1/kong/dao/dao.lua:64: in function 'event'
	/usr/local/share/lua/5.1/kong/dao/dao.lua:128: in function 'insert'
	/usr/local/share/lua/5.1/kong/api/crud_helpers.lua:100: in function 'post'
	/usr/local/share/lua/5.1/kong/api/routes/apis.lua:48: in function </usr/local/share/lua/5.1/kong/api/routes/apis.lua:47>
stack traceback:
	[C]: in function 'error'
	/usr/local/share/lua/5.1/lapis/application.lua:396: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:130: in function 'resolve'
	/usr/local/share/lua/5.1/kong/api/init.lua:50: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:130: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:163: in function </usr/local/share/lua/5.1/lapis/application.lua:159>
	[C]: in function 'xpcall'
	/usr/local/share/lua/5.1/lapis/application.lua:159: in function 'dispatch'
	/usr/local/share/lua/5.1/lapis/nginx.lua:214: in function 'serve'
	content_by_lua(nginx-kong.conf:108):9: in function <content_by_lua(nginx-kong.conf:108):1>, client: 10.40.0.138, server: kong_admin, request: "POST /apis/8baa43ee-0677-4dba-a399-f4e0184e49db/plugins HTTP/1.1", host: "10.3.3.10:8001"
2016/11/08 12:40:21 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:40:51 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:41:21 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:41:51 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:42:21 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:42:51 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:43:21 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:43:51 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:44:21 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:44:51 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:44:54 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:44:56 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:44:57 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:44:58 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:44:59 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:01 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:02 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:03 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:04 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:05 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:07 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:08 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:09 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:10 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:11 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:12 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:14 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:15 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:17 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:18 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:20 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:21 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:21 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:22 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:23 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:25 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:26 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:27 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:29 [notice] 8664#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:30 [notice] 8665#0: signal 17 (SIGCHLD) received
2016/11/08 12:45:31 [notice] 8664#0: signal 17 (SIGCHLD) received

Kong configuration (registered APIs/Plugins & configuration file)

200+ apis using key-auth, rate-limiting, acls, statsd, request-transformer, ip-restriction

Operating System

ubuntu 14.04

Just noticed y'all released 0.9.5 yesterday. I'll give spinning up a new cluster with this version a try and report back.
Unfortunately, similar results on 0.9.5:

2016/11/08 13:52:14 [notice] 8729#0: signal 17 (SIGCHLD) received
2016/11/08 13:52:16 [error] 8728#0: *10901 lua coroutine: runtime error: /usr/local/share/lua/5.1/kong/dao/schemas/apis.lua:169: attempt to index local 't' (a nil value)
stack traceback:
coroutine 0:
	/usr/local/share/lua/5.1/kong/dao/schemas/apis.lua: in function 'marshall_event'
	/usr/local/share/lua/5.1/kong/dao/dao.lua:64: in function 'event'
	/usr/local/share/lua/5.1/kong/dao/dao.lua:128: in function 'insert'
	/usr/local/share/lua/5.1/kong/api/crud_helpers.lua:100: in function 'post'
	/usr/local/share/lua/5.1/kong/api/routes/apis.lua:15: in function </usr/local/share/lua/5.1/kong/api/routes/apis.lua:14>
coroutine 1:
	[C]: in function 'resume'
	/usr/local/share/lua/5.1/lapis/application.lua:393: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:130: in function 'resolve'
	/usr/local/share/lua/5.1/kong/api/init.lua:50: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:130: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:163: in function </usr/local/share/lua/5.1/lapis/application.lua:159>
	[C]: in function 'xpcall'
	/usr/local/share/lua/5.1/lapis/application.lua:159: in function 'dispatch'
	/usr/local/share/lua/5.1/lapis/nginx.lua:214: in function 'serve'
	content_by_lua(nginx-kong.conf:108):9: in function <content_by_lua(nginx-kong.conf:108):1>, client: 10.40.0.138, server: kong_admin, request: "POST /apis HTTP/1.1", host: "10.3.3.205:8001"
2016/11/08 13:52:16 [error] 8728#0: *10901 [lua] init.lua:62: handle_error(): /usr/local/share/lua/5.1/lapis/application.lua:396: /usr/local/share/lua/5.1/kong/dao/schemas/apis.lua:169: attempt to index local 't' (a nil value)
stack traceback:
	/usr/local/share/lua/5.1/kong/dao/schemas/apis.lua: in function 'marshall_event'
	/usr/local/share/lua/5.1/kong/dao/dao.lua:64: in function 'event'
	/usr/local/share/lua/5.1/kong/dao/dao.lua:128: in function 'insert'
	/usr/local/share/lua/5.1/kong/api/crud_helpers.lua:100: in function 'post'
	/usr/local/share/lua/5.1/kong/api/routes/apis.lua:15: in function </usr/local/share/lua/5.1/kong/api/routes/apis.lua:14>
stack traceback:
	[C]: in function 'error'
	/usr/local/share/lua/5.1/lapis/application.lua:396: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:130: in function 'resolve'
	/usr/local/share/lua/5.1/kong/api/init.lua:50: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:130: in function 'handler'
	/usr/local/share/lua/5.1/lapis/application.lua:163: in function </usr/local/share/lua/5.1/lapis/application.lua:159>
	[C]: in function 'xpcall'
	/usr/local/share/lua/5.1/lapis/application.lua:159: in function 'dispatch'
	/usr/local/share/lua/5.1/lapis/nginx.lua:214: in function 'serve'
	content_by_lua(nginx-kong.conf:108):9: in function <content_by_lua(nginx-kong.conf:108):1>, client: 10.40.0.138, server: kong_admin, request: "POST /apis HTTP/1.1", host: "10.3.3.205:8001"
2016/11/08 13:52:41 [notice] 8728#0: signal 17 (SIGCHLD) received
2016/11/08 13:53:11 [notice] 8728#0: signal 17 (SIGCHLD) received

Closing this since it's now a few versions behind, and I'm hoping that the cli backup/restore tool y'all have for v0.10.0 will make this a moot issue.

@mazubieta Have you been able to reproduce this issue in 0.10.0 or later? We've recently experienced the exact same issue, but on version 0.8.3. We want to upgrade, but it would be interesting to know if this issue is really solved in the latest version of Kong. Cc: @thibaultcha

Hey @jmdacruz, we didn't upgrade to 0.10.0 before I parted ways with my last employer, so I do not have a solid answer for you on whether things are any better beyond 0.9.7 or so, which I think was the last version I remember upgrading. I can say that we wrote our own replacement for Kongfig that was just a simple Python tool that employed retries and exponential backoff to cope with this problem.
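The last comment mentions a simple Python tool that used retries and exponential backoff against the Kong admin API. A minimal sketch of that idea (all names here are hypothetical illustrations, not the actual tool's code):

```python
import time


def backoff_delays(base=0.5, factor=2.0, retries=5, max_delay=30.0):
    """Yield capped exponential backoff delays, e.g. 0.5, 1, 2, 4, 8 seconds."""
    delay = base
    for _ in range(retries):
        yield min(delay, max_delay)
        delay *= factor


def post_with_retries(post, payload, retries=5, base=0.5):
    """Call `post(payload)` until it succeeds, sleeping between attempts.

    `post` is any callable that raises on failure (for example, an HTTP
    POST to the Kong admin API); returns its result, or re-raises the
    last error once the retry budget is exhausted.
    """
    last_exc = None
    for delay in backoff_delays(base=base, retries=retries):
        try:
            return post(payload)
        except Exception as exc:  # real code should catch the specific HTTP error
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

This only illustrates the retry pattern described above; error classification (e.g. retrying only on 5xx responses) is left out for brevity.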
gharchive/issue
2016-11-08T13:24:20
2025-04-01T04:32:45.374701
{ "authors": [ "jmdacruz", "mazubieta" ], "repo": "Mashape/kong", "url": "https://github.com/Mashape/kong/issues/1802", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
206089477
Can I inject some actions before API handling?

Summary

Hi, Team. I am integrating KONG with Cloud Foundry, and I created a service broker for KONG configured with the Fully-Brokered Service model. For details, see the Cloud Foundry service overview here.

According to the Cloud Foundry route service specification, when we request a Cloud Foundry app bound to the KONG service, the CF router forwards and transforms the request to KONG as follows:

GET /
Host: apps.cf.host
X-Cf-Forwarded-Url: https://apps.cf.host/real/request_path

For example, when I set up the KONG API as follows, KONG cannot handle the forwarded request:

Request Host: apps.cf.host
Request Path: /real/
Upstream URL: https://apps.cf.host/
Strip request path: false
Preserve Host: false

To solve this problem, KONG should support an injectable action before API handling that can rewrite the request path. I tried to write a Lua script in custom_nginx.template that copies the URL from X-Cf-Forwarded-Url to ngx.ctx.upstream_url, but it seems to be impossible.

For now, I have solved this problem with a proxy server that transforms the forwarded request between the CF router and KONG. However, it is not a simple architecture and I worry about performance. If you have a good idea, please tell me.

Additional Details & Logs

Kong version (0.9.5)
Ubuntu 16.04 LTS

@kirikak2, does the Request Transformer plugin (https://getkong.org/plugins/request-transformer/) not meet your needs here?

@p0pr0ck5 Thanks for your suggestion, but it does not resolve this problem. In my understanding, this plugin works after API handling, and KONG cannot route this request to an API. Currently, I am running a reverse proxy on nginx which forwards and transforms the request. It seems to be working fine.

Ok if we close this issue then?

Sure. In the future, I would like to use some hook point in KONG if I can.

@kirikak2 Check out #2354 which is aimed at this precise use-case (currently a very rough implementation).
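For reference, the transformation the reporter's intermediate proxy performs is essentially just parsing X-Cf-Forwarded-Url and re-targeting the request. A minimal Python illustration of that step (the function name and dict-based headers are hypothetical; a real proxy must also treat header names case-insensitively):

```python
from urllib.parse import urlsplit


def rewrite_for_kong(headers):
    """Return the (host, path) the gateway should route on, taken from
    the URL the Cloud Foundry router placed in X-Cf-Forwarded-Url."""
    forwarded = headers["X-Cf-Forwarded-Url"]
    parts = urlsplit(forwarded)
    # An empty path (e.g. "https://apps.cf.host") normalizes to "/"
    return parts.netloc, parts.path or "/"
```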
gharchive/issue
2017-02-08T03:55:20
2025-04-01T04:32:45.381637
{ "authors": [ "kirikak2", "p0pr0ck5", "thibaultcha" ], "repo": "Mashape/kong", "url": "https://github.com/Mashape/kong/issues/2053", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
180899839
feat(db) Support for Cassandra 3.x - lua-cassandra 1.x

Summary

Implement support for Cassandra 3.x by upgrading lua-cassandra support to 1.x. This is waiting on a lua-cassandra 1.1.0 release before being merged. In the meanwhile, manually install the master branch of lua-cassandra.

Full changelog

- Support for lua-cassandra 1.x
- Support for Cassandra 3.x (2.x is still supported)
- Configurable Load Balancing policy for the C* cluster. We now support Datacenter-aware round robin with the cassandra_lb_policy and cassandra_local_datacenter properties in kong.conf
- Rework the rate-limiting and response-ratelimiting plugins "cluster" policies (those are using the DAO). We now pass the DAO as an argument instead of extending the plugin's DAO.
- Get rid of classic.lua pattern to create DB modules. We now use kong.dao.db.init
- Prevent using a C* cluster made of nodes with different major release versions
- Implement a mock ngx.shared.DICT API for use in resty-cli because lua-cassandra 1.x depends on an shm. Waiting on https://github.com/openresty/resty-cli/pull/12 to get rid of it.
- Better Lua idioms for DAOs and the Factory (Factory.new() instead of Factory())
- Slightly rework the codestyle of the C* and Postgres DB modules

Issues resolved

Fix #1294

Missing

- Bump lua-cassandra to 1.1.x
- Tests: waiting on lua-cassandra 1.1
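The mock ngx.shared.DICT mentioned in the changelog only needs to mimic a handful of shm operations. A rough Python illustration of that shape (the real shim is Lua, and the actual OpenResty API has more methods and different return-value conventions):

```python
class MockSharedDict:
    """In-memory stand-in sketching the subset of ngx.shared.DICT-style
    operations a client library typically relies on (illustrative only)."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        # Missing keys yield None, loosely mirroring a nil return in Lua
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value
        return True

    def incr(self, key, value=1):
        # Incrementing an absent key fails rather than auto-creating it
        if key not in self._data:
            return None
        self._data[key] += value
        return self._data[key]
```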
gharchive/pull-request
2016-10-04T13:54:34
2025-04-01T04:32:45.388290
{ "authors": [ "thibaultcha" ], "repo": "Mashape/kong", "url": "https://github.com/Mashape/kong/pull/1709", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
139494518
Redelivering messages

Hello, I've tried to set up message redelivery as described in the documentation. Here is my code:

var busControl = Bus.Factory.CreateUsingRabbitMq(serviceBusConfigurator =>
{
    serviceBusConfigurator.Host(new Uri("rabbitmq://RabbitMqHostName/RabbitMqVHostName"), h =>
    {
        h.Username(RabbitMqVHostUser);
        h.Password(RabbitMqVHostPassword);
    });

    serviceBusConfigurator.UseConcurrencyLimit(_concurrentConsumerLimit);
    serviceBusConfigurator.UseMessageScheduler(new Uri("rabbitmq://RabbitMqHostName/quartz"));

    serviceBusConfigurator.ReceiveEndpoint(_queueName, ec =>
    {
        ec.UseMessageScope();
        ec.LoadFrom(container);
    });
});

And here is the simplified code of the consumer (I redeliver the message if an exception was caught):

public abstract class HandlerBase<T> : IConsumer<T>
    where T : class
{
    public async Task Consume(ConsumeContext<T> context)
    {
        await HandleAsync(context);
    }

    public abstract void Handle(ConsumeContext<T> context);

    public virtual Task HandleAsync(ConsumeContext<T> context)
    {
        return Task.Run(() =>
        {
            try
            {
                Handle(context);
            }
            catch (Exception ex)
            {
                HandleError(context, ex);
            }
        });
    }

    public virtual void HandleError(ConsumeContext<T> context, Exception ex)
    {
        // logging
        context.Redeliver(TimeSpan.FromMinutes(1));
    }
}

But nothing happens: the message neither comes again after a minute, nor is the "quartz" queue even created. Did I miss something?

You have to run/create the quartz service. This listens for the messages (creates the quartz queue) and handles when to deliver. For testing, you can reference the MassTransit.Quartz package and add .UseInMemoryScheduler() to configure for in-memory testing.

Could you please give a link to an example or description of what I need to do?

Setup scheduling. Here are the docs: http://masstransit.readthedocs.org/en/master/scheduling/index.html

Thank you, I'll try to set it up. I thought it would be simpler, since I only need the message to stay in the queue (not queue_error) even if there was an exception while handling it.
Formerly it was easy with RetryLater. Isn't there another way to achieve such a result without the scheduler, just sending the message back to the tail?

Sending back to the tail might not actually introduce any delay at all, and RetryLater was removed specifically because it was a mistake. It's a thread/queue spinner when used and most people move away from it.

A question about retry policy, for example Intervals: does it send the message back to the queue or just wait in memory? And if I have consumer limit = 1, will it block other messages from being handled?

A second question about scheduling. I've read this chapter: http://masstransit.readthedocs.org/en/master/scheduling/index.html but still don't completely understand who will be scheduling messages. Should it be a stand-alone winservice with its own queue that uses Quartz and its own logic to schedule messages, or the same service that I want to be able to redeliver with UseInMemoryScheduler() configured? In the second case, will there be an additional non-durable queue per each service with redelivery? And if I schedule a message's redelivery for, say, a second, so that messages won't stay too long in the quartz queue, can I use the in-memory scheduler without a big risk of losing something? Why not make the in-memory scheduler's queue also durable, so that I can give it a name as for a usual endpoint?

And the last question: is there some example of the winservice from the first case that schedules messages itself?

@donRumatta Retry just keeps it in memory. If you want it redelivered, use the Redeliver(TimeSpan) api.

Yes, it should be a stand-alone winservice with its own queue that uses Quartz. However, there is no logic to write... all you do is hook up the already provided consumers to your queue.

Do not use the In-Memory-Scheduler in production; that is a way to lose messages.

Example: https://github.com/MassTransit/MassTransit/blob/8a1aa046cdd29bc3384d3d79eb8413397adf4eb2/src/MassTransit.QuartzService/ScheduleMessageService.cs

Thank you, I'll consider your answers.
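Conceptually, the Quartz service discussed above acts as a delay queue: Redeliver(TimeSpan) schedules the message with a due time, and the scheduler sends it back to the origin queue once that time passes. A language-agnostic sketch of that mechanism (Python, purely illustrative; this is not MassTransit's API):

```python
import heapq


class DelayScheduler:
    """Minimal delay-queue model: schedule(due_time, msg) stores a message,
    and due(now) pops every message whose delivery time has passed."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal due times stay FIFO

    def schedule(self, due_time, message):
        heapq.heappush(self._heap, (due_time, self._seq, message))
        self._seq += 1

    def due(self, now):
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[2])
        return ready
```

In the real setup, "popping" a due message corresponds to the Quartz service publishing it back to the endpoint it came from, which is why the standalone scheduler service (and its durable queue) has to be running.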
gharchive/issue
2016-03-09T07:10:36
2025-04-01T04:32:45.404936
{ "authors": [ "donRumatta", "phatboyg", "waynebrantley" ], "repo": "MassTransit/MassTransit", "url": "https://github.com/MassTransit/MassTransit/issues/493", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
687472797
Better handling of Bond/Reward CEs

If the script somehow clicked on a Bond CE when clicking rapidly, it could get stuck there. This fix adds another check for the CE details screen.

The way my gallery app works, I was able to test even with a notch.
gharchive/pull-request
2020-08-27T19:04:37
2025-04-01T04:32:45.471680
{ "authors": [ "MathewSachin" ], "repo": "MathewSachin/Fate-Grand-Automata", "url": "https://github.com/MathewSachin/Fate-Grand-Automata/pull/309", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1741073882
Support for better import path for loadAppsettings

Currently I have to do this in svelte.config.js:

import { loadAppsettings } from "./node_modules/@maal/svelte-data/dist/env/index.js";

// Load appsettings.json files into env, so SvelteKit can interact with them
loadAppsettings();
gharchive/issue
2023-06-05T06:29:16
2025-04-01T04:32:45.473871
{ "authors": [ "MathiasFrost" ], "repo": "MathiasFrost/Svelte.Data", "url": "https://github.com/MathiasFrost/Svelte.Data/issues/50", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
425951452
Test Coverage (%) is zero at the function level while using existing TestResult

Before submitting a bug report, please ensure you:

[ ] Verified this bug is not already reported in an issue
[ ] Can reproduce in a clean PowerShell session (clean = powershell -NoProfile)

Describe the Problem

Test Coverage (%) is zero at the function level while using an existing TestResult.

Steps To Reproduce

Go to '...'
Click on '....'
Scroll down to '....'
Error or unexpected behavior : ...

Expected behavior

A clear and concise description of what you expected to happen.

Screenshots

If applicable, add screenshots to help explain your problem.

Environment

Operating system :
PowerShell version :
PSCodeHealth version :

Additional context

How has this issue affected you? What are you trying to accomplish? Providing context helps us come up with a solution that is most useful and prioritize issues.

Attaching snapshot for reference

@MathieuBuisson: This could be related to the new Pester v5 release.

Should be fixed in PSCodeHealth with the latest PR #26, as soon as it is pushed to PSGallery ^^
gharchive/issue
2019-03-27T13:11:43
2025-04-01T04:32:45.478598
{ "authors": [ "OCram85", "guptalakshay" ], "repo": "MathieuBuisson/PSCodeHealth", "url": "https://github.com/MathieuBuisson/PSCodeHealth/issues/24", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1627500997
for visualization, ensurepip doesn't work
I tried to use snap to install Blender 2.93lts/stable, and I ran /path/to/blender/python -m ensurepip --upgrade. It returned the following:

Defaulting to user installation because normal site-packages is not writeable
Looking in links: /tmp/tmp30yem5vm
Requirement already up-to-date: setuptools in /home/vatis/.local/lib/python3.9/site-packages (49.2.1)
Requirement already up-to-date: pip in /home/vatis/.local/lib/python3.9/site-packages (20.2.3)

It seems that pip was installed into the system Python but not this built-in Python, because when I ran:

blender --background --python-expr "import pip"

it returned:

Blender 2.93.15 (hash 2888f351e535 built 2023-02-21 00:33:31)
Read prefs: /home/vatis/.config/blender/2.93/config/userpref.blend
Error: Python: Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'pip'
location: <unknown location>:-1
Python: Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'pip'
location: <unknown location>:-1
Blender quit

But all of this worked for me when I followed your instructions on my Mac. Do you know of any solution to this? Thx

Hello @ShungJhon, it seems that pip is installed locally (in your "/home/vatis/.local" folder). Maybe you don't have write permissions to install it system-wide (in the Blender folder). When you use the command blender --background --python-expr "import pip", it does not load your local Python libraries. To load the pip library in that Python, you should tell Python to access this local folder like this:

import sys
sys.path.append("/home/vatis/.local/lib/python3.9/site-packages")
# Then you can load pip
import pip

Tell me if it works. Best, @Mathux

Thank you very much. It worked.
gharchive/issue
2023-03-16T13:42:16
2025-04-01T04:32:45.482671
{ "authors": [ "Mathux", "ShungJhon" ], "repo": "Mathux/TEMOS", "url": "https://github.com/Mathux/TEMOS/issues/27", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2007232432
🛑 Inventario App is down In 5f1c5bb, Inventario App (https://inventario.voluntariosgreenpeace.cl/) was down: HTTP code: 0 Response time: 0 ms Resolved: Inventario App is back up in 7bff50f after 11 hours, 57 minutes.
gharchive/issue
2023-11-22T23:14:04
2025-04-01T04:32:45.485223
{ "authors": [ "MatiasM87" ], "repo": "MatiasM87/uptime", "url": "https://github.com/MatiasM87/uptime/issues/1089", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1670187632
🛑 Ticketing App is down In 89bc956, Ticketing App (https://rt.greenpeace.org.ar/) was down: HTTP code: 0 Response time: 0 ms Resolved: Ticketing App is back up in 6fa9250.
gharchive/issue
2023-04-16T23:11:16
2025-04-01T04:32:45.487581
{ "authors": [ "MatiasM87" ], "repo": "MatiasM87/uptime", "url": "https://github.com/MatiasM87/uptime/issues/463", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1770831976
ci: merge staging to master This is an automatic PR generated by the pipeline CI/CD. This will be automatically fast-forward merged if successful. Pipeline Attempt on 909326934 for 733b2760b0fa50298d075cb781b505e907949177 https://gitlab.com/MatrixAI/open-source/js-id/-/pipelines/909326934 Pipeline Attempt on 909327193 for 405fcb579b9ece482b16b5721c926d95561a0502 https://gitlab.com/MatrixAI/open-source/js-id/-/pipelines/909327193
gharchive/pull-request
2023-06-23T05:59:25
2025-04-01T04:32:45.494691
{ "authors": [ "MatrixAI-Bot" ], "repo": "MatrixAI/js-id", "url": "https://github.com/MatrixAI/js-id/pull/17", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1071120260
my work disappears The things disappear, and I've done so many things and it just disappears; now I have to do it all again. Did the button also disappear after you clicked upload?
gharchive/issue
2021-12-04T04:31:32
2025-04-01T04:32:45.523293
{ "authors": [ "EdHerrera123", "lilinLy" ], "repo": "MaxBittker/sandspiel", "url": "https://github.com/MaxBittker/sandspiel/issues/170", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1526124846
[W10H01] Added new Test Case for TrialOfTheSword I chose an example where a lower value is expected, as it was easier for me to follow what happens with this test case than with the one already given in the repository. I accidentally added Tests for TrialOfTheDreams to this pull request as well.. You can just revert them
gharchive/pull-request
2023-01-09T18:45:51
2025-04-01T04:32:45.562116
{ "authors": [ "felixhauptmann", "mbmueller" ], "repo": "MaximilianAnzinger/pgdp2223-tests", "url": "https://github.com/MaximilianAnzinger/pgdp2223-tests/pull/249", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
249194748
Rftools builder crash - NPE (rftools-1.1x-6.12) on MC 1.11.2 Builder crashes trying to mine a block of redstone ore. On my config it is reproducible by placing 1 block of redstone ore and running the builder with a clearing quarry card. It mined other blocks fine, though. Tried Forge latest 2429 and recommended 2386 - same. crash-2017-08-10_02.28.06-server.txt Same as #1304 and indeed caused by NuclearCraft. Closed.
gharchive/issue
2017-08-09T23:53:57
2025-04-01T04:32:45.620144
{ "authors": [ "antcig" ], "repo": "McJty/RFTools", "url": "https://github.com/McJty/RFTools/issues/1328", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
21279898
Uncaught TypeError: Cannot read property 'el' of undefined jquery.shapeshift.js:178 When I try, for example, this: https://github.com/McPants/jquery.shapeshift/blob/master/demos/shuffle.html I get 'el' undefined, only in Chrome; Firefox doesn't care and moves on with the rest of the code. So right now the only way to get it to work on Chrome is to init shapeshift last. I'm sad now... fyi @pedroresende your fix isn't working Solution: copy this into the CSS file: .container > div[data-ss-colspan='2'] { width: 170px; }
gharchive/issue
2013-10-20T13:14:42
2025-04-01T04:32:45.631297
{ "authors": [ "Daniel3549", "brianputz", "gassorr" ], "repo": "McPants/jquery.shapeshift", "url": "https://github.com/McPants/jquery.shapeshift/issues/79", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2133248645
MAT-6378 Enforce jira ticket in git commit message
MADiE PR Jira Ticket: MAT-0000 (Optional) Related Tickets:
Summary
All Submissions
[ ] This PR has the JIRA linked.
[ ] Required tests are included.
[ ] No extemporaneous files are included (i.e. compiled files or testing results).
[ ] This PR is merging into the correct branch.
[ ] All documentation needed for this PR is complete (or noted in a TODO or other ticket).
[ ] Any breaking changes or failing automations are noted by placing a comment on this PR.
DevSecOps
If there is a question whether this PR has a security or infrastructure impact, please contact the Security or DevOps engineer assigned to this project to discuss it further.
[ ] This PR has NO significant security impact (i.e. changing auth methods, adding a new user type, adding a required but vulnerable package).
[ ] All CDN/Web dependencies are hosted internally (i.e. MADiE-Root repo).
Reviewers
By approving this PR you are attesting to the following:
Code is maintainable and reusable, reuses existing code and infrastructure where appropriate, and accomplishes the task's purpose.
The tests appropriately test the new code, including edge cases.
If you have any concerns, they are brought up either to the developer assigned, security engineer, or leads.
This was a test PR, closing it.
gharchive/pull-request
2024-02-13T22:40:40
2025-04-01T04:32:45.651191
{ "authors": [ "adongare" ], "repo": "MeasureAuthoringTool/madie-util", "url": "https://github.com/MeasureAuthoringTool/madie-util/pull/69", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
2657880954
Qt GUI: Fixes and improvements

Qt GUI: Make toolbar moveable but prevent it from being detached
Correction of 43289a54085aa55d5a881d94f1278a942138501e. Turns out the toolbar was designed to be moveable. The main intention of the previous commit was to prevent the toolbar from being detached from the main window. Now it is moveable but must be docked to one of the four sides of the window.

Qt GUI: Save shell extension preference
Prepare for future integration with the IExplorerCommand-based shell extension on Windows. The saved preference can be read from the registry by the IExplorerCommand implementation to decide if it should be visible.

Qt GUI: Add inform version and timestamp options
Something that the VCL GUI has.

Qt GUI: Add captions display options
Match other toolkit versions. I don't have a suitable file to test caption display, so I hope I didn't make a mistake and that it works.
gharchive/pull-request
2024-11-14T07:30:28
2025-04-01T04:32:45.667003
{ "authors": [ "cjee21" ], "repo": "MediaArea/MediaInfo", "url": "https://github.com/MediaArea/MediaInfo/pull/954", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
1784202905
🛑 MediaNext is down In 4d15ff8, MediaNext (https://the-bot.thein321.repl.co) was down: HTTP code: 0 Response time: 0 ms Resolved: MediaNext is back up in 8a0a361.
gharchive/issue
2023-07-01T20:21:45
2025-04-01T04:32:45.669387
{ "authors": [ "MediaPlay-Discord-Bot" ], "repo": "MediaPlay-Discord/status", "url": "https://github.com/MediaPlay-Discord/status/issues/669", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2283499820
Libcuda.so.1 error on Modal.com When running vLLM on Modal.com from the instructions: running the vLLM serverless on modal.com I get a libcuda error when making a request to the deployed app. ImportError: libcuda.so.1: cannot open shared object file: No such file or directory Any thoughts on how I can solve this? Hi, may I know which model did you try when encountering this error?
gharchive/issue
2024-05-07T14:21:06
2025-04-01T04:32:45.697423
{ "authors": [ "darkroasted", "jeffreymeetkai" ], "repo": "MeetKai/functionary", "url": "https://github.com/MeetKai/functionary/issues/182", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1493584853
fix(cpu): fix three tiny bugs 1. If setting CPU affinity fails, exit rather than continue. 2. In the function src/cpu/backend.cpp:bandwidth, initialize 'src' and 'dst' to make sure 'memcpy' actually copies the data. 3. Fix the command line option of 'android_build.sh'. @chenqy4933 Please help review this. The first two bugs were reported by a colleague; the third is because I hadn't updated my forked repo: in the latest code the '-a' option is already in use, so it was changed to '-t'. LGTM
gharchive/pull-request
2022-12-13T07:33:41
2025-04-01T04:32:45.699620
{ "authors": [ "Li-Ming-xin", "chenqy4933" ], "repo": "MegEngine/MegPeak", "url": "https://github.com/MegEngine/MegPeak/pull/26", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1126932043
Is GIOU or DIOU or CIOU better than IOU? As mentioned in the title, have you used giou/diou/ciou to train YOLOX, and which is better? @ZhengMengbin, I think this repo has support for iou and giou loss. By default, iou loss is used. https://github.com/Megvii-BaseDetection/YOLOX/blob/main/yolox/models/losses.py BTW, where is it mentioned that giou/diou/ciou loss is used? I have used giou, ciou and eiou to train yolox-tiny. Unfortunately, none of them is better than the default iou loss, and I found that their learning rate is significantly small. BTW, I will try increasing the initial learning rate for them. How did it work? What was the result of the experiment? It doesn't work by increasing the learning rate. It doesn't work by increasing the learning rate. So the conclusion is that CIOU/DIOU is not better than IOU loss in YOLOX? After I use giou or siou, I get nan in the iou loss and it gives me the error 'Assertion input_val >= zero && input_val <= one failed.' How did you fix this error? @MengtianLee Any update?
gharchive/issue
2022-02-08T08:22:06
2025-04-01T04:32:45.718174
{ "authors": [ "CarlHuangNuc", "GF1447571253", "MengtianLee", "ZhengMengbin", "debapriyamaji", "doctorchoco", "twmht", "wgq18" ], "repo": "Megvii-BaseDetection/YOLOX", "url": "https://github.com/Megvii-BaseDetection/YOLOX/issues/1115", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1922462372
Option to view the date a video was uploaded. Hi, me again. I think the option to view the date a video was uploaded is a key piece of information for most videos, especially when working with many video thumbnails. I think the following area: Would perfectly fit the upload date without making the thumbnail look cluttered. Thanks for your time. This would be nice, along with potentially having the view count there too. Unfortunately, I can't get that information through the way the plugin normally gets video info. Like in issue #9, I could get that information from the YouTube API, but only if the user has an API token set, and I don't think many users do. So, since very few users would be able to use this feature, it's not worth making the plugin more complicated. Thanks for the idea though!
gharchive/issue
2023-10-02T18:54:02
2025-04-01T04:32:45.721034
{ "authors": [ "Meikul", "albertoloscerritos" ], "repo": "Meikul/obsidian-thumbnails", "url": "https://github.com/Meikul/obsidian-thumbnails/issues/12", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
328889419
Slots used in nested components do not render properly. A custom component uses a slot to pass in inner components, but if the slot is used inside a nested component, the slot content is not displayed. Is there a solution to this problem yet??? Defining the component / Using the component. This still doesn't seem to be supported even now! Try removing the scoped attribute from the component's style tag. Try changing the component name to camelCase. @wjy4124 Vue templates don't recognize camelCase names themselves, do they? @heriky camelCase is supported. This thing doesn't seem to convert names automatically: if it is camelCase in components, you have to use camelCase in the template too. @wjy4124 It still doesn't render; so frustrating. When it does render, all the values inside the nested component are lost. This problem greatly limits the composability of components. I got bitten by this too, but I accidentally discovered that with mpvue@1.0.11, mpvue-template-compiler@1.0.11, and mpvue-loader@1.0.13, slots in nested components work normally; in other setups the slot's scope ends up on the parent component. @JRliu I happened to run into this exact problem too... the scope ends up on the parent component. @JRliu mpvue@1.0.11 really does solve the problem.. @JRliu @zWingz So it works if you downgrade the version? 😅 I found downgrading didn't help either. Do the maintainers have no plan to fix this problem? @limichange It works for me after downgrading~ I only downgraded mpvue and the compiler. @JRliu Switching the version to 1.0.11 does work, but with 2 named slots it only renders one of them for me, and lists sometimes render and sometimes don't. It really scared me. @VimMing I chose to implement the components that need nested slots in native Mini Program code, and use mpvue for components without nesting 🤣 But doesn't downgrading cause other problems? @zWingz Could you paste your devDependencies? @limichange For anything that needs slots, I switched to native Mini Program components. @zWingz How did you manage to mix in native Mini Program components?
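The naming point raised in this thread can be sketched as a plain object standing in for a Vue component definition (the component names below are invented for illustration): if a child is registered under a camelCase key in "components", the template must reference it with the same camelCase name, since mpvue-era templates reportedly do not auto-convert to kebab-case.

```javascript
// Hypothetical parent component illustrating the camelCase rule discussed
// above; "myCard" is an invented name, not from the real project.
const parent = {
  components: {
    myCard: {
      // a child exposing a default <slot/> in its template
      template: "<div><slot></slot></div>",
    },
  },
  // Reference it as <myCard> (matching the registration key), not <my-card>:
  template: "<myCard><p>slot content</p></myCard>",
};
```

With a mismatched name (registering myCard but writing <my-card>), the component, and therefore its slot content, would not resolve at all.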
gharchive/issue
2018-06-04T02:30:48
2025-04-01T04:32:45.737799
{ "authors": [ "JRliu", "VimMing", "heriky", "hnu0416", "hualongdai", "limichange", "wjy4124", "zWingz" ], "repo": "Meituan-Dianping/mpvue", "url": "https://github.com/Meituan-Dianping/mpvue/issues/585", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2752224025
Environment setup fails. Chip: Apple M3 Pro. OS: 15.1.1. Python version: 3.6.5. pip3 install -r requirements.txt fails; could you update the versions of the corresponding dependency libraries? Thanks 🙏 Python 3.8 or above is recommended.
gharchive/issue
2024-12-20T08:36:31
2025-04-01T04:32:45.739120
{ "authors": [ "BrightHai", "echoZZJ" ], "repo": "Meituan-Dianping/vision-ui", "url": "https://github.com/Meituan-Dianping/vision-ui/issues/36", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1275996113
Birthday msg fix | High importance Sent by .̔̏𝗠𝗲𝗽𝗵𝗶𝘀𝘁𝗼#5558 (691550551825055775) with bot 948978571802710047 Please fix the automatic message. Fixed a long time ago but forgot to close the issue
gharchive/issue
2022-06-19T08:53:53
2025-04-01T04:32:45.782046
{ "authors": [ "Mephisto5558" ], "repo": "Mephisto5558/Teufelsbot", "url": "https://github.com/Mephisto5558/Teufelsbot/issues/29", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
489740859
Poor performance of nested OPTIONALs
When you have a SPARQL query with nested OPTIONAL clauses, such as the following, its performance is poor, typically causing timeouts.

PREFIX bibo: <http://purl.org/ontology/bibo/>
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT * WHERE {
  {
    SELECT ?article WHERE {
      ?article a bibo:Article .
    }
    LIMIT 10
  }
  OPTIONAL {
    OPTIONAL {
      ?article dcterms:issued ?article_issued .
    }
  }
}

Output of Halyard Profile for this query:

Optimized query:
Projection [2,955,991,897,878,706.5]
   ProjectionElemList
      ProjectionElem "article"
      ProjectionElem "article_issued"
   LeftJoin [2,955,991,897,878,706.5]
      Slice ( limit=10 ) [3,614,563.841]
         Projection [3,614,563.841]
            ProjectionElemList
               ProjectionElem "article"
            StatementPattern [3,614,563.841]
               Var (name=article)
               Var (name=_const_f5e5585a_uri, value=http://www.w3.org/1999/02/22-rdf-syntax-ns#type, anonymous)
               Var (name=_const_6dd7acd3_uri, value=http://purl.org/ontology/bibo/Article, anonymous)
      LeftJoin [226.251]
         SingletonSet [1]
         StatementPattern [226.251]
            Var (name=article)
            Var (name=_const_884f353b_uri, value=http://purl.org/dc/terms/issued, anonymous)
            Var (name=article_issued)

The nested OPTIONAL in this query is unnecessary, but it allows replicating the issue in a minimal way. Mapping a nested optional to LeftJoin with SingletonSet is correct and it should not cause any issue. I see a minor issue with the cardinality of a sub-select with Slice; however, it does not affect the final query tree. I'm aware of some specific queries causing performance issues; unfortunately it is not as simple as just nested OPTIONAL. It requires further investigation.
gharchive/issue
2019-09-05T13:09:43
2025-04-01T04:32:45.785016
{ "authors": [ "asotona", "jindrichmynarz" ], "repo": "Merck/Halyard", "url": "https://github.com/Merck/Halyard/issues/72", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
299949084
fix; use tpl() for template paths This fixes cases of errors where cg is invoked outside of its repository directory. This is from a globally installed official v1.3.3 release:

$ cg --debug --verbose nodejs https://raw.githubusercontent.com/OAI/OpenAPI-Specification/e9c539d86f080f133aa35c3e7db33ef004496625/examples/v3.0/petstore.yaml
Loaded config nodejs
Loaded definition https://raw.githubusercontent.com/OAI/OpenAPI-Specification/e9c539d86f080f133aa35c3e7db33ef004496625/examples/v3.0/petstore.yaml
{ level: 'Valid', elementType: 'Context', elementId: 'None', message: 'No validation errors detected' }
Processing template index.mustache
Processing template index-gcf.mustache
Processing template package.mustache
Processing template README.mustache
Processing template swagger.mustache
Processing template writer.mustache
Making/cleaning output directories
Rendering index.js
Rendering index-gcf.js
Rendering package.json
Rendering README.md
Rendering api/swagger.yaml
Rendering utils/writer.js
fs.js:663
  return binding.open(pathModule.toNamespacedPath(path),
         ^
Error: ENOENT: no such file or directory, open './templates/nodejs/controller.mustache'
    at Object.fs.openSync (fs.js:663:18)
    at Object.fs.readFileSync (fs.js:568:33)
    at /home/steveej/.npm-packages/lib/node_modules/openapi-codegen/index.js:87:57
    at next (/home/steveej/.npm-packages/lib/node_modules/openapi-codegen/node_modules/rimraf/rimraf.js:75:7)
    at FSReqWrap.CB [as oncomplete] (/home/steveej/.npm-packages/lib/node_modules/openapi-codegen/node_modules/rimraf/rimraf.js:111:9)

Thanks very much for fixing this. Released as part of v1.3.4
gharchive/pull-request
2018-02-24T15:32:39
2025-04-01T04:32:45.801335
{ "authors": [ "MikeRalphson", "steveeJ" ], "repo": "Mermade/openapi-codegen", "url": "https://github.com/Mermade/openapi-codegen/pull/16", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2411128764
Point 'index' in CoqDocJS header to index.html The current link in the header, "Index", points to a non-existent page indexpage.html (https://metacoq.github.io/html/indexpage.html), while the index is actually generated in index.html (https://metacoq.github.io/html/index.html). There are two ways to fix this: we can either add the flag --index indexpage to the html target in Makefile, or make the changes proposed in this PR. I don't know which is the preferred method. Thanks!
gharchive/pull-request
2024-07-16T13:20:33
2025-04-01T04:32:45.806219
{ "authors": [ "SwampertX", "mattam82" ], "repo": "MetaCoq/metacoq", "url": "https://github.com/MetaCoq/metacoq/pull/1094", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
875652545
Fix typos in readme CHANEGLOG -> CHANGELOG. Chaneglog is a half-orc barbarian.
gharchive/pull-request
2021-05-04T16:56:00
2025-04-01T04:32:45.808626
{ "authors": [ "rekmarks" ], "repo": "MetaMask/auto-changelog", "url": "https://github.com/MetaMask/auto-changelog/pull/46", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1178361268
Update test setup to reset and restore all mocks between each test Currently we are not setting resetMocks: true and restoreMocks: true within jest.config.js. We should enable this and remove calls to jest.clearAllMocks() and jest.resetAllMocks() within test files. Note that this may require substantial changes for any test suites that rely heavily upon shared mocks. Update: We are setting restoreMocks today. But not resetMocks.
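A minimal sketch of the configuration change described above, shown as a plain object (whether it lives in jest.config.js or the "jest" key of package.json depends on the project):

```javascript
// Jest options discussed above: with these enabled, Jest resets mock state
// and restores spied-on implementations before each test automatically, so
// per-file jest.clearAllMocks() / jest.resetAllMocks() calls can be removed.
const jestConfig = {
  resetMocks: true,    // reset mock state (calls, instances, implementations) before each test
  restoreMocks: true,  // restore original implementations created via jest.spyOn
};

// In a real jest.config.js you would export it:
// module.exports = jestConfig;
```

As the issue notes, suites that rely on shared mocks set up once per file may need restructuring, since their mock implementations will now be cleared between tests.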
gharchive/issue
2022-03-23T16:26:25
2025-04-01T04:32:45.810572
{ "authors": [ "Gudahtt", "mcmire" ], "repo": "MetaMask/core", "url": "https://github.com/MetaMask/core/issues/745", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
385245399
InfuraSubprovider connects to mainnet with a testnet url

import Web3 from "web3";
import TransportNodeJs from "@ledgerhq/hw-transport-node-hid";
import createLedgerSubprovider from "@ledgerhq/web3-subprovider";
import ProviderEngine from "web3-provider-engine";
import FetchSubprovider from "web3-provider-engine/subproviders/fetch";
import InfuraSubprovider from 'web3-provider-engine/subproviders/infura.js';

const rpcUrl = "https://rinkeby.infura.io/v3/xxxxxxxxxxxx";
const networkId = 4;

const getWeb3 = async () => {
  const engine = new ProviderEngine();
  const getTransport = () => TransportNodeJs.create();
  const ledger = createLedgerSubprovider(getTransport, {
    networkId,
  });
  engine.addProvider(ledger);
  engine.addProvider(new InfuraSubprovider({ rpcUrl }));
  engine.start();
  return new Web3(engine);
};

export default getWeb3;

If I use the Fetch subprovider I can connect correctly to the testnet, but if I use the InfuraSubprovider with the exact same code it connects me to the mainnet. Is this expected behaviour? @milonite the current InfuraSubprovider is for their REST api, which they are deprecating. The api you are trying to use is the normal JSON-RPC POST api; use the FetchSubprovider.
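The takeaway from the reply (use FetchSubprovider with the JSON-RPC URL instead of the REST-era InfuraSubprovider) is modeled below with a plain object, since running the real web3-provider-engine classes needs a network and hardware wallet; the helper is an illustration, not the library's API.

```javascript
// Sketch of the corrected provider choice discussed above. The REST-era
// InfuraSubprovider ignored the rpcUrl option and defaulted to mainnet,
// matching the behavior reported in the issue; a fetch-based subprovider
// talks JSON-RPC to whatever URL you give it. This helper is invented
// for illustration only.
function chooseSubprovider(rpcUrl) {
  return {
    kind: "FetchSubprovider", // JSON-RPC POST transport
    options: { rpcUrl },      // e.g. the Rinkeby v3 endpoint from the issue
  };
}

const provider = chooseSubprovider("https://rinkeby.infura.io/v3/xxxxxxxxxxxx");
```

In the original snippet, this corresponds to replacing the `new InfuraSubprovider({ rpcUrl })` line with a fetch subprovider constructed from the same rpcUrl.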
gharchive/issue
2018-11-28T12:27:49
2025-04-01T04:32:45.854564
{ "authors": [ "kumavis", "milonite" ], "repo": "MetaMask/provider-engine", "url": "https://github.com/MetaMask/provider-engine/issues/296", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
161313561
Setup tags on Docker Hub Currently, there are 2 tags on Docker Hub, latest and test. It would be great if tags were set up for each version. You can do so by tagging commits in GitHub and having the automated build on Docker Hub tag images based on git tags. Okay, I will do that. But I do not have enough time at hand to support older versions, i.e. bugfixes and features will only go to the current version. Tags for 1.2.0, 1.3.0 and 1.3.1 created. Thanks!
gharchive/issue
2016-06-20T22:48:17
2025-04-01T04:32:45.875141
{ "authors": [ "e-karge", "smazurov" ], "repo": "MeteoGroup/concourse-ci", "url": "https://github.com/MeteoGroup/concourse-ci/issues/2", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
83139867
File handles into directories in Jar files without trailing slashes unrecognized Here's something I've uncovered while testing that I'm not sure if is a bug or not: trying to access a directory in a Jar file without a trailing slash on the end of the path (i.e. /testJarDir) throws an IOException with the message A filehandle to an empty path (/testJarDir) was requested, and the requested path was not writable. When the trailing slash is added (i.e. /testJarDir/), the FileHandle is created successfully. I'm not sure if this is also the case for directories on the filesystem as well, I'll have to test that. What I'd like to know is, is this correct? Does Pathway require the trailing slash to know that a path is a directory, or do we need to fix this to support creating FileHandles into directories without trailing slashes? Update: I took a look at the spec for IO, and it looks like FileHandles into directories on the filesystem are supported both with and without the trailing slash, so this issue only applies to JarFileHandle (and probably ZipFileHandle as well, since they are essentially the same). Given that observation, this behaviour is obviously incorrect. I'll be happy to fix this once I finish the spec for IO.
gharchive/issue
2015-05-31T17:21:25
2025-04-01T04:32:45.877910
{ "authors": [ "hawkw" ], "repo": "MeteorCode/Pathway", "url": "https://github.com/MeteorCode/Pathway/issues/31", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
214282428
need EULA We should have a EULA for the rest service, map, and any other service we create that provides data. Can use text from WIFIRE Firemap EULA (https://firemap.sdsc.edu/liability.txt). Won’t the EULA depend on the data sources? The plan was to have a “Terms of Use” module in our web service that gave the users data according to the agreements set by each dataset. But I see we need something general for our service itself. I guess WiFire is a pretty good place to start then. Would we need to run this by the UCSD IP department? Or worse, all three universities' IP depts?
gharchive/issue
2017-03-15T04:54:53
2025-04-01T04:32:45.903462
{ "authors": [ "bbalaji-ucsd", "crawld", "jbkoh" ], "repo": "MetroInsight/citadel", "url": "https://github.com/MetroInsight/citadel/issues/11", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
459725904
How do I make this grid work with .NET Core 2.1? Can I use .NET Core 2.1 with this grid? So far I get an error on a mismatch of Microsoft.AspNetCore.Authorization: Error NU1107 Version conflict detected for Microsoft.AspNetCore.Authorization. Install/reference Microsoft.AspNetCore.Authorization 3.0.0-preview6.19307.2 directly to project UIAdmin.Server to resolve this issue. Hello @sanzor, unfortunately you cannot use this nuget with .NET Core 2.1 because it targets the 3.0.0-preview6.19307.2 version. If you are using it on the server side due to types which support Http requests, I can separate those types into another nuget which will target .NET Standard. Hello @Mewriick, thank you for your reply. I have managed to create something similar to a .NET grid working with .NET Core 2.1. I am using the client side.
gharchive/issue
2019-06-24T07:09:14
2025-04-01T04:32:45.910966
{ "authors": [ "Mewriick", "sanzor" ], "repo": "Mewriick/Blazor.FlexGrid", "url": "https://github.com/Mewriick/Blazor.FlexGrid/issues/54", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1191440651
Acrylic/Blur-esque effect on Chrome pre-22523. This isn't so much reporting a bug that I want fixed, rather I'm reporting a bug which actually provides a feature I enjoy, and I'm wondering if there's any way to implement it as a more permanent option. There is a way to apply an Acrylic/Blur like effect to Chrome's title bar on the latest release build of Windows 11 (21H2, 22000.593), when it would otherwise only be available on Insider Builds 22523 and later. Unfortunately when the steps below to reproduce the bug are taken, it only applies to any currently open Chrome windows; any window opened after performing these steps will require you to re-do the steps to apply the blur.

1. Make a process rule for chrome
2. Set your Titlebar Color to "Light"
3. Set your Extend Frame Into Client Area to "Off"
4. Set your Blur Behind to "On"
5. Set your Backdrop Type to "None", then after selecting None, change your Backdrop Type to Mica.

All currently open Chrome windows should have the below look applied, which IMO is much nicer than Mica, and I would appreciate a way of more consistently/permanently applying this effect if possible! In case it matters, I'm running Windows 11 Enterprise (21H2, 22000.593), and MFE 1.0.6.1. My Global Rules are Titlebar Color = System, Backdrop Type = Mica, Extend Frame = Off, Blur Behind = Off. I'd greatly appreciate any consideration you could give to making this a permanent option (if feasible), thanks! Below is an image of the effect.

Unfortunately, Blur Behind is a very experimental feature which is never intended to be used, like Extend Frame Into Client Area. And I believe in this build it causes lag while trying to move and resize the window.

Blur Behind is a very experimental feature which is never intended to be used, like Extend Frame Into Client Area.
I understand, I suppose it's interesting because Blur Behind alone doesn't cause this effect, you have to turn Mica on and off for it to appear, otherwise the titlebars just look like standard Light Mica. I was just wondering if since the Blur Behind option is there anyway and this effect is possible, it could be made available to end-users with the implication that it's an unsupported/experimental feature to be used at user's own risk. And I believe in this build it causes lag while trying to move and resize the window. It doesn't seem to lag any more than without Blur Behind for me, and I'm not on an especially powerful laptop or anything. I'm using 22581, and I could not replicate this effect, however testing on explorer confirms that Blur Behind does apply to new windows correctly. This looks like a problem with either your configuration or Chrome itself (I think it's more likely the latter as it only supports the DWM frame in Windows 7) You can try disabling the custom Windows 10 title bar in Chrome by using -disable-windows10-custom-titlebar (81+) or --disable-features=Windows10CustomTitlebar (80-) Blur Behind is kind of borked on my install Notice how the menu bar and title bar in Paint doesn't have translucency while the status bar does. You can try disabling the custom Windows 10 title bar in Chrome by using -disable-windows10-custom-titlebar (81+) or --disable-features=Windows10CustomTitlebar (80-) Sorry I forgot to mention the really relevant fact that I already had the --disable-windows10-custom-titlebar flag enabled so yeah it is required to re-create this effect. Blur Behind is kind of borked on my install Notice how the menu bar and title bar in Paint doesn't have translucency while the status bar does. I agree Blur Behind in general is pretty messy with some other applications, I get the same results with Paint. Blur Behind applies completely fine to new windows for me, mind to share your config file? 
Blur Behind applies completely fine to new windows for me, mind to share your config file? # Mica For Everyone Configuration Global { TitleBarColor = System # available modes: Default, System, Light, Dark (case sensitive) BackdropPreference = Mica # available modes: Default, None, Mica, Acrylic, Tabbed ExtendFrameIntoClientArea = False # enable to apply backdrop on background of apps (not recommended) EnableBlurBehind = False # make window translucent } # don't touch explorer and mspaint Process: "explorer" { TitleBarColor = Dark BackdropPreference = Mica ExtendFrameIntoClientArea = True EnableBlurBehind = False } Process: "mspaint" { TitleBarColor = Default BackdropPreference = Mica ExtendFrameIntoClientArea = False EnableBlurBehind = False } # apply Mica background to Console Host window Class: "ConsoleWindowClass" { TitleBarColor = System BackdropPreference = Mica ExtendFrameIntoClientArea = False EnableBlurBehind = False } Process: "ConEmu64" { TitleBarColor = Default BackdropPreference = Mica ExtendFrameIntoClientArea = False EnableBlurBehind = False } Process: "chrome" { TitleBarColor = Light BackdropPreference = Mica ExtendFrameIntoClientArea = False EnableBlurBehind = True } Process: "ONENOTE" { TitleBarColor = Default BackdropPreference = Mica ExtendFrameIntoClientArea = False EnableBlurBehind = False } Does this change anything? Process: "chrome" { TitleBarColor = Default BackdropPreference = Default ExtendFrameIntoClientArea = False EnableBlurBehind = True } Does this change anything? Process: "chrome" { TitleBarColor = Default BackdropPreference = Default ExtendFrameIntoClientArea = False EnableBlurBehind = True } Unfortunately not. Oddly, despite my Global Rules BackdropPref being set to Mica and Windows default set to Dark, when I use that config, Chrome's titlebars are plain white/Light as if I had the style set to None. If I then manually choose Mica for the chrome process rule, it turns blurred like my original post. 
Unfortunately, there's not much we can do to help, as MFE is basically a frontend to DWM APIs such as SetWindowCompositionAttribute and DwmSetWindowAttribute. Whether the backdrops are successfully drawn cannot be guaranteed by MFE, as it has no knowledge about the programs whatsoever. No worries, thanks for your efforts. Hey, did you find any solution for the Google Chrome Mica effect? It looks very bad with my current settings :( https://github.com/MicaForEveryone/MicaForEveryone/issues/231 Sorry, can't help you. I've since updated to Win 11's latest build, and I use the Acrylic rather than the Mica effect, which is working normally for me. For the record, you can use MFE and Chrome themes together with -disable-windows10-custom-titlebar, so my Chrome looks like this.
gharchive/issue
2022-04-04T08:31:26
2025-04-01T04:32:45.956233
{ "authors": [ "HeyItsJono", "bigplayer-ai", "dongle-the-gadget" ], "repo": "MicaForEveryone/MicaForEveryone", "url": "https://github.com/MicaForEveryone/MicaForEveryone/issues/59", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2077223234
Wrapping error causes Hey there! It seems that most errors created by this library do not wrap the errors that caused them. Is there any reasoning behind that? I think doing so would make debugging at first glance a lot easier. For example, consider this: https://github.com/MicahParks/keyfunc/blob/d272c92f0acae6ff428ec9dd74ecd281d5dd8115/keyfunc.go#L97-L100 If the returned error wrapped the causing err, one could easily figure out why this failed without constructing one's own debugging setup or using a dynamic debugger. Cheers, Moritz. The reason is that the error behavior was copied from /v2, which was based on an older version of Go. Now that /v3 has access to errors.Join, which was added in Go 1.20, the package should be wrapping errors with this function. I'll add this behavior and do a release this evening, eastern USA timezone. Thank you for bringing this to my attention! Please see the newest release v3.2.3, where each error should be wrapped where appropriate. Thank you so much! Unbelievable that you've addressed this so fast! <3
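keyfunc is a Go library, and the maintainer's fix uses Go 1.20's errors.Join. As an illustrative, language-agnostic sketch of the same idea in Python (all names here are hypothetical, not the library's code): wrapping the cause instead of discarding it keeps the original failure reachable for debugging, via `raise ... from err` and the resulting `__cause__` attribute.

```python
class KeyfuncError(Exception):
    """Library-level error that keeps a reference to its cause."""

def fetch_jwks(url: str) -> dict:
    # Hypothetical low-level step that can fail for many reasons.
    raise ConnectionError(f"could not reach {url}")

def get_key(url: str) -> dict:
    try:
        return fetch_jwks(url)
    except Exception as err:
        # Wrap rather than discard: the original error stays reachable
        # via __cause__, analogous to errors.Join / %w wrapping in Go.
        raise KeyfuncError("failed to get JWKS") from err

try:
    get_key("https://example.com/jwks.json")
except KeyfuncError as e:
    # The root cause is visible at a glance, no debugger needed.
    assert isinstance(e.__cause__, ConnectionError)
```

The caller can then branch on the cause (or, in Go, use errors.Is/errors.As on the joined error) instead of string-matching the message.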
gharchive/issue
2024-01-11T17:24:21
2025-04-01T04:32:45.960498
{ "authors": [ "MicahParks", "msanft" ], "repo": "MicahParks/keyfunc", "url": "https://github.com/MicahParks/keyfunc/issues/103", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
395522730
Change configuration parameter for unit label in weather-modules Hi again, looking into the weather modules I stumbled across different configuration keys for essentially the same purpose in the currentweather (degreeLabel) and the weatherforecast (scale) modules. I think it would be more intuitive to use the same key for both modules. Personally, I prefer the weatherforecast version. weatherforecast https://github.com/MichMich/MagicMirror/blob/de57daa3cd881ce1a14b88307bf61e8109879c81/modules/default/weatherforecast/weatherforecast.js#L28 currentweather https://github.com/MichMich/MagicMirror/blob/de57daa3cd881ce1a14b88307bf61e8109879c81/modules/default/currentweather/currentweather.js#L30 Is this change applicable, as it is a change in the API? I will open a PR if this change is desired; the implementation should then be postponed to the next major release. Have a nice day! I agree, but it's important that it's not a breaking change, so the old configuration should still work. Feel free to send a PR to the develop branch. Also note that we are working on a new weather module which will replace both current weather modules. It's included in the latest release. Thanks for your hint! Just one more question: I cannot make the forecast work; can you provide a working example or give me a hint where my error might be? Thank you and have a nice day! This is my code; it works with type: "current". Providing the initialLoadDelay parameter does not change the behavior. 
 { 
     module: "weather", 
     position: "top_right", 
     header: "Vorhersage", 
     config: { 
         type: "forecast", 
         degreeLabel: true, 
         location: "mylocation", 
         locationID: "mylocationid", // ID from http://www.openweathermap.org/help/city_list.txt 
         apiKey: "myweatherid" 
     } 
 }, 
 @fewieden Any idea? The problem is that you need to specify another weatherEndpoint for the OpenWeatherMap forecast, like weatherEndpoint: "/forecast/daily". If I set the initial load delay to 30 seconds, it is delayed by 30 seconds. 
Be aware that you need to specify the timeframe in milliseconds, like: initialLoadDelay: 30000. More information here: https://github.com/MichMich/MagicMirror/tree/master/modules/default/weather#general-options Thanks for your help; it still does not work. I dug into the code and found the following: an excerpt of a single set of data in the .json file from the OpenWeatherMap weather forecast (at least for me) looks like this: 
 "main": { "temp": 5.52, "temp_min": 5.32, "temp_max": 5.52, "pressure": 1043.34, "sea_level": 1044.06, "grnd_level": 1043.34, "humidity": 86, "temp_kf": 0.2 }, 
 "rain": { "3h": 0.1625 } 
 Thus, this part needs to be changed https://github.com/MichMich/MagicMirror/blob/6914465e3dc357b3e15af77f829dedc1ec058bbd/modules/default/weather/providers/openweathermap.js#L95-L103 like this: 
 weather.date = moment(forecast.dt, "X"); 
 weather.minTemperature = forecast.main.temp_min; 
 weather.maxTemperature = forecast.main.temp_max; 
 weather.weatherType = this.convertWeatherType(forecast.weather[0].icon); 
 if (this.config.units === "imperial" && !isNaN(forecast.rain['3h'])) { 
     weather.rain = forecast.rain['3h'] / 25.4; 
 } else { 
     weather.rain = forecast.rain['3h']; 
 } 
 Another issue with the data is that it is provided in sets of three hours. I will make a PR with the adjusted code. P.S.: The discussion drifted away somewhat; I'll edit the title. I have some concerns with strictly using forecast.rain['3h']. I added a comment to your PR. Hey vincep5, thanks for your feedback. As I do not have access to forecast/daily, can you post an excerpt of a .json file for the rain part? I will be looking into making adjustments to my code to make it work in both versions. Have a nice day! Thanks so much, I will have to configure my setup to the develop branch one day to assist in testing this new stuff. Here's a snip of one day of my forecast. 
It's contained inside a "list": []. Example: 
 { 
     "dt": 1547053200, 
     "temp": { "day": 44.01, "min": 34.54, "max": 50.41, "night": 50.41, "eve": 45.45, "morn": 34.54 }, 
     "pressure": 1004.54, 
     "humidity": 0, 
     "weather": [ { "id": 502, "main": "Rain", "description": "heavy intensity rain", "icon": "10d" } ], 
     "speed": 10.85, 
     "deg": 208, 
     "clouds": 97, 
     "rain": 30.06 
 } 
 Resolved in #1506
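The thread shows the `rain` field arriving in two shapes: an object like `{"3h": 0.1625}` from the 3-hourly endpoint, and a plain number like `30.06` from `/forecast/daily`. The module itself is JavaScript; as an illustrative, language-agnostic sketch in Python (hypothetical function, not MagicMirror code), a parser that tolerates both shapes and applies the imperial mm-to-inch conversion from the thread could look like:

```python
def parse_rain(forecast: dict, units: str = "metric") -> float:
    """Return rainfall from an OpenWeatherMap forecast entry.

    Handles both API shapes seen in the thread:
      - 3-hourly endpoint: {"rain": {"3h": 0.1625}}
      - daily endpoint:    {"rain": 30.06}
    """
    rain = forecast.get("rain", 0)
    if isinstance(rain, dict):       # 3-hourly shape
        rain = rain.get("3h", 0)
    if units == "imperial":          # API reports millimetres
        rain = rain / 25.4           # convert to inches
    return rain

assert parse_rain({"rain": {"3h": 0.1625}}) == 0.1625
assert parse_rain({"rain": 30.06}) == 30.06
assert parse_rain({}) == 0
```

Guarding on the shape (rather than always reading `rain['3h']`) addresses the concern raised about strictly using `forecast.rain['3h']`.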
gharchive/issue
2019-01-03T10:29:41
2025-04-01T04:32:45.971858
{ "authors": [ "MichMich", "fewieden", "fwitte", "vincep5" ], "repo": "MichMich/MagicMirror", "url": "https://github.com/MichMich/MagicMirror/issues/1504", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1515695509
Release 2.22.0 Hello and thank you for wanting to contribute to the MagicMirror² project Please make sure that you have followed these 4 rules before submitting your Pull Request: Base your pull requests against the develop branch. Include these infos in the description: Does the pull request solve a related issue? If so, can you reference the issue like this Fixes #<issue_number>? What does the pull request accomplish? Use a list if needed. If it includes major visual changes please add screenshots. Please run npm run lint:prettier before submitting so that style issues are fixed. Don't forget to add an entry about your changes to the CHANGELOG.md file. Note: Sometimes the development moves very fast. It is highly recommended that you update your branch of develop before creating a pull request to send us your changes. This makes everyone's lives easier (including yours) and helps us out on the development team. Thanks again and have a nice day! Codecov Report Merging #2983 (0ebedd0) into master (298e585) will decrease coverage by 42.04%. The diff coverage is 19.42%. @@ Coverage Diff @@ ## master #2983 +/- ## =========================================== - Coverage 63.82% 21.77% -42.05% =========================================== Files 9 52 +43 Lines 293 11424 +11131 =========================================== + Hits 187 2488 +2301 - Misses 106 8936 +8830 Impacted Files Coverage Δ js/electron.js 0.00% <0.00%> (-70.97%) :arrow_down: js/main.js 0.00% <0.00%> (ø) modules/default/clock/clock.js 0.00% <0.00%> (ø) modules/default/compliments/compliments.js 0.00% <0.00%> (ø) modules/default/newsfeed/newsfeed.js 0.00% <0.00%> (ø) modules/default/weather/providers/darksky.js 0.00% <0.00%> (ø) modules/default/weather/providers/envcanada.js 0.00% <0.00%> (ø) modules/default/weather/providers/openmeteo.js 0.00% <0.00%> (ø) ...odules/default/weather/providers/openweathermap.js 0.00% <0.00%> (ø) modules/default/weather/providers/smhi.js 0.00% <0.00%> (ø) ... 
and 66 more
gharchive/pull-request
2023-01-01T16:58:08
2025-04-01T04:32:45.988966
{ "authors": [ "MichMich", "codecov-commenter" ], "repo": "MichMich/MagicMirror", "url": "https://github.com/MichMich/MagicMirror/pull/2983", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1620150593
fix e2e tests (failed after async changes) by running calendar and newsfeed tests last. Additional change: allow unit tests to run in parallel. This is not a fix of the real issue with the calendar and newsfeed tests, but I moved them to the end of the test run so the other tests are not failing anymore. There are follow-up PRs coming for the real fixes (when I find the culprits). With these changes we can stay with the async changes done by @rejas, and https://github.com/MichMich/MagicMirror/pull/3060 is obsolete. Codecov Report Merging #3063 (16a0553) into develop (1b2785c) will increase coverage by 0.10%. The diff coverage is 62.50%. 
 @@ Coverage Diff @@ 
 ## develop #3063 +/- ## 
 =========================================== 
 + Coverage 23.00% 23.10% +0.10% 
 =========================================== 
 Files 52 52 
 Lines 11578 11582 +4 
 =========================================== 
 + Hits 2663 2676 +13 
 + Misses 8915 8906 -9 
 Impacted Files Coverage Δ 
 modules/default/weather/providers/openmeteo.js 0.00% <0.00%> (ø) 
 modules/default/newsfeed/newsfeedfetcher.js 82.51% <100.00%> (+0.39%) :arrow_up: 
 ... and 1 file with indirect coverage changes
gharchive/pull-request
2023-03-11T21:27:12
2025-04-01T04:32:45.997583
{ "authors": [ "codecov-commenter", "khassel" ], "repo": "MichMich/MagicMirror", "url": "https://github.com/MichMich/MagicMirror/pull/3063", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
230883524
Futures and multitasking How do we deal with the fact that we could be reading data from a serial port (and therefore waiting for it to arrive) and may not have everything in memory? Should we block until we have the data, or should we use something like futures to allow multitasking? I'm going to be using this with rtfm which will handle the multitasking. I was planning on having the serial read in line by line, and sending a whole line at a time to the parser. Not sure if that method will work for everybody though. I imagine most embedded Rust programs would follow a similar pattern. Can you give me an example of how (in an ideal world) you'd like the API to behave? I was thinking you might create some "SerialReader" which yields characters and has an underlying buffer, then every time the buffer is empty it'll ask for more bytes from whoever is on the other end of the serial connection. But that may be entirely different to how you'd like to use the library... I just got serial working on my board, so need to do some testing for the best way to implement it. I was thinking of making it RepRap compatible. So I'd read a line at a time, send that to process (just send x and y to the stepper driver in my case, for a more complicated firmware would need to go to the path planner), then send an ok response. Will have to see how much buffering is required and if sending an ok for each line adds too much latency. May eventually add SD support and just read directly from there. I don't have any experience in designing printer firmware, so I'm just learning as I go. I didn't have time to think about the different modules (crates) which could be part of a 3D printer/CNC firmware yet. I hope I'll find some time this weekend. However, I'd like to suggest decoupling the whole serial and printer part from the gcode parser. In my perfect world... ...the printer firmware supports a simple serial protocol which encapsulates the gcode strings or some other data. 
It also allows communicating to the host if the printer is ready to receive new commands and acknowledges correctly received commands. It also allows to trigger resends if the checksum is wrong. But all this should be independent from the gcode parser itself. It's just a serial protocol to transport the data. A useful protocol for this purpose probably already exists but may not be implemented in rust yet. After the received gcode line is parsed into a proper command it is checked if the command is either an immediate command, like emergency stop, and processed or the command is something like a motion command and put into a command queue. This allows a motion planner to implement features like look-ahead. This will also allow the host to send new commands while the printer is processing the previously received commands until the queue is full. At this point it is not relevant if the gcode commands originated from a serial connection, from a file on a SD Card or some other source. Then there is the printer part which will process commands in the queue which depends on the machine you're building (what hardware: stepper drivers, sensors, ...). definitely, I wasn't saying you should add any of those features to this project. Just showing you how I intend to use it. My stepper controller will look similar to this, https://github.com/etrombly/bluepill/blob/master/examples/stepper_tasks.rs just replace the controller task with the serial input from here https://github.com/etrombly/bluepill/blob/master/examples/loopback.rs (still need to add a buffer). The gcode parser will be the step in between the serial and stepper tasks. @wose, that sounds pretty much how I'd do things. I was thinking something like the SerialReader I mentioned earlier would act as "glue" to convert the data coming from some serial protocol into the char iterator the gcode crate expects. 
It sounds like the printer firmware can be broken down into a couple fairly self-contained modules: Communication protocol Gcode parser Motion planner Motion "executor"/stepper controller With some sort of meta-crate which contains all the trait definitions and glue necessary to bind everything together. Oh, ok sorry, but it's awesome you two think the same way. I'm really looking forward to patch something like the above together. The current implementation will parse consecutive gcodes out of an arbitrary &str, so should work just fine if reading line-by-line or in larger chunks. It doesn't really support reading from a stream-like source (à la std::io::Read), but when you're working with no_std that should probably be done at a higher level anyway.
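The "SerialReader" idea discussed above, a character iterator backed by an internal buffer that asks the serial link for more bytes whenever the buffer runs dry, maps directly onto a generator. gcode-rs is a Rust crate; this is only an illustrative sketch in Python, with a hypothetical `read_chunk` callable standing in for the serial port:

```python
from typing import Callable, Iterator

def serial_chars(read_chunk: Callable[[], bytes]) -> Iterator[str]:
    """Yield characters one at a time, refilling from read_chunk()
    whenever the internal buffer is exhausted.

    read_chunk() returning b"" signals end of stream; it is a
    hypothetical stand-in for whatever sits on the serial line.
    """
    while True:
        chunk = read_chunk()
        if not chunk:
            return
        for byte in chunk:
            yield chr(byte)

# Fake serial source feeding one gcode line in two chunks.
chunks = iter([b"G1 X10", b" Y20\n", b""])
line = "".join(serial_chars(lambda: next(chunks)))
assert line == "G1 X10 Y20\n"
```

A parser consuming this iterator never needs to know whether the bytes came from a UART, an SD card, or a test fixture, which matches the decoupling proposed in the thread.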
gharchive/issue
2017-05-24T00:06:11
2025-04-01T04:32:46.030598
{ "authors": [ "Michael-F-Bryan", "etrombly", "wose" ], "repo": "Michael-F-Bryan/gcode-rs", "url": "https://github.com/Michael-F-Bryan/gcode-rs/issues/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1019125952
Viewer is picking the wrong asteroids Describe the bug When calculating the route in the front-end, you're presented with the wrong selection of asteroids To Reproduce Steps to reproduce the behavior: Go to '...' Click on '....' Scroll down to '....' See error Expected behavior Just the asteroids in the pillbox Screenshots Additional context This could be related to 1cf5cba56cd384809e535c5fcd838497a659f799 Fixed as of f2f185f37228fb17bf57fb9afa6f5018fcdedb51. This was caused by using iloc instead of loc, which made it select by index position instead of index value
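The iloc-versus-loc distinction behind the fix is easy to reproduce. A minimal pandas sketch (the asteroid IDs here are made up, not project data) shows how positional and label selection diverge as soon as the index is no longer the default 0..n-1:

```python
import pandas as pd

# The index holds asteroid IDs; after filtering or reindexing,
# row positions and index labels no longer line up.
asteroids = pd.DataFrame({"name": ["Adalia", "Cyprus", "Tortuga"]},
                         index=[17, 42, 99])

# loc selects by index *value* (the asteroid ID) - what the viewer needs.
assert asteroids.loc[42, "name"] == "Cyprus"

# iloc selects by index *position*: iloc[42] would raise IndexError here,
# and iloc[0] silently returns whatever sits at row position 0.
assert asteroids.iloc[1]["name"] == "Cyprus"
assert asteroids.iloc[0]["name"] == "Adalia"  # not asteroid 42!
```

Using iloc with an ID therefore either crashes or, worse, quietly returns the wrong asteroid, which matches the "wrong selection" symptom reported above.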
gharchive/issue
2021-10-06T20:08:31
2025-04-01T04:32:46.047792
{ "authors": [ "MichaelVerdegaal" ], "repo": "MichaelVerdegaal/InfluenceOrbitMapper", "url": "https://github.com/MichaelVerdegaal/InfluenceOrbitMapper/issues/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
379207184
TSN loading npy weights error Hello, I tried to load the pretrained weights of TSN saved in npy, but found they are not able to load. Error report: Failed to load checkpoint information file Failed loading checkpoint requested. Please check. I guess the reason is that the npy format is not really compatible with the tensorflow ckpt format? How do you work around this? Need to specify --load as 0, to bypass the tensorflow checkpoint.
gharchive/issue
2018-11-09T15:30:35
2025-04-01T04:32:46.067405
{ "authors": [ "JoeHEZHAO" ], "repo": "MichiganCOG/M-PACT", "url": "https://github.com/MichiganCOG/M-PACT/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1554512163
Renamed folder from ATSQuadroStategyBase to ATSQuadroStrategyBase Noticed a typo in the Strategy sub-folder. ok ty
gharchive/pull-request
2023-01-24T08:16:15
2025-04-01T04:32:46.076184
{ "authors": [ "MicroTrendsTom", "bdowling" ], "repo": "MicroTrendsLtd/NinjaTrader8", "url": "https://github.com/MicroTrendsLtd/NinjaTrader8/pull/85", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
253429163
How to associate an HttpHeaders collection with a WebRequest What's the correct way to populate an HttpHeaders collection and then use it with a WebRequest? I see methods to get the headers and to get default headers, but nothing to set the headers. Also, can you set the UserAgent and Content-Type through the HttpHeaders? Some interfaces don't allow that, and they need to be set through properties. We need to be able to add custom headers, set Content-Type and UserAgent, and read the headers coming back in the response. Use the GetHeaders method on the HttpRequestMessage. Then on the returned object, call the Add method. There is no need to write back the Headers object; you don't get a copy of the headers, but a reference. For example, Headers.Add('User-Agent','MyAgentString') should work. For the Content-Type header, you have to work with the HttpContent object. This object also has a GetHeaders method that you can call. Then call Add on the returned object, like RequestHeaders.Add('Content-Type','application/x-www-form-urlencoded'). For a complete example you may want to look at this blog post: http://www.kauffmann.nl/2017/07/18/al-web-service-examples/
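The answer describes AL's pattern: you receive a reference to the headers collection the request already owns and mutate it with Add. As a hedged analogue outside AL, Python's stdlib follows the same "mutate the request's own header collection" idea, including User-Agent and Content-Type (the endpoint below is hypothetical, and no request is actually sent):

```python
import urllib.request

# Build a request without sending it; headers live on the object itself.
req = urllib.request.Request(
    "https://example.com/api",                 # hypothetical endpoint
    data=b"grant_type=client_credentials",
    method="POST",
)
# Like Headers.Add() in AL: mutate the request's header collection in place.
req.add_header("User-Agent", "MyAgentString")
req.add_header("Content-Type", "application/x-www-form-urlencoded")

# urllib normalizes header keys via str.capitalize(), hence "User-agent".
assert req.get_header("User-agent") == "MyAgentString"
assert req.get_header("Content-type") == "application/x-www-form-urlencoded"
```

The parallel to the AL answer is that there is no separate "write the headers back" step in either API; the collection you add to is the one the request will send.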
gharchive/issue
2017-08-28T19:23:09
2025-04-01T04:32:46.094676
{ "authors": [ "ajkauffmann", "jcooper01" ], "repo": "Microsoft/AL", "url": "https://github.com/Microsoft/AL/issues/551", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
338688051
Adaptive Sampling in JS? Hi, I have an ASP.NET Core app for which I finally found out how to disable / exclude items from adaptive sampling. (Side note: the UI in Azure only displays details for ASP.NET Framework projects containing applicationinsights.config.) Now I am wondering how to manage/disable adaptive sampling in the front-end, with this library. To my knowledge, I need to exclude Events and PageViews from being sampled. I manually add page views like so: import { AppInsights } from "applicationinsights-js"; .... AppInsights.trackPageView("page"); The reason: it's an SPA application with no real page refresh, so page views are not tracked when I don't add them manually. So how do I exclude these page views (and also trace logs, please) from adaptive sampling? Or how do I disable it altogether? Thanks! @Ruud-cb I think I couldn't understand you. Can you please clarify what you mean by "sampling"? @Ruud-cb - Here is a relevant docs page: https://docs.microsoft.com/en-us/azure/application-insights/app-insights-sampling#sampling-and-the-javascript-sdk @Dmitry-Matveev is there a way to disable certain telemetry types from being sampled? As far as I know, the JS SDK does not support adaptive sampling; it is left to the back end (.NET only), although it may handle it as referenced in the linked doc. The JS SDK only has a samplingPercentage setting on its config. You cannot exclude specific types from sampling via configuration in the JS SDK, at least that's what I see in the code here: it simply excludes metrics from sampling but seems to sample everything else. I can think of a workaround: implement a Telemetry Initializer and set sampleRate to 100 for the telemetry types you'd like to exclude from sampling. @Dmitry-Matveev thanks for the reply. 
If the JS SDK is set up with the same iKey as the .NET SDK, will it obey the <IncludeTypes> and <ExcludeTypes> specified server-side, similar to how it matches the "users" to sample in (doc ref quoted below)? The client-side (JavaScript) SDK participates in fixed-rate sampling in conjunction with the server-side SDK. The instrumented pages will only send client-side telemetry from the same users for which the server-side made its decision to "sample in." @markwolff , unfortunately not, at least as per my knowledge. The statement about users was true for a while due to the way the hashing algorithm in sampling was implemented: it calculated the same hash for the same user ID value on the JS side and on the server side. There was a change a while ago, however, to make server-side sampling rely on the Operation Id hash rather than the User Id hash, which I think may affect the truthfulness of the statement you referenced. Please refer to the solution by @Dmitry-Matveev: implement a Telemetry Initializer and set sampleRate to 100 for the telemetry types you'd like to exclude from sampling.
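The suggested workaround, pinning sampleRate to 100 for telemetry types that must never be dropped while sampling everything else at the configured percentage, is a small piece of logic. The SDK in question is JavaScript; this is only an illustrative sketch in Python, with a hypothetical item shape rather than the Application Insights API:

```python
import random

EXCLUDED_TYPES = {"PageView", "Event"}   # never sample these out

def apply_sampling(item: dict, percentage: float, rng=random.random) -> bool:
    """Decide whether a telemetry item is kept.

    Items of excluded types get sampleRate pinned to 100, mirroring the
    Telemetry Initializer workaround; other items are kept with the
    configured probability.
    """
    if item["type"] in EXCLUDED_TYPES:
        item["sampleRate"] = 100.0
        return True
    item["sampleRate"] = percentage
    return rng() * 100.0 < percentage

pv = {"type": "PageView"}
assert apply_sampling(pv, 25.0) is True and pv["sampleRate"] == 100.0

trace = {"type": "Trace"}
kept = apply_sampling(trace, 25.0, rng=lambda: 0.10)  # 10 < 25: kept
assert kept is True and trace["sampleRate"] == 25.0
```

Recording the effective sampleRate on each item matters because the ingestion pipeline uses it to scale counts back up; items pinned at 100 are counted exactly once.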
gharchive/issue
2018-07-05T18:53:39
2025-04-01T04:32:46.110168
{ "authors": [ "Dmitry-Matveev", "Ruud-cb", "jpiyali", "markwolff", "rand0me" ], "repo": "Microsoft/ApplicationInsights-JS", "url": "https://github.com/Microsoft/ApplicationInsights-JS/issues/630", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
264573932
InstrumentationKey in source control The method UseApplicationInsights loads the InstrumentationKey from application.json, application.{EnvironmentName}.json, and environment variables. Is there any way to load the InstrumentationKey from app secrets in development, or is it safe to put the instrumentation key in a json file and commit it to my source control, which is publicly available? I would suggest that you do not check in your Instrumentation Key in a public repo, no. Instead, you can set variables in your deployment scripts, and have your automation system substitute those variables at deploy time. One example of how others have done this is by our DevOps Rangers; you can see how they did it for one of their OSS projects here. Is there any way to use user secrets to load the InstrumentationKey with this code: if (env.IsDevelopment()) { builder.AddUserSecrets<Startup>(); } @dalibor983 This overload of UseApplicationInsights(string instrumentationKey) accepts the instrumentation key as a parameter. You can store your key in Azure Key Vault and read it from there, just like you would read any other secret, and pass the instrumentation key to the UseApplicationInsights(key) method. The following contains an example of retrieving any secret from Key Vault: https://docs.microsoft.com/en-us/aspnet/core/security/key-vault-configuration?tabs=aspnetcore2x Please let us know if this is what you are looking for. Closing this for now.
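The advice in the thread boils down to: keep the key out of the committed file and inject it at deploy time (via deployment variables, user secrets, or Key Vault). The thread is about ASP.NET Core; as a hedged, minimal sketch of the same pattern in Python, reading the key from the environment so source control only ever sees a placeholder:

```python
import os

def get_instrumentation_key() -> str:
    """Read the key from the environment, never from a committed file.

    The deployment pipeline (or local user secrets tooling) sets the
    variable; the repository itself contains no real key.
    """
    key = os.environ.get("APPINSIGHTS_INSTRUMENTATIONKEY")
    if not key:
        raise RuntimeError(
            "APPINSIGHTS_INSTRUMENTATIONKEY is not set; "
            "configure it in your deployment environment"
        )
    return key

os.environ["APPINSIGHTS_INSTRUMENTATIONKEY"] = "00000000-0000-0000-0000-000000000000"
assert get_instrumentation_key().count("-") == 4
```

Failing fast when the variable is missing is deliberate: a silently empty key would make telemetry vanish with no error, which is much harder to diagnose.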
gharchive/issue
2017-10-11T12:50:42
2025-04-01T04:32:46.114529
{ "authors": [ "cijothomas", "d3r3kk", "dalibor983" ], "repo": "Microsoft/ApplicationInsights-aspnetcore", "url": "https://github.com/Microsoft/ApplicationInsights-aspnetcore/issues/535", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
428757188
az batch pool create: error: 'NoneType' object has no attribute 'startswith' Problem description Unable to create a pool using the command az batch pool create --json-file pool.json. Problem details Trying out distributed training; currently setting up clusters. I can see the cluster-creation notebook has some typos; I have fixed multiple minor errors (will create pull requests once I successfully do everything) and am carrying on creating the pool now. I updated my Azure subscription from free trial to pay-as-you-go to allow 6 vCPUs for NV6, and end up with this error (last part in the Jupyter notebook): az batch pool create: error: 'NoneType' object has no attribute 'startswith' json file generated: Experiment/Environment details Tutorial used: DistributedRL Environment used: NA Versions of artifacts used python 3.6 Keras 2.1.2 P.S. If you have time, can you kindly have a look at issue 85: https://github.com/Microsoft/AutonomousDrivingCookbook/issues/85 The major problem I have with E2E deep learning is that I am not able to connect to the modified AirSim client, so I cannot run the model in landscape mode. I just noticed I am getting the same error with other batch pool commands: 
 C:\Users\wonjo>az batch pool list --subscription abcd 
 usage: az batch pool list [-h] [--verbose] [--debug] 
                           [--output {json,jsonc,table,tsv,yaml,none}] 
                           [--query JMESPATH] [--filter FILTER] [--select SELECT] 
                           [--expand EXPAND] [--account-name ACCOUNT_NAME] 
                           [--account-key ACCOUNT_KEY] 
                           [--account-endpoint ACCOUNT_ENDPOINT] 
                           [--subscription _SUBSCRIPTION] 
 az batch pool list: error: 'NoneType' object has no attribute 'startswith' 
 Seems like a bug in az batch pool, not the code; closing the issue here.
gharchive/issue
2019-04-03T13:06:59
2025-04-01T04:32:46.121134
{ "authors": [ "wonjoonSeol" ], "repo": "Microsoft/AutonomousDrivingCookbook", "url": "https://github.com/Microsoft/AutonomousDrivingCookbook/issues/100", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
206105259
Directline API : 502 Bad Gateway error I am getting the response from the Bot intermittently. While sending the request, I am getting the below error. POST https://directline.botframework.com/v3/directline/conversations/B56KT664xddDwc9SBbL4md/activities 502 (Bad Gateway) I am also getting an error in botchat.js. Is there a problem with the Direct Line API? Can anyone help me to solve this issue? Yes, closing as this is a duplicate of https://github.com/Microsoft/BotBuilder/issues/2201
gharchive/issue
2017-02-08T06:09:25
2025-04-01T04:32:46.125930
{ "authors": [ "AlavudeenRafiq", "dandriscoll" ], "repo": "Microsoft/BotBuilder", "url": "https://github.com/Microsoft/BotBuilder/issues/2207", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
271382571
[Question] When will BotBuilder SDK support FB List & Menu? Bot Info SDK Platform: Node.js Active Channels: Messenger Deployment Environment: ngrok Actual Results When I tried sending native messages (for FB), it produced the infamous API error and crashed. @SerendioSoftwares Have you tried sending your Messenger-specific formatting data via the channelData property of the Message object? See: https://docs.microsoft.com/en-us/bot-framework/dotnet/bot-builder-dotnet-channeldata @SerendioSoftwares Were you able to solve it using the channelData property? closing as no response from user
gharchive/issue
2017-11-06T07:34:16
2025-04-01T04:32:46.129238
{ "authors": [ "SerendioSoftwares", "nwhitmont" ], "repo": "Microsoft/BotBuilder", "url": "https://github.com/Microsoft/BotBuilder/issues/3720", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
300856208
BotBuilder in node.js doesn't resolve session.message.user.name Bot Info SDK Platform: Node.js SDK Version: 3.14.0 Active Channels: WebChat (DirectLine) Deployment Environment: Azure Bot Service Issue Description I'm trying to make a bot using node.js, embedded in SharePoint using the script below. Once the user opens the bot window, the bot sends a welcome message using conversationUpdate; however, session.message.user.name is 'undefined'. When checking this in the Emulator, session.message.user.name works fine! Please let me know how to pass the user.name to the opening message from the bot. Code Example 
 bot.on('conversationUpdate', (session, activity, message) => { 
     console.log('session.message.user.name = ' + session.message.user.name); 
     if (session.membersAdded) { 
         session.membersAdded.forEach(function (identity) { 
             if (identity.id === session.address.bot.id) { 
                 bot.beginDialog(session.address, 'Welcome'); 
                 //session.send(session.user.name) 
                 //session.send('this is welcome message') 
             } 
         }); 
     } 
 }) 
 Reproduction Steps Open the chat window. The welcome message shows "Welcome 'undefined' user". Expected Behavior Expect the user: { id: 'user12345', name: 'user1' } to be passed correctly. Actual Results session.message.user.name is 'undefined', or it errors with the below stack trace. 
TypeError: Cannot read property 'user' of undefined at UniversalBot.bot.on (D:\home\site\wwwroot\app.js:72:65) at emitOne (events.js:96:13) at UniversalBot.emit (events.js:188:7) at D:\home\site\wwwroot\node_modules\botbuilder\lib\bots\UniversalBot.js:156:31 at D:\home\site\wwwroot\node_modules\botbuilder\lib\bots\UniversalBot.js:427:53 at UniversalBot.tryCatch (D:\home\site\wwwroot\node_modules\botbuilder\lib\bots\UniversalBot.js:516:13) at next (D:\home\site\wwwroot\node_modules\botbuilder\lib\bots\UniversalBot.js:427:23) at UniversalBot.eventMiddleware (D:\home\site\wwwroot\node_modules\botbuilder\lib\bots\UniversalBot.js:430:9) at D:\home\site\wwwroot\node_modules\botbuilder\lib\bots\UniversalBot.js:141:23 at D:\home\site\wwwroot\node_modules\botbuilder\lib\bots\UniversalBot.js:472:53 Hi @CheapHead This is a known issue: https://github.com/Microsoft/BotFramework-WebChat/issues/364 https://github.com/Microsoft/BotFramework-WebChat/issues/848 https://github.com/Microsoft/BotFramework-WebChat/issues/536 A workaround is to create a BotChat.DirectLine object for BotChat.App, and post an event activity in client side code. Then in the bot code, respond to the event with the desired welcome message. 
web page:

<!DOCTYPE html>
<html>
<head>
    <link href="https://cdn.botframework.com/botframework-webchat/latest/botchat.css" rel="stylesheet" />
</head>
<body>
    <div>
        <div id="bot" />
    </div>
    <script src="https://cdn.botframework.com/botframework-webchat/latest/botchat.js"></script>
    <script>
        var user = { id: 'user-id', name: 'user name' };
        var botConnection = new BotChat.DirectLine({
            token: '[DirectLineSecretHere]',
            user: user
        });
        BotChat.App({
            user: user,
            botConnection: botConnection,
            bot: { id: 'bot-id', name: 'bot name' },
            resize: 'detect'
        }, document.getElementById("bot"));
        botConnection
            .postActivity({ from: user, name: 'requestWelcomeDialog', type: 'event', value: '' })
            .subscribe(function (id) {
                console.log('"trigger requestWelcomeDialog" sent');
            });
    </script>
</body>
</html>

bot code:

const builder = require('botbuilder');
const restify = require('restify');

const connector = new builder.ChatConnector({
    appId: 'MicrosoftAppIdHere',
    appPassword: 'MicrosoftAppPasswordHere'
});
const server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 3979, () => {
    console.log(`${server.name} listening to ${server.url}`);
});
server.post('/api/messages', connector.listen());

var bot = new builder.UniversalBot(connector);

bot.on('event', function (message) {
    if (message.name == 'requestWelcomeDialog') {
        bot.beginDialog(message.address, '/');
    }
});

bot.dialog('/', [function (session, args, next) {
    try {
        // Double quotes here, so the apostrophe in "I'm" doesn't end the string.
        builder.Prompts.choice(session, "Hey, I'm a bot. Ask me about:", 'Item 1|Item 2|Other', { listStyle: builder.ListStyle["button"] });
    } catch (err) { }
}, function (session, results) {
    switch (results.response.entity) {
        case "Item 1":
            session.replaceDialog("/Item1Dialog");
            break;
        case "Item 2":
            session.replaceDialog("/Item2Dialog");
            break;
        case "Other":
            session.replaceDialog("/Other");
            break;
        default:
            session.replaceDialog("/");
            break;
    }
}]);

bot.dialog('/Item1Dialog', [function (session, args, next) {
    try {
        session.send('in item 1 dialog');
    } catch (err) { }
}]);

I fixed this issue, and I apologize for not searching past issues first.
Many thanks for your help. Hi @EricDahlvang, how can I make it work in WebChat? It worked flawlessly on DirectLine.
gharchive/issue
2018-02-28T01:24:34
2025-04-01T04:32:46.143222
{ "authors": [ "CheapHead", "Chitma", "EricDahlvang" ], "repo": "Microsoft/BotBuilder", "url": "https://github.com/Microsoft/BotBuilder/issues/4245", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
301926577
Azure Function Bot upgrade from v3.12.2.4 to v3.13.1 causes failure to connect

Bot Info
SDK Platform: .NET
SDK Version: 3.13.1
Active Channels: Bot Framework Emulator v3.5.35
Deployment Environment: local development with Function emulator

Issue Description
I have a Function Bot that works just fine with v3.12.2.4. When I update the NuGet libraries to v3.13.1, everything builds. But when I run the function and attempt to connect via the Bot Framework Emulator, I get an error in the function emulator window that it can't find the 3.12.2.4 version of Microsoft.Bot.Connector.

Reproduction Steps
1. Create an Azure Function from the template
2. Copy files from the Bot template into the function
3. Install the 3.12.2.4 Bot libraries
4. Run the function from Visual Studio (it will prompt to upgrade the Function environment; do it)
5. Connect to the function endpoint from the Bot Framework Emulator
6. Have a successful conversation with the bot
7. Upgrade to the 3.13.1 Bot libraries and run the function
8. During connection, observe error 500 in the Bot Framework Emulator log, and observe an error in the function window attempting to locate Microsoft.Bot.Connector v3.12.2.4

Expected Behavior
The Bot Emulator should connect without error.

Works:
Microsoft.Bot.Builder v3.12.2.4
Microsoft.Bot.Builder.Azure v3.2.5
Microsoft.Bot.Connector v3.12.2.4
Microsoft.NET.Sdk.Functions v1.0.8

Fails:
Microsoft.Bot.Builder v3.13.1
Microsoft.Bot.Builder.Azure v3.2.5
Microsoft.Bot.Connector v3.13.1
Microsoft.NET.Sdk.Functions v1.0.8

After researching, I believe this is connected to the fact that Azure Functions has no mechanism for binding redirects. Microsoft.Bot.Builder.Azure v3.2.5 is linked to Bot.Builder v3.12.2.4. Please release an updated Bot.Builder.Azure that links to the current libraries, and that should solve my problem. Thanks!

Also, I just tried to update to the alpha of Bot.Builder.Azure.
Failure:
Azure Functions requires Newtonsoft.Json = 9.0.1
Microsoft.Bot.Builder.Azure 4.0.0-alpha20180221 requires Newtonsoft.Json >= 10.0.3

So please go back and release a Microsoft.Bot.Builder.Azure v3.2.6 that requires Newtonsoft.Json = 9.0.1 and Microsoft.Bot.Builder = 3.13.0.3. It would be most appreciated.

The v4 alpha version is for Bot Builder v4, not v3.x. You are right, BotBuilder-Azure needs to be updated to use the latest BotBuilder packages. In fact, the library itself has already been updated: https://github.com/Microsoft/BotBuilder-Azure/pull/52/files It just hasn't been released yet. There is no timeline or schedule for the release, but I assume it will be soon.

Lastly, this is an issue with https://github.com/Microsoft/BotBuilder-Azure, not with the BotBuilder. Please open an issue with that repository if you wish to pursue this further, or wait for the team to release the next NuGet package. Sorry for the inconvenience.
gharchive/issue
2018-03-02T22:33:36
2025-04-01T04:32:46.154272
{ "authors": [ "EricDahlvang", "RonPeters" ], "repo": "Microsoft/BotBuilder", "url": "https://github.com/Microsoft/BotBuilder/issues/4257", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
307214141
Microsoft Bot infinitely multiplies messages in Telegram Desktop for macOS

This problem occurs only in the Telegram Desktop app for macOS (iOS, Android, Ubuntu, Windows: no problems). When a user presses any of the "builder.Prompts.choice" buttons, the bot starts sending the message that is supposed to answer the user's choice infinitely. This problem occurred on Monday, and there were no changes to the bot's code or settings in the few days before.

Can you please supply which SDK you are using, which SDK version you are on, a code example, and the Telegram desktop app version number? Also, can you supply which macOS version you are using (Sierra/High Sierra/El Capitan/etc.) and its version number, if available. Also, is it sending the prompt choice once per minute in your case?

As this is a duplicate of this issue on Stack Overflow, I'll be closing this thread. I will continue to assist there.
gharchive/issue
2018-03-21T11:52:27
2025-04-01T04:32:46.156881
{ "authors": [ "JasonSowers", "alexeymkh", "nodeomega" ], "repo": "Microsoft/BotBuilder", "url": "https://github.com/Microsoft/BotBuilder/issues/4352", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
374167765
Fixed Linux auto update and packaging

Fix for #1030
=====
The 32-bit Linux build was being packaged but never published, so I switched packaging to only create 32-bit binaries so the emulator can be used on both 32- and 64-bit Linux machines, until we improve on that aspect of the build system.
The build process now generates latest-linux-ia32.yml so that auto update works on Linux.
Pruned some unused build files.

Pull Request Test Coverage Report for Build 1099
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 53.222%

Totals
Change from base Build 1093: 0.0%
Covered Lines: 4085
Relevant Lines: 6903

💛 - Coveralls
gharchive/pull-request
2018-10-25T23:10:48
2025-04-01T04:32:46.162684
{ "authors": [ "coveralls", "tonyanziano" ], "repo": "Microsoft/BotFramework-Emulator", "url": "https://github.com/Microsoft/BotFramework-Emulator/pull/1038", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
298420443
How do objects created using JsCreateExternalObject get cleaned up?

I was wondering how you are able to clean up objects created on the heap and used via JsCreateExternalObject. I tried using the FinalizerCallback (in my previous issue #4679), but whenever I have that referenced in my JsCreateExternalObject call (and don't even do anything), I get all sorts of crashes and corruption happening. So I must be doing something wrong. I tried looking at one of the hosting samples, and that doesn't delete the GLPoint objects created, so it must be leaking unless JsCreateExternalObject auto-cleans up?

You can take a look at how we do things in node-chakracore, e.g. in https://github.com/nodejs/node-chakracore/blob/master/deps/chakrashim/src/v8functiontemplate.cc. The general shape of things is we new up some native thing, and then wrap it in an ExternalObject with an appropriate finalizer function. In that finalizer, we cast the void * argument back to the correct type that we expect it to be, and then delete it to clean up. As long as you are passing plain C-style functions (not member functions etc., but static class functions are fine) as the finalizer callback, then nothing strange should happen when calling JsCreateExternalObject.

Hmm, that's effectively what I am doing, but it's crashing when disposing of the runtime with "Object reference not set to instance" or other strange messages, sometimes even with memory corruption errors. These all go away as soon as I remove the finalizer.

I ran into similar issues when trying to read properties off of JS objects in my finalizers. As the finalizer may be called during runtime destruction (i.e. after the JsContextRef is out of scope), you can only safely access native resources in your finalizer.

I get exceptions and all sorts of odd errors if I just set up a FinalizerCallback which is empty and does nothing. As soon as I pass null as my callback in the JsCreateExternalObject function, those exceptions go away.
You are still using your C# wrapper, is that correct? Are you making sure to pass a plain C-style function to JsCreateExternalObject? Could your wrapper be taking or converting references somehow?

Yes, still using the C# wrapper, and yes, it's a plain static function:

private static void ImageFinalizeCallback( IntPtr dataPtr )
{
    // empty
}

var image = new MagickImage( "d:\\file.png" );
var imageHandle = GCHandle.Alloc( image );
var externalInstanceObject = JavaScriptValue.CreateExternalObject( GCHandle.ToIntPtr( imageHandle ), ImageFinalizeCallback );
externalInstanceObject.Prototype = PawaImagePrototype;

But even an empty one causes exceptions on disposing of the runtime. The only thing that works is passing null as the callback to JavaScriptValue.CreateExternalObject (which is a simple wrapper around JsCreateExternalObject). I am really confused!!

Maybe your finalizer function gets GC'ed? Try making a delegate holding on to the finalizer and then pass the delegate to JsCreateExternalObject to keep it alive.

Yeah, very good point. This would explain the oddities, and also why sometimes it gets called and sometimes it doesn't. Don't know why I didn't think of that, as you have to do that for the FunctionCallbacks to ensure they don't get GC'ed. I will try in the morning and report back.

Admittedly I'm not that familiar with the inner workings of the CLR, but I'm confused on 1) why a function defined as part of a class and presumably baked right into the IL would (or even could) ever be GC'd, and 2) how wrapping it in a delegate prevents that same function from being GC'd once the delegate goes out of scope.

Basically, in .NET, when you pass a function to unmanaged code, .NET doesn't know the unmanaged side holds an implicit reference after that line of code, and will proceed to GC it. Having a static delegate is an explicit way of keeping that reference in the .NET world.

That was it, thanks everyone. Seems obvious now, argh!!
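The external-object contract discussed above is worth spelling out: the host allocates a native resource, hands the engine an opaque pointer plus a finalizer, and the engine calls that finalizer exactly once when the object is collected, while the host must keep the callback itself alive. A minimal Python sketch of that contract (this is not the ChakraCore API; the class and names are illustrative only):

```python
class ExternalObject:
    """Sketch of an engine-side wrapper around host-owned data.

    It holds a strong reference to the finalizer, mirroring the fix in
    this thread: if the host lets its callback be collected, the engine
    ends up invoking freed code during runtime teardown.
    """

    def __init__(self, data, finalizer):
        self._data = data
        self._finalizer = finalizer  # strong reference keeps the callback alive

    def collect(self):
        """Called by the (simulated) runtime when the JS object dies."""
        if self._finalizer is not None:
            self._finalizer(self._data)  # host casts back and frees here
            self._finalizer = None       # a finalizer must run at most once


released = []

def image_finalizer(data):
    # In the real host this would free the native image; here we just
    # record that cleanup happened.
    released.append(data)

obj = ExternalObject({"path": "d:/file.png"}, image_finalizer)
obj.collect()
obj.collect()  # second collection is a no-op
```

Passing null as the finalizer, as was tried while debugging, simply skips the cleanup step entirely, which is why the crashes disappeared but the native image would leak.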
gharchive/issue
2018-02-19T22:44:55
2025-04-01T04:32:46.178449
{ "authors": [ "MSLaguana", "coderangerdan", "fatcerberus", "liminzhu" ], "repo": "Microsoft/ChakraCore", "url": "https://github.com/Microsoft/ChakraCore/issues/4706", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
167091614
split ChakraCoreVersion.h file

Split version defs out from CommonDefines.h into a separate file. Hosts that statically link to ChakraCore can then access ChakraCore version info by #include-ing this file at build time.

LGTM. Is this file supposed to be redistributable with Jsrt?

Thanks. Yes, I suppose so. Not needed for Windows/Chakra JSRT. Needed for ChakraCore.
gharchive/pull-request
2016-07-22T17:17:00
2025-04-01T04:32:46.180448
{ "authors": [ "digitalinfinity", "jianchun" ], "repo": "Microsoft/ChakraCore", "url": "https://github.com/Microsoft/ChakraCore/pull/1313", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
260777798
Fix #3040: Incorrect parsing of class expression

Class extends exprs should only be LHS exprs. Change to ParseTerm instead of blanket ParseExpr.

@dotnet-bot test Ubuntu static_ubuntu_linux_test please
gharchive/pull-request
2017-09-26T21:35:26
2025-04-01T04:32:46.181458
{ "authors": [ "tcare" ], "repo": "Microsoft/ChakraCore", "url": "https://github.com/Microsoft/ChakraCore/pull/3813", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
229329021
Related to issue #1: The SDK does not seem to build out of the box

Related to issue #1 - I wanted to include a local build of the SDK with the sample; however, it does not build. I tried to build the SDK from two different machines - a Windows PC and a Windows Server, both with the newest Visual Studio 2017 installed. They both returned the following errors. It looks like it is trying to use some modules within the SDK that are neither self-contained nor referenced in the list of references.

The SDK is autogenerated using https://github.com/Azure/AutoRest. We're only shipping the non-generated components here; you need to generate before the sln will build. I'll add a readme with instructions for building.

Same here - the client library does not build out of the box. If I use the client libraries from NuGet, in the below 2 lines of my code for prediction, the second line throws an error:

Dim iOnImage As New MemoryStream(File.ReadAllBytes(iCurImg.ImgFile))
Dim PredRes = PrdEndPoint.PredictImage(TrgProject.id, iOnImage)

The error returned is:

Message: Operation returned an invalid status code 'NotFound'

Stack trace:
at Microsoft.Cognitive.CustomVision.PredictionEndpoint.d__28.MoveNext() in C:\projects\IRIS\Iris.SDK\CustomVisionSDK\Cognitive-CustomVision-Windows\ClientLibrary\Microsoft.Cognitive.CustomVision.Prediction\Generated\PredictionEndpoint.cs:line 597
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Cognitive.CustomVision.PredictionEndpointExtensions.d__4.MoveNext() in C:\projects\IRIS\Iris.SDK\CustomVisionSDK\Cognitive-CustomVision-Windows\ClientLibrary\Microsoft.Cognitive.CustomVision.Prediction\Generated\PredictionEndpointExtensions.cs:line 151
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Cognitive.CustomVision.PredictionEndpointExtensions.PredictImage(IPredictionEndpoint operations, Guid projectId, Stream imageData, Nullable`1 iterationId, String application) in C:\projects\IRIS\Iris.SDK\CustomVisionSDK\Cognitive-CustomVision-Windows\ClientLibrary\Microsoft.Cognitive.CustomVision.Prediction\Generated\PredictionEndpointExtensions.cs:line 125
at SBToiletFinder.MainWindow.VB$StateMachine_17_ProcessToiletConfidence.MoveNext() in F:\Projects\MyProject\MyProject\MainWindow.xaml.vb:line 317

Awaiting next build at the earliest.

That looks like you don't have a default iteration set on your project. Either select an iteration to be the default, or pass in an explicit iteration id to your PredictImage call.

Hi, thanks for that quick response. Now I changed the code

Dim PredRes = PrdEndPoint.PredictImage(TrgProject.id, iOnImage)

to

Dim PredRes = PrdEndPoint.PredictImage(TrgProject.Id, iOnImage, TrgProject.CurrentIterationId, TrgProject.Name)

I checked all parameters in the debugger and found the correct values are passed; however, it still gives the same error. On the site, the iteration count is 6 and it is the default. Any help would be appreciated.

CurrentIterationId is misleading; it's not the last trained iteration, but the iteration that is about to be trained. Asking for a prediction from it will fail by its very nature, as it's not been trained yet. You can get the iteration id of a given iteration by selecting it in the UI, choosing 'Prediction URL' and copying the last part of the URL that says 'iterationId='. Alternatively, you can use the API to call GetIterations(), and get the ID from one of the objects in the list returned.
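The advice above (don't predict against CurrentIterationId; pick a trained iteration, e.g. via GetIterations) can be sketched in Python. The record fields here (trained_at, is_default) are assumptions for illustration, not the actual SDK schema:

```python
def pick_prediction_iteration(iterations):
    """Choose an iteration id to pass to PredictImage.

    Prefers the default iteration; otherwise falls back to the most
    recently trained one. Raises if nothing has been trained yet.
    """
    trained = [it for it in iterations if it.get("trained_at") is not None]
    if not trained:
        raise ValueError("no trained iterations: train the project first")
    for it in trained:
        if it.get("is_default"):
            return it["id"]
    return max(trained, key=lambda it: it["trained_at"])["id"]


iterations = [
    # CurrentIterationId points at the not-yet-trained iteration:
    {"id": "iter-7", "trained_at": None, "is_default": False},
    {"id": "iter-6", "trained_at": "2017-05-16", "is_default": True},
    {"id": "iter-5", "trained_at": "2017-05-10", "is_default": False},
]
```

With the sample data above, the untrained "iter-7" is never returned, which is exactly the mistake the original code made by passing CurrentIterationId.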
gharchive/issue
2017-05-17T12:03:04
2025-04-01T04:32:46.191191
{ "authors": [ "chsienki", "jensemil", "rrvenki" ], "repo": "Microsoft/Cognitive-CustomVision-Windows", "url": "https://github.com/Microsoft/Cognitive-CustomVision-Windows/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
421842836
Many Bug Fixes and Test Spec Refactor

I fixed multiple bugs in test cases - all known test bugs are fixed now! Along the way, I refactored the test spec files that I was debugging into our new pattern. I also found some more globals hanging off of the "window" object; I changed them to use "let".

:tada: This PR is included in version 0.346.0 :tada:

The release is available on:
npm package (@latest dist-tag)
GitHub release

Your semantic-release bot :package::rocket:
gharchive/pull-request
2019-03-16T19:16:45
2025-04-01T04:32:46.194351
{ "authors": [ "SDESkowronski", "mattmazzola" ], "repo": "Microsoft/ConversationLearner-UI", "url": "https://github.com/Microsoft/ConversationLearner-UI/pull/1043", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
250895641
Merge pull request #1 from Microsoft/master

merge from master

This seems like a small (but important) contribution, so no Contribution License Agreement is required at this point. We will now review your pull request. Thanks, Microsoft Pull Request Bot
gharchive/pull-request
2017-08-17T10:18:29
2025-04-01T04:32:46.197510
{ "authors": [ "msftclas", "vanish87" ], "repo": "Microsoft/DirectX-Graphics-Samples", "url": "https://github.com/Microsoft/DirectX-Graphics-Samples/pull/269", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
367586328
The German translation of the README is a mess

The German translation of the README is a mess, full of spelling and grammar mistakes. Even #149 doesn't fix all the problems. Argh.

@vardai Check out #240 for the updated translation.

@vardai I just did a complete rewrite of the original Readme, as I wasn't really satisfied with the attempts so far. See #285. I'm open to discussion on changes.

Is a German translation of that text really needed?

Well, that's a rather good question, but it seems as if @bitcrazed and others at Microsoft are actively encouraging it, so there's your answer.

#285 fixes all errors. Please merge it, @bitcrazed, and close this issue.
gharchive/issue
2018-10-07T19:40:01
2025-04-01T04:32:46.199990
{ "authors": [ "akuckartz", "ritterasdf", "swetax", "vardai" ], "repo": "Microsoft/MS-DOS", "url": "https://github.com/Microsoft/MS-DOS/issues/170", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
326381235
Can I change the length of the motion controller pointing ray when it loses focus from an object?

Overview
Can I change the length of the motion controller pointing ray when it loses focus from an object? It feels short in the settings of MRTK 2017.4.0.0-RC1. When we lose focus from an object, it would be better for the ray to point as far as possible; that makes it easier to understand the lost-focus state.

Steps to reproduce
Open the scene "InputNavigationRotateTest.unity" from HoloToolkit-Unity-2017.4.0.0-rc.unitypackage and run it.

Unity Editor Version
Unity 2017.4.3f1

Mixed Reality Toolkit Release Version
MRTK 2017.4.0.0-RC1

It seems I have been able to extend it some, but there seems to be a cap somewhere. It doesn't really seem to use my 200 value.

Okay, I figured it out (unless I configured something that didn't need to be set). I found three places that needed to be updated.

Yep, Default Cursor Distance is (at the moment) where you'd change it. I'm working on a change that'll make this a little more configurable, especially depending on the type of pointer your cursor is attached to (you likely want the default distance further from the controllers than from your gaze, etc.). I'll also look into the observed cap above. I'm not sure what that might be.

@TakahiroMiyaura @keveleigh I had similar feedback. Thanks for opening the issue!

@keveleigh The cap I spoke about was because I was not editing all three spots; I was only changing the DefaultCursor. I don't have any cap issues. Has there been any talk of scaling the cursor bigger as it gets farther away?

I'll be sure to make it easier to update this option in vNEXT.

@gobanjoboy Thank you for the information!

@StephenHodgson, @cre8ivepark, thank you for your consideration! When it becomes simple to set, MRTK will be easier to use! In addition, I was wondering if you could make the initial value a longer distance?

@keveleigh I agree, the default length feels somewhat short.
It would be great if we could increase the default length.

Definitely! From my tests on Friday, 10 feels good as a default. I'll have a PR open early this week.

PR https://github.com/Microsoft/MixedRealityToolkit-Unity/pull/2201 contained the update, and the latest release has this fix in it. The controller pointer ray now gets its off-hologram distance from the FocusManager's PointingExtent. The other two locations in https://github.com/Microsoft/MixedRealityToolkit-Unity/issues/2166#issuecomment-392096374 can be left untouched.

@keveleigh If the input is disabled and we are not gazing at an object, the loading cursor is not visible because it is too far away. Is it possible to show the loading cursor nearer to the user and leave the ray with a length of 10?
gharchive/issue
2018-05-25T04:55:47
2025-04-01T04:32:46.209346
{ "authors": [ "StephenHodgson", "TakahiroMiyaura", "cre8ivepark", "gobanjoboy", "ivan2007", "keveleigh" ], "repo": "Microsoft/MixedRealityToolkit-Unity", "url": "https://github.com/Microsoft/MixedRealityToolkit-Unity/issues/2166", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
379344936
Photon Sharing Preview

Overview
Pulling the vNEXT sharing with updated Photon / Unity 2018.2 compatibility into the new photon sharing branch.

Just FYI for those who are reviewing: this is just the initial dump. We plan on opening PRs against this to get it into compliance with the features/mrtk_sharing branch as arch docs are drafted and created.

@StephenHodgson Photon has been removed.

@StephenHodgson I believe our issues are now resolved.

Weird, when I go to open a new PR with the same branch, it's only showing 205 file changes as opposed to the ones shown here.

Yeah, this is frustrating. GitHub might be messed up for this PR. I branched this and opened a test PR to see the changes; it shows a different change set even though it's a clone.

@mr0ng Please also remove the docfx references.

Also let's address these warnings as well:

Also, is this using PUN 2 or PUN Classic?

@mr0ng I made most of my change requests on my branch. If you wanna pull it and test it, we can review and merge the other PR with just 30 file changes instead of this one. Up to you.

Opened https://github.com/mr0ng/MixedRealityToolkit-Unity/pull/2 for change requests.

@StephenHodgson I think we should be all set on this now?
gharchive/pull-request
2018-11-09T22:32:11
2025-04-01T04:32:46.214284
{ "authors": [ "StephenHodgson", "mr0ng" ], "repo": "Microsoft/MixedRealityToolkit-Unity", "url": "https://github.com/Microsoft/MixedRealityToolkit-Unity/pull/3111", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
255122974
DirectPairing crashes or fails to connect

https://github.com/Microsoft/MixedRealityToolkit-Unity/issues/506
https://github.com/Microsoft/MixedRealityToolkit-Unity/issues/748

@keveleigh was this fixed in the latest sharing service update?
gharchive/issue
2017-09-04T22:13:27
2025-04-01T04:32:46.215798
{ "authors": [ "StephenHodgson" ], "repo": "Microsoft/MixedRealityToolkit", "url": "https://github.com/Microsoft/MixedRealityToolkit/issues/115", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
232974203
Fixes UWP and FindFile

Fixes debugging UWP support.
Fixes PathUtils.FindFile not handling search depth correctly.
Adds FindFile test.

👍 if the data files are added or already there

There now :)
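The C# PathUtils.FindFile being fixed isn't shown in this PR, but the class of bug is easy to picture: a search that tracks its depth budget incorrectly either misses files or scans too deep. A hedged Python sketch of a correctly depth-limited search (hypothetical semantics: depth 0 searches the start directory only):

```python
import os

def find_file(root, name, max_depth):
    """Breadth-first search for `name` under `root`, descending at most
    `max_depth` directory levels below it. Returns a path or None."""
    pending = [(root, 0)]
    while pending:
        directory, depth = pending.pop(0)
        try:
            entries = os.listdir(directory)
        except OSError:
            continue  # unreadable directory: skip it, don't abort
        for entry in entries:
            path = os.path.join(directory, entry)
            if entry == name and os.path.isfile(path):
                return path
            if os.path.isdir(path) and depth < max_depth:
                pending.append((path, depth + 1))
    return None
```

A FindFile test like the one this PR adds would then assert that a file two levels down is found at depth 2 but not at depth 1.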
gharchive/pull-request
2017-06-01T18:48:01
2025-04-01T04:32:46.222702
{ "authors": [ "huguesv", "zooba" ], "repo": "Microsoft/PTVS", "url": "https://github.com/Microsoft/PTVS/pull/2627", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
233326857
Adds a user agent string for requests to pypi.org. Looks like we also need to reduce the number of calls overall, but I'll do that as a separate issue. 👍
gharchive/pull-request
2017-06-02T23:42:58
2025-04-01T04:32:46.223773
{ "authors": [ "huguesv", "zooba" ], "repo": "Microsoft/PTVS", "url": "https://github.com/Microsoft/PTVS/pull/2638", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
173662651
Implement connection management UI in Server Explorer

Remaining work is in #2222.

Cut in favor of Alex's proposal.
gharchive/issue
2016-08-28T19:23:00
2025-04-01T04:32:46.224545
{ "authors": [ "MikhailArkhipov" ], "repo": "Microsoft/RTVS", "url": "https://github.com/Microsoft/RTVS/issues/2230", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
307436764
[FR/ES/PT Unit] mm unit wrongly recognized as negative in range

Ex: "En 1995 Cannon introdujo la primera lente SLR comercialmente disponible con estabilización de imagen interna, 75 - 300 mm f / 4 - 5. 6 es usm." should find "value": "300", "unit": "Milímetro" for now. But it's wrongly returning -300. It works fine in English.

Pending in JavaScript.
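One way to see the bug: in "75 - 300 mm" the extractor must decide whether "- 300" is a negative number or the right end of a range. A hedged Python sketch of a disambiguation heuristic (an illustration of the problem, not the project's actual fix): treat a dash as a minus sign only when it is not sandwiched between two numbers.

```python
import re

TOKEN = re.compile(r"-|\d+(?:\.\d+)?|[a-zA-Z]+")

def parse_numbers(text):
    """Extract numbers, treating '-' between two numbers as a range dash."""
    tokens = TOKEN.findall(text)
    numbers = []
    for i, tok in enumerate(tokens):
        if not tok[0].isdigit():
            continue  # skip dashes and words
        value = float(tok)
        prev = tokens[i - 1] if i > 0 else None
        before_prev = tokens[i - 2] if i > 1 else None
        # A '-' right before this number is a sign only if the token before
        # the '-' is not itself a number; otherwise it's a range separator.
        if prev == "-" and not (before_prev and before_prev[0].isdigit()):
            value = -value
        numbers.append(value)
    return numbers
```

Under this heuristic "75 - 300 mm" yields two positive values, while a dash with no number on its left still reads as a sign.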
gharchive/issue
2018-03-21T22:13:13
2025-04-01T04:32:46.225874
{ "authors": [ "tellarin" ], "repo": "Microsoft/Recognizers-Text", "url": "https://github.com/Microsoft/Recognizers-Text/issues/426", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
317666047
[EN Number] "between" should include boundaries

"between four and seven" => "(4,7)", meaning 4 and 7 are not included. My understanding of "between" is that the boundaries are included in the interval. Including boundaries should be the default behavior. A flexible option would be to support an "exclusive" or "inclusive" suffix to remove any uncertainty. Maybe "from ... to ..." ranges should include the end bound as well.

Hi @RomainWilbert, thanks for bringing this up. We're doing issue triage and I'll update this issue later.

For consistency, all ranges include the start and don't include the end. Additional modifiers (the MOD attribute) currently give further insight into what extra interpretation should be applied to the provided resolutions. This makes sure the recognizers are backwards compatible, provides a stable platform, and also lets consumers decide how to interpret in their particular scenarios (devs have better domain knowledge of their usage cases, and no solution fits all). If you find cases that deviate from the design, please open issues for them. They'd be very helpful. Thanks for all the feedback!
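The convention the maintainers describe (start included, end excluded) is the standard half-open interval. A small Python sketch of how a consumer could reinterpret a resolution like "(4,7)" as inclusive on both ends if their domain calls for it; the start/end pair here is illustrative, not the recognizer's exact output format:

```python
def in_range(value, start, end, end_inclusive=False):
    """Membership test for a recognized numeric range.

    The default matches the documented convention: start included,
    end excluded. Consumers who read "between four and seven" as
    including both bounds can pass end_inclusive=True.
    """
    return start <= value <= end if end_inclusive else start <= value < end
```

This keeps the stable half-open default while giving each consumer the domain-specific interpretation the maintainers describe.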
gharchive/issue
2018-04-25T14:56:35
2025-04-01T04:32:46.229181
{ "authors": [ "RomainWilbert", "tellarin" ], "repo": "Microsoft/Recognizers-Text", "url": "https://github.com/Microsoft/Recognizers-Text/issues/492", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
168882750
Feature request: Autocomplete for string literal types

From @calebegg on July 14, 2016 17:56

VSCode Version: 1.3.1
OS Version: Ubuntu 14.04.1

It would be useful if string literal types in TypeScript had autocomplete. For example:

Copied from original issue: Microsoft/vscode#9294

Hmz, isn't this issue #606 and fixed by #8428?

Thanks for finding, @DickvdBrink
gharchive/issue
2016-08-02T13:18:32
2025-04-01T04:32:46.231660
{ "authors": [ "DanielRosenwasser", "DickvdBrink", "dbaeumer" ], "repo": "Microsoft/TypeScript", "url": "https://github.com/Microsoft/TypeScript/issues/10089", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
201442900
[Suggestion] Allow use of const enum values in computed properties in type literals and interfaces

Considering how const enum operates at compile-time, it seems odd that the following is not valid:

const enum Ids { alpha = 1 }

interface Foo {
    [Ids.alpha]: string
}

when it should be precisely equivalent to:

interface Foo {
    [1]: string
}

which compiles just fine.

Proposal
Allow const enum values to be used in computed property specifiers in interface declarations and type literals, just as well-known symbols are today.

#10770 / #5579

Ah, I missed #10770 because I was filtering on currently open issues, and then #5579 didn't reference const enums in the description.
gharchive/issue
2017-01-18T00:09:06
2025-04-01T04:32:46.233937
{ "authors": [ "PyroVortex", "RyanCavanaugh" ], "repo": "Microsoft/TypeScript", "url": "https://github.com/Microsoft/TypeScript/issues/13550", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
56838013
Suggestion -- consider making different syntaxKinds for object literal properties and class properties

In earlier versions of TypeScript, object literal properties and class properties were different syntax kinds. Now they are the same syntax kind -- property -- which caused this bug: https://github.com/palantir/tslint/issues/292 when we converted the rule to the new compiler. It would be easier if they were different, as they were before.

This should be fixed now.
gharchive/issue
2015-02-06T17:00:55
2025-04-01T04:32:46.235460
{ "authors": [ "gscshoyru", "mhegazy" ], "repo": "Microsoft/TypeScript", "url": "https://github.com/Microsoft/TypeScript/issues/1960", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
314801243
gulp runtests and jake runtests fail: ts is not defined

To reproduce:
git pull
git clean -xfd
npm install
gulp runtests

causes:

/home/andy/TypeScript_1/TypeScript/scripts/processDiagnosticMessages.js:2
var sys = ts.sys;
          ^
ReferenceError: ts is not defined
    at main (/home/andy/TypeScript_1/TypeScript/scripts/processDiagnosticMessages.js:2:15)
    at Object.<anonymous> (/home/andy/TypeScript_1/TypeScript/scripts/processDiagnosticMessages.js:86:1)
    at Module._compile (module.js:649:30)
    at Object.Module._extensions..js (module.js:660:10)
    at Module.load (module.js:561:32)
    at tryModuleLoad (module.js:501:12)
    at Function.Module._load (module.js:493:3)
    at Function.Module.runMain (module.js:690:10)
    at startup (bootstrap_node.js:194:16)
    at bootstrap_node.js:666:3

jake runtests causes an identical stack trace.

Appears to be due to #23354 -- fails in the merge commit for that. Succeeds (after another git clean -xfd) in the merge commit for #22127.

@RyanCavanaugh @andy-ms not reproing for me. More info?

Specific to gulp, not jake (though running gulp first causes jake to fail).
gharchive/issue
2018-04-16T20:14:33
2025-04-01T04:32:46.239387
{ "authors": [ "RyanCavanaugh", "andy-ms" ], "repo": "Microsoft/TypeScript", "url": "https://github.com/Microsoft/TypeScript/issues/23442", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }