2025-04-01T06:37:23.417838
2020-02-19T10:43:22
567477967
{ "authors": [ "al3xhh", "vholer" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2221", "repo": "OpenNebula/one", "url": "https://github.com/OpenNebula/one/issues/4217" }
gharchive/issue
Provision objects count

**Description**

A provision template precisely defines how the cluster will look: which objects it contains and how many of them. That requires knowing the cluster sizing (e.g., the number of hosts) precisely. For example, here we create a cluster of just 2 hosts:

```yaml
hosts:
  - reserved_cpu: 100
    im_mad: kvm
    vm_mad: kvm
    provision:
      hostname: "myhost1"
  - reserved_cpu: 100
    im_mad: kvm
    vm_mad: kvm
    provision:
      hostname: "myhost2"
```

It would be nice if objects could have a parameter indicating the number of such objects to create; the example above could then be reduced to:

```yaml
hosts:
  - reserved_cpu: 100
    im_mad: kvm
    vm_mad: kvm
    provision:
      hostname: "myhost<%= @a_suitable_unique_identifier_of_object %>"
    count: 2
```

or, ideally, the count could also be parameterized via user inputs (#4216):

```yaml
inputs:
  hosts_count: 2

hosts:
  - reserved_cpu: 100
    im_mad: kvm
    vm_mad: kvm
    provision:
      hostname: "myhost<%= @a_suitable_unique_identifier_of_object %>"
    count: "<%= @inputs['hosts_count'] %>"
```

This is just an idea and needs to be designed (e.g., couldn't the "count" name be confused with regular object template parameters?).

**Use case**

Provision templates that can be easily customized without the need to change / hack the base template itself.

**Progress Status**

- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches

PRs to merge in master: code docs tests
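To make the requested behavior concrete, here is a minimal Python sketch of what expanding such a count field could look like. Everything in it (function name, dictionary layout, the index identifier) is an illustration of the idea, not OpenNebula code:

```python
import copy

def expand_counts(hosts):
    """Expand each host entry carrying a 'count' key into that many copies.

    Illustrative only: the names and structure here are hypothetical and
    not taken from the actual OpenNebula provision driver.
    """
    expanded = []
    for template in hosts:
        template = copy.deepcopy(template)
        count = int(template.pop('count', 1))
        for index in range(count):
            host = copy.deepcopy(template)
            # A per-object identifier the hostname template could interpolate.
            host['provision']['index'] = index
            expanded.append(host)
    return expanded

hosts = [{
    'reserved_cpu': 100,
    'im_mad': 'kvm',
    'vm_mad': 'kvm',
    'provision': {'hostname': 'myhost'},
    'count': 2,
}]
assert len(expand_counts(hosts)) == 2
```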
2025-04-01T06:37:23.423182
2020-04-07T09:10:11
595714820
{ "authors": [ "OpenNebulaSupport", "al3xhh" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2222", "repo": "OpenNebula/one", "url": "https://github.com/OpenNebula/one/issues/4491" }
gharchive/issue
Service role terminate action is not working

**Description**
When a role is terminated, the server is returning an error.

**To Reproduce**
Try to terminate a role.

**Expected behavior**
The VMs in the role should be terminated and the role information should be updated properly.

**Details**
- Affected Component: OneFlow
- Version: development

**Progress Status**

- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches

PRs to merge in master:
- code: https://github.com/OpenNebula/one/pull/4492
- tests: https://github.com/OpenNebula/development/pull/916
2025-04-01T06:37:23.434759
2024-09-10T07:26:56
2515630840
{ "authors": [ "Lungsangg", "kaldan007", "lobsam" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2223", "repo": "OpenPecha/pecha.org-roadmap", "url": "https://github.com/OpenPecha/pecha.org-roadmap/issues/101" }
gharchive/issue
PWA app

**Description:** Pecha.org needs to integrate a PWA (progressive web app) so that users can experience the website like a mobile app.

**Implementation steps:**

- [x] Create manifest.json
- [x] Create service_worker.js
- [x] Update the HTML to link manifest.json and register service_worker.js
- [x] Check the Manifest and Service Worker status in dev tools
- [x] Test the "Add to Home Screen" feature on a mobile device

@lobsam it is solved, can you please test on your mobile at dev.pecha.org?

@Lungsangg "error: base.html could not be found"

The above bug is fixed. The PWA install button will appear for 10 seconds; if the user does not install, it will disappear until the page is reloaded. If the PWA is installed, the button won't appear again.

@Lungsangg The install button is disappearing after a few seconds; it needs to stay visible for 10-20 seconds. The current install button also needs to be redesigned and should include:

- the Pecha logo
- a cross icon, so a user who does not want to install can dismiss the install button
2025-04-01T06:37:23.442164
2023-04-12T15:27:41
1664785822
{ "authors": [ "alexpevzner", "gustingonzalez" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2224", "repo": "OpenPrinting/ipp-usb", "url": "https://github.com/OpenPrinting/ipp-usb/issues/66" }
gharchive/issue
ipp-usb needs to be restarted after Pantum M6559NW is reconnected

Hello, I'm experiencing an issue with a Pantum M6559NW, where the scanner and printer functions stop working after disconnecting and reconnecting the device. Checking the logs of the ipp-usb execution, I found the following:

```
> HTTP[008]: GET http://localhost:60000/eSCL/ScannerCapabilities
USB[1]: connection allocated, 1 in use: --- a-- ---
HTTP[008]: connection 1 allocated
! USB[1]: send: libusb_bulk_transfer: Input/Output Error
! HTTP[008]: libusb_bulk_transfer: Input/Output Error
USB[1]: connection released, 0 in use: --- --- ---
! ESCL: eSCL: Get "http://localhost:60000/eSCL/ScannerCapabilities": libusb_bulk_transfer: Input/Output Error
[...]
> HTTP[009]: POST http://localhost:60000/ipp/print
> HTTP[009]: request body: got 173 bytes; EOF
> HTTP[009]: body is small (173 bytes), prefetched before sending
USB[2]: connection allocated, 1 in use: --- --- a--
HTTP[009]: connection 2 allocated
! USB[2]: send: libusb_bulk_transfer: Input/Output Error
! HTTP[009]: libusb_bulk_transfer: Input/Output Error
USB[2]: connection released, 0 in use: --- --- ---
> HTTP[009]: POST http://localhost:60000/ipp/print
! HTTP[009]: libusb_bulk_transfer: Input/Output Error
```

To try to solve the issue, I set max-usb-interfaces to 1. While this did work, I noticed that it resulted in the loss of the job-cancelling feature, which is required to prevent the printer from remaining in the 'printing' state after the document is printed (as detailed in this issue). Since limiting the USB interfaces to 1 is not a feasible solution if you want to implement the aforementioned workaround, I tried re-running ipp-usb when the printer is reconnected. This solution works well and provides a similar effect without the need to limit the USB interfaces.

I tested both the ipp-usb_0.9.23-1+53.1_amd64.deb version and one compiled from the master branch, and observed the same behavior in both cases. Please let me know if you need any more information, or if there is anything I can contribute. Thanks in advance!

Hi @gustingonzalez, it looks like if you connect the printer while ipp-usb is running, ipp-usb begins device initialization immediately, when the device is not ready yet. Whereas if you connect the printer and then start (or restart) the ipp-usb daemon, the device has enough time to initialize itself before the first request comes. Could you please play a little bit with the device quirks parameters? The most promising is probably init-delay, and maybe init-reset.
2025-04-01T06:37:23.564697
2019-12-12T19:10:47
537161503
{ "authors": [ "runnwerth", "tfmorris", "thadguidry", "wetneb" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2225", "repo": "OpenRefine/OpenRefine", "url": "https://github.com/OpenRefine/OpenRefine/issues/2243" }
gharchive/issue
ODS file cannot be imported, causes ODF Toolkit exceptions

**Describe the bug**
Some ODS files cannot be imported, but open just fine in LibreOffice.

**To Reproduce**
Steps to reproduce the behavior:
1. Try to import one of the test files provided, such as the test.ods file.
2. A spinner is shown with "Updating Preview...".
3. An error is shown in the console about NOT_FOUND_ERR.

**Current Results**

```
12:59:10.360 [ org.mortbay.log] /command/core/importing-controller (17ms)
org.w3c.dom.DOMException: NOT_FOUND_ERR: An attempt is made to reference a node in a context where it does not exist.
	at org.apache.xerces.dom.ParentNode.internalInsertBefore(Unknown Source)
	at org.apache.xerces.dom.ParentNode.insertBefore(Unknown Source)
	at org.odftoolkit.odfdom.pkg.OdfElement.insertBefore(OdfElement.java:491)
	at org.odftoolkit.odfdom.doc.table.OdfTable.appendColumn(OdfTable.java:1092)
	at org.odftoolkit.odfdom.doc.table.OdfTable.appendColumns(OdfTable.java:1123)
	at org.odftoolkit.odfdom.doc.table.OdfTableRow.getCellByIndex(OdfTableRow.java:254)
	at com.google.refine.importers.OdsImporter$1.getNextRowOfCells(OdsImporter.java:174)
	at com.google.refine.importers.TabularImportingParserBase.readTable(TabularImportingParserBase.java:120)
	at com.google.refine.importers.OdsImporter.parseOneFile(OdsImporter.java:185)
	at com.google.refine.importers.ImportingParserBase.parseOneFile(ImportingParserBase.java:118)
	at com.google.refine.importers.ImportingParserBase.parse(ImportingParserBase.java:89)
	at com.google.refine.importing.ImportingUtilities.previewParse(ImportingUtilities.java:961)
	at com.google.refine.importing.DefaultImportingController.doUpdateFormatAndOptions(DefaultImportingController.java:174)
	at com.google.refine.importing.DefaultImportingController.doPost(DefaultImportingController.java:93)
	at com.google.refine.commands.importing.ImportingControllerCommand.doPost(ImportingControllerCommand.java:68)
	at com.google.refine.RefineServlet.service(RefineServlet.java:189)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
```

**Expected behavior**
Preview and import should work for this test.ods file and perhaps others as well.

**Screenshots**
OpenRefine cannot open test.ods and just waits; LibreOffice does open it just fine, but OpenRefine shows a console error.

**Desktop:**
- OS: Windows 10
- Browser Version: Firefox
- JRE or JDK Version: \openjdk-13.0.1_windows-x64_bin\jdk-13.0.1

**OpenRefine:**
- Version: OpenRefine 3.3-beta

**Datasets**
These test files were used in: https://github.com/chainsawriot/readODS/tree/master/tests/testdata
Here is the zip of all of the ODS test files from that readODS project: ODS_TEST_FILES_1.zip

---

This needs to be fixed upstream, in the odftoolkit library.

Yeap, I know. But we always create our own good issue(s) to track the areas of OpenRefine that break. :-)

Same problem here! Any updates on this particular problem?

> Same problem here! Any updates on this particular problem?

We might be able to improve the user experience here by catching relevant exceptions in the right places. If this only happens when computing the value of some cell, it would be nice if we could import the rest of the table. Perhaps some of these exceptions can be meaningfully converted to our own EvalError.

> This needs to be fixed upstream, in the odftoolkit library.

It would be useful to have a link to the upstream bug, so that we can track it.
2025-04-01T06:37:23.566905
2021-04-22T05:13:26
864547329
{ "authors": [ "wetneb" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2226", "repo": "OpenRefine/OpenRefine", "url": "https://github.com/OpenRefine/OpenRefine/pull/3839" }
gharchive/pull-request
Conditionalize CI steps requiring secrets

Fixes #3677. Follow-up to #3680, which this PR builds on.

The problem with #3680 (I think?) is that it disables secrets (so, coverage reporting and the Cypress dashboard) even for PRs made by org members / from within the repository. We want a way to still enable those as easily as possible when we see well-meaning PRs. One option is to add a label on the PR to mark it as trusted. I would also like to check if the author of the PR is an org member (in which case we would not have to label the PR manually), but I haven't found the syntax for that yet.

I am merging this without review, as this change to the CI cannot be tested without being merged. Happy to revert if there are concerns.

It looks like the syntax I tried to introduce conditions with is incorrect, so I reverted this change.
2025-04-01T06:37:23.572834
2022-04-28T23:54:52
1219467783
{ "authors": [ "elroykanye", "thadguidry", "wetneb" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2227", "repo": "OpenRefine/OpenRefine", "url": "https://github.com/OpenRefine/OpenRefine/pull/4816" }
gharchive/pull-request
#3515 notify user about disk space availability

Fixes #3515

Changes proposed in this pull request:
- addition to pom deps to include oshi-core from here
- a suggestive action area for system information
- a command for retrieving system information
- an ajax request that communicates with the servlet for the command stated above

Hi @antoine2711 @wetneb, I wish to ask for your guidance on this. The approach I followed was to set up a GetSystemInfoCommand, which has a doGet method for responding to requests that require the system information. Using the oshi tool suggested in this, I was able to return a serialised object containing info like the RAM, hostname, etc. On the frontend, I used ajax to request this information from the command stated above. This information has to be updated, so I set a 1000ms interval using the setInterval function for repeatedly requesting the data. All this was done in an isolated html + js file so I could see the results without affecting much of the codebase.

My problem is that the logs are almost completely filled with calls to the command I created, and integrating it into an open project may create some problems. Is there some better way I could use to maybe subscribe to the command so I get the data in real time, or avoid the logging?

@wetneb @antoine2711 From the discussion yesterday, I went through the RefineServlet again and realised the commands are already checked for the value returned by logRequests(). I just updated the new command by overriding the default logRequests() to return false. This solves the issue I had previously.

@elroykanye the ball is in your camp on this PR too :)

@elroykanye Where are we at on this PR? Almost ready to merge? Ready? Needs heavy testing by all on all platforms?

Closing per inactivity.
2025-04-01T06:37:23.582126
2019-07-24T09:20:15
472171975
{ "authors": [ "bjost2s", "rbudde" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2228", "repo": "OpenRoberta/openroberta-lab", "url": "https://github.com/OpenRoberta/openroberta-lab/issues/202" }
gharchive/issue
view of ev3 block in simulation not working, change background not working

1. write an ev3 program
2. open sim
3. click the "EV3" button
4. the screen is white (press ctrl&- to see that the brick is outside of the screen?)

Changing the background in sim and pressing the "EV3" button generates many different errors (I expect for the same reason as above). Sometimes many ev3 bricks appear on the screen. The same behavior occurs on both Chrome and Firefox.

Already fixed in #61.
2025-04-01T06:37:23.584195
2019-11-20T10:28:49
525712888
{ "authors": [ "boonto", "philippmaurer" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2229", "repo": "OpenRoberta/openroberta-lab", "url": "https://github.com/OpenRoberta/openroberta-lab/issues/325" }
gharchive/issue
Add Romanian translation

**Describe the feature you'd like**
Add the Romanian translations from the Open Roberta Translation Sheet to the Open Roberta Lab.

**Additional context**
Link to the translation sheet (read only), from which the .json file can be generated: https://docs.google.com/spreadsheets/d/18lcfyYL2UsNJEWJYEcxSujWGMCSKYDomif2Xmvz-RFM/edit?usp=sharing

The Romanian translation is available to be selected and changes accordingly.
2025-04-01T06:37:23.679795
2013-04-24T16:39:36
13597707
{ "authors": [ "normanjaeckel", "ostcar" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2230", "repo": "OpenSlides/OpenSlides", "url": "https://github.com/OpenSlides/OpenSlides/issues/619" }
gharchive/issue
Simplify get_version()

Can we simplify the get_version function and the VERSION object? At the moment it is a 5-tuple, but it could easily be a string like '1.4b1', so we would not need to repeat the composition work in every plugin.

@normanjaeckel is this an issue for the 2.0 release?
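For illustration, here is a minimal Python sketch of the simplification being asked for. The exact 5-tuple layout shown (a Django-style version tuple) is an assumption, not necessarily what OpenSlides uses:

```python
# Assumed current layout (illustrative): a Django-style 5-tuple that every
# plugin has to recompose into a human-readable string itself.
VERSION = (1, 4, 0, 'beta', 1)

def get_version(version=VERSION):
    """Compose a string like '1.4b1' out of (major, minor, micro, stage, serial)."""
    major, minor, micro, stage, serial = version
    parts = [str(major), str(minor)] + ([str(micro)] if micro else [])
    result = '.'.join(parts)
    if stage != 'final':
        result += {'alpha': 'a', 'beta': 'b', 'rc': 'rc'}[stage] + str(serial)
    return result

# The proposed simplification: store the string directly, so plugins can
# use it without any composition work.
VERSION = '1.4b1'

def get_version():
    return VERSION
```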
2025-04-01T06:37:23.680867
2015-01-02T21:11:20
53278226
{ "authors": [ "normanjaeckel", "ostcar" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2231", "repo": "OpenSlides/OpenSlides", "url": "https://github.com/OpenSlides/OpenSlides/pull/1385" }
gharchive/pull-request
Merge stable branch

This is to show the merge commit. If it is ok, I will push it directly into the master branch.

@normanjaeckel you can push this into the master branch right now. There will be a lot of conflicts in #1381 and #1380 that have to be manually resolved, so it will be best if I merge them into master.
2025-04-01T06:37:23.753093
2017-12-01T19:46:19
278575839
{ "authors": [ "GoFroggyRun", "MattHJensen", "hdoupe", "martinholmer" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2232", "repo": "OpenSourcePolicyCenter/PolicyBrain", "url": "https://github.com/OpenSourcePolicyCenter/PolicyBrain/issues/763" }
gharchive/issue
Issues with 'cpi_offset' parameter

Both the House bill and the Senate bill adopt chained-CPI indexing. While working on the reform file upload option and TB's GUI input work, I realized that there might be some issues with how this parameter works.

To realize chained-CPI indexing for year 2018, the cpi_offset parameter needs to be set to -0.0025 in year 2017. However, both reforms (House and Senate) actually take effect in 2018. In this case, to implement either reform in TB's GUI, users always have to begin with year 2017. So each input field other than cpi_offset has to have an extra `*`, indicating no change in 2017's law. Also, the result table would build an extra column for year 2017 with no change whatsoever. Does this really make sense?

A couple of solutions to simplify the issue:

1. Come up with some way to allow users to edit previous-year parameters on TB's GUI page, just like what we have done to allow future-year edits.
2. Modify how the cpi_offset parameter works in tax-calculator by simply implementing the chained-CPI indexing one year ahead of the reform year specified.

@martinholmer @MattHJensen @hdoupe

@GoFroggyRun said we could:

> Modify how the cpi_offset parameter works in Tax-Calculator by simply implementing the chained-CPI indexing one year ahead of the reform year specified.

I view this as a very bad idea and I'm not even sure how it would work. Tax-Calculator has no problem with a reform that starts before the ten-year budget window. It is the design of TaxBrain that is the problem. When I started on this project, TaxBrain required all reform provisions to start in the first budget year (that is, there was no `*` capability to specify delayed implementation of a reform provision).

@GoFroggyRun said we could:

> Come up with some way to allow users to edit previous-year parameters on TaxBrain's GUI page, just like what we have done to allow future-year edits.

And @martinholmer said:

> Tax-Calculator has no problem with a reform that starts before the ten-year budget window. It is the design of TaxBrain that is the problem.

One way to think about how to redesign TaxBrain is to focus on one particular limitation of TaxBrain: the concept of "Start Year" has two conceptually distinct meanings. In TaxBrain, "Start Year" means the first year of the reform, and it also means the first year of the ten-year budget window (for which tax results are shown). It seems to me that those are distinct concepts and we don't necessarily want them to be equal in all cases. So, why not have two start years on the TaxBrain GUI page? One would be the "Reform Start Year" and the other would be the "Output Start Year". Or maybe there are better labels for these two distinct start years. @hdoupe @MattHJensen

@martinholmer's proposal might be best, and that option would certainly be an improvement on where we are now, because it would at least enable users to generate 10 years of output for the TCJA -- currently they can not. But it would not solve one of the problems @GoFroggyRun mentioned ("each input field other than cpi_offset has to have an extra `*`, indicating no change in 2017's law"), and it would create a fairly significant source of user error, I imagine. Users might change the Output Start Year and forget to change the Reform Start Year, or similar.

So I wonder if anyone is aware of users having encountered a problem like the one we face now, where the user wants a different reform start year than output start year? If not, the simplest solution might be to just change the definition of the cpi_offset parameter so that changing the 2018 value of cpi_offset changes the 2018 value of parameters, but I don't know how difficult it would be to implement this in tax-calculator.

One advantage of separating the reform start year and the output start year (@martinholmer's proposal) is that 5 years from now, say, it would be very easy to see how the TCJA implemented way back in 2017/2018 affects the forward-looking 10-year budget outlook.

So actually, I think my preference would be to do both: separate the Output Start Year from the Reform Start Year, and redefine cpi_offset in tax-calculator. The redefinition of cpi_offset should be a higher priority because it would solve immediate problems for users analyzing the TCJA, and the separation of Output Start Year and Reform Start Year could happen on a slower time frame.

(An alternative option, which I am not terribly fond of but will present for others, would be to add a new input character that indicates that the preceding value is for a year before the start year. For example, if we chose `;` for such a purpose and the start year is 2018, then `-0.0025;` would indicate that the cpi indexing is set in 2017.)

I agree with @martinholmer's proposal of having a start year and an output start year. I also agree with @MattHJensen that the cpi_offset parameter should be redefined in Tax-Calculator if this is possible. It doesn't make much sense to me to have a parameter not have any effect until a year after it's activated.

@hdoupe said:

> I also agree with @MattHJensen that the cpi_offset parameter should be redefined in Tax-Calculator if this is possible. It doesn't make much sense to me to have a parameter not have any effect until a year after it's activated.

I have no idea how to "redefine" the cpi_offset parameter given the way Tax-Calculator does price-inflation and wage-growth indexing. The current indexing logic was built in at the very beginning of the project, before I got involved with Tax-Calculator.

I agree with @martinholmer, @MattHJensen and @hdoupe that having a start year and an output year on TaxBrain would be a huge improvement. We definitely want to incorporate such an enhancement after things cool down a bit.

My apologies that my initial suggestion was not clear enough. Indeed, after looking into the price-inflation and wage-growth indexing mechanisms in tax-calculator, I agree with @martinholmer that modifying that logic is a bad idea, and I have no idea how to do that either. In fact, what I have in mind, instead of dealing with that convoluted logic, is a roundabout approach. After reforms are read into tax-calculator, but before implementing any of them, are we able to apply some special treatment to the cpi_offset parameter so that, if it were specified in year n, it would be processed as year n-1 in the calculator? I understand this might not be an elegant solution but, to avoid dealing with any indexing logic, it would seem a reasonable one. Also, given that I am not very familiar with the indexing logic in tax-calculator, it is very likely that my proposal is implausible. @martinholmer, does the approach make sense to you? Would it allow a 1-year lag in specifying cpi_offset-related reforms without touching any of the indexing logic?

It seems like the consensus is that we need both a parameter (or reform) start year and an output start year. The interface would then look something like this: (screenshot) I think that we would only have to make changes on the TaxBrain side. Instead of passing the reform start year to TC, we would send the output start year as the start_year parameter in tbi.py. While we are adding this functionality, I think we should allow the user to select a start year (output and reform) for any year up to 2027. The final year of output would still be 2027 (or output start_year + 9), but the user would have an option to show fewer years. The output tables already allow us to do this: (screenshot)

@GoFroggyRun and I were discussing implementation issues with adding an output start year. We circled back to the idea of adding a special reverse character. Here's our conversation:

From @GoFroggyRun:

> Also, having a separate reform year and output year is not the only solution: the CPI offset parameter still has to be specified in 2017, and users will have to add an extra `*` for each reform provision other than the CPI offset parameter, which can be annoying and confusing. An alternative approach would be to allow previous-year GUI input edits (for example, in start year 2018, we allow users to specify 2017 and earlier parameter values in some format). This alternative approach doesn't involve separating reform year and start year, and would better solve the problem, at least for the moment, in my opinion. I'm not sure, however, how difficult this approach is. What do you think about this alternative?

From me:

> I agree that separating the reform year and output years is not the only solution. However, I think it gives us some flexibility that we may want in the future. I'm not a huge fan of adding a reverse parameter that pushes the following parameter back a year. The nice part about the GUI interface is that you don't need to know how to program in order to use it. So, I'm wary of adding another special character and creating our own little programming language. On the other hand, adding a reverse character seems like a pretty simple addition. Further, if we implement this character we could implement the TCJA in a pretty straightforward way: enter cpi_offset as "<,-0.0025" and fill in the other parameters like usual (I'm thinking "<" would be a good character, but I'm open to other suggestions). If we add this parameter, we should only allow it to be used as the first character in the string. We don't want to implement some function that has to figure out what this means: 7000,,,8000,<,<,10000,<,* etc.

From @GoFroggyRun (replying point by point):

> HANK: I agree that separating the reform year and output years is not the only solution. However, I think it gives us some flexibility that we may want in the future.

SEAN: Right. I agree. This is definitely something nice to have.

> HANK: I'm not a huge fan of adding a reverse parameter that pushes the following parameter back a year. [...] (I'm thinking "<" would be a good character, but I'm open to other suggestions)

SEAN: Me neither haha. But it seems to me that this is an easy way to deal with the special case for parameters like CPI offset --- hopefully we won't have too many of them. If such an addition is simple, the only thing we need to worry about is coming up with a symbol that is straightforward yet special enough. Let's move the discussion to GitHub and see if others have any better ideas.

> HANK: If we add this parameter, we should only allow it to be used as the first character in the string. We don't want to implement some function that has to figure out what this means: 7000, *, *, 8000,<,<,10000,<,* etc.

SEAN: This is exactly what I have in mind as well.

HANK: Do you mind if I move the last two comments to github #763?

SEAN: Not at all.

cc @MattHJensen @martinholmer @MaxGhenis

I guess the only question remaining regarding "reverse editing" is what the syntax should look like. The `<,` symbol @hdoupe suggested is a good one. If `<,` were adopted, the "reverse editing" would look something like (just a random example): `-0.001 <, -0.0025 <, *, 0, *, *` or, if we were using `<,<`: `-0.001 <,< -0.0025 <,< *, 0, *, *`

@hdoupe Is this what you were thinking? What do you think of the `<,<` symbol?

@GoFroggyRun asked:

> @hdoupe Is this what you were thinking? What do you think of the `<,<` symbol?

Sort of. I think we should impose some strict rules on how this symbol can be used so that we can keep everything simple:

1. It can only be used at the beginning of the string.
2. It can only send a parameter back one year (this rule could be relaxed fairly easily).

For example, if you set the start year as 2018 and the cpi_offset parameter to "<,-0.0025", then this sets cpi_offset to -0.0025 in 2017. Implementing this is pretty straightforward. In fact, I just put together a prototype. I'll open a PR in a few minutes.

I think adding a reverse parameter and the ability to specify a different output year adds significant flexibility to TaxBrain. Consider a reform that goes into effect in 2018, but the vast majority of its parameters do not take effect until 2020. You could set the reform year to 2020 and the output year to 2018. You could then use the "<" character to enter the parameters that take effect in 2018, and simply enter the other parameters without having to use a bunch of "*" characters to get them up to 2020.

I guess the argument against this character is that if you are going to learn how to use this character, then wouldn't it be easier to just write a json file?

> I think adding a reverse parameter and the ability to specify a different output year adds significant flexibility to TaxBrain.

Definitely agreed.

> I guess the argument against this character is that if you are going to learn how to use this character, then wouldn't it be easier to just write a json file?

I don't think so -- there are significant other benefits of the GUI, such as being able to view documentation, current-law values, and the reform all in one place.

Ok, I see. That makes sense.
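As a concrete illustration of the rules @hdoupe lays out above, here is a minimal Python sketch of how a GUI field with a leading reverse character could be parsed. The function name and return shape are hypothetical, not the actual TaxBrain code:

```python
def parse_gui_field(raw, start_year):
    """Parse a comma-separated GUI field like '<,-0.0025' or '7000,*,8000'.

    A leading '<' (the proposed reverse character) shifts the first value
    back one year relative to start_year; it is only allowed at the start.
    """
    tokens = [t.strip() for t in raw.split(',')]
    year = start_year
    if tokens and tokens[0] == '<':
        year -= 1          # first value applies one year before start_year
        tokens = tokens[1:]
    if '<' in tokens:
        raise ValueError("'<' is only allowed at the beginning of the field")
    values = {}
    for token in tokens:
        if token != '*':   # '*' means "no change this year"
            values[year] = float(token)
        year += 1
    return values

# Example: start year 2018, cpi_offset entered as "<,-0.0025"
assert parse_gui_field('<,-0.0025', 2018) == {2017: -0.0025}
```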
2025-04-01T06:37:23.793997
2019-08-13T06:48:47
479987319
{ "authors": [ "ghickman" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2233", "repo": "OpenTechFund/opentech.fund", "url": "https://github.com/OpenTechFund/opentech.fund/pull/1406" }
gharchive/pull-request
Add Debug Toolbar to Dev

This sets up development dependencies so we're not installing testing libraries into production environments, and installs the Django Debug Toolbar in development, configuring it so it's always running but with its panels disabled. I've disabled the panels to cope with the prohibitive performance penalty some of the panels bring, but they can be toggled on in the UI easily.

Updated as per comments and rebased into existing changes.
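A rough sketch of the kind of settings change this PR describes, assuming a standard Django settings module and a recent django-debug-toolbar; the specific panels listed are illustrative, not the project's actual configuration:

```python
# Development settings sketch (hypothetical; not the repo's real settings).
INSTALLED_APPS = [
    # ... project apps ...
    'debug_toolbar',
]

MIDDLEWARE = [
    'debug_toolbar.middleware.DebugToolbarMiddleware',
    # ... other middleware ...
]

DEBUG_TOOLBAR_CONFIG = {
    # Always show the toolbar in development, regardless of INTERNAL_IPS
    # (recent toolbar versions accept a callable here; a dotted-path
    # string to a function works too).
    'SHOW_TOOLBAR_CALLBACK': lambda request: True,
    # Start with heavy panels disabled to avoid the performance penalty;
    # individual panels can still be toggled on in the UI.
    'DISABLE_PANELS': {
        'debug_toolbar.panels.sql.SQLPanel',
        'debug_toolbar.panels.templates.TemplatesPanel',
    },
}
```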
2025-04-01T06:37:23.838451
2016-11-14T15:10:12
189136182
{ "authors": [ "andySigler", "coveralls" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2234", "repo": "OpenTrons/opentrons-api", "url": "https://github.com/OpenTrons/opentrons-api/pull/107" }
gharchive/pull-request
including total commands count within robot run emit

Small PR including `'commands_total': len(robot._commands)` within the robot.run 'command-run' notification.

Coverage decreased (-0.003%) to 94.882% when pulling cdbafee3e8dcdeadd9287a4780c1e31f69c0de8f on 258-s-protocol-command-emits-include-total-commands-to-calculate into c390ef87fd3f634cfb0db631b14ce52e409d7c25 on master.
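For illustration, a minimal sketch of the described payload change; apart from `commands_total`, the names here are hypothetical, not the actual Opentrons API:

```python
# Hypothetical minimal model: each 'command-run' notification now carries
# the total command count alongside the index, so subscribers can compute
# progress. Only 'commands_total' is taken from the PR text.
class Robot:
    def __init__(self, commands):
        self._commands = commands

    def run(self, emit):
        for index, command in enumerate(self._commands):
            emit('command-run', {
                'command': command,
                'index': index,
                'commands_total': len(self._commands),
            })

robot = Robot(['aspirate', 'dispense', 'drop_tip'])
robot.run(lambda name, payload: print(name, payload))
```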
2025-04-01T06:37:23.879480
2019-10-29T18:33:11
514134817
{ "authors": [ "frangio", "spalladino" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2236", "repo": "OpenZeppelin/openzeppelin-sdk", "url": "https://github.com/OpenZeppelin/openzeppelin-sdk/pull/1271" }
gharchive/pull-request
Force compilation to fix tests

Fixes issues reported by @abcoathup in #1241.

I'm not sure why the test script was only compiling in the CI before. I suspect it was because it doesn't work without rm -rf-ing the build/contracts directory first... And this wasn't necessary in the CI since it's a clean environment.

> I'm not sure why the test script was only compiling in the CI before

I think the reason was to make tests faster by avoiding the compilation step. But I prefer having to run truffle test manually (instead of npm t) rather than having tests fail for someone new to the project due to missing steps.
2025-04-01T06:37:23.881811
2016-09-08T23:14:40
175888202
{ "authors": [ "Openarl", "dein0s", "mey1R" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2237", "repo": "Openarl/PathOfBuilding", "url": "https://github.com/Openarl/PathOfBuilding/issues/19" }
gharchive/issue
ring template and flat ele damage to attacks

It's not possible to add "cold, fire or lightning to attacks" for any rare ring template.

Yep, I can confirm that these affixes are missing in the ring templates. Some other affixes are missing too (I checked '+ to Evasion' and '% of Physical Attack Damage Leeched as Life/Mana'). Not counting new ones from essences, just the defaults from http://poeaffix.net/. Hard to implement, or just forgotten in the templates?

Affixes are added to templates manually, so it's almost always the case that I haven't bothered to include them. I've generally been adding them as people request them, so I'll put these ones in (minus the leech one for now, since PoB doesn't do leech calculations (yet!)).

Much appreciated as always :)
2025-04-01T06:37:23.883263
2017-03-02T13:15:09
211387701
{ "authors": [ "Openarl", "Teriderol" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2238", "repo": "Openarl/PathOfBuilding", "url": "https://github.com/Openarl/PathOfBuilding/issues/205" }
gharchive/issue
Lioneye's fall

The jewel seems to not affect the passive nodes in range.

It seems to be working; are there specific nodes that don't appear to be converted? Note that conversion jewels don't currently change the node descriptions to reflect the converted stats.

Restarted my program and now it treats them as jewel conversions. Thanks though, works great!
2025-04-01T06:37:23.890401
2022-04-11T11:55:52
1199864236
{ "authors": [ "barakplasma", "mcous" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2239", "repo": "Opentrons/opentrons", "url": "https://github.com/Opentrons/opentrons/issues/9927" }
gharchive/issue
feat: Development environment with nix-shell as alternative to DEV_SETUP.md

**Overview**

The DEV_SETUP.md is quite complicated, with many manual steps, especially if someone only wants to work on part of opentrons. This feature request (which I could make a PR for) would be to add a development environment with nix-shell as an alternative.

**Implementation details**

A section of DEV_SETUP.md, or a replacement, would be:

1. Install Nix (or use the Nix docker image)
2. Install docker, then run `docker run -it -p 8080:8080 -v $(pwd)/:/workdir nixos/nix` from the opentrons mono-repo
3. Run `cd workdir`, then `nix-shell`, using this shell.nix file:

```nix
{ pkgs ? import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/715dc137b08213aabbbe0965b78ab938e5d8d3b7.tar.gz") {} }:

pkgs.mkShell {
  buildInputs = [
    pkgs.nodejs-14_x
    pkgs.yarn
    pkgs.libudev0-shim
    pkgs.python37
    pkgs.gnumake
    pkgs.curl
    pkgs.openssh
    pkgs.git
  ];
}
```

The development environment would then be ready, leaving a dev to just run `cd protocol-designer; make dev` and see it just work.

**Design**

Similar to:
- https://nixos.org/download.html#nix-install-docker
- https://nixos.org/guides/declarative-and-reproducible-developer-environments.html
- https://dev.to/edbentley/nix-for-frontend-developers-64g

**Acceptance criteria**

The shell.nix file above lets me hack on the protocol designer already, but it doesn't support the root `make setup` command yet, due to lack of support for the usb-detection npm package so far.

Hi @barakplasma, this is an interesting proposal! While I don't think we have the resources to test and support an "officially blessed" Docker/NixOS-based development environment, if you were to create a GitHub repository or Gist with your nix-shell configuration + setup instructions, I think that would be fantastic. I'd definitely be down to link to those instructions under an "Alternative Community Setups" section or something similar.

Sure, I can make an external repo for this contribution.

@mcous I opened a repo here with instructions on how to use nix-shell for dev setup (at least on the protocol designer). I'll make a PR after I use this alternate setup a bit longer.
2025-04-01T06:37:23.895989
2017-09-22T18:05:46
259893299
{ "authors": [ "PrimeTimeTran", "hpjaj" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2240", "repo": "OperationCode/operationcode_backend", "url": "https://github.com/OperationCode/operationcode_backend/pull/164" }
gharchive/pull-request
CodeSchool API Endpoints for CRUD & Rake Task for seeding from code_schools.yml

**Description of changes**

#76

Created CRUD API endpoints for the CodeSchool model. Also added a Location model, because some of the CodeSchools have multiple locations.

Also created a rake task for seeding data: `rake schools:populate` reads from ./config/code_schools.yml and creates records in the database.

@hpjaj could you take a look at the API documentation I've added? Besides this, I'll remove the configuration for Raven I added in ./config/application.rb and change the host: back to host: operationcode-psql in ./config/database.yml, and I think it'll be close to done. Is there anything else you can see that I've missed?

@hpjaj I've worked on the requests you had.

@hpjaj I've committed and pushed the changes you requested =).

@PrimeTimeTran - There ended up being a couple of issues with this PR. I wanted to chat with you about them. Are you a member of the Operation Code Slack channel? If yes, what is your @ name? If no, you can join by going to https://operationcode.org/profile and clicking Enter our Slack channel. Once you are in Slack, my handle is @ harry.

@hpjaj I see, sorry I missed them =(. No, I'm not. Ok, I'll join =)
2025-04-01T06:37:23.918349
2022-11-07T16:12:08
1438617104
{ "authors": [ "albgus", "swoehrl-mw" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2241", "repo": "Opster/opensearch-k8s-operator", "url": "https://github.com/Opster/opensearch-k8s-operator/issues/350" }
gharchive/issue
Sensitive configuration in the OpenSearch Dashboards configuration

I'm looking into implementing OpenID Connect authentication. In this scenario the IdP client secret should be configured in Kibana: https://opensearch.org/docs/latest/security-plugin/configuration/openid-connect/#configuration-example

The only way I can see to inject values from a secret is using the env: section with secretKeyRef, but unfortunately it seems that the dashboards security plugin doesn't support reading configuration from the environment. I tried using the keystore as well, but that generated an error that opensearch_security.openid.client_secret is not a supported key, and it seems to only be mounted into the OpenSearch nodes anyway.

Hi @albgus. AFAIK you can use environment variables inside the dashboards.yml config. So you should be able to inject the sensitive information as environment variables from a secret via the env option, and then reference that in the additionalConfig. Something like:

```yaml
# ...
dashboards:
  env:
    - name: OPENID_CLIENT_SECRET
      valueFrom:
        secretKeyRef: ...
  additionalConfig:
    opensearch_security.openid.client_secret: "${OPENID_CLIENT_SECRET}"
```

Can you try that?

> I tried using the keystore as well [...] and it seems to only be mounted into the OpenSearch nodes anyway.

Correct, the keystore is only used in the opensearch pods, not in dashboards.

Alright, I didn't realize that the config file supported variable substitution as well. It seems that this will work. Thanks!
2025-04-01T06:37:23.921944
2023-11-11T03:14:29
1988684016
{ "authors": [ "research4pan", "youngcraft" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2242", "repo": "OptimalScale/LMFlow", "url": "https://github.com/OptimalScale/LMFlow/issues/676" }
gharchive/issue
How to fix the git clone

Recently I tried to clone "https://github.com/OptimalScale/LMFlow" onto my desktop computer. I ran `git clone XXXXX` on my "D://" drive, and then got:

```
Error downloading object: assets/multimodal-chatbot-demo.gif (2062965):
Smudge error: Error downloading assets/multimodal-chatbot-demo.gif (206296519e7892d65cacc48c7e98c6743301b74c29401d57e325197bd6e41cac):
batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
```

Thanks for your interest in LMFlow! This was caused by running out of Git LFS quota. We have added a data package to increase the quota. You may try again to see if the problem still occurs. Thanks 😄
2025-04-01T06:37:23.937218
2022-06-17T12:54:20
1274997303
{ "authors": [ "B3nz01d", "paulinea" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2243", "repo": "Orange-OpenSource/ods-android", "url": "https://github.com/Orange-OpenSource/ods-android/issues/189" }
gharchive/issue
Documentation - Homogenize documentation structure for components

Each component documentation page should be built using the following structure:

- **Component Name** (only displayed on standalone documentation)
- The introductory paragraph describing the component, found in the DSM (only displayed on standalone documentation)
- **Page Summary**: a list of internal anchors for the headers of this component page, in order to jump directly to a specific section
- **Specifications references**: a list of useful references for this component
  - DSM link (only displayed on standalone documentation)
  - Material Design link
  - Javadoc link
- **Accessibility**: contains a link to the accessibility guidelines that are required to build this component
- **Variants**: a list of all the variants of this component. Example of variants for buttons: text button, text button with icon, outlined button, outlined button with icon, contained button, contained button with icon, toggle button. For each variant:
  - Title
  - Description
  - Screenshots (light and dark)
  - "Implementation in XML" or "Implementation in Jetpack Compose", depending on the documentation displayed
- **Component Specific Tokens**: contains a list of all the available component-specific tokens (background color, foreground color, variant specifics, ...)

Updates to the doc have been confirmed.
2025-04-01T06:37:23.939744
2017-07-11T14:22:44
242066094
{ "authors": [ "BenedekFarkas", "Xceno", "sebastienros" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2244", "repo": "OrchardCMS/Orchard", "url": "https://github.com/OrchardCMS/Orchard/pull/7771" }
gharchive/pull-request
ContentPart names should be sanitized upon creation.

Fixes #7770. Added the ToSafeName() sanitization already used in CreatePart to CreatePartPOST. This ensures that only valid technical names can be used.

Shouldn't it be a validation message instead, if the ToSafeName result is different?

I'll decline this PR because I found that creating and editing both Parts and Types lack some validation against special characters, and the change in this PR needs more work (I'll commit the fixes soon).
2025-04-01T06:37:23.993294
2024-03-29T23:32:28
2216085933
{ "authors": [ "OreCruncher", "jesuisgrogro" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2245", "repo": "OreCruncher/DynamicSurroundingsFabric", "url": "https://github.com/OreCruncher/DynamicSurroundingsFabric/issues/104" }
gharchive/issue
No biome sounds on Fabric server 1.20.4

Hello,

Firstly, the mod is really nice, well done! I've been using it for weeks on a vanilla server with friends and everything was working just fine. Recently, we started a new adventure on a Fabric server and the biomes no longer produce ambience sounds. Yet some sounds are working, like crows, owls... We installed VoiceChat, Lithium and AudioPlayer, but we also tried to restart the server with no mods or sound-related datapacks, and it didn't fix anything. I have the mod in my own mods folder; it works on solo maps and on other vanilla servers, but not on the Fabric one... I've tried to find out whether anyone was having this type of issue with Fabric, but I didn't find anything. What can we try that may fix this issue? Thank you!

Grogro

Which version of DS are you using? And is the server 100% Fabric, as opposed to some other server flavor (like Paper, BungeeCord, etc.)? I have seen this issue with non-Fabric servers, as well as times when the internal sequence of connecting to a server is out of order.

Hello Ore. Thanks for your fast reply. The F3 screen says "fabric-loader-0.15.7-1.20.4/fabric/Fabric", so I guess the server is 100% Fabric. Fabric API is 0.96.11. I'm using DS Fabric 0.3.3. We also use AudioPlayer 1.8.9 and VoiceChat 2.5.9. Everything is in 1.20.4.

Grogro
2025-04-01T06:37:24.016181
2018-03-13T20:40:30
304927354
{ "authors": [ "joshfraser", "micahalcorn", "wanderingstan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2246", "repo": "OriginProtocol/demo-dapp", "url": "https://github.com/OriginProtocol/demo-dapp/issues/98" }
gharchive/issue
Switch to standard alertify npm module

@micahalcorn had to patch the old alertify to work with our webpack, which necessitated us creating our own fork of the library. See commit https://github.com/OriginProtocol/demo-dapp/commit/b3dd1babcac01e1c6f7ecca039bb0fd40b510dba

I'd vote that we just pick one of the standard supported npm packages and run with it rather than have the messiness of maintaining our own fork. @ryana @joshfraser any opinions on this? (Micah's patch: https://github.com/micahalcorn/alertify.js/commit/fe25d03893b0a10697127df2b1d90718be47089e)

I would agree. It's worth noting that there was at least some desire for uniformity between the DApp and company website (see #46). I'm happy to update both to a newer version and/or tweak the styles in an effort to offer the same UX, if necessary.

We should have a longer conversation around dependencies on the engineering call tomorrow, but we should try and keep this app really simple with as few third-party dependencies as possible. It feels a bit silly to have to maintain a whole separate package around this when we can replicate this functionality ourselves with a few lines of CSS and JS. It feels like we're probably making this way too complicated.

@micahalcorn What did we end up implementing on this, and can I close this one out? :)

@wanderingstan nothing yet. I have a partially-baked replacement, but I've had it on the back burner while focusing on the UI. And it would really be best to have Matt & Aure settle on the alert/message/notification UX distinction before merging a solution to this issue.

Done via #145 ✅
2025-04-01T06:37:24.018013
2018-02-03T02:46:08
294076893
{ "authors": [ "wanderingstan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2247", "repo": "OriginProtocol/origin-dapp", "url": "https://github.com/OriginProtocol/origin-dapp/issues/78" }
gharchive/issue
Show warning when no listing type selected

On the create page (http://localhost:3000/create), if no listing type is selected and the user hits Next, nothing happens. We should show a message asking them to select a type. Related to #46.

I think this is already fixed. Will confirm.

Fixed.
2025-04-01T06:37:24.028026
2024-03-07T16:09:52
2174220721
{ "authors": [ "DanielVF", "shahthepro", "sparrowDom" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2248", "repo": "OriginProtocol/origin-dollar", "url": "https://github.com/OriginProtocol/origin-dollar/pull/1993" }
gharchive/pull-request
Disable minting with LSTs

Changes:
- Disables minting with LSTs
- Adds a new _mint internal method, since super.mint() won't be possible because it's external; we also have to remove the nonReentrant modifier from the inherited contract
- Removes payable from fallback(), since neither VaultCore nor VaultAdmin has a receive() method and we don't expect the Vault to hold ETH directly either

Deployment:
- OETHVaultCore implementation (upgraded): 0x9535413B1B9862D0123A36F55e9bf20EBA4b152e
- Deployer: 0x58890a9cb27586e83cb51d2d26bbe18a1a647245

Governance Proposal:
- Proposal ID: <PHONE_NUMBER>1051679346398237778682113450913991391551128830137727748559915078301
- Proposal Tx: 0xbfdd55ea8376042a6899c9df47ce0e62a7308cb62b48b89c6e9832d16183aea0
- Proposer: 0x6a6D776120f7e4a8dba5F6bF49b85cb340Cfe241

If you made a contract change, make sure to complete the checklist below before merging it in master. Refer to our documentation for more details about contract security best practices.

Contract change checklist:
- [ ] Code reviewed by 2 reviewers.
- [ ] Copy & paste code review security checklist below this checklist.
- [ ] Unit tests pass
- [ ] Slither tests pass with no warning
- [ ] Echidna tests pass if PR includes changes to OUSD contract (not automated, run manually on local)

I've verified that:
- ✅ the governance proposal with id 72....8301 matches the deploy script
- ✅ OETH VaultCore is upgraded to the new implementation
- ✅ the published code of the new implementation matches the one in this PR

Deploy review:
- [x] All deployed contracts are listed in the deploy PR's description
- [x] Deployed contract's verified code (and all dependencies) match the code in master
- [x] The transactions that interacted with the newly deployed contract match the deploy script.
- [x] Governance proposal matches the deploy script
- [ ] Smoke tests pass after fork test execution of the governance proposal (will do as a part of code review)

Smoke fork tests check out:

```python
from world import *

NEW_IMPL = "0x9535413b1b9862d0123a36f55e9bf20eba4b152e"
VAULT_CORE_PROXY = "0x39254033945AA2E4809Cc2977E7087BEE48bd7Ab"

proxy = Contract.from_explorer(VAULT_CORE_PROXY, as_proxy_for=VAULT_CORE_PROXY)
proxy.upgradeTo(NEW_IMPL, {'from': TIMELOCK})

WETH_WHALE = "0x2fEb1512183545f48f6b9C5b4EbfCaF49CfCa6F3"
weth.approve(oeth_vault_core, 1e70, {'from': WETH_WHALE})
oeth_vault_core.mint(WETH, 1e18, 1e18, {'from': WETH_WHALE})
print(oeth.balanceOf(WETH_WHALE))
print(oeth.balanceOf(WETH_WHALE) / 1e18)

STETH_BAGS = "0x5fEC2f34D80ED82370F733043B6A536d7e9D7f8d"
steth.approve(oeth_vault_core, 1e70, {'from': STETH_BAGS})
oeth_vault_core.mint(steth, 1e18, 0, {'from': STETH_BAGS})

oeth_vault_core.redeemAll(0, {'from': WETH_WHALE})
```
2025-04-01T06:37:24.108312
2021-07-14T12:51:48
944396593
{ "authors": [ "Ousret", "akx", "codecov-commenter", "dopplershift", "sethmlarson" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2250", "repo": "Ousret/charset_normalizer", "url": "https://github.com/Ousret/charset_normalizer/pull/57" }
gharchive/pull-request
Don't inject unicodedata2 into sys.modules

I noticed charset_normalizer meddles with sys.modules, causing this:

>>> import charset_normalizer
>>> import unicodedata
>>> unicodedata
<module 'unicodedata2' from '.../site-packages/unicodedata2.cpython-39-darwin.so'>

This PR fixes that by using a fairly standard try: / except ImportError: guard instead of the sys.modules hook.

>>> import charset_normalizer
>>> import unicodedata
>>> unicodedata
<module 'unicodedata' from '.../python3.9/lib-dynload/unicodedata.cpython-39-darwin.so'>

Hi @akx, thanks for your report and your PR proposal. Why do you say that this PR "fixes" anything when this behavior is well documented and expected? I would be open to changing that, considering recent events. I have some further questions regarding this patch. Why did you install unicodedata2 and not expect it to be used instead of your CPython unicodedata distribution? Removing the charset_normalizer.hook and replacing it with a plain import won't be useful unless we create some intermediary compat-like solution. It would be nicer to propose something that both keeps backward compatibility AND "fixes" your concern. Thanks.

Hi @Ousret. As far as I can see, it's certainly not documented that simply importing charset_normalizer while having unicodedata2 installed will make it impossible to access unicodedata (unless you had imported unicodedata beforehand). Sure, it's documented that charset_normalizer will internally use unicodedata2 if it's available, and that's fine, but I don't expect it to mess with the global state of my Python interpreter. If I wanted to patch in unicodedata2 for all unicodedatas in my interpreter, I'd want to be explicit about that. (Explicit is better than implicit.) My proposal is exactly the same mechanism that patches in charset_normalizer for chardet in requests (which is how I stumbled upon this library in the first place): https://github.com/psf/requests/blob/a1a6a549a0143d9b32717dbe3d75cd543ae5a4f6/requests/compat.py#L11-L14

I really do not understand why having a more up-to-date unicodedata is messing with your global Python environment. Anyway. Like I said earlier, I am open to changing that behavior, but I must be convinced that your method/patch actually does what is expected.

It's a matter of principle. If I do import unicodedata, I want unicodedata, not another module, just because I happened to import an unrelated module (charset_normalizer) before importing unicodedata. It's maybe slightly far-fetched, but without this patch you can't easily write a program that compares the differences between unicodedata and unicodedata2 unless you're specific about your import order! As for being convinced the patch does what is expected: I'm pretty sure the CI suite for charset_normalizer would show that it doesn't break things. Can you enable running the GitHub workflows for this PR?

I need another person's opinion on that. @sethmlarson What do you think of that? Unfortunately, in the current state the test suite does not prove without a doubt that unicodedata2 is correctly used. I am working on it. How is that the right thing for us?

import unicodedata2 as unicodedata
"a".isalpha()

What I am concerned about is methods like .isalpha() from a str instance. How do you point them to unicodedata2?

As mentioned in https://github.com/Ousret/charset_normalizer/pull/57#discussion_r669703707 , I don't believe having unicodedata2 installed will help at all with str.isalpha(), etc. calls. In other words, you'd need to have your own def isalpha(s) sort of implementation that'd consult unicodedata2's tables, and doing that in pure Python will likely be slow.

@Ousret I don't know much about the unicodedata module or whether it even interacts with isalpha; from a brief reading of CPython it doesn't seem like it interacts with this static table: https://github.com/python/cpython/blob/bb3e0c240bc60fe08d332ff5955d54197f79751c/Python/pyctype.c If you chase down the implementation of str.isalpha, it ends up checking flags in this table, at least from my reading. If this is the case, we shouldn't be injecting unicodedata2, as it doesn't modify str.isalpha behavior.

@sethmlarson That's the non-Unicode table. See https://github.com/Ousret/charset_normalizer/pull/57#discussion_r669703707 for the Unicode isalpha chase.

Codecov Report

Merging #57 (a30cf9f) into master (929f13c) will decrease coverage by 0.15%. The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master      #57      +/-   ##
==========================================
- Coverage   84.46%   84.31%   -0.16%
==========================================
  Files          12       11       -1
  Lines        1062     1058       -4
==========================================
- Hits          897      892       -5
- Misses        165      166       +1

Impacted Files | Coverage Δ
charset_normalizer/__init__.py | 100.00% <ø> (ø)
charset_normalizer/utils.py | 76.25% <100.00%> (+0.52%) :arrow_up:
charset_normalizer/api.py | 82.65% <0.00%> (-1.16%) :arrow_down:

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Thanks, @akx @sethmlarson for your inputs. I am okay with merging this. I don't know when a new tag will be available; it would be wise to wait for any remarks or concerns beforehand. Do you have any more concerns @akx ? @Ousret I don't have any further concerns regarding this PR :) Thank you for your consideration! You are welcome, thanks for your contribution as well. Now available under https://github.com/Ousret/charset_normalizer/releases/tag/2.0.2

I just found this while trying to debug a problem, and I just wanted to make absolutely clear that the old way causes problems. With charset_normalizer 2.0.0 (and 2.0.1), I saw this behavior:

No import:

❯ python
Python 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 05:59:23) [Clang 11.1.0 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> compile(r's="$\N{DEGREE FAHRENHEIT}$"', 'foo.py', 'exec')
<code object <module> at 0x7fde000c4810, file "foo.py", line 1>
>>>

With import charset_normalizer:

❯ python
Python 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 05:59:23) [Clang 11.1.0 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import charset_normalizer
>>> compile(r's="$\N{DEGREE FAHRENHEIT}$"', 'foo.py', 'exec')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "foo.py", line 1
SyntaxError: (unicode error) \N escapes not supported (can't load unicodedata module)
>>>

With the fix merged here in 2.0.2 (and the latest 2.0.7), the problem disappears. (I have an entirely different issue to solve where conda-forge is still giving me 2.0.0 🤷‍♂️ )

Thanks for the report. Indeed, I saw this weird behavior of conda-forge serving 2.0.0 by default instead of the latest release. I do not know why, or whom to reach to explain that. https://anaconda.org/conda-forge/charset-normalizer/files The download trend suggests that something is wrong. Could it be that the requests version requirement ~=2.0.* for this lib is applied in the wrong way?

Incorrect dependencies for requests was exactly the reason: https://github.com/conda-forge/requests-feedstock/pull/48
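For readers who want the pattern rather than the diff: a minimal sketch of the try/except ImportError guard discussed above, in the spirit of requests' compat module. This is illustrative only, not the exact code merged into charset_normalizer.

import sys

# Prefer unicodedata2's newer Unicode tables when it is installed, without
# touching sys.modules, so `import unicodedata` elsewhere stays untouched.
try:
    import unicodedata2 as unicodedata
except ImportError:
    import unicodedata


def unicode_name(character):
    """Example consumer: whichever module won the import above is used here."""
    try:
        return unicodedata.name(character)
    except ValueError:  # unnamed code point
        return None


if __name__ == "__main__":
    print(unicode_name("a"))  # LATIN SMALL LETTER A
    # Only the module that was actually imported appears in sys.modules:
    print("unicodedata" in sys.modules, "unicodedata2" in sys.modules)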
2025-04-01T06:37:24.110624
2024-10-01T03:23:11
2558084482
{ "authors": [ "cjohannsen81" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2251", "repo": "OutSystems/cloud-connector", "url": "https://github.com/OutSystems/cloud-connector/pull/90" }
gharchive/pull-request
Update go.mod

Fix for Darwin (macOS), as it won't build without it. As stated here: https://github.com/golang/go/issues/65568
2025-04-01T06:37:24.117396
2019-09-22T12:16:59
496771707
{ "authors": [ "MRoci", "SharpEdgeMarshall", "mdawar", "orkenstein" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2252", "repo": "OvalMoney/celery-exporter", "url": "https://github.com/OvalMoney/celery-exporter/issues/9" }
gharchive/issue
Queue name in the Prometheus metrics

Hi, thank you for this project. I have a question, please, about the exposed metrics. I'm trying the exporter locally. Without running any tasks, I can check the celery_tasks_total metric in the Prometheus dashboard and I can see my tasks:

celery_tasks_total{instance="celery_exporter:9540",job="celery_exporter",name="info",namespace="celery",queue="default",state="STARTED"}

The queue, as you can see, is default; for this task the value is right because I have changed my default queue name from celery to default. But the problem is that all the other tasks also have the queue as default, even though each task has a queue with the same name:

celery_tasks_total{instance="celery_exporter:9540",job="celery_exporter",name="process",namespace="celery",queue="default",state="STARTED"}

As you can see, here the queue should be process for my task process, but it's default; that's also the case for other tasks. Also, when I delay new tasks to be processed, these metrics stay the same with their values of 0, and new metrics show up for every task, but this time with the queue value of undefined and the correct number of tasks:

celery_tasks_total{instance="celery_exporter:9540",job="celery_exporter",name="info",namespace="celery",queue="undefined",state="STARTED"}

So what could be wrong here? I have my queues defined via the task_queues Celery option, and for all the tasks except the default one I'm setting the queue argument of the @app.task() decorator.

Have you configured workers with the task_send_sent_event option ( http://docs.celeryproject.org/en/latest/userguide/configuration.html#task-send-sent-event ) or are you launching the exporter with the --enable-events flag? This seems like the exporter is not capturing events containing queue info. @MRoci I'm not using --enable-events, but I have task_send_sent_event and worker_send_task_events set to True on the workers. I have tested this configuration again and got the same results: the queue is default for tasks that don't use the default queue, and sometimes undefined. Are you able to add a test reproducing the issue in a PR so we could take a look at it? @SharpEdgeMarshall Yes, I will when I get some free time. So, looks like this never happened. @orkenstein sorry, I haven't had the time to find a fix, and I no longer use this exporter because I moved to RQ from Celery.
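For reference, the two worker-side settings mentioned in this thread live in the Celery app configuration. A minimal sketch follows; the app name, broker URL, and task are placeholders, not the reporter's actual project.

from celery import Celery

# Placeholder app and broker URL; adjust to your deployment.
app = Celery('tasks', broker='redis://localhost:6379/0')

# Task-sent events come from the producer, task lifecycle events from the
# workers; an exporter needs both to attribute tasks to the right queue.
app.conf.task_send_sent_event = True
app.conf.worker_send_task_events = True

@app.task(queue='process')  # route this task to its own queue, as in the report
def process(payload):
    return payload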
2025-04-01T06:37:24.119640
2018-01-16T21:01:37
289054636
{ "authors": [ "Over17", "gromilQaaaa" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2253", "repo": "Over17/UnityShowAndroidStatusBar", "url": "https://github.com/Over17/UnityShowAndroidStatusBar/issues/8" }
gharchive/issue
Huawei Y7, Android 7.0, black screen on launch

Huawei Y7, Android 7.0. With Fullscreen = true, a black screen appears right after the Unity splash screen. If false, there are no problems, but there is also no status bar. @gromilQaaaa any news? I don't have the phone in question at hand, but if you share your project and a logcat of a development build, I'll try to check it out. Based on the garbage, most likely a buffer is not properly cleared. No answer.
2025-04-01T06:37:24.139820
2024-01-04T10:50:41
2065427164
{ "authors": [ "OvidijusParsiunas", "easonoob" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2254", "repo": "OvidijusParsiunas/deep-chat", "url": "https://github.com/OvidijusParsiunas/deep-chat/issues/87" }
gharchive/issue
re-rendering

When the ColorChanger button is clicked to change the color, it triggers a re-render of the DeepChatReact component. This re-render, in turn, causes the Chat component to also be re-rendered, resulting in the loss of its internal state, such as any unfinished text in the input box. However, if the DeepChatReact component is used without passing props, like <DeepChatReact />, the unfinished text in the input box is preserved. This behavior suggests that the re-rendering of DeepChatReact due to prop changes is impacting the state of Chat.

import React, { useState } from 'react';
import { DeepChat as DeepChatReact } from 'deep-chat-react';

const ColorChanger = ({ onChangeColor }) => {
  const getRandomColor = () => {
    const letters = '0123456789ABCDEF';
    let color = '#';
    for (let i = 0; i < 6; i++) {
      color += letters[Math.floor(Math.random() * 16)];
    }
    return color;
  };

  const handleChangeColor = () => {
    const newColor = getRandomColor();
    onChangeColor(newColor);
  };

  return (
    <button
      className="btn-round mr-1"
      color="neutral"
      target="_blank"
      outline
      onClick={handleChangeColor}
    >
      Change Color
    </button>
  );
};

const Chat = ({ index, onClose }) => {
  const [color, setColor] = useState('lightblue');
  const componentStyle = {
    width: '98%',
    height: '90%',
    position: 'fixed',
    bottom: '1%',
    borderRadius: '10px',
    zIndex: 10,
    left: '1%',
    backgroundColor: color,
    display: 'flex',
    flexDirection: 'column',
    alignItems: 'center',
    boxShadow: '0 0 10px rgba(0, 0, 0, 0.15)'
  };

  return (
    <div style={componentStyle}>
      <DeepChatReact request={'request'} stream={{ simulation: 31 }} />
      <ColorChanger onChangeColor={setColor} />
    </div>
  );
};

export default Chat;

Hi @easonoob. I'm not 100% sure what you mean by the comment or what the question is; could you perhaps elaborate on the issue? If you are referring to the fact that React re-renders the component when you use useState, I have discussed this topic in the following issue. Let me know if you need further help. Thanks! thanks, #61 I'm going to close this issue as the conversation on this topic should be continued in the following issue. Thanks!
2025-04-01T06:37:24.178648
2022-10-09T07:54:07
1402188995
{ "authors": [ "Ayushpanditmoto" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2255", "repo": "PARKHI277/My-blogwebsite-using-Nodejs", "url": "https://github.com/PARKHI277/My-blogwebsite-using-Nodejs/pull/5" }
gharchive/pull-request
Added backend code and modified code structure

The file structure was modified, and the code is now easier to read. @PARKHI277 #6
2025-04-01T06:37:24.182215
2020-05-19T15:41:04
621084123
{ "authors": [ "mobb" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2256", "repo": "PASTAplus/ezEML", "url": "https://github.com/PASTAplus/ezEML/issues/23" }
gharchive/issue
Preferences for editing dataTable attributes

For fun, I tried to lay out a "perfect" dataTable attribute editor. A whiteboard or sketch would work best, but I used a Google doc. https://docs.google.com/document/d/1O4xziJ1rpYb06AgI9NV2ziHIXOd01bv1_agv803OO48/edit#

Summary goals for a pie-in-the-sky dataTable attribute editor:
- User is told what fields will need to be filled in (specific to that measType). Maybe this is a help-popup.
  - Nominal/ordinal: enumerations (optional)
  - dateTime: a format string
  - interval/ratio: unit
- User can know when they're done from a summary page. (Maybe rows turn green when minimum info is added?)
- User can view and edit common info together (e.g., fields used by all are on the summary page)

Margaret's link doesn't work now, but this sounds like something I just came here to add. One thing that keeps me from using ezEML on some datasets is a wide table. With 50 columns, it just isn't workable to click through each column to set definitions and units after a table import. With a gridded attribute editor as suggested here, this barrier to using ezEML would be solved! @hubbardbrook I updated the sharing on that link.
2025-04-01T06:37:24.280069
2015-01-17T06:36:22
54653195
{ "authors": [ "Dino-SherComp", "Stormbow" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2257", "repo": "PEXPlugins/PermissionsEx", "url": "https://github.com/PEXPlugins/PermissionsEx/issues/1857" }
gharchive/issue
Crash on Reload

Spigot 1.8. Hello, every time I do /reload the server crashes. It's PermissionsEx. Crash log: http://pastebin.com/ApjgF8tr

The /reload command is a vanilla command which has been broken for more than 4 years. Don't use it. Reload plugins individually as needed: /pex reload, /ess reload, etc.
2025-04-01T06:37:24.336106
2022-03-11T12:03:30
1166338641
{ "authors": [ "PINTO0309", "callmesora" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2258", "repo": "PINTO0309/PINTO_model_zoo", "url": "https://github.com/PINTO0309/PINTO_model_zoo/issues/192" }
gharchive/issue
Bytetrack documentation (how to run)

Issue Type: Documentation Feature Request
OS: Ubuntu
OS architecture: aarch64
Programming Language: Python
Framework: TensorRT
Model name and Weights/Checkpoints URL: NA
Description: I've wanted to test ByteTrack and I'm having a rough time with their original repo. Does your version offer any documentation on how to run it, or any help?
Relevant Log Output: No response
URL or source code for simple inference testing code: No response

I just converted the models from the repository cited here for various frameworks. Therefore, all documentation is listed in the cited repository. We welcome your pull requests for demo code. https://github.com/ifzhang/ByteTrack
2025-04-01T06:37:24.358879
2017-10-17T16:19:09
266191657
{ "authors": [ "eseiver" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2259", "repo": "PLOS/allofplos", "url": "https://github.com/PLOS/allofplos/issues/27" }
gharchive/issue
py3 virtualenv install not working on RedHat

Problems with the SSL connection, even when cloning the GH repository. Wondering if this could be related to #84. Will consider this resolved when #110 is fixed.
2025-04-01T06:37:24.404575
2018-11-15T04:26:13
380995022
{ "authors": [ "S259420", "hamishwillee", "lbegani", "matteoscanavino" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2261", "repo": "PX4/Firmware", "url": "https://github.com/PX4/Firmware/issues/10851" }
gharchive/issue
multi-vehicle simulation sensor

Hello! I am following the guide on multi-vehicle simulation: https://dev.px4.io/en/simulation/multi-vehicle-simulation.html. Everything works well and I was able to insert the swarm in my world file. Now I'd like to add a camera to one of the Iris vehicles, but I don't know how to do that. I know that for each vehicle inside the launch file I can add tags to define mavlink_udp_port, pose, vehicleID... Do I have to specify a tag for the camera? Does anyone have experience? I didn't find any documentation on this topic and I don't have experience... Thank you in advance.

The docs we have on cameras in Gazebo are here. I have not tried this out. @S259420 I'm not a developer and I don't know anything about this topic. You're going to have to wait for some real help :-). The people I linked might be good ones to talk to. @TSC21 might also have some insight. @hamishwillee Thanks a lot! :) @lbegani - Any ideas on setting up a Gazebo camera for multiple vehicles in a multi-vehicle simulation? Even knowing this is not supported would be useful... I am not sure how it works with multi-vehicle simulation. My understanding is that you add a camera sensor in the model's SDF file, specify the properties, and you will have the camera frames published on the Gazebo topic. For reference check Firmware/Tools/sitl_gazebo/models/typhoon_h480/typhoon_h480.sdf. Try adding a camera sensor in the model and see if it works. @lbegani For a multi-vehicle setup, a URDF file is needed instead of the SDF... @helenol @burrimi @birchera @devbharat @andre-nguyen Hello, could you please explain how to use "component_snippets.xacro"? I've found it is called by iris_base.xacro. It should allow setting a camera sensor and creating a URDF for the multi-vehicle simulation. I've found you contributed to component_snippets.xacro; any suggestion would be appreciated. Thank you in advance.
2025-04-01T06:37:24.418300
2018-12-18T09:43:03
392071616
{ "authors": [ "Antiheavy", "TSC21", "almaaro", "dagar" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2262", "repo": "PX4/Firmware", "url": "https://github.com/PX4/Firmware/pull/11067" }
gharchive/pull-request
Feature: go-around patterns for missions

Test data / coverage: Here is one flight with one go-around. In total, I have tested this feature at least 10 times. https://logs.px4.io/plot_app?log=4f523186-44dc-439f-8c31-2da5a1d66e86

Describe problem solved by the proposed pull request: To my understanding (please correct me if this is nonsense), the vehicle will always loiter above the landing waypoint after an aborted landing. So it is not possible to add regular waypoints after the LAND point that the vehicle would follow automatically after the aborted landing.

Describe your preferred solution: If the landing is aborted and a valid waypoint exists after the LAND, just continue to the next item.

Edit. Some explanation of the code that might help (happens within Mission::on_active()):
1. Mission::on_active() checks if MissionBlock::is_mission_item_reached() is true.
2. If true, and if autocontinue is true (seems to be for LAND), then advance_mission() and set_mission_items(). This sets the mission item to the next one.
3. Check if _mission_item.nav_cmd == NAV_CMD_LAND. It is not true, because the item changed (if the next item isn't also LAND, which would be weird).
4. Because the previous clause wasn't true, don't do do_abort_landing(); the FW position controller will reset the _land_abort flag.

This looks right, I just want to give it a more careful review as the navigator states are becoming more intertwined. No, the statement is for both NAV_CMD_LAND and NAV_CMD_VTOL_LAND cases. Does this change impact the various RTL_TYPE options? Specifically RTL_TYPE 1 and 2, where parts of the mission are used during Return Mode? I'm wondering if there is a use case where some people would want this feature disabled? - I have to think about it some more. If this is the case, maybe the correct place to effectively "disable" this feature would be the Mission Feasibility checker. Thoughts? I haven't considered whether this will interfere with any return modes. A quick investigation showed that: landing at the current position seems to be OK (the next waypoint will be set as invalid in navigator/land.cpp). A quick look showed that if there is a valid landing point in the mission, it will just jump to NAV_CMD_DO_LAND_START. So no weird behaviour is to be expected (navigator/rtl.cpp). In case of landing at the home position without a LAND wpt, it could possibly just continue the mission from the point where the RTL was issued if an aborted landing were to happen. I didn't see that the next waypoint would have been set to invalid. So this is a problem (navigator/rtl.cpp). Another thing comes to mind concerning the FW position controller: maybe it would be better to disable landing aborts under some situations, such as the "low battery immediate landing". I don't know if it is already implemented though. This is an interesting idea and maybe could be discussed in a separate issue post? I might suggest tying it to BAT_EMERGEN_THR, with logic where if already in a landing state, stay in the landing state and block the ability to abort. @almaaro please rebase. @TSC21 I'm a bit confused, something weird has apparently happened (I'm not a git master)... Do I rebase on master and push to the original branch? Thank you. If this feature were to move forward, I think it should need to be enabled via parameter, e.g. MIS_DO_GO-AROUND or maybe MIS_ABORT_CONTINUE or something.
Actually the term "go-around" doesn't accurately reflect the proposed feature - it is more of an "abort pattern" or "continue after abort" or something. The current default behavior of aborting to loiter over the landing is generally pretty safe, consistent, and easily understood by the general user. If a user intentionally or accidentally adds waypoints after a Land waypoint, the system should still behave safely. I think a user should be required to enable the feature via a parameter so they have some understanding that the vehicle will go somewhere else (like maybe into some trees/buildings) after an abort. Please reopen. See https://github.com/PX4/Firmware/pull/11099#issuecomment-546703322
2025-04-01T06:37:24.432174
2015-08-12T19:45:26
100620174
{ "authors": [ "AndreasAntener", "LorenzMeier", "SimonWilks", "tumbili" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2263", "repo": "PX4/Firmware", "url": "https://github.com/PX4/Firmware/pull/2686" }
gharchive/pull-request
VTOL updates for assisted modes

- Use the multicopter landed detector, as it will register takeoff at ground level and therefore give the fixed-wing position controller the correct home position later
- Also consider the transition state in posctl

Tested on a standard VTOL: http://dash.oznet.ch/view/zqm9GvcUzafR8SnaPYEHf9 Doing transitions in alt and posctl. Rebased in #2686. @tumbili rebased on master with cleanup changes. @AndreasAntener Thanks! My argument was mostly about architecture: you can assume that the status field is zero, but making assumptions about the VTOL status for non-VTOL vehicles is a stretch, because that's a completely unexpected dependency. It looks good now. @tumbili I didn't realize you put this check in here: https://github.com/PX4/Firmware/pull/2652/files#diff-fc77c3ef569029d45764664c75d4b0c1R1467 and here: https://github.com/PX4/Firmware/pull/2652/files#diff-e286114cb4359c03dcd9e90c57b99214R944 We now have 2 different places where we care about the transition phase. The mc pos controller is now completely shut off during the transition, which is not what I tested. Ok, can we go 2 steps back: please do not propose PRs for merging with non-trivial changes which are not flight tested. We've had this now a couple of times. And please separate architectural cleanup from flight testing / development. I never change code after I come back from testing because it always fails. There should be one branch which exactly represents what has been flown and should be merged as-is (and any detail change has to be re-tested). @AndreasAntener Why should the mc_pos_controller be shut off? @AndreasAntener @LorenzMeier In fact my changes tell the mc_pos_controller to keep publishing during a transition. And they tell the fw_att_control module not to publish attitude setpoints during a transition, otherwise we would have two modules publishing. This is exactly what we flight tested with the FireFly. I just noticed that the "in_transition_flag" also needs to go into the else statement here https://github.com/PX4/Firmware/pull/2652/files#diff-fc77c3ef569029d45764664c75d4b0c1R1467 Sorry for the confusion, you're right Roman, I misinterpreted the change in mc_pos_control. And yes, it's missing in the else ;) We were on different branches yesterday with the offboard stuff. The detail change from me that already went in with the cleanup branch only affects offboard. For this I'll do another flight test today. Flown: http://dash.oznet.ch/view/A7A4so48PhK3YtWCaSiggD (manual, alt- and posctl) @tumbili ready for a flight test with the Firefly ;) @tumbili I gave the transitions in POSCTL some more thought. I see the following open issues: a) to FW: altitude increase during transition (especially with the FunCub), drop in FW right after the transition; b) to MC: the position controller in manual flight steps on the brakes. Best case this just doesn't look so nice. a) The climb makes sense. The MC pos controller is actually reducing thrust when it notices the climb, but our FunCub just has a natural pull-up as soon as it gets some airspeed. You were also talking about adding pitch blending; this might help. Since we're still transitioning well below cruise airspeed, a drop after the transition is somewhat expected, but it is never that extreme if we do it in manual, so I'm not quite sure what to make of that. I tried transitioning at a higher speed once, but it got away quite a bit before it actually transitioned. On the other hand, it started blending in FW controls already, so I was able to fly it in FW before the transition completed (which could be the correct and most seamless way to do it). b) Our back transitions in offboard are actually smoother than in POSCTL. It might have something to do with the fact that in the offboard test the MC position controller still has a setpoint ahead of it, and not at the position where the transition happened. Ideally we would put the MC controller in velocity mode after the transition, starting with the current velocity and reducing that vector until it's 0. Thoughts? @AndreasAntener a) I agree that we should try blending the fw pitch controls with the ones from the multicopter. I'm pretty sure this will make a big difference, also for the FireFly. b) This also sounds reasonable. Who would then publish this velocity? a) I think we can try that this week on the Cub at least. b) I have the same thoughts about this as about the transition work you're currently doing. Architecture-wise it fits in the vtol type code, but we don't want to duplicate position controller code there. Depending on what we do there, we might need to update the interface to the position controller. E.g. we have the setpoint triplet that could carry a velocity input, but this is not handled except for offboard. The same "velocity control" for manual looks at manual input. We're seeing output pulsing during timed back transitions with the standard VTOL; it is unclear yet if this is related to the mode flags or the standard VTOL blending (or both). http://dash.oznet.ch/view/bgJVKEGhAtkcwqsRd46KE7#Thrust_PLOT @AndreasAntener @SimonWilks Found the problem, will push a fix. Flight tested, will merge soon. Merged.
2025-04-01T06:37:24.443844
2016-05-30T08:15:16
157453554
{ "authors": [ "SamChenzx", "julianoes", "mhkabir" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2264", "repo": "PX4/Firmware", "url": "https://github.com/PX4/Firmware/pull/4690" }
gharchive/pull-request
Snapdragon MPU9250 rotation support

@julianoes @LorenzMeier This PR:
- Fixes the Snapdragon build after a slew of changes from this week broke it.
- Renames rc_in_pwm_out to snapdragon_io (I prefer the more concise name). Also fixes a broken build due to incomplete renaming in the last PR.
- Works around missing dprintf in QuRT. Fixes build.
- Adds sensor rotation support to the MPU9250 wrapper to make supporting P1/P2 Flight boards possible. ( #4652 )

TODO:
- Mag rotation support (for the HMC5883 and the AK mag in the MPU9250)
- Automagically detect P1/P2 boards and set rotations. (@mcharleb @jywilson Any suggestions?)

I haven't added rotation support to the mag driver yet. I can do this if required, but there are hacks in the driver to make it consistent with the 3DR GPS, and I'd need to check how we can deal with transitional support. Let me know if we need rotation support in the mag driver at this stage. I was planning to add it for the AK mag in the 9250 after #4651 is merged. Please merge this PR before #4651. FYI @SamChenzx You can test this now. You will need to start the MPU9250 wrapper with an -R <rotation> parameter. Check the P2 board's rotation w.r.t. P1 and set the correct value from here (https://github.com/PX4/Firmware/blob/master/src/lib/conversion/rotation.h#L50) in px4.config (https://github.com/PX4/Firmware/blob/master/posix-configs/eagle/flight/px4.config#L7). Please stand by for test flights on Eagle. Will be done by today. Thanks @mhkabir for working on this. I disagree that I didn't do the rename completely. One of the changes must have just gotten lost in a merge; see: https://github.com/PX4/Firmware/pull/4668/files. I would vote for my naming, because snapdragon_io can be confused with the IO board on Pixhawk. Also, it's not really tied to Snapdragon; it could just as well be used on other hardware. What about mavlink_io? I've already cherry-picked d53b560; it'll be skipped in a rebase. @julianoes All done. Please merge. @mhkabir is it tested? Thanks, I'll test and merge it tomorrow together with #4651. @mhkabir @julianoes I'm ready to test; first of all, should I switch to the snappy_rotation branch? @mhkabir I made a pull request and updated to the latest commit. Can't find the snappy_rotation branch, so I still use master. I set "df_mpu9250_wrapper start -R 6" in the px4.config file, then issued ./mainapp mainapp.config and attempted to re-calibrate sensors, but the calibration can't proceed; it stops at the calibration start page. @SamChenzx - That would never work. Please check out my branch from mhkabir/Firmware. @mhkabir I just wanted to merge this and realized that the rotation is done in sensors.cpp and not in the drivers. @LorenzMeier, @mhkabir: I guess we need both. We need rotation at the driver level to rotate for boards such as Snappy or RPi. At the same time we have the parameter which gets applied in sensors.cpp. You need both: driver level for the relative rotation of the board and sensor axes, and the sensors app one for board rotation relative to the vehicle body. @mhkabir Just now I've tested the snappy_rotation branch in the field. I set the rotation parameter with "df_mpu9250_wrapper start -R 6" in the px4.config file, but left the rotation parameters in QGC as "ROTATION_NONE", and pointed the 3DR GPS module towards the Y- direction of the P2 board (as Y- is the heading of the P1 board). I can take off manually, but on takeoff it rotates about 90 degrees, then gradually becomes stable. I will do another test where I set the rotation parameters in QGC as well. Is your 3DR GPS arrow pointing forward, towards the nose of the vehicle? It should, for correct behaviour. Don't add any other rotations other than the -R 6. It would be much easier to check the horizon and heading indicators in QGC to see if they are consistent with the vehicle, rather than flying every time. @mhkabir let's close this in favor of #4704. The horizon indicator didn't work in QGC, on both the Android and Linux versions; I don't know why. The heading indicator can be checked with a GPS app on a mobile phone, and heading works. After "df_mpu9250_wrapper start -R 6" is set, the nose of the vehicle is no longer along the camera direction but along the direction of the power interface, and that's the direction in which I placed the 3DR GPS module.
2025-04-01T06:37:24.556202
2021-01-07T00:54:28
780946791
{ "authors": [ "jhorgint", "stuartleeks" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2265", "repo": "PacktPublishing/Windows-Subsystem-for-Linux-2-WSL-2-Tips-Tricks-and-Techniques", "url": "https://github.com/PacktPublishing/Windows-Subsystem-for-Linux-2-WSL-2-Tips-Tricks-and-Techniques/issues/4" }
gharchive/issue
Chapter 4 (WebApp): Should I copy these files inside the Ubuntu distro?

In order to run the Web-App of Chapter 4, should I copy these files inside Ubuntu, or is it unnecessary?

Hi @jhorgint, The .ps1 file is example PowerShell commands from the chapter and is designed to be run from PowerShell in Windows. The web-app folder is intended to be run from WSL. Hope that helps! Stuart

Thanks Stuart. I have finally gotten it running, using PowerShell and this command: "Bash run.sh"

PS C:\web-app> Bash run.sh
2025-04-01T06:37:24.574298
2024-03-29T01:05:40
2214419070
{ "authors": [ "CLAassistant", "ChaoII", "Jiang-Jia-Jun" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2266", "repo": "PaddlePaddle/FastDeploy", "url": "https://github.com/PaddlePaddle/FastDeploy/pull/2418" }
gharchive/pull-request
[BUG FIX] fix memory leak for ORT backend

Fix memory leak for the ORT backend. #2414

Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution. 1 out of 2 committers have signed the CLA. :white_check_mark: ChaoII :x: Jiang-Jia-Jun. You have signed the CLA already but the status is still pending? Let us recheck it.

@ChaoII Does the memory leak fixed here refer to the model buffer that is read in?
2025-04-01T06:37:25.028574
2016-04-23T15:01:53
150562581
{ "authors": [ "PalmerAL", "b0elter" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2267", "repo": "PalmerAL/min", "url": "https://github.com/PalmerAL/min/pull/102" }
gharchive/pull-request
Reload only webview of current tab This would resolve #50. Thanks, but I believe this was already fixed yesterday in a50ffdd8fc55b2d812421b31d75eaa7d4739d906. Roger. Wasn't up to date!
2025-04-01T06:37:25.169086
2016-09-12T09:21:53
176330121
{ "authors": [ "jtschichold", "review-ninja" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2268", "repo": "PaloAltoNetworks/minemeld-core", "url": "https://github.com/PaloAltoNetworks/minemeld-core/pull/49" }
gharchive/pull-request
Fixes PaloAltoNetworks/minemeld-core#48

New prototypes are added to the minemeldlocal.yml library. The library path can be set via the MINEMELD_LOCAL_PROTOTYPE_PATH config or env variable. If not set, the API will take the first directory inside PROTOTYPE_ENV containing the string '/local/', mostly for compatibility.

Signed-off-by: Luigi Mori <EMAIL_ADDRESS>
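A minimal Python sketch of that lookup order, for illustration only; the helper is hypothetical (it is not minemeld-core's actual code), and treating PROTOTYPE_ENV as a colon-separated environment variable is an assumption.

import os

def resolve_local_prototype_path(config):
    """Hypothetical helper mirroring the lookup order described above."""
    # 1. Explicit config entry or environment variable wins.
    path = config.get('MINEMELD_LOCAL_PROTOTYPE_PATH') \
        or os.environ.get('MINEMELD_LOCAL_PROTOTYPE_PATH')
    if path is not None:
        return path

    # 2. Compatibility fallback: first directory in the prototype search
    #    path (PROTOTYPE_ENV above) containing the string '/local/'.
    for directory in os.environ.get('PROTOTYPE_ENV', '').split(':'):
        if '/local/' in directory:
            return directory
    return None

print(resolve_local_prototype_path({}))  # usage example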
2025-04-01T06:37:25.174116
2022-04-19T14:56:10
1208486553
{ "authors": [ "clienthax", "jhlasnik" ], "license": "ISC", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2269", "repo": "PaloAltoNetworks/pan-os-python", "url": "https://github.com/PaloAltoNetworks/pan-os-python/issues/444" }
gharchive/issue
refresh_variable broken due to recent commit

Describe the bug
Commit https://github.com/PaloAltoNetworks/pan-os-python/commit/0b47a3a6afa8379cdf63d04f90f555e564e235fd changed the behaviour of parse_value_from_xml_last_tag to require an additional argument. refresh_variable does not pass the new required argument to parse_value_from_xml_last_tag here: https://github.com/PaloAltoNetworks/pan-os-python/blob/0b47a3a6afa8379cdf63d04f90f555e564e235fd/panos/base.py#L959

Expected behavior
For it to work.

Current behavior
File "/usr/local/lib/python3.8/dist-packages/panos/base.py", line 970, in refresh_variable
    var_path.parse_value_from_xml_last_tag(obj, settings)
TypeError: parse_value_from_xml_last_tag() missing 1 required positional argument: 'attr'

Possible solution
Add the missing argument to the caller.

Steps to reproduce
Use pan-os-ansible to gather facts from a host.

Context
Gathering facts from network devices using Ansible.

Your Environment
Version used: https://github.com/PaloAltoNetworks/pan-os-python/commit/ab4d088e9f231889ef25b926827e77d75d47d6cb
Environment name and version (e.g. Chrome 59, node.js 5.4, python 3.7.3): N/A
Operating System and version (desktop or mobile): N/A
Link to your project: N/A

I also hit this same issue; rolling back to v1.6.0 has resolved my issue for the time being.
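A tiny, self-contained illustration of the failure mode and the suggested fix. The function below is a stand-in with the same arity, not pan-os-python's real implementation.

# Stand-in for the helper after the signature change: `attr` is now a
# required positional argument.
def parse_value_from_xml_last_tag(obj, settings, attr):
    return (obj, settings, attr)

obj, settings, attr = object(), {}, "variable"

try:
    parse_value_from_xml_last_tag(obj, settings)  # old call site, as in refresh_variable
except TypeError as exc:
    print(exc)  # ... missing 1 required positional argument: 'attr'

# Possible solution from the report: forward the new argument at the call site.
parse_value_from_xml_last_tag(obj, settings, attr)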
2025-04-01T06:37:25.179667
2023-05-18T13:21:42
1715632212
{ "authors": [ "Pubs-MV", "ssugandh" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2270", "repo": "PaloAltoNetworks/pan.dev", "url": "https://github.com/PaloAltoNetworks/pan.dev/pull/357" }
gharchive/pull-request
Maxwell Update 1 APIs

Description
Maxwell Update 1 APIs.

Motivation and Context
New release update version.

How Has This Been Tested?
Tested locally.

Screenshots (if appropriate)
PCEE (latest); PCEE (minor release version); PCCE (major release versions); PCCE (minor release versions).

Types of changes
New feature (non-breaking change which adds functionality)

Checklist
[x] I have updated the documentation accordingly.
[x] I have read the CONTRIBUTING document.
[x] I have added tests to cover my changes if appropriate.
[x] All new and existing tests passed.

@sserrata, @blindaa121: Can you please approve and merge this? This is required to publish the Prisma Cloud Compute Edition APIs release today. I don't have merge rights. @sserrata @blindaa121 can you please approve for publish? The release is going out.
2025-04-01T06:37:25.184098
2022-08-22T19:00:29
1346852315
{ "authors": [ "cfarquhar", "tkishel" ], "license": "ISC", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2271", "repo": "PaloAltoNetworks/prismacloud-api-python", "url": "https://github.com/PaloAltoNetworks/prismacloud-api-python/pull/75" }
gharchive/pull-request
Add CWP incident acknowledgement method and bulk archiver script

Description
This PR:
- adds a method for acknowledging/archiving CWP runtime incidents
- adds a script to bulk archive runtime incidents based on the contents of a CSV file
- fixes some minor _tags.py syntax issues introduced in 53000c11f71e

Motivation and Context
The UI does not currently provide a mechanism for bulk archiving runtime incidents. As customers tune their runtime rules, they would like to remove incidents that would not have been generated under the tuned rule set.

How Has This Been Tested?
The new method and script were tested against a 22.06.197 SaaS environment.

Types of changes
- Bug fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)

Checklist
[x] I have updated the documentation accordingly.
[x] I have read the CONTRIBUTING document.
[x] I have added tests to cover my changes if appropriate.
[x] All new and existing tests passed.

LGTM!
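As an illustration of what such a CSV-driven bulk archiver might look like: the CSV column name and the acknowledge method below are assumptions made for this sketch, not necessarily the exact API this PR adds.

import csv

# `pc_api` is the session object this library's scripts typically import;
# the acknowledge call below is a hypothetical method name, see above.
from prismacloud.api import pc_api

def archive_incidents_from_csv(csv_path):
    """Acknowledge (archive) each runtime incident ID listed in the CSV."""
    with open(csv_path, newline='') as handle:
        for row in csv.DictReader(handle):
            incident_id = row['incident_id']  # assumed column name
            pc_api.incident_acknowledge(incident_id)  # hypothetical method
            print('Archived incident %s' % incident_id)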
2025-04-01T06:37:25.201204
2023-05-12T23:07:55
1708323296
{ "authors": [ "PandarusAnon", "oEjk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2272", "repo": "PandarusAnon/slaude", "url": "https://github.com/PandarusAnon/slaude/issues/6" }
gharchive/issue
help

Error: no_text
    at postSlackMessage (file:/slaude/app.js:428:23)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async createSlackThread (file:/slaude/app.js:439:12)
    at async file:/slaude/app.js:54:24

Just pulled today. Double-checked the config. This might be two weeks old, but I'd still like to at least address it. As the error message implies, Slaude tried to create the original thread message with no contents. That could have a number of reasons, but to say for sure I'd need to know what exactly your config settings were (not your Slack-specific settings, just the Slaude stuff like MAINPROMPT_LAST and PING_MESSAGE) as well as the prompt you were using at the time. My guess would be that a combination of your config and the prompt caused the first message we wanted to send to Slack to just be an empty string. I'm just not sure how we would've gotten there, since all messages are put together by combining the individual OpenAI-formatted messages until we exceed the character limit. But without knowing exactly what you prompted and which settings you used, guessing is all I can do. Of course this is two weeks old and you might not be comfortable with sharing what you were prompting, so I'm not expecting you to actually do this; I'm just saying that's the only way I can pin this down, and I don't really have the time to try and reproduce bugs with pure trial and error anymore.
2025-04-01T06:37:25.251797
2019-12-16T21:22:31
538667839
{ "authors": [ "hrharder" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2273", "repo": "ParadigmFoundation/zaidan-dealer-specification", "url": "https://github.com/ParadigmFoundation/zaidan-dealer-specification/issues/15" }
gharchive/issue
Always return 0x transaction hash and signed order

Remove the includeOrder and includeTx fields. @fabioberger I just remembered one of the reasons we had includeOrder specifically. From the spec: quotes indicated with includeOrder as false can be seen as traders checking if a dealer's prices are favorable at a given time for a certain market and trade size. For our internal risk-tracking mechanisms, we keep track of how many outstanding quotes there are that could actually be filled, and we'd like to be able to separately differentiate requests that seek only to record price data. For that reason, my plan is to remove includeTx and make the default to include the signed order and the 0x transaction hash, but also have the option to request what amounts to a "price only" quote.
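Purely to illustrate the two request shapes being discussed: every field name here except includeOrder is a placeholder, not the spec's actual schema, and "priceOnly" is a hypothetical name for the proposed opt-out.

# Old scheme: an explicit flag marks a quote as a price check only.
old_request = {
    "pair": "WETH/DAI",    # placeholder market identifier
    "size": 10,            # placeholder trade size
    "includeOrder": False,  # spec field: quote is never fillable
}

# Planned default: signed order and 0x transaction hash always included,
# with a hypothetical opt-out for "price only" quotes replacing includeTx.
new_request = {
    "pair": "WETH/DAI",
    "size": 10,
    "priceOnly": True,     # hypothetical name for the opt-out
}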
2025-04-01T06:37:25.279581
2021-10-11T06:46:58
1022325363
{ "authors": [ "CarbonY26" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2274", "repo": "ParadoxGameConverters/EU4toVic2", "url": "https://github.com/ParadoxGameConverters/EU4toVic2/pull/788" }
gharchive/pull-request
Representation for Vietnamese slaves in Formosa Historically, slavery in Taiwan was done by the Dutch with some aboriginal people helping in recapturing escaped slaves. I'm going to try this again in a new PR and hopefully I don't break anything again.
2025-04-01T06:37:25.290909
2023-07-13T06:50:25
1802325763
{ "authors": [ "bnasif25", "kmathoora" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2275", "repo": "Parallels/parallels-vscode-extension", "url": "https://github.com/Parallels/parallels-vscode-extension/issues/28" }
gharchive/issue
Unable to create Ubuntu VM

Describe the bug
I am unable to create an Ubuntu VM from the extension. I have tried to install it with UI, with User Interface, and with Vagrant Box. All three instances result in the error "Error creating VM: Error generating packer file".

To Reproduce
Steps to reproduce the behavior:
1. Open the extension
2. Click on the "+" sign and choose operating system Linux, distribution Ubuntu, and version 22.04
3. Click on Generate VM

Expected behavior
The VM is created with the selected settings.

Screenshots
Extension Version: 'v0.0.8' preview

Same issue on the x86_64 platform.
2025-04-01T06:37:25.296465
2020-03-31T14:52:52
591175460
{ "authors": [ "ndessart" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2276", "repo": "Parrot-Developers/olympe", "url": "https://github.com/Parrot-Developers/olympe/pull/15" }
gharchive/pull-request
[DEBUG] fix compatibility with python 3.8 This should fix #14 This has been integrated into Olympe 1.2.1
2025-04-01T06:37:25.300793
2017-01-20T01:45:48
202026514
{ "authors": [ "layerssss" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2277", "repo": "ParsePlatform/Parse-SDK-JS", "url": "https://github.com/ParsePlatform/Parse-SDK-JS/issues/399" }
gharchive/issue
Unsuccessful save operation still causes the first method to return an object with changed attributes

I've put up a clean code example to help reproduce the issue: https://github.com/layerssss/parse-hook-error-quirk

1. When a hook yields an error in the beforeSave hook, the change to an object is not applied.
2. Call Query.find in the browser again. The parse-server API returns the unchanged attributes of the object. But Query.find is returning the changed object.

Expect: Query.find should return the unchanged object. parse-server version: 2.3.2 Parse-JS-SDK: 1.9.2 Note: We have a running system with Parse-JS-SDK 1.5 on parse.com, on which this issue doesn't exist.

Cloud code:

Parse.Cloud.beforeSave('Folder', (request, response) => {
  if (!request.object.name) {
    return response.error('Name is invalid.');
  }
  response.success();
});

(new Parse.Query('Folder')).first()
  .then(folder => {
    if (folder) return Promise.resolve();
    folder = new Parse.Object('Folder', { name: 'Untitled Folder' });
    return folder.save();
  });

Client code:

new Parse.Query('Folder').first()
  .then(folder => {
    console.log('expected old name: Untitled Folder');
    console.log('old name:' + folder.get('name'));
    folder.set('name', ''); // try to change it to invalid
    return folder.save()
      .fail(error => {
        console.log('name not changed: ' + error.message);
        return Parse.Promise.as();
      });
  })
  .then(() => new Parse.Query('Folder').first())
  .then(folder => {
    console.log('expected new name: Untitled Folder');
    console.log('new name:' + folder.get('name'));
  });

Please have a look, thanks! It turns out that this is the expected behaviour when "single instance" mode is on (the default in the browser). On which version did this become the default behaviour? Can you point me to the relevant changelog entries?
2025-04-01T06:37:25.304508
2016-05-05T03:57:57
153163853
{ "authors": [ "flovilmart", "jiawenzhang", "macmoe", "xor22h" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2278", "repo": "ParsePlatform/parse-server", "url": "https://github.com/ParsePlatform/parse-server/issues/1705" }
gharchive/issue
Android device still receives push from old installationId after app re-install

1. Install the app on an Android phone, sign up a new user A in Parse, and get installationId_1.
2. Uninstall the app and re-install it on the same phone, log in as the previously created user A, and get installationId_2. Note that the app on this phone is now associated with installationId_2.
3. Now, from parse-server cloud code, send a push notification targeting installationId_1. The app still receives the push even though its installationId is not installationId_1.

Probably both installationIds have the same GCMToken, as it's still the same phone. The GCMToken is provided by the Google Play Services library, which you are not reinstalling, so it may return the same GCMToken. So even if you target installation_1, all other installations which include the same GCMToken will still receive the same notification. @xor22h the "deviceToken" values for the two installations are different. I believe the "deviceToken" is the GCMToken. I'm closing this here for now; please update parse-server to the latest version. If the issue is still here, please open it on parse-server-push-adapter. I'm experiencing the same issue and have created https://github.com/parse-server-modules/parse-server-push-adapter/issues/40
2025-04-01T06:37:25.312454
2016-12-09T11:56:58
194579449
{ "authors": [ "allen8300", "flovilmart" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2279", "repo": "ParsePlatform/parse-server", "url": "https://github.com/ParsePlatform/parse-server/issues/3217" }
gharchive/issue
Push Notification - Two apps using same Parse Server

My two apps (a free app and a pro app) point to the same Parse database, but only one app can send push notifications successfully. I have read everything on the Parse support / help pages and it appears that I am doing the right thing, but it failed. I have tried the way described in https://github.com/ParsePlatform/parse-server/issues/2188, but it didn't work. My p12 files' paths are: My cloud code in index.js is:

push: {
  ios: [
    {
      pfx: __dirname + '/push_certs/DevPushLoveAgainPro.p12', // Dev PFX or P12
      bundleId: 'com.app1',
      production: false // Dev
    },
    {
      pfx: __dirname + '/push_certs/ApplePushLoveAgainPro.p12', // Prod PFX or P12
      bundleId: 'com.app1',
      production: true // Prod
    },
    {
      pfx: __dirname + '/push_certs/DevPushLoveAgainFree.p12', // Prod PFX or P12
      bundleId: 'com.app2',
      production: false // Prod
    },
    {
      pfx: __dirname + '/push_certs/ApplePushLoveAgainFree.p12', // Prod PFX or P12
      bundleId: 'com.app2',
      production: true // Prod
    }
  ]
}

Can you reopen the issue, filling in the issue template completely, please? @flovilmart, yes, thanks so much. Created: https://github.com/ParsePlatform/parse-server/issues/3219 @flovilmart, hello? I haven't had time to look into it, but your new issue is again missing server logs etc... that doesn't help. Ok, thanks so much. But can you let me know whether it's feasible for two apps sharing the same Parse DB to have notifications working in both apps? If feasible, can you point out the key points for me directly? This issue is closed; can you keep the conversation on the proper issue please?
2025-04-01T06:37:25.379352
2016-03-09T23:04:19
139728955
{ "authors": [ "Heman6886", "benitech", "flovilmart", "jorgemendiza", "karkaletsis", "lifeisfunny", "mahabubakram", "otymartin", "sandeepkacha", "sekharrockz", "weengo" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2280", "repo": "ParsePlatform/parse-server", "url": "https://github.com/ParsePlatform/parse-server/issues/942" }
gharchive/issue
Push notification doesn't get to the device Issue On Cloud Code I have an after save trigger that sends a Push to a specific user, but it doesn't get to the device and I neither get a GCM request and response. Any ideas? Prerequisites I've migrated to AWS Elastic Beanstalk(64bit Amazon Linux 2015.09 v2.0.7, running Node.js 4.2.3) and MongoLab I'm running the 2.1.4 of Parse Server. I'm testing on Android SDK I've followed everything on Push tutorial here Log from Verbose POST /parse/push { host: 'parseserver-xxx-env.elasticbeanstalk.com', 'x-real-ip': 'xxx', 'x-forwarded-for': 'xxx', 'content-length': '406', accept: '*/*', 'content-type': 'text/plain', 'user-agent': 'node-XMLHttpRequest, Parse/js1.7.1 (NodeJS 4.3.0)', 'x-forwarded-port': '80', 'x-forwarded-proto': 'http' } { "where": { "user": { "__type": "Pointer", "className": "_User", "objectId": "xxxxx" } }, "data": { "alert": "Bla bla bla", "badge": "Increment", "uri": "com.xxxx.xxxx://xxxx?id=xxxxx", "p": "xxxxxx" } } response: { "response": { "result": true } } Push to user: xxxxx was successful How I initialise Parse Server on index.js var api = new ParseServer({ databaseURI: databaseUri || 'mongodb://localhost:27017/test', cloud: process.env.PARSE_SERVER_CLOUD_CODE_MAIN || __dirname + '/cloud/main.js', appId: 'xxxxxxx', masterKey: 'xxxxxxx', fileKey: process.env.PARSE_SERVER_FILE_KEY || 'xxxxxxx', facebookAppIds: ['xxxxxxx'], serverURL: 'http://parseserver-xxxxx-env.elasticbeanstalk.com/parse/', filesAdapter: new S3Adapter( "xxxxxxx", "xxxxxxx", "xxxxxxx", {directAccess: true} ), push: { android: { senderId: 'xxxxxxx', apiKey: 'xxxxxxx' }, ios: { pfx: __dirname + '/development.p12', bundleId: 'com.xxxxxxx.xxxxxxx', production: false }, ios: { pfx: __dirname + '/production.p12', bundleId: 'com.xxxxxxx.xxxxxxx', production: true } } }); Function on Cloud Code var pushQuery = new Parse.Query(Parse.Installation); pushQuery.equalTo('user', user); Parse.Push.send({ where: pushQuery, data: { alert: "Bla bla bla", badge: "Increment", uri: "com.xxxx.xxxx://xxxx?id=" + offer.id, p: offer.id } }, {useMasterKey: true}).then(function (result) { console.log("Push to user: " + user.id + " was successful"); }, function(error) { console.log("Error sending push: " + error.code + " - " + error.message); } ); Possible Similar Issues #401 The where cluse for push are supposedly on _Installation and not _User So if I want to send to a specific user, I need the get the Installation ID from that user and query like this? var pushQuery = new Parse.Query(Parse.Installation); pushQuery.equalTo('objectId', installationId); You should set the userId to the installation object, and do: var pushQuery = new Parse.Query(Parse.Installation); pushQuery.equalTo('userId', userId); ```  or set the user pointer var pushQuery = new Parse.Query(Parse.Installation); pushQuery.equalTo('user', user); Ok, but that's exactly what I did. I've set the user pointer: pushQuery.equalTo('user', user); Here's this user _Installation data on mongodb { "_id": "xxxxxx", "appName": "xxxxxx", "appVersion": "xxxxxx", "deviceType": "android", "appIdentifier": "com.xxxxxx.xxxxxx", "installationId": "xxxxxx", "pushType": "gcm", "timeZone": "America/Sao_Paulo", "localeIdentifier": "pt-BR", "parseVersion": "1.13.0", "_p_user": "_User$xxxxxx", "_updated_at": { "$date": "2016-03-09T21:45:01.254Z" }, "_created_at": { "$date": "2016-03-09T21:45:01.254Z" } } @flovilmart may I understand why you close this issue? 
I'm no android specialist, but it looks like the deviceToken is not set on your installation I am having the same problem. As I have configured only the ios device. As my application is ios based. So I get the error in push. this is the log. { params: { id: '45J1QIaKIi' }, master: false, user: ParseUser { _objCount: 0, className: '_User', id: '45J1QIaKIi' }, installationId: '82e1d517-7aac-d887-d02b-e407c3de5e7f' } ##### PUSH OK Can not find sender for push type android, {"where":{"deviceType":"ios","channels":"user_45J1QIaKIi"},"data":{"alert":"Sending push notification"}} APNS Connection 0 Socket Error APNS Connection 0 Socket Error APNS Connection 0 Socket Error APNS Connection 0 Disconnected I dont get push on my device. How to solve this problem. @flovilmart good point, there's no deviceToken column on _Installation for users who were created after the migration. Any idea why the parse-server might not be creating it? Look at this POST sent from user's first login: POST /parse/classes/_Installation { host: 'parseserver-xxxxx-env.elasticbeanstalk.com', 'x-real-ip': 'xxxxx', 'x-forwarded-for': 'xxxxx', 'content-length': '326', 'accept-encoding': 'gzip', 'content-type': 'application/json', 'user-agent': 'Parse Android SDK 1.13.0 (com.xxxxx.xxxxx/7) API Level 16', 'x-newrelic-id': 'xxxxx==', 'x-parse-app-build-version': '7', 'x-parse-app-display-version': '<IP_ADDRESS>', 'x-parse-application-id': 'xxxxx', 'x-parse-client-key': 'xxxxx', 'x-parse-client-version': 'a1.13.0', 'x-parse-installation-id': 'xxxxx', 'x-parse-os-version': '4.1.2', 'x-parse-session-token': 'r:xxxxx', 'x-forwarded-port': '80', 'x-forwarded-proto': 'http' } { "appName": "xxxxx", "appVersion": "<IP_ADDRESS>", "deviceType": "android", "appIdentifier": "com.xxxxx.xxxxx", "installationId": "xxxxx", "pushType": "gcm", "timeZone": "America/Sao_Paulo", "localeIdentifier": "pt-BR", "parseVersion": "1.13.0", "user": { "__type": "Pointer", "objectId": "g83vqLMR0N", "className": "_User" } } response: { "status": 201, "response": { "objectId": "XRR1BEqCZa", "createdAt": "2016-03-09T21:45:01.254Z" }, "location": "http://parseserver-xxxxx-env.elasticbeanstalk.com/parse/classes/_Installation/XRR1BEqCZa" } And also just for testing I went on mongodb and updated an user _Installation with a deviceToken. And now when trying to push I got this MismatchSenderId error: GCM request and response {"request":{"params":{"priority":"normal","data":{"time":"2016-03-10T12:17:50.894Z","push_id":"84kHd9B0fX","data":"{\"alert\":\"Bla bla bla\",\"uri\":\"com.xxxx.xxxx://offer?id=kHrtmMOeYF\",\"p\":\"kHrtmMOeYF\"}"}}},"response":{"multicast_id":xxxx,"success":0,"failure":1,"canonical_ids":0,"results":[{"error":"MismatchSenderId"}]}} hey @mahabubakram, your error message looks similar to mine. They have a problem with the sender. Can you check if there's this column deviceToken on your db? GCM: MismatchSenderId APNS: Can not find sender for push type android @weengo , I am trying to send to an old user. Who is already there in my parse database. So after migration to mongo lab I am trying to send push notification to that user and I have deviceToken to that user. But as you can see push is not going into the device. And I am having this problem for quite a long time but no one has given response on it. Ok, got it solved. On my android manifest file I was using the API key, instead of Sender Id. And I was also using key of type Android, but I have now changed to one of type server. 
So anyone with a similar issue, remember to check that on the Google console. Now my _Installation is receiving the deviceToken normally. And oddly I'm still getting the GCM error:

```
GCM request and response {"request":{"params":{"priority":"normal","data":{"time":"2016-03-10T15:07:40.144Z","push_id":"xxxxxxx","data":"{\"alert\":\"bla bla bla\",\"uri\":\"com.xxxxxxx.xxxxxxx://offer?ofid=xxxxxxx\",\"p\":\"xxxxxxx\"}"}}},"response":{"multicast_id":xxxxxxx,"success":1,"failure":2,"canonical_ids":0,"results":[{"error":"MismatchSenderId"},{"error":"MismatchSenderId"},{"message_id":"0:xxxxxxx"}]}}
```

@mahabubakram I would suggest you create a new user on mongodb and try to send a push to this new user.

I have the same problem. The notification doesn't reach the device. I run the parse server example and try to send a notification using the REST API:

```sh
curl -X POST -H "X-Parse-Application-Id: myOtherAppId" -H "X-Parse-Master-Key: myMasterKey" -H "Content-Type: application/json" -d '{
  "userId": "8JKVn0ummZERXG53hre2GA==",
  "deviceType": "android",
  "channels": [ "kostas" ],
  "data": { "title": "Test", "alert": "Test" }
}' http://localhost:9999/myparseapp/push
```

The REST response is {"result":true}. I don't see any log on the server and I wonder what is wrong. I am using parse-server 2.2.2.

@karkaletsis can you show how you're declaring the push in your index.js file?

@weengo I declare it like this:

```js
var api = new ParseServer({
    databaseURI: 'mongodb://localhost:27017/dev',
    cloud: __dirname + '/cloud/main.js',
    appId: 'myOtherAppId',
    masterKey: 'myMasterKey',
    serverURL: 'http://<IP_ADDRESS>:9999/myparseapp',
    push: {
        android: {
            senderId: '408817503931',
            apiKey: 'AIzaSyBtzYPBj6r6ZDmpUMOhTmNV85QZZbeKIwM'
        },
        ios: {
            pfx: '/home/ubuntu/git/Certificates_PushNotification_Prod.p12',
            bundleId: '11',
            production: false
        }
    }
```

Bundle id should be the CFBundleIdentifier from your iOS app.

I'm trying to test only android push for now, so I placed a random key in bundleId.

@karkaletsis, are you sure you are using a Server key from Google? Does your android manifest have something like this, the XXXX being your senderId?

```xml
<meta-data android:name="com.parse.push.gcm_sender_id" android:value="id:XXXX"/>
```

Yes, I have this entry in my android manifest xml. Shouldn't I see something in the error logs, both in parse server and the Google developer logs?

Yes, I have the same problem. On the client side the push is successfully sent, but the device doesn't receive it. I'm thinking of just using onesignal.com, since parse's push notifications do not support "high throughput", to better invest in a more long-term solution. I'd still like to debug the issue to learn why it's not working.

@karkaletsis I believe you should see something only if the message is actually being sent. When you run with VERBOSE=1, do you get nothing?

@otymartin Could you also run with VERBOSE=1 so we can see what the message is when the push is sent?

How do I enable VERBOSE=1 — on the curl that posts the message, or on the server? And how?

@weengo How do I do that? I only started learning javascript since the migration, so it's a complete gray area.
```js
//SEND PUSH NOTIFICATIONS
Parse.Cloud.define("sendPushToContact", function(request, response) {
    // request has 2 parameters: params passed by the client and the authorized user
    var params = request.params;
    var user = request.user;

    // Our "Message" class has a "text" key with the body of the message itself
    var messageText = params.text;

    var pushQuery = new Parse.Query(Parse.Installation);
    pushQuery.equalTo('user', user); // targeting incomingUser
    pushQuery.equalTo('deviceType', 'ios'); // targeting iOS devices only

    Parse.Push.send({
        where: pushQuery, // Set our Installation query
        data: {
            alert: "Message: " + messageText
        }
    }, {
        success: function() {
            console.log("#### PUSH OK");
        },
        error: function(error) {
            console.log("#### PUSH ERROR" + error.message);
        },
        useMasterKey: true
    });

    response.success('success');
});
```

I enabled it but can't see anything important except the response plus the request. The response is:

```
.... ], "data": { "title": "The Shining", "alert": "The Giants won against the Mets 2-3." } } response: { "response": { "result": true } }
```

I can't see anything more related to push notifications.

Try with DEBUG=aps as well as VERBOSE=1.

I also put DEBIG=aps on the server, no change to output (same as verbose).

DEBUG not DEBIG, this should enable aps logs. If it doesn't, I'm not sure you didn't set your env variable incorrectly. Also this is for aps; for GCM there is another DEBUG flag, check the node GCM doc.

Yes, I have put debug not debig (that was a typo). Where can I find this gcm doc?

@flovilmart Would this log reveal any reason why my push was successful on the client side but was not delivered to the target device? This is from the Google App Engine log:

```
13:59:40.784 POST 200 535 B 72 ms AppName/1 CFNetwork/758.3.15 Darwin/15.3.0 /parse/functions/sendPushToContact
<IP_ADDRESS> - - [28/Mar/2016:13:59:40 -0400] "POST /parse/functions/sendPushToContact HTTP/1.1" 200 535 - "AppName/1 CFNetwork/758.3.15 Darwin/15.3.0" "appName-9203.appspot.com" ms=72 cpu_ms=0 cpm_usd=5.979e-8 instance=- app_engine_release=1.9.35 trace_id=a59c4618197766a5510e12f3ea6230ca
{
  metadata: {
    projectId: "3232341754198"
    serviceName: "appengine.googleapis.com"
    zone: "us3"
    labels: {
      appengine.googleapis.com/request_id: "56f9710c00ff0bf8ab5348ddb00001737e626172732d34354334300001323031363033323774313430393434000100"
      appengine.googleapis.com/module_id: "default"
      appengine.googleapis.com/version_id: "20160327t140944"
    }
    timestamp: "2016-03-28T17:59:40.784555Z"
    projectNumber: "3232022342198"
  }
  protoPayload: {
    @type: "type.googleapis.com/google.appengine.logging.v1.RequestLog"
    appId: "s~appName-9203"
    versionId: "20160327t140944"
    requestId: "56f9710c00ff0bf8ab5d28ddb00001737e626172732d343034300001323031363033323774313430393434000100"
    ip: "<IP_ADDRESS>"
    startTime: "2016-03-28T17:59:40.784555Z"
    endTime: "2016-03-28T17:59:40.856557Z"
    latency: "0.072002s"
    method: "POST"
    resource: "/parse/functions/sendPushToContact"
    httpVersion: "HTTP/1.1"
    status: 200
    responseSize: "535"
    userAgent: "AppName/1 CFNetwork/758.3.15 Darwin/15.3.0"
    urlMapEntry: "PLACEHOLDER"
    host: "bars-4040.appspot.com"
    cost: 5.979e-8
    appEngineRelease: "1.9.35"
    traceId: "a59c4618197766a5510e12f3ea6230ca"
  }
  insertId: "2016-03-28|10:59:44.017917-07|<IP_ADDRESS>|-1325307681"
  log: "appengine.googleapis.com/request_log"
  httpRequest: { status: 200 }
  operation: {
    id: "56f9710c00ff0bf8ab5d28ddb00001737e62617273we43034323423400013345031363033323774313430393434000100"
    producer: "appengine.googleapis.com/request_id"
  }
}
```

I am also facing the same issue — not able to send push notifications after migration. I could not find the device token in the parse dashboard for an existing user after migration. I have used this Android code:

```java
final Map<String, Object> params = new HashMap<>();
params.put("message", message);
params.put("userId", ParseUser.getCurrentUser().getObjectId());
ParseCloud.callFunctionInBackground("sendPush", params, new FunctionCallback<String>() {
    @Override
    public void done(String result, com.parse.ParseException e) {
        // TODO Auto-generated method stub
        if (e == null) {
            Toast.makeText(context, "HEHE", Toast.LENGTH_SHORT).show();
            Log.d("ANNOUNCEMENT", "SUCCESS");
        } else {
            Toast.makeText(context, "FAilure " + e.toString(), Toast.LENGTH_SHORT).show();
            Log.d("ANNOUNCEMENT", "FAILURE" + e.toString());
        }
    }
});
```

and this main.js:

```js
Parse.Cloud.define("sendPush", function(request, response) {
    var sendUserId = request.params.userId;
    var msg = request.params.message;

    var query = new Parse.Query(Parse.Installation);
    query.equalTo('userId', sendUserId);

    Parse.Push.send({
        where: query,
        data: {
            alert: msg,
            sound: 'default'
        }
    }, {
        success: function() {
            // Push was successful
            response.success("Push sent");
        },
        error: function(error) {
            // Handle error
            response.error(error);
        },
        useMasterKey: true
    });
});
```

But I'm not getting the push.

I am getting this issue too. Can we reopen the issue and get it solved? I don't find any valid solutions from the ones who closed the issue.

I have solved it. Can you send me a screenshot? I can help you.

```js
Parse.Push.send({
    where: query,
    data: {
        alert: msg,
        sound: 'default'
    }
```

I read in another issue that where should have braces, e.g. `where: { query }` — that solved it for someone else. Someone please confirm!

@Heman6886: Here goes my code, waiting for your response. Android code, Android Manifest, Server Code, My Server's Dashboard (screenshots).

Where is your cloud code? Do we need to call the push API only from cloud code? Normally I do push from the REST API, which has the master key as a header.

Yes, the meta-data is inside the application tag.

Yes, you need to call the cloud code function from the android device.

Any reason behind calling the push API from cloud code? Also, I need to know whether my Installation object is proper, because I don't see a few columns which were there in the generic parse server dashboard.

Because client push is insecure.

@Heman6886 I tried with the cloud code, still the notification does not appear on the device. My Cloud code, My Android Code, My Dashboard (screenshots).

@Heman6886 I am waiting for your response. Can you please help me out with this issue?

Actually, you cannot send a push to an installation id.

Ok. I tried with this cloud code as well, it's still not working. Can you verify this? I got the senderID and api key from this screen. If this is wrong, can you tell me the correct method?

Ok, so I have this same problem. I am trying with a clean Parse Starter Project with only the exact AndroidManifest.xml additions as specified in "https://github.com/ParsePlatform/Parse-Server/wiki/Push-Configuring-Clients", and I have configured my Parse-Server exactly as specified in "https://github.com/ParsePlatform/parse-server/wiki/Push". I am running my own Parse-Dashboard and trying to use that to send a push notification to my android device running the Parse Starter app. Also trying to use the "curl example". Nothing. Nada. What's going on? How do I enable logging inside the index.js file to check what's going on? I don't see anything in /logs, nor do I see any "_Push folders" being created in my mongo, as @flovilmart commented elsewhere.
Start your server with VERBOSE=1 as an environment variable and send out the logs related to push sending.

I'd love to. Just can't figure out how to pass that variable directly from inside "index.js". I am running my parse-server on CentOS 7 with nvm and node v5.10.1 with a simple "node index.js" or "pm2 start index.js".

This is an environment variable, so either `VERBOSE=1 node index.js`, or in your pm2 configuration.

Logs, from curl (pretty much the same using the Dashboard):

```
verbose: POST /parse/push { 'user-agent': 'curl/7.29.0',
  host: 'My-Server-IP:Port',
  accept: '*/*',
  'x-parse-application-id': 'My-Key',
  'x-parse-master-key': 'My-Key',
  'content-type': 'application/json',
  'content-length': '311' }
{
  "where": { "deviceType": { "$in": [ "ios", "android" ] } },
  "data": { "title": "The Shining", "alert": "All work and no play makes Jack a dull boy." }
}
verbose: { "headers": { "X-Parse-Push-Status-Id": "JPgxVmHgoL" }, "response": { "result": true } }
verbose: sending push to 2 installations
verbose: sent push! 0 success, 0 failures
```

Does the device have the proper deviceTokens?

Android installation does not add any device tokens. I used the exact tutorial additions in a clean ParseStarterProject:

```java
// Native: Application.java
public void onCreate() {
    // ...
    ParseInstallation.getCurrentInstallation().saveInBackground();
}
```

Though after the latest "parse-server" update to 2.2.7, I notice the schema added more fields such as "DeviceToken, Channels, GcmSender, PushType, Badge". These did not show up in the earlier version "2.2.6". My new installs on an android device show these fields as empty, though both the app and parse-server side are properly configured with GCM ID/Keys.

Is the GCMSender set? I'm wondering here, as there are many users with valid and functioning configurations.

Of course. In my Parse-Server I have set:

```js
push: {
    android: {
        senderId: '11112222233333', // The Sender ID of GCM
        apiKey: 'ABC123DEF456GHI789JKF' // The Server API Key of GCM
    }
},
```

Try to ask over on Stack Overflow. With VERBOSE=1 you should also have logs from the GCM adapter itself. Seems that it's improperly configured to me.

Ok, will do. It's just that we don't really have an active community over on Stack Overflow for now. Most queries/replies are still related to the older parse.com and lead to a frustrating experience going round in circles.

@benitech did you use a custom receiver for receiving push notifications?

No. Just added the "ParseInstallation.getCurrentInstallation().saveInBackground();" in the android client side code, as the tutorial says.

Have you declared the server url in the application activity, and which parse library are you using?

You mean, am I connecting to my own parse-server properly? Of course. It works fine for other things; I can see my installations and sessions from the starter app right away.

```java
Parse.initialize(new Parse.Configuration.Builder(getBaseContext()).applicationId("My-Key").server("http://My-Server-IP:Port/parse/").build());
ParseInstallation.getCurrentInstallation().saveInBackground();
ParseUser.enableAutomaticUser();
ParseACL defaultACL = new ParseACL();
```

And so far, with Parse-Server, only Parse 1.13.0 can be used.

```gradle
dependencies {
    compile 'com.parse.bolts:bolts-android:1.+'
    compile 'com.parse:parse-android:1.+'
}
```

Please run npm start with verbose enabled: `VERBOSE=1 DEBUG=apn,node-gcm npm start`

Same as I posted earlier.
```
verbose: POST /parse/push { 'user-agent': 'curl/7.29.0',
  host: 'My-Server-IP:Port',
  accept: '*/*',
  'x-parse-application-id': 'My-Key',
  'x-parse-master-key': 'My-Key',
  'content-type': 'application/json',
  'content-length': '288' }
{
  "where": { "deviceType": { "$in": [ "android" ] } },
  "data": { "title": "The Shining", "alert": "All work and no play makes Jack a dull boy." }
}
verbose: { "headers": { "X-Parse-Push-Status-Id": "JZIN15X0aM" }, "response": { "result": true } }
verbose: sending push to 3 installations
verbose: sent push! 0 success, 0 failures
```

No device is registered, that is the reason. Please reverify the project id and app key of GCM.

@Heman6886 It is working now, thanks for the help. Actually I updated my parse server to the latest one and it started working. Still, there needs to be proper documentation for others to understand; there is so much time lost just to make this work. Parse developers need to contribute to these issues and make sure it works for everyone.

ok

@Heman6886 One more thing: we need not call push only from cloud code, we can also call it from the REST APIs as well.

I'll add a working sample for both Parse-Server and the Parse-Starter project on my github later. Might help others with any number of config issues in either.

@sekharrockz yes, using cloud code

O god, the Push section is a mess. Yet you guys did a fantastic job on bringing parse to the public! However, I encounter an issue when I put a push query in an afterSave function: I get a success result but nothing else T-T. Does anyone else have this problem?

```
does my after send
verbose: { "response": { "updatedAt": "2016-07-14T17:51:34.864Z" } }
```

How did this even happen LOL, I literally got nothing to work with LOL! Please someone give me a hint. And thank you all for doing a great job!

By the way, this is my afterSave block:

```js
Parse.Cloud.afterSave("OBJECTCLASS", function(req, res) {
    console.log("does my after send");
    var user = req.object.relation("requestUser").query();
    user.notContainedIn("objectId", req.object.get("rejectUserList"));

    var pushQuery = new Parse.Query(Parse.Installation);
    pushQuery.matchesQuery('user', user);

    Parse.Push.send({
        where: pushQuery,
        data: {
            alert: 'Test',
            //sound: 'default'
        }
    }, {
        useMasterKey: true,
        success: function(object) {
            // Push sent!
            console.log(object);
            res.success();
        },
        error: function(error) {
            console.log(error.message);
            res.error(error.message);
            // There was a problem :(
        }
    });
});
```

Hello! I had the same issue. I deleted the line "res.success();" or "response.success();" and the pushes are sent. There is no response object passed as a second argument in afterSave.
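For reference, a minimal sketch of that afterSave trigger with the fix applied — same placeholder class and field names as the snippet above, just without the nonexistent response callback:

```js
Parse.Cloud.afterSave("OBJECTCLASS", function(req) {
    // afterSave receives only the request: there is no response object,
    // so there is no res.success() / response.success() to call
    var user = req.object.relation("requestUser").query();
    user.notContainedIn("objectId", req.object.get("rejectUserList"));

    var pushQuery = new Parse.Query(Parse.Installation);
    pushQuery.matchesQuery("user", user);

    Parse.Push.send({
        where: pushQuery,
        data: { alert: "Test" }
    }, {
        useMasterKey: true,
        success: function() { console.log("push queued"); },
        error: function(error) { console.log("push error: " + error.message); }
    });
});
```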
2025-04-01T06:37:25.391751
2015-05-02T17:52:05
72676234
{ "authors": [ "atarzwell", "dan-blanchard", "emmett9001", "kbourgoin" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2281", "repo": "Parsely/pykafka", "url": "https://github.com/Parsely/pykafka/pull/159" }
gharchive/pull-request
move version.py

pykafka fails to import when installed; moving version.py into pykafka/ seems to resolve this.

```
Python 2.7.9 (default, Mar  1 2015, 12:57:24)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pykafka
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/pykafka-1.0.0-py2.7.egg/pykafka/__init__.py", line 1, in <module>
    from version import version
ImportError: cannot import name version
>>>
```

I am new to this project, so I may be doing something wrong. (3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt7-1 (2015-03-01) x86_64 GNU/Linux)

Using pykafka.version will cause the whole package to get imported at setup time, which is a big problem if you don't already have the prerequisites installed. We should just do what we do with streamparse and grab it with a regex.

I saw that, but I really don't care for using regex for that either. I've seen it done a couple of different ways, so I want to take a quick look around and see what the options are. Otherwise, yeah, regex is the best option I've seen.

@kbourgoin I've seen a few successful approaches to this in the past:

1. Regular expressions (like we do with streamparse).
2. Using execfile (we do this with SKLL).
3. Storing it in a plaintext file and just reading that (although then you need to add something to __init__.py that loads from the file as well).
4. Injecting a variable into __builtins__ in setup.py that says "I'm running setup," and have __init__.py only import the subpackages when that is false. This is what scikit-learn does, although more for the reason that their C modules won't be built at setup time.

Oh, and conda-build also switched recently to using Python Versioneer, which grabs the version from your git tags. It might be nicer.

I fixed this via the regex-based solution that @dan-blanchard mentioned. The commit is 7ce19f66d909331c4ca5e1a6b9700157022a3faf. @kbourgoin @emmett9001 can I get your opinion on the version number? I decided to go with Kafka's version number followed by our own counter. Anyone else is welcome to comment as well, of course. It's just a scheme that made sense to me, no particular attachment there.

@kbourgoin I don't like tying pykafka's version number to kafka's - it seems like it could get confusing quickly. Personally I'd use a simpler versioning scheme, but I could be convinced either way.

I did it mostly because I was worried about protocol versioning, but maybe it's not really worth it. If the protocol changes so much that we have to break compatibility, we can just do a major version of pykafka. Then again, there's currently no way to interrogate Kafka to find its version, so it's not like we have a way of managing compatibility. Let's stick with a 1.0.0 release next.
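For illustration, the regex approach usually looks something like this in setup.py — a sketch, where the `__version__` attribute name and file path are assumptions rather than pykafka's actual layout:

```python
# setup.py (sketch): read the version string without importing the
# package, so setup works even when dependencies aren't installed yet
import re

with open('pykafka/__init__.py') as f:
    source = f.read()

match = re.search(r"__version__\s*=\s*['\"]([^'\"]+)['\"]", source)
if match is None:
    raise RuntimeError('Unable to find version string.')
version = match.group(1)
```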
2025-04-01T06:37:25.393784
2015-05-11T22:55:11
75386444
{ "authors": [ "emmett9001", "kbourgoin" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2282", "repo": "Parsely/pykafka", "url": "https://github.com/Parsely/pykafka/pull/162" }
gharchive/pull-request
Make sure retrying the rebalance also re-checks partition allocation. This fixes a case where a balanced consumer is trying to acquire partitions it should have because its balancing information is out of date. To fix, we ensure that every time we retry rebalancing we re-check which partitions we should have.

Looks great to me, if it works in production.

Yeah, it's working. We'll write real integration tests for it later. :wind_chime:
2025-04-01T06:37:25.397693
2018-04-03T18:28:40
310947182
{ "authors": [ "danielskatz" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2283", "repo": "Parsl/parsl", "url": "https://github.com/Parsl/parsl/issues/191" }
gharchive/issue
parsl examples

It would be nice to have a generic master/worker application example, and a genetic algorithm example built on top of it.

The MW part would be: a master main program that calls a number of instances of a worker program, does something with the results, then calls the workers again. Calls to workers should be scalable, as the number can change. Additionally, the number may sometimes be bigger than the number of available resources. (See the sketch below.)

The GA part would be layered on top as a specific kind of MW: we would find a GA that's written in Python, and use a Parsl function to evaluate the various "genes".

maybe could use something from https://github.com/handcraftsman/GeneticAlgorithmsWithPython
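A minimal sketch of what the MW part could look like with Parsl — the names and the toy GA step are illustrative, and parsl.load() is assumed to fall back to the default local-threads configuration:

```python
import random

import parsl
from parsl import python_app

parsl.load()  # assumed default config (local ThreadPoolExecutor)

@python_app
def worker(candidate):
    # Stand-in for the real worker program; in a GA this would
    # evaluate one "gene" and return its fitness.
    return sum(candidate)

def next_generation(population, fitnesses):
    # Toy GA step: keep the better half, refill with mutated copies
    ranked = [p for _, p in sorted(zip(fitnesses, population), reverse=True)]
    survivors = ranked[: len(ranked) // 2]
    children = [[g + random.uniform(-1, 1) for g in p] for p in survivors]
    return survivors + children

def master(population, generations):
    for _ in range(generations):
        # One app call per candidate; Parsl queues tasks when there are
        # more of them than available resources.
        futures = [worker(c) for c in population]
        fitnesses = [f.result() for f in futures]
        population = next_generation(population, fitnesses)
    return population

print(master([[random.uniform(-5, 5) for _ in range(4)] for _ in range(8)], 3))
```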
2025-04-01T06:37:25.400838
2023-03-17T08:02:10
1628871193
{ "authors": [ "benclifford", "vsoch" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2284", "repo": "Parsl/parsl", "url": "https://github.com/Parsl/parsl/pull/2634" }
gharchive/pull-request
fix: launch_cmd not set when provided to FluxExecutor

Problem: Currently, the launch_cmd is not set if provided to the FluxExecutor directly. Many jobs are likely to start already with access to a flux instance, in which case the launch command should use flux submit instead of flux start.

Solution: Ensure the launch_cmd is set.

Description

Please include a summary of the change and (optionally) which issue is fixed. Please also include relevant motivation and context. Fixes #2633

Type of change

Choose which options apply, and delete the ones which do not apply.

- Bug fix (non-breaking change that fixes an issue)

Looks like an issue with pytest versioning:

```
ImportError: cannot import name 'Config' from 'pytest' (/opt/hostedtoolcache/Python/3.10.10/x64/lib/python3.10/site-packages/pytest/__init__.py)
make: *** [Makefile:53: local_thread_test] Error 1
Error: Process completed with exit code 2.
```

I'll await further instruction, since this is out of scope for my PR. Gnite!

Yeah there's a separate PR open to fix that, which should get merged in the next few days

Thanks @jameshcorbett !
2025-04-01T06:37:25.438003
2021-12-30T21:52:02
1091318801
{ "authors": [ "Fatih120", "Luiz12010" ], "license": "unlicense", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2285", "repo": "PartyPlanner64/PartyPlanner64", "url": "https://github.com/PartyPlanner64/PartyPlanner64/issues/124" }
gharchive/issue
Custom Character models

Since we can only make custom boards right now with PartyPlanner, it would be nice to add support for editing character models in the game.

How feasible is this nearly two years later? Can this be done manually?
2025-04-01T06:37:25.440263
2021-05-26T19:25:13
902831178
{ "authors": [ "Greg-J", "Paspartout" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2286", "repo": "Paspartout/UntitledDuckMod", "url": "https://github.com/Paspartout/UntitledDuckMod/issues/20" }
gharchive/issue
No data fixer registered for *

I get the following errors in the log on startup:

```
[11:22:08] [main/ERROR]: No data fixer registered for untitledduckmod:duck
[11:22:08] [main/ERROR]: No data fixer registered for untitledduckmod:duck_egg
[11:22:08] [main/ERROR]: No data fixer registered for untitledduckmod:goose
[11:22:08] [main/ERROR]: No data fixer registered for untitledduckmod:goose_egg
```

Thanks!

These are normal and actually occur for other mods that add entities as well. Data fixers are only for migration between major Minecraft versions, which mods usually don't support.
2025-04-01T06:37:25.441841
2017-02-21T20:51:37
209268890
{ "authors": [ "Pat-Laugh" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2287", "repo": "Pat-Laugh/WebssonProjects", "url": "https://github.com/Pat-Laugh/WebssonProjects/issues/23" }
gharchive/issue
Check to make sure char escapes are valid UTF-8

The char escape with \U has to be checked for overflow. It's easier than for number parsing since the value is obligatorily 8 chars, so only the first char has to be checked so that it's not > 7. (The max value is 0x7fffffff.)

For the escaped `\X`, it'll probably use the functions used to parse regular hexadecimal numbers.

That's just the parsing. Then the number returned should be checked to make sure it's a valid Unicode code point. But that's really optional. Perhaps people should be allowed to put any char, even if it's not a Unicode char.

Whether or not the char is a valid Unicode code point will not be checked. Anyway, a valid UTF-8 char that is not a valid Unicode code point could be inserted without being escaped in a string, and that wouldn't be checked either.
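A rough sketch of that first-digit check in C++ — the function name and error handling here are illustrative, not the project's actual API:

```cpp
#include <stdexcept>
#include <string>

// Parse the 8 hex digits of a \U escape. Because the length is fixed,
// only the first digit needs a range check: anything above '7' would
// exceed the 0x7fffffff maximum.
unsigned long parseEscapeU(const std::string& hex)
{
    if (hex.size() != 8)
        throw std::runtime_error("\\U escape must have exactly 8 hex digits");
    // Assumes hex already validated as hex digits; '8', '9' and the
    // letters 'a'-'f' / 'A'-'F' all sort above '7' in ASCII.
    if (hex[0] > '7')
        throw std::runtime_error("\\U escape overflows 0x7fffffff");
    return std::stoul(hex, nullptr, 16);
}
```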
2025-04-01T06:37:25.444936
2020-08-25T14:49:03
685552183
{ "authors": [ "tparesi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2288", "repo": "Path-Check/gaen-mobile", "url": "https://github.com/Path-Check/gaen-mobile/pull/273" }
gharchive/pull-request
Remove remnants of Tracing Strategy

Description: Noticed a couple of places referencing tracing strategy - removed them. :shipit:

I believe the types file is still called tracingStrategy; maybe we can add it to this one or a new PR, your call.

@aledustet I was thinking this too - I will take care of it in this PR.
2025-04-01T06:37:25.461712
2023-03-11T21:09:44
1620146566
{ "authors": [ "PatrickWLowe" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2289", "repo": "PatrickWLowe/prework-study-guide", "url": "https://github.com/PatrickWLowe/prework-study-guide/issues/3" }
gharchive/issue
JavaScript JavaScript User Story As a boot camp student I want the prework notes to be structured on a webpage So that I can easily find and read the information Acceptance Criteria GIVEN a Prework Study Guide website WHEN I view the study guide THEN I can see the four topics I learned along with a suggestion on what I should study first Completed
2025-04-01T06:37:25.482584
2022-05-31T12:28:30
1253796062
{ "authors": [ "Akascape", "PaulleDemon" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2290", "repo": "PaulleDemon/tkVideoPlayer", "url": "https://github.com/PaulleDemon/tkVideoPlayer/issues/7" }
gharchive/issue
Scale problem

I got another error with tkVideoPlayer's scaled parameter. When we set scaled to True it works and the video fits the label size, but I don't want stretching in videos, so I set scaled to False — and then it gives lots of errors. Can we fit the video to the label at the original aspect ratio, with the remaining area filled with black/the label color? And the set_scaled parameter is also not working.

@Akascape I have updated the library. Try tkvideoplayer 2.2: https://pypi.org/project/tkvideoplayer/2.2/. Let me know if you find any issues. Now I have added the ability to keep the aspect ratio, refer to the docs here; you can then use tkvideo.config(bg="black") to get a black background.

Thanks, it's now working properly :)
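For reference, usage would look roughly like this, assuming the 2.2 API described above (the keep_aspect flag and the file path are taken on faith from the docs, not verified here):

```python
import tkinter as tk
from tkVideoPlayer import TkinterVideo

root = tk.Tk()
# keep_aspect preserves the original ratio; the widget background
# fills the leftover area instead of stretching the frame
videoplayer = TkinterVideo(master=root, scaled=True, keep_aspect=True)
videoplayer.config(bg="black")
videoplayer.load("video.mp4")  # path is illustrative
videoplayer.pack(expand=True, fill="both")
videoplayer.play()
root.mainloop()
```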
2025-04-01T06:37:25.490264
2022-12-16T17:36:48
1500611865
{ "authors": [ "Klakurka", "chedieck" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2291", "repo": "PayButton/paybutton-server", "url": "https://github.com/PayButton/paybutton-server/pull/340" }
gharchive/pull-request
[Issue #341] feat: bullmq for sync newly added addresses Description: Adds job to verify if new addresses were added to the database and, if so, sync their transactions. Depends on: [x] #339 [x] #337 Conflict.
2025-04-01T06:37:25.496540
2022-10-13T10:48:48
1407581238
{ "authors": [ "CorwinDev", "kanetjuh" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2292", "repo": "Paymenter/Paymenter", "url": "https://github.com/Paymenter/Paymenter/pull/3" }
gharchive/pull-request
Updated README.md

I've updated the README.md file with the appropriate documentation, rather than simple, vague instructions on how to install this system. For example: "Install database".

Please try to combine it into a single link, not 2.

Done! I reduced the number of links from 2 or more into 1.
2025-04-01T06:37:25.503830
2023-03-28T17:56:55
1644423599
{ "authors": [ "svengato" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2293", "repo": "PeanutBase/jekyll-peanutbase", "url": "https://github.com/PeanutBase/jekyll-peanutbase/issues/32" }
gharchive/issue
Germplasm Plant Images page Links point to peanutbase.org, we should move them to dev.peanutbase.org Image of Krinkle mutant plant shows no images. (Is this an especially hideous mutant plant?) Image of Krinkle mutant plant shows no images. Found it - it and some of the other missing images were at a different URL. This will not be a problem once I hunt down the remaining missing images, they will all go in files/brilliant_gallery_temp. All images now live under files/. Pages generated automatically since commit 4abdbe5.
2025-04-01T06:37:25.662904
2015-01-10T18:56:39
53967677
{ "authors": [ "brossi", "glv2" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2294", "repo": "Peerunity/Peerunity", "url": "https://github.com/Peerunity/Peerunity/pull/145" }
gharchive/pull-request
Guard against openssl's new strict DER checks. This is a port of the patches 488ed32f2ada1d1dd108fc245d025c4d5f252783 and 8dccba6a45db0466370726ed462b9da2eae43bce made by theuni for bitcoin core to fix the problem with the new openssl versions being stricter on the format of DER encoded ECDSA signatures (which could split the network). I'm going to review this PR today against the Bitcoin patch and get it merged if I don't see any problems building it. Thank you for submitting it, @glv2!
2025-04-01T06:37:25.664510
2020-08-17T06:38:43
679984226
{ "authors": [ "ajsutton" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2295", "repo": "PegaSysEng/teku", "url": "https://github.com/PegaSysEng/teku/pull/2592" }
gharchive/pull-request
Skip running fork choice if there is already a previous update still in progress PR Description Currently we run fork choice and then may fail to actually apply the new chain head because a new change comes in while we were regenerating the state. Instead if we're still waiting for a previous result to apply, skip running fork choice entirely. Documentation [x] I thought about documentation and added the documentation label to this PR if updates are required. This has not proven effective. Need to try a different approach.
2025-04-01T06:37:25.667603
2017-06-16T12:17:47
236466668
{ "authors": [ "jacky309", "jryannel" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2296", "repo": "Pelagicore/qface", "url": "https://github.com/Pelagicore/qface/issues/48" }
gharchive/issue
Properties of "interface" type

It seems like it is already possible to have a property of interface type. Example:

```
interface SubInterface {
    void doSomething();
}

interface MainInterface {
    readonly SubInterface myProperty;
}
```

However, I could not find out how a code generator can check whether a property is of interface type or not. I would expect that an additional "is_interface" field is added, so that we could write the following:

```
{% if property.type.is_model -%} // "is_model" field already existing
// the property is a model
{% elif property.type.is_interface -%} // "is_interface" field not existing yet
// the property is a sub interface
{% endif %}
```

Fixed by https://github.com/Pelagicore/qface/pull/49

Seems to be okay. Done. I added some tests to validate the expectation.
2025-04-01T06:37:25.679201
2024-10-12T23:13:18
2583493165
{ "authors": [ "AD1340", "JeremyGamer13", "RedMan13" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2297", "repo": "PenguinMod/PenguinMod-Vm", "url": "https://github.com/PenguinMod/PenguinMod-Vm/issues/74" }
gharchive/issue
temporary variables turbowarp version bug

The "delete all runtime variables" block doesn't delete any variables, and when you click the checkbox on the "active runtime variables" reporter it doesn't show anything — but when you click the block itself it does.

Getting error `TypeError: Cannot read properties of undefined (reading 'delete')` for the "delete runtime var" block and error `TypeError: Cannot read properties of undefined (reading 'has')` using "runtime var exists?". I'm getting both these errors while trying to use these blocks in a custom block, pls fix ur stuff penguinmod 😭

tf

wait, do your variable names involve periods

oh well um erm its um its well its um complicated um maybe um yes

wat do u mean by loss of tiny bit of simplicity

dangerouz optimizations worked :D

dangerouz optimizations worked :D

yay!

wat do u mean by loss of tiny bit of simplicity

as in instead of `get (baz) in (get (bar) in (get (foo) in (get (tempvar))))` you can just do `get (tempvar.foo.bar.baz)` (assuming tempvar contains `{"foo":{"bar":{"baz":"hellorld!"}}}`). For this feature to work it is important that what's inside the tempvar is an object/map-like and not a JSON string; a map-like would be something like the pm "objects" extension in the extension gallery.

why is it closed when the main issue is still present

@RedMan13 REDMANNNNNN!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! REOPEN ITTTTTTTTTTTTTTTT!!!!!!!!!!

why is it closed when the main issue from the original comment is still present

which one

ehh both need fixed anyways lol

so, tf ? what on skidi was making the second set of errors

also fixed the bug this thread was originally for

NOOOOOOOOOOO!!!!!!!!!! WAAAAAAAAAAAIIIIIIIIITTTTTTTTT!!!!!!! THERES ONE LAST BUGGGGGGGGGGGGGG!!!!!!!!!
2025-04-01T06:37:25.703895
2020-05-16T23:03:14
619571576
{ "authors": [ "DeFUCC", "Perlkonig", "RojerGS" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2298", "repo": "Perlkonig/grav-plugin-count-views", "url": "https://github.com/Perlkonig/grav-plugin-count-views/issues/15" }
gharchive/issue
Could a change to the event hook make for a more robust count?

I was poking around Grav's docs to try and find a way to make the view counter slightly more robust/accurate; e.g. I'd like the counter not to increase when someone presses F5 (soft-refreshes the page). Would it help if, instead of incrementing the counter on onPageInitialized, we increment it in another event like onPageContentRaw?

It might. I don't have time to test anything at the moment. Feel free to give it a shot. Pull requests are always welcome. That said, it shouldn't always increment when refreshing. If the page is cached, it shouldn't be triggering the plugin. I seem to recall checking that when I last looked at this forever ago. Again, something to look at when things slow down for me.

Maybe onShutdown is better?

@DeFUCC on what measure would onShutdown be better? I am not very familiar with Grav's event hooks, so your explanation could shed some light on this.

> onShutdown — A new and very powerful event that lets you perform actions after Grav has finished processing and the connection to the client has been closed. This is particularly useful for performing actions that don't need user interaction and potentially could impact performance. Possible uses include user tracking and jobs processing. — Grav docs

So it increments the view count when the user is actually viewing the page )

I've interchanged the event in my version of the plugin and it works fine. ;)

@DeFUCC sure, but why is it better than onPageInit or onPageContentRaw?

It doesn't take resources at page loading time, maybe? I'm new to Grav and not an expert in PHP, so it's just an idea about improving the performance of all those plugins I added )
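For anyone trying this, the change in plugin terms is just subscribing to a different event — a sketch, not the plugin's actual code (the class and counter helper names are illustrative):

```php
<?php
namespace Grav\Plugin;

use Grav\Common\Plugin;

class CountViewsPlugin extends Plugin
{
    public static function getSubscribedEvents()
    {
        // Subscribe to onShutdown instead of onPageInitialized
        return ['onShutdown' => ['onShutdown', 0]];
    }

    public function onShutdown()
    {
        // Runs after the response is sent, so counting never delays the page
        $page = $this->grav['page'];
        $this->incrementCount($page->route()); // hypothetical counter helper
    }
}
```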
2025-04-01T06:37:25.706825
2018-02-25T19:54:44
300055400
{ "authors": [ "Perryvw", "zapp-brannigan-dota" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2299", "repo": "Perryvw/TypescriptToLua", "url": "https://github.com/Perryvw/TypescriptToLua/issues/56" }
gharchive/issue
Experimental feature: Alias calls

Related pull request: #55

You can define a type alias as follows:

```ts
type C_DOTA_BaseNPC = CDOTA_BaseNPC;
```

Then an explicit cast would cause the transpiler to use the alias instead of the original class. For example:

```
unit.GetMana() ==> CDOTA_BaseNPC.GetMana(unit)
(<C_DOTA_BaseNPC>unit).GetMana() ==> C_DOTA_BaseNPC.GetMana(unit)
```

From now on this experimental feature will be merged into the main branch to test. Please report any issues with this mechanic here. This issue will remain open until this experimental feature is reverted or added permanently.

Removed the need for this in 17378efff93ed72bfb7ed01016c0aeb17d5092c3
2025-04-01T06:37:25.722606
2018-11-24T02:49:55
383953262
{ "authors": [ "PetarV-", "xuhaiyun42" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2300", "repo": "PetarV-/GAT", "url": "https://github.com/PetarV-/GAT/issues/14" }
gharchive/issue
Citeseer data set accuracy

Hello! I used the GAT network to run the citeseer database, but the accuracy could not reach 72.5, only 70.3. How did you set the parameters to run so high?

Hello, thanks for your issue! Regarding the Citeseer setup, we have found that early stopping just on the accuracy (rather than loss and accuracy) yielded better results. Here is a relevant code segment:

```python
if val_acc_avg/vl_step >= vacc_mx:
    vacc_early_model = val_acc_avg/vl_step
    vlss_early_model = val_loss_avg/vl_step
    saver.save(sess, checkpt_file)
    vacc_mx = np.max((val_acc_avg/vl_step, vacc_mx))
    vlss_mn = np.min((val_loss_avg/vl_step, vlss_mn))
    curr_step = 0
```

Hope that helps! Note that the standard deviation on Citeseer is large (0.7), so it might take multiple runs to achieve a satisfactory accuracy. For example, I had five runs in a row with 73.1%, 74.2%, 71.9%, 73.1%, 70.9% under this configuration.

Thanks, Petar

Thank you very much for the reply.

Thanks, Xu Haiyun
2025-04-01T06:37:25.748361
2022-01-03T16:58:46
1092667210
{ "authors": [ "joaoluis-pdm" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2302", "repo": "PharmaLedger-IMI/ctr-workspace", "url": "https://github.com/PharmaLedger-IMI/ctr-workspace/issues/80" }
gharchive/issue
patient-ssapp v0.10.10 the PharmaLedger logo should navigate to the dashboard

The PharmaLedger logo at the top right should act as a home button.

c3e58dc does the job, but the mouse cursor over the image does not change to a "hand"... @lehialessandro (low-pri) do you know how to solve the mouse hover change? I've tried ion-button around the image, but the colors/bg of the image change. I've tried ion-anchor, but the mouse-over did not change at all.
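For the cursor itself, plain CSS on the logo (or a wrapper) should be enough — the selector below is illustrative:

```css
/* .home-logo is a hypothetical class for the clickable logo image */
.home-logo {
  cursor: pointer;
}
```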
2025-04-01T06:37:25.750187
2022-09-22T11:40:03
1382306534
{ "authors": [ "Mastaleru" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2303", "repo": "PharmaLedger-IMI/eco-iot-pmed-workspace", "url": "https://github.com/PharmaLedger-IMI/eco-iot-pmed-workspace/issues/511" }
gharchive/issue
Redesign this page using ionic cards component

It should be similar to the cards used for studies, with more details added to it (e.g. visit date and eventually the possibility of rescheduling).
2025-04-01T06:37:25.752412
2022-08-04T11:23:15
1328474329
{ "authors": [ "TiagoV-PDMFC" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2304", "repo": "PharmaLedger-IMI/fgt-workspace", "url": "https://github.com/PharmaLedger-IMI/fgt-workspace/issues/109" }
gharchive/issue
(1) LOW - Client_Use_Of_Iframe_Without_Sandbox

From #94: add the proper iframe sandbox configs to the created iframe for the wallet, for instance:

```js
//iframe.setAttribute("sandbox", "allow-scripts allow-same-origin allow-forms");
```

The code that produces this warning is not used in production (and is originally from Romsoft's implementation). It is used for the web representation of the wallets. We will address this, for reference's sake, but it's low priority.
2025-04-01T06:37:25.759325
2023-11-06T16:07:53
1979519252
{ "authors": [ "PhiBabin", "neopostmodern" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2305", "repo": "PhiBabin/Redox-Lipo-Adapter", "url": "https://github.com/PhiBabin/Redox-Lipo-Adapter/issues/3" }
gharchive/issue
Regulated voltage level?

I come here from this issue and am interested in integrating this into my Redox. While looking through the repo I got confused about the regulated voltage of the most recent design. At the bottom of the README it says "Change regulated voltage from 3.3V to 3V", but the schematic mentions a "3.3V regulator" with no further specification of VREG. So – which is it? I'm asking because my current hypothesis to mitigate connectivity issues I'm facing is running the Redox at a slightly elevated voltage.

V1.0 has a 3.3V regulator. V1.2 and V1.3 used a 3V regulator. In theory, you get better battery life with a 3V regulator, since you can use the 4.2V to 3.0V range of the battery. I don't know if it actually makes a significant difference in practice, since the Li-po's voltage is not linear with the state of charge. So if you're worried about low voltage, using the 3.3V should be fine. The microcontroller (nRF51822) used by the Redox supports an input voltage between 1.8V and 3.6V.
2025-04-01T06:37:25.778402
2020-06-29T13:31:05
647378975
{ "authors": [ "PhilipVinc", "wattik" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2306", "repo": "PhilipVinc/TensorBoardLogger.jl", "url": "https://github.com/PhilipVinc/TensorBoardLogger.jl/issues/71" }
gharchive/issue
log_value sets step to nothing on default

https://github.com/PhilipVinc/TensorBoardLogger.jl/blob/08f57d854ca1b32cc4cfb0a24881d57ec9f5bb1c/src/Loggers/LogValue.jl#L8

Hello, contrary to the description and probably the expected behavior, the method log_value sets the argument step to nothing by default. As suggested in the description, a more natural default value for the argument would be step(logger). I believe this was intended but somehow accidentally omitted during development.

Cheers,

Hi wattik, while step is set to nothing instead of the current step, the behaviour is what is documented, as later on in the serialisation chain nothing gets converted to step(logger). See https://github.com/PhilipVinc/TensorBoardLogger.jl/blob/08f57d854ca1b32cc4cfb0a24881d57ec9f5bb1c/src/event.jl#L10

Indeed our implementation is weird; the code should be cleaned up and the conversion should be moved up to improve the code quality. I have very little time these days, but if you want to pick this up, I'll be fast in reviewing.
2025-04-01T06:37:25.803890
2024-06-03T06:54:02
2330322516
{ "authors": [ "0xbharath", "Prateek-Thakare", "xme" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2307", "repo": "PhonePe/mantis", "url": "https://github.com/PhonePe/mantis/issues/26" }
gharchive/issue
Best way to upgrade a Docker instance?

I was wondering if you have tips to upgrade a Mantis instance running in Docker, without losing existing data. Just update the git repo and restart the ./docker-setup-ubuntu.sh script to build new images? Tx!

Hey @xme, you need to get the latest Mantis repo (clone) and re-run the setup script. The setup script will detect the existing Mantis setup and provide you options on how to proceed. Choose an option where the MongoDB instance is not deleted, so that your existing data and the MongoDB instance are preserved.

Hi Prateek, nice, it ran smoothly! Tx! It does not seem to be critical — Mantis works — but I got this error at the end of the setup script:

```
jq: error (at <stdin>:1): Cannot index array with string "Service"
```

This error occurs because of changes to the docker compose ps command output in the latest docker compose versions. Can you update your docker compose to the latest version and try the same? It should resolve it.
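For reference, the shape change that trips jq — hedged, since the exact compose version where the output changed may differ: newer docker compose prints one JSON object per line for `ps --format json`, while older releases printed a single array:

```sh
# One-object-per-line output (newer compose): index each line directly
docker compose ps --format json | jq -r '.Service'

# Single-array output (older compose): iterate before indexing
docker compose ps --format json | jq -r '.[].Service'
```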
2025-04-01T06:37:25.867861
2024-12-12T17:22:03
2736487430
{ "authors": [ "jaein4722", "nassunii", "yjyoo3312" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2308", "repo": "PiLab-CAU/ImageProcessing-2402", "url": "https://github.com/PiLab-CAU/ImageProcessing-2402/issues/60" }
gharchive/issue
[Lecture 11_2][12/13] CycleGAN question

What does "target_fake" mean in the process of calculating "loss_D_fake" for discriminator A? I understand "pred_fake" as the generated fake A, but I was pondering the meaning of "target_fake" and decided to ask for clarification.

In the implementation code in https://github.com/aitorzip/PyTorch-CycleGAN/blob/master/train:

```python
# Inputs & targets memory allocation
Tensor = torch.cuda.FloatTensor if opt.cuda else torch.Tensor
input_A = Tensor(opt.batchSize, opt.input_nc, opt.size, opt.size)
input_B = Tensor(opt.batchSize, opt.output_nc, opt.size, opt.size)
target_real = Variable(Tensor(opt.batchSize).fill_(1.0), requires_grad=False)
target_fake = Variable(Tensor(opt.batchSize).fill_(0.0), requires_grad=False)
```

target_real represents the label 1 (real) for real data, while target_fake represents the label 0 (fake) for fake data. In this implementation, criterion_GAN is defined as MSELoss (Mean Squared Error Loss), which trains the model to minimize the difference between the predicted value and the target label. Specifically, the Discriminator is trained to output 1 for real data and 0 for fake data. Here, target_fake serves as the label "0" indicating fake data, and it is used in the calculation of loss_D_fake. This ensures that the Discriminator learns to correctly distinguish fake data from real data and avoids misclassifying fake data as real.

@nassunii @jaein4722 Thank you for the question and the answer :)

Thank you for the great answer, @jaein4722. It seems my additional comments are not needed.
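To make the pairing explicit — note that pred_fake is the Discriminator's score for the generated image, not the image itself — the fake and real branches of the D_A loss look roughly like this (reconstructed from that repository's pattern, so minor details may differ):

```python
# criterion_GAN is torch.nn.MSELoss() in that implementation
pred_fake = netD_A(fake_A.detach())                  # D's score for a generated image
loss_D_fake = criterion_GAN(pred_fake, target_fake)  # push the score toward 0

pred_real = netD_A(real_A)                           # D's score for a real image
loss_D_real = criterion_GAN(pred_real, target_real)  # push the score toward 1

loss_D_A = (loss_D_real + loss_D_fake) * 0.5
```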
2025-04-01T06:37:25.869878
2015-10-09T11:34:20
110640131
{ "authors": [ "Piasy", "androidmalin" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2309", "repo": "Piasy/OkBuck", "url": "https://github.com/Piasy/OkBuck/issues/18" }
gharchive/issue
Unsupported major.minor version 52.0

```
buck install app
Using watchman.
Using buckd.
BUILD FAILED: com/android/dx/command/dexer/Main : Unsupported major.minor version 52.0
[-] PROCESSING BUCK FILES...FINISHED 0.5s [100%]
[-] BUILDING...FINISHED 1.3s [100%] (14/43 JOBS, 13 UPDATED, 30.2% CACHE MISS)
BUILD FAILED: com/android/dx/command/dexer/Main : Unsupported major.minor version 52.0
```

I think this is neither buck's nor OkBuck's problem; you seem to be using JDK 1.8 to build an Android project. Google your error message to find more details.
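For anyone hitting this: class file version 52.0 means the dx classes were compiled for Java 8, so the JVM running the build must be Java 8 or newer. A hedged workaround sketch (paths are illustrative; the alternative is pinning older Android build-tools whose dx still targets Java 7):

```sh
# Point the build at a Java 8 JDK so it can load class version 52.0
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
buck kill        # restart buckd so it picks up the new JVM
buck install app
```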
2025-04-01T06:37:25.879244
2023-06-25T01:01:11
1772992934
{ "authors": [ "Picnic-Bot" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2310", "repo": "PicnicSupermarket/error-prone-support", "url": "https://github.com/PicnicSupermarket/error-prone-support/pull/698" }
gharchive/pull-request
Upgrade Swagger 2.2.12 -> 2.2.13

This PR contains the following updates:

| Package | Type | Update | Change |
| --- | --- | --- | --- |
| Swagger | compile | patch | 2.2.12 -> 2.2.13 |

Release Notes

swagger-api/swagger-core v2.2.13: Swagger-core 2.2.13 released! Compare Source

- fix: makes populating instance variables accessible to subclasses (#4434)
- OAS 3.1 - properties and ref as siblings / fix ModelConvertes usage (#4433)
- support custom annotation for containers (#4429)

- [ ] If you want to rebase/retry this PR, check this box

Warning: Renovate's suggested commit message is being replaced with improved initial commits to enable automerging. As a side effect, these suggested commit messages might have changed. Consider comparing the initial commit message and this message to determine the most suitable one. Please leave feedback in #sys-renovate.

Suggested commit message:

```
Upgrade Swagger 2.2.12 -> 2.2.13

See:
- https://github.com/swagger-api/swagger-core/releases/tag/v2.2.13
- https://github.com/swagger-api/swagger-core/compare/v2.2.12...v2.2.13
```
2025-04-01T06:37:25.882777
2022-03-10T08:41:13
1164921328
{ "authors": [ "fquirin", "kenarsa" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2311", "repo": "Picovoice/porcupine", "url": "https://github.com/Picovoice/porcupine/issues/680" }
gharchive/issue
Allow training of wake words for older versions via Picovoice console Is your feature request related to a problem? Wake words trained for Porcupine v2.0 won't work with v2.1 and vice versa. Since developers cannot always update right away and release new versions they depend on support for older versions at least for some transition period. Currently it is not possible to train v2.0 wake words anymore via the online console which effectively breaks the custom wake word feature for all apps and devices that still have to run v2.0. Describe the solution you'd like Let users train custom wake words for older versions (v2.0 atm) via the console or release offline tools to train older wake words. you can keep using trained v2.0 models, no? Yes, but users who are just getting started cannot create custom wake words at the moment :-( got it. we don't have plans to provide this at the moment as it incurs lots of ops and almost all customers are either happy to stay with already trained (older) models or upgrade to the newest version. I will keep this in mind for the future but closing now as there is no immediate action. Ok. I'm about to update to 2.1 soon ... any immediate plans for 2.2? ^^ :see_no_evil:
2025-04-01T06:37:25.886128
2018-07-19T20:07:02
342870907
{ "authors": [ "PierreBresson" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2312", "repo": "PierreBresson/Thinkerview", "url": "https://github.com/PierreBresson/Thinkerview/issues/6" }
gharchive/issue
Add pagination

Add pagination for better performance. Currently, the app is getting the 100 latest articles with a GET on http://thinkerview.com/wp-json/wp/v2/posts?categories=9&per_page=100 instead of http://thinkerview.com/wp-json/wp/v2/posts?categories=9&page=1 and http://thinkerview.com/wp-json/wp/v2/posts?categories=9&page=2 etc...

Done in 1.3.0
2025-04-01T06:37:25.887140
2018-05-07T22:22:25
320979099
{ "authors": [ "Quasilyte", "fexolm" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2313", "repo": "PieselBois/kfulint", "url": "https://github.com/PieselBois/kfulint/issues/16" }
gharchive/issue
readme: fill minimum info about the project

At least write what this project is about.

@Quasilyte, maybe close this issue?
2025-04-01T06:37:25.897450
2024-12-07T13:11:49
2724588063
{ "authors": [ "jueank" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2314", "repo": "PinballY/PinballY-Addons-and-Examples", "url": "https://github.com/PinballY/PinballY-Addons-and-Examples/pull/22" }
gharchive/pull-request
Added PinMAME functions script & helper files

Allows you to control the ROM volume & DMD of PinMAME conveniently from PinballY.

Showpindmd (0 / 1) shows or hides the DMD controlled by PinMAME. Disabling the DMD is useful for older ROM-based tables with alphanumeric displays, where PinMAME shows only basic numbers on the DMD.

Volume (0 to -32) attenuates the volume of the ROM sound from PinMAME. This allows you to balance the volume coming from the backbox speakers against the volume of the playfield sound. This is for older ROM-based tables which do not have a DMD menu with volume control. 0 is the loudest (default). I have never needed values below -16, so this is the range shown in the menu. It is usually sufficient to go in steps of 2, as I have never needed the steps to be finer.

The only change that needs to be done to the script is the scriptpath itself. It defaults to "C:\PinballY\Scripts", which should work out of the box for most installations. The registry key for the ROM is retrieved from PinballY's metadata. In my experience, this works quite well for 95% of the tables that I have used it for so far.

The changes to the Windows registry are done by a small helper application, built with AutoIt3. You can use the provided EXE or build it yourself from source.

@mjrgh Not sure if you noticed this PR. You might want to have a look at it.
2025-04-01T06:37:26.019239
2015-09-28T04:38:25
108588150
{ "authors": [ "vosen", "yacoder" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2315", "repo": "PistonDevelopers/VisualRust", "url": "https://github.com/PistonDevelopers/VisualRust/issues/185" }
gharchive/issue
Couldn't build "Debug" configuration in VS 2015 Building in "Debug" configuration returns 1519 errors on my machine using VS 2015, and this was the default build configuration when I opened the solution. Switching to "Debug-CI" or "Release" fixes the errors. We should maybe remove the "Debug" configuration? Or merge it with "Debug-CI"? Paste the error log somewhere (gist or pastebin). Debug configuration is pretty much identical to Release, Debug-CI is a special configuration for AppVeyor. Hmm... couldn't repro today... will keep an eye on it :-)
2025-04-01T06:37:26.029201
2015-04-05T12:52:10
66436032
{ "authors": [ "atheriel", "fenhl", "toqueteos" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2316", "repo": "PistonDevelopers/hematite_server", "url": "https://github.com/PistonDevelopers/hematite_server/pull/76" }
gharchive/pull-request
Region file (.mca) handling.

DO NOT MERGE YET. First step towards having a chunk loader.

- [x] .mca file reading.
- [ ] .mca file writing.
- [ ] Chunk loader capabilities.

File writing is utterly incomplete.

@toqueteos I'm working on #[derive] functionality for NBT file formats that should make this task massively easier.

Hmmm... Is #[derive] customizable now? That's awesome!

#[derive(NbtFmt)] has landed now. I'm closing this PR in favor of a new updated one.
2025-04-01T06:37:26.033495
2020-03-26T04:28:45
588146340
{ "authors": [ "Zmwang622", "azharichenko" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2317", "repo": "PittCSWiki/pittcswiki", "url": "https://github.com/PittCSWiki/pittcswiki/issues/17" }
gharchive/issue
Create guide "5 Tips for Success"

5 Tips For Success:
- Solid Resume
- Apply EARLY
- Study Interview Questions

closed this by accident. I'll work on it this week lol

dis just zero to over
2025-04-01T06:37:26.052893
2021-06-27T10:00:45
930893914
{ "authors": [ "AnTheMaker", "PiyushSuthar", "Rajaniraiyn" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2318", "repo": "PiyushSuthar/Windows-11-Web", "url": "https://github.com/PiyushSuthar/Windows-11-Web/pull/3" }
gharchive/pull-request
Add autofocus to search bar

Love the project! This is a minor minor change: I just added autofocus to the search-bar, so whenever you open the task menu you can automatically start typing without having to click on the search bar first, just like in the real Windows.

Thanks for the PR. It looks like we're facing some issues with it.

Hey guys, my project had the same issue but I fixed it. This happens because we focus the search before the start menu animation. Try focusing after the animation using setTimeout:

```js
setTimeout(() => {
  startmenu.focus();
}, animationduration);
```

Hey guys, my project windows11 had the same issue but I fixed it. This happens because we focus the search before the start menu animation. Try focusing after the animation using setTimeout:

```js
setTimeout(() => {
  startmenu.focus();
}, animationduration);
```

Ohh Great! Thanks for sharing the solution!

No I can't. I don't know any frameworks. I am comfortable only with plain JS.
2025-04-01T06:37:26.055773
2021-08-23T12:30:31
976981587
{ "authors": [ "w-le" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2319", "repo": "PlaceOS/drivers", "url": "https://github.com/PlaceOS/drivers/issues/240" }
gharchive/issue
Generic MQTT: support certificate-based auth https://github.com/PlaceOS/drivers/blob/master/drivers/place/mqtt.cr currently supports only username/password authentication. A current project requires publishing device state to an MQTT broker on the internet, and that broker requires certificate-based authentication. Could we add support for certificate-based auth? fyi @jeremy-west I'll assign this to Steve, as he created this MQTT driver and the lib it uses (https://github.com/spider-gazelle/crystal-mqtt). Can we allocate some of his time to it over the coming weeks? Sorry, I deliberately didn't raise this earlier, as the original documentation I was provided for the remote MQTT broker stated that certificate-based auth was OPTIONAL (password auth could be used instead). But that MQTT service has since rebranded, and its new documentation requires cert-based auth. It's a good thing to support anyway, and I'm sure we will see it required again soon.
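For context, a hedged sketch of what certificate-based auth typically looks like on an MQTT client, illustrated here in TypeScript with the Node.js mqtt package rather than the Crystal driver this issue targets; the broker host, file paths, and topic are placeholders:

```ts
import { connect } from "mqtt";
import { readFileSync } from "fs";

// Mutual-TLS connection: the client presents its own certificate and
// verifies the broker's certificate against the supplied CA chain.
const client = connect("mqtts://broker.example.com:8883", {
  key: readFileSync("./client.key"),  // client private key (placeholder path)
  cert: readFileSync("./client.crt"), // client certificate (placeholder path)
  ca: readFileSync("./ca.crt"),       // CA chain used to verify the broker
  rejectUnauthorized: true,
});

client.on("connect", () => {
  client.publish("placeos/device/state", JSON.stringify({ online: true }));
});
```

The Crystal driver would presumably need the equivalent key/cert/CA options surfaced through crystal-mqtt's TLS layer.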
2025-04-01T06:37:26.078073
2023-10-31T09:17:14
1969988808
{ "authors": [ "Plachtaa", "zhou20120904" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2320", "repo": "Plachtaa/VALL-E-X", "url": "https://github.com/Plachtaa/VALL-E-X/issues/129" }
gharchive/issue
It's useless on Mac I deployed it on my Mac (Apple M1, 8 GB, Ventura 13.5). When I use it, it always runs for a while and then aborts: `VALL-E EOS [413 -> 727] libc++abi: terminating due to uncaught exception of type c10::Error: Unsupported type byte size: ComplexFloat Exception raised from getGatherScatterScalarType at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/View.mm:758 (most recent call first): frame #0: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) + 92 (0x16a4f92b8 in libc10.dylib) frame #1: at::native::mps::getGatherScatterScalarType(at::Tensor const&) + 304 (0x28e923150 in libtorch_cpu.dylib) frame #2: invocation function for block in at::native::mps::gatherViewTensor(at::Tensor const&, at::Tensor&) + 128 (0x28e924ca0 in libtorch_cpu.dylib) frame #3: dispatch_client_callout + 20 (0x19acb4400 in libdispatch.dylib) frame #4: dispatch_lane_barrier_sync_invoke_and_complete + 56 (0x19acc397c in libdispatch.dylib) frame #5: at::native::mps::gatherViewTensor(at::Tensor const&, at::Tensor&) + 888 (0x28e923838 in libtorch_cpu.dylib) frame #6: at::native::mps::mps_copy(at::Tensor&, at::Tensor const&, bool) + 3096 (0x28e87ab58 in libtorch_cpu.dylib) frame #7: at::native::copy_impl(at::Tensor&, at::Tensor const&, bool) + 1944 (0x28a5f7604 in libtorch_cpu.dylib) frame #8: at::native::copy_(at::Tensor&, at::Tensor const&, bool) + 100 (0x28a5f6dac in libtorch_cpu.dylib) frame #9: at::_ops::copy_::call(at::Tensor&, at::Tensor const&, bool) + 288 (0x28b32d718 in libtorch_cpu.dylib) frame #10: at::native::clone(at::Tensor const&, c10::optional<c10::MemoryFormat>) + 444 (0x28a981f84 in libtorch_cpu.dylib) frame #11: at::_ops::clone::call(at::Tensor const&, c10::optional<c10::MemoryFormat>) + 276 (0x28b03b0c4 in libtorch_cpu.dylib) frame #12: at::_ops::contiguous::call(at::Tensor const&, c10::MemoryFormat) + 272 (0x28b45fa60 in libtorch_cpu.dylib) frame #13: at::TensorBase::__dispatch_contiguous(c10::MemoryFormat) const + 40 (0x28a447130 in libtorch_cpu.dylib) frame #14: at::native::mps::binaryOpTensor(at::Tensor const&, at::Tensor const&, c10::Scalar const&, at::Tensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, MPSGraphTensor* (at::native::mps::BinaryOpCachedGraph*, MPSGraphTensor*, MPSGraphTensor*) block_pointer) + 968 (0x28e863330 in libtorch_cpu.dylib) frame #15: at::native::structured_mul_out_mps::impl(at::Tensor const&, at::Tensor const&, at::Tensor const&) + 128 (0x28e8673f0 in libtorch_cpu.dylib) frame #16: at::(anonymous namespace)::wrapper_MPS_mul_Tensor(at::Tensor const&, at::Tensor const&) + 140 (0x28c003ea8 in libtorch_cpu.dylib) frame #17: at::_ops::mul_Tensor::call(at::Tensor const&, at::Tensor const&) + 284 (0x28ae41898 in libtorch_cpu.dylib) frame #18: torch::autograd::THPVariable_mul(_object*, _object*, _object*) + 396 (0x1781f82dc in libtorch_python.dylib) frame #19: object* torch::autograd::TypeError_to_NotImplemented<&torch::autograd::THPVariable_mul(_object*, _object*, _object*)>(_object*, _object*, _object*) + 12 (0x178154330 in libtorch_python.dylib) frame #20: method_vectorcall_VARARGS_KEYWORDS + 144 (0x104b77f88 in Python) frame #21: vectorcall_maybe + 104 (0x104bd5824 in Python) frame #22: slot_nb_multiply + 148 (0x104bd2588 in Python) frame #23: binary_op1 + 228 (0x104b5021c in Python) frame #24: PyNumber_Multiply + 36 (0x104b5082c in Python) frame #25: _PyEval_EvalFrameDefault + 51104 (0x104c467d0 in Python) frame #26: _PyEval_Vector + 116 (0x104c48564 in Python) frame #27: method_vectorcall + 164 (0x104b6e0c0 in Python) frame #28: _PyEval_EvalFrameDefault + 48300 (0x104c45cdc in Python) frame #29: _PyEval_Vector + 116 (0x104c48564 in Python) frame #30: _PyObject_FastCallDictTstate + 96 (0x104b6afe8 in Python) frame #31: slot_tp_call + 180 (0x104bd076c in Python) frame #32: _PyObject_MakeTpCall + 128 (0x104b6ad3c in Python) frame #33: _PyEval_EvalFrameDefault + 40584 (0x104c43eb8 in Python) frame #34: _PyEval_Vector + 116 (0x104c48564 in Python) frame #35: _PyVectorcall_Call + 152 (0x104b6b82c in Python) frame #36: _PyEval_EvalFrameDefault + 48300 (0x104c45cdc in Python) frame #37: _PyEval_Vector + 116 (0x104c48564 in Python) frame #38: _PyEval_EvalFrameDefault + 48300 (0x104c45cdc in Python) frame #39: _PyEval_Vector + 116 (0x104c48564 in Python) frame #40: _PyEval_EvalFrameDefault + 48300 (0x104c45cdc in Python) frame #41: _PyEval_Vector + 116 (0x104c48564 in Python) frame #42: _PyObject_VectorcallTstate.4608 + 88 (0x104c62a28 in Python) frame #43: context_run + 92 (0x104c628e4 in Python) frame #44: cfunction_vectorcall_FASTCALL_KEYWORDS + 76 (0x104bb3a00 in Python) frame #45: _PyEval_EvalFrameDefault + 48300 (0x104c45cdc in Python) frame #46: _PyEval_Vector + 116 (0x104c48564 in Python) frame #47: method_vectorcall + 380 (0x104b6e198 in Python) frame #48: thread_run + 168 (0x104cfaad4 in Python) frame #49: pythread_wrapper + 48 (0x104c9c1cc in Python) frame #50: _pthread_start + 148 (0x19ae63fa8 in libsystem_pthread.dylib) frame #51: thread_start + 8 (0x19ae5eda0 in libsystem_pthread.dylib) [1] 36424 abort python3 -X utf8 launch-ui.py` It even used 20 GB of my RAM!!! (compressed, though I don't know what that means). Optimize it, please. ComplexFloat seems to be a common problem reported by several Mac users, but I personally don't have a MacBook to do debugging. So, I apologize that currently I'm unable to fix this problem. OK, thanks.
2025-04-01T06:37:26.082665
2016-05-02T14:15:11
152562084
{ "authors": [ "Ketchup901", "Plailect" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2321", "repo": "Plailect/Guide", "url": "https://github.com/Plailect/Guide/issues/107" }
gharchive/issue
Why does your 9.2 upgrade section use an outdated version of rxTools? There is no reason not to use the newer version; load it from http://dukesrg.github.io/?rxTools/sys/code.bin rather than reboot.ms. Very old system versions cannot run newer rxTools versions.
2025-04-01T06:37:26.084211
2016-07-29T04:39:58
168249363
{ "authors": [ "Plailect", "SuperStuck" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2322", "repo": "Plailect/Guide", "url": "https://github.com/Plailect/Guide/issues/274" }
gharchive/issue
Black screen after restoring RedNand On my New 3DS I get a black screen on boot after finishing the A9LH installation and restoring the RedNAND; SysNAND and the SysNAND from Part 2 don't work either. A9LH has installed correctly (I tested it with the tester package, and it powers off the console just fine). Any idea what could be wrong? Copy Luma's files again and make sure your options are right.
2025-04-01T06:37:26.091277
2015-08-19T20:40:40
101993492
{ "authors": [ "Planeshifter", "dariusk", "oskarflordal" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2323", "repo": "Planeshifter/node-word2vec", "url": "https://github.com/Planeshifter/node-word2vec/pull/6" }
gharchive/pull-request
Strlenfix This avoids the crash on gnews.bin, but unfortunately I haven't been able to confirm it works since I run out of memory. Attempting to run this code now on an Amazon cluster where I'd previously gotten the error this is supposed to fix. Not sure how long it'll take, but I'll update when it's done. w2v.loadModel('../GoogleNews-vectors-negative300.bin', function(err, model){ console.log('model', model); }); Thanks for the commit & the string-length fix. I am thinking that maybe we can remove the slice operation when creating a new WordVec instance. Apparently, node Buffers are allocated in memory outside of the V8 heap, so if we avoid creating a shallow copy and instead just provide a new view on the underlying data, this might help. I made some small changes to the code to facilitate this and merged them into the master branch. Oh excellent. I'll use the current master branch and give it a shot now. $ node index.js --max_old_space_size 4096 > out.txt FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory Aborted (core dumped) Even with the optimization it's still hitting 4 GB of memory usage and dumping. Thanks for trying out the updated code! You were right from the start, and it seems that we might not be able to get this working without a major rewrite of the code, which could utilize either multiple node processes or native C++ code via an add-on. I am a bit at my wit's end, but will let all of you know in case I come up with something in the future. This might magically get fixed by the upcoming Node "4.0" release, which you can read about here: https://medium.com/node-js-javascript/4-0-is-the-new-1-0-386597a3436d (io.js is a fork of node with lots of improvements that is getting folded back into the trunk)
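For illustration, here is a small TypeScript sketch of the view-versus-copy distinction discussed above. The file layout, names, and dimensions are hypothetical, not node-word2vec's actual internals:

```ts
import { readFileSync } from "fs";

// Hypothetical layout: back-to-back 300-dimensional float32 vectors.
const DIMS = 300;
const raw = readFileSync("vectors.bin"); // one big Buffer, allocated outside the V8 heap

// Copying approach: duplicates the bytes into a fresh typed array per word,
// doubling the resident memory for every vector that gets materialized.
function vectorCopy(index: number): Float32Array {
  const out = new Float32Array(DIMS);
  for (let i = 0; i < DIMS; i++) {
    out[i] = raw.readFloatLE((index * DIMS + i) * 4);
  }
  return out;
}

// View approach: reinterprets a slice of the existing bytes without copying.
// Assumes the buffer's byteOffset is 4-byte aligned, which holds for a
// dedicated (non-pooled) allocation like the one readFileSync returns here.
function vectorView(index: number): Float32Array {
  return new Float32Array(raw.buffer, raw.byteOffset + index * DIMS * 4, DIMS);
}
```

Either way, the per-vector arrays are only part of the footprint; the out-of-memory failures in this thread suggest the whole-model load itself was the bottleneck, which views alone can't fix.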
2025-04-01T06:37:26.115381
2022-02-28T22:35:42
1154623313
{ "authors": [ "brianwp3000" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:2324", "repo": "PlayFab/MpsPowershell", "url": "https://github.com/PlayFab/MpsPowershell/issues/8" }
gharchive/issue
MonitoringApplicationConfiguration doesn't work Even if you specify an argument for -MonitoringApplicationConfiguration in New-PfBuild, nothing gets sent to the server. So it turns out I was wrong: if you pass in a correctly-shaped object, it works. But if the passed-in object isn't exactly correct, Autorest just silently ignores it. That's terrible, and I can't find any switches in Autorest to change this behavior. This affects all arguments, not just MonitoringApplicationConfiguration.