| added (string, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 – 2025-01-01 03:51:31) | id (string, length 4–10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 – 1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:40:19.745439
| 2018-03-16T09:12:36
|
305851532
|
{
"authors": [
"MLDOliveira",
"bilencekic",
"ivanfemia",
"sandraros"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10494",
"repo": "sapmentors/abap2xlsx",
"url": "https://github.com/sapmentors/abap2xlsx/issues/527"
}
|
gharchive/issue
|
Performance problem after updating the version
Hi Everyone,
I updated the abap2xlsx version via abapGit, but now I have a huge performance issue. Nothing changed; I am still using the BIND_TABLE method with 16K rows and 60 columns.
Last time it took 20-35 seconds with the huge file writer, but now it is nearly 150 seconds.
I tried SET_TABLE and had the same issue.
Any advice?
Can you try to do a trace with se30 to identify the bottleneck?
Above are the SAT results; somehow the GET_ROW method is taking 91% of total time.
Thanks,
I remember a similar issue in the past; I'm trying to recall what the final outcome was.
Thank you :D Please try to remember; I need to transport the requests to the test system very soon. I noticed that after the version update the system is using the object_collection_iteration method to get the next row. Inside the iteration method it is using a standard table; maybe this causes the performance issue. I didn't have this strange issue before.
OK, I think I found the issue:
First of all, that iterator doesn't look necessary; we could store the rows in a global hash table. Each time the iterator is instantiated, the table keeps moving from one variable to another.
Second, the program keeps doing the same search for each line. Let's say there is already a row at index 10 and it is now searching whether there is a value at index 11; every time it starts from the beginning.
So at row number 2000 it does 2000 checks to see whether there is any value. But with a global variable like last_value_cell, it could start the check directly from 2000.
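The fix described above can be sketched in plain JavaScript (a hedged illustration of the idea, not the actual ABAP code): restarting the scan from the first row on every lookup is quadratic overall, while caching the last position makes a full pass linear.

```javascript
// Hedged sketch (plain JS, not the abap2xlsx ABAP code): a reader that
// resumes scanning from a cached position, analogous to the proposed
// last_value_cell variable, instead of restarting from index 0 each call.
function makeRowReader(rows) {
  let last = 0; // cached position: resume here instead of at index 0
  return function nextFilledRow() {
    while (last < rows.length && rows[last] === undefined) {
      last++; // skip empty cells without re-checking earlier rows
    }
    return last < rows.length ? rows[last++] : null;
  };
}
```

Each call does only the work left over from the previous one, so reading all n rows costs O(n) total instead of O(n^2).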
Hi @bilencekic, I'm having the same problem. All you did was change the GET_ROW method with that code?
@bilencekic / @MLDOliveira If you can provide a fix to the project it would be really great!
@bilencekic Could you commit your correction to the project?
@ivanfemia @MLDOliveira Alright, I will commit. I changed 2 classes in total; I will commit ASAP.
@MLDOliveira have you tested after the latest changes? How is the performance?
Could we close this issue? Thanks.
|
2025-04-01T06:40:19.796331
| 2016-12-10T15:49:49
|
194775196
|
{
"authors": [
"daltones",
"xzyfer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10495",
"repo": "sass/node-sass",
"url": "https://github.com/sass/node-sass/issues/1829"
}
|
gharchive/issue
|
Dependency on lodash.isarray
I saw that 79e86f32ce3661569748164d26e2f2667a79699a introduced a dependency on lodash.isarray among others.
Is that really necessary? Any particular reason to not use the standard Array.isArray()?
Array.isArray() already appears in other parts of the code and we're having this message on install:
warning node-sass ><EMAIL_ADDRESS>This package is deprecated. Use Array.isArray.
Array.isArray is not available until Node 4.6.2. We support back to Node 0.10.
My apologies you are correct. Fixed by #1830.
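For reference, on runtimes that genuinely lack Array.isArray, a feature-detect fallback avoids pulling in a dependency; this is a hedged sketch, not node-sass's actual code:

```javascript
// Hedged sketch (not node-sass code): use the native Array.isArray when
// present, otherwise fall back to the classic toString check.
var isArray = Array.isArray || function (value) {
  return Object.prototype.toString.call(value) === '[object Array]';
};
```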
|
2025-04-01T06:40:19.824818
| 2016-08-25T14:27:12
|
173218365
|
{
"authors": [
"nottrobin",
"saper",
"xzyfer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10496",
"repo": "sass/node-sass",
"url": "https://github.com/sass/node-sass/pull/1680"
}
|
gharchive/pull-request
|
Implement SASS_PATH
This is intended to implement the SASS_PATH environment variable.
Fixes #1678.
Tests
I've added two new API tests under .render:
"should check SASS_PATH in the specified order"
"should prefer include path over SASS_PATH"
If you run mocha test/api.js you should see these tests pass.
Manual checking
You can test it manually using the test fixtures as follows:
(lib/vars contains $color: red, and lib-alternate/vars contains $color: orange)
SASS_PATH is picked up
$ fixdir=`pwd`/test/fixtures/sass-path
$ export SASS_PATH=$fixdir/red
$ bin/node-sass $fixdir/index.scss
body {
background: red; }
Earlier paths are preferred
$ export SASS_PATH=$fixdir/orange:$fixdir/red
$ bin/node-sass $fixdir/index.scss
body {
background: orange; }
Specified include-paths still take precedence
$ bin/node-sass $fixdir/index.scss --include-path $fixdir/red
body {
background: red; }
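The precedence rules demonstrated above can be sketched as a small path-merging helper (a hedged illustration; the function name and shape are hypothetical, not node-sass's implementation):

```javascript
// Hedged sketch: merge CLI --include-path entries with SASS_PATH so that
// explicit include paths win, and earlier SASS_PATH entries are preferred
// over later ones (the resolver searches the array front to back).
function buildIncludePaths(cliPaths, sassPathValue, sep) {
  sep = sep || ':'; // SASS_PATH separator; ';' would be the Windows convention
  var envPaths = (sassPathValue || '').split(sep).filter(Boolean);
  return cliPaths.concat(envPaths);
}
```

Because order encodes priority, the three manual checks above fall out directly: CLI paths first, then SASS_PATH entries in their listed order.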
How can a Travis build take quite so long? @nschonni is something up? Is there some way for me to restart the Travis build?
Looks like OSX jobs got stuck and couldn't even be started.
here's an update from @travisci : https://www.traviscistatus.com/incidents/4mvp857qx8bw
Thanks @saper, I hadn't seen that. It's all passed now. @nschonni are you happy that I've addressed all your points?
Looks good to me. Would it be possible to squash this into one commit?
@saper sure no problem
Nice work everyone. Added this to the next.minor milestone. I believe we have one other PR to land in 3.9.0 also.
Okay commits squashed into 3788c5d9570f2141ccbd6a574af21fcd57d63110.
That's odd, it's failing on a test which is nothing to do with this commit. Any ideas?
Big thank you for the contribution!
No problem. Thanks for all your help with this @saper et al., it was fun! Do you know when the next minor release will be?
|
2025-04-01T06:40:19.827196
| 2023-06-26T19:38:43
|
1775465654
|
{
"authors": [
"kevinlinglesas"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10497",
"repo": "sassoftware/viya4-ark",
"url": "https://github.com/sassoftware/viya4-ark/issues/200"
}
|
gharchive/issue
|
Removal of hello-world service deployment verification
Until now, the pre-install report has deployed a hello-world service into the Kubernetes cluster as an additional verification. This feature is being removed because the publicly available google-sample image used by this feature is not being maintained.
This issue is addressed in Release 2.0.0.
|
2025-04-01T06:40:19.865197
| 2015-04-27T15:01:35
|
71309952
|
{
"authors": [
"gscoppino",
"jcarroll2007",
"sathomas"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10498",
"repo": "sathomas/STEM",
"url": "https://github.com/sathomas/STEM/issues/53"
}
|
gharchive/issue
|
Unit tests for high level views
Currently, the high-level views (adminsAsDiscovery, discoveryAsContent, partnersAsDiscovery, posAsMap, teachersAsDiscovery, and teacherSearchAsPage) lack unit tests. We should add unit tests for those views.
Do you want us to continue to merge off of rc1?
Yeah, I think we'll stick with rc1 until the official launch
Working branch info (for reference): https://github.com/gscoppino/STEM/tree/STEM_52_unit_tests_for_high_level_views
Hey @sathomas, let me pick your brain for a second regarding a test case I'm working on. In a test case for PoisAsMap, I'm attempting to add a Poi to a Pois collection and expecting the DOM to update with a new marker. However, this test fails, as it is unable to find any markers in the DOM.
Here is the code, with the problem statement highlighted:
https://github.com/gscoppino/STEM/blob/STEM_52_unit_tests_for_high_level_views/test/views/poisAsMap.spec.js#L51
If you see anything immediately wrong with the structure of the test or the test fixtures, let me know; otherwise, don't worry about it.
Disregard that last comment, checking the code coverage in the browser test made the problem fairly obvious.
When testing partnersAsDiscovery and adminsAsDiscovery with a simple empty element as the starting point, e.g. <article id="admins" class="discovery theme-2"></article>, the render() function will fail, since it attempts to render the PoisAsMap views when the necessary els do not exist yet. This could be fixed by just moving the PoisAsMap instantiations into the render() function, but I'm not sure that's a good idea. Thoughts?
I'm still recovering from (minor) surgery, so I may not be thinking straight, but
If you need a DOM element, you can insert a $scaffolding container in the page. There are some views that already do this, so that can give you a template.
If it's not too challenging, it would be better to test the view independently of other views. To do that, you could use a sinon stub.
Got it, so I should just provide the elements it expects to see in the scaffolding. Thanks!
Understood, I have been avoiding doing so. I'm considering making more use of mocks as well.
A problem I keep coming back to try to solve concerns a listener for an event set:searchQuery (to be emitted from the teachers model, which has an attribute searchQuery). However, I don't see this event in the catalog of built-in Backbone events: http://backbonejs.org/#Events-catalog and the source for teachers.js doesn't emit the event manually. Here's the test source:
it('After render, if the teachers model has its search model reset, the searchForm property of this view should be updated and re-rendered.', function() {
this.TeachersAsDiscovery.remove();
var functionSpy = sinon.spy(this.TeachersAsDiscovery, 'renderSearch');
this.TeachersAsDiscovery.initialize(); // re-bind event handler to use spy.
this.TeachersAsDiscovery.render();
functionSpy.reset(); // render makes a call to renderSearch which we don't care for in this test.
this.TeachersAsDiscovery.model.unset('searchQuery', { silent: true });
var newQuery = new Stem.Models.Search({
label: 'Test Label',
placeholder: 'Test Placeholder'
});
this.TeachersAsDiscovery.model.set('searchQuery', newQuery);
functionSpy.callCount.should.equal(1);
this.TeachersAsDiscovery.searchForm.model.should.equal(newQuery);
functionSpy.restore();
});
Yeah, I think that's a bug in the code itself. Should be 'change:searchQuery' instead of 'set:searchQuery'
The test code looks good BTW
Thanks! Making that change to the event listener fixes the problem, and does not break any other existing tests.
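The event-name fix above can be illustrated with a plain-JS stand-in for a Backbone-style model (a hedged sketch, not Backbone itself): setting an attribute fires 'change:<attr>', and nothing ever fires 'set:<attr>'.

```javascript
// Hedged sketch (not Backbone): a minimal model that, like Backbone, emits
// 'change:<attr>' when an attribute is set. A listener bound to
// 'set:<attr>' never fires, which is the bug discussed above.
function makeModel() {
  var handlers = {};
  return {
    on: function (event, fn) {
      (handlers[event] = handlers[event] || []).push(fn);
    },
    set: function (attr, value) {
      this[attr] = value;
      (handlers['change:' + attr] || []).forEach(function (fn) { fn(value); });
    }
  };
}
```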
Different problem: When testing spotlights for AdminsAsDiscovery, the template which builds OaeAsSpotlightItem will fail out if picture isn't defined. The reason for this is I put up a fake server which returns a response without a picture property. Should I give it a picture property, or should the possible lack of a source be handled in OaeAsSpotlightItem's template. Here's the test I'm writing for reference:
it('After render, if the spotlight list is populated, it should be shown.', function() {
var baseUrl = Stem.config.oae.protocol + '//' + Stem.config.oae.host + '/api/group/';
var groupUrl = new RegExp(baseUrl + '\\d+');
var subgroupUrl = new RegExp(baseUrl + '\\d+/members$');
var server = sinon.fakeServer.create();
server.respondWith("GET", groupUrl, [200, { 'Content-Type': 'application/json'}, '{}']);
server.respondWith("GET", subgroupUrl, [200, { 'Content-Type': 'application/json'}, '{}']);
this.AdminsAsDiscovery.model.get('spotlights').add(new Stem.Models.Group());
var $el = this.AdminsAsDiscovery.render().$el;
$el.find('.spotlight-block').hasClass('util--hide').should.be.false();
server.restore();
});
Problem went away. Probably was actually caused by me returning objects instead of arrays (doh!...). Sorry about that. Got the tests working and pushed. Here is what the above test looks like now:
it('After render, if the spotlight list is populated, it should be shown.', function() {
var baseUrl = Stem.config.oae.protocol + '//' + Stem.config.oae.host + '/api/group/';
var subgroupUrl = new RegExp(baseUrl + '.+/members([?]limit=\\d+)?');
var server = sinon.fakeServer.create();
server.respondImmediately = true;
server.respondWith("GET", subgroupUrl, [
200,
{ 'Content-Type': 'application/json' },
JSON.stringify([{"profile":{}, "role": "test"}, {"profile":{ "resourceType": "group" }, "role": "test"}])
]);
/* Reset test fixtures */
this.Discovery = new Stem.Models.Discovery();
server.respond();
this.AdminsAsDiscovery = new Stem.Views.AdminsAsDiscovery({
el: this.$Scaffolding.empty(),
model: this.Discovery.get('admins')
});
var $el = this.AdminsAsDiscovery.render().$el;
$el.find('.spotlight-block').hasClass('util--hide').should.be.false();
server.restore();
});
Stephen: While working on discoveryAsContent, I came across something rather unexpected. First of all, the article tags are not closed in the template, and this causes rendering issues (I have fixed this on my branch): discoveryAsContent.ejs
Secondly, the template doesn't include discovery-nav or the landing-page-heading. So, if the template were empty, neither of these would be present on the page. Is this desired or something that should be fixed?
I wouldn't sweat the high-level templates too much. They're really only defined as a convenience for testing. The production app doesn't generate the page de novo from templates. Instead, the initial index.html provides the basic "infrastructure" for the page. The JavaScript code then "fills in" the dynamic content where it's appropriate. This approach allows the page to work even for users that don't have JavaScript. (You can try it by disabling JavaScript in your browser and visiting the site.) Obviously, the full functionality is not available, but the site is (supposed to be) still usable.
It wouldn't hurt to add the closing tags, but there's no need to add unnecessary elements to the templates.
Hey Stephen:
Both Jordan and I are having trouble with tests that involve triggering radio buttons in the browser. Jordan's problem is DiscoveryAsContent child view switching, while mine is TeacherSearchAsPage main view switching. Triggering DOM events simply does not invoke the view functions that are watching for them. I ran into a similar problem with checkboxes, which I got working by triggering a click and a change event on the necessary elements, but this does not work for the radio buttons. This is the only thing blocking us from bringing all view coverage to 100%. We may just settle for triggering the events on the views/models manually if we can't find a solution. Any feedback is appreciated.
Cleaning up test outputs. I adopted a format for the views you assigned us that looks like this:
Would you mind if I formatted the other existing view tests you wrote to look like this?
Suggest moving this issue to https://github.com/Georgia-STEM-Incubator/STEM if it's still desirable
|
2025-04-01T06:40:19.949275
| 2024-07-14T16:15:52
|
2407500022
|
{
"authors": [
"87xie",
"satnaing"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10499",
"repo": "satnaing/astro-paper",
"url": "https://github.com/satnaing/astro-paper/pull/323"
}
|
gharchive/pull-request
|
refactor: remove redundant role
What
The <article> tag already has an implicit role defined by the HTML specification, so we do not need to add an ARIA role attribute.
Reference:
https://html-validate.org/rules/no-redundant-role.html
https://web.dev/learn/accessibility/aria-html#aria_in_html
Screenshot
After the change:
Thanks again!
|
2025-04-01T06:40:19.955159
| 2017-11-24T14:14:30
|
276628736
|
{
"authors": [
"guijun",
"satoren"
],
"license": "BSL-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10500",
"repo": "satoren/kaguya",
"url": "https://github.com/satoren/kaguya/issues/78"
}
|
gharchive/issue
|
cor2.isThreadDead() is true after kaguya::LuaThread cor2 = state.newThread();
kaguya::LuaThread cor = state.newThread();
state("corfun = function(arg)"
"coroutine.yield(arg) "
"coroutine.yield(arg2) "
"coroutine.yield(arg3) "
"return arg*4 "
" end");//define coroutine function
kaguya::LuaFunction corfun = state["corfun"];//get the Lua function
//exec coroutine with function and argument
std::cout << int(cor(corfun, 3)) << std::endl;//3
std::cout << int(cor()) << std::endl;//6
//resume template argument is result type
std::cout << cor.resume() << std::endl;//9
std::cout << int(cor()) << std::endl;//12
kaguya::LuaThread cor2 = state.newThread();
//3,6,9,12,
while(!cor2.isThreadDead()) // <== cor2.isThreadDead() is true here
{
std::cout << cor2.resume<int>(corfun, 3) << ",";
}
The coroutine reports "dead" if no function has been assigned to it, because the two states cannot be distinguished.
Can you try this?
state("corfun = function(arg)"
"coroutine.yield(arg) "
"coroutine.yield(arg2) "
"coroutine.yield(arg3) "
"return arg*4 "
" end");//define coroutine function
kaguya::LuaFunction corfun = state["corfun"];//get the Lua function
kaguya::LuaThread cor2 = state.newThread(corfun);
//3,6,9,12,
while(!cor2.isThreadDead())
{
std::cout << cor2.resume<int>(3) << ",";
}
Hi, it works.
Could you please help me with "attempt to yield across C-call boundary"?
LuaJIT 2.1.0-beta3 + kaguya:
1. create a coroutine in C via newThread
2. run a Lua function with coroutine.yield()
3. error:
attempt to yield across C-call boundary
|
2025-04-01T06:40:20.033864
| 2023-11-27T15:47:46
|
2012572731
|
{
"authors": [
"nulls"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10501",
"repo": "saveourtool/diktat",
"url": "https://github.com/saveourtool/diktat/issues/1827"
}
|
gharchive/issue
|
Diktat 2.0 doesn't apply diktat-analysis.yml in sub-project
Tested with MAGIC_NUMBER in frontend using gradle plugin
Actually, the configuration is invalid:
- name: MAGIC_NUMBER
enabled: true
# reduces speed of development on the FE
# will remove it for now
- name: MAGIC_NUMBER
enabled: false
It contains two configurations: one enables the rule, and the second one disables it.
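If the intent was to disable the rule, a single entry is enough; a sketch of the deduplicated configuration (which duplicate wins when both are present is otherwise up to the parser):

```yaml
# keep exactly one entry per rule name
- name: MAGIC_NUMBER
  enabled: false  # reduces speed of development on the FE; re-enable later
```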
|
2025-04-01T06:40:20.040696
| 2024-11-12T09:53:15
|
2651653366
|
{
"authors": [
"carnhofdaki",
"satsie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10502",
"repo": "saving-satoshi/saving-satoshi",
"url": "https://github.com/saving-satoshi/saving-satoshi/issues/1172"
}
|
gharchive/issue
|
The year is 2139?
Please use some other year, and maybe add a month for fun. Let people think and realize that if there was a 2140 prediction, it might have been current at the time it was made, but it is being slightly adjusted with every mined block.
See my always current prediction*.
Search for the word 2139 in this repository and you find the files in the i18n directory: https://github.com/search?q=repo%3Asaving-satoshi%2Fsaving-satoshi%202139&type=code
* In the top-right corner of my prediction there is a short code.
Hi @carnhofdaki thanks for filing this issue! I notice we have a discrepancy in chapter 10 where we say the year is 2140, but in chapter 1 it's 2139. Is this what this ticket is referring to?
This project was started over two years ago so you are correct that that prediction has since changed :)
|
2025-04-01T06:40:20.042007
| 2024-06-06T18:46:59
|
2338940008
|
{
"authors": [
"benalleng"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10503",
"repo": "saving-satoshi/saving-satoshi",
"url": "https://github.com/saving-satoshi/saving-satoshi/issues/970"
}
|
gharchive/issue
|
Chapter 3 help pages
Because chapter 3 has little actual user input, I think these help pages will be more resource-intensive. Alternatively, we could remove the help pages for this chapter, as I assume it will be considered the "easy" mode in the future, where we can possibly add lesson content that actually requires help pages.
Closing as I am satisfied with the current resources as they exist now.
|
2025-04-01T06:40:20.046343
| 2021-07-31T05:52:13
|
957178172
|
{
"authors": [
"Ajay-056",
"saviomartin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10504",
"repo": "saviomartin/slickr",
"url": "https://github.com/saviomartin/slickr/issues/3"
}
|
gharchive/issue
|
Missing Icons for some Techs and libs
Describe the bug
In the Choose Your Icon part, when we choose certain techs like Electron or NPM, the icon does not show in the image.
To Reproduce
Steps to reproduce the behavior:
Go to 'https://slickr.vercel.app/app'.
Click on 'Icon'.
Select 'electron' or 'NPM' from Choose Your Icon Dropdown.
See bottom right of the Image.
Expected behavior
The Icon should be added in the Image.
Screenshots
Desktop (please complete the following information):
OS: Windows 10 Home 2004
Browser Brave (Chromium)
Version 92
Yeah, I have found why it happens. It is because Slickr is using the devicons library and certain icons do not have a plain version. That's the reason.
Ok..🙂🙂
|
2025-04-01T06:40:20.075648
| 2021-07-19T13:10:28
|
947637667
|
{
"authors": [
"bR3iN",
"gwerbin",
"nanozuki",
"rcoconnor",
"savq"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10505",
"repo": "savq/paq-nvim",
"url": "https://github.com/savq/paq-nvim/issues/54"
}
|
gharchive/issue
|
Proposal: ability to specify subdirectory containing vim plugin
Some color schemes have their Vim plugins within a subdirectory of the repo. For instance, vim-plug has the rtp option, which allows you to specify the subdirectory containing the Vim plugin.
This seems to be a duplicate of #10.
This seems to be a duplicate of #10.
I read #10, but I found another method for this; maybe it also works on Windows?
https://stackoverflow.com/questions/600079/how-do-i-clone-a-subdirectory-only-of-a-git-repository/52269934#52269934
Duplicate of #10
I figured out a workaround for this that works for my use cases.
First, install the main repository with some as = alias, then symlink or copy/install the desired subdirectory to the desired location using build =. Example:
require("paq") {
p {"vlime/vlime", as = "_vlime", build = "ln -fnrs vim ../vlime"}
}
You could use install or rsync here instead of ln, as desired.
(I know that this isn't actually necessary for Vlime anymore, but it's the first example I came up with).
|
2025-04-01T06:40:20.079569
| 2019-08-27T03:49:11
|
485566983
|
{
"authors": [
"imxxiv",
"savsgio"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10506",
"repo": "savsgio/atreugo",
"url": "https://github.com/savsgio/atreugo/issues/35"
}
|
gharchive/issue
|
Can I unify the style of response in atreugo?
In a RESTful API project, I am trying to unify the style of the responses.
If the middlewares or filters return an err, actx.Error() writes the response body, and I can't change it to a JSON response.
I want
HTTP/1.1 401 Unauthorized
Server: atreugo
Date: Tue, 27 Aug 2019 03:38:40 GMT
Content-Type: application/json
Content-Length: 28
{"code":401,"msg":"Unauthorized"}
But when middlewares or filters return an err, the response is:
HTTP/1.1 401 Unauthorized
Server: atreugo
Date: Tue, 27 Aug 2019 03:39:37 GMT
Content-Type: text/plain
Content-Length: 12
Unauthorized
utils.go line 38 ctx.Error(err.Error(), fasthttp.StatusInternalServerError)
func viewToHandler(view View) fasthttp.RequestHandler {
return func(ctx *fasthttp.RequestCtx) {
actx := acquireRequestCtx(ctx)
if err := view(actx); err != nil {
ctx.Error(err.Error(), fasthttp.StatusInternalServerError)
}
releaseRequestCtx(actx)
}
}
router.go line 99 actx.Error(err.Error(), statusCode)
if err != nil {
r.log.Error(err)
actx.Error(err.Error(), statusCode)
}
Because actx inherits the methods of ctx, you can use the ctx methods. Is the effect the same?
Because actx inherits the method of ctx, so can use the ctx.method, Is the effect the same?
Yes, It's explained in README 😄
And I've just added a custom error view in the configuration, so you could configure it with something like this:
config := &atreugo.Config{
...
ErrorView: func(ctx *atreugo.RequestCtx, err error, statusCode int) {
ctx.JSONResponse(atreugo.JSON{"code": statusCode, "msg": err.Error()}, statusCode)
},
...
}
|
2025-04-01T06:40:20.129042
| 2022-01-13T15:02:44
|
1101907072
|
{
"authors": [
"sbidoul"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10507",
"repo": "sbidoul/runboat",
"url": "https://github.com/sbidoul/runboat/issues/40"
}
|
gharchive/issue
|
Bad gateway on /longpolling/poll
Ah, I know why. It's not deployed with workers, so we should not redirect /longpolling to 8072 in the ingress.
Resolved in 115d292c9fa084db31bbe24321afbcaf5240bf23
|
2025-04-01T06:40:20.131420
| 2021-02-13T23:18:36
|
807848957
|
{
"authors": [
"sblack4"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10508",
"repo": "sblack4/learning-terraform-github-actions",
"url": "https://github.com/sblack4/learning-terraform-github-actions/pull/2"
}
|
gharchive/pull-request
|
doda lane
Issue Fixed #
What was a problem?
How this PR fixes the problem?
Check lists (check x in [ ] of list items)
[ ] Test passed
[ ] Coding style (indentation, etc)
Additional Comments (if any)
comment on the pr
terraform plan
terraform plan
terraform plan
|
2025-04-01T06:40:20.137781
| 2017-03-28T21:25:41
|
217697328
|
{
"authors": [
"glittle",
"sboulema"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10509",
"repo": "sboulema/TSVN",
"url": "https://github.com/sboulema/TSVN/issues/19"
}
|
gharchive/issue
|
Error in version 3.5.8?
When VS 2017 is opened (no solution), the "TSVN Pending Changes" window has this:
Exception details:
System.ArgumentException: The path is not of a legal form.
at System.IO.Path.NormalizePath(String path, Boolean fullCheck, Int32 maxPathLength, Boolean expandShortPaths)
at System.IO.Path.GetDirectoryName(String path)
at SamirBoulema.TSVN.Helpers.CommandHelper.GetRepositoryRoot(String path)
at SamirBoulema.TSVN.Helpers.CommandHelper.GetPendingChanges()
at SamirBoulema.TSVN.TSVNToolWindow.OnToolWindowCreated()
at Microsoft.VisualStudio.Shell.Package.CreateToolWindow(Type toolWindowType, Int32 id, ProvideToolWindowAttribute tool)
at Microsoft.VisualStudio.Shell.Package.FindToolWindow(Type toolWindowType, Int32 id, Boolean create, ProvideToolWindowAttribute tool)
at Microsoft.VisualStudio.Shell.Package.Microsoft.VisualStudio.Shell.Interop.IVsToolWindowFactory.CreateToolWindow(Guid& toolWindowType, UInt32 id)
at Microsoft.VisualStudio.Platform.WindowManagement.WindowFrame.ConstructContent()
If the window is closed, then the Tsvn/Windows/Pending Changes menu shows a dialog:
This happened on two computers.
I've reverted to v3.4 and am fine for now.
Sorry! Should be fixed in the new 3.6 release.
|
2025-04-01T06:40:20.154016
| 2017-08-14T09:02:48
|
249971447
|
{
"authors": [
"dwijnand",
"eed3si9n"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10510",
"repo": "sbt/sbt",
"url": "https://github.com/sbt/sbt/issues/3435"
}
|
gharchive/issue
|
Forward port #3397 fix addSbtPlugin to use the correct version of sbt
https://github.com/sbt/sbt/pull/3397
Fixed in https://github.com/sbt/sbt/pull/3442
|
2025-04-01T06:40:20.156091
| 2015-01-30T15:35:46
|
56047531
|
{
"authors": [
"tototoshi",
"typesafehub-validator"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10511",
"repo": "sbt/website",
"url": "https://github.com/sbt/website/pull/98"
}
|
gharchive/pull-request
|
Added sbt-build-files-watcher
:point_right: https://github.com/tototoshi/sbt-build-files-watcher
Hi @tototoshi,
Thank you for your contribution! We really value the time you've taken to put this together.
We see that you have signed the Typesafe Contributors License Agreement before; however, the CLA has changed since you last signed it.
Please review the new CLA and sign it before we proceed with reviewing this pull request:
http://www.typesafe.com/contribute/cla
|
2025-04-01T06:40:20.159838
| 2022-04-16T14:29:41
|
1206125053
|
{
"authors": [
"Ruivalim",
"bulantsevajo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10512",
"repo": "scade-platform/Nimble",
"url": "https://github.com/scade-platform/Nimble/issues/201"
}
|
gharchive/issue
|
Error In Macbook M1
Hello, I tried to run the project on my MacBook M1.
The LSPClient has this error:
Could not find module 'NimbleCore' for target 'x86_64-apple-macos'; found: arm64-apple-macos
Any ideas?
Btw, I'm in love with this project and Scade; looking forward to more!!
Hi @Ruivalim
Thanks for the feedback. We appreciate it.
What Xcode version do you use?
First, please try to clean the build folder and run it again.
If that doesn't help, try to build NimbleCore with Xcode separately. To do that:
Select "New Scheme…" in Product -> Scheme -> New Scheme…
Select "Nimble Core" as the target
Build.
After that, try to build the whole project (the Nimble target). If you find any problems, please ask us and we will try to help you.
|
2025-04-01T06:40:20.162285
| 2023-12-14T14:31:46
|
2041823796
|
{
"authors": [
"carletex"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10513",
"repo": "scaffold-eth/scaffold-eth-2",
"url": "https://github.com/scaffold-eth/scaffold-eth-2/pull/660"
}
|
gharchive/pull-request
|
Update wagmi to latest version
It looks like the LedgerConnector (@ledgerhq/connect-kit-loader) has been compromised.
Context:
https://twitter.com/wevm_dev/status/1735289737185837303
https://twitter.com/bantg/status/1735279127752540465
We don't use it on SE2 (we just use Rainbow Kit's LedgerWallet), so we should be fine.
In any case, we are updating wagmi to the latest version (where they remove the dependency): https://github.com/wevm/wagmi/commit/53ca1f7eb411d912e11fcce7e03bd61ed067959c
We should create the NPX back-merge after merging this.
|
2025-04-01T06:40:20.165137
| 2017-10-08T16:12:49
|
263729132
|
{
"authors": [
"doub1ejack",
"krasinski"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10514",
"repo": "scala-exercises/exercises-cats",
"url": "https://github.com/scala-exercises/exercises-cats/pull/61"
}
|
gharchive/pull-request
|
Break apart a set of five tests
I was taking these tests for the first time and found the first question confusing. When the answer I submitted failed, I could not tell whether it was because I didn't understand how .combine() worked at all, or because one of my answers was wrong.
This PR takes the fifth .combine() test and moves it to a separate question. I think this will be fine because the first 4 .combine() tests are so similar.
I looked at this repo because I had the same issue and wanted to fix that too :) great PR
|
2025-04-01T06:40:20.184414
| 2022-05-07T06:39:33
|
1228545583
|
{
"authors": [
"alexandru",
"gzm0",
"sjrd"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10515",
"repo": "scala-js/scala-js",
"url": "https://github.com/scala-js/scala-js/issues/4670"
}
|
gharchive/issue
|
Please remove the ExecutionContext.global warning
I'm seeing this warning in my project:
[error] ... The global execution context in Scala.js is based on JS Promises (microtasks).
[error] Using it may prevent macrotasks (I/O, timers, UI rendering) from running reliably.
[error]
[error] Unfortunately, there is no way with ECMAScript only to implement a performant
[error] macrotask execution context (and hence Scala.js core does not contain one).
[error]
[error] We recommend you use: https://github.com/scala-js/scala-js-macrotask-executor
[error] Please refer to the README.md of that project for more details regarding
[error] microtask vs. macrotask execution contexts.
[error]
[error] If you do not care about macrotask fairness, you can silence this warning by:
[error] - Adding @nowarn("cat=other") (Scala >= 2.13.x only)
[error] - Setting the -P:scalajs:nowarnGlobalExecutionContext compiler option (Scala < 3.x.y only)
[error] - Using scala.scalajs.concurrent.JSExecutionContext.queue
[error] (the implementation of ExecutionContext.global in Scala.js) directly.
[error]
[error] If you do not care about performance, you can use
[error] scala.scalajs.concurrent.QueueExecutionContext.timeouts().
[error] It is based on setTimeout which makes it fair but slow (due to clamping).
[error]
[error] Future(1).map { x =>
[error] ^
I understand the intent, or why usage of scala.concurrent.ExecutionContext.global may be problematic, however it's a standard import that often gets used for code that cross-compiles to both the JVM and JS, which is one of the primary strengths of Scala.js. Having the official compiler perpetually warn on standard functionality, and suggest a third-party library, isn't good IMO. And why isn't that warning and option available on Scala 3.x?
Personally, I see only 3 possibilities:
Fix global in Scala.js proper;
Deprecate global and remove it completely in a future version;
Leave it as is, and remove the warning;
As it is, removing that warning is a lot of work, especially in a project that compiles for multiple Scala versions. Updating minor Scala.js versions shouldn't be this hard.
Just a suggestion, thanks a lot for your work 🤗
Have you read https://github.com/scala-js/scala-js/issues/4129, which led to this warning? There is a lot of context in there that explains how we got to make this decision. Do you have any new information that would invalidate the reasoning made there?
Hi @sjrd,
I remember that issue, I even added some input at that point — which was that, out of all solutions, continuing with Promise.then is probably the least desirable solution, being non-standard and leaky.
My problem with it is that it's violating the principle of the least surprise, because people that want to use global (or Future), expect fairness guarantees, not performance. Seeing it used in Scala.js was a surprise to me, because in my JavaScript days I've never thought of using it like that.
When importing global, I would expect it to use setTimeout. It's the most obvious implementation for when setImmediate is not available, as that's what people used and still use in the browser. Also, I did some measurements on Node.js, and the clamping on successive calls is around 1ms (instead of the usual 4ms, which is what happens in browsers). Not great, but not terrible.
The issue I'm seeing is with the behavior of Future. After the BatchedExecutor optimizations from Scala 2.13.2, the behavior of global would be less relevant. However, AFAIK, after this issue was reported, the optimizations on global were reverted, and people are now expected to import ExecutionContext.batched. For Future, that should be another way to solve performance issues, if global actually used setTimeout.
Speaking of, what sense does ExecutionContext.batched make in Scala.js, given that global is implemented with Promise.then? The two may be equivalent in Scala.js, but they shouldn't be, as trampolining Runnable tasks still makes sense.
In the browser, at least, setTimeout(0) is throttled for UI responsiveness. And I remember that the reason for why setImmediate never happened as a standard was due to setTimeout(0) being enough, and it did not make sense to throttle setTimeout while introducing a setImmediate workaround, which would have ended up throttled as well. It's what people used for making their callbacks stack-safe, prior to the introduction of async/await and Promise.
In my opinion, setTimeout is perfectly acceptable.
But if it isn't, due to performance reasons, then own the current implementation, instead of triggering a warning.
If Future really is the equivalent of Promise, then it needs to be usable out of the box, with no warnings.
Triggering a warning on usage of ExecutionContext.global is like providing the user with a button, and then complaining when the button gets pressed. Like, don't provide the button, if the implementation is that terrible. Otherwise own it.
I'm basically complaining about usability here.
I have taken a stab at grouping the discussion here a bit and giving my POV.
Usability
And why isn't that warning and option available on Scala 3.x?
Fair point, but IIUC feature-parity (and compiler option syntax) between Scala 2.x / 3.x are a more general issue.
however it's a standard import that often gets used for code that cross-compiles to both the JVM and JS, which is one of the primary strengths of Scala.js. Having the official compiler perpetually warn on standard functionality, and suggest a third-party library, isn't good IMO.
Absolutely. This isn't good. But IMHO it's the least bad we could come up with. So unless we have a better option (see second section below), I do not know what you want us to do.
Deprecate global and remove it completely in a future version;
That's essentially what the warning is (also see response below). Actually removing it is a bit tricky because it's in the Scalalib, which the Scala.js project doesn't directly control. In any case, a new major Scala.js version is not on the horizon any time soon, so IMHO, no point in figuring out how to remove it right now.
Like, don't provide the button, if the implementation is that terrible.
Fair. But you need to think about this more like a deprecation warning (we cannot remove it due to backwards compatibility guarantees). The reason it isn't directly implemented as a deprecation warning is due to how Scala.js itself cross compiles the scala library.
Alternatives
being non-standard
Could you clarify what you mean by non-standard? IIUC, Promise.then is part of the ECMAScript standard.
and leaky
https://github.com/nodejs/node/issues/6673#issuecomment-599188223
suggests that the issues you point out depend on the exact usage of the API and are not inherent to using Promise.then. Whether or not the Scala.js implementation exposes this leak, I do not know. But if it does, that is a bug and we should fix it.
When importing global, I would expect it to use setTimeout
In my opinion, setTimeout is perfectly acceptable.
See: https://github.com/scala-js/scala-js/issues/4129#issuecomment-733061939
Please address this point when you're arguing for using setTimeout.
If Future really is the equivalent of Promise, then it needs to be usable out of the box, with no warnings.
If Future were the (full) equivalent of js.Promise it wouldn't even try to offer fairness guarantees, just like js.Promise.
Ownership
But if it isn't, due to performance reasons, then own the current implementation, instead of triggering a warning.
I'm not 100% sure what you mean by "owning' here. We maintain (to the best of our abilities) both implementations.
Batched Execution
Speaking of, what sense does ExecutionContext.batched make in Scala.js, given that global is implemented with Promise.then?
I do not know.
IIUC, Promise.then is part of the ECMAScript standard.
The signature, yes, the implementation, no — there are 3 major browser engines with 3 different implementations of Promise.then, with slightly different contracts implemented (last time I checked, maybe that changed, but I seriously doubt it).
If Future were the (full) equivalent of js.Promise it wouldn't even try to offer fairness guarantees, just like js.Promise.
Right. Well, it would also leak in flatMap "tail-recursive" loops, but only on Chrome and Firefox, not Safari.
I'm not 100% sure what you mean by "owning' here. We maintain (to the best of our abilities) both implementations.
"Owning it" as in living with the chosen default, with no regrets 🙂
I think this is a contentious issue for a usability concern, and recommending that project to people is useful enough, so I'm going to backtrack on my suggestion.
Cheers,
|
2025-04-01T06:40:20.190169
| 2024-10-28T19:29:46
|
2619354602
|
{
"authors": [
"scala-steward"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10516",
"repo": "scala-steward-org/mill-plugin",
"url": "https://github.com/scala-steward-org/mill-plugin/pull/59"
}
|
gharchive/pull-request
|
Update mill-main to 0.12.1
About this PR
📦 Updates com.lihaoyi:mill-main from 0.11.12 to 0.12.1
📜 GitHub Release Notes - Release Notes - Version Diff
Usage
✅ Please merge!
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
⚙ Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "com.lihaoyi", artifactId = "mill-main" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "com.lihaoyi", artifactId = "mill-main" }
}]
labels: library-update, early-semver-major, semver-spec-minor, commit-count:1
Superseded by #60.
|
2025-04-01T06:40:20.198000
| 2015-11-05T21:57:24
|
115384330
|
{
"authors": [
"SethTisue"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10517",
"repo": "scala/community-builds",
"url": "https://github.com/scala/community-builds/pull/170"
}
|
gharchive/pull-request
|
add Jawn
green run with this change:
https://scala-ci.typesafe.com/job/scala-2.11.x-jdk8-integrate-community-build/91/
FYI @non
not bothering to target 2.11.x/JDK6 here, just JDK8.
this will get merged into the 2.12.x community build next time I merge.
2.12.x merge went fine. (I had to disable 2 more support subprojects for now because the required libraries are currently commented out in the 2.12.x build.)
|
2025-04-01T06:40:20.263679
| 2018-02-27T10:47:50
|
300579291
|
{
"authors": [
"canoztokmak",
"codecov-io",
"matthewfarwell"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10518",
"repo": "scalastyle/scalastyle",
"url": "https://github.com/scalastyle/scalastyle/pull/301"
}
|
gharchive/pull-request
|
Add option to ignore empty lines for MethodLengthChecker
Fix for #300 and #302
Codecov Report
Merging #301 into master will not change coverage.
The diff coverage is 0%.
@@ Coverage Diff @@
## master #301 +/- ##
=====================================
Coverage 0% 0%
=====================================
Files 59 59
Lines 1464 1470 +6
Branches 147 152 +5
=====================================
- Misses 1464 1470 +6
| Impacted Files | Coverage Δ |
|---|---|
| ...g/scalastyle/scalariform/MethodLengthChecker.scala | 0% <0%> (ø) :arrow_up: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 5eb026f...098bb35. Read the comment docs.
Great! Thanks!
|
2025-04-01T06:40:20.266467
| 2022-02-10T20:00:13
|
1130829091
|
{
"authors": [
"gatli",
"phil-scale"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10519",
"repo": "scaleapi/nucleus-python-client",
"url": "https://github.com/scaleapi/nucleus-python-client/pull/217"
}
|
gharchive/pull-request
|
Speed up geometry functions
Speeds up geometry functions for calculating polygon intersection, especially for rectangles.
Nice 👍
Did you happen to profile the runtime of this solution? I'm wondering where we're spending the most time.
Yes, I used cProfile and found that the majority of the time was being spent on numpy array creation. I think at some point I might convert this to a numba program so it compiles jit.
Fun! LMK if you start trying it out 🙂
|
2025-04-01T06:40:20.330871
| 2015-03-18T14:00:54
|
62701501
|
{
"authors": [
"Azdaroth",
"scambra"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10520",
"repo": "scambra/devise_invitable",
"url": "https://github.com/scambra/devise_invitable/pull/539"
}
|
gharchive/pull-request
|
add intermediate method in active_for_authentication? for more flexibility
Small change which provides more flexibility for customizing active_for_authentication?. I needed to add additional condition and couldn't really do much as changing invited_to_sign_up? would break the behaviour in other methods and I ended up aliasing the "original" active_for_authentication?. Having intermediate method for it solves the problem.
Why overriding active_for_authentication? and calling super is not an option?
I was customizing some parts, so I needed to change active_for_authentication?, and using super (where invited_to_sign_up? was already included) was not an option. I could only modify invited_to_sign_up?, which would cause problems in other methods, or keep a reference to the original active_for_authentication? from Devise itself and add a custom conditional.
|
2025-04-01T06:40:20.360129
| 2021-08-20T15:24:25
|
975709029
|
{
"authors": [
"Tombella",
"schana"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10521",
"repo": "schana/carim-clock",
"url": "https://github.com/schana/carim-clock/issues/1"
}
|
gharchive/issue
|
Fitbit Sense . No data.
I have a new Sense and loved your clock face. Worked fine for a day or two but now no data is displayed just the icons..
Uninstalled, installed, rebooted, no joy. Fitbit help says ask you.
Can you fix this?
Most likely cause is battery saving preferences on your phone that prevent communication between the watch and the Fitbit app. The Fitbit app needs to be able to run in the background.
Fitbit has run-in-background permission. The SPECTRUM face works fine and several others do as well.
This is now happening on mine too. Fitbit recently released an update, so I'll have to see what they broke. It's likely the weather updating portion.
It seemed that the weather info disappeared first, but I am not sure. It is a great face, lots of data, but concise. Thanks.
Yeah. They are now returning an int where they used to return a string. I'll have to update the watch face. Sorry for the inconvenience.
Not to worry. Please let me know when I should try again. I appreciate the response.
resolved 16747f1ae567390521ad78f8076780978d3625a3
I submitted the update to fitbit for review.
Thank you.
This has been published to the Fitbit gallery.
I just installed the new version. Works great! Thank you!
|
2025-04-01T06:40:20.392727
| 2022-01-11T23:28:58
|
1099712937
|
{
"authors": [
"schemar",
"therden",
"tiktuk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10522",
"repo": "schemar/obsidian-tasks",
"url": "https://github.com/schemar/obsidian-tasks/issues/484"
}
|
gharchive/issue
|
It should be clearly documented that meta-data must be at the end of a task's line (no trailing tags, for example)
Expected Behavior
I expected all tasks with a priority above none to be returned.
Current Behavior
Several tasks are missing. If I remove #test from Task 4 it gets included in the result. It's something with it being the first item in the list, if I add a task to Task 1 it gets removed from the result.
Steps to Reproduce
Paste the following into a note:
# Obsidian Tasks Test
## First List Test
- [ ] Task 1 🔼
- [ ] Task 2 #test ⏫
- [ ] Task 3 #test
## Second List Test
- [ ] Task 4 🔼 #test
- [ ] Task 5
- [ ] Task 6 #test
- [ ] Task 7 🔼 #test
## Priority
```tasks
not done
priority is above none
heading includes Test
```
Context (Environment)
Obsidian version: 13.19
Tasks version: 1.4.1
[X] I have tried it with all other plugins disabled and the error still occurs
Thanks for a great plugin btw! Really appreciating it and excited to start using it :) .
Hello @tiktuk.
I think that it may not be mentioned in the documentation, but I believe that -- with the sole exception of an Obsidian block ID -- all contents of a task item (including tags) must precede Tasks' date and priority emojis and their values.
I did look through all the docs to see if it was mentioned before creating the issue. Could be it's just not mentioned. I would hope tags at the end of tasks were supported, it looks more natural to have them there, I think.
Hey @tiktuk, thank you for reaching out. And thank you @therden for your response.
You are correct. Tasks does not support anything except a block link after the meta-data like dates, recurrence, priority, etc. You are also correct that the documentation regarding this is outdated and in the wrong place. It is only mentioned for dates from a time when there were only dates: https://schemar.github.io/obsidian-tasks/getting-started/dates/
You can only put block links (^link-name) after the dates. Anything else will break the parsing of dates and recurrence rules.
The documentation should be updated. It is unfortunately unfeasible to support tags after the meta-data.
Thanks for the clarification, @schemar. And it's perfectly fine, actually, I was thinking that I had to add tasks in the beginning of the line like you have in your examples with - [ ] #task take out the trash . But that's not the case, I see :) .
Thanks again for the plugin. And thanks for helping out too, @therden :) .
Thank you for the PR! :heart:
|
2025-04-01T06:40:20.402885
| 2018-04-24T02:30:11
|
317048336
|
{
"authors": [
"Santiago8888",
"abecks",
"grounded-warrior",
"schiehll"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10523",
"repo": "schiehll/react-alert",
"url": "https://github.com/schiehll/react-alert/issues/77"
}
|
gharchive/issue
|
TypeError: Object(...) is not a function
Getting TypeError: Object(...) is not a function when trying to implement this per the demo. Simply adding
import { Provider } from "react-alert";
import AlertTemplate from "react-alert-template-basic";
to the top of my file causes the error
Closing due to inactivity
Uncaught TypeError: Object(...) is not a function
at Provider (react-alert.js:303)
at mountIndeterminateComponent (react-dom.development.js:15425)
at beginWork (react-dom.development.js:15956)
at performUnitOfWork (react-dom.development.js:19102)
at workLoop (react-dom.development.js:19143)
at HTMLUnknownElement.callCallback (react-dom.development.js:147)
at Object.invokeGuardedCallbackDev (react-dom.development.js:196)
at invokeGuardedCallback (react-dom.development.js:250)
at replayUnitOfWork (react-dom.development.js:18350)
at renderRoot (react-dom.development.js:19261)
at performWorkOnRoot (react-dom.development.js:20165)
at performWork (react-dom.development.js:20075)
at performSyncWork (react-dom.development.js:20049)
at requestWork (react-dom.development.js:19904)
at scheduleWork (react-dom.development.js:19711)
at scheduleRootUpdate (react-dom.development.js:20415)
at updateContainerAtExpirationTime (react-dom.development.js:20441)
at updateContainer (react-dom.development.js:20509)
at ReactRoot.push../node_modules/react-dom/cjs/react-dom.development.js.ReactRoot.render (react-dom.development.js:20820)
at react-dom.development.js:20974
at unbatchedUpdates (react-dom.development.js:20292)
at legacyRenderSubtreeIntoContainer (react-dom.development.js:20970)
at render (react-dom.development.js:21037)
at Module../src/index.js (index.js:21)
at __webpack_require__ (bootstrap:782)
at fn (bootstrap:150)
at Object.0 (tarotCard.js:148)
at __webpack_require__ (bootstrap:782)
at checkDeferredModules (bootstrap:45)
at Array.webpackJsonpCallback [as push] (bootstrap:32)
at main.chunk.js:1
I am also getting this error trying to use the basic template:
react-hot-loader.development.js:285 TypeError: Object(...) is not a function
at Provider (react-alert.js:309)
at ProxyFacade (react-hot-loader.development.js:791)
at mountIndeterminateComponent (react-dom.development.js:14811)
at beginWork (react-dom.development.js:15316)
at performUnitOfWork (react-dom.development.js:18150)
at workLoop (react-dom.development.js:18190)
at renderRoot (react-dom.development.js:18276)
at performWorkOnRoot (react-dom.development.js:19165)
at performWork (react-dom.development.js:19077)
at performSyncWork (react-dom.development.js:19051)
Line 309. Little obfuscated because of webpack.
var root = Object(react__WEBPACK_IMPORTED_MODULE_0__["useRef"])(null);
My code:
import { transitions, positions, Provider as AlertProvider } from 'react-alert'
import AlertTemplate from 'react-alert-template-basic'
const alertOptions = {
// you can also just use 'bottom center'
position: positions.TOP_RIGHT,
timeout: 5000,
offset: '30px',
// you can also just use 'scale'
transition: transitions.SCALE,
}
const App = props => (
<AlertProvider template={AlertTemplate} {...alertOptions}>
// ...
</AlertProvider>
)
My React was out of date, upgrading has resolved it.
|
2025-04-01T06:40:20.453417
| 2015-03-23T13:10:41
|
63719643
|
{
"authors": [
"seccubus"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10524",
"repo": "schubergphilis/Seccubus_v2",
"url": "https://github.com/schubergphilis/Seccubus_v2/issues/186"
}
|
gharchive/issue
|
Custom SQL table is missing from DB init scripts
I'm having a hard time understanding how the mysql scripts are to be used to deploy the database. Per the documentation during install I have run:
mysql -p << EOF
create database Seccubus;
grant all privileges on Seccubus.* to seccubus@localhost identified by 'seccubus';
flush privileges;
EOF
mysql -u seccubus -pseccubus < /opt/seccubus/var/structure_v6.mysql
mysql -u seccubus -pseccubus Seccubus < /opt/seccubus/var/data_v6.mysql
But after some initial testing on the site I'm getting errors that the customsql table is missing. Running:
mysql -u seccubus -pseccubus Seccubus < /opt/seccubus/var/upgrade_v5_v6.mysql
Created the missing table but also errored out:
ERROR 1062 (23000) at line 97: Duplicate entry '3' for key 'PRIMARY'
Is this a bug in the structure_v6.mysql file? Is it meant to create a full schema at that version and just missing the table? Or should I have run an earlier structure_vN file and then run the upgrade? The installation actually still says to use the _v4 structure and data files as it runs. Is that the correct approach or is that message outdated?
Fixed when we implemented DB upgrade unit tests see #226
|
2025-04-01T06:40:20.469507
| 2021-12-07T05:18:50
|
1072925580
|
{
"authors": [
"crazymonkyyy",
"schveiguy"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10525",
"repo": "schveiguy/raylib-d",
"url": "https://github.com/schveiguy/raylib-d/issues/12"
}
|
gharchive/issue
|
Remove fluent asserts from raylib
version (unittest)
{
import fluent.asserts;
}
pls delete this dead dependency from raymathext.d
This is not a dead dependency, the unittests use fluent asserts. Though I'm not sure we need it, I'm willing to accept a PR that switches to regular asserts.
Done in 4.2.0
|
2025-04-01T06:40:20.513815
| 2020-07-08T08:02:08
|
653081178
|
{
"authors": [
"SamuAlfageme",
"pferreir"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10526",
"repo": "sciencemesh/charts",
"url": "https://github.com/sciencemesh/charts/pull/5"
}
|
gharchive/pull-request
|
Missing scheme in chart repo URI [skip ci]
The link was broken.
Great catch @pferreir, thanks for spotting it - I'll also rebase the gh-pages branch so it goes live on https://sciencemesh.github.io/charts/
|
2025-04-01T06:40:20.523933
| 2024-04-03T21:31:19
|
2223965705
|
{
"authors": [
"ShanaLMoore",
"kirkkwang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10527",
"repo": "scientist-softserv/hykuup_knapsack",
"url": "https://github.com/scientist-softserv/hykuup_knapsack/issues/199"
}
|
gharchive/issue
|
:gift: Add custom rubocop rule to double combo
Summary
ref: https://assaydepot.slack.com/archives/C0313NKG2DA/p1712176060340309
Notes
https://github.com/samvera/hyrax/pull/6221/files
https://github.com/samvera/hyrax/commit/ef2ffa446fc1fccfa36793d2ba0404931dd35ce8
|
2025-04-01T06:40:20.719373
| 2019-11-18T20:38:21
|
524605923
|
{
"authors": [
"Schleuss94",
"ilayn"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10528",
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/issues/11080"
}
|
gharchive/issue
|
linalg.eigh generalized eigenvalue problem call to LAPACK DSYTRD returns error for n >= 32.767
Hi all,
for my research project I have to deal with very high dimensional dense generalized eigenvalue problems and try to solve them with scipy.linalg.eigh.
Every time the dimensions of the matrices exceed 32,766 x 32,766 the function returns an error.
The following example is sufficient to reproduce the error:
import numpy as np
import scipy.sparse
import scipy.linalg
n = 32767
a = np.random.rand(n,n)
a = a.T.dot(a) + scipy.sparse.identity(n) # ensure that matrix is sym. pos. def.
b = np.random.rand(n,n)
b = b.T.dot(b) + scipy.sparse.identity(n)
scipy.linalg.eigh(a,b)
Warning: This example uses a LOT of RAM but is the smallest possible error example.
Error message:
** On entry to DSYTRD parameter number 9 had an illegal value
Segmentation fault (core dumped)
Scipy/Numpy/Python version information:
1.3.2 1.17.4 sys.version_info(major=3, minor=6, micro=8, releaselevel='final', serial=0)
Since LAPACK returns that the 9th parameter has an illegal value I suppose that there might be an error in the scipy call to the LAPACK function.
Thank you very much for your efforts in advance!
Best regards
Yes that's because LWORK is 32 bit signed integer. So you cannot use more than that to allocate address. However your optimal block size is probably more than 2 and hence you get a result that overflows the 32bit integer. See LWORK definition here.
Unless you somehow use a LAPACK compiled with long (64-bit) Fortran integers, you can't get past that value. Unfortunately there is nothing for us to do on that front.
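As a rough illustration of where the boundary sits — assuming eigh dispatches to LAPACK's divide-and-conquer generalized driver DSYGVD, whose documented workspace requirement is LWORK >= 1 + 6N + 2N² — the requested workspace first overflows a 32-bit signed integer exactly at n = 32767:

```python
# Illustration only: assumes the DSYGVD workspace formula LWORK >= 1 + 6N + 2N**2
# from the LAPACK documentation. A 32-bit signed LAPACK INTEGER tops out at 2**31 - 1.
INT32_MAX = 2**31 - 1


def sygvd_lwork(n):
    """Minimum real workspace DSYGVD asks for, per the LAPACK docs."""
    return 1 + 6 * n + 2 * n**2


for n in (32766, 32767):
    lwork = sygvd_lwork(n)
    print(n, lwork, lwork > INT32_MAX)
# 32766 2147418109 False  -> still representable
# 32767 2147549181 True   -> overflows, LAPACK sees an "illegal" LWORK
```

This matches the observed behavior: problems up to 32,766 x 32,766 work, and the next size up fails inside the workspace setup.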
|
2025-04-01T06:40:20.767553
| 2021-07-05T18:59:12
|
937308769
|
{
"authors": [
"adeak",
"ev-br",
"ilayn",
"newville",
"rgommers",
"rkern",
"tupui"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10529",
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/issues/14354"
}
|
gharchive/issue
|
STY: Maths formatting
This issue is linked to #14330
To be able to use tools (like, but not limited to, Black), we need to define how we, as the scientific community and not just SciPy, want mathematical equations to be rendered.
The goal of this issue is to document and establish a strict set of rules to write maths. The rules must be coherent, extensive and opinionated (one way to do something, unambiguous wording) so they can be integrated in a tool (that tool may be Black).
I think such a document is missing from the scientific community and my hope is that we can all agree on something :smiley:
To quickstart things here are some ideas:
Formatting Mathematical Expressions
To format mathematical expressions, the following rules must be followed. These rules respect and complement PEP 8 (relevant sections include id20 and id28).
If operators with different priorities are used, add whitespace around the operators with the lowest priority(ies).
There is no space before and after **.
There is no space before or after the operators * and /. The only exception is if the expression consists of a single operator linking two groups.
There is a space before and after - and +, except if: (i) the operator is used to define the sign of the number; (ii) the operator is used in a group to mark higher priority.
When splitting an equation, new lines should start with the operator linking the previous and next logical blocks. Lines consisting of only a single digit or a lone bracket are forbidden. Use the available horizontal space as much as possible.
# Correct:
i = i + 1
submitted += 1
x = x*2 - 1
hypot2 = x*x + y*y
c = (a+b) * (a-b)
dfdx = sign*(-2*x + 2*y + 2)
result = 2 * x**2 + 3 * x**(2/3)
y = 4*x**2 + 2*x + 1
c_i1j = (1./n**2.
* np.prod(0.5*(2.+abs(z_ij[i1, :])
+ abs(z_ij) - abs(z_ij[i1, :]-z_ij)), axis=1))
# Wrong:
i=i+1
submitted +=1
x = x * 2 - 1
hypot2 = x * x + y * y
c = (a + b) * (a - b)
dfdx = sign * (-2 * x + 2 * y + 2)
result = 2 * x ** 2 + 3 * x ** (2 / 3)
y = 4 * x ** 2 + 2 * x + 1
c_i1j = (1.
/ n ** 2.
* np.prod(0.5 * (2. + abs(z_ij[i1, :])
+ abs(z_ij) - abs(z_ij[i1, :] - z_ij)), axis=1))
I am -1 on any such attempt to enforce such strict, extensive, and opinionated rules. PEP8's recommendations are the right level for developer guidelines, IMO. I'm not sure that such algorithmically-complete rules exist that are simultaneously both terse enough to be implementable and also don't create unreadable horrors in specific circumstances.
Now, if you want to develop an auto-formatting algorithm that uses whitespace in mathematical expressions more readably than black, that's great! Develop it somewhere and see if people like it. I might even use it if it's opt-in, especially if I can use it through my editor over the current selection of lines, not the whole file.
@tupui I think the +/- part would benefit from "unary"/"binary" terminology.
And can you explain what you mean by
(ii) the operator is used in a group to mark higher priority.
(in the same place)?
And why in
* np.prod(0.5*(2.+abs(z_ij[i1, :])
there is no whitespace around the binary plus?
And is using 1., 2. a conscious choice or just force of habit? If this style guide takes off, taking a stance on the likes of 1. and .1 might be necessary.
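For what it's worth, the spellings in question are value-identical, so any rule here would be purely cosmetic:

```python
# The abbreviated and explicit float spellings denote identical values;
# a style guide choosing between them is making a purely cosmetic decision.
assert 1. == 1.0
assert .1 == 0.1
assert 2.**3 == 2.0 ** 3   # spelling does not change evaluation
```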
That's an example of a (possibly-useful) ambiguity that can be used to make things more readable insofar as they communicate some subtle high-level semantics. That kind of ambiguity would be unavailable to algorithmic auto-formatters.
If the goal is to define rules that can be used to build an algorithmic auto-formatter, I recommend just going and implementing the algorithms and using the implementation as the object of discussion. Human brains just aren't good at predicting what the algorithm is going to do in all of the important cases just from the human-readable rules. Making a quick implementation gives us something concrete to discuss, we can throw real examples at it in bulk and evaluate the results quickly, and the process of implementation will make manifest all of the ambiguities.
As an outsider in scipy-dev, I resisted commenting on #14330. Here, however, you seem to be aiming to codify how the wider scientific community (probably restricted to "scientific python"?) writes math.
The goal of this issue is to document and establish a strict set of rules to write maths.
Why is a strict set of rules beyond "valid Python" necessary? The goal appears to be reformatting working Python code that someone wrote, that quite likely someone else reviewed or has read, and that likely someone else else modified. Scipy has a lot of contributors; lots of people have read the code. The clarity of the math cannot have been too bad or objectionable. If there are isolated cases where it needs fixing, I'm pretty sure you do not need an "established strict set of rules" to clean up the code.
The rules must be coherent, extensive and opinionated (one way to do something, unambiguous wording)
The Zen of Python uses "should" and "obvious" when talking about "one way to do something". It does not mandate that there can be only one way to do something.
so they can be integrated in a tool (that tool may be Black).
Why would you want to do that? A key feature of Python is that the code is readable, and hard to make impenetrable to a reasonably knowledgeable person. Never mind which "strict set of rules" is needed; why is any strict set of rules needed? Why is any code re-formatting tool needed?
When writing Python code with even a modest amount of care, you can be pretty sure that someone else (maybe yourself in 2 years) will be able to read and (at least sort of) understand what it tries to do. This notion that whitespace between operators or mixing of single and double quotes in a codebase will somehow cause cognitive dissonance or start formatting arguments is somewhat hard for me to even comprehend. Do such things actually happen, ever with Python? There were tabs/spaces arguments, are there "1 or 0 whitespaces around '+'" arguments?
Are people confused by single quotes?
Is there evidence that code formatting is a problem? What fraction of scipy, numpy, scikit-xxx PRs have had significant discussions (let alone "controversies") about Python formatting? How many of those are not resolved by "let's be sensible and mostly follow PEP8 when we can"?
I must say that when I first heard of Black I thought it might have been an elaborate hoax. It appears to misread the intent of the namesake quote about automobiles: At the time, there was one choice of color, and the question was whether to expand that choice. The quote was expressing: "don't focus on styling, focus on features and performance". Instead, Black asserts that variation in the formatting of working, valid Python code is a problem that needs fixing, focusing attention on the styling of already highly readable and working code at the expense of features and performance. It creates a problem where none existed.
The intention of Black is that PRs to fix bugs or add features will be held and more work demanded of the contributor in order to meet styling rules. The feature might be accepted, but only if it fits the style. Discrepancies end not in argument but in acquiescence (or perhaps in disgust -- pay attention to the ones who walk away). The intention of Black is that acquiescence ends any debate (was there any?). It enforces uniformity without exception or nuance, expelling non-compliant contributions when necessary. Many of us in the sciences are trying to fully internalize notions of belonging, access, equity, diversity, and inclusion. Would formatting of code submitted by the visually impaired be disadvantaged by these rules? Would it make screen readers more accurate? How does Black improve the community? The approach taken by Black is deliberately and proudly polarizing, basically for the sake of being proud about being polarizing. Let's have a little less of that, please.
If one wanted to follow the engineering wisdom from the Ford quote, they would be careful about formatting new code, try to be consistent and readable, but certainly not fix what ain't broke, and focus on features and performance over styling. They would be sensible. They probably would not even engage in this conversation. My apologies for not being strong enough to hold my tongue.
To format mathematical expressions, the following rules must be followed. These rules respect and complement the PEP8 (relevant sections includes id20and id28)
PEP8 is a guide, not a mandate. It says
"If operators with different priorities are used, consider adding whitespace around the operators with the lowest priority(ies). Use your own judgment; however, never use more than one space, and always have the same amount of whitespace on both sides of a binary operator:"
Somehow this got turned into 4 mandatory rules (with one exception!) about how much whitespace there will be around all binary operators. I did not read that as "Use your own judgment as long as you agree with me".
I think my main objection to this comes down to that line that reads "#Wrong" there, just above all the working code. That is code that is "not-PEP8 compliant", it is not "Wrong". The calculated values will not change. Is anyone confused by this code? If you're in there working on or reviewing the code and want to make it a bit more PEP8-ish, sure go ahead. If it looks readable, it is readable. If you decide that whitespace around a '+' sign isn't needed in something like np.sin(array[2*i+1, :]), well, maybe that's OK sometimes.
Sorry for the length.
Thanks @newville, that's an opinionated but balanced take. I am probably the last person to defend Black, but I think you have taken its use, and the problem it promises to solve, a bit differently than intended. What Black offers is a non-negotiable set of code standards. This becomes particularly effective when many coders have to touch the same codebase frequently. Some come from a Java-like background and don't mind going off to the second screen horizontally, while others come from different backgrounds, all working on the same Python code.
The amount of time wasted in code reviews in that regard, in terms of business hours, is immense: one says "I don't like the PEP 8 line length", the others say other things, etc. Here the use of Black is pretty much justified, since instead of bringing your developers to a common understanding, the team delegates the code structuring to Black and agrees not to discuss it. Then everything is Black'ened and whatever comes out of it hopefully makes sense. And quite often it does the sensible thing. Code reviews get saner (as much as they can; I mean, we are talking about devs here). Now what we have accomplished is that we have adopted the standard of the core devs of Black, and we are done.
However, as many people quickly found out in the past (including us, after using it for about 4 months), this standard is not written by scientific or number-crunching people. And its strict standard often does not go well with, say, NumPy conventions or pandas .function(args).function(args) chains. That's a typical complaint and I think it is justified. So I'm not a fan of Black in that regard, since it makes arrays wonky and uglier (IMO).
The discussion here is whether we should delegate code formatting to Black and be done with it. However, its choices mentioned above, especially about ** and operator precedence, are almost always wrong for human eyes. For example, instead of the "correct" formatting above, I would have written it as
c_i1j = (1./(n ** 2.) * np.prod(0.5 * (2. + abs(z_ij[i1, :])
+ abs(z_ij) - abs(z_ij[i1, :] - z_ij)), axis=1))
because it is obviously going to be a long line, so at least try to make that obvious by breaking it at a sensible point rather than bringing in strange staircase formatting. And Black fails more often than not on unary ops in terms of readability. In any case, you can see the before and after in the scikit-learn conversion: https://github.com/scikit-learn/scikit-learn/pull/18948
Some lines are clearly disgusting to see in the "after" state, but I tend to like the relaxed 88-character line length, since 79 is a bit too constraining in terms of horizontal space in my opinion. But we won't need Black to have that kind of relaxation.
Thank you @newville for expressing your sentiment on this.
As @ilayn pointed out, the goal is to save everyone endless discussions about styling.
Sure, PEP 8 was written as a guide and even starts with "A Foolish Consistency is the Hobgoblin of Little Minds". Still, over the decades we have seen this guide used as an authoritative way to write Python. And we could argue that it has served the Python community well in general.
Having a common way of writing things across projects has an underrated value. Here I am not advocating to change the face of maths in all Python scripts used in science. I am rather asking to reflect on how to write maths in large libraries such as SciPy and NumPy. The difference is paramount. For the developers having to navigate across these different projects, I think there is great value in having common practices. It enables faster onboarding of new contributors and removes lots of churn. Of course, long-time contributors might not agree, as they have years of experience navigating around these and other projects.
Newcomers, students, and, as you rightfully noted, people with disabilities would greatly benefit from a common ground. Having a unified language helps lower the barriers, and tools can be written to help them. Imagine if Black (or anything else) was used by every single project: you could more easily design tools that could read and write code for visually impaired people. Plus, things like that can/should be linked to pre-commit hooks. So no matter what you do, when your code appears in the PR it will have the expected style without you having to do anything.
Yes, it removes the developer's own style and sensitivity. But I will argue that we should not be able to see its mark in such a large open source project. As developers, we read code all day long, and having to make this contextual change is not free; it can also lead to misreads and bugs. We certainly do not want a different developer style for every single file. You can make the parallel with standards in industry or rules in our society. Just because the big thing that everyone depends upon is very strict on some aspects does not mean you have to do the same for your own project, or that you are not free anymore.
Lastly, I would also note that we currently have tons of hard rules which involve so much more thinking and manual actions. Things like input validation, proper way to test, documentation, CI, etc. Here we are mostly talking about spacing that a machine would do for us so we don't have to talk about it.
the goal is to save everyone endless discussions about styling.
AFAICT, we largely do not have endless discussions about styling in the actual code reviews. We only have endless discussions about styling when someone proposes to use black.
Newcomers, students, and as you rightfully noted, people with disabilities, would greatly benefit from a common ground.
Citation needed. I have seen no evidence that the level of formatting that black and company provide any measurable benefits in this regard.
I've laid down this marker before, and I think it satisfies all of the evidenced benefits that you want from black: my ideal auto-formatting tool is one that leaves style-conforming code alone and only fixes up code that deviates. Somehow the benefits of some kind of auto-formatter got conflated with requiring a canonicalizing auto-formatter. black is not the only possible solution.
At minimum, a tool like darker can be fruitfully used by contributors to apply auto-formatting just to their contribution. All of the benefits with respect to the easing of writing code apply just as much to darker as to black. I recommend that you implement your preferred math styling rules in a way that can be plugged into that, and we can evaluate the results concretely rather than spinning out more endless discussions about styling in the abstract.
my ideal auto-formatting tool is one that leaves style-conforming code alone and only fixes up code that deviates.
It looks like autopep8 might fit that bill.
Let me jump in here, since apparently there's two things being mixed:
do we want/need a code formatter like black?
is it possible to come up with consistent guidelines for writing math?
This issue is not about 1, only about 2. No change to any SciPy way of working is proposed
What black does today for math is bad, really bad. Something like hypot2 = 2 * x + 3 * y ** 2 is code no numerical Python person would write by hand. PEP 8 also falls well short here; for example, it is completely silent on the power operator. So the question is: is it possible to do better than black and PEP 8? I'm pretty sure the answer is yes; the question is just how much better. Once we have the answer, at least there's something to point tool authors to. Maybe black et al. can implement it, maybe not. If they did, it would be helpful.
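To make the complaint concrete, here is the same expression in the two spacings being contrasted (both are valid Python and evaluate identically; only readability differs):

```python
x, y = 1.0, 2.0

hypot2_pep8 = 2 * x + 3 * y ** 2   # spacing a Black/PEP 8 pass tends to produce
hypot2_hand = 2*x + 3*y**2         # precedence-grouped spacing numerical authors prefer

assert hypot2_pep8 == hypot2_hand == 14.0
```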
This issue is linked to #14330
Yes, this is about 1. A change to the SciPy way of working has been proposed. This particular issue is a sub-discussion in response to a specific objection to that proposal.
I agree that there are useful attempts at improving auto-formatting for mathematical expressions. I don't think it's all that helpful to try to hash them out here if we're foreclosing the idea of changing the SciPy way of working. Just go implement it somewhere and ask for feedback on the mailing list.
FWIW, I circulated a project idea among students locally. We'll see if anything comes out of this. No definite ETA at the moment, but we'll definitely report back if anything worthwhile comes out of it.
This issue is linked to #14330
Yes, this is about 1. A change to the SciPy way of working has been proposed. This particular issue is a sub-discussion in response to a specific objection to that proposal.
No, it is not about 1; Ralf is correct. I am sorry if I misled you by linking to this issue. But read what I wrote in the description and the further reply. I am only asking to write some guidelines for mathematical equations. It had some conditionals; I just added a bit more.
I agree that there are useful attempts at improving auto-formatting for mathematical expressions. I don't think it's all that helpful to try to hash them out here if we're foreclosing the idea of changing the SciPy way of working. Just go implement it somewhere and ask for feedback on the mailing list.
Sorry, but I do not have the expertise (and time; I already followed this path a few times after you suggested it and just lost time here...) to implement all the ideas that I have. And in this case, there might be existing tools doing the job that just need some direction. This is the goal of this issue: to agree on how we should write maths. Then whatever we do with this document is extra. It can start as a general PEP8-ish section in our contributing guide, up to something used by auto-formatting tools.
The description is very focused on defining rules for black-like tools, not only guidelines. If that is no longer the intent, you may wish to amend the wording (preserving the old version for reference, of course).
To be able to use tools (like, but not limited to Black),
a strict set of rules to write maths.
The rules must be coherent, extensive and opinionated (one way to do something, unambiguous wording) so they can be integrated in a tool (that tool may be Black).
Those are worthwhile goals because the state of those tools is pretty pathetic for math expressions. But to formulate rules that work within the constraints of algorithmic auto-formatters, you really need to work from code first. The problem that these auto-formatters face is much more constrained than just a human-readable style guide that we can add to our contributor docs. It seems like these ought to be synergistic goals, that making progress on one will gain you progress on the other whichever order you do them, but I think the similarities are deceptive; there are conflicts due to the different constraints on who/what is performing the style recommendations. So there are two tracks that you can go down: build an auto-formatter that produces output that you like, and writing human-level style guides.
If you want to make progress on the former, I think you have to start with code. There's no benefit to having long discussions here on scipy/scipy about it. Just go build it and ask for feedback from our community on the mailing list. Until you are proposing that scipy use that tool, it's not really on-topic here on the issue tracker.
If you want to make progress on the latter, that's definitely on-topic here, but I think you need to relax the "extensive and opinionated so they can be integrated in a tool" constraints.
@tupui @rgommers
Let me jump in here, since apparently there's two things being mixed:
1 do we want/need a code formatter like black?
2 is it possible to code up with consistent guidelines for writing math?
This issue is not about 1, only about 2. No change to any SciPy way of working is proposed
@rgommers Um, yes it is. And not only for SciPy but "we, as the scientific community and not just SciPy". The goal is very clearly stated as defining how math is rendered so that tools like Black may be used. It is not isolated to point 2.
This issue is linked to #14330
Yes, this is about 1. A change to the SciPy way of working has been proposed. This particular issue is a sub-discussion in response to a specific objection to that proposal.
Not it is not at 1, Ralf is correct. I am sorry if I mislead you with linking to this issue. But read what I wrote in the description and the further reply. I am only asking to write some guidelines for mathematical equations. It had some conditional, I just added a bit more.
Huh? The message sent to the mailing list on 1 July (https://mail.python.org/pipermail/scipy-dev/2021-July/024924.html) has a subject line of "[SciPy-Dev] Code formatting: black".
Issue #14330 is titled "MAINT/STY: use Black formatting" and opens with "I propose to apply Black on our code base."
This issue begins:
This issue is linked to #14330
To be able to use tools (like, but not limited to Black), we need to define how we, as the scientific community and not just SciPy, want mathematical equations to be rendered.
Now you both say that this is not about using Black to reformat Scipy or other scientific code and apologize if some of the links might mistakenly lead someone to conclude that?
I think that if you are concerned about consistency, there may be somewhere closer to home that may need more attention than code formatting.
So clearly the issue description wasn't as focused as it should have been. @tupui discussed this with me, that's why I knew the goal was (2). I even pre-read what he wrote (but not thoroughly enough), so I'm partly to blame for it being unclear.
From initial discussion on gh-14330 it's clear that many people are -1 on using black because its math formatting is terrible. So that's on hold / rejected, unless math formatting can be fixed. There's no point continuing that discussion. Clearly using black is blocked. I'm happy to comment on the PR saying exactly that. We can also just close that PR.
By the way, I have no clear preference about any of this. I've only ever used black once, and it wasn't ideal. I'm happy to give it a chance though, if and only if all blocking concerns have been resolved.
Since this issue has obviously derailed, I suggest also closing this one and starting fresh. It's cleaner than trying to amend the initiating issue description and resuming an essentially new discussion mid-thread.
I still think it's questionable that the scipy/scipy issue tracker is the best place to have the amended discussion. Maybe the SPEC Discourse is a more appropriate venue?
Since this issue has obviously derailed, I suggest also closing this one and starting fresh. It's cleaner than trying to amend the initiating issue description and resuming an essentially new discussion mid-thread.
Agreed, let's close it.
I still think it's questionable that the scipy/scipy issue tracker is the best place to have the amended discussion. Maybe the SPEC Discourse is a more appropriate venue?
That does sound like a good suggestion. We never had a place like that, but we do now - it'd be good to try and start using it. There's still little traffic on that Discourse, but we can point people to it on the mailing list.
Sounds like a good idea, agreed.
Thank you all for the discussion. In the future, I hope we can have discussions with a productive outcome and fewer emotions.
This would have been a good candidate to transfer to Discussions, had we enabled it, by the way.
| gharchive/issue | created 2022-04-10T05:43:37 | id 1198889193 | scipy/scipy | https://github.com/scipy/scipy/issues/15966 | authors: HirotsuguMINOWA, mdhaber | license: BSD-3-Clause |
import scipy.optimize calling
Describe your issue.
Thank you very much for your development of scipy.
If you know a solution, please tell me.
The error occurred when just importing scipy.optimize on PyPy with scipy etc. installed.
$ pypy3
Python 3.9.12 (05fbe3aa5b0845e6c37239768aa455451aa5faba, Mar 29 2022, 08:15:34)
[PyPy 7.3.9 with GCC 10.2.1 20210130 (Red Hat 10.2.1-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>> import scipy.optimize
terminate called after throwing an instance of 'pybind11::error_already_set'
what(): IndentationError: ('unexpected indent', ('<string>', 2, 1, ' class pybind11_static_property(property):\n', 2))
Aborted
root@48933591bab3:~#
PyPy was installed based on the official PyPy Docker image.
Env:
MacOS: catalina
Docker: 4.7.0 (77141)
PyPy 7.3.9, 7.3.8 (both)
pybind11 2.9.2
Reproducing Code Example
import scipy.optimize
Error message
terminate called after throwing an instance of 'pybind11::error_already_set'
what(): IndentationError: ('unexpected indent', ('<string>', 2, 1, ' class pybind11_static_property(property):\n', 2))
Aborted
SciPy/NumPy/Python version information
PyPy 7.3.9, Scipy 1.8.0, Numpy 1.22.3
It looks like there may have been a bug in PyPy; see https://github.com/conda-forge/pypy-meta-feedstock/issues/25. Please upgrade PyPy, SciPy (to 1.9.3), and open a new issue with a title that identifies the problem, e.g. "BUG: import scipy.optimize fails on PyPy" . (That said, I'm not sure if we support PyPy right now, so I can't guarantee that it will be addressed.)
| gharchive/pull-request | created 2020-05-07T17:47:29 | id 614234694 | scipy/scipy | https://github.com/scipy/scipy/pull/12056 | authors: EverLookNeverSee, rgommers, w-rfrsh | license: BSD-3-Clause |
ENH: Modifies shapiro to return a named tuple
Changes the return of the shapiro function so that it now returns a named tuple, ShapirotestResult, which has "statistic" and "pvalue" as fields. This change was made to make the function's return similar to other functions like scipy.stats.normaltest and scipy.stats.ttest_ind.
Previously we had to create two objects, something like stats, p = scipy.stats.shapiro(x). Otherwise, we would have to access these values by index (using [0] or [1]), which makes understanding more difficult for someone who does not know exactly how the function behaves.
With this implementation, we can write shapiro_test = scipy.stats.shapiro(x) and, for example, get the p-value with shapiro_test.pvalue.
The function description has also been updated.
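A minimal sketch of the named-tuple pattern described above (the construction here is an illustrative stand-in, not SciPy's actual implementation):

```python
from collections import namedtuple

# Hypothetical stand-in for the result type this PR introduces
ShapirotestResult = namedtuple('ShapirotestResult', ('statistic', 'pvalue'))

result = ShapirotestResult(statistic=0.98, pvalue=0.42)

# Named access replaces magic indices...
assert result.statistic == 0.98
assert result.pvalue == 0.42

# ...while tuple unpacking still works for existing callers
stat, p = result
assert (stat, p) == (0.98, 0.42)
```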
@w-rfrsh Please test your added feature into scipy/stats/tests/test_morestats.py
@w-rfrsh Please test your added feature into scipy/stats/tests/test_morestats.py
Done :D
LGTM now, merged. Thanks @w-rfrsh
| gharchive/pull-request | created 2021-01-25T11:37:26 | id 793306432 | scipy/scipy | https://github.com/scipy/scipy/pull/13436 | authors: AtsushiSakai, StanczakDominik, ev-br | license: BSD-3-Clause |
ENH: interpolate: add input validation to check input x-y is strictly increasing for fitpack.bispev and fitpack.parder
Reference issue
fix #8565
What does this implement/fix?
Some scipy users are confused by this fitpack error message.
https://github.com/scipy/scipy/blob/1d10a4afe95cdd4bcae80db5f312c466d9921d4e/scipy/interpolate/fitpack2.py#L910
like:
#8565,
https://github.com/cmbant/CAMB/issues/40
python - Unable to use `scipy.interpolate.RectBivariateSpline` with `matplotlib.pyplot,plot_surface` - Stack Overflow,
python: How to pass arrays into Scipy Interpolate RectBivariateSpline?
The reason for this error message is that the input data is invalid, which is checked in fitpack.bispev
https://github.com/scipy/scipy/blob/2a9e4923aa2be5cd54ccf2196fc0da32fe459e76/scipy/interpolate/fitpack/bispev.f#L45-L50
and in fitpack.parder (when derivative is calculated)
https://github.com/scipy/scipy/blob/5f4c4d802e5a56708d86909af6e5685cd95e6e66/scipy/interpolate/fitpack/parder.f#L50-L54
This restriction means that the input arrays x and y need to be strictly increasing, but the Python code does not check it and just shows the FORTRAN error code.
Actually, the doc states that "If grid is True: The arrays must be sorted to increasing order.", but it seems that some users do not notice it.
So, I added input validation to these fitpack functions to check that the input x and y are strictly increasing, along with tests.
I think there is no backward-compatibility break because the new validation raises ValueError as before.
Thanks @AtsushiSakai , @tylerjereddy, merged.
Hey, just wanted to say thank you, I just hit that issue and I'm really happy to see it's already been tackled! :)
| gharchive/pull-request | created 2023-10-19T22:51:39 | id 1953176015 | scipy/scipy | https://github.com/scipy/scipy/pull/19412 | authors: mdhaber, tirthasheshpatel | license: BSD-3-Clause |
ENH: stats: add support for masked arrays for circular statistics functions
Reference issue
Towards #14651
What does this implement/fix?
Adds support for masked arrays in stats.circmean, stats.circvar, and stats.circstd
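As a rough illustration of what masked-array support means for a circular statistic: masked entries are simply dropped before the angular average. This is a sketch only; the real implementation goes through shared stats machinery:

```python
import numpy as np

def circmean_masked(a, high=2*np.pi, low=0.0):
    """Circular mean that ignores masked entries (illustrative sketch)."""
    a = np.ma.asarray(a)
    ang = (a.compressed() - low) * 2*np.pi / (high - low)   # map to radians
    mean = np.angle(np.mean(np.exp(1j * ang))) % (2*np.pi)  # mean direction
    return mean * (high - low) / (2*np.pi) + low            # map back

# 359 deg and 2 deg average to 0.5 deg (not 180.5); the 500.0 entry is masked out
data = np.ma.masked_greater([359.0, 2.0, 500.0], 360.0)
assert abs(circmean_masked(data, high=360.0) - 0.5) < 1e-6
```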
It looks like a lot of code that deals with NaNs is now unused and can be removed, right?
Removed the unnecessary code @mdhaber.
There seems to be something in the decorator's handling of masked arrays that promotes the dtype (or at least masked arrays are promoting from 32 to 64 for all three of these functions). Could your next PRs be to address this (if it is a bug in the decorator) and #19350 (comment)?
Yeah, will try to get a PR up tomorrow!
Hi @tirthasheshpatel , can you open that PR? SciPy 1.12 is scheduled to branch in about 2 weeks, and it's important to get that in. It would also be nice if we could get most of the remaining low-hanging fruit in there.
There seems to be something in the decorator's handling of masked arrays that promotes the dtype (or at least masked arrays are promoting from 32 to 64 for all three of these functions).
It doesn't seem to be the decorator's fault:
In [1]: from scipy import stats
In [2]: import numpy as np
In [3]: stats.circmean(np.ones(5, dtype=np.float32), _no_deco=True).dtype
Out[3]: dtype('float64')
Interestingly, that's also because NumPy treats arrays and scalars differently when it comes to dtype promotion:
In [4]: (np.float32(9.) * 2.).dtype
Out[4]: dtype('float64')
In [5]: np.ones(5, dtype=np.float32) * 2.
Out[5]: array([2., 2., 2., 2., 2.], dtype=float32)
Might be an issue for a lot of functions because of this behavior.
Right I actually found that, too. Glad we come to the same conclusion. If it's not the decorator's fault, don't worry about it for now.
But there is still https://github.com/scipy/scipy/pull/19350#issuecomment-1758711603.
| gharchive/pull-request | created 2023-11-10T21:24:34 | id 1988423448 | scipy/scipy | https://github.com/scipy/scipy/pull/19507 | authors: lucascolley, mdhaber | license: BSD-3-Clause |
MAINT/DOC: stats: fix lint errors
Reference issue
Towards gh-19490.
What does this implement/fix?
Appeases the linter for all current errors related to stats, to stop potential future CI fails.
Additional information
Alternatively, we could use noqa, or even make the linter ignore these files, if that seems more appropriate.
I don't think this needs a commit ignore. It's a meaningful improvement to not redefine these functions once per iteration.
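The kind of fix being referred to: hoisting a function definition out of the loop body so it is created once instead of once per iteration.

```python
# Before: the helper is rebuilt on every pass through the loop
results = []
for x in range(3):
    def square(v):
        return v * v
    results.append(square(x))

# After: define once, reuse everywhere
def square(v):
    return v * v

results_hoisted = [square(x) for x in range(3)]
assert results == results_hoisted == [0, 1, 4]
```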
| gharchive/pull-request | created 2024-10-17T15:46:40 | id 2595120435 | scipy/scipy | https://github.com/scipy/scipy/pull/21721 | authors: DietBru, adammj, dhomeier, ilayn, j-bowhay, mdhaber, neutrinoceros, pllim | license: BSD-3-Clause |
MAINT: signal: lombscargle docstring tweaks
Reference issue
Closes https://github.com/scipy/scipy/issues/2162
Also addresses passing in inputs as lists, per https://github.com/scipy/scipy/issues/8787
What does this implement/fix?
Tweaks to the docstring to make the measurement baseline requirement more explicit, as well as other minor corrections.
Additional information
A few other minor corrections are being made that were noticed while updating the text suggested in the referenced issue.
@DietBru You had suggested in https://github.com/scipy/scipy/pull/21277#discussion_r1694270980 that I could use _rename_parameter() to rename the misleading x parameter to t in the function. Since I'm going for broke on refinements at this point, could I go ahead and commit that change here? Also, if so, does dep_version need to be specified?
@tylerjereddy Not sure when the cutoff date is, but this might also be worthwhile to get into 1.15.0.
Would you be able to tackle https://github.com/scipy/scipy/issues/8787#issuecomment-2421424091 here too?
I think because it's important to get these changes merged, it would be best if renaming of arguments is left to a different PR, as there might be more discussion required for that.
Would you be able to tackle #8787 (comment) here too?
I'll take a look at it today.
I think because it's important to get these changes merged, it would be best if renaming of arguments is left to a different PR, as there might be more discussion required for that.
Understood.
I can reproduce the error in https://github.com/scipy/scipy/issues/8787 , but I haven't yet figured out the solution.
I think this might be related to the "nyquist frequency" (not exactly the same thing for uneven sampling). I'm going to investigate further to see if I can test for this ahead of time instead of just throwing a divide-by-zero error.
Yup. It is, but I don't see any way to test for this ahead of time, short of doing all of the calculations ahead of time.
All of the sample times are multiples of 1000 s, which leads to a "nyquist" frequency of 0.0031415926535897933 rad/s. One of the freqs (freq[0]) is exactly 5x this (0.015707963267948967 rad/s), and causes D to equal zero. However, if you take some smaller or larger multiple of the "nyquist", the D is always very small (< 1e-16), but not zero. So this is just a random numerical fluke.
As long as at least one of the sample times isn't a multiple of 1000, or as long as none of the freqs is 5x the "nyquist", it won't fail.
But it works. Just add these lines:
# at beginning
eps = np.finfo(freqs.dtype).eps
# when calculating D
D = CC * SS - CS * CS + eps
@neutrinoceros @jakevdp @mhvk @dhomeier @pllim Would one of you mind checking if the final commit causes any issues with astropy's tests? I was looking for a robust solution to the inputs given in the referenced issue (https://github.com/scipy/scipy/issues/8787). And this was the smallest, most robust, way to prevent any possible divisions by zero.
Huh, so you were able to reproduce gh-8787; I guess it is platform-dependent. I'd suggest adding the test and running that on CI separately to show that CI was sensitive to the failure to begin with, otherwise the unit test does not really demonstrate the fix.
If you can reproduce that, what about gh-13812? I had the file analyzed for safety before opening it; seemed OK, and did seem to contain just two CSV files.
Looks like there was also some work to avoid a zero-division error in the past (gh-3787) but perhaps that is different?
otherwise the unit test does not really demonstrate the fix.
I haven't added a test specifically for this numerical issue yet.
Thanks for the ping, @adammj . Would https://github.com/astropy/astropy/pull/17211 help?
Re: https://github.com/scipy/scipy/pull/21721#issuecomment-2423246777
Ooops... CI failed to build scipy from source.
p.s. Failed to build locally too on WSL2 via pip install git+https://github.com/adammj/scipy.git@lombscargle
@mdhaber I changed the added test to go back to the values provided in the original issue. Without the previous commit (D=eps) this test will fail. However, it passes now.
p.s. Failed to build locally too on WSL2 via pip install git+https://github.com/adammj/scipy.git@lombscargle
I managed to build locally on macOS-14.7-arm64-arm-64bit-Mach-O against OpenBLAS 0.3.28 and ran our full periodogram test suite successfully, but I don't think we have any tests pushing the precision to its limits.
Here's some results with both float64 and float32, showing the same test data, but with different frequency values.
I can only get it to fail on this one multiple (I haven't exhaustively tested this). But I wanted to show that even when the value of D < eps, that the calculations still work. It is only exactly D==0 that is the problem.
Whoops, I saw @dhomeier's comment only after running astropy's test suite myself... anyway, seconded !
@DietBru You had suggested in #21277 (comment) that I could use _rename_parameter() to rename the misleading x parameter to t in the function. Since I'm going for broke on refinements at this point, could I go ahead and commit that change here? Also, if so, does dep_version need to be specified?
It is @j-bowhay, not me, who made the suggestion :smiley: Hence, I do not have any experience with _rename_parameter().
My 2 cents are to do this in a separate pull request, because reviewing a single change is always a bit easier.
_rename_parameter() to rename the misleading x parameter to t in the function. Since I'm going for broke on refinements at this point, could I go ahead and commit that change here? Also, if so, does dep_version need to be specified?
dep_version is specified if you want to deprecate and eventually stop accepting x. It is more disruptive because users who are using x will get a warning that they need to change their code to use t, but it will allow you to remove the decorator (and its associated performance overhead) and any mention of x in the future. If this is considered worth the disruption, you would specify 1.15.0 as the version; if you're happy with leaving it in place indefinitely, you can leave it unspecified. Either way, yeah, it would be good to go in a separate PR. You would probably also want to change most existing tests that use the old name to the new name (but you'll always want to have at least one test with each to confirm that the decorator is working). Then you can post a message on the forum justifying the choices and ask for feedback there.
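For illustration, a minimal sketch of what such a renaming decorator could look like. The names and behavior here are hypothetical, not scipy's actual private `_rename_parameter` implementation:

```python
import functools
import warnings

# Hypothetical parameter-renaming decorator: accepts the old keyword,
# optionally warns, and forwards it under the new name.
def rename_parameter(old, new, dep_version=None):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if old in kwargs:
                if dep_version is not None:
                    warnings.warn(f"`{old}` is deprecated, use `{new}`",
                                  DeprecationWarning, stacklevel=2)
                kwargs[new] = kwargs.pop(old)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@rename_parameter("x", "t")  # no dep_version: keep old name indefinitely
def lombscargle_like(t, y):
    return t, y

# Both spellings reach the function through the same parameter.
assert lombscargle_like(x=[1, 2], y=[3, 4]) == ([1, 2], [3, 4])
assert lombscargle_like(t=[1, 2], y=[3, 4]) == ([1, 2], [3, 4])
```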
@DietBru here's a suggested change that makes it clear that we're just trying to prevent the divide-by-zero, not attempting to "massage" the equations for any other purpose.
I haven't found a better, more specific exception to catch this, as it's actually emitted as "RuntimeWarning: divide by zero encountered in scalar divide".
# calculate a and b
a_numerator = (YC * SS - YS * CS)
b_numerator = (YS * CC - YC * CS)
try:
    # where: y(w) = a*cos(w) + b*sin(w) + c
    a[i] = a_numerator / D
    b[i] = b_numerator / D
    # c = Y_sum - a * C_sum - b * S_sum  # offset is not useful to return
except RuntimeWarning:
    # there can be spurious numerical issues around the "pseudo-Nyquist"
    a[i] = a_numerator / (eps * math.copysign(1.0, D))
    b[i] = b_numerator / (eps * math.copysign(1.0, D))
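One caveat with the snippet above: NumPy emits RuntimeWarning as a *warning*, not an exception, so a bare try/except never triggers unless the warning is escalated. A sketch of one way to make the division actually raise (illustrative, not the proposed patch):

```python
import numpy as np

num = np.array(1.0)
D = np.array(0.0)
try:
    # np.errstate turns the floating-point "divide" condition into a
    # FloatingPointError instead of a RuntimeWarning.
    with np.errstate(divide='raise'):
        result = num / D
except FloatingPointError:
    result = np.inf  # fall back however the caller prefers

assert result == np.inf
```

An alternative is `warnings.catch_warnings()` with `simplefilter("error", RuntimeWarning)`, but `np.errstate` is scoped to exactly the NumPy operations in question.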
If I understand what you want to do then this is a 2x2 linear system perturbation
$$
\left(
\begin{bmatrix}
CC & CS \\
CS & SS
\end{bmatrix} + \epsilon I
\right)
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix} =
\begin{bmatrix}
y_1 \\
y_2
\end{bmatrix}
$$
where the matrix on the left is rank-deficient. But adding eps to the determinant does not achieve this. So it is not doing what you want to do.
Here my small objection is about inverting this array. As the general mantra says, don't invert the matrix; the same applies here. You can eliminate this symmetric 2x2 array (depending on which diagonal is larger) and modify the resulting corner if it is exactly 0. Then this eps modification would indeed be a perturbation to the rank deficiency.
@ilayn I think I follow. And, this makes me realize I should’ve gone looking for common acceptable solutions to these numerical edge cases.
I was looking for a way to minimize the number of calculations and conditions that don’t exist in the paper, both to prevent slowing down the loop, but also so that it’s easier for the reader to follow what the code is doing and its relation to the formulas in the paper. Basically, in this very rare case, the solution is probably nonsense. But the values nearby are fine, so I’m trying to find a fix that is “imperceptible”.
I’ll take a look for some more robust numerical solution. But I’m curious if you already have specific code in mind.
You can also solve the system:

$$
\begin{bmatrix}
CC & CS \\
CS & SS
\end{bmatrix}
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix} =
\begin{bmatrix}
y_1 \\
y_2
\end{bmatrix}
$$
So if $CS=0$, then it is a diagonal system. If $CC = 0.0$ then we can solve $y_1 = CS x_2$, $y_2 = CS x_1 + SS x_2$, which is consistent. If $CC \neq 0$, then we "Gaussian eliminate" the second row with $-CS/CC$ and hence solve triangularized systems. This is what LU solvers do anyway, and how they detect exact 0.0s, if any.
So that's a problem, there are no zeros in the equation/matrix (for this specific example from the linked issue).
CC = 0.09549150370681751
SS = 0.9045084962931825 (always 1-CC)
CS = -0.29389262737712285
It just works out that D is calculated to be 0 in the equation D = CC * SS - CS * CS. However, if I create a matrix and ask numpy for the determinant, it is not 0 (but smaller than eps).
M = np.array([[CC, CS], [CS, SS]], dtype=a.dtype)
np.linalg.det(M) # returns -1.4257357797566966e-17
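For illustration, the two determinant paths side by side, using the values quoted above. This is a sketch: `np.linalg.det` goes through an LU factorization internally, so it rounds differently than the textbook formula, which cancels catastrophically here.

```python
import numpy as np

CC = 0.09549150370681751
SS = 0.9045084962931825
CS = -0.29389262737712285

# Textbook 2x2 determinant: CC*SS and CS*CS are nearly equal, so the
# subtraction loses essentially all significant digits.
D_formula = CC * SS - CS * CS

# LU-based determinant of the same matrix.
M = np.array([[CC, CS], [CS, SS]])
D_lu = np.linalg.det(M)

# Both results are tiny compared to the inputs; on some platforms
# D_formula rounds to exactly 0.0, which is the failure discussed above.
assert abs(D_formula) < 1e-15
assert abs(D_lu) < 1e-15
```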
Instead of re-inventing the wheel, I can just use scipy's LU solver. But, in this case, it's not doing anything special, per se, it's just that due to numerical differences in the paths (types and order of operations) it works out.
from scipy.linalg import lu_factor, lu_solve

if D != 0:
    a[i] = (YC * SS - YS * CS) / D
    b[i] = (YS * CC - YC * CS) / D
else:
    # If D == 0, this is a rare numerical issue that can occur around the
    # "pseudo-Nyquist". Use LU solver.
    lu, piv = lu_factor(np.array([[CC, CS], [CS, SS]], dtype=a.dtype))
    ab = lu_solve((lu, piv), np.array([YC, YS], dtype=a.dtype))
    a[i] = ab[0]
    b[i] = ab[1]
Comparing the results between the two branches for everywhere that D != 0, they are the same within 1.8e-12.
Sorry for leading you into wrong direction earlier, by not reading carefully :sweat_smile:. If $D$ is too close to zero, a ValueError should be raised.
This can be justified by looking into the derivation from Lomb. So if $D$ is zero, $[a\ b]^T$ is undetermined.
scipy.linalg.solve could be used for a simple implementation. Perhaps something like this (did not verify if correct):
AA, bb = np.array([[CC, CS], [CS, SS]]), np.array([YC, YS])
try:
    xx = solve(AA, bb)
except (LinAlgError, LinAlgWarning):
    raise ValueError("Could not find solution ...")
a[i], b[i] = xx[0], xx[1]
Just to clarify: I checked that the symbolic solution of the vector-matrix equation is what is implemented. I am not sure anymore if the derivation by minimizing $J$ is correct...
This can be justified by looking into the derivation from Lomb. The $i$-th measurement equation is

$$
y_i = a \cos(2 \pi f t_i) + b \sin(2 \pi f t_i) + v_i
$$

and the target function is

$$
J = \frac{1}{2} \sum_i \left| y_i - a \cos(2 \pi f t_i) - b \sin(2 \pi f t_i) \right|^2
= \frac{1}{2} \left( \sum_i y_i^2 - 2 a\, YC - 2 b\, YS + 2 a b\, CS + a^2\, CC + b^2\, SS \right)
$$

This lets us write

$$
\frac{d}{da} J = a \sum_i \cos^2(2 \pi f t_i) + b \sum_i \cos(2 \pi f t_i) \sin(2 \pi f t_i) - \sum_i y_i \cos(2 \pi f t_i) =: a\, CC + b\, CS - YC \overset{!}{=} 0
$$

$$
\frac{d}{db} J = b \sum_i \sin^2(2 \pi f t_i) + a \sum_i \cos(2 \pi f t_i) \sin(2 \pi f t_i) - \sum_i y_i \sin(2 \pi f t_i) =: b\, SS + a\, CS - YS \overset{!}{=} 0
$$

which results in

$$
\begin{bmatrix}
CC & CS \\
CS & SS
\end{bmatrix}
\begin{bmatrix}
a \\
b
\end{bmatrix} =
\begin{bmatrix}
YC \\
YS
\end{bmatrix}
$$

of which the symbolic solution for $[a\ b]^T$ is implemented, with $D$ being the determinant of the left-hand matrix. So if $D$ is zero, $[a\ b]^T$ is undetermined.
I think we need to be slightly careful in adding a linalg.solve; I would imagine it's significantly slower than inverting the system by hand.
I think we need to be slightly careful in adding a linalg.solve; I would imagine it's significantly slower than inverting the system by hand.
Good point, that would have to be verified. If the penalty is not too great, it is an elegant way of avoiding thinking about condition numbers.
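As a sanity check on the equivalence (not a benchmark), a sketch comparing the hand-inverted 2x2 solution with `np.linalg.solve` on a well-conditioned example. The numeric values here are made up for illustration:

```python
import numpy as np

# A well-conditioned symmetric 2x2 system with illustrative values.
CC, CS, SS = 2.0, 0.5, 3.0
YC, YS = 1.0, 2.0

# Hand inversion via the closed-form determinant, as in the current code.
D = CC * SS - CS * CS
a_hand = (YC * SS - YS * CS) / D
b_hand = (YS * CC - YC * CS) / D

# LAPACK-backed general solver on the same system.
a_np, b_np = np.linalg.solve([[CC, CS], [CS, SS]], [YC, YS])

assert np.allclose([a_hand, b_hand], [a_np, b_np])
```

Away from the near-singular case both paths agree to rounding, so the trade-off really is about per-frequency call overhead versus robustness near $D = 0$.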
I think you all are going to hate me, but I think going with the fully vectorized version and using the tau offset (so that I can remove the offending CS variable) makes it much easier to prevent the rare division-by-zero errors. I tested this against the current version in all possible combinations and the numerical differences are minuscule. It also (on my machine) passes all of the tests.
I would potentially consider splitting this PR in two; the handling of lists and docstring changes could probably be merged quickly (and the list handling is needed before the next release)
Done. Reverted this PR to only the docstring and asarray changes. I wasn't sure of the best way to continue with the discussion and code changes that were discussed for 8787, but I assume I'll have to do some work on the other branch once this one gets accepted.
The test failure seems unrelated.
Looks good to me for merging—unless @j-bowhay has a different opinion.
|
2025-04-01T06:40:20.838853
| 2017-08-01T17:05:07
|
247137456
|
{
"authors": [
"asnelt",
"josef-pkt",
"pv",
"rgommers"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10536",
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/pull/7698"
}
|
gharchive/pull-request
|
BUG: stats: fix nan result from multivariate_normal.cdf (#7669)
This pull request fixes nan results from multivariate_normal.cdf when the distribution is bivariate (Issue #7669). The underlying Fortran code in mvn.mvnun uses a dedicated function to handle the bivariate case and causes problems when called with negative infinity as the lower bound. As a solution, I replaced mvnun with mvndst and do the preprocessing of mvnun in Python. I also added an additional test for the bivariate case.
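For illustration, a sketch of the kind of preprocessing described: computing the `infin` flags from the integration bounds before calling mvndst. The flag convention follows the documentation of Genz's mvndst routine; treat the helper name and the exact convention here as assumptions, not the code in this PR:

```python
import numpy as np

# Per mvndst's documented convention (an assumption here):
#   -1 -> both bounds infinite, 0 -> only upper finite,
#    1 -> only lower finite,    2 -> both finite.
def infin_flags(lower, upper):
    lower, upper = np.asarray(lower), np.asarray(upper)
    flags = np.full(lower.shape, 2, dtype=int)         # default: both finite
    flags[np.isinf(upper) & ~np.isinf(lower)] = 1      # [a, inf)
    flags[np.isinf(lower) & ~np.isinf(upper)] = 0      # (-inf, b]
    flags[np.isinf(lower) & np.isinf(upper)] = -1      # (-inf, inf)
    return flags

assert list(infin_flags([-np.inf, 0.0], [1.0, np.inf])) == [0, 1]
assert list(infin_flags([-np.inf], [np.inf])) == [-1]
assert list(infin_flags([0.0], [1.0])) == [2]
```

Mapping -inf lower bounds to flag 0 is exactly the bivariate case that returned nan before this fix.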
Isn't the problem actually that mvnun does not set the infin flags correctly?
Here's a simpler fix doing that: https://github.com/asnelt/scipy/pull/1
It seems to solve the issues without the more complicated Python code changes.
Just some supporting evidence:
I wanted to see if the changes might affect what I have in statsmodels sandbox, but I was avoiding mvnun (maybe because it didn't work for me) and use mvndst directly with equivalent adjustments to infin, AFAICS
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/sandbox/distributions/extras.py#L1064
In it goes, thanks all!
|
2025-04-01T06:40:20.847471
| 2022-03-24T03:08:13
|
1178883500
|
{
"authors": [
"scmu",
"skylee03"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10537",
"repo": "scmlab/guabao",
"url": "https://github.com/scmlab/guabao/issues/2"
}
|
gharchive/issue
|
Typos & Bad Links
Typos in "Tutorial"
"Gaubao" should be "Guabao".
"non-egative" should be "non-negative".
We should press \ to input Unicode characters.
Bad Links in "GCL Overview"
The link in leads to "https://scmlab.github.io/guabao/pages/pages/4-references.html", but it should lead to "https://scmlab.github.io/guabao/pages/4-references.html".
Thank you for your interest in this project and for spotting these typos! Sorry that it took so long to respond --- we were preparing a paper on Guabao and were too occupied.
I believe that these issues are fixed now.
If you have used Guabao and found bugs/errors, feel free to report them here:
https://github.com/scmlab/gcl
Thank you again!
|
2025-04-01T06:40:20.848962
| 2020-09-21T12:28:21
|
705552576
|
{
"authors": [
"alueschow",
"scmmmh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10538",
"repo": "scmmmh/polymatheia",
"url": "https://github.com/scmmmh/polymatheia/pull/6"
}
|
gharchive/pull-request
|
Add support for SRUpy library
see #5
A final few comments and then all that needs doing is making the tests run. For that you just need the EUROPEANA_API_KEY secret in your fork, with your API key.
Sorry, I don't know why this fails. In my fork it validates just fine ...
I see. Very weird. Ah well. In that case I shall merge.
|
2025-04-01T06:40:20.860376
| 2019-03-09T05:19:43
|
419035687
|
{
"authors": [
"ivantha"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10539",
"repo": "scorelab/fact-Bounty",
"url": "https://github.com/scorelab/fact-Bounty/issues/146"
}
|
gharchive/issue
|
Fully migrate to the Flask backend
[ ] NodeJS codebase should be removed after fully migrating to Flask
@Anmolbansal1 Can we remove the NodeJS codebase now?
|
2025-04-01T06:40:20.875146
| 2019-04-27T22:15:57
|
437989489
|
{
"authors": [
"codecov-io",
"scottbot95"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10540",
"repo": "scottbot95/RoR2ModManager",
"url": "https://github.com/scottbot95/RoR2ModManager/pull/25"
}
|
gharchive/pull-request
|
Remember install
Remember the installation status of packages across app restarts
Codecov Report
Merging #25 into master will decrease coverage by 5.51%.
The diff coverage is 46.01%.
@@ Coverage Diff @@
## master #25 +/- ##
==========================================
- Coverage 65.54% 60.03% -5.52%
==========================================
Files 24 24
Lines 595 663 +68
Branches 25 31 +6
==========================================
+ Hits 390 398 +8
- Misses 199 257 +58
- Partials 6 8 +2
| Impacted Files | Coverage Δ |
| --- | --- |
| src/app/core/services/electron.service.ts | 15.38% <0%> (ø) :arrow_up: |
| src/app/core/services/mocks.ts | 81.81% <100%> (+1.81%) :arrow_up: |
| src/app/core/services/thunderstore.service.ts | 50% <33.33%> (-22.1%) :arrow_down: |
| src/app/core/services/database.service.ts | 59.37% <35%> (-29.52%) :arrow_down: |
| src/app/core/models/package.model.ts | 43.24% <36.36%> (-3.19%) :arrow_down: |
| src/app/core/services/package.service.ts | 43.51% <40.54%> (-1.49%) :arrow_down: |
| ...selection/package-table/package-table.component.ts | 56.79% <46.15%> (-2.12%) :arrow_down: |
| ...election/package-table/package-table-datasource.ts | 45.71% <72.22%> (-0.19%) :arrow_down: |
| ...selection/packages-page/packages-page.component.ts | 72.72% <75%> (-15.51%) :arrow_down: |
| src/app/shared/selection-changeset.ts | 82.6% <0%> (-17.4%) :arrow_down: |

... and 4 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1d97ef2...4d47495. Read the comment docs.
|
2025-04-01T06:40:20.877669
| 2015-10-29T14:47:42
|
114065639
|
{
"authors": [
"Naph",
"rafadev"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10541",
"repo": "scottcheng/cropit",
"url": "https://github.com/scottcheng/cropit/issues/119"
}
|
gharchive/issue
|
Get crop coordinates on original image
Hello! Thank you for making and sharing cropit!!
For a project, I needed to only get the cropping coordinates on the original image. I needed to know the two (x1, y1) (x2, y2) points on the original image that would give me the selected part of the image. However, I couldn't manage to do this with the methods available in cropit by itself. It would be great to have a method that would give you these values because otherwise they're kind of tricky to obtain.
This is what I ended up doing (it's not perfect and probably will only work with the settings I'm currently using):
var zoom = imgCropper.cropit('zoom');
var offset = imgCropper.cropit('offset');
var previewSize = imgCropper.cropit('previewSize');
var exportzoom = 1 / zoom;
var xstart = Math.abs(Math.floor(offset.x * exportzoom));
var ystart = Math.abs(Math.floor(offset.y * exportzoom));
var xend = Math.floor(exportzoom * previewSize.width) + xstart;
var yend = Math.floor(exportzoom * previewSize.height) + ystart;
Well, I guess that's all, let me know if you'd like to add a feature like this to cropit and I'd be happy to contribute.
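For reference, the same arithmetic as a small Python sketch with illustrative numbers (the function name and example values are made up):

```python
import math

# Map the preview pane's offset and zoom back to pixel coordinates
# (x1, y1)-(x2, y2) on the original image.
def crop_coords(zoom, offset_x, offset_y, preview_w, preview_h):
    export_zoom = 1 / zoom
    x1 = abs(math.floor(offset_x * export_zoom))
    y1 = abs(math.floor(offset_y * export_zoom))
    x2 = math.floor(export_zoom * preview_w) + x1
    y2 = math.floor(export_zoom * preview_h) + y1
    return (x1, y1), (x2, y2)

# e.g. a 200x100 preview at 50% zoom, panned by (-50, -25):
assert crop_coords(0.5, -50, -25, 200, 100) == ((100, 50), (500, 250))
```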
This solution works well with the PHP ImageMagick crop method if you require compatibility with older versions of IE.
|
2025-04-01T06:40:20.913716
| 2016-11-26T06:32:12
|
191790289
|
{
"authors": [
"RyanTG",
"scottwainstock"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10542",
"repo": "scottwainstock/pbm-ios",
"url": "https://github.com/scottwainstock/pbm-ios/issues/76"
}
|
gharchive/issue
|
Style the Machine Edit page
Step 1: @ryantg Create a mockup.
Step 1 status: Incomplete.
Details are in the mockup (I just wrote notes rather than actually mocked it up).
This feels like weak style suggestions on my part... but still I hope it really helps the page become more readable. We'll see.
Shit, I need to look at a location that has high scores, and then suggest style changes to that.
Should all be addressed here:
https://github.com/scottwainstock/pbm-ios/commit/738043d07ddde7c3695f95ac0c82716637d1d0cd
Small adjustment: When I said to indent the "Updated on" four spaces, can you change that to TWO spaces?
High scores:
333,333,333
Scored on: Nov 29, 2016 by tzr
Details: New line for the "Scored on" with a two space indent. Color #888888 for that line.
Here's yer formatting:
https://github.com/scottwainstock/pbm-ios/commit/e665c73380e07fe3c476db6a94192be3a097c09c
I'll be the judge of that.
|
2025-04-01T06:40:20.929048
| 2021-09-08T12:28:32
|
991084128
|
{
"authors": [
"carlmontanari",
"hellt"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10543",
"repo": "scrapli/scrapligo",
"url": "https://github.com/scrapli/scrapligo/pull/49"
}
|
gharchive/pull-request
|
fix nc capabilities re to allow for namespaced tags
Junos uses namespaced tags in its hellos. This caused no matches for capabilities in scrapligo.
The following regex allows namespaced tags, as shown here: https://regex101.com/r/w1M6Lp/1
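For illustration, a Python sketch of a pattern with an optional namespace prefix (an assumption for demonstration, not the exact regex from the PR):

```python
import re

# Match <capability>...</capability> with or without a namespace prefix,
# e.g. Junos-style <nc:capability>...</nc:capability>.
pattern = re.compile(
    r"<(?:\w+:)?capability>(.+?)</(?:\w+:)?capability>", re.S
)

hello = (
    "<nc:capability>urn:ietf:params:netconf:base:1.0</nc:capability>"
    "<capability>urn:ietf:params:netconf:base:1.1</capability>"
)
assert pattern.findall(hello) == [
    "urn:ietf:params:netconf:base:1.0",
    "urn:ietf:params:netconf:base:1.1",
]
```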
LGTM!
|
2025-04-01T06:40:20.929893
| 2015-01-24T13:44:11
|
55371192
|
{
"authors": [
"AlexMathew",
"scrappleapp"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10544",
"repo": "scrappleapp/scrapple",
"url": "https://github.com/scrappleapp/scrapple/issues/21"
}
|
gharchive/issue
|
Handle exceptions in commands
The execute_command() method in the command classes should handle exceptions related to the arguments or the input config file.
Closed with PR #22
|
2025-04-01T06:40:20.964195
| 2023-10-10T19:49:25
|
1936099915
|
{
"authors": [
"hognescreentek",
"peec",
"selbekk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10545",
"repo": "screentek/Sanity",
"url": "https://github.com/screentek/Sanity/issues/1"
}
|
gharchive/issue
|
Safe to include token?
Sanity Studio is a client side application, so anything we put into it must be non-sensitive. If I enter my imageshop token into the config, it'll show up in the client bundle, which in turn will make it available to any attacker that looks through that bundle.
In other words, if we include the imagebank token, it's available to "everyone".
Is there a way to avoid this? And if not, do you offer a read-only token, that only lets us read from our database?
Administrators in Imageshop can themselves issue tokens and change access rights for tokens. It is possible to prohibit uploading for a token, which in effect makes it read-only: no files can then be uploaded or replaced with that token. If you additionally restrict the token to interfaces which are public, or to a special interface intended only for web images, the risk is minimal. It is recommended to restrict the access of the token as much as possible. Read more here about how to issue tokens in Imageshop.
Thank you for reporting :) fixed in 1.3.0
|
2025-04-01T06:40:20.998786
| 2021-08-21T08:15:31
|
976079259
|
{
"authors": [
"scriptcoded",
"vladshcherbin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10546",
"repo": "scriptcoded/sql-highlight",
"url": "https://github.com/scriptcoded/sql-highlight/issues/26"
}
|
gharchive/issue
|
OVERLAPS word is not highlighted
Describe the bug
overlaps word is not highlighted (used in postgres)
To Reproduce
Test query:
SELECT "huddles".*
Screenshots
Versions
sql-highlight: 4.0.0
node: 16.0.0
Yup, sounds like a fair point. I'll merge your PR right away. Thanks!
@scriptcoded thank you ❤️
|
2025-04-01T06:40:21.007902
| 2016-03-16T15:12:34
|
141304552
|
{
"authors": [
"glittershark",
"lcd047"
],
"license": "WTFPL",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10548",
"repo": "scrooloose/syntastic",
"url": "https://github.com/scrooloose/syntastic/pull/1728"
}
|
gharchive/pull-request
|
Change scss-lint executable to scss_lint
see https://github.com/brigade/scss-lint/commit/c1e311b495cfd189f63aa8cc821ca036b47a9fe9 - this is now the proper executable per Gem name standards
I read the link you pointed me to, and I reached a different conclusion. Syntastic cares about executables rather than gem names, and as of 5b2dbfd the executable is still named scss-lint. When / if that changes, please post a new PR with proper version checks and fallbacks for backwards compatibility. Until then, sorry but no.
|
2025-04-01T06:40:21.118118
| 2015-06-09T04:21:43
|
86440245
|
{
"authors": [
"reid-spencer"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10549",
"repo": "scrupal/scrupal",
"url": "https://github.com/scrupal/scrupal/issues/2"
}
|
gharchive/issue
|
Initial Site Administration
[Migrated from Assembla ticket 2 - reported by @reid-spencer on 2014-10-11 16:31:24.000]
Scrupal will provide an administration module that allows the various modules that are running to be configured. All meta information and even instance content can be modified with the administration interface, given the appropriate access rights.
This initial version of administration will:
Permit each site's title, default theme, and initial (home) page to be configured.
Not address authorization issues at this time
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-07 00:53:43.000]
VariantRegistry - InProgress #2
Move the more complicated node classes out of API into Core
Provide VariantRegistry for registration of storable variant classes
Add test cases to deal with auto-generated BSONObjectID _id field
Minor code cleanup in Context, BSONHandlers
Extract HtmlHelpers from Node.scala to its own source file
Put _id field in each Node case class so it is included by the BSONHandler
Provide VariantRegistry instances for Site and Node subclasses
Committed to: scrupal:master
Commit: [[r:bc521434f4b6aa69e51426d95c8e53a2c8e5b276|scrupal:bc521434f4]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-07 06:04:17.000]
Start On Admin App - InProgress #2
Make PathMatcherToAction a base class of PathToAction
Define PathToNodeAction as a subclass of PathMatcherToAction
Also make PathToNodeActionFunction for more general action construction
Utilize PathToNodeAction to remove redundant code
Define an AbstractHtmlNode node class so we can use arbitrary Html generation templates
Implement a rudimentary version of AdminApp and some templates to go with it.
Handle a None.get situation in ActionProviderController
Committed to: scrupal:master
Commit: [[r:193f46b9f4ab3b93a3776e41709530ef2680cf8e|scrupal:193f46b9f4]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-08 04:25:03.000]
Make Admin Site Based - InProgress #2
Replace Scrupal & Site sections with Database and Configuration
Fix the title to indicate the site and user
Put Revolver on the root project where the main program is
List the enablement in configuration
Committed to: scrupal:master
Commit: [[r:2d07d06e98ee5675d4dd1cb551f66e0fdc71d167|scrupal:2d07d06e98]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-15 22:32:33.000]
Generalize Routing - InProgress #2
Make the ActionProviderController unaware of Entities and only deal with ActionProvider
Split various traits out of ActionProvider and generalize it
Separate the notion of a DelegatingActionProvider with subordinates from PathMatcherToActionProvider with a set of PathMatcherToActions
Move ActionProvider stuff to its own compilation unit.
Adjust the dependent classes as necessary.
Implement a NodeProvider object for providing Node content as a result.
All this in an effort to make the admin interface very responsive and flexible when the OPA is written.
Committed to: scrupal:master
Commit: [[r:76f785c543bce49ca309298f480c293d362e5ff6|scrupal:76f785c543]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-16 04:58:56.000]
Fix Bugs In JSON Conversion - InProgress #2
Issues with ReactiveMongo .equals methods thwarted equality tests
Committed to: scrupal:master
Commit: [[r:25f98fa1254d3d8f18f89a553b7d6260d5e21e27|scrupal:25f98fa125]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-12 13:50:31.000]
Improve Registrable Safety - InProgress #2
Use a typed self in Registrable so asT method is not needed.
Use early initialization to ensure that id and _id get defined early enough.
Simplify the class hierarchy so conglomerates like VariantStorableRegistrable are not needed.
Make abstract base classes require an id parameter to simplify subclass construction.
Committed to: scrupal:master
Commit: [[r:1c06f9f4a7e23f7f2433febe6051556e14d4db4e|scrupal:1c06f9f4a7]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-08 22:24:25.000]
Implement BSON<->JSON conversion - InProgress #2
Make scrupal-db depend on play-json which we use for the conversions
Add a SONConversion file to scrupal-db to do the conversion
Add a SONConversionSpec file to test the conversions
Upgrade project files to 14.0.2 version of IDEA
Committed to: scrupal:master
Commit: [[r:a6b279a41710690727fbb4cf72d33e947bda57f9|scrupal:a6b279a417]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-09 00:25:16.000]
Fix a typo - InProgress #2
Make an error message readable.
Committed to: scrupal:master
Commit: [[r:b35b4314ff454bf9573c1f8f8fafaea138be7692|scrupal:b35b4314ff]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-20 22:55:32.000]
MINOR Reorganization - InProgress #2
Merge scrupal-http into scrupal-core
Remove scrupal-js
Fix references to scrupal-api
Merge scrupal-api and scrupal-http documentation into and scrupal-api
Committed to: scrupal:master
Commit: [[r:e46b790db94901607dca901e429333c9ae22c0d3|scrupal:e46b790db9]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-20 00:27:48.000]
MAJOR Reorganization - InProgress #2
Abandon Twirl in favor of Scalatags: replace Twirl templates, remove dependencies, adjust documentation
Merge scrupal-core back into scrupal-api
Simplify HTML layout.
Break apps, types, nodes and entities out to separate packages
Committed to: scrupal:master
Commit: [[r:a636bd1b9131ec632220006b07618cdff729df54|scrupal:a636bd1b91]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-17 04:27:01.000]
Get correct name for Site - InProgress #2
Committed to: scrupal:master
Commit: [[r:93d57cf50e32e3f551f2e0cc602cfd00e2c7836b|scrupal:93d57cf50e]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-17 04:27:03.000]
Initial OnePageApp Implementation - InProgress #2
Change project to include Scala.js
Incorporate scalajs-angular
Incorporate scalajs-dom
Incorporate scalatags
Adjust project settings in .idea
Fix a bug in spray marshaller for OctetsResult (wrong content type)
Implement a very rudimentary OnePageApp placeholder that loads angularjs
Committed to: scrupal:master
Commit: [[r:a46e3a1f4aa2ed3df02e1a7688d54ad1249b468d|scrupal:a46e3a1f4a]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-18 16:38:09.000]
Baby Steps Toward Scalatagsification - InProgress #2
Use scalatags 0.4.3-M1
Use scalajs 0.6.0-M2
Update dependencies to use scalajs 0.6.0-M1/M2
Use sbt 0.13.7
Add a Scalatags errors module for HTTP fragments for generating errors
Add Scalatags pages for NotFound and Forbidden HTTP results
Make Alert generate Scalatags instead of Twirl
Provide a "kinds" accessor on VariantRegistry
Allow the kinds of Sites to be obtained using VariantRegistry.kinds
Committed to: scrupal:master
Commit: [[r:22a09293c82dae0cdddabc25f9e058f53907d5b7|scrupal:22a09293c8]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-17 04:27:02.000]
Improvements to Type - InProgress #2
Remove the defunct asT methods
Add RegexType to validate a regular expression pattern
Add SelectionType to validate a selection from a Seq[String]
Other minor corrections
Committed to: scrupal:master
Commit: [[r:53a898bf58a0e197f2bcb51b795ad53ecde7bee6|scrupal:53a898bf58]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-17 19:49:13.000]
Convert Twirl templates to Scalatags - InProgress #2
Simplify our HTML life.
Committed to: scrupal:master
Commit: [[r:ad691dc75b04cfd56494c03ffda9bee4c00acadf|scrupal:ad691dc75b]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-24 01:55:35.000]
Fix Startup Boot Issues - InProgress #2
Make scrupal.core.http.Boot have a "run" method instead of doing its work in the constructor.
Invoke that run method in scrupal.core.Boot after we've opened the Scrupal object.
Committed to: scrupal:master
Commit: [[r:bc8bc1f6d96df8eb24e7bd4329c938c70fca004d|scrupal:bc8bc1f6d9]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-21 12:46:06.000]
Get Ready For JS in scrupal-opa - InProgress #2
Fix IntelliJ's issue with scala-js plugin
Remove jrebel_project.xml
Remove scrupal-js-build.iml
Remove libraries that are no longer used.
Committed to: scrupal:master
Commit: [[r:1eef46562b0d5f9eb9e1997be147f33e47c7a302|scrupal:1eef46562b]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-24 01:54:11.000]
Fix Startup Logging - InProgress #2
Use akka-slf4j with logback-classic
Committed to: scrupal:master
Commit: [[r:956ffd9d98b396ec11f19c339ccad8e42c6a9c75|scrupal:956ffd9d98]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-21 02:32:17.000]
Dependency Version Refresh - InProgress #2
Upgrade versions of: Play, angularjs, angular-ui,
Remove cruft from dependencies
Committed to: scrupal:master
Commit: [[r:b590290e69a8f19663a64fa4949450f66c62ee45|scrupal:b590290e69]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-22 01:24:12.000]
Get "Hello World" OPA Working - InProgress #2
Lots o' fixes
Committed to: scrupal:master
Commit: [[r:552fa8ebff30c50d314dd193cc53febf7b0b10c4|scrupal:552fa8ebff]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-24 01:52:50.000]
Trivial change - InProgress #2
Remove an unnecessary "val" from a case class argument.
Committed to: scrupal:master
Commit: [[r:e3de913089a9a730ee65c77cb9622b39d4530c41|scrupal:e3de913089]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-24 01:56:32.000]
Rename Project vals - InProgress #2
Append _proj to the project values in the build to make it clear they are projects.
Committed to: scrupal:master
Commit: [[r:8581773815db99b69ab5ac273bd6ef293c252188|scrupal:8581773815]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-28 06:04:19.000]
Start Implementation Of Forms - InProgress #2
Add utility functions in core.html.forms for generating form related HTML tags
Enhance the coding module for Fields, FieldSets and Forms in api.Forms
Augment the FormsSpec test cases to handle changes in api.Forms
Implement more extractors in BSONSettingsInterface and subclasses
Be more strict about accepted paths in WelcomeSite and OnePageApp
Move ConfigWizard from scrupal.config to scrupal.core.apps
Committed to: scrupal:master
Commit: [[r:5d2bfed813dc1004da2c6fb79933bec8fa13a35a|scrupal:5d2bfed813]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-28 22:32:00.000]
Prepare To Separate scrupal-opa - InProgress #2
IntelliJ can't handle a Scala.js module in a Scala project
Prepare the repository to split scrupal-opa module to its own repository.
This patch allows IntelliJ to build and run Scrupal again.
It also consolidates all OnePageApp things into scrupal-opa.
Committed to: scrupal:master
Commit: [[r:70c1ca953734c1e211f9ae1e703a4499de9db90e|scrupal:70c1ca9537]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-24 02:06:54.000]
Fix assets_path - InProgress #2
Paths were for 0.1-SNAPSHOT but we're building 0.2-SNAPSHOT
Committed to: scrupal:master
Commit: [[r:91a5c58bae03ab2e03e4ab5b68b6fa6f6214252e|scrupal:91a5c58bae]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-24 02:06:09.000]
Configure Akka Logging - InProgress #2
Use the SLF4J logger with its filters.
Committed to: scrupal:master
Commit: [[r:cf3ef6f3779cba39151a1f4fda29136f32d182db|scrupal:cf3ef6f377]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-28 18:31:34.000]
OPA Experimentation - InProgress #2
Get RequireJS integrated
Attempt to initialize Scrupal controller
Add BSON and EntityService as TODO reminders
Extract OPAPage to the scrupal-opa module
Committed to: scrupal:master
Commit: [[r:c9ab4aea02e01f5f268405f678e76f86e34ee35b|scrupal:c9ab4aea02]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-28 16:52:11.000]
Add More Form Testing - InProgress #2
Make sure a form can render itself correctly
Check a few field types for rendering
Make SelectionField sort its options by key
Fix TimestampType to utilize sane max value for DateTime
Add an UnspecificQuantity_t to SelectionType for fuzzy logic quantities
Committed to: scrupal:master
Commit: [[r:9414484829cc483be975216dd1ce073ca10177f1|scrupal:9414484829]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-28 23:54:36.000]
OPA Experimentation - InProgress #2
Get RequireJS integrated
Attempt to initialize Scrupal controller
Add BSON and EntityService as TODO reminders
Extract OPAPage to the scrupal-opa module
Committed to: scrupal.scrupal-opa:master
Commit: [[r:2:6bd0f96f4294fc3e115ab2bda2ad593687f279e6|scrupal.scrupal-opa:6bd0f96f42]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-28 23:54:34.000]
MINOR Reorganization - InProgress #2
Merge scrupal-http into scrupal-core
Remove scrupal-js
Fix references to scrupal-api
Merge scrupal-api and scrupal-http documentation into scrupal-api
Committed to: scrupal.scrupal-opa:master
Commit: [[r:2:fc9937b40022c1818eb0a080dddfbe73203774cc|scrupal.scrupal-opa:fc9937b400]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-28 23:54:37.000]
Prepare To Separate scrupal-opa - InProgress #2
IntelliJ can't handle a Scala.js module in a Scala project
Prepare the repository to split scrupal-opa module to its own repository.
This patch allows IntelliJ to build and run Scrupal again.
It also consolidates all OnePageApp things into scrupal-opa.
Committed to: scrupal.scrupal-opa:master
Commit: [[r:2:6381bac8c1e59d900333248c69cde31c13861029|scrupal.scrupal-opa:6381bac8c1]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-28 23:54:35.000]
Get "Hello World" OPA Working - InProgress #2
Lots o' fixes
Committed to: scrupal.scrupal-opa:master
Commit: [[r:2:41a0f082cd5650a3ed8cb77e6eca6af2277aee9b|scrupal.scrupal-opa:41a0f082cd]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-28 22:39:46.000]
Reinstate WelcomeSite - InProgress #2
When no sites are available, WelcomeSite should be shown
Even if the schema fails and throws an error, make WelcomeSite available
Committed to: scrupal:master
Commit: [[r:e73d392d9d42604893fb155f78c9b4e2d6eac20a|scrupal:e73d392d9d]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-30 04:49:52.000]
Minor Cleanup in Type class - InProgress #2
Remove import cruft
Use def in trait, not val
Committed to: scrupal:master
Commit: [[r:108330f1a8296f7ef9b545271cfd5e1fc1ebaf6f|scrupal:108330f1a8]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-29 00:40:09.000]
Get Documentation Working Again - InProgress #2
Create a BootstrapPage for pages that use JQuery and Twitter Bootstrap
Remove crufty libraries and modules from .idea directory
Fix some documentation.
Remove the scrupal-opa directory as it is now its own git repository
Remove dependencies we should not use in scrupal
Remove last vestiges of scala.js in this project.
Committed to: scrupal:master
Commit: [[r:1f2e8d4f74561c5ed398312263b3949bd9583e54|scrupal:1f2e8d4f74]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-30 04:47:24.000]
Fix DataCache Load - InProgress #2
Load the data cache after all the sites have been loaded
Committed to: scrupal:master
Commit: [[r:d6f7104cb1dfd0e67b36ba550f67f2d76a839194|scrupal:d6f7104cb1]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2015-01-03 00:50:18.000]
Form Rendering & Posting - InProgress #2
Make forms generate a default value from the field defaults
Make forms provide actions for displaying the form and accepting results
Generalize code that implements ActionProvider to allow for method as well
Add an HTML5Validator test object and use it to validate form output
Make request a required field in Context, not an option
Remove application from the Context as it is too detailed
Fix form validation code to work properly.
Make WelcomeSite have a name distinguishable by the Scrupal it is instantiated from (helps with testing)
Move NodeSite to the scrupal.core.sites package
Adjust spray routing to utilize the new ActionExtractor and ActionProvider hierarchy
Minor fixes to ScredisCache
Fix error message reporting to the browser to include the payload/entity
Committed to: scrupal:master
Commit: [[r:4c74a543d4152b9646b05ad4421711363d7ad469|scrupal:4c74a543d4]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2015-01-03 19:18:50.000]
Teach Validation to Associate Errors With BSONValue - InProgress #2
In order to render field errors correctly we have to know what value an error goes with
Create sealed trait, ValidationResults, which encapsulates the various kinds of results
Create simple, compound, type-based and exception ValidationResult classes
Revise all validation code to use these new classes.
Adjust Form class hierarchy a little to be more clear.
Committed to: scrupal:master
Commit: [[r:2c55ced3f3486e25ef19ca2798b05a021a7c5898|scrupal:2c55ced3f3]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-30 04:50:40.000]
Fix import cruft in forms.scala - InProgress #2
Committed to: scrupal:master
Commit: [[r:25381e2b7e9c2b0ada5e554b6437fcae1e0c80d8|scrupal:25381e2b7e]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2015-01-03 19:16:12.000]
Speed Up Editing - InProgress #2
Change Scala plugin and compiler settings to make IDEA stop crashing.
Committed to: scrupal:master
Commit: [[r:8d7a84efb9b4e94063c68ef464089f61a6cc726c|scrupal:8d7a84efb9]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-30 04:54:26.000]
Minor Code Rearrangement - InProgress #2
Break NodeAction classes out to scrupal.core.actions.NodeAction
Going forward, new independent action object will be created in that module
Move scrupal.api.PathMatcher to ScrupalPathMatchers so as not to conflict with spray object
Minor documentation improvements to ActionProvider
Committed to: scrupal:master
Commit: [[r:c8c94dd7ba4a33f092b1e0e1098a89f36e19c557|scrupal:c8c94dd7ba]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-30 05:24:34.000]
Form Improvements - InProgress #2
FormItems are no longer individually Storable
FieldItems can declare whether they render inline with label or on a separate line and whether they prefix the label or not
Make ResetField and SubmitField be FieldItems so they can be included.
Make Form an Enablee and TerminalActionProvider to provide an action for processing the data
Implement Form.wrap to Twitter Bootstrapify each form field
Copy the DBForm from ConfigWizard to AdminApp and add the SubmitField
Simplify the SiteSelectionForm at top of admin page
Make the AdminApp's TemplatePage actually render a full BootstrapPage
Make BootstrapPage wrap its contents in a "container" div
Committed to: scrupal:master
Commit: [[r:bd4da7fc6ed632f969d5e742d50afc4623a6a35c|scrupal:bd4da7fc6e]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2014-12-30 07:05:05.000]
Prepare For Publishing - InProgress #2
Fix the organization name
Fix a few documentation links
Committed to: scrupal:master
Commit: [[r:0e4622389a539be7ef2b67eb7483c3b2517cb074|scrupal:0e4622389a]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2015-01-07 20:00:14.000]
Remove .idea cruft - InProgress #2
Do not, any longer, track the libraries and modules in .idea directory.
SBT importer now regenerates these accurately for a given user.
Differences between users will exist.
This will shorten our commits to just what's relevant.
Committed to: scrupal:master
Commit: [[r:64ac79cfc5b78df278d61368a9d8e90b53d2b642|scrupal:64ac79cfc5]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2015-01-07 19:01:34.000]
Update Dependencies - InProgress #2
Committed to: scrupal:master
Commit: [[r:a88389b92077a8e0c482892840f603adc31cc76b|scrupal:a88389b920]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2015-01-08 01:09:02.000]
Remove modules from GIT - InProgress #2
These files change too frequently and are developer dependent.
Committed to: scrupal:master
Commit: [[r:b0fac909d359930f20f57e6b0f31dbec196d69e5|scrupal:b0fac909d3]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2015-01-07 01:42:17.000]
Validate Forms - InProgress #2
Add a new Disposition for "Unacceptable" input.
Implement the AcceptFormAction to process Spray's FormData content and validate it
Have each type of field decide what is and isn't a value and decode it.
Committed to: scrupal:master
Commit: [[r:71a70b40a23fe79ecf26f09ca874be149812ae21|scrupal:71a70b40a2]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2015-01-14 19:00:38.000]
Add Logging Helpers- InProgress #2
Utilities for changing logging levels dynamically and having a memory appender
Also initialization of logging
Committed to: scrupal:master
Commit: [[r:0b26326a229d03597a64432ad9b5c12ad5a9af82|scrupal:0b26326a22]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2015-01-07 20:00:49.000]
Validate Forms - InProgress #2
Better error message generation (fix line feeds)
Committed to: scrupal:master
Commit: [[r:0b1027ebfbb59a95470bf61977980f219ec6ffb1|scrupal:0b1027ebfb]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2015-01-06 20:57:39.000]
Teach Validation to Associate Errors With ValidationLocation - InProgress #2
Defined ValidationLocation as the location where a validation error could occur.
Revise ValidationResults classes to contain a ValidationLocation object.
Revise Validator methods to take a ValidationLocation.
Start to implement BSONSettings with an AtomicReference[BSONDocument] as the value
Make Forms.Container implement the index and get method of ValidationLocation to select the right field
Further simplify the Forms class hierarchy
Remove BasicModule as it was only used for testing and redundant with FakeModule
Committed to: scrupal:master
Commit: [[r:5015e33970e821fb7a608ecb6d53b8a6ee8813a7|scrupal:5015e33970]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2015-01-17 03:30:11.000]
Make Admin Page Display Modules/Apps - InProgress #2
Fix a bug in HTML generation
Simplify Scrupal.load
Committed to: scrupal:master
Commit: [[r:43d776af96f1ab1247e27d9c88aa3605fcfbd28a|scrupal:43d776af96]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2015-01-14 19:00:37.000]
Improve Enablement Errors - InProgress #2
Add an "enablementName" member to Enablee for enablement specific name
Use enablementName in error messages
Committed to: scrupal:master
Commit: [[r:0281bcbd006c9b38039513f1837da94111563834|scrupal:0281bcbd00]]
[Migrated from comment on Assembla ticket 2 - comment by @reid-spencer on 2015-01-14 19:00:39.000]
Fix IDEA project - InProgress #2
Remove remnant modules and libraries
Set code style settings, etc.
Committed to: scrupal:master
Commit: [[r:3dfbbfb795798047ad7d9f64e86ac3ceedca408d|scrupal:3dfbbfb795]]
|
2025-04-01T06:40:21.130787
| 2022-06-22T12:03:23
|
1280037809
|
{
"authors": [
"giovp",
"ivirshup"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10550",
"repo": "scverse/napari-spatialdata",
"url": "https://github.com/scverse/napari-spatialdata/issues/9"
}
|
gharchive/issue
|
build few useful dataset as examples
https://squidpy.readthedocs.io/en/latest/api.html
bento.datasets.load_dataset("merfish") could be good here. It has points and shapely polygons.
Update: there is no image, so the logic should change to use points only...
|
2025-04-01T06:40:21.133018
| 2023-07-17T13:06:39
|
1807747492
|
{
"authors": [
"martinkim0"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10551",
"repo": "scverse/scvi-tools",
"url": "https://github.com/scverse/scvi-tools/pull/2189"
}
|
gharchive/pull-request
|
Add docstring for scanvi unlabeled category
[x] Closes #2091
[x] Tests added and passed if fixing a bug or adding a new feature
[x] All code checks passed.
[x] Added type annotations to new arguments/methods/functions.
[x] Added an entry in the latest docs/release_notes/index.md file if fixing a bug or adding a new feature.
[x] If the changes are patches for a version, I have added the on-merge: backport to x.x.x label.
meeseeksdev backport to 1.0.x
|
2025-04-01T06:40:21.188896
| 2023-02-26T12:21:26
|
1600038901
|
{
"authors": [
"2-dor",
"PrimalNerd",
"jolappi",
"sdatkinson"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10552",
"repo": "sdatkinson/neural-amp-modeler",
"url": "https://github.com/sdatkinson/neural-amp-modeler/issues/103"
}
|
gharchive/issue
|
[BUG] GUI trainer "v_1_1.wav" does not match any known standard input files
Just tried the NAM GUI trainer and got the error in the title. Maybe the lengthy path - will retry with a shorter one too
Training via CLI with the dry & wet files in the root folder works fine.
Can you paste the full file names?
He is using a file other than v1_1_1.wav, and that's why the UI is not complying.
There was a question on Facebook about this: could there be a checkbox in the advanced settings that would let you use any input source with the GUI?
Thanks, and by the way, nice work.
I get the same error with v1_1_1.wav
I get an md5 hash that differs from the one in nam/train/core.py. If I add that md5 hash to the known versions, it lets me use the file and starts training.
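The check described above can be sketched roughly like this (illustrative Python only; `KNOWN_INPUT_HASHES`, its placeholder value, and both function names are hypothetical stand-ins, not NAM's actual API — the real hashes live in nam/train/core.py):

```python
import hashlib

# Hypothetical table of known standard input files; the placeholder
# value below is NOT a real hash from nam/train/core.py.
KNOWN_INPUT_HASHES = {
    "v1_1_1": "00000000000000000000000000000000",
}

def file_md5(path):
    """Compute the md5 of a file in chunks, as a trainer might do to
    recognize a standard input file without loading it all at once."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_known_input(path):
    """True only if the file is byte-identical to a known input file."""
    return file_md5(path) in KNOWN_INPUT_HASHES.values()
```

This explains the symptom above: a re-exported or resampled copy of v1_1_1.wav hashes differently, so an exact-match check rejects it even though the file name is right.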
I've added "weak matching" in #220, which should allow folks to "hack" the training file without getting themselves into too much trouble. Will be in the next release 👍🏻
|
2025-04-01T06:40:21.221418
| 2019-06-22T23:02:27
|
459518406
|
{
"authors": [
"abn",
"sdispater"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10553",
"repo": "sdispater/poetry",
"url": "https://github.com/sdispater/poetry/pull/1186"
}
|
gharchive/pull-request
|
Skip updating git dependencies if short hash matches
This change ensures that short hashes are taken into consideration when
evaluating if the required version of the dependency is already
installed.
The previous implementation only skipped update of a git
dependencies if the revision specified in the pyproject.toml matched
the full hash of the installed package.
Resolves: #1157
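The matching rule this change introduces can be sketched as a prefix check (illustrative Python; `revision_matches` is a hypothetical helper for explanation, not poetry's actual internal API):

```python
def revision_matches(pinned_rev: str, installed_sha: str) -> bool:
    """Treat a pinned git revision as up to date when it is a prefix of
    the installed commit's full 40-character SHA, so a short hash in
    pyproject.toml no longer forces a needless reinstall."""
    return installed_sha.lower().startswith(pinned_rev.lower())
```

For example, a pin of `cdd62a8` would match an installed commit whose full SHA begins with those characters, while any non-prefix revision would still trigger an update.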
Pull Request Check List
This is just a reminder about the most common mistakes. Please make sure that you tick all appropriate boxes. But please read our contribution guide at least once, it will save you unnecessary review cycles!
[x] Added tests for changed code.
[ ] Updated documentation for changed code.
Note: If your Pull Request introduces a new feature or changes the current behavior, it should be based
on the develop branch. If it's a bug fix or only a documentation update, it should be based on the master branch.
If you have any questions to any of the points above, just submit and ask! This checklist is here to help you, not to deter you from contributing!
Looks good to me 👍 Thanks!
|
2025-04-01T06:40:21.281830
| 2022-08-01T18:18:54
|
1324797047
|
{
"authors": [
"comgit"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10554",
"repo": "sealsystems/node-consul",
"url": "https://github.com/sealsystems/node-consul/pull/348"
}
|
gharchive/pull-request
|
Update build configuration
The build configuration of this project is outdated and may no longer work.
This pull request will be merged automatically if there are no conflicts.
:tada: This PR is included in version 5.1.25 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T06:40:21.288391
| 2021-09-01T10:03:13
|
984957572
|
{
"authors": [
"Laisky",
"kevburnsjr",
"sean-public"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10555",
"repo": "sean-public/fast-skiplist",
"url": "https://github.com/sean-public/fast-skiplist/pull/17"
}
|
gharchive/pull-request
|
Use RLock in Get
Use RLock in Get
@Laisky I submitted a PR (https://github.com/Laisky/fast-skiplist/pull/1) to your branch which would add a bench test to this PR.
@sean-public The parallel bench test shows that the throughput improvement from RLock is very significant for concurrent access.
name old time/op new time/op delta
IncSet-12 277ns ± 8% 277ns ± 7% ~ (p=0.780 n=10+10)
IncGet-12 154ns ± 2% 164ns ± 2% +6.09% (p=0.000 n=10+10)
DecSet-12 173ns ± 3% 175ns ± 5% ~ (p=0.724 n=10+10)
DecGet-12 160ns ± 3% 166ns ± 1% +3.69% (p=0.000 n=10+9)
IncGetParallel-12 263ns ± 1% 39ns ± 5% -85.35% (p=0.000 n=8+10)
name old speed new speed delta
IncSet-12 16.4TB/s ± 9% 16.4TB/s ± 8% ~ (p=0.780 n=10+9)
IncGet-12 58.9TB/s ± 3% 48.1TB/s ± 2% -18.31% (p=0.000 n=10+9)
DecSet-12 40.3TB/s ± 9% 40.4TB/s ± 4% ~ (p=0.968 n=9+10)
DecGet-12 54.5TB/s ± 2% 45.4TB/s ± 9% -16.82% (p=0.000 n=9+10)
IncGetParallel-12 17.0TB/s ± 1% 836.3TB/s ±10% +4817.85% (p=0.000 n=8+9)
name old alloc/op new alloc/op delta
IncSet-12 61.0B ± 0% 61.0B ± 0% ~ (all equal)
IncGet-12 0.00B 0.00B ~ (all equal)
DecSet-12 61.0B ± 0% 61.0B ± 0% ~ (all equal)
DecGet-12 0.00B 0.00B ~ (all equal)
IncGetParallel-12 0.00B 0.00B ~ (all equal)
name old allocs/op new allocs/op delta
IncSet-12 3.00 ± 0% 3.00 ± 0% ~ (all equal)
IncGet-12 0.00 0.00 ~ (all equal)
DecSet-12 3.00 ± 0% 3.00 ± 0% ~ (all equal)
DecGet-12 0.00 0.00 ~ (all equal)
IncGetParallel-12 0.00 0.00 ~ (all equal)
I just noticed this comment in math/rand. RLock might not be appropriate.
// The default Source is safe for concurrent use by multiple goroutines, but
// Sources created by NewSource are not.
https://cs.opensource.google/go/go/+/refs/tags/go1.17.1:src/math/rand/rand.go;l=12
Parallel bench test shows latency and throughput improvements of RLock are very significant for concurrent access.
If the "new times" listed are using RLock, there's notable performance slowdowns in IncGet-12 and DecGet-12. I know there's no synthetic benchmark for this already extant and, further, all synthetic benchmarks have serious shortcomings, but I believe that in a very common workload with a blend of, say, 20% writes and 80% reads all in parallel, there would be worse performance for writes because they are waiting for all of the pending read locks to free.
Just thinking out loud here 🤔
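The trade-off being debated here — many readers proceeding in parallel while a writer must wait for every pending reader to release — can be illustrated with a minimal readers-writer lock. This is a Python sketch of the semantics only, not fast-skiplist's Go implementation:

```python
import threading

class RWLock:
    """Minimal readers-writer lock: any number of concurrent readers,
    or exactly one writer. A writer blocks until all readers release,
    which is exactly the write-latency cost raised above."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writing = False

    def acquire_read(self):
        with self._cond:
            while self._writing:          # readers wait out a writer
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:        # last reader wakes writers
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writing or self._readers > 0:
                self._cond.wait()         # writer waits for ALL readers
            self._writing = True

    def release_write(self):
        with self._cond:
            self._writing = False
            self._cond.notify_all()
```

Note also that a read lock only protects the guarded structure itself; any shared, non-thread-safe state touched inside a read section (such as a source created by math/rand's NewSource) would still race, which is the caveat quoted above.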
@sean-public You might actually be right about that...
https://gist.github.com/kevburnsjr/a66c19a16d2ae9d74cd501a8b9b3c6b4
cpu: Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz
BenchmarkIncSet-12 4232509 277.1 ns/op 15272328.16 MB/s 61 B/op 3 allocs/op
BenchmarkIncGet-12 8680887 163.1 ns/op 53221254.06 MB/s 0 B/op 0 allocs/op
BenchmarkDecSet-12 7107218 179.5 ns/op 39584672.41 MB/s 61 B/op 3 allocs/op
BenchmarkDecGet-12 8546569 166.6 ns/op 51298366.60 MB/s 0 B/op 0 allocs/op
BenchmarkParallel/readpct-0-12 3960511 307.9 ns/op 1 B/op 1 allocs/op
BenchmarkParallel/readpct-20-12 4383126 291.3 ns/op 0 B/op 0 allocs/op
BenchmarkParallel/readpct-50-12 4119375 313.9 ns/op 0 B/op 0 allocs/op
BenchmarkParallel/readpct-80-12 3813206 340.5 ns/op 0 B/op 0 allocs/op
BenchmarkParallel/readpct-100-12 33822919 37.14 ns/op 0 B/op 0 allocs/op
PASS
ok fast-skiplist 27.501s
Results
name old time/op new time/op delta
IncSet-12 268ns ± 3% 273ns ± 4% ~ (p=0.286 n=5+5)
IncGet-12 153ns ± 0% 161ns ± 1% +5.73% (p=0.008 n=5+5)
DecSet-12 171ns ± 3% 170ns ± 3% ~ (p=0.548 n=5+5)
DecGet-12 156ns ± 0% 163ns ± 1% +3.94% (p=0.008 n=5+5)
Mixed/readpct-0-12 172ns ± 1% 172ns ± 1% ~ (p=0.690 n=5+5)
Mixed/readpct-20-12 173ns ± 0% 174ns ± 1% ~ (p=0.603 n=5+5)
Mixed/readpct-50-12 172ns ± 0% 175ns ± 1% +1.97% (p=0.008 n=5+5)
Mixed/readpct-80-12 163ns ± 1% 169ns ± 0% +3.77% (p=0.008 n=5+5)
Mixed/readpct-100-12 155ns ± 1% 162ns ± 2% +4.46% (p=0.008 n=5+5)
Parallel/readpct-0-12 311ns ± 1% 306ns ± 1% -1.58% (p=0.008 n=5+5)
Parallel/readpct-20-12 312ns ± 0% 290ns ± 1% -7.19% (p=0.008 n=5+5)
Parallel/readpct-50-12 306ns ± 0% 313ns ± 2% +2.41% (p=0.008 n=5+5)
Parallel/readpct-80-12 288ns ± 1% 337ns ± 1% +16.90% (p=0.008 n=5+5)
Parallel/readpct-100-12 258ns ± 0% 37ns ± 0% -85.66% (p=0.008 n=5+5)
name old speed new speed delta
IncSet-12 17.1TB/s ± 6% 17.0TB/s ± 4% ~ (p=0.841 n=5+5)
IncGet-12 58.9TB/s ± 1% 52.0TB/s ± 1% -11.64% (p=0.008 n=5+5)
DecSet-12 41.6TB/s ± 3% 41.0TB/s ± 9% ~ (p=1.000 n=5+5)
DecGet-12 56.2TB/s ± 0% 50.6TB/s ± 0% -9.85% (p=0.016 n=4+5)
name old alloc/op new alloc/op delta
IncSet-12 61.0B ± 0% 61.0B ± 0% ~ (all equal)
IncGet-12 0.00B 0.00B ~ (all equal)
DecSet-12 61.0B ± 0% 61.0B ± 0% ~ (all equal)
DecGet-12 0.00B 0.00B ~ (all equal)
Mixed/readpct-0-12 1.00B ± 0% 1.00B ± 0% ~ (all equal)
Mixed/readpct-20-12 0.00B 0.00B ~ (all equal)
Mixed/readpct-50-12 0.00B 0.00B ~ (all equal)
Mixed/readpct-80-12 0.00B 0.00B ~ (all equal)
Mixed/readpct-100-12 0.00B 0.00B ~ (all equal)
Parallel/readpct-0-12 1.00B ± 0% 1.00B ± 0% ~ (all equal)
Parallel/readpct-20-12 0.00B 0.00B ~ (all equal)
Parallel/readpct-50-12 0.00B 0.00B ~ (all equal)
Parallel/readpct-80-12 0.00B 0.00B ~ (all equal)
Parallel/readpct-100-12 0.00B 0.00B ~ (all equal)
name old allocs/op new allocs/op delta
IncSet-12 3.00 ± 0% 3.00 ± 0% ~ (all equal)
IncGet-12 0.00 0.00 ~ (all equal)
DecSet-12 3.00 ± 0% 3.00 ± 0% ~ (all equal)
DecGet-12 0.00 0.00 ~ (all equal)
Mixed/readpct-0-12 1.00 ± 0% 1.00 ± 0% ~ (all equal)
Mixed/readpct-20-12 0.00 0.00 ~ (all equal)
Mixed/readpct-50-12 0.00 0.00 ~ (all equal)
Mixed/readpct-80-12 0.00 0.00 ~ (all equal)
Mixed/readpct-100-12 0.00 0.00 ~ (all equal)
Parallel/readpct-0-12 1.00 ± 0% 1.00 ± 0% ~ (all equal)
Parallel/readpct-20-12 0.00 0.00 ~ (all equal)
Parallel/readpct-50-12 0.00 0.00 ~ (all equal)
Parallel/readpct-80-12 0.00 0.00 ~ (all equal)
Parallel/readpct-100-12 0.00 0.00 ~ (all equal)
|
2025-04-01T06:40:21.301161
| 2024-04-07T09:33:28
|
2229654074
|
{
"authors": [
"jhonnybail",
"seanmayer"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10556",
"repo": "seanmayer/demeter-service",
"url": "https://github.com/seanmayer/demeter-service/pull/1"
}
|
gharchive/pull-request
|
Add pom.xml and application.properties files, and create main and test classes
Just a start
|
2025-04-01T06:40:21.302634
| 2020-01-10T16:24:53
|
548170978
|
{
"authors": [
"seanmiddleditch",
"yannbertrand"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10557",
"repo": "seanmiddleditch/gha-publish-to-git",
"url": "https://github.com/seanmiddleditch/gha-publish-to-git/pull/2"
}
|
gharchive/pull-request
|
Avoid .git folder rsync delete
When no target folder is given, the .git folder gets deleted, so the bash script cannot commit.
Excluding the existing .git folder avoids that issue 🙂
Thank you!
|
2025-04-01T06:40:21.338880
| 2016-05-04T02:41:44
|
152918135
|
{
"authors": [
"kramer",
"yuriyfilonov"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10558",
"repo": "searchbox-io/Jest",
"url": "https://github.com/searchbox-io/Jest/issues/351"
}
|
gharchive/issue
|
IllegalAccessError is thrown when Jest tries to access internal deepCopy method com.google.gson.JsonObject class
I am using Jest client to work with Elasticsearch from inside Spark application. While it runs as expected when executing Spark in local mode, it fails in cluster mode with the following exception:
Exception in thread "main" java.lang.IllegalAccessError: tried to access method com.google.gson.JsonObject.deepCopy()Lcom/google/gson/JsonObject; from class com.google.gson.GsonUtils
at com.google.gson.GsonUtils.deepCopy(GsonUtils.java:8)
at io.searchbox.core.SearchResult.extractHit(SearchResult.java:114)
at io.searchbox.core.SearchResult.getHits(SearchResult.java:82)
at io.searchbox.core.SearchResult.getHits(SearchResult.java:63)
at io.searchbox.core.SearchResult.getHits(SearchResult.java:59)
There was a commit (cdd62a816a01b0d9d11262792e685396ab208e74, made on Dec 26, 2015) that introduced the GsonUtils class, which calls the package-level deepCopy method of com.google.gson.JsonObject simply by being defined in the com.google.gson package. Accessing internal methods like this does not seem like good practice. It may cause problems when using multiple classloaders (just like in the Spark example above).
See the discussion that lead me to create that utils class here: https://github.com/google/gson/issues/760
Do you think it's a good idea to keep calling non-public methods by defining the gson package in Jest? Isn't it better to use some other way to clone objects that does not cause any exceptions?
I am confused. You would prefer allowing exceptions over adding one additional dependency?
As you mentioned, the issue occurs only under certain environments ("when using multiple classloaders (just like with Spark example above)"), which I would guess is not the prominent use case for this library. This decreases the severity of the exception in the big picture. On the other hand, you are suggesting to introduce a new dependency, which will come with its own baggage of possible bugs and maintenance needs, which makes it a "load" (or debt if you will) on the whole project.
Considering the above pro/con comparison it makes more sense (at least maintenance-wise) to not take this suggested fix into the project.
And to answer your question, no I don't think the existing utils class is a good solution, it is simply a workaround. I'd be more than happy to replace it with a similarly light weight solution.
Makes sense. I have found another solution to this problem. Please see PR https://github.com/searchbox-io/Jest/pull/353
|
2025-04-01T06:40:21.373840
| 2019-11-18T07:59:15
|
524198190
|
{
"authors": [
"cmonkey",
"codecov-io"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10559",
"repo": "seata/seata",
"url": "https://github.com/seata/seata/pull/1909"
}
|
gharchive/pull-request
|
bugfix: fix xidInterceptorType is null
Ⅰ. Describe what this PR did
Ⅱ. Does this pull request fix one issue?
fixes #1908
Ⅲ. Why don't you add test cases (unit test/integration test)?
Ⅳ. Describe how to verify it
Ⅴ. Special notes for reviews
Codecov Report
Merging #1909 into develop will decrease coverage by <.01%.
The diff coverage is 0%.
@@ Coverage Diff @@
## develop #1909 +/- ##
=============================================
- Coverage 55.43% 55.42% -0.01%
Complexity 2406 2406
=============================================
Files 428 428
Lines 14346 14348 +2
Branches 1699 1699
=============================================
Hits 7953 7953
- Misses 5677 5679 +2
Partials 716 716
Impacted Files                                          Coverage Δ             Complexity Δ
...c/main/java/io/seata/core/context/RootContext.java   36.17% <0%> (-1.61%)   9 <0> (ø)
|
2025-04-01T06:40:21.379405
| 2023-03-13T19:38:42
|
1622136142
|
{
"authors": [
"adam-sierakowski",
"nikhilweee"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10560",
"repo": "seatgeek/thefuzz",
"url": "https://github.com/seatgeek/thefuzz/issues/51"
}
|
gharchive/issue
|
Documentation?
Hi! Thanks for making this library publicly available. What's the best way to go through the documentation for this library? I can see that there's a short usage guide on the README but where can I find more details about the difference between ratio, partial_ratio, token_sort_ratio, token_set_ratio and so on?
It's difficult to tell at first glance what the output numbers even mean. At first, I thought it was like: "the bigger the number, the bigger the difference between the strings", but it seems to be the other way around (like a percentage of confidence that those two strings match?)
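To answer the score question concretely: the numbers are similarity percentages from 0 to 100, where higher means more similar and 100 means an exact match. thefuzz uses Levenshtein-based ratios internally; the following stdlib-only Python sketch (using difflib, not thefuzz's exact algorithm) illustrates the idea, including how a token-sort comparison ignores word order:

```python
from difflib import SequenceMatcher

def simple_ratio(a: str, b: str) -> int:
    """Similarity as a 0-100 score: 100 means identical, 0 means nothing in common."""
    return round(SequenceMatcher(None, a, b).ratio() * 100)

def token_sort_ratio(a: str, b: str) -> int:
    """Compare after sorting the words, so word order no longer matters."""
    norm = lambda s: " ".join(sorted(s.lower().split()))
    return simple_ratio(norm(a), norm(b))

print(simple_ratio("fuzzy wuzzy", "fuzzy wuzzy"))      # 100: identical strings
print(simple_ratio("fuzzy wuzzy", "wuzzy fuzzy"))      # lower: character order matters here
print(token_sort_ratio("fuzzy wuzzy", "wuzzy fuzzy"))  # 100: word order is ignored
```

The real library's `ratio`, `partial_ratio`, `token_sort_ratio`, and `token_set_ratio` differ mainly in how much preprocessing (substring matching, token sorting, token set intersection) they apply before scoring.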
|
2025-04-01T06:40:21.397640
| 2017-09-06T07:43:34
|
255510203
|
{
"authors": [
"oliveravanze",
"sebaferreras",
"sirin8"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10561",
"repo": "sebaferreras/Ionic3-MultiLevelSideMenu",
"url": "https://github.com/sebaferreras/Ionic3-MultiLevelSideMenu/issues/3"
}
|
gharchive/issue
|
Side menu always visible
Hello, very good work.
Is it possible to keep the side menu always visible?
Thank you for your great contribution.
Hi! Do you mean something similar to how the SplitPane works?
Yes, exactly this. Do you have an example of this?
Do you have an example for split pane with MultiLevelSideMenu
Hi @sirin8. I haven't tried yet but it should work as it is since the component only adds some content to the side menu.
@sirin8, @oliveravanze this weekend I'll create a demo using the split pane (will add the link in this issue). Thank you both for the feedback.
|
2025-04-01T06:40:21.421180
| 2023-01-04T18:30:26
|
1519407940
|
{
"authors": [
"sebbo2002"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10562",
"repo": "sebbo2002/action-template-updater",
"url": "https://github.com/sebbo2002/action-template-updater/pull/30"
}
|
gharchive/pull-request
|
🎉 1.0.3
ℹ️ About this release
Version: 1.0.3
Type: patch
Last Release: 1.0.2 (12/11/2022, 2:53:12 PM) [?]
Commits to merge: 28 [?]
🐛 Bug Fixes
Use bot token to check pull requests (559064c)
📦 Dependencies
Update simple-git from ^3.15.0 to ^3.16.0
:tada: This PR is included in version 1.0.3 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T06:40:21.422275
| 2015-01-20T09:06:11
|
54858692
|
{
"authors": [
"Jragonmiris",
"sebcrozet"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10563",
"repo": "sebcrozet/nalgebra",
"url": "https://github.com/sebcrozet/nalgebra/issues/78"
}
|
gharchive/issue
|
Add pseudoinverse, SVD
Pseudoinverse and singular value decomposition calculation, while difficult, is useful if nphysics (or some animation library) ever wants to implement Jacobian-based inverse kinematics methods. It's doable with just the transpose, but incredibly imprecise.
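For reference, the two requested operations are directly related: given the singular value decomposition, the Moore-Penrose pseudoinverse follows immediately (standard definitions, not nalgebra-specific API):

```latex
A = U \Sigma V^\top
\qquad\Rightarrow\qquad
A^{+} = V \Sigma^{+} U^\top
```

where $\Sigma^{+}$ is obtained by inverting each nonzero singular value of $\Sigma$ and transposing. For Jacobian-based inverse kinematics this gives the least-squares joint update $\Delta\theta = J^{+}\,\Delta x$, which is far more stable than the transpose approximation $\Delta\theta \approx J^\top \Delta x$ mentioned above.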
SVD and pseudo-inverse will be released on the next version of nalgebra. See #274 .
|
2025-04-01T06:40:21.427824
| 2018-05-26T09:08:47
|
326723351
|
{
"authors": [
"floere",
"sebcrozet"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10564",
"repo": "sebcrozet/ncollide",
"url": "https://github.com/sebcrozet/ncollide/issues/202"
}
|
gharchive/issue
|
Triangular shape with Shape+Clone?
Hi dear ncollide maintainers 👋
I'd like to use a 2d triangular shape for collisions – which implements both Shape and Clone.
First, I tried to use shape::Triangle, but even though it's available in ncollide2d, it appears to use the feature switch dim3, http://www.ncollide.org/rustdoc/src/ncollide2d/shape/shape_impl.rs.html#79-84.
Then, I looked at shape::ConvexPolygon, which appears to be the way to go for a triangular 2d shape. However, ConvexPolygon does not implement Clone.
Is there a specific reason for this? And am I looking at the wrong shapes?
I am using ncollide2d 0.15.3 – really am impressed by this library so far, great work! 😊
Hi! ConvexPolygon is indeed the right shape to use here. Triangle does not implement Shape in 2D. ConvexPolygon should implement Clone but it seems I forgot to make it so.
@sebcrozet Thanks for the quick answer, Sébastien, much appreciated! I'm relieved to hear it's just an oversight 😊 I'm closing this – if it's against repo policy, please reopen. Thanks again 😊
@floere Actually, there is usually no need to close an issue manually (unless if it is not going to be fixed). It will automatically be closed by github as soon as the corresponding pull request #203 is merged. That way, you will get a notification from github as soon as the fix is merged to master (and I usually publish it to crates.io soon after that).
@sebcrozet Ah I see, thanks – I'll not close it in the future 😊
|
2025-04-01T06:40:21.447581
| 2011-12-01T07:41:05
|
2411969
|
{
"authors": [
"blackfalcon",
"secca"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10565",
"repo": "secca/Coda-Sass-Plug-in",
"url": "https://github.com/secca/Coda-Sass-Plug-in/issues/1"
}
|
gharchive/issue
|
How do you install this?
I cloned the files, which placed a folder on my desktop. OK, I renamed the folder to 'Sass.codaplugin' and this made it look like a plugin.
But when I place this in the plug-ins folder, it is not being picked up by Coda?
Lost here ;(
The files here on GitHub are just the source files for the plugin and require Xcode to compile the actual Sass.plugin. I will add the actual Sass.codaplugin file to GitHub soon. In the meantime you can download the Sass.codaplugin from this page https://sites.google.com/site/codasassplugin/
|
2025-04-01T06:40:21.473094
| 2023-12-31T00:48:29
|
2060898374
|
{
"authors": [
"GreetingsIExist",
"ryanmcgrath"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10566",
"repo": "secretkeysio/GCAdapterDriver",
"url": "https://github.com/secretkeysio/GCAdapterDriver/issues/21"
}
|
gharchive/issue
|
Cemu for macOS
As far as I know, this work is more focused on getting the adapters working on Dolphin.
Is it possible to get this driver to work on the Wii U emulator Cemu for macOS? More specifically for the Super Smash Bros. for Wii U game, which allows the usage of GameCube controllers through an adapter?
Have asked regarding support on the Cemu Discord server and the reply I received from a mod was, quote "No idea! I don't think anyone has tried." "If they show up in the Mac's Settings/Controllers panel, you should be good to go with SDLcontroller at the Cemu side."
This driver mostly stops macOS from grabbing the device and making apps unable to read it; CEMU would need to add support for natively reading it - or if your adapter supports “PC mode”, SDL should be able to pick that up.
With the driver, the adapter only "turns on" when using Dolphin. Otherwise it's just connected and detected on macOS but not usable. Even with the adapter being "activated" with Dolphin, it cannot be used outside of Dolphin.
So, is that the issue? Using the official adapter with no PC/Wii U switch, therefore not being usable with SDL?
That being said, you're right. There's no "native support" like the one Dolphin has.
Presented a "request" regarding "native support" on Cemu's Discord server. Will see depending on their response if the idea may be presented on the "Issues" section of their GitHub.
Since this request is more on Cemu's side, I shall close this issue.
|
2025-04-01T06:40:21.480486
| 2021-05-01T02:51:18
|
873512905
|
{
"authors": [
"Danmasanii",
"hectorkambow"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10567",
"repo": "section-engineering-education/engineering-education",
"url": "https://github.com/section-engineering-education/engineering-education/issues/2196"
}
|
gharchive/issue
|
INTRODUCTION TO VITE 2.0
Vite.js ("vite" is a French word for "fast" or "quick", pronounced /vit/) was developed by Evan You, the creator of Vue.js.
Vite.js 2.0 is a front-end build tool for designing rich, elegant, and sleek user interfaces; it is referred to as the next-generation framework for front-end tooling.
In this tutorial, we are going to install Vite 2.0 with all of its dependencies and explore its advantages and the reasons for choosing Vite.
WHY VITE?
Vite.js has the following advantages over other frameworks:
Instant server-side rendering
CSS preprocessor
Mono repo support
Lightning-fast HMR
Rich features
Optimized building
Fully typed APIs
Faster dependency pre-building
PREREQUISITES
To be able to use Vite 2.0, we would need the following:
Good understanding of Node.Js
Good understanding of React.Js
Node.js version 12 or later. It is assumed that you have Node.js installed on your system; if you haven't, you can download the latest compatible version for your OS from here
INSTALLATION PROCEDURE
There are different ways to install Vite, but in this tutorial we will focus on installing it using npm and Yarn.
Installing using npm
Open your terminal, navigate into the folder you want to install Vite, and run the following commands:
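The commands themselves appear to have been omitted from the draft. For Vite 2.0, the npm scaffolding flow was roughly the following (`my-vite-app` is a placeholder project name; check the Vite docs for the current command form):

```bash
# Scaffold a new Vite project (prompts for a framework template, e.g. vanilla, vue, react)
npm init vite@latest my-vite-app

# Enter the project, install dependencies, and start the dev server
cd my-vite-app
npm install
npm run dev
```

With Yarn, the equivalent is `yarn create vite my-vite-app`, followed by `yarn` and `yarn dev`.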
Key takeaways:
By the end of this tutorial, the reader will be able to differentiate Vite.js from other frameworks, install Vite on their machine, and understand its advantages and relevance in front-end development.
References:
www.vitejs.dev
www.google.com
www.stackoverflow.com
seems like a helpful topic - approved @Danmasanii
Just closing this TOPIC to make room in the queue - it can be REOPENED whenever the PR is ready 👍
https://github.com/section-engineering-education/engineering-education/pull/2462
|
2025-04-01T06:40:21.484035
| 2022-02-18T08:57:54
|
1142664664
|
{
"authors": [
"KARIUKIJOHN",
"WanjaMIKE"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10568",
"repo": "section-engineering-education/engineering-education",
"url": "https://github.com/section-engineering-education/engineering-education/issues/6721"
}
|
gharchive/issue
|
C Program to Implement Playfair Cipher Algorithm
C Program to Implement Playfair Cipher Algorithm
Introduction
Encryption and decryption require a cipher; the encoded output is called the ciphertext. Different algorithms transform plain text into ciphertext. The Playfair cipher was the first practical digraph substitution cipher. In this type of cipher, we encrypt a pair of letters (a digraph) instead of a single letter.
Key takeaways
By the end of the article, the learners should have covered:
Generating key squares
Encryption techniques using the Playfair cipher
Decryption algorithm
Advantages and disadvantages of using this algorithm
Article quality
This article will provide a thorough understanding of the recommended subject. The topic is suitable for beginners since everything is thoroughly described and detailed. For optimal comprehension of the ideas needed, code snippets will be used.
References
N/A
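Although the proposal targets C, the key-square and digraph rules it describes can be sketched compactly. This Python sketch (illustrative only, using the common I/J-merge and X-padding conventions) shows what the article would implement:

```python
def key_square(key: str) -> list[str]:
    """Build the 5x5 Playfair key square: keyword letters first (deduplicated),
    then the rest of the alphabet. I and J share a cell, as is conventional."""
    seen = []
    for ch in key.upper().replace("J", "I") + "ABCDEFGHIKLMNOPQRSTUVWXYZ":
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    return ["".join(seen[r * 5:(r + 1) * 5]) for r in range(5)]


def playfair_encrypt(plaintext: str, key: str) -> str:
    square = key_square(key)
    pos = {ch: (r, c) for r, row in enumerate(square) for c, ch in enumerate(row)}

    # Split the text into digraphs: J becomes I, doubled letters are split
    # with an X, and a trailing odd letter is padded with an X.
    letters = [c for c in plaintext.upper().replace("J", "I") if c.isalpha()]
    pairs, i = [], 0
    while i < len(letters):
        a = letters[i]
        if i + 1 < len(letters) and letters[i + 1] != a:
            b, i = letters[i + 1], i + 2
        else:
            b, i = "X", i + 1
        pairs.append((a, b))

    out = []
    for a, b in pairs:
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:        # same row: take the letter to the right (wrapping)
            out += [square[ra][(ca + 1) % 5], square[rb][(cb + 1) % 5]]
        elif ca == cb:      # same column: take the letter below (wrapping)
            out += [square[(ra + 1) % 5][ca], square[(rb + 1) % 5][cb]]
        else:               # rectangle rule: swap the two columns
            out += [square[ra][cb], square[rb][ca]]
    return "".join(out)


print(key_square("monarchy"))                      # ['MONAR', 'CHYBD', 'EFGIK', 'LPQST', 'UVWXZ']
print(playfair_encrypt("instrument", "monarchy"))  # GATLMZCLRQ
```

Decryption is the mirror image: shift left/up instead of right/down, and swap columns the same way for the rectangle case.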
This topic appears a bit over-saturated - search. We focus more on unique projects. Feel free to suggest another topic. Thanks
|
2025-04-01T06:40:21.499003
| 2021-06-01T16:05:34
|
908458773
|
{
"authors": [
"JuliusGikonyoNyambura",
"hectorkambow",
"zolomohan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10569",
"repo": "section-engineering-education/engineering-education",
"url": "https://github.com/section-engineering-education/engineering-education/pull/2457"
}
|
gharchive/pull-request
|
Linux container (LXC ) vs. Docker container , what's the difference and why Docker is better
A LCX vs Docker article and the related images
Article checklist
ATTENTION: In your PR - add the ISSUE (topic form) # that this PR corresponds to. You can do this by also adding a comment that states "This fixes #(enter your Idea Content Form #)" to link and close your Idea Content (issue) when the PR is merged.
NOTE: (Please ensure that you have only one open issue + linked pull request at a time. This will ensure that we complete the article in a timely manner from inception to publishing.)
If you have not already please go over our Resources Page for more tips and tricks on how to improve our overall technical writing.
Preliminary Checks - Formatting and Structure
[ ] Does your article follow any of the suggested structure formats? - see example formats
[ ] Is your article properly formatted in Markdown? - see Github Markdown guide
[ ] Have you used the correct folder and file structure? - see Contribution Guidelines
[ ] Is your article over 750 words? - Tool: Wordcounter.net
[ ] Have you used our preferred heading structure? - H3 (###) and up
[ ] Does your article provide enough value and detail about your topic? - Articles should be clear, accurate, and fully explained.
[ ] Can your article be understood by beginners? - Assume the audience is smart but has no prior exposure to the common terminology in your article.
[ ] Have you included a hero image that is 600x400 pixels and have the copyright to use it? - Tools: Pixlr Image Editor for resizing and Unsplash for Creative Commons images
[ ] In your PR - add a comment that states "This fixes #(enter your Idea Content Form #)" to link and close your Idea Content (issue) when the PR is merged.
Grammar & plagiarism checks
[ ] Have you spellchecked and grammar checked your article? - Tools: Write&Improve - ProWritingAid - Grammarly
[ ] Please place your article through a 3rd party plagiarism checker? We suggest using Quetext, this tool is free and has a daily limit. We typically accept articles with 10% or less.
[ ] Have you checked your article for readability? - Tool: Hemmingway
[ ] Have you added sources for quotes and images that aren't yours?
Technical checks
[ ] Are your code snippets properly formatted for syntax highlighting - see Syntax guide
Contribution guidelines
For first-time contributors and for more details, see Contributing Guidelines
@JuliusGikonyoNyambura Congrats on your first article!✨
Please add a bio and a profile pic to the content/author folder in your branch. Take a look at how the other authors created their bio folders and follow the structure.
@JuliusGikonyoNyambura Can you please add an author page? We can move to a final review once it's done.
plagiarism check done
@JuliusGikonyoNyambura any updates?
@JuliusGikonyoNyambura any updates?
Sorry for the delay. I have added the author @zolomohan
nice work @JuliusGikonyoNyambura
please go through my final edits to see what can be improved for future articles.
cc @zolomohan
Thank you @hectorkambow. There is one image not visible on the blog, check it please.
https://www.section.io/engineering-education/lxc-vs-docker-what-is-the-difference-and-why-docker-is-better/
|
2025-04-01T06:40:21.513022
| 2021-08-15T20:36:28
|
971218800
|
{
"authors": [
"Neema-2016",
"Qodestackr",
"ahmadmardeni1",
"marienjus"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10570",
"repo": "section-engineering-education/engineering-education",
"url": "https://github.com/section-engineering-education/engineering-education/pull/3361"
}
|
gharchive/pull-request
|
Creating a React app using Vite
This fixes #3142
Article checklist
ATTENTION: In your PR - add the ISSUE (topic form) # that this PR corresponds to. You can do this by also adding a comment that states "This fixes #(enter your Idea Content Form #)" to link and close your Idea Content (issue) when the PR is merged.
NOTE: (Please ensure that you have only one open issue + linked pull request at a time. This will ensure that we complete the article in a timely manner from inception to publishing.)
If you have not already please go over our Resources Page for more tips and tricks on how to improve our overall technical writing.
Preliminary Checks - Formatting and Structure
[x] Is the article you are submitting an in-depth and unique article? Does it go beyond what is in the official docs and what is covered in other blog sites. See these articles as examples.
[x] Does your article follow any of the suggested structure formats? - see example formats
[x] Is your article properly formatted in Markdown? - see Github Markdown guide
[x] Have you used the correct folder and file structure? - see Contribution Guidelines
[x] Is your article over 750 words? - Tool: Wordcounter.net
[x] Have you used our preferred heading structure? - H3 (###) and up
[x] Does your article provide enough value and detail about your topic? - Articles should be clear, accurate, and fully explained.
[x] Can your article be understood by beginners? - Assume the audience is smart but has no prior exposure to the common terminology in your article.
[x] Have you included a hero image that is 600x400 pixels and have the copyright to use it? - Tools: Pixlr Image Editor for resizing and Unsplash for Creative Commons images
[x] In your PR - add a comment that states "This fixes #(enter your Idea Content Form #)" to link and close your Idea Content (issue) when the PR is merged.
Grammar & plagiarism checks
[x] Have you spellchecked and grammar checked your article? - Tools: Write&Improve - ProWritingAid - Grammarly
[x] Please place your article through a 3rd party plagiarism checker? We suggest using Quetext, this tool is free and has a daily limit. We typically accept articles with 10% or less.
[x] Have you checked your article for readability? - Tool: Hemmingway
[x] Have you added sources for quotes and images that aren't yours?
Technical checks
[x] Are your code snippets properly formatted for syntax highlighting - see Syntax guide
Contribution guidelines
For first-time contributors and for more details, see Contributing Guidelines
I'd wish to review this article
If this not a duplicate, I'm interested in reviewing this one.
Great work ladies @Neema-2016 @WanjaMIKE 🚀
Please go over the changes I made.
Thank you @ahmadmardeni1 and @WanjaMIKE also😊
|
2025-04-01T06:40:21.528406
| 2021-09-23T11:52:10
|
1005337323
|
{
"authors": [
"ahmadmardeni1",
"mercymeave",
"rene-shigolah"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10571",
"repo": "section-engineering-education/engineering-education",
"url": "https://github.com/section-engineering-education/engineering-education/pull/3983"
}
|
gharchive/pull-request
|
A dive into Message Queue Telemetry Transport Protocol
article
Article checklist
Pre-submission checks
Please ensure that you have only one open topic suggestion (issue) + in-review article (linked pull request) at a time. This will ensure that we complete the review process in a timely manner from inception to publishing
If you have not already, please go over our Resources Page for more tips and tricks on how to improve your overall technical writing so reviews are swifter, increase the chance of a payout and provide more value to our readers.
For first-time contributors and for more details on our submission guidelines, see our Contributing Guidelines.
Formatting and structure checks
[x] Have you used the correct folder and file structure? - see Contribution Guidelines
[x] Is your article properly formatted in Markdown? - see Github Markdown guide
[x] Have you used our preferred heading structure? - H3 (###) and up
[x] Does your article follow any of the suggested structure formats? - see example formats
[x] Is your article over 750 words? - Tool: Wordcounter.net
[x] Is the article you are submitting an in-depth and unique article? Does it go beyond what is in the official docs and what is covered in other blog sites. See these articles as examples.
[x] Does your article provide enough value and detail about your topic? - Articles should be clear, accurate, and fully explained.
[x] Can your article be understood by beginners? - Assume the audience is smart but has no prior exposure to the common terminology in your article.
[x] Have you included a hero image that is 600x400 pixels and have the copyright to use it? - Tools: Pixlr Image Editor for resizing and Unsplash for Creative Commons images
Grammar & plagiarism checks
[x] Have you spellchecked and grammar checked your article? - Tools: Write&Improve - ProWritingAid - Grammarly
[x] Please place your article through a 3rd party plagiarism checker. We suggest using Quetext, this tool is free and has a daily limit. We typically accept articles with 10% or less.
[x] Have you checked your article for readability? - Tool: Hemmingway
[ ] Have you added sources for quotes and images that aren't yours?
Technical checks
[x] Are your code snippets properly formatted for syntax highlighting - see Syntax guide
[x] Have you checked your code runs correctly and you've highlighted all necessary dependencies for installation?
[x] Are the software programs and packages you're highlighting in your article up to date, using current versions and not deprecated?
Topic suggestion this closes
Remove the backticks and add the issue number below to link and close your Topic Suggestion (issue) when your article has been published (PR has been merged). See this video for more details.
This closes #3432
Finally, delete the article checklist notes in blockquotes and submit your PR. We look forward to reviewing your article.
Upon running your article through our 3rd party plagiarism checker it seemed to raise a few flags and the % was higher than we typically accept.
Please see attached PDF - and revisit the article to ensure we are contributing wholly unique and original content.
Be sure to see our resources page to see more info on plagiarism and what is considered as such.
A dive into Message Queue Telemetry Transport Protocol#3983
Hello @ahmadmardeni1 and @mercymeave I have corrected the plagiarism error.
Please check. Thank you
hello @mercymeave hope you are doing great.
just curious about the progress of the review. Thank you
Working on the finals.
|
2025-04-01T06:40:21.544404
| 2021-12-20T22:08:05
|
1085244668
|
{
"authors": [
"Qodestackr",
"WanjaMIKE",
"katungi",
"mercymeave"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10572",
"repo": "section-engineering-education/engineering-education",
"url": "https://github.com/section-engineering-education/engineering-education/pull/5687"
}
|
gharchive/pull-request
|
[Languages]: Building a Screen Recording Application Using JavaScript
Article checklist
Attention
Our Peer Reviewer and Content Moderator teams do NOT provide any revisions services. All revisions and edits should be completed on your own forked repo (as to not take up room in the queue) in order for our team to review them all in a timely manner.
ANY ARTICLE SUBMITTED WITH GLARING ERRORS WILL BE IMMEDIATELY CLOSED.
As a rule of thumb - please be sure to only submit articles (pull requests) that are fully polished and ready to be published. Be sure to go through our resources documents for extra and 3rd party (vetted) resources to help improve overall technical writing.
Pre-submission checks
Please ensure that you have only one open topic suggestion (issue) + in-review article (linked pull request) at a time. This will ensure that we complete the review process in a timely manner from inception to publishing
If you have not already, please go over our Resources Page for more tips and tricks on how to improve your overall technical writing so reviews are swifter, increase the chance of a payout and provide more value to our readers.
For first-time contributors and for more details on our submission guidelines, see our Contributing Guidelines.
Formatting and structure checks
[x] Have you used the correct folder and file structure? - see Contribution Guidelines
[x] Is your article properly formatted in Markdown? - see Github Markdown guide
[x] Have you used our preferred heading structure? - H3 (###) and up
[x] Does your article follow any of the suggested structure formats? - see example formats
[x] Is your article over 750 words? - Tool: Wordcounter.net
[x] Is the article you are submitting an in-depth and unique article? Does it go beyond what is in the official docs and what is covered in other blog sites. See these articles as examples.
[x] Does your article provide enough value and detail about your topic? - Articles should be clear, accurate, and fully explained.
[x] Can your article be understood by beginners? - Assume the audience is smart but has no prior exposure to the common terminology in your article.
[x] Have you included a hero image that is 600x400 pixels, under 300KB in size, and have the copyright to use it? - Tools: Pixlr Image Editor for resizing and Unsplash for Creative Commons images
Grammar & plagiarism checks
[x] Have you spellchecked and grammar checked your article? - Tools: Write&Improve - ProWritingAid - Grammarly
[x] Please place your article through a 3rd party plagiarism checker. We suggest using Quetext, this tool is free and has a daily limit. We typically accept articles with 10% or less.
[x] Have you checked your article for readability? - Tool: Hemmingway
[x] Have you added sources for quotes and images that aren't yours?
Technical checks
[x] Are your code snippets properly formatted for syntax highlighting - see Syntax guide
[x] Have you checked your code runs correctly and you've highlighted all necessary dependencies for installation?
[x] Are the software programs and packages you're highlighting in your article up to date, using current versions and not deprecated?
Topic suggestion this closes
Remove the backticks and add the issue number below to link and close your Topic Suggestion (issue) when your article has been published (PR has been merged). See this video for more details.
This closes #5470
Finally, delete the article checklist notes in blockquotes and submit your PR. We look forward to reviewing your article.
This fixes #5470
Hello @Qodestackr , Glad to see you are back to give the community more content. Please remember to fill in the checkboxes in the PR as you wait for an available member of the Peer Review team to jump on the PR.
Hey @katungi, let me jump on this. I love javascript :smile:
Hey @mercymeave , sure thing. Have fun 🥳
Plagiarism check done.
|
2025-04-01T06:40:21.574269
| 2012-02-17T11:47:14
|
3266371
|
{
"authors": [
"Frank004",
"TangMonk",
"avit",
"dgilperez",
"joelmeyerhamme",
"mruokojo",
"seejohnrun"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10573",
"repo": "seejohnrun/ice_cube",
"url": "https://github.com/seejohnrun/ice_cube/issues/68"
}
|
gharchive/issue
|
Localization for rule descriptions
Hi!
I could not find any specs on how to localize the rule string result. I need to localize, for example, the rule string "Weekly, on tuesdays".
Hey Mikko,
Internationalization has been a long dream of mine for the project - and a recent refactor has put me in a really good place to integrate it. I was waiting for someone to ask to build it, to avoid building things no one wanted.
I'll take this on as a feature branch, and update here as appropriate. Thanks!
JC
+1
+1 for this!
Is there a way to add it manually for one language? Thank you, great gem.
You could copy a locale file and make a pull request. What language would that be? The reason I haven't been working on this is that I lack an understanding of how to generalise the locale format so that it works with as many different languages as possible.
@Frank004 it looks like this fork is working. You can PR them with your language file.
https://github.com/gocardless/ice_cube
@joelmeyerhamme and @dgilperez Hi I got it working using this one gem 'ice_cube' , :git => 'git://github.com/dgilperez/ice_cube.git'. Thanks for this great gem. Save us days and days of code
This was done in #311. Thanks everyone!
|
2025-04-01T06:40:21.578557
| 2021-09-30T06:07:48
|
1011752264
|
{
"authors": [
"etaoins"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10574",
"repo": "seek-oss/wingman",
"url": "https://github.com/seek-oss/wingman/pull/629"
}
|
gharchive/pull-request
|
Set input type for SpecifiedPersonForm email/phone
This makes little difference on desktop browsers, but causes mobile software keyboards to customise their layout.
|
2025-04-01T06:40:21.620866
| 2023-02-23T20:40:03
|
1597498454
|
{
"authors": [
"cherrera2001",
"liz-luft"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10575",
"repo": "seeq12/seeq-email-addon",
"url": "https://github.com/seeq12/seeq-email-addon/issues/8"
}
|
gharchive/issue
|
"Bad Gateway" Error
I recently installed the new email add-on, and the installation succeeded. However, the emails are not sending. In the "_Job Results" folder, errors show "Bad Gateway"
It appears that the email service is not running on your system. If you have administrator access can you check the value of Components/EmailerService/Enabled within Admin\Configuration with the advanced flag?
Thanks for the quick response. You are correct in that I did not have that value set to True. However, after changing it, I still get the same result. Does the Seeq server need to be restarted or anything?
Yes it would. Please file a ticket in our support portal https://seeq.atlassian.net/servicedesk/customer/portal/3 for assistance enabling the email service? Thanks!
From: liz-luft @.>
Sent: Thursday, February 23, 2023 2:17 PM
To: seeq12/seeq-email-addon @.>
Cc: Chris Herrera @.>; Comment @.>
Subject: Re: [seeq12/seeq-email-addon] "Bad Gateway" Error (Issue #8)
Thanks for the quick response. You are correct in that I did not have that value set to True. However, after changing it, I still get the same result. Does the Seeq server need to be restarted or anything?
—
Reply to this email directly, view it on GitHubhttps://github.com/seeq12/seeq-email-addon/issues/8#issuecomment-1442443292, or unsubscribehttps://github.com/notifications/unsubscribe-auth/ABNGCD6L3BDY6ZTJEXAPGDDWY7HTPANCNFSM6AAAAAAVGDHF3U.
You are receiving this because you commented.Message ID<EMAIL_ADDRESS>
|
2025-04-01T06:40:21.659970
| 2022-10-12T10:10:27
|
1405940256
|
{
"authors": [
"hbrls",
"silesky"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10576",
"repo": "segmentio/analytics-next",
"url": "https://github.com/segmentio/analytics-next/pull/623"
}
|
gharchive/pull-request
|
F: update import code style
This line confused my IDE a little.
As I searched through the project, only a few files use import @. Most code prefers import ../.
Are they legacy code from previous repos?
Will fix more and include a changeset if approved.
[] I've included a changeset (psst. run yarn changeset. Read about changesets here).
This was added in: https://github.com/segmentio/analytics-next/commit/86b98572a24bfc7c8b6feea8a6feef1bdbe9202a
|
2025-04-01T06:40:21.661172
| 2023-05-03T19:34:35
|
1694678120
|
{
"authors": [
"silesky"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10577",
"repo": "segmentio/analytics-next",
"url": "https://github.com/segmentio/analytics-next/pull/855"
}
|
gharchive/pull-request
|
Release major node version
Other than a version bump, no real changes.
Looks good to me - will this be automatically updated without the version tag to 1.0.0 with the patch release?
Yep
|
2025-04-01T06:40:21.697918
| 2024-04-23T12:44:46
|
2258787688
|
{
"authors": [
"KWLandry-acoustic",
"pwseg"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10578",
"repo": "segmentio/segment-docs",
"url": "https://github.com/segmentio/segment-docs/pull/6425"
}
|
gharchive/pull-request
|
Acoustic Catalog initial submit
Initial Catalog entry for Acoustic (Actions) Destination
Merge timing
<!-- When should this get merged/published?
ASAP once approved, please.
Hi Folks,
As it's been two weeks without an update, can we get an idea of the timing to complete a review?
Thanks,
Thank you Thomas,
Appreciate all the help on getting this over the line,
Kip
Kip W. Landry
WW Solutions Architect
[signature_1720021682]
Email: @.*** @.***>
How Can Acoustic Help?
Acoustic Help Centerhttps://help.goacoustic.com/hc/en-us
Subscribe for Product Updates and Releaseshttps://acoustic.com/client-signup/
New Product Releases
Acoustic Support Portalhttps://support.goacoustic.com/
Product Status Pages for Outages or Reduced Servicehttps://status.goacoustic.com/
Insights Help Documentationhttps://help.goacoustic.com/hc/en-us/categories/12538307189657-Insights
Acoustic Support Chathttps://support.goacoustic.com/
Insights Academy Traininghttps://learn.goacoustic.com/lms/index.php?r=site/sso&sso_type=saml&id_course=479&utm_source=shareButton&utm_content=course_link
Acoustic Academyhttps://learn.goacoustic.com/learn
Developer - REST API Referencehttps://api2.silverpop.com/restdoc/
Developer - API Referencehttps://developer.goacoustic.com/acoustic-campaign/reference/xml-api-overview-1
Feedback and Product Ideashttps://ideas.goacoustic.com/
Developer - Postman Collectionhttps://developer.goacoustic.com/acoustic-campaign/reference/postman-collection
Developer – API Endpoints and credentialshttps://developer.goacoustic.com/acoustic-campaign/reference/getting-started-with-oauth
From: Thomas Gilbert @.>
Date: Tuesday, May 14, 2024 at 1:08 PM
To: segmentio/segment-docs @.>
Cc: Kip Landry @.>, Mention @.>
Subject: Re: [segmentio/segment-docs] Acoustic Catalog initial submit (PR #6425)
@tcgilbert approved this pull request.
—
Reply to this email directly, view it on GitHubhttps://github.com/segmentio/segment-docs/pull/6425#pullrequestreview-2055975815, or unsubscribehttps://github.com/notifications/unsubscribe-auth/AYV2WAF34P3NDTV35SQKQW3ZCJAINAVCNFSM6AAAAABGU2EMRSVHI2DSMVQWIX3LMV43YUDVNRWFEZLROVSXG5CSMV3GSZLXHMZDANJVHE3TKOBRGU.
You are receiving this because you were mentioned.Message ID: @.***>
The information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain Acoustic proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments.
Hi Thomas,
Yes, we’re ready to move forward, we’ve had a beta with two Customers running for a fair few months now and there are no issues to resolve at this point, we’ll only be adding additional features from this point forward,
Many Thanks,
Kip
From: Thomas Gilbert @.>
Date: Tuesday, May 14, 2024 at 1:10 PM
To: segmentio/segment-docs @.>
Cc: Kip Landry @.>, Mention @.>
Subject: Re: [segmentio/segment-docs] Acoustic Catalog initial submit (PR #6425)
@KWLandry-acoustichttps://github.com/KWLandry-acoustic thanks for making those updates. @pwseghttps://github.com/pwseg from our docs team will give a final review and get this through for you.
Do you have a timeline for when you want your integration to go live? We have everything needed on our end. As long as you have tested your integration and ensured it is behaving as expected we can move it into Public Beta.
@tcgilbert Merging this now, it'll be live after our regular Thursday deploy.
|
2025-04-01T06:40:21.704192
| 2019-08-29T16:30:16
|
487061679
|
{
"authors": [
"jeremymchacon",
"mquintin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10579",
"repo": "segrelab/comets",
"url": "https://github.com/segrelab/comets/issues/18"
}
|
gharchive/issue
|
Bug fix: objective reaction not adhering to upper bound
Bug description: In FBACell.run, the upper bound on the biomass reaction is set to the rate that would fill the current cell. This overrides the limit set in the model file, which we may want to respect. For example, I discovered this when trying knockouts of every reaction: the model that knocks out the "growth" reaction still grew.
Sorry for not making this change myself, turns out my local code base is kind of screwy since I didn't properly migrate to the github repo. Could one of you please make the following changes?
Step 1: Modify FBACell.run(model[]) to change the part that says
/************************* SET MAX BIOMASS *****************************/
((FBAModel)models[i]).setObjectiveUpperBound((cParams.getMaxSpaceBiomass() - (Utility.sum(biomass) + Utility.sum(deltaBiomass))) / (biomass[i] * cParams.getTimeStep()));
to
/************************* SET MAX BIOMASS *****************************/
double bioUB = ((FBAModel)models[i]).getBaseUB()[((FBAModel)models[i]).getBiomassReaction()];
double capacityUB = (cParams.getMaxSpaceBiomass() - (Utility.sum(biomass) + Utility.sum(deltaBiomass))) / (biomass[i] * cParams.getTimeStep());
((FBAModel)models[i]).setObjectiveUpperBound(Math.min(bioUB, capacityUB));
Step 2: Add getBaseUB() to FBAModel
public double[] getBaseUB() { return baseUB; }
Minor typo: I dropped a parenthesis. It should be
double capacityUB = (cParams.getMaxSpaceBiomass() - (Utility.sum(biomass) + Utility.sum(deltaBiomass))) / (biomass[i] * cParams.getTimeStep());
so all addition/subtraction is in the numerator
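The intent of the fix can be sketched in Python purely for illustration (the function and parameter names here are made up, not part of COMETS): the effective objective upper bound is the smaller of the model file's own bound and the capacity-based bound.

```python
def effective_objective_upper_bound(
    base_ub: float,
    max_space_biomass: float,
    current_biomass: float,
    delta_biomass: float,
    biomass_i: float,
    time_step: float,
) -> float:
    # Capacity-based bound: the growth rate that would exactly fill
    # the remaining space in the cell over one time step.
    capacity_ub = (max_space_biomass - (current_biomass + delta_biomass)) / (
        biomass_i * time_step
    )
    # Respect the model file's own bound on the objective reaction:
    # a knocked-out growth reaction (base_ub == 0) must stay at 0.
    return min(base_ub, capacity_ub)

# A knockout (base_ub = 0) should never grow, whatever the capacity bound.
assert effective_objective_upper_bound(0.0, 10.0, 1.0, 0.0, 1.0, 0.1) == 0.0
```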
This should be fixed in my "signal bugfix" pull request.
|
2025-04-01T06:40:21.754591
| 2024-06-24T08:22:21
|
2369569671
|
{
"authors": [
"seisiuneer",
"talthent"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10580",
"repo": "seisiuneer/abctools",
"url": "https://github.com/seisiuneer/abctools/issues/13"
}
|
gharchive/issue
|
Support %%grid
The tool is amazing. I'm not sure why %%grid2 1 isn't working.
It should look something like this.
grid2 isn't supported by abcjs, so not available in my tool.
|
2025-04-01T06:40:21.770677
| 2016-12-06T06:10:47
|
193691322
|
{
"authors": [
"ChanduSirigiri",
"alexweissman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10581",
"repo": "select2/select2",
"url": "https://github.com/select2/select2/issues/4708"
}
|
gharchive/issue
|
dropdown option values are not adjusting with the parent dropdown when scrolling on modal popup
Prerequisites
[x] I have searched for similar issues in both open and closed tickets and cannot find a duplicate
[ ] The issue still exists against the latest master branch of Select2
[ ] This is not a usage question (Those should be directed to the community)
[ ] I have attempted to find the simplest possible steps to reproduce the issue
[ ] I have included a failing test as a pull request (Optional)
Steps to reproduce the issue
Expected behavior and actual behavior
When I follow those steps, I see...
I was expecting...
Environment
Browsers
[x] Google Chrome
[x] Mozilla Firefox
[x] Internet Explorer
Operating System
[x] Windows
[x] Mac OS X
[ ] Linux
[x] Mobile
Libraries
jQuery version:
Select2 version:
Isolating the problem
[ ] This bug happens on the examples page
[ ] The bug happens consistently across all tested browsers
[ ] This bug happens when using Select2 without other plugins
[ ] I can reproduce this bug in a jsbin
Currently, we are using a Bootstrap modal popup. The popup has ten dropdown boxes, and the fifth dropdown box has ten three dropdown option values. On clicking the fifth dropdown, I scroll on the modal popup. The option values related to the fifth dropdown remain in a static position; they do not scroll with respect to the parent dropdown.
It is unclear what the issue is here. Without a jsbin or screenshots, we cannot determine the problem.
|
2025-04-01T06:40:22.180356
| 2018-05-23T13:09:34
|
325697956
|
{
"authors": [
"ajwillo",
"selfuryon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10582",
"repo": "selfuryon/netdev",
"url": "https://github.com/selfuryon/netdev/issues/12"
}
|
gharchive/issue
|
Error handling - [Errno 111] Connect call failed ('<IP_ADDRESS>', 22)
How do we handle errors when a device cannot be connected to? Currently I get
File "/usr/local/lib/python3.6/asyncio/selector_events.py", line 480, in _sock_connect_cb
raise OSError(err, 'Connect call failed %s' % (address,))
ConnectionRefusedError: [Errno 111] Connect call failed ('<IP_ADDRESS>', 22)
I would just like to print an exception, i.e. "couldn't connect to X". At the moment it aborts my script.
I can see in base.py the timeout is set to 15 seconds. I tried to set the timeout in my params but got an error. How do we set the timeout value?
Thanks
EDIT: resolved the issue by upgrading netdev and using the following:
async def task(param):
    try:
        async with netdev.create(**param, timeout=5) as ios:
            # Test sending a simple command
            out = await ios.send_command("show ver")
            print(out)
    except Exception:
        print('unable to connect to device')
Yes, using exception handling is the right solution! One note about it: you may want to use asyncio.gather instead of asyncio.wait for awaiting coroutines, since wait doesn't retrieve exceptions by default. You can read about it here
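A minimal illustration of the difference (the task bodies here are stand-ins, not netdev calls): asyncio.gather with return_exceptions=True hands exceptions back as results instead of leaving them unretrieved.

```python
import asyncio

async def task(host):
    # Simulate a connection attempt; one host always fails.
    if host == "bad":
        raise ConnectionRefusedError("Connect call failed")
    return f"connected to {host}"

async def main():
    # gather() collects both results and exceptions when
    # return_exceptions=True; wait() would leave the exception
    # sitting on the task until someone retrieves it.
    return await asyncio.gather(
        task("good"), task("bad"), return_exceptions=True
    )

results = asyncio.run(main())
print(results)  # first item is a result string, second an exception object
```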
|
2025-04-01T06:40:22.186843
| 2020-12-16T11:44:34
|
768767042
|
{
"authors": [
"GeekyShacklebolt",
"gregorysemah"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10583",
"repo": "selwin/django-user_agents",
"url": "https://github.com/selwin/django-user_agents/issues/41"
}
|
gharchive/issue
|
InvalidTemplateLibrary raised, trying to load 'django_user_agents.templatetags.user_agents' (Django 3.0.7)
Hello,
Seems that Django 3.0.7 with python3.9 breaks things
Here is the error
django.template.library.InvalidTemplateLibrary: Invalid template library specified. ImportError raised when trying to load 'django_user_agents.templatetags.user_agents': cannot import name 'get_and_set_user_agent' from partially initialized module 'django_user_agents.utils' (most likely due to a circular import) (/blablablablablabla/python3.9/site-packages/django_user_agents/utils.py)
The problem appears randomly when restarting runserver (run and debugger with PyCharm)
Hey,
I have been facing the same issue. with Python3.9.0 and Django3.2.10 (recently updated from 3.2.8). Was getting the error:
File "/.../.venv/lib/python3.9/site-packages/django_user_agents/templatetags/user_agents.py", line 3, in
from ..utils import get_and_set_user_agent
ImportError: cannot import name 'get_and_set_user_agent' from partially initialized module 'django_user_agents.utils' (most likely due to a circular import) (/.../.venv/lib/python3.9/site-packages/django_user_agents/utils.py)
I brought down the Django Version back to 3.2.8 that solves the problem for me.
|
2025-04-01T06:40:22.197857
| 2023-04-19T06:03:12
|
1674226974
|
{
"authors": [
"cedoor",
"vimwitch"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10584",
"repo": "semaphore-protocol/semaphore",
"url": "https://github.com/semaphore-protocol/semaphore/issues/307"
}
|
gharchive/issue
|
Timing attacks in identity package
Describe the bug
The javascript BigInt implementation does not run in constant time. The semaphore identity derivation system is vulnerable to timing attacks.
To Reproduce
n/a
Expected behavior
Identity derivation happens in constant time.
Additional context
I'm not sure the best approach to solve this. One idea that may help is hashing random values when doing identity secret/commitment calculation.
e.g. when I calculate my identity secret my machine hashes my private keys, along with N other random values (discarding the random outputs).
This wouldn't work; the timing differences would still be the same over time.
Hey @vimwitch, can you give an example of that attack? The identities are generated client-side, so should an attacker access your local device to measure the right time?
Right, I was imagining a malicious website could query the identity commitment many times, or something like that. But the attack only works when signing different data, so they would have to time the zk proof generation. Either way I think it's out of scope for this package.
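The general class of issue can be sketched outside of Semaphore (Python here purely for illustration): a naive equality check returns at the first mismatching byte, so its running time leaks how much of a secret was guessed correctly, while a constant-time comparison such as hmac.compare_digest does not.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatch, so the timing leaks how many
    # leading bytes of the secret the attacker guessed correctly.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest runs in time independent of where the
    # inputs differ (for equal-length inputs).
    return hmac.compare_digest(a, b)

secret = b"supersecretvalue"
assert naive_equal(secret, b"supersecretvalue")
assert not constant_time_equal(secret, b"supersecretXXXXX")
```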
|
2025-04-01T06:40:22.205527
| 2021-05-11T13:18:09
|
887257096
|
{
"authors": [
"etiennedi"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10585",
"repo": "semi-technologies/weaviate",
"url": "https://github.com/semi-technologies/weaviate/issues/1576"
}
|
gharchive/issue
|
Text/String Vectorizers lead to different results because of iteration over map
Problem
When we collect all the string/text properties in an object we iterate over a map[string]interface{}. This means there is no order guarantee.
For the text2vec-contextionary module this has no impact since vectorization is just a mean calculation.
However, transformers modules are context-aware, so the same sentences in different orders will lead to different vectors.
Possible Solution
The simplest solution might be to sort the keys alphabetically, so that two subsequent vectorization runs of the same data will lead to the same results.
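A minimal sketch of the proposed fix, in Python rather than Go and with made-up names, just to illustrate the idea: iterate over sorted keys so two vectorization runs over the same object always see properties in the same order.

```python
def collect_text_properties(obj: dict) -> str:
    # Iterating a hash map directly gives no order guarantee in Go;
    # sorting the keys makes the concatenated corpus deterministic
    # between runs, so context-aware vectorizers produce stable output.
    parts = []
    for key in sorted(obj.keys()):
        value = obj[key]
        if isinstance(value, str):
            parts.append(f"{key} {value}")
    return " ".join(parts)

a = collect_text_properties({"title": "hello", "body": "world"})
b = collect_text_properties({"body": "world", "title": "hello"})
assert a == b == "body world title hello"
```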
Misc
The iteration happens here (Line 67):
https://github.com/semi-technologies/weaviate/blob/50cd6f5458cdd46c55dacb9ba47005742b7b7a30/modules/text2vec-transformers/vectorizer/objects.go#L66-L83
It seems that we have even accepted this randomness in the test where we don't compare exact strings (which would be flaky), but split the results and only compare that the elements match (ignoring order). Once this is fixed and there is a fixed order, this test can be adapted to match exact strings:
https://github.com/semi-technologies/weaviate/blob/50cd6f5458cdd46c55dacb9ba47005742b7b7a30/modules/text2vec-transformers/vectorizer/objects_test.go#L167-L169
closed in #1585
|
2025-04-01T06:40:22.220940
| 2023-11-12T00:00:35
|
1989162297
|
{
"authors": [
"darnocian"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10586",
"repo": "sempare/sempare-delphi-template-engine",
"url": "https://github.com/sempare/sempare-delphi-template-engine/issues/163"
}
|
gharchive/issue
|
TemplateRegistry raises exception that template is not resolved when there is a parsing error
Depending on interpretation, it could be seen as acceptable, but it is misleading and the error message could be clearer.
behaviour being tested - a parsing error will propagate out, as the template was found.
aimed at v1.8
|
2025-04-01T06:40:22.247086
| 2022-06-29T08:37:49
|
1288367125
|
{
"authors": [
"justinbaldwin",
"michellewong27",
"petru-gherghel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10587",
"repo": "sendbird/sendbird-chat-sample-ios",
"url": "https://github.com/sendbird/sendbird-chat-sample-ios/issues/162"
}
|
gharchive/issue
|
Recording - Group Channel/ iOS - Retrieve a list of banned or muted users
Click ‘Create’ to create new group channel
Under ‘Choose member’, add users & select ‘OK’
Click on ‘Channel Name’ box to input channel name
Name channel ‘Roommates’
Click ‘Create’
Click ‘Roommates’ to enter conversation
Click ‘Setting’, click ‘Muted Users’ to show list of muted users
Click ‘Setting’, click ‘Banned Users’ to show list of banned users
Nadia: Can everyone send me the money for utilities this month?
Amelia: I moved out halfway through the month so I don’t think I should pay
Emily: Nobody liked you anyway
On the Dashboard, on left hand sidebar click ‘Group channels’ & click on ‘Roommates’
In the chat window, on the right hand sidebar click ‘Members’ and click on Amelia
Click ‘Ban’ & input ‘Reason for muting’ as ‘No longer a roommate’
Click on Emily, click ‘Mute’ & input ‘Reason for muting’ as ‘Inappropriate behavior’
Click ‘<-’ back arrow to go back to settings sidebar
Click on ‘Banned’ to view banned users; the list now shows Amelia
Click ‘<-’ back arrow to go back to settings sidebar
Click on ‘Muted’ to view muted users; the list now shows Emily
@justinbaldwin
same issue as #147
@justinbaldwin
https://drive.google.com/file/d/1NCfqTXtXpAbqIBY95vCNfQ7vr2Smt6et/view?usp=sharing
@petru-gherghel start video at around the 0:10 second mark.
@justinbaldwin
https://drive.google.com/file/d/1KoZJbnjoYjLHgS3h75TlkSoWq41rGVu-/view?usp=sharing
@michellewong27 @chrischabot
Ready for review: https://drive.google.com/file/d/1KoZJbnjoYjLHgS3h75TlkSoWq41rGVu-/view?usp=sharing
@justinbaldwin reviewed, good for publishing
|
2025-04-01T06:40:22.265019
| 2020-04-05T17:54:33
|
594578015
|
{
"authors": [
"childish-sambino",
"shellscape"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10588",
"repo": "sendgrid/sendgrid-nodejs",
"url": "https://github.com/sendgrid/sendgrid-nodejs/issues/1086"
}
|
gharchive/issue
|
TypeScript Definition Error: Client lacks call or construct signature
Issue Summary
The TypeScript definitions for the project don't allow instantiating a new Client according to tsc. This works fine, however, in vanilla Node.
Steps to Reproduce
For demonstration purposes I'll be using the node and ts-node REPLs
Successful in Node:
→ node
Welcome to Node.js v12.15.0.
Type ".help" for more information.
> const client = require('@sendgrid/client')
undefined
> client
Client {
apiKey: '',
defaultHeaders: { Accept: 'application/json', 'User-agent': 'sendgrid/6.4.0;nodejs' },
defaultRequest: {
json: true,
baseUrl: 'https://api.sendgrid.com/',
url: '',
method: 'GET',
headers: {}
},
Client: [Function: Client]
}
> new client.Client()
Client {
apiKey: '',
defaultHeaders: { Accept: 'application/json', 'User-agent': 'sendgrid/6.4.0;nodejs' },
defaultRequest: {
json: true,
baseUrl: 'https://api.sendgrid.com/',
url: '',
method: 'GET',
headers: {}
}
}
But fails within TypeScript:
→ ts-node
> import client from '@sendgrid/client'
{}
> client
Client {
apiKey: '',
defaultHeaders: { Accept: 'application/json', 'User-agent': 'sendgrid/6.4.0;nodejs' },
defaultRequest: {
json: true,
baseUrl: 'https://api.sendgrid.com/',
url: '',
method: 'GET',
headers: {}
},
Client: [Function: Client]
}
> client.Client
[Function: Client]
> new client.Client
[eval].ts:4:1 - error TS2351: Cannot use 'new' with an expression whose type lacks a call or construct signature.
4 new client.Client
~~~~~~~~~~~~~~~~~
undefined
>
Note: This example requires that you have a local tsconfig that has esModuleInterop: true or are using the ts-node flag of the same name. Otherwise, you have to use import * as client from...
Code Snippet
see above
Exception/Log
see above
Technical details:
sendgrid-nodejs version: @sendgrid/client v6.4.0
node version: v12.15.0
Fixed by https://github.com/sendgrid/sendgrid-nodejs/pull/1040 released in 6.5.3
Sorry about that, I could have sworn we had the latest installed.
|
2025-04-01T06:40:22.268318
| 2020-10-05T09:39:02
|
714684215
|
{
"authors": [
"childish-sambino",
"sarahmhale"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10589",
"repo": "sendgrid/sendgrid-nodejs",
"url": "https://github.com/sendgrid/sendgrid-nodejs/issues/1205"
}
|
gharchive/issue
|
hideWarnings: is it a cure for a symptom and not a fix of the source?
Issue Summary
When sending dynamic template data such as a block of HTML code which includes ', " or &, you will get warnings unless you use hideWarnings. But it feels like hideWarnings is a cure for a symptom and not a fix of the source: instead of solving the problem raised in this PR -> https://github.com/sendgrid/sendgrid-nodejs/pull/793, hideWarnings basically hides the whole check, in this PR -> https://github.com/sendgrid/sendgrid-nodejs/pull/932
Code Snippet
if (!this.hideWarnings) {
Object.values(dynamicTemplateData).forEach(value => {
if (/['"&]/.test(value)) {
console.warn(DYNAMIC_TEMPLATE_CHAR_WARNING);
}
});
}
Exception/Log
Content with characters ', " or & may need to be escaped with three brackets
{{{ content }}}
See https://sendgrid.com/docs/for-developers/sending-email/using-handlebars/ for more information.
There isn't really a great solution without knowing what's in the template, which would require an additional network call. Given that, the warning is purely informative to users who would otherwise be unaware that triple brackets are needed in such situations. Once they have be made aware, then hiding the warning will keep the logs clean.
|
2025-04-01T06:40:22.290246
| 2021-02-07T21:19:33
|
803063040
|
{
"authors": [
"Jonri2",
"bryantgeorge"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10590",
"repo": "senior-knights/course-schedulizer",
"url": "https://github.com/senior-knights/course-schedulizer/pull/83"
}
|
gharchive/pull-request
|
Allow export of old and new formats
To Test:
npm i && npm start
Try importing the full course schedule, the math schedule, etc.
There should now be two export buttons, try them both:
EXPORT CSV should give two CSVs in the new formats described by VanderLinden in our most recent meeting (it doesn't have the extra headers, but I figure the CSV should be good enough and allow department chairs to just copy the contents over)
EXPORT FULL CSV should do what EXPORT CSV used to do
Worth noting that the non-teaching CSV given is composed of classes without Prefix and Number, unsure if this is what we want to do going forward. (Currently it is impossible to add such a "course", so eventually we will want an add non-teaching load option, but it might still be good to store this as a course/section without prefix and number).
Let me know if the wording is confusing, if we should just remove full export or not, and if we should wait to merge this until we know more about the new formats.
Additionally, this branch changes the test CSV files to be loaded from their current version rather than current develop (I should have done it that way in the first place)
Closes #80
Boom. Looks good. If you want to update the wording for the Export Button you can, or we can take a look at it later after user feedback. Thanks!
What do you guys think about "Export Draft CSV" for the full version and "Export Final CSV" for the registrar version? That way, the user would save as much data as possible when simply drafting, but would export in the correct registrar format when ready to submit.
Oops, I forgot to change the wording, I'll do that later today, thanks!
|
2025-04-01T06:40:22.294901
| 2017-03-31T08:36:07
|
218434070
|
{
"authors": [
"brainray",
"tagyro"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10591",
"repo": "sensorberg-dev/ios-sdk",
"url": "https://github.com/sensorberg-dev/ios-sdk/issues/21"
}
|
gharchive/issue
|
Cocoapod: ‘objc_geohash/GeoHash.h’ file not found
Hi there, after installing v2.5.x with CocoaPods and disabling 'use_frameworks!' in the Podfile, Xcode gives me the above error.
The reason for this is that the symlink folder for the geohash framework in the /Pods/Headers/Public/ folder is named 'objc-geohash' and not 'objc_geohash' (note the underscore ;-) as referenced in e.g. SBLocation.m.
After renaming it by hand, everything is fine.
This is a known issue; please update to v2.5.5 and use_frameworks! in the pod file.
In a future version we will phase out objc-geohash and the issue will be fixed.
Thanks for the report, @brainray
|
2025-04-01T06:40:22.328829
| 2020-10-16T13:17:53
|
723210661
|
{
"authors": [
"ghoneycutt",
"treydock"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10592",
"repo": "sensu/sensu-puppet",
"url": "https://github.com/sensu/sensu-puppet/pull/1279"
}
|
gharchive/pull-request
|
Fix for when version query returns malformed version
Pull Request Checklist
Description
Fix version comparison logic to be a bit more resilient so that if the version returned is not expected the code does not raise exceptions.
Some of the unit test changes were to address the unit tests essentially testing nothing and not being really valid tests.
Related Issue
Fixes #1278
Released in v5.2.1
|
2025-04-01T06:40:22.374848
| 2024-08-29T14:42:02
|
2494689022
|
{
"authors": [
"dfguerrerom"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10593",
"repo": "sepal-contrib/sepal_mgci",
"url": "https://github.com/sepal-contrib/sepal_mgci/issues/61"
}
|
gharchive/issue
|
Tables cannot be exported when reclassify values are missing
File ~/1_modules/sepal_mgci/component/scripts/sub_a.py:98, in get_mgci.<locals>.<lambda>(row)
     94 df = fill_parsed_df(parsed_df.copy())
     96 # Adds is_green column to the dataframe based on lc_class.
     97 df["is_green"] = df.apply(
---> 98     lambda row: LC_MAP_MATRIX.loc[LC_MAP_MATRIX.to_code == row["lc_class"]][
     99         "green"
    100     ].iloc[0],
    101     axis=1,
    102 )
    104 # Get the green and non green total area for each belt
    105 tmp_df = df.groupby(["belt_class", "is_green"], as_index=False).sum()
File ~/module-venv/sepal_mgci/lib/python3.10/site-packages/pandas/core/indexing.py:1191, in _LocationIndexer.__getitem__(self, key)
   1189 maybe_callable = com.apply_if_callable(key, self.obj)
   1190 maybe_callable = self._check_deprecated_callable_usage(key, maybe_callable)
-> 1191 return self._getitem_axis(maybe_callable, axis=axis)
File ~/module-venv/sepal_mgci/lib/python3.10/site-packages/pandas/core/indexing.py:1752, in _iLocIndexer._getitem_axis(self, key, axis)
   1749     raise TypeError("Cannot index by location index with a non-integer key")
   1751 # validate the location
-> 1752 self._validate_integer(key, axis)
   1754 return self.obj._ixs(key, axis=axis)
File ~/module-venv/sepal_mgci/lib/python3.10/site-packages/pandas/core/indexing.py:1685, in _iLocIndexer._validate_integer(self, key, axis)
   1683 len_axis = len(self.obj._get_axis(axis))
   1684 if key >= len_axis or key < -len_axis:
-> 1685     raise IndexError("single positional indexer is out-of-bounds")
IndexError: single positional indexer is out-of-bounds
It happens because the non-existing classes from the reclassification table are remapped to 0, and 0 is not self-masked in the resulting image, causing an extra class that was not in LC_MAP_MATRIX.
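The crash comes from calling `.iloc[0]` on an empty selection when a remapped code (here 0) is absent from LC_MAP_MATRIX. A minimal sketch of a defensive lookup, assuming LC_MAP_MATRIX is a DataFrame with `to_code` and `green` columns (the `is_green` helper and `default` parameter are illustrative, not the module's API):

```python
import pandas as pd

# Hypothetical stand-in for the module's land-cover mapping matrix.
LC_MAP_MATRIX = pd.DataFrame({"to_code": [1, 2, 3], "green": [True, False, True]})

def is_green(lc_class, default=False):
    """Return the 'green' flag for lc_class, falling back to `default`
    for codes (such as the remapped 0) absent from the matrix."""
    match = LC_MAP_MATRIX.loc[LC_MAP_MATRIX.to_code == lc_class, "green"]
    return bool(match.iloc[0]) if not match.empty else default

# 0 simulates a class produced by the remap but missing from the matrix:
df = pd.DataFrame({"lc_class": [1, 0, 3]})
df["is_green"] = df["lc_class"].apply(is_green)
```

Checking `match.empty` before indexing avoids the `IndexError`; alternatively, the class-0 pixels could be masked out before the lookup so the extra class never reaches this code path.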
|
2025-04-01T06:40:22.377559
| 2017-07-17T08:42:09
|
243328174
|
{
"authors": [
"PhilLab",
"TuNguyen90Vn"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10594",
"repo": "sephiroth74/android-target-tooltip",
"url": "https://github.com/sephiroth74/android-target-tooltip/issues/80"
}
|
gharchive/issue
|
Tooltip with dialog.
Hi team. I am an Android newbie. My project uses a popup and a dialog; the dialog has a button that should open the popup. I have tried to do this, but I can't. I learned about android-target-tooltip from Stack Overflow and tested it in my project, but it is not working.
How can I do this with your library?
https://stackoverflow.com/questions/45138323/how-to-show-popup-window-over-dialog-android
Many thanks!
P.S.: Sorry, my English is not good!
Duplicate of https://github.com/sephiroth74/android-target-tooltip/issues/70 and https://github.com/sephiroth74/android-target-tooltip/issues/26
|