Dataset columns:
id: string (length 4 to 10)
text: string (length 4 to 2.14M)
source: string (2 classes)
created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict
Add an async iterator for table entries We have an iterator for table pages now: https://github.com/photondb/photondb/blob/f35f533a8a928439f79ee336dfd9f04ccfc0633c/src/raw/table.rs#L139 It would be nice to also have an iterator over individual table entries. For example, we could add an iter() method to the Guard, which returns an iterator that implements AsyncIterator and yields key-value entries. @ming535 do you want to give it a try?
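The shape of such an entry iterator can be sketched outside Rust; below is a minimal Python analogue (the Guard class and its pages field here are hypothetical illustrations, not photondb's API) of an async iterator that flattens pages into individual key-value entries:

```python
import asyncio

class Guard:
    """Toy stand-in for a table read guard holding a snapshot of pages."""
    def __init__(self, pages):
        self._pages = pages  # each page is a list of (key, value) tuples

    async def iter(self):
        # Async generator: walk the pages and yield individual entries.
        for page in self._pages:
            for key, value in page:
                yield key, value
            await asyncio.sleep(0)  # yield control between pages

async def main():
    guard = Guard([[(b"a", b"1"), (b"b", b"2")], [(b"c", b"3")]])
    return [kv async for kv in guard.iter()]

entries = asyncio.run(main())
```

A real implementation would pull pages lazily from storage instead of holding them in memory, but the consumer-facing shape (an async stream of key-value pairs) would be the same.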
gharchive/issue
2022-11-07T04:54:35
2025-04-01T06:45:22.829576
{ "authors": [ "huachaohuang" ], "repo": "photondb/photondb", "url": "https://github.com/photondb/photondb/issues/211", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
699396465
[3.50] Support mix feature in both global and private animations In 3.50-beta4, mix is set in the animation manager if two global animations exist. It can't support mix for a private animation created by sprite.anims.create(). It would be nice if the mix feature supported both global and private animations. A possible solution might be to put the corresponding mix in each animation object, to store the transition delay from the previous animation. Yeah I did consider this. I was thinking about making mix based on a frame name/number, instead of delay, too.
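The proposed per-animation storage could look roughly like this (a hypothetical sketch of the idea, not Phaser's actual API): each animation object keeps its own mix delays, keyed by the previous animation.

```python
class Animation:
    """Hypothetical animation object that stores its own mix delays."""
    def __init__(self, key):
        self.key = key
        self.mixes = {}  # previous-animation key -> transition delay (ms)

    def add_mix(self, prev_key, delay_ms):
        self.mixes[prev_key] = delay_ms

    def get_mix_delay(self, prev_key):
        # Fall back to 0 when no transition delay was registered.
        return self.mixes.get(prev_key, 0)

run = Animation("run")
run.add_mix("walk", 250)  # wait 250 ms when transitioning walk -> run
delay = run.get_mix_delay("walk")
```

Because the mapping lives on the animation object itself, it works identically whether the animation is registered globally or created privately on a sprite.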
gharchive/issue
2020-09-11T14:37:29
2025-04-01T06:45:22.831673
{ "authors": [ "photonstorm", "rexrainbow" ], "repo": "photonstorm/phaser", "url": "https://github.com/photonstorm/phaser/issues/5296", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
160816382
Integrate AdminLTEBundle add advanced page.description and image.description editor: https://github.com/bootstrap-wysiwyg/bootstrap3-wysiwyg/ (this one is embedded in AdminLTEBundle) Don't install it on its own! :) Are you going to do this one?
gharchive/issue
2016-06-17T05:46:02
2025-04-01T06:45:22.870819
{ "authors": [ "amaurybrisou", "jbki" ], "repo": "php-amaurybrisou/carthage", "url": "https://github.com/php-amaurybrisou/carthage/issues/9", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
275069149
Symfony 4 support #SymfonyConHackday2017 styleci is fixed in master
gharchive/pull-request
2017-11-18T10:13:15
2025-04-01T06:45:22.879051
{ "authors": [ "Nyholm", "dbu" ], "repo": "php-http/cache-plugin", "url": "https://github.com/php-http/cache-plugin/pull/45", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1251606922
🛑 phpMyFAQ Password Hash Generator Tool is down In b80e8e1, phpMyFAQ Password Hash Generator Tool (https://password.phpmyfaq.de) was down: HTTP code: 0 Response time: 0 ms Resolved: phpMyFAQ Password Hash Generator Tool is back up in 00ebe1a.
gharchive/issue
2022-05-28T11:42:46
2025-04-01T06:45:22.948633
{ "authors": [ "thorsten" ], "repo": "phpMyFAQ/status.phpmyfaq.de", "url": "https://github.com/phpMyFAQ/status.phpmyfaq.de/issues/3226", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1725219906
adds fix to consider a single-letter class valid fixes #385 We lose the check that a class should not start with a number, but I don't think it's critical... I have no more cs-fix errors on my laptop with PHP 8.2, but I see there are errors on the CI... Maybe older PHP versions use an older version of php-cs-fixer that has a different behavior? Codecov Report Merging #386 (80b36c0) into main (17b1c43) will not change coverage. The diff coverage is 100.00%.
@@ Coverage Diff @@
##               main     #386   +/-   ##
=========================================
  Coverage     94.33%   94.33%
  Complexity      571      571
=========================================
  Files            67       67
  Lines          1500     1500
=========================================
  Hits           1415     1415
  Misses           85       85
Impacted Files Coverage Δ src/Analyzer/FullyQualifiedClassName.php 96.87% <100.00%> (ø)
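The trade-off described above can be illustrated with two regular expressions (these patterns are hypothetical, for illustration only, not arkitect's actual rule): loosening a class-name pattern so it accepts single letters can also stop rejecting names that start with a digit, unless the first character is constrained separately.

```python
import re

# Hypothetical before/after patterns, not arkitect's actual rule.
old_rule = re.compile(r"^[A-Za-z_][A-Za-z0-9_]+$")  # two chars minimum: rejects "A"
new_rule = re.compile(r"^\w+$")                      # accepts "A", but also "9Foo"

accepts_single = bool(new_rule.match("A"))
old_rejects_digit = not old_rule.match("9Foo")
new_accepts_digit = bool(new_rule.match("9Foo"))  # the check that was lost
```

Keeping the first-character class while making the tail optional (for example `[A-Za-z_][A-Za-z0-9_]*`) would accept single letters without losing the leading-digit check.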
gharchive/pull-request
2023-05-25T07:06:45
2025-04-01T06:45:22.963657
{ "authors": [ "codecov-commenter", "fain182" ], "repo": "phparkitect/arkitect", "url": "https://github.com/phparkitect/arkitect/pull/386", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1565201612
fix: change csrf token key The change was based on RFC 6648. See https://datatracker.ietf.org/doc/html/rfc6648. Pull Request Test Coverage Report for Build 4059632933 0 of 0 changed or added relevant lines in 0 files are covered. No unchanged relevant lines lost coverage. Overall coverage remained the same at 100.0% Totals Change from base Build 4050814930: 0.0% Covered Lines: 226 Relevant Lines: 226 💛 - Coveralls
gharchive/pull-request
2023-02-01T00:27:38
2025-04-01T06:45:23.055455
{ "authors": [ "coveralls", "ericfortmeyer" ], "repo": "phpolar/csrf-protection", "url": "https://github.com/phpolar/csrf-protection/pull/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
270718724
Version 0.10.0 Also, please raise the minor version to 0.10.0 in the next release as this is breaking compatibility with child images. We should also document somewhere that lines like: touch /etc/service/syslog-ng/down rm -rf /etc/service/syslog-ng will have no effect and need to be changed. Also touch /etc/service/syslog-forwarder/down will break the build since that folder no longer exists. There was no tag named "latest" or "0.9.x" or "0.9". So any child image would require a manual update, which would minimize the risk of breaking automatic builds? Perhaps the main readme should contain a section named "Upgrades"? And a sub-section named "0.9.x to 0.10.x" specifying those issues and possible workarounds? That is a great suggestion @davidhiendl The reason I have not performed a release is precisely that: I am trying to figure out the best way to handle this as there is no precedent. Your feedback is greatly appreciated. There is the tag latest; right now it's the same as 0.9.22. Latest is used by a large number of child images, for example: https://github.com/spujadas/elk-docker/blob/master/Dockerfile I think it shouldn't break anything as long as they are not customising the forwarder and/or logger services. You are right, I confused that with there not being any tag like "master" that is built from the github master branch. Sorry about that. I think that the image should have a "0.9" tag that always points to the latest "0.9.x" version (which should never introduce breaking changes) and one "0.10" that points to the latest "0.10.x". Similar to how other images use semantic versioning to avoid introducing breaking changes. Better yet, adopt it completely and indicate it on the readme? Anyone who uses latest should expect the build to break at some point anyhow... At least I do whenever I build images that rely on other images.
@davidhiendl As for a master Docker tag, we are working on adding that and always keeping latest as the last released version: https://github.com/phusion/baseimage-docker/issues/386 @davidhiendl That information is represented in the README: https://github.com/phusion/baseimage-docker/blame/master/README.md#L142
gharchive/issue
2017-11-02T16:25:32
2025-04-01T06:45:23.124028
{ "authors": [ "Theaxiom", "davidhiendl", "hyperknot" ], "repo": "phusion/baseimage-docker", "url": "https://github.com/phusion/baseimage-docker/issues/448", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2341535876
Hot reload doesn't seem to work What platform are you using? weapp What version are you using? Latest Describe the bug After npm run dev:weapp, the console recompiles when I save code, but the devtools must be refreshed manually before the changes show up Console Logs No response Participation [ ] I am willing to submit a pull request for this issue. Can you share your repo template? My template https://github.com/phy-lei/taro-solid-cli works for me, with the latest WeChat devtools. Also, this repo will soon stop being maintained: the plugin has already been merged into Taro 4, so once Taro 4 is released it won't be maintained here anymore. You mean Taro 4 will let me choose Solid directly? Then I won't pursue this issue; I'll wait for Taro 4. Great news! Yes, you can develop with this package for now and upgrade directly once Taro 4 is released; no business-code changes are needed, it's a painless migration. As for your hot-reload problem, it isn't caused by this plugin; I think it's a devtools setting, so check whether you have the relevant option ticked. Is there a release schedule for Taro 4? It was originally planned for mid-June, so it should be very soon. It's already mid-July and there still seems to be no news. It was released today 🎉 No it wasn't; the docs haven't been updated yet. The docs lag behind; check the npm package link, 4.0 is already out. Hmm, how do I use it then? Create an app with the 3.x CLI and then change package.json to upgrade to 4.0? Do any other files need changes? You can change your project's package.json to 4.0.2 and upgrade, or use @tarojs/cli 4.0.2; that CLI has a default Solid template when creating a project. Remember to use 4.0.2. Hi. I'm about to start real production development with the taro-solid setup. I'm on 4.0.4 now, and the solid-js APIs I tried work and compile fine, but the project template seems to have a small problem: vscode complains about missing React type definitions. How do I fix this? Cannot find module 'react/jsx-runtime' or its corresponding type declarations.ts(2307) I'll open a PR today to fix that; it looks like the type was missed.
Thanks, but I already found a fix; changing this part of tsconfig is enough:
// "jsx": "react-jsx",
"jsx": "preserve",
"jsxImportSource": "solid-js",
I also found that Solid's Suspense component can't be used. The error is:
TypeError: Cannot read property 'push' of null
at resumeEffects (._node_modules_.pnpm_solid-js@1.8.21_node_modules_solid-js_dist_solid.cjs:542)
at Object.fn (._node_modules_.pnpm_solid-js@1.8.21_node_modules_solid-js_dist_solid.cjs:1663)
at runComputation (._node_modules_.pnpm_solid-js@1.8.21_node_modules_solid-js_dist_solid.cjs:682)
at updateComputation (._node_modules_.pnpm_solid-js@1.8.21_node_modules_solid-js_dist_solid.cjs:664)
at createMemo (._node_modules_.pnpm_solid-js@1.8.21_node_modules_solid-js_dist_solid.cjs:248)
at Object.fn (._node_modules_.pnpm_solid-js@1.8.21_node_modules_solid-js_dist_solid.cjs:1653)
at runComputation (._node_modules_.pnpm_solid-js@1.8.21_node_modules_solid-js_dist_solid.cjs:682)
at updateComputation (._node_modules_.pnpm_solid-js@1.8.21_node_modules_solid-js_dist_solid.cjs:664)
at createMemo (._node_modules_.pnpm_solid-js@1.8.21_node_modules_solid-js_dist_solid.cjs:248)
at Object.get children [as children] (._node_modules_.pnpm_solid-js@1.8.21_node_modules_solid-js_dist_solid.cjs:1644)
(env: Windows, mp, 1.06.2407110; lib: 3.5.4)
Suspense doesn't work in mini programs; the same is true for React, because mini programs don't support queueMicrotask. The type error above will still show up, though, because the taro-components type definitions are missing; I've submitted a PR: https://github.com/NervJS/taro/pull/16358
gharchive/issue
2024-06-08T07:48:40
2025-04-01T06:45:23.166330
{ "authors": [ "mztlive", "phy-lei" ], "repo": "phy-lei/tarojs-plugin-solid", "url": "https://github.com/phy-lei/tarojs-plugin-solid/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1994355515
Added the blog functionality to this amazing theme https://github.com/zepvalue/zepvalue.github.io You can check this repo with the blog feature added. I still have to add the footer to give you credit, but I really enjoy this theme. As you can see, there is a bit of work around the github-pages gem and the extra gems needed to make it work properly with sass; it could maybe even be managed a bit better. Thanks a lot, colleague :) I'm very happy to hear you like the theme. Thank you!
gharchive/issue
2023-11-15T09:10:29
2025-04-01T06:45:23.197020
{ "authors": [ "piazzai", "zepvalue" ], "repo": "piazzai/hacked-jekyll", "url": "https://github.com/piazzai/hacked-jekyll/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
859582770
iOS cannot display the default language normally [x] I have read the Get Started - Installation section [x] I have read and done the Get Started - Setup Android section [x] I have read and done the Get Started - Setup iOS section [x] I have already searched for the same problem Environment Technology Version Flutter version 2.0.4 flutter_inappwebview 5.3.2 Android version 11 iOS version 14.4.1 Xcode version 12.4 Device information: Description I built an app with flutter_inappwebview. When I tap to upload an image, the menu is displayed in English. My iPhone is set to Chinese. How can I change the default language for the webview? Thanks. Later I found the cause: Chinese had not been added as a localization in Xcode.
gharchive/issue
2021-04-16T08:14:27
2025-04-01T06:45:23.241524
{ "authors": [ "hemlys" ], "repo": "pichillilorenzo/flutter_inappwebview", "url": "https://github.com/pichillilorenzo/flutter_inappwebview/issues/806", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
666138736
Automatically set the "automatic bug-fix compatibility" option When a plugin is uploaded, we can additionally decide whether "automatic bug-fix compatibility" should be enabled. This could work as follows: fetch the current latest Shopware version and increase it by one bugfix version. Then check whether the plugin's current version compatibility supports that version. If yes, the option is enabled; if no, it is disabled. The advantage is that we can maintain "automatic bug-fix compatibility" in composer.json or plugin.json, and no inconsistencies arise here. We won't do that.
gharchive/issue
2020-07-27T09:35:42
2025-04-01T06:45:23.247081
{ "authors": [ "windaishi" ], "repo": "pickware/scs-commander", "url": "https://github.com/pickware/scs-commander/issues/33", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
276175535
Close button in modals doesn't work This is my code for a modal, but when I click on the close button, it doesn't work:
<section class="navbar-section">
  <a href="#login" class="btn btn-link">ورود</a>
  <div class="modal modal-sm" id="login">
    <div class="modal-overlay"></div>
    <div class="modal-container">
      <div class="modal-header">
        <button class="btn btn-clear float-right"></button>
        <div class="modal-title h5">ورود به سیستم</div>
      </div>
      <div class="modal-body">
        <div class="content">
          توضیحات
        </div>
      </div>
      <div class="modal-footer">
      </div>
    </div>
  </div>
  <a href="{{ url('register') }}" class="btn btn-link">ثبت نام</a>
</section>
@AGhasemzadeh I will update the Docs. For now, you can view the Docs source code. @AGhasemzadeh The Docs are updated.
gharchive/issue
2017-11-22T19:04:47
2025-04-01T06:45:23.255037
{ "authors": [ "AGhasemzadeh", "picturepan2" ], "repo": "picturepan2/spectre", "url": "https://github.com/picturepan2/spectre/issues/344", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1899429023
Feature: Update Gradle Plugin AGP 8.1 Builds and runs successfully I'm starting to port it for newer Android versions Thank you! I see the gradle update is relatively short, but the effort to actually make it is still highly appreciated. I'm more concerned about the sdk update, keep me updated! Note: I'm still not sure how to manage the new updates. Most probably I'll create a branch/tag to keep the current original state, and continue on master with the proper readme update. Please keep the PR open until then! Thank you. Yes, updating Gradle is a small commit, but it is necessary for further work in modern Android Studio. At work I'm used to using git flow; I think it will be most convenient. On my personal checklist: Removing the "wear" module. Bumping the target min sdk to 28 (cutting off old devices). Target SDK 33 Scoped Storage Support Other compatibility bug fixes @TrianguloY please create a separate "develop" branch Since I'm not the owner of this repo, and I would like it to be preserved as much as possible, I've thought of another alternative: keep this repo unmodified (maybe even archived?) and move all development to my own fork. There I can even add collaborators if necessary. This can even be done as a test, and if successful this repo can then be updated with a link or synced. On one hand I don't like using my own repo, but on the other I don't want to touch this one without explicit approval from Pierre. Thoughts? @TrianguloY Ok. I agree with your conclusions. Then it is better to create a develop branch in your repository, and continue development within the standard git flow. @TrianguloY So, I will have to delete my fork, and fork your repository.
No, you should be able to create a PR from your fork into mine. I can do it myself if you prefer. The development and 'cleanup' was moved to my fork: https://github.com/TrianguloY/LightningLauncher/tree/developer Scoped Storage Support Please don't forget about those who use root and LL to script and work with files in different repositories; see my former plugin on ToDo notes for the directory structure, written directly in LL. I miss it a lot now. :(
gharchive/pull-request
2023-09-16T12:57:09
2025-04-01T06:45:23.262410
{ "authors": [ "JardaG", "SnowVolf", "TrianguloY" ], "repo": "pierrehebert/LightningLauncher", "url": "https://github.com/pierrehebert/LightningLauncher/pull/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1362477538
Video crashes on start Crashes with error message: Movie file finishing error: Optional(Error Domain=AVFoundationErrorDomain Code=-11859 "Movie recording cannot be started" UserInfo={NSUnderlyingError=0x281c02df0 {Error Domain=NSOSStatusErrorDomain Code=-16419 "(null)"}, NSLocalizedFailureReason=A movie recording is already in progress., NSLocalizedRecoverySuggestion=Stop the movie recording in progress and try again., AVErrorRecordingFailureDomainKey=1, NSLocalizedDescription=Movie recording cannot be started}) Crashes in this part of CameraViewController:
if error != nil {
    print("Movie file finishing error: \(String(describing: error))")
    success = (((error! as NSError).userInfo[AVErrorRecordingSuccessfullyFinishedKey] as AnyObject).boolValue)!
}
Any idea as to why this is happening? Same issue
gharchive/issue
2022-09-05T21:22:15
2025-04-01T06:45:23.264912
{ "authors": [ "alexmanning14", "leobuckman" ], "repo": "pierreveron/SwiftUICam", "url": "https://github.com/pierreveron/SwiftUICam/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
58628424
Cannot install package via NPM I always get this error when trying to install the package via NPM: npm ERR! peerinvalid The package reactify does not satisfy its siblings' peerDependencies requirements! npm ERR! peerinvalid Peer react-googlemaps@0.4.0 wants reactify@>=0.17.1 I fixed this by updating the reactify version in my package.json
gharchive/issue
2015-02-23T18:38:02
2025-04-01T06:45:23.265991
{ "authors": [ "Davidrums", "luandro" ], "repo": "pieterv/react-googlemaps", "url": "https://github.com/pieterv/react-googlemaps/issues/22", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2057779680
🛑 Newsletter URL is down In 7e003d2, Newsletter URL (https://news.pik.farm/sign-up) was down: HTTP code: 0 Response time: 0 ms Resolved: Newsletter URL is back up in a20969d after 11 minutes.
gharchive/issue
2023-12-27T21:16:47
2025-04-01T06:45:23.271139
{ "authors": [ "cybertheory" ], "repo": "pikfarm/PikfarmStatus", "url": "https://github.com/pikfarm/PikfarmStatus/issues/1659", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2061847799
🛑 Blog URL is down In 88d609b, Blog URL (http://blog.pik.farm) was down: HTTP code: 0 Response time: 0 ms Resolved: Blog URL is back up in 277c919 after 1 hour, 10 minutes.
gharchive/issue
2024-01-02T00:30:31
2025-04-01T06:45:23.273582
{ "authors": [ "cybertheory" ], "repo": "pikfarm/PikfarmStatus", "url": "https://github.com/pikfarm/PikfarmStatus/issues/2475", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2063265385
🛑 Stripe Webhook URL (Test and Live) is down In fe74678, Stripe Webhook URL (Test and Live) ($STRIPEWEBHOOK) was down: HTTP code: 0 Response time: 0 ms Resolved: Stripe Webhook URL (Test and Live) is back up in a579dfe after 23 minutes.
gharchive/issue
2024-01-03T05:24:48
2025-04-01T06:45:23.275583
{ "authors": [ "cybertheory" ], "repo": "pikfarm/PikfarmStatus", "url": "https://github.com/pikfarm/PikfarmStatus/issues/2655", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1729616253
🛑 Blog URL is down In 40f7398, Blog URL (http://blog.pik.farm) was down: HTTP code: 429 Response time: 222 ms Resolved: Blog URL is back up in a874a0f.
gharchive/issue
2023-05-28T18:51:39
2025-04-01T06:45:23.278011
{ "authors": [ "cybertheory" ], "repo": "pikfarm/PikfarmStatus", "url": "https://github.com/pikfarm/PikfarmStatus/issues/289", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1508609761
[Feature Request]: Add OAuth provider Reddit Package @lucia-auth/sveltekit Describe the problem Reddit is another OAuth provider that can be integrated like Google, Github, Twitch. Thank you. Describe the proposed solution Someone code the addition for Reddit Alternatives considered No response Additional information No response Wikipedia has a list of OAuth providers. https://en.wikipedia.org/wiki/List_of_OAuth_providers I suggest Stack Exchange, Apple, Amazon also be added I needed a Reddit provider myself. Created a PR for it #266. Creating an OAuth provider really was quite straightforward. Could some kind of generic callback function be implemented to allow new OAuth implementations without Lucia code updates? From my first brief look at OAuth, it seems every implementation has minor breaking changes. There may be a pattern. If you need another OAuth provider and are using sveltekit, you might want to take a look at https://authjs.dev/reference/sveltekit/modules/main Resolved with #266
gharchive/issue
2022-12-22T22:09:46
2025-04-01T06:45:23.287838
{ "authors": [ "WakeReality", "pilcrowOnPaper", "v-rogg" ], "repo": "pilcrowOnPaper/lucia-auth", "url": "https://github.com/pilcrowOnPaper/lucia-auth/issues/263", "license": "0BSD", "license_type": "permissive", "license_source": "github-api" }
1386001976
Server module - Response isn't always created, leading to an error (NoneType) There is a bug where the Server Module checks all the scenarios, and if response is still set to None, the line writing out what response.status is will throw an error (line 277 in server.py):
# write status line
status_message = status_message_map.get(response.status, "Unknown")
Either check whether response is still None (if response is not None) or set it to a default value. Another line further down (282) will also fail:
for key, value in response.headers.items():
I've been trying to track down an odd bug that tends to cause my pico's wifi interface to occasionally hang/crash until a full machine.reset(), a few times a day. Once I add that if response is not None to the lines you suggest, the error that usually led to the interface crash doesn't seem to do that anymore. (Or at least, I think it's related to what's causing it: [error / 117kB] need more than 0 values to unpack) So far so good though; I saw the error once, but the device hasn't crashed again yet, so hopefully this was a good enough fix! Here's more specifically what I modified, basically just indenting from line 276 to 285, if anyone else is having this issue:
if response is not None:
    # write status line
    status_message = status_message_map.get(response.status, "Unknown")
    writer.write(f"HTTP/1.1 {response.status} {status_message}\r\n".encode("ascii"))

    # write headers
    for key, value in response.headers.items():
        writer.write(f"{key}: {value}\r\n".encode("ascii"))

    # blank line to denote end of headers
    writer.write("\r\n".encode("ascii"))
gharchive/issue
2022-09-26T12:32:39
2025-04-01T06:45:23.321745
{ "authors": [ "Nikorasu", "kevinmcaleer" ], "repo": "pimoroni/phew", "url": "https://github.com/pimoroni/phew/issues/12", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1629645645
Merge spreadsheet rows updates from queue Problem When targeting a large block range, each update from the Substream will be added to the queue and processed at a rate of 1 req/s (so one block per second). Sample output info: Working on item #145 / Queue size: 4638 {"service":"substreams-sink-sheets"} . . (1 second) . info: Working on item #146 / Queue size: 4637 {"service":"substreams-sink-sheets"} info: Working on item #147 / Queue size: 4636 {"service":"substreams-sink-sheets"} info: Working on item #148 / Queue size: 4635 {"service":"substreams-sink-sheets"} info: Working on item #149 / Queue size: 4634 {"service":"substreams-sink-sheets"} info: Working on item #150 / Queue size: 4633 {"service":"substreams-sink-sheets"} info: Working on item #151 / Queue size: 4632 {"service":"substreams-sink-sheets"} ... Possible solution(s) This could be improved by using the queue for the rows updates instead, merging them and sending it as one request on each timeout. This would also greatly improve the "resistance" to the Google API rate limit by sending fewer but bulkier requests. Implemented in 9e1bf381f778b25fe80b2414bed7fc3f13fa9fa1.
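A minimal sketch of the proposed merging, assuming each queue item is a list of spreadsheet rows and that a single bulk request can carry all of them (the drain_rows helper and the queue layout are illustrative, not the sink's actual code):

```python
import queue

def drain_rows(q):
    """Collect every pending row update from the queue into one batch."""
    rows = []
    while True:
        try:
            rows.extend(q.get_nowait())
        except queue.Empty:
            break
    return rows

# On each timeout tick the sink would call drain_rows(q) once and issue a
# single bulk request with the result, instead of one request per update.
q = queue.Queue()
q.put([["block1", "tx1"]])
q.put([["block2", "tx2"], ["block3", "tx3"]])
batch = drain_rows(q)
```

With this shape, a backlog of thousands of queued block updates collapses into one request per tick, which also keeps the sink comfortably under the Google API rate limit.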
gharchive/issue
2023-03-17T17:01:12
2025-04-01T06:45:23.325098
{ "authors": [ "Krow10" ], "repo": "pinax-network/substreams-sink-sheets", "url": "https://github.com/pinax-network/substreams-sink-sheets/issues/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
193036538
Methods/manager for Subscriptions I'm using pinax-stripe in our own app (with our own DRF views), and wanted to display the expiry date for a subscription within the app. Added a few things to make this considerably more logical: Added related_name 'subscriptions' to the Customer ForeignKey; allows easier reverse relation queries. Adds a SubscriptionManager and a current_subscriptions() method which can be used with the above, to allow something like the following:
def get_expiry_for_user(user):
    customer = customers.get_customer_for_user(user)
    if hasattr(customer, "subscriptions"):
        current_subscription = customer.subscriptions.current_subscriptions().last()
        if current_subscription:
            return current_subscription.current_period_end

get_expiry_for_user(self.request.user)
The code currently appears to allow multiple subscriptions for the same plan + user + valid period. Should this be restricted? (I'm happy to provide more PRs.) Closing this; the failing tests demonstrate this is already handled in other ways. Will you guys take PRs on documentation? It's very sporadic. For sure. We'd love more help in docs. Sent from my iPad
gharchive/pull-request
2016-12-02T05:19:19
2025-04-01T06:45:23.328665
{ "authors": [ "paltman", "xfxf" ], "repo": "pinax/pinax-stripe", "url": "https://github.com/pinax/pinax-stripe/pull/298", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
487906963
parser: fix *TEXT BYTE | ASCII | UNICODE syntax What problem does this PR solve? Fix a compatibility problem with the *TEXT BYTE | ASCII | UNICODE syntax Issue: pingcap/parser#529 MySQL Syntax:
type:
  ...
  | TINYTEXT_SYN opt_charset_with_opt_binary
    { $$= NEW_PTN PT_blob_type(Blob_type::TINY, $2.charset, $2.force_binary); }
  | TEXT_SYM opt_field_length opt_charset_with_opt_binary
    { $$= NEW_PTN PT_char_type(Char_type::TEXT, $2, $3.charset, $3.force_binary); }
  | MEDIUMTEXT_SYM opt_charset_with_opt_binary
    { $$= NEW_PTN PT_blob_type(Blob_type::MEDIUM, $2.charset, $2.force_binary); }
  | LONGTEXT_SYM opt_charset_with_opt_binary
    { $$= NEW_PTN PT_blob_type(Blob_type::LONG, $2.charset, $2.force_binary); }
  ...
  | LONG_SYM opt_charset_with_opt_binary
    { $$= NEW_PTN PT_blob_type(Blob_type::MEDIUM, $2.charset, $2.force_binary); }
opt_charset_with_opt_binary:
  ...
  | ascii
    { $$.charset= $1; $$.force_binary= false; }
  | unicode
    { $$.charset= $1; $$.force_binary= false; }
  | BYTE_SYM
    { $$.charset= &my_charset_bin; $$.force_binary= false; }
Bad SQL Cases:
create table t (a longtext unicode)
create table t (a mediumtext unicode)
create table t (a tinytext ascii)
create table t (a text byte)
create table t (a long byte, b text unicode)
create table t (a text unicode, b mediumtext ascii, c int)
create table t (a int, b text unicode, c mediumtext ascii)
Check List Tests Unit test Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution. 0 out of 2 committers have signed the CLA. :x: IggieWang :x: hey-kong You have signed the CLA already but the status is still pending? Let us recheck it. resolve conflicts please @hey-kong Could you please add the following cases to the test?
create table t (a long ascii, b long unicode)
create table t (a long character set utf8mb4, b long charset utf8mb4, c long char set utf8mb4)
By the way, according to the documentation, LONG and LONG VARCHAR map to the MEDIUMTEXT data type. @hey-kong resolve conflicts please ok
gharchive/pull-request
2019-09-01T16:42:49
2025-04-01T06:45:23.373844
{ "authors": [ "CLAassistant", "hey-kong", "leoppro", "tangenta" ], "repo": "pingcap/parser", "url": "https://github.com/pingcap/parser/pull/536", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
624078686
parser: support check constraints field Signed-off-by: Arenatlx 314806019@qq.com What problem does this PR solve? Support check constraint info. What is changed and how it works? 1. Add constraint info in model.go. 2. Add an item to store the check constraint info in tableInfo. Check List Tests Integration test Code changes Has changed the model Related changes Need to update the documentation this PR is ready to merge PTAL @tangenta @AilinKid please create an issue for this PR and link the issue to the project kanban. ok Linked Issue: https://github.com/pingcap/tidb/issues/17402
gharchive/pull-request
2020-05-25T06:31:27
2025-04-01T06:45:23.378375
{ "authors": [ "AilinKid", "zz-jason" ], "repo": "pingcap/parser", "url": "https://github.com/pingcap/parser/pull/871", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1162203558
many keepalive watchdog fired log in tiflash error log Bug Report Please answer these questions before submitting your issue. Thanks! 1. Minimal reproduce step (Required) set up a cluster with tls enable, then run mpp queries 2. What did you expect to see? (Required) the error log should be clean 3. What did you see instead (Required) many keepalive watchdog fired log in tiflash error log [2022/03/08 12:04:16.189 +08:00] [ERROR] [<unknown>] ["grpc:/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tics/contrib/grpc/src/core/ext/transport/chttp2/transport/chttp2_transport.cc, line number: 2882, log msg : ipv4:172.16.5.85:9626: Keepalive watchdog fired. Closing transport."] [thread_id=221] [2022/03/08 12:04:18.544 +08:00] [ERROR] [<unknown>] ["grpc:/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tics/contrib/grpc/src/core/ext/transport/chttp2/transport/chttp2_transport.cc, line number: 2882, log msg : ipv4:172.16.5.81:9626: Keepalive watchdog fired. Closing transport."] [thread_id=115] It seems the log is harmless since the query is not affected, but we still need to find the root cause, and avoid this error. 4. What is your TiFlash version? (Required) master @ a6058011278c83991dc9eeccda34d Unfortunately, after set GRPC_ARG_KEEPALIVE_TIMEOUT_MS to 8000 ms, TiFlash still meet this error randomly if it is running with high load. The root cause is in GRPC v1.26.0: each epoll will only pull at most 100 events If an epoll has polled more than 16 events, in pollable_process_events, only the first 16 events are handle, the rest will be handled in the next time when pollset_work is called. The backup poller is triggered every 5 second, so if an event is not handle during this trigger, it has to wait at least 5 seconds before got handled And since the keep_alive_timeout is 8 seconds, so if the ping ack event is not handled at the first time it is polled, there will be a high probability of timeout. 
Since gRPC 1.31.0, there is no limit on the number of events handled in pollable_process_events, but an epoll will still poll at most 100 events, so if we upgrade gRPC to 1.31.0 or newer we can avoid this keepalive timeout in most scenarios; still, there is a chance that a ping ack event is not polled in time and causes the keepalive timeout error.
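The arithmetic in the explanation above can be illustrated with a tiny stand-alone model. The constants and the helper below are hypothetical simplifications for illustration only, not gRPC's actual code:

```python
# Simplified model of the gRPC v1.26.0 behavior described above:
# pollable_process_events handles at most 16 polled events per pass, and
# leftover events wait for the backup poller, which fires every 5 seconds.
HANDLED_PER_PASS = 16
BACKUP_POLLER_PERIOD_S = 5
KEEPALIVE_TIMEOUT_S = 8

def ping_ack_wait_seconds(position_in_queue: int) -> int:
    """Minimum seconds until the event at `position_in_queue` (0-based)
    is handled, assuming one 16-event pass per backup-poller tick."""
    passes_missed = position_in_queue // HANDLED_PER_PASS
    return passes_missed * BACKUP_POLLER_PERIOD_S

# A ping ack among the first 16 polled events is handled right away:
assert ping_ack_wait_seconds(3) == 0
# One pass behind, it already waits at least 5 seconds...
assert ping_ack_wait_seconds(20) == 5
# ...and two passes behind (10 s) is guaranteed to blow the 8 s timeout:
assert ping_ack_wait_seconds(40) > KEEPALIVE_TIMEOUT_S
```

With an 8-second keepalive timeout, even a single missed pass leaves only about 3 seconds of slack, which matches the "high probability of timeout" observed above.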
gharchive/issue
2022-03-08T04:24:04
2025-04-01T06:45:23.525524
{ "authors": [ "windtalker" ], "repo": "pingcap/tiflash", "url": "https://github.com/pingcap/tiflash/issues/4192", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1341176115
[DNM]*: remove all dictionaries. What problem does this PR solve? Issue Number: close #xxx Problem Summary: What is changed and how it works? Check List Tests [ ] Unit test [ ] Integration test [ ] Manual test (add detailed scripts or steps below) [ ] No code Side effects [ ] Performance regression: Consumes more CPU [ ] Performance regression: Consumes more Memory [ ] Breaking backward compatibility Documentation [ ] Affects user behaviors [ ] Contains syntax changes [ ] Contains variable changes [ ] Contains experimental features [ ] Changes MySQL compatibility Release note None /run-all-tests /run-all-tests
gharchive/pull-request
2022-08-17T05:11:18
2025-04-01T06:45:23.530582
{ "authors": [ "ywqzzy" ], "repo": "pingcap/tiflash", "url": "https://github.com/pingcap/tiflash/pull/5639", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1158100330
owner(ticdc): fix prometheus panic

What problem does this PR solve?

Issue Number: close #4742

Check List

Tests

Unit test
Integration test
Manual test

Run TiCDC with the new scheduler enabled.

Release note

None

/check-issue-triage-complete
please check the code twice to make sure there is no more related misbehavior in the codebase, then we can merge it.
/merge
/run-verify-ci
/run-kafka-integration-test
/run-integration-test
/run-leak-tests
/merge
/run-integration-test /tikv=pr/12050
/run-kafka-integration-test /tikv=pr/12050
/run-integration-test /tikv=pr/12050
/run-kafka-integration-test /tikv=pr/12050

Codecov Report

Merging #4759 (9e46df6) into master (9607554) will decrease coverage by 0.1197%. The diff coverage is 51.8427%.

Flag   Coverage Δ
cdc    59.7173% <51.8427%> (-0.2049%) :arrow_down:
dm     52.0460% <ø> (+0.0171%) :arrow_up:

Flags with carried forward coverage won't be shown.

@@             Coverage Diff              @@
##             master     #4759      +/-   ##
================================================
- Coverage   55.6402%   55.5205%   -0.1198%
================================================
  Files           494        521        +27
  Lines         61283      64369      +3086
================================================
+ Hits          34098      35738      +1640
- Misses        23750      25111      +1361
- Partials       3435       3520        +85

/merge
/run-dm-compatibility-test
/run-kafka-integration-tests
/run-verify
/run-verify
gharchive/pull-request
2022-03-03T08:13:35
2025-04-01T06:45:23.540293
{ "authors": [ "3AceShowHand", "codecov-commenter", "liuzix", "overvenus" ], "repo": "pingcap/tiflow", "url": "https://github.com/pingcap/tiflow/pull/4759", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1163413596
dep(dm): update tidb Signed-off-by: lance6716 lance6716@gmail.com What problem does this PR solve? Issue Number: close #4812 What is changed and how it works? update TiDB to support new syntax Check List Tests Unit test Release note `None`. /run-all-tests
gharchive/pull-request
2022-03-09T02:22:36
2025-04-01T06:45:23.543602
{ "authors": [ "GMHDBJD", "lance6716" ], "repo": "pingcap/tiflow", "url": "https://github.com/pingcap/tiflow/pull/4815", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1211927889
pkg/p2p: reduce workload of TestMessageClientBasicMultiTopics

What problem does this PR solve?

Issue Number: close #5210

What is changed and how it works?

The previous workload of this test case is for 64 goroutines to use the same MessageClient to send messages, which can be too much for the CI's machine. The workload concurrency is reduced to 16 and the timeout is doubled.

Check List

Tests

Unit test

> go test . -test.run TestMessageClientBasicMultiTopics -test.count 1000
ok      github.com/pingcap/tiflow/pkg/p2p       62.062s

Release note

None

/run-leak-tests
/run-verify
/run-unit-tests
/run-check-issue-triage-complete
/merge
/run-dm-integration-test
/run-kafka-integration-test
/run-leak-test
gharchive/pull-request
2022-04-22T07:20:02
2025-04-01T06:45:23.547833
{ "authors": [ "amyangfei", "liuzix", "zhaoxinyu" ], "repo": "pingcap/tiflow", "url": "https://github.com/pingcap/tiflow/pull/5245", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
197303198
storage: move raw_get to thread pool.
@zhangjinpeng1987 @siddontang Any performance bench results?
LGTM
Any test?
Hi @ngaut @siddontang, I've compared it with the master branch (1 TiKV, prepared ~10GiB of data using the rawkv API). The results:
a) in the read-only scenario, QPS is almost the same, while the CPU% of raftstore decreases from 90% to 50%;
b) in the read-write (1:1) scenario, QPS increased about 25%-40%.
Cool. Please add a unit test for Get.
@siddontang We already have some tests covering rawget in https://github.com/pingcap/tikv/blob/master/tests/storage/test_storage.rs
LGTM
gharchive/pull-request
2016-12-23T02:23:19
2025-04-01T06:45:23.551160
{ "authors": [ "disksing", "ngaut", "siddontang", "zhangjinpeng1987" ], "repo": "pingcap/tikv", "url": "https://github.com/pingcap/tikv/pull/1434", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
208807039
raft: fix read index request handling for new leader Hi, This PR fixes issue #1632 PTAL @siddontang @BusyJay @zhangjinpeng1987 PTAL @BusyJay
gharchive/pull-request
2017-02-20T07:54:48
2025-04-01T06:45:23.552355
{ "authors": [ "hhkbp2", "siddontang" ], "repo": "pingcap/tikv", "url": "https://github.com/pingcap/tikv/pull/1634", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1199003725
[BUG] TiExpressionException: Cannot encode row key with non-long type

Describe the bug

Create a table with a BIGINT UNSIGNED primary key; when running a sample SQL query with a range predicate, it hits TiExpressionException: Cannot encode row key with non-long type.

What did you do

First, create the sample table in the MySQL client:

CREATE TABLE `test` (
  `id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
  `name` varchar(255) NOT NULL,
  `age` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin

Then, run a sample SQL query with a range predicate, which hits the exception:

./bin/spark-sql --jars ~/tispark-assembly-2.4.3-scala_2.11.jar
SELECT * FROM test.test WHERE id >= 0 LIMIT 10;

What happens instead

full stack: com.pingcap.tikv.exception.TiExpressionException: Cannot encode row key with non-long type at com.pingcap.tikv.key.RowKey.toRowKey(RowKey.java:64) at com.pingcap.tikv.predicates.TiKVScanAnalyzer.buildTableScanKeyRangePerId(TiKVScanAnalyzer.java:321) at com.pingcap.tikv.predicates.TiKVScanAnalyzer.lambda$buildTableScanKeyRangeWithIds$0(TiKVScanAnalyzer.java:349) at java.lang.Iterable.forEach(Iterable.java:75) at com.pingcap.tikv.predicates.TiKVScanAnalyzer.buildTableScanKeyRangeWithIds(TiKVScanAnalyzer.java:347) at com.pingcap.tikv.predicates.TiKVScanAnalyzer.buildTableScanKeyRange(TiKVScanAnalyzer.java:376) at com.pingcap.tikv.predicates.TiKVScanAnalyzer.buildIndexScan(TiKVScanAnalyzer.java:273) at com.pingcap.tikv.predicates.TiKVScanAnalyzer.buildTableScan(TiKVScanAnalyzer.java:228) at com.pingcap.tikv.predicates.TiKVScanAnalyzer.buildTiDAGReq(TiKVScanAnalyzer.java:162) at org.apache.spark.sql.TiStrategy.filterToDAGRequest(TiStrategy.scala:295) at org.apache.spark.sql.TiStrategy.pruneFilterProject(TiStrategy.scala:417) at org.apache.spark.sql.TiStrategy.pruneTopNFilterProject(TiStrategy.scala:320) at org.apache.spark.sql.TiStrategy.collectLimit(TiStrategy.scala:327) at
org.apache.spark.sql.TiStrategy.org$apache$spark$sql$TiStrategy$$doPlan(TiStrategy.scala:573) at org.apache.spark.sql.TiStrategy$$anonfun$apply$1.applyOrElse(TiStrategy.scala:119) at org.apache.spark.sql.TiStrategy$$anonfun$apply$1.applyOrElse(TiStrategy.scala:117) at scala.PartialFunction$Lifted.apply(PartialFunction.scala:223) at scala.PartialFunction$Lifted.apply(PartialFunction.scala:219) at org.apache.spark.sql.catalyst.trees.TreeNode.collectFirst(TreeNode.scala:175) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3$$anonfun$apply$4.apply(TreeNode.scala:176) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3$$anonfun$apply$4.apply(TreeNode.scala:176) at scala.Option.orElse(Option.scala:289) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3.apply(TreeNode.scala:176) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3.apply(TreeNode.scala:176) at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124) at scala.collection.immutable.List.foldLeft(List.scala:84) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1.apply(TreeNode.scala:176) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1.apply(TreeNode.scala:176) at scala.Option.orElse(Option.scala:289) at org.apache.spark.sql.catalyst.trees.TreeNode.collectFirst(TreeNode.scala:175) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3$$anonfun$apply$4.apply(TreeNode.scala:176) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3$$anonfun$apply$4.apply(TreeNode.scala:176) at scala.Option.orElse(Option.scala:289) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3.apply(TreeNode.scala:176) at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3.apply(TreeNode.scala:176) at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124) at scala.collection.immutable.List.foldLeft(List.scala:84) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1.apply(TreeNode.scala:176) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1.apply(TreeNode.scala:176) at scala.Option.orElse(Option.scala:289) at org.apache.spark.sql.catalyst.trees.TreeNode.collectFirst(TreeNode.scala:175) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3$$anonfun$apply$4.apply(TreeNode.scala:176) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3$$anonfun$apply$4.apply(TreeNode.scala:176) at scala.Option.orElse(Option.scala:289) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3.apply(TreeNode.scala:176) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3.apply(TreeNode.scala:176) at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124) at scala.collection.immutable.List.foldLeft(List.scala:84) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1.apply(TreeNode.scala:176) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1.apply(TreeNode.scala:176) at scala.Option.orElse(Option.scala:289) at org.apache.spark.sql.catalyst.trees.TreeNode.collectFirst(TreeNode.scala:175) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3$$anonfun$apply$4.apply(TreeNode.scala:176) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3$$anonfun$apply$4.apply(TreeNode.scala:176) at scala.Option.orElse(Option.scala:289) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3.apply(TreeNode.scala:176) at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1$$anonfun$apply$3.apply(TreeNode.scala:176) at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124) at scala.collection.immutable.List.foldLeft(List.scala:84) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1.apply(TreeNode.scala:176) at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collectFirst$1.apply(TreeNode.scala:176) at scala.Option.orElse(Option.scala:289) at org.apache.spark.sql.catalyst.trees.TreeNode.collectFirst(TreeNode.scala:175) at org.apache.spark.sql.TiStrategy.apply(TiStrategy.scala:117) at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63) at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63) at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435) at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441) at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440) at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93) at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:73) at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:69) at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:78) at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:78) at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:209) at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:209) at org.apache.spark.sql.execution.QueryExecution.stringOrError(QueryExecution.scala:101) at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:209) at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:77) at 
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:145) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75) at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:71) at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:393) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376) at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:278) at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:866) at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:184) at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:212) at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86) at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:941) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:950) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Spark and TiSpark version info
spark version : spark2.4.0
tispark : 2.4.3-scala_2.11

It's a bug caused by using unsigned long as the primary key. Due to historical reasons, unsigned long must be split into two range requests. One is [0, MaxInt64], and another is [MaxInt64+1, MaxUint64]. https://github.com/pingcap/tidb/blob/master/distsql/request_builder.go#L445-L508
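As an illustration of that fix, the split described above can be sketched as follows. This is a hypothetical Python helper mirroring the idea in TiDB's request builder, not its actual Go code:

```python
MAX_INT64 = 2**63 - 1   # 9223372036854775807
MAX_UINT64 = 2**64 - 1

def split_unsigned_range(lo, hi):
    """Split an inclusive unsigned-64 key range [lo, hi] at the signed
    boundary so each half fits in a signed (long-encodable) row key range:
    [0, MaxInt64] and [MaxInt64+1, MaxUint64]."""
    assert 0 <= lo <= hi <= MAX_UINT64
    if hi <= MAX_INT64 or lo > MAX_INT64:
        return [(lo, hi)]  # already fits entirely on one side of the boundary
    return [(lo, MAX_INT64), (MAX_INT64 + 1, hi)]

# `id >= 0` over a BIGINT UNSIGNED primary key becomes two range requests:
assert split_unsigned_range(0, MAX_UINT64) == [
    (0, MAX_INT64),
    (MAX_INT64 + 1, MAX_UINT64),
]
# A range that stays below the signed boundary needs no split:
assert split_unsigned_range(10, 1000) == [(10, 1000)]
```

Each half can then be encoded with signed row keys, which is what the "Cannot encode row key with non-long type" path would otherwise reject.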
gharchive/issue
2022-04-10T12:00:50
2025-04-01T06:45:23.566771
{ "authors": [ "xuanyu66", "zzzzming95" ], "repo": "pingcap/tispark", "url": "https://github.com/pingcap/tispark/issues/2290", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
293050746
Error calculating average of tp_bigint

SQL

select avg(tp_bigint) from full_data_type_table

Failed with
TiSpark: 6.422719009976214E14
Spark With JDBC: 7.4281968802033744E16
TiDB: 74281968802033755.5509

TiSpark Plan

== Physical Plan ==
*HashAggregate(keys=[], functions=[sum(sum(tp_bigint#42L)#153L), sum(count(tp_bigint#42L)#154L)])
+- Exchange SinglePartition
   +- *HashAggregate(keys=[], functions=[partial_sum(sum(tp_bigint#42L)#153L), partial_sum(count(tp_bigint#42L)#154L)])
      +- TiDB CoprocessorRDD{[table: full_data_type_table], Columns: [tp_bigint], Aggregates: Sum([tp_bigint]), Count([tp_bigint])}

6.422719009976214E14 and 7.4281968802033744E16 differ a lot, need to investigate this ASAP. fixed by #231
gharchive/issue
2018-01-31T06:59:31
2025-04-01T06:45:23.569216
{ "authors": [ "Novemser", "birdstorm" ], "repo": "pingcap/tispark", "url": "https://github.com/pingcap/tispark/issues/230", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
354274913
DateTime and TimeStamp cannot use index when literals are in string format

We could change the literal string into the timestamp type, or push down a casted filter so that the index can be used correctly. e.g.,

select tp_datetime from t where tp_datetime between '2000-01-01 00:00:00' and timestamp '2001-01-01 00:00:00'
select tp_datetime from t where tp_datetime between '2018' and date '2019-01-01'

can we close it ?
gharchive/issue
2018-08-27T11:04:18
2025-04-01T06:45:23.570466
{ "authors": [ "birdstorm", "shiyuhang0" ], "repo": "pingcap/tispark", "url": "https://github.com/pingcap/tispark/issues/426", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1929289684
Support typescript files as the configuration for webhooks

Now that this bad boy is open source, it's a fine time to bring up the idea again for using typescript .ts files as the configuration of choice instead of .json.

✨ Reasons

power of typescript for the payload types
  Third party types (github, stripe, etc)
  First party types (your own special sauce)
conditional logic in your webhooks payloads
comments to document certain things
secrets from the filesystem or access any other nodejs API
nice declarative typesafe API helpers
possibility of defining more than one variation in a single file
  github-webhooks.ts (workflow_run, push)
  stripe-webhooks.ts (payment_intent.created, customer.created)

What would it look like hypothetically?

github-workflow-run.ts

import { WorkflowRunCompletedEvent } from "@octokit/webhooks-types";
// NOTE: hypothetical impl
import { defineWebhook } from "webhookthing";

// define the github workflow run using the official type
const githubWorkflowRun: WorkflowRunCompletedEvent = {
  // see here i made a typo and TS can save me with them sweet sweet squiggles
  action: "ggcompleted",
  workflow_run: { /* ... */ },
  workflow: { /* ... */ },
  repository: { /* ... */ },
  sender: { /* ... */ },
}

export default defineWebhook({
  payload: githubWorkflowRun,
  headers: {
    "x-github-event": "workflow_run", // this should be default 😅
    "Content-Type": "application/json",
  }
});

🙏 ps thanks and happy to help
gharchive/issue
2023-10-06T01:18:44
2025-04-01T06:45:23.596005
{ "authors": [ "Hacksore" ], "repo": "pingdotgg/webhookthing", "url": "https://github.com/pingdotgg/webhookthing/issues/111", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1914439247
🛑 jeju43 is down In 433b4e3, jeju43 (http://43archives.or.kr/main.do) was down: HTTP code: 0 Response time: 0 ms Resolved: jeju43 is back up in 1143330 after 10 minutes.
gharchive/issue
2023-09-26T23:34:25
2025-04-01T06:45:23.598800
{ "authors": [ "pinnode" ], "repo": "pinnode/pinnode", "url": "https://github.com/pinnode/pinnode/issues/172", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
948271068
How to mark a transaction as failed

When the web server is having an issue like a 500 error, the transaction is marked as success in the web interface of Pinpoint. The return code is 500 but it is not considered an error. What needs to be done in the plugin to mark it as failed? Or if it's not possible, what is a failed transaction?

@marty-macfly We did not add mark_error to the python agent. https://github.com/pinpoint-apm/pinpoint-c-agent/blob/314b46e5e2bf7676cfdde7b3bf02651154b947a7/src/PY/help(pinpointPy).txt#L34-L35

In the php-agent, transactions were marked as error when a fatal error was caught. Meanwhile, I will add this feature for you. Please watch the progress on pinpoint-c-agent!

@marty-macfly Please follow https://github.com/eeliu/pinpoint-c-agent/tree/feat-php-error https://github.com/eeliu/pinpoint-c-agent/blob/59c80a8705c948fb9af8488897f972ef218ddc74/src/PHP/pinpoint_php_api.php#L65-L72

If you find some problems, use the current issue. Please close this issue when the test is done!

It's working properly, can you merge it into https://github.com/pinpoint-apm/pinpoint-c-agent ?

Seen the merge, thanks https://github.com/pinpoint-apm/pinpoint-c-agent/pull/349
gharchive/issue
2021-07-20T05:31:23
2025-04-01T06:45:23.604904
{ "authors": [ "eeliu", "marty-macfly" ], "repo": "pinpoint-apm/pinpoint-php-aop", "url": "https://github.com/pinpoint-apm/pinpoint-php-aop/issues/21", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
725053079
[Doc] Modules Update

This pull request has been automatically created by PinpointAutoBot

Updated Item : [google-http-client, druid, jackson-databind, mybatis-spring] has been updated

*Updated item list may differ if this PR is updated*

Codecov Report

Merging #7336 into master will increase coverage by 0.01%. The diff coverage is n/a.

@@            Coverage Diff             @@
##           master    #7336      +/-   ##
==========================================
+ Coverage   38.62%   38.64%   +0.01%
==========================================
  Files        3321     3320       -1
  Lines       95977    95909      -68
  Branches    11865    11850      -15
==========================================
- Hits        37072    37060      -12
+ Misses      56001    55951      -50
+ Partials     2904     2898       -6

Impacted Files                                          Coverage Δ
...p/pinpoint/rpc/stream/StreamChannelRepository.java   75.00% <0.00%> (-10.00%) :arrow_down:
...tor/metric/datasource/DefaultDataSourceMetric.java   75.00% <0.00%> (-3.13%) :arrow_down:
...nt/collector/receiver/grpc/PinpointGrpcServer.java   47.01% <0.00%> (-2.36%) :arrow_down:
...pinpoint/profiler/monitor/DeadlockMonitorTask.java   18.60% <0.00%> (-1.17%) :arrow_down:
...oint/web/applicationmap/ApplicationMapBuilder.java   80.00% <0.00%> (-0.83%) :arrow_down:
...navercorp/pinpoint/web/service/MapServiceImpl.java   0.00% <0.00%> (ø)
...p/pinpoint/web/service/FilteredMapServiceImpl.java   64.19% <0.00%> (ø)
...rp/pinpoint/plugin/tomcat/TomcatAsyncListener.java   0.00% <0.00%> (ø)
.../web/controller/BusinessTransactionController.java  0.00% <0.00%> (ø)
...mcat/interceptor/RequestStartAsyncInterceptor.java   0.00% <0.00%> (ø)
... and 17 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Last update 4903191...59cb20e.
gharchive/pull-request
2020-10-19T23:35:29
2025-04-01T06:45:23.620195
{ "authors": [ "RoySRose", "codecov-io" ], "repo": "pinpoint-apm/pinpoint", "url": "https://github.com/pinpoint-apm/pinpoint/pull/7336", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
652789215
Implement dark mode versions of non-primary colors

@ponori Let's chat about this one. We have dark mode versions for a few of the secondary colors but not the full list available. I checked in the main repo and most are still in use at least minimally.

These are the colors we're currently using. I checked in the doc you sent and none of the hex values quite match up:

```css
--gestalt-green: #0fa573;
--gestalt-pine: #0a6955;
--gestalt-olive: #364a4c;
--gestalt-blue: #0074e8;
--gestalt-navy: #004b91;
--gestalt-midnight: #133a5e;
--gestalt-purple: #b469eb;
--gestalt-orchid: #8046a5;
--gestalt-eggplant: #5b2677;
--gestalt-maroon: #6e0f3c;
--gestalt-watermelon: #f13535;
--gestalt-orange: #e3780c;
```

We discussed this and the secondary color palette hasn't been finalized by design yet. We'll keep using the same colors for now.
gharchive/issue
2020-07-08T00:52:57
2025-04-01T06:45:23.622131
{ "authors": [ "jennyscript" ], "repo": "pinterest/gestalt", "url": "https://github.com/pinterest/gestalt/issues/982", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
716133061
Run Ktlint on a specific folder

Is there any way to specify the folder that ktlint will run on for Android, like `./gradlew location ktlint` or something like that? In large projects, running ktlint and its format function everywhere would make pull requests larger than expected. (Yes, with a fixed code style, but for this situation I want to select where ktlint will run.)

If you are using ktlint via the CLI, just pass the folder as the last parameter:

```shell
./ktlint "path/to/folder/**/*.kt"
```

If you are using ktlint-gradle, check the `filter { }` part of the plugin configuration.
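For reference, the `filter { }` block mentioned above lives in the ktlint-gradle plugin's extension. A minimal sketch in `build.gradle.kts` (the include/exclude patterns here are examples, not required values):

```kotlin
// build.gradle.kts -- sketch of restricting ktlint-gradle to part of the tree,
// assuming the org.jlleitschuh.gradle.ktlint plugin is applied.
ktlint {
    filter {
        include("**/src/main/kotlin/**")   // only lint this subtree
        exclude("**/generated/**")         // skip generated sources
    }
}
```

With a filter like this, `./gradlew ktlintCheck` and `ktlintFormat` only touch the matched files, which keeps format-only changes out of unrelated pull requests.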
gharchive/issue
2020-10-07T01:48:54
2025-04-01T06:45:23.623997
{ "authors": [ "Tapchicoma", "jsouza678" ], "repo": "pinterest/ktlint", "url": "https://github.com/pinterest/ktlint/issues/940", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
644477832
Add max.poll.interval.ms Kafka consumer setting

The goal of this PR is to enable configuration of max.poll.interval.ms, as described in the Kafka documentation (https://kafka.apache.org/documentation/#max.poll.interval.ms). This setting can prevent consumer-group rebalancing during any long upload in Secor.

Looks good, thanks for the contribution.
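The underlying Kafka property would be set as below; whether Secor passes it through verbatim or under a prefixed name in its consumer config is an assumption here, and the value is only an example:

```properties
# Sketch: raise the poll interval so a long upload does not exceed the
# consumer's poll deadline and trigger a group rebalance.
# 600000 ms = 10 minutes (example value).
max.poll.interval.ms=600000
```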
gharchive/pull-request
2020-06-24T09:49:02
2025-04-01T06:45:23.625904
{ "authors": [ "HenryCaiHaiying", "jehuty0shift" ], "repo": "pinterest/secor", "url": "https://github.com/pinterest/secor/pull/1408", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1072925146
[FIX] return RTPSender by Publish function Description Reference issue Fixes #... LGTM!
gharchive/pull-request
2021-12-07T05:18:01
2025-04-01T06:45:23.687896
{ "authors": [ "adwpc", "zjzhang-cn" ], "repo": "pion/ion-sdk-go", "url": "https://github.com/pion/ion-sdk-go/pull/62", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
437996276
Log the peerconnection id.

Motivation: for now, there is no log of the pc id. We should make a uid for each connection/socket. When there are many pcs, it makes debugging easy.

@jinleileiking WebRTC doesn't have a concept of a PeerConnection id, unfortunately, so I don't know how we would implement this: https://www.w3.org/2018/04/12-webrtc-minutes#item06 If you have an alternative proposal/other ideas I am happy to explore them! Just don't see a clear path forward right now :(

What about using the address of the PeerConnection?

@jinleileiking Not entirely sure what you mean by address of peerConnection. A peer connection uses the ICE process under the hood, which negotiates multiple addresses. Furthermore, these underlying addresses are sometimes renegotiated during a session.

One way you can introduce the idea of IDs is by using the encryption keys and simply hashing the public key with some kind of hashing function.

That does seem like an application-level implementation requirement. Also, thinking about it, we would have to log everything under all the subsystems (ICE, DTLS, SCTP...) and I don't see how that would be possible. I think a few production users like @hugoArregui have robust logging solutions, so it might be worth asking them, and then we can write a Knowledge Base post on it. This is a great idea/problem to solve though. We need to solve this, but I don't think we can do it in the code :(

@jinleileiking One thing you could do is create your own logger factory for each instance of peer connection (via SettingEngine) to dump logs into separate files.

Indeed! Since the LoggerFactory starts from the peer connection but then spreads everywhere, you can use it to customize whatever you want at the connection level. I use it to log a peer id everywhere: https://github.com/decentraland/webrtc-broker/blob/master/internal/webrtc/webrtc.go#L47 https://github.com/decentraland/webrtc-broker/blob/master/internal/logging/logging.go But you can also dump each connection to a file as @enobufs suggests, or whatever you want.
gharchive/issue
2019-04-28T00:11:05
2025-04-01T06:45:23.694449
{ "authors": [ "Sean-Der", "enobufs", "hugoArregui", "jinleileiking", "trivigy" ], "repo": "pion/webrtc", "url": "https://github.com/pion/webrtc/issues/647", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
892027654
update to latest

Need the latest as it supports WiFi on the Portenta.

@AndyBlightLeeds I'm not sure how to fix this? Using the command-line instructions tells me it's up to date.

@nfry321 As discussed, create a new branch and merge the change to the repos file into that. Much easier!
gharchive/pull-request
2021-05-14T15:40:18
2025-04-01T06:45:23.695836
{ "authors": [ "AndyBlightLeeds", "nfry321" ], "repo": "pipebots/micro_ros_arduino", "url": "https://github.com/pipebots/micro_ros_arduino/pull/4", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1419770903
RiskIQ / Passivetotal does not work anymore

RiskIQ has been bought by Microsoft; login creation has been shut down and replaced by "Defender Threat Intelligence". Old logins are supposed to still work via API. Is there an alternative?

Ah, that's too bad. Let me see what can be done here. Thanks for the heads up!

Hi @MichaelSchreiner & @pirxthepilot, MS bought RiskIQ but you can still register with this (hidden) link: https://community.riskiq.com/registration/wtfis This is not a solution for the future, but OK for now...

Still can't find a viable whois alternative that has a free tier. Any recommendations?

The latest version (v0.4.0) (PR #31) adds an additional whois source (ip2whois.com) which offers 500 free queries per month. I find the data isn't as good as PT but it's better than what VT provides. I will keep looking for more quality whois sources. Closing this issue but please create a new one if you have recommendations / ideas. Thanks!
gharchive/issue
2022-10-23T11:59:43
2025-04-01T06:45:23.704711
{ "authors": [ "MichaelSchreiner", "pirxthepilot", "plehr" ], "repo": "pirxthepilot/wtfis", "url": "https://github.com/pirxthepilot/wtfis/issues/20", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
304957840
update 1.2 docs to reflect new architecture [#155606319] Signed-off-by: Warren Fernandes wfernandes@pivotal.io Accepting and merging into Healthwatch v1.2 docs
gharchive/pull-request
2018-03-13T22:25:08
2025-04-01T06:45:23.707267
{ "authors": [ "AmberAlston", "trevorwhitney" ], "repo": "pivotal-cf/docs-pcf-healthwatch", "url": "https://github.com/pivotal-cf/docs-pcf-healthwatch/pull/66", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
365166574
Adding saml in front of auth config to match what the om CLI requires

This adds `saml` in front of the attributes created for SAML. om requires the attributes to exactly match the command-line flags:

    --config, -c string path to yml file for configuration (keys must match the following command line flags)

And the flags are:

- saml-bosh-idp-metadata
- saml-idp-metadata
- saml-rbac-admin-group
- saml-rbac-groups-attribute

Currently, when running from the pipeline, I see:

```
om --env env/env.yml configure-saml-authentication --config config/auth.yml
could not execute "configure-saml-authentication": could not parse configure-saml-authentication flags: flag provided but not defined: -idp-metadata
```

Thanks @hjaffan
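A sketch of what the corrected auth.yml would look like once the `saml-` prefix is applied; the keys come from the flags listed above, while the placeholder values are assumptions:

```yaml
# Hypothetical config/auth.yml -- keys must match the om CLI flag names verbatim.
saml-idp-metadata: https://idp.example.com/metadata        # example value
saml-bosh-idp-metadata: https://idp.example.com/metadata   # example value
saml-rbac-admin-group: opsman-admins                       # example value
saml-rbac-groups-attribute: roles                          # example value
```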
gharchive/pull-request
2018-09-29T21:00:16
2025-04-01T06:45:23.714761
{ "authors": [ "calebwashburn", "hjaffan" ], "repo": "pivotalservices/pipeline-utilities", "url": "https://github.com/pivotalservices/pipeline-utilities/pull/3", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
287730353
[next] temp parent refactor

Another version of #4596. I agree with @englercj that we should not add this branching in a critical place. Although transform itself has two branches inside, I just don't like that temporary parent hack in a critical place. I can rename safeUpdateTransform if anyone has ideas for a new name.

I'm going to close this because of inactivity, no consensus around naming the new API, and the lack of a clearly defined performance benefit. If someone can describe the performance benefit, I will consider avoiding the extra branch.
gharchive/pull-request
2018-01-11T10:12:46
2025-04-01T06:45:23.725079
{ "authors": [ "bigtimebuddy", "ivanpopelyshev" ], "repo": "pixijs/pixi.js", "url": "https://github.com/pixijs/pixi.js/pull/4598", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2008936513
[BUG] Unable to open Roblox - only with Bloxstrap

Acknowledgement of preliminary instructions: [X] I have read the preliminary instructions, and I am certain that my problem has not already been addressed.

What problem did you encounter? I'm unable to run Roblox using Bloxstrap after installing NTVDMx64. The process is in Task Manager, but there's no window. I've tried reinstalling and rebooting; it didn't work.

I uninstalled Bloxstrap and reinstalled the normal Roblox. It seemed to work for me.

> I'm unable to run Roblox using Bloxstrap after installing NTVDMx64

I have a feeling that you've already just pointed out the cause of your issue here. As I asked in the preliminary instructions, does it happen without Bloxstrap? Can I see the list of FastFlags you have configured in the menu?

Uninstalling NTVDMx64 fixes it though

...Yet somehow Roblox by itself works completely fine with it installed?

Yup...

Just throwing this out there, does disabling all your integrations change anything?

Nope

I'm not really sure if there's even anything I can do here, but at this point I just want to ensure that your Roblox install is identical to the stock one. Go to the folder where Bloxstrap is installed (typically %localappdata%\Bloxstrap) and send the contents of State.json here. Then, go into the version folder, delete the ClientSettings folder, and start RobloxPlayerBeta.exe manually. See if anything different happens.

State.json

Nothing happened.

Can you send me a copy of your Bloxstrap installation folder? Throw everything into a .zip archive, upload it somewhere, and send a link to it here.

https://sourceforge.net/projects/nxvdm/

This is happening to me; I have tried re-installing it myself as well. It works on ONE game, then when I leave it (sometimes it doesn't even do one game) it does the same thing: appears in the background but with no window.

@TheRabbit75 As far as I'm concerned you're talking about something completely different. Open a separate issue for it.

> Uninstalling NTVDMx64 fixes it though ...Yet somehow Roblox by itself works completely fine with it installed?

How was that possible? Was it something to do with the latter's ldntvdm.dll interfering with the former?

Not sure. Installing NTVDM seems to affect cmd/conhost as well, changing it to the legacy console (probably unrelated, but just throwing that out there).

Is this still occurring? Client issues are no longer accepted.
gharchive/issue
2023-11-24T01:06:31
2025-04-01T06:45:23.751056
{ "authors": [ "ClaytonTDM", "DimaLeon2000", "TheRabbit75", "bluepilledgreat", "goofinglor", "pizzaboxer" ], "repo": "pizzaboxer/bloxstrap", "url": "https://github.com/pizzaboxer/bloxstrap/issues/975", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1696204799
Add save/load for a project

Need to be able to save and load a project. Verify it works, so that I can persist state prior to long renders. As part of this, look into recognizing the extracted frames and already-processed frames (in case a crash happens).

Related to #55, #45, #13.

Battling crashes, need persistence.
gharchive/issue
2023-05-04T15:03:26
2025-04-01T06:45:23.752718
{ "authors": [ "pj4533" ], "repo": "pj4533/Pokora", "url": "https://github.com/pj4533/Pokora/issues/56", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2056887770
About Chinese performance (inquiry about Chinese capability)

Hello, thank you for releasing such great work. We have also recently fine-tuned mixtral-8x7b (our project: https://github.com/WangRongsheng/Aurora) and would like to know how llama-moe performs in Chinese.

Hi there, thanks for your attention to our project, and for Aurora's contribution to the Chinese MoE community~ We built LLaMA-MoE on LLaMA-2 and have not tested it on Chinese tasks, so it is expected to have a poor Chinese capability. It may require extra vocabulary extension, continual pre-training, and fine-tuning to enhance Chinese understanding and generation.

Thanks for the reply. We will test llama-moe's Chinese capability later!
gharchive/issue
2023-12-27T04:07:46
2025-04-01T06:45:23.756183
{ "authors": [ "Spico197", "WangRongsheng" ], "repo": "pjlab-sys4nlp/llama-moe", "url": "https://github.com/pjlab-sys4nlp/llama-moe/issues/48", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2098882031
Add profile for Data-Warehouse GmbH

Automatically generated pull request to add a profile for Data-Warehouse GmbH according to application pkic/members#207. This closes pkic/members#207.

@skambia unfortunately, I was unable to find an SVG-based logo on the website for Data-Warehouse GmbH; can you provide us with an SVG or other vector-based logo?
gharchive/pull-request
2024-01-24T18:50:32
2025-04-01T06:45:23.767898
{ "authors": [ "vanbroup" ], "repo": "pkic/pkic.org", "url": "https://github.com/pkic/pkic.org/pull/95", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2498185039
IDV SDK for React Native

Do you have an SDK for implementing IDV in a React Native application? If yes, please share the reference guide.

Yes, the SDK is located here: https://github.com/plaid/react-native-plaid-link-sdk and you can find the docs here: https://plaid.com/docs/link/react-native/
gharchive/issue
2024-08-30T21:17:48
2025-04-01T06:45:24.146509
{ "authors": [ "dfvalenciaviamericas", "phoenixy1" ], "repo": "plaid/idv-quickstart", "url": "https://github.com/plaid/idv-quickstart/issues/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2188941779
format the values in the integration

Checklist
- [X] I have filled out the template to the best of my ability.
- [X] This only contains 1 feature request (if you have multiple feature requests, open one feature request for each feature request).
- [X] This issue is not a duplicate feature request of previous feature requests.

Is your feature request related to a problem? Please describe.
The values provided by livisi are displayed differently in different cards in the HA dashboard. The date displayed at the motion detector cannot be adjusted. Is it possible to format the values in the integration?

Describe the solution you'd like
Is it possible to format the values in the integration? Similar to the decimal places in temperature and humidity values.

Describe alternatives you've considered
--

Additional context
--

This is not something an integration does. The cards do the formatting (which explains why it differs).
gharchive/issue
2024-03-15T15:53:18
2025-04-01T06:45:24.239002
{ "authors": [ "ThMFlive", "planbnet" ], "repo": "planbnet/livisi_unofficial", "url": "https://github.com/planbnet/livisi_unofficial/issues/99", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
975337530
Remove timeframe of completion from expired tickets

Currently, on the ticket page, the expired tickets have a time frame for completion. However, the expired tickets should look like this.

Should be fixed by #567.
gharchive/issue
2021-08-20T07:16:52
2025-04-01T06:45:24.245942
{ "authors": [ "C-ollins", "Sirmorrison" ], "repo": "planetdecred/godcr", "url": "https://github.com/planetdecred/godcr/issues/570", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
1300520987
[Rust] Implement more methods in Vector

Planus vectors are basically just slices with extra steps. They should support more operations like splitting, trimming, or more iterators (windows/chunks). Basically, just look at everything on the std slice type and see if it makes sense to port it.

When implementing this, keep in mind that most of the Rust standard library was written before const generics; some of the functions would look a bit different if they were designed today.

We cannot really use const generics for anything here, since most vector operations return Results.
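To illustrate why these adapters must yield Results rather than plain slices, here is a sketch; neither `FallibleVec` nor this `chunks` signature is planus's actual API, just a stand-in for a vector whose element access can fail:

```rust
// Sketch only: a planus-style vector yields Result per element, so
// slice-style adapters like chunks() must propagate errors.
struct FallibleVec {
    data: Vec<u32>,
}

impl FallibleVec {
    fn len(&self) -> usize {
        self.data.len()
    }

    // Element access can fail, like decoding a flatbuffer entry.
    fn get(&self, i: usize) -> Result<u32, String> {
        self.data
            .get(i)
            .copied()
            .ok_or_else(|| format!("index {} out of range", i))
    }

    // chunks(): decode each window of `size` elements, failing a chunk
    // on the first element error inside it.
    fn chunks(&self, size: usize) -> impl Iterator<Item = Result<Vec<u32>, String>> + '_ {
        (0..self.len()).step_by(size).map(move |start| {
            (start..(start + size).min(self.len()))
                .map(|i| self.get(i))
                .collect()
        })
    }
}
```

A windows() counterpart would look the same except with overlapping ranges; the shared design question is whether a chunk-level error should end iteration or let the caller skip the bad chunk.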
gharchive/issue
2022-07-11T10:35:16
2025-04-01T06:45:24.269997
{ "authors": [ "TethysSvensson" ], "repo": "planus-org/planus", "url": "https://github.com/planus-org/planus/issues/116", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
194459016
Setup app.logger for python environment Fixes #47 👍
gharchive/pull-request
2016-12-08T22:14:29
2025-04-01T06:45:24.274712
{ "authors": [ "kaustubhvp", "soamvasani" ], "repo": "platform9/fission", "url": "https://github.com/platform9/fission/pull/48", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1668523537
Feature request: @platformatic/client - send header parameters Currently @platformatic/client only sends request body and url parameters, but not any headers defined in the openapi spec. This is somewhat confusing as the client cli includes the header parameters in the types it creates, so an editor will suggest you add a property that won't actually be sent. This is indeed a bug! Would you like to take a stab at fixing it? It should be fairly simple! Hi! Yeah I've actually already done it I just need to write some tests. I'll get something ready soon when I have some time, thanks!
gharchive/issue
2023-04-14T16:00:37
2025-04-01T06:45:24.278883
{ "authors": [ "gunjam", "mcollina" ], "repo": "platformatic/platformatic", "url": "https://github.com/platformatic/platformatic/issues/875", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1686862097
platformatic deploy should ask to memorize the deployment keys (if agreed)

Copying and pasting all the parameters of platformatic deploy locally is tiresome and error-prone. platformatic deploy should ask the user if they want to memorize the workspace id, secret, label, etc. The goal is that after the first time, platformatic deploy should ask if they want to use the workspace for deployment and deploy there.

I have some questions and ideas:

- Should it ask to save each key, or just at the end, like "do you want to save for the next deploy?"
- Should they be saved in the .env?
- Saving the api-key is pretty sensitive, as in the cloud you have access just once.
- Saving the label can "lock" the user into not creating more "branches".

I think this feature should be a bit more complex. After the first deploy is saved, we can ask "deploying to workspace _n_, confirm?" or select the workspace to deploy to.

I have a feeling we might want to keep asking for the label in case of a dynamic workshop. I think we should save this information in a different file, possibly in the home directory of the user or in a hidden folder.

We'll do something different.
gharchive/issue
2023-04-27T13:29:49
2025-04-01T06:45:24.282467
{ "authors": [ "mcollina", "valdo99" ], "repo": "platformatic/platformatic", "url": "https://github.com/platformatic/platformatic/issues/912", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1760974332
remove hot reloading from everything but the runtime In an effort to simplify things, this commit moves all hot module reloading responsibilities into the runtime. The functionality will still be available in the CLI now that platformatic start wraps everything in the runtime. There are failures in the ci-client job, but it appears unrelated to the changes in this PR. Here is an existing failure on the head of main. Moving to draft. Found a bug during testing.
gharchive/pull-request
2023-06-16T16:38:45
2025-04-01T06:45:24.284214
{ "authors": [ "cjihrig" ], "repo": "platformatic/platformatic", "url": "https://github.com/platformatic/platformatic/pull/1103", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
80568114
teensy arduino linking problem after upgrade to PIO 2.0

Since upgrading to 2.0, I get an error when compiling a project for teensy3.1-arduino:

```
../lib/gcc/arm-none-eabi/4.8.3/../../../../arm-none-eabi/bin/ld: cannot open linker script file /home/andy/.platformio/packages/ldscripts/mk20dx256.ld: No such file or directory
collect2: error: ld returned 1 exit status
scons: *** [.pioenvs/teensy31/firmware.elf] Error 1
```

I already deleted .pioenv and even reinstalled everything from scratch, but to no avail. What can I do?

Hi @gandy92, it seems the ldscript package has been updated incorrectly. The file mk20dx256.ld should be in this directory: /home/user/.platformio/packages/ldscripts. If it's not there, try to update your packages via platformio update. If this does not help, delete your appstate.json file from /home/user/.platformio and install the teensy platform again.

Thanks, that helped. Apparently, a few file permissions in ~/.platformio/packages got corrupted during a previous sudo platformio upgrade. After fixing the file permissions, deleting the appstate.json file, and installing the teensy platform during the next platformio run, my build system is back to normal.

@gandy92 I'm glad to hear that! :blush: Since PlatformIO 2.0, each upgrade to a new version will force a platformio platforms update automatically. It should help avoid problems like yours.

I'm not sure, but I gather the problem was a sudo platformio upgrade which not only upgraded platformio in /usr/local/ with root permissions, but also updated libs and packages residing in $HOME/.platformio with root permissions. This left some of the files (including packages/ldscripts) owned by root instead of myself, making it impossible to update the files or even whole subdirectories without sudo. Next time I run into this kind of problem I know I have to check permissions in my ~/.platformio. Would it be possible to check if platformio update is running with root permissions and fix file ownership in non-system directories?

I've just fixed it. PlatformIO will not run the updating process for packages/libs when the user runs the platformio upgrade command. P.S.: Since PlatformIO 2.0, automatic library/package updates were disabled by default.

Perfect, thanks! :+1: It must have happened some while ago. I remember having encountered problems during some updates, but each time, rerunning platformio update appeared to resolve the problem. Now, in retrospect, what really might have happened was that the appstate.json was updated without the complete update ever happening.

@gandy92 what do you think about the new Continuous Integration feature in PlatformIO 2.0? See examples: https://travis-ci.org/felis/USB_Host_Shield_2.0 https://travis-ci.org/ivankravets/Arduino-IRremote
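A consolidated sketch of the recovery described above, in case sudo left root-owned files under the home directory (commands are illustrative; review what you are deleting first):

```shell
# Give ~/.platformio back to your user, drop the stale state file, and let
# PlatformIO reinstall/update the packages on the next run.
sudo chown -R "$USER" ~/.platformio
rm ~/.platformio/appstate.json
platformio update
```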
gharchive/issue
2015-05-25T15:18:49
2025-04-01T06:45:24.333104
{ "authors": [ "gandy92", "ivankravets", "valeros" ], "repo": "platformio/platformio", "url": "https://github.com/platformio/platformio/issues/214", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
630676825
Particle systems with same settings simulate at different speeds?

https://forum.playcanvas.com/t/bug-with-particles-duration/13472

Test project can be seen here: https://playcanvas.com/project/691967/overview/particles-not-starting-at-same

I'm unable to reproduce the error on my Mac and am now wondering if it could be device-specific?

I can reproduce it on Windows, Chrome Version 83.0.4103.61 (Official Build) (64-bit).
gharchive/issue
2020-06-04T09:48:33
2025-04-01T06:45:24.340369
{ "authors": [ "LeXXik", "yaustar" ], "repo": "playcanvas/engine", "url": "https://github.com/playcanvas/engine/issues/2124", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
487789006
GWT compile-to-JS step doesn't work on Java 10

Not sure what fixing this would involve, for instance whether GWT is still being maintained. But I have documented it here in case it helps anyone. Note that the error message is very cryptic, but once I understood the problem I was able to find a workaround.

```
$ java -version
java 10.0.1 2018-04-17
$ mvn integration-test -P html
....
[INFO] --- gwt-maven-plugin:2.7.0:compile (default) @ myproject-html ---
[INFO] Compiling module com.example.MyProject
[INFO] [ERROR] Hint: Check that your module inherits 'com.google.gwt.core.Core' either directly or indirectly (most often by inheriting module 'com.google.gwt.user.User')
...
[INFO] BUILD FAILURE
...
[ERROR] Failed to execute goal org.codehaus.mojo:gwt-maven-plugin:2.7.0:compile (default) on project myproject-html:
```

== Failed workaround: There may be a workaround involving using the latest version of GWT (GWT 2.8.2 is supposed to work on Java 10). However, I don't understand the build process well enough to get this to work.

== Actual workaround: Downgrading Java to 1.8 or before makes the GWT compilation step work. Note that it is not necessary to uninstall Java 10; there are various ways to temporarily give an earlier version of Java priority.

```
$ java -version
java version "1.8.0_211"
```

I updated the default GWT version to 2.8.2. You can add:

```xml
<properties>
  <gwt.version>2.8.2</gwt.version>
</properties>
```

to your game's top-level pom.xml file to apply this upgrade to your local game until the next playn release.
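One sketch of the "actual workaround", pointing a single shell session at an older JDK without uninstalling Java 10; the JDK path below is an example and depends on your install:

```shell
# Run just this build under Java 8 (example path; adjust for your system).
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
java -version          # should now report a 1.8.x version
mvn integration-test -P html
```

Because the change only affects the current shell's environment, other terminals keep using Java 10.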
gharchive/issue
2019-08-31T16:49:49
2025-04-01T06:45:24.395790
{ "authors": [ "codemonkey232", "samskivert" ], "repo": "playn/playn", "url": "https://github.com/playn/playn/issues/83", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
774897562
article does not decode properly: missing first pages

BD00FFCBFFA1FFC1231CC55FFF90FFB9 jAsia-PacificBiodiversity.13.325-330

This article, using glyphs only, does not decode. It is missing the first pages: BD00FFCBFFA1FFC1231CC55FFF90FFB9 jAsia-PacificBiodiversity.13.325-330.pdf

Tried both "Render Glyphs Only" and "Decode Unmapped", and this PDF decodes perfectly fine ... not sure which options you used, but the usual ones for born-digital PDFs seem to both work perfectly fine, both coming up with 6 pages.

I used render glyphs, not decode, and both did not work.

Did you upload the file? Do you happen to still have any log files? Hard to tell what might have happened otherwise ...

Uploaded the IMF with all the pages now.

I can reproduce the missing pages. Here is the log: GgImagine.20201227-1354.out.zip

Can you please release the IMF? I can't open it, as it is locked by the admin.

It's released now, not sure what went wrong yesterday.

The only potential reason for the two missing pages is that some template got assigned that indicates 2 cover pages ... all 6 pages are generated properly, then the PDF decoder selects a template, sorts out any cover pages the latter indicates, and moves on to analyze the content of the remaining pages ... not sure which template could have possibly gotten in the way here, though ...

A bit of digging shows Geodiversitas.2018-.journal_article as the culprit ... looks as though its anchors require some refinement.
gharchive/issue
2020-12-26T16:09:30
2025-04-01T06:45:24.400922
{ "authors": [ "gsautter", "myrmoteras" ], "repo": "plazi/GoldenGATE-Imagine", "url": "https://github.com/plazi/GoldenGATE-Imagine/issues/10", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1871155068
🛑 Treatmentbank is down

In 5c45412, Treatmentbank (https://tb.plazi.org/GgServer/search?fullText.ftQuery=lepus&fullText.matchMode=prefix&taxonomicName.taxonomicName=&taxonomicName.isNomenclature=true&taxonomicName.exactMatch=true&taxonomicName.order=&taxonomicName.family=&taxonomicName.genus=&taxonomicName.species=&BibMetaData.docAuthor=&BibMetaData.docDate=&BibMetaData.docTitle=&BibMetaData.docOrigin=&BibMetaData.part=&BibMetaData.pageNumber=&BibMetaData.extId=&materialsCitation.location=&materialsCitation.country=&materialsCitation.stateProvince=&materialsCitation.typeStatus=All+Types&materialsCitation.collectionCode=&materialsCitation.specimenCode=&materialsCitation.LSID=&materialsCitation.longitude=&materialsCitation.latitude=&materialsCitation.degreeCircle=1&materialsCitation.elevation=&materialsCitation.elevationCircle=100&indexName=0&subIndexName=0&minSubResultSize=0) was down:

- HTTP code: 0
- Response time: 0 ms

Resolved: Treatmentbank is back up in 5b12714 after 22 days, 16 hours, 4 minutes.
gharchive/issue
2023-08-29T08:27:00
2025-04-01T06:45:24.410329
{ "authors": [ "retog" ], "repo": "plazi/monitoring", "url": "https://github.com/plazi/monitoring/issues/283", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
232955653
Question: one controller instance per request Is there a way to create an instance of a controller per request? This would allow providing context about the request to all the methods of the controller (ie: currentUser, logger (with request metadata), requestId, ...), without having to pass the context around via method parameters. To share date between calls, you can use ctx in Koa. In Express you can attach it to the req object. @NoNameProvided that's exactly what I wanted to avoid, because it's harder to get type information and you still have to pass the req around. Makes it harder to break an action implementation in multiple methods and it's harder to reuse with inheritance. I am not sure I understand what do you want then.Do you want a special param, like an injectable Context type? @Get(/item/:id) public async function(@Context() context: Context) { // ... } In this case I dont see the difference between injecting a @Req() req: Request object or this context object. Or do you want Context Attached to the actual request controllers? @Get(/item/:id) public async function(@Context() context: Context) { this.context.mySpecifiedParam; } If you want this, how would you specify it's type? How will they override each other (if speaking about inheritance) In theory it would be possible. Currently we register a single global route handler to handle requests. So we can initialize a new instance of a controller for the matched route, but this would need a rewrite of the drivers to work this way. Also initializing controllers, and all needed dependency on every request probably will have a performance impact on the response time, so this should be made optional if we implement it and I am not sure if this can be done a nice way. But this is a kind of question we should ask @pleerock about What I really would like is to be able to register a Controller with a factory method that then returns the controller instance to handle the request. 
Quick example of a use case:
class MyController {
  constructor(private logger: Logger, private somethingModel: SomethingModel) {
  }

  @Get("/foo")
  public async getSomething() {
    this.logger.log("I'm about to get something")
    this.checkSomething();
    return await this.somethingModel.get();
  }

  private checkSomething() {
    this.logger.log("I'm checking!");
  }
}

const app = createExpressServer({
  controllers: [req => new MyController(Logger.fromRequest(req), new SomethingModel("production"))]
});
This lets you log with contextual info from any helper method (checkSomething() in the example) and makes it easy to test controllers:
// test!
const testController = new MyController(consoleLogger, modelMock);
await testController.getSomething();
Just an example of a use case. Basically you want controllers as services, I believe. If I'm not mistaken there is no such functionality right now in routing controllers. The proper way right now is to create a separate service using typedi and test it instead. Test the service in unit tests, but controllers should be tested in functional/acceptance tests. "Basically you want controllers as services, I believe." No, services are singletons injected by typedi (if you use typedi); what @maghis wants is a different instance of controllers for every request. From the Logger.fromRequest(req) line and his comments I think he simply doesn't want to pass ids and that kind of stuff around. So he doesn't have to write this.logger.log("I'm checking!", req.contextId); but simply this.logger.log("I'm checking!"); because Logger.fromRequest(req) would return an instance of the logger with the id of the request pre-set. But I don't see much value in adding such a feature. Context is easily creatable in a before middleware already. Attaching a logger to the request via req.logger = Logger.fromRequest(req) and then using it in the controllers like req.logger.log('xy') can perfectly solve his problem. 
So I still don't understand why it's so important to have a different instance for every request. "Test the service in unit tests, but controllers should be tested in functional/acceptance tests." You can already test them in unit tests if you want. You can inject whatever dummy class you want for your controller. They are just simple classes after all. routing-controllers allows you to declare routes and functions that will be executed on a user's request to that route, using decorators and class methods. What you are asking for is just complexity - don't do things this way. Simply handle your actions at the controller method level, and if you have something reusable, extract the functionality into services and reuse the services. @NoNameProvided my bad, I meant to say something about "you want to have one request instance and the ability to work with it in a 'request - process spawn - response - process died' way. Like you work with requests in PHP." There is exactly one instance of controller per request :D The thing is, it's not so easy to get used to thinking "hey, this service/controller/class is not for the user who I got from the request, it's for everyone, so it can have my app parameters, but not some user's parameters" and passing params every time and everywhere. If you are from a different background, though. @idchlife it's actually one of the requirements of the MVC design pattern, for the controller to be stateless on a request basis. It allows you to do a lot of code reuse by inheriting from common base controllers and guarantees that there is no state "leak" between requests. 
Design of other popular MVC frameworks: ASP.net MVC: https://stackoverflow.com/questions/5425920/asp-net-mvc-is-controller-created-for-every-request Ruby on Rails: https://stackoverflow.com/questions/14172986/why-does-rails-create-a-controller-for-every-request ASP.net Web API: https://stackoverflow.com/questions/25175626/asp-net-web-api-why-are-controllers-created-per-request Maybe I'm old school, but I'm working on a large number of microservices that need to share common tracing/logging/perf measurement infrastructure and this is currently a requirement. I really like the project, I'll see if I can reuse the controller description/metadata part. Thanks for the help! "it's actually one of the requirements of the MVC design pattern, for the controller to be stateless on a request basis" actually the MVC design pattern is quite abstract and does not force you to make such design decisions. But yeah, controllers should be stateless on a request basis and nothing should be stored by a user in a controller; it should be a rule for everyone. In routing-controllers controllers are stateless until you set a container. If you set a container - they are services. If you make them services, you can store in such services something request-based or not, but generally it's a bad idea and everyone should avoid doing that. "share common tracing/logging/perf measurement" You can do that using services. If you need a request object in your services, simply pass it as a method parameter, e.g. this.logger.log(req, "I'm checking!");. The only problem is sending the req object each time. So, is that the problem for you? @pleerock yep, I would like to avoid that; having to pass around req objects continuously makes it harder to test. I'll try using services that way and maybe give a shot at implementing a driver. Thanks! 
okay I've got your idea and request, probably something like this is implementable: @JsonController({
  factory: (request, response) => {
    const logger = new Logger(request);
    return new UserController(logger);
  }
})
export class UserController {
  constructor(private logger: Logger) {
  }
} so, if factory is defined then it will be used instead of getting the controller from a service container. @pleerock in that case you would no longer be able to use @Inject() and other DI utilities. You'd be responsible for resolving all dependencies of the controller yourself, correct? right, he would need to do something like this: @JsonController({
  factory: (request, response) => {
    const logger = new Logger(request);
    const userRepository = Container.get(UserRepository);
    return new UserController(logger, userRepository);
  }
})
export class UserController {
  constructor(private logger: Logger, private userRepository: UserRepository) {
  }
} Wouldn't it be simpler if TypeDI (or another configured container) could create a new instance of the Controller class with injected services/repositories (either shared or newed)? There would only be an option in the decorator to mark a controller as 1 instance per 1 request, and the injecting boilerplate would happen under the hood. let's say we do such a thing in typedi - then how do you see it working? Provide more details: how should the integration look, and how should request/response objects come from routing-controllers to typedi? "how request/response objects shall come from routing-controllers to typedi" I was answering the one controller instance per request problem. And I really don't get the idea of Logger.fromRequest(req) - why does some class have to be instantiated using the whole req object instead of explicit params? Even if there is a use case for this kind of weird thing, it can be done easily now with a custom param decorator which can inject Logger.fromRequest(req), so you can use the logger associated with the request in your action handler. 
And if the logger performs autologging, it should be extracted to a middleware and attached before the route. "I was answering the one controller instance per request problem." yeah, I was talking about the same. "I really don't get the idea" Imagine you want to log each user action and add request data (browser, ip, etc.) to the logs. To do so right now you have two inconvenient solutions: @JsonController()
export class UserController {
  constructor(private logger: Logger) { }

  @Get("/users")
  all(@Req() request: Request) {
    this.logger.log("message", request);
  }

  @Post("/users")
  save(@Req() request: Request) {
    this.logger.log("message", request);
  }
} or @JsonController()
export class UserController {
  @Get("/users")
  all(@MyLogger() logger: Logger) {
    logger.log("message");
  }

  @Post("/users")
  save(@MyLogger() logger: Logger) {
    logger.log("message");
  }
} With the first approach the inconvenience is that you have to pass req into the logger each time; with the second approach the inconvenience is that you have to inject your logger each time. Currently, all services are singletons; maybe you can allow users to create services with different lifetimes at the time of creation. Something similar is done in Asp.Net Core: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection#service-lifetimes-and-registration-options Transient Transient lifetime services are created each time they are requested. This lifetime works best for lightweight, stateless services. Scoped Scoped lifetime services are created once per request. Singleton Singleton lifetime services are created the first time they are requested (or when ConfigureServices is run if you specify an instance there) and then every subsequent request will use the same instance. Sorry for the long post, but I think per request injection can lead to very succinct code. Even if the injection hierarchy is complex. 
Example: @JsonController()
export class UserController {
  // So finally something very simple but with a complex hierarchy of dependencies
  // friends -> currentUser -> request
  // friends -> db
  // clients -> currentUser -> request
  // clients -> db
  // debt -> clients
  @Get("/friendsClientsAndDebts")
  all(friendsService: FriendsService, clientsService: ClientsService, debtService: DebtService) {
    return {
      friends: friendsService.getFriends(),
      clients: clientsService.getClients(),
      debt: debtService.getDebt(),
    }
  }
} Explanations of dependencies: // Db is a singleton
@Singleton()
export class Db {
  users: collection;
  friends: collection;
  clients: collection;
}

@RequestScope()
export class CurrentUserService {
  currentUser: User = null;

  constructor(
    private db: Db,
    private req: Request) {
  }

  getUser() {
    if (this.currentUser == null)
      this.currentUser = this.db.users.find(this.req.session.userId);
    return this.currentUser;
  }
}

@RequestScope()
export class FriendsService {
  constructor(
    private db: Db,
    private currentUser: CurrentUserService) {}

  getFriends() {
    if (this.friends == null)
      this.friends = this.db.friends.find(this.currentUser.getUser());
    return this.friends;
  }
}

@RequestScope()
export class ClientsService {
  constructor(
    private db: Db,
    private currentUser: CurrentUserService) {}

  getClients() {
    if (this.clients == null)
      this.clients = this.db.clients.find(this.currentUser.getUser());
    return this.clients;
  }
}

@RequestScope()
export class DebtService {
  constructor(
    private clientsService: ClientsService) {}

  getDebt() {
    // use clientsService
  }
} I think controller per request is very important to have if you are using DI. All examples here deal with a simplified use case of a logger, which is too simplistic to base a design decision on; it has no dependencies and only a couple of methods. Sure, in that case you can pass req in the call. But what if you have a DI topography 3 classes down, DBLocator->DBService->SearchService, and it's DBService that needs user info to check permissions and set createdBy automatically? 
How do you then have to put an extra object on every method in DBService and SearchService to pass req through to the one function that actually does it? Or wouldn't it be simpler to let DI initialize SearchService, cascading to DBService, which will have Zones or Context injected in it and have it as this.userId for the duration of the request? So no matter which method you call, as long as DBService references this.userId you're fine. I implemented this architecture thinking I was in the good old MVC world, and it worked. Then it stopped, because it only stored the first user's request data at initial creation; other users were ignored. That's pretty fragile and confusing. I would definitely appreciate stateless controllers that can be initialized per request, to avoid this problem. "Sorry for the long post, but I think per request injection can lead to very succinct code. Even if the injection hierarchy is complex." The whole code can also be succinct using the functional approach: export class Db {
  users: collection;
  clients: collection;
  friends: collection;
  debts: collection;
}

export class FriendsService {
  constructor(
    private db: Db,
  ) {}

  getFriends(user: User) {
    return this.db.friends.find(user);
  }
}

export class ClientsService {
  constructor(
    private db: Db,
  ) {}

  getClients(user: User) {
    return this.db.clients.find(user);
  }
}

export class DebtService {
  constructor(
    private db: Db,
  ) {}

  getDebt(user: User) {
    return this.db.debts.getAccountState(user);
  }
}

@JsonController()
export class UserController {
  constructor(
    private friendsService: FriendsService,
    private clientsService: ClientsService,
    private debtService: DebtService,
  ) {}

  @Get("/friendsClientsAndDebts")
  all(@CurrentUser() user: User) {
    return {
      friends: this.friendsService.getFriends(user),
      clients: this.clientsService.getClients(user),
      debt: this.debtService.getDebt(user),
    }
  }
} So your services are stateless and they get the data from outside - 
it's passed to them by the action handler, which extracts the data from the request. It doesn't need shared state and new instances per request, only some changes in app architecture and different thinking, as it's single-threaded, event-loop based Node.js 😉 "But what if you have a DI topography 3 classes down, DBLocator->DBService->SearchService, and it's DBService that needs user info to check permissions and set createdBy automatically?" This is the node.js ecosystem. We don't have 5 layers of abstraction, 10 interfaces to implement and 3 different design patterns like in the C#/Java world to make things work nicely 😆 "How do you then have to put an extra object on every method in DBService and SearchService to pass req through to the one function that actually does it? Or wouldn't it be simpler to let DI initialize SearchService, cascading to DBService, which will have Zones or Context injected in it and have it as this.userId for the duration of the request?" Don't overcomplicate things when they don't have to be. Keep it simple, stupid 😜 "I implemented this architecture thinking I was in the good old MVC world, and it worked." Node.js, being single-threaded and event-loop based, needs a different way of thinking. It might be hard for senior C# developers to switch their thinking, but this requires it. This is not .NET; we don't need overcomplicated MVC implementations. You can't map the way you always create your apps onto the node.js world, because you would kill all the advantages it has thanks to its nature. If it's hard for you, go back to C# 😉 I think what @19majkel94 wants to say (and what I say to everyone who comes from java/c#) is that in node.js and javascript/typescript we must think in a more javascript/typescript-ish way. Most design patterns were developed for languages like java because of limitations those languages have. 
I'm not saying they are bad; they have their own pros and cons (that's why typescript exists - it took the pros of other languages and merged them into javascript), but anyway you need to think in a new way. That's what I think @19majkel94 wants to say, and that's what I personally learned in the javascript world after years of programming in java. But node.js can have 5 layers of abstraction, why not? Okay, personally I hate people who overcomplicate things and make useless abstractions (one of the biggest mistakes of "pro"-s), but this can be real. And I can admit that it's no pleasure to pass data from the controller down through all the levels below. It's really not a big pleasure to do so. And I would like to fix this problem if there is a good fix. So what is your solution, guys? Can you propose some really, really good solution that fits the routing-controllers, typedi and nodejs ecosystems? One thing I can propose is a container per request that may work, however I'm not sure what other problems it may bring. Because Node.js is single-threaded and event-loop based, it is difficult to maintain scope variables across its callbacks and promises (async/await). C# (which originated the async/await syntax) uses per-request containers to easily maintain the scope of the request. It will also make it easier to forward request information to downstream services: for example, extract an opentracing id in a request interceptor, store it in a request-scoped container and use it when making a request to a downstream service. Besides that, it will prevent bugs when a programmer (accidentally) stores request variables as a property of a Service. When 2 requests are using that same variable at the 'same time' it will have ugly results. On the developer's PC it will work fine, but not in production... I don't really see a downside to per-request containers. Maybe a solution like I proposed here will be sufficient. Just wanted to chime in with a real-world item that scoped lifetime (per request) services would solve in node. 
Say you have a LogService which you want to decorate with per-request information, such as the username of the user making the http call. Something like the following: class LogService {
  private _identity: Identity;

  public log(msg) {
    console.log(`${this._identity.username}: ${msg}`);
  }
} With a singleton service, that won't work. Sure, you can pass the user's Identity around to each call to LogService.log(), but that gets super tedious. It essentially clutters up a ton of function interfaces. You could technically have a singleton which was a factory which makes a new LogService when you call it and provide it with an Identity, but that feels like sidestepping the problem and losing a lot of the benefits that DI containers provide. If you had a scoped lifetime, which is to say scoped to the context of the http call, it's much easier to construct something like the LogService above, and then you wouldn't have to explicitly track the user's Identity by passing it through each relevant service's functions. Multiple instances are not the solution. Continuation-Local Storage is. Right now you can use the bare req/ctx object to pass request-related data between services, etc. We will think about how to implement a request-scoped context in the future. @MichalLytek cls is a good decision. I have already used it instead of passing the req object everywhere.
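The continuation-local storage pattern the thread converges on has a direct analogue in Python's `contextvars`: each task or request gets its own copy of a context variable, so a singleton service can read "the current request" without it being passed through every call. A minimal sketch of the idea (all names here are illustrative; this is not routing-controllers or typedi API):

```python
# Sketch of continuation-local (request-scoped) storage: a singleton
# service reads per-request context from a ContextVar instead of a
# parameter threaded through every method.
from contextvars import ContextVar

current_request_id: ContextVar[str] = ContextVar("current_request_id")

class LogService:
    """Singleton service that still logs per-request context."""
    def log(self, msg: str) -> str:
        return f"[req {current_request_id.get()}] {msg}"

logger = LogService()  # one shared instance for all requests

def handle_request(req_id: str, msg: str) -> str:
    # Framework middleware would set this once at the start of a request.
    token = current_request_id.set(req_id)
    try:
        return logger.log(msg)
    finally:
        current_request_id.reset(token)

print(handle_request("a1", "checking"))  # [req a1] checking
print(handle_request("b2", "checking"))  # [req b2] checking
```

In Node, `AsyncLocalStorage` from `async_hooks` plays the same role that CLS libraries did at the time of this thread.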
gharchive/issue
2017-06-01T17:39:15
2025-04-01T06:45:24.449416
{ "authors": [ "19majkel94", "NoNameProvided", "adnan-kamili", "dmikov", "dvlsg", "emallard", "idchlife", "komanton", "maghis", "marshall007", "pleerock", "steven166" ], "repo": "pleerock/routing-controllers", "url": "https://github.com/pleerock/routing-controllers/issues/174", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1625539835
Apply suggestion from browser for password field closes #3990 @stevepiercy No, I don't think it does. It's a hint to the password manager (including one which may be built into the browser) for which field would be the right one to fill in the current password. The policy decision about whether to allow that to happen would be made elsewhere (in configuration for the browser or password manager), not at the level of a specific site.
gharchive/pull-request
2023-03-15T13:32:20
2025-04-01T06:45:24.500838
{ "authors": [ "davisagli", "lord2anil" ], "repo": "plone/volto", "url": "https://github.com/plone/volto/pull/4524", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
494645780
Improve usability of login form I know that this has a11y issues, but we are wondering if there's no better option (maybe include a hidden SR-only link inside the proper tab flow... /cc @tisto Can't fully test right now, traveling. But a hidden link might be problematic; there is a high probability of SR users not getting to that link, as autofocus grabs the focus and a positive tabindex messes up the tab order. Plus, autofocus has more a11y issues as well, for mobile users, who will be yanked to the input field, possibly missing the context when they have limited screen real estate. It can work, but only if the login is a dedicated page/form/modal with nothing else on it, whose only purpose and content is to log in. I don't know by heart if this component is meant to be a single-purpose one or if it is mixed into a page with multiple other elements. Even then, you have to provide good aria labels to explain to the user that they have to type in a username and that the purpose of that is to log in, as they will have missed the title of the form being something like "login to our lovely site"... @polyester It's just for a plain login form: It will only sport the username/login; it's unfortunate in this case that we should have the link in the description of the field, because of the forced "jump". Autofocus is useful on desktop web, but can indeed be problematic in mobile browsers. I can also see a big problem in the fact that the "register" link is the last one in the tabindex chain; that's why I was asking for the hidden one. Thinking out loud: would it be possible to let the default be accessible, and 'only' use some js to autofocus if you're pretty certain it does more good than harm? I.e. on a desktop without assistive technology? Don't know if there are helper functions for that. Can check later when back home
gharchive/pull-request
2019-09-17T14:05:13
2025-04-01T06:45:24.504399
{ "authors": [ "polyester", "sneridagh" ], "repo": "plone/volto", "url": "https://github.com/plone/volto/pull/861", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1736651249
176 add null support for histograms Describe your changes Added an extra filter query that skips rows with column value Null. Changed the template to accommodate the query. _histogram function does not raise a Value Error if null data spotted and plots for the non-NULL values. Issue number Closes #176 Checklist before requesting a review [x] Performed a self-review of my code [x] Formatted my code with pkgmt format [x] Added tests (when necessary). [x] Added docstring documentation and update the changelog (when needed) :books: Documentation preview :books:: https://jupysql--568.org.readthedocs.build/en/568/ To see the specific tasks where the Asana app for GitHub is being used, see below: https://app.asana.com/0/0/1204085956498808 don't forget to request a review once this is ready need to make changes in doc/api/magic-plot.md Sure, I will make the changes Added some minor comments. Please resolve merge conflicts. Looks good otherwise
gharchive/pull-request
2023-06-01T16:08:16
2025-04-01T06:45:24.514795
{ "authors": [ "AnirudhVIyer", "edublancas", "neelasha23" ], "repo": "ploomber/jupysql", "url": "https://github.com/ploomber/jupysql/pull/568", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1875624454
[Bug] Components not working in the Detail Grid I am wanting to use cell renderers for the Detail Grid. In the example below, I am unable to get markdown to render inside a Detail Grid. I hope to expand this out to use any cellRenderer (not just markdown). Below is an example to reproduce the issue. from dash import Dash, html import dash_ag_grid as dag import os app = Dash(__name__) masterColumnDefs = [ { "headerName": "Country", "field": "country", "cellRenderer": "agGroupCellRenderer", "cellRendererParams": { 'innerRenderer': "markdown", }, }, { "headerName": "Region", "field": "region" }, { "headerName": "Population", "field": "population" }, { "headerName": "Test Link", "field": "test_link", "cellRenderer": "markdown", }, ] detailColumnDefs = [ { "headerName": "City", "field": "city", "resizable": True, "cellRenderer": "markdown" }, { "headerName": "Pop. (City proper)", "field": "population_city", "resizable": True, }, { "headerName": "Pop. (Metro area)", "field": "population_metro", "resizable": True, }, { "headerName": "Test Render", "field": "test_render", "resizable": True, "cellRenderer": "agGroupCellRenderer", "cellRendererParams": { 'checkbox': True, 'innerRenderer': "markdown", }, } ] rowData = [ { "country": "**China**", "region": "Asia", "population": 1411778724, "test_link":"[Blank Test Link](/)", "cities": [ { "city": "**Shanghai**", "population_city": 24870895, "population_metro": "NA", "test_render": "**BOLD TEST**" }, { "city": "**Beijing**", "population_city": 21893095, "population_metro": "NA", "test_render": "**BOLD TEST**" }, { "city": "**Chongqing**", "population_city": 32054159, "population_metro": "NA", "test_render": "**BOLD TEST**" }, ], }, { "country": "**India**", "region": "Asia", "population": 1383524897, "test_link":"[Blank Test Link](/)", "cities": [ { "city": "**Delhi**", "population_city": 16753235, "population_metro": 29000000, "test_render": "**BOLD TEST**" }, { "city": "**Mumbai**", "population_city": 12478447, 
"population_metro": 24400000, "test_render": "**BOLD TEST**" }, { "city": "**Kolkata**", "population_city": 4496694, "population_metro": 14035959, "test_render": "**BOLD TEST**" }, ], }, { "country": "**United States**", "region": "Americas", "population": 332593407, "test_link":"[Blank Test Link](/)", "cities": [ { "city": "**New York**", "population_city": 8398748, "population_metro": 19303808, "test_render": "**BOLD TEST**" }, { "city": "**Los Angeles**", "population_city": 3990456, "population_metro": 13291486, "test_render": "**BOLD TEST**" }, { "city": "**Chicago**", "population_city": 2746388, "population_metro": 9618502, "test_render": "**BOLD TEST**" }, ], }, { "country": "**Indonesia**", "region": "Asia", "population": 271350000, "test_link":"[Blank Test Link](/)", "cities": [ { "city": "**Jakarta**", "population_city": 10154134, "population_metro": 33430285, "test_render": "**BOLD TEST**" }, ], }, ] app.layout = html.Div( [ dag.AgGrid( id="simplified-master-detail-example", enableEnterpriseModules=True, licenseKey=os.environ["AGGRID_ENTERPRISE"], columnDefs=masterColumnDefs, rowData=rowData, columnSize="sizeToFit", masterDetail=True, detailCellRendererParams={ "detailGridOptions": { "columnDefs": detailColumnDefs, }, "detailColName": "cities", "suppressCallback": True, }, dashGridOptions={"detailRowAutoHeight": True}, ), ] ) if __name__ == "__main__": app.run(host="127.0.0.1", debug=True) Hello @rpforrest1, Thank you for finding this issue, I have created a PR with the fix. :)
gharchive/issue
2023-08-31T14:12:23
2025-04-01T06:45:24.519751
{ "authors": [ "BSd3v", "rpforrest1" ], "repo": "plotly/dash-ag-grid", "url": "https://github.com/plotly/dash-ag-grid/issues/237", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
730790420
Color palette is now more colorblind-friendly I chose a subset of the Wong palette described here: https://www.nature.com/articles/nmeth.1618 closes #450 Looks good, one small comment on the color choice. Otherwise :woman_dancing: Thanks @nicholas-esterer ! Looks good to me, :woman_dancing: @surchs the black color is indeed very distinct but I'm not sure it renders that well for semi-transparent overlays. I'll let Nick decide whether he wants to try it out ;-). It's true that black is more contrasting and it works when changing its alpha channel for a segmentation image but to me it seemed sort of out of place maybe? For example if there were text on the image it would probably be in black. Also I hoped to solve the problem that the yellows and blues are sort of similar, but it seems it isn't possible for 5 colors, even if you include the black (this website tries to simulate the effect). https://davidmathlogic.com/colorblind/#%23000000-%23E69F00-%2356B4E9-%23009E73-%23F0E442-%230072B2-%23D55E00-%23CC79A7 So probably I will just leave it as is :-)
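For reference, the color-blindness simulator URL quoted above carries the eight Wong-palette hex values in its fragment, with `%23` as the URL-encoded `#`. A small sketch decoding them:

```python
# Decode the Wong palette hex codes from the davidmathlogic URL quoted above.
# The URL fragment encodes "#RRGGBB" values with %23 in place of "#".
from urllib.parse import unquote

fragment = "%23000000-%23E69F00-%2356B4E9-%23009E73-%23F0E442-%230072B2-%23D55E00-%23CC79A7"
wong_palette = unquote(fragment).split("-")
print(wong_palette)
# ['#000000', '#E69F00', '#56B4E9', '#009E73', '#F0E442', '#0072B2', '#D55E00', '#CC79A7']
```

The merged palette is a five-color subset of these, dropping the black for the reasons discussed above.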
gharchive/pull-request
2020-10-27T20:28:23
2025-04-01T06:45:24.523935
{ "authors": [ "emmanuelle", "nicholas-esterer", "surchs" ], "repo": "plotly/dash-sample-apps", "url": "https://github.com/plotly/dash-sample-apps/pull/516", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1541781
[feature request] can_#{event} method should run validations On active_record objects, the state machine doesn't run validations when determining if an object can transition to a state. This provides unexpected behaviour, as I would assume that can_do_foo? would return false if the validations fail. Here's a simple example:
state :foo do
  validate :fooable?
end

event :do_foo do
  transition all => :foo
end

def fooable?
  errors.add(:base, "you can't do this")
  false
end

$ obj.can_do_foo? # => true
$ obj.do_foo # => false
Hi Brad, Thanks for commenting on this. The truth is that there are several things that could prevent an event from transitioning, including: validation failures, before callback halts (before_transition, before_save, etc.), and rollbacks in any before / after callback. You'll truly never know if a transition will succeed until you actually attempt it. If we were to make this helper a little more robust, the question becomes: which kinds of failures can it catch, and is it obvious that it only catches those failures? In addition, the only way to determine whether validations will pass is to actually change the value of the state field. The act of changing the state has a pretty high potential of causing other unexpected side effects, especially when you call can_#{event}? in succession for multiple events. Currently this helper is meant to answer one question: Is there a transition defined in the state machine that can transition from the current state to the next state, ignoring object context? If we make that question more complex, I think there are other subtle problems that we're going to encounter. In your situation, I would consider whether you can just attempt to call do_foo without the need to check for can_do_foo? If there's logic for trying different events, I'd suggest that perhaps you can push that logic into the state machine. If there's a specific reason you need to call can_#{event}? instead of just calling the event, I need to understand the use case to determine what we can do in state_machine to help. An alternative you could consider with the current implementation is the following:
obj.state_event = 'do_foo'
if obj.valid?
  ...
else
  ...
end
What this does is the following:
1. Validate that a transition exists.
2. If it succeeds, make the transition to the new state.
3. Run the remaining validations on the object. If the validations fail, the transition is rolled back.
Let me know your thoughts. ya that's a fair response. I was just thrown off a bit because my assumption was that it would check these things. I've already used a similar workaround above so there are no problems per se. I would make a specific mention in the docs that can_#{event} only checks if the transition from one state to another is allowed and doesn't consider things like validations. Thanks for the feedback!
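The semantics obrie describes - `can_#{event}?` only consults the transition table, while validations run (and can veto) on the actual attempt - can be sketched with a toy machine. This is a hypothetical illustration of the behavior, not the state_machine gem's API:

```python
# Toy sketch: "can_fire" ignores validations entirely (like can_#{event}?),
# while "fire" changes state, runs validations, and rolls back on failure.
class ToyMachine:
    transitions = {("idle", "do_foo"): "foo"}  # (from_state, event) -> to_state

    def __init__(self):
        self.state = "idle"
        self.errors = []

    def can_fire(self, event):
        # Only asks: is a transition defined from the current state?
        return (self.state, event) in self.transitions

    def fire(self, event):
        if not self.can_fire(event):
            return False
        old = self.state
        self.state = self.transitions[(old, event)]
        if not self.valid():      # validations see the new state...
            self.state = old      # ...and roll the transition back
            return False
        return True

    def valid(self):
        self.errors = ["you can't do this"]  # mirrors fooable? above
        return False

m = ToyMachine()
print(m.can_fire("do_foo"))  # True  - a transition is defined
print(m.fire("do_foo"))      # False - validation vetoed it
print(m.state)               # idle  - rolled back
```

This mirrors the `obj.state_event = 'do_foo'; obj.valid?` workaround: the state field is tentatively changed so validations can inspect it, and the change is undone if they fail.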
gharchive/issue
2011-09-01T15:34:11
2025-04-01T06:45:24.583250
{ "authors": [ "bradrobertson", "obrie" ], "repo": "pluginaweek/state_machine", "url": "https://github.com/pluginaweek/state_machine/issues/112", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2371695729
🛑 Dick Pluim.com is down In 17f9dfc, Dick Pluim.com (https://dickpluim.com) was down: HTTP code: 0 Response time: 0 ms Resolved: Dick Pluim.com is back up in f784409 after 10 minutes.
gharchive/issue
2024-06-25T05:37:54
2025-04-01T06:45:24.615012
{ "authors": [ "pluim003" ], "repo": "pluim003/upptime", "url": "https://github.com/pluim003/upptime/issues/2446", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2401653313
🛑 Pierced Arrows is down In baabdba, Pierced Arrows (https://www.piercedarrows.nl) was down: HTTP code: 0 Response time: 0 ms Resolved: Pierced Arrows is back up in 440ceec after 38 minutes.
gharchive/issue
2024-07-10T20:33:13
2025-04-01T06:45:24.617556
{ "authors": [ "pluim003" ], "repo": "pluim003/upptime", "url": "https://github.com/pluim003/upptime/issues/2922", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2462205163
🛑 Pierced Arrows is down In 6e258f6, Pierced Arrows (https://www.piercedarrows.nl) was down: HTTP code: 0 Response time: 0 ms Resolved: Pierced Arrows is back up in 2affc50 after 1 hour.
gharchive/issue
2024-08-13T01:52:41
2025-04-01T06:45:24.619929
{ "authors": [ "pluim003" ], "repo": "pluim003/upptime", "url": "https://github.com/pluim003/upptime/issues/3888", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2511393044
🛑 KCG is down In bb3ba59, KCG (https://kunstrijclubgroningen.nl) was down: HTTP code: 0 Response time: 0 ms Resolved: KCG is back up in 769f251 after 32 minutes.
gharchive/issue
2024-09-07T02:17:31
2025-04-01T06:45:24.622481
{ "authors": [ "pluim003" ], "repo": "pluim003/upptime", "url": "https://github.com/pluim003/upptime/issues/4561", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2527212728
🛑 KCG is down In dbb2666, KCG (https://kunstrijclubgroningen.nl) was down: HTTP code: 0 Response time: 0 ms Resolved: KCG is back up in eea5b3f after 9 minutes.
gharchive/issue
2024-09-15T21:35:07
2025-04-01T06:45:24.624861
{ "authors": [ "pluim003" ], "repo": "pluim003/upptime", "url": "https://github.com/pluim003/upptime/issues/4792", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2721686131
🛑 KCG is down In fea46f5, KCG (https://kunstrijclubgroningen.nl) was down: HTTP code: 0 Response time: 0 ms Resolved: KCG is back up in 2af4fe9 after 9 minutes.
gharchive/issue
2024-12-05T23:45:42
2025-04-01T06:45:24.627185
{ "authors": [ "pluim003" ], "repo": "pluim003/upptime", "url": "https://github.com/pluim003/upptime/issues/6837", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
647447959
navitem: explore how to split out markup Goal: Supply non-React projects/products with as much support through nav DS as possible. Existing: They already have CSS separately built. We can split out logic easily enough as needed. There likely won't be a lot in the nav stuff, but who knows. Current criticality: I think this would be good for the story of nav and DS overall. Currently, it likely is not critical because both products will likely have a React implementation. But, maybe not. We still don't know how they'll plan to share it. htm lib is a solution that has been pitched previously. Is there a solution that would work in prism today (hyperapp)? Giving myself until noon Wednesday 7/15/20 to prototype this.
gharchive/issue
2020-06-29T14:59:05
2025-04-01T06:45:24.630296
{ "authors": [ "EdwardIrby", "jaketrent" ], "repo": "pluralsight/design-system", "url": "https://github.com/pluralsight/design-system/issues/1037", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1260666918
CORS! !!!!!!!!!!!!!!!!!! This was an issue with the prod deployment breaking in an infinite loop. It's been fixed.
gharchive/issue
2022-06-04T06:52:01
2025-04-01T06:45:24.655762
{ "authors": [ "joswayski" ], "repo": "plutomi/plutomi", "url": "https://github.com/plutomi/plutomi/issues/615", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1740436877
CDK Deploy still continues even if docker build fails (local) This leads to an infinite loop when ECS tries to deploy because it can't find the image. No longer a problem, as we are no longer deploying docker images to ECS.
gharchive/issue
2023-06-04T15:35:57
2025-04-01T06:45:24.657420
{ "authors": [ "joswayski" ], "repo": "plutomi/plutomi", "url": "https://github.com/plutomi/plutomi/issues/877", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2593277055
Ibk branch I added the authentication and authorisation to the backend.
Hi @Ibukun-tech, can you please resolve conflicts?
gharchive/pull-request
2024-10-16T23:48:55
2025-04-01T06:45:24.658527
{ "authors": [ "Ibukun-tech", "plutov" ], "repo": "plutov/formulosity", "url": "https://github.com/plutov/formulosity/pull/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2577903619
Move plot module under backtesting module Move plot module under backtesting module feat: move plot module under backtesting module
gharchive/issue
2024-10-10T07:27:23
2025-04-01T06:45:24.659618
{ "authors": [ "pmarino84" ], "repo": "pmarino84/strategy_tester", "url": "https://github.com/pmarino84/strategy_tester/issues/52", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2478801174
Update os-lib to 0.10.4 About this PR 📦 Updates com.lihaoyi:os-lib from 0.10.3 to 0.10.4 📜 GitHub Release Notes - Version Diff Usage ✅ Please merge! I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! 🔍 Files still referring to the old version number The following files still refer to the old version number (0.10.3). You might want to review and update them manually. CHANGELOG.md ⚙ Adjust future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "com.lihaoyi", artifactId = "os-lib" } ] Or, add this to slow down future updates of this dependency: dependencyOverrides = [{ pullRequests = { frequency = "30 days" }, dependency = { groupId = "com.lihaoyi", artifactId = "os-lib" } }] labels: library-update, early-semver-minor, semver-spec-patch, old-version-remains, commit-count:1 Superseded by #63.
gharchive/pull-request
2024-08-21T18:59:00
2025-04-01T06:45:24.668502
{ "authors": [ "scala-steward" ], "repo": "pme123/camundala", "url": "https://github.com/pme123/camundala/pull/60", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
827518944
Fix parmap deprecation warnings Since parmap 1.5.0, parmap allows keyword arguments to be passed to the function, besides positional arguments. In that version I (the parmap author) had to deprecate the pool and chunksize arguments, replacing them with pm_pool and pm_chunksize respectively. parmap changelog: https://github.com/zeehio/parmap/blob/2640864449afc94fad1e1af32b4ef43ac0a3f80c/ChangeLog#L16 This change was necessary to ensure parmap keyword arguments do not collide with your mapped function's arguments. If the named arguments in your function don't start with pm_, you are safe from any future argument I add to parmap (so it's just to avoid potential future problems). I hope it helps. thanks very much for this PR!
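The collision the pm_ prefix avoids can be illustrated with a simplified stand-in mapper. This is not parmap's implementation — only the pm_pool/pm_chunksize names come from parmap >= 1.5.0; pm_map, scale and everything else here is hypothetical:

```python
# Mini-mapper sketch: because the mapper's own options are prefixed with
# pm_, a user function may freely declare arguments named 'pool' or
# 'chunksize' without them being swallowed by the mapper.
from multiprocessing.dummy import Pool  # thread-based pool, stdlib

def pm_map(func, iterable, *args, pm_pool=None, pm_chunksize=1, **kwargs):
    """Map func over iterable, forwarding *args/**kwargs to func."""
    pool = pm_pool or Pool(2)
    try:
        return pool.map(lambda x: func(x, *args, **kwargs), iterable, pm_chunksize)
    finally:
        if pm_pool is None:
            pool.close()

def scale(x, factor=1, chunksize=0):  # 'chunksize' here belongs to the user
    return x * factor + chunksize

result = pm_map(scale, [1, 2, 3], factor=10, chunksize=5)
# result == [15, 25, 35]; chunksize=5 reached scale(), not the pool
```

With the old unprefixed names, the mapper would have consumed chunksize=5 itself instead of forwarding it to the user's function — exactly the ambiguity the rename removes.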
gharchive/pull-request
2021-03-10T10:42:30
2025-04-01T06:45:24.675086
{ "authors": [ "pmelchior", "zeehio" ], "repo": "pmelchior/pygmmis", "url": "https://github.com/pmelchior/pygmmis/pull/19", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1815805874
⚠️ GitHub has degraded performance In 11c0887, GitHub (https://www.githubstatus.com/api/v2/status.json) experienced degraded performance: HTTP code: 200 Response time: 133 ms Resolved: GitHub performance has improved in 7d73325.
gharchive/issue
2023-07-21T13:23:54
2025-04-01T06:45:24.718619
{ "authors": [ "pmmmwh" ], "repo": "pmmmwh/upptime", "url": "https://github.com/pmmmwh/upptime/issues/466", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2172829772
⚠️ Slack has degraded performance In 48cacbf, Slack (https://status.slack.com/api/v2.0.0/current) experienced degraded performance: HTTP code: 200 Response time: 163 ms Resolved: Slack performance has improved in c2aa6d1 after 2 hours, 52 minutes.
gharchive/issue
2024-03-07T02:49:13
2025-04-01T06:45:24.721085
{ "authors": [ "pmmmwh" ], "repo": "pmmmwh/upptime", "url": "https://github.com/pmmmwh/upptime/issues/599", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1125450691
useDrag with pointer: { touch: true } gets stuck on iOS after pulling the home bar
Describe the bug
Open the codesandbox below on iOS devices. When you drag the gray box, it follows. Then, hold the gray box with one finger, and pull the iOS home bar up with another finger at the same time. Then release the two fingers. Now the gray box gets stuck. No matter how you drag it, it won't move or trigger the callback of useDrag. If you comment out line 17, which is pointer: { touch: true }, this bug does not occur. So maybe this is related to touch events. I've searched the issues of use-gesture, and it seems this bug is somewhat like #349. But I think they are two different bugs.
Sandbox or Video
Sandbox: https://codesandbox.io/s/use-gesture-stucks-when-switching-between-apps-on-ios-fdsxl?file=/src/App.js
Video with the sandbox above: https://user-images.githubusercontent.com/16526078/152717896-d28fcda0-29c3-43fb-a3bd-4aad62c8c9ee.MP4
Another video with a real picker component: https://user-images.githubusercontent.com/16526078/152718245-969569b1-18b1-4482-bfa0-6f7d8545a0d9.MP4
Information:
React Use Gesture version: 10.2.5
Device: iPhone 12 mini
OS: iOS 15.2.1
Browser: Safari
Checklist:
[x] I've read the documentation.
[x] If this is an issue with drag, I've tried setting touch-action: none to the draggable element.
Hey @awmleer thanks for the detailed issue. This should be fixed in 10.2.6!
Great! It works now!
gharchive/issue
2022-02-07T03:09:29
2025-04-01T06:45:24.742041
{ "authors": [ "awmleer", "dbismut" ], "repo": "pmndrs/use-gesture", "url": "https://github.com/pmndrs/use-gesture/issues/441", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
24208912
Status graphs don't work on SSL connections
Migrated from https://code.google.com/p/alfresco-bulk-filesystem-import/issues/detail?id=134
What steps will reproduce the problem?
1. Initiate an import on an Alfresco server configured with HTTPS access only
2. View the status screen
What is the expected output? What do you see instead?
Expected: The live graph showing the number of files read/written.
Actual: A screen with graphs missing and an error on the console saying:
[blocked] The page at https:///alfresco/service/bulk/import/filesystem/status ran insecure content from http://yui.yahooapis.com/3.8.0/build/simpleyui/simpleyui-min.js. status:1 Uncaught ReferenceError: Y is not defined
What version of the product are you using? On what operating system?
alfresco-bulk-filesystem-import-33-1.2.1.amp
OSX Mavericks and Ubuntu 10.0.4 LTS
Depends on issue #31.
Pretty sure this will now Just Work:tm:, due to the switch to jQuery (which we load over either HTTP or HTTPS - whichever Alfresco is configured to use).
gharchive/issue
2013-12-12T21:14:39
2025-04-01T06:45:24.793522
{ "authors": [ "pmonks" ], "repo": "pmonks/alfresco-bulk-import", "url": "https://github.com/pmonks/alfresco-bulk-import/issues/11", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1346634496
discussion/benchmark-loop-pointer

Benchmark                                                     Mode   Cnt      Score  Error  Units
PngEncoderBenchmarkCompressionSpeedVsSize.loopVariableInline  thrpt         100.355         ops/s
PngEncoderBenchmarkCompressionSpeedVsSize.loopVariablePtr     thrpt         100.402         ops/s

With b14f4af:

Benchmark                                                     Mode   Cnt      Score  Error  Units
PngEncoderBenchmarkCompressionSpeedVsSize.loopVariableInline  thrpt        6166.051         ops/s
PngEncoderBenchmarkCompressionSpeedVsSize.loopVariablePtr     thrpt       25824.451         ops/s

I get these results on my M1 Max Mac Studio:

Benchmark                                                     Mode   Cnt      Score      Error  Units
PngEncoderBenchmarkCompressionSpeedVsSize.loopVariableInline  thrpt   25  10582,975 ±  27,052  ops/s
PngEncoderBenchmarkCompressionSpeedVsSize.loopVariablePtr     thrpt   25  38609,613 ± 119,538  ops/s

After changing the ptr to ptr++ in loopVariablePtr I get these values:

Benchmark                                                     Mode   Cnt      Score      Error  Units
PngEncoderBenchmarkCompressionSpeedVsSize.loopVariableInline  thrpt   25  10546,442 ±  48,624  ops/s
PngEncoderBenchmarkCompressionSpeedVsSize.loopVariablePtr     thrpt   25  10038,773 ± 127,950  ops/s

I.e. in this empty case it's slower, since in loopVariableInline you have 2 registers (x + y; height and width can be const folded...) and in loopVariablePtr you have 3 registers (x + y + ptr). Before, ptr was always 0 and could be const folded. To have a fair comparison you should read width and height from e.g. a BufferedImage stored as an instance variable, so that the compiler cannot const fold the values.

After using the state to read the sizes from the buffered image (i.e. this patch)

diff --git a/src/test/java/com/pngencoder/PngEncoderBenchmarkCompressionSpeedVsSize.java b/src/test/java/com/pngencoder/PngEncoderBenchmarkCompressionSpeedVsSize.java
index 639605e..7c371dd 100644
--- a/src/test/java/com/pngencoder/PngEncoderBenchmarkCompressionSpeedVsSize.java
+++ b/src/test/java/com/pngencoder/PngEncoderBenchmarkCompressionSpeedVsSize.java
@@ -60,9 +60,9 @@ public class PngEncoderBenchmarkCompressionSpeedVsSize {
     private static Random random = new Random();

     @Benchmark
-    public void loopVariableInline(Blackhole blackhole) {
-        int height = 1000;
-        int width = 1000;
+    public void loopVariableInline(Blackhole blackhole, BenchmarkState state) {
+        int height = state.bufferedImage.getHeight();
+        int width = state.bufferedImage.getWidth();

         for (int y = 0; y < height; y++) {
             for (int x = 0; x < width; x++) {
@@ -72,14 +72,14 @@ public class PngEncoderBenchmarkCompressionSpeedVsSize {
     }

     @Benchmark
-    public void loopVariablePtr(Blackhole blackhole) {
-        int height = 1000;
-        int width = 1000;
+    public void loopVariablePtr(Blackhole blackhole, BenchmarkState state) {
+        int height = state.bufferedImage.getHeight();
+        int width = state.bufferedImage.getWidth();

         int ptr = 0;
         for (int y = 0; y < height; y++) {
             for (int x = 0; x < width; x++) {
-                blackhole.consume(ptr);
+                blackhole.consume(ptr++);
             }
         }
     }

I get these numbers:

Benchmark                                                     Mode   Cnt      Score      Error  Units
PngEncoderBenchmarkCompressionSpeedVsSize.loopVariableInline  thrpt   25  12348,274 ± 100,777  ops/s
PngEncoderBenchmarkCompressionSpeedVsSize.loopVariablePtr     thrpt   25  11281,532 ±  60,577  ops/s

Don't ask me why; it does not really make sense to be faster here... Maybe generating the constant 1000 value inline is more expensive on M1 Max ARM than using a width value which is already in a register. This might be an ARM64 thing. Maybe because of const folding, the constant 1000 is generated for every compare / multiplication, instead of being just put into a register and reused there.
As said, optimization is hard... As seen here, on ARM loading a constant takes some effort. On x64 the immediate value can be loaded with one instruction and sometimes even be used inline with the operation (multiplication / compare in this case). This was all on Zulu JDK 17.0.3 for ARM.
2022-08-22T15:56:04
2025-04-01T06:45:24.803791
{ "authors": [ "johantiden", "rototor" ], "repo": "pngencoder/pngencoder", "url": "https://github.com/pngencoder/pngencoder/pull/40", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2718627134
Translation of More Shield Variants into Ukrainian Could you add the Ukrainian translation created by me? This is to "mod-folder"/lang uk_ua.json This is to minecraft/lang uk_ua.json @pnk2u ? Pull requests: https://github.com/pnk2u/More-Shield-Variants/pull/10, https://github.com/pnk2u/More-Shield-Variants/pull/11 Translation has been added
gharchive/issue
2024-12-04T19:32:34
2025-04-01T06:45:24.807438
{ "authors": [ "StarmanMine142" ], "repo": "pnk2u/More-Shield-Variants", "url": "https://github.com/pnk2u/More-Shield-Variants/issues/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1623699007
Speed of initial JITing
I realise this is a bit hand-wavey, but I have noticed that the initial JITing stage can be quite slow compared to when I used exactly the same setup with Diffrax as the ODE solver, e.g. 20s for a simple NODE-type setup. I realise this is a new library (while Diffrax has been optimised over the course of several years), and once the JIT staging is complete, each iteration in probdiffeq runs very quickly / optimises models in a few seconds, which is awesome. I just thought I would share my experience of using JAX over the past few years: slow compilation can often be the symptom of issues I haven't quite been able to put my finger on. However, I have found that very innocuous changes, such as exactly where you vmap something or changing the order of a calculation, can sometimes result in significant speed-ups in this process, e.g. in one incident, literally just changing at what "level" I used a vmap resulted in halving that JIT staging. I also know that Diffrax has used tricks for speeding up JITing, e.g. there is a flag for "scan_stage" which runs a particular mode to substantially improve compilation speed.
There is also #140, which might relate to your problem (depending on the number of derivatives in your prior and on how complicated your vector field is). You mention something about selecting when to apply vmap. I would love to hear more!
I would love to say I have some hard and fast rules on this issue, but I don't. It is very much trial and error: if there is some flexibility in terms of the ordering of function calls or the level at which vmap can be placed, and JITing is slow, I resort to trying differing options. E.g. I have found that you sometimes have a choice between vmapping an outer function, or passing a tensor X into a function where JAX primitives act on the whole array and then vmapping a call to an inner function — and you get a difference. I haven't found a hard and fast rule for that.
I think it mostly occurs when you are taking gradients / jvps / vjps in the inner function. Also, I have found from time to time that although vmap is supposed to "auto-magically" handle efficient vectorisation, it can result in faster / more efficient code if you do it manually. My guess, in some cases as described above, is that the JIT compiler fails to see some efficiency and then takes a long time to build a more complicated representation, perhaps making unnecessary copies of elements that aren't required. The problem is, the more complex your model gets, the harder it is to look at the JIT build and really try to work out what is going on in there.
I see; thanks for elaborating! Sounds like there are some (potentially hard-to-find) performance gains up for grabs :) Let's take a closer look: I assume you're computing some value_and_grad of a fixed-step solver like in the parameter estimation notebooks. Is that correct? Which solver are you using? Knowing where to look will simplify investigating potential compilation-time improvements. A minimal example would be fantastic, but knowing the solver configuration would already be a decent starting point (so I/we can check jaxpr etc.). There is also https://github.com/pnkraemer/probdiffeq/issues/140, which might relate to your problem (depending on the number of derivatives in your prior and on how complicated your vector field is).
This appears to be the big factor. Every additional derivative adds significant additional time to the JITing process, such that the difference between using 1 and 4 results in triple the time to "compile". I will do a bit of further experimentation and set up a minimal example (or two), with some comments on differences in performance, and email them directly to you in a couple of days' time.
Sounds great! Looking forward to your email.
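The "vmap an outer per-element function vs. act on the whole tensor" choice discussed above can be sketched as follows. This is a hedged illustration, not probdiffeq code: the function names are made up, and both forms compute the same values — the point is only that they are traced as different programs, which is where compile-time differences can creep in:

```python
import jax
import jax.numpy as jnp

X = jnp.ones((4, 3))

# Option A: write a per-row function and vmap it from the outside.
def row_norm_sq(x):
    return jnp.dot(x, x)

a = jax.jit(jax.vmap(row_norm_sq))(X)

# Option B: pass the whole tensor in and use batched primitives directly.
def all_norms_sq(X):
    return jnp.einsum("ij,ij->i", X, X)

b = jax.jit(all_norms_sq)(X)

# Same result either way; only the traced jaxpr (and hence potentially
# the compile time) may differ between the two formulations.
```

Comparing `jax.make_jaxpr` output for the two variants is one way to see what the compiler is actually being asked to build.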
In the meantime, if you need <= 5 derivatives and the issues in #140 are a blocker, try the Runge-Kutta starter (in probdiffeq.taylor) instead of taylor_mode_fn. This initialisation scheme does not suffer from the problem in #140. The choice between such functions is an argument in the solution routines, and for a small number of derivatives it works just as well.
Wondering if you could make use of a similar trick to the one described here: Improve compilation speed with scan-over-layers https://docs.kidger.site/equinox/tricks/ Obviously, the Taylor series expands in length each time around the loop, and jax.lax.scan doesn't allow for differing size returns. Perhaps padding could be used to handle this?
Unfortunately, the trick discussed here does not apply (the problem is a different one, as you write). But it turns out that there is a way of padding the Taylor coefficients cleverly such that the for loop can be avoided. This has (already) been implemented by #474. Could you please install the latest version from GitHub and check whether the compilation time still bugs you?
Winner..... compilation time is less than half what it was previously on a simple example, using number of derivatives = 4.
Fantastic! Should we close this issue then? Do you consider your problem to be resolved?
gharchive/issue
2023-03-14T15:16:45
2025-04-01T06:45:24.817889
{ "authors": [ "adam-hartshorne", "pnkraemer" ], "repo": "pnkraemer/probdiffeq", "url": "https://github.com/pnkraemer/probdiffeq/issues/456", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
840055097
Fixes broken tests when default output configured to JSON. Closes #2305
Merged manually
gharchive/pull-request
2021-03-24T18:53:51
2025-04-01T06:45:24.824055
{ "authors": [ "waldekmastykarz" ], "repo": "pnp/cli-microsoft365", "url": "https://github.com/pnp/cli-microsoft365/pull/2306", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1326379700
[BUG] Get-PnPTermGroup does not work with app permissions.
Reporting an Issue or Missing Feature
When using Get-PnPTermGroup with a connection created with app permissions, the cmdlet fails with an out-of-index error or hangs an Azure Automation Runbook. I believe this also impacts Get-PnPTerm.
Expected behavior
I expect this to return a collection of Term Groups.
Actual behavior
I have gotten a number of errors depending on how I try to call the code, but the example below just times out and suspends an Azure Automation Runbook with no error capture. Often "Value cannot be null. Parameter name: path", but I believe this is because variables used to try and capture the result just time out and get assigned null. I have also received "Specified argument was out of the range of valid values. Parameter name: index" when attempting to pass the result of Get-PnPTermGroup in a variable.
Steps to reproduce behavior

###############################################################################################
# Set Preferences
###############################################################################################
$ErrorActionPreference = 'Stop'

###############################################################################################
# Import Modules
###############################################################################################
$forPnPPowerShell = @{
    Force           = $true
    Name            = 'PnP.PowerShell'
    RequiredVersion = '1.11.0'
}
Import-Module @forPnPPowerShell

###############################################################################################
# Connect to services
###############################################################################################
$appConnection = Get-AutomationConnection -Name 'APP_CONNECTION'
$withTheAppConnection = @{
    ClientID   = $appConnection.ApplicationId
    Tenant     = $appConnection.TenantId
    Thumbprint = $appConnection.CertificateThumbprint
    Url        = 'https://tenant-admin.sharepoint.com'
}
Connect-PnPOnline @withTheAppConnection

Get-PnPTermGroup

What is the version of the Cmdlet module you are running?

ModuleType Version Name
---------- ------- ----
Manifest   1.10.0  PnP.PowerShell

Which operating system/environment are you running PnP PowerShell on?
[ ] Windows
[ ] Linux
[ ] MacOS
[ ] Azure Cloud Shell
[ ] Azure Functions
[X] Other : Azure Automation Runbooks

@stvpwrs have you given the app registration the required application permissions? e.g. TermStore.Read.All or TermStore.ReadWrite.All
Hi @stvpwrs, have you tried adding the SharePoint principal as a term store admin? https://github.com/pnp/powershell/issues/1749#issuecomment-1090766211
Hi @stvpwrs, any update on this? Can you try what @milanholemans suggested and let us know?
Hello, sorry for the lack of response here, this issue slipped my mind. I think we should leave it closed; I just ended up not using an automation account for this. There was so much odd behavior with the automation account that the local client didn't have, I just stopped trying to troubleshoot it. I will say that all of the auth and permissions were correct and all the code would run correctly locally, just not in an automation account.
gharchive/issue
2022-08-02T20:41:24
2025-04-01T06:45:24.838963
{ "authors": [ "CallumCrowley", "gautamdsheth", "milanholemans", "stvpwrs" ], "repo": "pnp/powershell", "url": "https://github.com/pnp/powershell/issues/2222", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2510369345
[BUG]We need help on Azure ACS for the SharePoint. Someone can connect me sudhakar.kunamneni@dsm.com or Mob +91 9490786396 Notice Many bugs reported are actually related to the PnP Framework which is used behind the scenes. Consider carefully where to report an issue: Are you using Invoke-PnPSiteTemplate or Get-PnPSiteTemplate? The issue is most likely related to the Provisioning Engine. The Provisioning engine is not located in the PowerShell repo. Please report the issue here: https://github.com/pnp/pnpframework/issues. Is the issue related to the cmdlet itself, its parameters, the syntax, or do you suspect it is the code of the cmdlet that is causing the issue? Then please continue reporting the issue in this repo. If you think that the functionality might be related to the underlying libraries that the cmdlet is calling (We realize that might be difficult to determine), please first double check the code of the cmdlet, which can be found here: https://github.com/pnp/powershell/tree/master/src/Commands. If related to the cmdlet, continue reporting the issue here, otherwise report the issue at https://github.com/pnp/pnpframework/issues Reporting an Issue or Missing Feature Please confirm what it is that your reporting Expected behavior Please describe what output you expect to see from the PnP PowerShell Cmdlets Actual behavior Please describe what you see instead. Please provide samples of output or screenshots. Steps to reproduce behavior Please include complete script or code samples in-line or linked from gists What is the version of the Cmdlet module you are running? (you can retrieve this by executing Get-Module -Name "PnP.PowerShell" -ListAvailable) Which operating system/environment are you running PnP PowerShell on? [x] Windows [ ] Linux [ ] MacOS [ ] Azure Cloud Shell [ ] Azure Functions [ ] Other : please specify @SKunamneni - this is a community project and doesn't have SLA. 
If you need help, please open a discussion here with as much details as you can share : https://github.com/pnp/powershell/discussions and community can help
gharchive/issue
2024-09-06T12:53:36
2025-04-01T06:45:24.846982
{ "authors": [ "SKunamneni", "gautamdsheth" ], "repo": "pnp/powershell", "url": "https://github.com/pnp/powershell/issues/4243", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
804666498
SitePicker suggestion: onHover search results - show URL
@AJIXuMuK ,
Category
[x] Enhancement
[ ] Bug
[ ] Question
Version
"@pnp/spfx-property-controls": "^2.3.0",
Expected / Desired Behavior / Question
When I have similar site titles, be able to tell what URLs they come from so I can pick the right one. I could add a label in the property pane showing the selected URL, but it takes more space and doesn't get shown until I pick one of the selections.
Observed Behavior
In the example below, I search for "Pivot"; I see 2 returned results but cannot tell the difference between them.
Steps to Reproduce
See above screenshot. I just happen to have 2 sites/subsites I have access to with the same title.
Here's an example of adding the URL below the selected area once I pick one... Even if I do this, it is kind of quirky because it puts a gap above but looks like it's more related to the property below it in the pane.
I think I found where in the npm package it could be updated but am not sure how to do it:
From: PropertyFieldSitePickerListItem.js

import * as React from 'react';
import { Checkbox } from 'office-ui-fabric-react/lib/Checkbox';
import styles from './PropertyFieldSitePickerListItem.module.scss';
export var PropertyFieldSitePickerListItem = function (props) {
    var site = props.site, checked = props.checked;
    return (React.createElement("li", { className: styles.siteListItem, key: site.id },
        React.createElement(Checkbox, { className: styles.checkbox, checked: checked, onChange: function (ev, nowChecked) { return props.handleCheckboxChange(site, nowChecked); } }),
        React.createElement("span", { className: styles.title, title: site.title }, site.title)));
};
//# sourceMappingURL=PropertyFieldSitePickerListItem.js.map

To: PropertyFieldSitePickerListItem.js

import * as React from 'react';
import { Checkbox } from 'office-ui-fabric-react/lib/Checkbox';
import styles from './PropertyFieldSitePickerListItem.module.scss';
export var PropertyFieldSitePickerListItem = function (props) {
    var site = props.site, checked = props.checked;
    return (React.createElement("li", { className: styles.siteListItem, key: site.id },
        React.createElement(Checkbox, { className: styles.checkbox, checked: checked, onChange: function (ev, nowChecked) { return props.handleCheckboxChange(site, nowChecked); } }),
        React.createElement("span", { className: styles.title, title: site.url }, site.title)));
};
//# sourceMappingURL=PropertyFieldSitePickerListItem.js.map

Hi @mikezimm! Thanks for the suggestion. I've implemented it in a slightly different way: the URL is visible underneath the title of the site:
It will be included in the next release. In the meanwhile you can test the functionality in the beta version.
@AJIXuMuK , I think this is a great improvement... Better than my implementation by far. Thanks for the update! I'm not sure if you close this with the final pull request though or close it now...
v2.4.0 has been released.
gharchive/issue
2021-02-09T15:39:51
2025-04-01T06:45:24.854661
{ "authors": [ "AJIXuMuK", "mikezimm" ], "repo": "pnp/sp-dev-fx-property-controls", "url": "https://github.com/pnp/sp-dev-fx-property-controls/issues/330", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1130985759
Lookup - Hide the magnifying glass icon
Currently, when a lookup is disabled, it gets the grey background that is standard for fills. However, it still shows the magnifying glass icon, which gives the false impression that it can be clicked. Ideally, the icon should be hidden when the component is disabled. Alternatively, the developer should be able to hide the icon when necessary. A screenshot illustrating the request is attached.
Good evening @TecoAvila, we consulted the UX Lab team and they did not recommend this change. If necessary, you can replace the po-lookup with a po-input while it is disabled. Regards.
Hello @alinelariguet, good morning. We would like to know why the UX Lab does not recommend this change. Also, whenever we have this scenario, could you tell us which channel our local UX team should use to contact the UX Lab?
Hello @TecoAvila, good morning. How are you? I will send you the contact of Douglas, the UX Labs coordinator.
After a more detailed analysis, and considering the recent changes in our strategy, we have decided to close this issue. We thank everyone who contributed with their analyses and suggestions. If necessary, a new issue will be opened as our needs evolve. Thank you for your understanding.
gharchive/issue
2022-02-10T21:15:02
2025-04-01T06:45:24.898631
{ "authors": [ "TecoAvila", "alinelariguet", "brunoromeiro", "renanarosario" ], "repo": "po-ui/po-angular", "url": "https://github.com/po-ui/po-angular/issues/1193", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1532240776
Add ability to specify a keypath for the enum to ifCaseLet This extends the _IfCaseLetReducer and the .ifCaseLetFunction on the ReducerProtocol with the option to specify a KeyPath to a property containing the enum to be checked instead of requiring the whole State to be an enum. Reasoning: I recently started working with TCA on a new project and I encountered it multiple times that we might want to use a SwitchStore to handle multiple mutually exclusive children, but also might have additional state properties we might want to handle. struct SomeFeature: ReducerProtocol { struct State: Equatable { enum SomeEnum: Equatable { case someChild(SomeChild.State) case someOtherChild(SomeOtherChild.State) } var someLocalState: String var someChildStateEnum: SomeEnum } enum Action { case someChild(SomeChild.Action) case someOtherChild(SomeOtherChild.Action) } var body: some ReducerProtocol<State, Action> { ??? } } The way I understand this there are currently two ways to handle this. You could do something like this: var body: some ReducerProtocol<State, Action> { Scope(state: \.someChildStateEnum, action: /.self) { EmptyReducer() .ifCaseLet(/State.SomeEnum.someChild, action: /Action.someChild) { SomeChild() } .ifCaseLet(/State.SomeEnum.someOtherChild, action: /Action.someOtherChild) { SomeOtherChild() } } Reduce({ state, action in return .none }) } Having the extra Scope and EmptyReducer here (It would also be possible to nest multiple Scope) does not feel very clean, might be hard to understand and reintroduces the ordering issue that the function is supposed to avoid. It is also possible to extract the SwitchStore part of the view into an extra Feature with its own reducer and model that reducers state to be an enum. But sometimes that might feel like unnecessary extra work to just work around the ifCaseLet limitation, if the parent now only hold very little state and logic. 
With this change it would be possible to simply use `ifCaseLet` like in a pure `SwitchStore` view:

```swift
var body: some ReducerProtocol<State, Action> {
  Reduce({ state, action in
    return .none
  })
  .ifCaseLet(\.someChildStateEnum, case: /State.SomeEnum.someChild, action: /Action.someChild) {
    SomeChild()
  }
  .ifCaseLet(\.someChildStateEnum, case: /State.SomeEnum.someOtherChild, action: /Action.someOtherChild) {
    SomeOtherChild()
  }
}
```

I also saw questions regarding similar examples in the discussions, but those were often related to navigation or mentioned the first solution as a workaround. I think this change would make using `SwitchStore` and `ifCaseLet` more ergonomic and should hopefully have no downsides. The only issue might be that it modifies the technically public `_IfCaseLetReducer`, so it is a breaking change. Alternatively, it would be possible to introduce something like an `_IfCaseLetPullbackReducer` to avoid modifying the existing type, but since the two model pretty much the same functionality, I thought it would be better to integrate them.

@dahlborn Thanks for the PR! This may be a lack of documentation, but there is an overload of `Scope` that takes a case path to enum state that helps you avoid the `EmptyReducer`:

```swift
var body: some ReducerProtocol<State, Action> {
  Scope(state: \.someChildStateEnum, action: /.self) {
    Scope(state: /State.SomeEnum.someChild, action: /Action.someChild) {
      SomeChild()
    }
    Scope(state: /State.SomeEnum.someOtherChild, action: /Action.someOtherChild) {
      SomeOtherChild()
    }
  }
  Reduce { state, action in
    return .none
  }
}
```

While there are still things that may stand out to you (having to remember to order the child stuff up front, double-nested `Scope`-ing), it's not so bad. I think a lot of the nested property/case ergonomic issues that APIs like this are trying to solve for show that Swift could improve a lot if it shipped first-class case paths and case access, and allowed things to compose nicely out-of-the-box.
Then you could flatten the nesting with something like:

```swift
var body: some ReducerProtocol<State, Action> {
  Reduce { state, action in
    // core logic...
  }
  // speculative case path composition and use syntax via `\._?`:
  .ifCaseLet(state: \.someChildStateEnum.someChild?, action: \.someChild?) {
    SomeChild()
  }
  .ifCaseLet(state: \.someChildStateEnum.someOtherChild?, action: \.someOtherChild?) {
    SomeOtherChild()
  }
}
```

At the end of the day I think we're in a bind, since we can't explode every API with every kind of composition, but maybe this is reason to explore the `OptionalPath`/`WritableOptionalPath` wrapper that allows us to compose key paths and case paths directly, which would at least give us the following with the existing APIs:

```swift
var body: some ReducerProtocol<State, Action> {
  Reduce { state, action in
    // core logic...
  }
  .ifCaseLet(
    state: (\.someChildStateEnum).appending(path: /State.SomeEnum.someChild),
    action: /Action.someChild
  ) {
    SomeChild()
  }
  .ifCaseLet(
    state: (\.someChildStateEnum).appending(path: /State.SomeEnum.someOtherChild),
    action: /Action.someOtherChild
  ) {
    SomeOtherChild()
  }
}
```

Any thoughts given the above?

@stephencelis Thanks for your response. Your documentation is good — I was aware of the nested `Scope` variant, but I was looking at the `.ifCaseLet` alternative for the reasons you mentioned. I completely agree that having language support for case paths would be the optimal solution, but I think until then a more ergonomic solution would be nice, since this seems to be a question that comes up pretty often (even though it often comes up regarding navigation, which you are already solving).

I like the OptionalPath wrapper you mentioned.
I guess the question would be whether adding this complexity is justified by the things it solves. Was there a reason you decided to have it as a helper in the isowords project instead of adding it to TCA or case-paths?

> I like the OptionalPath wrapper you mentioned.

We think it works really well, but in general the topic is advanced enough that we didn't want it to be required learning for folks, so we decided that using existing tools that take key paths and case paths strikes a better balance for beginners for now. We also think there are ergonomic refinements to explore if we do ship optional paths in library form (more seamless integration with key paths, supporting a type-erased hierarchy similar to key paths, etc.), but we haven't had time to explore these ideas in full.

If the OptionalPath helpers we ship in isowords are useful to you, we definitely recommend bringing them into your project, though!

@dahlborn OK to close this out, since we can't possibly add overloads for every nesting of key path and case path? In the meantime, folks with more advanced compositional needs can use the existing APIs in a nested fashion, or introduce optional paths to their code base. For reference: https://github.com/pointfreeco/isowords/blob/main/Sources/TcaHelpers/OptionalPaths.swift
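To make the optional-path idea in this thread concrete outside Swift, here is a small TypeScript sketch of a lens-like `OptionalPath`: a failable getter plus an updater that can be built from a property accessor (a "key path") or a union-case accessor (a "case path"), and composed — which is exactly what would let `ifCaseLet` take a single composed path. All names below are assumptions for illustration, not the isowords API:

```typescript
// Sketch (assumed names, not the isowords API) of an optional path:
// extraction may fail, and updating writes back through the same route.
interface OptionalPath<Root, Value> {
  extract(root: Root): Value | undefined;
  update(root: Root, value: Value): Root;
}

// "Key path": always succeeds; writes back immutably.
function keyPath<Root extends object, K extends keyof Root>(
  key: K
): OptionalPath<Root, Root[K]> {
  return {
    extract: (root) => root[key],
    update: (root, value) => ({ ...root, [key]: value } as Root),
  };
}

// "Case path": extraction fails when the union currently holds another case;
// updating is skipped in that situation, mirroring how `ifCaseLet` skips
// the child reducer when the case doesn't match.
function casePath<Union, Value>(
  extract: (u: Union) => Value | undefined,
  embed: (v: Value) => Union
): OptionalPath<Union, Value> {
  return {
    extract,
    update: (u, v) => (extract(u) === undefined ? u : embed(v)),
  };
}

// Appending one path to another, like `(\.someChildStateEnum).appending(path:)`.
function appending<Root, Mid, Value>(
  lhs: OptionalPath<Root, Mid>,
  rhs: OptionalPath<Mid, Value>
): OptionalPath<Root, Value> {
  return {
    extract: (root) => {
      const mid = lhs.extract(root);
      return mid === undefined ? undefined : rhs.extract(mid);
    },
    update: (root, value) => {
      const mid = lhs.extract(root);
      return mid === undefined ? root : lhs.update(root, rhs.update(mid, value));
    },
  };
}

// State shaped like the thread's example.
type SomeEnum =
  | { tag: "someChild"; count: number }
  | { tag: "someOtherChild"; name: string };

interface State {
  someLocalState: string;
  someChildStateEnum: SomeEnum;
}

const someChildCase = casePath<SomeEnum, number>(
  (u) => (u.tag === "someChild" ? u.count : undefined),
  (count) => ({ tag: "someChild", count })
);

const path = appending(
  keyPath<State, "someChildStateEnum">("someChildStateEnum"),
  someChildCase
);

const matching: State = { someLocalState: "x", someChildStateEnum: { tag: "someChild", count: 1 } };
const other: State = { someLocalState: "x", someChildStateEnum: { tag: "someOtherChild", name: "n" } };

console.log(path.extract(matching)); // 1: the case matches
console.log(path.extract(other)); // undefined: extraction fails, so a guarded reducer would be skipped
```

A child reducer guarded by such a path runs only when `extract` succeeds, and `update` writes the child's new state back through the property — which is the behavior the proposed `ifCaseLet(\.someChildStateEnum, case:...)` overload bakes into a single call.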
gharchive/pull-request
2023-01-13T12:36:25
2025-04-01T06:45:24.958993
{ "authors": [ "dahlborn", "stephencelis" ], "repo": "pointfreeco/swift-composable-architecture", "url": "https://github.com/pointfreeco/swift-composable-architecture/pull/1832", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }