| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
251441629 | Added port option to parseUrl, test with parseUrl
Closes #32
Changes proposed in this pull request:
Added port option in get request for parseUrl method;
Added test to cover parseUrl with localhost random port.
@bobby-brennan please take a look
Thanks for the contribution!
| gharchive/pull-request | 2017-08-19T18:22:57 | 2025-04-01T04:33:39.143045 | {
"authors": [
"bobby-brennan",
"soulman-is-good"
],
"repo": "bobby-brennan/rss-parser",
"url": "https://github.com/bobby-brennan/rss-parser/pull/33",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1514998667 | Add extra title depth
For individual posts, the page title should be "Robert Yin | SECTION | PAGE" instead of "Robert Yin | PAGE".
Not really an issue right now, and too lazy to figure out how to access section features from the page template
| gharchive/issue | 2022-12-31T04:35:07 | 2025-04-01T04:33:39.144068 | {
"authors": [
"bobertoyin"
],
"repo": "bobertoyin/personal-site",
"url": "https://github.com/bobertoyin/personal-site/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2272393382 | license
Greetings! Thanks for posting this. Can you add a license to the repo so users can comply with how you want it used? Thanks!
Hello, I have updated the license to MIT license.
| gharchive/issue | 2024-04-30T20:25:36 | 2025-04-01T04:33:39.190718 | {
"authors": [
"boheumd",
"esquires"
],
"repo": "boheumd/MA-LMM",
"url": "https://github.com/boheumd/MA-LMM/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
330290191 | Feature/content hub example rebased
Rebased version of #724 (removes functional commits that had since been reviewed and merged elsewhere).
Looks great @theSadowski, merging this in! Remember to branch off master (not your old branch) for additional work.
| gharchive/pull-request | 2018-06-07T14:10:27 | 2025-04-01T04:33:39.249981 | {
"authors": [
"remydenton"
],
"repo": "bolt-design-system/bolt",
"url": "https://github.com/bolt-design-system/bolt/pull/731",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
942783849 | exclude code/vendor/bundle from deploy stage if it exists
This is a 🐞 bug fix.
[ ] I've added tests (if it's a bug, feature or enhancement)
[ ] I've adjusted the documentation (if it's a feature or enhancement)
[X] The test suite passes (run bundle exec rspec to verify this)
Summary
This is a confirmed fix for #571. We are already using my fork with this patch at our company until this or an equivalent fix is merged.
Note @tongueroo if you want to make some edits, I'd be happy to try them out on our CI and see if your modified version still fixes the issue.
Context
In some environments (such as docker, apparently), an extra /tmp/jets/[project]/stage/code/vendor/bundle directory appears that should not be included in the deployment. When it is included the deploy typically fails because we will exceed the maximum lambda code size of 250 MB.
How to Test
Run jets deploy on a project within an affected system. The sample GitHub Action config in #571 should work fine for testing this.
Version Changes
Minor version bump
Thanks for all the debugging effort on this one. 👍 Reviewed it and took a slightly different more generalized approach in #577
| gharchive/pull-request | 2021-07-13T05:43:20 | 2025-04-01T04:33:39.261933 | {
"authors": [
"sam0x17",
"tongueroo"
],
"repo": "boltops-tools/jets",
"url": "https://github.com/boltops-tools/jets/pull/573",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1482086626 | Only generate output files with content
Hi, I'm playing with leveraging layering to support custom behaviour in Terraspace by using .rb files in the tfvars folder; I'm not sure if there is a better way to do this. Essentially, I'm running custom Ruby to set configuration required by a custom helper later on, and I want this configuration to be handled per environment, etc., through layering.
Currently, Terraspace expects these files to contain at least one tfvar DSL method; the DSL code checks whether any variable values were generated and returns null if not. Unfortunately, the writer still creates a corresponding but empty JSON file, and this empty file causes terraform to fail.
Another use case is DSL logic that checks the environment and defines different variable values per environment; if this results in no values being defined for a given environment, terraform will fail.
Summary
Don't generate files for terraform if their contents are empty/missing
Motivation
Allow non-DSL Ruby code to pass through the Terraspace DSL compiler
Guide-level explanation
Reference-level explanation
Modify compiler/writer.rb to check the value of the content before outputting to a file
line 38 : IO.write(dest_path, content, mode: "wb") unless content.nil?
Drawbacks
Can't think of any. I purposely didn't add a check for empty (as opposed to nil) content, but you would probably want to filter that out as well. I can't see why you'd want to generate empty files for terraform to process.
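A Python stand-in for the proposed guard (illustrative only; Terraspace's actual writer is Ruby), extended to also skip empty content as discussed:

```python
# Sketch of the "only write output files that have content" guard.
# Names here are illustrative, not Terraspace's real API.
import os
import tempfile

def write_if_content(dest_path, content):
    # Skip creating the file when there is nothing to write, so terraform
    # never sees an empty *.json. `not content` filters both None and b"".
    if not content:
        return False
    with open(dest_path, "wb") as f:
        f.write(content)
    return True

tmp = tempfile.mkdtemp()
assert write_if_content(os.path.join(tmp, "a.json"), b'{"x": 1}') is True
assert write_if_content(os.path.join(tmp, "b.json"), b"") is False
assert not os.path.exists(os.path.join(tmp, "b.json"))  # never created
```

The empty-file check happens before any I/O, so downstream tools only ever see files with real content.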
Thanks been following https://community.boltops.com/t/using-ruby-files-in-tfvars-folder/993/2
Don't believe there's a reason to generate empty files instead of not generating a file at all 🤔 Just haven't tested it.
Think fastest path is to check. A quick way to test and check is modify the gem locally on your system and see if it breaks anything. In your terraspace project run:
bundle info terraspace
That should give you info about the path where the gem is actually installed. Monkeypatch the gem on the spot. That's a benefit of scripting languages: you can modify them on the spot. It'll provide a good quick sanity check. Then send a PR! Thanks
PR attached, already ended up doing what you said :)
Couldn't see any existing tests, and my knowledge of Ruby is too limited to know what, if any, tests should be added, especially for a lightweight fix. The current fix only checks for explicitly nil content, but empty content should arguably be stripped as well; the only risk I could see is a downstream script/tool relying on an empty file. I'd guess you could check if it was passthrough mode and only copy, or add an option
| gharchive/issue | 2022-12-07T14:43:03 | 2025-04-01T04:33:39.267333 | {
"authors": [
"johnlister",
"tongueroo"
],
"repo": "boltops-tools/terraspace",
"url": "https://github.com/boltops-tools/terraspace/issues/284",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
111119541 | Triple click row selection
Triple click row selection (Selection->Options->soTripleClickRowSelect=True) in the last line does not work.
Fixed.
| gharchive/issue | 2015-10-13T07:13:22 | 2025-04-01T04:33:39.271789 | {
"authors": [
"bonecode"
],
"repo": "bonecode/TBCEditor",
"url": "https://github.com/bonecode/TBCEditor/issues/208",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2692979821 | 🛑 NZBGet is down
In 4fe9ff4, NZBGet ($URLS_NZBGET) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NZBGet is back up in ec76e04 after 9 minutes.
| gharchive/issue | 2024-11-26T03:38:21 | 2025-04-01T04:33:39.293159 | {
"authors": [
"bonny1992"
],
"repo": "bonny1992/upptime",
"url": "https://github.com/bonny1992/upptime/issues/1398",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2086224654 | Add support for Linux, global python installations, and loading python scripts from assembly
This PR adds the following features:
Support for Linux OS - previously, the way the RuntimeManager and EnvironmentHelper configured the path variables and python engine were hard-coded and specific to the Windows OS. I have added several lines of code in both of these which will 1) determine the current OS; and 2) configure path and python engine based on the OS detected.
Support for global python installations - previously, if no virtual environment was detected, then creation of the python engine would fail due to incorrect loading of the python DLL. With the newly added features, the EnvironmentHelper will search for a global python installation by performing a directory search through the path and correctly set the python DLL if a python installation on the path has been detected.
Support for loading python scripts in assembly - when using the CreateRuntime module in Bonsai, it is now possible for the RuntimeManager to parse the path of a script to determine 1) if it is an embedded resource; and 2) load the embedded python from the specified assembly location.
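The OS-dispatch step can be sketched like this (a Python stand-in for the PR's actual C# logic; the shared-library names are the conventional platform defaults, not taken from the PR):

```python
# Illustrative sketch of "determine the current OS, then configure the
# python engine accordingly" as described in the PR. All names are
# hypothetical stand-ins for the RuntimeManager/EnvironmentHelper logic.
import platform
import sys

def python_shared_library():
    major, minor = sys.version_info.major, sys.version_info.minor
    system = platform.system()
    if system == "Windows":
        return f"python{major}{minor}.dll"       # e.g. python311.dll
    if system == "Darwin":
        return f"libpython{major}.{minor}.dylib"
    return f"libpython{major}.{minor}.so"        # Linux and other POSIX

print(python_shared_library())
```

The key point is that the library name is computed from the detected OS rather than hard-coded for Windows.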
@ncguilbeault closing this as it was superseded by other PRs which have now been merged.
| gharchive/pull-request | 2024-01-17T13:43:27 | 2025-04-01T04:33:39.295339 | {
"authors": [
"glopesdev",
"ncguilbeault"
],
"repo": "bonsai-rx/python-scripting",
"url": "https://github.com/bonsai-rx/python-scripting/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2553690273 | create basic store interface
I will keep working on this branch. This merge will make it easier to show what we've done so far to the professor tonight, but the branch will still be used
someone change that quality gate, this thing is a joke @icrcode
| gharchive/pull-request | 2024-09-27T20:33:58 | 2025-04-01T04:33:39.296544 | {
"authors": [
"CodyKoInABox"
],
"repo": "bonsite/bonsite",
"url": "https://github.com/bonsite/bonsite/pull/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
294147898 | Macro in class definitions ?
Consider following code:
class a():
macro int0(name):
yield [| $name = 0 |]
def constructor():
int0 test
print test
a()
Surely it looks OK to you, right? Well, the compiler disagrees:
BCE0005: Unknown identifier: 'int0'
Any way to fix this?
The macro expander does not search for macros nested inside of a class--with the exception of macros nested in other macros, which are only in-scope inside an invocation of the outer macro--and changing that would potentially be very messy, especially in scenarios like this where the macro is defined inside the same project that's using it.
I've updated MacroMacro to catch this scenario and error out, giving you a more clear idea of what's going wrong, so you'll know you have to declare the macro at the top level rather than inside of a type.
| gharchive/issue | 2018-02-03T21:48:42 | 2025-04-01T04:33:39.298774 | {
"authors": [
"Guevara-chan",
"masonwheeler"
],
"repo": "boo-lang/boo",
"url": "https://github.com/boo-lang/boo/issues/186",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
243341026 | Add new version to changelog
Changes of the 0.10.4 version should be added to CHANGELOG.md.
And also published to npm?
@Playrom Did the ghost dev get back to you regarding publishing to npm?
@ricardogama Go ahead if you want. Otherwise I will add the changes soon.
@dj-hedgehog nope, no answer, I discovered her handle
@kirrg001 could you push the new version in npm? also could you check your emails? I wrote you a message :)
Closed via https://github.com/bookshelf/bookshelf/commit/cd9e9d26b3d62fd4f0daeca24eb43578a905c904.
Thanks to @Playrom 👍
| gharchive/issue | 2017-07-17T09:38:18 | 2025-04-01T04:33:39.306973 | {
"authors": [
"Playrom",
"dj-hedgehog",
"kirrg001",
"ricardogama"
],
"repo": "bookshelf/bookshelf",
"url": "https://github.com/bookshelf/bookshelf/issues/1599",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
396225646 | normalize websocket write_op
...to not need handler_ptr: find an unused part of the stream object to put the close frame so we don't need the extra allocation.
I don't see how this is possible, and it isn't really needed because the allocation rarely happens
| gharchive/issue | 2019-01-06T03:59:23 | 2025-04-01T04:33:39.326708 | {
"authors": [
"vinniefalco"
],
"repo": "boostorg/beast",
"url": "https://github.com/boostorg/beast/issues/1405",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2730563826 | Create T10
Expertise
Abort
| gharchive/pull-request | 2024-12-10T16:14:06 | 2025-04-01T04:33:39.327380 | {
"authors": [
"Cypherscircle"
],
"repo": "boostorg/compute",
"url": "https://github.com/boostorg/compute/pull/894",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1040562195 | Question on convenience-header
How would I accomplish something like this?
<xsl:template mode="convenience-header" match="@file[contains(., 'boost/url/bnf')]">url/bnf.hpp</xsl:template>
<xsl:template mode="convenience-header" match="@file[contains(., 'boost/url')]">url.hpp</xsl:template>
<xsl:template mode="convenience-header" match="@file"/>
I am getting "Ambiguous rule match." I want everything in url/bnf to use the bnf.hpp file, and everything else to use url.hpp.
Add priority="1" to the first rule. Otherwise they have the same default priority of 0.5.
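With that change, the first template from the question would read (reproducing the question's snippet with only the priority attribute added):

```xml
<xsl:template mode="convenience-header" priority="1"
              match="@file[contains(., 'boost/url/bnf')]">url/bnf.hpp</xsl:template>
```

The more specific boost/url/bnf rule now outranks the generic boost/url rule, so the "Ambiguous rule match" error goes away.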
Yep!!!
Also, let me know if that doesn't work and I'll take a closer look when I'm near my computer.
It works (or else I would not have closed the issue) thanks!!!
Haha, yes, I emailed before I saw that. Glad to hear it!
| gharchive/issue | 2021-10-31T19:28:04 | 2025-04-01T04:33:39.355991 | {
"authors": [
"evanlenz",
"vinniefalco"
],
"repo": "boostorg/docca",
"url": "https://github.com/boostorg/docca/issues/111",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1580990096 | BDropdown Boundary prop to detect overflow
Relates to https://github.com/bootstrap-vue/bootstrap-vue-next/issues/800
The boundary prop should be added back in
-> Boundary prop is added
-> detectOverflow will close the dropdown when it hits the boundary
This also holds for the popovers and tooltips. Right now, they are hidden when the overflow is anything other than overflow: visible
| gharchive/issue | 2023-02-11T19:42:54 | 2025-04-01T04:33:39.378309 | {
"authors": [
"VividLemon",
"sagacity-felix"
],
"repo": "bootstrap-vue-next/bootstrap-vue-next",
"url": "https://github.com/bootstrap-vue-next/bootstrap-vue-next/issues/926",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1920383469 | Create Song “2023-09-30/index-4”
Automatically generated by Netlify CMS
👷 Deploy Preview for suricate processing.
🔨 Latest commit: e3d9d904aceda96cf21a2d1a6216a11e5e704303
🔍 Latest deploy log: https://app.netlify.com/sites/suricate/deploys/65187ec830cc6300087ddb1c
| gharchive/pull-request | 2023-09-30T20:02:15 | 2025-04-01T04:33:39.385143 | {
"authors": [
"bop"
],
"repo": "bop/VueDN",
"url": "https://github.com/bop/VueDN/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2148440356 | Could a remote-control feature be added for keeping the number active?
Send a message to the bot via Telegram to have the Air780 send an SMS that keeps the number active.
It feels like this would be a bit of a hassle to implement, so let's treat it as a TODO for now; I'll see whether I can do it when I have time. By the way, keeping the number active just means periodically sending an SMS, right? In that case you might not even need the bot; a scheduled task would do.
Thanks!
A scheduled task would indeed work; generally sending one SMS every 180 days or so is enough. But a fixed schedule might not be flexible. I don't know much about microcontrollers, so I'm not sure how to decide when the first SMS goes out or how the timing would be implemented. I'll also think about whether there's a more flexible approach; running a bot feels a bit heavyweight, seeing that you've already slimmed down the firmware.
Writing a scheduled program that checks the current month (an online API could be used) and then triggers sending an SMS should be enough.
If it's feasible and not too complex, I think adding a control feature would have real utility. Beyond keeping the number active, some domestic platforms now verify by having you send a string of digits to a given number (Tencent uses this a lot). A control feature would enable more scenarios; for example, the China Unicom card I've used for a long time could stay on the 5-yuan Xiaomi plan behind the forwarder, without worrying that some day I'd need it to send an SMS and couldn't.
Deploying an HTTP server on the ESP32 and triggering it remotely through a NAT-traversal tool is probably the simplest approach.
The project https://github.com/0wQ/air780e-forwarder is quite full-featured; maybe it can help.
I tried exposing an API to send SMS, but it throws an error and I don't know how to fix it.
| gharchive/issue | 2024-02-22T07:52:05 | 2025-04-01T04:33:39.454526 | {
"authors": [
"Pnut-GGG",
"antonchen",
"yyxida"
],
"repo": "boris1993/sms_forwarder_air780_esp32",
"url": "https://github.com/boris1993/sms_forwarder_air780_esp32/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1237289066 | Consider caching certain reflection aspects
Hi there.
I just integrated EFCore.BulkExtensions in the hope of improving performance in some heavy-load operations we have. As part of the profiling I did, I noticed that each individual BatchUpdate call will create a new BatchUpdateCreateBodyData and subsequently a TableInfo and FastProperty.
Let's say we update 2000 times objects in a loop using a statement like dbContext.MyObjects.Where(...).BatchUpdate(x => new MyObject { Prop = someValue }). This will result in:
2000 BatchUpdateCreateBodyData instances
2000 TableInfo instances
2000 FastProperty
4000 newly compiled delegates for the getter/setter in the FastProperty
This is quite a waste of resources (memory and CPU/time wise) and also the GC will not be happy. It would be quite a big task to have an automatic query caching/compilation like EF does internally, but maybe there could be some low-hanging fruits to improve:
FastProperty can be cached globally. This would already reduce the 2000 FastProperty and 4000 compiled lambdas into 1 FastProperty and 2 lambdas. A static ConcurrentDictionary<PropertyInfo, FastProperty> with a GetOrAdd instead of a direct constructor call should do the job.
An API like the old CompiledQuery^1 or EF.CompileQuery^2 could be added to EFCore.BulkExtensions to reduce also the TableInfo and BatchUpdateCreateBodyData instances. Code paths which are executed in high frequency could be benefit a lot from such a feature.
I can provide some more detailed figures if needed, but already some basic profiling shows that there is a big performance hit in this basic case where we update hundreds of records in a loop using BatchUpdate:
Let me know what you think about this topic and if there would be interest in adding such improvements.
Using Dicts for loop is always a good idea, will take a look into it.
PR merged.
| gharchive/issue | 2022-05-16T14:57:16 | 2025-04-01T04:33:39.460407 | {
"authors": [
"Danielku15",
"borisdj"
],
"repo": "borisdj/EFCore.BulkExtensions",
"url": "https://github.com/borisdj/EFCore.BulkExtensions/issues/833",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
436738031 | Invalid manifest error on Chrome Web Store
When trying to add to a browser (tried Chrome and Brave), "An error has occurred" message pops out with "Invalid manifest" for details.
Fixed it. Version 0.9.10.1 will be available in store in up to 60 minutes. Thank you for the report.
| gharchive/issue | 2019-04-24T14:41:30 | 2025-04-01T04:33:39.462081 | {
"authors": [
"aleksod",
"feedbee"
],
"repo": "borismus/keysocket",
"url": "https://github.com/borismus/keysocket/issues/326",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
839487932 | Clean up some SQL code and use string instead of BLOB
See each commit.
Updated, thanks for spotting this @TrustHenry !
| gharchive/pull-request | 2021-03-24T08:42:06 | 2025-04-01T04:33:39.464295 | {
"authors": [
"Geod24"
],
"repo": "bosagora/agora",
"url": "https://github.com/bosagora/agora/pull/1825",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
260389432 | GetBucketLifecycle - "And" Member missing in "Rule"
The definition for GetBucketLifecycle in botocore is missing "Filter". Documentation of GetBucketLifecycle shows:
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Rule>
<ID>Archive and then delete rule</ID>
<Filter>
<Prefix>projectdocs/</Prefix>
</Filter>
<Status>Enabled</Status>
<Transition>
<Days>30</Days>
<StorageClass>STANDARD_IA</StorageClass>
</Transition>
<Transition>
<Days>365</Days>
<StorageClass>GLACIER</StorageClass>
</Transition>
<Expiration>
<Days>3650</Days>
</Expiration>
</Rule>
</LifecycleConfiguration>
botocore:
"Rule":{
"type":"structure",
"required":[
"Prefix",
"Status"
],
"members":{
"Expiration":{"shape":"LifecycleExpiration"},
"ID":{
"shape":"ID",
"documentation":"Unique identifier for the rule. The value cannot be longer than 255 characters."
},
"Prefix":{
"shape":"Prefix",
"documentation":"Prefix identifying one or more objects to which the rule applies."
},
"Status":{
"shape":"ExpirationStatus",
"documentation":"If 'Enabled', the rule is currently being applied. If 'Disabled', the rule is not currently being applied."
},
"Transition":{"shape":"Transition"},
"NoncurrentVersionTransition":{"shape":"NoncurrentVersionTransition"},
"NoncurrentVersionExpiration":{"shape":"NoncurrentVersionExpiration"},
"AbortIncompleteMultipartUpload":{"shape":"AbortIncompleteMultipartUpload"}
}
}
This operation is deprecated as noted in the docs.
Please use get_bucket_lifecycle_configuration instead.
| gharchive/issue | 2017-09-25T19:31:21 | 2025-04-01T04:33:39.510202 | {
"authors": [
"joguSD",
"sven-schubert"
],
"repo": "boto/botocore",
"url": "https://github.com/boto/botocore/issues/1280",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
104595974 | Some moderate refactoring for boto3 docstrings
Includes extracting out the logic that creates the client classes into a public function. This is needed because resource factories in boto3 do not have access to the client's class, and thus its name, when creating the resource class.
There was also some moving around of the internals of the docstring class to make it easier to inherit from. For example:
class ActionDocstring(LazyLoadedDocstring):
def __init__(self, *args, **kwargs):
super(ActionDocstring, self).__init__(*args, **kwargs)
self._docstring_writer = document_action
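A minimal, hypothetical sketch of that inheritance pattern (not the actual boto3 implementation): subclasses only pick their writer function, and the body is rendered lazily the first time pydoc asks for the docstring via expandtabs():

```python
# Illustrative lazy-docstring pattern: the expensive rendering runs only on
# first access, and each subclass just selects a writer callable.
class LazyLoadedDocstring(str):
    def __init__(self, *args, **kwargs):
        super().__init__()
        self._docstring_writer = None   # set by subclasses
        self._rendered = None

    def expandtabs(self, tabsize=8):
        # pydoc/help() retrieve docstrings through expandtabs(), so this is
        # the hook where lazy rendering happens.
        if self._rendered is None:
            self._rendered = self._docstring_writer()
        return self._rendered.expandtabs(tabsize)

class ActionDocstring(LazyLoadedDocstring):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._docstring_writer = lambda: "Performs the documented action."

doc = ActionDocstring()
assert doc.expandtabs() == "Performs the documented action."
```

Moving the writer assignment into `__init__` is what makes each subclass a three-line definition.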
cc @jamesls @mtdowling @rayluo
:ship:
:ship:
| gharchive/pull-request | 2015-09-03T00:13:38 | 2025-04-01T04:33:39.512117 | {
"authors": [
"kyleknap",
"mtdowling",
"rayluo"
],
"repo": "boto/botocore",
"url": "https://github.com/boto/botocore/pull/642",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1622949420 | Concurrent requests will have some timeouts without returning results
When I use the ipmitool command, all 50 concurrent requests return results,
but with this package one out of 23 concurrent requests times out.
The result:
The code:
package main
import (
"bytes"
"errors"
"fmt"
"github.com/bougou/go-ipmi"
"os/exec"
"sync"
"sync/atomic"
"time"
)
var total atomic.Int32
func main() {
var wg sync.WaitGroup
ip := "xxxxxxx"
port := 623
username := "xxxxx"
password := "xxxxx"
for i := 0; i < 23; i++ {
wg.Add(1)
getResultFromIPMI(ip, port, username, password, &wg)
//getCmdOut()
}
wg.Wait()
}
func getCmdOut() {
cmd := "./ipmitool -I lanplus -H xxxxxx -U xxxxx -P xxxxxxx fru"
cmd2 := exec.Command("sh", "-c", cmd)
var out bytes.Buffer
var stderr bytes.Buffer
cmd2.Stdout = &out
cmd2.Stderr = &stderr
err := cmd2.Run()
if err != nil {
fmt.Println(errors.New(fmt.Sprint(err) + ": " + stderr.String()))
return
}
fmt.Println(out.String())
total.Add(1)
fmt.Println("total:", total)
}
// Fetch data via the IPMI protocol
func getResultFromIPMI(ip string, port int, username, password string, wg *sync.WaitGroup) ([][]interface{}, error) {
defer wg.Done()
client, err := connectIPMI(ip, port, username, password) // connect to IPMI
if err != nil {
return nil, err
}
defer client.Close()
//fetch data via the IPMI protocol
var arr [][]interface{}
arr, err = fru(client) //fetch FRU-type data
if err != nil {
return nil, err
}
fmt.Println(arr)
return arr, nil
}
// Connect to IPMI
func connectIPMI(ip string, port int, username string, password string) (*ipmi.Client, error) {
var c *ipmi.Client
var err error
//no cached connection; create a new one
c, err = ipmi.NewClient(ip, port, username, password)
if err != nil {
fmt.Println(err)
return nil, err
}
c.WithInterface("lanplus")
c.WithTimeout(30 * time.Second)
if err = c.Connect(); err != nil { //TODO: sometimes very slow, sometimes fast
fmt.Println(err)
return nil, err
}
c.Interface = "lanplus"
//defer c.Close() //no need to close when the connection is cached
return c, nil
}
// Fetch FRU-type data
func fru(client *ipmi.Client) ([][]interface{}, error) {
res, err := client.GetFRUs() //takes ~2 seconds
if err != nil {
return nil, err
}
arr := make([][]interface{}, len(res), len(res))
for index, value := range res {
item := make([]interface{}, 1, 1)
item[0] = value.String()
arr[index] = item
}
return arr, nil
}
"go run -race test.go" that also has problem.
Maybe you should remove the username and password from your code
Ok, I remove them.
| gharchive/issue | 2023-03-14T08:12:34 | 2025-04-01T04:33:39.550850 | {
"authors": [
"gophernancy",
"yonwoo9"
],
"repo": "bougou/go-ipmi",
"url": "https://github.com/bougou/go-ipmi/issues/14",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
307095846 | ImportError: No module named yapps
When I run
python q_server.py cifar10 cifar10_logs
The following error occurs:
Traceback (most recent call last):
  File "q_server.py", line 4, in <module>
    import libs.grammar.q_protocol as q_protocol
  File "/metaqnn-master/libs/grammar/__init__.py", line 1, in <module>
    import q_learner
  File "/metaqnn-master/libs/grammar/q_learner.py", line 6, in <module>
    import cnn
  File "/metaqnn-master/libs/grammar/cnn.py", line 4, in <module>
    from yapps import runtime
ImportError: No module named yapps
What is 'yapps'?
How can I solve this?
I found yapps (https://github.com/smurfix/yapps).
I will leave this issue open for other users.
| gharchive/issue | 2018-03-21T01:40:22 | 2025-04-01T04:33:39.597704 | {
"authors": [
"SujungBae"
],
"repo": "bowenbaker/metaqnn",
"url": "https://github.com/bowenbaker/metaqnn/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
46287384 | When I have some dependencies, one dependent on another, which are loaded via tarball URLs, they cannot be installed
When I have some dependencies, one dependent on another, which are loaded via tarball URLs, they cannot be installed.
For example, the dependencies are angular and angular-sanitize. In bower.json, one depends on the other and both are loaded via tarball URLs. They cannot be installed via bower install simultaneously. The problem occurs when both dependencies have versions set in their bower.json files. During install, angular is resolved as angular#*, despite having an exact version (1.3.0 in my case) in its bower.json. After that, angular-sanitize tries to load its angular#1.3.0 dependency and fails, because angular#* does not equal angular#1.3.0 as far as I can see.
I would expect it to resolve the version correctly based on the version set in angular-1.3.0.tgz/bower.json. It actually does resolve correctly when you first install only angular, then add angular-sanitize to bower.json and run bower install again.
You can find more details with exact steps in the successful/unsuccessful sections below.
Unsuccessful install
Set bower.json dependencies part as following:
"dependencies": {
"angular": "http://my-repo.com/npm/angular/1.3.0/angular-1.3.0.tgz",
"angular-sanitize": "http://my-repo.com/npm/angular-sanitize/1.3.0/angular-sanitize-1.3.0.tgz"
}
Then clear everything that is possible and installing:
>rm -r bower_components
>bower cache clean
>bower install
Output after bower install is following:
bower angular#* not-cached http://my-repo.com/npm/angular/1.3.0/angular-1.3.0.tgz#*
bower angular#* resolve http://my-repo.com/npm/angular/1.3.0/angular-1.3.0.tgz#*
bower angular-sanitize#* not-cached http://my-repo.com/npm/angular-sanitize/1.3.0/angular-sanitize-1.3.0.tgz#*
bower angular-sanitize#* resolve http://my-repo.com/npm/angular-sanitize/1.3.0/angular-sanitize-1.3.0.tgz#*
bower angular#* download http://my-repo.com/npm/angular/1.3.0/angular-1.3.0.tgz
bower angular-sanitize#* download http://my-repo.com/npm/angular-sanitize/1.3.0/angular-sanitize-1.3.0.tgz
bower angular-sanitize#* extract angular-sanitize-1.3.0.tgz
bower angular#* extract angular-1.3.0.tgz
bower angular-sanitize#* resolved http://my-repo.com/npm/angular-sanitize/1.3.0/angular-sanitize-1.3.0.tgz#e-tag:{SHA1{7a9
bower angular#* resolved http://my-repo.com/npm/angular/1.3.0/angular-1.3.0.tgz#e-tag:{SHA1{eaf
bower angular#1.3.0 ENOTFOUND Request to https://bower.herokuapp.com/packages/angular failed: getaddrinfo ENOTFOUND
Successful install
Set bower.json dependencies part as following:
"dependencies": {
"angular": "http://my-repo.com/npm/angular/1.3.0/angular-1.3.0.tgz"
}
Then clear everything that is possible and installing:
>rm -r bower_components
>bower cache clean
>bower install
Output after bower install:
bower angular#* not-cached http://my-repo.com/npm/angular/1.3.0/angular-1.3.0.tgz#*
bower angular#* resolve http://my-repo.com/npm/angular/1.3.0/angular-1.3.0.tgz#*
bower angular#* download http://my-repo.com/npm/angular/1.3.0/angular-1.3.0.tgz
bower angular#* extract angular-1.3.0.tgz
bower angular#* resolved http://my-repo.com/npm/angular/1.3.0/angular-1.3.0.tgz#e-tag:{SHA1{eaf
bower angular#* install angular#e-tag:{SHA1{eaf
angular#e-tag:{SHA1{eaf bower_components\angular
Adding angular-sanitize to bower.json:
"dependencies": {
"angular": "http://my-repo.com/npm/angular/1.3.0/angular-1.3.0.tgz",
"angular-sanitize": "http://my-repo.com/npm/angular-sanitize/1.3.0/angular-sanitize-1.3.0.tgz"
}
Running install again:
>bower install
And output after:
bower angular-sanitize#* not-cached http://my-repo.com/npm/angular-sanitize/1.3.0/angular-sanitize-1.3.0.tgz#*
bower angular-sanitize#* resolve http://my-repo.com/npm/angular-sanitize/1.3.0/angular-sanitize-1.3.0.tgz#*
bower angular-sanitize#* download http://my-repo.com/npm/angular-sanitize/1.3.0/angular-sanitize-1.3.0.tgz
bower angular-sanitize#* extract angular-sanitize-1.3.0.tgz
bower angular-sanitize#* resolved http://my-repo.com/npm/angular-sanitize/1.3.0/angular-sanitize-1.3.0.tgz#e-tag:{SHA1{7a9
bower angular-sanitize#* install angular-sanitize#e-tag:{SHA1{7a9
angular-sanitize#e-tag:{SHA1{7a9 bower_components\angular-sanitize
└── angular#e-tag:{SHA1{eaf
Yey, everything is fine!
Installed package versions on my-repo.com:
angular-1.3.0.tgz/bower.json:
{
"name": "angular",
"version": "1.3.0",
"main": "./angular.js",
"dependencies": {
}
}
angular-sanitize-1.3.0.tgz/bower.json:
{
"name": "angular-sanitize",
"version": "1.3.0",
"main": "./angular-sanitize.js",
"dependencies": {
"angular": "1.3.0"
}
}
Bower is deprecated and we recommend switching to Yarn
| gharchive/issue | 2014-10-20T15:27:33 | 2025-04-01T04:33:39.606420 | {
"authors": [
"sheerun",
"wips"
],
"repo": "bower/bower",
"url": "https://github.com/bower/bower/issues/1569",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
151226358 | Malformed .bower.json - Unexpected token }
EMALFORMED bower_components/flot/.bower.json
Additional error details:
Unexpected token }
bower.json: https://gist.github.com/SephVelut/d28bef8cf10e5f4dcf663cbd8c1c17ba
failure output: https://gist.github.com/SephVelut/406412439bef11775d3c0b07224c1ccd
success_output: https://gist.github.com/SephVelut/1aeb0ad9992a6bfc54a4cbd8f7a471ef
With the above bower.json, If I do a rm -rf bower_components && bower install I'll get a malformed bower_components/flot/.bower.json in the official Flot library. https://gist.github.com/SephVelut/e031f6c51117a6309a49fcba43907e3f
If I do this a bunch of times, eventually I'll get a non malformed bower_components/flot/.bower.json.
Can you provide the faulty .bower.json?
Pretty sure this is the same issue as https://github.com/bower/bower/issues/2067. With your .bower.json we can be sure.
failure: https://gist.github.com/SephVelut/e031f6c51117a6309a49fcba43907e3f
success: https://gist.github.com/SephVelut/560fab6f2a28d85b6bcbcef0185a9482
Thanks!
Without further looking at the separate packages, I'm pretty confident that this is the same issue.
In short: Some of your dependencies are requiring jquery and others jQuery, which unfortunately are different packages in the registry. When writing to bower_components those two conflict.
Given the packages below, it is very likely that some require one version of jQuery and others the other. There is currently no real fix for this. You could try setting a resolution to a certain jQuery version, but I am not sure that helps.
"jquery-advanced-news-ticker": "https://github.com/risq/jquery-advanced-news-ticker.git#1.0.0",
"jquery-minicolors": "^2.2.4",
"jquery-ui": "^1.11.4",
"jquery.cookie": "jquery-cookie#^1.4.1",
"jquery.easy-pie-chart": "^2.1.6",
I'll close the issue in favor of the more elaborate one (https://github.com/bower/bower/issues/2067).
We're already working on a newer version of the registry, which will hopefully also fix this behavior by cleaning out such cases!
Gotcha. Thank you for looking into it.
| gharchive/issue | 2016-04-26T20:04:10 | 2025-04-01T04:33:39.613584 | {
"authors": [
"BenMann",
"SephVelut"
],
"repo": "bower/bower",
"url": "https://github.com/bower/bower/issues/2268",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
117902599 | Supports ~/ paths to cwd field in .bowerrc
Updates:
Add untildify module
Update config.js to convert tilde path to home path
Fixes #1784
Could you write a test for it?
Sure thing!
@sheerun Done! Could you please see again?
Looks good, thank you! :)
| gharchive/pull-request | 2015-11-19T20:37:16 | 2025-04-01T04:33:39.615718 | {
"authors": [
"sheerun",
"watilde"
],
"repo": "bower/bower",
"url": "https://github.com/bower/bower/pull/2027",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Updates to Priority in a given bpfProgramConfig are not being correctly handled.
The function StringifyAttachType adds priority as part of the program key used for indexing our node state on each reconcile loop. This means that if a bpfProgramConfig's priority is changed, we incorrectly orphan the bpf program loaded on the node at the old priority, so priority updates won't work correctly. In order to fix this we either need to continue with the work outlined in #386, make the program key smarter, or add some sort of metadata K/V pairs to the bpfctl load command.
Fixed by #
| gharchive/issue | 2023-04-14T14:17:05 | 2025-04-01T04:33:39.635894 | {
"authors": [
"astoycos"
],
"repo": "bpfd-dev/bpfd",
"url": "https://github.com/bpfd-dev/bpfd/issues/406",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2172564087 | bpfman: Add support for fentry/fexit program types
Add support for the fentry and fexit program types.
Resolves: #1012
(just remove hold when ready)
I squashed the integration tests in with the main fentry/fexit code (because I addressed one of the comments in the wrong commit and just made it easier to combine them than to try to unwind the update).
| gharchive/pull-request | 2024-03-06T22:36:56 | 2025-04-01T04:33:39.637177 | {
"authors": [
"Billy99",
"astoycos"
],
"repo": "bpfman/bpfman",
"url": "https://github.com/bpfman/bpfman/pull/1024",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
438261023 | Import only certain languages
Hi,
I was wondering if it is possible to only import the languages that are needed, because in my application I only need a few languages and importing all the languages is quite a bit of an overhead.
Thanks for your time.
This would be nice and has been requested before but I don't have time to do it and it isn't a priority at this time. I would welcome a PR for this but am going to close this one.
| gharchive/issue | 2019-04-29T11:04:42 | 2025-04-01T04:33:39.680066 | {
"authors": [
"bradymholt",
"jz222"
],
"repo": "bradymholt/cRonstrue",
"url": "https://github.com/bradymholt/cRonstrue/issues/104",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2184651409 | Update README.md
Update the readme:
update outdated score format
add the new tutorial link
delete previous duplicated links to documentation and tutorials
@mschrimpf This should be good to merge, but it seems that some checks have not passed.
@deirdre-k @samwinebrake this looks like it's from the travis errors running into time limits again. Do you prefer us to hold off on merging for now until those are fixed?
| gharchive/pull-request | 2024-03-13T18:32:49 | 2025-04-01T04:33:39.686999 | {
"authors": [
"YudiXie",
"mschrimpf"
],
"repo": "brain-score/vision",
"url": "https://github.com/brain-score/vision/pull/627",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1992687695 | Subject: ModuleNotFoundError: No module named 'braindecode.training' after installing Braindecode 0.8
Hello!
I have recently installed Braindecode version 0.8, and I encountered an issue when trying to run the following code snippet from the documentation:
from braindecode.datasets import MOABBDataset
subject_id = 3
dataset = MOABBDataset(dataset_name="BNCI2014_001", subject_ids=[subject_id])
Error Message:
ModuleNotFoundError: No module named 'braindecode.training'
Same here after fresh installation
Hi @Paulhb7 and @etiennedemontalivet!
I'm sorry about that. If you uninstall braindecode and install it again, everything will work:
pip uninstall braindecode
pip install braindecode --no-cache-dir
We changed the installation system in the last sprint, moving the legacy system from setup.py to pyproject.toml; we are still learning some things.
Thank you very much for using braindecode, and if you can confirm that everything is working, that would be great!
FYI @dcwil
Working for me again :D
Fixed. Thanks for your work @bruAristimunha !
Thank you @bruAristimunha!
| gharchive/issue | 2023-11-14T12:53:09 | 2025-04-01T04:33:39.690797 | {
"authors": [
"Paulhb7",
"bruAristimunha",
"changkun",
"dcwil",
"etiennedemontalivet"
],
"repo": "braindecode/braindecode",
"url": "https://github.com/braindecode/braindecode/issues/559",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1882742951 | Preprocessing classes
-Not finished (+revising)
-Delete some comments
NOTE: pick_types() is a legacy function. New code should use inst.pick(...). --> so, I still need to change to pick
Created some new preprocessor objects based on mne's raw/Epochs methods (in preprocess_classes.py), with docstrings copied from mne
Created a test file (test_new_preprocess.py) with some tests adapted from the Preprocessor's test (test_preprocess.py).
Commenting
Still did not update whats new file
Still needs to be reviewed
Added the new preprocessor classes to the API reference
Fixed the copy docstring from mne
@robintibor
I still need to update the what's new file
| gharchive/pull-request | 2023-09-05T21:23:38 | 2025-04-01T04:33:39.694082 | {
"authors": [
"brunaafl"
],
"repo": "braindecode/braindecode",
"url": "https://github.com/braindecode/braindecode/pull/500",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1253226639 | A new tab opens evertime I start VSCode
Besides the fact that the Glow feature doesn't work as the command does not exist, there seems to be another issue. Everytime I start VSCode, a new tab titled "What's new for Synthwave '84" opens. I really like the extention, but I can't use it like this. I'll keep an eye for an update.
Thank you.
Same
I opened the "src/extension.js" for this extension and added a "return" statement at the top, and the problem went away:
function showUpdatePage() {
return; // add this line
const panel = vscode.window.createWebviewPanel(
`synthwave.whatsNew`, // Identifies the type of the webview. Used internally
'What\'s new for Synthwave \'84', // Title of the panel displayed to the user
vscode.ViewColumn.One, // Editor column to show the new webview panel in.
{ enableScripts: !0 } // Webview options. More on these later.
);
const viewPath = path.join(this.cntx.extensionPath, "whats-new", "view.html");
const viewResourcePath = panel.webview.asWebviewUri(viewPath);
const htmlContent = fs.readFileSync(viewPath, "utf-8");
panel.webview.html = htmlContent;
}
I removed the tab altogether - sorry about that. This comment closes #7
| gharchive/issue | 2022-05-31T01:14:48 | 2025-04-01T04:33:39.702860 | {
"authors": [
"brainomite",
"iandouglas",
"mazwrld",
"oasaleh"
],
"repo": "brainomite/dark-synthwave-vscode",
"url": "https://github.com/brainomite/dark-synthwave-vscode/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
145740735 | Add a way to programmatically initiate nonce retreival.
Referencing issue #149, I would like to request a way to call something like getNonce.
It could look like this.
onReady: function(thing) {
  thing.getNonce(); // this could be the method that retrieves the nonce when we want to try
  // or
  thing.submit();
}
This is something we are actively working on for the next version of our SDK. Stay tuned!
| gharchive/issue | 2016-04-04T16:50:18 | 2025-04-01T04:33:39.713658 | {
"authors": [
"EvanHahn",
"JesusisFreedom"
],
"repo": "braintree/braintree-web",
"url": "https://github.com/braintree/braintree-web/issues/151",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
362388780 | paymentRequest.create promise callbacks not working
General information
SDK version: 3.37.0
Environment: Sandbox and Production
Browser and OS: macOS 10.13.6, Chrome 69
Issue description
I just noticed that using braintree-web v3.37.0 doesn't fire then() or catch() on the braintree.paymentRequest.create calls
It works fine when using v3.36.0
import braintreeClient from 'braintree-web/client';
import braintreePaymentRequest from 'braintree-web/payment-request';
braintreeClient.create({
authorization: token
})
.then(client => {
//i get here
braintreePaymentRequest.create({
client: client
})
.then(instance => {
//never gets here
})
.catch(() => {
//never gets here
});
});
I will do some more digging to see if I can see what's happening
From what I can tell, the FRAME_READY event no longer is being hit for some reason
https://github.com/braintree/braintree-web/blob/882ac0adcf1798b8920e5094fbb24a331787a2b7/src/payment-request/external/payment-request.js#L214
Found the issue here:
https://github.com/braintree/braintree-web/blob/3.36.0/src/payment-request/external/payment-request.js#L112-L125
When this gets generated into /dist as part of the npm install, you wind up with
function composeUrl(assetsUrl, componentId, isDebug) {
var baseUrl = assetsUrl;
{
// Pay with Google cannot tokenize in our dev environment
// so in development, we have to use a sandbox merchant
// but set the iFrame url to the development url
baseUrl = 'https://' + undefined + ':9000';
}
// endRemoveIf(production)
return baseUrl + '/web/' + VERSION + '/html/payment-request-frame' + useMin(isDebug) + '.html#' + componentId;
}
which is obviously incorrect
Thanks for the report, we'll get this fixed up.
We've pinpointed the problem. Aiming to get a fix out for this on Monday, 9/24
This should be fixed in 3.38.0. Thanks for letting us know!
| gharchive/issue | 2018-09-20T22:19:31 | 2025-04-01T04:33:39.718937 | {
"authors": [
"719media",
"crookedneighbor"
],
"repo": "braintree/braintree-web",
"url": "https://github.com/braintree/braintree-web/issues/388",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2450456990 | 🛑 Charite gateway-iop-connection is down
In 1c1c4f5, Charite gateway-iop-connection (https://ids.health-x.charite.de/health/4) was down:
HTTP code: 500
Response time: 176 ms
Resolved: Charite gateway-iop-connection is back up in 67cde67 after 24 minutes.
| gharchive/issue | 2024-08-06T09:54:11 | 2025-04-01T04:33:39.779004 | {
"authors": [
"braunma"
],
"repo": "braunma/dataloft-endpoint-monitor",
"url": "https://github.com/braunma/dataloft-endpoint-monitor/issues/591",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1178611508 | PhoenixChannel using previous user's topic(not a bug need a help!)
My scenario is like this.
I created 2 users on simulator for testing.
When I log in the app, my app will connect socket and add channel with topic like this
var topic = "cart:$userId";
var connectedSocket = await _socket!.connect();
if (connectedSocket!.isConnected &&
connectedSocket.channels.containsKey(topic) != true) {
var channel = connectedSocket.addChannel(topic: topic);
if (channel.state != PhoenixChannelState.joined) {
var pushResponse = await channel.join().future;
Then I add an item to the cart using this function:
Future<PushResponse>? addToCart(
{String? productId, int? quantity, int? storeId}) async {
final Session? session = getIt<Session>();
print("calling addToCart from CartApi, user_id: ${session!.user!.id}");
Future<PushResponse>? pushResponse;
await channel.then(
(channel) {
print("Printing current channel info from addToCart function");
print(channel!.topic);
print(channel.state.name);
pushResponse = channel.push(
"add_to_cart",
{
"user_id": session.user!.id,
"product_id": productId,
"store_id": storeId,
"quantity": quantity
},
).future;
},
);
return pushResponse!;
}
It works, but when I switch to another user (the app doesn't restart; I just log out and log in with another user), my channel object is still using the previous user's channel.
I inspect it using print(channel!.topic);
It shows me cart:1 (previous user) instead of cart:2 (current user).
I tried leaving channel when user logs out.
Future<void> leaveChannel() async {
print("Leaving channel....");
return await channel.then(
(channel) {
print("Leaving from ${channel!.topic}");
channel.leave();
},
);
}
But it still uses the previous channel and topic.
I know this problem is in my source code, not in this package, but I have been stuck on it for a couple of days.
So Please give me an advice!
Try to use channel.close() and socket.close() and reopen it after relogin
| gharchive/issue | 2022-03-23T20:13:04 | 2025-04-01T04:33:40.029643 | {
"authors": [
"freewebwithme",
"vizakenjack"
],
"repo": "braverhealth/phoenix-socket-dart",
"url": "https://github.com/braverhealth/phoenix-socket-dart/issues/47",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
254056836 | logout template_name lost in (at least) Django 1.11.3
There's a bug in Django (at least 1.11.3) that prevents userena from passing its template_name to the logout view. As I documented in the bug, it can be fixed by converting the arg into a kwarg. Can you confirm the project is active and that a PR will be accepted?
I had the same problem and as @claytondaley mentioned passing template name as kwarg solved the issue.
| gharchive/issue | 2017-08-30T16:15:13 | 2025-04-01T04:33:40.037426 | {
"authors": [
"claytondaley",
"mrabedini"
],
"repo": "bread-and-pepper/django-userena",
"url": "https://github.com/bread-and-pepper/django-userena/issues/550",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
782794249 | 关于锂电池大小
hi,博主,我想问下这个锂电池是多少尺寸的(长宽高),谢谢啦
hi,博主,我想问下这个锂电池是多少尺寸的(长宽高),谢谢啦
我的这个电池是524013mm.
hi,博主,我想问下这个锂电池是多少尺寸的(长宽高),谢谢啦
我的这个电池是524013mm.
| gharchive/issue | 2021-01-10T11:06:03 | 2025-04-01T04:33:40.040589 | {
"authors": [
"breakstring",
"rakuchyan"
],
"repo": "breakstring/eInkCalendarOfToxicSoul",
"url": "https://github.com/breakstring/eInkCalendarOfToxicSoul/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
About publishing the data
To make it easy for others to verify the data and reproduce the results, I plan to periodically package and publish the SGF files of all games. Meanwhile, I found a small error in the data configuration before November 23; although its effect on the Elo ratings is tiny, for rigor the data before that date will be deleted in batches until all data starts from November 24, at which point the SGF files of all games will be packaged and published, around the end of December.
Also, if you'd like a particular AI or a certain LZ weight added to the tests, let me know, provided that the AI supports limiting playouts or can be configured so that its strength is independent of machine performance.
209 is out; please add it to the tests.
https://drive.google.com/drive/folders/1bB8ee1wFuRWL9nPhsl4_BPUhcWSBuxO0
I recommend adding ox24 from the ox series to the tests; it's available in a 30×256 LZ-usable format. Thanks!
| gharchive/issue | 2018-11-24T03:36:43 | 2025-04-01T04:33:40.043031 | {
"authors": [
"breakwa11",
"powapapa",
"yehud",
"zhanzhenzhen"
],
"repo": "breakwa11/GoAIRatings",
"url": "https://github.com/breakwa11/GoAIRatings/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2521649144 | PHP 8.4.0beta5
apcu build is broken. Is there a reason we include apcu in the bref base layer? Does it not work as an extension?
This extension is built into the main bref layers.
I see you triggered a rebuild, but I don't think it'll succeed, as changes are needed to the apcu extension source to make it work properly on beta5 (and later).
Yep, I figured it doesn't hurt to retry 🤷 But the error message didn't look like it would make any difference.
Building the extension from source seems to work. Not sure if what I've done is actually identical to what we had before. Was the extension previously being enabled using an ini file?
APCu 5.1.24 has now been released.
Nice!
| gharchive/pull-request | 2024-09-12T08:00:35 | 2025-04-01T04:33:40.054582 | {
"authors": [
"GrahamCampbell",
"mnapoli"
],
"repo": "brefphp/aws-lambda-layers",
"url": "https://github.com/brefphp/aws-lambda-layers/pull/202",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2020159216 | Multiple Base Configurations
In the example from base configurations, it seems like all the objects are instantiated first and then one gets selected / modified by the user.
However, there's the case where:
i) some of the options are such that the default arguments don't always work (e.g., some file paths mightn't exist)
ii) said options aren't what the user wants, so technically we don't care that the default arguments don't work
For example, the following dataclass might be a nested config inside a base config:
@dataclass
class MultiCameraConfig:
camera_cfgs: List[CameraConfig] = field(default_factory=list)
camera_intrinsics_dir: Optional[pathlib.Path] = None
camera_extrinsics_path: Optional[pathlib.Path] = None
def __post_init__(self):
if self.camera_cfgs is not None:
            assert self.camera_intrinsics_dir is not None, "..."
...
Then,
env_configs: Dict[str, EnvConfig] = {
"real-env": EnvConfig(
multi_camera_cfg=MultiCameraConfig(
camera_cfgs=None,
camera_intrinsics_dir="random_default_path",
camera_extrinsics_path="random_default_path",
)
),
"sim-env": EnvConfig(
multi_camera_cfg = field(
default_factory=lambda: MultiCameraConfig(
# so ran other script and pasting values here
camera_cfgs=[
CameraConfig(
...
),
EnvConfigUnion = tyro.extras.subcommand_type_from_defaults(
env_configs,
prefix_names=False, # Omit prefixes in subcommands themselves.
)
AnnotatedEnvConfigUnion = tyro.conf.OmitSubcommandPrefixes[EnvConfigUnion] # Omit prefixes of flags in subcommands.
where both EnvConfig EnvConfig are created (and one might trigger an assertion even though that's not the option the user eventually chooses)
Is there a nice way around this?
It's hard to avoid instantiating the provided default objects, since they're needed for things like helptext generation (which currently always happens, even if the helptext isn't being printed) and default subcommand matching[^1], and — if the corresponding subcommand is selected — they need to be instantiated in their original form before tyro can use the CLI flags to override values inside of them.
The easiest suggestion I can think of is to move validation logic like this out of __post_init__; you could add a .validate() method or something similar?
[^1]: Default subcommand matching means: given a field that's transformed to subcommands x: Union[A, B, C] = some_object, does some_object correspond to A, B, or C?
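A minimal sketch of that suggestion (illustrative only, not tyro's API; CameraConfig here is a stub and the validate name is hypothetical):

```python
from dataclasses import dataclass, field
from typing import List, Optional
import pathlib


@dataclass
class CameraConfig:
    name: str = "cam0"  # stub standing in for the real CameraConfig


@dataclass
class MultiCameraConfig:
    camera_cfgs: List[CameraConfig] = field(default_factory=list)
    camera_intrinsics_dir: Optional[pathlib.Path] = None
    camera_extrinsics_path: Optional[pathlib.Path] = None

    # No __post_init__ assertions: constructing any default never raises, so
    # every subcommand default can safely be instantiated for helptext generation.
    def validate(self) -> "MultiCameraConfig":
        if self.camera_cfgs:
            assert self.camera_intrinsics_dir is not None, "intrinsics dir required"
            assert self.camera_extrinsics_path is not None, "extrinsics path required"
        return self


# After tyro.cli(...) returns the user's chosen config, validate only that one:
chosen = MultiCameraConfig()  # stand-in for the CLI result
chosen.validate()
```

This way the "real-env" defaults that would fail validation are only checked if the user actually selects that subcommand.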
Gotcha - this sounds reasonable. Thanks!
| gharchive/issue | 2023-12-01T06:00:52 | 2025-04-01T04:33:40.079289 | {
"authors": [
"brentyi",
"kevin-thankyou-lin"
],
"repo": "brentyi/tyro",
"url": "https://github.com/brentyi/tyro/issues/99",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
147052984 | Better Documentation
Follow the advice in the readme and update the docs to include a more step-by-step guide
Agree. And... hm, maybe at least one example not in Kotlin?
| gharchive/issue | 2016-04-08T22:16:09 | 2025-04-01T04:33:40.704527 | {
"authors": [
"brianegan",
"se-panfilov"
],
"repo": "brianegan/bansa",
"url": "https://github.com/brianegan/bansa/issues/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
331404560 | DateTime.Ticks broken in Chrome 67
Upgrading to Chrome will make several unit tests to fail where they rely in DateTime's Ticks property. It works fine on Edge, Firefox and Safari.
Steps To Reproduce
https://deck.net/1049588cfb52c7874edacd21a2a81363
Expected Result
Success.
315254592000000000
Actual Result
Failure.
315254592280000000
This is one of the failing samples in the examples explorer. This issue does not require a unit test, as several unit tests are failing due to this.
Google Chrome 67 introduced historical time zone handling for early dates. According to this website: https://www.timeanddate.com/time/zone/brazil/sao-paulo
The time zone for Brazil was GMT-3:06:28 from at least 1800 until 1913. Seems Chrome just assumed any date prior to 1800 would be using this time zone as well.
According to this stackoverflow question, other countries and cities may be affected by this new google Chrome behavior.
Other browsers do not take this into consideration.
This command is probably a good one to check for the failing browser in different time zones (switch time zone and run the command):
var truecount=0;
var falsecount=0;
for (var i = 1; i < 2100; i++) {
for (var j = 0; j < 12; j++) {
var g = new Date(i, j, 1);
g.setFullYear(i);
var ug = new Date(Date.UTC(g.getUTCFullYear(), g.getUTCMonth(), g.getUTCDate(), g.getUTCHours(), g.getUTCMinutes(), g.getUTCSeconds(), g.getUTCMilliseconds())); // note: getUTCDate (day of month), not getUTCDay; seconds slot added
ug.setFullYear(g.getUTCFullYear());
if (ug.getHours() == g.getHours()) truecount++;
else falsecount++;
}
}
console.log("true:"+truecount+" - false:"+falsecount);
It should read zero false occurences.
Just disabled the tests that relied on the features this Chrome issue breaks, unfortunately the fix won't be trivial, as now different browsers handle the issues differently. There were around 15 tests ignored.
This one still failing (the original one, i suppose) https://testing.bridge.net/?testId=7548dbd0
| gharchive/issue | 2018-06-12T01:25:02 | 2025-04-01T04:33:40.774544 | {
"authors": [
"fabriciomurta",
"stgolem"
],
"repo": "bridgedotnet/Bridge",
"url": "https://github.com/bridgedotnet/Bridge/issues/3633",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
123038093 | Initial findings in porting to Cygwin on Windows
I don't have time to complete this properly, but recording some notes here in case someone else finds useful as a starting point for porting this project to Windows with Cygwin.
check_prerequisites is hard coded to make sure OS X (Darwin) – I commented out the call to the function so Cygwin worked.
I had to compile up https://github.com/emcrisostomo/fswatch - which for me involved installing more Cygwin libraries and the C++ compiler. I could not find fswatch in the default cygwin downloads.
I replaced “greadlink” with “readlink” (greadlink was a separate command added on OS X because the built in readlink was not powerful enough on OS X).
I commented out the “install_dependencies” call (I could have turned on ‘skip dependencies’ flag it looks like) – this invokes ‘homebrew’ to install stuff on OS X. I had already done these steps by hand on Windows (that is, install boot2docker etc).
The ‘install’ command failed to add a hostname to /etc/hosts for two reasons: It used 'sudo' which failed under cygwin, and it messed up the CRLF line ending for Windows. That needs to be more robust.
After that the 'install' command seemed to work and set everything up surprisingly well, and I could use the script to start boot2docker, sync to docker host, etc. Nice work!!!
Wow, nice work! I'm amazed those were the only changes required. Is there a package manager that works with cygwin that could fill in fswatch and the other dependencies in install_dependencies? If so, all the other issues would be straightforward to fix. And if that's all it takes to support Windows, I'm totally game.
Unfortunately not that I know of - other than cygwin. If cygwin added fswatch support for example, that would be a great win. This seems to be the relevant page: https://cygwin.com/setup.html. The fswatch owner does not make a Windows binary available - so maybe that is step one. Find a volunteer to get fswatch included in cygwin. Then, as you say, the rest is probably straightforward for you to do.
(I don't think you can install packages from the command line in cygwin by the way - they have a separate tool that can install updates - assuming you are not running cygwin at the time. In Windows, you cannot replace binaries if the code is currently executing, so I think the install process had to be kept separate. So it is going to be just echo statements saying "you also need to install the following cygwin packages". Maybe that is all you do - echo cygwin packages you need, don't try to install them.)
That's a shame about package management on Cygwin.
One thought on maintainability: I don't even have a Windows box these days, so testing changes would be tough. In fact, even maintaining the code on OS X is tough, as we can't run integration tests due to #7. To add Windows to the mix, I'd probably need a solution similar to run automated integration tests in both environments. Otherwise, the code would almost certainly break after a short while.
| gharchive/issue | 2015-12-18T22:38:56 | 2025-04-01T04:33:40.791903 | {
"authors": [
"alankent",
"brikis98"
],
"repo": "brikis98/docker-osx-dev",
"url": "https://github.com/brikis98/docker-osx-dev/issues/158",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1046490417 | peg syntax error on operator prefix match
I ran into this ambiguity...
% zapi query 'from Raw | topic=="Invoices"'
error parsing Zed at column 15:
from Raw | topic=="Invoices"
=== ^ ===
It appears that the PEG parser is matching "top" as an operator then getting stuck parsing the implied filter expression. Operator names should require whitespace or EOF to avoid this problem.
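The proposed fix can be illustrated with a tiny regex sketch (just a demonstration of the word-boundary idea, not Zed's actual grammar):

```python
import re

# An operator keyword like "top" should only match when followed by
# whitespace or end-of-input, so "topic" stays a plain identifier.
TOP_KEYWORD = re.compile(r"top(?=\s|$)")

assert TOP_KEYWORD.match("top | head 1") is not None   # operator usage: matches
assert TOP_KEYWORD.match('topic=="Invoices"') is None  # identifier: no match
```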
Looks like a duplicate of #3072 BTW.
I'm closing this one as a duplicate of #3072, which was indeed fixed by #3792.
| gharchive/issue | 2021-11-06T11:36:46 | 2025-04-01T04:33:40.800729 | {
"authors": [
"mccanne",
"philrz"
],
"repo": "brimdata/zed",
"url": "https://github.com/brimdata/zed/issues/3249",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1468494282 | Apache Arrow support
Add support for Apache Arrow as another format the Zed tooling can read/write.
A bonus found during research is that the Arrow library we intend to use comes with a better Parquet implementation than the one we're using currently, so once we're making use of that some of our open Parquet issues may be addressed.
#4252 added support for the Arrow IPC stream format. We can close this after we add support for the IPC file format.
#4278 covers using the Arrow Parquet implementation.
| gharchive/issue | 2022-11-29T18:08:18 | 2025-04-01T04:33:40.802349 | {
"authors": [
"nwt",
"philrz"
],
"repo": "brimdata/zed",
"url": "https://github.com/brimdata/zed/issues/4226",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
701460197 | add non-streaming zjson search endpoint
Add an option to the search point to return the results of a search as a single (non-chunked) http response.
All but the the final stats updates should be ignored.
The mime type of the response should be application/json.
Given the clean design of zqd/search, this task should be trivial: collect all the data objects into a Go slice, then encode the slice as JSON onto the HTTP response.
Verified in zq commit 4bce00d.
If I post a simple event and request it via the new -e json option, the response that comes back is a single JSON array.
$ echo '{"foo": "bar"}' > foo.ndjson
$ zapi -s foo post -f foo.ndjson
$ zapi -s foo get -e json | jq .
[
{
"type": "TaskStart",
"task_id": 0
},
{
"type": "SearchRecords",
"channel_id": 0,
"records": [
{
"id": 23,
"type": [
{
"name": "foo",
"type": "string"
}
],
"values": [
"bar"
]
}
]
},
{
"type": "SearchEnd",
"channel_id": 0,
"reason": "eof"
},
{
"type": "SearchStats",
"start_time": {
"sec": 1600368655,
"ns": 114175000
},
"update_time": {
"sec": 1600368655,
"ns": 114387000
},
"bytes_read": 4,
"bytes_matched": 4,
"records_read": 1,
"records_matched": 1
}
]
As expected, it shows up on the wire marked with the correct Content-type. The response sniffed from Wireshark:
Content-Type: application/json
X-Request-Id: 10
Date: Thu, 17 Sep 2020 18:50:55 GMT
Content-Length: 393
[{"type":"TaskStart","task_id":0},{"type":"SearchRecords","channel_id":0,"records":[{"id":23,"type":[{"name":"foo","type":"string"}],"values":["bar"]}]},{"type":"SearchEnd","channel_id":0,"reason":"eof"},{"type":"SearchStats","start_time":{"sec":1600368655,"ns":114175000},"update_time":{"sec":1600368655,"ns":114387000},"bytes_read":4,"bytes_matched":4,"records_read":1,"records_matched":1}]
Contrast with -e zjson which is streamed as NDJSON.
$ zapi -s foo get -e zjson | jq .
{
  "type": "TaskStart",
  "task_id": 0
}
{
  "type": "SearchRecords",
  "channel_id": 0,
  "records": [
    {
      "id": 23,
      "type": [
        {
          "name": "foo",
          "type": "string"
        }
      ],
      "values": [
        "bar"
      ]
    }
  ]
}
{
  "type": "SearchEnd",
  "channel_id": 0,
  "reason": "eof"
}
{
  "type": "SearchStats",
  "start_time": {
    "sec": 1600368782,
    "ns": 425084000
  },
  "update_time": {
    "sec": 1600368782,
    "ns": 425536000
  },
  "bytes_read": 4,
  "bytes_matched": 4,
  "records_read": 1,
  "records_matched": 1
}
{
  "type": "TaskEnd",
  "task_id": 0
}
That response on the wire:
HTTP/1.1 200 OK
Content-Type: application/x-zjson
X-Request-Id: 14
Date: Thu, 17 Sep 2020 18:53:02 GMT
Transfer-Encoding: chunked
23
{"type":"TaskStart","task_id":0}
79
{"type":"SearchRecords","channel_id":0,"records":[{"id":23,"type":[{"name":"foo","type":"string"}],"values":["bar"]}]}
35
{"type":"SearchEnd","channel_id":0,"reason":"eof"}
be
{"type":"SearchStats","start_time":{"sec":1600368782,"ns":425084000},"update_time":{"sec":1600368782,"ns":425536000},"bytes_read":4,"bytes_matched":4,"records_read":1,"records_matched":1}
1f
{"type":"TaskEnd","task_id":0}
0
Thanks @mccanne!
| gharchive/issue | 2020-09-14T21:57:09 | 2025-04-01T04:33:40.807699 | {
"authors": [
"mccanne",
"philrz"
],
"repo": "brimsec/zq",
"url": "https://github.com/brimsec/zq/issues/1277",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2267633286 | Build out dev container
Here, I add a few quality-of-life improvements to make the dev experience as smooth as possible:
Install the psql CLI into the container.
Install python requirements as part of container setup.
Set port forwarding the recommended way (oops, I had that misconfigured before).
Added a debugger configuration for vscode.
Added a migration script to create the DB and auto-apply the relevant schema updates on launch.
All in all, a new developer should now just have to open the repository in a dev-container-enabled vscode and hit run!
@SamP20 I'm not sure how the migration script would fit in with your DB dump + sample data? I presume we could start with an empty DB, run your SQL script, set the "last_migrated.txt" to migration/19_address_not_null and debug from there?
@SamP20 I'm not sure how the migration script would fit in with your DB dump + sample data? I presume we could start with an empty DB, run your SQL script, set the "last_migrated.txt" to migration/19_address_not_null and debug from there?
I'm wondering whether we should start using something like https://flask-migrate.readthedocs.io/en/latest/index.html? It stores the migration version in the database itself, ensuring the version always stays in sync. The easiest option would probably be to condense all our existing scripts into a single migration, then continue from there
I'm happy to use this script in the meantime though if it makes things a bit easier for you.
Ooh, I like this idea - I haven't really worked in this stack before which is why I jumped at automating sql via bash. It probably makes sense for me to keep this PR open and move across to that over the next week or 2 rather than uproot everyone's dev process only to do it again in a few weeks time.
Sorry this isn't in the thread, I can't seem to reply to comments - only create new ones...
Sorry for the delay! I had some time to work on this earlier this week, but got lost in a dependency versioning mess. I've pinned certain versions of the python dev container and the postgres images so that we can't mix up pgsql vs CLI versions when doing dumps etc in future.
This is now a breaking change. If you have a DB already, you will need to do the following:
Ensure all of the previous migrations were applied.
Check out the latest change.
Gather the new python dependencies (via pip, virtual environment, rebuilding dev container, etc.).
Run flask --app hackspace_mgmt db stamp head to apply the initial schema version change.
Start the app as usual.
I'm happy to be on hand during the real upgrade if we are concerned, but taking a backup and using that as a roll-back strategy is probably the safest bet.
This is amazing work, thank you! I'll try and find some time to give it a proper review, especially the new migration.
| gharchive/pull-request | 2024-04-28T15:23:26 | 2025-04-01T04:33:40.831264 | {
"authors": [
"SamP20",
"dAdil"
],
"repo": "bristolhackspace/hackspace-mgmt",
"url": "https://github.com/bristolhackspace/hackspace-mgmt/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1527077 | Grid not aligning with text
Having the grid background with text that does not use it as an approximate & consistent baseline is not a convincing skeuomorph.
decided I'm not going to fix this. I started down the path of aligning the text to the grid, but then realized that different font sizes / fonts will mess it up. leaving it for now, but may take the grid away if others complain.
| gharchive/issue | 2011-08-31T00:56:57 | 2025-04-01T04:33:40.849754 | {
"authors": [
"alanhogan",
"brittohalloran"
],
"repo": "brittohalloran/notestack",
"url": "https://github.com/brittohalloran/notestack/issues/13",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1433782835 | DDO-2445 Refactor Lead Time Metrics To Support Multi Replica Sherlock
Problem: Right now with the way Sherlock's accelerate metrics implementation works, the metrics would report incorrectly if we run more than one Sherlock instance.
For the case of the deploy frequency metric this can be solved with just a minor adjustment to the Prometheus query. In iterator logic this would look like:
sherlock_instances
.iterate()
.map(delta(sherlock_deploy_frequency{service="blah",env="blah}))
.sum()
However for lead times it is a little more complex. With the event driven lead time metric as we currently have it only the sherlock replica that serves the request will be aware of the updated lead time, the others will still have the old lead time which will cause weird looking data in grafana
Solution: Refactor the lead time metric implementation so that rather than responding in an event based manner on requests to a particular endpoint. Sherlock will instead run a concurrent process which maintains a cache of the most recent lead time records in the db. The cache is flushed and rebuilt on some interval. The metrics write interval and cache flush interval are both changeable via Sherlock config.
This implementation uses time.Tickers to coordinate this.
There is potential that this could cause a bit of jitter in the lead time time series data in grafana if the tickers are out of sync in different sherlock instances. However this can be solved by a combination of playing with the timer configs and some smoothing on the prometheus side. Millisecond accuracy is not a requirement of accelerate metrics.
Thanks for the thumb. Just realized I have a major bug here. Will fix.
| gharchive/pull-request | 2022-11-02T21:00:40 | 2025-04-01T04:33:40.888438 | {
"authors": [
"mflinn-broad"
],
"repo": "broadinstitute/sherlock",
"url": "https://github.com/broadinstitute/sherlock/pull/86",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
836541868 | Best practice for assigning api parameter?
When customising the service, the api version is specified like so:
api: xxx // This is the version of the browser-update api to use. Please do not remove.
In the sample code to copy and paste, the api version is specified as a decimal number like this ==> api: 2021.03
At the time of this writing, it is 2021/03/21, so it makes sense to me that this will need to be automatically incremented for the current yyyy.mm?
Seeking clarity on how it is actually implemented behind the scenes because I've put 2021.10 and 2000.10 - nothing different seems to happen. I've peeked into the source and I see it is appended to the source:
var extra=encodeURIComponent("frac="+frac+"&txt="+txt+"&apiver="+op.apiver);
i.src="https://browser-update.org/cnt?what=noti&from="+bb.n+"&fromv="+bb.v + "&ref="+ escape(op.pageurl) + "&jsv="+op.jsv+"&tv="+op.style+"&extra="+extra;
Bottom line, is it recommended that the current date in the format of yyyy.mm is used so as to not hardcode it to a specific version?
Or simply omit it as default would be the latest? Given the comment "Please do not remove", it doesn't seem likely that it would default to anything.
Guidance appreciated.
Hi,
You should ad the date the code was created.
It is just in case there are some backward incomaptible changes to the configuration or default values in the future.
Then I can look what API version the people were using and update the script without breaking the sites' intended configuration.
Does this help?
Ah, okay. For some reason I was thinking that in order for the parameters of browser versions to work, it would use the api version. Good to know that isn't the case and makes sense what you have said. Thanks.
| gharchive/issue | 2021-03-20T00:58:28 | 2025-04-01T04:33:40.925864 | {
"authors": [
"josselex",
"xrx101"
],
"repo": "browser-update/browser-update",
"url": "https://github.com/browser-update/browser-update/issues/525",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2539559495 | Allow Developers to change DOM parsing method
As a user of Stagehand, I might want to change the method of DOM parsing:
from Travis:
Have some modularity (backstage) that allows developers to choose which method they want to extract data from (DOM cleaning vs marking up DOM vs screenshotting)
Related to #149 and #184
| gharchive/issue | 2024-09-20T19:43:56 | 2025-04-01T04:33:40.927293 | {
"authors": [
"filip-michalsky",
"kamath"
],
"repo": "browserbase/stagehand",
"url": "https://github.com/browserbase/stagehand/issues/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
366625350 | supervise: start "down" services with svc -o
Continued from PR#47.
This patch addresses issue #32. Basically, the firstrun flag is only valid until the start script is launched but the reaper code is trying to use it later to decide what to do.
Thanks. Fixes #32
| gharchive/pull-request | 2018-10-04T05:15:45 | 2025-04-01T04:33:40.930859 | {
"authors": [
"bruceg",
"snakpak"
],
"repo": "bruceg/daemontools-encore",
"url": "https://github.com/bruceg/daemontools-encore/pull/56",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2392139011 | IR Loader Won't Work (Windows 11 - Reaper)
Can only add IRs when the fx chain is disabled. Enabling the fx chain removes the IRs. In the video, even when there is only 1 IR, after initially turning on the fx chain it's definitely not there.
https://github.com/brummer10/Ratatouille.lv2/assets/113886368/58ea01ce-6ab2-4f72-9bb7-d080cb526066
Do you've the same issue when using the Ratatouille GUI?
Which version of Reaper is it? I just tested it again here on windows, and it just works.
Could you maybe start reaper from a terminal, then there should be some output there when the IR File loading fail.
Same issue using the GUI. Using the latest Reaper version 7.17. Started reaper from a terminal but it didn't show anything.
I've currently no idea how to solve that. As I said, here on windows10 it works flawless. I may going to update to wondows11 within the next day's to see if that makes a different, but can't promise any thing.
Okay, I see now what the issue is.
You're buffer-size 192. The convolution engine didn't like that. Try please with 128 or 256.
In the mean time I'll investigate to make it work with any buffer-size.
I've pushed a new release aims to fix this issue.
https://github.com/brummer10/Ratatouille.lv2/releases/download/v0.5/Ratatouille.lv2-v0.5-win64.zip
let me know please if it works for you now as well.
Yup its working.
| gharchive/issue | 2024-07-05T08:38:23 | 2025-04-01T04:33:40.942739 | {
"authors": [
"Zodi-ark",
"brummer10"
],
"repo": "brummer10/Ratatouille.lv2",
"url": "https://github.com/brummer10/Ratatouille.lv2/issues/16",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
892270500 | Not compatible with latest release of Scriptable Object Collection
In InitialSetupEditorWindow.cs:
CollectionsRegistry.Instance.TryGetCollectionFromType should be renamed to CollectionsRegistry.Instance.TryGetCollectionOfType
ScriptableObjectCollectionSettings.Instance.SetGenerateCustomStaticFile cannot be found since you refactored the settings on SOC and I don't know how to fix it myself, because I cannot find out what you did with it.
Hey! I'm still working on this UIManager; it should have a new version quite soon, but for now I would suggest that you not use the updated version of the SOC.
Fixed on #4
| gharchive/issue | 2021-05-14T21:53:13 | 2025-04-01T04:33:40.947239 | {
"authors": [
"brunomikoski",
"sezdev"
],
"repo": "brunomikoski/UIManager",
"url": "https://github.com/brunomikoski/UIManager/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
440854371 | Make docs gender-inclusive
Replace gendered pronouns with more inclusive pronouns. (Sorry, but it's a pretty glaring oversight these days!) Also made a slight punctuation improvement. Thanks for the project!
Thanks @mscheper!
| gharchive/pull-request | 2019-05-06T19:40:04 | 2025-04-01T04:33:40.951546 | {
"authors": [
"brutasse",
"mscheper"
],
"repo": "brutasse/django-password-reset",
"url": "https://github.com/brutasse/django-password-reset/pull/77",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
235021314 | Redirect to admin login instead of raising PermissionDenied
The current behavior of raising a generic PermissionDenied exception isn't particularly helpful in the event of a user being logged out. I propose redirecting the user to the admin login view if they are logged out or don't have the required permissions. This mirrors the behavior in the rest of the Django Admin.
In addition, Django designates is_staff as the flag for admin site access (1). It and is_superuser can be set independently of one another.
https://docs.djangoproject.com/en/1.11/ref/contrib/auth/#django.contrib.auth.models.User.is_staff
PR re-opened from a different repository.
| gharchive/pull-request | 2017-06-10T19:08:28 | 2025-04-01T04:33:40.953662 | {
"authors": [
"wastrachan"
],
"repo": "brutasse/django-rq-dashboard",
"url": "https://github.com/brutasse/django-rq-dashboard/pull/18",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
170494786 | WrapAPI reporting "Object Moved" after login endpoint
Starting sometime on 8/9/2016 the login WrapAPI now gives the below error. I checked and there are no nag screens. Anyone else running into this in the last 24 hours?
{ "success": false, "outputScenario": null, "data": null, "messages": [ "None of the output scenarios matched. See the raw data received in rawData" ], "errTypes": [ "noMatchedOutputScenario" ], "rawData": { "responses": [ { "statusCode": 302, "body": "<html><head><title>Object moved</title></head><body>\r\n<h2>Object moved to <a href=\"%2fpda%2f404.aspx%3faspxerrorpath%3d%2fpda%2f%7b%7bsessionUrl%7d%7d%2fDefault.aspx\">here</a>.</h2>\r\n</body></html>\r\n", "headers": { "content-type": "text/html; charset=utf-8", "location": "/pda/404.aspx?aspxerrorpath=/pda/{{sessionUrl}}/Default.aspx", "server": "Microsoft-IIS/7.5", "x-powered-by": "ASP.NET", "p3p": "policyref=\"/w3c/p3p.xml\",CP=\"OUR SAMa ADM UNI BUS ALL CUR DSP TAI COR IND STA\"", "access-control-allow-origin": "*", "date": "Wed, 10 Aug 2016 19:06:23 GMT", "connection": "close", "content-length": "199" }, "request": { "uri": { "protocol": "https:", "slashes": true, "auth": null, "host": "www.alarm.com", "port": 443, "hostname": "www.alarm.com", "hash": null, "search": null, "query": null, "pathname": "/pda/%7B%7BsessionUrl%7D%7D/Default.aspx", "path": "/pda/%7B%7BsessionUrl%7D%7D/Default.aspx", "href": "https://www.alarm.com/pda/%7B%7BsessionUrl%7D%7D/Default.aspx" }, "method": "post", "headers": { "origin": "https://www.alarm.com", "accept-language": "en-US,en;q=0.8", "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.110 Safari/537.36", "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", "referer": "https://www.alarm.com/pda/(S(wuxx1xq5vbiyl145vcz3fs55))/Default.aspx", "Content-Type": "application/x-www-form-urlencoded", "content-length": 4208 } } }, { "statusCode": 404, "body": "<html>\r\n <head>\r\n <title>The resource cannot be found.</title>\r\n <style>\r\n body {font-family:\"Verdana\";font-weight:normal;font-size: .7em;color:black;} \r\n p 
{font-family:\"Verdana\";font-weight:normal;color:black;margin-top: -5px}\r\n b {font-family:\"Verdana\";font-weight:bold;color:black;margin-top: -5px}\r\n H1 { font-family:\"Verdana\";font-weight:normal;font-size:18pt;color:red }\r\n H2 { font-family:\"Verdana\";font-weight:normal;font-size:14pt;color:maroon }\r\n pre {font-family:\"Lucida Console\";font-size: .9em}\r\n .marker {font-weight: bold; color: black;text-decoration: none;}\r\n .version {color: gray;}\r\n .error {margin-bottom: 10px;}\r\n .expandable { text-decoration:underline; font-weight:bold; color:navy; cursor:hand; }\r\n </style>\r\n </head>\r\n\r\n <body bgcolor=\"white\">\r\n\r\n <span><H1>Server Error in '/pda' Application.<hr width=100% size=1 color=silver></H1>\r\n\r\n <h2> <i>The resource cannot be found.</i> </h2></span>\r\n\r\n <font face=\"Arial, Helvetica, Geneva, SunSans-Regular, sans-serif \">\r\n\r\n <b> Description: </b>HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. 
Please review the following URL and make sure that it is spelled correctly.\r\n <br><br>\r\n\r\n <b> Requested URL: </b>/pda/404.aspx<br><br>\r\n\r\n </body>\r\n</html>\r\n", "headers": { "cache-control": "private", "content-type": "text/html; charset=utf-8", "server": "Microsoft-IIS/7.5", "x-aspnet-version": "2.0.50727", "x-powered-by": "ASP.NET", "p3p": "policyref=\"/w3c/p3p.xml\",CP=\"OUR SAMa ADM UNI BUS ALL CUR DSP TAI COR IND STA\"", "access-control-allow-origin": "*", "date": "Wed, 10 Aug 2016 19:06:23 GMT", "connection": "close", "content-length": "1510" }, "request": { "uri": { "protocol": "https:", "slashes": true, "auth": null, "host": "www.alarm.com", "port": 443, "hostname": "www.alarm.com", "hash": null, "search": "?aspxerrorpath=/pda/%7B%7BsessionUrl%7D%7D/Default.aspx", "query": "aspxerrorpath=/pda/%7B%7BsessionUrl%7D%7D/Default.aspx", "pathname": "/pda/404.aspx", "path": "/pda/404.aspx?aspxerrorpath=/pda/%7B%7BsessionUrl%7D%7D/Default.aspx", "href": "https://www.alarm.com/pda/404.aspx?aspxerrorpath=/pda/%7B%7BsessionUrl%7D%7D/Default.aspx" }, "method": "GET", "headers": { "origin": "https://www.alarm.com", "accept-language": "en-US,en;q=0.8", "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.110 Safari/537.36", "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", "referer": "https://www.alarm.com/pda/(S(wuxx1xq5vbiyl145vcz3fs55))/Default.aspx", "Content-Type": "application/x-www-form-urlencoded" } } } ], "stateToken": REDACTED" } }
I tested this myself and it appears that the alarm.com integration is broken for me as well. If I activate my "I'm leaving" scene, I get a response that the scene can't be run. Plus I can tell it's not logging in to alarm.com as I have a dedicated account. I'm quite new with getting this set up, but I am not sure how to duplicate scombest's scenario. All I can tell is that it's "not working" and it's been working at least back to this Monday.
I noticed that the bookmarked API for login was changed 10 hours ago. Maybe a change occurred?
There was a security issue discovered this morning. I have been in contact with the folks at WrapAPI. I'm not sure if this is a result of their security fix or not, but I've reached out to them and I'll update as soon as I hear back. For now, it does seem to be broken. It's not substituting variable inputs in the URLs its trying to load. The calls are supposed to load www.alarm.com/pda/{{sessionUrl}}/Default.aspx where a value is substituted for {{sessionUrl}}. Instead, it's literally passing in {{sessionUrl}}.
Just heard back and it appears to be working again. @scombest @rmervine can you confirm or deny on your end?
@bryanbartow confirmed it is working again! Thanks for the quick turnaround!
| gharchive/issue | 2016-08-10T19:10:23 | 2025-04-01T04:33:40.962418 | {
"authors": [
"bryanbartow",
"rmervine",
"scombest"
],
"repo": "bryanbartow/homebridge-alarm.com",
"url": "https://github.com/bryanbartow/homebridge-alarm.com/issues/27",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
90056397 | BLKDelegateSplitter crash
I don't have a scenario yet to reproduce the crash, but sometimes I receive this:
[BLKDelegateSplitter scrollViewDidScroll:]: unrecognized selector sent to instance 0x18380150
Does anyone have a fix for it?
@dorinsimina you can try to add this code in the dealloc method
- (void)dealloc
{
self.tableView.delegate = nil;
}
@brycezhang Thanks a lot. I have the same problem and it works as you suggested.
| gharchive/issue | 2015-06-22T09:48:03 | 2025-04-01T04:33:40.964829 | {
"authors": [
"brycezhang",
"caiyue1993",
"dorinsimina"
],
"repo": "bryankeller/BLKFlexibleHeightBar",
"url": "https://github.com/bryankeller/BLKFlexibleHeightBar/issues/39",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
517473969 | Possible to support SHA2 512 hash alg in addition to the current SHA3?
Currently, the spec requires the hash to be computed using SHA3 512.
Would it be possible to also (or instead) support SHA2 512? (My main reason for the request is - WebCryptography API in the browser supports SHA2 but not 3).
I picked SHA3 initially because I had support for it in my implementation stack and it was a newer alg. It was simpler to build with just one for now, but what I think we actually need to do is some kind of crypto agility. Like the client sends this request:
callback: {
  uri: "https://client.foo/callback?state=blab",
  nonce: "12356431",
  alg: "SHA3-512"
}
Or even if the client sends a set of supported algorithms:
alg: ["SHA3-512", "SHA2-512", "ROT13"]
And when the server returns its nonce it also returns the chosen alg:
server_nonce: "6544321",
alg: "SHA2-512"
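A sketch of what that negotiation plus hashing could look like (algorithm names and helpers are illustrative; only SHA2-512 is shown because it is in the Go standard library):

```go
package main

import (
	"crypto/sha512"
	"encoding/hex"
	"fmt"
)

// pickAlg returns the first client-offered algorithm the server
// also supports, mirroring the request/response exchange above.
func pickAlg(offered []string, supported map[string]bool) (string, bool) {
	for _, alg := range offered {
		if supported[alg] {
			return alg, true
		}
	}
	return "", false
}

// hashNonces binds the client and server nonces using the
// negotiated algorithm.
func hashNonces(alg, clientNonce, serverNonce string) (string, error) {
	switch alg {
	case "SHA2-512":
		sum := sha512.Sum512([]byte(clientNonce + serverNonce))
		return hex.EncodeToString(sum[:]), nil
	default:
		return "", fmt.Errorf("unsupported alg %q", alg)
	}
}
```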
| gharchive/issue | 2019-11-05T00:06:55 | 2025-04-01T04:33:41.029405 | {
"authors": [
"dmitrizagidulin",
"jricher"
],
"repo": "bspk/oauth.xyz-site",
"url": "https://github.com/bspk/oauth.xyz-site/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
177200713 | getblockheader is not exposed
getblockheader is an important RPC call that is not exposed. At first glance it looks the same as getblock minus the tx array, but there's an important difference:
elsa@winter:~$ bitcoin-cli getblockheader 000000000000000000f37fddab6ae59b06d55c9949c4bf35151b7776ff551897
{
"hash": "000000000000000000f37fddab6ae59b06d55c9949c4bf35151b7776ff551897",
"confirmations": 100001,
"height": 329936,
"version": 2,
"versionHex": "00000002",
"merkleroot": "48786f412860f93607fd2b67bfc32d1c905c83da12efd35f5718b38da44b296b",
"time": 1415947066,
"mediantime": 1415942411,
"nonce": 1327336627,
"bits": "181bc330",
"difficulty": 39603666252.41841,
"chainwork": "00000000000000000000000000000000000000000002ba5d1b2f3f765cff293c",
"previousblockhash": "00000000000000000b954ce608c45c9229c3d8e0f8f710e5663e0bb9091d33e1",
"nextblockhash": "000000000000000000657614104babe20c0f76066c288c0b849504ea11f7af6e"
}
elsa@winter:~$ bitcoin-cli getblock 000000000000000000f37fddab6ae59b06d55c9949c4bf35151b7776ff551897
error code: -32603
error message:
Block not available (pruned data)
Does #91 work for you?
Indeed it does, thanks!
Thanks for testing. The issue will be closed when the PR is merged.
| gharchive/issue | 2016-09-15T14:58:06 | 2025-04-01T04:33:41.043449 | {
"authors": [
"FrozenPrincess",
"dajohi"
],
"repo": "btcsuite/btcrpcclient",
"url": "https://github.com/btcsuite/btcrpcclient/issues/89",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
64969733 | RSS feed
A blog platform should provide RSS feeds.
It wouldn't be that hard and we could support dynamic feeds based on queries.
Made some progress on this.
A major issue is that we don't assume files have names/titles or descriptions, which makes an RSS feed hard to usefully populate. I can add some more fields to my sln-extract-metadata script, but we need some sort of fallback.
The plan is to send the file previews generated by the blog, but we need to 1. generate our own previews for files that don't have them, and 2. add escaping for XML CDATA output.
On the upside, the dynamic queries work well and are very slick.
Right now the end-point is /feed.xml but maybe just /feed would be nicer. Or /rss?
| gharchive/issue | 2015-03-28T17:49:33 | 2025-04-01T04:33:41.048034 | {
"authors": [
"btrask"
],
"repo": "btrask/stronglink",
"url": "https://github.com/btrask/stronglink/issues/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
87916959 | Downloaded the v3.1 source code, but make compilation always fails
hi buaazp:
At first it reported some dependency errors; after I resolved those, it complained that some variables could not be found. Could you provide a complete and correct installation document? Thanks.
Now it reports `error: expected specifier-qualifier-list before 'memcached_st'`. Is there a problem with the memcached installation such that the relevant header files cannot be found?
Reinstalled memcached, and the build succeeded.
| gharchive/issue | 2015-06-13T03:21:16 | 2025-04-01T04:33:41.054021 | {
"authors": [
"hhservice"
],
"repo": "buaazp/zimg",
"url": "https://github.com/buaazp/zimg/issues/87",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
590303977 | RPI2 adapter "Cannot setup port 23 as input: Error: EACCES: permission denied, open '/sys/class/gpio/export'"
It seems that libc6 is outdated.
Version 2.28 is requested by wiringpi,
but 2.24 is installed.
root@99a59f9214c7:~# ldd --version
ldd (Debian GLIBC 2.24-11+deb9u4) 2.24
testing gpio within container shell brings:
root@99a59f9214c7:/opt/iobroker# gpio
gpio: /lib/arm-linux-gnueabihf/libc.so.6: version `GLIBC_2.28′ not found (required by /usr/lib/libwiringPi.so)
Seems to be similar to this issue, using gpio in a container:
https://github.com/marthoc/docker-deconz/issues/153
Hi,
if you can tell me a way/ link how to install libc6 2.28 on Debian Stretch, I can check how we can get it into the image.
As my first tests with Debian Buster failed, it is actually no option to switch the base image.
Maybe someone already forked my image and switched the base image to buster???
Regards,
André
You might try this:
You will find libc6 2.30 (libc6_2.30-4_armhf.deb) in the unstable sid branch: https://packages.debian.org/sid/armhf/libc6/download
E.g. add deb http://ftp.de.debian.org/debian sid main to /etc/apt/sources.list
and execute apt-get update && apt-get upgrade libc6
Or just download from above link and install deb file via dpkg.
I tried to do that within the container itself, but it says that it was installed manually and cannot be installed because of that. Building your image might solve that using sid repo from beginning ?? Don't know ...
Searching within the database via https://packages.debian.org/search?suite=stretch&arch=armhf&mode=exactfilename&searchon=contents&keywords=libc6
I found this:
https://packages.debian.org/buster/libc6
https://packages.debian.org/stretch/libc6
https://packages.debian.org/bullseye/libc6
Seems that there is no way for glibc 2.28 on stretch.
Meanwhile I killed my rpi3 and started the same on a rpi2.
you will not believe, but adapter rpi2 did not logged any Errors anymore.
Seems to work but something is still not working.
I cannot see any changes to input on gpio readall even outside the container anymore.
Will have to create some safer hw test equipment to analyze,
my free floating experiment killed my rpi3 by touching 5V signal :-(
Just ordered one rpi4 instead :-)
regards
Michael
Hi,
base image was switched to Buster on April 14th.
Is there still an issue with glibc version?
Please check and give a short response.
Thank you.
Regards,
André
Hi,
i just checked ldd --version with "ldd (Debian GLIBC 2.28-10) 2.28" as result.
regards
Dominik
Thanks. Then I guess this is solved.
Regards,
André
| gharchive/issue | 2020-03-30T13:45:12 | 2025-04-01T04:33:41.063108 | {
"authors": [
"buanet",
"fzzybllz",
"mtk64"
],
"repo": "buanet/docker-iobroker",
"url": "https://github.com/buanet/docker-iobroker/issues/87",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
587495051 | HTTPS
Hi, how can I activate an HTTPS connection? VoIP can't be used via HTTP.
this image is based on nginx, so configure https as you would using nginx
http://nginx.org/en/docs/http/configuring_https_servers.html
https://medium.com/faun/setting-up-ssl-certificates-for-nginx-in-docker-environ-e7eec5ebb418
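For reference, a minimal server block of the kind those guides describe might look like this (server name and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name riot.example.com;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    root  /usr/share/nginx/html;
    index index.html;
}
```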
feel free to submit a pull request with documentation update on the README.md to enable this, ty
| gharchive/issue | 2020-03-25T07:25:18 | 2025-04-01T04:33:41.065443 | {
"authors": [
"bubuntux",
"saucompeng"
],
"repo": "bubuntux/docker-riot-web",
"url": "https://github.com/bubuntux/docker-riot-web/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
237723583 | Support merging multiple license xml files
When multiple projects are part of an overall project there should be functionality to merge multiple license xml files into one single xml file.
So for example, we should be able to specify a "product name" and then license.xml files for the projects that make up the project. The xml files would be in the format produced by our licenser tool.
| gharchive/issue | 2017-06-22T03:38:38 | 2025-04-01T04:33:41.067802 | {
"authors": [
"danbev"
],
"repo": "bucharest-gold/licenser",
"url": "https://github.com/bucharest-gold/licenser/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1962320 | server should log errors at startup
The server should log an error connecting to the XMPP server as a component and connecting to the database.
David has had issues getting it installed and a log file showing a failed DB or in his case a failed XMPP server connection would help him and other users get running faster.
Server logs stuff now.
| gharchive/issue | 2011-10-18T18:26:06 | 2025-04-01T04:33:41.685462 | {
"authors": [
"astro",
"imaginator"
],
"repo": "buddycloud/buddycloud-server",
"url": "https://github.com/buddycloud/buddycloud-server/issues/17",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2419583589 | Switch envoyproxy/envoy to sync semver releases
Fixes #591.
Verified that the previous manifest and new manifest only differ due to changes to the README:
--- envoy-manifest.old 2024-07-19 13:37:07.638810242 -0500
+++ envoy-manifest 2024-07-19 13:27:51.696435730 -0500
@@ -2,7 +2,7 @@
shake256:7a9a5a3a62ffe3acb50d67d1b0806566f798ac145649c493bf156a2e5d927e264464f5d3add734c687acde39586cc032795e8ea07f51f5bb2493fd0423db10c3 bazel/cc_proto_descriptor_library/testdata/test-extension.proto
shake256:9e8bd1676e9fcea1d02e7cd0ce63f87e6c76b4b31467ec3df1f4a580da427488a764f9e48617a5be98d85057c648f46ec1925b05cae4346b207210327283dd11 bazel/cc_proto_descriptor_library/testdata/test.proto
shake256:76295fd73d7f33a987c1243e8821818b80bd8b04634fc9ccd10879ab3c1ade926e3ef847f11d8e07ccf8fcf011561d151a5d4926505d6494a23119614e049aa6 bazel/cc_proto_descriptor_library/testdata/test1.proto
-shake256:60519d5eb5014a4213613b4fcec13fc263a14690057a005331a601a21dd4dcaf97e347e606e092692cccdbc9a47a171659a15fcfb900b6d707b3303b76a98a53 buf.md
+shake256:036e58836a23359ebf2324efe94106d38521bef2ad6dab6b46b0379dc66192e8f29097c96a9f1c3b299274ec10b47204d63d9e23a010eb5e1c5d4ed52f15c3f7 buf.md
shake256:f9abf7473dc3f95cc9ce2dabfabeedbf0f5fd808e1eb09ab07776ca3991ec073784ef8cb2f6df49a8293f1033141e29d687de39f506046b663b258728864f6b4 buf.yaml
shake256:7144b74045a5813f5f81b71eb11ca0ddac4105e3d20b51a146b634f7dc8c529de6fc84d2c4d3fb635eec48616f787eca10287f14ea10ec3d218e9562508be0ba contrib/envoy/extensions/compression/qatzip/compressor/v3alpha/qatzip.proto
shake256:ae38a03abf75ec63838de20438d1fdaf6ca49f6fcc9d701c4f918411027fcabfe08355fb24d7767074d9c440842bc86fd8788ad25e8c47e98782d6b5379f8b42 contrib/envoy/extensions/compression/qatzstd/compressor/v3alpha/qatzstd.proto
| gharchive/pull-request | 2024-07-19T18:26:10 | 2025-04-01T04:33:41.702940 | {
"authors": [
"pkwarren"
],
"repo": "bufbuild/modules",
"url": "https://github.com/bufbuild/modules/pull/639",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1119587171 | Create 0000-01-02-buggzgh.md
Add buggzgh's file
reopen
mmm
| gharchive/pull-request | 2022-01-31T15:34:14 | 2025-04-01T04:33:41.709149 | {
"authors": [
"buggzgh"
],
"repo": "buggzgh/github-slideshow",
"url": "https://github.com/buggzgh/github-slideshow/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1644134200 | dtb build error
cd dtb
sh make_dtb.sh
when all file download and unzipped, to build dtb has this error:
_
Error: linux-6.1.18/arch/arm64/boot/dts/rockchip/rockchip-pinconf.dtsi:7.2-3 syntax error
FATAL ERROR: Unable to parse input tree
os: Ubuntu 18.04 LTS
dtc 1.4.3
is the build system env not good ? Which os should i use to build dtb?
Just tested on a fresh Ubuntu 22.04 LTS - it seems to work as expected :)
$ sudo apt install -y device-tree-compiler gcc wget xz-utils
$ sh make_dtb.sh
[...]
build complete: rk3568-nanopi-r5.dtb
Thanks, I changed to Ubuntu 20.04 Focal, and it works well.
build complete: rk3568-nanopi-r5.dtb
| gharchive/issue | 2023-03-28T14:56:53 | 2025-04-01T04:33:41.712074 | {
"authors": [
"altmangood",
"buglloc"
],
"repo": "buglloc/nanopi-r5",
"url": "https://github.com/buglloc/nanopi-r5/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2235082577 | Update bugsnag-android from v5.32.2 to v6.4.0
Goal
Design
Changeset
Update bugsnag-android from v5.32.2 to v6.4.0
Testing
@djskinner are we expecting electron tests to be failing on the v8 branch?
@djskinner are we expecting electron tests to be failing on the v8 branch?
No, they are passing on the latest run of https://github.com/bugsnag/bugsnag-js/pull/2011
| gharchive/pull-request | 2024-04-10T08:53:38 | 2025-04-01T04:33:41.733884 | {
"authors": [
"djskinner",
"gingerbenw"
],
"repo": "bugsnag/bugsnag-js",
"url": "https://github.com/bugsnag/bugsnag-js/pull/2119",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1367714388 | 7.1.1 New introduced NUllreferenceException when Manually starting Bugsnag
Describe the bug
Hello,
SInce the app moved from 7.1.0 to 7.1.1 it seems there is some kind of lifecycle issue when starting bugsnag manually as described here :
https://docs.bugsnag.com/platforms/unity/#starting-bugsnag-manually
When you play the first time all is fine, but when stopping/restarting the Editor runtime, an increasing number of NullReferenceExceptions are raised:
NullReferenceException: Object reference not set to an instance of an object
BugsnagUnity.Bugsnag.SetApplicationState (System.Boolean inFocus) (at <3e7eadaa757a4a59b36981258f8053e4>:0)
BugsnagUnity.TimingTrackerBehaviour.OnApplicationPause (System.Boolean paused) (at <3e7eadaa757a4a59b36981258f8053e4>:0)
NullReferenceException: Object reference not set to an instance of an object
BugsnagUnity.Bugsnag.SetApplicationState (System.Boolean inFocus) (at <3e7eadaa757a4a59b36981258f8053e4>:0)
BugsnagUnity.TimingTrackerBehaviour.OnApplicationFocus (System.Boolean hasFocus) (at <3e7eadaa757a4a59b36981258f8053e4>:0)
Steps to reproduce
Download the sample project I provided (Or follow the guide in https://docs.bugsnag.com/platforms/unity/#starting-bugsnag-manually)
Start Bugsnag manually in script
Run the editor Play mode and then stop it
Start the Editor Play mode again and see the error
Repeating steps 3 and 4 leads to an increasing number of errors being thrown each time.
Environment
Bugsnag version:7.1.1
Unity version:2022.1.14f1
iOS/Android/macOS/Windows/browser version:Windows 11, in editor
simulator/emulator or physical device:In Editor
Initializing bugsnag via a Unity GameObject or in code?:In Code
Player Settings:
Scripting backend (Mono or IL2CPP):N/A
API compatibility level for .NET:2.1
Stack Trace level for all error types (None/ScriptOnly/Full):Full
I have included a minimal example taken from your repo example with an additional script that launches Bugsnag in code. I have removed my API key from the config, but the issue is present with and without a valid API key.
example.zip
Thanks for the report!
I'll be looking into this early next week 👍
Hi @JinFox
I just pushed a release fixing this.
If you update to version 7.2.0 then this issue will go away.
Please note that after updating you will need to restart unity to make the null refs go away.
Please let me know if you need any more info 👍
| gharchive/issue | 2022-09-09T11:45:42 | 2025-04-01T04:33:41.741014 | {
"authors": [
"JinFox",
"rich-bugsnag"
],
"repo": "bugsnag/bugsnag-unity",
"url": "https://github.com/bugsnag/bugsnag-unity/issues/616",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2660076061 | IVS-228 purepythonparser for critical gherkin checks and info extraction [WIP]
Doesn't fully work yet. Seems that whitespace within strings is discarded. So MVD and authoring are wrong.
WIP
Should be ready for review with https://github.com/IfcOpenShell/step-file-parser/commit/8a5349f57eadcbc280f8578cf5f6a1c006189533
I didn't sync any submodules... so checkout the submodule heads when reviewing
| gharchive/pull-request | 2024-11-14T21:16:01 | 2025-04-01T04:33:41.755015 | {
"authors": [
"aothms"
],
"repo": "buildingSMART/validate",
"url": "https://github.com/buildingSMART/validate/pull/127",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2210831383 | Allow patching pod spec from controller configuration and steps
This is our spin on @42atomys' excellent suggestion to allow applying a strategic merge patch of the PodSpec. It builds on their work in #262 but adds a few things we suggested in PR review.
Notably
Adds integration and unit tests
Added pod-spec-patch to the helm value file's JSON schema
Parsing of the user provided PodSpecPatch YAML as corev1.PodSpec
It will NOT be possible to set Command and Args on a container via a pod spec patch. This is because we've found that the interaction between a step's command: attribute and these containers fields is too complex to expose to users. Now, using pod spec patch, the steps's command: will be the only way to customise the process executed in a container. However, the existing sidecarContainers functionality is not affected by this, so for users that need additional containers for which they need to customise Command and Args, we recommend they use that instead.
Now instead of specifying a verbose PodSpec in every step, users can specify a common patch in their helm chart. The controller will take the command of a step and attempt to create a k8s Job around it. Further customisation of the PodSpec is supported by applying the patches. Thus, most of the complexity of the Kubernetes YAML can be shifted to the Helm chart's values rather than the Buildkite YAML. Any further step level customization should be performed with the step level podSpecPatch rather than specifying a step level podSpec.
Fixes: #247
Closes: #262
Reviewers should note that a significant detour had to be made in order to reuse the corev1.PodSpec go struct in parsing the config level podSpecPatch. The essence of the issue is that the library used to parse configuration, viper, does not support the struct tag json:",inline" because it relies on mapstructure, for which the equivalent struct tag is mapstructure:",squash". I've made a fork of mapstructure that rectifies this, and used a replace directive in go.mod to force viper to use the forked version of mapstructure. I plan on contributing the fork back upstream, so hopefully this is a temporary situation. There is a more detailed explanation in the commit message for https://github.com/buildkite/agent-stack-k8s/pull/282/commits/8f253f55b807acf309c112157a9e6df235c62552
I just wanted to say that this is a very good change, makes common infra configuration much easier 👍
| gharchive/pull-request | 2024-03-27T13:37:15 | 2025-04-01T04:33:41.761534 | {
"authors": [
"artem-zinnatullin",
"triarius"
],
"repo": "buildkite/agent-stack-k8s",
"url": "https://github.com/buildkite/agent-stack-k8s/pull/282",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
426762438 | Add checksums for artifactory uploaded artifacts
This adds checksums so that Artifactory doesn't complain in this screen:
There was some question on Slack that perhaps the SHA256 checksums actually caused problems with some versions of Artifactory.
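For context, the digests Artifactory verifies on upload (MD5 and SHA-1, plus SHA-256 on newer versions) are straightforward to compute. A minimal Python sketch follows; the agent itself is written in Go, and the helper name here is illustrative, not the agent's API:

```python
import hashlib

def artifact_checksums(data: bytes) -> dict:
    """Compute the digests Artifactory checks for an uploaded artifact."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

sums = artifact_checksums(b"hello")
print(sums["sha256"][:8])  # → 2cf24dba
```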
| gharchive/pull-request | 2019-03-29T00:27:16 | 2025-04-01T04:33:41.762843 | {
"authors": [
"lox"
],
"repo": "buildkite/agent",
"url": "https://github.com/buildkite/agent/pull/961",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2394552961 | Fix typo in environment variables table contents.
Addresses https://github.com/buildkite/docs/issues/2869.
Hi @yaningo ,
Thanks very much for spotting the issue mentioned in https://github.com/buildkite/docs/issues/2869.
Please feel free to approve this PR so that I can merge it in and resolve that issue.
Thanks @JuanitoFatas !
| gharchive/pull-request | 2024-07-08T03:50:58 | 2025-04-01T04:33:41.764713 | {
"authors": [
"gilesgas"
],
"repo": "buildkite/docs",
"url": "https://github.com/buildkite/docs/pull/2876",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1181435244 | Remove most webhook database reads from hot path
Webhook generation can be a remarkably large part of any given transaction. This removes most database reads from IssuingModel#generate_webhook by caching the listening endpoint lookup and pushing the serialization into a job.
@petekeen-cf I'm having trouble after merging this and enabling CI, so I'm going to have to revert both of these merges and regroup. Sorry for the trouble!
| gharchive/pull-request | 2022-03-26T01:20:24 | 2025-04-01T04:33:41.784668 | {
"authors": [
"andrewculver",
"peterkeen"
],
"repo": "bullet-train-co/bullet_train-outgoing_webhooks",
"url": "https://github.com/bullet-train-co/bullet_train-outgoing_webhooks/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1498712899 | [WIP] Create an example Scaffolding::CompletelyConcrete::SimpleSingleton resource
https://github.com/bullet-train-co/bullet_train/issues/532
@andrewculver Is this something you want to merge? It's not entirely clear to me what you're asking for in #532. I'm not sure if you're wanting to just see an example branch that you can point people to, or if it's something that should fully live in the starter repo.
| gharchive/pull-request | 2022-12-15T16:14:25 | 2025-04-01T04:33:41.785982 | {
"authors": [
"jagthedrummer",
"ps-ruby"
],
"repo": "bullet-train-co/bullet_train",
"url": "https://github.com/bullet-train-co/bullet_train/pull/543",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1668754518 | Filter by Location field
The field we would like to show within the OSCP as an additional search filter is titled “Location” (from the job record). The values are Remote, In-person, and Hybrid. How can we enable custom filtering using a custom field?
Hi
I am not sure the field is in the public API, so you can not do that with OSCP.
You need a solution that uses the Private (full) API
Self-promoting: Matador Jobs can do this with a click of a checkbox in the settings. We also have the option to add remote to the list of locations https://matadorjobs.com/
| gharchive/issue | 2023-04-14T18:14:52 | 2025-04-01T04:33:41.790799 | {
"authors": [
"leandroberg",
"pbearne"
],
"repo": "bullhorn/career-portal",
"url": "https://github.com/bullhorn/career-portal/issues/515",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1580847437 | 🛑 VODA-SA is down
In 7ebfa99, VODA-SA ($SECRET_SITE2) was down:
HTTP code: 403
Response time: 1110 ms
Resolved: VODA-SA is back up in 8bd2c86.
| gharchive/issue | 2023-02-11T13:41:10 | 2025-04-01T04:33:41.799813 | {
"authors": [
"buluma"
],
"repo": "buluma/uptime",
"url": "https://github.com/buluma/uptime/issues/543",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1906476081 | fix: typo
This is causing issues in nostr-wallet-connect-relayer as I couldn't use handleNIP11 function
it was not exposed in the original fork.
The original fork has developed further; at some point soon we will need to update to the new API of the original.
| gharchive/pull-request | 2023-09-21T09:03:33 | 2025-04-01T04:33:41.800904 | {
"authors": [
"bumi",
"im-adithya"
],
"repo": "bumi/relayer",
"url": "https://github.com/bumi/relayer/pull/1",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
954540154 | UnboundLocalError: local variable 'parsed' referenced before assignment
>>> de.fetch([48.51999, 9.07136], [48.51999, 9.07137])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ajung/src/deutschland/lib/python3.9/site-packages/deutschland/geo.py", line 81, in fetch
return parsed
UnboundLocalError: local variable 'parsed' referenced before assignment
Fixed in e1416af72144fdc482b7b49ba1f3753098857a03
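The failure mode in the traceback above (a `return` referencing a name that is only bound inside a conditional branch or loop body) can be sketched and fixed in a few lines. The `fetch` bodies below are a hypothetical reconstruction of the pattern, not the library's actual code:

```python
# Hypothetical sketch: `parsed` is only bound when the loop body runs,
# so an empty input reaches `return parsed` with the name unbound.

def fetch_buggy(records):
    for record in records:
        parsed = record.strip()  # only bound if the loop body executes
    return parsed  # UnboundLocalError when `records` is empty

def fetch_fixed(records):
    parsed = []  # bind the name up front so every code path can return it
    for record in records:
        parsed.append(record.strip())
    return parsed

try:
    fetch_buggy([])
except UnboundLocalError as exc:
    print(type(exc).__name__)  # → UnboundLocalError

print(fetch_fixed([" a ", "b "]))  # → ['a', 'b']
```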
| gharchive/issue | 2021-07-28T06:56:55 | 2025-04-01T04:33:41.836211 | {
"authors": [
"LilithWittmann",
"zopyx"
],
"repo": "bundesAPI/deutschland",
"url": "https://github.com/bundesAPI/deutschland/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
256220761 | Travis: jruby-9.1.13.0
This PR updates the CI matrix to use latest JRuby.
http://jruby.org/2017/09/06/jruby-9-1-13-0.html
Are you using gemstash with JRuby? I don't really understand the test failures for JRuby.
@indirect Can you review this change?
@olleolleolle this is good, thanks!
@bundlerbot r+
@bundlerbot retry
| gharchive/pull-request | 2017-09-08T11:11:39 | 2025-04-01T04:33:41.838505 | {
"authors": [
"HParker",
"indirect",
"olleolleolle"
],
"repo": "bundler/gemstash",
"url": "https://github.com/bundler/gemstash/pull/167",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
403151775 | React-native: Summary for complete support
I've got this lib working in React-Native, but in the process I ran into a few issues. Not sure if you are planning to support React-Native, in case you do:
What the issues were:
1) Haste modules: "Unable to resolve 'url' from /Users....'
It looks like this is actually a react-native metro bug, (stale react-native issue).
Workaround: yarn add url to your react-native project.
2) store node_module doesn't support React-Native, gives error at import
Actual error: "Undefined is not an object (Global.navigator ? Global.navigator.userAgent....)
When compiled to js, we are importing store in dist/BunqJSClient.js but the store module doesn't support React-Native.
Workaround: Instead of using import you can use require and set the navigator before you do
global.navigator = { userAgent: '' };
const BunqClient = require('@bunq-community/bunq-js-client').default
3) node-forge generateKeyPair is very slow as react-native doesn't have support for native window.crypto
It is now using the default JavaScript version, which is very slow, causing your app to freeze for a couple of seconds (5-10s).
Possible solution: For the React-Native developer using this lib, a good alternative can be a native crypto lib (like this). I tried to use your process.env.CI_PUBLIC_KEY_PEM and process.env.CI_PRIVATE_KEY_PEM (here) but kept getting a 400 response from bunq. Not sure why.
Edit: Got a little closer to solving the last one: I now see that you can't re-use public/private keys, which is what was causing the 400 response. Also, react-native-rsa-native is producing different-length public and private keys, not sure why (both use 2048 bit). Do you think this is related?
Sorry for the late response, I've been busy with my internship and other projects. I rewrote the constructor to only import the store library when no default is given as you suggested. And feel free to add a guide for react-native to the docs :+1:
The CI_PUBLIC_KEY_PEM environment variables are used for testing to prevent the 100+ tests from each creating their own private key which takes too long. It might be better if I change how that is done more properly but that'd be a breaking change so I'll wait with that for now.
To add to this, I added two store examples to the src/Stores/* directory for both localstorage and json-file storage to be used in the code so users don't have to write them themselves in most cases.
Feel free to add a react-native compatible version there
No worries, same here. Thanks for the changes!
I will add it this weekend, already got it written down. I will also add the react-native version of the store.
@Crecket @DannyvanderJagt Has this been added yet? I'm trying to get this library working in a React Native app.
I'm sorry, I completely forgot about this one. I just found the project on my computer, will test it to make sure it still works and then propose a PR and share a repo with the project.
@rauldeheer I created an example project and included a readme with steps to add BunqJSClient to your own project.
Note: For now use the BunqJSClient version 0.40.2, the newer versions are having an issue with a missing dns module. I will see if/how we can fix this and report back.
Currently away until Monday so I can check it out when I get back 👍
No hurry, have a nice weekend! 👍
Found the problem. We are using the module socks-proxy-agent here in RequestLimitFactory.ts. This one is using a few node-js modules which we can't use in react-native.
@Crecket Looking at the history you just added it, is it a requirement for bunq or a (soon-to-be) core part of this module? Otherwise, maybe we can make it optional (and lazy require the socket-proxy-agent) to allow for RN support. What do you think?
I'll look into making it a peer dependency or only loading it when a custom proxy is given.
The dependency is now only required when setting a custom proxy service so it should work again for your use-case
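The fix described here (only load the proxy dependency when a custom proxy is actually configured) is a lazy-import pattern. A small Python analogy follows; the JS client itself lazily `require`s `socks-proxy-agent`, and the `socks_agent` module name below is hypothetical:

```python
# Lazy-dependency pattern: the optional module is imported only when a
# proxy is actually configured, so environments that lack it (or lack the
# Node core modules it pulls in, as in React Native) still work.

def build_agent(proxy_url=None):
    if proxy_url is None:
        return None  # no proxy configured: the optional dependency is never imported
    import socks_agent  # hypothetical optional dependency, imported lazily
    return socks_agent.Agent(proxy_url)
```

Calling `build_agent()` with no proxy never touches the optional module; only the proxy-using path pays the import cost (or raises if the dependency is missing).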
@DannyvanderJagt Thank you for providing me with the example! I'll take a look at it.
@Crecket Thanks for the adjustment!
@rauldeheer You're welcome. If you have any questions, let me know.
I guess we can close this issue now. Glad it all worked out.
| gharchive/issue | 2019-01-25T13:11:43 | 2025-04-01T04:33:41.854859 | {
"authors": [
"Crecket",
"DannyvanderJagt",
"rauldeheer"
],
"repo": "bunqCommunity/bunqJSClient",
"url": "https://github.com/bunqCommunity/bunqJSClient/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2382100617 | 🛑 doubledouble is down
In f13c85b, doubledouble (https://doubledouble.top/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: doubledouble is back up in bf8aa46 after 40 minutes.
| gharchive/issue | 2024-06-30T05:59:42 | 2025-04-01T04:33:41.857898 | {
"authors": [
"bunt4021"
],
"repo": "bunt4021/dubstatus",
"url": "https://github.com/bunt4021/dubstatus/issues/551",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2252620670 | Question: how to update the production database
How do I update the production database gracefully?
I deploy with docker compose; the server only has docker compose.
I build and package the project locally, push the image to an image registry, and then pull and deploy it on the server.
Some table fields have been changed in the business logic, and I want to sync those changes to production.
I've seen the migration commands and methods, but I'm a bit confused.
The migration files are generated in the src folder, so they are carried into production at deploy time.
How should I trigger these migration files? Do I have to exec into the container and run commands, or is there a more elegant way?
Humbly asking for advice.
🤔 The simplest approach: just run the database migration once before each container start.
https://github.com/buqiyuan/nest-admin/blob/b85d4e262b22f2156550d75a586c40694a02a39a/Dockerfile#L54
- ENTRYPOINT ./wait-for-it.sh $DB_HOST:$DB_PORT -- pm2-runtime ecosystem.config.js
+ ENTRYPOINT ./wait-for-it.sh $DB_HOST:$DB_PORT -- pnpm migration:run && pm2-runtime ecosystem.config.js
Is that single step enough? I think I understand now; I'll go try it.
One more thing:
I don't understand why this operation is performed after every task run:
@OnQueueCompleted()
onCompleted(job: Job) {
this.taskService.updateTaskCompleteStatus(job.data.id)
}
In the implementation of updateTaskCompleteStatus, if a job in the queue has an execution time earlier than the current time, the whole task is stopped.
This causes tasks to stop abnormally on their own quite often when I trigger scheduled tasks at high frequency.
I removed this logic in my production environment, but I'm still puzzled: stopping the task outright here may not be a good approach, or maybe I'm just not getting the author's point.
I don't understand why this operation is performed after every task run.
File path:
src\modules\system\task\task.processor.ts
| gharchive/issue | 2024-04-19T10:21:16 | 2025-04-01T04:33:41.863165 | {
"authors": [
"buqiyuan",
"eamd-wq"
],
"repo": "buqiyuan/nest-admin",
"url": "https://github.com/buqiyuan/nest-admin/issues/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
945229441 | npm run build :fail
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! vite-vue3-lowcode@0.0.1 build: cross-env vite build
npm ERR! Exit status 1
I'm not sure about your error; I suggest using yarn.
| gharchive/issue | 2021-07-15T10:20:44 | 2025-04-01T04:33:41.864696 | {
"authors": [
"buqiyuan",
"cybibo"
],
"repo": "buqiyuan/vite-vue3-lowcode",
"url": "https://github.com/buqiyuan/vite-vue3-lowcode/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
203038739 | Real image instead of nearest neighbor interpolation
Hi,
In your code you used a nearest-neighbor-interpolated version of the small image instead of the true real image during training. What is the reason for doing that?
Thanks
Sorry that was a mistake.
| gharchive/issue | 2017-01-25T08:20:17 | 2025-04-01T04:33:41.865754 | {
"authors": [
"islamtashfiq"
],
"repo": "buriburisuri/SRGAN",
"url": "https://github.com/buriburisuri/SRGAN/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
221877973 | Improve thread synchronization
Extract unsafe methods and wrap them with safe methods
Implemented as part of #50
| gharchive/issue | 2017-04-14T19:49:25 | 2025-04-01T04:33:41.877975 | {
"authors": [
"burzyk"
],
"repo": "burzyk/ShakaDB",
"url": "https://github.com/burzyk/ShakaDB/issues/52",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1045659098 | 🛑 SMR Vinay Cascade B Block is down
In b8dc02c, SMR Vinay Cascade B Block (https://cloud.tymly.in/status/1093) was down:
HTTP code: 521
Response time: 20207 ms
Resolved: SMR Vinay Cascade B Block is back up in 6c1d887.
| gharchive/issue | 2021-11-05T10:12:54 | 2025-04-01T04:33:41.912718 | {
"authors": [
"bvenkysubbu"
],
"repo": "bvenkysubbu/tymlymonitor",
"url": "https://github.com/bvenkysubbu/tymlymonitor/issues/11338",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1096376635 | 🛑 Trifecta Esplanade Multipurpose Hall is down
In 1463a02, Trifecta Esplanade Multipurpose Hall (https://cloud.tymly.in/status/1169) was down:
HTTP code: 521
Response time: 228 ms
Resolved: Trifecta Esplanade Multipurpose Hall is back up in 845386c.
| gharchive/issue | 2022-01-07T14:40:21 | 2025-04-01T04:33:41.915111 | {
"authors": [
"bvenkysubbu"
],
"repo": "bvenkysubbu/tymlymonitor",
"url": "https://github.com/bvenkysubbu/tymlymonitor/issues/15996",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1169193212 | 🛑 Casa Gopalan Block A Lift is down
In bb44fbd, Casa Gopalan Block A Lift (https://cloud.tymly.in/status/1006) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Casa Gopalan Block A Lift is back up in a9d17e9.
| gharchive/issue | 2022-03-15T05:09:25 | 2025-04-01T04:33:41.917483 | {
"authors": [
"bvenkysubbu"
],
"repo": "bvenkysubbu/tymlymonitor",
"url": "https://github.com/bvenkysubbu/tymlymonitor/issues/20045",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1181570404 | 🛑 PSR Aster 1 is down
In adb71b2, PSR Aster 1 (https://cloud.tymly.in/status/1059) was down:
HTTP code: 521
Response time: 19532 ms
Resolved: PSR Aster 1 is back up in 181354b.
| gharchive/issue | 2022-03-26T07:21:21 | 2025-04-01T04:33:41.920005 | {
"authors": [
"bvenkysubbu"
],
"repo": "bvenkysubbu/tymlymonitor",
"url": "https://github.com/bvenkysubbu/tymlymonitor/issues/20667",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1231115013 | 🛑 Madhuban Brindavan B Block is down
In f40a19c, Madhuban Brindavan B Block (https://cloud.tymly.in/status/1136) was down:
HTTP code: 521
Response time: 223 ms
Resolved: Madhuban Brindavan B Block is back up in 23f6392.
| gharchive/issue | 2022-05-10T12:57:00 | 2025-04-01T04:33:41.922423 | {
"authors": [
"bvenkysubbu"
],
"repo": "bvenkysubbu/tymlymonitor",
"url": "https://github.com/bvenkysubbu/tymlymonitor/issues/23265",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1240967090 | 🛑 PSR Aster 3 E Block is down
In da632ce, PSR Aster 3 E Block (https://cloud.tymly.in/status/1064) was down:
HTTP code: 521
Response time: 216 ms
Resolved: PSR Aster 3 E Block is back up in c4e65c9.
| gharchive/issue | 2022-05-19T01:56:35 | 2025-04-01T04:33:41.924735 | {
"authors": [
"bvenkysubbu"
],
"repo": "bvenkysubbu/tymlymonitor",
"url": "https://github.com/bvenkysubbu/tymlymonitor/issues/23784",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1251743980 | 🛑 Amrutha Value is down
In 102fc8f, Amrutha Value (https://cloud.tymly.in/status/1010) was down:
HTTP code: 521
Response time: 233 ms
Resolved: Amrutha Value is back up in 0d2caac.
| gharchive/issue | 2022-05-28T23:24:39 | 2025-04-01T04:33:41.927031 | {
"authors": [
"bvenkysubbu"
],
"repo": "bvenkysubbu/tymlymonitor",
"url": "https://github.com/bvenkysubbu/tymlymonitor/issues/24314",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1254196209 | 🛑 SMR Vinay Cascade B Block is down
In dc8d089, SMR Vinay Cascade B Block (https://cloud.tymly.in/status/1093) was down:
HTTP code: 521
Response time: 20228 ms
Resolved: SMR Vinay Cascade B Block is back up in bb8b323.
| gharchive/issue | 2022-05-31T17:51:13 | 2025-04-01T04:33:41.929333 | {
"authors": [
"bvenkysubbu"
],
"repo": "bvenkysubbu/tymlymonitor",
"url": "https://github.com/bvenkysubbu/tymlymonitor/issues/24441",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |