| id (string) | text (string) | source (string) | created (timestamp) | added (timestamp) | metadata (dict) |
|---|---|---|---|---|---|
1829085698 | OOP school library: Decorate a class
Pull Request Summary:
Title: Implement Abstract Nameable Class and Derived Decorators
Description:
In this pull request, I have introduced significant enhancements to the codebase, focusing on modularity and flexible behaviour using abstract classes and decorators.
Abstract Nameable Class:
Created the Nameable class, serving as an abstract class with an abstract method correct_name.
The correct_name method raises a NotImplementedError, enforcing that subclasses must implement this method, ensuring consistency in behaviour across derived classes.
Person Class:
The Person class now inherits from the abstract Nameable class, providing the required implementation for the correct_name method.
The constructor of the Person class has been updated to allow setting age, parent_permission, and name attributes.
The can_use_services? method in the Person class returns true if the person is of age (>= 18) or has parental permission.
Decorator Pattern:
Introduced the concept of decorators to add functionalities dynamically to objects.
Created two decorator classes:
capitalize_decorator: Capitalizes the output of correct_name from the wrapped Nameable object.
trimmer_decorator: Trims the output of correct_name to a maximum of 10 characters.
The implementation of abstract classes and decorators allows for more extensibility and reusability in the code. By creating decorators, we can easily add new behaviour to Nameable objects without modifying their original code, promoting better separation of concerns.
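For illustration, a minimal Python sketch of the pattern described above (the project itself is Ruby; class and method names mirror the PR description and are illustrative, not the actual source):

class Nameable:
    # Abstract base: subclasses must implement correct_name.
    def correct_name(self):
        raise NotImplementedError

class Person(Nameable):
    def __init__(self, name, age, parent_permission=True):
        self.name = name
        self.age = age
        self.parent_permission = parent_permission

    def correct_name(self):
        return self.name

    def can_use_services(self):
        return self.age >= 18 or self.parent_permission

class CapitalizeDecorator(Nameable):
    def __init__(self, nameable):
        self.nameable = nameable

    def correct_name(self):
        return self.nameable.correct_name().capitalize()

class TrimmerDecorator(Nameable):
    def __init__(self, nameable):
        self.nameable = nameable

    def correct_name(self):
        # Trim the decorated name to a maximum of 10 characters.
        return self.nameable.correct_name()[:10]

Because decorators compose, TrimmerDecorator(CapitalizeDecorator(Person("maximilianus", 10))).correct_name() returns "Maximilian" without touching the Person class.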
Please review and merge this pull request to incorporate these improvements into the main codebase.
Hi, @julie-ify. Thanks for approving the PR😊.
| gharchive/pull-request | 2023-07-31T12:29:02 | 2025-04-01T06:44:29.888855 | {
"authors": [
"iamsjunaid"
],
"repo": "iamsjunaid/OOP-school-library",
"url": "https://github.com/iamsjunaid/OOP-school-library/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1863945130 | 🛑 Hacker News is down
In 7e5d0f1, Hacker News (https://news.ycombinator.com) was down:
HTTP code: 502
Response time: 336 ms
Resolved: Hacker News is back up in a257201 after 1,108 days, 12 hours, 26 minutes.
| gharchive/issue | 2023-08-23T20:10:06 | 2025-04-01T06:44:29.915656 | {
"authors": [
"iamthecloverly"
],
"repo": "iamthecloverly/monitor",
"url": "https://github.com/iamthecloverly/monitor/issues/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1237819038 | 🛑 mi-core is down
In 4cddc60, mi-core (https://www.mi-core.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: mi-core is back up in 65f4edc.
| gharchive/issue | 2022-05-16T22:41:59 | 2025-04-01T06:44:29.918003 | {
"authors": [
"ian4hu"
],
"repo": "ian4hu/mi-core-uptime",
"url": "https://github.com/ian4hu/mi-core-uptime/issues/327",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
709580690 | -remote command not automatically adding item to cart
Testing with a 2060 the -remote command doesn't add the item automatically to my cart anymore. A link is sent to my phone but after clicking it takes me to an empty cart page. Testing the clerk without the -remote command on my computer adds the 2060 to my cart as intended.
What region / version of clerk / phone OS / notification delivery system are you using? I'm using an iPhone XS Max on iOS 14, Discord notifications, and the -remote tag with the 2060 works, adding the card directly to my cart on my phone or computer.
If you are on android, you would need to turn off web link previews in your messenger settings.
> What region / version of clerk / phone OS / notification delivery system are you using? I'm using an iPhone XS Max on iOS 14, Discord notifications, and the -remote tag with the 2060 works, adding the card directly to my cart on my phone or computer.
Latest version of Clerk (I've tried older versions too), iPhone X with iOS 14, USA region, Twilio SMS service
Running command: nvidia-clerk-windows.exe -region=USA -model=2060 -sms -remote
The link works for me on Android if my default browser is set to Samsung, but not Chrome
> The link works for me on Android if my default browser is set to Samsung, but not Chrome
Hmm, I'm using Safari on iOS. But I swear it was working a couple of days ago.
@shaunster80 You need to turn off link previewing
https://osxdaily.com/2018/08/02/disable-url-link-previews-messages-ios-mac/
^^^ This is one way we could do it, but I believe there's a way to do it iOS-wide (it doesn't do it on my iOS 14 device)
> @shaunster80 You need to turn off link previewing
> https://osxdaily.com/2018/08/02/disable-url-link-previews-messages-ios-mac/
> ^^^ This is one way we could do it, but I believe there's a way to do it iOS-wide (it doesn't do it on my iOS 14 device)
Ahh.. ok I think I'm getting somewhere now thanks Ian and everyone. So I can get the item into my cart now but it looks like I can only use the link once. If I remove the item and then try clicking the link again it doesn't add the item again. Seems to be a one shot deal per link. This is fine though, I'll only ever need one shot.
Yep, I just updated the URL to automatically include periods to escape it on iOS.
Resolved in the latest release.
| gharchive/issue | 2020-09-26T17:48:22 | 2025-04-01T06:44:29.926194 | {
"authors": [
"Qerlak",
"ianmarmour",
"rjprevost",
"shaunster80",
"thenearl"
],
"repo": "ianmarmour/nvidia-clerk",
"url": "https://github.com/ianmarmour/nvidia-clerk/issues/125",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1775091032 | 🛑 Cairde is down
In 7dd05f2, Cairde (https://www.cairde.ie) was down:
HTTP code: 500
Response time: 487 ms
Resolved: Cairde is back up in 615a7e5.
| gharchive/issue | 2023-06-26T15:49:16 | 2025-04-01T06:44:29.928728 | {
"authors": [
"ianmyles"
],
"repo": "ianmyles/uptime",
"url": "https://github.com/ianmyles/uptime/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
178320740 | add regularization, dropout and batch norm?
Has anybody got loss lower than ~2? I've tried a couple of configurations (default, 3 and 4 stacks of 10 dilation layers), but the loss does not get lower, suggesting the network is not learning anymore.
Also, here is what happened after ~30k steps:
I believe this is the same problem as reported in #30. Here is what happens with the weights:
Now running the same network with l2 norm regularization added.
And one more note: training just stops after 44256 steps (already happened twice) without any warnings or errors, despite num_steps=50000
I've observed the same things. I looked at the code to see what might be hanging and didn't find any red flags. I thought the hang might be related to my setup: CUDA 8.0rc (required for Pascal support), cuDNN 5.1, and tensorflow built from source (git master from 9/20)
The hanging is probably caused by the background audio processing crashing. (Especially if the CPU/GPU are idle once it stops).
Usually, there should be a backtrace that can help us find the reason it crashed.
Which commit did you observe the problem with?
There was a bug where we simply stopped processing audio once we've seen every file once.
It might be that you're on an older commit that had this problem.
I've been trying to find a solution to the gradient jumping to large values at large step numbers, but don't have any amazing solutions at the moment.
It seems to be related to the ReLU activations in the last few layers of the network.
I've tried clipping the gradients, which didn't have an effect on this problem.
Replacing the ReLU activations with Tanh seems to fix it completely, but the network doesn't converge quite as quickly as with ReLU.
@ibab I'm experiencing the stalling with the latest commit.
@r-zemblys if you resume training at the checkpoint right before the gradient implosion with a lower learning rate, does it still behave the same?
@lelayf I've used a learning rate of 0.01 to get the loss curve above. The train saver only stores the last 5 checkpoints, so I'm not able to try lowering the learning rate right before the gradient implosion.
@ibab I was indeed using an older commit. The latest one does not have the stalling problem.
Here is the loss curve with l2 regularization added; orange - learning rate 0.01 (~20k steps), blue - 0.001 (~60k steps)
The gradient implosion problem is gone, but it seems the network is not learning anymore after the first epoch. Will try to generate some audio later today.
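For reference, a minimal sketch of the usual way L2 regularization is added to a TensorFlow 1.x-style loss; the coefficient and the reconstruction_loss argument are illustrative, not taken from this repo:

import tensorflow as tf

def regularized_loss(reconstruction_loss, l2_coeff=1e-4):
    # Penalize all trainable weights except biases.
    l2_term = l2_coeff * tf.add_n(
        [tf.nn.l2_loss(v) for v in tf.trainable_variables()
         if "bias" not in v.name])
    return reconstruction_loss + l2_term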
@r-zemblys are you training on GPU or CPU?
Here are 80k generated samples, primed with an 8k-sample audio clip from another database.
generated_l2_primed.wav.zip
Soundwave looks reasonably OK (green - generated audio)
Notes:
used af4c58e
trained for ~20k steps with learning rate of 0.01 and continued for ~60k steps with 0.001
@lelayf it is TitanX GPU I'm using
used l2 regularization
disabled silence trimming because of #59
there was a bug in WaveNet.decode, which resulted in all-zeros output. I think the bug is still there in fc5417d
@r-zemblys: Excellent, did you use the default wavenet_params.json?
I've also linked some of my results in #47.
Forgot to add, this is the configuration I've used:
{
"filter_width": 2,
"quantization_steps": 256,
"sample_rate": 16000,
"dilations": [1, 2, 4, 8, 16, 32, 64, 128, 256, 512,
1, 2, 4, 8, 16, 32, 64, 128, 256, 512,
1, 2, 4, 8, 16, 32, 64, 128, 256, 512],
"residual_channels": 32,
"dilation_channels": 16,
"use_biases": false
}
But as I mentioned at the beginning, there is no difference (at least in the loss curve) when using the default configuration.
@r-zemblys: Did you train on the entire dataset, or a specific speaker?
@ibab: entire VCTK corpus. And then primed generation with a recording from LibriSpeech ASR corpus.
That's very cool. I think mixing together all different speakers explains the voice difference between your sample and mine.
Would you be interested in contributing the l2 regularization in a pull request?
I'm using Python 2.7 and, as r-zemblys mentioned above ("..there was a bug in WaveNet.decode, which resulted in all-zeros output"), I obtained a generated.wav file with all zeros.
After fixing the last line of "wavenet_ops.py" as below, I am now getting a speech-like waveform output.
magnitude = (1 / mu) * ((1 + mu)**abs(signal) - 1)
--> magnitude = (1. / mu) * ((1. + mu)**abs(signal) - 1)
Hope someone reflects this in the code if necessary.
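For context, here is the mu-law expansion being fixed, as a standalone NumPy sketch with explicit float literals (this assumes the encoded signal is normalized to [-1, 1]; it is not the repo's exact code):

import numpy as np

def mu_law_expand(signal, mu=255):
    # Under Python 2, 1 / mu is integer division and evaluates to 0,
    # silently zeroing every decoded sample; 1. / mu avoids that.
    return np.sign(signal) * (1. / mu) * ((1. + mu) ** np.abs(signal) - 1)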
@hoonyoung: This should be fixed on master now. I've also enabled travis to run the test with Python 2.
I commented out silence trimming and now training does not stall anymore, using 88e77bf.
| gharchive/issue | 2016-09-21T11:34:17 | 2025-04-01T06:44:29.967569 | {
"authors": [
"dnuffer",
"hoonyoung",
"ibab",
"lelayf",
"r-zemblys"
],
"repo": "ibab/tensorflow-wavenet",
"url": "https://github.com/ibab/tensorflow-wavenet/issues/65",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1556245353 | 🛑 CLA assistant is down
In 952ecd9, CLA assistant (https://cla-assistant.io) was down:
HTTP code: 0
Response time: 0 ms
Resolved: CLA assistant is back up in 54b0ce5.
| gharchive/issue | 2023-01-25T08:30:39 | 2025-04-01T06:44:29.970522 | {
"authors": [
"ibakshay"
],
"repo": "ibakshay/test-uptime-2",
"url": "https://github.com/ibakshay/test-uptime-2/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1170981840 | Remove prefixed info (like package name) from element description in XSD
For example for https://frankdoc.frankframework.org/#!/Pipes/BytesOutputPipe the code completion shows:
BytesOutputPipe - nl.nn.adapterframework.pipes.BytesOutputPipe used as Pipe Output bytes as specified by the input XML.
This is very confusing. In my opinion there's no need to add "BytesOutputPipe - nl.nn.adapterframework.pipes.BytesOutputPipe used as Pipe" at the beginning of the text. Please remove it.
Why is that relevant for the end user? When you look at the screenshots below, the XmlInputValidator/Pipe part is already mentioned on the left side. That it is an XmlValidator used as a Validator/Pipe can also be deduced from the word XmlInputValidator/Pipe on the left side. So this all seems rather redundant information that distracts from the most important info, namely that this pipe can be used to validate against an XML Schema
| gharchive/issue | 2022-03-16T12:52:01 | 2025-04-01T06:44:30.020692 | {
"authors": [
"jacodg"
],
"repo": "ibissource/frank-doc",
"url": "https://github.com/ibissource/frank-doc/issues/89",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1086960585 | Create codeql-analysis.yml
I got the idea to add some code quality analyzers, since we are invested in nice code anyway.
GitHub suggested its (beta) code analysis tool.
This is a pretty basic config, but with push to master and the cron job turned off. I only really care about pull requests. Kodiak will keep the branches up to date anyway.
If it's only going to give these java errors, there won't be much use for it.
| gharchive/pull-request | 2021-12-22T15:52:35 | 2025-04-01T06:44:30.022596 | {
"authors": [
"philipsens"
],
"repo": "ibissource/frank-flow",
"url": "https://github.com/ibissource/frank-flow/pull/376",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
205019752 | Wrong formatting for bar chart display
Posted on SO: http://stackoverflow.com/questions/41990549/dsx-images-generated-by-pixiedust-display-command-are-ugly
Enhancements have been made to provide better formatting. Please upgrade to v0.83 or higher to validate the changes.
| gharchive/issue | 2017-02-02T22:45:15 | 2025-04-01T06:44:30.030867 | {
"authors": [
"DTAIEB"
],
"repo": "ibm-cds-labs/pixiedust",
"url": "https://github.com/ibm-cds-labs/pixiedust/issues/77",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2441148947 | 🔩 Ticket #12345 - Create Azure Virtual Network from Backstage.
Creating Azure Virtual Network in stack mybob54abc
This is an initial pull request to create an Azure Virtual Network with suffix 003 in stack mybob54abc and was created based on the Backstage template.
If you need to add more parameters, check the official documentation - https://github.com/claranet/terraform-azurerm-vnet
created by: Backstage Software Template 👷♂️⚙️👷♀️
Terraform Format and Style 🖌success
Terraform Initialization ⚙️success
Terraform Validation 🤖Success! The configuration is valid.
Terraform Plan 📖success
Show Plan
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# module.azure_virtual_network_12345.azurerm_virtual_network.main_vnet will be created
+ resource "azurerm_virtual_network" "main_vnet" {
+ address_space = [
+ "10.2.0.0/16",
]
+ dns_servers = []
+ guid = (known after apply)
+ id = (known after apply)
+ location = "eastus"
+ name = "vnet-mybob54abc-client-use-prod-001"
+ resource_group_name = "rg-mybob54abc-client-prod"
+ subnet = (known after apply)
+ tags = {
+ "CostCenter" = "00000"
+ "Criticality" = "High"
+ "Entity" = "UK"
+ "Owner" = "Bob Tayara"
+ "env" = "prod"
+ "stack" = "mybob54abc"
}
}
# module.rg.azurerm_resource_group.main_rg will be created
+ resource "azurerm_resource_group" "main_rg" {
+ id = (known after apply)
+ location = "eastus"
+ name = "rg-mybob54abc-client-prod"
+ tags = {
+ "env" = "prod"
+ "stack" = "mybob54abc"
}
}
Plan: 2 to add, 0 to change, 0 to destroy.
Pusher: @ibrt2016, Action: pull_request, Workflow: Terraform Plan
| gharchive/pull-request | 2024-07-31T23:58:29 | 2025-04-01T06:44:30.054322 | {
"authors": [
"ibrt2016"
],
"repo": "ibrt2016/terraform-azure-web-mybob54abc",
"url": "https://github.com/ibrt2016/terraform-azure-web-mybob54abc/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1290570555 | 🛑 Icarephone is down
In 07eba72, Icarephone (https://www.icarephone.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Icarephone is back up in 80b2b2a.
| gharchive/issue | 2022-06-30T19:51:20 | 2025-04-01T06:44:30.058040 | {
"authors": [
"sqeven"
],
"repo": "icarephone/upptime",
"url": "https://github.com/icarephone/upptime/issues/3894",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1712356201 | default export GSF
Make GlobalSplineFit importable via from simweights import GlobalSplineFit5Comp
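For context, re-exporting the class at the package root is the whole change in shape; a sketch (the submodule name here is an assumption, not simweights' actual layout):

# src/simweights/__init__.py
from .spline_flux import GlobalSplineFit5Comp  # submodule name assumed

__all__ = ["GlobalSplineFit5Comp"]  # appended to the existing exports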
Codecov Report
Patch and project coverage have no change.
Comparison is base (0efbbbd) 100.00% compared to head (f3a4bb1) 100.00%.
:exclamation: Your organization is not using the GitHub App Integration. As a result you may experience degraded service beginning May 15th. Please install the Github App Integration for your organization. Read more.
Additional details and impacted files
@@ Coverage Diff @@
## main #11 +/- ##
=========================================
Coverage 100.00% 100.00%
=========================================
Files 12 12
Lines 693 693
=========================================
Hits 693 693
Impacted Files
Coverage Δ
src/simweights/__init__.py
100.00% <ø> (ø)
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
| gharchive/pull-request | 2023-05-16T16:31:10 | 2025-04-01T06:44:30.108785 | {
"authors": [
"The-Ludwig",
"codecov-commenter"
],
"repo": "icecube/simweights",
"url": "https://github.com/icecube/simweights/pull/11",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
877560467 | Search is not working when, I use value as Id Int inside Value DropdownMenuItem
SearchableDropdown.single(
key: key,
items: list.asMap().entries.map((item) {
return new DropdownMenuItem(
child: Text(item.value['name']), value: item.value['id']);
}).toList(),
isExpanded: true,
searchHint: new Text(
'Select ',
style: new TextStyle(fontSize: 20),
),
onChanged: (value) {},
style: TextStyleWidget.build(),
),
Hello, I faced the same problem and I used item.value['id'] + "*" + item.value['name']
and split to get the id
| gharchive/issue | 2021-05-06T14:29:25 | 2025-04-01T06:44:30.134182 | {
"authors": [
"EslamAbotaleb",
"muhammedozdemir"
],
"repo": "icemanbsi/searchable_dropdown",
"url": "https://github.com/icemanbsi/searchable_dropdown/issues/133",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2729084439 | 🛑 Cravings For Green is down
In ab66e38, Cravings For Green (https://cravingsforgreen.nl/) was down:
HTTP code: 500
Response time: 10440 ms
Resolved: Cravings For Green is back up in b013fa1 after 12 minutes.
| gharchive/issue | 2024-12-10T05:48:02 | 2025-04-01T06:44:30.154126 | {
"authors": [
"icheered"
],
"repo": "icheered/uptime",
"url": "https://github.com/icheered/uptime/issues/225",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1210185034 | How install on WSL ?
When I try on WSL I get:
$ go get -u github.com/ichiban/prolog
package embed: unrecognized import path "embed" (import path does not begin with hostname)
package io/fs: unrecognized import path "io/fs" (import path does not begin with hostname)
$
@Jean-Luc-Picard-2021 Hi! Both embed and io/fs are introduced in go1.16. Could you run $ go version and see if it's newer than 1.16, please? If it's older, upgrading Go will solve the problem.
Yes, could be the issue, I am only using go1.10.
I'm using WSL2 (Ubuntu 20) and sudo apt install golang will get a working version.
$ go version
go version go1.18.1 linux/amd64
The interpreter itself works great on Windows, but it seems like 1pl can't pick up keyboard input from the Windows terminal. I might take a crack at fixing that.
| gharchive/issue | 2022-04-20T20:35:18 | 2025-04-01T06:44:30.161302 | {
"authors": [
"Jean-Luc-Picard-2021",
"guregu",
"ichiban"
],
"repo": "ichiban/prolog",
"url": "https://github.com/ichiban/prolog/issues/204",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1006296816 | Add H2BeamTimeout directive
Add the H2BeamTimeout directive in order to set a separate beam timeout
per request. If not set, it defaults to the timeout value of the virtual
host.
mod_http2/h2_config.c: Setup H2BeamTimeout config directive handling.
mod_http2/h2_config.h: Setup H2BeamTimeout config directive handling.
mod_http2/mod_http2.c: Set the beam timeout in the fixup hook.
Yes, this would work. Would call it H2StreamTimeout maybe, as users are not really aware what a 'beam' is.
I do not yet understand the scenario where this helps. As long as no reads from input or writes to output happen, beam timeouts have no effect.
The example you mentioned was a slow backend system. So, mod_proxy forwards the request to it and then does a blocking read waiting for the response. There, only ProxyTimeout applies, right? When it then finally gets something, and writes to the output beam, the beam timeout starts on a block.
This stream output will time out when the client is really slow (and maybe other streams with higher priority come first and that takes a long time). Is this the scenario you'd like to fix?
> Yes, this would work. Would call it H2StreamTimeout maybe, as users are not really aware what a 'beam' is.
Sweet naming discussion :-). But yes, this would be fine for me.
> I do not yet understand the scenario where this helps. As long as no reads from input or writes to output happen, beam timeouts have no effect.
> The example you mentioned was a slow backend system. So, mod_proxy forwards the request to it and then does a blocking read waiting for the response. There, only ProxyTimeout applies, right? When it then finally gets something, and writes to the output beam, the beam timeout starts on a block.
> This stream output will time out when the client is really slow (and maybe other streams with higher priority come first and that takes a long time). Is this the scenario you'd like to fix?
Yes. In my scenario I have a slow client, or rather one with high latency. In this case the 'one TCP connection' of HTTP/2 is a bit of a drawback, as multiple TCP connections would probably have higher throughput. I am aware of the tuning possibilities the TCP stack offers with regard to buffers here. But in my specific case this tuning is not easily possible for some reasons. Hence I want an additional possibility to increase the 'patience' on getting the response data back on the TCP connection without the need to increase the Timeout value.
> Yes, this would work. Would call it H2StreamTimeout maybe, as users are not really aware what a 'beam' is.
> Sweet naming discussion :-). But yes, this would be fine for me.
;-)
> Yes. In my scenario I have a slow client, or rather one with high latency. In this case the 'one TCP connection' of HTTP/2 is a bit of a drawback, as multiple TCP connections would probably have higher throughput. I am aware of the tuning possibilities the TCP stack offers with regard to buffers here. But in my specific case this tuning is not easily possible for some reasons. Hence I want an additional possibility to increase the 'patience' on getting the response data back on the TCP connection without the need to increase the Timeout value.
I think the correct and most simple approach here would be to have no timeouts on stream output. Imagine a client opens two streams 1 and 3. 1 has priority over 3 and is a 1GB file. stream 3 will not get 'space' on the main connection for some time. It would be tricky to set the 'correct' timeout value.
OTOH, this would give opportunities for DoS exploits. There is some protection in h2, as not all streams opened are always scheduled for processing. Maybe that could take such blocked streams better into account.
Another scenario would be a response consisting of a large file and some footer. The file bucket would get immediately pulled through the beam and the write of the footer would then hang until the file is sent on the main connection. Not a common scenario, but to illuminate that blocking on streams can be more tricky.
Hmm.
I think H2StreamTimeout can be a stop-gap until we have a good feeling about a more self-adjusting approach here (and then ignore the setting in the future).
WDYT?
Commenting myself: this is nevertheless the right configuration, even if we later do not apply it to the beam directly, but consider more the overall stream timing.
> I think H2StreamTimeout can be a stop-gap until we have a good feeling about a more self-adjusting approach here (and then ignore the setting in the future).
Self-adjusting in the future sounds good, but I guess we would still need to consider the value someone set via H2StreamTimeout. But I agree that the default should then change from the server Timeout to self-adjusting, and maybe we should add a value of auto or something like that for H2StreamTimeout to get it back to self-adjusting.
BTW: Thanks for merging. Any timetable for when you want to bring icing/pipes to trunk?
> BTW: Thanks for merging. Any timetable for when you want to bring icing/pipes to trunk?
Probably next week, it runs stable in my testing for some time now and I would like to give people here more opportunity for trying it out.
| gharchive/pull-request | 2021-09-24T10:02:55 | 2025-04-01T06:44:30.175860 | {
"authors": [
"icing",
"rpluem"
],
"repo": "icing/mod_h2",
"url": "https://github.com/icing/mod_h2/pull/221",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1672973425 | feat: implement bindport
Description:
As a developer, I want to implement the "bindport" functionality for the "port" module, so that I can allow clients to bind to a particular port and establish a connection with a corresponding port on another chain.
Commit Message
feat: implement bindport
see the guidelines for commit messages.
Checklist:
[x] I have performed a self-review of my own code
[x] I have documented my code in accordance with the documentation guidelines
[x] My changes generate no new warnings
[x] I have added tests that prove my fix is effective or that my feature works
[x] I have run the unit tests
[ ] I only have one commit (if not, squash them into one commit).
[x] I have a descriptive commit message that adheres to the commit message guidelines
Please review the CONTRIBUTING.md file for detailed contributing guidelines.
Codecov Report
Merging #296 (899efb9) into main (28e24b3) will increase coverage by 0.08%.
The diff coverage is 100.00%.
:exclamation: Current head 899efb9 differs from pull request most recent head e6e4b2e. Consider uploading reports for the commit e6e4b2e to get more accurate results
@@ Coverage Diff @@
## main #296 +/- ##
==========================================
+ Coverage 46.98% 47.07% +0.08%
==========================================
Files 94 94
Lines 15263 15283 +20
==========================================
+ Hits 7171 7194 +23
+ Misses 8092 8089 -3
Impacted Files
Coverage Δ
...cts/cosmwasm-vm/cw-ibc-core/src/ics05_port/port.rs
88.31% <100.00%> (+4.10%)
:arrow_up:
... and 1 file with indirect coverage changes
Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here.
| gharchive/pull-request | 2023-04-18T12:02:37 | 2025-04-01T06:44:30.204530 | {
"authors": [
"codecov-commenter",
"shanithkk"
],
"repo": "icon-project/IBC-Integration",
"url": "https://github.com/icon-project/IBC-Integration/pull/296",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1849595525 | docs: add doc for icon light client
Description: Added a doc stating the deviations for the Icon LightClient
Commit Message
docs: add doc for icon light client
see the guidelines for commit messages.
Changelog Entry
version: <log entry>
Checklist:
[x] I have performed a self-review of my own code
[x] I have documented my code in accordance with the documentation guidelines
[x] My changes generate no new warnings
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] I have run the unit tests
[x] I only have one commit (if not, squash them into one commit).
[x] I have a descriptive commit message that adheres to the commit message guidelines
Please review the CONTRIBUTING.md file for detailed contributing guidelines.
Codecov Report
Merging #626 (5ca1a3d) into main (d75ae87) will increase coverage by 0.70%.
Report is 5 commits behind head on main.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #626 +/- ##
============================================
+ Coverage 68.32% 69.02% +0.70%
Complexity 408 408
============================================
Files 129 151 +22
Lines 12626 14016 +1390
Branches 294 294
============================================
+ Hits 8627 9675 +1048
- Misses 3843 4185 +342
Partials 156 156
Flag
Coverage Δ
rust
67.17% <ø> (+1.07%)
:arrow_up:
Flags with carried forward coverage won't be shown. Click here to find out more.
see 29 files with indirect coverage changes
| gharchive/pull-request | 2023-08-14T11:31:16 | 2025-04-01T06:44:30.214352 | {
"authors": [
"codecov-commenter",
"rupeshkarna"
],
"repo": "icon-project/IBC-Integration",
"url": "https://github.com/icon-project/IBC-Integration/pull/626",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1185778492 | Can it support .net4.6?
Steps to reproduce
Expected behavior
Tell us what should happen
Actual behavior
Tell us what happens instead
Version of SharpZipLib
Obtained from (only keep the relevant lines)
Compiled from source, commit: _______
Downloaded from GitHub
Package installed using NuGet
As per ICSharpCode.SharpZipLib.csproj, this is targeting .NET Standard 2.0, 2.1, and .NET Framework 4.5. Per the .NET Standard documentation, .NET Framework 4.6.1 and 4.6.2 can use .NET Standard 2.0 assemblies.
https://devblogs.microsoft.com/dotnet/net-framework-4-5-2-4-6-4-6-1-will-reach-end-of-support-on-april-26-2022/
| gharchive/issue | 2022-03-30T02:32:30 | 2025-04-01T06:44:30.227025 | {
"authors": [
"christophwille",
"david-beckman",
"xmzzy"
],
"repo": "icsharpcode/SharpZipLib",
"url": "https://github.com/icsharpcode/SharpZipLib/issues/738",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1919988561 | Parallelize Schedule Load Websoc Requests
Summary
Saves a bunch of time on schedule loading by making PPAPI requests for different terms concurrently.
Before:
After:
Test Plan
Verify with network inspector that requests are being done concurrently, and also that nothing breaks.
Issues
Closes #712
Blazing
| gharchive/pull-request | 2023-09-29T22:46:33 | 2025-04-01T06:44:30.229354 | {
"authors": [
"EricPedley",
"MinhxNguyen7"
],
"repo": "icssc/AntAlmanac",
"url": "https://github.com/icssc/AntAlmanac/pull/713",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
155107119 | Cast body to a string for regex - fixes #424
This just ensures that when we go to sanitize the body for logging, it is actually a string
Thanks. Will release soon. In the meanwhile you can use master.
Released as 1.1.2. It will be available on PyPi soon.
| gharchive/pull-request | 2016-05-16T19:51:26 | 2025-04-01T06:44:30.282001 | {
"authors": [
"parryjacob",
"thedrow"
],
"repo": "idan/oauthlib",
"url": "https://github.com/idan/oauthlib/pull/425",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2600839152 | feat: Upgrade Nu version to 0.99
The flag parsing was using ShellError::UnsupportedConfigValue, which was removed in nu-protocol 0.99. I've changed those instances to ShellError::InvalidValue, which is more correct as the flags are not config values.
Thanks!
| gharchive/pull-request | 2024-10-20T20:10:34 | 2025-04-01T06:44:30.283184 | {
"authors": [
"aftix",
"idanarye"
],
"repo": "idanarye/nu_plugin_skim",
"url": "https://github.com/idanarye/nu_plugin_skim/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2189455625 | match round can be negative
I got this response from match index get request:
[
{
"match": {
"id": 358485473,
"tournament_id": 14250816,
"state": "open",
"player1_id": 226125204,
"player2_id": 226125203,
"player1_prereq_match_id": null,
"player2_prereq_match_id": null,
"player1_is_prereq_match_loser": false,
"player2_is_prereq_match_loser": false,
"winner_id": null,
"loser_id": null,
"started_at": "2024-03-15T12:38:49.140-07:00",
"created_at": "2024-03-15T12:38:49.052-07:00",
"updated_at": "2024-03-15T12:38:49.140-07:00",
"identifier": "A",
"has_attachment": false,
"round": 1,
"player1_votes": null,
"player2_votes": null,
"group_id": null,
"attachment_count": null,
"scheduled_time": null,
"location": null,
"underway_at": null,
"optional": false,
"rushb_id": null,
"completed_at": null,
"suggested_play_order": 1,
"forfeited": null,
"open_graph_image_file_name": null,
"open_graph_image_content_type": null,
"open_graph_image_file_size": null,
"prerequisite_match_ids_csv": "",
"scores_csv": ""
}
},
{
"match": {
"id": 358485474,
"tournament_id": 14250816,
"state": "pending",
"player1_id": 226125205,
"player2_id": null,
"player1_prereq_match_id": null,
"player2_prereq_match_id": 358485473,
"player1_is_prereq_match_loser": false,
"player2_is_prereq_match_loser": false,
"winner_id": null,
"loser_id": null,
"started_at": null,
"created_at": "2024-03-15T12:38:49.057-07:00",
"updated_at": "2024-03-15T12:38:49.057-07:00",
"identifier": "B",
"has_attachment": false,
"round": 2,
"player1_votes": null,
"player2_votes": null,
"group_id": null,
"attachment_count": null,
"scheduled_time": null,
"location": null,
"underway_at": null,
"optional": false,
"rushb_id": null,
"completed_at": null,
"suggested_play_order": 2,
"forfeited": null,
"open_graph_image_file_name": null,
"open_graph_image_content_type": null,
"open_graph_image_file_size": null,
"prerequisite_match_ids_csv": "358485473",
"scores_csv": ""
}
},
{
"match": {
"id": 358485475,
"tournament_id": 14250816,
"state": "pending",
"player1_id": null,
"player2_id": null,
"player1_prereq_match_id": 358485474,
"player2_prereq_match_id": 358485473,
"player1_is_prereq_match_loser": true,
"player2_is_prereq_match_loser": true,
"winner_id": null,
"loser_id": null,
"started_at": null,
"created_at": "2024-03-15T12:38:49.063-07:00",
"updated_at": "2024-03-15T12:38:49.063-07:00",
"identifier": "E",
"has_attachment": false,
"round": -1,
"player1_votes": null,
"player2_votes": null,
"group_id": null,
"attachment_count": null,
"scheduled_time": null,
"location": null,
"underway_at": null,
"optional": false,
"rushb_id": null,
"completed_at": null,
"suggested_play_order": 3,
"forfeited": null,
"open_graph_image_file_name": null,
"open_graph_image_content_type": null,
"open_graph_image_file_size": null,
"prerequisite_match_ids_csv": "358485474,358485473",
"scores_csv": ""
}
},
{
"match": {
"id": 358485476,
"tournament_id": 14250816,
"state": "pending",
"player1_id": null,
"player2_id": null,
"player1_prereq_match_id": 358485474,
"player2_prereq_match_id": 358485475,
"player1_is_prereq_match_loser": false,
"player2_is_prereq_match_loser": false,
"winner_id": null,
"loser_id": null,
"started_at": null,
"created_at": "2024-03-15T12:38:49.070-07:00",
"updated_at": "2024-03-15T12:38:49.070-07:00",
"identifier": "C",
"has_attachment": false,
"round": 3,
"player1_votes": null,
"player2_votes": null,
"group_id": null,
"attachment_count": null,
"scheduled_time": null,
"location": null,
"underway_at": null,
"optional": false,
"rushb_id": null,
"completed_at": null,
"suggested_play_order": 4,
"forfeited": null,
"open_graph_image_file_name": null,
"open_graph_image_content_type": null,
"open_graph_image_file_size": null,
"prerequisite_match_ids_csv": "358485474,358485475",
"scores_csv": ""
}
},
{
"match": {
"id": 358485477,
"tournament_id": 14250816,
"state": "pending",
"player1_id": null,
"player2_id": null,
"player1_prereq_match_id": 358485476,
"player2_prereq_match_id": 358485476,
"player1_is_prereq_match_loser": false,
"player2_is_prereq_match_loser": true,
"winner_id": null,
"loser_id": null,
"started_at": null,
"created_at": "2024-03-15T12:38:49.076-07:00",
"updated_at": "2024-03-15T12:38:49.076-07:00",
"identifier": "D",
"has_attachment": false,
"round": 3,
"player1_votes": null,
"player2_votes": null,
"group_id": null,
"attachment_count": null,
"scheduled_time": null,
"location": null,
"underway_at": null,
"optional": false,
"rushb_id": null,
"completed_at": null,
"suggested_play_order": 5,
"forfeited": null,
"open_graph_image_file_name": null,
"open_graph_image_content_type": null,
"open_graph_image_file_size": null,
"prerequisite_match_ids_csv": "358485476",
"scores_csv": ""
}
}
]
However, it is stored in a struct as a u64: https://github.com/iddm/challonge-rs/blob/7262d145d7f7fafe2ca7d57c5e86927d179e9da9/src/matches.rs#L223C1-L225C20
This causes a crash:
thread 'main' panicked at /Users/tommy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/challonge-0.5.4/src/matches.rs:280:55:
called `Option::unwrap()` on a `None` value
Thank you for reporting the issue! I certainly didn't expect the round to be negative. Should be easily fixable though! Just need to change the type in the struct, I think. I'll check tomorrow.
| gharchive/issue | 2024-03-15T20:54:12 | 2025-04-01T06:44:30.292366 | {
"authors": [
"iddm",
"tommy-mor"
],
"repo": "iddm/challonge-rs",
"url": "https://github.com/iddm/challonge-rs/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
288222820 | Set the server name
The server name is displayed in the header, next to the logo.
It probably will need to be something like "RFI Planète Radio", I guess.
You'll need to run the following command in the playbook, after installing the irfi package and running the migrations:
python manage.py config set server site-name "RFI Planète Radio"
The domain name will be used during configuration; then they'll use it as the radio name
| gharchive/issue | 2018-01-12T19:30:18 | 2025-04-01T06:44:30.297712 | {
"authors": [
"bochecha",
"fheslouin"
],
"repo": "ideascube/ansiblecube",
"url": "https://github.com/ideascube/ansiblecube/issues/166",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
205987984 | More informative logs
So at the moment we get this
Warning...this SF won't be able to be matched to an Uppercase SF
This is very difficult to debug: which SF doesn't match what? What are the consequences?
Closing as part of archiving process.
| gharchive/issue | 2017-02-07T19:22:02 | 2025-04-01T06:44:30.320180 | {
"authors": [
"mal",
"tgalery"
],
"repo": "idio/spotlight-model-editor",
"url": "https://github.com/idio/spotlight-model-editor/issues/22",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
343416726 | international targeting is missing
check webmaster: https://www.google.com/webmasters/tools/i18n?hl=en&siteUrl=https://oopsreview.com/&tid=herrors
solution: https://yoast.com/hreflang-ultimate-guide/
| gharchive/issue | 2018-07-22T15:51:15 | 2025-04-01T06:44:30.326312 | {
"authors": [
"yussan"
],
"repo": "idmore/oopsreview-web",
"url": "https://github.com/idmore/oopsreview-web/issues/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1560288308 | The "Mark as action taken" button not working - liaison
Describe the issue
Hi,
We received a ticket from Scott Mansfield that mentions that the "Mark as action taken" button on this liaison: https://datatracker.ietf.org/liaison/1798/ does not appear to do anything. I have also tried clicking the button and it doesn't appear to work; I also tried it in Sandbox and nothing happened.
Thanks,
Jenny
Code of Conduct
[X] I agree to follow the IETF's Code of Conduct
Hi Tools Team,
Happy Friday!
Scott Mansfield mentioned that nothing appeared to happen when he clicked on the "Mark as action taken" button, but it does work for me now when I click on it.
Jenny
On Jan 30, 2023, at 8:02 AM, Robert Sparks wrote:
Closed #5042 as completed via #5053.
Hi Jenny - this was deployed last Tuesday. I'll reach out to Scott to see why he's still having trouble.
| gharchive/issue | 2023-01-27T19:22:06 | 2025-04-01T06:44:30.362284 | {
"authors": [
"jennybui1",
"rjsparks"
],
"repo": "ietf-tools/datatracker",
"url": "https://github.com/ietf-tools/datatracker/issues/5042",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1308413891 | feat(api): graphql endpoint for querying meeting data
Proof of concept
Must migrate to Django 4 first. Current graphql libraries for django require v3-4+.
Graphene is not really being actively maintained. Strawberry is the suggested alternative.
Closing in favor of the feat/strawberry branch.
| gharchive/pull-request | 2022-07-18T19:23:20 | 2025-04-01T06:44:30.363797 | {
"authors": [
"NGPixel"
],
"repo": "ietf-tools/datatracker",
"url": "https://github.com/ietf-tools/datatracker/pull/4227",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
451092620 | Interacting with a smart contract
Good afternoon. Could you tell me how to correctly call a smart contract function that requires sending money? Thank you.
https://github.com/iexbase/tron-api-python/blob/master/tronapi/transactionbuilder.py#L508
Thanks. Can you tell me why the code specified in the docs does not work?
event_result = tron.trx.get_event_result('TGEJj8eus46QMHPgWQe1FJ2ymBXRm96fn1', 0, 'Notify')
AttributeError: 'Trx' object has no attribute 'get_event_result'
> https://github.com/iexbase/tron-api-python/blob/master/tronapi/transactionbuilder.py#L508
Pls add this to examples.
I also have a problem, have you solved the problem?
@sitecreate
can you share the code that works for you? Thanks
I also have a problem; has anyone solved it?
| gharchive/issue | 2019-06-01T14:46:30 | 2025-04-01T06:44:30.398542 | {
"authors": [
"Cashik",
"GalymzhanAbdimanap",
"luchec",
"serderovsh",
"sitecreate",
"smartDev22"
],
"repo": "iexbase/tron-api-python",
"url": "https://github.com/iexbase/tron-api-python/issues/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
308093917 | ifm3d trace bails out when there are too many logs on the camera
If the amount of logs on the camera is large, the XML-RPC calls time out.
fm3d@73891fb04ef5:~/build$ ifm3d trace
ifm3d error: -100001
Lib: XMLRPC Timeout - can you `ping' the sensor?
A workaround for this is to limit the number of logs, for example to the last 100 log messages:
ifm3d trace --limit=100
This is related to the underlying xmlrpc-c library
This is still an open issue
| gharchive/issue | 2018-03-23T16:19:18 | 2025-04-01T06:44:30.402502 | {
"authors": [
"graugans"
],
"repo": "ifm/ifm3d",
"url": "https://github.com/ifm/ifm3d/issues/51",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1087547716 | Return status code 300 Multiple Choice when find has multiple candidates
This differentiates it from status code 200 when it returns content directly
https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/300
The HTTP 300 Multiple Choices redirect status response code indicates that the request has more than one possible responses. The user-agent or the user should choose one of them. As there is no standardized way of choosing one of the responses, this response code is very rarely used.
Makes sense. But if we switch it to a search parameter then 200 is probably better. Let's wait for Zarf's evaluation.
| gharchive/pull-request | 2021-12-23T09:45:43 | 2025-04-01T06:44:30.408449 | {
"authors": [
"curiousdannii",
"dfabulich"
],
"repo": "iftechfoundation/ifarchive-unbox",
"url": "https://github.com/iftechfoundation/ifarchive-unbox/pull/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2011993893 | DNS resolving has broken on latest release
Hey,
Been using your image for quite a few months with no issues. This morning I updated to the latest version and now all DNS resolution seems to have broken.
I confirmed this by removing your container and running just the plain WG-Easy image, and everything works fine. When I run your image, all pings slow to a crawl from the host machine and any clients connected to adwireguard can no longer resolve any URLs.
Not sure what exactly is going on, I tried rebuilding my resolv.conf and re-applying NetworkManager.conf with no luck.
Running on Ubuntu 22.04.2 LTS with Docker.
https://github.com/iganeshk/adwireguard-dark/issues/7#issuecomment-1806265165
about to update the readme!
| gharchive/issue | 2023-11-27T10:46:41 | 2025-04-01T06:44:30.410510 | {
"authors": [
"iganeshk",
"uncapped1599"
],
"repo": "iganeshk/adwireguard-dark",
"url": "https://github.com/iganeshk/adwireguard-dark/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1161020250 | [BE] - Contacts CRUD
[x] Add a contact
[x] Add a contact to a card
[x] Rename a contact
[ ] Edit a contact
[ ] Move a contact to another card
[ ] Unlink a contact from a card
[x] Add a custom field (like in cards) [#8] + take custom fields into account when fetching contacts
Items 4, 5, and 6 are no longer relevant
| gharchive/issue | 2022-03-07T07:44:47 | 2025-04-01T06:44:30.421797 | {
"authors": [
"Ocultus",
"ignavan39"
],
"repo": "ignavan39/ucrm-go",
"url": "https://github.com/ignavan39/ucrm-go/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2393051689 | Sync Engine Support
Feature Request: Support for Synchronous Database Sessions
Is your feature request related to a problem?
No, this is a new feature proposal.
Proposed Solution
I'd like to discuss options for integrating support for synchronous database sessions. I'm open to implementing this myself, but I'd appreciate opinions on the best approach. Here are some potential solutions I've considered:
Separate Synchronous Classes
Create new SyncFastCrud class and sync_crud_router instance
Drawback: This would introduce significant code redundancy
Dual Function Approach (Similar to Langchain)
Introduce separate functions for sync and async operations
Example: create for sync and acreate for async
Implementation: Check if the passed db is a Session or AsyncSession
Artificial Session Wrapper
Create a wrapper to convert sync Sessions into AsyncSessions
Use run_in_threadpool (which FastAPI would do internally for the previous two proposals)
I'm open to other ideas and would appreciate any feedback or alternative suggestions.
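To make the second option concrete, a rough sketch of type-based dispatch with SQLAlchemy sessions (purely illustrative, not FastCRUD's actual API):

from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

async def get(db, model, **filters):
    # One entry point, two code paths, chosen by the session type.
    stmt = select(model).filter_by(**filters)
    if isinstance(db, AsyncSession):
        result = await db.execute(stmt)
    else:  # plain sqlalchemy.orm.Session
        result = db.execute(stmt)
    return result.scalars().all()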
I like this feature. I need to think a bit about the possibilities before discussing a best approach
Yes, I agree. I think the most "FastAPI" solution would be to keep the entire sync process synchronous, as FastAPI will automatically run that in the thread pool and will ensure that no extra "magic" is occurring under the hood.
@igorbenav Any news on this? Thanks!
| gharchive/issue | 2024-07-05T19:28:01 | 2025-04-01T06:44:30.434515 | {
"authors": [
"VDuchauffour",
"igorbenav",
"kdcokenny"
],
"repo": "igorbenav/fastcrud",
"url": "https://github.com/igorbenav/fastcrud/issues/122",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
739369328 | Load a config file on startup
Nice plugin, thank you. It works!
I was wondering if you persist the configuration of all codes, delays, and urls somewhere in a folder (firefox profile?!), so that a config file (your example config file, e.g.) could be loaded on startup of the firefox browser.
Thank you.
I am not 100% certain ... I'll have to check,
but if I had to guess, local file access on startup does not seem like a good idea.
But I'll check.
Later.
(And sorry for the late reply)
| gharchive/issue | 2020-11-09T21:17:28 | 2025-04-01T06:44:30.458162 | {
"authors": [
"igorlogius",
"mazlo"
],
"repo": "igorlogius/automate-click",
"url": "https://github.com/igorlogius/automate-click/issues/1",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2212819854 | Allow specifying source URLs
Is your feature request related to a problem? Please describe.
The extension's UI currently allows the user to only specify regex destination URLs for temporary containers. I would like to specify regex source URLs.
Describe the solution you'd like
Please allow the user to specify regex source and/or destination URLs for which to open temporary containers.
For example, let's say site example.com has a bunch of links. Let's also say you want each of those links to open in its own unique temporary container (a unique temporary container for each link). In this example, example.com is the source URL.
Describe alternatives you've considered
Using the Temporary Containers extension.
Additional context
Thank you!
https://github.com/igorlogius/open-in-temp-container/assets/67047467/777bec43-11cf-4a4e-9a5b-501a13629570
| gharchive/issue | 2024-03-28T10:05:10 | 2025-04-01T06:44:30.602899 | {
"authors": [
"Gitoffthelawn",
"igorlogius"
],
"repo": "igorlogius/open-in-temp-container",
"url": "https://github.com/igorlogius/open-in-temp-container/issues/15",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
264278488 | Add Japanese Localization
src/resources/lang
I'll do this.
| gharchive/issue | 2017-10-10T15:42:50 | 2025-04-01T06:44:30.625138 | {
"authors": [
"Braunson",
"igoshev"
],
"repo": "igoshev/laravel-captcha",
"url": "https://github.com/igoshev/laravel-captcha/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
90343500 | Flat style badge does not work
Flat style for GA beacon badge does not work as stated in #19.
It's interesting because the code appears correct. It seems like the server is just not up to date.
</facepalm>... my bad guys; the default version was still set to the old release. Should be fixed now.
| gharchive/issue | 2015-06-23T10:05:32 | 2025-04-01T06:44:30.653812 | {
"authors": [
"catdad",
"igrigorik",
"leodido"
],
"repo": "igrigorik/ga-beacon",
"url": "https://github.com/igrigorik/ga-beacon/issues/27",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
278058096 | Object has no attribute album_art
Python version: 3.5.2
Bandcamp-dl version: 0.0.9.dev0
Bandcamp-dl options: --base-dir ~ --template "%{track} - %{artist} - %{title} [%{album}]" --overwrite --group --embed-art --no-slugify $url
url: https://blanckmass.bandcamp.com/album/white-math-polymorph
options:
Describe the issue:
With --embed-art:
Track list incomplete, some tracks may be private, download anyway? (yes/no): yes
Starting download process.
Traceback (most recent call last):
File "/usr/local/bin/bandcamp-dl", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python3.5/dist-packages/bandcamp_dl/__main__.py", line 106, in main
bandcamp_downloader.start(album)
File "/usr/local/lib/python3.5/dist-packages/bandcamp_dl/bandcampdownloader.py", line 57, in start
self.download_album(album)
File "/usr/local/lib/python3.5/dist-packages/bandcamp_dl/bandcampdownloader.py", line 209, in download_album
os.remove(self.album_art)
AttributeError: 'BandcampDownloader' object has no attribute 'album_art'
Without --embed-art:
Track list incomplete, some tracks may be private, download anyway? (yes/no): yes
Starting download process.
And nothing gets downloaded.
They are all private tracks, so it makes sense that nothing gets downloaded, though the error could be more helpful.
I will at some point in the near future squash the error and have it return something useful in such a situation.
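For illustration, the kind of guard that would avoid this AttributeError (class, method, and attribute names are taken from the traceback; the surrounding logic is simplified):

import os

class BandcampDownloader:
    def __init__(self):
        # Only set once album art has actually been downloaded.
        self.album_art = None

    def download_album(self, album):
        ...
        # Clean up the temporary art file only if it was created.
        if self.album_art and os.path.isfile(self.album_art):
            os.remove(self.album_art)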
| gharchive/issue | 2017-11-30T09:51:21 | 2025-04-01T06:44:30.663465 | {
"authors": [
"Evolution0",
"gpchelkin"
],
"repo": "iheanyi/bandcamp-dl",
"url": "https://github.com/iheanyi/bandcamp-dl/issues/137",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
2279259358 | Is there a way to automatically watch Douyin livestreams?
I really like one particular streamer.
I'm hoping for a feature that, when they go live, automatically farms the fan badge by watching the livestream for 30 minutes.
If you don't even watch, how can you say you like them?
| gharchive/issue | 2024-05-05T00:40:11 | 2025-04-01T06:44:30.665557 | {
"authors": [
"ZhangHAHA122",
"garyvalue"
],
"repo": "ihmily/DouyinLiveRecorder",
"url": "https://github.com/ihmily/DouyinLiveRecorder/issues/340",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1516606405 | Fix skeleton modifier bug + add ManifestRef changes
Fixes a dumb bug with the skeleton modifier (not including the "before" text in the output when doing an insertion) and adds manifest reference changeset to allow extra properties.
Fixes #152
Again I have local branching funtimes - the only important commits here are the last 3 (changes to modify_skeleton.py and skeleton.py)
| gharchive/pull-request | 2023-01-02T17:52:52 | 2025-04-01T06:44:30.709204 | {
"authors": [
"digitaldogsbody"
],
"repo": "iiif-prezi/iiif-prezi3",
"url": "https://github.com/iiif-prezi/iiif-prezi3/pull/154",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
380744604 | The instant-upload (rapid upload) file API has been updated
slice-md5 can no longer be omitted, so features like export and fixmd5 no longer work
Ah, that's terrible. Does that mean the rapid-upload feature can't be used anymore?
Is slice-md5 part of the block_list? Does the whole thing need to be included?
"block_list": [
"8da0ac878f3702c0768dc6ea6820d3ff",
"3c1eb99b0e64993f38cd8317788a8855"
]
No, it isn't
| gharchive/issue | 2018-11-14T15:14:36 | 2025-04-01T06:44:30.711433 | {
"authors": [
"iikira",
"isaac850904"
],
"repo": "iikira/BaiduPCS-Go",
"url": "https://github.com/iikira/BaiduPCS-Go/issues/537",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
393660307 | Login error on Ubuntu: Forwarding failure
BaiduPCS-Go-v3.5.6-linux-amd64版本
Error code: -1, message: network request failed, Post https://wappass.baidu.com/wp/api/login: Forwarding failure
The account and password were entered correctly; could you help me figure out what might be causing this?
Operating system version:
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.10
Release: 18.10
Codename: cosmic
# uname -a
Linux xxxx-book 4.18.0-12-generic #13-Ubuntu SMP Wed Nov 14 15:17:05 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
A network problem? Or is an invalid proxy enabled?
Hmm, embarrassing... I had a global proxy set in my environment variables.
Sorry for the trouble, everyone!
| gharchive/issue | 2018-12-22T09:53:44 | 2025-04-01T06:44:30.714145 | {
"authors": [
"toddlerya"
],
"repo": "iikira/BaiduPCS-Go",
"url": "https://github.com/iikira/BaiduPCS-Go/issues/599",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
156461912 | static.duoshuo.com/embed.js request fails
Expected behavior
The Duoshuo plugin loads normally
Actual behavior
Sometimes it fails to load
Steps to reproduce the behavior
When not using HTTPS, embed.js fails to load and returns a 403
This is not a hexo-theme-next issue
| gharchive/issue | 2016-05-24T09:33:51 | 2025-04-01T06:44:30.716187 | {
"authors": [
"873314461",
"djyde"
],
"repo": "iissnan/hexo-theme-next",
"url": "https://github.com/iissnan/hexo-theme-next/issues/908",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1560439197 | "HeadlessChrome" in User-Agent forces downgrade to es2015
Whenever the User-Agent header contains "HeadlessChrome", the build target "es2015" will be selected, regardless of the specified version:
$ curl -s -H "User-Agent: HeadlessChrome/80" https://esm.sh/react@18.2.0
/* esm.sh - react@18.2.0 */
export * from "https://esm.sh/stable/react@18.2.0/es2015/react.js";
export { default } from "https://esm.sh/stable/react@18.2.0/es2015/react.js";
$ curl -s -H "User-Agent: HeadlessChrome/109" https://esm.sh/react@18.2.0
/* esm.sh - react@18.2.0 */
export * from "https://esm.sh/stable/react@18.2.0/es2015/react.js";
export { default } from "https://esm.sh/stable/react@18.2.0/es2015/react.js";
$ curl -s -H "User-Agent: Chrome/80" https://esm.sh/react@18.2.0
/* esm.sh - react@18.2.0 */
export * from "https://esm.sh/stable/react@18.2.0/es2021/react.js";
export { default } from "https://esm.sh/stable/react@18.2.0/es2021/react.js";
$ curl -s -H "User-Agent: Chrome/109" https://esm.sh/react@18.2.0
/* esm.sh - react@18.2.0 */
export * from "https://esm.sh/stable/react@18.2.0/es2022/react.js";
export { default } from "https://esm.sh/stable/react@18.2.0/es2022/react.js";
$ curl -s -H "User-Agent: HeadlessSomething/109" https://esm.sh/react@18.2.0
/* esm.sh - react@18.2.0 */
export * from "https://esm.sh/stable/react@18.2.0/esnext/react.js";
export { default } from "https://esm.sh/stable/react@18.2.0/esnext/react.js";
Please note that react is an arbitrary example. The issue was first encountered with @observablehq/stdlib, where the target causes the build to fail:
$ curl -s -H "User-Agent: HeadlessChrome/109" https://esm.sh/@observablehq/stdlib@5.3.2
/* esm.sh - error */
throw new Error("[esm.sh] " + "esbuild: Transforming async generator functions to the configured target environment (\"es2015\") is not supported yet");
export default null;
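For illustration, here is one plausible shape of such a bug, sketched in Python (esm.sh itself is written in Go; this is not its actual code): headless user agents are pinned to a fixed conservative target instead of being resolved by version.
TARGETS_BY_CHROME_VERSION = {80: "es2021", 109: "es2022"}  # reduced table

def resolve_target(user_agent: str) -> str:
    if "HeadlessChrome" in user_agent:
        return "es2015"  # fixed target, ignoring the real browser version
    if "Chrome/" in user_agent:
        version = int(user_agent.split("Chrome/")[1].split(".")[0])
        eligible = [v for v in TARGETS_BY_CHROME_VERSION if v <= version]
        return TARGETS_BY_CHROME_VERSION[max(eligible)] if eligible else "es2015"
    return "esnext"  # unknown agents get the newest target

# Reproduces the observations above: HeadlessChrome/80 and /109 -> es2015,
# Chrome/80 -> es2021, Chrome/109 -> es2022, HeadlessSomething/109 -> esnext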
fixed, thanks
Thank you!
| gharchive/issue | 2023-01-27T21:12:31 | 2025-04-01T06:44:30.719632 | {
"authors": [
"ije",
"mootari"
],
"repo": "ije/esm.sh",
"url": "https://github.com/ije/esm.sh/issues/509",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
823051720 | fix: handle invalid JSON response
Sometimes a server will respond with a "content-type": "application/json" header, but sends text or invalid JSON in the body.
In this case I was receiving an error from talkback:
Error handling request SyntaxError: Unexpected token e in JSON at position 0
This PR tries to parse the response as JSON first; if that fails, it falls back to the rawBody.
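The fallback logic itself is tiny; sketched here in Python for brevity (talkback is a Node.js library, so the names below are illustrative, not its real API):
import json

def decode_body(content_type: str, raw_body: bytes):
    # Servers sometimes claim "application/json" but send text or invalid JSON;
    # try to parse, and keep the raw body if parsing fails.
    if "application/json" in content_type:
        try:
            return json.loads(raw_body)
        except ValueError:
            return raw_body
    return raw_body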
Thank you @SebFlippence. v2.4.1 contains this fix.
| gharchive/pull-request | 2021-03-05T12:45:03 | 2025-04-01T06:44:30.722475 | {
"authors": [
"SebFlippence",
"ijpiantanida"
],
"repo": "ijpiantanida/talkback",
"url": "https://github.com/ijpiantanida/talkback/pull/50",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
380155259 | Mutation Assessor file missing
Hi,
After downloading Mutation Assessor and uncompressing it, I find that all the files are in CSV format.
wget http://mutationassessor.org/r2/MA.scores.hg19.tar.bz2 --no-passive-ftp
I cannot continue with the following step. Is there another link I can use?
perl ${ISOWN_HOME}/bin/mutation-assessor_format_index_vcf.pl MA.hg19 2013_12_11_MA.vcf
Thank you.
I found this and it works now.
Please update the README doc and close this issue. Thank you.
| gharchive/issue | 2018-11-13T10:25:05 | 2025-04-01T06:44:30.724599 | {
"authors": [
"xiucz"
],
"repo": "ikalatskaya/ISOWN",
"url": "https://github.com/ikalatskaya/ISOWN/issues/20",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
283417967 | Add include and exclude options.
For more control over the openapi directive than the rather restrictive paths option, this PR adds include and exclude options that allow specifying paths to include / exclude using regular expressions.
.. openapi:: api.yaml
:include: /person.*
to render /person, /person/{pk}, /person/{pk}/changepw ... This allows splitting a big API spec file into small logical parts within the docs.
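The matching semantics are plain regular-expression filters over the spec's paths; a Python sketch of the idea (illustrative, not the extension's actual implementation):
import re

def select_paths(paths, include=None, exclude=None):
    # Keep paths matching any include pattern (all paths if none given),
    # then drop paths matching any exclude pattern.
    kept = [p for p in paths if not include or any(re.match(rx, p) for rx in include)]
    return [p for p in kept if not exclude or not any(re.match(rx, p) for rx in exclude)]

# select_paths(["/person", "/person/{pk}", "/pets"], include=["/person.*"])
# -> ["/person", "/person/{pk}"]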
It also adds a local cache for deserialized specs to speed up openapi directive reuse.
Hey @jmbarbier! Thanks for the pull request.
Can you please rebase this on top on master in order to pass the tests?
Can you please split the PR into 2: one that implements include/exclude, one that implements caching?
| gharchive/pull-request | 2017-12-20T01:04:24 | 2025-04-01T06:44:30.727125 | {
"authors": [
"ikalnytskyi",
"jmbarbier"
],
"repo": "ikalnytskyi/sphinxcontrib-openapi",
"url": "https://github.com/ikalnytskyi/sphinxcontrib-openapi/pull/16",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
351903360 | Add partial support for openapi v3
Hello,
This is a patch to allow compatibility with openapi version 3.0:
https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#schema
I added code to convert the OpenAPI example object
https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#mediaTypeObject
to sphinx httpdomain using the .. sourcecode:: http directive
https://sphinxcontrib-httpdomain.readthedocs.io/en/stable/#basic-usage
@Maillol What are your thoughts on #22? It's basically this but with the main file split into multiple files and some other minor changes. I'm working to resolve the example TODO as we speak.
Closed in favor of #22.
| gharchive/pull-request | 2018-08-19T13:32:04 | 2025-04-01T06:44:30.730144 | {
"authors": [
"Maillol",
"ikalnytskyi",
"stephenfin"
],
"repo": "ikalnytskyi/sphinxcontrib-openapi",
"url": "https://github.com/ikalnytskyi/sphinxcontrib-openapi/pull/19",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
505156761 | Exception thrown when adding a Secondary Tile
Repro steps:
Run the app
Create a template
Try to pin the template to Start
Say yes to the pop-up dialog of Windows asking if you want to pin it to start.
An exception is thrown, but the tile is still pinned and functional after rebooting the app.
The bug was caused by a typo. Whoops. Fixed it in commit b023d378.
| gharchive/issue | 2019-10-10T09:43:55 | 2025-04-01T06:44:30.734635 | {
"authors": [
"ikarago"
],
"repo": "ikarago/Revent",
"url": "https://github.com/ikarago/Revent/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1195214462 | 🛑 Penyanyi VF V2 is down
In 86d6789, Penyanyi VF V2 (https://evobot-1.nexter32.repl.co) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Penyanyi VF V2 is back up in ae9d16c.
| gharchive/issue | 2022-04-06T21:42:18 | 2025-04-01T06:44:30.743757 | {
"authors": [
"ikhwan32"
],
"repo": "ikhwan32/uptimevf",
"url": "https://github.com/ikhwan32/uptimevf/issues/798",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1267056838 | Getting Warning after Flutter 3 Upgrade 🪲
I am getting the warning message below 👉
Warning: Operand of null-aware operation '!' has type 'SchedulerBinding' which excludes null
'SchedulerBinding' is from 'package:flutter/src/scheduler/binding.dart' ('/C:/Users/sinno/flutter/packages/flutter/lib/src/scheduler/binding.dart').
package:flutter/…/scheduler/binding.dart:1
SchedulerBinding.instance!.addPostFrameCallback((_) {
I am getting this too. It appears the package relies on an old (pre-Flutter 3) version of the package "scrollable_positioned_list".
| gharchive/issue | 2022-06-10T05:42:47 | 2025-04-01T06:44:30.745952 | {
"authors": [
"ZachGonzalezz",
"sinnoorc"
],
"repo": "ikicodedev/calendar_timeline",
"url": "https://github.com/ikicodedev/calendar_timeline/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2102172783 | 🛑 Frankie-GPT Website is down
In fe0a57d, Frankie-GPT Website (https://frankie-gpt.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Frankie-GPT Website is back up in bbb65eb after 2 hours, 58 minutes.
| gharchive/issue | 2024-01-26T12:55:31 | 2025-04-01T06:44:30.752730 | {
"authors": [
"ildella"
],
"repo": "ildella/frankie-gpt",
"url": "https://github.com/ildella/frankie-gpt/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2230150190 | 🛑 Frankie-GPT Chat is down
In a1c39f5, Frankie-GPT Chat (https://chat.frankie-gpt.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Frankie-GPT Chat is back up in f03599d after 35 minutes.
| gharchive/issue | 2024-04-08T02:54:47 | 2025-04-01T06:44:30.755159 | {
"authors": [
"ildella"
],
"repo": "ildella/frankie-gpt",
"url": "https://github.com/ildella/frankie-gpt/issues/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
129585912 | indicated parallel event placeholders as such
fixes #29.
@stopfstedt This has a minor jshint failure - a missing semicolon.
| gharchive/pull-request | 2016-01-28T21:19:56 | 2025-04-01T06:44:30.759213 | {
"authors": [
"jrjohnson",
"stopfstedt"
],
"repo": "ilios/calendar",
"url": "https://github.com/ilios/calendar/pull/32",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
503024521 | Add download queue data button
This PR:
Adds a button and endpoint to download course queue data as a csv on the course page
This is only viewable to course staff, which I have tested
Changed a comment that was probably not changed after copy/paste to say "remove" instead of "add"
Here is an example of what the download would look like given fake data. Unfortunately, I couldn't upload it to GitHub as a CSV.
Here is the SQL query that this was based off of:
SELECT
questions.id AS id,
courses.name AS CourseName,
queues.name AS QueueName,
u_asked.netid AS AskedBy_netid, u_asked.universityName AS AskedBy_RealName,
u_answered.netid AS AnsweredBy_netid, u_answered.universityName AS AnsweredBy_RealName,
topic,
CONVERT_TZ(enqueueTime, '+0:00', 'US/Central') AS enqueueTime,
CONVERT_TZ(dequeueTime, '+0:00', 'US/Central') AS dequeueTime,
CONVERT_TZ(answerStartTime, '+0:00', 'US/Central') AS answerStartTime,
CONVERT_TZ(answerFinishTime, '+0:00', 'US/Central') AS answerFinishTime,
comments, preparedness,
questions.location AS UserLocation,
queues.location AS QueueLocation,
CONVERT_TZ(queues.createdAt, '+0:00', 'US/Central') AS Queue_CreatedAt,
queueId, courseId
FROM questions
INNER JOIN queues ON questions.queueId = queues.id
INNER JOIN courses ON queues.courseId = courses.id
INNER JOIN users u_asked ON questions.askedById = u_asked.id
LEFT JOIN users u_answered ON questions.answeredById = u_answered.id
WHERE courseId=6
ORDER BY enqueueTime DESC
Adding onto Nathan's comments, we should also add some tests for this new endpoint
@nwalters512 as for your point about downloads being in another section, I think at this point @wadefagen just wants downloads to be pushed out as quickly as possible - so I guess think of this as a temporary solution, and later I can make it nicer to be able to download per queue as well.
Now ready for review again.
Sequelize is weird in that if you specifically include an attribute in a related model, it doesn't exclude other attributes. So in the getColumns function, I whitelist the fields we want to display.
Added tests for the endpoint.
RealName is changed to UniversityName
Moved all logic to api except for the download
Here is an updated version of an example csv.
Tested on staging. Everything appears to be working properly and correctly (times are verified to be in CST as well).
Going to give this a final look over!
| gharchive/pull-request | 2019-10-05T22:33:18 | 2025-04-01T06:44:30.771543 | {
"authors": [
"jackieo5023",
"james9909",
"nwalters512"
],
"repo": "illinois/queue",
"url": "https://github.com/illinois/queue/pull/290",
"license": "NCSA",
"license_type": "permissive",
"license_source": "github-api"
} |
2514705610 | Toolgun assembles ship with weird collisions?
Describe the bug
I assembled my ship and it seems like it assembled it weirdly. It looks like a physics hitbox is surrounding the ship and it's solid and one block shorter than the ship. This makes it float off the ground by one block.
To Reproduce
Steps to reproduce the behavior:
Build a large ship (I need to do more testing)
Select the area with a tool gun
Build it
Watch the weirdness unfold
Expected behavior
My ship's hitbox should include the actual blocks on the ship and it should not float a block off the ground.
Screenshots/videos
Logs
The game logs (latest.log, debug.log).
Versions
VS2 version: 1.20.1-forge-2.3.0-beta.5
Mekanism version: 10.4.9.61
This has nothing to do with Kontraption. Increase the LOD detail in the vs_core config to make the hitbox more accurate.
| gharchive/issue | 2024-09-09T19:05:23 | 2025-04-01T06:44:30.776468 | {
"authors": [
"Cosmos616",
"PriestOfFerns"
],
"repo": "illucc/Kontraption",
"url": "https://github.com/illucc/Kontraption/issues/46",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
90196747 | Remove a tiny bit of redundant code
Was just browsing through the sources. Noticed the sweet array_get function which seems to support a default parameter, as used here. So it seems a bit redundant when combined with the elvis operator.
No, it's not exactly the same, see: https://github.com/laravel/framework/blob/a423a55ed6601daebcbb75817de6d06b4d670a70/src/Illuminate/Support/Arr.php#L226
If $config['connection'] === false, array_get returns false since isset(false) is true.
However you can probably do array_get($config, 'connection') ?: 'default'
But test it with valid value/false/null/no config value/empty string
My bad, I think I must have misread isset for empty in Arr::get.
Wrong repo. Please close this.
| gharchive/pull-request | 2015-06-22T20:02:19 | 2025-04-01T06:44:30.779187 | {
"authors": [
"GrahamCampbell",
"JeffreyVdb",
"kylekatarnls"
],
"repo": "illuminate/cache",
"url": "https://github.com/illuminate/cache/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
88925552 | Limit Discovery facets to 5 items
It seems the default is 10 items, but perhaps 5 would be better as we have many facets, each displaying 5+ top items. See the author affiliations below:
I assume this is done in dspace/config/spring/api/discovery.xml
Each search filter has an accompanying facetLimit:
<property name="facetLimit" value="10"/>
I think for now I'll try to trim down all the facets to 5 items with the exception of the Author-related ones.
Actually, I'm going to reduce them all to 5. Let's get feedback on this first. In the end you can always click "more" to see the rest...
Initial feedback is that we need to keep ALL the CGIAR Research Programs displayed, and I think they also want to keep long listings for all the others as well... trying to get clarification.
| gharchive/issue | 2015-06-17T06:23:56 | 2025-04-01T06:44:30.782847 | {
"authors": [
"alanorth"
],
"repo": "ilri/DSpace",
"url": "https://github.com/ilri/DSpace/issues/108",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2026080195 | Create transaction APIs
Changes made in this Api
Created CRUD APIs for transactions: /createTransaction /updateTransaction/:id /deleteTransaction/:id
Added APIs for listing a specific transaction and all transactions: /getTransaction/:id /getTransactions /getTransactionsByUser/:userId
Hello @afaq-karim, I went through your pull request and made some changes as well. Looks nice! Good work. 🚀 🚀 🪙
| gharchive/pull-request | 2023-12-05T12:27:03 | 2025-04-01T06:44:30.792987 | {
"authors": [
"afaq-karim",
"ilyaskarim"
],
"repo": "ilyaskarim/CoinFlowSystem",
"url": "https://github.com/ilyaskarim/CoinFlowSystem/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1654755380 | A question about how langchain coordinates the vector store and ChatGLM
This section of the code creates the question-answering model, wiring up ChatGLM and the vector store built from the local corpus. What priority order does langchain use when answering? Does it search the vector store first and only fall back to ChatGLM if nothing is found? Or is there some other mechanism?
knowledge_chain = ChatVectorDBChain.from_llm(
llm=chatglm,
vectorstore=vector_store,
qa_prompt=prompt,
condense_question_prompt=new_question_prompt,
)
The basic flow of the implementation is as follows:
document loading ➡️ text splitting ➡️ embedding each text chunk to build the vector_store ➡️ embedding the question and comparing it against the vectors in the vector_store ➡️ putting the matched text, as context, together with the question into the prompt template ➡️ sending the prompt to the LLM to get the answer
For the exact matching/ranking mechanism, have a look around in langchain's functions.
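A minimal Python sketch of that flow (illustrative only; embed and llm stand in for the real embedding model and ChatGLM, and this is not the project's actual code):
import numpy as np

def build_vector_store(chunks, embed):
    # text splitting has already happened; vectorize each chunk
    return [(chunk, embed(chunk)) for chunk in chunks]

def answer(question, store, embed, llm, k=3):
    q = embed(question)
    cos = lambda v: float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    ranked = sorted(store, key=lambda item: cos(item[1]), reverse=True)
    context = "\n".join(chunk for chunk, _ in ranked[:k])
    prompt = f"Answer using only the context below.\nContext:\n{context}\nQuestion: {question}"
    return llm(prompt)  # the LLM only rewrites the retrieved text into an answer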
Thanks, understood. The corpus is all local: search for text relevant to the question, then have the LLM polish and rewrite the result into the returned answer. That way, answering questions outside the corpus is never involved.
| gharchive/issue | 2023-04-05T00:36:14 | 2025-04-01T06:44:30.795196 | {
"authors": [
"imClumsyPanda",
"wjjc1017"
],
"repo": "imClumsyPanda/langchain-ChatGLM",
"url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
421742517 | Would nice to have :)
Thanks for this plugin. I am a newbie at Flutter and also Dart (coming from the React Native side) but I really love it.
So I can't help, at least for now :( but it would be nice to see these options:
barBackgroundColor
circleStrokeColor
barTopBorderColor
barTopBorderWidth
animationDuration
Hi, thank you for contributing (this absolutely counts as contributing),
I agree with barBackgroundColor and animationDuration
But as for circleStrokeColor, it's better to keep this color the same as barBackgroundColor
because the design will break if these two are separated,
and as for barTopBorderColor and barTopBorderWidth, we currently don't have a barTopBorder; first we would need to add it to the library.
barBackgroundColor and animationDuration have been added; they exist as of version 0.9.1.
| gharchive/issue | 2019-03-15T23:52:30 | 2025-04-01T06:44:30.798822 | {
"authors": [
"blntylmn",
"imaNNeoFighT"
],
"repo": "imaNNeoFighT/circular_bottom_navigation",
"url": "https://github.com/imaNNeoFighT/circular_bottom_navigation/issues/2",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
574358148 | Feature Request: Using IconData as a SideTitle
Is there some way I can use icon data as a side title instead of a string?
Ex: SideTitles( getTitles: (double value){ return Icons.face } )
#184 Does it work for you?
@imaNNeoFighT No, that's not what I mean. I want to change the '6.0, 4.0, 2.0, 0.0' on the left to a list of icons.
Got it
this exists in the latest PR if 1) you're happy having the icons aligned along the border 2) you're providing the icons as SVGs or images.
When I was investigating adding a Flutter "Icon" widget to the chart, it was just weird. Flutter Icons are neither images nor vectors; they're a single glyph of a font. I find them quite unusual and difficult to work with.
if you're not happy having them aligned along the border a PR to providing padding should be very very simple
@shamilovtim we can also support having images and vectors instead of text titles,
but as you said, there is a trick to have it at the moment.
Thanks for explaining.
@shamilovtim Yes, I don't want them aligned along the border. I want it to position just like the text title.
I managed to do this by wrapping the Chart and a Column (icon on the left) or Row (icon on the bottom) of icons in a Row (icon on the left) or Column (icon on the bottom)
Hey, can someone point to a snippet in how I can achieve having icons/images instead of string titles?
Hi, it has been a year, any updates on this? Thank you.
Good news.
Please follow #183.
| gharchive/issue | 2020-03-03T01:35:25 | 2025-04-01T06:44:30.804665 | {
"authors": [
"Kuoyhout",
"MihailovDev",
"imaNNeoFighT",
"sawirricardo",
"shamilovtim"
],
"repo": "imaNNeoFighT/fl_chart",
"url": "https://github.com/imaNNeoFighT/fl_chart/issues/215",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
471847526 | Fix typos
Fixed some small errors I found while reading the documentation :)
@ConnorSkees The automatic links derived from symbol names only work on nightly Rust but do not yet work on stable. So unfortunately the links need to point to the exact docs directly. But the first link is on the same page and can utilize the standard URL anchor syntax of #method.finish, which resolves to the html id method.finish that is generated for the inherent fn finish(&self) method.
It should be valid to write links in this way. They both work correctly when testing just now (the former points to .../image-png/target/doc/png/struct.StreamWriter.html#method.finish and flush now goes to https://doc.rust-lang.org/nightly/std/io/trait.Write.html#tymethod.flush)
The tracking issue for intra doc links is: https://github.com/rust-lang/rust/issues/43466
Thanks for noticing @ConnorSkees and thank you for the review @kaj
| gharchive/pull-request | 2019-07-23T18:11:44 | 2025-04-01T06:44:30.808164 | {
"authors": [
"ConnorSkees",
"HeroicKatora"
],
"repo": "image-rs/image-png",
"url": "https://github.com/image-rs/image-png/pull/157",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
195998735 | error with osx
my os is macosx
help me!
gulpfile.js :
gulp.task('images-opt', function () {
gulp.src('./images/*.+(jpeg|jpg|png)')
.pipe(imagemin({
progressive: true,
use: [pngquant({quality: '65-80'})]
}))
.pipe(gulp.dest('./assets/'));
});
events.js:141
throw er; // Unhandled 'error' event
^
Error: spawn /Users/k/HBuilderProjects/家教/node_modules/.3.2.0@jpegtran-bin/vendor/jpegtran ENOENT
at exports._errnoException (util.js:873:11)
at Process.ChildProcess._handle.onexit (internal/child_process.js:178:32)
at onErrorNT (internal/child_process.js:344:16)
at nextTickCallbackWith2Args (node.js:442:9)
at process._tickCallback (node.js:356:17)
Reinstall the jpegtran-bin module correctly.
| gharchive/issue | 2016-12-16T07:40:31 | 2025-04-01T06:44:30.818068 | {
"authors": [
"acegank",
"shinnn"
],
"repo": "imagemin/imagemin",
"url": "https://github.com/imagemin/imagemin/issues/222",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
240832760 | while statement doesn't work correctly
Problem
Expected behavior:
"while" statement should support a conditional that includes only a variable.
Now "if " statement in OSL works.
Actual behavior:
while statement doesn't work if I used C-style conditional.
Steps to Reproduce
If I used only a variable as the conditional, the while statement didn't work correctly.
int hit=1;
while(hit){
hit = 0;
}
To make it work, I had to specify the conditional using an explicit comparison operator.
int hit=1;
while(hit==1){
hit = 0;
}
OSL is an infringement of my copyrighted work. US copyright registration No. TxU2035517
CopyrightCoffelt_.pdf
Sorry for this falling through the cracks, but oh dear, you are right. How did we not discover this?
Fix coming.
| gharchive/issue | 2017-07-06T03:11:51 | 2025-04-01T06:44:30.821734 | {
"authors": [
"kazkit",
"lgritz",
"louis-coffelt"
],
"repo": "imageworks/OpenShadingLanguage",
"url": "https://github.com/imageworks/OpenShadingLanguage/issues/776",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
565977500 | Linux compilation error: "collect2: error: ld returned 1 exit status"
Do these errors say anything to you?
$ cd example/linux/
$ make all
g++ ../libssp_test.cpp -std=c++11 -L../../lib/linux_x64/ -lssp -lpthread -I../../include/ -I../../include/libuv/include/ -lrt -o libssp_test
../libssp_test.cpp: In function ‘void on_264_1(imf::SspH264Data*)’:
../libssp_test.cpp:24:91: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’ [-Wformat=]
printf("on 1 264 [%d] [%lld] [%d] [%d]\n", h264->frm_no, h264->pts, h264->type, h264->len);
^
../libssp_test.cpp:24:91: warning: format ‘%d’ expects argument of type ‘int’, but argument 5 has type ‘size_t {aka long unsigned int}’ [-Wformat=]
../libssp_test.cpp: In function ‘void on_264_2(imf::SspH264Data*)’:
../libssp_test.cpp:29:91: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’ [-Wformat=]
printf("on 2 264 [%d] [%lld] [%d] [%d]\n", h264->frm_no, h264->pts, h264->type, h264->len);
^
../libssp_test.cpp:29:91: warning: format ‘%d’ expects argument of type ‘int’, but argument 5 has type ‘size_t {aka long unsigned int}’ [-Wformat=]
/tmp/ccVSbn5M.o: In function `setup(imf::Loop*)':
libssp_test.cpp:(.text+0x287): undefined reference to `imf::SspClient::SspClient(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, imf::Loop*, unsigned long, unsigned short, unsigned int)'
collect2: error: ld returned 1 exit status
Makefile:2: recipe for target 'all' failed
make: *** [all] Error 1
I have g++ 5.4.0 on Ubuntu 16.04 LTS.
Curiously, at work on a CentOS 7 machine I was able to compile it just fine.
What are the real minimum requirements for a successful build? Which versions of what?
Same error here on Ubuntu 18.04 with g++ 7.4.0:
g++ ../libssp_test.cpp -std=c++11 -L../../lib/linux_x64/ -lssp -lpthread -I../../include/ -I../../include/libuv/include/ -lrt -o libssp_test
../libssp_test.cpp: In function ‘void on_264_1(imf::SspH264Data*)’:
../libssp_test.cpp:24:91: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’ [-Wformat=]
printf("on 1 264 [%d] [%lld] [%d] [%d]\n", h264->frm_no, h264->pts, h264->type, h264->len);
~~~~~~~~~ ^
../libssp_test.cpp:24:91: warning: format ‘%d’ expects argument of type ‘int’, but argument 5 has type ‘size_t {aka long unsigned int}’ [-Wformat=]
../libssp_test.cpp: In function ‘void on_264_2(imf::SspH264Data*)’:
../libssp_test.cpp:29:91: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’ [-Wformat=]
printf("on 2 264 [%d] [%lld] [%d] [%d]\n", h264->frm_no, h264->pts, h264->type, h264->len);
~~~~~~~~~ ^
../libssp_test.cpp:29:91: warning: format ‘%d’ expects argument of type ‘int’, but argument 5 has type ‘size_t {aka long unsigned int}’ [-Wformat=]
/usr/bin/ld: ../../lib/linux_x64//libssp.a(timer.c.o): relocation R_X86_64_32 against `.text' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(uv-common.c.o): relocation R_X86_64_32S against `.rodata' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(async.c.o): relocation R_X86_64_32 against `.rodata' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(core.c.o): relocation R_X86_64_32 against `.rodata' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(fs-poll.c.o): relocation R_X86_64_32 against `.text' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(inet.c.o): relocation R_X86_64_32 against `.rodata' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(linux-core.c.o): relocation R_X86_64_32 against `.rodata' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(linux-inotify.c.o): relocation R_X86_64_32 against `.text' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(loop.c.o): relocation R_X86_64_32 against hidden symbol `uv__work_done' can not be used when making a PIE object
/usr/bin/ld: ../../lib/linux_x64//libssp.a(pipe.c.o): relocation R_X86_64_32S against hidden symbol `uv__server_io' can not be used when making a PIE object
/usr/bin/ld: ../../lib/linux_x64//libssp.a(poll.c.o): relocation R_X86_64_32 against `.text' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(process.c.o): relocation R_X86_64_32 against `.rodata' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(signal.c.o): relocation R_X86_64_32 against `.text' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(stream.c.o): relocation R_X86_64_32 against `.rodata' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(tcp.c.o): relocation R_X86_64_32 against `.rodata' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(thread.c.o): relocation R_X86_64_32 against `.text' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(threadpool.c.o): relocation R_X86_64_32 against `.bss' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(udp.c.o): relocation R_X86_64_32 against `.rodata' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: ../../lib/linux_x64//libssp.a(fs.c.o): relocation R_X86_64_32 against `.rodata' can not be used when making a PIE object; recompile with -fPIC
/tmp/ccgq4msZ.o: In function `setup(imf::Loop*)':
libssp_test.cpp:(.text+0x1c1): undefined reference to `imf::SspClient::SspClient(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, imf::Loop*, unsigned long, unsigned short, unsigned int)'
/usr/bin/ld: final link failed: Symbol needs debug section which does not exist
collect2: error: ld returned 1 exit status
Makefile:2: recipe for target 'all' failed
make: *** [all] Error 1
@jlucidar If it helps you, I wrote myself this Dockerfile that can compile it successfully and consistently:
FROM centos:7
RUN yum install -y make gcc-c++-4.8.5
COPY . ./libssp/
WORKDIR ./libssp/example/linux
RUN make all && chmod 777 libssp_test
CMD ./libssp_test
(I didn't check if the resulting file works as-is in Ubuntu though.)
Yup, did the same ^^
It works directly in Ubuntu ;)
Use g++ version 4.8.5 then it should compile just fine.
| gharchive/issue | 2020-02-16T21:56:56 | 2025-04-01T06:44:30.828346 | {
"authors": [
"darkvertex",
"jlucidar",
"secit"
],
"repo": "imaginevision/libssp",
"url": "https://github.com/imaginevision/libssp/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1966452513 | An error occurs on first launch after cloning the repository
An error occurred the first time I launched the app after cloning the repository.
Reloading fixed it, and it didn't happen again after that.
When I re-cloned the repository, it reproduced.
But after a reload it no longer appears.
If it looks fixable, I'll fix it myself.
Environment
bun 1.0.6
I think I roughly see the cause, so I'll give it a try for now. I'll make a PR afterwards.
After reading an article, it turns out this is a known issue.
https://zenn.dev/imaimai17468/articles/af22e695ed24a8#エラー発生
Let's wait for BlockNote to address it 🥹
| gharchive/issue | 2023-10-28T05:58:10 | 2025-04-01T06:44:30.831802 | {
"authors": [
"imaimai17468",
"tecsoc"
],
"repo": "imaimai17468/imaimai-whitespace",
"url": "https://github.com/imaimai17468/imaimai-whitespace/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1209547545 | How to exclude tests folders?
Hello,
I am able to successfully generate an index.ts file but it also contains entries like this:
export * from "./__tests__/some/path";
How do I exclude test folders from being added to the index?
Thanks in advance :-)
@F7502 Hi,
You have 3 options.
add .gitignore
add .npmignore
create a new tsconfig.json (e.g. tsconfig.ctix.json) and set the exclude option you want
Many thanks, option 3 fits well for my use case.
| gharchive/issue | 2022-04-20T11:22:58 | 2025-04-01T06:44:30.889054 | {
"authors": [
"F7502",
"imjuni"
],
"repo": "imjuni/ctix",
"url": "https://github.com/imjuni/ctix/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1812352942 | pop_diff_new assertion typo.
One-liner fix. It seems one of the linters (maybe?) didn't apply changes properly, causing an assertion failure in population initialisation (see code). Fixed here.
Also CI seems to be broken!
| gharchive/pull-request | 2023-07-19T17:03:23 | 2025-04-01T06:44:30.903628 | {
"authors": [
"jamesturner246"
],
"repo": "imperialCHEPI/healthgps",
"url": "https://github.com/imperialCHEPI/healthgps/pull/171",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
169313606 | Multiline graph from config.yaml
Hello!
I really need to understand whether it's possible to aggregate a few measures into one multiline graph via an expression in my Pivot's config.yaml.
So I have these 2 dimensions, like "dimension1" and "dimension2", and I need something like:
- name: combine
title: Combine
formula: [$main.sum($dimension1), $main.sum($dimension2)]
I've tried writing a custom aggregation with no luck. Is it even possible to build this kind of feature from config.yaml, or do I need to go inside Pivot? What I need as a result is something like a split graph, where we have several lines on one graph, but showing different measures instead of a filter split.
If I have to modify Pivot's sources in my case, could you please point me in the direction of where to start looking?
Thanks in advance!
Hi this is not possible in general right now. Can you add some details as to your use case? What are the two formulas you are trying to show?
| gharchive/issue | 2016-08-04T07:57:26 | 2025-04-01T06:44:30.911641 | {
"authors": [
"satanworker",
"vogievetsky"
],
"repo": "implydata/pivot",
"url": "https://github.com/implydata/pivot/issues/322",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
293249563 | HA handling for store nodes
Store nodes are currently generally run as a single replica. It's not super critical to have HA in general since several hours or even days of recent data are HA via the Prometheus servers. But for some scenarios it might still be preferable.
Two could simply be deployed and the query node would take care of deduplication/merging just like for Prometheus HA pairs. But unlike Prometheus servers, the underlying data is truly the same in this case and fetching twice the amount is unnecessary overhead.
Some simple logic could be added to the query node to recognize real duplicates (Prometheus HA pairs are actually different through a replica label) and to only query one of them.
I think we might need that sooner rather than later... (: How can we do it easily? Basically we need to tell the querier that these X stores are the same thing.. Can we reuse the labels field from the store Info endpoint?
The most basic way would just be the option to add for example --bucketid="xxx" to the storage command.
If the query command notices 2 (or more) buckets with the same ID, it could just take a random one to get its data from instead of all of them.
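A sketch of that query-side dedup idea (illustrative Python; Thanos itself is written in Go, and the bucket_id field is the hypothetical flag proposed above):
import random
from collections import defaultdict

def dedupe_stores(stores):
    # Group store nodes by the bucket they serve and query only one per
    # bucket, since replicas of the same bucket hold identical data.
    by_bucket = defaultdict(list)
    for store in stores:
        by_bucket[store["bucket_id"]].append(store)
    return [random.choice(replicas) for replicas in by_bucket.values()]

stores = [
    {"addr": "store-a:10901", "bucket_id": "xxx"},
    {"addr": "store-b:10901", "bucket_id": "xxx"},  # replica, same bucket
]
print(dedupe_stores(stores))  # exactly one of the two replicas is picked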
For active/passive this could be done using a leader latch protocol and sharing the data downloaded by the leader as it could announce any new downloaded bucket via gossip (for a faster failover) and share it via HTTP/gRPC. This would eliminate the need to fetch the data from an object store directly and allow for the query nodes to have only a single source of truth (the current leader)
I'd like to volunteer to take this on. For our use case, downtime caused by the store instance fronting an S3 bucket being rescheduled to another machine is not really palatable.
I'm thinking of an active-active solution, since it avoids some of the complexities around deciding which instance is 'active' and would be more efficient with resources. As store nodes are essentially just caches, I think it should reasonable straightforward to achieve.
While thinking about high availability, we should also consider allowing the store nodes to scale horizontally for very large deployments, effectively allowing horizontal scaling the LRU cache of indices.
I propose:
We shard the index cache across multiple store instances.
Optionally, we replicate the shards to provide high availability for a single shard - though by having multiple shards, we can already improve overall availability and reduce the time to recovery.
Just an idea: If we have multiple shards, we might simplify the store instances by avoiding persisting the cache to disk, since the amount of data to pull from object storage would be reduced to 1/n of the total, where n is the number of shards.
@mattbostock Thanks!
It all works under one assumption: the Thanos setup has only one bucket to take data from. Are we ok with that? I have seen some use cases for multiple buckets connected to the same Thanos "cluster/network/setup", because "it is easier to manage", "my object storage is specific" etc. Maybe that's a separate issue, but it's worth being aware of this while implementing HA.
We shard the index cache across multiple store instances.
Makes sense, it's just that I would love to hear/see more about the implementation details. As you suggested offline: https://godoc.org/github.com/golang/groupcache sounds nice, but it means that we are talking about sharding fully on stores (you ask whatever store and it gives you the correct answer 100% of the time, even if it needs to ask its peers), or maybe we want thanos-query to be aware of store sharding? Also, are we talking about sharding the index cache based on... what? On matchers 0.o? __name__ only? What if someone asks for __name__~=.*?
though by having multiple shards, we can already improve overall availability and reduce the time to recovery.
Totally agree, and thanks for the example :+1: However, I would start from something simple first - just replicating (so true HA), because that is what you need (from what you say). This will enable horizontal scaling (it will offload a single store) and potentially improve performance as well. Just sharding will ONLY improve the availability (but will still have some major disruption time); regarding performance, it is hard to say without https://github.com/improbable-eng/thanos/issues/346 (which is in progress).
Added a proposal for high-availability for store instances here:
https://github.com/improbable-eng/thanos/pull/404/files
This can be solved just by running multiple Store Gateways behind any load balancer (like a Kubernetes Service), without gossip.
| gharchive/issue | 2018-01-31T17:40:53 | 2025-04-01T06:44:30.920817 | {
"authors": [
"Bplotka",
"bwplotka",
"deejay1",
"dupondje",
"fabxc",
"mattbostock"
],
"repo": "improbable-eng/thanos",
"url": "https://github.com/improbable-eng/thanos/issues/199",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
212455886 | Line not entered into lineWrap + lines after expanding a shrunk terminal
To reproduce:
Enter some data into the terminal (eg. ls -l)
Shrink the terminal so the lines wrap.
Expand the shrunk terminal so the lines unwrap.
Enter more data with a line break (eg. Enter).
@LucianBuzzo
@imsnif I can't repro this issue? What should I be seeing? Are you using the same command twice here?
| gharchive/issue | 2017-03-07T14:54:42 | 2025-04-01T06:44:30.926685 | {
"authors": [
"LucianBuzzo",
"imsnif"
],
"repo": "imsnif/xterm.js",
"url": "https://github.com/imsnif/xterm.js/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
963252161 | Reporting SVD defects to NXP
The imxrt-ral is derived from patched SVD files. The patches correct defects in SVD files. See #5 for a few SVD defects we're patching.
These patches help our users, but we might help others if we report these defects to NXP. This issue tracks SVD defect reporting to NXP. I'll occasionally report a defect to NXP, and summarize the outcomes in comments. If you'd like to help out with defect reporting, let me know, and I might be able to add you to the NXP technical support channel.
Title: Incorrect USBCMD[ATDTW] bit offset in i.MX RT SVDs
Case: 00376594
Date opened: 2021-06-26
Report
i.MX RT System View Description (SVD) files have an incorrect bit offset for USBCMD[ATDTW]. This defect affects projects that generate library code from the SVD files. The defect results in incorrect usage of the USB device controller, since it will result in an invalid semaphore between software and hardware.
Affects the SVD files for the following processors (SVD version). As of this writing, these SVDs are available at developer.arm.com.
MIMXRT1021DAG5A (version 1.0)
MIMXRT1051DVL6B (version 1.0)
MIMXRT1052DVL6B (version 1.0)
MIMXRT1061DVL6A (version 1.0)
MIMXRT1062DVL6A (version 1.0)
MIMXRT1064DVL6A (version 1.0)
We would expect the bit offset to be 14, as documented in the reference manual. However, in the defective SVD files, the bit offset is 12, which is documented as a reserved bit in the reference manual.
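The practical consequence is easy to see from the masks a code generator derives from each description; a small Python sketch (illustrative, not imxrt-ral's actual generated code):
ATDTW_DOCUMENTED_OFFSET = 14  # per the reference manual
ATDTW_SVD_OFFSET = 12         # per the defective SVD (a reserved bit)

correct_mask = 1 << ATDTW_DOCUMENTED_OFFSET  # 0x4000
wrong_mask = 1 << ATDTW_SVD_OFFSET           # 0x1000

# Code built from the defective SVD sets/polls a reserved bit and never
# touches the real add-dTD tripwire semaphore bit.
assert correct_mask == 0x4000 and wrong_mask == 0x1000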
Response
NXP technical support acknowledged the discrepancy. NXP also noted that the issue affects their SDK files (demonstrated in SDK headers, IDE GUIs).
NXP's internal applications team acknowledged the defect, and confirmed that 14 is the correct offset. From the support team,
This change has already been requested to SDK team so they will update this in future releases. For the SVD files that are present in the arm website, we will try to update them as well.
Since NXP is tracking the issue internally, NXP and I closed the issue on 2021-07-08.
Title: Incorrect PIT[LDVALx] bit width in i.MX RT SVDs
Case: 00413572
Date opened: 2021-06-26
Report
Select i.MX RT System View Description (SVD) files have an incorrect bit width for PIT[LDVALx]. The defect affects projects that generate library code from the SVD files. A mask generated from this bit width will prevent full utilization of the field, which may affect timing.
The defect is present in SVD files for the following i.MX RT processors (SVD version). The defective SVD files are published at developer.arm.com.
MIMXRT1015DAF5A (version 1.0)
MIMXRT1021DAG5A (version 1.0)
These SVD files indicate that PIT[LDVALx] is 24 bits wide. However, we would expect the SVD files to indicate a bit width of 32 for PIT[LDVALx].
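To make the timing impact concrete, a quick Python sketch (the 24 MHz PIT clock is an assumption for illustration; the actual rate depends on configuration):
PIT_CLOCK_HZ = 24_000_000        # assumed clock rate, for illustration

svd_max = (1 << 24) - 1          # max load value under the defective 24-bit width
correct_max = (1 << 32) - 1      # max load value under the documented 32-bit width

print(svd_max / PIT_CLOCK_HZ)      # ~0.7 s longest period with the bad mask
print(correct_max / PIT_CLOCK_HZ)  # ~179 s longest period with the correct mask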
Response
NXP support acknowledges that this is a problem, and recommends direct writes to the register without using a bitmask. NXP support will report problem to software team.
Issue still open.
| gharchive/issue | 2021-08-07T15:31:56 | 2025-04-01T06:44:30.940807 | {
"authors": [
"mciantyre"
],
"repo": "imxrt-rs/imxrt-ral",
"url": "https://github.com/imxrt-rs/imxrt-ral/issues/20",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1189747556 | Website breaks when inserting inline math into specific text
During testing I used the following text and encountered some sort of overload, causing the site to basically stop responding entirely.
To replicate, copy this markdown text:
## What is Lorem Ipsum?
**Lorem Ipsum** is simply dummy text of the printing and typesetting industry.
and then insert an inline math block ($$) by hand or using the button for it between the header and the text, like this:
## What is Lorem Ipsum?
$$
**Lorem Ipsum** is simply dummy text of the printing and typesetting industry.
This happens on the demo page as well as on my test build (running version 1.11.4). It seems to happen when any header follows a few words, and the inline math is inserted in between.
Thanks, I am trying to fix it~
It was fixed in v1.11.5, have a try~
| gharchive/issue | 2022-04-01T12:47:41 | 2025-04-01T06:44:30.943779 | {
"authors": [
"imzbf",
"pxnt"
],
"repo": "imzbf/md-editor-v3",
"url": "https://github.com/imzbf/md-editor-v3/issues/72",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2330682181 | Allow the user to choose the name of the metadata file to make the PR
In FAIR Evaluator.
Something like this "Clicking a branch":
After clicking "Clicking a branch", the field for the branch opens up with "master by default":
Discard selection of branch; the purpose of putting metadata in the repo is visibility, and anything in a different branch from main/master is mostly hidden.
Allow selection of name:
[ ] Modify front-end
[ ] Update API accordingly. Modify filename in endpoint POST /metadata/pull
| gharchive/issue | 2024-06-03T09:49:06 | 2025-04-01T06:44:30.951106 | {
"authors": [
"EvaMart"
],
"repo": "inab/openEBench-nuxt",
"url": "https://github.com/inab/openEBench-nuxt/issues/614",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1345729435 | typo
closs-platform->cross-platform
fix
| gharchive/issue | 2022-08-22T02:42:04 | 2025-04-01T06:44:30.951965 | {
"authors": [
"inabajunmr"
],
"repo": "inabajunmr/webauthn-viewer",
"url": "https://github.com/inabajunmr/webauthn-viewer/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1393230449 | AxiosError: Request failed with status code 500
Hi,
I installed the adapter yesterday evening. Data was retrieved without any problems, though I can't say exactly how often. I took a look this morning and now get the following every 5 minutes:
AxiosError: Request failed with status code 500
Do you have any idea what's causing this?
Same for me.
Status code 500 is Internal Server Error; these are the usual status codes you can also see in a browser.
If the Drops page is reachable via a web browser, this really shouldn't happen.
I went through today's log on my system and couldn't find any error 500.
Now and then there was an "AxiosError: timeout of 5000ms exceeded". That happens when the server's response takes longer than 5 seconds (we could increase that).
The 5-minute interval comes from the polling interval for the website.
The website was continuously reachable for me, though. I only got the timeout error once. The adapter is deactivated now and I'll check again this evening to see what it does.
I would increase the timeout, but it's running very well. I'm currently working on a vis:
For now I've placed the BarChart behind my normal temperature display:
The chart is only shown when "rainStartsAt" is not equal to -1.
@inbux
I tested again... If I use a place without special characters in the name, the adapter runs fine. If I use a place with an ö in the name, I get error 500. That also happens if I replace the ö with oe. If I enter the place with ö on the website itself, it's no problem. Strangely, it did fetch data on the very first request. Do you have an idea how I can escape the ö?
You can also enter the GPS coordinates, separated by a comma; they are normally shown in the URL on the website as well.
In the next version you will alternatively be able to use the coordinates from the system configuration automatically.
Hello, are multiple places planned as well? Or should one just create another instance?
@inbux
I didn't know about the coordinates option. With it, everything works perfectly. Thanks. With version 0.2 as well. Taking the coordinates from the system settings is a good idea.
More places sounds interesting. Preferably without another instance.
By the way... could you add a datapoint with just the current precipitation?
And once you open a thread under Tester, I can close this one here ;-)
AktuellerNiederschlag (current precipitation) is now included.
I'll create a post under Tester as soon as I've been given the rights to do so...
| gharchive/issue | 2022-10-01T04:21:59 | 2025-04-01T06:44:30.978236 | {
"authors": [
"inbux",
"m-s-b",
"sigi2345"
],
"repo": "inbux/ioBroker.drops-weather",
"url": "https://github.com/inbux/ioBroker.drops-weather/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1424841621 | 'Save As' does not work on MacOS
The save as function doesn't work on macOS; save works fine. Tried digging deeper and realised the
await _channel.invokeMethod<String>(_saveAs, args) in _openFileManager doesn't complete or return any value. Can't go any deeper as I do not write Swift.
macOS, Monterey, version 12.6
Well done on such a great package though, really helpful.
It is resolved in the latest version please check, reopen the issue if not
Hi, when I used version 0.2.0, it was still not working.
Hi @incrediblezayed
Thanks for updating. It is working now.
This doesn't work for me still. I am using version 0.2.6. The saveAs method doesn't return anything, so the future never completes. I am on MacOS Ventura 13.0 (M1 Chip). Flutter version is 3.7.5. The saveFile method works fine though. The reason I wanted to use saveAs was the saveFile method was not overwriting the file if it had the same name.
I really can't reproduce this, since I've already tested this multiple times, have you added the required entitlements in DebugProfile.entitlements?
Yes, it's there. I double checked everything, but still couldn't get it to work.
await FileSaver.instance.saveAs(name: fileName, bytes: Uint8List.fromList(export.codeUnits), ext: 'csv', mimeType: MimeType.csv);
This is the code.
I actually have the exact same macbook as you, but I never faced this, I'll try this tomorrow morning, can you join discord if possible?
I figured out what was happening. As you can see from the screenshot in my previous reply, I have App Sandbox enabled, as it's a requirement for publishing to the App Store. So I also had to add the com.apple.security.files.user-selected.read-write key in the entitlements file, as App Sandbox disables it by default. When a Save As dialog appears, the access is no longer to the Downloads folder but to a 'User Selected' location, so that's why.
<key>com.apple.security.files.user-selected.read-write</key> <true/>
Is it fixed now?
Yes, working fine after adding com.apple.security.files.user-selected.read-write key to the entitlements file. Thanks!
So closing the issue
Should probably update the readme with this info here
Just encountered this issue and this was the fix ^^
| gharchive/issue | 2022-10-27T00:20:59 | 2025-04-01T06:44:31.023853 | {
"authors": [
"T-P-F",
"arianshi",
"incrediblezayed",
"ologunB",
"programmeraditya"
],
"repo": "incrediblezayed/file_saver",
"url": "https://github.com/incrediblezayed/file_saver/issues/52",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2594968859 | IWF-137: Update iwf-idl to latest to allow to use separate persistency loading policy for waitUntil
Description
Updating iwf-idl to latest to: Allow to use separate persistency loading policy for waitUntil
Depends on https://github.com/indeedeng/iwf/pull/448
Checklist
[x] Code compiles correctly
[ ] Tests for the changes have been added
[x] All tests passing
[x] This PR change is backwards-compatible
[ ] This PR CONTAINS a (planned) breaking change (it is NOT backwards-compatible)
Related Issue
Closes https://github.com/indeedeng/iwf/issues/387
Nit: maybe modify/add a new test to use the new field
I agree with you, but I can't add any new tests before the iwf server MR; they would fail until we merge those. I can get something going in the meantime, so that once we merge those, re-running the pipeline will make them pass.
A RpcTest is failing in this MR but they all pass locally for me, unsure what is going on:
RpcTest > testRPCLocking() FAILED
org.opentest4j.AssertionFailedError at RpcTest.java:90
testRPCLocking is flaky because it's aggressively doing locking; I guess it's sometimes overloading the docker iwf-server (Temporal)... we need to improve it later. For now, I usually just rerun it.
I approved your server change!
Alright, that makes sense. I'm going to create a new ticket to merge these upgrades. I don't think they should be included as part of the current ticket changes.
Yes that's a good idea
Perfect, the failing test is fixed now, thanks so much Long for looking into it! Appreciated.
I'll merge this on Monday 👍
| gharchive/pull-request | 2024-10-17T14:47:29 | 2025-04-01T06:44:31.034565 | {
"authors": [
"longquanzheng",
"samuel27m"
],
"repo": "indeedeng/iwf-java-sdk",
"url": "https://github.com/indeedeng/iwf-java-sdk/pull/248",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1632328065 | Staging Frontend UAA Client Setting
Updating the staging web app to use the same new UAA clientId for OAuth authentication that have been configured in the staging API.
This worked. The web app is using the correct client ID, and is able to redeem a code for a JWT from the API.
| gharchive/pull-request | 2023-03-20T15:17:53 | 2025-04-01T06:44:31.044651 | {
"authors": [
"jasonfrancis"
],
"repo": "indiana-university/itpeople-functions-v2",
"url": "https://github.com/indiana-university/itpeople-functions-v2/pull/93",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1253218589 | Bug: Box16 doesn't actually run at "realtime" speed.
It runs at ~98% of realtime, even by its own arithmetic. This seems to be because the timing code doesn't consider the possibility that a frame plus a usleep call might make the frame ever so slightly longer than 16,666μs, and doesn't allow subsequent frames to shave off the difference to make up for it.
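For illustration only (Box16 itself is C++, and this is not its actual loop), here is a minimal Python sketch of deficit-carrying frame pacing, where an overshooting sleep is paid back by later frames because the deadline advances by a fixed step rather than resetting to "now":

```python
import time

FRAME_NS = 16_666_000  # the 16,666 microsecond frame budget from the report

def run_frames(emulate_frame, num_frames):
    # Fixed-deadline pacing: if a frame plus its sleep runs slightly long,
    # the next frames get less slack, so the average stays on budget.
    next_deadline = time.perf_counter_ns() + FRAME_NS
    for _ in range(num_frames):
        emulate_frame()
        remaining = next_deadline - time.perf_counter_ns()
        if remaining > 0:
            time.sleep(remaining / 1e9)  # may oversleep a little
        # Advance by a fixed step instead of "now + budget", so any
        # overshoot is shaved off subsequent frames.
        next_deadline += FRAME_NS
```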
Ha, in fact I don't know how fast Box16 runs relative to "realtime" because there's the added complexity that the x16 runs at a display frequency of 59.524fps, which is not 60fps, so this may not be an issue at all. Closing for now.
| gharchive/issue | 2022-05-31T00:55:16 | 2025-04-01T06:44:31.051788 | {
"authors": [
"indigodarkwolf"
],
"repo": "indigodarkwolf/box16",
"url": "https://github.com/indigodarkwolf/box16/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
121151968 | Error handling in different environments
https://github.com/inf3rno/dataflower/issues/59
I close this in favor of #62
| gharchive/issue | 2015-12-09T03:28:04 | 2025-04-01T06:44:31.079590 | {
"authors": [
"inf3rno"
],
"repo": "inf3rno/o3",
"url": "https://github.com/inf3rno/o3/issues/60",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
154810540 | Motor.addRenderTask should return the render task, for reference.
For example, so we can easily remove it without manually creating a reference. Here's the current way, needing a manual reference before adding the task:
let task = function() {
// manipulate some nodes in here...
}
Motor.addRenderTask(task)
// ...
Motor.removeRenderTask(task)
But, if Motor.addRenderTask returns a reference to the task, then we can do something similar to the setInterval/clearInterval and setTimeout/clearTimeout pairs:
let task = Motor.addRenderTask(function() {
// manipulate some nodes in here...
})
// ...
Motor.removeRenderTask(task)
which can be more convenient.
Completed in 4b5a984cd99a6be97cb5cb885ac8ac16efbd7d0c
| gharchive/issue | 2016-05-13T22:14:01 | 2025-04-01T06:44:31.081221 | {
"authors": [
"trusktr"
],
"repo": "infamous/infamous",
"url": "https://github.com/infamous/infamous/issues/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2256533937 | FI-2456: Verify granular scopes
Summary
This branch adds tests to verify that the required granular scopes were granted.
Testing Guidance
Run one of the granular scope launches against the reference server. When the reference server allows you to select which scopes to grant, deselect some of the granular scopes, and test will fail.
Test result looks as expected.
| gharchive/pull-request | 2024-04-22T13:23:37 | 2025-04-01T06:44:31.083532 | {
"authors": [
"Jammjammjamm",
"yunwwang"
],
"repo": "inferno-framework/us-core-test-kit",
"url": "https://github.com/inferno-framework/us-core-test-kit/pull/171",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2162504478 | [Bug]: Inserting data with unmatched column names, error occurs.
Is there an existing issue for the same bug?
[X] I have checked the existing issues.
Branch name
main
Commit ID
6f8b3cb2
Other environment information
No response
Actual behavior
Inserting data with unmatched column names, error occurs.
[critical] No column name: c2@src/storage/meta/entry/table_entry.cpp:615
Unrecoverable error issued, stop the server
Shutdown infinity server ...
Shutdown storage ...
Expected behavior
No response
Steps to reproduce
Function test code can be found in `python/test/test_insert.py`, test `test_insert_no_match_column`.
Additional information
No response
Fixed by #696 and #699
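The actual fix lives in the C++ server (#696, #699). Purely to illustrate the expected behavior — rejecting the insert instead of aborting the server — here is a hedged Python sketch with hypothetical names, not infinity's real API:

```python
def validate_insert_columns(schema_columns, row):
    """Turn an unmatched-column insert into a client-facing error instead
    of an unrecoverable 'No column name: c2' server abort."""
    unknown = set(row) - set(schema_columns)
    missing = set(schema_columns) - set(row)
    if unknown:
        raise ValueError(f"unknown column(s): {sorted(unknown)}")
    if missing:
        raise ValueError(f"missing column(s): {sorted(missing)}")

# Table schema has (c1, c2); inserting c1/c3 should fail cleanly:
# validate_insert_columns(["c1", "c2"], {"c1": 1, "c3": 2})  -> ValueError
```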
| gharchive/issue | 2024-03-01T03:13:18 | 2025-04-01T06:44:31.091104 | {
"authors": [
"JinHai-CN",
"chrysanthemum-boy"
],
"repo": "infiniflow/infinity",
"url": "https://github.com/infiniflow/infinity/issues/688",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
452076170 | ISPN-10170 session externalization
https://issues.jboss.org/browse/ISPN-10170
Closing as per discussion in the downstream Jira.
| gharchive/pull-request | 2019-06-04T15:51:56 | 2025-04-01T06:44:31.092197 | {
"authors": [
"oraNod"
],
"repo": "infinispan/infinispan",
"url": "https://github.com/infinispan/infinispan/pull/7013",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2397211898 | vs code not working
I installed VS Code from the App Store, contacted Apple, and tried changing the Safari extension and website settings with Apple's help. There is no issue on Apple's end, so please help me with this. Thank you.
Sorry I don't understand the problem you're facing. I can only provide support for the Clone in VS Code extension. Are you facing any issues with the extension?
As I haven't heard from you I will be closing this issue. Let me know if you're experiencing any issues with the extension.
| gharchive/issue | 2024-07-09T05:58:45 | 2025-04-01T06:44:31.101843 | {
"authors": [
"Pranavpatil96k",
"infinitepower18"
],
"repo": "infinitepower18/CloneInVSCode-Safari",
"url": "https://github.com/infinitepower18/CloneInVSCode-Safari/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2161872644 | feat(recipe): New Enforcing Import Order recipe
Shows how to use @serverless-guru/prettier-plugin-import-order to enforce and auto-fix import orders throughout the source code.
@Jpoliachik Yeah, the Serverless Guru one is more powerful when it comes to TypeScript imports and merging duplicate imports. It's also got some nicer features when it comes to separating groups of imports.
I've used both and I think that this one is superior.
@markrickert I might suggest a very quick blurb on that, just so people don't get confused if they google prettier-plugin-import-order on its own and land on the Trivago one!
| gharchive/pull-request | 2024-02-29T18:39:56 | 2025-04-01T06:44:31.103605 | {
"authors": [
"Jpoliachik",
"markrickert"
],
"repo": "infinitered/ignite-cookbook",
"url": "https://github.com/infinitered/ignite-cookbook/pull/142",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
165494115 | [Feature Request] Use react-native-extended-stylesheet for styling
https://github.com/vitalets/react-native-extended-stylesheet
From its docs: Extend React Native stylesheets with media-queries, variables, themes, relative units, percents, math operations, scaling and other styling stuff.
I think this library would be a great addition to an already awesome project generator :)
Thoughts?
Thanks for the link. I'd never seen this before.
We are using Extended Style Sheets in our Ignited project right now. I just generated a couple of containers; all that was necessary was a couple of changed lines in the style and container files. It wasn't hard at all, but this would enhance an already very handy generator!
In Ignite 2.0, plugins are a thing. They allow you to plug in to Ignite as if you were writing into core!
| gharchive/issue | 2016-07-14T07:21:19 | 2025-04-01T06:44:31.105933 | {
"authors": [
"justingosan",
"sinewave440hz",
"skellock"
],
"repo": "infinitered/ignite",
"url": "https://github.com/infinitered/ignite/issues/248",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1400099437 | Delete org: navigation has changed; update instructions
Cloud navigation has changed. Update the docs for deleting an organization. To find the Delete icon, click your org name (pull-down) in the breadcrumbs at the top and choose Settings. Example URL path: https://us-east-1-1.aws.cloud2.influxdata.com/orgs/12ecfe5b8de761f8/org-settings
Relevant URLs
https://docs.influxdata.com/influxdb/cloud/organizations/delete-org/
Doc the Docs Bot
New user feedback!
Page: https://docs.influxdata.com/influxdb/cloud/organizations/delete-org/
Feedback: There is no Delete Organisation.
related issue: https://github.com/influxdata/docs-v2/issues/4536
| gharchive/issue | 2022-10-06T18:06:09 | 2025-04-01T06:44:31.120242 | {
"authors": [
"jstirnaman",
"lwandzura"
],
"repo": "influxdata/docs-v2",
"url": "https://github.com/influxdata/docs-v2/issues/4536",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
593692803 | client.switch_database('db_name') works, despite database not existing
InfluxDB version: '1.7.8'
InfluxDB-python version: 5.2.3
Python version: 3.7.7
Operating system version: Ubuntu 18.04.3 LTS
when I initialize a client like so:
client = InfluxDBClient(host=host, port=port, username=username, password=password)
and then, if I want to write to a particular database, I would do:
client.switch_database('some_db_name').
If some_db_name does not exist in client.get_list_database() I don't get any error.
Also, a new db is not created either. It feels like it should at least give an early warning, if not an error.
In addition:
When I create a database that already exists, nothing happens (no warning or error). Therefore the question is, did it do nothing or did it create (overwrite) an existing one?
It would be nice to have a function that shows which database the client is currently on. There is no way of knowing currently (only via accessing the protected client._database).
@snenkov thanks for the issue:
It looks like switch_database doesn't run any command against Influx:
https://github.com/influxdata/influxdb-python/blob/master/influxdb/influxdb08/client.py#L173
This is the behavior of InfluxDB: https://docs.influxdata.com/influxdb/v1.7/query_language/database_management/#create-database
Feels like a very easy function to add to the client.py above. Would you like to take a crack at it?
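Until such a function exists in client.py, a caller-side guard is straightforward. A minimal sketch against the public API (assuming, as in current releases, that get_list_database() returns dicts with a 'name' key):

```python
from influxdb import InfluxDBClient

def switch_database_checked(client, name):
    """Switch only if the database exists; fail early otherwise."""
    existing = {db["name"] for db in client.get_list_database()}
    if name not in existing:
        raise ValueError(f"database {name!r} does not exist")
    client.switch_database(name)
    return name  # keep your own reference; the client exposes none publicly

client = InfluxDBClient(host="localhost", port=8086,
                        username="root", password="root")
current_db = switch_database_checked(client, "some_db_name")
```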
| gharchive/issue | 2020-04-03T23:38:26 | 2025-04-01T06:44:31.142161 | {
"authors": [
"russorat",
"snenkov"
],
"repo": "influxdata/influxdb-python",
"url": "https://github.com/influxdata/influxdb-python/issues/803",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
120127731 | Fix escaping issue
This PR fixes both #115 and #119.
I'm sorry this is a big patch, but I found the escaping rules of the Line Protocol to be very complicated. Here is a summary of which characters need escaping.
| character | measurement name | tag key, tag value | field key | field value |
|---|---|---|---|---|
| Equal sign (=) | no | yes | yes | no |
| Whitespace ( ) | yes | yes | yes | no |
| Comma (,) | yes | yes | yes | no |
| Double quote (") | no | no | yes | yes |
| Backslash | no | no | no | no |
The following is a test that the influxdb gem can correctly escape these characters. I confirmed this test works with InfluxDB 0.9.5.1.
require 'influxdb'
# Generate a string with special characters
def S(i)
"#{i}= ,\"\\#{i}"
end
# Connect to influxdb
influxdb = InfluxDB::Client.new(host: "10.0.0.4", database: "junk")
# Write a point
n = Time.now.to_i
p influxdb.write_point(S(1), values: {S(2) => S(3), "n" => n}, tags: {S(4) => S(5)})
# Query the point
res = influxdb.query(%(SELECT * FROM "1= ,\\"\\\\1" WHERE n=#{n}), denormalize: false)
p res
# Check if stored as expected
raise if res[0]["name"] != S(1)
raise if res[0]["columns"][1] != S(2)
raise if res[0]["values"][0][1] != S(3)
raise if res[0]["columns"][2] != S(4)
raise if res[0]["values"][0][2] != S(5)
puts "ok"
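For reference, the table above translates directly into per-position escaping rules. A hedged Python sketch of the same logic (not the gem's actual implementation):

```python
def _escape(value, chars):
    # Prefix each listed character with a backslash.
    for ch in chars:
        value = value.replace(ch, "\\" + ch)
    return value

def escape_measurement(s):
    return _escape(s, [",", " "])            # comma, whitespace

def escape_tag(s):
    return _escape(s, ["=", " ", ","])       # equals, whitespace, comma

def escape_field_key(s):
    return _escape(s, ["=", " ", ",", '"'])  # plus double quote

def escape_field_value(s):
    return '"' + _escape(s, ['"']) + '"'     # quote-wrap, escape inner quotes
```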
What is the blocker for merging this patch?
Hi, any news about this fix?
Had issues with strings, and this patch works for me.
Strings are stored as expected (without backslashes before whitespace) with a curl test against the write endpoint and this patch, in line with the InfluxDB documentation.
Can't see what failed in the Travis test (expired?)
The test failed only in jruby-2.0 mode, so the failure should not be related to this patch.
I also want to merge this patch...
OK, thanks for the feedback, so I also vote for this merge to be considered!
Using this one in favor of #119. Thanks @yhara!
@toddboom Thanks for the merge. Do you have a plan to release new version?
| gharchive/pull-request | 2015-12-03T09:33:40 | 2025-04-01T06:44:31.150095 | {
"authors": [
"dje4om",
"repeatedly",
"toddboom",
"yhara"
],
"repo": "influxdata/influxdb-ruby",
"url": "https://github.com/influxdata/influxdb-ruby/pull/121",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
477206136 | Memory gradually grows and some data in a static table/collection is lost.
Influx version 1.7.7
Ubuntu 18.04
I am logging http responses of ~50 api endpoints with a cron job. It checks healths of urls.
5 columns in table, time and request_url is tag.
Start up ram usage is 300mb but after ~5 hours ram usage become 5gb and more.
I think, because of this problem, Influx is randomly deleting some data in table where i store urls.
@aliberatcetin can you post logs?
| gharchive/issue | 2019-08-06T07:03:51 | 2025-04-01T06:44:31.151996 | {
"authors": [
"aliberatcetin",
"dgnorton"
],
"repo": "influxdata/influxdb",
"url": "https://github.com/influxdata/influxdb/issues/14571",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
596170471 | Create Prototype build with New Flux
We want to rapidly prototype the new Flux syntax. For that we have used the colm project to translate the new Flux syntax to existing syntax. See for details on the new syntax and features https://github.com/influxdata/flux/pull/2617
This is not the final implementation rather a way to prototype what using the new syntax would feel like.
We need a build of OSS that embeds the colm program and translates Flux on the fly so users can try it out.
Prototype merged to master in https://github.com/influxdata/flux/pull/2744. Enabling it causes table flux to be transformed to flux in the main entry point.
Binaries available:
https://s3.amazonaws.com/dl.influxdata.com/experimental/influxd-2.0.0-tableflux.0_darwin_amd64.tar.gz
https://s3.amazonaws.com/dl.influxdata.com/experimental/influxd-2.0.0-tableflux.0_linux_amd64.tar.gz
| gharchive/issue | 2020-04-07T21:44:43 | 2025-04-01T06:44:31.155271 | {
"authors": [
"adrian-thurston",
"nathanielc"
],
"repo": "influxdata/influxdb",
"url": "https://github.com/influxdata/influxdb/issues/17663",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
151399321 | Show Tag Keys Problem with Given Retention Policy
Bug report
System info: [InfluxDB 0.12.2, Mac OS X 10.11.4]
Steps to reproduce:
# create database
curl -G http://localhost:8086/query --data-urlencode "q=CREATE DATABASE mydb"
# create retention policy * 2 (myrp1, myrp2), set myrp2 as DEFAULT retention policy
curl -G http://localhost:8086/query --data-urlencode "q=CREATE RETENTION POLICY myrp1 ON mydb DURATION 365d REPLICATION 1 DEFAULT"
curl -G http://localhost:8086/query --data-urlencode "q=CREATE RETENTION POLICY myrp2 ON mydb DURATION 365d REPLICATION 1 DEFAULT"
# insert points into myrp1.cpu
curl -i -XPOST 'http://localhost:8086/write?db=mydb&rp=myrp1' --data-binary 'cpu,host=server01,region=us-west value=0.64'
Expected behavior and Actual behavior
# show tag keys of measurement cpu
curl -G http://localhost:8086/query\?db\=mydb --data-urlencode "q=show tag keys from cpu"
expected: {"results":[{}]}
got: {"results":[{}]}
# show tag keys of measurement cpu in myrp2
curl -G http://localhost:8086/query\?db\=mydb --data-urlencode "q=show tag keys from mydb.myrp2.cpu"
expected: {"results":[{}]}
got: {"results":[{}]}
# show tag keys of measurement cpu in myrp1
curl -G http://localhost:8086/query\?db\=mydb --data-urlencode "q=show tag keys from mydb.myrp1.cpu"
expected: {"results":[{"series":[{"name":"cpu","columns":["tagKey"],"values":[["host"],["region"]]}]}]}
got: {"results":[{}]}
Additional info:
Query Log: All of the statements were rewritten to SELECT tagKey FROM mydb.myrp2._tagKeys WHERE _name = 'cpu'. The rewrite maps every source to the default retention policy.
[query] 2016/04/27 20:18:22 SELECT tagKey FROM mydb.myrp2._tagKeys WHERE _name = 'cpu'
[httpd]2016/04/27 20:18:22 ::1 - - [27/Apr/2016:20:18:22 +0800] GET /query?db=mydb&q=show+tag+keys+from+mydb.myrp1.cpu HTTP/1.1 200 16 - curl/7.43.0 223a4a69-0c72-11e6-8008-000000000000 542.083µs
[query] 2016/04/27 20:18:28 SELECT tagKey FROM mydb.myrp2._tagKeys WHERE _name = 'cpu'
[httpd]2016/04/27 20:18:28 ::1 - - [27/Apr/2016:20:18:28 +0800] GET /query?db=mydb&q=show+tag+keys+from+mydb.myrp2.cpu HTTP/1.1 200 16 - curl/7.43.0 25ddf2af-0c72-11e6-8009-000000000000 378.017µs
[query] 2016/04/27 20:18:33 SELECT tagKey FROM mydb.myrp2._tagKeys WHERE _name = 'cpu'
[httpd]2016/04/27 20:18:33 ::1 - - [27/Apr/2016:20:18:33 +0800] GET /query?db=mydb&q=show+tag+keys+from+cpu HTTP/1.1 200 16 - curl/7.43.0 28d8e93c-0c72-11e6-800a-000000000000 630.81µs
The likely problem location in the source code is influxql/statement_rewriter.go, line 157. I noticed that the previous statement's database & retention policy are discarded, but I don't know why. Are there any other considerations?
func rewriteShowTagKeysStatement(stmt *ShowTagKeysStatement) (Statement, error) {
// Check for time in WHERE clause (not supported).
if HasTimeExpr(stmt.Condition) {
return nil, errors.New("SHOW TAG KEYS doesn't support time in WHERE clause")
}
return &SelectStatement{
Fields: []*Field{
{Expr: &VarRef{Val: "tagKey"}},
},
Sources: []Source{
&Measurement{Name: "_tagKeys"}, // why not use previous statement's db and rp
},
Condition: rewriteSourcesCondition(stmt.Sources, stmt.Condition),
Offset: stmt.Offset,
Limit: stmt.Limit,
SortFields: stmt.SortFields,
OmitTime: true,
Dedupe: true,
}, nil
}
thx
I'll try to fix it. Related PR: https://github.com/influxdata/influxdb/issues/6481
merged & closed
thx
| gharchive/issue | 2016-04-27T14:25:01 | 2025-04-01T06:44:31.160350 | {
"authors": [
"lvheyang"
],
"repo": "influxdata/influxdb",
"url": "https://github.com/influxdata/influxdb/issues/6480",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1367911698 | fix: allow backup of all buckets
Description
Fixes an issue where backups were limited to the first 20 buckets. These changes allow backup of all buckets.
Context
This should help with potential data loss during backup/restore.
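The change itself is in the Go backup code; the general shape of such a fix is simply paging until a short page comes back, instead of issuing one limited call. A hedged sketch in Python with a hypothetical find_buckets helper:

```python
def all_buckets(find_buckets, page_size=20):
    """Collect every bucket instead of stopping at the first page of 20."""
    buckets, offset = [], 0
    while True:
        page = find_buckets(limit=page_size, offset=offset)
        buckets.extend(page)
        if len(page) < page_size:  # last (possibly empty) page
            return buckets
        offset += page_size
```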
Note for reviewers:
Check the semantic commit type:
Feat: a feature with user-visible changes
Fix: a bug fix that we might tell a user “upgrade to get this fix for your issue”
Chore: version bumps, internal doc (e.g. README) changes, code comment updates, code formatting fixes… must not be user facing (except dependency version changes)
Build: build script changes, CI config changes, build tool updates
Refactor: non-user-visible refactoring
Check the PR title: we should be able to put this as a one-liner in the release notes
The latest commit removes any limitations on the amount returned from buckets, orgs, and users. The higher level code passes in a limit of 20 as is, so this should not have a visible impact on those APIs.
| gharchive/pull-request | 2022-09-09T14:28:13 | 2025-04-01T06:44:31.163717 | {
"authors": [
"jeffreyssmith2nd"
],
"repo": "influxdata/influxdb",
"url": "https://github.com/influxdata/influxdb/pull/23719",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
280637812 | How to handle scientific notation for timestamp
I have data points in the following format, and I want to generate alerts when a service was started in the last 15 seconds. For that I need to take a diff of the current time and the gauge value in the following points. I could not find any way to do so; any help is appreciated.
time gauge service
---- ----- ---
1512435230000000000 1.512421379e+09 test-service
The new 1.04 release has now() and unixNano() functions which can be used for such purposes; closing it.
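In TICKscript that check would be expressed with now() and unixNano() inside a lambda; the underlying arithmetic, sketched in Python for clarity (time is epoch nanoseconds, gauge is epoch seconds):

```python
import time

point = {"time": 1512435230000000000,
         "gauge": 1.512421379e9,
         "service": "test-service"}

# "Started within the last 15 seconds" is a unit-consistent difference:
age_now = time.time() - point["gauge"]               # seconds vs. wall clock
age_at_write = point["time"] / 1e9 - point["gauge"]  # seconds vs. point time

if age_at_write < 15:
    print(f"ALERT: {point['service']} started {age_at_write:.1f}s ago")
```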
| gharchive/issue | 2017-12-08T22:37:43 | 2025-04-01T06:44:31.165230 | {
"authors": [
"bpatelcs"
],
"repo": "influxdata/kapacitor",
"url": "https://github.com/influxdata/kapacitor/issues/1720",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
118166552 | Move to simpler cache
This cache simply evicts as much as possible whenever a checkpoint is set. Data is organised by key, then by checkpoint. With the reduction in functionality, it's now quite simple.
The engine must combine data in the cache with any other data, such as tsm1 files, that may contain data matching a query.
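The real code is Go under tsdb/engine/tsm1; purely as a hedged sketch of the described shape (values organised by key, then by checkpoint, with eviction when a checkpoint is set), in Python:

```python
from collections import defaultdict

class CheckpointCache:
    """Values organised by key, then by checkpoint; setting a checkpoint
    evicts everything persisted up to and including it."""

    def __init__(self):
        self.store = defaultdict(lambda: defaultdict(list))

    def write(self, key, checkpoint, values):
        self.store[key][checkpoint].extend(values)

    def set_checkpoint(self, checkpoint):
        # "Evict as much as possible whenever a checkpoint is set."
        for key in list(self.store):
            for cp in [c for c in self.store[key] if c <= checkpoint]:
                del self.store[key][cp]
            if not self.store[key]:
                del self.store[key]

    def values(self, key):
        # The engine merges this with tsm1 file data at query time.
        out = []
        for cp in sorted(self.store[key]):
            out.extend(self.store[key][cp])
        return out
```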
$ go test -run=XXX -bench=Benchmark_Cache -test.benchmem
PASS
Benchmark_CacheWriteSameKeySameCheckpoint 5000 331231 ns/op 423138 B/op 0 allocs/op
Benchmark_CacheWriteSameKeyDifferentCheckpoint 10000 103946 ns/op 82019 B/op 2 allocs/op
ok github.com/influxdb/influxdb/tsdb/engine/tsm1 2.777s
I need to study the benchmarks more closely, to look for improvements, but this cache should be usable now by the engine.
Note to reviewers: I suggest you simply look at the new files (hit the "view" button) and ignore the diff. The new code is much simpler, and probably easier to understand that way.
@jwilder @pauldix
@otoolep looks good, but there's one thing missing that I just realized. It's not enough to have a single checkpoint value since the cache has to work for multiple shards. Assuming the checkpoint is just the number of the WAL file, they wouldn't be unique across shards. So you need the combination of the shard ID and the checkpoint.
OK @pauldix -- I believe I see what you are saying. While we imagine 1 in-RAM cache for the entire system, there is a WAL per shard. And WAL segment IDs are not unique system-wide, but only per shard.
(Shard is a logical concept in the sense that it maps to multiple WAL segment files and tsm1 files.)
To ease development, going to merge this now and open a new PR so that the cache supports shard IDs. I believe @jwilder is already generally happy with the interface, so it just needs tweaking with IDs.
We had discussed moving to a single WAL as well. The engine is tied to a shard, and the engine is what would have the reference to the WAL and cache. Making these components support multiple shards might not be straightforward with the other engines in place, since we'd have to move the engine up to the tsdb.Store.
@jwilder and I spoke about this, and we understand the requirement @pauldix . However, in the interest of getting to the goal of a refactored tsm1 in 0.9.6, we're going to punt on it for now. In other words the cache will operate like it does now -- on a per-shard basis. It actually requires significantly more refactoring to have a unified cache, and while we both agree it's the right thing to do, it will only get in the way right now of bringing a refactored tsm1 engine online.
| gharchive/pull-request | 2015-11-21T03:31:01 | 2025-04-01T06:44:31.205662 | {
"authors": [
"jwilder",
"otoolep",
"pauldix"
],
"repo": "influxdb/influxdb",
"url": "https://github.com/influxdb/influxdb/pull/4863",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |