983386586 | an error
With a correct webpack config, running npm run build throws this error:
compiler.hooks.afterEmit.tapPromise('WebpackAliyunOss', (compilation) => {
^
TypeError: Cannot read property 'afterEmit' of undefined
Please check the version of your webpack; this plugin only supports webpack >= 4.
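A plugin can make that failure mode clearer by checking for the webpack 4+ hooks API up front. The sketch below is hypothetical (the class name and error message are illustrative, not the actual webpack-aliyun-oss source):

```javascript
// Hypothetical sketch: fail fast when running under webpack < 4,
// where `compiler.hooks` does not exist and tapping it would crash
// with "Cannot read property 'afterEmit' of undefined".
class SafeOssPlugin {
  apply(compiler) {
    if (!compiler.hooks || !compiler.hooks.afterEmit) {
      throw new Error(
        'webpack-aliyun-oss requires webpack >= 4 (compiler.hooks is missing)'
      );
    }
    // On webpack >= 4, register the async hook as the plugin does.
    compiler.hooks.afterEmit.tapPromise('SafeOssPlugin', () =>
      Promise.resolve()
    );
  }
}
```

With this guard, users on webpack 3 get the version message instead of an opaque TypeError.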
| gharchive/issue | 2021-08-31T02:59:48 | 2025-04-01T06:44:22.193680 | {
"authors": [
"gp5251",
"pass13046132"
],
"repo": "gp5251/webpack-aliyun-oss",
"url": "https://github.com/gp5251/webpack-aliyun-oss/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
461825250 | copy-to-clipboard doesn't work
Hi,
I went through issues #32 and #105, but neither solved this problem. So, currently I have these lines uncommented in ~/.tmux.conf.local:
tmux_conf_copy_to_os_clipboard=true
set -g mouse on
I've also installed xclip. But when I'm working with the mouse and, after selecting a highlighted section, pressing 'y' to yank it to the clipboard, it just types y in my shell (zsh). I'd appreciate any suggestions or workarounds.
Hello @pwnslinger 👋
So you're running tmux on a Linux box where xclip is installed.
Can you please reload the configuration with <prefix> + r then paste the output of
$ tmux list-keys | grep -E 'copy-seleciton|copy-pipe'
Note that with the default configuration and mouse mode enabled, releasing the mouse button (MouseDragEnd1Pane) already triggers the copy/yank action. As such, pressing y doesn't correspond to yanking to clipboard (since nothing is selected anymore) and it seems normal to me that the key is sent to the shell.
@pwnslinger ping?
@pwnslinger any news?
Hi @gpakosz, I'd be happy to continue this if you're still willing to assist!
I have the same problem. I hit <prefix> m and see that mouse mode is toggled on in the bottom display, but I am unable to use any mouse functions as per the demo gif in the README.
I'm using Windows Terminal with WSL/Ubuntu. xclip is installed. I'm using the standard configuration (only C-a is unbound as the prefix).
The tmux list-keys command above returns no lines after the grep.
What do you think I have done wrong?
There was a typo in the command, can you try again?
Under WSL/Ubuntu, xclip being installed will cause trouble. Please uninstall it.
Then list-keys should give you:
$ tmux list-keys | grep -E 'copy-selection|copy-pipe'
bind-key -T copy-mode C-w send-keys -X copy-pipe-and-cancel clip.exe
bind-key -T copy-mode MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel clip.exe
bind-key -T copy-mode M-w send-keys -X copy-pipe-and-cancel clip.exe
bind-key -T copy-mode-vi C-j send-keys -X copy-pipe-and-cancel clip.exe
bind-key -T copy-mode-vi Enter send-keys -X copy-pipe-and-cancel clip.exe
bind-key -T copy-mode-vi y send-keys -X copy-pipe-and-cancel clip.exe
bind-key -T copy-mode-vi MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel clip.exe
thanks for your help.
I have the same bind-key settings as you expect. I have compared the config running on Windows Terminal (WSL/Ubuntu shell) and the stock WSL/Ubuntu application. Windows Terminal seems to account for the issues.
Using the WSL app terminal I am able to click and drag pane boundaries and highlight and select text with the ENTER key. That selection is available in the system clipboard.
Strangely, when pasting into an IPython shell, I get an extra 4 spaces for each line, just as though I were pasting into Vim without :set paste. Is this something to do with copy-mode-vi?
Ok, please excuse that last comment.
It's clearly an IPython setting. Like Vim, one must prepare the shell for multiline paste by toggling %autoindent.
I'm coming from a Mac and iTerm2 and haven't used :set paste in years. The dotfiles are the same so I will have to investigate somewhere else for where paste mode is automatically managed on my Mac/iTerm.
Thanks for your help @gpakosz. Your tmux config is excellent; it's just the Windows Terminal options that are a letdown.
Yeah I'm afraid all you're describing is outside the realm of my configuration.
Once bindings are configured, all the heavy lifting is done by tmux piping the selected text to clip.exe which is supposed to transfer it to the system clipboard. And pasting from the system clipboard is also unrelated to the bindings settings.
Hello, I'm having this same issue with Kali in a VirtualBox scenario. I am not sharing the clipboard, xclip is installed, and the output of
tmux list-keys | grep -E 'copy-selection|copy-pipe'
is as you said. I really don't know what is wrong. Thank you.
| gharchive/issue | 2019-06-28T02:24:02 | 2025-04-01T06:44:22.211309 | {
"authors": [
"LaiKash",
"eL0ck",
"gpakosz",
"pwnslinger"
],
"repo": "gpakosz/.tmux",
"url": "https://github.com/gpakosz/.tmux/issues/259",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2439439774 | Feature Request: View Coordinates
I think it would be really useful (and, I think, not really hard to implement) if, when I click a point on my route, along with the "Start loop here" and "Delete" options, it displayed the latitude and longitude coordinates of the point.
@Sklyvan not sure if that works for you. But you could just use the "Point of Interest" tool to find the coordinates.
Thanks, I will use that.
| gharchive/issue | 2024-07-31T08:23:50 | 2025-04-01T06:44:22.248661 | {
"authors": [
"Sklyvan",
"nevvkid"
],
"repo": "gpxstudio/gpx.studio",
"url": "https://github.com/gpxstudio/gpx.studio/issues/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
297252332 | Introduce unit tests with jest
This PR adds jest as a development dependency. This is enough for unit testing and mocks. You can now start tests with npm test. This will run all tests and the linter (standard). You can also execute npm run test:watch to start tests with a watcher, which re-runs tests on file changes.
Here are the use cases that were tested for handlePullRequestChange so far:
creates pending status if PR title contains wip
creates pending status if PR title contains WIP
creates pending status if PR title contains do not merge case insensitive
creates success status if PR title does NOT contain wip
There are many more things that can be tested but that's a good base to start with. As there were other people who wanted to contribute, I leave other use cases for them to cover ✌️
Also, unit tests are now part of the pipeline. Travis already picked up npm test as the common way to run tests for Node.js apps, so you have it automated for free. All green: https://travis-ci.org/gr2m/wip-bot/builds/341634622
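For reference, the scripts described above would look roughly like this in package.json (a hedged sketch — the jest version and exact script wiring are illustrative, not the PR's actual diff):

```json
{
  "scripts": {
    "test": "standard && jest",
    "test:watch": "jest --watch"
  },
  "devDependencies": {
    "jest": "^22.0.0"
  }
}
```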
He he, some commit messages contain wip because they describe work that was done; that is why the WIP check is not passing ;)
thanks @HeeL! I'll get back to you soon, just have to catch up with work first, sorry for the delay
| gharchive/pull-request | 2018-02-14T21:31:38 | 2025-04-01T06:44:22.252900 | {
"authors": [
"HeeL",
"gr2m"
],
"repo": "gr2m/wip-bot",
"url": "https://github.com/gr2m/wip-bot/pull/37",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1132325423 | 🐛 [Bug] Autochangelog always contains some commits
🐛 Bugreport
Autochangelog always contains some commits
(see last two entries)
It is unclear why. Please investigate.
Furthermore, this ticket should be used to find out how we must formulate our PRs to get a valuable changelog.
this is fixed
| gharchive/issue | 2022-02-11T10:41:56 | 2025-04-01T06:44:22.290439 | {
"authors": [
"ulfgebhardt"
],
"repo": "gradido/gradido",
"url": "https://github.com/gradido/gradido/issues/1454",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1222878380 | Add pending OSS project
Signed-off-by: Jerome Prinet jprinet@gradle.com
Kotlin uses dependency verification, meaning a fork is required to run the experiments with the scripts, as it introduces an extra dependency on the GE Gradle plugin 3.10.
| gharchive/pull-request | 2022-05-02T13:17:49 | 2025-04-01T06:44:22.306829 | {
"authors": [
"jprinet"
],
"repo": "gradle/gradle-enterprise-oss-projects",
"url": "https://github.com/gradle/gradle-enterprise-oss-projects/pull/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
200318362 | EclipsePlugin sets incorrect javaRuntimeName for older target runtimes
Expected Behavior
The convention mapping should set the following javaRuntimeName depending on the current targetCompatibility:
JavaSE-1.8
JavaSE-1.7
JavaSE-1.6
J2SE-1.5
J2SE-1.4
J2SE-1.3
J2SE-1.2
JRE-1.1
These values are declared as defaults in the Eclipse platform.
Current Behavior
The convention mapping sets javaRuntimeName to JavaSE-1.x.
Context
https://discuss.gradle.org/t/building-with-newer-jdks/21102/5
Steps to Reproduce (for bugs)
Import a Java project into Buildship with targetCompatibility=1.5 configuration.
The release branching is coming very soon so I've assigned this story to the next release.
LGTM 👍
Thanks for the review, @jjohannes!
| gharchive/issue | 2017-01-12T09:35:16 | 2025-04-01T06:44:22.310747 | {
"authors": [
"donat",
"jjohannes"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/issues/1148",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
208425145 | PluginRepositoriesSpec does not have the helper methods RepositoryHandler has
If I want to define mavenLocal(), mavenCentral(), jcenter(), and so on as a plugin repository, there are no helper methods to do so. When deploying to one of those, you can also deploy the plugin marker artifacts, especially if you want to use a local build of a Gradle plugin.
It would be nice if the PluginRepositoriesSpec interface had the same helper methods that are also provided by the RepositoryHandler interface.
I am still not fully understanding the use case. Can you please give me concrete code example where you'd want this to work?
Assume I develop a build that uses a Gradle plugin. Now I need to modify the plugin to extend it. I want to test it in my consuming build. So I deploy the plugin to maven local. In the consuming build it would now be nice to be able to use mavenLocal(). The same for the others: if someone decides to deploy their plugin to maven central or jcenter but not to plugin central, it would be easily usable. I was not aware plugin central mirrors those. If this persists, maybe only mavenLocal makes sense.
I'd use TestKit to test your plugin under development in different consuming scenarios. That's basically the use case you are trying to fulfill except that you do it manually. Have you considered using TestKit?
Sure, TestKit is great for formulating automated integration tests, and the plugin in question also uses it. But that's not the point. I wanted to (a) test it in my real use case and (b) use it already with a local build until the maintainer accepted and released my PR.
| gharchive/issue | 2017-02-17T12:16:14 | 2025-04-01T06:44:22.314005 | {
"authors": [
"Vampire",
"bmuschko"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/issues/1422",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
717623765 | Enable Groovy plugin to leverage toolchains
The Groovy compile tasks should be able to consume a toolchain instead of an executable/java_home. This includes making the Groovy compile daemon compatible as well.
And remove docs stating otherwise
Groovydoc support will be added through #15283
| gharchive/issue | 2020-10-08T19:59:42 | 2025-04-01T06:44:22.315909 | {
"authors": [
"bmuskalla",
"ljacomet"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/issues/14801",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
943829571 | Misleading error when using map on output property inside a task's action
Given:
abstract class MyTask extends DefaultTask {
@OutputDirectory
abstract DirectoryProperty getOutputDirectory()
@TaskAction
void printIt() {
def mapped = getOutputDirectory().map { it.asFile }
println "outputDirectory = " + mapped.get()
}
}
task mytask(type: MyTask) {
outputDirectory = layout.projectDirectory.dir("foo")
}
gradle mytask fails with an error.
Expected Behavior
Since outputDirectory is an output for the task being executed, this should not be an error. The task is not consuming the output of itself.
Current Behavior
gradle mytask fails with:
Querying the mapped value of task ':mytask' property 'outputDirectory' before task ':mytask' has completed is not supported
Context
Reported by Cedric: https://gradle-community.slack.com/archives/CA745PZHN/p1625653281217800
The easiest workaround is to avoid the use of .map in the task action. You can get the value from outputDirectory without an error/warning.
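A minimal sketch of that workaround, in the same shape as the reproducer above, reads the property directly instead of going through a mapped provider:

```groovy
abstract class MyTask extends DefaultTask {
    @OutputDirectory
    abstract DirectoryProperty getOutputDirectory()

    @TaskAction
    void printIt() {
        // Querying the property itself is fine; only the .map()-ed
        // provider triggers the "before task has completed" error.
        println "outputDirectory = " + outputDirectory.get().asFile
    }
}
```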
Steps to Reproduce
See example
Your Environment
This occurs in all environments.
I guess we still want this.
I've also faced this issue. An interesting thing: getOutputDirectory().map { it.file("blah-blah.txt") } doesn't work, while getOutputDirectory().file("blah-blah.txt") does.
I was having an instance of Property<Directory>, which doesn't have a method like DirectoryProperty#file, so I had to convert the property into a DirectoryProperty 😢
Thank you for providing a valid reproducer.
The issue is in the backlog of the relevant team and prioritized by them.
| gharchive/issue | 2021-07-13T21:08:52 | 2025-04-01T06:44:22.320740 | {
"authors": [
"ALikhachev",
"big-guy",
"jbartok",
"lptr"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/issues/17704",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1350651404 | Configuration cache report misses implicit usages of project methods, resulting to a crash when reusing the cache
Expected Behavior
The configuration cache report includes the violation of the (implicit) runtime usage of project. If there are implicit usages of the project methods, the report should list them and the build should fail.
Current Behavior
Gradle doesn't warn you on the first invocation. On the second invocation with the configuration cache, it fails as it can't find the exec method:
./gradlew :broken-with-configuration-cache
Configuration cache is an incubating feature.
Calculating task graph as no configuration cache is available for tasks: :broken-with-configuration-cache
> Task :broken-with-configuration-cache
task executed
BUILD SUCCESSFUL in 648ms
1 actionable task: 1 executed
Configuration cache entry stored.
./gradlew :broken-with-configuration-cache
Configuration cache is an incubating feature.
Reusing configuration cache.
> Task :broken-with-configuration-cache FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':broken-with-configuration-cache'.
> Could not find method exec() for arguments [build_7aih72w7tlo707xqlss3tj1se$_run_closure1$_closure2$_closure3@284d3ec2] on task ':broken-with-configuration-cache' of type org.gradle.api.DefaultTask.
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 335ms
1 actionable task: 1 executed
Configuration cache entry reused.
Context
We enabled the configuration cache and we have several tasks that have these implicit usages. While we were able to verify that most tasks were working by running them once (as Gradle includes the report and fails the build if it can find any problems), for these implicit usages we are unable to track them. This leads to a whack-a-mole situation, where even new tasks we write and thought would work sometimes start failing on the second invocation. This breaks the workflows of our engineers, which we would like to prevent from happening.
Steps to Reproduce
See attached archive for the project: gradle-bug.zip
Define a task which uses an implicit method of project during execution, for example exec in a doLast (:broken-with-configuration-cache in the reproduction project)
Run the task once
Run the task again
In step 2, I expect the task to fail, as it is incompatible with the configuration cache, similar to what :correct-report-with-configuration-cache would show on the first run:
./gradlew :correct-report-with-configuration-cache
Configuration cache is an incubating feature.
Calculating task graph as no configuration cache is available for tasks: :correct-report-with-configuration-cache
> Task :correct-report-with-configuration-cache
task executed
FAILURE: Build failed with an exception.
* Where:
Build file '/Users/timvand/Projects/gradle-bug/build.gradle' line: 11
* What went wrong:
Configuration cache problems found in this build.
1 problem was found storing the configuration cache.
- Build file 'build.gradle': invocation of 'Task.project' at execution time is unsupported.
See https://docs.gradle.org/7.4.2/userguide/configuration_cache.html#config_cache:requirements:use_project_during_execution
See the complete report at file:///Users/timvand/Projects/gradle-bug/build/reports/configuration-cache/5gzym0itzjfat8q69crlpr29o/ag6eq5yzqogrsnzzk4fhugh04/configuration-cache-report.html
> Invocation of 'Task.project' by task ':correct-report-with-configuration-cache' at execution time is unsupported.
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 397ms
1 actionable task: 1 executed
Configuration cache entry discarded with 1 problem.
For posterity, here are the definitions without needing to download the attached archive:
tasks.register('broken-with-configuration-cache') {
doLast {
exec {
commandLine "echo", "task executed"
}
}
}
tasks.register('correct-report-with-configuration-cache') {
doLast {
project.exec {
commandLine "echo", "task executed"
}
}
}
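For completeness, a configuration-cache-compatible variant would inject ExecOperations instead of touching the project at execution time. This is a hedged sketch (the task and name are illustrative, not part of the original reproducer):

```groovy
import javax.inject.Inject
import org.gradle.process.ExecOperations

// ExecOperations is an injectable service that is safe to use in a
// task action with the configuration cache, unlike the implicit
// project.exec.
abstract class EchoTask extends DefaultTask {
    @Inject
    abstract ExecOperations getExecOperations()

    @TaskAction
    void run() {
        execOperations.exec {
            it.commandLine "echo", "task executed"
        }
    }
}
tasks.register("works-with-configuration-cache", EchoTask)
```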
Related to #20126.
| gharchive/issue | 2022-08-25T10:03:33 | 2025-04-01T06:44:22.326423 | {
"authors": [
"TimvdLippe",
"bamboo"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/issues/21671",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1534091280 | Document that JUnit Jupiter is the default testing framework for JvmTestSuite
Expected Behavior
I expect to see it specified somewhere that JUnit Jupiter is the default/convention testing framework for JvmTestSuite. As a result, attempting to configure a Test task's testing framework after the test suite framework is set results in an error.
I think this is something that should ideally be mentioned in the release notes for Gradle 8 or some callout in the documentation itself.
Current Behavior
Before Gradle 8, this worked™:
plugins {
java
}
testing {
suites {
register("functionalTest", JvmTestSuite::class) {
dependencies {
implementation(project())
}
targets.configureEach {
testTask.configure {
useJUnitPlatform {
}
}
}
}
}
}
Starting with Gradle 8 it does not and needs to be changed:
plugins {
java
}
testing {
suites {
register("functionalTest", JvmTestSuite::class) {
dependencies {
implementation(project())
}
targets.configureEach {
testTask.configure {
- useJUnitPlatform {
-
- }
+ options {
+ this as JUnitPlatformOptions
+ // configure test framework options
+ }
}
}
}
}
}
Changes internally were done in https://github.com/gradle/gradle/pull/22177, which does mention the behavior in the upgrade notes from Gradle 7. However, that is for the built-in test suite, not for ones created by the user.
Context
While upgrading my plugin to test against Gradle 8, I encountered the error described in https://github.com/gradle/gradle/issues/18606. I resolved the issue based on the comments; however, I noticed that for JvmTestSuites, it is not documented anywhere that JUnit Jupiter is the convention when no test framework is specified.
The current (7.6 of this writing) docs do not specify the convention:
https://docs.gradle.org/7.6/dsl/org.gradle.api.plugins.jvm.JvmTestSuite.html
https://docs.gradle.org/7.6/userguide/jvm_test_suite_plugin.html#jvm_test_suite_plugin
The only documentation is in the code itself:
https://github.com/gradle/gradle/blob/v7.6.0/subprojects/plugins/src/main/java/org/gradle/api/plugins/jvm/internal/DefaultJvmTestSuite.java#L142
And in Gradle 8 with added (helpful) comments and changes:
https://github.com/gradle/gradle/blob/v8.0.0-RC1/subprojects/plugins/src/main/java/org/gradle/api/plugins/jvm/internal/DefaultJvmTestSuite.java#L221...L226
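Until the documentation spells this out, the convention can be made explicit in the suite declaration itself (Kotlin DSL, mirroring the snippets above — the suite name is illustrative):

```kotlin
testing {
    suites {
        register("functionalTest", JvmTestSuite::class) {
            // JUnit Jupiter is already the (undocumented) convention for
            // new suites; declaring it explicitly documents the choice and
            // keeps later framework configuration from tripping the
            // Gradle 8 check.
            useJUnitJupiter()
        }
    }
}
```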
Thank you for your interest in Gradle!
This is a valid documentation issue which we will address.
| gharchive/issue | 2023-01-16T01:05:24 | 2025-04-01T06:44:22.333570 | {
"authors": [
"ciscoo",
"jbartok"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/issues/23544",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1741937701 | More details when "CC cannot be reused because an input to plugin 'x.y.z' has changed."
Expected Behavior
Something that tells me exactly what has changed. Right now I know of no way to diagnose this issue.
A flag similar to -Dorg.gradle.caching.debug=true would be good-enough if an exact "folder ... was created" is not feasible.
Current Behavior (optional)
Calculating task graph as configuration cache cannot be reused because an input to plugin 'com.android.internal.application' has changed.
Here's what I tried:
I started diffing two files:
build/reports/configuration-cache/wbcggdq5fhgr2f2ntqfjfm1m/484g49auzyz1nkuo1m00nvc95/configuration-cache-report.html
build/reports/configuration-cache/wbcggdq5fhgr2f2ntqfjfm1m/20pm7fh6vnarhfzp1suoj1a4z/configuration-cache-report.html
until I realized that the second execution was just the storing of the second configuration phase's result.
Is there a diagnostic flag that outputs the hashing process? I tried -Dorg.gradle.caching.debug=true but no output.
I debugged where the error message is coming from in deep Gradle internals
this kind-of gives a hint, but it doesn't make sense, since there's annotation processor registered (kapt plugin is applied), and the fileSystemInputs.files is empty, so I'm not sure how its hash can differ.
Context
I found this issue reference in the code, but I cannot see Gradle internal issues:
https://github.com/gradle/configuration-cache/issues/282
Consider this as an upvote, and if the issue exists publicly, please close this as a duplicate.
I cannot provide an isolated repro, because it's a Gradle Test Kit test with a complex setup.
Thanks for the report, @TWiStErRob.
since there's annotation processor registered (kapt plugin is applied), and the fileSystemInputs.files is empty, so I'm not sure how its hash can differ.
I wonder if there are empty (versus missing) child directories involved. 🤔
Sorry, typo there, no annotation processor registered, only the plugin applied. Your guess is still valid though. Because this is a test project, the first build always runs without a buildDirectory, and the second is incremental.
Anyway, the issue is that this should be clearly visible in some way, without attaching a debugger to see Gradle internal memory.
Thank you for your interest in Gradle!
This feature request is in the backlog of the relevant team and is prioritized by them.
I wanted to echo that this would be very helpful for the androidx team as well. We had to debug it with a debugger, and it was quite painful to do without first understanding Gradle's internal configuration cache code.
| gharchive/issue | 2023-06-05T14:44:15 | 2025-04-01T06:44:22.341419 | {
"authors": [
"TWiStErRob",
"bamboo",
"jbartok",
"liutikas"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/issues/25282",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2031154607 | Provide a test fixture to inspect Build cache in tests
It would be nice if, along with Build Cache NG, we also provided a nice test fixture to easily inspect build cache content in tests. This should not depend on the implementation details of the cache.
For example, we check the content of the cache in OverlappingOutputsIntegrationTest, CacheTaskOutputIntegrationTest, and CachedTaskExecutionIntegrationTest.
We introduced BuildCacheOperationFixtures to inspect operations, but we don't provide an option to inspect content there.
So I think we need some more work here.
| gharchive/issue | 2023-12-07T16:55:10 | 2025-04-01T06:44:22.344080 | {
"authors": [
"asodja"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/issues/27319",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2337603118 | Autoconfigure the java-gradle-plugin with jvm-test-suite
Expected Behavior
// build.gradle.kts
plugins {
`kotlin-dsl`
id("jvm-test-suite")
}
testing.suites {
register("integrationTest", JvmTestSuite::class) {
useKotlinTest()
testType.set(TestSuiteType.INTEGRATION_TEST)
}
}
Current Behavior (optional)
// build.gradle.kts
plugins {
`kotlin-dsl`
id("jvm-test-suite")
}
testing.suites {
register("integrationTest", JvmTestSuite::class) {
useKotlinTest()
testType.set(TestSuiteType.INTEGRATION_TEST)
dependencies {
implementation(gradleTestKit())
}
gradlePlugin.testSourceSet(sources)
}
}
Context
The Gradle Java (or Kotlin) plugin for plugin authors does add the test source set and the Gradle TestKit to the testImplementation dependency configuration by default. But the test source set should be used for unit tests, and you should use a custom source set/test suite for actually running the Gradle build, according to the best practice mentioned in your documentation: https://docs.gradle.org/current/userguide/testing_gradle_plugins.html#setting_up_automated_tests
So I expect the Gradle plugin to set up the dependencies/testSourceSet automatically.
I can provide a PR, if you want.
Gradle should not impose where (and if) TestKit should be used for testing plugins.
In addition, this can be achieved through templates (like this one), convention plugins, internal to an organization or public, giving users the option to agree to such a convention.
So, Gradle will not be updated to do this by default.
Gradle should not impose where (and if) TestKit should be used for testing plugins.
But the Gradle plugin adds the test source set to testSourceSets and configures the pluginMetadataDescriptor. How is this different?
| gharchive/issue | 2024-06-06T08:01:54 | 2025-04-01T06:44:22.348747 | {
"authors": [
"hfhbd",
"ljacomet"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/issues/29434",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
369163657 | Fail fast if source folders are prefixes of each other
gradle version: v4.10.2
Create an anonymous class implementing an interface.
Compile one time.
Change a method name in the interface.
Build again
The incremental build will fail, asking for the deleted method.
This is a regression from Gradle v4.10.
With Gradle v4.9 it works very well.
Please provide a reproducible sample project.
I cannot reproduce the problem on a simple project.
The bug appears only on my project, which is quite complex.
The project which contains the modified interface does not update the .class in the build/classes folder.
however i got this message:
Task :java:index6:compileJava UP-TO-DATE
file or directory 'C:\ng\src\svn\platform\apollo\index6\src\main\java', not found
Build cache key for task ':java:index6:compileJava' is 4db1a98a9d3506238fbcac22b75e543d
Task ':java:index6:compileJava' is not up-to-date because:
Input property 'source' file C:\ng\src\svn\platform\apollo\index6\index\src\com\exalead\index\compact\IndexCompactService.java has changed.
Input property 'source' file C:\ng\src\svn\platform\apollo\index6\index\src\com\exalead\index\resources\IndexResourcesManager.java has changed.
Created classpath snapshot for incremental compilation in 0.154 secs. 937 duplicate classes found in classpath (see all with --debug).
Class dependency analysis for incremental compilation took 0.151 secs.
file or directory 'C:\ng\src\svn\platform\apollo\index6\src\main\java', not found
None of the classes needs to be compiled! Analysis took 0.328 secs.
:java:index6:compileJava (Thread[main,5,main]) completed. Took 1.159 secs.
:java:index6:classes (Thread[main,5,main]) started.
Looks like you have defined both src and src/main/java as a source folder, which can't work, because one is a prefix of the other. If you want your source folder to be src, then you have to overwrite the default src/main/java instead of just adding to it.
We should fail much earlier in such scenarios.
You point to something interesting. When I created a very small Java Gradle project, I tested 3 sourceSets definitions:
1)
sourceSets {
main.java.srcDirs = ['src']
}
2)
sourceSets {
main {
java {
srcDirs = ['src']
}
}
}
3)
sourceSets {
main {
java {
srcDir 'src'
}
}
}
But on the last one (and only on the last one), I get this message:
Task :p2:compileJava UP-TO-DATE
file or directory 'C:\Users\qft1\Desktop\gradleTest\p2\src\main\java', not found
Skipping task ':p2:compileJava' as it is up-to-date.
:p2:compileJava (Thread[Task worker for ':' Thread 3,5,main]) completed. Took 0.101 secs.
In my project I only used the last syntax, on every project.
I'll try to change the sourceSets in my project as you mention and give you the result.
The only thing that matters is the srcDirs = ['src'] part. Everything else is just variations of one another.
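In other words, the replacing form looks like this (assignment overwrites the default src/main/java, while srcDir 'src' only adds on top of it):

```groovy
sourceSets {
    main {
        java {
            // '=' replaces the default source folders;
            // srcDir 'src' would add 'src' alongside src/main/java.
            srcDirs = ['src']
        }
    }
}
```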
Ok, I understand the problem now.
The variations are not all equal.
With the following variation:
sourceSets { main { java { srcDir 'src' } } }
The source dir is indeed added, but the default src/main/java is still used. Therefore, I get the message
'C:\Users\qft1\Desktop\gradleTest\p2\src\main\java', not found
Additionally, with this variation, the cache system is still looking at the missing path and doesn't rebuild the current project. And so incremental compilation was broken.
In my project, the broken variation is the most important one, because I want to specify a pattern of inclusion or exclusion for every single folder that I specify. (The structure of the project is quite complicated; we are still migrating from a very old build system.)
So with srcDirs = ['src'], incremental compile works?
yes with srcDirs it's working
Interesting - it shouldn't fail because of that, but we don't have test coverage for overlapping source folders, so it slipped through.
| gharchive/issue | 2018-10-11T14:59:14 | 2025-04-01T06:44:22.360497 | {
"authors": [
"oehme",
"quentinflahaut"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/issues/7341",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
192530065 | Composite builds should handle included builds where dependency versions are set by resolution rules
Expected Behavior
When running a composite build from a project referencing another project as an included build,
each project (the "main" project and the included project) should be built separately with their own resolutionStrategies as defined in their own build.gradle or their own applied plugins.
Current Behavior
The resolutionStrategy of the "main" project (the project that gradle build is run on) is applied to all other projects (included builds).
The resolutionStrategies of included projects are ignored.
Context
I use dependency-management-plugin in some of my projects. Applying this plugin creates a resolutionStrategy that resolves dependency versions.
In those build files that apply the plugin, I declare dependencies without versions, such as compile 'org.slf4j:slf4j-api', counting on the resolutionStrategy created by the plugin to determine the version.
If such a project is included with --included-build in another project that I try to build, the build fails because the resolutionStrategy of the included-build is ignored and the dependency version cannot be resolved.
This bug is general and not limited to the use of plugins.
Directly declaring a resolutionStrategy in build.gradle is ignored when this build is an included-build of another project.
Steps to Reproduce
create two separate projects: "app" and "common"
build.gradle of the "app" project:
apply plugin: 'java'
apply plugin: 'maven'

repositories {
    mavenLocal()
    mavenCentral()
}

dependencies {
    compile 'deps:common:1.0'
}
build.gradle of the "common" project:
apply plugin: 'java'

group = 'deps'
version = '1.0'

repositories {
    mavenCentral()
}

configurations.all {
    resolutionStrategy {
        force 'org.slf4j:slf4j-api:1.7.5'
    }
}

dependencies {
    compile 'org.slf4j:slf4j-api'
}
Running a composite build on the "app" project fails:
user:/var/localwork/dependency-example/app$gradle build --include-build ../common
[composite-build] Configuring build: /var/localwork/dependency-example/common
:compileJava FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Could not resolve all dependencies for configuration ':compileClasspath'.
> Could not find org.slf4j:slf4j-api:.
Searched in the following locations:
file:/home/user/.m2/repository/org/slf4j/slf4j-api//slf4j-api-.pom
file:/home/user/.m2/repository/org/slf4j/slf4j-api//slf4j-api-.jar
https://repo1.maven.org/maven2/org/slf4j/slf4j-api//slf4j-api-.pom
https://repo1.maven.org/maven2/org/slf4j/slf4j-api//slf4j-api-.jar
Required by:
project : > project :common:
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Total time: 2.09 secs
Your Environment
------------------------------------------------------------
Gradle 3.2.1
------------------------------------------------------------
Build time: 2016-11-22 15:19:54 UTC
Revision: 83b485b914fd4f335ad0e66af9d14aad458d2cc5
Groovy: 2.4.7
Ant: Apache Ant(TM) version 1.9.6 compiled on June 29 2015
JVM: 1.8.0_91 (Oracle Corporation 25.91-b14)
OS: Linux 3.19.0-32-generic amd64
This is expected. Dependency resolution happens isolated in each configuration, this is not specific to composite builds.
Only the resolution strategy of the configuration being resolved is taken into account.
I understand that the behaviour may be expected given the current design, but, in that case, I'd argue that the design doesn't meet the expectations set by the description of the feature. The documentation describes composite builds as allowing you to
combine builds that are usually developed independently, for instance when trying out a bug fix in a library that your application uses
Without a composite build, the usual way to try out a bug fix would be to build the library locally and install it into a repository, my local Maven cache for example. I'd then update my application to use this new, patched version of the library. Crucially, this would result in my application using the patched artifact's pom for dependency resolution. This doesn't happen with the current approach taken by composite builds and, as a result, the behaviour can be quite different.
If the intention is to keep the current, limited behaviour could this section of the documentation please be updated to reflect that?
I think you're overstating this problem: the things that aren't considered when resolving an artifact from an imported build are the repositories declared for the importing build, and the resolution strategy, which defines things like rules and cache timeouts. The dependency metadata for the imported build is definitely used when resolving.
I don't see any difference from the "publish to local repo" use case that you described. Am I missing something?
I don't want to understate the problem either: the core issue is that there are a number of ways that the published metadata can be tweaked for a build, such that the published POM contains information that isn't included in the dependency declarations. These modifications are not taken into account for an included build. Unfortunately, many of these modifications can be quite opaque to Gradle.
The intention is to continue to enhance composite builds so that more of the information that ends up in a published pom or ivy.xml is used when resolving an included build. We'd like to do something similar for Gradle projects included in a multi-project build.
I don't see any difference from the "publish to local repo" use case that you described. Am I missing something?
The dependency management plugin (linked to in the original comment in this issue) uses a resolution strategy to set dependency versions. It also enhances the generated pom to include that version information.
When using a composite build, neither the resolution strategy nor the generated pom is used, so dependency resolution fails. When publishing to a local repo, the generated pom, complete with additional version information, is used, so dependency resolution succeeds.
The intention is to continue to enhance composite builds so that more of the information that ends up in a published pom or ivy.xml is used when resolving an included build
This is the missing piece in the puzzle, and is what I think should be documented as a limitation.
Right, I think that "No support for included builds that have publications that don't mirror the project default configuration" and this section of the docs was intended to convey this message. Would you mind submitting a PR that would make this clearer to you?
10.3.2 does indeed convey the message. Sorry, not sure how I missed that earlier. Thanks for bearing with me.
I think it's fair to say that our docs can be impenetrable at times. Any feedback or suggestions for improvement is welcome.
Are there any specific plans to support this feature? I believe composite builds are a great tool, even better with the IDEA 2016.3 integration. But our projects use the spring-boot-gradle-plugin, which in turn uses the dependency-management-plugin mentioned above, so they don't work properly with composite builds.
Or, is there any known workaround?
I'm reopening this with a slightly changed title. While the problem is simple to solve within a single build (just declare the same substitution rules in an allProjects() block), there is no good way to do so for a composite.
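A sketch of that single-build workaround, reusing the forced slf4j version from the example earlier in this thread (note the block is spelled `allprojects` in the Gradle DSL; as described here, it only propagates within one multi-project build, not into builds pulled in via --include-build):

```groovy
// Root build.gradle of a single multi-project build: repeat the
// resolution rule for every subproject. Included (composite) builds
// do NOT pick this up, which is the gap described in this issue.
allprojects {
    configurations.all {
        resolutionStrategy {
            force 'org.slf4j:slf4j-api:1.7.5'
        }
    }
}
```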
@oehme Any news on this? Our project makes heavy use of Spring's dependency-management plugin, and we would like to use composite builds to simplify our development cycle.
Are there any known workarounds we can use to test and debug locally with included builds?
We are currently working towards dependency constraints as well as BOM support directly in Gradle. I'll keep this issue updated.
To provide a little more background: Resolution rules were not meant as recommenders of versions, but to correct wrong metadata (e.g. version ranges that were too strict). Instead we intend to model constraints such as a 'recommended' version or a 'platform that belongs together' more declaratively and map them to the appropriate Maven concepts where applicable. That way they'll work with composite builds and the generated Poms will need less tweaking by the user.
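A minimal sketch of what such a declarative version constraint looks like in Gradle's Groovy DSL (available since 4.6; the slf4j version here is illustrative):

```groovy
// build.gradle of the library: the dependency itself stays versionless,
// while a constraint recommends the version. Unlike a resolutionStrategy
// rule, constraints can become part of the published metadata, so
// consumers (including composite builds) can see them.
dependencies {
    constraints {
        implementation 'org.slf4j:slf4j-api:1.7.25'
    }
    implementation 'org.slf4j:slf4j-api'
}
```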
Hi,
it's great to see this issue being worked on, as I just ran into a similar limitation with composite builds: the included build used a repository linking directly to GitHub (jitpack.io, to be complete), which the main project didn't use, so the dependencies of the included project could not be resolved.
As a quick&dirty solution, I added the additional repo to the main project, now it works. But without that change (for example, if I didn't know the included code as well as I do), it would be harder to integrate.
Adding a further remark: From the naive user perspective, I'd expect a composite build to just build the included project as if it were compiled directly, and include the results... That would (a) prevent the issue above and would (b) also allow an included project to itself include a composite build, which would add flexibility. (For example, I have 3 distinct projects: one is a general purpose library, one is a software using that library, and one 'uber'-software that is using that software...)
(side-notice, just to be clear: composite builds are a really, really great feature, thank you for that!)
Gradle 4.6 has added support for constraints and BOMs, see the release notes for more info.
I'm closing this issue because there is now a way to have recommended versions work together with composites. Composites won't work with resolution rules, but that's unlikely to change and resolution rules will probably be removed in the long term, as we are adding better, more declarative APIs for their use cases.
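For the spring-dependency-management use case raised earlier in this thread, the Gradle-native replacement is importing the BOM as a platform. A sketch (the BOM coordinates and version are illustrative; on Gradle 4.6 to 4.x this needed the improved POM support feature preview, and it is standard from 5.0):

```groovy
// Consumer build.gradle: import the BOM so that its
// dependencyManagement entries become dependency constraints.
dependencies {
    implementation platform('org.springframework.boot:spring-boot-dependencies:2.1.0.RELEASE')
    // The version is then supplied by the imported BOM.
    implementation 'org.slf4j:slf4j-api'
}
```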
@oehme well kind of, using that feature does not generate a correct pom though.. #4543
@oehme Sorry to resurrect an old thread, but what are the "better, more declarative APIs" that will be replacing Resolution Rules?
The user guide gives a good overview of them now.
Gradle 4.6 has added support for constraints and BOMs, see the release notes for more info.
I'm closing this issue because there is now a way to have recommended versions work together with composites. Composites won't work with resolution rules, but that's unlikely to change and resolution rules will probably be removed in the long term, as we are adding better, more declarative APIs for their use cases.
Correct me if I am wrong, but there is still no way to share exclusions of transitive dependencies, even if they are defined in a platform and imported. So if composite builds are not handling resolution rules, how should that be done?
The user guide gives a good overview of them now.
The link is broken.
Is it true that there is still no workaround for using composite builds with the spring-dependency-management plugin?
| gharchive/issue | 2016-11-30T10:34:29 | 2025-04-01T06:44:22.382532 | {
"authors": [
"alexei-osipov",
"avonengel",
"bigdaz",
"brainbytes42",
"dorongold",
"oehme",
"rhoughton-pivot",
"rupebac",
"vehovsky",
"vitjouda",
"wilkinsona"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/issues/947",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
937822633 | Make ProjectConnection.notifyDaemonsAboutChangedPaths() no-op for uninitialized connections
Fixes #17623
@bot-gradle test QuickFeedbackLinux
OK, I've already triggered QuickFeedbackLinux build for you.
Let's wait until we get more feedback from the disconnect changes
| gharchive/pull-request | 2021-07-06T12:01:09 | 2025-04-01T06:44:22.385565 | {
"authors": [
"big-guy",
"bot-gradle",
"donat"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/pull/17624",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1033020456 | Use DetachedResolver to resolve Gradle libs in IDE plugins
Fixes #15732
Trivial change so no unit or integration tests. Just existing test suite.
Also no public API changes, so no changes in docs.
Context
Allows using RepositoriesMode.FAIL_ON_PROJECT_REPOS in dependencyResolutionManagement when running/syncing from the IDE. Also removes a lot of warnings in the case of RepositoriesMode.PREFER_SETTINGS in the same situation.
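For context, this is the centralized repository declaration that the change makes usable from the IDE (a sketch of settings.gradle in the Groovy DSL):

```groovy
// settings.gradle: declare repositories once for all projects.
// With FAIL_ON_PROJECT_REPOS, a repositories { } block inside any
// build.gradle becomes a build error instead of a warning.
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        mavenCentral()
    }
}
```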
Contributor Checklist
[x] Review Contribution Guidelines
[x] Make sure that all commits are signed off to indicate that you agree to the terms of Developer Certificate of Origin.
[x] Make sure all contributed code can be distributed under the terms of the Apache License 2.0, e.g. the code was written by yourself or the original code is licensed under a license compatible to Apache License 2.0.
[x] Check "Allow edit from maintainers" option in pull request so that additional changes can be pushed by Gradle team
[x] Provide integration tests (under <subproject>/src/integTest) to verify changes from a user perspective
[ ] Provide unit tests (under <subproject>/src/test) to verify logic
[ ] Update User Guide, DSL Reference, and Javadoc for public-facing changes
[x] Ensure that tests pass sanity check: ./gradlew sanityCheck
[x] Ensure that tests pass locally: ./gradlew <changed-subproject>:quickTest
Gradle Core Team Checklist
[ ] Verify design and implementation
[ ] Verify test coverage and CI build status
[ ] Verify documentation
[ ] Recognize contributor in release notes
Just rebased on current master, since the previous version was against the state from a couple of months ago, likely before 7.3
Thank you for the contribution!
We are still struggling with keeping an eye on all community PRs, sorry this did not get better attention.
Let me see what CI thinks of it, but from a quick look at the code, this makes sense!
@bot-gradle test QF
OK, I've already triggered the following builds for you:
QuickFeedback build
Any chance it will land in 7.4? Is there some fixed feature freeze date?
@bot-gradle test and merge
No milestone for this PR. Please set a milestone and retry.
@grossws We are currently discussing this, but you are in time with this contribution 😉
@bot-gradle test and merge
No milestone for this PR. Please set a milestone and retry.
@ljacomet, would it merge if @bot-gradle writes about missing milestone on issue?
Sorry but I'm afraid you're not allowed to do this.
@bot-gradle test and merge
Your PR is queued. See the queue page for details.
OK, I've already triggered a build for you.
| gharchive/pull-request | 2021-10-21T23:25:16 | 2025-04-01T06:44:22.397115 | {
"authors": [
"bot-gradle",
"grossws",
"ljacomet"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/pull/18740",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1367537273 | Update wrapper to latest release nightly
Update wrapper to latest release nightly
@bot-gradle test and merge
OK, I've already triggered a build for you.
| gharchive/pull-request | 2022-09-09T09:20:10 | 2025-04-01T06:44:22.398980 | {
"authors": [
"bot-gradle",
"eskatos"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/pull/21947",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1673479011 | Restore kotlin-dsl precompiled scripts compatibility
Fixes https://github.com/gradle/gradle/issues/24754
@bot-gradle test ACTR
I've triggered the following builds for you:
AllCrossVersionTestsReadyForRelease build
@bot-gradle test and merge
I've triggered a build for you.
| gharchive/pull-request | 2023-04-18T16:33:16 | 2025-04-01T06:44:22.401568 | {
"authors": [
"bot-gradle",
"eskatos"
],
"repo": "gradle/gradle",
"url": "https://github.com/gradle/gradle/pull/24803",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1671849905 | No way to set jsonData.prometheusType or jsonData.prometheusVersion for datasource
This might be a feature request not a bug, I apologize if I did it wrong. I will be submitting a PR shortly that fixes this.
Describe the bug
It appears that there is no way to set jsonData.prometheusType and jsonData.prometheusVersion on datasources. These need to be set to get Grafana to work well with Google Managed Prometheus. If you don't set them, metric name auto completion lookup doesn't really work.
Version
v4.10.0
To Reproduce
Attempt to apply a CRD with jsonData.prometheusType or jsonData.prometheusVersion set.
Expected behavior
A way to configure jsonData.prometheusType and jsonData.prometheusVersion.
I finally found customJsonData which I assume will be the suggested solution.
| gharchive/issue | 2023-04-17T19:36:40 | 2025-04-01T06:44:22.404741 | {
"authors": [
"kinghrothgar"
],
"repo": "grafana-operator/grafana-operator",
"url": "https://github.com/grafana-operator/grafana-operator/issues/1008",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2293020508 | Bug: supervisord.conf generated wrong executable names for GitHub datasource
Which package(s) does this bug affect?
[X] Create Plugin
[ ] Sign Plugin
[ ] Plugin E2E
[ ] Plugin Meta Extractor
Package versions
4.10.3
What happened?
While updating the create-plugin to the latest version in the github datasource I found that the supervisord.conf generated wrong executable names for it.
Wrong
command=bash -c 'while [ ! -f /root/grafana-github-datasource/dist/gpx_git-hub* ]; do sleep 1; done; /run.sh'
Correct
command=bash -c 'while [ ! -f /root/grafana-github-datasource/dist/gpx_github* ]; do sleep 1; done; /run.sh'
What you expected to happen
I expected the right executable to be there.
How to reproduce it (as minimally and precisely as possible)
Update create-plugin in github datasource. See https://github.com/grafana/github-datasource/pull/305
Environment
System:
OS: macOS 14.4.1
CPU: (8) arm64 Apple M2
Memory: 3.77 GB / 24.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.13.1 - ~/.nvm/versions/node/v20.13.1/bin/node
Yarn: 1.22.19 - ~/.nvm/versions/node/v20.13.1/bin/yarn
npm: 10.5.2 - ~/.nvm/versions/node/v20.13.1/bin/npm
pnpm: 8.14.0 - ~/.nvm/versions/node/v20.13.1/bin/pnpm
Browsers:
Chrome: 124.0.6367.158
Safari: 17.4.1
Additional context
No response
@oshirohugo please take a look, maybe can be done as part of https://github.com/grafana/plugin-tools/pull/910 or after that.
| gharchive/issue | 2024-05-13T14:33:12 | 2025-04-01T06:44:22.604213 | {
"authors": [
"tolzhabayev",
"zoltanbedi"
],
"repo": "grafana/plugin-tools",
"url": "https://github.com/grafana/plugin-tools/issues/907",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1450330855 | Feature request: left to right gradient
Not sure whether this is something you'd be interested in but for vertical gradients I'm using this: https://gecdesigns.com/color-gradient-generator ... and while it works, it can't be compared with the features offered by your tool! :)
I could add an option to the PNG output format to save in different orientations and potentially offer different preview modes in the editor. Is this what you had in mind?
Yes, that would be sufficient. I realize that storing it as asm output would be a bit more difficult (because you are not writing palette values anymore but real pixels) so I'm totally fine with a bitmap output only for such cases.
I've added a horizontal option to the PNG output mode:
Changing the editor display too would be a bit more complex as I'd need to redo the dragging behaviour. Is this something that's important to you?
Wow, that was fast. I don't need it to be rotated, not at all.
However I have noticed one thing which was perhaps not that important for the horizontal version: differentiating between resolution and the number of steps.
The horizontal version is basically built around the idea that you can change the colour on every scanline (or every second scanline etc) from an unlimited number of colours (limited only by the palette).
However the vertical version doesn't have this luxury: if I want to create a vertical gradient, I must write pixels there. So now, if I want to create a gradient with 16 steps, I get... a 16-column image. It would be cool if one could specify the amount of pixels to write these steps into (maybe even for the horizontal version?) so one can say "hey, I want 16 colours/shades distributed over 320 pixels".
Is it asking too much? ;-)
Ok so we're talking about limiting the gradient to a distinct number of palette entries, regardless of steps. This would be really useful but I'll need to think about the best way to implement it. It needs to go in further up the chain when it's calculating the hex values, taking into account dithering, colour models etc. rather than just being part of the output module.
I would really like to implement this though so leave it with me!
Frankly, I could even work around it in my code: all it takes is to have an integral multiplier of the final value. So for instance I generate 16 vertical steps (i.e. using 16 colours), save it as 16xN PNG and every 320/16 = 20th step I just fill the same colour for 16 the next pixels.
But you are right, if you want to do this nicely (especially with the dithering in play), this has to be calculated on a higher level, too.
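That repeat-each-step workaround is trivial to script; a Groovy sketch (16 palette steps scaled to a 320-pixel row; the function name and values are made up for illustration):

```groovy
// Stretch a gradient of N colour steps over a wider pixel row by
// repeating each colour width / N times (here 320 / 16 == 20).
def expandSteps(List colours, int width) {
    int scale = width.intdiv(colours.size())
    colours.collectMany { c -> [c] * scale }
}

def row = expandSteps((0..<16).toList(), 320)
assert row.size() == 320
assert row[0..19].every { it == 0 }   // first step repeated 20 times
```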
Another thing to consider is that currently this tool only does one dimensional dithering, given that the expected output format was a single array of colour values. When we look at bitmap output formats this opens up the scope to 2D dither patterns such as bayer or Floyd Steinberg that repeat over multiple lines. This changes the scope of this tool a bit, but I'll think about whether there's a good way to incorporate this.
You could also think about stuff like two dimensional gradients :) But if we stay with the initial requirements (one line / one column gradient), I think you don't have to worry about 2D dithering at all.
Yeah the simple option would be to just add a 'scale' option to the PNG output that just repeats pixels as you described. I'd like to have a go at implementing the more 'correct' solution though :-)
Hey, any updates on this?
Sorry, I started this and completely forgot about it! I'll pick it up again this week.
I've got a working implementation of the limited palette option we discussed. It's available in a dev build on https://gradient-blaster-4kf2vtxta-grahambates.vercel.app
There are a couple of bugs to iron out, but it would be good to check if this meets you use case. To use it, enable the 'palette size' option and set the desired number of colours:
Maybe I'm not using it correctly but it produces odd results for me.
For instance, why is palette size = 16 and steps = 16 producing just one color?
Also, why do palette size = 16 and steps = 320 (i.e. an integer result of the 320/16 division) produce an irregular gradient? (even without dithering). For instance this one: https://gradient-blaster-4kf2vtxta-grahambates.vercel.app/?points=ffff00@0,0000ff@319&steps=320&blendMode=oklab&ditherMode=off&target=atariFalcon24 (set palette size to 16 afterwards) ... shouldn't it look more symmetric?
Yeah so one of the bugs I'm looking at is that it doesn't work correctly on small ranges.
For your second example, the irregular nature is due to the way that the blending and colour proximity algorithms are based on perceptual distance, rather than linear RGB values. Basically the initial blending to create the gradient usually happens in a non-RGB colour space (e.g. OKLAB by default) to give a visually correct transition rather than even RGB steps. Then the palette reducing algorithm finds the best X colours (e.g. 16) to represent that gradient. This also takes into account the way the eye perceives colour rather than just the difference in RGB values.
Hope this explains the reasoning. I'm going to keep working on resolving the remaining bugs and tweaking parameters to get better results.
Yes, that reasoning sounds good, no objections from me. However if I were after the "raw steps" in RGB space, is there a way to achieve it? I tried to switch between different colour spaces but all of them gave me this irregular steps. That would be a nice feature IMHO.
Changing 'Simple RGB' will do it for the first step i.e. the original gradient will be linear, but the palette reduction will still be perceptual. I'd need to add different modes for that too. I'll look into it.
| gharchive/issue | 2022-11-15T20:20:51 | 2025-04-01T06:44:22.617944 | {
"authors": [
"grahambates",
"mikrosk"
],
"repo": "grahambates/gradient-blaster",
"url": "https://github.com/grahambates/gradient-blaster/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1783724629 | Add clean-url option
...for vitepress.
The migration is completed
I haven't tested this. But I believe that you have tested this, and so I'll merge for now.
| gharchive/pull-request | 2023-07-01T09:23:10 | 2025-04-01T06:44:22.631652 | {
"authors": [
"CikiMomogi",
"dcdunkan"
],
"repo": "grammyjs/link-checker",
"url": "https://github.com/grammyjs/link-checker/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
791932494 | Feature request: Apply Active Effect to all tokens in region
Does what it says on the tin. When a token enters a region, apply an Active Effect. When a token leaves a region, remove it.
Could be done currently with macros, but having an integrated way to do this would be excellent (and save on clutter in my macro folder!). It would allow a ton of zone spells to be automated, which don't currently have a good solution.
A nice additional QoL feature would be the ability to apply this to only friendly or hostile tokens.
Yeah, agreed this would be nice
| gharchive/issue | 2021-01-22T11:33:45 | 2025-04-01T06:44:22.633002 | {
"authors": [
"BadIdeasBureau",
"grandseiken"
],
"repo": "grandseiken/foundryvtt-multilevel-tokens",
"url": "https://github.com/grandseiken/foundryvtt-multilevel-tokens/issues/59",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
229251365 | Access current user in functions
It would be useful to have the currently logged in user in the context of the function event object.
Use cases: add the user to some created type, or send a notification when something is updated (lots of similar use cases, I believe).
Any plans to implement this?
| gharchive/issue | 2017-05-17T06:38:23 | 2025-04-01T06:44:22.651153 | {
"authors": [
"altschuler",
"yezarela"
],
"repo": "graphcool/graphcool",
"url": "https://github.com/graphcool/graphcool/issues/204",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
646689866 | Avoid Areas by type/boundary (boundary=protected_area)
Hi,
it would be amazing to have a feature to avoid areas by type - for example OSM boundaries like "residential" or "protected_area".
I tried to make it work this way on the graphhopper dev demo server:
areas:
  stadt:
    features:
      properties: { landuse: forest }
priority:
  area_stadt: {1}
but failed with this error message:
"Error response: cannot process input"
So if there were any way to get this feature working (or is there already a way to do this???) I guess it would be a great help for custom routing profiles!!!
Thanks and cheers!
Very interesting topic!
Is there currently any wip related to this @karussell ?
I saw ORS did something similar with green and (maybe) quiet weighting options (https://giscience.github.io/openrouteservice/documentation/routing-options/Routing-Options.html). But I think their implementation is a bit unspecific; using the landuse tag directly would be much more precise for different use cases.
just had to think about it again...
is there anything new on that topic?
What certainly works is a) extracting the OSM boundaries you are interested in yourself and b) using these areas (with the correct format) for the areas section in a custom model.
But obviously it would be a lot more comfortable if our OSM importing code extracted the boundaries automatically.
To detect residential areas you can also make use of the urban density encoded value: #2637
@pcace, @FaFre I just built a quick prototype that parses the landuse tag and adds a corresponding encoded value: https://github.com/graphhopper/graphhopper/pull/2765
Does this seem helpful for your use-case?
| gharchive/issue | 2020-06-27T14:13:33 | 2025-04-01T06:44:22.662223 | {
"authors": [
"FaFre",
"easbar",
"pcace"
],
"repo": "graphhopper/graphhopper",
"url": "https://github.com/graphhopper/graphhopper/issues/2072",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
153805628 | Flexible Mode not allowed for Non-Ch
Something I found is that if we do not enable CH, we do not allow the flexible mode.
So the server has CH disabled, you query a flexible route, and you get an exception. I have CH disabled for my local development environment but turned on when hosting my application. Therefore I have to check whether I am querying the local environment or the remote server.
I vote for not being that strict in that case. What do you think?
You mean you have CH disabled and use 'force flex mode'=true then you get an exception? Indeed this should not be the case and work exactly the same like if CH is disabled and without the parameter.
Essentially I locally changed that method in the CHDecorator:
public final boolean isForcingFlexibleModeAllowed()
{
    return forcingFlexibleModeAllowed || !isEnabled();
}
From a logical standpoint. It makes sense that forcing the flexible mode is allowed when ch is disabled, even though it is not needed in this case.
@karussell Sorry, haven't seen your reply. Yes. We do not check it in the CHDecorator Constructor:
https://github.com/graphhopper/graphhopper/blob/master/core/src/main/java/com/graphhopper/routing/ch/CHAlgoFactoryDecorator.java#L103
Okay. I stumbled over similar issues in my local setup with this init code when I tried other stuff. So I'll have a look if I already fixed it there. I have also renamed this setting to routing.ch.disable (and routing.ch.disabling_allowed in the config) making it better aligned when more algos come into the game.
Let me know if this now works for you!
Thanks @karussell. I will let you know if there is an issue.
| gharchive/issue | 2016-05-09T15:38:31 | 2025-04-01T06:44:22.666603 | {
"authors": [
"boldtrn",
"karussell"
],
"repo": "graphhopper/graphhopper",
"url": "https://github.com/graphhopper/graphhopper/issues/717",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
809558930 | Interactive custom model text box
This is work in progress to realize a text input box that allows entering a custom model with error highlighting and auto-completion. The goal here is to provide a very flat learning curve for the custom model language. The custom model is written in YAML that can contain boolean expression strings that refer to encoded values like road_class == PRIMARY etc.
Since the encoded values are unknown to most users, they are provided as auto-completion suggestions. The auto-completion can be triggered by pressing Ctrl+Space. Note that the auto-completion suggestions depend on the cursor position. If the cursor is positioned within some word, the suggestions are filtered by the characters between the beginning of the word and the cursor.
|road_class + Ctrl+Space yields surface, road_class, road_class_link, toll, ..., but ro|ad_class + Ctrl+Space only yields road_class, road_class_link, ... (all values starting with ro)! Is this what most people would expect?
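The position-dependent filtering described above boils down to a simple prefix match; a sketch in Groovy (the real editor is JavaScript, and the value names are taken from the example above):

```groovy
// Filter completion candidates by the text between the start of the
// current word and the cursor ('' means the cursor is at the word start,
// so every candidate matches).
def suggest(List<String> encodedValues, String prefixBeforeCursor) {
    encodedValues.findAll { it.startsWith(prefixBeforeCursor) }
}

assert suggest(['surface', 'road_class', 'road_class_link', 'toll'], 'ro') ==
       ['road_class', 'road_class_link']
```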
Usage: see the README.md file in the textbox folder
Currently the boolean expressions are colored in blue, errors in boolean expressions are red and errors in yaml format are indicated by red underlines.
todo (roughly ordered by priority):
[x] calculate error ranges also for YAML errors related to null values
[x] allow multiple 'else if' (there is a minor bug that prevents this currently)
[x] move YAML error messages inside text editor (tooltip or maybe error indicator at line number). for a tooltip maybe take a look here: https://github.com/codemirror/CodeMirror/blob/master/addon/tern/tern.js#L595
[ ] support boolean and numeric encoded values
[ ] support boolean literals true, false
[ ] integrate with GH maps
[ ] simplify error messages (now that they are shown at their origin) ?
[ ] maybe use tooltips to provide some documentation, e.g. for speed/priority/distance_influence
[ ] read-only view for json (to be able to copy&paste the model to request)
[ ] choose some colors, a theme and styles (so far they were just chosen for easy debugging)
[ ] colored frame of the editor on valid/invalid input?
[ ] support area encoded values
[ ] support negation operator !(?)
[ ] auto-complete for YAML (?)
[ ] check syntax for areas section (geojson) (?)
[ ] maybe use code mirror bracket matching
[ ] make it easy to translate error messages? but first make them as simple as possible.
[ ] check code mirror options to see if we need anything besides the defaults, maybe customize the 'mode'?
[ ] maybe trigger autocomplete automatically without pressing Ctrl+Space? At least in certain cases?
[ ] handle YAML multiline syntax like >+ and |... https://stackoverflow.com/a/21699210
[ ] dark theme and VIM keybindings (just kidding...)
@karussell I found this code in main-template.js:
https://github.com/graphhopper/graphhopper/blob/16a325382dc783352c07393317b8737f2d68f7c9/web-bundle/src/main/resources/com/graphhopper/maps/js/main-template.js#L176-L183
It looks like it takes a custom model from the url parameters? Is this an actual working feature or can it be removed? So far I thought it would be nice to be able to store custom models inside the url, but it was not possible...
Thanks for this incredible work :)
Is this an actual working feature or can it be removed?
For development it should work but we can also remove it.
For development it should work but we can also remove it.
Yes it does. One can use the load_custom parameter to point to an external yaml file that will be copied to the custom model box :+1: Speaking of hidden features: pressing Ctrl+Enter while the cursor is in the box updates the current route, which is also quite useful. The only annoying thing can be the automatic zoom-to-route in this case.
I now integrated the new text box in GH maps and both features are still available (but flex=true is no longer necessary for the first one).
The CI builds fail, because I am including the custom model editor as local npm module like this:
"dependencies": {
"d3": "5.11.0",
"flatpickr": "4.4.6",
"custom-model-editor": "file:src/main/js/custom-model-editor",
This way a symlink pointing to the given path will be created in node_modules. Apparently on travis, appveyor and github packages this symlink cannot be resolved so the build fails. Not sure what to do here :(
I now integrated the new text box in GH maps
Nice :+1:
This way a symlink pointing to the given path will be created in node_modules.
Instead of the package.json why can't we do a relative require statement like
Instead of the package.json why can't we do a relative require statement like
Yes, good idea, this works :+1: It's not ideal, because this way we have to run npm install && npm run build from the custom-model-editor directory separately, but at least this can be done via the Maven frontend plugin. See my last commit.
In the last commit I added a button that switches the editor to a read-only json view
I'd like to disable it as long as the yaml input is invalid. WDYT is this useful? Can/should we change these three 'buttons' from anchors to buttons? Or how can we make it appear disabled? Maybe use strikethrough, or even remove it entirely?
Btw I also disable the search button as long as the input is invalid now, does this make sense?
One thing I stumbled over: before I was able to do the usual npm run watch, I had to either build the jar via Maven or run cd src/main/js/custom-model-editor/ && npm install && npm run build and restart the GH server.
In the last commit I added a button that switches the editor to a read-only json view
Nice idea!
And should I do a reimport of our customizable routing server in its current state? (and rewrite the examples&screenshots?)
btw: I just picked a random route and thought something is really broken as it did not avoid MOTORWAY at all. No matter what I did. The problem was that although in the map it looked like a motorway it was a TRUNK (I have a strange luck ;) ) ... maybe we should send details:[road_class] ... or better ALL used encoded values :D ?
And previously it worked when I included details: [road_class] in the yaml but we are now too strict :)
(also a bit ugly that the elevation widget currently only works if elevation is enabled :) )
Can/should we change these three 'buttons' from anchors to buttons?
text is fine IMO. Only a minor style adaption so that it matches the sidebar footer? (greenish text without underscore)
One thing I stumbled over: before I was able to do the usual npm run watch, I had to either build the jar via Maven or run cd src/main/js/custom-model-editor/ && npm install && npm run build and restart the GH server.
Since we now include custom-model-editor via require(...) only instead of package.json we have to build it separately yes. You can also do npm run watch in the custom-model-editor folder to get live updates.
And should I do a reimport of our customizable routing server in its current state? (and rewrite the examples&screenshots?)
Sure! We cannot use areas yet though.
... maybe we should additionally send details:[road_class] ... or better ALL used encoded values :D ?
You mean we should show all encoded values when we hover edges in the mvt view? Yes!
And previously it worked when I included details: [road_class] in the yaml but we are now too strict :)
What do you mean? What are details in yaml?
(also a bit ugly that the elevation widget currently only works if elevation is enabled :) )
Yes, since it is also showing path details this feature should not depend on elevation...
text is fine IMO. Only a minor style adaption so that it matches the sidebar footer? (greenish text without underscore)
And for disabling: gray it out?
Ok will try.
What do you mean? What are details in yaml?
Since we (currently) have properties like speed at the same JSON root level as the other request parameters, we would request path details via details:[...]. But as the plan is to change this (in #2232) it is probably not worth a thought :)
Since we now include custom-model-editor via require(...) only instead of package.json we have to build it separately yes. You can also do npm run watch in the custom-model-editor folder to get live updates.
We would need to have this documented or include the editor JS files like we include other JS files?
You mean we should show all encoded values when we hover edges in the mvt view? Yes!
I didn't mean this but it would be nice :)
What I meant is that we collect all encoded values used in the script and include them in the query as path details. This way we can easily see e.g. the road_class for the route and how it changes if we change the number.
Since we (currently) have properties like speed at the same JSON root level as the other request parameters, we would request path details via details:[...]. But as the plan is to change this (in #2232) it is probably not worth a thought :)
Ah ok. Btw so far you can use ctrl+enter to send the request regardless of the validation. Not sure if this should be a 'feature' :) So far the validation check is just not implemented in this case...
We would need to have this documented or include the editor JS files like we include other JS files?
Ok sure it should be documented. We cannot include editor files like other files because the editor uses an additional transpilation step.
What I meant is that we collect all encoded values used in the script and include them in the query as path details. This way we can easily see e.g. the road_class for the route and how it changes if we change e.g. the multiply-by number. (if the elevation widget is shown)
Ok yes that makes sense. It should not be too hard to return the used encoded values from the expression parser.
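Returning the used encoded values could look roughly like the following — a much-simplified sketch that scans the condition strings for known names instead of walking a real parse tree (the KNOWN_ENCODED_VALUES set is made up for the example):

```python
import re

# Hypothetical subset of the encoded values the server knows about.
KNOWN_ENCODED_VALUES = {"road_class", "road_environment", "surface", "max_speed"}

def used_encoded_values(conditions):
    """Collect the encoded values referenced by a custom model's conditions."""
    used = set()
    for condition in conditions:
        for token in re.findall(r"[A-Za-z_][A-Za-z0-9_]*", condition):
            if token in KNOWN_ENCODED_VALUES:
                used.add(token)
    return sorted(used)

model = ["road_class == MOTORWAY", "surface == GRAVEL || max_speed < 50"]
print(used_encoded_values(model))  # ['max_speed', 'road_class', 'surface']
```

The result could then be appended to the request's details parameter so the returned path details match whatever the model actually uses.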
It's still not working with IE11, I get a SCRIPT5021 error that is due to a character range \u{D800}-\u{DFFF} used in the yaml parser. Apparently it is supposed to support IE11 though: https://github.com/eemeli/yaml/pull/73. Maybe I am not correctly making use of its 'browser build' yet.
| gharchive/pull-request | 2021-02-16T18:43:54 | 2025-04-01T06:44:22.691572 | {
"authors": [
"easbar",
"karussell"
],
"repo": "graphhopper/graphhopper",
"url": "https://github.com/graphhopper/graphhopper/pull/2239",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1693715115 | [Feature] Generate basic config.toml file via graphman
Description
It would be great if the graphman command could generate a basic config file from a template. This way every newcomer could easily get a config.toml created and just change the capitalized placeholder values inside it to whatever works for them.
Are you aware of any blockers that must be resolved before implementing this feature? If so, which? Link to any relevant GitHub issues.
No response
Some information to help us out
[ ] Tick this box if you plan on implementing this feature yourself.
[X] I have searched the issue tracker to make sure this issue is not a duplicate.
@YassinEldeeb proposes that we could use the @maoueh example from https://github.com/streamingfast/graph-node-dev/blob/master/config/eth-mainnet-rpc.toml
What's the advantage here in generating it via graphman as opposed to having some example .toml's in some public repo?
Closing in favour of examples & documentation
| gharchive/issue | 2023-05-03T09:33:30 | 2025-04-01T06:44:22.717985 | {
"authors": [
"azf20",
"neysofu",
"rotarur"
],
"repo": "graphprotocol/graph-node",
"url": "https://github.com/graphprotocol/graph-node/issues/4592",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
716881711 | Consider empty list when constructing tsvectors
Should resolve #1937
Oh yeah I don't really know SQL so I hope that stuff works... testing it out on my local setup rn :crossed_fingers:
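For readers without the diff in front of them, the general pitfall behind this PR title can be illustrated as follows — a hand-written sketch of SQL string construction, not the actual graph-node code, and the column names are invented:

```python
def tsvector_expr(columns, config="english"):
    """Build a to_tsvector SQL expression, guarding against an empty column
    list: joining zero parts with '||' would yield an empty, invalid expression."""
    if not columns:
        return "''::tsvector"
    parts = [f"to_tsvector('{config}', coalesce({col}, ''))" for col in columns]
    return " || ".join(parts)

print(tsvector_expr(["title", "body"]))
print(tsvector_expr([]))  # ''::tsvector instead of an invalid empty expression
```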
| gharchive/pull-request | 2020-10-07T22:04:22 | 2025-04-01T06:44:22.719013 | {
"authors": [
"cag"
],
"repo": "graphprotocol/graph-node",
"url": "https://github.com/graphprotocol/graph-node/pull/1938",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
201979168 | Show deprecation warnings in hint typeahead
This adds additional rendering in the hint typeahead to indicate when a hinted field is deprecated. The styling matches that used in the info hover tooltips.
As mentioned in #273
lg2m
| gharchive/pull-request | 2017-01-19T21:22:22 | 2025-04-01T06:44:22.730133 | {
"authors": [
"leebyron",
"wincent"
],
"repo": "graphql/graphiql",
"url": "https://github.com/graphql/graphiql/pull/282",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1529937810 | fix netlify build
add project reference for cm6 to graphql LS
Thank you for jumping in trying to fix this ❤️
@thomasheyenbrock happy to help! did you see the google doc invite? I'll need someone's help for us to be able publish again
@thomasheyenbrock would you be able to merge this one for me?
Sorry I only saw this now 😅 too late 🙈
| gharchive/pull-request | 2023-01-12T01:10:48 | 2025-04-01T06:44:22.731670 | {
"authors": [
"acao",
"thomasheyenbrock"
],
"repo": "graphql/graphiql",
"url": "https://github.com/graphql/graphiql/pull/2992",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1551810242 | Version Packages
This PR was opened by the Changesets release GitHub action. When you're ready to do a release, you can merge this and the packages will be published to npm automatically. If you're not ready to do a release yet, that's fine, whenever you add more changesets to main, this PR will be updated.
Releases
cm6-graphql@0.0.3
Patch Changes
#2995 5f276c41 Thanks @imolorhe! - fix(cm6-graphql): Fix query token used as field name
codemirror-graphql@2.0.4
Patch Changes
#2993 bdc966cb Thanks @B2o5T! - add unicorn/consistent-destructuring rule
Updated dependencies [e68cb8bc, f788e65a, bdc966cb]:
graphql-language-service@5.1.2
graphiql@2.3.1
Patch Changes
#2995 5f276c41 Thanks @imolorhe! - fix(cm6-graphql): Fix query token used as field name
Updated dependencies [e68cb8bc, f788e65a, bdc966cb]:
graphql-language-service@5.1.2
@graphiql/react@0.16.1
@graphiql/plugin-explorer@0.1.14
Patch Changes
Updated dependencies [bdc966cb]:
@graphiql/react@0.16.1
@graphiql/react@0.16.1
Patch Changes
#2993 bdc966cb Thanks @B2o5T! - add unicorn/consistent-destructuring rule
Updated dependencies [e68cb8bc, f788e65a, bdc966cb]:
graphql-language-service@5.1.2
codemirror-graphql@2.0.4
graphql-language-service@5.1.2
Patch Changes
#2986 e68cb8bc Thanks @bboure! - Fix JSON schema for custom scalars validation
#2917 f788e65a Thanks @woodensail! - Fix infinite recursiveness in getVariablesJSONSchema when the schema contains types that reference themselves
#2993 bdc966cb Thanks @B2o5T! - add unicorn/consistent-destructuring rule
graphql-language-service-cli@3.3.16
Patch Changes
Updated dependencies [e68cb8bc, f788e65a, bdc966cb]:
graphql-language-service@5.1.2
graphql-language-service-server@2.9.6
graphql-language-service-server@2.9.6
Patch Changes
#2993 bdc966cb Thanks @B2o5T! - add unicorn/consistent-destructuring rule
Updated dependencies [e68cb8bc, f788e65a, bdc966cb]:
graphql-language-service@5.1.2
monaco-graphql@1.1.8
Patch Changes
#2993 bdc966cb Thanks @B2o5T! - add unicorn/consistent-destructuring rule
Updated dependencies [e68cb8bc, f788e65a, bdc966cb]:
graphql-language-service@5.1.2
vscode-graphql@0.8.6
Patch Changes
Updated dependencies [bdc966cb]:
graphql-language-service-server@2.9.6
vscode-graphql-execution@0.1.8
Patch Changes
#2993 bdc966cb Thanks @B2o5T! - add unicorn/consistent-destructuring rule
@thomasheyenbrock in order for this release to work, we need someone from graphql TSC who has npm permissions to grant access to the graphiql scope to my npm user once again, which is why the npm token we use on publish shows my name
| gharchive/pull-request | 2023-01-21T14:07:41 | 2025-04-01T06:44:22.756237 | {
"authors": [
"acao"
],
"repo": "graphql/graphiql",
"url": "https://github.com/graphql/graphiql/pull/3001",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
482254943 | docs: document our release process
From discussion with @acao we'd like a steady release process. This RELEASE_PROCESS.md documents how the release process works, using lerna for much of the automation.
Lerna automatically handles publishing packages in the right order (as can be seen with the npx lerna changed --long --toposort command).
Lerna will prompt for the version bump for each package:
I've configured lerna to publish to the @next tag on npm by default; however I've not tested this.
I've made lerna forbid versioning on any branch but master.
In future we should look into using semantic versioning to automatically decide if each package should be a major/minor/patch release; but for now I think we can handle this manually.
Note: at this time 2.1.0 of various things is already out (e.g. https://unpkg.com/browse/graphql-language-service-server@2.1.0/) though master still states 2.0.1 - these seem to have been released from within this PR: https://github.com/graphql/graphiql/pull/941
I have an issue for adding semantic-release, and a PR adding it, commitizen, and cz-conventional-changelog that I was going to push after manually compiling all the release info.
Changed @next references to @rc to reflect the decisions in the WG.
I'm working on a few more commits here @Neitsch to add conventional commits/changelog/release/etc
Now this is what I was looking for... :)))
Now just - commitlint or commitizen?
Here's the generated output from my PR against this PR
https://github.com/graphql/graphiql/pull/951/files#diff-cb2abf73448a6338d7f7f250233df271R9
I have it commitlint-ing as an after-effect of writing your commit message. It's less intrusive, but it also means that if you carefully draft a commit message and it doesn't adhere to the guidelines, it will possibly be lost the first few times until you get used to it. So possibly a prompt is better? I don't know if I like commitlint's prompt as well as I like commitizen's. Le sigh.
@Neitsch if we try this, merge a bugfix to master with fix(graphiql): example fix message, merge that PR and then checkout master, rebase from master and then publish that from master locally, we will see it truly in action
@benjie @Neitsch are we good to merge this? its been hangin with no approvals
I think the release process should be in a different document as most contributors won't need to worry about that (better to keep CONTRIBUTING short so it's less intimidating and people are more likely to read it).
Other than that it all looks good :+1: (though I feel like husky should be opt-in rather than opt-out; I'm always uncomfortable adding anything that modifies user's data to OSS projects)
good advice, we can implement those things in another branch. another way to do this without local hooks is to do the commitlint in CI
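For reference, the core of what commitlint enforces is a conventional-commits header check, which can be sketched like this (a simplified illustration — real setups run commitlint with a shareable config such as @commitlint/config-conventional rather than a hand-rolled regex):

```python
import re

# Simplified conventional-commits header pattern: type, optional (scope),
# optional "!", then ": " and a subject.
HEADER = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)"
    r"(\([\w-]+\))?!?: .+"
)

def is_conventional(message):
    """Check only the first line (the header) of a commit message."""
    first_line = message.splitlines()[0] if message else ""
    return bool(HEADER.match(first_line))

print(is_conventional("fix(graphiql): example fix message"))  # True
print(is_conventional("fixed a bug"))                         # False
```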
I think we should definitely do commitlint in CI as well; preferably using something like @danger
Well, we have to ensure that every commit message is correct. We can use danger too, but we need to be able to fail a PR until its git commit messages are in line.
| gharchive/pull-request | 2019-08-19T11:24:15 | 2025-04-01T06:44:22.768308 | {
"authors": [
"acao",
"benjie"
],
"repo": "graphql/graphiql",
"url": "https://github.com/graphql/graphiql/pull/948",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
48165975 | Fix error page
https://app.getsentry.com/gratipay/gratipay-com/group/49625026/ (was https://app.getsentry.com/gratipay/gratipay-com/group/39866655/)
I infer from https://github.com/gratipay/gratipay.com/commit/ac9077437ed59a485105edd92d3c7fd83b9518ed that the problem here was that i18n wasn't configured before an 403 was triggered in get_auth_from_request, ya?
@whit537 Correct. error.spt extends templates/base.html which uses i18n functions.
k thx
| gharchive/issue | 2014-11-08T13:34:20 | 2025-04-01T06:44:22.793505 | {
"authors": [
"Changaco",
"whit537"
],
"repo": "gratipay/gratipay.com",
"url": "https://github.com/gratipay/gratipay.com/issues/2913",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
737866423 | Retrieving plan with operation ID does not always work
Description
What happened:
Given multiple operations available in the backend, gravity plan [display] --operation-id=<id> might not display the correct operation.
What you expected to happen:
If an operation is selected with an ID, gravity plan correctly displays its details.
How to reproduce it (as minimally and precisely as possible):
Start an operation after the installation (e.g. runtime environment update - does not have to run automatically, it suffices that the operation and the plan are created).
Query the install operation with its ID.
Environment
Gravity version [e.g. 7.0.11]: 7.0.23
Also seen on Gravity version 7.0.30
Is master affected?
No, but I'll backport.
| gharchive/issue | 2020-11-06T15:54:32 | 2025-04-01T06:44:22.797407 | {
"authors": [
"a-palchikov",
"helgi",
"tokiwong"
],
"repo": "gravitational/gravity",
"url": "https://github.com/gravitational/gravity/issues/2309",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2331156004 | 🛑 API is down
In c8d6d35, API (https://api.greencent.io) was down:
HTTP code: 404
Response time: 617 ms
Resolved: API is back up in 439adc8 after 12 minutes.
| gharchive/issue | 2024-06-03T13:24:46 | 2025-04-01T06:44:22.993045 | {
"authors": [
"vicmosin"
],
"repo": "greencent/upptime",
"url": "https://github.com/greencent/upptime/issues/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
245726628 | Data error, OUP only 239 journals?
In your figure 6 I've just spotted that you appear to state that OUP (by which I presume you mean Oxford University Press), has only 239 journals?
I have extensively looked into every journal OUP has earlier this year, for DOI-based analyses [1]. From that I'd say OUP has approximately 394 journals. Sure, some of them are new, or newly-transferred to OUP e.g. Gigascience, and the Linnean Society journals but even then this is significantly different to the figure you indicate. Could you please investigate this? Only a tangential issue hopefully but... needs investigating.
https://github.com/rossmounce/Checking-OUP-DOIs
PS "OUP" for Oxford University Press, but not CUP for Cambridge University Press? Might be worth standardising across the publishers in the figures as to whether you use acronyms or not. My preference would be for not, unless it's really necessary - IEEE might necessitate but one could use the full form for the others.
@rossmounce thanks for the report. We get journal information from Scopus, which we process in dhimmel/journalmetrics. Let's assume that OUP has published 394 journals (including both active and inactive journals).
First, we can check our Scopus catalog for the number of journals that have OUP listed as their publisher.
curl --silent --location \
https://github.com/dhimmel/journalmetrics/raw/209a5fd38396632f08705ab66ed8e3b182d11991/data/title-attributes.tsv \
| grep --perl-regexp --regexp="\tOUP\t" --count
The result is 264, so 130 OUP journals are not attributed to OUP (or do not exist) in Scopus.
The journals that recently were acquired by OUP do not have their publisher updated in Scopus yet. GigaScience (Scopus ID 21100420802) was attributed to "Springer Nature". Biological Journal of the Linnean Society (29771) was attributed to Wiley-Blackwell.
So the next question is how did we go from 264 Scopus journals with OUP as publisher to 239 in our publisher table. We map between Scopus journals and Crossref DOIs using ISSNs. So either of these issues may occur:
Crossref DOIs deposited by certain OUP journals may be missing ISSN metadata
Scopus doesn't contain the ISSN information for those OUP journals.
@rossmounce if you really want to know the specifics, I can spend the time to further diagnose the issues. Just let me know.
Might be worth standardising across the publishers in the figures as to whether you use acronyms or not.
Publisher names come from Scopus. There isn't an easy way to get synonyms or convert to full names. In general, I avoid patching upstream data quality issues. When possible, I report upstream data issues via public forums (like GitHub Issues, not available for Scopus).
There is one more recent Scopus update that I could try to include in dhimmel/journalmetrics... that may fix some problems... or make them worse. Another option would be switching to a Crossref derived journal catalog, which is now possible since https://github.com/CrossRef/rest-api-doc/issues/179 was resolved. But that's lots of work and will inevitably run into problems of its own.
I'm not entirely against a patch... if someone wanted to contribute a mapping of "Scopus publisher name" to "most-readable name", we'd likely accept that contribution. Just I'm not going to spend time patching these Scopus issues myself.
@rossmounce how often do journals change publishers? I'm guessing Scopus attempts to stay up to date with the current publisher. Crossref DOI metadata may not get updated if the publisher changes. Or it may.
How often do journals change publishers? A good question! Not one I could comprehensively answer I'm afraid. I would estimate it is less than 1% per year.
I will investigate the Scopus data myself. I guess the majority of the discrepancy is simply that Scopus doesn't care to index those "missing" OUP journals - it is 'selective' in what it indexes. Which is something interesting in itself, hence why I will investigate further... 😄
This also suggests that it would be good to point out somewhere in the methods section that the Scopus catalog of journals is deeply imperfect/incomplete, in that there are many many journals which have valid CrossRef DOIs and valid ISSNs, which are not in the Scopus catalog. This OUP discrepancy is one surfacing of that issue.
As for the names / acronyms issue. I get you're trying to faithfully represent the source data, but you want the manuscript to be accessible to a wide readership, beyond just the publishing-savvy crowd, right? I wasn't suggesting you make the changes from acronyms to full names in the data, just in the relevant manuscript figure: Figure 6. There's just 9 unexplained acronyms: APS, ACS, IEEE, IOP, RSC, AIP, OUP, BMJ, APA, so it surely wouldn't be much to change?
I will investigate the Scopus data myself.
Interested to learn what you find.
This also suggests that it would be good to point out somewhere in the methods section that the Scopus catalog of journals is deeply imperfect/incomplete, in that there are many many journals which have valid CrossRef DOIs and valid ISSNs, which are not in the Scopus catalog
Good point. Will do. Note that 70% of articles mapped to a Scopus journal (57,074,208 / 81,609,016).
There's just 9 unexplained acronyms: APS, ACS, IEEE, IOP, RSC, AIP, OUP, BMJ, APA, so it surely wouldn't be much to change?
Good point. Figure 6 would really benefit from the abbreviated names.
I wasn't suggesting you make the changes from acronyms to full names in the data
I'm actually leaning towards this implementation (changing in dhimmel/journalmetrics). It would be bothersome if the names in Figure 6 differed from the online publisher table.
@rossmounce I updated the publisher names and manuscript based on your feedback. Thanks!
Note that Oxford University Press is now listed as having 240 journals (compared to 239 before). This is because Scopus previously used the full name rather than OUP for a single journal in our analysis.
Great. It might take me a while to investigate the coverage of OUP journals in Scopus but I'll let you know when I do get to the bottom of it.
If you want to close this issue and my other one, please do.
If you want to close this issue and my other one, please do.
Closing this issue, but leaving https://github.com/greenelab/scihub-manuscript/issues/14 open until we address those issues!
| gharchive/issue | 2017-07-26T13:43:29 | 2025-04-01T06:44:23.006557 | {
"authors": [
"dhimmel",
"rossmounce"
],
"repo": "greenelab/scihub-manuscript",
"url": "https://github.com/greenelab/scihub-manuscript/issues/15",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
222096279 | Various fixes
Bug fix in simulator where it was possible to move enemy player
Cities no longer regenerate when they have no owner
#10 fixed, only send game update once on turn 0
Thank you for your contribution, I've just merged your commits manually into master to avoid adding those "Merge remote-tracking branch 'upstream/master'" that were included in this pull request.
Thanks, I should get rid of those merge commits.
| gharchive/pull-request | 2017-04-17T09:12:52 | 2025-04-01T06:44:23.009823 | {
"authors": [
"greenjoe",
"nosslin579"
],
"repo": "greenjoe/sergeants",
"url": "https://github.com/greenjoe/sergeants/pull/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
292930821 | Fix S3 abortable stream warning
The S3 client has an annoying tendency to log lots of warnings when not all bytes are read from an input stream before it is closed. It does this because S3 is already sending the rest of the data, so the connection must be aborted and cannot be reused in a connection pool. Sometimes this is legitimately a performance hit, but other times consuming the remainder of a multi-megabyte stream is going to be more expensive than re-establishing a connection.
This library should take a page from Hadoop and wrap the S3 input stream with additional closing logic. See the following for references:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/internal/S3AbortableInputStream.html
https://issues.apache.org/jira/browse/HADOOP-14596
Fixed by always draining the input stream in 0cc8710f9606075e66b9d14cc43caf718c496538.
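The fix — always draining the input stream — amounts to wrapping the stream so that close() first consumes any unread bytes, letting the connection end cleanly. A language-agnostic sketch (illustrative only: the actual blocks-s3 change is Clojure over the AWS SDK, where S3ObjectInputStream.abort() remains the alternative when draining a large remainder would cost more than a new connection):

```python
import io

class DrainingStream(io.RawIOBase):
    """Wrap a stream and consume any unread bytes on close, so the producer
    finishes sending cleanly instead of the connection being aborted."""

    def __init__(self, inner, chunk_size=8192):
        self.inner = inner
        self.chunk_size = chunk_size
        self.drained = 0  # bytes consumed during close, for illustration

    def read(self, size=-1):
        return self.inner.read(size)

    def close(self):
        if not self.closed:
            # Consume whatever is still pending before closing the source.
            while True:
                chunk = self.inner.read(self.chunk_size)
                if not chunk:
                    break
                self.drained += len(chunk)
            self.inner.close()
        super().close()

src = io.BytesIO(b"x" * 100_000)
with DrainingStream(src) as s:
    head = s.read(10)  # the caller only wants the first few bytes
print(len(head), s.drained)  # 10 99990
```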
| gharchive/issue | 2018-01-30T20:44:13 | 2025-04-01T06:44:23.061225 | {
"authors": [
"greglook"
],
"repo": "greglook/blocks-s3",
"url": "https://github.com/greglook/blocks-s3/issues/2",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
473876679 | about scope
Hey, looks like typo in about_scope.py:73
def test_incrementing_with_local_counter(self):
global counter
start = counter
self.increment_using_local_counter(start) # should be `counter` instead of `start`
self.assertEqual(False, counter == start + 1)
Yes, in both cases the answer is False. But logically, start should keep the initial value, and we should then compare it with the modified counter.
Well... I'm going to make an admission. I wrote this a long time ago. I have no recollection of writing this koan at all. But... looking back at it with fresh eyes, local vs global here is super confusing. I can see my intent was to show the difference in scope, but this doesn't demonstrate it well at all, because the tests for local vs global counters are almost identical!
So I'll throw down the gauntlet. If someone wants to make a cool contribution to Python Koans, this is a great place to start!
I would like to work on this issue.
| gharchive/issue | 2019-07-29T06:41:14 | 2025-04-01T06:44:23.063814 | {
"authors": [
"gordinmitya",
"gregmalcolm",
"ria-19"
],
"repo": "gregmalcolm/python_koans",
"url": "https://github.com/gregmalcolm/python_koans/issues/197",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
348890430 | Is it possible to hide the title and select checkbox?
This is an exceptional implementation. Is it possible to make this table simpler and not have the title or title bar, as well as the select checkboxes in the first column?
Title/titlebar not as of today.
Selectable rows just pass in the following options:
const options = {
selectableRows: false
};
What is the best solution ?
add an option "viewToolbar", type: boolean, default: true, to see or not the MUIDataTableToolbar component. ( not compatible with option customToolbar )
add an option "topBar", type: TopBarOptions | false. I imagine something like:
interface TopBarOptions {
title?: boolean,
toolbar?: ToolBar | boolean
}
interface ToolBar {
filter?: boolean
search?: boolean
print?: boolean
download?: boolean
viewColumns?: boolean
}
options.topBar = false => MUIDataTableToolbar component not rendered
options.topBar = { toolbar: false } => We show the title but not the toolbar.
options.topBar = { title: false } => We show the toolbar but not the title.
It's about to have the full control on what you want to render or not.
That is the idea, I hope you understand what I mean.
The previous solution can be mixed with options.customToolBar. These are all options about the same thing, and I think it could be helpful to group everything about the toolbar under a sub-object of options.
@benjaminbours Good API design. As this library starts to grow in features the options page is starting to become messy. I want to take this a step further because this change you're proposing is a breaking change. When we do make this breaking change I want to have it apply to each section.
right now we have
customFooter,
customToolbar
customToolbarSelect
Then there are various options tied to some of these items that could be folded under and make the first glance of the API smaller but then a user would need to drill down. If you're interested in creating a concept PR where you focus on nothing but README/API design of converting these I would be interested in seeing how it looks. I don't want it to be a pain to figure out how to configure this library but I also realize the options needs to be more organized
I'm thinking about making the following change:
const options = {
toolbarSelect: fn
footer:
function(count, page, rowsPerPage, changeRowsPerPage, changePage) => string|React Component
toolBar: obj | fn,
// option #1
toolBar: (props) => {
return (<section id="toolbar"><h1>some title</h1><div>some custom toolbar</div></section>)
},
// option #2
toolBar: {
title: "custom title",
options: {
filter: {
display: true
icon: customIcon
},
search: {
display: false
},
print: {
display: true
},
download: {
display: true
},
viewColumns: {
display: true
}
}
}
@tonymckendry Is there a way to keep the toolbar but remove only the title bar section? Also, any idea how to change the font color in the toolbar?
@kedpak I can't say for sure without having it in front of me, but you would most certainly use an override like I did above; instead of root there is probably another subclass in MUIDataTableToolbar that you could target to hide the title. As for changing the font color, I would try adding color: <yourColor> to the same override I used above.
It seems the options have changed now. If you need to remove only the "print" icon, then write:
const options = {
print: false
}
This works for me.
Removing the toolbar section is now possible just by removing any options that might appear there, as well as passing an empty title. There is an example here: https://github.com/gregnb/mui-datatables/blob/master/examples/simple-no-toolbar/index.js.
Removing the checkboxes can be done with selectableRows: false or selectableRows: 'none'. You can also set exactly which rows are selectable with isRowSelectable.
As I believe these options now satisfy the original issue, I will close.
It doesn't say so in the docs; it says the options are 'multiple', 'single' or 'none', and 'none' is not working. Passing the boolean value false does work.
The title/titlebar can't be removed as of today.
Selectable rows just pass in the following options:
const options = {
selectableRows: false
};
Does not work with remote data, the column checkbox disappears but the checkboxes on the rows still appear
selectableRows: 'none' seems to work
@AlexFenoul , selectableRows: 'none' works perfectly. Thanks.
Do you have an idea how to fix the issue where the columns are many and the table looks distorted? I have tried using responsive: 'stacked' but it does not work. Look at the table below:
I don't really know, did you try to use responsive: 'scroll', i never had this problem 🤐
Yes I tried that too but failed, I wonder if there is a way to fix this issue, it seems the table has a fixed width
@Ainnop It looks like you are overriding things in a way that is getting you into trouble. If you look at the custom-action-columns example, you'll see how the table looks with many columns, some of which are used for buttons.
@gabrielliwerant I used the same example to come up with that, but when I add just 2 or more columns to the custom-action-columns example, the table still looks distorted. I tried using responsive: 'scroll' but it only scrolls vertically; I thought it would scroll the table horizontally and auto-resize the columns, but none of what I expected worked.
import React from 'react';
import MUIDataTable from 'mui-datatables';
class Activities extends React.Component{
constructor(props) {
super(props);
this.state = {
data: [
["Gabby George", "Business Analyst", "Minneapolis", 30, "$100,000", 30, "$100,000"],
["Aiden Lloyd", "Business Consultant", "Dallas", 55, "$200,000", 30, "$100,000"],
["Jaden Collins", "Attorney", "Santa Ana", 27, "$500,000", 30, "$100,000"],
["Franky Rees", "Business Analyst", "St. Petersburg", 22, "$50,000", 30, "$100,000"],
["Aaren Rose", "Business Consultant", "Toledo", 28, "$75,000", 30, "$100,000"],
["Blake Duncan", "Business Management Analyst", "San Diego", 65, "$94,000", 30, "$100,000"],
["Frankie Parry", "Agency Legal Counsel", "Jacksonville", 71, "$210,000", 30, "$100,000"],
["Lane Wilson", "Commercial Specialist", "Omaha", 19, "$65,000", 30, "$100,000"],
["Robin Duncan", "Business Analyst", "Los Angeles", 20, "$77,000", 30, "$100,000"],
["Mel Brooks", "Business Consultant", "Oklahoma City", 37, "$135,000", 30, "$100,000"],
["Harper White", "Attorney", "Pittsburgh", 52, "$420,000", 30, "$100,000"],
["Kris Humphrey", "Agency Legal Counsel", "Laredo", 30, "$150,000", 30, "$100,000"],
["Frankie Long", "Industrial Analyst", "Austin", 31, "$170,000", 30, "$100,000"],
["Brynn Robbins", "Business Analyst", "Norfolk", 22, "$90,000", 30, "$100,000"],
["Justice Mann", "Business Consultant", "Chicago", 24, "$133,000", 30, "$100,000"],
["Addison Navarro", "Business Management Analyst", "New York", 50, "$295,000", 30, "$100,000"],
["Jesse Welch", "Agency Legal Counsel", "Seattle", 28, "$200,000", 30, "$100,000"],
["Eli Mejia", "Commercial Specialist", "Long Beach", 65, "$400,000", 30, "$100,000"],
["Gene Leblanc", "Industrial Analyst", "Hartford", 34, "$110,000", 30, "$100,000"],
["Danny Leon", "Computer Scientist", "Newark", 60, "$220,000", 30, "$100,000"],
["Lane Lee", "Corporate Counselor", "Cincinnati", 52, "$180,000", 30, "$100,000"],
["Jesse Hall", "Business Analyst", "Baltimore", 44, "$99,000", 30, "$100,000"],
["Danni Hudson", "Agency Legal Counsel", "Tampa", 37, "$90,000", 30, "$100,000"],
["Terry Macdonald", "Commercial Specialist", "Miami", 39, "$140,000", 30, "$100,000"],
["Justice Mccarthy", "Attorney", "Tucson", 26, "$330,000", 30, "$100,000"],
["Silver Carey", "Computer Scientist", "Memphis", 47, "$250,000" , 30, "$100,000"],
["Franky Miles", "Industrial Analyst", "Buffalo", 49, "$190,000", 30, "$100,000"],
["Glen Nixon", "Corporate Counselor", "Arlington", 44, "$80,000", 30, "$100,000"],
["Gabby Strickland", "Business Process Consultant", "Scottsdale", 26, "$45,000", 30, "$100,000"],
["Mason Ray", "Computer Scientist", "San Francisco", 39, "$142,000", 30, "$100,000"]
]
};
}
render() {
const columns = [
{
name: "Delete",
options: {
filter: true,
sort: false,
empty: true,
customBodyRender: (value, tableMeta, updateValue) => {
return (
<button onClick={() => {
const { data } = this.state;
data.shift();
this.setState({ data });
}}>
Delete
</button>
);
}
}
},
{
name: "Edit",
options: {
filter: true,
sort: false,
empty: true,
customBodyRender: (value, tableMeta, updateValue) => {
return (
<button onClick={() => window.alert(`Clicked "Edit" for row ${tableMeta.rowIndex}`)}>
Edit
</button>
);
}
}
},
{
name: "Name",
options: {
filter: true,
}
},
{
label: "Modified Title Label",
name: "Title",
options: {
filter: true,
}
},
{
name: "Location",
options: {
filter: false,
}
},
{
name: "Age",
options: {
filter: true,
}
},
{
name: "Salary",
options: {
filter: true,
sort: false,
}
},
{
name: "Days",
options: {
filter: true,
}
},
{
name: "Debt",
options: {
filter: true,
sort: false,
}
},
{
name: "Add",
options: {
filter: true,
sort: false,
empty: true,
customBodyRender: (value, tableMeta, updateValue) => {
return (
<button onClick={() => {
const { data } = this.state;
data.unshift(["Mason Ray", "Computer Scientist", "San Francisco", 39, "$142,000"]);
this.setState({ data });
}}>
Add
</button>
);
}
}
},
];
const data1 = [
{Name: "Gabby George", Title: "Business Analyst", Location: "Minneapolis", Age: 30, Salary: "$100,000"},
{Name: "Aiden Lloyd", Title: "Business Consultant", Location: "Dallas", Age: 55, Salary: "$200,000"},
{Name: "Jaden Collins", Title: "Attorney", Location: "Santa Ana", Age: 27, Salary: "$500,000"},
{Name: "Franky Rees", Title: "Business Analyst", Location: "St. Petersburg", Age: 22, Salary: "$50,000"},
{Name: "Aaren Rose", Title: "Business Consultant", Location: "Toledo", Age: 28, Salary: "$75,000"},
{Name: "Blake Duncan", Title: "Business Management Analyst", Location: "San Diego", Age: 65, Salary: "$94,000"},
{Name: "Frankie Parry", Title: "Agency Legal Counsel", Location: "Jacksonville", Age: 71, Salary: "$210,000"},
{Name: "Lane Wilson", Title: "Commercial Specialist", Location: "Omaha", Age: 19, Salary: "$65,000"},
{Name: "Robin Duncan", Title: "Business Analyst", Location: "Los Angeles", Age: 20, Salary: "$77,000"},
{Name: "Mel Brooks", Title: "Business Consultant", Location: "Oklahoma City", Age: 37, Salary: "$135,000"},
{Name: "Harper White", Title: "Attorney", Location: "Pittsburgh", Age: 52, Salary: "$420,000"},
{Name: "Kris Humphrey", Title: "Agency Legal Counsel", Location: "Laredo", Age: 30, Salary: "$150,000"},
{Name: "Frankie Long", Title: "Industrial Analyst", Location: "Austin", Age: 31, Salary: "$170,000"},
{Name: "Brynn Robbins", Title: "Business Analyst", Location: "Norfolk", Age: 22, Salary: "$90,000"},
{Name: "Justice Mann", Title: "Business Consultant", Location: "Chicago", Age: 24, Salary: "$133,000"},
{Name: "Addison Navarro", Title: "Business Management Analyst", Location: "New York", Age: 50, Salary: "$295,000"},
{Name: "Jesse Welch", Title: "Agency Legal Counsel", Location: "Seattle", Age: 28, Salary: "$200,000"},
{Name: "Eli Mejia", Title: "Commercial Specialist", Location: "Long Beach", Age: 65, Salary: "$400,000"},
{Name: "Gene Leblanc", Title: "Industrial Analyst", Location: "Hartford", Age: 34, Salary: "$110,000"},
{Name: "Danny Leon", Title: "Computer Scientist", Location: "Newark", Age: 60, Salary: "$220,000"},
{Name: "Lane Lee", Title: "Corporate Counselor", Location: "Cincinnati", Age: 52, Salary: "$180,000"},
{Name: "Jesse Hall", Title: "Business Analyst", Location: "Baltimore", Age: 44, Salary: "$99,000"},
{Name: "Danni Hudson", Title: "Agency Legal Counsel", Location: "Tampa", Age: 37, Salary: "$90,000"},
{Name: "Terry Macdonald", Title: "Commercial Specialist", Location: "Miami", Age: 39, Salary: "$140,000"},
{Name: "Justice Mccarthy", Title: "Attorney", Location: "Tucson", Age: 26, Salary: "$330,000"},
{Name: "Silver Carey", Title: "Computer Scientist", Location: "Memphis", Age: 47, Salary: "$250,000" },
{Name: "Franky Miles", Title: "Industrial Analyst", Location: "Buffalo", Age: 49, Salary: "$190,000"},
{Name: "Glen Nixon", Title: "Corporate Counselor", Location: "Arlington", Age: 44, Salary: "$80,000"},
{Name: "Gabby Strickland", Title: "Business Process Consultant", Location: "Scottsdale", Age: 26, Salary: "$45,000"},
{Name: "Mason Ray", Title: "Computer Scientist", Location: "San Francisco", Age: 39, Salary: "$142,000"}
];
const options = {
filter: true,
filterType: 'dropdown',
responsive: 'scroll',
page: 2,
onColumnSortChange: (changedColumn, direction) => console.log('changedColumn: ', changedColumn, 'direction: ', direction),
onChangeRowsPerPage: numberOfRows => console.log('numberOfRows: ', numberOfRows),
onChangePage: currentPage => console.log('currentPage: ', currentPage)
};
return (
<MUIDataTable title={"ACME Employee list"} data={this.state.data} columns={columns} options={options} />
);
}
}
export default Activities;
@Ainnop So, first order of business, can you create a new issue for us to track the discussion? I think it's misplaced here.
Secondly, can you add your example to https://codesandbox.io, so I can tell we're looking at the same thing, including version of libraries being used? I'm unable to verify the issue based on your code, so it could be a version issue, but I have no way to tell at present.
@Ainnop Try lowering the padding on the column cells:
import { createMuiTheme } from '@material-ui/core/styles';

const theme = createMuiTheme({
palette: {
/* ... */
},
overrides: {
MUIDataTableBodyCell: {
root: {
padding: "0em 2em",
}
},
MUIDataTableHeadCell: {
root: {
padding: "0em 2em",
color: '#EE7D11'
}
},
},
});
How to hide the entire tool bar?
@sajish-sasidharan See the simple-no-toolbar example for a way to do that.
Source: GitHub issue gregnb/mui-datatables, https://github.com/gregnb/mui-datatables/issues/126 (created 2018-08-08, MIT license)
676548941 | onCellContextClick()
It would be nice to have a right-click handler on a table cell.
Expected Behavior
When user do right click on a table cell, a callback function should be provided in options
This can be accomplished using the setCellProps feature (note that setCellProps takes a function that returns the props object):
var columns = [
  {
    name: "Column 1",
    options: {
      setCellProps: () => ({
        onContextMenu: (evt) => {
          alert("right click!");
        },
      }),
    },
  },
];
Source: GitHub issue gregnb/mui-datatables, https://github.com/gregnb/mui-datatables/issues/1466 (created 2020-08-11, MIT license)
297936305 | Card comment reactions
Need to see how this is manifested in the API. The feature was just recently released into the wild.
I have added a reaction to the primary card in the sandbox.
Looks like the data is coming through the API. Below is a comment action with reaction data.
{
"id": "5a72b7ea8ea559321a1eaa51",
"idMemberCreator": "50b693ad6f122b4310000a3c",
"data": {
"list": {"name": "List", "id": "51478f6469fd3d9341001daf"},
"board": {
"shortLink": "VHHdzCU0",
"name": "Sandbox",
"id": "51478f6469fd3d9341001dae"
},
"card": {
"shortLink": "3rm0AZg5",
"idShort": 1344,
"name": "Card",
"id": "5a72b7ab3711a44643c5ed49"
},
"text": "a comment"
},
"type": "commentCard",
"date": "2018-02-01T06:47:06.892Z",
"limits": {
"reactions": {
"perAction": {"status": "ok", "disableAt": 950, "warnAt": 900},
"uniquePerAction": {"status": "ok", "disableAt": 17, "warnAt": 16}
}
},
"display": {
"translationKey": "action_comment_on_card",
"entities": {
"contextOn": {
"type": "translatable",
"translationKey": "action_on",
"hideIfContext": true,
"idContext": "5a72b7ab3711a44643c5ed49"
},
"card": {
"type": "card",
"hideIfContext": true,
"id": "5a72b7ab3711a44643c5ed49",
"shortLink": "3rm0AZg5",
"text": "Card"
},
"comment": {"type": "comment", "text": "a comment"},
"memberCreator": {
"type": "member",
"id": "50b693ad6f122b4310000a3c",
"username": "gregsdennis",
"text": "Greg Dennis"
}
}
},
"memberCreator": {
"id": "50b693ad6f122b4310000a3c",
"fullName": "Greg Dennis",
"initials": "GSD",
"memberType": "normal",
"username": "gregsdennis",
"avatarHash": "cfd323494c6c01459001e53c35e88e41",
"bio": "Currently, I am a software developer at THL working on rebuilding our core web software from the ground up. I am active on StackOverflow, and I own several open-source projects which can be found on GitHub.",
"bioData": {"emoji": {}},
"confirmed": true,
"products": [10],
"url": "https://trello.com/gregsdennis",
"idPremOrgsAdmin": ["5ae69621a194f176af0097c0"]
},
"reactions": [
{
"id": "5a8756bd6b7854c3958c8b25",
"idMember": "50b693ad6f122b4310000a3c",
"idModel": "5a72b7ea8ea559321a1eaa51",
"member": {
"id": "50b693ad6f122b4310000a3c",
"avatarHash": "cfd323494c6c01459001e53c35e88e41",
"avatarUrl": "https://trello-avatars.s3.amazonaws.com/cfd323494c6c01459001e53c35e88e41",
"fullName": "Greg Dennis",
"initials": "GSD",
"username": "gregsdennis"
},
"emoji": {
"unified": "1F44D",
"native": "👍",
"name": "THUMBS UP SIGN",
"skinVariation": null,
"shortName": "+1"
}
}
]
}
Creating a reaction is:
POST https://trello.com/1/actions/5439fca358689d656a6bb161/reactions HTTP/1.1
with body
{
"idModel": "5439fca358689d656a6bb161",
"idMember": "50b693ad6f122b4310000a3c",
"idEmoji": "1F440",
"emoji": {
"id": "eyes",
"name": "Eyes",
"colons": ":eyes:",
"emoticons": [],
"unified": "1f440",
"skin": null,
"native": "👀"
},
"token": "50b693ad6f122b4310000a3c/Xu4ce0GIS5PoZLAAi7Fqu7tMDbNOk48LtnBKFuisnxOQFlMcUtOV938MQFHRgOJt",
"unified": "1F440"
}
There are 12 options from the UI for reaction emojis.
+1
-1
raised_hands
pray
eyes
rocket
joy
blush
heart_eyes
trophy
heart
broken_heart
Questions:
Will the API support custom emoji?
What emoji data is required when POSTing? Can I just send idEmoji or unified and Trello figures out the rest?
What is the emoticons array? Why isn't it returned?
See branch feature/136-comment-reactions
According to Trello, emoji and reactions are currently undergoing some changes. This feature will be put on hold until it's stabilized at their end.
https://trello.com/c/DxNSkpXU/32-add-reactions-rest-api
Trello updates are out!
Hi; is there currently a way to do this (or something similar)?
I'm looking for a way to mark a comment as ‘processed’ and adding a reaction to it seems like a way more sensible thing to do than (e.g.) prepending its text with something like "–– DONE ––\n".
Or can some property in ActionData be used for the same purpose?
Thank you.
No I haven't been able to work on this, but I may prioritize it now that there is some interest
Holy crap! They return 1528 different emojis! I think I'm going to have to code-gen that...
See branch feature/136-emoji-comment-reactions
Version 3.7.0 will be released soon to support this feature.
Source: GitHub issue gregsdennis/Manatee.Trello, https://github.com/gregsdennis/Manatee.Trello/issues/136 (created 2018-02-16, MIT license)
331379521 | Improve account security - update active key before doing anything else
Initially, all accounts have the same owner and active keys, which is not safe.
Owner key should not be used for voting or for anything else, other than changing active keys, otherwise if owner key is compromised there is no way to recover.
I suggest the eos-voter app add a new feature that generates a new active key and updates the user account if the owner and active keys are the same, and stores only the active key inside the wallet, never the owner key.
Updating a user's active key is done with:
cleos set account permission <accountname> active '{"threshold": 1, "keys": [{"key": "<active public key>", "weight": 1}]}' owner
Good call - managing permissions and splitting those keys would be a great feature to add.
yes please!
+1 this is really needed
We're starting to really research how we should present this, along with all the other powerful permission features EOS has to offer. Planning for the 0.5.x release at this point.
We have the ability to do so now in 0.5.0, and are planning to rework the welcome process once again to add this and a number of other "best practices" steps into the flow.
Source: GitHub issue greymass/eos-voter, https://github.com/greymass/eos-voter/issues/48 (created 2018-06-11, MIT license)
1427695293 | Suggestion: keep track of utility of assigning all units each treatment outside of "level_one_learning"
Hi All,
Thanks for writing such a fantastic and well-documented package.
I've been working with the package for the past few weeks, and I think I might have come across a potential speed improvement which could let users build even deeper trees than before.
Specifically, instead of creating an NxPxD array of cumulative treatment effects in the level_one_learning function, couldn't the algorithm keep a running tally of the utility from assigning each unit to every treatment (a D-length vector) and use this to calculate the utility + best action on each side of a potential split?
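If I'm reading the proposal right, each candidate split along a pre-sorted axis can then be scored in O(D) work by moving one observation at a time from the right side to the left. Here is a minimal Rust sketch of that inner loop; the function and variable names are mine (not from either package), and it ignores tied X values for brevity:

```rust
// Sketch: find the best split point along one pre-sorted axis by
// maintaining running per-treatment reward sums on each side.
// `rewards[i][t]` is the reward of assigning observation i treatment t;
// `order` lists observation indices sorted by the axis being searched.
fn best_split(rewards: &[Vec<f64>], order: &[usize]) -> (usize, f64) {
    if order.len() < 2 {
        return (0, f64::NEG_INFINITY);
    }
    let d = rewards[0].len();

    // Start with every observation on the right-hand side.
    let mut right = vec![0.0_f64; d];
    for &i in order {
        for t in 0..d {
            right[t] += rewards[i][t];
        }
    }
    let mut left = vec![0.0_f64; d];

    let mut best = (0_usize, f64::NEG_INFINITY);
    // Move observations left one at a time; each candidate split costs O(D).
    for (k, &i) in order.iter().enumerate().take(order.len() - 1) {
        for t in 0..d {
            left[t] += rewards[i][t];
            right[t] -= rewards[i][t];
        }
        // Utility of a split = best single action on the left
        // plus best single action on the right.
        let value = left.iter().cloned().fold(f64::NEG_INFINITY, f64::max)
            + right.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
        if value > best.1 {
            best = (k + 1, value);
        }
    }
    best
}
```

The returned index is the number of observations sent left at the best split. Note this is only a sketch of the idea; policytree additionally has to avoid splitting between observations that share the same X value.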
I've used a flamegraph to try to diagnose where the algorithm is spending time (see below), and it seems like a lot of the algorithm's time is spent allocating and de-allocating these arrays. I'm not a C++ developer, so I can't be sure that I am reading the code / charts correctly, but a decent amount of time also seems to be spent maintaining the binary trees, which look like they could be converted to vectors after they have been sorted, as it looks like they are only being accessed by index after creation (but I'm less sure about this).
I don't know C++ so I can't file a PR to tweak this, but I do know Rust, so I've quickly put together both of these changes in a Rust-backed R package (here). The package seems to perform well compared to the existing search algorithm as the number of data points increases (see plot below), and it also parallelizes the search across multiple cores, which increases the speed by another order of magnitude.
(I originally called the package "exhaustivetree," but I changed the name shortly after making the graph.)
As you can see, the change in scaling from these tweaks is quite dramatic, and it makes it feasible to run quick policytree searches on larger datasets with more covariates than before. I was delighted to find out that I can now run a search for a depth-2 tree on a dataset with 250,000 observations and 55 (categorical) covariates in 15 seconds, which is a large improvement over the current policytree time (which took too long for me to record).
I hope you'll be able to consider these changes / look into them more, as it seems like they offer a large increase in speed. I'd be very happy to answer any questions you have, or to run any tests you would like on the code I have written to verify that the speedups are indeed due to the tweaks I made, rather than any Rust / C++ difference.
Best,
Ben
Hi @beniaminogreen, thanks a lot, this looks very cool and interesting! I don't have time to look closely until a bit later next month, but in the meantime, could I please ask if you get the same result here if you plug in parallel_policy_tree?
Hi @beniaminogreen, thanks for looking into this; your addition seems very promising. In addition to what @erikcs requests above, could you also add
a comparison with your rust implementation on only one core
explain in pseudocode your approach; I am unable to understand it right now, for the question of the utility tracking
wrt the trees, we do add/delete in the main loop, so I do not understand what exactly you mean by access by index. You could explain that too in pseudocode.
I think we should definitely incorporate changes which offer gains of this magnitude.
Hi @erikcs and @kanodiaayush, thanks for getting back to me so quickly!
Please find below a screenshot of the tests @erikcs asked me to run. Not sure how to read these statistics, but the outputs look similar to the ones in the script you sent.
I should also add that I have been unit-testing the treatment assignments produced by parallel_policy_tree against those produced by the policy_tree function (see here), to make sure that the two methods always produce the same treatment assignments. The tests show that the two methods coincide for tree depths up to 3 (beyond which I don't test, but I assume the assignments should still be identical).
@kanodiaayush, please find below a comparison of parallel_policy_tree and policytree when running on a single core. The times are likely different between the two charts, as I lost the benchmarking code I used for the first graph (the code for this run can be found here).
I couldn't get to writing the algorithm in pseudocode this weekend, but I'll be able to do it in the next few days, and I'll leave it here as soon as I do.
Best,
Ben
Thanks Ben, that looks great. This is very impressive work. I'm trying to understand where the gains are coming from; could you please run that benchmark plot for dense Xj, i.e., line 11: X = matrix(rnorm(n*p), n, p)?
Sure thing! Here it is. I tried to optimize performance for discrete / binned features as I think that's the modal case, but it looks like it performs very well even in the case of completely continuous features.
Sorry for getting you to run this, but I only have my phone at the moment. Could you please run that same script but overlay a 3rd line with the runtime for just a depth-0 policytree? Thanks!
Sure, here it is:
This isn't what I expected to see, so I ran another flamegraph to try and diagnose where the policytree algorithm is spending time when creating a depth-0 tree for a data set of 50K units.
It seems that for depth-0 trees in datasets of this size, most of the time is spent creating the sorted sets. Once that is done, it looks like the remaining search does not take much time at all. Maybe the differences in speed are coming from a difference in the time it takes to create the initial data structures?
Thanks, yeah that is why I wanted to see the depth-0 timing; I had a hunch a big speed difference arose from the sorted sets being created faster in Rust. (I think algorithmically your level-1 learning should have the same amortized runtime as policytree's; the amount of work it does should be the same.)
What data structures are you using to create the sorted sets in Rust? I think (at least part of) your incredible speed improvement may be a system-level improvement in that particular data structure 🤔
Definitely looks to be the case - good catch! I am using a BTreeMap to create the sorted sets (which are converted to vectors once they have been populated with all the observations).
For each axis, I create an empty BTreeMap with X values as keys, vectors of observation indices as entries. So a dataset that looks like this:
| Observation ID | X1 |
| --- | --- |
| 1 | 0.0 |
| 2 | 2.0 |
| 3 | 0.0 |
| 4 | 2.5 |
Would be stored as the following key/value pairs:
| Key | Value |
| --- | --- |
| 0.0 | vec![1,3] |
| 2.0 | vec![2] |
| 2.5 | vec![4] |
I populate the B-TreeMap by iterating over each observation, and either adding a key-value pair if the key has not been seen before, or appending the value of the vector to the value associated with the key if the key has been seen before. In the example above, it would look like:
Instantiate the BTreeMap
Check if (0.0) is a key in the Map. It isn't, so insert the key-value pair (0.0, vec![1]) into the map.
Check if (2.0) is a key in the Map. It isn't, so insert the key-value pair (2.0, vec![2]) into the map.
Check if (0.0) is a key in the Map. It is, so insert the index 3 into the vector associated with this key.
Check if (2.5) is a key in the Map. It isn't, so insert the key-value pair (2.5, vec![4]) into the map.
Convert the BTreeMap to a vector of ObservationBundles, which store the key and vector of indecies for each entry in one struct
Some of the speed improvement may also have to do with the fact that I am storing indexes of observations in my trees, while I believe you are storing entire observations, which would make the trees slower to populate / clone.
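The bundling construction described above can be sketched in a few lines. The types here are illustrative only — `ObservationBundle` is my guess at the shape, and I use integer keys because `f64` is not `Ord` in Rust, so the real package presumably wraps floats in an ordered type:

```rust
use std::collections::BTreeMap;

// One bundle per distinct value of a covariate: the value plus the
// indices of every observation sharing it.
#[derive(Debug, PartialEq)]
struct ObservationBundle {
    value: u64,          // the binned/discrete covariate value (the key)
    indices: Vec<usize>, // 0-based indices of observations with this value
}

fn bundle_axis(x: &[u64]) -> Vec<ObservationBundle> {
    let mut map: BTreeMap<u64, Vec<usize>> = BTreeMap::new();
    for (i, &v) in x.iter().enumerate() {
        // entry() either inserts a fresh vector or appends to the existing one.
        map.entry(v).or_insert_with(Vec::new).push(i);
    }
    // BTreeMap iterates in sorted key order, so the result is already sorted.
    map.into_iter()
        .map(|(value, indices)| ObservationBundle { value, indices })
        .collect()
}
```

For the worked example above (values 0.0, 2.0, 0.0, 2.5, binned here to the integers 0, 2, 0, 3), this yields the bundles (0, [0, 2]), (2, [1]), (3, [3]), using 0-based indices instead of the 1-based observation IDs in the table.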
Great, thanks! Could you please run similar benchmarks for a depth-2 tree on three X-configurations: {all binary, a handful of discrete values, dense (i.e. runif(n*p))}? (policytree may be too slow for too large n on the last one)
Also, if possible (I don't have my laptop now so can't do this), could you repeat this + the depth-1 timings from above for a modified policytree where you replace L18 here with #include <set> and change the first part of L22 from boost::container::flat_set to std::set?
Thanks again!
Sure - here are the benchmarks for depth-2 trees on the three datasets you describe.
For the un-changed policytree:
For the policytree with the changes you described:
Thanks! I'd like a larger number of observations to see what's going on, could you do up to n = 100k, at least for the top plot? It could take a few hours, but I guess you have an external server? (also p = 2 and d = 2 is fine). Thanks again!
I've been running these on my laptop, but I can put them on my server. I don't know how long they will take, but I'll set them off and report back when they have completed. The only server I can run benchmarks on has 4 cores, so it could be a few days.
Actually, you could leave out policytree from the benchmarks; we have roughly an idea of how long that takes if you include an R file with benchmark details. What would be interesting to see is the Rust policytree (1 core) timing as a function of n up to 100k or more for the different X settings.
How many categories are there in the 250k depth 2 tree you mention in your first post?
Sounds good - I've set the code running again with just the Rust implementation going. Will likely still take a few days.
The 250k observation run was based on a dataset which actually had 45 columns, not 55 as I reported. The dimensions and number of unique values in each column are below, but the columns with 141 and 173 breaks were not included.
If they are included, the training time goes up to about 50 seconds on our 40-core server. There is still a lot of room to improve this time, as I parallelize by having each core only search over trees that start by splitting on a specific variable. This means that cores assigned variables with fewer break points finish much earlier than others, and have to sit around waiting for the stragglers to finish. My bet is that a better parallelization scheme could bring this time down to ~25 seconds.
These came back a lot quicker than I anticipated! The binary and categorical runs both finished in about 15ms each for a dataset of n=100,000, while the dense dataset took about an hour and 5 mins to run. The benchmarking code I used can be found here
Interesting, policytree takes around 15 sec for the first two and 11 minutes for the last.
That is interesting. This definitely reflects my focus on the categorical / binary case in coding my version.
I bundle all the observations with the same value of a predictor together so that they can be quickly added / removed from a node as a group. If no observations share the same value of a predictor, then all the bundling code will have a significant speed cost. My guess is that this explains the discrepancy in performance between the categorical and dense cases.
In the amortized sorting operation, I imagine I could limit some of the speed losses by assessing whether a variable is dense or not, and then deciding on an appropriate representation. That might allow me to glean the advantages of grouping while not losing performance in the dense case.
Yes, we should consider handling repeated Xj in policytree, your package makes it clear it's very worthwhile. I'm not sure when I will get time to have a closer look though.
And I looked at your Rust code; for the sorted sets it reminds me of an optional splitting rule GRF used to have. For N unique samples you are essentially maintaining an NxP array of the global sort order. When moving samples L/R you can update indexes in O(1) time since you have an N-length index array, but creating a copy/new one takes O(N) time, where N is the global number of samples; in policytree, creating a new one takes O(n) time, where n is the number of samples in the recursive call, and insertion takes O(log n). Which of these terms dominates will depend on the number of distinct values (and it will likely have to be large; e.g. num.samples=5k is too small to notice these differences). If you experiment in benchmarking (varying "denseness" and N) I think you will find a ~threshold where the two runtimes cross. For "full" dense Xj your runtime when doubling num.samples is roughly 8 times larger (last plot from 50k to 100k); for policytree that same ratio will be ~4 instead of 8.
Thanks for spelling that out for me - I thought that the array approach would always dominate the sorted sets implementation in policytree, but now I see that isn't the case. This will definitely be something for me to think about if I keep working on my package.
Hopefully there will be a way for me to get the best of both worlds - the speed that arrays give for small / medium sizes of data with the scaling that your sorted sets implementation has for large datasets.
Yes, you can probably work out empirical N/n heuristics to switch between implementations during runtime if you really want to keep tinkering! (In these things there are practically always areas you can squeeze out more perf by identifying special cases). One small thing you could try is make your active array a hash table instead, that might make a small dent in speed if Rust has fast ~collision free integer hashing. Another thing to try could be to make the most out of SIMD, I don't know about Rust, but if you for example had d-rewards in an Eigen (C++) vector, then it should be able to compute the sum in less instructions than in a plain std::vector.
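On the hash-table idea, a sketch of what an active set keyed by observation index might look like (an assumption on my part about the data layout — and using Rust's default hasher; a faster integer hasher such as FxHash could be swapped in):

```rust
use std::collections::HashSet;

// Track which observation indices are currently in a node. A HashSet
// makes membership tests and moves O(1) expected, and cloning one for a
// recursive call costs O(n) in the node size rather than O(N) globally.
fn split_active(
    active: &HashSet<usize>,
    left_ids: &[usize],
) -> (HashSet<usize>, HashSet<usize>) {
    // Keep only the candidates that are actually in this node.
    let left: HashSet<usize> = left_ids
        .iter()
        .copied()
        .filter(|i| active.contains(i))
        .collect();
    // Everything else in the node goes right.
    let right: HashSet<usize> = active.difference(&left).copied().collect();
    (left, right)
}
```

This is only one way to do it; whether it beats a contiguous boolean array will depend on node sizes and cache behavior, which is exactly the kind of empirical N/n question mentioned above.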
How about we add a link to your package in the README highlighting it for people who have large data with few distinct values?
Thanks for the suggestion - I've just made a branch with the hash set approach you describe, and I'll benchmark it to see which is faster. I went with the array approach originally so that the active array would be stored in a contiguous memory block, but I doubt that will offset the speed from the large array copy each time I want to split along a new axis. I'll benchmark both ways and pick the quicker one.
I'll also look into the SIMD approach, but I don't think Rust has particularly mature support for SIMD stuff from what I've been able to google.
An inclusion in the README sounds fantastic to me, thanks for the shout out! I can provide a little table like the one below for my algorithm if you'd like.
Great, maybe you could put some timings on your GH README for reference? (considering I guess you'll keep making changes you'll always have the latest timings in your repo, we don't have bandwidth to make any big changes to policytree in the near future).
Sounds good to me - I'll run them in the next few days, then put them in the README. Should also add some more documentation for how the parallel_policy_tree function is used.
Thanks for your help,
Ben
a) When things are getting more polished, we could also add an option to policytree that would dispatch to your package depending on input type? I.e. something like:
if solver == "auto" {
let v = compute some heuristic based on N and the number of unique values
// (an empirical question that probably requires some trial and error!)
if v > some threshold {
if "parallel_policy_tree" is installed {
tree = parallel_policy_tree::parallel_policy_tree(...)
} else {
print("Consider installing parallel_policy_tree at ...")
}
//etc
}
b) On algorithmic details, your ObservationBundles are the right idea, but I can imagine you got from the timings you ran in this thread that this loop is something you'd want to avoid. We could help you out if you want to improve this even further? You could probably write up what you did and send it to JOSS as a software paper - Ayush and I can help you along the way. If you are applying for PhD programs maybe that would give you a boost!
So, for an optimal version of BundledObservations and SortedSets, I caught up with @kanodiaayush: an asymptotically unbeatable version of policytree for duplicated x-values would zap the O(N) term by maintaining an index data structure which tells you which ObservationBundle each sample belongs to for each dimension. Then you could use this to move bundles left to right like policytree, and iterate over only the non-empty bundles in the recursive calls.
A short pseudo example could be
sets: P-vector of vectors of ObservationBundles.
SampleToBundle: N x P array telling you which bundle each sample belongs to.
for p in 1 to P:
let right_sets = sets.copy()
let left_sets = sets.new_empty() // each bundle set to have no samples.
for each observation_bundle in right_sets[p]:
sample_collection = right_sets[p](observation_bundle).pop_all()
left_sets[p](observation_bundle).add(sample_collection)
for each dimension pp != p:
right_sets[pp].pop_from_bundle(sample_collection) // 1)
left_sets[pp].add_from_bundle(sample_collection)
// do recursive call on left, right sets, etc.
// 1): This can be done quickly since from SampleToBundle we know which bundles we should remove samples from.
(this will probably require some back and forth on details).
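As a rough illustration of the pseudocode above, here is a Python sketch (names like sample_to_bundle are placeholders, and the per-dimension sorted ordering of bundles is glossed over): samples with duplicated x-values are grouped into bundles per dimension, and a sample-to-bundle index lets you move a whole bundle left while removing its samples from the other dimensions' bundles directly, without scanning those dimensions.

```python
from collections import defaultdict

def build_bundles(X):
    """For each dimension p, group sample ids by their (duplicated) x-value,
    and record which bundle (value) each sample belongs to per dimension."""
    n, P = len(X), len(X[0])
    sets = [defaultdict(set) for _ in range(P)]        # sets[p][value] -> sample ids
    sample_to_bundle = [[None] * P for _ in range(n)]  # the SampleToBundle index
    for i, row in enumerate(X):
        for p, v in enumerate(row):
            sets[p][v].add(i)
            sample_to_bundle[i][p] = v
    return sets, sample_to_bundle

def move_bundle(p, value, right_sets, left_sets, sample_to_bundle):
    """Move every sample in bundle `value` of dimension p from right to left,
    updating the other dimensions via the index (no scan over their bundles)."""
    samples = right_sets[p].pop(value)
    left_sets[p][value] |= samples
    for i in samples:
        for pp in range(len(sample_to_bundle[i])):
            if pp == p:
                continue
            b = sample_to_bundle[i][pp]
            right_sets[pp][b].discard(i)
            left_sets[pp][b].add(i)
```

This is only the bookkeeping skeleton; a real version would also keep each dimension's bundles in sorted order so that splits enumerate candidate cut points in value order.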
Both those suggestions sound great to me - I would love to keep optimizing the package, and if there's a paper at the end of the process, that would be fantastic!
Once things get more stable / developed, I can definitely try to construct a heuristic to decide which is more efficient for a given problem. It would be slightly complicated by the fact that parallel_policy_tree is multithreaded, so it will depend on how much compute is available; I work on a 40-core server, so parallel_policy_tree can be up to 40x slower than policytree on a single thread and still return faster if I'm pressed for time and don't care about the resources used.
Great suggestion about the modification to the sorted sets. I had thought of doing something similar earlier, but I shelved it because it seemed like a lot of effort, and I hadn't made the connection that it would improve the asymptotic runtime. I'll start working on it this evening / over the weekend - it would be super to achieve better asymptotic scaling.
I tried implementing your suggestion to keep track of the alive units using hash sets (see here), but it didn't seem to be quicker for small dataset sizes. I'll schedule a more exhaustive test in the next few days, but it might be that the memory contiguity of the arrays offsets the cost of the large copies for datasets of most sizes.
Sounds good! Yes, we should add multithreading to policytree then as well, else it would look weird. One practical consideration to keep in mind is guaranteeing the same output for the same input. Some settings may have many optimal trees, and policytree returns the one that came first while exploring. Users will be confused if threading causes a different optimal tree to be returned.
Similar considerations for very many trees with ~almost numerically the same reward. If Rust / C++ compiler internal numerics cause differences that will break user expectations. An implicit contract we should always try to stick with is: if I write a paper using policytree version a.b.c, then that paper should replicate exactly if I use policytree version a.x.z on an arbitrary computer in the future.
I think that right now, because of how I implement the parallelization, I am guaranteeing the same output for the same input, but I'll keep an eye out as I make changes to ensure that this remains the case. I parallelize the search by assigning each thread to search for the best tree that starts by splitting on a single variable, and then selecting the best from these. Any ties are resolved by looking at the initial variable the tree splits on. It will be important to make sure I maintain this pattern in future versions.
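That reduction can be sketched in Python (illustrative; the real implementation is Rust, and score() is a stand-in for the per-thread tree search): each worker computes the best tree whose first split is on one fixed variable, and the final selection breaks reward ties by the smallest first-split variable index, so the result cannot depend on thread scheduling.

```python
from concurrent.futures import ThreadPoolExecutor

def best_tree_for_first_var(var, score):
    # Stand-in for the per-thread search over trees whose first split is on `var`.
    reward, tree = score(var)
    return reward, var, tree

def parallel_best_tree(n_vars, score, max_workers=4):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(lambda v: best_tree_for_first_var(v, score),
                                range(n_vars)))
    # Deterministic reduction: highest reward wins; ties go to the smallest
    # first-split variable index, independent of thread timing.
    return max(results, key=lambda r: (r[0], -r[1]))
```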
I also agree that Rust vs C++ numerics could be an issue, but I don't think there is much I could do to fix such differences if they do arise. Empirically, I am not sure they are much of a concern, however, as I routinely test parallel_policy_tree against policytree and haven't found any cases in which they return different classifications.
Finally, I have experimented with implementing the tweaks you have suggested, and the results seem mixed (see below). I'm not sure if this is an issue with my implementation (I'm looking for places to speed it up this afternoon), or if this is an accurate representation of how the new strategy will perform.
Part of what I think we are seeing is that the new strategy involves copying n sets, where n is the number of breaks / cut points in the dataset. This means that it would perform best when the datasets are sparse, but this is also where the existing implementation is the quickest, so there isn't much room for improvement. I'll keep plugging away and see if the performance in the sparse case can be improved at all. I think the best thing to do for the dense case might simply be to copy the existing algorithm into Rust (or to just have users use the existing package).
a) sounds good! Different compilers on different OS's sometimes produce different result. Also, different OS users may face different installation hassles, I tried installing the package on stanford's linux HPC and it failed with a Rust linking error. Maybe this is something your GH actions could try and catch with a larger build matrix, + give additional assurance tests pass across different OS's and dependency versions (am not up to date on Rust, but a big reason policytree/grf is stable is that they do not depend on a gazillion other libraries).
b1) great work, but what we meant with "asymptotically optimal" would be to do exactly what policytree (algo1) does, but with "sample_n" replaced by "ObservationBundle_{split_val_n}". Translating those BST insert/erase operations to work on ObservationBundles would require use of "SampleToBundle". That doesn't look like what's being done here? (It would likely be a quite big tinkering project.)
b2) by "asymptotically optimal" we mean: consider algo A (yours) and B (ours). Both solve the same problem, but algo A has runtime O(A) and algo B runtime O(B). Both depend on N = the total number of samples, and n_j = the number of distinct values of feature j. As you saw empirically, A is faster than B until you reach a point P where N and n_j get sufficiently large. What we mean is that the suggestion above to modify A to A' will likely bump up the point P (to P') at which A will be faster than B (not that it will make it best for the dense case, that's B). Note 1: What P and P' look like, and if it's worth the hassle, I don't know. Note 2: A' might very well be slower for the sparsest cases!
PS: I think your package in its current form already seems very useful. The algo suggestion above could be more of a longer-term project if you are interested in continuing to work on your package.
A more actionable near term thing we could do is add the following option to policytree:
policy_tree(
X,
Gamma,
depth = 2,
split.step = 1,
min.node.size = 1,
solver = c("policytree", "parallel_policy_tree"),
num.threads = NULL,
verbose = TRUE
)
where if installed, solver = "parallel_policy_tree" would just call directly into a fit method in that package (which bypasses input checks, we should sync and stick to an API and ensure future compatibility then). That would be easier than a "auto" solver option, at least for now, and you could maintain some benchmarks on your package github that could help inform the user (we also sent you a GH invite if you feel any info would be useful on policytree's page).
Ideally we should add multithreading to policytree too, as having a num.threads arugment that only works with one option looks strange (am not sure when I will get around to that though). On that note, maybe a more informative name could be something like "sparsepolicytree" instead of "parallel_policy_tree"?
Gotcha, I see what you mean about "asymptotically optimal," and I think I understand what you mean about implementing the same strategy as policytree but with observation bundles. I did something similar to this in the original prototype of the package, and I think that it might still be archived in the git history so I'll try and pull it out / bring it back to life for some benchmarking.
Historically, the issue that I had with that strategy was that I had to copy p sorted sets in each recursive call, and this ended up being more expensive than copying the vector of active units. I'll try to see if there's a way to get the best of both worlds in the next few days.
In the nearer term, I do like the idea of creating a fit method with a standard API so policytree can call out to sparsepolicytree (I do agree that's a better name). What would be the next steps here, and do you have any preferences for how the internal API should look? I'll add a split.step option to sparse_policy_tree so it has all the features of the other solver, then perhaps we can work on integrating the two.
For now I think it would already be helpful if just sparse_policy_tree had the same signature as policy_tree, it doesn't matter if your min.node.size doesn't do anything as long as it's documented, then users can just try out your package as a drop-in replacement doing search-and-replace policytree with sparse_policy_tree. Having that workflow work would already go a long way!
Then internals + package integration via a solver argument is something we could come back to later. My plate is too full for this atm, so I can't give you a timeline except the vague academic glacial-pace answer of sometime next year : P. There are some other policytree projects in the tinkering phase related to bandit-problem estimation I'd want to finish first (I have no idea when that will be done). Also, as I mentioned, we should do multithreading as well. Conceptually the implementation is simple: we should execute each {left, right} recursive call asynchronously in a thread pool (with a fixed number of threads). Getting that to work involves fighting with the C++ compiler and is not something I have time for now. It could however be a different threading strategy for you to try out in Rust, if you didn't already.
| gharchive/issue | 2022-10-28T19:02:48 | 2025-04-01T06:44:23.203050 | {
"authors": [
"beniaminogreen",
"erikcs",
"kanodiaayush"
],
"repo": "grf-labs/policytree",
"url": "https://github.com/grf-labs/policytree/issues/146",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
291062315 | Tell User running command if "addnode onetry" was unsuccessful
In the debug console/RPC if you run the command:
addnode onetry
it doesn't tell you anything about the success or failure of the connection
Not exactly sure what to add, but I believe here's where to add the results of the attempted connection:
https://github.com/gridcoin/Gridcoin-Research/blob/master/src/net.cpp#L1914
| gharchive/issue | 2018-01-24T02:23:02 | 2025-04-01T06:44:23.224453 | {
"authors": [
"RoboticMind"
],
"repo": "gridcoin/Gridcoin-Research",
"url": "https://github.com/gridcoin/Gridcoin-Research/issues/867",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
345003250 | Deletion issues
I am having some issues with deleting rows.
Some of the code:
conInfo, err := griddb_go.CreateContainerInfo("Users",
	[][]interface{}{
		{"username", griddb_go.TYPE_STRING},
		{"email", griddb_go.TYPE_STRING},
		{"password", griddb_go.TYPE_BLOB}},
	griddb_go.CONTAINER_COLLECTION,
	true)
if err != nil {
	fmt.Println("Create containerInfo failed")
}
col, err := gridstore.PutContainer(conInfo, true)
if err != nil {
	fmt.Println("put container failed")
}
query, err := col.Query(fmt.Sprintf("SELECT * WHERE username='%s'", username))
if err != nil {
	fmt.Println("create query failed")
}
rs, err := query.Fetch(false)
if err != nil {
	fmt.Println("create rs from query failed")
}
col.SetAutoCommit(false)
if rs.HasNext() {
	rrow, err := rs.NextRow()
	if err != nil {
		fmt.Println("Error retrieving row")
	}
	// deletes user
	col.Remove(rrow[0])
	col.Commit()
} else {
	fmt.Println("error. Has Next Failed")
}
This is hooked up to a webpage and is triggered when a button is pressed. But the issue is, sometimes it deletes, sometimes it doesn't.
Thank you for using GridDB Go Client.
I will check this issue.
Sorry for the late reply.
When we deleted a row 10,000 times, sometimes the Go client ran successfully and sometimes it failed.
So, I updated the typemap for the string/blob field types.
Please try the latest source code.
Thank you for pushing this fix.
Unfortunately after building I am having some issues reading from GridDB. When I try to run put/read from GridDB I get the following error:
/tmp/go-build289726786/b001/exe/griddb: symbol lookup error: /tmp/go-build289726786/b001/exe/griddb: undefined symbol: gsPutContainerGeneralV3_3
exit status 127
This is my go env:
GOARCH="amd64"
GOBIN="/home/imru/go/bin"
GOCACHE="/home/imru/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/imru/go/src:/home/imru/go/src/github.com/griddb/go_client/"
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build908016776=/tmp/go-build -gno-record-gcc-switches"
Here is some sample code I'm trying to run which fails:
package main

import (
	"fmt"

	"github.com/griddb/go_client"
	"golang.org/x/crypto/bcrypt"
)

func main() {
	factory := griddb_go.StoreFactoryGetInstance()
	gridstore := factory.GetStore(map[string]interface{}{
		"cluster_name":        "defaultCluster",
		"username":            "admin",
		"password":            "admin",
		"notification_member": "192.168.1.4:10001,192.168.1.5:10001,192.168.1.6:10001",
	})
	fmt.Println("successfully connected: ", gridstore)

	// Create Collection
	conInfo, err := griddb_go.CreateContainerInfo("Users",
		[][]interface{}{
			{"username", griddb_go.TYPE_STRING},
			{"email", griddb_go.TYPE_STRING},
			{"password", griddb_go.TYPE_BLOB}},
		griddb_go.CONTAINER_COLLECTION,
		true)
	if err != nil {
		fmt.Println("Create containerInfo failed")
	}
	col, err := gridstore.PutContainer(conInfo, false)
	if err != nil {
		fmt.Println("put container failed")
	}
	col.SetAutoCommit(false)

	password := []byte("epic")
	hashedPassword, err := bcrypt.GenerateFromPassword(password, bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}

	row := []interface{}{"test", "test@fixstars.com", hashedPassword}
	err = col.Put(row)
	if err != nil {
		fmt.Println("col.Put panic:", err)
	}
	col.Commit()

	// Create normal query
	query, err := col.Query("select *")
	if err != nil {
		fmt.Println("create query failed")
	}

	// Execute query
	rs, err := query.Fetch(true)
	if err != nil {
		fmt.Println("create rs from query failed")
	}

	for rs.HasNext() {
		// Update row
		rrow, err := rs.NextRow()
		if err != nil {
			fmt.Println("NextRow from rs failed")
		}
		fmt.Println("Person: name=", rrow[0], " status=", rrow[1], " count=", rrow[2])
	}
}
Thank you for your assistance!
I guess you built the Go Client with gridstore.h from C Client V4.0 and
the library libgridstore.so from C Client V3.0.
Could you please use C Client for GridDB V4.0?
Thank you for the continued support.
I've successfully built the go_client with the new C_Client but now I'm getting some strange error. I'm not sure if it's something wrong with my env, but go commands work in other locations. I get this error:
main.go:23:2: no Go files in /home/imru/go/src/src/github.com/griddb/go_client
Could you please set GOPATH to the directory where GridDB Go Client is placed?
ex.) export GOPATH=/home/imru/go
I was able to fix it actually. I had placed the go_client in a directory
within the gopath and it was causing issues for whatever reason. To fix, I
just needed to move the go_client dir outside to a different location.
Thanks for the help.
| gharchive/issue | 2018-07-26T20:58:59 | 2025-04-01T06:44:23.236269 | {
"authors": [
"Imisrael",
"knonomura"
],
"repo": "griddb/go_client",
"url": "https://github.com/griddb/go_client/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
166841551 | More coverage, better task list
I have been reading the SS13 subreddit (https://www.reddit.com/r/SS13/) and this project hasn't been mentioned there since your first open test; I thought giving it some publicity could help the project's progress. Another thing: the issue list is a bit long and might be misleading, since a lot of the issues aren't priority issues. Promoting the milestone issues would be a good idea (using GitHub or another site like https://trello.com/).
https://github.com/griefly/griefly/milestone/5 here is the list of the current milestone issues. It is quite a small list, so I think it is convenient enough.
I am not good at advertising, so I do it only when it is needed (for example, when a beta test needs to be conducted). The current state of Griefly is really boring; all that can be said is "a lot of things still need to be made, and we can show only some primitive gameplay". When 0.4 is ready I may post something on SS13-related sites, but not now. However, Griefly is an open-source project, so anyone can post anything about it anywhere they like.
For the future: do not place more than one issue in one issue-post, split it
Well, I get that, but SS14 and SS13 3D (or something like that) are often mentioned by people, while Griefly was mentioned only once. I'm just saying that the programmers who would/could help this project can't find it as easily as they can SS14.
Well, for some reason people do not talk about Griefly. I do not know what can be done about that. Griefly is not a flashy remake with cool new graphics; it is just plain old SS13, but on a new engine. People tend to value graphics much more than engine things (obviously, because they can easily see visual stuff, whereas you need to be a programmer and spend some time to understand the internal things).
| gharchive/issue | 2016-07-21T15:01:28 | 2025-04-01T06:44:23.244128 | {
"authors": [
"kremius",
"shelltitan"
],
"repo": "griefly/griefly",
"url": "https://github.com/griefly/griefly/issues/286",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
755797884 | Add startTick to duration in NoteOffEvent instead of subtracting it
First off, thanks for writing this library! It's saved me from having to learn and implement the MIDI file spec myself.
I feel like the NoteOffEvent tick property should be startTick + duration, not -startTick + duration. Is that correct or is there something I'm missing?
The patch on this ticket solves the bug illustrated in issue 82: https://github.com/grimmdude/MidiWriterJS/issues/82
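A small Python sketch of the timing arithmetic at stake (illustrative only; these are not MidiWriterJS's actual internals): a note starting at startTick with a given duration should produce its note-off at startTick + duration, whereas -startTick + duration would place note-offs at the wrong ticks and scramble event ordering.

```python
def note_events(notes):
    """notes: list of (start_tick, duration, pitch) -> flat, tick-ordered events."""
    events = []
    for start, duration, pitch in notes:
        events.append((start, "on", pitch))
        events.append((start + duration, "off", pitch))  # not (-start + duration)
    return sorted(events)

# A note from tick 128 lasting 64 ticks switches off at tick 192:
assert note_events([(128, 64, 60)]) == [(128, "on", 60), (192, "off", 60)]
```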
Hey @kjin,
Sorry for the delay responding to this. Thanks for the PR, I think you're right with this. Merging and will release in 2.0.1.
-Garrett
| gharchive/pull-request | 2020-12-03T04:06:18 | 2025-04-01T06:44:23.296991 | {
"authors": [
"grimmdude",
"kjin",
"leegee"
],
"repo": "grimmdude/MidiWriterJS",
"url": "https://github.com/grimmdude/MidiWriterJS/pull/75",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1106390271 | Add Ruby 3.1 to CI matrix
To get CI running I needed to switch to a supported setup-ruby action (ruby/setup-ruby) and remove the trailing '.x' from each of the version strings.
Thanks for your contribution @petergoldstein!
| gharchive/pull-request | 2022-01-18T01:39:46 | 2025-04-01T06:44:23.316807 | {
"authors": [
"grodowski",
"petergoldstein"
],
"repo": "grodowski/undercover",
"url": "https://github.com/grodowski/undercover/pull/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
842873255 | [Master] release/v1.1.2 QA Sign-off
This PR is automatically created by CI to release a new official version of caver-js-ext-kas.
When this PR is approved by QA team, a new version will be released.
caver-js-ext-kas v1.1.2 QA Result
Functional Test Result
KAS-ANCHOR
Total Test Cases 37 = PASS 37 + FAIL 0 + N/A 0
Newly Added Test Cases 0 = PASS 0 + FAIL 0 + N/A 0
KAS-TH
Total Test Cases 138 = PASS 129 + FAIL 0 + N/A 9
Newly Added Test Cases 0 = PASS 0 + FAIL 0 + N/A 0
KAS-KIP17
Total Test Cases 79 = PASS 77 + FAIL 0 + N/A 2
Newly Added Test Cases 0 = PASS 0 + FAIL 0 + N/A 0
KAS-WALLET
Total Test Cases 222 = PASS 222 + FAIL 0 + N/A 0
Newly Added Test Cases 0 = PASS 0 + FAIL 0 + N/A 0
KAS-ETC
Total Test Cases 8 = PASS 8 + FAIL 0 + N/A 0
Newly Added Test Cases 0 = PASS 0 + FAIL 0 + N/A 0
Integration Test Result
KAS-INT
Total Test Cases 258 = PASS 258 + FAIL 0 + N/A 0
Newly Added Test Cases 0 = PASS 0 + FAIL 0 + N/A 0
| gharchive/pull-request | 2021-03-29T00:33:11 | 2025-04-01T06:44:23.358255 | {
"authors": [
"gx-circleci",
"jimni1222"
],
"repo": "ground-x/caver-js-ext-kas",
"url": "https://github.com/ground-x/caver-js-ext-kas/pull/94",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2250497092 | Enable path to decode encrypted features
This PR fixes an issue with responses containing encryptedFeatures.
With encryption enabled, no features can be found inside the FeaturesViewModel.
The problem can be reproduced by checking for the existence of decoded features in all FeaturesViewModelTests.
To pass the tests again, I added a check whether the list of unencrypted features isEmpty to enable the path for encrypted features in this case.
There is the same isNullOrEmpty() check in the Kotlin SDK at https://github.com/growthbook/growthbook-kotlin/blob/8e693071a63194518ef074056d40f68108b7a7a2/GrowthBook/src/commonMain/kotlin/com/sdk/growthbook/features/FeaturesViewModel.kt#L102.
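The check amounts to a fallback of roughly this shape (a Python sketch of the logic with hypothetical names; the actual SDKs are Swift and Kotlin): only when the plain features map is empty does the view model take the encrypted path.

```python
def resolve_features(payload, decrypt):
    """payload: parsed SDK response dict; decrypt: callable for encryptedFeatures."""
    features = payload.get("features") or {}
    if not features and payload.get("encryptedFeatures"):
        # Plain features are empty -> fall back to the encrypted payload.
        features = decrypt(payload["encryptedFeatures"])
    return features
```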
I was also wondering whether the server response should even contain features: {} when encryption is enabled, and whether that led to the problem.
Sidenote: We have a similar problem with the Kotlin SDK on Android but we are still investigating the reasons.
@vazarkevych, could you have a look at this PR and the underlying issue, please?
| gharchive/pull-request | 2024-04-18T11:52:20 | 2025-04-01T06:44:23.363693 | {
"authors": [
"mgratzer"
],
"repo": "growthbook/growthbook-swift",
"url": "https://github.com/growthbook/growthbook-swift/pull/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
688231863 | Document grpc-status-details-bin
Moving from https://github.com/grpc/grpc-go/issues/2329 since this is a request for documentation that lives in this repo.
https://github.com/grpc/grpc/pull/12825 by @yang-g attempted to add this to the spec but it was never merged (closed and locked by stale bot).
This field is currently a supported feature of gRPC but its design / behavior is entirely undocumented. We need a specification for this feature to refer to users and to verify that our implementations are correct and complete. Unfortunately, since the current implementations (C/Java/Go/others?) were done without a spec/gRFC in place, we need to make sure the spec allows for all their behaviors.
@dfawley Given grpc/grpc-go#6662, is it possible to reassess the priority of this issue? It'd be really nice for the community to have the expected behavior of grpc-status-details-bin standardized (and ideally included in the conformance tests).
Unfortunately, since the current implementations (C/Java/Go/others?) were done without a spec/gRFC in place, we need to make sure the spec allows for all their behaviors.
FWIW, I think it would be preferred for the spec to actually be opinionated on the "correct" behavior and then any variance in existing implementations could be considered a bug and fixed in future releases.
Likely the most controversial decision that many implementations make is that the contents of grpc-status-details-bin override the contents of grpc-status and grpc-message, in the event that they disagree on the status code and message text. I personally think that is the wrong behavior since it means that clients that do not look at grpc-status-details-bin would then report a different status code and error message than clients that do. It also means that a proxy would need to decode/unmarshal the grpc-status-details-bin data in order to accurately record metrics for RPC status, when it seems simpler and more sane to instead treat grpc-status as authoritative (especially since it is the only thing that has been documented in the spec up to this point).
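For reference, the wire-level step any intermediary must perform before it can even inspect this payload can be sketched in Python with the stdlib: per the gRPC HTTP/2 spec, "-bin" metadata values are base64-encoded and senders may omit the padding, and the decoded bytes of grpc-status-details-bin are a serialized google.rpc.Status message (the proto unmarshalling itself is left out here).

```python
import base64

def decode_bin_metadata(value: str) -> bytes:
    """Decode a gRPC '-bin' metadata value; senders may omit base64 padding."""
    padded = value + "=" * (-len(value) % 4)
    return base64.b64decode(padded)

# Round trip: encode without padding, as a gRPC sender is allowed to do.
raw = b"not a real Status proto"  # stand-in bytes, not an actual Status message
wire = base64.b64encode(raw).decode().rstrip("=")
assert decode_bin_metadata(wire) == raw
```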
@markdroth, @yang-g: this issue has been assigned for over three and a half years. Is there any chance of addressing this in the near future?
At this point, it seems like the best course of action is to document the header as implementation dependent, and state that users should not ever include a Status proto in it that contradicts grpc-status.
C++ doesn't treat it specially at all. Java does validation only when the client application requests it as a Status proto. Go validates it while reading off the wire if it is a Status proto (and converts to an INTERNAL error if there is a mismatch).
@dfawley Are you open to a PR to the specification with text similar to what you're describing? Just documenting that grpc-status-details-bin should contain a Status proto would be an improvement.
@akshayjshah FYI I made #37124 for this if you would like to review the wording.
| gharchive/issue | 2020-08-28T17:18:31 | 2025-04-01T06:44:23.398230 | {
"authors": [
"akshayjshah",
"dfawley",
"jhump"
],
"repo": "grpc/grpc",
"url": "https://github.com/grpc/grpc/issues/24007",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1809244215 | Can utf8bom be used for the Character encoding format of c++ source code generated by proto file
What version of gRPC and what language are you using?
gRPC 1.56.1, C++
What operating system (Linux, Windows,...) and version?
windows 11
What runtime / compiler are you using (e.g. python version or version of gcc)
Microsoft Visual Studio Community 2022 17.6.5
What did you do?
Generate C++ source code from the proto files:
protoc.exe -I. --cpp_out=. --grpc_out=. --plugin=protoc-gen-grpc=grpc_cpp_plugin.exe ./PositionSet.proto
protoc.exe -I. --cpp_out=. --grpc_out=. --plugin=protoc-gen-grpc=grpc_cpp_plugin.exe ./PositionSetStruct.proto
What did you expect to see?
Visual Studio can compile it (VS can only compile UTF-8 source code with a BOM). The proto file has a BOM, but the generated source code does not.
What did you see instead?
Visual Studio cannot compile it, because the generated source is UTF-8 without a BOM.
Anything else we should know about your project / environment?
I think we've got CI tests working with visual studio. I'm not sure if this is a version issue. @veblush Do you know about this?
The problem has always existed, regardless of the gRPC version. For Visual Studio to compile a UTF-8 file containing Chinese, the file needs a BOM.
If the proto file is GBK-encoded, the source code generated by protoc.exe is also GBK, and Visual Studio can compile it.
If the proto file is UTF-8 with BOM, the source code generated by protoc.exe is encoded as UTF-8 but the BOM is missing, and Visual Studio cannot compile it (the proto file contains Chinese comments).
I believe gRPC generates source code from the proto files, so it could just preserve the original encoding of the proto file: if the proto file is encoded as UTF-8 with BOM, the generated source code should keep the UTF-8 BOM encoding.
Does protoc provide a way to do this?
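I'm not aware of a protoc flag for this; if none exists, one workaround is a post-processing step over the generated files that prepends the UTF-8 BOM (bytes EF BB BF). A Python sketch (the file names in the comment are just examples):

```python
BOM = b"\xef\xbb\xbf"  # UTF-8 byte order mark

def add_bom(path):
    """Prepend a UTF-8 BOM to a generated file if it doesn't have one yet."""
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(BOM):
        with open(path, "wb") as f:
            f.write(BOM + data)

# e.g. run after protoc over the generated sources (paths are examples):
# for p in ["PositionSet.pb.cc", "PositionSet.pb.h"]:
#     add_bom(p)
```

Running this as a build step after protoc would leave the generated code UTF-8 with BOM, which is what Visual Studio expects here.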
| gharchive/issue | 2023-07-18T06:43:26 | 2025-04-01T06:44:23.403299 | {
"authors": [
"ml232528",
"yashykt"
],
"repo": "grpc/grpc",
"url": "https://github.com/grpc/grpc/issues/33750",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
337266492 | Reorder steps when starting a server
This change is needed for cleanly enabling server load reporting. With this change, the plugins that are added by the options can also call UpdateServerBuilder().
The only drawback is that the plugin can no longer add active options to the builder. But the plugin needn't add option to the builder in the first place, because its accessibility covers the options'.
My understanding is that option is a public API to allow users to add plugin. We may want to forbid the plugin from adding option to make the dependency clear.
The internal server load reporting plugin should be modified when we import this PR.
****************************************************************
libgrpc.so
VM SIZE FILE SIZE
++++++++++++++ ++++++++++++++
[ = ] 0 0 [ = ]
****************************************************************
libgrpc++.so
VM SIZE FILE SIZE
++++++++++++++ GROWING ++++++++++++++
-------------- SHRINKING --------------
-0.0% -16 [None] -288 -0.0%
-0.4% -48 src/cpp/server/server_builder.cc -48 -0.4%
-1.4% -48 grpc::ServerBuilder::BuildAndStart -48 -1.4%
-0.0% -64 TOTAL -336 -0.0%
[trickle] No significant performance differences
[microbenchmarks] No significant performance differences
Yes, I'm adding the plugin through the option.
Actually that's the only reasonable way I've found for a user to add a plugin. The other way is internal via a static method. https://github.com/grpc/grpc/blob/9a5a3883f59d8d3d5d25ffd4337ca150d97e2e38/include/grpcpp/server_builder.h#L205
So the alternative is to add a public API to allow users to add plugins to the ServerBuilder.
So we need to modify the APIs anyway, I think.
Another alternative is to call InternalAddPluginFactory() at static initialization time.
But the ServerBuilder will always have the plugin after constructed. To disable the plugin, we need to either add a *DisableOption to remove the plugin or add some checking about the channel arg in each method of the plugin.
I see. So the problem is the added cq.
Yes.. I need somewhere between
https://github.com/grpc/grpc/blob/9a5a3883f59d8d3d5d25ffd4337ca150d97e2e38/src/cpp/server/server_builder.cc#L182
and
https://github.com/grpc/grpc/blob/9a5a3883f59d8d3d5d25ffd4337ca150d97e2e38/src/cpp/server/server_builder.cc#L243
to add the cq.
Add an API to add plugin to server builder:
Pros:
More flexibility when users need to add per-server plugin.
The added plugin can have UpdateServerBuilder() method called in server builder.
Cons:
The option can add plugins too. It might be confusing that we have several ways to add a plugin.
Reorder the steps in server builder
Pros:
All the plugins will have UpdateServerBuilder() method called in server builder consistently, no matter how they are added.
Keep a single way to add plugin, i.e., through option.
Cons:
Changed the plugin API expectation.
The plugin can no longer change the options. Losing the ability to add an option seems benign, as mentioned previously. But the plugin also loses the ability to remove an option. I think in most cases it can still be worked around because the plugin has full access to the server builder.
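The reordered flow described above (options run first and may register plugins; then every plugin, however it was added, gets its builder-update hook called) can be sketched as follows. This is an illustrative Python model, not gRPC's actual C++ code, and all class names here are hypothetical:

```python
class Plugin:
    """A hypothetical plugin that can update the builder (e.g. add a cq)."""
    def __init__(self, name):
        self.name = name

    def update_server_builder(self, builder):
        pass  # real plugins would mutate the builder here


class PluginOption:
    """A hypothetical option whose only job is to register a plugin."""
    def __init__(self, name, plugin):
        self.name = name
        self.plugin = plugin

    def update_plugins(self, plugins):
        plugins.append(self.plugin)


class ServerBuilder:
    def __init__(self, options, plugins):
        self.options = list(options)
        self.plugins = list(plugins)
        self.trace = []  # records the order in which steps ran

    def build_and_start(self):
        # Step 1: options run first and may register additional plugins.
        for option in self.options:
            option.update_plugins(self.plugins)
            self.trace.append(("option", option.name))
        # Step 2: every plugin, including ones added by options, gets a
        # chance to update the builder before the server is assembled.
        for plugin in self.plugins:
            plugin.update_server_builder(self)
            self.trace.append(("plugin", plugin.name))
```

With this ordering, a plugin registered through an option is guaranteed to see `update_server_builder` called, which is the property the PR needs for server load reporting.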
Please proceed by updating the comments in the plugin api.
Done.
****************************************************************
libgrpc.so
VM SIZE FILE SIZE
++++++++++++++ ++++++++++++++
[ = ] 0 0 [ = ]
****************************************************************
libgrpc++.so
VM SIZE FILE SIZE
++++++++++++++ GROWING ++++++++++++++
-------------- SHRINKING --------------
-0.0% -16 [None] -288 -0.0%
-0.4% -48 src/cpp/server/server_builder.cc -48 -0.4%
-1.4% -48 grpc::ServerBuilder::BuildAndStart -48 -1.4%
-0.0% -64 TOTAL -336 -0.0%
[trickle] No significant performance differences
[microbenchmarks] No significant performance differences
Bazel Debug build for C/C++: #15742
| gharchive/pull-request | 2018-07-01T05:59:28 | 2025-04-01T06:44:23.413103 | {
"authors": [
"AspirinSJL",
"grpc-testing",
"yang-g"
],
"repo": "grpc/grpc",
"url": "https://github.com/grpc/grpc/pull/15919",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
383088121 | Remove google-api-python-client pin
Reintroduce what was reverted in https://github.com/grpc/grpc/pull/17274
****************************************************************
libgrpc.so
VM SIZE FILE SIZE
++++++++++++++ ++++++++++++++
[ = ] 0 0 [ = ]
****************************************************************
libgrpc++.so
VM SIZE FILE SIZE
++++++++++++++ ++++++++++++++
[ = ] 0 0 [ = ]
[trickle] No significant performance differences
Objective-C binary sizes
*****************STATIC******************
New size Old size
2,020,494 Total (=) 2,020,494
No significant differences in binary sizes
***************FRAMEWORKS****************
New size Old size
11,175,622 Total (<) 11,175,628
No significant differences in binary sizes
Corrupt JSON data (indicates timeout or crash):
bm_call_create.BM_IsolatedFilter_ClientChannelFilter_NoOp_.counters.new: 10
bm_call_create.BM_IsolatedFilter_ClientChannelFilter_NoOp_.counters.old: 10
[microbenchmarks] No significant performance differences
****************************************************************
libgrpc.so
VM SIZE FILE SIZE
++++++++++++++ ++++++++++++++
[ = ] 0 0 [ = ]
****************************************************************
libgrpc++.so
VM SIZE FILE SIZE
++++++++++++++ ++++++++++++++
[ = ] 0 0 [ = ]
[trickle] No significant performance differences
Objective-C binary sizes
*****************STATIC******************
New size Old size
2,020,410 Total (=) 2,020,410
No significant differences in binary sizes
***************FRAMEWORKS****************
New size Old size
11,174,071 Total (<) 11,174,076
No significant differences in binary sizes
Corrupt JSON data (indicates timeout or crash):
bm_call_create.BM_IsolatedFilter_ClientChannelFilter_NoOp_.counters.new: 10
bm_call_create.BM_IsolatedFilter_ClientChannelFilter_NoOp_.counters.old: 10
[microbenchmarks] No significant performance differences
| gharchive/pull-request | 2018-11-21T12:05:57 | 2025-04-01T06:44:23.416459 | {
"authors": [
"grpc-testing",
"jtattermusch"
],
"repo": "grpc/grpc",
"url": "https://github.com/grpc/grpc/pull/17275",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
661712605 | Upmerge 1.30.x branch into master
(preparing for the next release).
CC @apolcyn @stanley-cheung @lidizheng @ericgribkoff as you all had some changes in 1.30.x (looks like all the backports are also present in master already).
| gharchive/pull-request | 2020-07-20T11:31:17 | 2025-04-01T06:44:23.418158 | {
"authors": [
"jtattermusch"
],
"repo": "grpc/grpc",
"url": "https://github.com/grpc/grpc/pull/23548",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
945799961 | Buffer HPACK parsing until the end of a header boundary
HTTP2 headers are sent in (potentially) many frames, but all must be
sent sequentially with no traffic intervening.
This was not clear when I wrote the HPACK parser, and still indeed quite
contentious on the HTTP2 mailing lists.
Now that matter is well settled (years ago!) take advantage of the fact
by delaying parsing until all bytes are available.
A future change will leverage this to avoid having to store and verify
partial parse state, completely eliminating indirect calls within the
parser.
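The buffering idea can be sketched like this. It is an illustrative Python model of the approach, not the actual C++ HPACK parser; `parse_header_block` is a hypothetical stand-in for the real decode step:

```python
def feed_frames(frames):
    """Accumulate HEADERS/CONTINUATION payloads; parse only once the
    END_HEADERS flag marks the end of the header block.

    `frames` is an iterable of (payload_bytes, end_headers_flag) pairs.
    Because all fragments of one header block arrive contiguously, the
    parser never has to persist partial-parse state between frames.
    """
    buffer = bytearray()
    for payload, end_headers in frames:
        buffer.extend(payload)
        if end_headers:
            # All bytes of the header block are now available.
            result = parse_header_block(bytes(buffer))
            buffer.clear()
            return result
    return None  # header block still incomplete; keep waiting


def parse_header_block(data):
    # Stand-in for the real HPACK decode step.
    return data.decode("ascii")
```

For example, `feed_frames([(b"he", False), (b"ader", True)])` invokes the parser exactly once, on the full `b"header"` byte string.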
Test changes are usually ill advised to get a change through... so here are some notes on changes I made:
duplicate_headers wants to verify that we do not segfault on illegally duplicated headers, so verifying successful completion on some ops was over specifying the test.
large_metadata had previously sent a partial header fragment; under the new scheme such a partial header will not be parsed, and consequently not trigger the condition in question
@nicolasnoble @jtattermusch can you help me with why the tests don't seem to be running here?
Ok, looks like the tests are running again, fixed the last breakage I know of.
Pinging on this for review
Known issues: #24375
| gharchive/pull-request | 2021-07-15T22:36:24 | 2025-04-01T06:44:23.421940 | {
"authors": [
"ctiller"
],
"repo": "grpc/grpc",
"url": "https://github.com/grpc/grpc/pull/26700",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
953174212 | Second attempt: use CallTracer API in client channel code
The first commit reverts #26772, which reverted the original attempt in #26714.
The second commit reverts commit a9657b5 from the original PR, which is what was triggering the TSAN failures.
The third commit restores some code that should have been removed as part of a9657b5 but was instead removed later, which is why I missed restoring it when I originally tried to revert just that commit in #26770, resulting in ASAN failures.
I think this should actually work now. :)
Known issues: #26595
The "Artifact Build MacOS" failure is an infrastructure timeout.
| gharchive/pull-request | 2021-07-26T18:31:02 | 2025-04-01T06:44:23.423697 | {
"authors": [
"markdroth"
],
"repo": "grpc/grpc",
"url": "https://github.com/grpc/grpc/pull/26790",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
75693413 | Updates README: links to Stylus's new repo
The Stylus project moved from LearnBoost/stylus to stylus/stylus. Links to the old issue tracker are thus broken.
https://github.com/gruntjs/grunt-contrib-stylus/blob/master/docs/stylus-options.md please also update the /docs directory files
Ok, doing that now.
Actually, which of these three is the correct website for Stylus?
http://stylus-lang.com/
http://stylus.github.io/stylus/
http://learnboost.github.io/stylus/
Nevermind, my last question is for the stylus folks.
Is the README.md auto-generated from the /docs/ files? If so, what's the command to do that?
@cspotcode run grunt :)
Whoops, I did not see that in the gruntfile. Thank you :)
I updated the docs so I think this PR is good to go.
Thanks!
| gharchive/pull-request | 2015-05-12T18:25:58 | 2025-04-01T06:44:23.441814 | {
"authors": [
"cspotcode",
"vladikoff"
],
"repo": "gruntjs/grunt-contrib-stylus",
"url": "https://github.com/gruntjs/grunt-contrib-stylus/pull/136",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
791483799 | Terraform 0.14.3 breaks Terraform Output Handling
Summary
With Terraform 0.14.3, values retrieved via the terraform.Output functionality contain quotation marks and hence break output usage.
Investigation
With the release of terraform 0.14.3 a new output flag was introduced: -raw
which leads to the introduction of quotation marks for none raw output, e.g.
"tf-asg-2021012119532247190000000d" instead of tf-asg-2021012119532247190000000d.
This breaks the usage of:
https://github.com/gruntwork-io/terratest/blob/0c883d74782fecaf8e5987a760f5aa9e06a04a7f/modules/terraform/output.go#L24
Proposed Solutions
Adapt to reflect the latest changes:
For terraform version >= 0.14.3:
output, err := RunTerraformCommandAndGetStdoutE(t, options, "output", "-raw", "-no-color", key)
For versions below, keep the current implementation.
Provide additional functionality and redirect version handling to the user.
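The version-dependent flag selection in proposed solution 1 could be sketched as below. Terratest itself is written in Go; this is a language-neutral illustration in Python, and the helper name is hypothetical:

```python
def output_args(terraform_version, key):
    """Build the `terraform output` argument list, adding -raw only for
    Terraform >= 0.14.3, the release that introduced the flag."""
    version = tuple(int(part) for part in terraform_version.split("."))
    args = ["output"]
    if version >= (0, 14, 3):
        # -raw prints the bare string value with no surrounding quotes.
        args.append("-raw")
    args += ["-no-color", key]
    return args
```

For example, `output_args("0.14.3", "asg_name")` yields `["output", "-raw", "-no-color", "asg_name"]`, while older versions fall back to the unquoted-handling behavior of the current implementation.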
Relation
https://github.com/hashicorp/terraform-aws-nomad/issues/84
https://github.com/hashicorp/terraform-aws-nomad/pull/85
What version of Terratest are you using? We fixed issues related to output in https://github.com/gruntwork-io/terratest/releases/tag/v0.31.0.
The version defined here:
https://github.com/hashicorp/terraform-aws-nomad/blob/10c23820707e0ef7eef9ea24f59dc67ef7046e45/test/go.mod#L5
-> v0.30.11
I can confirm, using version v.0.31.0 or above solves the issue.
| gharchive/issue | 2021-01-21T20:50:58 | 2025-04-01T06:44:23.451565 | {
"authors": [
"MatthiasScholz",
"brikis98"
],
"repo": "gruntwork-io/terratest",
"url": "https://github.com/gruntwork-io/terratest/issues/766",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
981815415 | 🛑 volkstanz.st is down
In 6945068, volkstanz.st (https://volkstanz.st) was down:
HTTP code: 0
Response time: 0 ms
Resolved: volkstanz.st is back up in 10f8fd7.
| gharchive/issue | 2021-08-28T11:46:55 | 2025-04-01T06:44:23.461331 | {
"authors": [
"grzchr15"
],
"repo": "grzchr15/uptime",
"url": "https://github.com/grzchr15/uptime/issues/1573",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1007482950 | 🛑 volkstanzwannwo.at is down
In 0c9ad5b, volkstanzwannwo.at (https://volkstanzwannwo.at) was down:
HTTP code: 0
Response time: 0 ms
Resolved: volkstanzwannwo.at is back up in 1ad0767.
| gharchive/issue | 2021-09-26T19:39:26 | 2025-04-01T06:44:23.464336 | {
"authors": [
"grzchr15"
],
"repo": "grzchr15/uptime",
"url": "https://github.com/grzchr15/uptime/issues/2407",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1047715393 | 🛑 kinderundjugendtanz.at is down
In 49746c0, kinderundjugendtanz.at (https://kinderundjugendtanz.at) was down:
HTTP code: 0
Response time: 0 ms
Resolved: kinderundjugendtanz.at is back up in 6a7cd35.
| gharchive/issue | 2021-11-08T17:26:30 | 2025-04-01T06:44:23.467516 | {
"authors": [
"grzchr15"
],
"repo": "grzchr15/uptime",
"url": "https://github.com/grzchr15/uptime/issues/3414",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1056651303 | 🛑 volkstanz.st is down
In 5dc1832, volkstanz.st (https://volkstanz.st) was down:
HTTP code: 0
Response time: 0 ms
Resolved: volkstanz.st is back up in dc30e94.
| gharchive/issue | 2021-11-17T21:30:22 | 2025-04-01T06:44:23.470738 | {
"authors": [
"grzchr15"
],
"repo": "grzchr15/uptime",
"url": "https://github.com/grzchr15/uptime/issues/3638",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
941324315 | 🛑 volkstanz.st is down
In 0402f9d, volkstanz.st (https://volkstanz.st) was down:
HTTP code: 0
Response time: 0 ms
Resolved: volkstanz.st is back up in 2cb2b7d.
| gharchive/issue | 2021-07-10T21:09:29 | 2025-04-01T06:44:23.473644 | {
"authors": [
"grzchr15"
],
"repo": "grzchr15/uptime",
"url": "https://github.com/grzchr15/uptime/issues/743",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2378939928 | 🛑 volkstanz.st is down
In 9d07346, volkstanz.st (https://volkstanz.st) was down:
HTTP code: 0
Response time: 0 ms
Resolved: volkstanz.st is back up in 7750349 after 8 minutes.
| gharchive/issue | 2024-06-27T19:27:16 | 2025-04-01T06:44:23.476609 | {
"authors": [
"grzchr15"
],
"repo": "grzchr15/uptime",
"url": "https://github.com/grzchr15/uptime/issues/8535",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
162111784 | Any.Instance should avoid copy-constructor if possible
Check this with a test; currently I think this throws an exception.
The real problem is when copy constructor is public and there
Added better heuristics
| gharchive/issue | 2016-06-24T09:41:59 | 2025-04-01T06:44:23.483524 | {
"authors": [
"grzesiek-galezowski"
],
"repo": "grzesiek-galezowski/tdd-toolkit",
"url": "https://github.com/grzesiek-galezowski/tdd-toolkit/issues/5",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1901249232 | Feature/datepicker
Rebrand of DatePicker - Ticket : IGL-70
Good job,
there's just one small detail left, which is the date selection in a disabled range that shouldn't have a hover effect. Example here on September 23
Default
Hover
| gharchive/pull-request | 2023-09-18T15:52:37 | 2025-04-01T06:44:23.506684 | {
"authors": [
"fraincs",
"franckgaudin"
],
"repo": "gsoft-inc/ov-igloo-ui",
"url": "https://github.com/gsoft-inc/ov-igloo-ui/pull/540",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
981530874 | Should gdk::Event::is_pointer_emulated really take a &mut self?
The gdk::Event::is_pointer_emulated method takes a &mut self argument, but it sounds like maybe it should take a &self? I'd be happy to send a PR if this is indeed the case.
Yes it shouldn't take a mutable reference. If you send a PR, please also check if any other event getters have the same problem. Thanks!
Seems to be something wrong on either the introspection side of things or gir. See https://gitlab.gnome.org/World/Rust/libhandy-rs/-/issues/19
Hm, does that mean that I need to fix something in gir? Or is it ok just to modify gdk/src/event.rs?
Hm, does that mean that I need to fix something in gir? Or is it ok just to modify gdk/src/event.rs?
It's fine to fix it on gdk's side for now, but i would open an issue to investigate it either on gir's side or here in gtk3
How is gir relevant here? It's manually bound, so a simple human mistake in the code :)
See https://github.com/gtk-rs/gtk3-rs/pull/626
| gharchive/issue | 2021-08-27T19:22:37 | 2025-04-01T06:44:23.526601 | {
"authors": [
"bilelmoussaoui",
"jneem",
"sdroege"
],
"repo": "gtk-rs/gtk3-rs",
"url": "https://github.com/gtk-rs/gtk3-rs/issues/625",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
174273144 | Change the functions from private to public?
To allow them to be called from the prototypes...
Is this good for performance?
http://ejohn.org/blog/function-call-profiling/
http://www.broofa.com/2009/02/javascript-inheritance-performance/
Some functions were made public so they can be used in the callbacks
| gharchive/issue | 2016-08-31T13:28:27 | 2025-04-01T06:44:23.528836 | {
"authors": [
"gtoubiana"
],
"repo": "gtoubiana/acte",
"url": "https://github.com/gtoubiana/acte/issues/184",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1025291842 | Please support source distributions on PyPi
Use case
Cross platform deployments of software systems that depend on lightkube may want to build it from source for a specific platform before deploying.
Issue
At present only prebuilt wheels are available through PyPi
All right, I released v0.8.1 also as tar.gz. Will do the same for the models later today
Closing since this issue is now resolved. Thank you very much.
| gharchive/issue | 2021-10-13T14:01:52 | 2025-04-01T06:44:23.530961 | {
"authors": [
"balbirthomas",
"gtsystem"
],
"repo": "gtsystem/lightkube",
"url": "https://github.com/gtsystem/lightkube/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1185976871 | Thanks; not sure whether this exception is on my side
yii-dingtalk\src\DingTalk.php
Line 172: static::$_app = Yii::createObject(Application::class, $this->options);
The generated configuration is empty.
It needs to be changed to:
static::$_app = new Application( $this->options );
@chenjiabin This may be a problem caused by the Yii version. If convenient, please make the change and open a PR; I'll merge it on my side.
| gharchive/issue | 2022-03-30T06:56:10 | 2025-04-01T06:44:23.539355 | {
"authors": [
"chenjiabin",
"guanguans"
],
"repo": "guanguans/yii-dingtalk",
"url": "https://github.com/guanguans/yii-dingtalk/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2248444755 | RestrictToTopic validator does not work in guard
Import Guard and Validator
from guardrails.hub import RestrictToTopic
from guardrails import Guard
Setup Guard
guard = Guard().use(
RestrictToTopic(
valid_topics=["sports"],
invalid_topics=["music"],
disable_classifier=True,
disable_llm=False,
on_fail="exception"
)
)
guard.validate("""
In Super Bowl LVII in 2023, the Chiefs clashed with the Philadelphia Eagles in a fiercely contested battle, ultimately emerging victorious with a score of 38-35.
""") # Validator passes
guard.validate("""
The Beatles were a charismatic English pop-rock band of the 1960s.
""") # Validator fails
How do I modify the above code to utilize an OpenAI API key?
I'm getting this error:
OpenAIError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in __call__(self, fn, *args, **kwargs)
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
26 frames
OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
The above exception was the direct cause of the following exception:
RetryError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in iter(self, retry_state)
324 if self.reraise:
325 raise retry_exc.reraise()
--> 326 raise retry_exc from fut.exception()
327
328 if self.wait:
RetryError: RetryError[<Future at 0x783bd95637c0 state=finished raised OpenAIError>
@Pratekh are you providing an OpenAI API Key as listed in the requirements for this validator? https://hub.guardrailsai.com/validator/tryolabs/restricttotopic
That seems to be the culprit according the error message you posted:
OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
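Per the error message, the fix is to supply the key via the environment. Assuming a valid key, a minimal way to satisfy that requirement is to export the variable before constructing the guard (the key value below is a placeholder, not a real credential):

```python
import os

# Hypothetical placeholder; substitute your real OpenAI API key, or set
# OPENAI_API_KEY in the shell before launching Python.
os.environ["OPENAI_API_KEY"] = "sk-..."

# The OpenAI client used by the validator reads this variable at call
# time, so it must be set before guard.validate() runs.
```

Setting it in the shell (`export OPENAI_API_KEY=...`) before starting the process achieves the same thing.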
@Pratekh this hasn't had any updates in a few months. If you're still having any issues please don't hesitate to reopen this or comment.
| gharchive/issue | 2024-04-17T14:29:24 | 2025-04-01T06:44:23.600690 | {
"authors": [
"CalebCourier",
"Pratekh",
"dtam"
],
"repo": "guardrails-ai/guardrails",
"url": "https://github.com/guardrails-ai/guardrails/issues/719",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1649012232 | 🛑 Sefaz - Rio Grande do Sul is down
In 12343cb, Sefaz - Rio Grande do Sul (https://onboardapi.guichepass.com.br/sefaz?code=2) was down:
HTTP code: 500
Response time: 132 ms
Resolved: Sefaz - Rio Grande do Sul is back up in 1a82bb8.
| gharchive/issue | 2023-03-31T09:21:05 | 2025-04-01T06:44:23.608240 | {
"authors": [
"suporte-gpass"
],
"repo": "guichevirtual/statuspage",
"url": "https://github.com/guichevirtual/statuspage/issues/1184",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1679756489 | 🛑 Sefaz - Rio Grande do Sul is down
In ea03d23, Sefaz - Rio Grande do Sul (https://onboardapi.guichepass.com.br/sefaz?code=2) was down:
HTTP code: 500
Response time: 133 ms
Resolved: Sefaz - Rio Grande do Sul is back up in 33b449b.
| gharchive/issue | 2023-04-22T22:27:19 | 2025-04-01T06:44:23.610741 | {
"authors": [
"suporte-gpass"
],
"repo": "guichevirtual/statuspage",
"url": "https://github.com/guichevirtual/statuspage/issues/1291",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
645887108 | add characterLimit option
Fix #3
@PauloGoncalvesBH can you try this PR with uses: guilhem/rss-issues-action@27966e7
My third and final suggestion is to complement the README with this new option 😅
jobs:
gke-release:
runs-on: ubuntu-latest
steps:
- - uses: guilhem/rss-issues-action@0.0.1
+ - uses: guilhem/rss-issues-action@3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
feed: "https://cloud.google.com/feeds/kubernetes-engine-release-notes.xml"
+ characterLimit: "255"
prefix: "[GKE]"
dry-run: "false"
lastTime: "92h"
labels: "liens/Kubernetes"
You can test it with branch 3 ;)
I really liked the change, it's great 🎉
| gharchive/pull-request | 2020-06-25T22:25:32 | 2025-04-01T06:44:23.616394 | {
"authors": [
"PauloGoncalvesBH",
"guilhem"
],
"repo": "guilhem/rss-issues-action",
"url": "https://github.com/guilhem/rss-issues-action/pull/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2512233254 | Cant specify system prompt
Like the prompt format but submitted once at the beginning of the conversation
Hi. You can read about system prompt here.
| gharchive/issue | 2024-09-08T07:00:33 | 2025-04-01T06:44:23.620435 | {
"authors": [
"guinmoon",
"trufae"
],
"repo": "guinmoon/LLMFarm",
"url": "https://github.com/guinmoon/LLMFarm/issues/92",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
53348991 | Cannot install gulp CLI
I can install/uninstall other global tools just fine, for example: sudo npm un -g clean-css and sudo npm install -g clean-css both run fine for me. But when I run sudo npm install -g gulp, I get the following error:
shell-init: error retrieving current directory: getcwd: cannot access parent directories: Permission denied
node.js:838
var cwd = process.cwd();
^
Error: EACCES, permission denied
at Function.startup.resolveArgv0 (node.js:838:23)
at startup (node.js:58:13)
at node.js:929:3
npm ERR! Darwin 14.0.0
npm ERR! argv "node" "/usr/local/bin/npm" "install" "-g" "gulp"
npm ERR! node v0.10.35
npm ERR! npm v2.1.17
npm ERR! code ELIFECYCLE
npm ERR! v8flags@1.0.8 install: `node fetch.js`
npm ERR! Exit status 8
npm ERR!
npm ERR! Failed at the v8flags@1.0.8 install script 'node fetch.js'.
npm ERR! This is most likely a problem with the v8flags package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node fetch.js
npm ERR! You can get their info via:
npm ERR! npm owner ls v8flags
npm ERR! There is likely additional logging output above.
Here's my full npm-debug.log.
your npm install is messed up, search for the issue there - this has nothing to do with gulp
| gharchive/issue | 2015-01-04T21:23:10 | 2025-04-01T06:44:23.639852 | {
"authors": [
"contra",
"djmadeira"
],
"repo": "gulpjs/gulp",
"url": "https://github.com/gulpjs/gulp/issues/852",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
782203931 | Error with corpus training
Hi! I have this code:
from chatterbot.trainers import ChatterBotCorpusTrainer, ListTrainer
Elli0t = ChatBot(name = 'Elli0t', read_only = False, logic_adapters = ["chatterbot.logic.BestMatch"])
corpus_trainer = ChatterBotCorpusTrainer(Elli0t)
corpus_trainer.train("chatterbot.corpus.english")
but I get FileNotFoundError:
FileNotFoundError: [Errno 2] No such file or directory: '/home/pi/chatterbot_corpus/data/english'
Is there any way to fix this?
My fault, I forgot to install the corpus
| gharchive/issue | 2021-01-08T15:37:17 | 2025-04-01T06:44:23.650865 | {
"authors": [
"isaa-ctaylor"
],
"repo": "gunthercox/ChatterBot",
"url": "https://github.com/gunthercox/ChatterBot/issues/2097",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
667056881 | Create 1.4.x and 2.2.x branch to support both LTS
Seems that performance squad is working on 1.4.x, and we should make sure that is continuously supported
what do you mean by performance squad ?
our ci/cd uses the latest image, so ... do you mean we should matrix over the Fabric version in ci/cd?
fixed by #51
| gharchive/issue | 2020-07-28T13:05:19 | 2025-04-01T06:44:23.652414 | {
"authors": [
"SamYuan1990",
"guoger"
],
"repo": "guoger/stupid",
"url": "https://github.com/guoger/stupid/issues/42",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
688371707 | Image options
Could you please add the option to choose image resolution and format?
I am not sure we can pass resolution parameter while capturing image directly. That is only possible if you have bing api key.
| gharchive/issue | 2020-08-28T21:45:49 | 2025-04-01T06:44:23.658629 | {
"authors": [
"Maracko",
"gurugaurav"
],
"repo": "gurugaurav/bing_image_downloader",
"url": "https://github.com/gurugaurav/bing_image_downloader/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
64036138 | disable running composer with bldr
Lets see if this works.
related #397
| gharchive/pull-request | 2015-03-24T16:05:21 | 2025-04-01T06:44:23.659387 | {
"authors": [
"cordoval",
"sstok"
],
"repo": "gushphp/gush",
"url": "https://github.com/gushphp/gush/pull/399",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1165846586 | Where is the slide for lesson 4?
Just testing the lessons lol
Just testing the lessons lol
exactly lol, cheers
| gharchive/issue | 2022-03-10T23:47:55 | 2025-04-01T06:44:23.669084 | {
"authors": [
"lucas0395",
"samuel-nasc"
],
"repo": "gustavoguanabara/git-github",
"url": "https://github.com/gustavoguanabara/git-github/issues/1550",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2533011518 | encounter many 'gbk' codec errors
Since the official GraphRAG requires UTF-8 encoding, I prepared some input files in UTF-8 format.
When using them in nano-graphrag, I hardcoded encoding='utf-8', but I still encounter many 'gbk' codec errors. Could I have a global config to determine the encoding format?
for example, in _storage.py:
Thanks!
and the output file "vdb_entities.json" encoding is also not utf-8
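The underlying cause is that Python's `open()` without an explicit encoding falls back to the locale default, which is gbk on Chinese-locale Windows. A minimal sketch of the fix, using hypothetical helper names rather than nano-graphrag's actual functions:

```python
import json


def save_json(obj, path):
    # Passing encoding explicitly makes the output byte-identical across
    # platforms, instead of depending on the locale default (gbk on
    # Chinese-locale Windows, utf-8 on most Linux/macOS setups).
    with open(path, "w", encoding="utf-8") as f:
        # ensure_ascii=False keeps Chinese text readable in the file
        # instead of \uXXXX escapes.
        json.dump(obj, f, ensure_ascii=False)


def load_json(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

Every `open()` call on the write and read paths needs the same treatment, which is why fixing one site at a time kept surfacing new 'gbk' codec errors elsewhere.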
Are you using it on Windows? I've run into this too; the writes probably don't specify utf-8.
Are you using it on Windows? I've run into this too; the writes probably don't specify utf-8.
Yes, on Windows. I changed the code to specify it, but then other places threw the same error; it would be better if the author fixed it in the source.
It's fixed now; you can pull the latest code and test it.
Fixed save/write encoding problem of utf-8
It's fixed now; you can pull the latest code and test it. Fixed save/write encoding problem of utf-8
Thanks for the reply! But it still errors out, line 121 in _storage.py:
Exception has occurred: UnicodeEncodeError
'gbk' codec can't encode character '\uc0bc' in position 3: illegal multibyte sequence
File "C:\demo\nano-graphrag\nano_graphrag\graphrag.py", line 312, in ainsert
await self.chunk_entity_relation_graph.clustering(
File "C:\demo\nano-graphrag\nano_graphrag\_storage.py", line 374, in clustering
await self._clustering_algorithms[algorithm](
File "C:\demo\nano-graphrag\nano_graphrag\_storage.py", line 437, in _leiden_clustering
from graspologic.partition import hierarchical_leiden
ModuleNotFoundError: No module named 'past'
During handling of the above exception, another exception occurred:
File "C:\demo\nano-graphrag\nano_graphrag\_storage.py", line 121, in index_done_callback
self._client.save()
File "C:\demo\nano-graphrag\nano_graphrag\graphrag.py", line 339, in _insert_done
await asyncio.gather(*tasks)
File "C:\demo\nano-graphrag\nano_graphrag\graphrag.py", line 323, in ainsert
await self._insert_done()
File "C:\demo\nano-graphrag\nano_graphrag\graphrag.py", line 205, in insert
return loop.run_until_complete(self.ainsert(string_or_strings))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\demo\nano-graphrag\test.py", line 12, in
graph_func.insert(f.read())
UnicodeEncodeError: 'gbk' codec can't encode character '\uc0bc' in position 3: illegal multibyte sequence
Also: the written file vdb_entities.json is still garbled when opened as UTF-8, but opens fine as GB2312.
Is it a new working dir?
The original working_dir; I kept only the original .txt file and deleted all the other intermediate files. I also switched 4o to 4o-mini; it errors out if the intermediate files are not deleted.
Did you update the repo via pip install git+? Btw, you need to pip install future
Oh, I updated the repo but forgot to update the pip-installed nano :( Will test again later. Thanks for the reminder!
| gharchive/issue | 2024-09-18T08:00:11 | 2025-04-01T06:44:23.686279 | {
"authors": [
"cyberflying",
"gusye1234",
"luckfu",
"rangehow"
],
"repo": "gusye1234/nano-graphrag",
"url": "https://github.com/gusye1234/nano-graphrag/issues/50",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
200099122 | Fix webkit detection
Fix #215 issue
@guybedford could you please assist here?
@victordidenko sure.
Thanks so much!
Would you be so kind to issue new release? Or tell, when you are planning to?
I'm not going to cut a release anytime soon unfortunately. The project is looking for maintainers though who are welcome to make releases.
Oh, I've never maintained something, that are in active use of many users :fearful:
But we badly need this fix in one of our projects...
I could try, though. What are the requirements for maintainer's vacancy? :)
@victordidenko this change was merged to master, but it was not re-applied to css.min.js, so release 0.1.8 contains only half of the fixed places. It should be released in 0.1.9 or later.
| gharchive/pull-request | 2017-01-11T13:54:34 | 2025-04-01T06:44:23.691814 | {
"authors": [
"alundiak",
"guybedford",
"victordidenko"
],
"repo": "guybedford/require-css",
"url": "https://github.com/guybedford/require-css/pull/220",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
235937239 | GCEInstanceService.create doesn't return created instance
The following code, based on the tutorial, doesn't work because GCEInstanceService.create doesn't return the created instance
debian_img_id = 'debian-8-jessie-v20170523'
img = provider.compute.images.get(debian_img_id)
inst_type = sorted([t for t in provider.compute.instance_types.list()
if t.vcpus >= 2 and t.ram >= 4],
key=lambda x: x.vcpus*x.ram)[0]
inst = provider.compute.instances.create(
name='cloudbridge-intro', image=img, instance_type=inst_type)
# Wait until ready
inst.wait_till_ready() # This is a blocking call
# Show instance state
print(inst.state)
Fails with error
Traceback (most recent call last):
File "test_gce.py", line 29, in <module>
inst.wait_till_ready() # This is a blocking call
AttributeError: 'NoneType' object has no attribute 'wait_till_ready'
I confirmed through the Google Cloud Engine dashboard that the instance was in fact created.
I've tried adding return GCEInstance(self.provider, response) to the bottom of the GCEInstanceService.create method, but am getting HttpAccessTokenRefreshError: invalid_scope: Empty or missing scope not allowed. when calling the wait_till_ready method (as well as a few others, such as refresh). Ping @baizhang
There are two issues here.
First, the create method did not return the newly created instance. And return GCEInstance(self.provider, response) did not work, because response is a zone operation resource (instead of the instance resource). Commit 5bd1d05 fixes this problem by returning the newly created instance and also making the call block until the operation is complete.
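The shape of that fix can be pictured as a small polling loop: wait for the zone operation to report DONE, then fetch the instance resource and hand it back. Here is a minimal sketch (get_operation and get_instance are hypothetical stand-ins for the underlying GCE API calls, not CloudBridge's actual helpers):

```python
import time

def create_and_wait(get_operation, get_instance, op_name, instance_name,
                    poll_interval=1.0, timeout=60.0):
    """Block until a zone operation reports DONE, then fetch the instance.

    get_operation(name) and get_instance(name) are hypothetical stand-ins
    for the GCE API calls; they are not CloudBridge's actual helpers.
    """
    deadline = time.time() + timeout
    while True:
        op = get_operation(op_name)
        if op.get('status') == 'DONE':
            # The zone operation resource is not the instance resource;
            # the instance must be fetched explicitly before returning it.
            return get_instance(instance_name)
        if time.time() >= deadline:
            raise TimeoutError('zone operation %r did not complete' % op_name)
        time.sleep(poll_interval)
```

With poll_interval=0 and stubbed callables this is easy to exercise in isolation; the real fix lives in commit 5bd1d05.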
The second issue is due to a change made in commit 45a0bf5, where a GCE image now uses its selfLink as its id. We will make this consistent across GCE resources, using selfLink as the id. So the following slightly modified code should be able to create and return a new GCE instance:
from cloudbridge.cloud.factory import CloudProviderFactory, ProviderList
provider = CloudProviderFactory().create_provider(ProviderList.GCE, {})
debian_img_name = 'debian-8-jessie-v20170523'
debian_images = provider.compute.images.find(debian_img_name)
img = debian_images[0]
# Alternatively
# img = provider.compute.images.get(
# 'https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-8-jessie-v20170523')
inst_type = sorted([t for t in provider.compute.instance_types.list()
if t.vcpus >= 2 and t.ram >= 4],
key=lambda x: x.vcpus*x.ram)[0]
inst = provider.compute.instances.create(
name='cloudbridge-intro', image=img, instance_type=inst_type)
Also add @mbookman here.
I believe that this can be closed.
It was fixed by Bai's change: 5bd1d052408dfa4eb7d0654f5f35dd45e9891533.
@machristie @nuwang @afgane, please confirm.
| gharchive/issue | 2017-06-14T16:27:03 | 2025-04-01T06:44:23.698123 | {
"authors": [
"afgane",
"baizhang",
"machristie",
"mbookman"
],
"repo": "gvlproject/cloudbridge",
"url": "https://github.com/gvlproject/cloudbridge/issues/47",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
636916173 | FFTW3 doesn't exist
When I run the command cmake --build . --config Release --target install,
I get this error output:
[ 1%] Building CXX object _deps/rtff-build/src/rtff/CMakeFiles/rtff.dir/fft/fftw/fftw_fft.cc.o
/Users/username/Desktop/repos/vstSpleeter/external/spleeterpp/build/_deps/rtff-src/src/rtff/fft/fftw/fftw_fft.cc:5:10: fatal error:
'fftw3.h' file not found
#include "fftw3.h"
^~~~~~~~~
1 error generated.
make[2]: *** [_deps/rtff-build/src/rtff/CMakeFiles/rtff.dir/fft/fftw/fftw_fft.cc.o] Error 1
make[1]: *** [_deps/rtff-build/src/rtff/CMakeFiles/rtff.dir/all] Error 2
make: *** [all] Error 2
Same error on macOS Catalina.
This is full output of cmake ..
Seems something wrong with FFTW. Can someone help?
[koji@MacBook-Pro:~/work/PartScratch/spleeterpp/build]$ cmake ..
-- The C compiler identification is AppleClang 11.0.3.11030032
-- The CXX compiler identification is AppleClang 11.0.3.11030032
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- using spleeter_input_frame_count 64
-- Downloading models from https://github.com/gvne/spleeterpp/releases/download/models-1.0/models.zip
-- [download 0% complete]
-- [download 100% complete]
-- [download 0% complete]
-- [download 100% complete]
-- Downloading online models from https://github.com/gvne/spleeterpp/releases/download/olmodels-v1.0/models.zip
-- [download 0% complete]
-- [download 100% complete]
-- [download 0% complete]
-- [download 100% complete]
-- Downloading tensorflow C API pre-built
-- [download 0% complete]
-- [download 100% complete]
-- Downloading FFTW sources
-- [download 0% complete]
-- [download 100% complete]
-- FFTW - Configure: ./configure;--prefix=/Users/koji/work/PartScratch/spleeterpp/build/_deps/rtff-build/fftw/build;--enable-static;--enable-float
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... ./install-sh -c -d
checking for gawk... no
checking for mawk... no
checking for nawk... no
checking for awk... awk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking build system type... x86_64-apple-darwin19.6.0
checking host system type... x86_64-apple-darwin19.6.0
checking for gcc... /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... configure: error: in `/Users/koji/work/PartScratch/spleeterpp/build/_deps/rtff-build/fftw/fftw-3.3.7':
configure: error: cannot run C compiled programs.
If you meant to cross compile, use `--host'.
See `config.log' for more details
-- FFTW - Make
make: *** No targets specified and no makefile found. Stop.
-- FFTW - Install
make: Nothing to be done for `install'.
-- FFTW - Found at fftw3f-NOTFOUND
-- Found PythonInterp: /usr/bin/python (found version "2.7.16")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/koji/work/PartScratch/spleeterpp/build
Adding more detail: here are the contents of build/_deps/rtff-build/fftw/fftw-3.3.7/config.log.
The log says stdio.h was not found...
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by fftw configure 3.3.7, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ ./configure --prefix=/Users/koji/work/PartScratch/spleeterpp/build/_deps/rtff-build/fftw/build --enable-static --enable-float
## --------- ##
## Platform. ##
## --------- ##
hostname = MacBook-Pro.local
uname -m = x86_64
uname -r = 19.6.0
uname -s = Darwin
uname -v = Darwin Kernel Version 19.6.0: Thu Jun 18 20:49:00 PDT 2020; root:xnu-6153.141.1~1/RELEASE_X86_64
/usr/bin/uname -p = i386
/bin/uname -X = unknown
/bin/arch = unknown
/usr/bin/arch -k = unknown
/usr/convex/getsysinfo = unknown
/usr/bin/hostinfo = Mach kernel version:
Darwin Kernel Version 19.6.0: Thu Jun 18 20:49:00 PDT 2020; root:xnu-6153.141.1~1/RELEASE_X86_64
Kernel configured for up to 8 processors.
4 processors are physically available.
8 processors are logically available.
Processor type: x86_64h (Intel x86-64h Haswell)
Processors active: 0 1 2 3 4 5 6 7
Primary memory available: 16.00 gigabytes
Default processor set: 421 tasks, 1821 threads, 8 processors
Load average: 16.32, Mach factor: 1.19
/bin/machine = unknown
/usr/bin/oslevel = unknown
/bin/universe = unknown
PATH: /Users/koji/.pyenv/shims
PATH: /Users/koji/.pyenv/bin
PATH: /Users/koji/.cargo/bin
PATH: /usr/local/bin
PATH: /usr/bin
PATH: /bin
PATH: /usr/sbin
PATH: /sbin
PATH: /usr/local/go/bin
PATH: /opt/X11/bin
PATH: /Library/Apple/usr/bin
PATH: /Applications/Wireshark.app/Contents/MacOS
PATH: /Users/koji/work/mruby/mruby/build/host/bin
PATH: /Users/koji/scripts
PATH: /opt/metasploit-framework/bin
PATH: /opt/metasploit-framework/bin
## ----------- ##
## Core tests. ##
## ----------- ##
configure:2873: checking for a BSD-compatible install
configure:2941: result: /usr/bin/install -c
configure:2952: checking whether build environment is sane
configure:3007: result: yes
configure:3158: checking for a thread-safe mkdir -p
configure:3197: result: ./install-sh -c -d
configure:3204: checking for gawk
configure:3234: result: no
configure:3204: checking for mawk
configure:3234: result: no
configure:3204: checking for nawk
configure:3234: result: no
configure:3204: checking for awk
configure:3220: found /usr/bin/awk
configure:3231: result: awk
configure:3242: checking whether make sets $(MAKE)
configure:3264: result: yes
configure:3293: checking whether make supports nested variables
configure:3310: result: yes
configure:3440: checking whether to enable maintainer-specific portions of Makefiles
configure:3449: result: no
configure:3497: checking build system type
configure:3511: result: x86_64-apple-darwin19.6.0
configure:3531: checking host system type
configure:3544: result: x86_64-apple-darwin19.6.0
configure:4167: checking for gcc
configure:4194: result: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
configure:4423: checking for C compiler version
configure:4432: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc --version >&5
Apple clang version 11.0.3 (clang-1103.0.32.62)
Target: x86_64-apple-darwin19.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
configure:4443: $? = 0
configure:4432: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -v >&5
Apple clang version 11.0.3 (clang-1103.0.32.62)
Target: x86_64-apple-darwin19.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
configure:4443: $? = 0
configure:4432: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -V >&5
clang: error: unsupported option '-V -Wno-objc-signed-char-bool-implicit-int-conversion'
clang: error: no input files
configure:4443: $? = 1
configure:4432: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -qversion >&5
clang: error: unknown argument '-qversion'; did you mean '--version'?
clang: error: no input files
configure:4443: $? = 1
configure:4463: checking whether the C compiler works
configure:4485: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc conftest.c >&5
configure:4489: $? = 0
configure:4537: result: yes
configure:4540: checking for C compiler default output file name
configure:4542: result: a.out
configure:4548: checking for suffix of executables
configure:4555: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -o conftest conftest.c >&5
configure:4559: $? = 0
configure:4581: result:
configure:4603: checking whether we are cross compiling
configure:4611: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -o conftest conftest.c >&5
conftest.c:14:10: fatal error: 'stdio.h' file not found
#include <stdio.h>
^~~~~~~~~
1 error generated.
configure:4615: $? = 1
configure:4622: ./conftest
./configure: line 4624: ./conftest: No such file or directory
configure:4626: $? = 127
configure:4633: error: in `/Users/koji/work/PartScratch/spleeterpp/build/_deps/rtff-build/fftw/fftw-3.3.7':
configure:4635: error: cannot run C compiled programs.
If you meant to cross compile, use `--host'.
See `config.log' for more details
## ---------------- ##
## Cache variables. ##
## ---------------- ##
ac_cv_build=x86_64-apple-darwin19.6.0
ac_cv_env_CC_set=set
ac_cv_env_CC_value=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
ac_cv_env_CFLAGS_set=
ac_cv_env_CFLAGS_value=
ac_cv_env_CPPFLAGS_set=
ac_cv_env_CPPFLAGS_value=
ac_cv_env_CPP_set=
ac_cv_env_CPP_value=
ac_cv_env_F77_set=
ac_cv_env_F77_value=
ac_cv_env_FFLAGS_set=
ac_cv_env_FFLAGS_value=
ac_cv_env_LDFLAGS_set=
ac_cv_env_LDFLAGS_value=
ac_cv_env_LIBS_set=
ac_cv_env_LIBS_value=
ac_cv_env_LT_SYS_LIBRARY_PATH_set=
ac_cv_env_LT_SYS_LIBRARY_PATH_value=
ac_cv_env_MPICC_set=
ac_cv_env_MPICC_value=
ac_cv_env_build_alias_set=
ac_cv_env_build_alias_value=
ac_cv_env_host_alias_set=
ac_cv_env_host_alias_value=
ac_cv_env_target_alias_set=
ac_cv_env_target_alias_value=
ac_cv_host=x86_64-apple-darwin19.6.0
ac_cv_path_install='/usr/bin/install -c'
ac_cv_prog_AWK=awk
ac_cv_prog_ac_ct_CC=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
ac_cv_prog_make_make_set=yes
am_cv_make_support_nested_variables=yes
## ----------------- ##
## Output variables. ##
## ----------------- ##
ACLOCAL='${SHELL} /Users/koji/work/PartScratch/spleeterpp/build/_deps/rtff-build/fftw/fftw-3.3.7/missing aclocal-1.15'
ALLOCA=''
ALTIVEC_CFLAGS=''
AMDEPBACKSLASH=''
AMDEP_FALSE=''
AMDEP_TRUE=''
AMTAR='$${TAR-tar}'
AM_BACKSLASH='\'
AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)'
AM_DEFAULT_VERBOSITY='1'
AM_V='$(V)'
AR=''
AS=''
AUTOCONF='${SHELL} /Users/koji/work/PartScratch/spleeterpp/build/_deps/rtff-build/fftw/fftw-3.3.7/missing autoconf'
AUTOHEADER='${SHELL} /Users/koji/work/PartScratch/spleeterpp/build/_deps/rtff-build/fftw/fftw-3.3.7/missing autoheader'
AUTOMAKE='${SHELL} /Users/koji/work/PartScratch/spleeterpp/build/_deps/rtff-build/fftw/fftw-3.3.7/missing automake-1.15'
AVX2_CFLAGS=''
AVX512_CFLAGS=''
AVX_128_FMA_CFLAGS=''
AVX_CFLAGS=''
AWK='awk'
BUILD_DOC_FALSE='#'
BUILD_DOC_TRUE=''
CC='/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc'
CCDEPMODE=''
CFLAGS=''
CHECK_PL_OPTS=''
COMBINED_THREADS_FALSE=''
COMBINED_THREADS_TRUE=''
CPP=''
CPPFLAGS=''
CYGPATH_W='echo'
C_FFTW_R2R_KIND=''
C_MPI_FINT=''
DEFS=''
DEPDIR=''
DLLTOOL=''
DSYMUTIL=''
DUMPBIN=''
ECHO_C='\c'
ECHO_N=''
ECHO_T=''
EGREP=''
EXEEXT=''
F77=''
FFLAGS=''
FGREP=''
FLIBS=''
GREP=''
HAVE_ALTIVEC_FALSE=''
HAVE_ALTIVEC_TRUE='#'
HAVE_AVX2_FALSE=''
HAVE_AVX2_TRUE='#'
HAVE_AVX512_FALSE=''
HAVE_AVX512_TRUE='#'
HAVE_AVX_128_FMA_FALSE=''
HAVE_AVX_128_FMA_TRUE='#'
HAVE_AVX_FALSE=''
HAVE_AVX_TRUE='#'
HAVE_GENERIC_SIMD128_FALSE=''
HAVE_GENERIC_SIMD128_TRUE='#'
HAVE_GENERIC_SIMD256_FALSE=''
HAVE_GENERIC_SIMD256_TRUE='#'
HAVE_KCVI_FALSE=''
HAVE_KCVI_TRUE='#'
HAVE_NEON_FALSE=''
HAVE_NEON_TRUE='#'
HAVE_SSE2_FALSE=''
HAVE_SSE2_TRUE='#'
HAVE_VSX_FALSE=''
HAVE_VSX_TRUE='#'
INDENT=''
INSTALL_DATA='${INSTALL} -m 644'
INSTALL_PROGRAM='${INSTALL}'
INSTALL_SCRIPT='${INSTALL}'
INSTALL_STRIP_PROGRAM='$(install_sh) -c -s'
KCVI_CFLAGS=''
LD=''
LDFLAGS=''
LDOUBLE_FALSE=''
LDOUBLE_TRUE='#'
LIBOBJS=''
LIBQUADMATH=''
LIBS=''
LIBTOOL=''
LIPO=''
LN_S=''
LTLIBOBJS=''
LT_SYS_LIBRARY_PATH=''
MAINT='#'
MAINTAINER_MODE_FALSE=''
MAINTAINER_MODE_TRUE='#'
MAKEINFO='${SHELL} /Users/koji/work/PartScratch/spleeterpp/build/_deps/rtff-build/fftw/fftw-3.3.7/missing makeinfo'
MANIFEST_TOOL=''
MKDIR_P='./install-sh -c -d'
MPICC=''
MPILIBS=''
MPIRUN=''
MPI_FALSE=''
MPI_TRUE=''
NEON_CFLAGS=''
NM=''
NMEDIT=''
OBJDUMP=''
OBJEXT=''
OCAMLBUILD=''
OPENMP_CFLAGS=''
OPENMP_FALSE=''
OPENMP_TRUE=''
OTOOL64=''
OTOOL=''
PACKAGE='fftw'
PACKAGE_BUGREPORT='fftw@fftw.org'
PACKAGE_NAME='fftw'
PACKAGE_STRING='fftw 3.3.7'
PACKAGE_TARNAME='fftw'
PACKAGE_URL=''
PACKAGE_VERSION='3.3.7'
PATH_SEPARATOR=':'
POW_LIB=''
PRECISION='s'
PREC_SUFFIX='f'
PTHREAD_CC=''
PTHREAD_CFLAGS=''
PTHREAD_LIBS=''
QUAD_FALSE=''
QUAD_TRUE='#'
RANLIB=''
SED=''
SET_MAKE=''
SHARED_VERSION_INFO='8:7:5'
SHELL='/bin/sh'
SINGLE_FALSE='#'
SINGLE_TRUE=''
SMP_FALSE=''
SMP_TRUE=''
SSE2_CFLAGS=''
STACK_ALIGN_CFLAGS=''
STRIP=''
THREADLIBS=''
THREADS_FALSE=''
THREADS_TRUE=''
VERSION='3.3.7'
VSX_CFLAGS=''
ac_ct_AR=''
ac_ct_CC='/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc'
ac_ct_DUMPBIN=''
ac_ct_F77=''
acx_pthread_config=''
am__EXEEXT_FALSE=''
am__EXEEXT_TRUE=''
am__fastdepCC_FALSE=''
am__fastdepCC_TRUE=''
am__include=''
am__isrc=''
am__leading_dot='.'
am__nodep=''
am__quote=''
am__tar='$${TAR-tar} chof - "$$tardir"'
am__untar='$${TAR-tar} xf -'
bindir='${exec_prefix}/bin'
build='x86_64-apple-darwin19.6.0'
build_alias=''
build_cpu='x86_64'
build_os='darwin19.6.0'
build_vendor='apple'
datadir='${datarootdir}'
datarootdir='${prefix}/share'
docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
dvidir='${docdir}'
exec_prefix='NONE'
host='x86_64-apple-darwin19.6.0'
host_alias=''
host_cpu='x86_64'
host_os='darwin19.6.0'
host_vendor='apple'
htmldir='${docdir}'
includedir='${prefix}/include'
infodir='${datarootdir}/info'
install_sh='${SHELL} /Users/koji/work/PartScratch/spleeterpp/build/_deps/rtff-build/fftw/fftw-3.3.7/install-sh'
libdir='${exec_prefix}/lib'
libexecdir='${exec_prefix}/libexec'
localedir='${datarootdir}/locale'
localstatedir='${prefix}/var'
mandir='${datarootdir}/man'
mkdir_p='$(MKDIR_P)'
oldincludedir='/usr/include'
pdfdir='${docdir}'
prefix='/Users/koji/work/PartScratch/spleeterpp/build/_deps/rtff-build/fftw/build'
program_transform_name='s,x,x,'
psdir='${docdir}'
runstatedir='${localstatedir}/run'
sbindir='${exec_prefix}/sbin'
sharedstatedir='${prefix}/com'
sysconfdir='${prefix}/etc'
target_alias=''
## ----------- ##
## confdefs.h. ##
## ----------- ##
/* confdefs.h */
#define PACKAGE_NAME "fftw"
#define PACKAGE_TARNAME "fftw"
#define PACKAGE_VERSION "3.3.7"
#define PACKAGE_STRING "fftw 3.3.7"
#define PACKAGE_BUGREPORT "fftw@fftw.org"
#define PACKAGE_URL ""
#define PACKAGE "fftw"
#define VERSION "3.3.7"
#define FFTW_ENABLE_ALLOCA 1
#define FFTW_SINGLE 1
#define BENCHFFT_SINGLE 1
configure: exit 1
I've found a workaround for this (macOS Catalina with Xcode 11.7):
$ cd /path/to/spleeterpp
$ rm -rf build
$ export SDKROOT="$(xcrun --sdk macosx --show-sdk-path)"
$ mkdir build && cd build
$ cmake ..
$ cmake --build .
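For context on why that works: on Catalina, clang stopped finding system headers such as stdio.h unless SDKROOT points at the macOS SDK, which is exactly what FFTW's configure tripped over above. A small check you can run (harmless on non-macOS machines, where xcrun is absent):

```shell
# Verify that clang can locate system headers via the SDK.
# On non-macOS systems xcrun does not exist, so this just reports and exits.
if command -v xcrun >/dev/null 2>&1; then
  SDKROOT="$(xcrun --sdk macosx --show-sdk-path)"
  test -f "$SDKROOT/usr/include/stdio.h" && echo "SDK headers found at $SDKROOT"
else
  echo "xcrun not available (not macOS); nothing to check"
fi
```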
Thanks a lot for the helpful comments.
By the way, I'm stuck on a TensorFlow issue with an M1 Mac running Big Sur v11.2.
Is there any solution for this?
% sudo cmake --build .
Password:
[ 19%] Built target rtff
[ 27%] Built target spleeter_common
[ 37%] Built target artff
[ 45%] Built target spleeter_filter
[ 50%] Built target spleeter
[ 56%] Built target gmock
[ 64%] Built target gmock_main
[ 68%] Built target gtest
[ 72%] Built target gtest_main
[ 88%] Built target wave
[ 92%] Built target test_artff
[ 94%] Linking CXX executable test_spleeter_filter
ld: warning: ignoring file ../../tensorflow/lib/libtensorflow.dylib, building for macOS-arm64 but attempting to link with file built for macOS-x86_64
ld: warning: ignoring file ../../tensorflow/lib/libtensorflow_framework.dylib, building for macOS-arm64 but attempting to link with file built for macOS-x86_64
Undefined symbols for architecture arm64:
"_TF_DeleteBuffer", referenced from:
spleeter::Initialize(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, spleeter::SeparationType, std::__1::error_code&) in libspleeter_common.a(spleeter_common.cc.o)
"_TF_DeleteGraph", referenced from:
spleeter::Initialize(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, spleeter::SeparationType, std::__1::error_code&) in libspleeter_common.a(spleeter_common.cc.o)
"_TF_DeleteSession", referenced from:
spleeter::SessionDeleter(TF_Session*) in libspleeter_common.a(tf_handle.cc.o)
"_TF_DeleteSessionOptions", referenced from:
spleeter::Initialize(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, spleeter::SeparationType, std::__1::error_code&) in libspleeter_common.a(spleeter_common.cc.o)
"_TF_DeleteStatus", referenced from:
spleeter::Initialize(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, spleeter::SeparationType, std::__1::error_code&) in libspleeter_common.a(spleeter_common.cc.o)
spleeter::Filter::AsyncProcessTransformedBlock(std::__1::vector<std::__1::complex<float>*, std::__1::allocator<std::__1::complex<float>*> >, unsigned int) in libspleeter_filter.a(filter.cc.o)
spleeter::SessionDeleter(TF_Session*) in libspleeter_common.a(tf_handle.cc.o)
"_TF_DeleteTensor", referenced from:
spleeter::Filter::AsyncProcessTransformedBlock(std::__1::vector<std::__1::complex<float>*, std::__1::allocator<std::__1::complex<float>*> >, unsigned int) in libspleeter_filter.a(filter.cc.o)
spleeter::TensorAlloc(TF_DataType, std::__1::vector<long long, std::__1::allocator<long long> >) in libspleeter_filter.a(tensor.cc.o)
"_TF_GetCode", referenced from:
spleeter::Initialize(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, spleeter::SeparationType, std::__1::error_code&) in libspleeter_common.a(spleeter_common.cc.o)
spleeter::Filter::AsyncProcessTransformedBlock(std::__1::vector<std::__1::complex<float>*, std::__1::allocator<std::__1::complex<float>*> >, unsigned int) in libspleeter_filter.a(filter.cc.o)
spleeter::SessionDeleter(TF_Session*) in libspleeter_common.a(tf_handle.cc.o)
"_TF_GraphOperationByName", referenced from:
spleeter::Filter::AsyncProcessTransformedBlock(std::__1::vector<std::__1::complex<float>*, std::__1::allocator<std::__1::complex<float>*> >, unsigned int) in libspleeter_filter.a(filter.cc.o)
"_TF_LoadSessionFromSavedModel", referenced from:
spleeter::Initialize(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, spleeter::SeparationType, std::__1::error_code&) in libspleeter_common.a(spleeter_common.cc.o)
"_TF_NewBuffer", referenced from:
spleeter::Initialize(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, spleeter::SeparationType, std::__1::error_code&) in libspleeter_common.a(spleeter_common.cc.o)
"_TF_NewGraph", referenced from:
spleeter::Initialize(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, spleeter::SeparationType, std::__1::error_code&) in libspleeter_common.a(spleeter_common.cc.o)
"_TF_NewSessionOptions", referenced from:
spleeter::Initialize(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, spleeter::SeparationType, std::__1::error_code&) in libspleeter_common.a(spleeter_common.cc.o)
"_TF_NewStatus", referenced from:
spleeter::Initialize(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, spleeter::SeparationType, std::__1::error_code&) in libspleeter_common.a(spleeter_common.cc.o)
spleeter::Filter::AsyncProcessTransformedBlock(std::__1::vector<std::__1::complex<float>*, std::__1::allocator<std::__1::complex<float>*> >, unsigned int) in libspleeter_filter.a(filter.cc.o)
spleeter::SessionDeleter(TF_Session*) in libspleeter_common.a(tf_handle.cc.o)
"_TF_NewTensor", referenced from:
spleeter::TensorAlloc(TF_DataType, std::__1::vector<long long, std::__1::allocator<long long> >) in libspleeter_filter.a(tensor.cc.o)
"_TF_SessionRun", referenced from:
spleeter::Filter::AsyncProcessTransformedBlock(std::__1::vector<std::__1::complex<float>*, std::__1::allocator<std::__1::complex<float>*> >, unsigned int) in libspleeter_filter.a(filter.cc.o)
"_TF_TensorData", referenced from:
void spleeter::Copy<float>(TF_Tensor const*, std::__1::vector<long long, std::__1::allocator<long long> >, std::__1::shared_ptr<spleeter::TFHandle<TF_Tensor> >) in libspleeter_filter.a(filter.cc.o)
void spleeter::Copy<std::__1::complex<float> >(TF_Tensor const*, std::__1::vector<long long, std::__1::allocator<long long> >, std::__1::shared_ptr<spleeter::TFHandle<TF_Tensor> >) in libspleeter_filter.a(filter.cc.o)
spleeter::internal::Adapter<std::__1::complex<float> >::operator()(unsigned long, unsigned long, unsigned long) in libspleeter_filter.a(filter.cc.o)
spleeter::internal::Adapter<float>::operator()(unsigned long, unsigned long, unsigned long) in libspleeter_filter.a(filter.cc.o)
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [test/spleeter_filter/test_spleeter_filter] Error 1
make[1]: *** [test/spleeter_filter/CMakeFiles/test_spleeter_filter.dir/all] Error 2
make: *** [all] Error 2
Maybe an M1 Mac (arm64) build of TensorFlow is required.
Someone should help.
Hi @ecb2c, I am facing the same issue. I have an M1 with Monterey.
Did you manage to solve that?
| gharchive/issue | 2020-06-11T10:50:28 | 2025-04-01T06:44:23.712805 | {
"authors": [
"aidv",
"carmelofascella",
"ecb2c",
"kyab"
],
"repo": "gvne/spleeterpp",
"url": "https://github.com/gvne/spleeterpp/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Does DX11 or DX12 work with the addons for anyone here?
Addons don't work with DX11 or DX12; if I activate them, not even one of the addons works. Does anyone else have the same issue, or am I the only one?
sry for my bad english
If you check the main code page as well as previously reported issue pages, DX11 and DX12 are not supported. However, there is a patch for the DX12 plugin. All the info is on the Code page under approved addons: https://github.com/gw2-addon-loader/Approved-Addons
I can't install the DX12 addon; if I try, GW2 will not launch and throws a missing DX9 error. I was able to launch with it for the first two days of use though, and AFAIK nothing was updated. I'm not sure what's causing this issue. Possibly some settings file corruption.
Radial Menu does not work for me with dx11 activated.
GW2-UOAOM looks like it downloads d3d9.dll and puts it in 'gw2 install dir/bin64/'. Seems there is still no option to handle the DLL as DX11. This would be a feature request.
From arcdps site:
directx9: save d3d9.dll into 'gw2 install dir/bin64/' while the game is not running.
directx11: save d3d9.dll into 'gw2 install dir/' and rename to d3d11.dll while the game is not running.
Status on this? Doesn't work for me as of today.
| gharchive/issue | 2021-10-09T17:42:42 | 2025-04-01T06:44:23.717963 | {
"authors": [
"BarelyRoss1",
"Kiarron",
"PaterFrog",
"Yushi91",
"pjllorens",
"wolfgangbures"
],
"repo": "gw2-addon-loader/GW2-Addon-Manager",
"url": "https://github.com/gw2-addon-loader/GW2-Addon-Manager/issues/159",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
514509785 | Relationships in JSON-LD?
Linkage according to an ontology may be better represented with JSON-LD, rather than with the ad-hoc Relationship class.
e.g.
{
"@context": "https://.../amorphys-spatial",
"@id": { "$ref": "../../components/fridge" },
"on-the-left-side-of": { "$ref": "../../components/KeisukesDesk" }
}
The type property of Entity could also correspond to @type in JSON-LD.
That would make it clearer how to treat this property.
At least the Relationship class is now based on JSON-LD, as of this commit.
| gharchive/issue | 2019-10-30T09:21:51 | 2025-04-01T06:44:23.727382 | {
"authors": [
"gwappa"
],
"repo": "gwappa/amorphys",
"url": "https://github.com/gwappa/amorphys/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Errors at volume when running h2load against h2o
I've built h2o master branch and configured it for http/3 and all is well. I can run the lsquic http_client program against it in volume and everything works just fine. However, if I then build h2load (quic branch) and run more than one request against it, I get failures like this:
I01534686 0x82c30c0b3b135f6e pkt could not decrypt packet payload
I01534883 0x82c30c0b3b135f6e con recv packet len=1018
I01534883 0x82c30c0b3b135f6e pkt rx pkn=3574480305 dcid=0x008bbde81b18b9ed scid=0xc7ef90bcf602346ede type=Initial(0x00) len=145 k=0
I01534883 0x82c30c0b3b135f6e pkt packet has incorrect reserved bits
I01534883 0x82c30c0b3b135f6e pkt could not decrypt packet payload
I01535279 0x82c30c0b3b135f6e con recv packet len=449
I01535279 0x82c30c0b3b135f6e pkt rx pkn=2371164212 dcid=0x008bbde81b18b9ed scid=0xc7ef90bcf602346ede type=Initial(0x00) len=145 k=0
I01535279 0x82c30c0b3b135f6e pkt packet has incorrect reserved bits
I01535279 0x82c30c0b3b135f6e pkt could not decrypt packet payload
I01536071 0x82c30c0b3b135f6e con recv packet len=893
I01536071 0x82c30c0b3b135f6e pkt rx pkn=214 dcid=0x008bbde81b18b9ed scid=0xc7ef90bcf602346ede type=Initial(0x00) len=20 k=0
I can't tell if it's an h2o problem or an h2load problem, but I'm opening issues with both. Any questions, feel free to let me know:
Bob Perper
rperper@litespeedtech.com
Thanks so much.
Thank you for reporting the issue. FTR, discussion is happing on https://github.com/nghttp2/nghttp2/issues/1469.
| gharchive/issue | 2020-05-19T15:22:58 | 2025-04-01T06:44:23.795889 | {
"authors": [
"kazuho",
"rperper"
],
"repo": "h2o/quicly",
"url": "https://github.com/h2o/quicly/issues/345",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1094270458 | PUBDEV-7858: Expose distribution parameter in AutoML
https://h2oai.atlassian.net/browse/PUBDEV-7858
I noticed that the AutoML form renders differently - it is likely caused by the order of responses - BuildControl, AutoMLInput etc.
| gharchive/pull-request | 2022-01-05T11:49:50 | 2025-04-01T06:44:23.800650 | {
"authors": [
"tomasfryda"
],
"repo": "h2oai/h2o-flow",
"url": "https://github.com/h2oai/h2o-flow/pull/179",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
892533545 | Calling characteristic.indicate() inside server callback makes the callback never return.
After a day of troubleshooting some code, if you do something like:
void MyCallbacks::onWrite(BLECharacteristic *pCharacteristic) {
    std::string rxValue = pCharacteristic->getValue();
    uint8_t returnValue[3] = {0x80, (uint8_t)rxValue[0], 0x02};
    pCharacteristic->setValue(returnValue, 3);
    pCharacteristic->indicate();
}
pCharacteristic->indicate() never returns and the callback is never invoked again.
Yep, I found this issue a little while ago also. I have the fix prepared for it in the 2.0 branch but has to be backported and I haven't felt like dealing with the conflicts yet 😄. Sorry about that, I didn't think it would be an issue too soon since few people use indications.
That's okay...Only a day of my life :)
Good thing those are cheap.
If it's any consolation, the backport for this fix is going to take all of my Saturday night. It might be my fault technically, but I'm still blaming you for using indications 😄.
Kidding of course, it's a real pain when you get ahead of yourself in git though. PR for the fix should be up soon.
Wow! That was quick!
Might be a bit before I can test this. In school all week (recurrent training) but I'll try to fit it in sometime.
Lol, code was already done, just needed to refactor a few things.
Not a big rush, no other issues involving this yet. It's somewhat tested and works fine in the other branch but this is not so there may be a bug somewhere.
| gharchive/issue | 2021-05-15T20:32:46 | 2025-04-01T06:44:23.807611 | {
"authors": [
"doudar",
"h2zero"
],
"repo": "h2zero/NimBLE-Arduino",
"url": "https://github.com/h2zero/NimBLE-Arduino/issues/238",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2049290 | Taxonomy move_term tests fail for mysql, but not sqlite
Test results for move_term vary by database engine. The following two errors appear for mysql, but not sqlite:
Fail: When $before is true the Term should be inserted before $target_term
/var/www/habari/tests/test_taxonomy.php:298
Fail: Without arguments the Term should be moved all the way to the right
/var/www/habari/tests/test_taxonomy.php:304
My schema shows the terms table as UNSIGNED, as does the schema definition. This makes it impossible to place term IDs into negative, temporary space as required when moving.
Confirmed, changing the mptt_left and mptt_right fields to be signed allows all test_taxonomy tests (including the most recent changes) to pass. Probably need to write a versioned migration, because I doubt that db_update will deal properly with the sign change.
Whaddaya know? db_update will change the sign of the field.
Just for reference, habari/system commit 35735ae90 is related.
| gharchive/issue | 2011-10-25T20:14:20 | 2025-04-01T06:44:23.833752 | {
"authors": [
"chrismeller",
"mikelietz",
"ringmaster"
],
"repo": "habari/habari",
"url": "https://github.com/habari/habari/issues/229",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
118297887 | WIP: Changes to cart and favorite adding
Adding to the cart and favoriting now use AJAX instead of a form.
@cathychen95 Now that we're supporting AJAX requests for adding to the cart and updating the quantity from the search page, I'm going to switch quantity changing and removal from cart in the cart page to use that as well.
| gharchive/pull-request | 2015-11-22T23:25:36 | 2025-04-01T06:44:23.863904 | {
"authors": [
"jondubin"
],
"repo": "hack4impact/reading-terminal-market",
"url": "https://github.com/hack4impact/reading-terminal-market/pull/21",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2464726264 | Update RainWeatherSimulator-EthanJohn
[x] I have read the steps to getting a blot
[x] I am submitting art that...
[x] is algorithmically generated (will change each time the program is run)
[x] is drawable on a blot (fits in the work area & doesn't overlap too much)
[x] is original (not copied from somewhere else)
[x] doesn't call Math.random() (See the documentation on randomness)
[x] is drawable on a physical machine (doesn't have lines overlap more than 5 times)
[ ] Optional, if you used a tutorial or based your art on something else, please include the link here:
[ ] Optional, if you remixed this from something else, mention it here:
Hey there! I am closing this PR due to inactivity, please create a new PR once you have updated your art!
| gharchive/pull-request | 2024-08-14T01:51:33 | 2025-04-01T06:44:23.867658 | {
"authors": [
"Dongathan-Jong",
"Somebud0180"
],
"repo": "hackclub/blot",
"url": "https://github.com/hackclub/blot/pull/825",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2392941897 | Shubh Mehra game
Your checklist for this pull request
Author name
Shubh Mehra
About your game
What is your game about?
This game is basically an tank war in which we have to destroy and safe from enemy tanks
How do you play your game?
we can play this game on both sprig and on the site.
Press W to shoot the enemy, AD for moving and S for reset
Code
Check off the items that are true.
[x] The game was made using the Sprig editor.
[x] The game is placed in the in the /games directory.
[x] The code is significantly different from all other games in the Sprig gallery (except for games labeled "demo").
[x] The game runs without errors.
[x] The name of the file/game contains only alphanumeric characters, -s, or _s.
[x] The game name is not the same as the others from gallery
Image (If an image is used)
[x] The image is in the /games/img directory.
[x] The name of the image matches the name of your file.
Thanks for your PR!
Looked at this today, still waiting for author to respond.
Hey Graham, sorry for the late reply.
Sure I'll make changes to the game as suggested and submit it again soon
Looked at this today, still waiting for the author to respond.
Hey @graham ,
Can you please close this pull request? I'll make another when I'm ready with the game.
Thanks
@grymmy -> (pretty sure this is a mistake)
Closing this PR per the author's request.
| gharchive/pull-request | 2024-07-05T17:29:42 | 2025-04-01T06:44:23.884921 | {
"authors": [
"Mehra-Shubh",
"graham",
"grymmy"
],
"repo": "hackclub/sprig",
"url": "https://github.com/hackclub/sprig/pull/1901",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |