| added (string) | created (timestamp[us]) | id (string) | metadata (dict) | source (string, 2 classes) | text (string) |
|---|---|---|---|---|---|
2025-04-01T04:35:56.241977
| 2024-12-05T09:11:17
|
2719814830
|
{
"authors": [
"captainbrosset"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12021",
"repo": "web-platform-dx/web-features-explorer",
"url": "https://github.com/web-platform-dx/web-features-explorer/issues/44"
}
|
gharchive/issue
|
Improve link text for MDN docs with anchors
Some of the links from features to MDN that we define in mdnDocsOverrides.json have anchors in them. When that happens, we should make sure to capture the anchor in the link text, so users know where they're going. Right now, the link text shows only the parent page's title.
For example, when linking to the linear easing-function section, we use the Web/CSS/easing-function#linear slug. The link text is currently "See <easing-function>". It would be better if it were something like "See linear, on the <easing-function> page".
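A sketch of the idea in TypeScript (the function name and exact wording are illustrative, not the project's actual code):

```ts
// Derive link text from an MDN slug that may carry an anchor.
function linkTextFor(slug: string, pageTitle: string): string {
  const anchor = slug.split('#')[1];
  return anchor
    ? `See ${anchor}, on the ${pageTitle} page`
    : `See ${pageTitle}`;
}

// linkTextFor('Web/CSS/easing-function#linear', '<easing-function>')
// => 'See linear, on the <easing-function> page'
```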
Done in 020c3ca02dba96cd6db2c45dc1d77424c41ef1c4.
|
2025-04-01T04:35:56.264883
| 2021-02-23T02:32:28
|
814041939
|
{
"authors": [
"RichardLindhout",
"nandorojo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12022",
"repo": "web-ridge/react-native-paper-dates",
"url": "https://github.com/web-ridge/react-native-paper-dates/issues/48"
}
|
gharchive/issue
|
Disable/blur old dates
It would be nice to have an option that doesn't allow you to click dates before today (i.e. a minDate: new Date()). It could then reduce the opacity of the disabled dates, so users know not to click them.
I think a good strategy could be to use the validRange API from FullCalendar: https://fullcalendar.io/docs/validRange
const validRange = {
start: '2020-05-01', // optional
end: '2020-08-01' // optional
}
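For context, a sketch of how such a prop might look from the consumer side; the validRange shape here mirrors the FullCalendar idea and is an assumption, not the library's confirmed API at the time:

```tsx
import * as React from 'react';
import { DatePickerModal } from 'react-native-paper-dates';

// A minimal sketch: dates before today would be disabled/dimmed.
export function FutureOnlyPicker() {
  const [visible, setVisible] = React.useState(true);
  return (
    <DatePickerModal
      locale="en"
      mode="single"
      visible={visible}
      onDismiss={() => setVisible(false)}
      onConfirm={() => setVisible(false)}
      validRange={{ startDate: new Date() }} // assumed prop shape
    />
  );
}
```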
Opened a PR to close this at #49.
CC @RichardLindhout
Thanks!
Ok, perfect! Thank you a lot. I have some feedback, but it's looking great so far.
|
2025-04-01T04:35:56.267755
| 2019-12-05T13:51:31
|
533364437
|
{
"authors": [
"rmaucher",
"tchegito",
"tomjenkinson"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12023",
"repo": "web-servers/narayana-tomcat",
"url": "https://github.com/web-servers/narayana-tomcat/issues/32"
}
|
gharchive/issue
|
Impossible with specific JNDI configuration
I can't use TransactionalDataSourceFactory if the transaction manager is defined in a parent JNDI node relative to the database.
To be precise, here are my two JNDI names for the TM and the database:
java:/comp/env/TransactionManager
java:/comp/env/jdbc/MyDatasource
Actually, when TransactionalDataSourceFactory looks for the TM (see code below), it tries to search from the JNDI node "java:/comp/env/jdbc". And as far as I know, it's impossible to go back one level in the JNDI tree from a specific node.
(from TransactionalDataSourceFactory#getObjectInstance)
final TransactionManager transactionManager = (TransactionManager) getReferenceObject(ref, context, PROP_TRANSACTION_MANAGER);
It seems that a prerequisite exists, stating that the TM and the datasource are in the same JNDI namespace. Or I missed something.
I am not sure of the answer to this, it could be a restriction of JNDI in Tomcat itself. @rmaucher are you aware of such a limitation?
Yes, this is the way it is in the code.
I've noticed that, but the question is: is that a feature or a bug? It seems awkward to force users into a specific JNDI configuration without mentioning it as a constraint.
|
2025-04-01T04:35:56.283033
| 2015-01-06T04:27:50
|
53479890
|
{
"authors": [
"2metres",
"davyzhang",
"simpsondigital",
"tsawitzki",
"vigo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12024",
"repo": "webBoxio/atom-color",
"url": "https://github.com/webBoxio/atom-color/issues/7"
}
|
gharchive/issue
|
Sass color functions
I prefer this one over some of the alternatives but SCSS/SASS color functions (rgba, saturate, darken, lighten, etc) would be awesome to see too.
great idea! maybe @f wants to give a hand to this?
+1 on this. I actually only have three "hard" color declarations in my recent project's SASS structure, but I use these three colors "remixed" over the whole project. Unfortunately I currently only get visual feedback in my colors.scss file, where the three hard-coded declarations are.
+1 too! Was about to open issue requesting this until I came across this one. Love the functionality, but I use SASS color functions so much, it'd be hugely beneficial to see them rendered out.
+1, thanks so much for this
|
2025-04-01T04:35:56.285182
| 2022-04-26T08:25:38
|
1215591513
|
{
"authors": [
"Arthur-Acha",
"avaer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12025",
"repo": "webaverse/app",
"url": "https://github.com/webaverse/app/pull/2897"
}
|
gharchive/pull-request
|
solve sfx issues
-10dB on run files
-10dB on walk files
-10dB on land files
normalized OOT sounds
reworked SonicBoomTrail
recompiled all the sounds
In my testing, this PR completely breaks all sounds, including walking.
|
2025-04-01T04:35:56.288823
| 2023-05-13T12:58:16
|
1708594739
|
{
"authors": [
"drewstone",
"dutterbutter",
"upil69"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12026",
"repo": "webb-tools/webb-dapp",
"url": "https://github.com/webb-tools/webb-dapp/issues/1208"
}
|
gharchive/issue
|
[UI]
If I bridge, the website always force closes.
Can you take a screen grab/video with steps to reproduce?
Closing due to lack of information. Please open a new bug report with more details.
|
2025-04-01T04:35:56.418726
| 2023-07-26T15:18:34
|
1822643181
|
{
"authors": [
"Rupeshhh1",
"scape76"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12027",
"repo": "webdevcody/code-racer",
"url": "https://github.com/webdevcody/code-racer/pull/537"
}
|
gharchive/pull-request
|
Update snippets.ts
By using the Snippet interface directly in the array, we eliminate the need for redundant type definitions. This ensures that the code remains maintainable and less prone to errors, especially if the Snippet type is updated in the future. Additionally, we no longer have to maintain an array of nested properties for code and language, which results in a more efficient data structure.
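A minimal sketch of the pattern being described (the Snippet field names here are assumptions, not the repo's exact shape):

```ts
interface Snippet {
  code: string;
  language: string;
}

// Typing the array with the interface directly, instead of repeating
// inline type annotations per element.
const snippets: Snippet[] = [
  { code: "console.log('hello');", language: "typescript" },
];
```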
title: Issue | Loading time seems slow, I tried to make it fast
Discord Username: @rupeshhh
What type of PR is this? (select all that apply)
[ ] Feature
[ ] Bug Fix
[ ] Breaking Change
[ ] Code Refactor
[ ] Documentation Update
I made the code simpler and reduced the complexity of the codebase.
Related Tickets & Documents
Related Issue # loading time
Closes # slow time
QA Instructions, Screenshots, Recordings
UI accessibility concerns? Yes, the loading time.
Added/updated tests?
[ ] yes
[ ] no, because they aren't needed
[ ] no, because I need help
[optional] Are there any post deployment tasks we need to perform?
[optional] What gif best describes this PR or how it makes you feel?
hey, I don't really understand the statement that "we eliminate the need for redundant type definitions"; we don't actually have redundant type definitions in our case. Plus, we tied the types to their read data types, so for example if the code or language type is redefined, we won't need to go into our snippets.ts file again and change the type
closing this pr for now
|
2025-04-01T04:35:56.463128
| 2021-08-05T19:28:29
|
962131066
|
{
"authors": [
"johnhamelink",
"mpdude"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12028",
"repo": "webfactory/ssh-agent",
"url": "https://github.com/webfactory/ssh-agent/pull/90"
}
|
gharchive/pull-request
|
Docs on how to integrate with build-push-action
This change adds some extra clarification to the documentation to show how to set up the docker/build-push-action step with this action. This is very helpful when using buildkit's RUN --mount=type=ssh. We found this a little confusing, and the GH issues we found on the matter didn't help!
Thank you @johnhamelink !
|
2025-04-01T04:35:56.476246
| 2024-02-29T22:41:47
|
2162221212
|
{
"authors": [
"cmhhelgeson",
"greggman",
"kainino0x"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12029",
"repo": "webgpu/webgpu-samples",
"url": "https://github.com/webgpu/webgpu-samples/pull/360"
}
|
gharchive/pull-request
|
Add descriptions to categories on hover
Fix overflow bug in panel element
Add brief descriptions of sample categories on hover
https://github.com/webgpu/webgpu-samples/assets/62450112/1f752779-0dc8-4a4a-a27d-d01fdd8b2891
FYI we are planning to do #355 in some way and I think @greggman has started looking at it. We might want to hold off on more changes after this PR until that's done.
More changes to this branch specifically, or more changes overall?
How long do you think that will be?
@greggman would have to answer that
I am working on #355. Hopefully I'll have something to review today or tomorrow
|
2025-04-01T04:35:56.567069
| 2023-10-03T07:56:01
|
1923538526
|
{
"authors": [
"alexander-akait",
"g-rusev"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12030",
"repo": "webpack-contrib/postcss-loader",
"url": "https://github.com/webpack-contrib/postcss-loader/issues/669"
}
|
gharchive/issue
|
ModuleBuildError: Module build failed (from ./node_modules/postcss-loader/dist/cjs.js):
Bug report
Actual Behavior
Minimal test repository:
https://github.com/g-rusev/postcss-error
Environment:
OS: Windows 10 Pro, 64 bit, 21H2
NodeJS: version 20.4.0
When using postcss-loader version 7.3.3, the build fails with the error:
ModuleBuildError: Module build failed (from ./node_modules/postcss-loader/dist/cjs.js):
When the version in the package.json is set to:
"postcss-loader": "7.0.2"
build succeeds.
To summarize:
The same code base in the test repo:
when build with "postcss-loader": "7.0.2" - succeeds
when build with "postcss-loader": "7.3.3" - fails
Expected Behavior
It should build with the latest version of postcss-loader with no errors.
How Do We Reproduce?
Download the test repo.
Run npm i to install dependencies.
Run npm run dev to start the build. The build will fail with errors.
Please paste the results of npx webpack info here, and mention other relevant information
Sounds like cosmiconfig is using private methods... Please check your Node.js version twice, because 20.4.0 supports them. I can't reproduce the problem locally using your repo. Try running node --version; maybe you have two Node.js versions (for example, some developers use nvm to switch between different Node.js versions). Anyway, we can't fix it here, sorry, but feel free to give feedback.
|
2025-04-01T04:35:56.575439
| 2018-06-02T19:43:06
|
328771716
|
{
"authors": [
"BernieSumption",
"shellscape"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12031",
"repo": "webpack-contrib/webpack-serve",
"url": "https://github.com/webpack-contrib/webpack-serve/issues/159"
}
|
gharchive/issue
|
webpack-serve serves files from src folder, overriding generated files
Operating System: Mac
Node Version: v8.10.0
NPM Version: yarn 1.7.0
webpack Version: 4.10.2
webpack-serve Version: 1.0.2
This issue is for a:
[x] bug
[ ] feature request
[ ] modification request
Code
Minimal repro at https://github.com/BernieSumption/webpack-serve-bug-demo
The issue
I discovered this while trying to figure out why html-webpack-plugin worked in production builds but not with webpack-serve. webpack-serve serves files from the source directory. This breaks the development experience when there are files in the source directory that have the same name as files in the output directory. Also, serving these files at all allows developers to write code that works in development with webpack-serve but fails in a production build as it directly refers to files in the source directory that aren't copied to the output directory.
Expected Behavior
In general, webpack-serve should serve exactly the same content as if you did a webpack build then served the output folder using a static file server.
http://localhost:8080/ should serve the index.html file created by html-webpack-plugin
http://localhost:8080/bundle.js (or whatever file name is specified in webpackConfig.output.bundle) should serve the webpack bundle, regardless of whether there is a src/bundle.js.
http://localhost:8080/index.js should give a 404
Actual Behavior
http://localhost:8080/ serves src/index.html - the template used by html-webpack-plugin, not the output of the plugin
If you create a file src/bundle.js, this file overrides the webpack bundle
http://localhost:8080/index.js serves src/index.js.
How Do We Reproduce?
Clone https://github.com/BernieSumption/webpack-serve-bug-demo then yarn && yarn start to launch webpack-serve. yarn build to do a webpack build to the dist folder.
See #114. This isn't a bug. This is a symptom of your environment. You'll need to configure webpack-serve correctly for your environment.
|
2025-04-01T04:35:56.579838
| 2018-09-25T15:17:43
|
363631866
|
{
"authors": [
"StefanSchoof",
"montogeek"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12032",
"repo": "webpack/webpack.js.org",
"url": "https://github.com/webpack/webpack.js.org/issues/2546"
}
|
gharchive/issue
|
html file getting started in the dist folder
I read the Getting Started guide and I struggled with Creating a Bundle. The text says the code should go into /src and /dist is the output. But if I understand correctly, the listing below says I should put the index.html into the /dist folder. These pieces of information contradict each other.
I think the html file is put into /dist to keep it simple. But I think the text should mention that a real project does this differently, and maybe add a link to documentation on how to handle html in webpack.
Please send a PR.
|
2025-04-01T04:35:56.595263
| 2022-04-06T10:48:22
|
1194428566
|
{
"authors": [
"DavidDudson",
"JTBrinkmann",
"Quovadisqc",
"alexander-akait",
"cdeutsch",
"stefanKuijers",
"vankop",
"yoyo837"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12033",
"repo": "webpack/webpack",
"url": "https://github.com/webpack/webpack/issues/15633"
}
|
gharchive/issue
|
in-operator broken; "property" in object ALWAYS gets inlined as true
Bug report
What is the current behavior?
Webpack v5 incorrectly evaluates & bundles "property" in object as true when object is imported from a different file, even if the property is not present in the object (e.g. "doesNotExist" in {}).
If the current behavior is a bug, please provide the steps to reproduce.
create the following files (e.g. in an empty src folder):
// testObject.js
export const testObject = {}
// main.js
import { testObject } from './testObject'
console.log('property' in testObject ? 'buggy' : 'not buggy')
With or without a webpack configuration, run webpack (e.g. npx webpack-cli).
Inspect the emitted bundle. (below: without a configuration file)
// main.js (NODE_ENV=production)
!function(){"use strict";console.log("buggy")}();
// main.js (NODE_ENV=development)
... console.log(true ? 'buggy' : 'not buggy') ...
What is the expected behavior?
The expression 'property' in testObject should evaluate to false, or be evaluated at runtime.
For comparison, with webpack v4 or when using terser directly, I get
console.log("property"in{}?"buggy":"not buggy")}
Workarounds:
The issue does not appear / can be worked around if testObject is defined within the same file or if it's indirectly referenced. The following three examples work correctly:
import { testObject } from './testObject'
console.log('property' in (0, testObject) ? 'buggy' : 'not buggy') // => not buggy
const indirectReference = testObject
console.log('property' in indirectReference ? 'buggy' : 'not buggy') // => not buggy
const testObject2 = {}
console.log('property' in testObject2 ? 'buggy' : 'not buggy') // => not buggy
Other relevant information:
webpack version: 5.71.0
Node.js version: v16.14.0
Operating System: Windows 10
This appears to be the root of https://github.com/chakra-ui/chakra-ui/issues/5804 and https://github.com/chakra-ui/chakra-ui/issues/5812, where currently downgrading to webpack v4 (by downgrading react-scripts) is suggested as a workaround.
/cc @vankop I think critical
yeah, please use <EMAIL_ADDRESS> for now..
I can verify, we had to step back to 5.70 as well
This issue also breaks react-redux.
Might help someone that also have the same issue as I had but this also breaks
Socket-io by making connections throw timeouts because of this
this function
@Quovadisqc We had to roll back to webpack 5.70 due to the use of socket.io as well.
Is this fixed for people in v5.72.0?
I'm still stuck on v5.70.0, but it's possible it's not the same issue.
This issue also breaks react-redux.
I'm just as confused, 5.72.0 still breaks react-redux.
Double decorators.
@SomeDecorators1()
@SomeDecorators2()
export default class Class1 {
}
@yoyo837 could you reduce your reproducible repo?
I'll try it.
mini-repo.zip
yarn/npm install
yarn/npm run dev
I got Could not find "store" in the context of "Connect(TestComp)". Either wrap the root component in a <Provider>, or pass a custom React context provider to <Provider> and the corresponding React context consumer to Connect(TestComp) in connect options.
I found the solution as below:
Downgrade webpack to 5.70.0.
remove Drawer handler props in src\routes\App.js. (Don't know the reason at the moment)
remove the decorator @connect() in src\routes\App.js - class TestComp. (Do not use redux)
@vankop Looks like still valid
I got Could not find "store" in the context of "Connect(TestComp)". Either wrap the root component in a <Provider>, or pass a custom React context provider to <Provider> and the corresponding React context consumer to Connect(TestComp) in connect options.
I'm getting a similar error in <EMAIL_ADDRESS>
Store 'myStore' is not available! Make sure it is provided by some Provider
When I have a nested <Provider>, the props provided by the parent are missing.
Was thinking this could be a different issue than in
|
2025-04-01T04:35:56.606688
| 2017-07-11T17:24:57
|
242123795
|
{
"authors": [
"ahmehri",
"haines",
"sokra",
"swernerx"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12034",
"repo": "webpack/webpack",
"url": "https://github.com/webpack/webpack/issues/5262"
}
|
gharchive/issue
|
"webpack#ajv" not installed after update to webpack 3.1.0
Do you want to request a feature or report a bug?
Report a bug.
What is the current behavior?
yarn check results in the following error:
error "ajv" is wrong version: expected "4.11.7", got "5.2.1"
error "webpack#ajv" not installed
error Found 2 errors.
If the current behavior is a bug, please provide the steps to reproduce.
Update to webpack 3.1.0.
Run yarn check.
What is the expected behavior?
No errors.
If this is a feature request, what is motivation or use case for changing the behavior?
Please mention other relevant information such as the browser version, Node.js version, webpack version and Operating System.
Node.js version: 8.1.0
wepack version: 3.1.0
How should webpack influence this? Isn't this a yarn bug if it incorrectly installs ajv?
@sokra I wasn't sure if this is related to yarn or not, since this error didn't occur with webpack < 3.0.0. I'm not sure, but I think this error is caused by yarn not installing ajv, not the opposite.
It's probably related to the package manager. I have seen the same issue... even with ajv by using <EMAIL_ADDRESS>
The problem here is not really a yarn bug, but rather that webpack's dependency on <EMAIL_ADDRESS> is incompatible with one of webpack's transitive dependencies, which depends on <EMAIL_ADDRESS>. The only way for webpack to attempt to solve this issue would be to downgrade to <EMAIL_ADDRESS>, but I don't think that's a very good option (other packages, e.g. <EMAIL_ADDRESS>, will pull in <EMAIL_ADDRESS> anyway, so this will crop up elsewhere).
The resolved dependency tree for my package has:
<EMAIL_ADDRESS> -> <EMAIL_ADDRESS> -> <EMAIL_ADDRESS> -> <EMAIL_ADDRESS> -> <EMAIL_ADDRESS> -> <EMAIL_ADDRESS> -> <EMAIL_ADDRESS> -> <EMAIL_ADDRESS>
Working backwards,
<EMAIL_ADDRESS> bumps to <EMAIL_ADDRESS>; request/request@baf9c1f bumps to <EMAIL_ADDRESS>; <EMAIL_ADDRESS> depends on <EMAIL_ADDRESS>.
So all that needs to happen is a minor or patch release of request and all will be well again*
* assuming no other dependencies in your package pull in <EMAIL_ADDRESS>; e.g. for me <EMAIL_ADDRESS> is still going to be a problem
The problem here is not really a yarn bug, but rather that webpack's dependency on <EMAIL_ADDRESS> is incompatible with one of webpack's transitive dependencies, which depends on <EMAIL_ADDRESS>.
The package manager should be able to solve this. I.e., it could create a tree like this:
node_modules
  ajv@5
  webpack
  watchpack
  chokidar
  fsevents
  node-pre-gyp
  request
  har-validator
    node_modules
      ajv@4
Ah, yeah, you're quite right, yarn does actually do that: <EMAIL_ADDRESS> gets installed to node_modules/har-validator/node_modules/ajv. My bad.
I guess yarn check is doing the wrong thing here then?
I've raised this with yarn, installing webpack on its own does not cause this error, it seems to require another package to have a transitive dependency on ajv@4; weird!
Great
|
2025-04-01T04:35:56.608256
| 2014-11-12T18:40:41
|
48545703
|
{
"authors": [
"richardmarais",
"zoomclub"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12035",
"repo": "webpack/webpack",
"url": "https://github.com/webpack/webpack/issues/581"
}
|
gharchive/issue
|
Online Support Forum
Just wondering if there is any online forum or discussion group for Webpack? I'm thinking that one of those google groups would work, like the one React.js has.
Please can someone look at the following and offer some advice?
http://stackoverflow.com/questions/39453145/meteor-collections
|
2025-04-01T04:35:56.615667
| 2018-05-08T15:11:23
|
321228811
|
{
"authors": [
"Janpot",
"odedniv",
"ooflorent",
"shellscape"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12036",
"repo": "webpack/webpack",
"url": "https://github.com/webpack/webpack/issues/7238"
}
|
gharchive/issue
|
requiring a missing module throws a different error message than native
Bug report
The pattern thrown by exceptions is important when attempting to ensure we catch the exception we expect to catch.
What is the current behavior?
require('missing-module');
throws: Cannot find module "missing-module". (with double quotes and period)
If the current behavior is a bug, please provide the steps to reproduce.
require('missing-module');
What is the expected behavior?
Should throw: Cannot find module 'missing-module' (with single quotes and no period)
Other relevant information:
webpack version: 4.8.1
Node.js version: 8.11.1
Operating System: macOS 10.13.4
Additional tools:
Feel free to send a PR.
Doesn't seem like the most solid practice to make your code depend on what's in an error message. I don't think they're guaranteed to be always the same. Seems like webpack rather needs something like error.code?
There's an e.code, but it only tells the error type, not which module is missing (e.g. successfully requiring a module that then requires another module that is missing; I want to crash there).
There is no reason to have this tiny deviation from the standard, which is seen across v4, 6, 8, and 9 of Node.js (I'm guessing it's through all of them; those are the ones I tested). Our code (which is a dependency) needs to work both when the depender uses webpack and when it doesn't.
I'm not at all against aligning with what node does. I'm only mentioning it because I don't think the message is as "standardized" as the error code. If by "the standard" you mean "canonical across node versions". To me it feels a bit brittle to rely on the error message and parse out the intended required module through some regex or something.
Yes @Janpot it is brittle, unfortunately this is the error object NodeJS supplies :disappointed: I definitely would wish that it had a module property as well as the code property, but I don't think that's something to fix here in webpack (it wouldn't help my use case since I can't rely on the project getting webpacked).
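To illustrate the use case under discussion, a hedged sketch of the brittle pattern (the helper and module name are hypothetical): it treats the error as "module not found" only when both the code and Node's single-quoted message format match, which is exactly what webpack's double-quoted variant breaks.

```ts
function requireOptional(name: string): unknown {
  try {
    return require(name);
  } catch (e: any) {
    // Node: "Cannot find module 'missing-module'" (single quotes, no period)
    // webpack 4: 'Cannot find module "missing-module".' (double quotes, period)
    const match = /^Cannot find module '(.+)'/.exec(e.message);
    if (e.code === 'MODULE_NOT_FOUND' && match && match[1] === name) {
      return undefined; // the requested module itself is missing
    }
    throw e; // a transitive dependency failed to load: crash as intended
  }
}
```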
Closed by #7303
|
2025-04-01T04:35:56.617929
| 2019-03-22T04:45:59
|
424045137
|
{
"authors": [
"montogeek",
"rain201607"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12037",
"repo": "webpack/webpack",
"url": "https://github.com/webpack/webpack/issues/8939"
}
|
gharchive/issue
|
Since I upgraded to Webpack, I cannot have an "include" in my "rules". What's the right way of doing it now?
Before:
{
test: /\.js$/,
loader: 'babel-loader',
include: 'node_modules/v-tooltip',
}
Now:
{
test: /\.js$/,
use: [{ loader: 'babel-loader' }]
???
}
What should I do?
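For reference, a config sketch combining include with the use array (assuming webpack 4 and the path from the original snippet; not an official answer):

```ts
// webpack.config.ts (a sketch)
import * as path from 'path';

export default {
  module: {
    rules: [
      {
        test: /\.js$/,
        include: path.resolve(__dirname, 'node_modules/v-tooltip'),
        use: [{ loader: 'babel-loader' }],
      },
    ],
  },
};
```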
I encourage you to ask your question on StackOverflow instead. We try to use the issue tracker for bug reports and feature requests, as well as some discussions, but this appears to be a very localized question.
I'm closing for these reasons.
|
2025-04-01T04:35:56.626898
| 2017-12-05T22:20:03
|
279552098
|
{
"authors": [
"lencioni",
"sokra"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12038",
"repo": "webpack/webpack",
"url": "https://github.com/webpack/webpack/pull/6073"
}
|
gharchive/pull-request
|
Reduce the number of calls to hash.update
I was profiling our webpack build on Node 8.9.0 and noticed that a fair
amount of time was spent calling hash.update. During the process of
building one of our bundles, ~115ms was spent inside of createHash. 95ms
of that time is spent in crypto's update itself. In the bottom up view
of the profiler, this is the 4th most expensive part of the profile, as
measured by self time.
It turns out that concatenating up a string and calling crypto's update
once is much faster than calling update many times. In this commit, I
take a swing at replacing the pattern of passing around a hash object
that update is called on all the time, to concatenating up a string, and
reducing the number of calls to crypto's update.
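The gist of the change, as a hedged sketch (illustrative, not the PR's actual code; sha256 is used so the sketch runs on current Node, whereas webpack used MD4 at the time):

```ts
import { createHash } from 'crypto';

// Before: one update call per fragment.
function hashIncremental(fragments: string[]): string {
  const hash = createHash('sha256');
  for (const f of fragments) hash.update(f);
  return hash.digest('hex');
}

// After: concatenate first, then call update once.
function hashConcatenated(fragments: string[]): string {
  return createHash('sha256').update(fragments.join('')).digest('hex');
}
```

Both functions produce the same digest; the second simply avoids the per-call overhead inside crypto's update.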
After this change, my profiling shows that only 16.54ms is spent in
createHash. In the bottom up view, I verified that 8.5ms is now spent in
crypto's update, which moves it far from one of the most expensive
areas.
I think there are more opportunities like this, but I'm a little
hesitant to dig in much more at this point because of the staggering
amount of indirection which makes the code difficult to follow, the
profiling difficult to interpret, and the stack traces when I hit an
error not very helpful.
What kind of change does this PR introduce?
Optimization
Did you add tests for your changes?
Not yet. I wanted to get feedback on this before I fixed the tests.
If relevant, link to documentation update:
Summary
See above. https://github.com/webpack/webpack/issues/5718
Does this PR introduce a breaking change?
I don't think so
Other information
Before:
After:
I added more of our bundles into the mix and profiled before and after this change again, and this seems to have a nice positive impact. Here are some flame charts.
Before
After
Zooming in a bit:
Before
After
This is similar to #6006 but more complex.
You could try to backport #6006 to webpack 3, profile it and send a PR.
Ah thanks! I missed that one. Since v4 is so close, maybe I should start profiling on that version instead.
|
2025-04-01T04:35:56.630898
| 2023-08-14T17:54:48
|
1850263172
|
{
"authors": [
"TkDodo",
"eppisapiafsl",
"voxpelli",
"webpro"
],
"license": "ISC",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12039",
"repo": "webpro/knip",
"url": "https://github.com/webpro/knip/issues/198"
}
|
gharchive/issue
|
Equivalent of ts-prune-ignore-next
Hey!
I'm looking for a way to ignore "new code" that is added, so it isn't detected as unused/dead code
Is there an equivalent of // ts-prune-ignore-next?
Can you describe the use case?
Or is it because you run it on save?
Personal reflection is that if it's only for new code then I would want it to expire somehow. How does it work for ts-prune?
Next to ignore the whole file, there are currently a few ways to handle this: https://github.com/webpro/knip/blob/main/docs/handling-issues.md#unused-exports
If that's not what you need, I also like to know the use case. And if there's something Knip should fix or handle better I'm all ears :)
if I read this correctly, marking an exported function as /** @public */ would be the equivalent, right?
- // ts-prune-ignore-next
+ /** @public */
export function myUnusedFunction() { ... }
That's effectively the same thing indeed.
Thanks, /** @public */ works as expected
|
2025-04-01T04:35:56.633090
| 2018-07-10T13:02:54
|
339832592
|
{
"authors": [
"chris-miaskowski",
"webpro"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12040",
"repo": "webpro/release-it",
"url": "https://github.com/webpro/release-it/issues/324"
}
|
gharchive/issue
|
"ERROR Invalid Version" thrown if latest tag is not a semver
https://github.com/webpro/release-it/blob/a0ecfd40fe72c3ae0ba1bc93f2a28df9e7c11f87/lib/version.js#L34
The line above causes the code to break at runtime if the latest tag doesn't follow a semver format. While this is not a problem most of the time, it does break some workflows (e.g. the nightly builds we tag do not follow a semver format, but we don't take them into account when releasing). I think a more defensive check would do the trick here and allow more flexible setups, e.g. a try/catch block around the semver.neq.
What do you think?
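For illustration, one way such a defensive check could look (a sketch assuming the semver package; the function name is hypothetical and this is not the fix that shipped):

```ts
import * as semver from 'semver';

function isDifferentVersion(latestTag: string, nextVersion: string): boolean {
  // Skip non-semver tags (e.g. nightly build tags) instead of
  // letting semver.neq throw at runtime.
  if (!semver.valid(latestTag)) return false;
  return semver.neq(latestTag, nextVersion);
}
```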
Hi @chris-miaskowski. Thanks for the bug report. Totally agree this should be handled more defensively. I've just released v7.4.8 that includes this fix.
|
2025-04-01T04:35:56.652142
| 2021-06-16T15:20:22
|
922755100
|
{
"authors": [
"robinvdvleuten",
"triskweline"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12041",
"repo": "webstronauts/ex_unpoly",
"url": "https://github.com/webstronauts/ex_unpoly/issues/4"
}
|
gharchive/issue
|
Support for the Unpoly 2 protocol
Thank you for maintaining this package!
This is just a heads up that Unpoly 2 is going to be released this week.
Unpoly 2 comes with several extensions to the optional server protocol that this repo implements. All protocol changes are backwards compatible, so Unpoly 2 users can keep using this package.
You may want to consider whether you want to implement some of the newer features like X-Up-Events, X-Up-Mode or X-Up-Accept-Layer.
If you're planning to stick with the current feature set, that's cool too, of course.
@triskweline thanks for reaching out and good work with the v2! Do you have a changelog of all new Unpoly features or is it only the four you mentioned here?
It's more than these four.
From what I could see in the code, ex_unpoly currently supports the following:
X-Up-Target (Request header)
X-Up-Validate (Request header)
X-Up-Location (Response header)
X-Up-Method (Response header)
X-Up-Title (Response header)
_up_cookie (Cookie)
Changes in V2 are:
All entries in https://v2.unpoly.com/up.protocol not mentioned above
The server may also send a X-Up-Target response header to change the render target
The user can configure the client not to send an X-Up-Target header, to improve cacheability. To detect an Unpoly request in V2 (Unpoly.unpoly?), you should test for the presence of the X-Up-Version header, which is always guaranteed to be sent.
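As an illustration of that detection rule, a TypeScript/Express-style sketch (ex_unpoly itself is Elixir, so this only shows the header check, not the package's code):

```ts
import type { Request } from 'express';

// Unpoly 2 always sends X-Up-Version, so its presence is the reliable
// signal; X-Up-Target may be omitted by clients for cacheability.
function isUnpolyRequest(req: Request): boolean {
  return req.get('X-Up-Version') !== undefined;
}
```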
I also see that some response headers can be set to null. Is this a JSON-encoded null, like "null"?
It is the string null (no quotes), e.g. X-Up-Accept-Layer: null.
|
2025-04-01T04:35:56.655240
| 2014-10-27T09:26:29
|
46884567
|
{
"authors": [
"alexflav23",
"hamnis"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12042",
"repo": "websudos/phantom",
"url": "https://github.com/websudos/phantom/issues/145"
}
|
gharchive/issue
|
Ambiguous implicits when importing import com.websudos.phantom.Implicits._
$ sbt compile
[info] Compiling 3 Scala sources to /Users/maedhros/Projects/hbase-cassandra-migration/target/classes...
[error] /Users/maedhros/Projects/hbase-cassandra-migration/src/main/scala/hbcass/MigrateHistory.scala:72: type mismatch;
[error] found : ts.type (with underlying type String)
[error] required: ?{def toLong: ?}
[error] Note that implicit conversions are not applicable because they are ambiguous:
[error] both method augmentString in object Predef of type (x: String)scala.collection.immutable.StringOps
[error] and method wrapString in class LowPriorityImplicits of type (s: String)scala.collection.immutable.WrappedString
[error] are possible conversion functions from ts.type to ?{def toLong: ?}
[error] List(Click(site, UserContentEvent(userId, contentId, new DateTime(ts.toLong))))
[error] ^
[error] one error found
[error] (compile:compile) Compilation failed
@prettynatty @magott @hamnis An official Scala 2.11 release is now out. This issue is fixed and will be closed.
|
2025-04-01T04:35:56.664059
| 2022-11-07T10:06:13
|
1438091626
|
{
"authors": [
"Siubaak",
"ptatoChi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12045",
"repo": "wechatjs/vdebugger",
"url": "https://github.com/wechatjs/vdebugger/issues/2"
}
|
gharchive/issue
|
Can I debug the script from the browser's dev tools?
Hi thanks for your work.
I was wondering if it's possible to debug the custom scripts inside the browser's dev tool source window
You can check mprdev out, which is a debug toolkit based on Chrome DevTools and vdebugger. :)
Also, you can make your own dev tools based on Chrome DevTools and Chrome DevTools Protocol, by just implementing the protocols below:
|
2025-04-01T04:35:56.685470
| 2018-05-15T18:20:31
|
323329556
|
{
"authors": [
"weibullguy"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12046",
"repo": "weibullguy/rtk",
"url": "https://github.com/weibullguy/rtk/issues/121"
}
|
gharchive/issue
|
Hardware Module View Has Inconsistent Method Definitions
Expected Behavior
Module views should have consistent method definitions (i.e., same number of arguments, same arguments, etc.).
Actual Behavior
The Requirements module view has the following insert() methods:
_do_request_insert(self, sibling=True, part=0)
_do_request_insert_child_assembly(self, __button)
_do_request_insert_child_part(self, __button)
_do_request_insert_sibling_assembly(self, __button)
_do_request_insert_sibling_part(self, __button)
This method is called by either _do_request_insert_child(self, __button) or _do_request_insert_sibling(self, __button) which provide the proper value to the sibling argument. However, the Requirements module view needs to be consistent with all RTK Module Views.
Steps to Reproduce the Problem
Not applicable for requirements/code quality errors.
Proposed Solution
1. Write a requirement for Module Views to have consistent insert methods depending on whether the RTK module being displayed is flat or hierarchical.
2. Re-define _do_request_insert(self, sibling=True, part=0) to _do_request_insert(self, **kwargs).
3. Re-define _do_request_insert_child_assembly(self, __button) and _do_request_insert_child_part(self, __button) to _do_request_insert_child(self, __button, **kwargs) method.
4. Re-define _do_request_insert_sibling_assembly(self, __button) and _do_request_insert_sibling_part(self, __button) to _do_request_insert_sibling(self, __button, **kwargs) method.
5. Have the _do_request_insert_child(self, __button, **kwargs) and _do_request_insert_sibling(self, __button, **kwargs) method call the _do_request_insert(self, **kwargs) method with the correct values for the kwargs (assembly or part).
Operating Environment
Not needed for requirements/code quality errors.
Fixed in commit f619fc7fd75af9d4c498e7a4ff8e3a3d936e2192
|
2025-04-01T04:35:56.723078
| 2018-07-26T07:33:32
|
344727933
|
{
"authors": [
"alexwlchan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12047",
"repo": "wellcometrust/platform",
"url": "https://github.com/wellcometrust/platform/pull/2459"
}
|
gharchive/pull-request
|
Move Sourced out of the common lib
internal_model makes more sense as a home for it, and gets one more piece out of the common-common lib.
Merging by fiat, this is a tiny change.
|
2025-04-01T04:35:56.724692
| 2019-03-28T11:12:25
|
426434102
|
{
"authors": [
"davidpmccormick",
"jamesgorrie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12048",
"repo": "wellcometrust/wellcomecollection.org",
"url": "https://github.com/wellcometrust/wellcomecollection.org/pull/4343"
}
|
gharchive/pull-request
|
Change 'Images' to 'Collections' in footer
Could(/should?) have a NavigationItemsContext that stores this in the one place in common/_app.js
Could(/should?) have a NavigationItemsContext that stores this in the one place in common/_app.js
Feels like over-egging, surely just a JS object we can include?
|
2025-04-01T04:35:56.736567
| 2021-06-09T14:44:09
|
916288436
|
{
"authors": [
"coveralls",
"wellyshen"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12049",
"repo": "wellyshen/react-cool-virtual",
"url": "https://github.com/wellyshen/react-cool-virtual/pull/127"
}
|
gharchive/pull-request
|
fix: scrollToItem method freezes scrolling
Pull Request Test Coverage Report for Build 922091128
0 of 10 (0.0%) changed or added relevant lines in 1 file are covered.
3 unchanged lines in 1 file lost coverage.
Overall coverage decreased (-0.4%) to 10.426%
Changes Missing Coverage

| File | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| src/useVirtual.ts | 0 | 10 | 0.0% |

Files with Coverage Reduction

| File | New Missed Lines | % |
|---|---|---|
| src/useVirtual.ts | 3 | 0% |

Totals
Change from base Build 920715448: -0.4%
Covered Lines: 31
Relevant Lines: 277

- Coveralls
|
2025-04-01T04:35:56.744517
| 2021-05-29T11:16:53
|
906446736
|
{
"authors": [
"coveralls",
"wellyshen"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12050",
"repo": "wellyshen/react-cool-virtual",
"url": "https://github.com/wellyshen/react-cool-virtual/pull/63"
}
|
gharchive/pull-request
|
Fix: scrollToItem method not working with dynamic size
Pull Request Test Coverage Report for Build 888098379
0 of 13 (0.0%) changed or added relevant lines in 1 file are covered.
4 unchanged lines in 1 file lost coverage.
Overall coverage decreased (-0.04%) to 2.098%
Changes Missing Coverage

| File | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| src/useVirtual.ts | 0 | 13 | 0.0% |

Files with Coverage Reduction

| File | New Missed Lines | % |
|---|---|---|
| src/useVirtual.ts | 4 | 0% |

Totals
Change from base Build 886423883: -0.04%
Covered Lines: 7
Relevant Lines: 264

- Coveralls
|
2025-04-01T04:35:56.754690
| 2024-04-15T18:08:53
|
2244310561
|
{
"authors": [
"balmy-gazebo",
"kespinola"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12051",
"repo": "wen-community/wen-new-standard",
"url": "https://github.com/wen-community/wen-new-standard/issues/60"
}
|
gharchive/issue
|
More comprehensive test suite
The current test suite is all javascript based and limited in edge cases. Should try to support a more broad range of test cases and ideally add a rust library.
Thanks @balmy-gazebo happy you opened this one. I'm looking at the test suite right now and am curious on the direction and what people want out of their test suite.
Overall what is the motivation behind the current spec setup vs leaning fully into the anchor test stack?
What are folks thoughts on ideal?
For me it's:
Running against local validator
Use account snapshots to seed fixtures for the instruction under test instead of bootstrapping test state by replaying dependent instructions.
Test suite written in rust. If possible use solana test bank to avoid running local validator.
The original test suite was built like this to also test the SDK. I'm very open to moving to anchor tests for this. Do you have a repository in mind that does this well?
The original test suite was built like this to also test the SDK. I'm very open to moving to anchor tests for this. Do you have a repository in mind that does this well?
Doing research on that now.
Frens at nifty have a nice rust based setup using sdk generated by metaplex kinobi.
https://github.com/nifty-oss/asset/tree/main/clients/rust/asset
Then for vanilla anchor will mimic the examples in the anchor repo. They support some features like cloning accounts.
https://github.com/wen-community/wen-new-standard/pull/74
New suite of tests have been written for both wns and royalty distribution including an example sales protocol for testing royalty enforcement.
|
2025-04-01T04:35:56.769312
| 2023-04-14T17:55:13
|
1668706165
|
{
"authors": [
"gudipudipradeep",
"wencaiwulue"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12052",
"repo": "wencaiwulue/kubevpn",
"url": "https://github.com/wencaiwulue/kubevpn/issues/36"
}
|
gharchive/issue
|
Facing connection issue.
Please can you guide me on what the problem was.
Please can you guide me on what the problem was.
It is normal that it updates the cluster deployment kubevpn-traffic-manager image, because you are using a newer client; we recommend that the kubevpn client version be the same as the container image version.
Just wait until the upgrade is finished.
I waited a long time before reporting. Can we enable debug mode, or run the source code directly, to understand what the issue is?
Maybe it's because the image cannot be pulled; you can describe the pod to see the k8s events. And more: kubevpn is open source for everyone, so you can check the code to see what it does.
I waited a long time before reporting. Can we enable debug mode, or run the source code directly, to understand what the issue is?
There is a flag, --debug; with it you can see more logs.
@wencaiwulue Please can you guide me on how to run the code locally?
I checked out the code.
I installed Go.
I need your input to run the project locally.
I'm new to Go :) Please help me get it running.
@wencaiwulue
C:\ci_projects\kubevpn\cmd\kubevpn>go run main.go connect
got cidr from cache
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
Get "https://aks-xxxx-rg-wwwww-nonpr-8de222c5-0754bde13.hcp.eastus.azmk8s.io:443/api/v1/pods?timeout=30s&timeoutSeconds=30": read tcp <IP_ADDRESS>:63213-><IP_ADDRESS>:443: wsarecv: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
prepare to exit, cleaning up
failed to release ip to dhcp, err: failed to get cm DHCP server, err: Get "https://aks-xxxxxx-rg-wwwww-nonpr-8de222c5-0754bde13.hcp.eastus.azmk8s.io:443/api/v1/namespaces/default/configmaps/kubevpn-traffic-manager": context deadline exceeded
can not update ref-count: update ref-count failed, increment: -1, error: client rate limiter Wait returned an error: context deadline exceeded
clean up successful
Getting above issue.
It is also disconnecting our organization's Cisco VPN connection.
We are using an Azure cloud AKS private cluster.
Here is a link to the binary, no need to build it manually: kubevpn_v1.1.30_windows_amd64.zip
"https://aks-xxxx-rg-wwwww-nonpr-8de222c5-0754bde13.hcp.eastus.azmk8s.io
you can parse command output: kubectl get cm kubevpn-traffic-manager -o yaml and nslookup https://aks-xxxx-rg-wwwww-nonpr-8de222c5-0754bde13.hcp.eastus.azmk8s.io ?
@gudipudipradeep Hello, it should be solved in this commit. Can you clone this repo, pull the latest code, run the command make kubevpn to build the binary, and try it again?
@gudipudipradeep You can try the latest version v1.1.31 to verify whether this bug is fixed ~
|
2025-04-01T04:35:56.805572
| 2015-02-18T07:55:04
|
58037519
|
{
"authors": [
"hatchan",
"pjvds"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12053",
"repo": "wercker/box-golang",
"url": "https://github.com/wercker/box-golang/pull/11"
}
|
gharchive/pull-request
|
Update Go to version 1.4.2
This pull requests updates the following things:
Update Go from version 1.4.1 to the latest stable version 1.4.2.
Make box description explicit about the goal of this box.
As a reference, this is what is fixed in Go 1.4.2:
go1.4.2 (released 2015/02/17) includes bug fixes to the go command, the compiler and linker, and the runtime, syscall, reflect, and math/big packages. See the Go 1.4.2 milestone on our issue tracker for details.
Looks good, thanks
|
2025-04-01T04:35:56.811776
| 2024-05-13T17:52:31
|
2293460294
|
{
"authors": [
"flant-team-sysdev"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12054",
"repo": "werf/werf",
"url": "https://github.com/werf/werf/pull/6111"
}
|
gharchive/pull-request
|
chore: release 1.1.35
:robot: I have created a release *beep* *boop*
1.1.35 (2024-05-13)
Bug Fixes
update docs (f893e60)
This PR was generated with Release Please. See documentation.
:robot: Release is at https://github.com/werf/werf/releases/tag/v1.1.35 :sunflower:
|
2025-04-01T04:35:56.812605
| 2018-03-12T06:53:16
|
304258372
|
{
"authors": [
"odahcam",
"werthdavid"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12055",
"repo": "werthdavid/ngx-zxing",
"url": "https://github.com/werthdavid/ngx-zxing/issues/42"
}
|
gharchive/issue
|
Check dependencies
E.g. imho rxjs is not needed here at all
RxJS, and others like it, should be a dev-dependency so the demo app can run.
|
2025-04-01T04:35:56.815555
| 2016-01-30T02:01:12
|
129928275
|
{
"authors": [
"trainto",
"weskinner"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12056",
"repo": "weskinner/symbol-gen",
"url": "https://github.com/weskinner/symbol-gen/pull/48"
}
|
gharchive/pull-request
|
Add keybinding for windows and linux
'ctrl-alt-g' seems ok to be defined for 'symbol-gen:generate' for Windows and Linux.
Fixes #38
published in 1.1.0
|
2025-04-01T04:35:56.855984
| 2020-11-09T06:40:38
|
738734949
|
{
"authors": [
"scala-steward"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12057",
"repo": "weso/shaclex",
"url": "https://github.com/weso/shaclex/pull/349"
}
|
gharchive/pull-request
|
Update scalatest to 3.2.3
Updates org.scalatest:scalatest from 3.2.0 to 3.2.3.
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scalatest", artifactId = "scalatest" } ]
labels: test-library-update, semver-patch
Superseded by #396.
|
2025-04-01T04:35:56.857704
| 2023-03-07T15:34:48
|
1613701239
|
{
"authors": [
"xiki-tempula"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12058",
"repo": "wesselb/plum",
"url": "https://github.com/wesselb/plum/issues/76"
}
|
gharchive/issue
|
Support for dispatch based on the value of the input
Hi, I noted that the new plum 2 offered a bunch of new things. I wonder if one could do dispatch based on the value of the input?
class Print:
    @dispatch
    def print(self, input: str = "two"):
        return 2

    @dispatch
    def print(self, input: str = "three"):
        return 3
For example, in this case it would return 3 when the input is "three" and return 2 when the input is "two". The input would be of type Literal["two", "three"]? Thanks.
Thanks
|
2025-04-01T04:35:56.895101
| 2016-07-03T13:18:47
|
163561913
|
{
"authors": [
"DaleMcGrew",
"josephevans",
"udii"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12059",
"repo": "wevote/WeVoteServer",
"url": "https://github.com/wevote/WeVoteServer/issues/111"
}
|
gharchive/issue
|
Use ElasticSearch python API to do Add/Delete data
Please create examples using the python ElasticSearch API (version 1.x) to do the following functions, so we can call the ElasticSearch API (from the WeVoteServer API server) when our data changes:
[ ] Add (or update if it exists) one entry for one table (ex/ Add a candidate) - this can be by "id" key
[ ] Delete one entry for one table (ex/ Delete a candidate) - this can be by "id" key
For adding documents to ES, the code should be something like this:
es = Elasticsearch(["es-ip:9200"], timeout=20, max_retries=5, retry_on_timeout=True)
result = es.index(index="candidates", doc_type="candidate", id=<unique id>, body={
    "candidate_name": value,
    "candidate_twitter_handle": value,
    "twitter_name": value,
    "google_civic_election_id": blah,
    "state_code": blah,
    "we_vote_id": blah
})
To delete documents from ES:
es.delete(index="candidates", doc_type="candidate", id=<unique id>)
Thank you @josephevans, we will add this to the WeVoteServer application code.
I experimented with using Django signals to get notifications for object saves and deletes (see here: https://github.com/udii/WeVoteServer/blob/notify-elastic-search/signals/elasticsearch.py).
I still need to find a better way to initialize the module and to access the instance
before I can hook this up to Elasticsearch.
Ready for test. Thank you @udii!
It works great @udii. Thank you.
|
2025-04-01T04:35:56.915149
| 2024-03-27T23:05:36
|
2212041156
|
{
"authors": [
"jbjanot",
"whaaaley"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12060",
"repo": "whaaaley/esbuild-plugin-glob-import",
"url": "https://github.com/whaaaley/esbuild-plugin-glob-import/issues/1"
}
|
gharchive/issue
|
Not completely working
Hey there! Thanks for writing this plugin.
It is partially working, as esbuild does not return any error when I use it on this kind of syntax:
import 'core/*/js/__*.js';
import 'components/*/*/js/__*.js';
import 'flexibles/*/js/__*.js';
But my files are still not loaded.
They are auto instanced classes that looks like that:
export default new class {
  constructor() {
    console.log('hello you');
  }
};
When I try to debug it this way:
import cores from 'core/*/js/__*.js';
console.log(cores);
I get an infinite object with each part of the path, never seeing the expected class.
Any idea what could be causing it?
Thanks a lot for your help!
Thanks for reporting this! I'll definitely look into it. If you've made any headway keep me updated. I've had a next branch with quite a few updates that I never quite wrapped up last year that may have covered this. I'll try to set up some unit tests and see where I'm at.
Hey Dustin, thanks for your comment.
I did fork it here: https://github.com/lesanimals/esbuild-plugin-glob-import
It works for my use case, but I didn't particularly test it, and it's the first time I've had a look at esbuild plugins.
If it can be of any help to you, I'll be more than happy!
Thanks again!
|
2025-04-01T04:35:56.985223
| 2023-08-28T22:52:23
|
1870615781
|
{
"authors": [
"ahmedrabii",
"zahlernick"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12062",
"repo": "whittlem/pycryptobot",
"url": "https://github.com/whittlem/pycryptobot/issues/832"
}
|
gharchive/issue
|
Issue with Coinbase API grabbing Quote Currency
Hi all, I was recently using the Coinbase Pro API and tried to switch over to the regular Coinbase API (Coinbase Advanced), and it appears to take the funds in my Coinbase wallet, divide that by the price of whatever crypto I'm trying to buy/sell, then tell me whatever the result is as my actual. (For example, with $200 in the wallet and ETH at $1600 per token, it will tell me that 200/1600 = 0.125 is my actual, and that my minimum is $200 when it's set to something else.)
Is this because they updated the API? Is there a fix to this?
The Coinbase API (Coinbase Advanced) is just not working; it needs to be fixed:
scanner.py line 37 is still getting API keys from the configuration
scanner.py line 46: the get_markets_24hr_stats function does not exist for the Coinbase AuthAPI
AppState.py line 223 is getting the min as the total (your case)
The Coinbase Pro valid URLs must be updated and refactored (the same array is in too many places)
matplotlib must be updated to the latest version, along with the styles used
logbuysellinjson: even when you set it to 1, it does not log orders into a JSON file
With "AppState.py line 223 is getting the min as the total (your case)" fixed by replacing the line with base_min = float(0), the app seems to work, but guess what: no order is being fulfilled. It tells you that an order has been made, but nothing actually happened.
Basically, I listed all the bugs that I found, in the hope that they will be fixed soon.
|
2025-04-01T04:35:56.987000
| 2024-09-20T18:17:26
|
2539399936
|
{
"authors": [
"Lxtharia",
"whleucka"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12063",
"repo": "whleucka/reverb.nvim",
"url": "https://github.com/whleucka/reverb.nvim/pull/8"
}
|
gharchive/pull-request
|
Add option to limit the amount of sounds playing at the same time
I mapped some sounds to CursorMove so when I hold down, I get blasted by 50 sounds playing all at the same time.
This should also just limit processes spawned when using more heavyweight players in the future (mpv?).
The implementation is hacky, as I don't know how "global" the global variable is, but it works™
Thank you a lot btw for this plugin, I find it very fun :D
Edit: I apparently fucked up my branch name and called it max_options?? I meant max_sounds lol
Thanks again!
|
2025-04-01T04:35:56.990670
| 2016-03-07T13:57:45
|
138984073
|
{
"authors": [
"johnhaskew",
"mrjb"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12064",
"repo": "who-emro/meerkat_frontend",
"url": "https://github.com/who-emro/meerkat_frontend/issues/72"
}
|
gharchive/issue
|
Download Data Function
Dear All - we had a productive meeting with MOH about the system and went through the various new features added. Firstly - thank you for all the excellent work which is well received by MOH.
The Download Data function works well, but it is clearly too confusing to simply offer the raw form data for download. MOH doesn't understand what each variable means or where it comes from.
If possible, we therefore need to restructure the download data function according to function, i.e. CD, NCD, MH, etc., and only include the variables in each form that need to be used. We also need to rename the variables to something MOH can understand.
I attach a data dictionary for seven data download files: Communicable Disease, Non-Communicable Disease, Mental Health, PIP, IMCI, Nutrition and Vaccination, and mhGAP. The first column includes the current variable name, the second the name it should be renamed to, and the third column contains the source form of the data (case, alert or review).
I hope this makes sense and is possible. I am happy to also include Raw Data files for download which contain the raw data for anyone who needs everything, i.e. Raw Data (Case Report), Raw Data (Daily Register), etc.
We also need to ensure the data can be downloaded as .xls please; there are problems downloading the CSV for MOH (they need to separate it into columns manually).
Public Health Surveillance | Data Dictionary.docx
Making good progress with this. Will this look (very nearly) the same for all countries?
|
2025-04-01T04:35:57.001290
| 2020-02-21T13:30:49
|
568956416
|
{
"authors": [
"cal2195",
"whyboris"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12065",
"repo": "whyboris/Video-Hub-App",
"url": "https://github.com/whyboris/Video-Hub-App/issues/369"
}
|
gharchive/issue
|
Current Scanning Method Stats All Files, Not Just Videos
I know we redesigned this at one point, but I think I'll look into a more efficient way.
The current method works great for folders of just videos, but slows to a snail's pace when there are as many .tag files and .jpg files etc. inside the folder hierarchy too.
This is a waste of hard drive bandwidth, as we then proceed to filter them out in the app.
I'll look into a better solution for this.
Thank you for catching this!
I only tested folders with a few varied files and with large folders containing only videos, never with large folders holding many various files.
I'm pretty sure this is the gist that I used when starting VHA (and the code hasn't changed much). I see there are some interesting comments there:
https://gist.github.com/kethinov/6658166#gistcomment-2692596
uses yield
https://gist.github.com/kethinov/6658166#gistcomment-2936675
uses a functional programming approach
https://gist.github.com/kethinov/6658166#gistcomment-2733303
async
Looking at this again, this may be a non-issue: it looks like you need to stat everything anyway to find out whether it's a directory or not.
But it would be nice if the whole thing was async!
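(Though for what it's worth, newer Node can spare the per-entry stat: readdir with { withFileTypes: true } returns Dirent objects, so isDirectory() costs no extra stat call. A rough async sketch, assuming Node 10.10+ and a hypothetical extension whitelist:)
const fs = require('fs').promises;
const path = require('path');

// Recursively collect matching files without stat-ing every entry;
// only the files we actually keep would ever need a stat (for size, etc.).
async function walk(dir, extensions, results = []) {
  const entries = await fs.readdir(dir, { withFileTypes: true });
  for (const entry of entries) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      await walk(full, extensions, results);
    } else if (extensions.includes(path.extname(entry.name).toLowerCase())) {
      results.push(full);
    }
  }
  return results;
}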
@cal2195 -- thanks to your PR (❤️) with chokidar I believe the issue has been solved, right?
Please reopen if that's not quite correct
|
2025-04-01T04:35:57.031451
| 2020-07-09T17:01:50
|
654213662
|
{
"authors": [
"TheTacoScott",
"cal2195",
"whyboris"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12066",
"repo": "whyboris/Video-Hub-App",
"url": "https://github.com/whyboris/Video-Hub-App/issues/477"
}
|
gharchive/issue
|
Proposal: Allow for 3rd party hooks into VHA
Since the code is open source, the custom hashing algorithm is easy to rewrite in another language, the main config file (.vha2) is JSON, and the thumbnails/clips/filmstrips live in predictable places with predictable file formats, I think there's some benefit to offloading future "obscure" or "limiting" features to a simple 3rd-party API that you expose from VHA.
For example, if someone wants to regenerate all the clips using some custom nonsense they write (I'm speaking very much about myself here) or change the thumbnail of a clip every day using a custom thumbnail gen thing, this is all pretty doable by outputting different, but compliant, mp4/jpg at the correct location.
But we then need a way to send a signal to the app saying "things have changed for hash X or video Y or whatever" if we want that update reflected at runtime.
An exposed API interface, or an ability to send messages to VHA's stdin or something similar, would open up the app.
Some of the other issues I've seen (like people who run vha on a separate server but use it on a different client) could be facilitated by opening up this framework.
TLDR: provide a way to send signals to a running VHA app (tcp port/rest api/stdin/etc) to let it know an external program has updated the state of:
the json definition of a clip
the thumbnail/clip/filmstrip of a clip
Once the signal is received it should "reload" that info from disk, or do a cache invalidation, or reload it however makes the most sense on your end.
This piece fully offloads some obscure use cases out of the Video-Hub-App project and makes it so those features become extensions or plugins, or something similar.
The project could facilitate their existence but isn't on the hook for their development.
Thank you for the idea. I'm very unfamiliar with how a plug-in system / 3rd party hooks would work. The biggest challenge would be for me to understand what users could even want to do -- because unless I have a clear picture of at least some examples, I wouldn't be able to provide anything useful as an API.
In short, this would be a very large undertaking for me and since I wouldn't be using the feature, it's unlikely I'll be motivated to add it.
I am curious about why something like this is needed. The code is open-source -- anyone could edit the code however they like for a custom behavior, rather than use whatever not-far-sighted API that I could produce. I guess the benefit is that the API could be very small and well-documented, meaning no one has to look at the VHA code to get something added.
About updating the .json file while it's open in the app: I am unsure how it would work. For example, a user adds a tag to a video. Now the two data objects are different. If we don't allow external sources to update tags, we could make sure that VHA always does extra work to re-add the tags to incoming 'new' data ... but it seems super-complicated. It's just best not to have anything edit the .vha2 file when an app is open.
VHA 3.0.0 will allow the app to watch directories - adding new videos as they appear on the hard drive. So that solves at least one problem.
I understand that it would be cool to have a server that efficiently generates the screenshots (rather than doing it locally). But it seems easier to rip out the bit of code that does the extraction, write a script that keeps adding the screenshots without ever editing the .vha file. And when the app (running locally) imports the new videos, it finds the screenshots there (so it never tries to extract them).
Do let me know if I've misunderstood -- and please feel free to add examples of what you imagine would be useful.
I think I have a better idea of what you mean now that I think of it ... the .vha2 (or .json as I referred to it above) wouldn't be updated by the external entity -- instead VHA would receive a message from the external entity about something changing somewhere. Is that right?
I have at best a very vague picture of how any of this would work (how could another language communicate to the app? would the plugins only work if they are written in a specific language?) Any time I can think of anything, the simplest solution is simply for whoever wants the functionality to edit the app themselves ...
Please do clarify if you have the time -- because I never used a plugin in any of my software, so I'm likely missing out on some cool stuff!
Think even simpler than a plugin.
For the purposes of this issue let's just assume that a rest/json based web interface is what exists.
Let's assume that vha listens on a configurable port, for this discussion let's assume port 9999, and listening on <IP_ADDRESS>.
Let's limit to one use case: an image thumbnail (on disk) has been updated outside of the program and we want to reload it in the interface at runtime.
Here's how that could work:
POST http://<ip_of_vha>:9999/update
payload:
{
  "action": "thumbnail_update",
  "vha_hash": ""
}
That's not really very RESTy, but it could be refactored to something like
PUT http://<ip_of_vha>:9999/thumbnails/
telling VHA to "update" that thumbnail for that vhash from disk
The benefit of something like this you don't need to know the exact use case, but you can expose a subset of the total feature set without having to worry about a way to make the gui work.
As of now the highest value targets would be a way to update the config file (vha2) and thumbnails/clips/filmstrips outside of the runtime and signal to the runtime that this has occurred.
Oh yeah! A server with an HTTP endpoint
-- makes sense
My usual style of markdown is getting clobbered here, but you get the idea. =)
https://github.com/mpc-hc/mpc-hc is a good example of a video player (much much simpler than vha) but it has a web interface that exposes... basically all the buttons in the gui to be "clicked" via web calls.
Now the benefit of this is: you don't really need to know the use cases. This is about letting others discover what they can do. =)
Although specifically for me: I want the ability to signal to the app that a thumbnail has been updated, and that tags have been updated.
FYI: currently you can:
select any of the extracted screenshots (ones seen in the filmstrip) to be the default thumbnail per video
you can drag/drop any image file over any video (in thumbnails view) and it will become the default thumbnail for that video
Figuring out a way to simulate a drag/drop was a path I went down, but since this is Electron and nothing in the app is a classic Windows "control", I abandoned that.
Picture a background process that is running, occasionally updating thumbnails based on criteria not worth mentioning here. Having a way to have those updated without an app reload would be huge.
Makes sense
I wouldn't want the app to automatically run a server in the background, but I could see this as a feature that's present (but not enabled in production builds -- meant more for anyone who wants to tinker on their own).
So, we could have a global toggle in the node section that is false by default -- never turning on the server, but if a developer wants to tinker, they enable it and we're ready to go.
The server would start and would listen for POST calls to some local address.
Upon the calls - it would simply use the ipc to send the messages to Electron and the app would handle them however the developer wants to.
I think it may be simplest to have an example back-forth communication example with the non-enabled server and a bit of documentation in the README.md file to explain how to use it.
After that, it's up to every developer to put in whatever API they'd like.
In your case, you would just make a simple call to invalidate the image cache (already happening after an image is dropped over a thumbnail to replace it).
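To make that concrete, here is a minimal sketch of the dev-only bridge (assuming an Express dependency; the route, IPC channel name, and function are all hypothetical, not existing VHA code):
const express = require('express');

// Dev-only HTTP bridge: accept local POSTs and forward each payload to the
// renderer over IPC; the app decides how to react (e.g. clear the image
// cache when a thumbnail changed on disk).
function startDevApi(browserWindow, port = 9999) {
  const app = express();
  app.use(express.json());

  app.post('/update', (req, res) => {
    // e.g. { "action": "thumbnail_update", "vha_hash": "..." }
    browserWindow.webContents.send('external-update', req.body);
    res.sendStatus(204);
  });

  return app.listen(port, '127.0.0.1'); // bound to localhost on purpose
}
On the Angular side, an ipcRenderer.on('external-update', ...) handler could then call webFrame.clearCache() and refresh the view.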
ps -- no need to use offset to change the thumbnail -- you can look at the already-present functionality for picking any of the already-extracted thumbnails
@cal2195 -- I believe you've already played around with having some server stuff in VHA in the past. If you're still interested in this topic -- please weigh in with some thoughts
I haven't done angular since angular 1, and have not touched nodejs since like 2015 but I will attempt to tackle this as well.
If I dive into all these readmes and such I assume an otherwise reasonable person can get a dev env going?
A slightly different direction, but FYI:
https://github.com/whyboris/Video-Hub-App/compare/master...cal2195:express-server
I think this plays videos over a locally-running server
@TheTacoScott -- Angular has moved on since AngularJS -- so it may be easiest to treat it as if you're learning anew.
The good news is that once you npm install and npm start you'll have it all running.
Most of the functionality is in home.component.ts
To invalidate the cache (and re-read all the images on disk) run this line:
https://github.com/whyboris/Video-Hub-App/blob/master/src/app/components/home.component.ts#L586
this.electronService.webFrame.clearCache();
I'm happy to help at any step of the way
whyboris: Why can't more people be like you? Love it. =)
Hey guys!
This is certainly an interesting idea, and I think a REST API is the way to go. As Boris mentioned, I have a working server on my fork for streaming (and even transcoding) videos in-app, allowing for some cool extra features.
Expanding on this to allow access to more features, and to control the app, would be very cool - triggering a rescan remotely, for example!
One thing of note - whatever we choose should be planned out well and be very extensible and flexible. If this is a dev feature, it mightn't matter as much, although I would hate to see custom code no longer work if we overhaul something because we missed some big issue!
I'm also interested in some of the other features @TheTacoScott has brought up in other issues, and I'll weigh in on those as well!
I'm happy to help work on this, and design an extensible yet robust system that would serve many needs in the future!
cal2195: Why can't more people be like you? Love it. =)
I think a web-based RESTy/JSON-like API makes a lot of sense (albeit I am very new to this project).
Proper modeling of the collections and resources, to make it as discoverable and compliant as possible, seems like a solid start once a POC is going.
I'm open to this "server on the side" approach. I would prefer to focus on getting VHA 3.0.0 released first before getting this code into the codebase. I'm hoping for a release in a month or two. Until then we could work on it, it just wouldn't become part of the master branch (soon to be renamed to main btw) until version 3 is out.
So the basic idea is a web server at some URL that is able to send messages to node, which in turn can forward them to the 'front end'
I'm happy to collaborate on this
I will dive more deeply into this, and the other issues with ideas, tomorrow morning - getting lateish here!
So assuming that added a server side resty thing in nodejs is on the light end of trivial I'm curious what @cal2195 and @whyboris think of as the high level collections and resources for vha?
Forgive the newness (and the names can change to better match whatever) but I currently view high level collections as: Hubs and Videos
Since vha is single hub only (iirc), hubs may not make sense as a collection to expose.
Everything would likely be prefixed with a version ID
GET /hubs/ - list hubs
GET /hubs/HubName - list hub configuration (inputDir/version/numOfFolders/etc)
GET /hubs/HubName/videos/ - list videos (in this case the contents of vha.json["images"][*])
GET /hubs/HubName/videos/hash - vha.json["images"][properindex]
GET /hubs/HubName/videos/hash/(clip|thumbnail|filmstrip) - return an HTML-compliant header and the image/video itself; Cache-Control: no-cache could maybe be used to invalidate caches here
This is a little non-RESTy, since these clips and thumbnails and such are 1:1 and not 1:X relations, but something like this would work.
PUT /hubs/HubName/videos/hash/(clip|thumbnail|filmstrip) - update the image/video with the payload data
DELETE /hubs/HubName/videos/hash/(clip|thumbnail|filmstrip) - delete the image/video, and maybe regenerate?
Not all of these need to be implemented, but this could be the foundation
Assuming something like this is sane, should we have a discussion with an OpenAPI spec or something similar on a PR?
Working POC: https://github.com/whyboris/Video-Hub-App/pull/478/files
Related: #161
We're now one step closer, as VHA has "official" server code with #603
|
2025-04-01T04:35:57.039376
| 2022-01-20T20:32:34
|
1109740229
|
{
"authors": [
"jamesfpb",
"neilgrz",
"whyboris"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12067",
"repo": "whyboris/Video-Hub-App",
"url": "https://github.com/whyboris/Video-Hub-App/issues/725"
}
|
gharchive/issue
|
How to identify cause of loading getting 'stuck'
I am running Video Hub App on MacBookPro (2019) on Monterrey.
I have the current hub set to watch for changes on my content folder.
On every start-up, the load process runs and gets stuck at 50%.
I assume it is choking on one of my mp4 files but can't see any that are obviously missing or without clips/screenshots etc
How can I identify the source of the problem? Is there a log file somewhere?
(this may be a good item to add into the user guide troubleshooting section)
Many thanks!
Thank you for the bug report. The "watch for changes" will do the equivalent of the Rescan button. I suspect if you press stop and then click Rescan in the 4th tab (Current hub) in Settings, the same behavior will happen.
You can use Debugtron to see the logs - perhaps it will show what went wrong (though I'm afraid the app might not be logging individual video file names).
I have noticed a few times when using the app that the progress bar indicator gets stuck; there's a good chance that it's not related to the app being stuck on a file, but a matter of the user interface not receiving the "done scanning" message.
If you think of anything else that might be useful in debugging this issue, please do share.
* note: the logging I'm talking about happens only inside the app during runtime and is never written to disk or sent out anywhere -- this is purely console.log() inside Node (the technology that runs the code)
I'm seeing the same thing. Is there any logging for this without having to install 3rd party software?
@neilgrz - there is no logging in the distributed software (demo or purchased version); installing Debugtron should show some of the logs.
For any logging you'd like, you can run the software from the source code (see the Readme for instructions).
The scan should not get stuck on any video for longer than the setExtractionDurations function allows.
I'd love to figure out what is breaking on other people's computers, but I've not been able to replicate it.
@whyboris Thanks for the info. I was able to use process of elimination and found my hanging issue to be two videos that were corrupt/wouldn't play. This is with the "watch folder" option on.
After cleaning that up and all was good, I was also able to reproduce the issue again on demand by putting a 0-byte .mkv file (a blank txt file simply renamed with the extension .mkv) in the "watch folder". With that "bad" video in place and then starting Video Hub App, the title bar sits at "loading - 0%" with a pulsating green dot indefinitely, until the dud file is removed or "watch folder" is disabled.
|
2025-04-01T04:35:57.051303
| 2015-09-04T15:52:22
|
104917739
|
{
"authors": [
"BrendanBenshoof",
"jbenet",
"noffle",
"parkan",
"whyrusleeping"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12068",
"repo": "whyrusleeping/gx",
"url": "https://github.com/whyrusleeping/gx/issues/14"
}
|
gharchive/issue
|
Structure and File-names not generally compatible
If you are only planning on using gx for node and go, then it works fine.
However, other languages have different requirements for folder structures and limits on file names (Python stands out as an example).
If you want it to be a "general any language" package manager, you need to find an elegant way to deal with this (language specific plugins?)
could you detail how I would need to let things be structured for python to work? (i only ever script in python, never done an actual 'project' in it)
Relative imports for Python are iffy to start with. I don't advise planning to use them that way.
The way to do it for python is to put packages somewhere on the python path (varies from system to system)
the "right" way to deal with python would be to require the user to add a dedicated "vendor" folder for the entire machine, then put packages there.
For Python you also cannot have a package name that would not work as a valid identifier in Python (no "."s or similar symbols, and it must not start with a number). Ironically, a multihash works well. Your current name-version.num scheme cannot be imported.
When figuring out how to do this on my own, I named the package as a multihash and did:
import Qmblahblhablah as packagename
but I am not sure if people would tolerate that.
interesting... how does python normally do vendoring?
It doesn't. I essentially had to rewrite every import in every module I've vendored into urdht.
Essentially, you can do relative imports of modules inside the current module, but all later imports have to be relative to the "topmost" module rather than the current one. I like python, but that is nuts.
alright, how could gx make it easy for you?
gx build could set a custom python-path for you, or something maybe
most "real world" (devops) use of python is strictly jails with venv. it is "absolute" in terms of the python path, but relative in terms of the OS.
i think language plugins (rules?) is going to be the right thing to do. will not be able to capture every lang.
yeah, my thought is having language specific plugins to handle things
The hard part is maintaining "composability" such that when you recursively get the requirements for each package, they just work too.
Having a library for gx that separates these concerns could help here: #15.
I recently ended up building a similar feature for mediachain's experimental translators, which ended up doing a lot of ugly sys.path manipulation, namespace juggling and __import__ing (we needed dynamic loading)
https://github.com/mediachain/mediachain-client/blob/master/mediachain/translation/lookup.py#L37
One issue I encountered was namespace shadowing: any .pyc on the path (Python maddeningly sets sys.path[0] to the cwd even if you're in a venv) that provides any part of the package prefix will shadow any branches in site-packages, e.g. if you have foo.bar in any directory on the path preceding the venv's vendored dir, foo.qux from there will never be found.
It would be great to have a general purpose solution to this!
I wonder if a cleaner way to do this is to symlink (under the real package name instead of versioned/content-addressed path)?
|
2025-04-01T04:35:57.083448
| 2016-08-04T14:57:18
|
169395320
|
{
"authors": [
"FTBZ",
"paulmaunders",
"stephane-martin",
"wied03"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12072",
"repo": "wied03/centos-package-cron",
"url": "https://github.com/wied03/centos-package-cron/issues/12"
}
|
gharchive/issue
|
Command switch to get yum to update security advisories only
Since yum security doesn't work properly on CentOS, I am finding your tool really useful - so thanks for creating it :)
I have been using something like the following one liner to try to update all the security advisory packages on a system from your command:
centos-package-cron --output stdout --forceold | pcregrep -M 'Packages:[^:]*' | grep -o "[^* ]*" | grep -v 'Packages:' | grep -v 'References' | sort | uniq | xargs yum -y update
However this is a bit clunky and requires pcre-tools to be installed. I was wondering how difficult it would be for you to add a command line option to centos-package-cron to achieve the same result? (e.g. applying the security updates)
@paulmaunders - Thanks! In this case, doing the updates themselves is something I view as outside the scope of this tool. I think it should be done by a CM tool (Chef, Puppet, etc.).
That said, something that has been on the wish list in the back of my head is to add XML or JSON to the output format list. That way it would be easier to parse and do what you want.
Would that help?
Perhaps even just having a flag to return the package names only, in text format?
Would you be opposed to JSON/XML? That way I don't have to add a bunch of command line switches to control what's displayed/formatted (security vs. general updates). Anyone could pick from the response what they needed.
Either XML or JSON would be fine as we could quite easily parse that!
I started the process on this in a branch but haven't had the time to keep going on it.
I'll take this one if you'd like. I already added JSON output to syslog in this branch: https://github.com/stephane-martin/centos-package-cron/tree/syslog_output
I'd appreciate the help. As long as unit tests are added and the existing ones don't break :)
Sure. Basically i think that we could have three switches: --json (output results to stdout as JSON), --mail (send results by email), --syslog (publish results to syslog). The switches would be cumulative, not exclusive.
I started a branch here https://github.com/wied03/centos-package-cron/commits/features/return_code_email and I was going to just remove email entirely and let people use sendmail/mailx/cronwrapper/etc. but then of course I got busy with the day job.
Well, if you want to remove email, then it's probably better to keep a "--output" switch that specifies the format to print to stdout (json or text). --syslog could be added to also send alerts to syslog.
@stephane-martin - Are you still working on this?
Not in the last two weeks, but still interested for sure. I'll have some time to code at the end of Feb.
@stephane-martin - I'll stand by on this one, and when you get the time, we can see where you end up. Should I remove the (incomplete) branch I referenced above?
Very interested in the ability to extract available packages/errata by command line operation.
Now that someone submitted a JSON PR, closing this
|
2025-04-01T04:35:57.093732
| 2022-08-18T19:54:20
|
1343560772
|
{
"authors": [
"tpluscode"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12073",
"repo": "wikibus/wikibus",
"url": "https://github.com/wikibus/wikibus/issues/3"
}
|
gharchive/issue
|
Editable vehicle image
It gets nicely sourced from dbpedia but is then impossible to change
Add https://commons.wikimedia.org/wiki/File:Tiger_Line.jpg to Myllenium Hyline
|
2025-04-01T04:35:57.094555
| 2022-02-04T02:23:33
|
1123744291
|
{
"authors": [
"MusikAnimal",
"jdlrobson"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12074",
"repo": "wikimedia-gadgets/MoreMenu",
"url": "https://github.com/wikimedia-gadgets/MoreMenu/pull/22"
}
|
gharchive/pull-request
|
Add vector-2022 to valid skins
Fixes #21
There are a few more places to fix, I think, but this one is definitely one of them. Thanks for going out of your way to fix our gadgets!
|
2025-04-01T04:35:57.101963
| 2022-10-22T19:38:44
|
1419474790
|
{
"authors": [
"PeWu",
"RobPavey"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12075",
"repo": "wikitree/wikitree-browser-extension",
"url": "https://github.com/wikitree/wikitree-browser-extension/pull/88"
}
|
gharchive/pull-request
|
Revert "Change the way features are registered" (#69)
This is not a "clean" revert, i.e. changes have been made to files in the meantime. 2 new features have been added using the "new" registration method, so I changed them to the existing registration method (in options.js).
Nonetheless, the most important parts have been reverted, specifically the options.js file.
@RobPavey
I have already submitted a revert. Sorry. If you sync your fork you should get the revert changes and hopefully they align with yours.
@RobPavey beat me to it in https://github.com/wikitree/wikitree-browser-extension/commit/5f4fec117d74e166d6832c95176152c007400d93 :smile:
|
2025-04-01T04:35:57.107122
| 2016-08-12T19:23:15
|
170941442
|
{
"authors": [
"bobmcwhirter"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12076",
"repo": "wildfly-swarm/wildfly-swarm",
"url": "https://github.com/wildfly-swarm/wildfly-swarm/pull/72"
}
|
gharchive/pull-request
|
SWARM-597 - Clean up RuntimeServer bootstrapping.
It's entirely possible to remove the UnmanagedInstance that
we used to bridge and subvert Weld. So we did.
Additionally, clarified the Server and Deployer contracts,
letting the Deployer deploy, and the Server serve. Even though
it's actually more like a container.
Additionally, use CDI to @Inject the RuntimeDeployer into
the RuntimeServer.
Additionally, use CDI to @Inject components needed by RuntimeDeployer.
Additionally, remove stuff from RuntimeServer that is now provided
directly to the RuntimeDeployer via CDI.
Additionally, remove the poorly-thought-out non-eager opening
Undertow stuff for the time being.
Additionally, be certain to use the word "additionally" often,
because Friday.
test this please
|
2025-04-01T04:35:57.125291
| 2022-12-16T09:56:37
|
1499913624
|
{
"authors": [
"wildfly-ci",
"yersan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12077",
"repo": "wildfly/wildfly-core",
"url": "https://github.com/wildfly/wildfly-core/pull/5334"
}
|
gharchive/pull-request
|
[19.x][WFCORE-6177] Remove test property added by EncodingPersistenceTestCase
Jira issue: https://issues.redhat.com/browse/WFCORE-6177
Core -> Full Integration Build 20 outcome was FAILURE using a merge of 82b060967edfb9d2358a19ea27a0da1fc4bef6cd
Summary: Exit code 1 (Step: Build & test full (Maven)) (new) Build time: 00:03:01
|
2025-04-01T04:35:57.133888
| 2018-04-13T08:51:32
|
314021564
|
{
"authors": [
"kabir",
"ochaloup",
"stuartwdouglas"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12078",
"repo": "wildfly/wildfly",
"url": "https://github.com/wildfly/wildfly/pull/11123"
}
|
gharchive/pull-request
|
[WFLY-9818] xts adding element to enable async endpoints registration
https://issues.jboss.org/browse/WFLY-9818
this is for .NET integration with Narayana WS-AT. The related Narayana jira is here:
https://issues.jboss.org/browse/JBTM-2928
The content of this PR enables the new WS endpoints needed for WS-AT async handling. The endpoints are registered if the new element <async-registration enabled="true" /> is added under the XTS subsystem.
I will be really glad for any comment about the subsystem change where the new element is added (I'm not sure about the best practices in this area). Thanks.
Looks like this does not compile
@stuartwdouglas ouch, sorry and thanks. During my last refactoring, caused by fixing my IDE's order-of-imports changes, I forgot to apply my local-copy code additions. Now it should be working.
@stuartwdouglas how should I proceed to get a review and possibly get this merged? Would you give me a hint about whom I can ask to look at the code? Thanks!
The issue should be verified. @kabir is there something more to do for this PR to be mergeable? Thanks.
Hi @kabir, as WFLY 13 was released, can you please reconsider merging these changes into WFLY? Or what is the missing part before this could be merged? Many thanks.
I've rebased to stay up to date.
@kabir thanks for the review and for pointing me to the missing pieces. I tried to provide what you asked me for. As you pointed out, I'm not very familiar with this. I will be glad for your further feedback.
Anyway, I updated the transformer and tuned the tests a bit. Please let me know. Thank you.
Retest this please
|
2025-04-01T04:35:57.135494
| 2019-12-13T16:47:07
|
537652159
|
{
"authors": [
"bstansberry",
"darranl",
"maeste"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12079",
"repo": "wildfly/wildfly",
"url": "https://github.com/wildfly/wildfly/pull/12850"
}
|
gharchive/pull-request
|
WFLY-12867 Upgrade IronJacamar from 1.4.19.Final to 1.4.20.Final
https://issues.redhat.com/browse/WFLY-12867
retest this please
I am going to approve and flag for merge so I can combine with other PRs - the failure seems to be mixed domain mode specific.
|
2025-04-01T04:35:57.138208
| 2022-09-06T19:33:37
|
1363727034
|
{
"authors": [
"bstansberry",
"rachmatowicz",
"rhusar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12080",
"repo": "wildfly/wildfly",
"url": "https://github.com/wildfly/wildfly/pull/16018"
}
|
gharchive/pull-request
|
WFLY-16950 Provide a test case for testing EJB/HTTP with a load balancer.
This pull request provides a test case for using EJB/HTTP with a load balancer.
For more information, see the issue: https://issues.redhat.com/browse/WFLY-16950
@rachmatowicz Needs a rebase...
This has been inactive for a long time so I'm going to close it.
|
2025-04-01T04:35:57.139893
| 2023-11-23T14:35:27
|
2008351861
|
{
"authors": [
"istudens"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12081",
"repo": "wildfly/wildfly",
"url": "https://github.com/wildfly/wildfly/pull/17441"
}
|
gharchive/pull-request
|
[WFLY-17180][JBJCA-1402] added validation-query-timeout attribute to datasources
Issue: https://issues.redhat.com/browse/WFLY-17180
Related issue: https://issues.redhat.com/browse/JBJCA-1402
Marking this as Draft since it needs an IronJacamar upgrade which has not been released yet.
|
2025-04-01T04:35:57.152100
| 2019-05-10T12:10:08
|
442686796
|
{
"authors": [
"ash0x0",
"besserwisser",
"joravkumar",
"thomvaill"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12082",
"repo": "wilix-team/iohook",
"url": "https://github.com/wilix-team/iohook/issues/167"
}
|
gharchive/issue
|
[Windows] Visual C++ Redistributable is required at runtime
IoHook cannot run on a Windows platform that does not have the Visual C++ Redistributable installed (which is the case for a lot of people; for example, it is not installed on a fresh Windows 10 install).
I don't know if it has always been the case or if it's a new compilation bug.
After some searching, these issues may be related:
https://github.com/wilix-team/iohook/issues/139
https://github.com/wilix-team/iohook/issues/161
https://github.com/wilix-team/iohook/issues/164
Expected Behavior
I am not a Windows developer, but I think uiohook.dll should be linked with static runtime libs, so the end user does not have to install Visual C++ Redistributable
Current Behavior
It looks like uiohook.dll is linked with the dynamic C Runtime library.
Dependency Walker output of uiohook.dll:
Possible Solution
Here again, I am not a Windows developer, but this may be a hint: https://stackoverflow.com/questions/1073509/should-i-redistribute-msvcrt-dll-with-my-application
https://braintrekking.wordpress.com/2013/04/27/dll-hell-how-to-include-microsoft-redistributable-runtime-libraries-in-your-cmakecpack-project/
I wish I could fix it, but I don't know where this /MT option should be set.
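(If it helps anyone digging into this: when the DLL is built via CMake, modern CMake can select the static runtime in one line. A sketch, assuming CMake 3.15+, and not verified against iohook's actual build files:)
# CMake 3.15+ (policy CMP0091): link the static MSVC runtime (/MT, and /MTd
# for Debug builds) so end users don't need the Visual C++ Redistributable.
cmake_minimum_required(VERSION 3.15)
set(CMAKE_MSVC_RUNTIME_LIBRARY "MultiThreaded$<$<CONFIG:Debug>:Debug>")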
Steps to Reproduce
On a fresh Windows 10 install:
git clone https://github.com/wilix-team/iohook.git && cd iohook/
npm install
node examples/example.js
Loading native binary: D:\git\iohook\builds\node-v64-win32-x64\build\Release\iohook.node
internal/modules/cjs/loader.js:730
return process.dlopen(module, path.toNamespacedPath(filename));
^
Error: Le module spécifié est introuvable. (The specified module could not be found.)
\\?\D:\git\iohook\builds\node-v64-win32-x64\build\Release\iohook.node
at Object.Module._extensions..node (internal/modules/cjs/loader.js:730:18)
at Module.load (internal/modules/cjs/loader.js:600:32)
at tryModuleLoad (internal/modules/cjs/loader.js:539:12)
at Function.Module._load (internal/modules/cjs/loader.js:531:3)
at Module.require (internal/modules/cjs/loader.js:637:17)
at require (internal/modules/cjs/helpers.js:22:18)
at Object.<anonymous> (D:\git\iohook\index.js:10:21)
at Module._compile (internal/modules/cjs/loader.js:701:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:712:10)
at Module.load (internal/modules/cjs/loader.js:600:32)
Then, go to https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads and install vc_redist.x64.exe.
Now it works!
NB1: if Visual C++ Redistributable is already installed on your Windows instance, you can reproduce the bug by uninstalling it:
NB2: I showed you how to reproduce the bug with a pre-built version of iohook. I tried building it myself --> the bug is still there!
Context
I am developing a "click to screenshot" app with Electron. I want to distribute it on Linux, MacOS and Windows.
Possible workaround 1:
I ask my users to go to https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads and install vc_redist.x64.exe before installing my app.
--> I can't ask this of my users!
Possible workaround 2:
I manage to distribute the missing DLLs inside my Electron bundle.
I wish I could do that but I did not find a way to do it :(
Possible workaround 3:
I find a Windows installer that is able to install my app and install Visual C++ Redistributable in the background.
--> I haven't found one compatible with Electron :(
Your Environment
Version used: 0.4.5
Environment: Node v10.15.3
Operating System and version: Windows 10 Pro x64 v1809
Thank you so much. I spent the last 3 hours looking for a solution, and installing VC_redist.x64.exe totally worked for me.
@Thomvaill Thanks for this solution. It's working fine on Win 10 but throws an error on Win 7. Can you provide me a solution for Win 7?
Thanks in advance!!
Will add to docs. Thanks for the solution.
Added to doc.
|
2025-04-01T04:35:57.154904
| 2022-07-12T14:42:58
|
1302165759
|
{
"authors": [
"lemmy",
"will62794"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12083",
"repo": "will62794/tla-web",
"url": "https://github.com/will62794/tla-web/issues/19"
}
|
gharchive/issue
|
Variables values undefined when operator is primed
See the undefined value of the variable terminationDetected and the priming of the terminated operator:
---------------------- MODULE AsyncTerminationDetection ---------------------
EXTENDS Naturals

N == 2
ASSUME NIsPosNat == N \in Nat \ {0}

Node == 0 .. N-1

VARIABLES
    active,              \* activation status of nodes
    pending,             \* number of messages pending at a node
    terminationDetected

vars == << active, pending, terminationDetected >>

terminated == \A n \in Node : ~ active[n] /\ pending[n] = 0
-----------------------------------------------------------------------------
Init ==
    /\ active \in [ Node -> BOOLEAN ]
    /\ pending \in [ Node -> 0..1 ]
    /\ terminationDetected \in {FALSE, terminated}

Terminate(i) ==
    /\ active' \in { f \in [ Node -> BOOLEAN] : \A n \in Node: ~active[n] => ~f[n] }
    /\ pending' = pending
    /\ terminationDetected' \in {terminationDetected, terminated'}

SendMsg(i, j) ==
    /\ active[i]
    /\ pending' = [pending EXCEPT ![j] = @ + 1]
    /\ UNCHANGED << active, terminationDetected >>

Wakeup(i) ==
    /\ pending[i] > 0
    /\ active' = [active EXCEPT ![i] = TRUE]
    /\ pending' = [pending EXCEPT ![i] = @ - 1]
    /\ UNCHANGED << terminationDetected >>

DetectTermination ==
    /\ terminated
    /\ ~terminationDetected
    /\ terminationDetected' = TRUE
    /\ UNCHANGED << active, pending >>
-----------------------------------------------------------------------------
Next ==
    \/ DetectTermination
    \/ \E i \in Node, j \in Node:
        \/ Terminate(i)
        \/ Wakeup(i)
        \/ SendMsg(i, j)
=============================================================================
Should be fixed. See here, and test.
|
2025-04-01T04:35:57.179477
| 2017-06-14T11:09:13
|
235843593
|
{
"authors": [
"williamleif",
"zesoon"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12084",
"repo": "williamleif/socialsent",
"url": "https://github.com/williamleif/socialsent/issues/10"
}
|
gharchive/issue
|
I am confused: how does this project induce a sentiment lexicon without the help of an existing domain-specific lexicon?
acc, auc, avg_prec = binary_metrics(polarities, lexicon, eval_words)
It seems to get the best result (acc, auc, avg_prec) with the help of the baseline domain-specific lexicon (lexicon).
I am confused about how this project induces a domain-specific lexicon without the help of an existing domain-specific lexicon. How are the sentiment words assigned? Is polarities["good"] the positive score of the word "good"? And what is the negative score of the word "good"?
Thank you for any feedback.
Hi zesoon! Sorry if the code/paper was not totally clear! The code/algorithm requires a "seed" lexicon in order to induce a larger lexicon.
Using a seed lexicon, the algorithm assigns both positive and negative scores to all words in a larger set. The word "good", for example, will have a much higher positive score than negative score.
I recommend checking out the paper that is linked in the README for more details.
Hi, Will
Thank you for your feedback. In the paper, the positive-polarity score is defined as: Positive_labelscore/(Positive_labelscore+Negative_labelscore). What is the final sentiment score of a word? Is it 2*Positive_labelscore/(Positive_labelscore+Negative_labelscore) - 1? When I run example.py in the project, it seems that most words' positive-polarity scores are larger than 0.5 (Positive_labelscore > Negative_labelscore): for example, example.py outputs 536 words whose positive-polarity scores are smaller than 0.5, while 2906 words' positive-polarity scores are larger than 0.5.
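(Just to spell out the arithmetic being asked about: with p and n the positive and negative label scores, the rescaling simply maps the [0,1] polarity score onto [-1,1].)

$$ s(w) = \frac{p(w)}{p(w) + n(w)} \in [0, 1], \qquad \hat{s}(w) = 2\,s(w) - 1 \in [-1, 1] $$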
thanks again
|
2025-04-01T04:35:57.197204
| 2021-09-13T07:53:32
|
994570067
|
{
"authors": [
"danr",
"willmcgugan"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12085",
"repo": "willmcgugan/rich",
"url": "https://github.com/willmcgugan/rich/issues/1483"
}
|
gharchive/issue
|
Spurious comma in markdown example
https://github.com/willmcgugan/rich/blob/6f09ae226c26a2be52e3214ee93e6d704756d282/rich/__main__.py#L198
- Supports much of the *markdown*, __syntax__!
+ Supports much of the *markdown* __syntax__!
Remove this comma?
Yeah, fair. Would you like to PR for that? Bear in mind, it will need updating in a test as well.
|
2025-04-01T04:35:57.198521
| 2020-12-16T04:13:13
|
768433076
|
{
"authors": [
"albb0920"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12086",
"repo": "willnorris/imageproxy",
"url": "https://github.com/willnorris/imageproxy/pull/255"
}
|
gharchive/pull-request
|
Add support for UNIX domain socket
This PR adds UNIX domain socket support for the addr option.
I want this as I run all my services behind nginx over UNIX domain sockets.
I suppose there are more reasons someone would want to run imageproxy without listening on a TCP port.
I never wrote any Go before this PR, I hope I haven't done anything stupid...
@googlebot I signed it!
|
2025-04-01T04:35:57.199861
| 2016-05-15T17:34:22
|
154919992
|
{
"authors": [
"trevorsenior"
],
"license": "cc0-1.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12087",
"repo": "willowtreeapps/tsx-boilerplate",
"url": "https://github.com/willowtreeapps/tsx-boilerplate/issues/17"
}
|
gharchive/issue
|
Update to Typings
I tried this ~2 months ago and didn't really get anywhere. I've added the "help wanted" tag if anyone wants to take a look at it. Typings has been updated since then, and it may work better with React now.
https://github.com/typings/typings
Resolved in dd12bee
|
2025-04-01T04:35:57.202882
| 2020-04-21T21:05:02
|
604285148
|
{
"authors": [
"Uma-mv",
"jollygreenegiant"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12088",
"repo": "willowtreeapps/vocable-android",
"url": "https://github.com/willowtreeapps/vocable-android/issues/153"
}
|
gharchive/issue
|
Add an empty state for custom category
As a vocable app user, I would like to see an empty state message when I have just added a category
Acceptance Criteria
user navigates to the categories section of the app
Add a new category by tapping on the + option
After successfully adding a category, navigating to the category on the home screen displays an empty state screen.
After successfully adding a category, navigating to the detail view of that category in Settings displays an empty state screen.
Designs on Home Screen:
https://www.figma.com/file/mNrwUygVhTmuWnKNHHkenR/Vocable-New?node-id=19%3A0
Design in Settings:
https://www.figma.com/file/mNrwUygVhTmuWnKNHHkenR/Vocable-New?node-id=19%3A1
Finished with #280 and #282
|
2025-04-01T04:35:57.229050
| 2023-11-06T09:50:53
|
1978727370
|
{
"authors": [
"staticlibs"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12089",
"repo": "wiltondb/wiltondb",
"url": "https://github.com/wiltondb/wiltondb/issues/3"
}
|
gharchive/issue
|
Re-enable test runs for the Fedora RPM
Postgres make check tests were temporarily disabled in Fedora RPMs. These tests pass fine on RHEL but fail on Fedora. This likely happens because of compiler flags on newer versions of GCC; it needs to be investigated and fixed. Currently, Fedora RPMs cannot be used for anything serious.
Test runs pass fine on the latest versions of Fedora on x86_64; the failure is still observed on aarch64:
make -j1 checkprep >>'/builddir/build/BUILD/postgresql-15-15.4+wiltondb3.3-3'/tmp_install/log/install.log 2>&1
echo "+++ regress check in src/test/regress +++" && PATH="/builddir/build/BUILD/postgresql-15-15.4+wiltondb3.3-3/tmp_install/usr/bin:/builddir/build/BUILD/postgresql-15-15.4+wiltondb3.3-3/src/test/regress:$PATH" LD_LIBRARY_PATH="/builddir/build/BUILD/postgresql-15-15.4+wiltondb3.3-3/tmp_install/usr/lib64" ../../../src/test/regress/pg_regress --temp-instance=./tmp_check --inputdir=. --bindir= --dlpath=. --max-concurrent-tests=20 --schedule=./parallel_schedule --max-connections=5
+++ regress check in src/test/regress +++
============== creating temporary instance ==============
============== initializing database system ==============
============== starting postmaster ==============
running on port 51700 with PID 16500
============== creating database "regression" ==============
CREATE DATABASE
ALTER DATABASE
ALTER DATABASE
ALTER DATABASE
ALTER DATABASE
ALTER DATABASE
ALTER DATABASE
============== running regression test queries ==============
test test_setup ... ok 141 ms
test tablespace ... ok 169 ms
parallel group (20 tests, in groups of 5): char text varchar name boolean int4 int2 float4 int8 oid txid float8 uuid bit numeric regproc pg_lsn money enum rangetypes
boolean ... ok 32 ms
char ... ok 19 ms
name ... ok 25 ms
varchar ... ok 20 ms
text ... ok 20 ms
int2 ... ok 24 ms
int4 ... ok 19 ms
int8 ... ok 31 ms
oid ... ok 32 ms
float4 ... ok 29 ms
float8 ... ok 32 ms
bit ... ok 46 ms
numeric ... ok 171 ms
txid ... ok 27 ms
uuid ... ok 35 ms
enum ... ok 45 ms
money ... ok 26 ms
rangetypes ... ok 191 ms
pg_lsn ... ok 24 ms
regproc ... ok 19 ms
parallel group (19 tests, in groups of 5): line numerology lseg point strings path circle date polygon box time timetz interval timestamp timestamptz macaddr macaddr8 inet multirangetypes
strings ... ok 58 ms
numerology ... ok 21 ms
point ... ok 27 ms
lseg ... ok 24 ms
line ... ok 17 ms
box ... ok 111 ms
path ... ok 15 ms
polygon ... ok 105 ms
circle ... ok 25 ms
date ... ok 33 ms
time ... ok 21 ms
timetz ... ok 21 ms
timestamp ... ok 332 ms
timestamptz ... ok 345 ms
interval ... ok 40 ms
inet ... ok 32 ms
macaddr ... ok 19 ms
macaddr8 ... ok 25 ms
multirangetypes ... ok 110 ms
parallel group (12 tests, in groups of 5): tstypes horology type_sanity geometry regex unicode comments misc_sanity expressions opr_sanity xid mvcc
geometry ... ok 74 ms
horology ... ok 48 ms
tstypes ... ok 33 ms
regex ... ok 222 ms
type_sanity ... ok 56 ms
opr_sanity ... ok 160 ms
misc_sanity ... ok 19 ms
comments ... ok 14 ms
expressions ... ok 38 ms
unicode ... ok 13 ms
xid ... ok 23 ms
mvcc ... ok 36 ms
parallel group (5 tests): copyselect copydml copy insert_conflict insert
copy ... ok 47 ms
copyselect ... ok 30 ms
copydml ... ok 37 ms
insert ... ok 155 ms
insert_conflict ... ok 86 ms
parallel group (7 tests, in groups of 5): create_function_c create_operator create_misc create_procedure create_table create_type create_schema
create_function_c ... ok 18 ms
create_misc ... ok 29 ms
create_operator ... ok 23 ms
create_procedure ... ok 37 ms
create_table ... ok 183 ms
create_type ... ok 25 ms
create_schema ... ok 27 ms
parallel group (5 tests): index_including index_including_gist create_view create_index_spgist create_index
create_index ... ok 370 ms
create_index_spgist ... ok 268 ms
create_view ... ok 202 ms
index_including ... ok 130 ms
index_including_gist ... ok 144 ms
parallel group (16 tests, in groups of 5): create_cast create_aggregate create_function_sql constraints triggers select drop_if_exists typed_table vacuum inherit errors hash_func roleattributes create_am updatable_views infinite_recurse
create_aggregate ... ok 33 ms
create_function_sql ... ok 55 ms
create_cast ... ok 18 ms
constraints ... ok 74 ms
triggers ... ok 376 ms
select ... ok 40 ms
inherit ... ok 237 ms
typed_table ... ok 50 ms
vacuum ... FAILED 151 ms
drop_if_exists ... ok 47 ms
updatable_views ... ok 234 ms
roleattributes ... ok 30 ms
create_am ... ok 50 ms
hash_func ... ok 27 ms
errors ... ok 23 ms
infinite_recurse ... ok 105 ms
test sanity_check ... ok 60 ms
parallel group (20 tests, in groups of 5): select_distinct_on select_implicit select_having select_into select_distinct case union subselect join aggregates random transactions portals arrays btree_index namespace delete prepared_xacts update hash_index
select_into ... ok 34 ms
select_distinct ... ok 60 ms
select_distinct_on ... ok 27 ms
select_implicit ... ok 29 ms
select_having ... ok 32 ms
subselect ... ok 91 ms
union ... ok 90 ms
case ... ok 42 ms
join ... ok 253 ms
aggregates ... ok 285 ms
transactions ... ok 58 ms
random ... ok 19 ms
portals ... ok 69 ms
arrays ... ok 115 ms
btree_index ... ok 673 ms
hash_index ... ok 187 ms
update ... ok 135 ms
delete ... ok 27 ms
namespace ... ok 24 ms
prepared_xacts ... ok 48 ms
parallel group (20 tests, in groups of 5): brin spgist gist gin privileges init_privs security_label lock collate matview tablesample object_address replica_identity groupingsets rowsecurity drop_operator password identity generated join_hash
brin ... ok 197 ms
gin ... ok 364 ms
gist ... ok 352 ms
spgist ... ok 214 ms
privileges ... ok 563 ms
init_privs ... ok 14 ms
security_label ... ok 27 ms
collate ... ok 56 ms
matview ... ok 119 ms
lock ... ok 35 ms
replica_identity ... ok 74 ms
rowsecurity ... ok 203 ms
object_address ... ok 55 ms
tablesample ... ok 45 ms
groupingsets ... ok 123 ms
drop_operator ... ok 34 ms
password ... ok 76 ms
identity ... ok 93 ms
generated ... ok 169 ms
join_hash ... ok 891 ms
parallel group (2 tests): brin_bloom brin_multi
brin_bloom ... ok 56 ms
brin_multi ... ok 109 ms
parallel group (16 tests, in groups of 5): async alter_operator alter_generic misc create_table_like dbsize tsrf sysviews misc_functions merge collate.icu.utf8 tidrangescan tid tidscan incremental_sort create_role
create_table_like ... ok 126 ms
alter_generic ... ok 60 ms
alter_operator ... ok 23 ms
misc ... ok 64 ms
async ... ok 17 ms
dbsize ... ok 17 ms
merge ... ok 109 ms
misc_functions ... ok 65 ms
sysviews ... ok 57 ms
tsrf ... ok 28 ms
tid ... ok 27 ms
tidscan ... ok 35 ms
tidrangescan ... ok 27 ms
collate.icu.utf8 ... ok 18 ms
incremental_sort ... ok 72 ms
create_role ... ok 26 ms
parallel group (6 tests, in groups of 5): amutils psql_crosstab rules psql stats_ext collate.linux.utf8
rules ... ok 168 ms
psql ... ok 195 ms
psql_crosstab ... ok 31 ms
amutils ... ok 28 ms
stats_ext ... ok 686 ms
collate.linux.utf8 ... ok 13 ms
test select_parallel ... ok 816 ms
test write_parallel ... ok 98 ms
test vacuum_parallel ... ok 77 ms
parallel group (2 tests): subscription publication
publication ... ok 182 ms
subscription ... ok 48 ms
parallel group (17 tests, in groups of 5): portals_p2 dependency select_views cluster foreign_key combocid guc tsdicts tsearch bitmapops xmlmap functional_deps advisory_lock window foreign_data equivclass indirect_toast
select_views ... ok 66 ms
portals_p2 ... ok 20 ms
foreign_key ... ok 424 ms
cluster ... ok 140 ms
dependency ... ok 43 ms
guc ... ok 39 ms
bitmapops ... ok 147 ms
combocid ... ok 28 ms
tsearch ... ok 142 ms
tsdicts ... ok 45 ms
foreign_data ... ok 220 ms
window ... ok 108 ms
xmlmap ... ok 34 ms
functional_deps ... ok 38 ms
advisory_lock ... ok 40 ms
indirect_toast ... ok 155 ms
equivclass ... ok 29 ms
parallel group (6 tests, in groups of 5): jsonpath_encoding json_encoding jsonpath json jsonb jsonb_jsonpath
json ... ok 64 ms
jsonb ... ok 124 ms
json_encoding ... ok 16 ms
jsonpath ... ok 30 ms
jsonpath_encoding ... ok 15 ms
jsonb_jsonpath ... ok 35 ms
parallel group (18 tests, in groups of 5): limit plancache copy2 temp plpgsql prepare conversion truncate rangefuncs domain returning sequence rowtypes polymorphism alter_table with largeobject xml
plancache ... ok 51 ms
limit ... ok 43 ms
plpgsql ... ok 239 ms
copy2 ... ok 71 ms
temp ... ok 80 ms
domain ... ok 108 ms
rangefuncs ... ok 104 ms
prepare ... ok 38 ms
conversion ... ok 53 ms
truncate ... ok 97 ms
alter_table ... ok 589 ms
sequence ... ok 74 ms
polymorphism ... ok 89 ms
rowtypes ... ok 82 ms
returning ... ok 36 ms
largeobject ... ok 96 ms
with ... ok 91 ms
xml ... ok 307 ms
parallel group (12 tests, in groups of 5): hash_part reloptions indexing partition_join partition_prune partition_info explain compression partition_aggregate tuplesort memoize stats
partition_join ... ok 336 ms
partition_prune ... ok 535 ms
reloptions ... ok 34 ms
hash_part ... ok 25 ms
indexing ... ok 320 ms
partition_aggregate ... ok 361 ms
partition_info ... ok 47 ms
tuplesort ... ok 443 ms
explain ... ok 54 ms
compression ... ok 106 ms
memoize ... ok 60 ms
stats ... ok 129 ms
parallel group (2 tests): event_trigger oidjoins
event_trigger ... ok 68 ms
oidjoins ... ok 149 ms
test fast_default ... ok 75 ms
test serializable ... ok 18 ms
============== shutting down postmaster ==============
========================
1 of 213 tests failed.
========================
|
2025-04-01T04:35:57.235428
| 2024-07-05T02:28:51
|
2391657868
|
{
"authors": [
"Jaguarek62",
"vgmcal",
"win32ss"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12090",
"repo": "win32ss/supermium",
"url": "https://github.com/win32ss/supermium/issues/721"
}
|
gharchive/issue
|
Supermium not detected as default browser
Issue:
The notice to set Supermium as the default browser appears every time you open the application:
This happens even with Supermium set as the default browser application in Windows settings:
Expected behavior:
Once Supermium is selected as the default application, the notice should no longer appear.
Info:
OS: Windows 10 (64-bit)
Supermium: 124.0.6367.245 (64-bit)
This also happens on Windows 8.1
This seems to be happening because of some change I made to the installer end of things but never replicated in Supermium itself. As a result, the string returned by IApplicationAssociationRegistration::QueryCurrentDefault did not match the expected value because it was based on "Supemium" instead of "Supermium" ("Supermium" isn't too long for file association/protocol handlers, but it is too long for the default program handler in Windows 8 and up).
|
2025-04-01T04:35:57.266892
| 2018-08-24T16:11:50
|
353844283
|
{
"authors": [
"maiamcc"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12091",
"repo": "windmilleng/tilt",
"url": "https://github.com/windmilleng/tilt/pull/177"
}
|
gharchive/pull-request
|
mill: enforce repo is absolute path
Hello @landism, @dmiller,
Please review the following commits I made in branch maiamcc/enforce-abs-repo:
82f8ab53ff1890c62dd166ee29efc0d270458d85 (2018-08-24 12:11:28 -0400)
mill: enforce repo is absolute path
e62010ab1889ca34c5b74a9275540fe2f331b93d (2018-08-24 11:51:23 -0400)
wip
Code review reminders, by giving a LGTM you attest that:
Commits are adequately tested
Code is easy to understand and conforms to style guides
Incomplete code is marked with TODOs
Code is suitably instrumented with logging and metrics
ping @nicks @landism
|
2025-04-01T04:35:57.270587
| 2021-11-24T10:48:44
|
1062272007
|
{
"authors": [
"CjHare"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12092",
"repo": "windranger-io/windranger-solidity-template",
"url": "https://github.com/windranger-io/windranger-solidity-template/pull/64"
}
|
gharchive/pull-request
|
fix: solhint solidity code style settings
Purpose for this PR
Although Solhint is not the best static analyser for security / quality analysis, it can provide code style enforcement.
By setting the checks to error, the process fails: the husky hooks enforce the checks on commit, and, more importantly, the CI job will fail and the PR status will also fail (preventing a PR merge).
The reasoning for which settings are errors and which are off basically comes down to my personal preference... so please chime in; discussion for a better standard is always welcome!
Once the settings are approved, I'll document them with an additional section in the readme (https://github.com/windranger-io/windranger-solidity-template/issues/63); I'm reluctant to begin the write-up before the style is chosen/agreed.
Need to change the solidity example to the Box (as it passes the code style, while BitDAO.sol does not).
|
2025-04-01T04:35:57.271767
| 2023-01-15T15:13:42
|
1533862568
|
{
"authors": [
"asika32764"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12093",
"repo": "windwalker-io/core",
"url": "https://github.com/windwalker-io/core/issues/1056"
}
|
gharchive/issue
|
Research a way to make view js after all js included in template
Impossible because the HTML will render first. Try using JS modules to resolve this issue.
|
2025-04-01T04:35:57.322344
| 2020-02-25T07:37:55
|
570370355
|
{
"authors": [
"DABH",
"Maverick1872",
"Wernfried",
"bobvanderlinden",
"wbt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12094",
"repo": "winstonjs/winston",
"url": "https://github.com/winstonjs/winston/issues/1767"
}
|
gharchive/issue
|
Child logger from winston default logger
Please tell us about your environment:
winston version?
[ ] winston@2
[x] winston@3
node -v outputs: v12.13.0
Operating System? MacOS
Language? ES5
What is the problem?
The following is not possible:
const mychildlog = winston.child({ label: 'somelabel' })
While the documentation gives the following 2 statements:
You can create child loggers from existing loggers to pass metadata overrides
See https://github.com/winstonjs/winston#creating-child-loggers
and
The default logger is accessible through the winston module directly. Any method that you could call on an instance of a logger is available on the default logger
See https://github.com/winstonjs/winston#using-the-default-logger
However, it is not possible to call child on winston. This is inconvenient.
My use case is the following:
const getLogger = function(module) {
const path = module.filename.split('/').slice(-2).join('/');
return winston.child({
label: path
});
};
So that I can have the module filename in the logs.
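For reference, the same pattern already works against an explicit logger instance; a minimal sketch, assuming winston 3 (the format and transport choices are illustrative only):
const winston = require('winston');
// An explicit instance instead of the module-level default logger;
// .child() is available on any logger instance.
const baseLogger = winston.createLogger({
  level: 'info',
  format: winston.format.simple(),
  transports: [new winston.transports.Console()],
});
const getLogger = function(module) {
  const path = module.filename.split('/').slice(-2).join('/');
  return baseLogger.child({ label: path });
};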
What do you expect to happen instead?
I expect the following to work:
const mychildlog = winston.child({ label: 'somelabel' })
mychildlog.info('hello')
I guess you are looking for a solution like this:
const { createLogger, format, transports } = require('winston');
const { combine, timestamp, printf } = format;
const logger = createLogger({
defaultMeta: { mainLabel: 'default label' },
level: 'info',
format: combine(
timestamp({ format: 'YYYY-MM-DD HH:mm:ss.SSS' }),
printf(({ message, timestamp, level, mainLabel, childLabel }) => {
return `${timestamp} (${childLabel || mainLabel}) [${level}] -> ${message}`;
})
),
transports: [
new transports.Console()
],
});
const childLogger = logger.child({ childLabel: 'child label' });
logger.info('the message');
childLogger.info('the child message');
Output
2021-12-17 17:46:07.781 (default label) [info] -> the message
2021-12-17 17:46:07.784 (child label) [info] -> the child message
This looks related to PR #1989, regarding the overwriting with the same variables.
@bobvanderlinden @wbt Per the issue author they wanted to be able to create a new child logger from the default logger instance that is exported. It appears that this was solved by #1603. As such I am closing this issue.
@DABH do you know if we've cut a release and published it to NPM, that has this feature in it? If you don't know off the top of your head, that's fine and I'll confirm later.
Yeah, we have done a release since then, so this must be included now
|
2025-04-01T04:35:57.324095
| 2023-08-17T02:45:18
|
1854150865
|
{
"authors": [
"robertdstein",
"virajkaram"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12095",
"repo": "winter-telescope/mirar",
"url": "https://github.com/winter-telescope/mirar/issues/621"
}
|
gharchive/issue
|
[BUG] Random shutil error related to swarp and tmp files, related to parallel processing?
Error for processor mirar.processors.reference at 2023-08-13 17:28:04.831880 (local time):
File "/data/loki/code/mirar/mirar/processors/base_processor.py", line 217, in apply_to_batch
batch = self.apply(batch)
^^^^^^^^^^^^^^^^^
File "/data/loki/code/mirar/mirar/processors/base_processor.py", line 243, in apply
batch = self._apply(batch)
^^^^^^^^^^^^^^^^^^
File "/data/loki/code/mirar/mirar/processors/base_processor.py", line 366, in _apply
return self._apply_to_images(batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/loki/code/mirar/mirar/processors/reference.py", line 131, in _apply_to_images
resampled_ref_img = ref_resampler.apply(ImageBatch(ref_image))[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/loki/code/mirar/mirar/processors/base_processor.py", line 243, in apply
batch = self._apply(batch)
^^^^^^^^^^^^^^^^^^
File "/data/loki/code/mirar/mirar/processors/base_processor.py", line 366, in _apply
return self._apply_to_images(batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/loki/code/mirar/mirar/processors/astromatic/swarp/swarp.py", line 342, in _apply_to_images
run_swarp(
File "/data/loki/code/mirar/mirar/processors/astromatic/swarp/swarp_wrapper.py", line 129, in run_swarp
execute(swarp_command)
File "/data/loki/code/mirar/mirar/utils/execute_cmd.py", line 311, in execute
run_local(cmd, output_dir=output_dir, timeout=timeout)
File "/data/loki/code/mirar/mirar/utils/execute_cmd.py", line 108, in run_local
shutil.move(current_path, output_path)
File "/data/loki/anaconda3/envs/mirar/lib/python3.11/shutil.py", line 845, in move
copy_function(src, real_dst)
File "/data/loki/anaconda3/envs/mirar/lib/python3.11/shutil.py", line 436, in copy2
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "/data/loki/anaconda3/envs/mirar/lib/python3.11/shutil.py", line 256, in copyfile
with open(src, 'rb') as fsrc:
^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/data/loki/code/mirar/vm722967_00001.tmp'
Where did the error occur?
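If the parallel-processing suspicion is right, a generic mitigation (a sketch of the technique, not mirar's actual fix) is to give each swarp invocation a private working directory, so concurrent runs cannot clobber each other's vm*.tmp scratch files:
import subprocess
import tempfile
from pathlib import Path
def run_swarp_isolated(swarp_command: list) -> Path:
    # Each invocation gets its own cwd; swarp then writes its vm*.tmp
    # scratch files there instead of into a shared directory.
    workdir = Path(tempfile.mkdtemp(prefix="swarp_"))
    subprocess.run(swarp_command, cwd=workdir, check=True)
    return workdir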
|
2025-04-01T04:35:57.342113
| 2023-07-05T19:02:12
|
1790114212
|
{
"authors": [
"coveralls",
"saarahhall"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12096",
"repo": "winter-telescope/mirar",
"url": "https://github.com/winter-telescope/mirar/pull/446"
}
|
gharchive/pull-request
|
sedmv2 test: transient pipeline
Progress for #422
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
Warning: This coverage report may be inaccurate.
This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.
For more information on this, see Tracking coverage changes with pull request builds.
To avoid this issue with future PRs, see these Recommended CI Configurations.
For a quick fix, rebase this PR at GitHub. Your next report should be accurate.
Details
68 of 70 (97.14%) changed or added relevant lines in 4 files are covered.
11 unchanged lines in 1 file lost coverage.
Overall coverage increased (+0.9%) to 81.161%
Changes Missing Coverage:
mirar/pipelines/sedmv2/load_sedmv2_image.py: 60 of 61 changed/added lines covered (98.36%)
mirar/pipelines/sedmv2/sedmv2_pipeline.py: 5 of 6 changed/added lines covered (83.33%)
Files with Coverage Reduction:
mirar/processors/utils/multi_ext_parser.py: 11 new missed lines (28.57%)
Totals:
Change from base Build<PHONE_NUMBER>: 0.9%
Covered Lines: 8431
Relevant Lines: 10388
- Coveralls
|
2025-04-01T04:35:57.462983
| 2017-08-23T12:51:18
|
252270317
|
{
"authors": [
"Divran",
"SlipwayServers",
"bigdogmat"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12097",
"repo": "wiremod/wire",
"url": "https://github.com/wiremod/wire/issues/1461"
}
|
gharchive/issue
|
Wire Ranger Breaks when you disable the Ranger E2 extension.
Disable the Ranger E2 Extension (I'm doing this in the Q admin menu) and restart (I'm doing this on a dedicated server, if it matters), and this is what happens:
attempted to use tool "wire_ranger"
used the tool wire_ranger on maps/rp_world_kdg.bsp
[ERROR] addons/____________wire-master/lua/wire/server/wirelib.lua:257: attempt to index a nil value
1. AdjustSpecialOutputs - addons/____________wire-master/lua/wire/server/wirelib.lua:257
2. Setup - addons/____________wire-master/lua/entities/gmod_wire_ranger.lua:101
3. MakeWireEnt - addons/____________wire-master/lua/wire/server/wirelib.lua:1047
4. MakeEnt - addons/____________wire-master/lua/wire/tool_loader.lua:65
5. LeftClick_Make - addons/____________wire-master/lua/wire/tool_loader.lua:56
6. LeftClick - addons/____________wire-master/lua/wire/tool_loader.lua:37
7. unknown - gamemodes/sandbox/entities/weapons/gmod_tool/shared.lua:227
I then turn the ranger extension back on and wait a few seconds for it to reload, then Rangers work again.
> Calling destructors for all Expression 2 chips.
> Reloading Expression 2 extensions.
> Calling constructors for all Expression 2 chips.
> Done reloading Expression 2 extensions.
> attempted to use tool "wire_ranger"
> used the tool wire_ranger on maps/rp_world_kdg.bsp
> attempted to use tool "wire_ranger"
> used the tool wire_ranger on maps/rp_world_kdg.bsp
When they "dont" work, they appear, but you cannot press Z to remove them, and walking into them makes them fall to the ground regardless of settings. Every other ent that we allow on the server works fine with all E2 extensions turned off, except for the Ranger.
I'm not sure if this is intended behaviour, as I would expect a Ranger to work without E2 extensions turned on.
I checked the code and there's nothing in it that makes it rely on E2 at all. You could delete E2 completely and it would still work.
> I checked the code and there's nothing in it that makes it rely on E2 at all. You could delete E2 completely and it would still work.
It relies on the ranger data type, which is created by E2. By default there are only these data types,
https://github.com/wiremod/wire/blob/b534c97637ebb1c2445cdff9ab47e7793703ca3b/lua/wire/server/wirelib.lua#L72-L108
The ranger one is added when the type is registered in E2 here,
https://github.com/wiremod/wire/blob/f374d5bbcef78667aaf4a985f50a9553e3c98112/lua/entities/gmod_wire_expression2/core/init.lua#L110-L116
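One possible direction, as a hedged sketch only (not a patch from this thread): register the RANGER type alongside the defaults in wirelib.lua, so the tool no longer depends on the E2 extension being loaded:
-- Sketch: make the RANGER data type available even when E2 is disabled.
WireLib.DT = WireLib.DT or {}
-- There is no sensible zero-value for a trace result, so leave Zero unset.
WireLib.DT.RANGER = WireLib.DT.RANGER or {}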
yep, I already edited my post
|
2025-04-01T04:35:57.464717
| 2018-10-31T05:06:57
|
375800582
|
{
"authors": [
"thegrb93"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12098",
"repo": "wiremod/wire",
"url": "https://github.com/wiremod/wire/pull/1747"
}
|
gharchive/pull-request
|
Made wire constraint so wired entities don't need other constraints for duplicator to track them.
Fixes #339
Untested still, but putting here for review.
Tested and working
Still need to fix some stuff in advdupe2 to work, but it works in garry dupe.
Ok it works now
Haven't seen any errors on my server with this.
|
2025-04-01T04:35:57.466251
| 2024-01-18T12:09:52
|
2088156702
|
{
"authors": [
"dr4hcu5-jan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12099",
"repo": "wisdom-oss/microservice-middlewares",
"url": "https://github.com/wisdom-oss/microservice-middlewares/issues/5"
}
|
gharchive/issue
|
Service Identifier missing
In responses using the v3 package, the service identifier is not prepended to the error code, making it harder to identify the microservice actually responsible for the error.
The source of the error has been identified: the function that wraps an internal error before sending it does not receive the service name that it requires (https://github.com/wisdom-oss/commonTypes/blob/c88dee01eecac0049106f15af65d49c9d7a31823/error.go#L41)
|
2025-04-01T04:35:57.526011
| 2024-07-25T18:47:32
|
2430770136
|
{
"authors": [
"astrobot-houston",
"thomasbnt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12100",
"repo": "withastro/docs",
"url": "https://github.com/withastro/docs/pull/8924"
}
|
gharchive/pull-request
|
i18n(fr): Update tutorial/5-astro-api/2.mdx from #8907
Description (required)
Update tutorial/5-astro-api/2.mdx from #8907
Related issues & labels (optional)
Suggested label: i18n
Linked PR : https://github.com/withastro/docs/pull/8907
Lunaria Status Overview
This pull request will trigger status changes.
Learn more
By default, every PR changing files present in the Lunaria configuration's files property will be considered and trigger status changes accordingly.
You can change this by adding one of the keywords present in the ignoreKeywords property in your Lunaria configuration file in the PR's title (ignoring all files) or by including a tracker directive in the merged commit's description.
Tracked Files:
fr: tutorial/5-astro-api/2.mdx (Localization changed, will be marked as complete.)
Warnings reference:
The source for this localization has been updated since the creation of this pull request; make sure all changes in the source have been applied.
|
2025-04-01T04:35:57.532318
| 2024-11-02T05:19:44
|
2630224277
|
{
"authors": [
"astrobot-houston",
"jsparkdev"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12101",
"repo": "withastro/starlight",
"url": "https://github.com/withastro/starlight/pull/2556"
}
|
gharchive/pull-request
|
i18n(ko-KR): update plugins.mdx
Description
update plugins.mdx
#2549
Lunaria Status Overview
This pull request will trigger status changes.
Learn more
By default, every PR changing files present in the Lunaria configuration's files property will be considered and trigger status changes accordingly.
You can change this by adding one of the keywords present in the ignoreKeywords property in your Lunaria configuration file in the PR's title (ignoring all files) or by including a tracker directive in the merged commit's description.
Tracked Files:
ko: resources/plugins.mdx (Localization changed, will be marked as complete.)
Warnings reference:
The source for this localization has been updated since the creation of this pull request; make sure all changes in the source have been applied.
|
2025-04-01T04:35:57.558685
| 2021-12-03T12:46:20
|
1070555734
|
{
"authors": [
"withtwoemms"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12102",
"repo": "withtwoemms/ucon",
"url": "https://github.com/withtwoemms/ucon/issues/24"
}
|
gharchive/issue
|
Package not installed
The package is missing when the distribution is installed.
"~/.../site-packages/ucon-0.2.1.dist-info/" is the only artifact present after install so the source isn't routable and so leads to this sort of error:
ModuleNotFoundError: No module named 'ucon'
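A common cause of this symptom (hedged; the actual fix that shipped in v0.2.2rc1 is not shown in this thread) is a build configuration that never declares the package, so only the dist-info metadata gets installed. A minimal setuptools sketch that does ship the package:
from setuptools import setup, find_packages
setup(
    name="ucon",
    version="0.2.2rc1",
    # Without an explicit packages list (or find_packages()), the built
    # wheel can contain only metadata and no importable modules.
    packages=find_packages(exclude=("tests",)),
)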
v0.2.2rc1 accepted!
|
2025-04-01T04:35:57.574548
| 2023-07-27T19:41:47
|
1825081799
|
{
"authors": [
"17Amir17",
"jeffFG",
"noomorph",
"pagman77",
"sliangreal"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12103",
"repo": "wix/Detox",
"url": "https://github.com/wix/Detox/issues/4146"
}
|
gharchive/issue
|
typeText() and replaceText() not registering React onChange() event in WebView
What happened?
NOTE: Keyboard Hardware has been DISABLED.
I am writing some tests to get through the authorization in my application, the flow of which is currently in a WebView. I have tried just about everything to get the text to register the onChange event. The onChange() event on the input field needs to fire, as it has a regex check to make sure the email address is valid before rendering the submit button.
The correct element is being targeted
I can see the text being entered into the field
Keyboard Hardware has been disabled in the simulator
I am trying to assert that the value of the element has changed, and the test shows that the value of the input element is an empty string.
What was the expected behaviour?
I expect the onChange event to fire, and the value of the text in the element to be the value that was added with typeText().
After the test suite fails, if you click the field and hit the delete button on the simulator once, the onChange event fires and the text field updates as intended. Why does this not happen when typeText is called?
Was it tested on latest Detox?
[X] I have tested this issue on the latest Detox release and it still reproduces.
Did your test throw out a timeout?
[X] I have followed the instructions under Identifying which synchronization mechanism causes us to wait too much.
Help us reproduce this issue!
No response
In what environment did this happen?
Detox version: 20.11.1
React Native version: 0.69.10
Has Fabric (React Native's new rendering system) enabled: no
Node version: 16.10.0
Device model: Pixel4
Android version: API31(Android 12) and API33(Android13)
Test-runner (select one): jest 29.6.1
Detox logs
N/A - Test suite working as intended
Device logs
N/A - Devices working as intended
More data, please!
I have tried MANY combinations of various things such as:
Targeting the element
Targeting the class
typeText, clearText, typeText
replaceText, clearText, replaceText
focusing the element before input
tapping the screen after the input
Seeing this too
@pagman77, it would be helpful if you could provide a minimal repro (HTML excerpt or URL), so we can set up a webview and review the test scenario with the reported bug.
Oh, I see. I can kinda see a hypothetical reason why onChange might not be fired if you type text and don't change focus, but I'm not sure if this is that case.
I manually triggered it like this in the end (have a react app in webview)
await web.element(by.web.cssSelector('[data-test="login-password-input"]')).runScript(`
function type (element) {
  // Use the native HTMLInputElement value setter, not the instance property
  // (which React overrides to track controlled inputs).
  const nativeInputValueSetter = Object.getOwnPropertyDescriptor(window.HTMLInputElement.prototype, "value").set;
  nativeInputValueSetter.call(element, 'password');
  // Dispatch a bubbling 'input' event so React's onChange fires.
  element.dispatchEvent(new Event('input', { bubbles: true }));
}
`)
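(Hedged explanation of why this works: React installs its own value tracker on controlled inputs, so values written through the ordinary element.value path can be deduplicated and never surface as an onChange; setting the value via the prototype's native setter and then dispatching a bubbling input event bypasses that tracker, which typeText apparently does not.)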
you are hero
@17Amir17 This is actually magic, nice work!
Confirmed to still work in Detox v20.27.0.
|
2025-04-01T04:35:57.602180
| 2015-02-05T17:12:27
|
56699465
|
{
"authors": [
"avi",
"damianmr",
"jonashagstedt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12104",
"repo": "wix/react-templates",
"url": "https://github.com/wix/react-templates/issues/10"
}
|
gharchive/issue
|
New lines around a property output [object Object] rather than the HTML
<!-- This works -->
<ul class="children">{this.props.children}</ul>
<!-- This shows [object Object] (Chrome's idea of showing an object as text) -->
<ul class="children">
{this.props.children}
</ul>
(other than React templates are great!)
Fixed... and added a test so it won't break in the future.
Will publish in a couple of days
Nice one.
I'll give it a go as soon as it's pushed.
Cheers
Ah, thanks. I ran into this problem and thought that using {this.props.children} was something ReactTemplates didn't support.
|
2025-04-01T04:35:57.664433
| 2022-07-11T19:18:04
|
1301117996
|
{
"authors": [
"korniko98"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12105",
"repo": "wiz-sec/open-cvdb",
"url": "https://github.com/wiz-sec/open-cvdb/issues/47"
}
|
gharchive/issue
|
GuardDuty detection bypass via cloudtrail
Discussed in https://github.com/wiz-sec/open-cvdb/discussions/44
Originally posted by megiddoa July 8, 2022
https://www.cloudvulndb.org/guardduty-cloudtrail-bypass
Please note that this GuardDuty detection bypass was remediated shortly after it was identified by Rhino Labs 2 years ago. GuardDuty will detect if this technique is used to disable CloudTrail logging.
Updated the issue to reflect the fix
|
2025-04-01T04:35:57.669664
| 2024-09-06T15:29:27
|
2510689212
|
{
"authors": [
"BenWestgate"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12106",
"repo": "wizardsardine/liana",
"url": "https://github.com/wizardsardine/liana/issues/1283"
}
|
gharchive/issue
|
typo: "ability to sign you backup"
During wallet creation, the more info tab about descriptors says this.
https://github.com/wizardsardine/liana/blob/b9aaea14296612df7360131ba57bbd457fa6f3ad/gui/src/installer/prompt.rs#L2
is the file with the error. I'll open a PR to fix it.
|
2025-04-01T04:35:57.687360
| 2020-08-26T20:10:00
|
686585591
|
{
"authors": [
"OneBusyBrain",
"Vishrtr",
"rocky8roy",
"tauqeer-crewlogix",
"wlanjie"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12107",
"repo": "wlanjie/trinity",
"url": "https://github.com/wlanjie/trinity/issues/74"
}
|
gharchive/issue
|
List of all Issues in this wonderful Library (eg Memory issue and other issues)
The app crashes randomly in many places, for example:
When editing a video, dragging quickly back and forth in the timeline RecyclerView crashes the app.
Clicking export crashes randomly on many phones; I guess it's again a memory issue.
The added sound/music doesn't get selected most of the time on the recording screen.
The video export process is very nicely done, as it processes the effects and other stuff very fast (note: wherever it works).
Anyone who has solved the above issues, kindly reply here or connect with me at <EMAIL_ADDRESS>. Will be happy to help each other.
A version will be sent to jcenter recently
A version will be sent to jcenter recently
I can see the latest version is v<IP_ADDRESS> everywhere.
When will the new version be sent to jcenter?
@rocky8roy have you found the latest version?
The latest version will be submitted in a few days, and the document will be updated after submission
@wlanjie You guys are doing some really good stuff. I am also using this library and waiting for the next version. Can you tell me when the next version will be released?
Not sure, it may be within 10 days
I will submit aar and documents to jcenter, but currently the code will not be pushed to master
OK, then I'll wait for the next version.
When will the new version be submitted?
Any update on the bug fixes ?
I want to know when the new update will be available.
|
2025-04-01T04:35:57.689551
| 2021-02-08T12:56:41
|
803540723
|
{
"authors": [
"gnsuryan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12108",
"repo": "wls-eng/arm-oraclelinux-wls",
"url": "https://github.com/wls-eng/arm-oraclelinux-wls/issues/244"
}
|
gharchive/issue
|
Configured Cluster - dependency issues and handling scenario in Coherence setup where admin port 7001 is disabled
This issue is now resolved
|
2025-04-01T04:35:57.703710
| 2024-08-23T02:37:16
|
2482123983
|
{
"authors": [
"maaikelimper",
"tomkralidis"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12109",
"repo": "wmo-im/wis2box",
"url": "https://github.com/wmo-im/wis2box/pull/748"
}
|
gharchive/pull-request
|
suppress daemon errors (#747)
Fixes #747
awesome, I did a quick test to ensure all the main wis2box-ctl.py commands still work as expected and I confirm the daemon errors are gone
|
2025-04-01T04:35:57.720710
| 2019-07-11T10:06:17
|
466795475
|
{
"authors": [
"JohnForster",
"wolfgang42"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12110",
"repo": "wolfgang42/rockstar-js",
"url": "https://github.com/wolfgang42/rockstar-js/issues/41"
}
|
gharchive/issue
|
All variables are global
Scope isn't preserved when defining variables.
SomeFunc takes my x
my y is 5
Give back my x with my y
Result is SomeFunc taking 6 (logs 11)
Shout Result
Shout my y (logs 5, should be undefined)
this should convert to
function SomeFunc(myx) {
  var myy = 5;
  return myx + myy;
}
var Result = SomeFunc(6);
console.log(Result); // 11
console.log(myy); // myy should not be visible here
but instead converts to
function SomeFunc(myx) {
  myy = 5;
  return myx + myy;
}
Result = SomeFunc(6);
console.log(Result);
console.log(myy); // logs 5: myy leaked to the global scope
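For the record, here is one way a transpiler can avoid emitting implicit globals; this is a sketch of the general technique, not rockstar-js's actual code:
// Track which names are declared in the current (or an enclosing) function
// scope, and emit `var` on a name's first assignment in that scope.
function makeScope(parent = null) {
  return { names: new Set(), parent };
}
function isDeclared(scope, name) {
  for (let s = scope; s; s = s.parent) {
    if (s.names.has(name)) return true;
  }
  return false;
}
function emitAssignment(scope, name, valueJs) {
  if (isDeclared(scope, name)) return `${name} = ${valueJs};`;
  scope.names.add(name); // first assignment declares the variable
  return `var ${name} = ${valueJs};`;
}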
(On a side note: the keyword 'a' doesn't seem to be recognised, single-letter variables such as X and Y aren't recognised, and comments aren't preserved either.)
This looks at first glance like a legit bug; is there a reason you closed it?
|
2025-04-01T04:35:57.735326
| 2020-03-12T12:23:52
|
579902373
|
{
"authors": [
"Martin-Bloom",
"cdll"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12111",
"repo": "woltapp/react-router-query-params",
"url": "https://github.com/woltapp/react-router-query-params/issues/3"
}
|
gharchive/issue
|
setQueryParams should provide a history.replace option
Most of the time, when I set query params I don't want to add an entry in the browser history, I just want to update the current entry.
setQueryParams could provide an option to choose whether it will history.push or history.replace.
This is a feature request.
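Something like the following is what I have in mind; the replace option here is hypothetical and does not exist in the library today:
// Current behaviour: pushes a new browser history entry.
setQueryParams({ page: 2 });
// Requested: update the query string in place via history.replace.
setQueryParams({ page: 2 }, { replace: true });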
same issue here~
|
2025-04-01T04:35:57.736408
| 2023-03-22T04:04:11
|
1635019226
|
{
"authors": [
"kumarankit999",
"lornajane"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12112",
"repo": "womenofopensource/womenofopensource.github.io",
"url": "https://github.com/womenofopensource/womenofopensource.github.io/pull/7"
}
|
gharchive/pull-request
|
Added a Support Section
Added a Support Section
Hi, thanks for opening a pull request on the project. It's not clear to me how this supports the aims of the project; could you say something more about why this change would be of value?
Still not sure what this is intended to fix, so I'll close.
|
2025-04-01T04:35:58.030918
| 2023-11-10T14:27:47
|
1987738060
|
{
"authors": [
"kohlerdominik",
"woodjme"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12113",
"repo": "woodjme/unifi-hotspot",
"url": "https://github.com/woodjme/unifi-hotspot/issues/56"
}
|
gharchive/issue
|
Login via SAML Provider (AzureAD, Google, ...)
Hi @woodjme
I am researching a solution to authorize our users through their Office365 login.
Did you consider, or maybe already test, integrating this type of functionality?
Here's the Microsoft Tutorial on how to authorize through O365. https://learn.microsoft.com/en-us/entra/identity-platform/tutorial-v2-nodejs-webapp-msal
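For reference, a minimal sketch of that flow with @azure/msal-node, following the linked tutorial; all IDs, secrets, and URLs below are placeholders:
const msal = require('@azure/msal-node');
// Placeholder credentials from an Azure AD app registration.
const cca = new msal.ConfidentialClientApplication({
  auth: {
    clientId: '<app-client-id>',
    authority: 'https://login.microsoftonline.com/<tenant-id>',
    clientSecret: '<client-secret>',
  },
});
// Step 1: send the captive-portal user to the Microsoft sign-in page.
function loginUrl() {
  return cca.getAuthCodeUrl({
    scopes: ['user.read'],
    redirectUri: 'https://hotspot.example.com/redirect',
  });
}
// Step 2: on the redirect endpoint, exchange the auth code for tokens,
// then authorize the guest against the UniFi controller as usual.
async function handleRedirect(code) {
  const result = await cca.acquireTokenByCode({
    code,
    scopes: ['user.read'],
    redirectUri: 'https://hotspot.example.com/redirect',
  });
  return result.account; // the signed-in Office365 user
}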
I would be willing to work on a Pull Request for this, but only if it's something that you would consider merging.
Hey @kohlerdominik
I've not considered this functionality before but I believe it's definitely a great idea and would be keen to have it included. I'll happily review a PR for this and will also add some tests for it, albeit, likely after your initial PR.
Thank you for your response. I don't know when I will find time, but I definitely plan to do at least a proof of concept.
Hey @kohlerdominik
Did you manage to make any progress here? If not I may pick this up.
Hey @woodjme
Unfortunately, I didn't find time for this so far, and it doesn't look great for the next months either.
However, if you would realize this feature, I would offer to alpha/beta test it on one of our sites.
|
2025-04-01T04:35:58.032422
| 2024-11-22T11:27:27
|
2682886248
|
{
"authors": [
"6543",
"anbraten"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12114",
"repo": "woodpecker-ci/plugin-ready-release-go",
"url": "https://github.com/woodpecker-ci/plugin-ready-release-go/pull/237"
}
|
gharchive/pull-request
|
Separate pr-labels to changes code into an analyser
This change moves the logic that converts commits to changes using PR labels into a separate analyser concept, so that new analysis logic (conventional commits, etc.) can potentially be added later on.
I optimized the docker build process: https://github.com/woodpecker-ci/plugin-ready-release-go/pull/244 so it should fail less
|
2025-04-01T04:35:58.295417
| 2017-05-23T17:54:15
|
230792957
|
{
"authors": [
"shokry-suleiman",
"watilde"
],
"license": "ISC",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12115",
"repo": "workshopper/how-to-npm",
"url": "https://github.com/workshopper/how-to-npm/issues/125"
}
|
gharchive/issue
|
Error, when trying to reset registry
That's exactly what happens when I try any challenge of how-to-npm.
Hi! Can you possibly give us your registry.log? I think it's likely a permission error. The registry.log shouldn't be read-only.
refs:
https://msdn.microsoft.com/en-us/library/bb727008.aspx
Okay
|
2025-04-01T04:35:58.306739
| 2023-05-22T22:37:16
|
1720725485
|
{
"authors": [
"giofsantos11",
"gronert-m"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12116",
"repo": "worldbank/gld",
"url": "https://github.com/worldbank/gld/pull/472"
}
|
gharchive/pull-request
|
Bgd tza simple
Pull request #471 has some amendments to other files (e.g. CSD of Turkey).
This is just the changes to BGD and TZA (plus a small change to the survey series checks).
@giofsantos11 please give it a last look that it includes everything in your original pull request. If OK, accept and close #471 .
Small update: @giofsantos11 I am no longer seeing the changes to other files in #471. I don't know where I was seeing that, whether I imagined it or something has changed. Odd.
In any case, please still proceed with this pull as it has the quality checks changes. Apologies for any inconvenience.
Ok, @gronert-m ! Thanks!
|
2025-04-01T04:35:58.342668
| 2015-10-13T18:44:22
|
111243279
|
{
"authors": [
"LuisHerranz",
"steph643"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12118",
"repo": "worona/meteorflux",
"url": "https://github.com/worona/meteorflux/issues/5"
}
|
gharchive/issue
|
AppState feature request: scoped Blaze global helpers
I would prefer to use global helpers like this:
<template name='VideoAuthor'>
Author name is {{AppState.videoAuthor.name}}.
</template>
Otherwise I need to prefix my root path in order to avoid mix-ups with template helpers...
Interesting. We can add it to the initialisation options:
AppState = new MeteorFlux.AppState({ prefix: 'state' });
Would that be ok?
That looks great.
Global helpers make sense in the case of a singleton object. So, to me, global helpers should be part of AppState, not ReactiveState.
Totally true, they should be in a different package
Just spent 1 hour because of this. Suppose you use AppState.set('doc', object) in your code, then the following code has very unexpected results:
<template name=myTemplate1>
{{>myTemplate2 doc=whatever}}
</template>
<!-- Args: doc -->
<template name=myTemplate2>
<div>{{doc.name}}</div> <!-- Here doc is not the template parameter! -->
</template>
That's not because of AppState, but because of how Meteor global Template helpers work.
If you do:
Template.registerHelper('doc', function() {
return object;
});
it will happen the same.
If you make a PR with a namespace option for ReactiveState I will accept it.
For example:
AppState = new ReactiveState({
namespace: 'state'
});
<template name=myTemplate2>
<div>{{doc.name}}</div> <!-- Here doc is the template parameter -->
<div>{{state.doc}}</div> <!-- Here doc is the AppState parameter -->
</template>
|
2025-04-01T04:35:58.441548
| 2015-03-31T03:12:40
|
65354171
|
{
"authors": [
"eduardohertz",
"rtopitt",
"tauil",
"wpolicarpo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12119",
"repo": "wpolicarpo/got-board",
"url": "https://github.com/wpolicarpo/got-board/pull/2"
}
|
gharchive/pull-request
|
Initial backend code
I have created two folders, one for the backend code, another for the frontend code. In the future, we may want to have two separate repos, one for the backend and another for the frontend.
I moved the original content in master (the assets) to the frontend folder.
The backend is a simple Rails application, which will host and persist the game world (board, cards, maps, units, players, etc) and serve a JSON API to the frontend code.
For now I have created the skeleton Rails app and some classes to represent the game's domain logic.
Please review and merge ASAP so as to minimise conflicts.
I like this, but I'd remove all HTML files from backend (layouts and public). They won't be used.
I liked the way you prototyped the models, but I was thinking... since this is going to be just a backend API, why not use Sinatra instead of Rails? I also agree with @eduardohertz about the layouts.
@wpolicarpo talked with me about writing the backend in SCALA http://www.scala-lang.org/ but I think we could just keep working with Ruby on the backend for a FAST prototype. We will already lose a lot of time learning new stuff on the front-end....
+1 for the models.
We could try a simple rack app + grape using this modeling, maybe trying
something like Scalatra or Play in the future, just for fun.
:+1:
|
2025-04-01T04:35:58.454213
| 2022-12-02T02:48:05
|
1472230813
|
{
"authors": [
"dulithsenanayake",
"tharikaGitHub"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12120",
"repo": "wso2/apim-apps",
"url": "https://github.com/wso2/apim-apps/pull/393"
}
|
gharchive/pull-request
|
Update license header & add new line at EOF
This PR fixes the comments in https://github.com/wso2/apim-apps/pull/248
Hi @dulithsenanayake, are we using the peer-test-4.1.0 branch now? If not, we can close this.
Closing this PR as this branch is not used now.
|
2025-04-01T04:35:58.505007
| 2017-02-01T12:41:22
|
204573680
|
{
"authors": [
"ayshsandu",
"this"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12122",
"repo": "wso2/carbon-uuf",
"url": "https://github.com/wso2/carbon-uuf/issues/182"
}
|
gharchive/issue
|
Cannot use sendRedirect within try block
When sendRedirect is used within a try block, an exception is caught in the catch block.
Closed as repo is deprecated.
|
2025-04-01T04:35:58.507191
| 2023-10-06T10:37:04
|
1929886551
|
{
"authors": [
"CLAassistant",
"udhanMti"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12123",
"repo": "wso2/docs-choreo-dev",
"url": "https://github.com/wso2/docs-choreo-dev/pull/1140"
}
|
gharchive/pull-request
|
Create pipeline yaml to upload doc site content to storage account
Purpose
$subject
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T04:35:58.509963
| 2019-10-09T14:55:04
|
504706447
|
{
"authors": [
"CLAassistant",
"sajithaliyanage"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12124",
"repo": "wso2/docs-ei",
"url": "https://github.com/wso2/docs-ei/pull/813"
}
|
gharchive/pull-request
|
Add jms sample for k8s samples
Purpose
Add jms sample for k8s samples
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution. 1 out of 2 committers have signed the CLA. :white_check_mark: sajithaliyanage :x: nilminiwso2. You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T04:35:58.554281
| 2023-02-02T09:13:26
|
1567628755
|
{
"authors": [
"Sachin-Mamoru",
"ashensw"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12125",
"repo": "wso2/product-is",
"url": "https://github.com/wso2/product-is/issues/15460"
}
|
gharchive/issue
|
Scaling and Performance recommendations for WSO2 Identity Server
Is your suggestion related to an experience? Please describe.
Based on different requirements, scaling and performance recommendations for WSO2 Identity Server must be properly evaluated and documented. We need to revisit the existing way of presenting the performance numbers in IS and make the required changes to make it more informative for the following scenarios.
Catering for peak traffic (high concurrency)
Sustainable traffic
Benchmark to decide on horizontal scaling
Concurrency - no of users who can log in at a given time
Response time - time to perform a login
How many nodes are required to handle x concurrency within y seconds of added latency from the IS side? (TPS shouldn't be the main indicator as the output of the documents)
Load test script should reflect real-world scenarios (should counter user input delay)
Improve the performance stats representation to cater to the above scenarios.
Competitive Analysis
Our initial step was conducting a competitive analysis [1], through which we identified the various aspects that require attention for scaling and recommending performance enhancements for the identity server.
We acknowledged that utilizing more graphical representations would provide better clarity, and metrics such as CPU utilization could assist in identifying bottlenecks. To accomplish this, we can utilize the Apache JMeter dashboard [2], coupled with the Merge Results JMeter plugin [3].
Response Times Over Time - SAML2 SSO Redirect Binding [2 Node 2 Core Deployment]
CPU Utilization Over Time - SAML2 SSO Redirect Binding [2 Node 2 Core Deployment]
The following information can be extracted using the aws cli [1][2] capabilities.
IS Instance 01 - CPU utilization (%)
IS Instance 02 - CPU utilization (%)
DB Instance - CPU utilization (%)
[1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/US_SingleMetricPerInstance.html
[2] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/metrics_dimensions.html
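For example, the per-node CPU series can be pulled from CloudWatch; a minimal sketch with boto3, where the instance ID and the one-hour window are placeholders:
import datetime
import boto3
cloudwatch = boto3.client("cloudwatch")
# Average CPU utilization of one IS node over the load-test window, per minute.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=60,
    Statistics=["Average"],
)
points = sorted(response["Datapoints"], key=lambda p: p["Timestamp"])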
We are considering only the critical path when publishing performance benchmarks for WSO2 Identity Server.
Based on feedback from the SA team, we will remove throughput data from the benchmarks as it is not essential for capacity planning.
Furthermore, as an improvement, we have introduced a random delay for the related test cases to simulate a real-world scenario. In real-world scenarios, there is usually a delay in the login page when the end user enters login credentials. By incorporating this delay, we can expect the response time to be more reflective of actual scenarios.
Incorporated both 3-node and 4-node deployments and produced corresponding performance metrics.
Incorporated burst traffic into our performance test plan to showcase how the system handles sudden increases in traffic.
For more information please refer to the mail thread - Presenting our performance results in an optimal way to do capacity planning for our customers
Feel free to provide your suggestions for improvements.
As per the requested changes the updated performance results were published for the following flows.
SAML2 SSO Redirect Binding
OIDC Auth Code Grant Redirect With Consent
Client Credentials Grant Type
OIDC Password Grant Type
Next steps:
Onboard following scenarios based on priority
OIDC password grant including user attributes and groups in the id_token
OIDC password grant including user attributes and groups in the id_token and roles in the access token
JWT bearer grant including retrieving user attributes - this could give insights into signature verification performance
OIDC authorization code grant including user attributes without consent
OIDC authorization code grant including user attributes and consent
We have published performance results for a selected set of scenarios of IS 6.1.0 with the enhanced representation of the performance metrics. We have addressed all the requested suggestions and improved the representation.
Following are the selected set of performance test flows.
Client Credentials Grant Type
OIDC Auth Code Grant Redirect With Consent
OIDC Auth Code Grant Redirect Without Consent (Please note that the results added for this scenario are only a sample)
OIDC Auth Code Grant Redirect Without Consent Retrieving User Attributes
OIDC Auth Code Grant Redirect Without Consent Retrieving User Attributes and Groups
OIDC Auth Code Grant Redirect Without Consent Retrieving User Attributes, Groups and Roles
OIDC Password Grant Type
OIDC Password Grant Type Retrieving User Attributes
OIDC Password Grant Type Retrieving User Attributes and Groups
OIDC Password Grant Type Retrieving User Attributes, Groups and Roles
SAML2 SSO Redirect Binding
Additionally, we have identified that in the OIDC Auth Code Grant Redirect With Consent flow, when we request user attributes in the access token or the id token, the AWS RDS database goes to 100% CPU utilization even at 500 concurrency. We are currently tracking this in issue [2] and will analyse it further.
See the summary-graph.md [1] file, where we have provided comparison performance plots of the tested flows.
[1] https://github.com/wso2/performance-is/blob/performance-graphs/benchmarks/6.1.0/performance_visualization_v2/summary-graph.md
Finalized Performance Test Flows - 7.0.0 Release
Client Credentials Grant Type
OIDC Auth Code Grant Redirect With Consent - With only random scopes (No user attributes or groups or roles are requested.)
OIDC Auth Code Grant Redirect Without Consent - With only random scopes (No user attributes or groups or roles are requested.)
OIDC Auth Code Grant Redirect Without Consent Retrieve User Attributes, Groups and Roles
Burst Traffic with OIDC Auth Code Grant Redirect Without Consent Retrieving User Attributes
OIDC Password Grant Type - With only random scopes (No user attributes or groups or roles are requested.)
SAML2 SSO Redirect Binding
Token Exchange Grant type
API-based authentication flow [1] (Added to the roadmap)
Finalized summary - https://github.com/wso2/performance-is/blob/performance-graphs/benchmarks/6.1.0/performance_visualization_v2/summary-graph.md
[1] https://github.com/wso2/product-is/issues/17060
Mail thread - Presenting our performance results in an optimal way to do capacity planning for our customers
|
2025-04-01T04:35:58.557966
| 2023-11-20T12:15:07
|
2002073588
|
{
"authors": [
"AnuradhaSK",
"PasinduYeshan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12126",
"repo": "wso2/product-is",
"url": "https://github.com/wso2/product-is/issues/17965"
}
|
gharchive/issue
|
Getting 403 for POST https://localhost:9443/t/carbon.super/o/scim2/Roles/.search when onboarding users to organization
Describe the issue:
Try to onboard a user to the organization.
API call POST https://localhost:9443/t/carbon.super/o/scim2/Roles/.search will return 403
https://github.com/wso2/product-is/assets/25483865/a501e091-db0c-448a-a087-e54ad285eddd
The invoked API is wrong; the v2 part is missing:
/o/scim2/v2/Roles/
Couldn't reproduce this issue.
Incorrect listing role API is addressed through the following PR.
https://github.com/wso2/identity-apps/pull/4760
|
2025-04-01T04:35:58.559843
| 2018-02-20T08:38:26
|
298510053
|
{
"authors": [
"darshanasbg",
"isuruuy429"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12127",
"repo": "wso2/product-is",
"url": "https://github.com/wso2/product-is/issues/2508"
}
|
gharchive/issue
|
Redirection to older version of doc
Document: https://docs.wso2.com/display/IS550/Configuring+Claims+for+a+Service+Provider
Section: Related topcs/ Logging in to Salesforce with Facebook
Above link redirects to an older version of the document.
Type/Docs
Severity/Major
Priority/High
Fixed now.
@sherenewso2 Please verify
|
2025-04-01T04:35:58.561257
| 2018-06-19T09:24:37
|
333588741
|
{
"authors": [
"isharak",
"pulasthi7"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12128",
"repo": "wso2/product-is",
"url": "https://github.com/wso2/product-is/issues/3311"
}
|
gharchive/issue
|
Dispatch sample app login as null user for failures
The following is observed for IdP failures:
Thank you for your contribution!
We are closing this issue since it has not been prioritized for a long time. Chances are that it has already been solved in more recent versions. If not, we will be re-evaluating this when it becomes a priority.
|