| added (string, date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T04:35:30.391685
| 2024-03-09T06:55:18
|
2177090211
|
{
"authors": [
"renatoargh",
"simonmcallister0210",
"zolbooo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10811",
"repo": "simonmcallister0210/cognito-srp-helper",
"url": "https://github.com/simonmcallister0210/cognito-srp-helper/issues/26"
}
|
gharchive/issue
|
Incorrect password hash when using account alias or regular username
Hi there. I would like to thank you for the great job working on this useful library.
I have encountered a problem with using a username during sign in. For example, I call the InitiateAuth API with a username like <EMAIL_ADDRESS> and Cognito returns a different username like e85de12a-f6b9-4108-a7d6-2f97442ef56e to use in calculations.
Here is the example code:
const formData = new FormData(event.currentTarget);
const username = formData.get("username") as string;
const passwordHash = createPasswordHash(
username,
formData.get("password") as string,
poolId
);
const srpSession = createSrpSession(username, passwordHash, poolId);
const initiateResult = await initiateSRPAuth({
username,
srpA: srpSession.largeA,
});
if ("error" in initiateResult) {
// ...
}
const signedSession = signSrpSession(srpSession, {
ChallengeParameters: {
SRP_B: initiateResult.srpB,
SALT: initiateResult.salt,
SECRET_BLOCK: initiateResult.secretBlock,
},
});
const proceedResult = await proceedSRPAuth({
uid: initiateResult.uid, // This is USER_ID_FOR_SRP
secret: signedSession.secret,
timestamp: signedSession.timestamp,
passwordSignature: signedSession.passwordSignature,
});
if ("error" in proceedResult) {
// ...
}
Cognito responds with an incorrect password error, but regular password auth works when I try it. It took a whole day to find the cause, and I even compared your implementation with Amplify's.
Turns out, InitiateAuthCommand (initiateSRPAuth in the code above) returns USER_ID_FOR_SRP challenge parameter, which must be used when calculating password hash and signing session.
Here is a workaround:
// Create session with empty username and passwordHash,
// because we don't have the username yet, which is required to create the password hash.
const srpSession = createSrpSession("", "", poolId);
const initiateResult = await initiateSRPAuth({
username,
srpA: srpSession.largeA,
});
if ("error" in initiateResult) {
throw Error("TODO: handle error");
}
srpSession.username = initiateResult.uid;
srpSession.passwordHash = createPasswordHash(
// Important: use username from the Cognito response, not from the form
initiateResult.uid, // This is USER_ID_FOR_SRP
formData.get("password") as string,
poolId
);
const signedSession = signSrpSession(srpSession, {
ChallengeParameters: {
SRP_B: initiateResult.srpB,
SALT: initiateResult.salt,
SECRET_BLOCK: initiateResult.secretBlock,
},
});
// ...
This code does not provide initial values for username and passwordHash, and sets them only after the InitiateAuth command response. I think there is a need to create a new API like:
const srpSession = createSrpSession_v2(poolId);
// ...
const passwordHash = createPasswordHash_v2({
USER_ID_FOR_SRP: initiateAuthResult.ChallengeParameters.USER_ID_FOR_SRP,
password,
poolId,
});
const signedSession = signSrpSession_v2(srpSession, passwordHash, initiateAuthResult);
I'll have a look at this soon. I'll see if I can replicate the issue first
I used the latest versions of the libraries:
@aws-sdk/client-cognito-identity-provider: 3.525.0
cognito-srp-helper: 2.1.0
Thanks! I can see what you mean; I've managed to replicate the issue.
So for anyone else who comes across this issue, here's a bit of context. When you set up a Userpool you can configure the attribute used to sign in. At the time of writing you can allow users to sign in via Username, Email, or Phone Number. When you use an Email or Phone Number for sign-in, Cognito will still create a Username for the user, which is a unique ID. According to the docs you can use whatever attribute you want for API calls.
The problem we have is that the password hash we are generating needs to use the Username, not the Email or Phone Number. If you've configured Cognito to use Emails and you're running this library from the front-end, odds are you don't have access to the Username, only the Email. So we need to think of a way to somehow get this Username before we authenticate...
@zolbooo Your suggestion of extending the API to provide a function that creates a password hash after the InitiateAuth call sounds like it would work since we have access to InitiateAuthCommandOutput.ChallengeParameters.USER_ID_FOR_SRP at this point. I'd like to think about this a bit more to see if there's a way we can solve this issue without making changes to the API
Maybe we could modify signSrpSession to use USER_ID_FOR_SRP since we have access to USER_ID_FOR_SRP there also? I'll spend some time looking into potential solutions...
Here's the code I used to replicate the issue:
const {
createSecretHash,
createPasswordHash,
createSrpSession,
signSrpSession,
wrapAuthChallenge,
wrapInitiateAuth,
} = require("cognito-srp-helper");
const {
CognitoIdentityProviderClient,
InitiateAuthCommand,
RespondToAuthChallengeCommand,
} = require("@aws-sdk/client-cognito-identity-provider");
(async () => {
// const usernameUsername = "a2c2e290-6d0f-4a08-a5ca-f0162935f3a6"; // this works
// const usernameEmail =<EMAIL_ADDRESS>// this doesn't
const username = "a2c2e290-6d0f-4a08-a5ca-f0162935f3a6";
const password = "Qwerty1!";
const poolId = "eu-west-2_eYpv1mFHB";
const clientId = "18u8119jgbpr464n28s1itk2mq";
// const secretId = "" // no secret
const cognitoIdentityProviderClient = new CognitoIdentityProviderClient({
region: "eu-west-2",
});
// const secretHash = createSecretHash(username, clientId, secretId);
const passwordHash = createPasswordHash(username, password, poolId);
const srpSession = createSrpSession(username, passwordHash, poolId);
const initiateAuthRes = await cognitoIdentityProviderClient
.send(
new InitiateAuthCommand(
wrapInitiateAuth(srpSession, {
ClientId: clientId,
AuthFlow: "USER_SRP_AUTH",
AuthParameters: {
CHALLENGE_NAME: "SRP_A",
// SECRET_HASH: secretHash, // no secret
USERNAME: username,
},
}),
),
)
.catch((err) => {
console.error(err);
throw err;
});
console.log("initiateAuthRes:");
console.log(initiateAuthRes);
const signedSrpSession = signSrpSession(srpSession, initiateAuthRes);
console.log("signedSrpSession:");
console.log(signedSrpSession);
const respondToAuthChallengeRes = await cognitoIdentityProviderClient
.send(
new RespondToAuthChallengeCommand(
wrapAuthChallenge(signedSrpSession, {
ClientId: clientId,
ChallengeName: "PASSWORD_VERIFIER",
ChallengeResponses: {
// SECRET_HASH: secretHash, // not configured for this userpool
USERNAME: username,
},
}),
),
)
.catch((err) => {
console.error(err);
throw err;
});
console.log("respondToAuthChallengeRes:");
console.log(respondToAuthChallengeRes);
})();
We can overload the function signatures as follows:
function createSrpSession(username: string, passwordHash: string, poolId: string); // Old signature
function createSrpSession(poolId: string); // New one, we can ignore next 2 arguments
function signSrpSession(session: SrpSession, response: InitiateAuthResponse); // Old signature
function signSrpSession(session: SrpSession, response: InitiateAuthResponse, password: string); // New signature, we can check if third argument is provided
This way we can keep old behavior when functions are called with old arguments and implement new behavior when different arguments are provided.
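A minimal sketch of that overloading idea, assuming the type shapes and parameter names used in this thread (they mirror the proposal above, not the published cognito-srp-helper API):
import { createPasswordHash } from "cognito-srp-helper";
interface SrpSession {
  username: string;
  passwordHash: string;
  poolId: string;
  largeA: string;
}
interface InitiateAuthResponse {
  ChallengeParameters: {
    USER_ID_FOR_SRP: string;
    SRP_B: string;
    SALT: string;
    SECRET_BLOCK: string;
  };
}
// Old signature, kept for backwards compatibility.
function signSrpSessionV2(session: SrpSession, response: InitiateAuthResponse): SrpSession;
// New signature: when a plain password is supplied, hash it here using
// USER_ID_FOR_SRP from the InitiateAuth response instead of the form username.
function signSrpSessionV2(session: SrpSession, response: InitiateAuthResponse, password: string): SrpSession;
function signSrpSessionV2(session: SrpSession, response: InitiateAuthResponse, password?: string): SrpSession {
  if (password !== undefined) {
    const userIdForSrp = response.ChallengeParameters.USER_ID_FOR_SRP;
    session = {
      ...session,
      username: userIdForSrp,
      // createPasswordHash is the existing helper already used in this thread.
      passwordHash: createPasswordHash(userIdForSrp, password, session.poolId),
    };
  }
  // ...the existing signing logic would continue here, unchanged...
  return session;
}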
Hey @zolbooo. How about we allow the user to pass in unhashed passwords during session creation, then hash them at the point of signing? That would allow for email and phone-number login, and would only require one additional optional parameter in createSrpSession, so it would be backwards compatible with <= 2.1.0?
Sounds good to me.
Hey guys, I have the same problem. Do you have any plans on merging the discussed improvements? Thanks a lot for the great work in this lib :)
Hey @renatoargh . I do, just finished updating the integration tests recently, only have to update the docs now. Then I'll get it merged
Along the way I noticed the tests ran very slowly on Node v21; it takes ~20 seconds to sign the session. That's something I can address later though
Alright! Thanks for the heads up! I will keep using the workaround in the meantime! And again, thanks a lot!
Just merged and published the changes with v2.2.0; if there are any issues, let me know
|
2025-04-01T04:35:30.405111
| 2023-06-19T12:10:19
|
1763409376
|
{
"authors": [
"riccardokhm",
"simonoliver"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10812",
"repo": "simonoliver/UnityFigmaBridge",
"url": "https://github.com/simonoliver/UnityFigmaBridge/issues/22"
}
|
gharchive/issue
|
SVG Importer [FEATURE REQUEST]
Dear creators,
have you thought of including a svg importer?
Hey Riccardo,
Yep, it's definitely on the list to look at integrating with the Unity Vector Graphics API to support SVG images rather than server-side rendering for vectors - <EMAIL_ADDRESS> I think it would probably only be suitable for larger images though - for small images (icons/logos) there might be issues with small details and antialiasing.
Yep, agreed. Tell me if you need some help! It could be interesting to develop such functionality! As for server-side rendering, what do you mean exactly?
Yeah by all means if you have the time and want to look into it, any help is very welcome. I'm not super familiar with the vector graphics library (and compatibility with Unity UI). For server side rendering, at the moment some node types which aren't natively supported by the library (such as VECTOR and BOOLEAN_OPERATION) are rendered via the Figma API as images. You can see the way nodes are identified here - https://github.com/simonoliver/UnityFigmaBridge/blob/main/UnityFigmaBridge/Editor/FigmaApi/FigmaDataUtils.cs#L295
|
2025-04-01T04:35:30.430951
| 2015-04-30T20:41:40
|
72276065
|
{
"authors": [
"rowanc1"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10813",
"repo": "simpeg/simpegem",
"url": "https://github.com/simpeg/simpegem/issues/15"
}
|
gharchive/issue
|
Orientation & moment
These are separated out:
orientation = (0,0,1) || 'Z'
The general code for anti-rotating the obsLocation should be separated out.
This issue was moved to simpeg/simpeg#139
|
2025-04-01T04:35:30.434043
| 2015-02-24T20:57:52
|
58803588
|
{
"authors": [
"coveralls",
"lheagy",
"rowanc1"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10814",
"repo": "simpeg/simpegem",
"url": "https://github.com/simpeg/simpegem/pull/11"
}
|
gharchive/pull-request
|
Adding mu
Adding mu and updated install on travis.yml
Coverage increased (+0.11%) to 87.05% when pulling 629bb5618537449d94228dfe9ee6d97dd04e0457 on addingMu into bc7fb2f1d7338257d9e3dac12fce3a4c1a5534a1 on master.
This issue was moved to simpeg/simpeg#135
|
2025-04-01T04:35:30.445820
| 2024-08-13T21:50:37
|
2464287305
|
{
"authors": [
"Remo",
"esbenbach",
"rezanid",
"swidz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10815",
"repo": "simple-odata-client/Simple.OData.Client",
"url": "https://github.com/simple-odata-client/Simple.OData.Client/issues/936"
}
|
gharchive/issue
|
Upgrade dependencies to allow Microsoft.OData.Core >= 8.0
Microsoft has released the Microsoft.OData.Client version 8.0
https://learn.microsoft.com/en-us/odata/changelog/odatalib-8x
Current Simple.OData.Client version (6.0.1) has dependency set for Microsoft.OData.Core (>= 7.9.4 && < 8.0.0)
I tried to see if I could easily upgrade it, but I'm running into some issues.
When you check version 8, you'll see that it only supports .NET 8
https://www.nuget.org/packages/microsoft.odata.core/#readme-body-tab
The older version 7.21.3 supports .NET Core 3.1, .NET Standard 1.1, .NET Framework 4.5: https://www.nuget.org/packages/Microsoft.OData.Core/7.21.3
Supporting Microsoft.OData.Core 8.0 would mean that we would have to break backwards compatibility and probably create a new major version. I'm fine with that; I just thought I would mention it and see what others think.
Here's a start, but please note, it's incomplete!
https://github.com/simple-odata-client/Simple.OData.Client/pull/937
For the reference: https://github.com/OData/odata.net/issues/3073
For me, dropping .NET 7 support is not an issue.
.NET 7 is an STS release, so people will have to move on regardless. If not now, then in a short time.
As long as a version of Simple.OData.Client exists for .NET 7, I doubt it's a big issue.
The alternative is to multi-target and write wrappers (urgh, the maintainer is NOT going to like this in the long run), or create separate versions for older framework types (sort of the same thing, just without the wrappers, but it's still going to be annoying).
I am just starting to use your awesome library, but unfortunately, because I am building tools for Microsoft products, I am forced to use the old .NET Framework, since most of Microsoft's own products (e.g. the Visual Studio family of products, even the latest Power Platform) rely on .NET Framework without any published roadmap to ever move to .NET.
Does this mean that I will not benefit from potential bug fixes in the future? I'm fine with not receiving new features, as Microsoft products are in the same boat too.
|
2025-04-01T04:35:30.474821
| 2023-09-04T01:32:14
|
1879348334
|
{
"authors": [
"Acha0203",
"sinproject-iwasaki"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10816",
"repo": "sinProject-Inc/sinpro-dev",
"url": "https://github.com/sinProject-Inc/sinpro-dev/issues/268"
}
|
gharchive/issue
|
Update PR Template
Tasks
[ ] Correct anything pointed out by ChatGPT or reviewers.
Writing the Issue
[ ] Write each task for this issue using checkboxes
[ ] Write all verbal instructions as tasks
[ ] Add labels and assignees
Before Working on the Issue
[ ] Assign yourself
[ ] Share your screen on Discord
[ ] Write which issue you are starting on Slack
[ ] Git: Fetch the latest main
[ ] Git: Create a branch with the name of the issue
Pre-PR Checks
[ ] Ensure that changes include only what is necessary for this PR
[ ] Implement necessary Unit tests
[ ] Implement necessary E2E tests
[ ] Perform functionality checks
Regarding the content of this task: is my understanding correct that I should show pull_request_templete.md to ChatGPT and fix the parts it points out?
|
2025-04-01T04:35:30.478371
| 2024-01-09T04:52:08
|
2071626586
|
{
"authors": [
"achugr",
"sinaatalay"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10817",
"repo": "sinaatalay/rendercv",
"url": "https://github.com/sinaatalay/rendercv/issues/12"
}
|
gharchive/issue
|
Add ability to build a cover letter.
A cover letter is often a good addition to a CV.
A cover letter is more personalized, but there is a template that can be reused from letter to letter. Plus, it would be great if the CV and the cover letter are of the same formatting and style.
What do you think about the idea of adding the ability to generate a cover letter based on a template? I would be happy to describe my proposal in more detail if you think this functionality fits the project.
Hello, and thank you for the suggestion!
I don't think incorporating cover letters into the current codebase would be a good idea, as it would require entirely different code. I don't think the current code would help with cover letters, as it is purely CV-related. Therefore, I believe creating a separate package for cover letters would make more sense. A single codebase that does two independent things may not be practical to maintain.
I might be wrong (maybe for users, it's better to have them in the same package), and I would like to hear your view on this. How would you implement this, and how would it look from the user side?
Hello!
Overall I see the process this way: you have two YAML configs - one for CV (the base one) and one for cover letter (the base one). When you want to apply for a position, you create a new branch from the main. You put there a custom cover letter and maybe adjust your CV according to position requirements. If you find some adjustments good and generic enough, you bring them to the main branch.
This does not require it to be the same code base, this could be a separate project rendercoverletter. I think a cover letter from the implementation perspective is much simpler than a CV and in terms of the code, this can be a much smaller data model and a simple template file. Having it in one repo makes it easy to reuse the code and keep default template styles in sync.
Other than that, this could be a separate project for a cover letter and an umbrella project that makes it easy to use both and makes sure styles (fonts etc) are in sync.
I'll prepare a draft PR in a couple of days and we could discuss there if it fits.
I will keep this issue open until we come up with a way of handling cover letters. I agree with you; it's important to have this.
I am moving this issue to Discussions.
|
2025-04-01T04:35:30.489917
| 2017-10-19T12:33:22
|
266825899
|
{
"authors": [
"Keloran",
"SamVerschueren",
"sindresorhus"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10818",
"repo": "sindresorhus/alfred-lock",
"url": "https://github.com/sindresorhus/alfred-lock/pull/1"
}
|
gharchive/pull-request
|
HotKey for lock
Added a keyword so that it acts a bit like windows lock
Can you explain what you did exactly? It's not very clear from the plist. A lot of things you added are automatically added to the plist when installing.
There is a hotkey assigned, it is currently set to cmd+shift+L, so that the key combo can be used instead of having to type lock
There already is a hotkey when you're on macOS 10.13:
So not interested in adding it here.
|
2025-04-01T04:35:30.496552
| 2019-05-05T13:15:20
|
440443430
|
{
"authors": [
"sindresorhus",
"tomi-vanek"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10819",
"repo": "sindresorhus/awesome-nodejs",
"url": "https://github.com/sindresorhus/awesome-nodejs/pull/932"
}
|
gharchive/pull-request
|
Added asciiart-logo to cli category
By submitting this pull request, I promise I have read the contribution guidelines twice and ensured my submission follows it. I realize not doing so wastes the maintainers' time that they could have spent making the world better.
Thanks for the suggestion, but I'm going to pass on this. This is a curation, so I unfortunately can't include everything. That means making the hard decision of leaving out some cool projects for various reasons.
|
2025-04-01T04:35:30.513943
| 2017-12-27T00:04:41
|
284624533
|
{
"authors": [
"ebraminio",
"samber",
"sindresorhus"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10820",
"repo": "sindresorhus/awesome",
"url": "https://github.com/sindresorhus/awesome/pull/1171"
}
|
gharchive/pull-request
|
Add Q#
https://github.com/ebraminio/awesome-qsharp
Brand new domain specific language just released recently by Microsoft, expected to become popular
Created just to get feedback on whether this has any chance of ever being included here. The conditions below have not all been met yet, which is why I am closing the PR on creation; I will reopen it when the items below are done, but I'm waiting for feedback in the meantime. (Updated and reopened in 2019.)
I have read and understood the contribution guidelines and the instructions for creating a list.
This pull request has a descriptive title. For example, Add Name of List, not Update readme.md or Add awesome list.
The entry in the Awesome list should:
Include a short description about the project/theme of the list. It should not describe the list itself. Example: - [Fish](…) - User-friendly shell., not - [Fish](…) - Resources for Fish..
Be added at the bottom of the appropriate category.
The list I'm submitting complies with these requirements:
Has been around for at least 30 days. That means 30 days from either the first real commit or when it was open-sourced. Whatever is most recent.
It's the result of hard work and the best I could possibly produce.
Non-generated Markdown file in a GitHub repo.
Includes a succinct description of the project/theme at the top of the readme. (Example)
The repo should have awesome-list & awesome as GitHub topics. I encourage you to add more relevant topics.
Not a duplicate.
Only has awesome items. Awesome lists are curations of the best, not everything.
Includes a project logo/illustration whenever possible.
Either fullwidth or placed at the top-right of the readme. (Example)
The image should link to the project website or any relevant website.
The image should be high-DPI. Set it to maximum half the width of the original image.
Entries have a description, unless the title is descriptive enough by itself. It rarely is though.
Includes the Awesome badge.
Should be placed on the right side of the readme heading.
Should link back to this list.
Has a Table of Contents section.
Should be named Contents, not Table of Contents.
Should be the first section in the list.
Should only have one level of sub-lists, preferably none.
Has an appropriate license.
That means something like CC0, not a code licence like MIT, BSD, Apache, etc.
WTFPL and Unlicense are not acceptable licenses.
If you use a license badge, it should be SVG, not PNG.
Has contribution guidelines.
The file should be named contributing.md. Casing is up to you.
Has consistent formatting and proper spelling/grammar.
The link and description are separated by a dash. Example: - [AVA](…) - JavaScript test runner.
The description starts with an uppercase character and ends with a period.
Drop all the A / An prefixes in the descriptions.
Consistent and correct naming. For example, Node.js, not NodeJS or node.js.
Doesn't include a Travis badge. You can still use Travis for list linting, but the badge has no value in the readme.
Go to the top and read it again.
Hey there. No other blocking things I guess, right?
Friendly ping, or even ping pong :)
Hey!
To improve the quality of awesome lists, Sindresorhus made a simple linter.
You can embed it into a Travis CI pipeline easily: https://github.com/sindresorhus/awesome-lint
package.json
{
"scripts": {
"test": "awesome-lint"
},
"devDependencies": {
"awesome-lint": "*"
}
}
.travis.yml
language: node_js
node_js:
- 'node'
Done. https://travis-ci.org/ebraminio/awesome-qsharp/builds/491872621
The "Q#" in "Q# is a domain-specific programming language used for expressing…" should be linkified to the official website.
Tweet: https://twitter.com/awesome__re/status/1100254078695747586
Thank you! :)
The "Q#" in "Q# is a domain-specific programming language used for expressing…" should be linkified to the official website.
Done https://github.com/ebraminio/awesome-qsharp/commit/0086b8da743cfab7dc61aa0730a095c3d6453fe3
|
2025-04-01T04:35:30.515889
| 2015-09-03T01:06:21
|
104601599
|
{
"authors": [
"arthurvr",
"flipactual",
"flipstewart"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10821",
"repo": "sindresorhus/elegant-spinner",
"url": "https://github.com/sindresorhus/elegant-spinner/pull/1"
}
|
gharchive/pull-request
|
add bounce style
Thanks for the PR. No idea why it took that long for someone to reply, sorry about that :)
Tests are failing, would you mind taking a look? @sindresorhus Do you like the idea or do you think it's out of scope?
closing in favor of #3
|
2025-04-01T04:35:30.534961
| 2022-11-10T22:17:38
|
1444639784
|
{
"authors": [
"dimaslanjaka",
"sindresorhus"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10822",
"repo": "sindresorhus/find-cache-dir",
"url": "https://github.com/sindresorhus/find-cache-dir/issues/37"
}
|
gharchive/issue
|
How to call commonJS version?
After updating, the find-cache-dir package causes an error in my TypeScript project.
How do I import the CommonJS version of this package through TypeScript?
https://gist.github.com/sindresorhus/a39789f98801d908bbc7ff3ecc99d99c
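For readers hitting the same error, a minimal sketch of the pattern the linked gist recommends for pure-ESM packages: load the package via a dynamic import() from CommonJS/TypeScript. It assumes find-cache-dir v4+ is ESM-only with a default export; check the readme of the version you install.
async function getCacheDir(): Promise<string | undefined> {
  // Dynamic import works from CommonJS. Note that with "module": "commonjs",
  // TypeScript may down-level import() to require(); a setting such as
  // "module": "node16" keeps it a real dynamic import.
  const { default: findCacheDirectory } = await import("find-cache-dir");
  // Resolves to something like <project>/node_modules/.cache/my-tool
  // ("my-tool" is an illustrative name).
  return findCacheDirectory({ name: "my-tool" });
}
getCacheDir().then((dir) => console.log(dir));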
|
2025-04-01T04:35:30.537523
| 2016-03-22T04:13:45
|
142543907
|
{
"authors": [
"jamestalmage",
"sindresorhus"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10823",
"repo": "sindresorhus/generator-nm",
"url": "https://github.com/sindresorhus/generator-nm/issues/45"
}
|
gharchive/issue
|
Make use of XO's overrides option and eslint-plugin-ava
turn on esnext for test.js
use eslint-plugin-ava to lint tests
blocker: https://github.com/sindresorhus/xo/issues/88
turn on esnext for test.js
:+1:
use eslint-plugin-ava to lint tests
I was actually planning to bundle eslint-plugin-ava in XO.
Closing as the eslint option will be enabled by default in the next XO version and eslint-plugin-ava is included in XO by default.
|
2025-04-01T04:35:30.546836
| 2019-04-13T20:23:04
|
432895504
|
{
"authors": [
"BendingBender",
"sindresorhus"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10824",
"repo": "sindresorhus/resolve-cwd",
"url": "https://github.com/sindresorhus/resolve-cwd/pull/4"
}
|
gharchive/pull-request
|
Require Node.js 8, return undefined instead of null, add TypeScript definition
Waiting for https://github.com/sindresorhus/resolve-from/pull/12.
โ
Waiting for sindresorhus/resolve-from#12.
|
2025-04-01T04:35:30.550611
| 2017-06-28T09:23:21
|
239103431
|
{
"authors": [
"rarkins"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10825",
"repo": "singapore/renovate",
"url": "https://github.com/singapore/renovate/issues/369"
}
|
gharchive/issue
|
Onboarding: Detect pinVersions setting
When to pin:
Private package
No entry point
When to not pin:
Public package
Has an entry point
Perhaps dependencies should be ranges (pinVersions=false) by default and only devDependencies should be pinned?
If a library has peer dependencies then that's definitely a sign that it's require'd via npm and probably prefers ranges in its dependencies.
If it has no main then it's a sign it's something like a webapp or CLI tool that might prefer pinning dependencies.
Having a bin script is a sign of being a CLI tool, however one package could be both a CLI tool as well as a required library.
Only really web/browser libraries have a strong need to keep semver ranges.
Possible presets (a rough heuristic sketch follows this list):
webapp - pin everything
web-library - pin only devDependencies
node-library - pin everything
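A rough sketch of such a detection heuristic, built only from standard package.json fields; the preset names mirror this issue and are not an actual shipped Renovate config.
interface PackageJson {
  main?: string;
  bin?: string | Record<string, string>;
  peerDependencies?: Record<string, string>;
}
type Preset = "webapp" | "web-library" | "node-library";
function detectPreset(pkg: PackageJson): Preset {
  // Peer dependencies strongly suggest a library consumed via npm,
  // which usually prefers semver ranges for its dependencies.
  if (pkg.peerDependencies && Object.keys(pkg.peerDependencies).length > 0) {
    return "web-library";
  }
  // No entry point suggests a webapp or CLI tool, where pinning is safer.
  // (A bin script is ambiguous: a package can be both a CLI tool and a library.)
  if (!pkg.main) {
    return "webapp";
  }
  // Has a main entry point and no peer deps: treat as a node library,
  // which this issue suggests pinning everything for.
  return "node-library";
}
function shouldPinDependencies(pkg: PackageJson): boolean {
  return detectPreset(pkg) !== "web-library";
}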
I suspect that many webapps include a "main" because npm init suggests one, so there needs to be a better way of detecting that a library won't be consumed by others. Any idea as to how to verify whether it's been published to npm, including both public and private? For public packages we could query the registry for the name in package.json and see if that points back to the same repo. But for private..?
Also need a way to distinguish between nodejs-only libraries/tools and ones that also might run in a browser.
|
2025-04-01T04:35:30.573506
| 2023-12-06T01:49:18
|
2027481379
|
{
"authors": [
"eddelbuettel",
"johnkerl"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10826",
"repo": "single-cell-data/TileDB-SOMA",
"url": "https://github.com/single-cell-data/TileDB-SOMA/pull/1957"
}
|
gharchive/pull-request
|
[r] Apply #1943 to main
Issue and/or context: Keeps main in sync with release-1.6 after #1943, which I should have applied the other way round but did not
Changes:
Notes for Reviewer:
The codecov/project is new, and unrelated (an admin button I should not have poked on this particular evening)
The codecov/project is new, and unrelated (an admin button I should not have poked on this particular evening)
It respects a file called codecov.yaml where, if present, you can set thresholds wide enough to turn red into green.
|
2025-04-01T04:35:30.579948
| 2022-10-19T20:39:58
|
1415580328
|
{
"authors": [
"MilanKovacic",
"jsedanoj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10827",
"repo": "single-spa/single-spa-layout",
"url": "https://github.com/single-spa/single-spa-layout/issues/181"
}
|
gharchive/issue
|
Example of single-spa + vue + ssr
Hello,
I have used npx create-single-spa --moduleType root-config to create a root-config with spa-layout, then npx create-single-spa --moduleType app-parcel to create a few guest Vue 3 applications (one for the header, one for 404, and one for the content). So far, so good, apart from the minor nuisance that I needed to add libraryTarget=system in vue.config.js in the guest applications (I would have expected the CLI to add it itself...). But well, apart from that, it is working.
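For reference, a minimal sketch of the vue.config.js tweak mentioned above, assuming a Vue CLI (webpack) setup; the exact file generated by create-single-spa may differ.
// vue.config.js (guest application)
module.exports = {
  configureWebpack: {
    output: {
      // Emit the bundle as a SystemJS module so the root-config can load it.
      libraryTarget: "system",
    },
  },
};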
But now I want to add SSR (mainly for SEO, so that for example curl http://localhost:9000/ returns the html of the home page with content). Do we have any example of such setup or a similar one? The closest I have found is https://github.com/isomorphic-microfrontends , referenced in the single-spa documentation https://single-spa.js.org/docs/examples but it does not seem to start from the structure generated by the cli but from a from-scratch one, or something like that.
Anybody doing something similar?
Closing due to age. Please reopen if the issue persists or there are new developments. Thanks!
|
2025-04-01T04:35:30.583326
| 2020-07-20T15:53:29
|
661979605
|
{
"authors": [
"scottgigante"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10828",
"repo": "singlecellopenproblems/SingleCellOpenProblems",
"url": "https://github.com/singlecellopenproblems/SingleCellOpenProblems/issues/5"
}
|
gharchive/issue
|
When should we normalize the data?
Options:
Download raw counts, preprocess adata.X in openproblems/data, keep raw counts in adata.layers["counts"].
Download raw counts, pass raw counts to methods. Provide normalization recipes in openproblems/utils.py.
@flo-compbio moving discussion of normalization to here:
Once we have cloud storage available, I think we have to weigh the complexity burden it puts on the system to allow external downloads while maintaining this type of version control, vs. the benefit of being fully transparent in terms of "preprocessing" the data. I feel like definitions of preprocessing and "raw data" are quite arbitrary. If we're downloading a count matrix from GEO, why is cell or gene filtering considered "preprocessing", but read mapping and expression quantification (UMI counting) not? Are there really a lot of tangible benefits to making these "preprocessing" steps part of the infrastructure?
My thought here is quite simply that a) publicly available data is typically provided as count matrices, and b) the cost of running any normalization after that is rather low, whereas the cost of alignment and read counting is rather high. I would rather provide the user with more information rather than less, within reason.
From https://github.com/singlecellopenproblems/SingleCellOpenProblems/issues/103
|
2025-04-01T04:35:30.618533
| 2024-05-17T21:38:10
|
2303609089
|
{
"authors": [
"singodiyashubham87",
"suhanipaliwal"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10829",
"repo": "singodiyashubham87/Draw-it-out",
"url": "https://github.com/singodiyashubham87/Draw-it-out/issues/155"
}
|
gharchive/issue
|
Add PR template
I would like to add a pull request template for this repository. I believe that having a standardized template will help streamline the contribution process, ensuring that all necessary information is included and making it easier for maintainers to review and merge pull requests.
Could you please assign this issue to me under GSSOC'24.
@suhanipaliwal Thanks for raising the issue but there is already a PR template for the repository. Kindly explore other issues or raise new ones.
|
2025-04-01T04:35:30.628654
| 2022-06-23T08:08:21
|
1281992059
|
{
"authors": [
"arkbriar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10830",
"repo": "singularity-data/risingwave-operator",
"url": "https://github.com/singularity-data/risingwave-operator/pull/62"
}
|
gharchive/pull-request
|
feat(ci): disable the checks for draft PRs
What's changed and what's your intention?
PLEASE DO NOT LEAVE THIS EMPTY !!!
Please explain IN DETAIL what the changes are in this PR and why they are needed:
Disable the checks for draft PRs.
Checklist
[ ] I have written necessary docs and comments
[ ] I have added necessary unit tests and integration tests
Refer to a related PR or issue link (optional)
close #54
I believe it works. Thanks, @mikechesterwang.
I believe it works. Thanks, @mikechesterwang.
Too soon to say that. I'll try another PR after this is merged.
|
2025-04-01T04:35:30.645193
| 2020-10-06T18:31:56
|
715922360
|
{
"authors": [
"benjamingr",
"daniel-mina97"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10831",
"repo": "sinonjs/fake-timers",
"url": "https://github.com/sinonjs/fake-timers/issues/339"
}
|
gharchive/issue
|
Inconsistent behavior with varying 'now' timestamps in config for FakeTimers.install({ now: , shouldAdvanceTime: true })
FakeTimers version : 6.0.1
Environment : Node 12.18.3, macOS Catalina 10.15.7
Other libraries you are using: jest 26.4.2, bottleneck 2.18.1, nock 13.0.4
Background
I have some ETL (extract, transform, load) code that uses the bottleneck library to throttle some API calls. I am testing this code using nocks (which will intercept the API calls and provide some mock data) and jest. For one of my end-to-end tests, I have a transform step that uses the current date to calculate days since an epoch timestamp. I want to mock this "current date" to the epoch timestamp <PHONE_NUMBER>000 (Sep 30, 2020) and also allow the bottleneck library to work correctly with its timers. In the end, I should have an object with a key DaysSinceCreated based on the mocked "current date".
What did you expect to happen?
My tests should complete, with the throttling/timers in bottleneck going off, when the 'now' timestamp is <PHONE_NUMBER>000.
What actually happens
The test does NOT complete when I set the 'now' timestamp to <PHONE_NUMBER>000 or anything near that - but strangely it does complete with other timestamps.
I have the following in my jest test code:
describe(test, () => {
beforeAll(() => {
clock = FakeTimers.install({ now: timestamp, shouldAdvanceTime: true });
});
...
});
When replacing timestamp above with <PHONE_NUMBER>00 or <PHONE_NUMBER>0000 (notice the number of zeroes at the end), the tests complete successfully, and in my final object I have a key DaysSinceCreated with the values -16666 and 166838 respectively. However, when I replace timestamp with <PHONE_NUMBER>000, the tests never complete, which seems to be due to the timers in Bottleneck not launching for some reason. This seems like weird behavior that the timers and tests are functioning as expected for some timestamps, but not others. Any help and/or guidance would be appreciated!
Note: I'm kind of limited as to what code I can post on here, but please let me know what other information could be helpful and I'll provide as much as I can. Thanks!
It's going to be very hard to debug this without a reproduction.
I'm working on creating a small dummy project to reproduce this. I'll add the link to it once I have it made.
Okay here's a link (https://github.com/daniel-mina97/FakeTimers) to a sample project where I can reproduce the problem. All you have to do after installing the dependencies is run yarn test to see the failing test. In the actual test file I have some different timestamps that you can swap in.
After messing around with it some more, I think the core problem is in the bottleneck library. If instead of using fake-timers I just use jest to mock Date.now to return the timestamp, the tests also get hung up. For some reason moving the bottleneck instance inside of the function (as described in the README at https://github.com/daniel-mina97/FakeTimers) fixes both problems... I'll go ahead and close this as I don't think the issue is on y'all's side of things
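For anyone else landing here, a minimal sketch of the workaround described above, with assumed package names (@sinonjs/fake-timers, bottleneck) and illustrative values; the key point is that the Bottleneck instance is created after FakeTimers.install() rather than at module load, so it picks up the faked clock.
const FakeTimers = require("@sinonjs/fake-timers");
const Bottleneck = require("bottleneck");
async function runThrottled(items, handler) {
  // Created inside the function, after fake timers are installed.
  const limiter = new Bottleneck({ minTime: 100 });
  return Promise.all(items.map((item) => limiter.schedule(() => handler(item))));
}
// In the test:
// const clock = FakeTimers.install({ now: someTimestamp, shouldAdvanceTime: true });
// await runThrottled(records, processRecord);
// clock.uninstall();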
|
2025-04-01T04:35:30.830942
| 2016-03-17T07:43:57
|
141504645
|
{
"authors": [
"ghprince"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10832",
"repo": "sirlantis/rubocop-for-rubymine",
"url": "https://github.com/sirlantis/rubocop-for-rubymine/issues/42"
}
|
gharchive/issue
|
rubocop didn't run correctly (executable name wrong) in rubymine
My issue is similar to, but different from, #34 and #30
Rubymine 8.0.3
rbenv 1.0.0
ruby 2.3.0
rubocop 0.38.0
rubocop-for-rubymine 3.1.0
This error shows up in Rubymine
15:37:44 Failed to parse RuboCop output: Please make sure that: you installed RuboCop for this Ruby version; you did run bundle install successfully (if you use Bundler); your RuboCop version isn't ancient (show balloon)
In ideas.log
2016-03-17 15:37:44,244 [ 16175] DEBUG - hub.sirlantis.rubymine.rubocop - Executing RuboCop (SDK=/Users/gogao/.rbenv/versions/2.3.0/bin)/Users/gogao/.rbenv/shims/ruby --format json /Users/gogao/Developer/rails/my_app/app/models/server_instance.rb
2016-03-17 15:37:44,244 [ 16175] INFO - .ruby.ruby.run.RubyCommandLine - Executing [/Users/gogao/.rbenv/shims/ruby --format json /Users/gogao/Developer/rails/my_app/app/models/server_instance.rb], working dir =[/Users/gogao/Developer/rails/my_app]
2016-03-17 15:37:44,296 [ 16227] WARN - hub.sirlantis.rubymine.rubocop - Failed to parse RuboCop output
you can see in the log that it is trying to exec /Users/gogao/.rbenv/shims/ruby --format json instead of /Users/gogao/.rbenv/shims/rubocop --format json
Any reason why?
Duplicate of #40
|
2025-04-01T04:35:30.847994
| 2018-03-09T22:13:31
|
303997400
|
{
"authors": [
"dgsb",
"freeformz",
"gguridi",
"madflojo",
"raidancampbell",
"renathoaz"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10833",
"repo": "sirupsen/logrus",
"url": "https://github.com/sirupsen/logrus/issues/725"
}
|
gharchive/issue
|
Is logrus capable of logging async?
If so, how can I achieve that?
I think Logrus doesn't currently support async logging.
Some hooks, however, have implemented their own version. You can create your own hook; see the logrus-graylog-hook for further information.
P.S. If, additionally, you would like the standard error output to be async as well, then the solution gets more complicated.
@RenathoAzevedo did the previous comments answer your question ?
What are the perceived benefits of async logging? What are the use cases for async logging?
Perceived benefit is avoiding the performance penalty from synchronous logging due to IO bottlenecks or any internal computation (i.e. when SetReportCaller is set to true)
Use case: I have an application serving very bursty traffic. To lower the work on the main thread and increase performance, I would like to log asynchronously.
I agree that it's solvable with a wrapper over the logger, but I think it's too valuable to omit from logrus
As commented above, the primary benefit is reducing the time taken per transaction/request. It's not uncommon for a service to log messages for each request. When using synchronous logging, the time taken to perform the I/O operations is included in the response time.
If using asynchronous logging, applications can reply to a request while the I/O operations happen in the background.
This is a very common practice in high volume and low latency applications.
My primary question is: is this something that the maintainers have specifically chosen not to do, or is there appetite but it just hasn't been contributed yet?
I know from experience that async APIs are hard to support well and often come with a set of trade-offs that end up surprising their users, so it's not something I'm personally interested in adding. That doesn't mean these use cases are invalid or bad either, just that I'm not personally convinced the trade-off is worth it. Maybe one of the other maintainers is though.
I'd wager, but can't prove, that the time spent logging is minuscule compared to the time spent communicating with network services like down stream APIs and/or databases.
That doesn't mean it's not a problem for certain types of applications, but there are ways to deal with at least some of that already. For instance, "async logging" is possible now by making a Hook that takes a copy of the entry and processes it in an async fashion. We have at least one instance of such a hook in use inside of Heroku (if I'm not mistaken).
|
2025-04-01T04:35:30.893196
| 2017-08-18T06:15:12
|
251149396
|
{
"authors": [
"michaelschmitz",
"sj26",
"stefanhuber"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10834",
"repo": "sj26/mailcatcher",
"url": "https://github.com/sj26/mailcatcher/issues/349"
}
|
gharchive/issue
|
Could not find 'eventmachine' (= <IP_ADDRESS>) - did find: [eventmachine-1.2.5]
It seems that mailcatcher doesn't currently work with the newest release of eventmachine. I am using it on ubuntu 16.04. Somehow eventmachine got updated on my machine...
I solved the problem in this way:
$ gem uninstall eventmachine
$ gem install eventmachine -v <IP_ADDRESS>
Full error message:
/home/stefan/.rvm/rubies/ruby-2.3.4/lib/ruby/site_ruby/2.3.0/rubygems/dependency.rb:310:in `to_specs': Could not find 'eventmachine' (= <IP_ADDRESS>) - did find: [eventmachine-1.2.5] (Gem::MissingSpecVersionError)
Checked in 'GEM_PATH=/home/stefan/.rvm/gems/ruby-2.3.4:/home/stefan/.rvm/gems/ruby-2.3.4@global', execute `gem env` for more information
from /home/stefan/.rvm/rubies/ruby-2.3.4/lib/ruby/site_ruby/2.3.0/rubygems/specification.rb:1439:in `block in activate_dependencies'
from /home/stefan/.rvm/rubies/ruby-2.3.4/lib/ruby/site_ruby/2.3.0/rubygems/specification.rb:1428:in `each'
from /home/stefan/.rvm/rubies/ruby-2.3.4/lib/ruby/site_ruby/2.3.0/rubygems/specification.rb:1428:in `activate_dependencies'
from /home/stefan/.rvm/rubies/ruby-2.3.4/lib/ruby/site_ruby/2.3.0/rubygems/specification.rb:1410:in `activate'
from /home/stefan/.rvm/rubies/ruby-2.3.4/lib/ruby/site_ruby/2.3.0/rubygems.rb:299:in `block in activate_bin_path'
from /home/stefan/.rvm/rubies/ruby-2.3.4/lib/ruby/site_ruby/2.3.0/rubygems.rb:299:in `synchronize'
from /home/stefan/.rvm/rubies/ruby-2.3.4/lib/ruby/site_ruby/2.3.0/rubygems.rb:299:in `activate_bin_path'
from /home/stefan/.rvm/gems/ruby-2.3.4/bin/mailcatcher:22:in `<main>'
from /home/stefan/.rvm/gems/ruby-2.3.4/bin/ruby_executable_hooks:15:in `eval'
from /home/stefan/.rvm/gems/ruby-2.3.4/bin/ruby_executable_hooks:15:in `<main>'
Same here - thanks for the solution. It seems mailcatcher is pinned to the fixed <IP_ADDRESS> instead of a max version. Every time I update my gems I now have to run this workaround. Might be time to change the dependency in mailcatcher.
Mailcatcher is locked to a particular version of eventmachine because it tends to break very easily. Unfortunately rubygems doesn't seem to respect intergem dependencies very well so the wrong versions are often activated. :-(
Your solution here is the correct solution, for now:
$ gem uninstall eventmachine
$ gem install eventmachine -v <IP_ADDRESS>
Longer term this should be solved by #298
|
2025-04-01T04:35:30.914876
| 2024-07-20T10:52:46
|
2420825400
|
{
"authors": [
"aditya-bhaumik"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10835",
"repo": "sk66641/Random-Disco-Light-Simulator",
"url": "https://github.com/sk66641/Random-Disco-Light-Simulator/pull/461"
}
|
gharchive/pull-request
|
update features page
Description
I have added the details about the chatbot timer and the mute button
Fixes: #408
Type of change
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
Checklist:
[ ] My code follows the style guidelines of this project
[ ] I have performed a self-review of my own code
[ ] I have commented my code, particularly in hard-to-understand areas
[ ] I have made corresponding changes to the documentation
[ ] My changes generate no new warnings
ATTACH SCREEN-SHOTS / DEPLOYMENT LINK
@sk66641 please review this pull request and merge it
|
2025-04-01T04:35:30.922296
| 2014-06-16T12:14:41
|
35790641
|
{
"authors": [
"bluepeppers",
"spheenik"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10836",
"repo": "skadistats/clarity",
"url": "https://github.com/skadistats/clarity/issues/14"
}
|
gharchive/issue
|
getProperty behaviour is kinda unsafe
Hi,
So e.getProperty(prop) is pretty much the same as e.getState()[e.getDtClass().getPropertyIndex(prop)]. However, .getPropertyIndex will return null if the property does not exist. getProperty does not check for this, meaning that if you want to access a property that is not guaranteed to be there (i.e. all properties), then you cannot use getProperty. Instead you have to replace e.getProperty(prop) with Integer propI = e.getDtClass().getPropertyIndex(prop); if (propI == null) { return null; } else { return e.getState()[propI]; }
Anyway, it might be nice if getProperty explicitly checked for null indexes prior to doing the lookup, so it can propagate the null, instead of throwing a null pointer exception.
Alternatively, one could embrace the unsafe nature of getProperty, but if so, it would be nice to have a warning in the docs/a comment on the decl or something :)
Thanks,
Laurie
fixed in 2.0-dev branch:
https://github.com/spheenik/clarity/commit/d843b901603362ba21e587a76ac99b68e147d175
|
2025-04-01T04:35:30.926090
| 2024-12-28T18:51:49
|
2761841696
|
{
"authors": [
"skarpe-github"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10837",
"repo": "skarpe-github/napalm-aruba505",
"url": "https://github.com/skarpe-github/napalm-aruba505/pull/6"
}
|
gharchive/pull-request
|
Bugfix for bond interfaces and merge_candidate methods
The methods load_merge_candidate, compare_config and commit_config had to be refactored to align with other drivers' implementations. The correct procedure for merging configuration commands is:
load_merge_candidate(config=my_new_config)
compare_config()
commit_config()
or
discard_config()
@davama merge methods needed refactoring to be supported by the ansible-napalm module
@davama Right, this PR fixes the wrong implementation of the load_merge_candidate method, which previously sent the commands to the device and applied them right away. This was different from all other napalm drivers, which only load the new config without applying it as the method name indicates. The compare_config allows a review. The changes can then be applied using commit_config or discarded using discard_config.
|
2025-04-01T04:35:30.934558
| 2018-12-08T22:18:58
|
388964835
|
{
"authors": [
"treshugart",
"trusktr"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10838",
"repo": "skatejs/skatejs",
"url": "https://github.com/skatejs/skatejs/issues/1545"
}
|
gharchive/issue
|
remembering to extend parent props
An issue that I keep running into is that when I define a new class that extends a base class, I sometimes forget to extend the parent props and I waste some time wondering why it isn't working.
For example, I often write
static props = {
foo: String
}
instead of
static props = {
...Parent.props,
foo: String
}
This made me think: what if SkateJS traversed the static prototype in search of props and automatically handled the inheritance (f.e. by concatenation)?
I suppose this would mean we can't opt out of the inheritance (though I haven't wanted to do that, and I'd argue it's not a good idea to do so, because subclasses blocking parent features over and over eventually results in a leaf class that is no longer related to the original base class).
Any thoughts on this?
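A sketch of how that automatic inheritance could work: walk the constructor chain and merge each class's own static props, base classes first. This mirrors the idea above and is not SkateJS's actual behavior (the reply below settles on explicit spreading instead).
function collectProps(ctor: any): Record<string, unknown> {
  const chain: any[] = [];
  // Walk from the leaf class up to (but not including) Function.prototype.
  for (let c = ctor; c && c !== Function.prototype; c = Object.getPrototypeOf(c)) {
    chain.push(c);
  }
  // Merge base-class props first so subclasses can override individual keys.
  return chain.reverse().reduce(
    (acc, c) =>
      Object.prototype.hasOwnProperty.call(c, "props") ? { ...acc, ...c.props } : acc,
    {} as Record<string, unknown>,
  );
}
class Parent {
  static props = { foo: String };
}
class Child extends Parent {
  static props = { bar: Number };
}
// { foo: String, bar: Number } without remembering to spread Parent.props.
console.log(collectProps(Child));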
We used to do this, and while it can be convenient, it betrays the standard convention where, when you overwrite something, you still must explicitly call back to the super. It also doesn't give you any way to opt out of that behavior.
True. Yeah, better to tell people to remember to inherit.
|
2025-04-01T04:35:30.941496
| 2015-10-27T14:01:13
|
113589676
|
{
"authors": [
"marcoms",
"treshugart"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10839",
"repo": "skatejs/skatejs",
"url": "https://github.com/skatejs/skatejs/issues/423"
}
|
gharchive/issue
|
Reviewing the defaults set in skate.property.*
A couple of the defaults set by skate.property.* stick out to me:
default set by default (false, 0, "")
I think this should be left to the developer to decide whether a property has a default value or not (if no default value is desired, then the user would have to define default: undefined - not practical IMO)
attribute: true by default
Whilst not the same context as skate.property.*() are opt-in, wasn't the general understanding in #286 to have it default to false?
This is standard behaviour for boolean attributes that are linked to properties. Other types might benefit from this, though. However then that makes it different from boolean properties that aren't linked to attributes. I'm not sure having the complex logic in there is better. I think if we do this we should make it enforce the type unless undefined.
Yeah, we basically just forgot to do this. We should action this as it still trips me up. Do we think it'd be worth creating corresponding skate.attribute.* for auto-linking? Probably not. Just thinking out loud.
I hadn't thought about the fact that boolean attributes are by default false. In that case it makes sense to have the default set for linked boolean properties, however I don't think that other types should have defaults set, especially since for example, a value of 0 could definitely mean something significant in the case of a number property.
if we do this we should make it enforce the type unless undefined
Through type? I thought this was done anyway...
corresponding skate.attribute.* for auto-linking
I think it would be clearer to just have the user set attribute: true manually in the object passed to skate.property.* for such a minor difference
however I don't think that other types should have defaults set
I agree with you here. I'll mess with altering these and see what happens.
Through type? I thought this was done anyway...
It may not be necessary but it's a nice hook to have for type checking and coercion. For example, if you're overriding existing properties and you want to ensure the value is compatible with the property.
I think it would be clearer to just have the user set attribute: true manually in the object passed to skate.property.* for such a minor difference
+1
I actually think it's worth considering your points for number 1 (cept on the booleans like you said). I did commit the change for attributes not being linked by default but will leave this open for the other one.
Implemented @marcoms' first point in #435.
|
2025-04-01T04:35:30.948057
| 2024-09-28T02:20:27
|
2553956897
|
{
"authors": [
"ArgentumRhodon",
"endigo9740"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10840",
"repo": "skeletonlabs/skeleton",
"url": "https://github.com/skeletonlabs/skeleton/issues/2865"
}
|
gharchive/issue
|
V3: Ordered List Example on Typography Page Wrong
Link to the Page
https://next.skeleton.dev/docs/design/typography/#ordered
Describe the Issue (screenshots encouraged!)
Looks like this was an orphaned ticket, the issue was resolved a while back!
|
2025-04-01T04:35:30.949595
| 2017-03-16T07:32:10
|
214619792
|
{
"authors": [
"cloud-walker",
"skellock"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10841",
"repo": "skellock/apisauce",
"url": "https://github.com/skellock/apisauce/issues/69"
}
|
gharchive/issue
|
Hi, I want to help to continue using your package
Hi @skellock, I'm a front-end dev from Wanderio and I've been using your package since this summer.
I find your package really useful for isolating the API calls of my application, so I wish to continue using it.
But recently I had some trouble with it, so I will try to help you!
I will open some issues about each topic so we can tackle them separately. Thanks in advance!
Thanks!
|
2025-04-01T04:35:30.953545
| 2022-06-18T10:57:59
|
1275761195
|
{
"authors": [
"lesydimitri",
"skelsec"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10842",
"repo": "skelsec/aardwolf",
"url": "https://github.com/skelsec/aardwolf/issues/2"
}
|
gharchive/issue
|
Missing source distribution
Hi!
I noticed the package for this project on PyPI is missing a source distribution.
This caused the installation to fail on my ARM64 based machine, since no built distribution is available for that architecture.
Cloning the repository and building it manually worked like a charm, so adding a source distribution might make the installation a bit more comfortable for people using uncommon architectures.
Hello! Thank you for the issue!
I knew this problem was bound to come up sooner or later; however, I don't know how to do that exactly.
When packaging the project to upload to PyPI I get an error for the source distribution. This is why I tried to pack it in the current way.
Can you give a hint on how to do a source distribution?
Exactly what command is throwing an error for you? Already when you package it using python3 setup.py sdist?
For me that does everything as expected and a .tar.gz file ends up in the dist folder.
I'm not that experienced with the packaging process myself, but it seems that a lot of people are relying on Twine to make the upload process easier.
This website seems to outline the process pretty well.
okay, I found the "issue", source distribution is now on PyPI!
Please close this issue if you consider it fixed
Confirmed working :-) Thanks a lot!
|
2025-04-01T04:35:31.026823
| 2022-09-08T15:08:43
|
1366533465
|
{
"authors": [
"codecov-commenter",
"ganeshmurthy",
"jiridanek",
"kgiusti"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10844",
"repo": "skupperproject/skupper-router",
"url": "https://github.com/skupperproject/skupper-router/pull/712"
}
|
gharchive/pull-request
|
Fixes #500: refactor SSL tests to avoid using python openssl
The original test used Python's OpenSSL bindings to determine which versions of the TLS protocols were supported for testing. Test failures occur if Python has been built against a different version of the OpenSSL library than Proton, which may or may not support the same protocol versions.
@jiridanek when you get a chance, can you please apply this PR on your F36 CI PR and see if the ssl tests pass ?
Here's the CentOS 8 crash stacktrace https://github.com/skupperproject/skupper-router/runs/8252577238?check_suite_focus=true#step:32:52 ken mentioned. I'm going to try this with Fedora 36 and Ubuntu Jellyfish
Core was generated by `skrouterd -c A.conf -I /home/runner/work/skupper-router/skupper-router/skupper-'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 qdr_link_get_context (link=link@entry=0x2461490) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/router_core/connections.c:514
514 return safe_deref_qd_link_t(*safe_qdl);
[Current thread is 1 (Thread 0x7f2b746dd700 (LWP 2352))]
Thread 7 (Thread 0x7f2b766df700 (LWP 2350)):
#0 0x00007f2b80e2f44c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x00007f2b816a71eb in suspend (ts=0x22b7710, p=0x20f74b0) at /home/runner/work/skupper-router/skupper-router/qpid-proton/c/src/proactor/epoll.c:401
#2 next_event_batch (p=0x20f74b0, can_block=true) at /home/runner/work/skupper-router/skupper-router/qpid-proton/c/src/proactor/epoll.c:2487
#3 0x000000000049d986 in thread_run (arg=0x1fa06b0) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/server.c:1074
#4 0x00007f2b80e291ca in start_thread () from /lib64/libpthread.so.0
#5 0x00007f2b7fc01dd3 in clone () from /lib64/libc.so.6
Thread 6 (Thread 0x7f2b78721700 (LWP 2348)):
#0 0x00007f2b80e2f44c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x000000000046bf39 in sys_cond_wait (cond=cond@entry=0x2193790, held_mutex=held_mutex@entry=0x21937c0) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/posix/threading.c:81
#2 0x00000000004885dc in router_core_thread (arg=0x2193740) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/router_core/router_core_thread.c:227
#3 0x00007f2b80e291ca in start_thread () from /lib64/libpthread.so.0
#4 0x00007f2b7fc01dd3 in clone () from /lib64/libc.so.6
Thread 5 (Thread 0x7f2b756de700 (LWP 2351)):
#0 0x00007f2b7fcf7f37 in epoll_wait () from /lib64/libc.so.6
#1 0x00007f2b816a72f7 in poller_do_epoll (can_block=true, ts=0x22c8710, p=0x20f74b0) at /home/runner/work/skupper-router/skupper-router/qpid-proton/c/src/proactor/epoll.c:2531
#2 next_event_batch (p=0x20f74b0, can_block=true) at /home/runner/work/skupper-router/skupper-router/qpid-proton/c/src/proactor/epoll.c:2470
#3 0x000000000049d986 in thread_run (arg=0x1fa06b0) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/server.c:1074
#4 0x00007f2b80e291ca in start_thread () from /lib64/libpthread.so.0
#5 0x00007f2b7fc01dd3 in clone () from /lib64/libc.so.6
Thread 4 (Thread 0x7f2b776e0700 (LWP 2349)):
#0 0x00007f2b80e2f44c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x00007f2b816a71eb in suspend (ts=0x22b75d0, p=0x20f74b0) at /home/runner/work/skupper-router/skupper-router/qpid-proton/c/src/proactor/epoll.c:401
#2 next_event_batch (p=0x20f74b0, can_block=true) at /home/runner/work/skupper-router/skupper-router/qpid-proton/c/src/proactor/epoll.c:2487
#3 0x000000000049d986 in thread_run (arg=0x1fa06b0) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/server.c:1074
#4 0x00007f2b80e291ca in start_thread () from /lib64/libpthread.so.0
#5 0x00007f2b7fc01dd3 in clone () from /lib64/libc.so.6
Thread 3 (Thread 0x7f2b79722700 (LWP 2347)):
#0 0x00007f2b80e2f44c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x000000000046bf39 in sys_cond_wait (cond=<optimized out>, held_mutex=<optimized out>) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/posix/threading.c:81
#2 0x0000000000470f2c in _vflow_thread (context=0x2193740) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/vanflow.c:1196
#3 0x00007f2b80e291ca in start_thread () from /lib64/libpthread.so.0
#4 0x00007f2b7fc01dd3 in clone () from /lib64/libc.so.6
Thread 2 (Thread 0x7f2b81ac5300 (LWP 2330)):
#0 0x00007f2b80e2a6bd in __pthread_timedjoin_ex () from /lib64/libpthread.so.0
#1 0x000000000049c967 in qd_server_run (qd=<optimized out>) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/server.c:1490
#2 0x0000000000426bed in main_process (config_path=0x7fff15dd9093 "A.conf", python_pkgdir=<optimized out>, test_hooks=<optimized out>, fd=2) at /home/runner/work/skupper-router/skupper-router/skupper-router/router/src/main.c:109
#3 0x0000000000426779 in main (argc=5, argv=0x7fff15dd7db8) at /home/runner/work/skupper-router/skupper-router/skupper-router/router/src/main.c:363
Thread 1 (Thread 0x7f2b746dd700 (LWP 2352)):
#0 qdr_link_get_context (link=link@entry=0x2461490) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/router_core/connections.c:514
#1 0x0000000000495c59 in CORE_link_second_attach (context=<optimized out>, link=0x2461490, source=0x2453e50, target=0x2453fd0) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/router_node.c:1662
#2 0x00000000004781fd in qdr_connection_process (conn=0x2409350) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/router_core/connections.c:350
#3 0x0000000000453ce8 in writable_handler (container=0x215fab0, container@entry=0x22e7e50, qd_conn=qd_conn@entry=0x22e7e50, conn=<optimized out>) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/container.c:388
#4 0x0000000000454403 in qd_conn_event_batch_complete (container=0x22e7e50, qd_conn=qd_conn@entry=0x22e7e50, conn_closed=conn_closed@entry=false) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/container.c:478
#5 0x000000000049daee in thread_run (arg=0x1fa06b0) at /home/runner/work/skupper-router/skupper-router/skupper-router/src/server.c:1105
#6 0x00007f2b80e291ca in start_thread () from /lib64/libpthread.so.0
Undefined command: "py-bt". Try "help".
#7 0x00007f2b7fc01dd3 in clone () from /lib64/libc.so.6
Thread 7 (Thread 0x7f2b766df700 (LWP 2350)):
@kgiusti This seems to fix the problem on Fedora 36 and Ubuntu Jellyfish, but there are still the other problems that appear on these os versions (and some that appear on any os)
Fedora 36 job: https://github.com/skupperproject/skupper-router/actions/runs/3022148291
Ubuntu Jellyfish job: https://github.com/skupperproject/skupper-router/actions/runs/3022157975
https://github.com/skupperproject/skupper-router/issues/713
https://github.com/skupperproject/skupper-router/issues/714
https://github.com/skupperproject/skupper-router/issues/543
@jiridanek @ganeshmurthy This patch is ready to land.
Codecov Report
Base: 26.92% // Head: 26.93% // Increases project coverage by +0.00% :tada:
Coverage data is based on head (8986be6) compared to base (69dffe2).
Patch has no changes to coverable lines.
Additional details and impacted files
@@ Coverage Diff @@
## main #712 +/- ##
=======================================
Coverage 26.92% 26.93%
=======================================
Files 131 131
Lines 31403 31403
Branches 5025 5025
=======================================
+ Hits 8455 8457 +2
Misses 21857 21857
+ Partials 1091 1089 -2
Flag
Coverage Δ
unittests
26.93% <ø> (+<0.01%)
:arrow_up:
Flags with carried forward coverage won't be shown. Click here to find out more.
Impacted Files
Coverage Δ
src/router_core/route_tables.c
16.43% <0.00%> (+0.27%)
:arrow_up:
src/router_core/address_watch.c
89.65% <0.00%> (+1.14%)
:arrow_up:
Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here.
:umbrella: View full report at Codecov.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
|
2025-04-01T04:35:31.033589
| 2014-11-23T23:57:50
|
49845547
|
{
"authors": [
"nathanredblur",
"skwp"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10845",
"repo": "skwp/dotfiles",
"url": "https://github.com/skwp/dotfiles/issues/572"
}
|
gharchive/issue
|
Tab Autocomplete not working
Hi, I have the latest updated version but I don't know what is wrong; when using the zsh shell to navigate, pressing tab does not autocomplete.
example:
folder1
$ cd fol
Nothing happen.
Thanks
I'm assuming this is not an issue any more. Afaik zsh, prezto, etc all work just fine with tabs.
Finally I found the solution.
rm -f ~/.zcompdump; compinit
You can find more information here:
https://github.com/zsh-users/zsh-completions
|
2025-04-01T04:35:31.068585
| 2015-03-04T16:44:03
|
59824771
|
{
"authors": [
"behaghel",
"peter-mouland"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10846",
"repo": "skyglobal/component-helper",
"url": "https://github.com/skyglobal/component-helper/issues/65"
}
|
gharchive/issue
|
release v1.1.0
are there any of the remaining (non-3rd party) issues that should be closed before v1.1.0 is released?
Just skimmed through the list of issues and I'd say we are ready for a
release. My 2c.
|
2025-04-01T04:35:31.105958
| 2024-06-14T12:33:29
|
2353278536
|
{
"authors": [
"jjcasmar",
"skypjack"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10847",
"repo": "skypjack/entt",
"url": "https://github.com/skypjack/entt/issues/1149"
}
|
gharchive/issue
|
Can't emplace component derived from std::vector
I have a component which derives from std::vector. When I try to add that component to an entity, it doesn't compile.
#include <entt/entt.hpp>
#include <vector>
struct S : public std::vector<int>{};
int main()
{
entt::registry registry;
auto e = registry.create();
auto &c = registry.emplace<S>(e);
return 0;
}
This simple code fails to compile with
/opt/compiler-explorer/libs/entt/v3.12.2/src/entt/container/../core/memory.hpp:185:31: error: static assertion failed due to requirement 'std::is_constructible_v<S, const S &, const std::allocator<S> &>': Ill-formed request
However, I can make the vector an attribute of the struct S and then it works fine.
Is this something expected? Why can't I emplace a struct deriving from std::vector?
https://godbolt.org/z/dcb3YbGWa
First of all, DO NOT inherit from classes in the std:: namespace.
That said, it doesn't work because your type is badly defined:
struct S: public std::vector<int> {
using std::vector<int>::vector;
};
https://godbolt.org/z/rve7q1jPf
I don't really understand why I need to specify that I am using the std::vector::vector ctors. Do you have any link where I can read about it?
Unfortunately, that's trickier than this.
At first glance (but take this with a grain of salt, I'm at work and cannot spend much time on this), it boils down to uses-allocator construction.
The static traits return true because the class supports it, but you don't export the ctors, so it fails when it comes to actually constructing an object.
Typical corner cases of the language, a bit like when you ask the compiler if a vector of unique_ptrs is copyable.
Fair enough. Thanks!
|
2025-04-01T04:35:31.136531
| 2022-09-17T15:53:28
|
1376807270
|
{
"authors": [
"skyv26"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10848",
"repo": "skyv26/math-magician",
"url": "https://github.com/skyv26/math-magician/pull/5"
}
|
gharchive/pull-request
|
Feature/website
#5 Milestone - Full Website
In this feature branch I have implemented the below things :
Added the latest React-Router.
Added the Router Path.
Created new components for different menus.
Added the simple css.
Resolved linter errors.
Checked the functionalities.
@Lordkaito Thank you so much ❤️
|
2025-04-01T04:35:31.147237
| 2024-05-10T16:42:04
|
2290063589
|
{
"authors": [
"keepcreative",
"slSeanWU"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10849",
"repo": "slSeanWU/beats-conformer-bart-audio-captioner",
"url": "https://github.com/slSeanWU/beats-conformer-bart-audio-captioner/issues/1"
}
|
gharchive/issue
|
How to reproduce the model training process?
Hi, Shih-Lun Wu, I'm trying to reproduce the model training process implemented in the paper, but I found that only the test process of the model is explained in the readme file.
I'm begging for a more detailed readme file that demonstrates how to do the model training.
Hi @keepcreative , thanks for reaching out.
Would you please kindly refer to Section 3.1 of our paper (https://arxiv.org/pdf/2309.17352) for the training settings? Thanks.
-- Shih-Lun
|
2025-04-01T04:35:31.152325
| 2017-02-22T19:10:56
|
209550730
|
{
"authors": [
"asheynkman",
"dblock"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10850",
"repo": "slack-ruby/slack-ruby-bot",
"url": "https://github.com/slack-ruby/slack-ruby-bot/issues/117"
}
|
gharchive/issue
|
Question about simple_latest
Hi Folks,
We are facing an issue with a Slack bot where, upon startup, the bot keeps restarting due to the closed socket. It looks like the rtm.start connection downloads metadata, which takes too much time and forces the Slack connection to close. We have tried setting { simple_latest: true } in the Slack::RealTime::Client, but it has no effect.
Any ideas or suggestion would be appreciated ?
thank you
E, [2017-02-22T19:04:23.117699 #1] ERROR -- : Actor crashed!
ThreadError: Target thread must not be current thread
/app/vendor/bundle/ruby/2.3.0/gems/celluloid-essentials-0.20.5/lib/celluloid/internals/thread_handle.rb:37:in `join'
/app/vendor/bundle/ruby/2.3.0/gems/celluloid-0.17.3/lib/celluloid/actor.rb:97:in `join'
/app/vendor/bundle/ruby/2.3.0/gems/celluloid-0.17.3/lib/celluloid/proxy/actor.rb:28:in `terminate'
/app/vendor/bundle/ruby/2.3.0/gems/celluloid-0.17.3/lib/celluloid/proxy/cell.rb:59:in `terminate'
/app/vendor/bundle/ruby/2.3.0/gems/slack-ruby-client-0.7.9/lib/slack/real_time/concurrency/celluloid.rb:41:in `run_loop'
/app/vendor/bundle/ruby/2.3.0/gems/slack-ruby-client-0.7.9/lib/slack/real_time/concurrency/celluloid.rb:28:in `connect!'
/app/vendor/bundle/ruby/2.3.0/gems/celluloid-0.17.3/lib/celluloid/calls.rb:28:in `public_send'
/app/vendor/bundle/ruby/2.3.0/gems/celluloid-0.17.3/lib/celluloid/calls.rb:28:in `dispatch'
/app/vendor/bundle/ruby/2.3.0/gems/celluloid-0.17.3/lib/celluloid/call/sync.rb:16:in `dispatch'
/app/vendor/bundle/ruby/2.3.0/gems/celluloid-0.17.3/lib/celluloid/cell.rb:50:in `block in dispatch'
/app/vendor/bundle/ruby/2.3.0/gems/celluloid-0.17.3/lib/celluloid/cell.rb:76:in `block in task'
/app/vendor/bundle/ruby/2.3.0/gems/celluloid-0.17.3/lib/celluloid/actor.rb:339:in `block in task'
/app/vendor/bundle/ruby/2.3.0/gems/celluloid-0.17.3/lib/celluloid/task.rb:44:in `block in initialize'
/app/vendor/bundle/ruby/2.3.0/gems/celluloid-0.17.3/lib/celluloid/task/fibered.rb:14:in `block in create'
I, [2017-02-22T19:04:23.382691 #1] INFO -- : BOAT: socket closed, restarting ...
Is this the same as https://github.com/slack-ruby/slack-ruby-client/issues/134?
Yes its the same issue, thnx
I'll close it here, the problem is really in slack-ruby-client.
|
2025-04-01T04:35:31.155783
| 2020-09-18T22:43:13
|
704696026
|
{
"authors": [
"seratch"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10851",
"repo": "slackapi/bolt-python",
"url": "https://github.com/slackapi/bolt-python/issues/93"
}
|
gharchive/issue
|
Test failure due to Chalice's breaking change
$ travis_retry pytest tests/adapter_tests/
============================= test session starts ==============================
platform linux -- Python 3.6.7, pytest-5.4.3, py-1.7.0, pluggy-0.12.0
rootdir: /home/travis/build/slackapi/bolt-python, inifile: pytest.ini
plugins: cov-2.10.1
collected 53 items
tests/adapter_tests/test_aws_chalice.py ....F. [ 11%]
tests/adapter_tests/test_aws_lambda.py ........ [ 26%]
tests/adapter_tests/test_bottle.py ... [ 32%]
tests/adapter_tests/test_bottle_oauth.py . [ 33%]
tests/adapter_tests/test_cherrypy.py .... [ 41%]
tests/adapter_tests/test_cherrypy_oauth.py .. [ 45%]
tests/adapter_tests/test_django.py .... [ 52%]
tests/adapter_tests/test_falcon.py .... [ 60%]
tests/adapter_tests/test_fastapi.py .... [ 67%]
tests/adapter_tests/test_flask.py .... [ 75%]
tests/adapter_tests/test_lambda_s3_oauth_flow.py . [ 77%]
tests/adapter_tests/test_pyramid.py .... [ 84%]
tests/adapter_tests/test_starlette.py .... [ 92%]
tests/adapter_tests/test_tornado.py ... [ 98%]
tests/adapter_tests/test_tornado_oauth.py . [100%]
=================================== FAILURES ===================================
______________________ TestAwsChalice.test_lazy_listeners ______________________
self = <tests.adapter_tests.test_aws_chalice.TestAwsChalice object at 0x7fcdd5cbcfd0>
def test_lazy_listeners(self):
app = App(client=self.web_client, signing_secret=self.signing_secret,)
def command_handler(ack):
ack()
def say_it(say):
say("Done!")
app.command("/hello-world")(ack=command_handler, lazy=[say_it])
input = (
"token=verification_token"
"&team_id=T111"
"&team_domain=test-domain"
"&channel_id=C111"
"&channel_name=random"
"&user_id=W111"
"&user_name=primary-owner"
"&command=%2Fhello-world"
"&text=Hi"
"&enterprise_id=E111"
"&enterprise_name=Org+Name"
"&response_url=https%3A%2F%2Fhooks.slack.com%2Fcommands%2FT111%2F111%2Fxxxxx"
"&trigger_id=111.111.xxx"
)
timestamp, body = str(int(time())), input
chalice_app = Chalice(app_name="bolt-python-chalice")
slack_handler = ChaliceSlackRequestHandler(app=app, chalice=chalice_app)
headers = self.build_headers(timestamp, body)
headers["x-slack-bolt-lazy-only"] = "1"
headers["x-slack-bolt-lazy-function-name"] = "say_it"
request: Request = Request(
method="NONE",
query_params={},
uri_params={},
context={},
stage_vars=None,
is_base64_encoded=False,
body=body,
> headers=headers,
)
E TypeError: __init__() got an unexpected keyword argument 'method'
tests/adapter_tests/test_aws_chalice.py:242: TypeError
=============================== warnings summary ===============================
Category (place an x in each of the [ ])
tests
Requirements
Please read the Contributing guidelines and Code of Conduct before creating this issue or pull request. By submitting, you are agreeing to those rules.
Fixed by https://github.com/slackapi/bolt-python/commit/d1dc479d656e2a5fdf2479ff811f225f32484c8a
|
2025-04-01T04:35:31.161126
| 2016-11-02T19:07:39
|
186892994
|
{
"authors": [
"DEGoodmanWilson",
"MichaelPereira"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10852",
"repo": "slackapi/python-slackclient",
"url": "https://github.com/slackapi/python-slackclient/pull/134"
}
|
gharchive/pull-request
|
typo in joining channel part
[x] I've read and understood the Contributing guidelines and have done my best effort to follow them.
[x] I've read and agree to the Code of Conduct.
[x] I've been mindful about doing atomic commits, adding documentation to my changes, not refactoring too much.
[x] I've a descriptive title and added any useful information for the reviewer. Where appropriate, I've attached a screenshot and/or screencast (gif preferably).
[x] I've written tests to cover the new code and functionality included in this PR.
[x] I've read, agree to, and signed the Contributor License Agreement (CLA).
PR Summary
Fix method name in function call documentation
Related Issues
Test strategy
@DEGoodmanWilson Sorry I thought checking the checkbox in the issue template was enough ^^
@MichaelPereira, you know what? I forgot to ask you to regenerate the HTML documentation. Would you mind following the instructions here, and piling another PR on? http://slackapi.github.io/python-slackclient/faq.html#how-do-i-compile-the-documentation
I am so sorry :bow:
|
2025-04-01T04:35:31.165718
| 2016-12-13T21:47:39
|
195378037
|
{
"authors": [
"Roach",
"codecov-io"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10853",
"repo": "slackapi/python-slackclient",
"url": "https://github.com/slackapi/python-slackclient/pull/150"
}
|
gharchive/pull-request
|
Updated changelog for 1.0.2 and 1.0.3
PR Summary
Updated changelog to acknowledge 1.0.2 release notes and add 1.0.3 notes
✅ I've read and understood the Contributing guidelines
✅ I've read and agree to the Code of Conduct
✅ I've been mindful about doing atomic commits, adding documentation to my changes, not refactoring too much.
✅ I've a descriptive title and added any useful information for the reviewer
✅ I've written tests to cover the new code and functionality included in this PR.
✅ I've read, agree to, and signed the Contributor License Agreement (CLA)
Current coverage is 61.86% (diff: 100%)
Merging #150 into master will not change coverage
@@ master #150 diff @@
==========================================
Files 8 8
Lines 257 257
Methods 0 0
Messages 0 0
Branches 0 0
==========================================
Hits 159 159
Misses 98 98
Partials 0 0
Powered by Codecov. Last update e13d3a8...beea48f
|
2025-04-01T04:35:31.177809
| 2019-03-10T00:24:42
|
419130754
|
{
"authors": [
"davidcelis",
"delanni",
"episod",
"fab-mindflow",
"lawrencegripper",
"okeeffed",
"riffraff",
"seratch",
"simonh1000",
"x3ro"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10854",
"repo": "slackapi/slack-api-specs",
"url": "https://github.com/slackapi/slack-api-specs/issues/18"
}
|
gharchive/issue
|
Block Kit support
Description
The documentation already supports Block Kit. However, the changes to parameters/response are not yet reflected in this repository. Do you have any plans to work on it in the short term? If I can do something for it, I am keen to be involved in it.
What type of issue is this? (place an x in one of the [ ])
[ ] bug
[ ] enhancement (feature request)
[ ] question
[x] documentation related
[ ] testing related
[ ] discussion
Requirements (place an x in each of the [ ])
[x] I've read and understood the Contributing guidelines and have done my best effort to follow them.
[x] I've read and agree to the Code of Conduct.
[x] I've searched for any related issues and avoided creating a duplicate issue.
We want to offer full JSON schema (and a subset within our OpenAPI specs) for Block Kit but aren't yet ready to do so. In the meantime, the newest version of the spec on master includes a bare bones schema for blocks and includes the blocks parameter on methods that support them. Look for deeper support (including the difference between input and output blocks) in the future.
it would be wonderful if you could open source the validation code in https://app.slack.com/block-kit-builder
Curious to see that there hasn't been any notice on this for almost four years. As far as I can see, Block Kit is still the way to go for displaying UI in Slack, right? Having no way to validate it makes for fairly poor developer experience.
it's quite hard to plan an integration and be sure it will work with real data without a schema, it would be great if you could publish it.
+1
It makes a lot of sense to add BlockKit JSON schema to OpenAPI specifications. Any news on this?
Btw, it looks like this JSON schema exists:
https://gist.github.com/renatorib/1fb1a9bd71435b41bee602d15bc56899
Would it be possible to share this JSON schema you probably have officially?
The schema that's mentioned in the gist, and here by @fab-mindflow is incomplete, it cuts out after ~18k characters, so it's an invalid schema right now.
This was first open in 2019. Can someone at Slack take a look please?
Nudging this as it's super painful, I'd like to wrap my block generation client side in a unit test which validates the output against the JSON schema.
Given the tooling on the block kit builder has this to validate against I was hoping it's easy to publish :pray:
TLDR: With some manual effort and the TypeScript type information I generated the schema. Here it is for others to use. Note: it validates an array of blocks for input.
[
{
"type": "header",
"text": {
"type": "plain_text",
"text": "Pipeline Deployment of Heaven Started ๐งต ๐",
"emoji": true
}
}
]
Talking with @itoys he suggested this approach using the typescript definitions for blocks to create a schema.
Doing that over this file gave me the definitions for the block types available :partying_face:
https://github.com/slackapi/node-slack-sdk/blob/main/packages/types/src/block-kit/blocks.ts
I then needed a top-level definition to match the possible different block types; I did this with if-then syntax for JSON schema.
"type": "object",
"properties": {
"type": {
"enum": [
"image",
"context",
"actions",
"divider",
"section",
"input",
"file",
"header",
"video",
"rich_text"
]
}
},
"required": [
"type"
],
"additionalProperties": true,
"allOf": [
{
"if": {
"properties": {
"type": {
"const": "image"
}
}
},
"then": {
"$ref": "#/definitions/ImageBlock"
}
},
{
"if": {
"properties": {
"type": {
"const": "context"
}
}
},
"then": {
"$ref": "#/definitions/ContextBlock"
}
},
Thanks to @okeeffed for the TypeScript to JSON schema code and awesome blog!
Stoked that it managed to help out @lawrencegripper ❤️
That schema is unfortunately also incomplete; many of Slack's blocks have limits on how many elements can appear in a list (e.g. an upper limit of 100 options in a dropdown select's options field) or how long various strings can be (e.g. a 150 character limit for the text element in a header block) or even how many blocks a single definition is limited to.
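For illustration, a minimal sketch of the unit-test idea mentioned above, assuming the generated schema has been saved locally (hypothetical blocks.schema.json path) and the python jsonschema package is available:

import json
import unittest

from jsonschema import Draft7Validator  # pip install jsonschema

with open("blocks.schema.json") as f:   # hypothetical path to the generated schema
    BLOCK_SCHEMA = json.load(f)

def build_header_blocks(text):
    # Stand-in for the client-side block generation under test
    return [{"type": "header", "text": {"type": "plain_text", "text": text, "emoji": True}}]

class BlockSchemaTest(unittest.TestCase):
    def test_generated_blocks_match_schema(self):
        blocks = build_header_blocks("Pipeline deployment started")
        # Raises jsonschema.ValidationError if the blocks don't match the schema
        Draft7Validator(BLOCK_SCHEMA).validate(blocks)

if __name__ == "__main__":
    unittest.main()

As the previous comment notes, a generated schema like this only catches structural mismatches; Slack's length and element-count limits still need separate checks.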
|
2025-04-01T04:35:31.218191
| 2016-03-12T06:26:37
|
140352389
|
{
"authors": [
"codecov-io",
"paulp",
"sellout",
"wemrysi"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10855",
"repo": "slamdata/matryoshka",
"url": "https://github.com/slamdata/matryoshka/pull/13"
}
|
gharchive/pull-request
|
Class-based algebras.
This adds some functionality, but also generalizes things pretty extremely.
I took the approach suggested by @wemrysi and defined a class for the most generic {co}algebras, then used type aliases for the more constrained ones. I ended up having to explicitly define a subclass for GCoalgebra, because scalac doesn't like Id[Id[_]] showing up in types.
We now
can .attribute any algebra;
can .generalize, .generalizeElgot, and .generalizeM any {co}algebra (that doesnโt already have a functor in the generalizing position);
have transApoM (for @drostron);
have totally general {co}algebras (GElgot*lgebraM);
no longer have to define an implicit for each new functor; and
have some additional docs.
I have a couple of issues with this: having to explicitly call an implicit conversion in some cases, and still not being able to get .zip from implicit conversions.
Also, there are more changes to be made to make the types consistent. But, in general, algebras implicitly convert to/from functions fine, while coalgebras, maybe not so much.
Current coverage is 66.11%
Merging #13 into master will increase coverage by +0.88% as of 8afdee4
@@ master #13 diff @@
======================================
Files 17 18 +1
Stmts 233 242 +9
Branches 4 4
Methods 0 0
======================================
+ Hit 152 160 +8
Partial 0 0
- Missed 81 82 +1
Review entire Coverage Diff as of 8afdee4
Powered by Codecov. Updated on successful CI builds.
@sellout :+1: (apologies for the delay, I thought I had confirmed this already!).
@sellout So we're approaching the first anniversary of this PR, not sure if that makes it easier or harder to close ;)
Hah, yes, I was about to go over this and the even older PR that's still open. See if there's anything worth salvaging, or at least open some issues.
I would love to see class based algebras, including composable Bialgebras (and Bimonads) as first-class entities.
I think the smart move would be to reduce the function and concept footprint of the current library and create multiple ways to assemble the arguments to a smaller number of functions. A tiny handful of morphisms dominate all uses, and the details of inference and implicits dominate the ergonomics of using the library. If you look at a generalized hylomorphism from different perspectives (input-centric, algebra-centric, monad-centric, adjoint-fold-centric ...) different interfaces emerge. And being explicit about partial application is a tremendous boon for usage because you can lose a lot of the type lambdas and work with single argument type constructors.
|
2025-04-01T04:35:31.228009
| 2017-12-27T12:40:19
|
284716132
|
{
"authors": [
"Crementif",
"slashiee"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10856",
"repo": "slashiee/cemu_graphic_packs",
"url": "https://github.com/slashiee/cemu_graphic_packs/pull/153"
}
|
gharchive/pull-request
|
Add ClarityGFX pages via submodule, update some text.
The clarity pages work via submodules, which will be updated mostly by MelonSpeedruns' repository, which will then listen to pull requests etc. The url will still belong to this repository, so https://slashiee.github.io/cemu_graphic_packs/clarity/. One minor point is that to retrieve his latest updates, you need to update the submodule via a console with the command git submodule update --remote --merge in the github branch. I'll try to take care of this whenever there's an update.
Due to the _data setup it's really easy to add Presets (https://github.com/MelonSpeedruns/ClarityGFX_Presets/blob/master/_data/presets.yml). The photos can be clicked on to get a magnified look at them. Also, it doesn't have to be game exclusive, it's all placeholders for now. There's also support for multiple pictures in the form of the image slider, but currently we have just used 4 duplicate links.
The website can be previewed at https://crementif.github.io/cemu_graphic_packs/clarity/ .
Also tagging @MelonSpeedruns .
Probably once there's a few proper examples on the page present it can be merged
Hm, think I'll close this down. I think the idea is not very wanted anymore and it's bothering me more to see an open pull request here.
|
2025-04-01T04:35:31.232998
| 2024-01-17T14:09:10
|
2086273744
|
{
"authors": [
"MangelMaxime",
"albertwoo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10857",
"repo": "slaveOftime/Fun.Build",
"url": "https://github.com/slaveOftime/Fun.Build/issues/61"
}
|
gharchive/issue
|
Should collapseGithubActionLogs be included in Fun.Build ?
The collapseGithubActionLogs seems like a handy tool
https://github.com/slaveOftime/Fun.Build/blob/8343307d7ee4933b73c4c346f90d3d8c6d1131f4/build.fsx#L12-L17
Should it be included by default in Fun.Build to avoid redefining it in every project?
Yes, how about we put it into Fun.Build.Github? We can put multiple helpers there.
I added a module called Github for this.
|
2025-04-01T04:35:31.320015
| 2018-12-13T09:32:50
|
390590942
|
{
"authors": [
"Marwes",
"torkleyy"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10858",
"repo": "slide-rs/specs",
"url": "https://github.com/slide-rs/specs/issues/520"
}
|
gharchive/issue
|
Useful resource: Gluon bindings
@Marwes wrote a nice module that allows using Specs with Gluon. I haven't checked it in detail yet, but maybe it's helpful to somebody.
https://github.com/Marwes/shred-example/blob/master/src/gluon_system.rs
Linking this comment as well https://github.com/amethyst/amethyst/issues/271#issuecomment-416617254
Despite the fact that it is extremely rough and has some hilariously unsafe code I think I should still bring up the specs/shred-gluon integration I have been sketching out. Still, it does sort of work and I haven't found any unsolvable blockers yet.
The main issue is that the support for dynamic systems in specs/shred is rather lackluster at the moment; part of it can be solved by exposing a few currently private functions with regard to component extraction (which shouldn't affect the safety in any way). It might very well be possible to provide a better dynamic system abstraction in the crates themselves, which could be another way of fixing it.
On the gluon side the main issue is that I have to cast away the lifetimes to pass in borrowed resources to the virtual machine. I think gluon can be improved to support that however.
https://github.com/Marwes/shred-example/blob/master/src/gluon_system.rs
https://github.com/Marwes/shred-example/blob/master/src/gluon_system.glu
And I'd like to emphasize again that it is very much a "get shit working project", it uses plenty of unsafe to get around lifetime errors which need to get removed or at the very least packaged under a safe interface.
Still, it does at least show the possibility of something nice!
Cleaning up the issue tracker, I'll close this.
I've created awesome-specs for these cases; see https://github.com/slide-rs/awesome-specs/pull/1
|
2025-04-01T04:35:31.324903
| 2023-12-15T07:07:08
|
2043025377
|
{
"authors": [
"KermanX",
"mjblackhorse"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10859",
"repo": "slidevjs/slidev",
"url": "https://github.com/slidevjs/slidev/issues/1221"
}
|
gharchive/issue
|
Failed to resolve import "vite-plugin-vue-server-ref/client" when opening the browser
Describe the bug
I install slidev on my MacBook (macOS 14.0) with following script:
sudo npm i -g @slidev/cli
It can be run. However, when I try to open the browser, it does not work correctly. I find following error information:
14:51:05 [vite] Internal server error: Failed to resolve import "vite-plugin-vue-server-ref/client" from "../../../../../../@server-reactive/drawings?diff". Does the file exist?
Plugin: vite:import-analysis
File: /@server-reactive/drawings?diff:3:84
1 |
2 | import { reactive, watch } from "vue"
3 | import { randId, stringify, parse, define, apply, reactiveSet, clone, diff } from "vite-plugin-vue-server-ref/client"
| ^
4 | const data = reactive({})
5 | let onSet = []
at formatError (file:///usr/local/lib/node_modules/@slidev/cli/node_modules/vite/dist/node/chunks/dep-Pluk1iaB.js:63408:46)
at TransformContext.error (file:///usr/local/lib/node_modules/@slidev/cli/node_modules/vite/dist/node/chunks/dep-Pluk1iaB.js:63402:19)
at normalizeUrl (file:///usr/local/lib/node_modules/@slidev/cli/node_modules/vite/dist/node/chunks/dep-Pluk1iaB.js:61677:33)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async file:///usr/local/lib/node_modules/@slidev/cli/node_modules/vite/dist/node/chunks/dep-Pluk1iaB.js:61831:47
at async Promise.all (index 1)
at async TransformContext.transform (file:///usr/local/lib/node_modules/@slidev/cli/node_modules/vite/dist/node/chunks/dep-Pluk1iaB.js:61752:13)
at async Object.transform (file:///usr/local/lib/node_modules/@slidev/cli/node_modules/vite/dist/node/chunks/dep-Pluk1iaB.js:63703:30)
at async loadAndTransform (file:///usr/local/lib/node_modules/@slidev/cli/node_modules/vite/dist/node/chunks/dep-Pluk1iaB.js:49384:29)
at async viteTransformMiddleware (file:///usr/local/lib/node_modules/@slidev/cli/node_modules/vite/dist/node/chunks/dep-Pluk1iaB.js:58985:32)
14:51:05 [vite] Pre-transform error: Failed to resolve import "vite-plugin-vue-server-ref/client" from "../../../../../../@server-reactive/drawings?diff". Does the file exist?
14:51:05 [vite] Pre-transform error: Failed to resolve import "vite-plugin-vue-server-ref/client" from "../../../../../../@server-reactive/nav". Does the file exist?
To Reproduce
Steps to reproduce the behavior:
Go to the directory containing my markdown file
run 'sudo npc slides myfile.md'
Type 'o' to open the browser
See the error
Desktop (please complete the following information):
OS: macOS 14.0
Browser: Safari
Slidev version: 0.46.0
It seems that this has been fixed via https://github.com/antfu/vite-plugin-vue-server-ref/commit/adb925642e8ee989e55276e8ff16394405dc359a.
|
2025-04-01T04:35:31.327184
| 2022-06-18T16:54:46
|
1275834758
|
{
"authors": [
"lirantal"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10860",
"repo": "slidevjs/slidev",
"url": "https://github.com/slidevjs/slidev/pull/631"
}
|
gharchive/pull-request
|
feat: expose go() for dynamic slide navigation
When users want to extend the existing shortcuts (which are very minimal) with a customized setup, they should still be able to easily access some core navigation elements.
This PR exposes the go() navigation control.
I'll fix the lint issues so we can merge safely.
✅ Lint and Typescript errors fixed
@antfu I'm not entirely sure why, but the 0.33.1 release tag has been created on GitHub, yet the CI for it is failing due to Cypress tests: https://github.com/slidevjs/slidev/runs/6955808143?check_suite_focus=true
|
2025-04-01T04:35:31.357083
| 2023-12-26T10:11:51
|
2056262104
|
{
"authors": [
"matomatusov",
"slipx06"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10861",
"repo": "slipx06/sunsynk-power-flow-card",
"url": "https://github.com/slipx06/sunsynk-power-flow-card/issues/193"
}
|
gharchive/issue
|
Battery Remain Energy kWh
Is there an existing issue for this?
[X] I have searched the existing issues
Current Behavior
addition to the battery
Expected behaviour
Hi,
would it be possible to add the Battery Remain Energy (kWh) entity to the battery part of the card?
Thanks :)
Possible Solutions
No response
Mode
compact, lite, full
Context / Reason
better idea how many kWh are in the battery
Hi,
couldn't the text be moved down a bit?
When using the default card_height there is no space to move this lower (See images below). It looks like you have adjusted the card_height? The only solution is to reposition the battery elements higher.
Default card height
Adjusted card height
Hi,
I have the height of the card in the default state.
would it be possible to move the battery up here?
would it be possible to display the voltage on the battery cells?
see image.
would it be possible to move the battery up here?
Yes I will make a small adjustment
would it be possible to display the voltage on the battery cells?
Not planned but will keep on the feature list
|
2025-04-01T04:35:31.398519
| 2016-04-13T09:08:48
|
147995451
|
{
"authors": [
"ksesong",
"sloria"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10862",
"repo": "sloria/webargs",
"url": "https://github.com/sloria/webargs/issues/103"
}
|
gharchive/issue
|
Using 'use_kwargs' with a callable
When we use the use_kwargs method with a callable (that accepts a request and returns a Schema instance), like in this example with use_kwargs, an error is raised here, as the argmap is now a callable, not a dict.
argmap = <function factory at 0x1090227d0>
def get_field_names_for_argmap(argmap):
if isinstance(argmap, ma.Schema):
all_field_names = set([fname for fname, fobj in iteritems(argmap.fields)
if not fobj.dump_only])
else:
> all_field_names = set(argmap.keys())
E AttributeError: 'function' object has no attribute 'keys'
get_field_names_for_argmap is called by fill_in_missing_args.
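To make the failing pattern concrete, here is a minimal sketch (hypothetical schema and view names) of the callable argmap form described above:

from marshmallow import Schema, fields
from webargs.flaskparser import use_kwargs

class UserSchema(Schema):
    id = fields.Int(required=True)
    name = fields.Str()

def factory(req):
    # Build a Schema instance per request
    return UserSchema()

@use_kwargs(factory)  # argmap is a callable here, not a dict or Schema instance
def profile_view(id, name=None):
    # fill_in_missing_args then reaches get_field_names_for_argmap with the callable
    return {"id": id, "name": name}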
Thanks for reporting. I've released a hotfix for this.
|
2025-04-01T04:35:31.637562
| 2022-03-28T16:43:19
|
1183706177
|
{
"authors": [
"ianlewis",
"laurentsimon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10863",
"repo": "slsa-framework/slsa-github-generator",
"url": "https://github.com/slsa-framework/slsa-github-generator/issues/1"
}
|
gharchive/issue
|
Create a provenance-only reusable workflow
cc @ianlewis
I think we can pretty much close this since we have https://github.com/slsa-framework/slsa-github-generator/blob/main/.github/workflows/slsa2_provenance.yml
|
2025-04-01T04:35:31.655589
| 2016-02-24T10:18:49
|
136016105
|
{
"authors": [
"drvinceknight",
"sluenenglish"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10864",
"repo": "sluenenglish/pyphase",
"url": "https://github.com/sluenenglish/pyphase/issues/1"
}
|
gharchive/issue
|
Ways to sample
Brett's Algorithm
Simulate to sample
Alternate sampling methods
https://cran.r-project.org/web/packages/PhaseType/PhaseType.pdf
|
2025-04-01T04:35:31.657183
| 2021-07-27T08:57:45
|
953660046
|
{
"authors": [
"nsorros"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10865",
"repo": "slundberg/shap",
"url": "https://github.com/slundberg/shap/pull/2110"
}
|
gharchive/pull-request
|
Return matplotlib plot when show False
The idea of this PR is to return the matplotlib plot when show is False so the user can further modify or save it. This could extend to all plots that have a show param.
To be honest, this is what show=False advertises it does, so I might as well be missing something.
Closing as it might not be relevant anymore. The intention was to be able to manipulate the plot and embed it in an app like Streamlit. Kind of similar to how text has a display argument.
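Not the actual shap change, just a minimal sketch of the pattern the PR describes: returning the active figure when show is False so the caller can save or embed it (e.g. st.pyplot(fig) in Streamlit):

import matplotlib.pyplot as plt

def example_plot(values, show=True):
    # ... the real drawing logic would go here ...
    plt.plot(values)  # placeholder
    if show:
        plt.show()
        return None
    return plt.gcf()  # hand the figure back for further modification

fig = example_plot([1, 2, 3], show=False)
fig.suptitle("Summary")       # caller can tweak it...
fig.savefig("summary.png")    # ...save it, or pass it to st.pyplot(fig)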
|
2025-04-01T04:35:31.658072
| 2019-04-12T19:59:09
|
432722161
|
{
"authors": [
"slundberg",
"stillmatic"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10866",
"repo": "slundberg/shap",
"url": "https://github.com/slundberg/shap/pull/552"
}
|
gharchive/pull-request
|
make shap play nicer with sklearn<0.18
an alternative is to pin sklearn >= 0.18, otherwise there are some import errors with older versions
Thanks!
|
2025-04-01T04:35:31.670565
| 2018-04-12T09:29:21
|
313646569
|
{
"authors": [
"LK1711",
"MXLHELLO",
"bujingdexin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10867",
"repo": "smallcorgi/Faster-RCNN_TF",
"url": "https://github.com/smallcorgi/Faster-RCNN_TF/issues/286"
}
|
gharchive/issue
|
Training on multiple GPUs
Has anyone successfully trained on multiple GPUs? If yes, what changes did you make? I added a split to the device IDs in ./tools/train_net.py but that doesn't help. Is there a different approach to deal with this issue?
Thank you for the guidance.
I have some questions to ask you. Please contact me. My email is 478137870@qq.com.
Have you solved running it on multiple GPUs?
|
2025-04-01T04:35:31.743210
| 2021-09-30T15:24:34
|
1012316991
|
{
"authors": [
"minhhang107",
"smaranjitghose"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10868",
"repo": "smaranjitghose/awesome-portfolio-websites",
"url": "https://github.com/smaranjitghose/awesome-portfolio-websites/issues/1066"
}
|
gharchive/issue
|
Reference Page: UI
The background color of the cards should be the same as the rest of the page to differentiate it from the navbar!
Hi, can I take on this issue?
|
2025-04-01T04:35:31.796906
| 2017-09-19T19:51:56
|
258943241
|
{
"authors": [
"brettywhite",
"codecov-io"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10869",
"repo": "smartdevicelink/sdl_android",
"url": "https://github.com/smartdevicelink/sdl_android/pull/611"
}
|
gharchive/pull-request
|
WIP: auto set correlation ID
Fixes #604
This PR is ready for review.
Risk
This PR makes minor API changes.
Testing Plan
This was tested by removing manually set correlation IDs in a test app's proxy and ensuring that they were auto set correctly by the library
Summary
In RPCRequest.java, edit getCorrelationID() and check for the key in the RPCMessage. If it was not set manually, then set it.
CLA
[X] I have signed the CLA
Codecov Report
Merging #611 into develop will increase coverage by 0.03%.
The diff coverage is 100%.
@@ Coverage Diff @@
## develop #611 +/- ##
=============================================
+ Coverage 39.92% 39.96% +0.03%
- Complexity 2346 2349 +3
=============================================
Files 329 329
Lines 14230 14232 +2
Branches 1477 1477
=============================================
+ Hits 5682 5688 +6
+ Misses 8343 8337 -6
- Partials 205 207 +2
Impacted Files
Coverage Δ
Complexity Δ
...ain/java/com/smartdevicelink/proxy/RPCRequest.java
80% <100%> (+3.07%)
6 <2> (+1)
:arrow_up:
...m/smartdevicelink/util/CorrelationIdGenerator.java
44.44% <0%> (+44.44%)
2% <0%> (+2%)
:arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 96eb784...f9ae297. Read the comment docs.
|
2025-04-01T04:35:32.007208
| 2021-05-28T07:42:12
|
904868396
|
{
"authors": [
"JulianKast",
"ashwink11",
"joeygrover",
"jordynmackool"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10870",
"repo": "smartdevicelink/sdl_java_suite",
"url": "https://github.com/smartdevicelink/sdl_java_suite/issues/1701"
}
|
gharchive/issue
|
Android SDL Apps not found on IVI
Bug Report
Android SDL Apps not found on IVI
Frequency
The issue is not always reproducible, but it was reported multiple times by users.
Steps to recover
The issue recovers if we launch the app in the foreground and reconnect Bluetooth.
I believe, sometimes the App misses Bluetooth intents in mobile devices and hence the issue is seen.
Reproduction Steps
In both methods below, the App did not receive Bluetooth connection intents and does not register on IVI.
Method 1:
Connect Bluetooth to IVI
Install HelloSDL App on a mobile device.
Launch app on a mobile device
Observation: App not registered on IVI since Bluetooth connection intents not received by App.
Method 2:
Launch HelloSDL App on a mobile device.
Force Stop App on the device.
Connect Bluetooth to IVI
Relaunch HelloSDL App on a mobile device
Observation: Bluetooth connection events not received by App. App not registered on IVI.
Expected Behavior
The app should be registered on IVI
Observed Behavior
The app does not register and needs to be relaunched on a mobile device and reconnect Bluetooth.
OS & Version Information
Android Version: Android S8 Android OS 9.0
SDL Android Version: SDL Library 5.1.1 and latest from GitHub Develop branch
Testing Against SYNC3
Test App: HelloSDL App build variant multi_sec_offDebug
Potential fix: SDL library could retrieve connected Device Mac address in case it misses Bluetooth connection Intents. Please find attached patch file. The patch has an implementation to retrieve the BT Mac address if it is not available.
potential_fix.patch.txt
When the mobile device had another app with an old library and connectedDevice in SDLDeviceListener was NULL (the condition when the app misses BT connection intents), none of the SDL apps were registered on the IVI since no app started the SDL router service. The fix attached mitigates this issue.
2021-05-28 09:49:18.173 23842-23842/com.sdl.hellosdlandroid I/Sdl Broadcast Receiver: 5.1.1: Check for local router
2021-05-28 09:49:20.353 23842-23842/com.sdl.hellosdlandroid I/Sdl Broadcast Receiver: 5.1.1: : This app's package: com.sdl.hellosdlandroid
2021-05-28 09:49:20.354 23842-23842/com.sdl.hellosdlandroid I/Sdl Broadcast Receiver: 5.1.1: : Router service app's package: com.sdl.hellosdlandroid
2021-05-28 09:49:20.355 23842-23842/com.sdl.hellosdlandroid I/Sdl Broadcast Receiver: 5.1.1: Not starting device listener, bluetooth device is null and other SDL apps installed.
The issue exists if the app was installed after BT connection or if the app was force closed and not opened until after BT connection. If you believe the patch provided will solve the issue please open a PR, it would be greatly appreciated.
Hi @joeygrover, I have created a PR. Could you please review it?
Thanks for letting us know that this PR is ready for Livio review. Livio is currently testing for the upcoming 5.2 release on June 30, 2021.
We will then begin reviewing this PR when time allows. Thanks!
One more way to reproduce this issue.
Environment:
IVI: Ford SYNC System
Mobile Device: Samsung S8 with Android 9
Apps: WebEx- latest versions from Play store, SDL Test app with latest SDL library - sdl_android:5+
Steps to reproduce.
Connect Ford SYNC and mobile device via Bluetooth.
Let SDL apps show on Mobile apps screen on Phone.
Restart the mobile device.
the Mobile device will automatically connect Bluetooth with SYNC
Observation: No apps registered on SYNC
Closing issue as we believe that this issue has already been fixed. When SDLDeviceListener was implemented in 4.12.0, it did require you to cycle Bluetooth if the app was installed after the initial Bluetooth connection was established and therefore did not receive a Bluetooth connection intent while SdlDeviceListener was being used. In 5.2.0 we fixed that issue with this PR: #1685. You may have to have the tdk perform an SDP query by pressing the "Find Mobile Apps" button within a timeout period of 15 seconds of when the app is installed.
|
2025-04-01T04:35:32.013930
| 2018-07-20T14:15:21
|
343124156
|
{
"authors": [
"AByzhynar",
"AStasiuk",
"robinmk"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10871",
"repo": "smartdevicelink/sdl_requirements",
"url": "https://github.com/smartdevicelink/sdl_requirements/issues/95"
}
|
gharchive/issue
|
Is Resumption DB used ?
At the moment SDL has the following flag in the smartdevicelink.ini file: UseDBForResumption
But we never use it.
SDL always uses app_info.dat file (with a simple json structure) for resumption purposes.
Please clarify if we still need to keep this flag, as well as the SDL functionality related to resumption with a SQL database.
@AByzhynar
I believe there is no negative impact on keeping this flag. If that is true, then I would recommend keeping this flag since there is no proposal which talks about removing this flag.
At a later point, the project maintainers can do a clean up and this can be brought up as an item for cleanup.
What do you think?
@robinmk From this point of view - I agree.
But there is another question : Do we need to implement resumption during Low Voltage for usage with DB? I will be glad to do it, just don't want to make redundant work which can be never used. What do you think?
Do we need to implement resumption during Low Voltage for usage with DB?
I think I may have misunderstood your question, but how else can resumption data be saved without using DB?
@robinmk At this moment SDL creates the app_info.dat file (it is just a simple txt file which contains all resumption-related data in JSON format). It is easy and fast to add or read data from this file when it is needed.
But there is another possibility for resumption: create a resumption.db file, which is a SQL database, append or remove data with SQL transactions, commit these transactions, etc. But I don't remember that this possibility was ever used.
Therefore my question: do we need to update resumption during Low Voltage only for app_info.dat (txt with JSON format), which is used by default, or do we also have to implement resumption during Low Voltage for the SQL database path, which is not used now?
Do we have an instance in the open where SQL is used?
@robinmk If globally - we used a separate thread in SDL which periodically saves the Policy DB from RAM to the file system.
Here we cannot avoid it: we have lots of different tables and lots of cross dependencies between them. Therefore we need to use a SQL database in the case of Policy.
In case of resumption there is simple structure which is wrapped into JSON and placed to txt file.
@robinmk, did you have a chance to review Andrey's answers?
Yes, Andrey and I have been discussing over slack.
@robinmk, suggested to put here, in GitHub your decisions. It will be easy for all to track them.
|
2025-04-01T04:35:32.016457
| 2023-03-04T23:11:25
|
1609980969
|
{
"authors": [
"rudigier"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10872",
"repo": "smartive-education/app-pizza-hawaii",
"url": "https://github.com/smartive-education/app-pizza-hawaii/pull/32"
}
|
gharchive/pull-request
|
Feature/general cleanup
remove Tailwind CSS / add Tailwind config from component lib
config and run lint & prettier
improve some templates
... also add the user page ProfileHeader and fetch some posts
Thanks for feedback! Fixed the mentioned things.
|
2025-04-01T04:35:32.017989
| 2020-12-16T14:37:12
|
768948272
|
{
"authors": [
"ilkkao",
"sjagoe"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10873",
"repo": "smartlyio/kubernetes-canary-action",
"url": "https://github.com/smartlyio/kubernetes-canary-action/pull/85"
}
|
gharchive/pull-request
|
Only canary deploys should be blocked if there are multiple images running
Normal deploy replaces all running pods, it doesn't matter what the initial state is.
I added the 'More than one app image revision running. Canary deploy would modify non-canary pods. Not safe to proceed.' warning initially because the canary deploy then can't be sure what the current image is for non-canary pods.
In any case, I'll leave this PR here. Please feel free to replace or close it.
Thanks, I'll build out some tests around this and get a release.
|
2025-04-01T04:35:32.021677
| 2015-07-03T14:43:06
|
92888747
|
{
"authors": [
"Flatroy",
"HamidRezaSepehr"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10874",
"repo": "smashingboxes/OwlCarousel2",
"url": "https://github.com/smashingboxes/OwlCarousel2/issues/921"
}
|
gharchive/issue
|
Right to Left Support
It would be great if we could use Owl carousel for Right to Left websites, right now I added dir="rtl" to the html tag and Owl stopped working!
Thanks guys
You just need to add rtl:true and Owl will change direction from right to left.
http://www.owlcarousel.owlgraphic.com/demos/rtl.html
Cool, I'll check it out. Thanks buddy.
|
2025-04-01T04:35:32.039207
| 2024-11-20T23:45:25
|
2677569622
|
{
"authors": [
"leahneukirchen",
"sminez"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10876",
"repo": "sminez/ad",
"url": "https://github.com/sminez/ad/issues/52"
}
|
gharchive/issue
|
Buffer confusion after :dc
Describe the bug
When a column is deleted, the other column is shown but ad thinks the deleted buffer is still active.
To Reproduce
Steps to reproduce the behavior:
% ad README.md
:O LICENSE
:dc
Now ad shows LICENSE - 19 lines, but still displays README.md. The cursor can't be moved more than 19 lines.
Expected behavior
:dc deletes the column and displays the other buffer
Versions & OS Details
OS: Linux
Distribution Void
ad Version: d2494246f73e255da387c98d9183021907f136be
hi @leahneukirchen, thank you for raising this! It looks like it also affected closing windows, and the fix is the same in both cases. The latest commit on main now correctly focuses the active buffer internally.
|
2025-04-01T04:35:32.063240
| 2023-06-19T10:15:38
|
1763220447
|
{
"authors": [
"cyling250",
"smousavi05"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10877",
"repo": "smousavi05/STEAD",
"url": "https://github.com/smousavi05/STEAD/issues/21"
}
|
gharchive/issue
|
Acceleration Data Acquisition Method
Thank you very much for your contribution to building the STEAD dataset, which I also intend to use.
For my work, we are more concerned with the acceleration waveform of ground motion, and you also give the method of obtaining the acceleration waveform in the article.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
inventory = client.get_stations(network=quake_wave_set.attrs['network_code'],
station=quake_wave_set.attrs['receiver_code'],
starttime=UTCDateTime(quake_wave_set.attrs['trace_start_time']),
endtime=UTCDateTime(quake_wave_set.attrs['trace_start_time']) + 60,
loc="*",
channel="*",
level="response")
st = make_stream(quake_wave_set)
st = st.remove_response(inventory=inventory, output="ACC", plot=False)
When I use this method to acquire acceleration waveforms, I get the data from IRIS over the network, which is simple and feasible for a small number of seismic waveforms. But when I request more, IRIS rejects my requests. I also realized that for data on the order of millions of traces, it is almost impossible this way.
I would like to ask if you have ground motion acceleration waveform data, or could you please give me another way to get ground motion acceleration waveform for the entire data set?
Thank you so much!
Hello,
I don't have acceleration data. The reason for this issue is that the IRIS server has a cap on the number of requests per person per unit time. If you simply add a waiting period (like 10 seconds) between each time you run the script, you won't get that error message.
Regards,
Mostafa
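A minimal sketch of that waiting-period approach, reusing the original snippet (make_stream and the wave_sets iterable are assumed to come from reading the STEAD HDF5 chunks; the 10-second pause is just an example value):
import time
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
for quake_wave_set in wave_sets:  # hypothetical iterable of STEAD HDF5 datasets
    inventory = client.get_stations(network=quake_wave_set.attrs['network_code'],
                                    station=quake_wave_set.attrs['receiver_code'],
                                    starttime=UTCDateTime(quake_wave_set.attrs['trace_start_time']),
                                    endtime=UTCDateTime(quake_wave_set.attrs['trace_start_time']) + 60,
                                    loc="*", channel="*", level="response")
    st = make_stream(quake_wave_set)  # make_stream as defined for the original snippet
    st = st.remove_response(inventory=inventory, output="ACC", plot=False)
    time.sleep(10)  # pause between requests so the per-user IRIS cap is not hit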
Thank you very much, this is helpful.
|
2025-04-01T04:35:32.069531
| 2016-11-01T18:35:04
|
186611537
|
{
"authors": [
"7rulnik",
"meyer"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10878",
"repo": "smyte/jsxstyle",
"url": "https://github.com/smyte/jsxstyle/pull/43"
}
|
gharchive/pull-request
|
Set syntax highlight to jsx
Github supports jsx highlighting. So, here we go!
@7rulnik thanks, looks a lot nicer ๐
|
2025-04-01T04:35:32.083515
| 2023-03-06T14:34:19
|
1611562257
|
{
"authors": [
"dlaehnemann",
"ncherric",
"nickp60"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10879",
"repo": "snakemake/snakemake",
"url": "https://github.com/snakemake/snakemake/issues/2154"
}
|
gharchive/issue
|
WorkflowError: Cannot parse runtime value into minutes for setting runtime resource: 00:30:00
Snakemake version
7.24.0
Describe the bug
I have been including runtime in the resources directive in each of my rules in my workflows for batch submission. I format the time in hh:mm:ss, e.g. runtime="00:30:00" for 30 minutes. This worked as of Snakemake version 7.19.0; however, in a recent install with Snakemake version 7.24.0, I get the following error when attempting a dry run:
$ snakemake --version
7.24.0
$ snakemake -np
Building DAG of jobs...
WorkflowError:
Cannot parse runtime value into minutes for setting runtime resource: 00:30:00
$
Additional context
Looking at the changelog, it appears this likely occurred from a feature added in 7.20.0:
allow for human friendly resource definitions (e.g. mem="5GB", runtime="1d") (#1861) (24610ac)
I hope this helps with possibly including a revision to still accommodate the hh:mm:ss runtime format feature in future minor version updates. If the intent is to deprecate this format just let me know.
Thank you!
Noah
Thanks for reporting this, and this has also been reported in the respective pull request you cite. I've also run into the same problem, so I feel your pain, here.
But there's a good reason not to allow this kind of specification, which is portability of snakemake workflows. Namely, the time format runtime="00:30:00" was previously just passed on as is to slurm, and slurm understands this. However, any other cluster system might or might not understand this time format. Thus, the same workflow would not execute on another cluster system and would thus not be portable.
So, as annoying as this might seem with your current error, I'd suggest to simply move to the new timespan specification syntax and take the extra workflow portability as a useful extra.
Also, the new (portable) specification is based on an external library that consistently parses time spans, see:
https://humanfriendly.readthedocs.io/en/latest/api.html?highlight=parse_timespan#humanfriendly.parse_timespan
This library distinguishes between the specification of timespans as <number><time_unit> and actual timepoints during a day as <hour>:<minute>:<seconds>, so there's a finer nit-picking point to be made, here, as well... :sweat_smile:
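For illustration, a quick sketch of that distinction (assuming the humanfriendly package is installed):
from humanfriendly import parse_timespan

print(parse_timespan("30m"))   # 1800.0 seconds
print(parse_timespan("1d"))    # 86400.0 seconds
try:
    parse_timespan("00:30:00") # clock-style timepoints are not accepted as timespans
except Exception as err:
    print(err)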
Thanks David! I will go ahead and adapt my workflows to the new specification, since it will add portability for users that submit to Slurm and/or other job schedulers on clusters. It might be helpful to add a line in the changelog for future readers about how the new human-friendly resource definitions unintentionally deprecated the old hh:mm:ss time format, and that moving away from the old format is actually advantageous because the new format (e.g. 30m) is portable across job schedulers and cluster systems.
Putting that info in the changelog is a very good idea! Would you be willing to put in a pull request that quickly explains this in one or two sentences right here:
https://github.com/snakemake/snakemake/blob/1fb3cef54dfe877c944b8f38cf4ca78ebe91ebe9/CHANGELOG.md?plain=1#L113
Hi all, I'm piggybacking off this issue as it is related. With the change from 7.19 to 7.20 I get a similar issue but for cases where I don't define a runtime:
Building DAG of jobs...
WorkflowError:
Cannot parse runtime value into minutes for setting runtime resource: <TBD>
I think this is because the default resources only define mem_mb, disk_mb, and tmp.
I'm on LSF, so not sure if this is related to that or to core Snakemake but I'm mentioning it here as the lsf snakemake-profile repo lacks maintainers.
Is there a solution to this other than setting a default with --default-resources?
Correction -- The error I am getting with 7.20 that I wasn't getting with 7.19 is due to using a lambda function in the resources. Is there a workaround?
Can you describe the problem in more detail? Does your lambda function return an integer (specifying the runtime in minutes)?
Unless you resolved this, it might also make sense to open a separate issue for providing a more detailed account of the problem (and then hopefully a resolution). You can always mention the comments over here.
|
2025-04-01T04:35:32.092675
| 2024-04-13T20:33:26
|
2241772709
|
{
"authors": [
"Benkendorfer",
"KatharinaHoff",
"amadeovezz",
"mhjiang97",
"tinu-t",
"weber8thomas"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10880",
"repo": "snakemake/snakemake",
"url": "https://github.com/snakemake/snakemake/issues/2808"
}
|
gharchive/issue
|
Snakemake 8.9.0 on a singularity slurm cluster combination error
I am trying to run Snakemake 8.9.0 using Singularity on a Slurm cluster,
and I constantly get this error; I'm not sure what is wrong with the syntax.
Command:
snakemake --cores 3 --configfile config.yaml --executor cluster-generic --cluster-generic-submit-cmd 'sbatch -t {params.res_time} --cpus-per-task {threads} --mem-per-cpu {params.res_mem} -o {params.lsf_log}' -j 20 -k -p --notemp --latency-wait 24 --rerun-incomplete --use-singularity --singularity-args '--bind /cluster/snakemake-path'
Error:
__main__.py: error: argument --apptainer-args/--singularity-args: expected one argument
The apptainer parameters had to be written in a YAML file and passed via the --profile argument, as shown below:
snakemake --cores 3 --configfile config.yaml --executor cluster-generic --cluster-generic-submit-cmd 'sbatch -t {params.res_time} --cpus-per-task {threads} --mem-per-cpu {params.res_mem} -o {params.lsf_log}' -j 20 -k -p --notemp --latency-wait 24 --rerun-incomplete --profile ./profile/apptainer
cat ./profile/apptainer
use-singularity: True
singularity-args: "\"--bind /cluster/snakemake/\""
I am also running into this error while trying to use singularity-args with slurm:
__main__.py: error: argument --apptainer-args/--singularity-args: expected one argument
The above solution did not work and I've also tried passing it through the command line:
--use-singularity --singularity-args "--bind /some/path"
I'm not sure if it is an issue with the snakemake-executor-plugin-slurm or with snakemake itself. I noticed this regression after I upgraded from 8.4.2.
Additional details:
Config
executor: slurm
# Slurm specific
default-resources:
slurm_account: "some_account"
slurm_partition: "some_partition,common"
runtime: 60
mem_mb: 4000
nodes: 1
# Non-standard resource specifications
slurm_extra: "'--exclude=some_node'"
jobs: 1
printshellcmds: True
restart-times: 0
use-singularity: True
singularity-args: "--bind some/path"
Minimal snakefile
rule all:
output:
"test_output.txt"
singularity:
"/path/to/v4/RSingleCell.sif"
shell:
"ls /mnt > {output}"
Command
snakemake --profile ./profiles/generic/
My current versions:
snakemake-executor-plugin-slurm 0.4.4
snakemake-executor-plugin-slurm-jobstep 0.2.1
snakemake-interface-common 1.17.2
snakemake-interface-executor-plugins 9.1.1
snakemake-interface-report-plugins 1.0.0
snakemake-interface-storage-plugins 3.2.0
Any help is appreciated!
Same issue using that command on snakemake 8.10.7:
snakemake --sdm conda apptainer --conda-frontend mamba --forceall -j1 --executor slurm --set-resources preprocess_data:constraint="rome" --apptainer-args "--bind /g:/g"
Same issue to me using snakemake version 8.10.7 and --executor cluster-generic
Same issue with snakemake version 8.11.0 and --executor slurm.
I'm also having this problem in version 8.11.6 but the solution suggested by @tinu-t does not solve the problem. When I pass
singularity-args: "\"--bind /sdf\""
in my profile I get a crash with error
SLURM job submission failed. The error message was sbatch: error: Script arguments not permitted with --wrap option
Thank you @johanneskoester !
|
2025-04-01T04:35:32.097160
| 2024-04-18T07:12:33
|
2249933885
|
{
"authors": [
"cademirch",
"fgvieira"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10881",
"repo": "snakemake/snakemake",
"url": "https://github.com/snakemake/snakemake/issues/2822"
}
|
gharchive/issue
|
Incomplete DAG with checkpoints
Snakemake version
v8.10.7
Describe the bug
DAG is incomplete when there are checkpoints, since it only plots the DAG for the rules upstream of the checkpoint.
Minimal example
On test snakemake/tests/test_checkpoint_allowed_rules, if we run snakemake c --dag | dot -Tsvg > dag.svg, we get the DAG:
Where I think we would also expect rule b going into rule c.
Seems slightly related to #2824. I think this makes sense since B doesn't exist in the DAG until after A has been run.
Yes, but then it should work if a has been run, but it does not. As it is now:
if a has not been run: a -> c
if a has been run: b -> c
So it is never possible to get the full DAG.
|
2025-04-01T04:35:32.105138
| 2021-02-03T21:03:26
|
800698583
|
{
"authors": [
"dlaehnemann",
"lczech"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10882",
"repo": "snakemake/snakemake",
"url": "https://github.com/snakemake/snakemake/issues/864"
}
|
gharchive/issue
|
Versions, citations and references for tools being used
Is your feature request related to a problem? Please describe.
I am currently writing a manuscript that uses snakemake with a lot of wrapper scripts, and have to go through all tools individually to get (a) the exact version of the tool, and (b) then look up its reference for the bibliography.
My process is:
Go through all my snakemake files and all rules and tools being used. For each of them:
Find the wrapper version:
wrapper: "0.51.3/bio/trimmomatic/pe"
Find the documentation at that exact version of the wrapper: https://snakemake-wrappers.readthedocs.io/en/0.51.3/wrappers/trimmomatic/pe.html
Get the version of the tool being used from there.
Do a Google Scholar search for the tool to get its preferred citation/reference.
Then, put this into text: "We used trimmomatic v0.36 [1]", etc...
This is repetitive work, not only for me, but for everyone who is in a similar situation. The two pieces of information that I am interested in here are the version of the tool being used, and its reference.
Describe the solution you'd like
Ideally, snakemake could offer some command that takes a config file/workflow, and figures out all tools being used in that exact workflow configuration (skipping any rules that are not being invoked!). It then automatically collects the needed information, and prints it in some format.
Tool version information is already available via the environment.yaml in the wrapper, and citation or links to the tool websites could be added (optionally and bit by bit for each tool) to the meta.yaml and read from there.
I guess that this is only feasible for wrapped tools, and that I still have to go through my individual (shell/script based) rules. But the above snakemake command then could at least also list all those rules, so that I do not forget about them. Basically, it prints out a linear list of the dependency graph (similar to the normal terminal output when running snakemake, but only once for each rule), but with all tools and references listed.
So, my ideal solution would look something like this:
$ snakemake [...] --print-tools # or whatever you would want to call this command
Rule `trim_read`:
- Uses wrapper `TRIMMOMATIC PE`
- Tool: trimmomatic v0.36
- Conda: https://anaconda.org/bioconda/trimmomatic
- URL: http://www.usadellab.org/cms/index.php?page=trimmomatic
- Reference: Bolger, A. M., Lohse, M., & Usadel, B. (2014). Trimmomatic: A flexible trimmer for Illumina sequence data. Bioinformatics, 30(15), 2114โ2120. https://doi.org/10.1093/bioinformatics/btu170
Rule `my_rule`:
- Uses shell-based command
Rule `my_other_rule`:
- Uses python-based command
....
That would be awesome!
Describe alternatives you've considered
I thought that a simple overview table here in the repo might also be a solution, but that would get too messy rather quickly, with all wrappers in different versions etc. It would not really speed things up, would be extra maintenance work, and would probably be outdated all the time. No, not a good alternative.
I have also opened this issue in the snakemake wrapper repository. Feel free to close one of them - I wasn't quite sure what the best place to put this is.
Hi @lczech ,
I am just seeing this now, sorry for the slow response. An overview table of all the tools used and the respective versions sounds like the perfect thing to add to snakemake --report report.zip when creating a snakemake report? Even if you do not mark any specific rule output for inclusion in the report, the basic report will give you an overview of all the rules executed in DAG-form and run time statistics. But it should also be possible to systematically parse the (latest) conda environments and write a table with all the versions. Optimally, that table should be exportable into different formats (tsv, some open spreadsheet format). Sadly, I don't currently have time to pick this up, but I'll mention it in my group, and maybe someone else who sees this (including you;) can find some time to pick this issue up!
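As a rough sketch of that idea - collecting tool versions by parsing the wrapper environment.yaml files into a table (the directory layout and the exact pinning format are assumptions, not the snakemake-wrappers API):
import csv
from pathlib import Path
import yaml  # pyyaml

def collect_tool_versions(wrapper_root: Path):
    """Yield (wrapper, tool, version) for every environment.yaml under wrapper_root."""
    for env_file in wrapper_root.rglob("environment.yaml"):
        env = yaml.safe_load(env_file.read_text())
        for dep in env.get("dependencies", []):
            # conda pins usually look like "trimmomatic =0.36"; skip nested pip sections
            if isinstance(dep, str) and "=" in dep:
                tool, _, version = dep.partition("=")
                yield env_file.parent.name, tool.strip(), version.strip("= ")

with open("tool_versions.tsv", "w", newline="") as out:
    writer = csv.writer(out, delimiter="\t")
    writer.writerow(["wrapper", "tool", "version"])
    writer.writerows(collect_tool_versions(Path("snakemake-wrappers/bio")))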
Also, as I think this best fits here, I'll close it over at the snakemake wrappers. But feel free to reopen, if something on the snakemake wrappers repo also needs to change.
|
2025-04-01T04:35:32.108338
| 2023-01-18T10:32:12
|
1537762334
|
{
"authors": [
"mwtoews"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10883",
"repo": "snakemake/snakemake",
"url": "https://github.com/snakemake/snakemake/pull/2067"
}
|
gharchive/pull-request
|
refactor: move static metadata (again) to setup.cfg
Description
This moves the static project metadata (again) to setup.cfg, which has had long-standing support with setuptools.
This resolves an issue with limitations of project metadata in pyproject.toml by release-please.
QC
[ ] The PR contains a test case for the changes or the changes are already covered by an existing test case.
[ ] The documentation (docs/) is updated to reflect the changes or this is not necessary (e.g. if the change does neither modify the language nor the behavior or functionalities of Snakemake).
Two subtle changes to note:
description was reworded to be shorter, as this must fit on one line
description content type was changed to text/plain
Versioneer upgrade and configuration in pyproject.toml was kept.
|
2025-04-01T04:35:32.117599
| 2019-05-31T12:29:50
|
450775437
|
{
"authors": [
"caseyjhol",
"mshalomdave"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10884",
"repo": "snapappointments/bootstrap-select",
"url": "https://github.com/snapappointments/bootstrap-select/issues/2290"
}
|
gharchive/issue
|
Click Optgroup and select / deselect Options for Bootstrap 4.0
Hello, I am trying to create a Bootstrap multi-select using Bootstrap 4.0 whereby I can select the optgroups and hence the options within those optgroups are selected. I have tried various ways, including from Stack Overflow, and they seem to be either outdated or not working. Could you help? Using Bootstrap-select version 1.13.9.
Here is my web page
Bootstrap-select Tests (Bootstrap 4)
Apple
Orange
Corn
Carrot
Duplicate of #737. Please 👍 to vote and subscribe for updates.
|
2025-04-01T04:35:32.119621
| 2022-10-21T01:45:52
|
1417571841
|
{
"authors": [
"NicolasCARPi",
"njzjz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10885",
"repo": "snapappointments/bootstrap-select",
"url": "https://github.com/snapappointments/bootstrap-select/issues/2793"
}
|
gharchive/issue
|
I wonder whether there is any limitation on the size of options. The documentation does not document it.
I wonder whether there is any limitation on the size of options.
You found it, thanks! Now we know.
|
2025-04-01T04:35:32.164032
| 2024-05-30T08:38:09
|
2325084907
|
{
"authors": [
"NucciTheBoss",
"jnsgruk",
"techtonik"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10886",
"repo": "snapcrafters/ci",
"url": "https://github.com/snapcrafters/ci/pull/42"
}
|
gharchive/pull-request
|
fix: correctly substitute testing instructions in issue template
Environment variable substitution in the call for testing stopped working since we factored out the testing instructions. This fix ensures that the template gets the various sections substituted out before rendering, to ensure that all the relevant fields get templated correctly.
See example output here: https://github.com/jnsgruk/ci-testing/issues/51
Why inject template-testing into template.md? Maybe allow to replace default template.md with own path to template.
Why inject template-testing into template.md? Maybe allow to replace default template.md with own path to template.
I think the idea is that you'll eventually be able to set a custom call for testing template. This current implementation is more or less a stop-gap for a rendering issue that I encountered in the ondemand snap: https://github.com/charmed-hpc/ondemand-snap/issues/3
Ideally, if/when we switch over to a CI tool, it will be much easier for folks to just set a template. For example, if we write something in Go, we can use the text/template package. That will make it really easy for folks to set their own template.
Well, kind of. I want to steer away from a system where every single snap has a totally different template. Consistency here is a good thing, but we do need a way to customise testing instructions specifically - a lot of the other stuff won't change between snaps :)
Does Go text/template allow block overriding? Like in Jinja https://jinja.palletsprojects.com/en/3.0.x/templates/#template-inheritance
Yep! Go's text/template supports inheritance! https://towardsdev.com/go-text-templates-advanced-concepts-for-dynamic-content-d9e4067e91b7
Yep! Go's text/template supports inheritance! https://towardsdev.com/go-text-templates-advanced-concepts-for-dynamic-content-d9e4067e91b7
Am I right that this is for HTML only?
Is there a way to specify default values for define? It looks like a child template cannot inherit from the parent, so there is separate Go code that calculates the values for the base template.
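For comparison, the Jinja behaviour referenced above - a block defined in the parent acts as the default unless a child template overrides it - looks like this with jinja2 (a standalone sketch, not the actual CI templates):
from jinja2 import DictLoader, Environment

templates = {
    "base.md": "Call for testing\n{% block testing %}Default testing instructions{% endblock %}",
    "child.md": "{% extends 'base.md' %}{% block testing %}Snap-specific instructions{% endblock %}",
}
env = Environment(loader=DictLoader(templates))
print(env.get_template("base.md").render())   # falls back to the default block content
print(env.get_template("child.md").render())  # child overrides only the testing block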
|
2025-04-01T04:35:32.173808
| 2022-11-20T14:18:29
|
1456920410
|
{
"authors": [
"hjazz7156"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10887",
"repo": "snapshot-labs/snapshot-spaces",
"url": "https://github.com/snapshot-labs/snapshot-spaces/pull/2328"
}
|
gharchive/pull-request
|
Update domains.json
Hi! What is your PR about?
[ ] Add or edit a skin
[X] Add a custom domain for your space
[ ] Add an alias to migrate your space
"vote.innercircle.finance": "innercirclefinance.eth",
|
2025-04-01T04:35:32.175877
| 2022-03-01T17:04:00
|
1155586475
|
{
"authors": [
"Orland0x",
"pscott"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10888",
"repo": "snapshot-labs/sx-core",
"url": "https://github.com/snapshot-labs/sx-core/pull/47"
}
|
gharchive/pull-request
|
Vanilla Space Contract
As per what was described in milestone1, here is a vanilla space contract (missing correct types and tests).
Tests are currently not working (because state is not updating... don't know why) :)
Decided to use a lib/ folder with a single file per new struct/type. https://github.com/snapshot-labs/sx-core/pull/47/commits/8371c1e6c472ed4b00a93235f59bccce0eb4471d
looks good
|
2025-04-01T04:35:32.219211
| 2016-08-22T11:52:40
|
172433861
|
{
"authors": [
"binjo",
"lazyhack",
"snare",
"trietptm"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10889",
"repo": "snare/voltron",
"url": "https://github.com/snare/voltron/issues/158"
}
|
gharchive/issue
|
AttributeError: 'NoneType' object has no attribute 'adaptor_class'
I built the latest Voltron from GitHub and installed it on Windows following the instructions at https://github.com/snare/voltron/wiki/Installation#windbg and received the following error in my WinDbg command log:
0:000> .load pykd
0:000> !py --global C:\Python27\Lib\site-packages\voltron-0.1.5-py2.7.egg\voltron\entry.py
C:\Python27\lib\site-packages\blessed\terminal.py:32: UserWarning: One or more of the modules: 'termios', 'fcntl', and 'tty' are not found on your platform 'win32'. The following methods of Terminal are dummy/no-op unless a deriving class overrides them: setraw, cbreak, kbhit, height, width
warnings.warn(_MSG_NOSUPPORT)
An error occurred while loading Voltron:
Traceback (most recent call last):
File "C:\Python27\Lib\site-packages\voltron-0.1.5-py2.7.egg\voltron\entry.py", line 98, in
voltron.debugger = plugin.adaptor_class(*args)
AttributeError: 'NoneType' object has no attribute 'adaptor_class'
Sounds like it hasn't installed the plugins properly. Can you try installing the Voltron package in develop mode?
$ cd voltron
$ pip uninstall .
$ pip install -e .
This will remove the Voltron egg from the Python packages directory, and create an egg-link to the source directory. Not a great solution, but might work for now. I'll investigate and see if I can reproduce the issue.
The same adaptor_class problem still occurs:
0:000> !py --global D:\GitHub\voltron\voltron\entry.py
An error occurred while loading Voltron:
Traceback (most recent call last):
File "D:\GitHub\voltron\voltron\entry.py", line 98, in
voltron.debugger = plugin.adaptor_class(*args)
AttributeError: 'NoneType' object has no attribute 'adaptor_class'
Please ensure Voltron is installed correctly per the documentation: https://github.com/snare/voltron/wiki/Installation
Now, I found a new script called voltron-script.py, is that the one I should execute?
0:000> .load pykd
0:000> !py --global c:\python27\scripts\voltron-script.py
C:\Python27\lib\site-packages\blessed\terminal.py:32: UserWarning: One or more of the modules: 'termios', 'fcntl', and 'tty' are not found on your platform 'win32'. The following methods of Terminal are dummy/no-op unless a deriving class overrides them: setraw, cbreak, kbhit, height, width
warnings.warn(_MSG_NOSUPPORT)
usage: voltron-script.py [-h] [--debug] [-o O] {view,v} ...
voltron-script.py: error: too few arguments
0:000> !py --global c:\python27\scripts\voltron-script.py -h
usage: voltron-script.py [-h] [--debug] [-o O] {view,v} ...
optional arguments:
-h, --help show this help message and exit
--debug, -d print debug logging
-o O override config variable
subcommands:
valid subcommands
{view,v}
view (v) display a view
Same problem with running the voltron on Windbg
No, you should be loading entry.py. I am not able to reproduce this on Windows 7. Will try on a fresh system.
Can you tell me which version of Windows, WinDbg and PyKD you're using?
My tested version is:
OS: Windows 7 Ultimate K (Build 7601, SP1)
WinDbg 6.12.0002.633 x86
PyKD <IP_ADDRESS> (downloaded the latest version yesterday)
Python version 2.7.3 (32-bit)
and.. I did the following for the installation:
I didn't execute install.sh because my OS is Windows
Install the Pykd-<IP_ADDRESS>-py2-none-win_win32.whl
Install the Curses-2.2-cp27-none-win32.whl
python setup.py install
then, on WinDbg: !py --global voltron\entry.py
C:\Python27\lib\site-packages\blessed-1.14.1-py2.7.egg\blessed\terminal.py:32: UserWarning: One or more of the modules: 'termios', 'fcntl', and 'tty' are not found on your platform 'win32'. The following methods of Terminal are dummy/no-op unless a deriving class overrides them: setraw, cbreak, kbhit, height, width
An error occurred while loading Voltron:
Traceback (most recent call last):
File "c:\tmp\voltron-master\voltron\entry.py", line 57, in
import voltron
ImportError: No module named voltron
So, I copied the Voltron folder to \Lib\site-packages\voltron
and the result is below:
0:013> !py --global c:\tmp\voltron-master\voltron\entry.py
An error occurred while loading Voltron:
Traceback (most recent call last):
File "c:\tmp\voltron-master\voltron\entry.py", line 98, in
voltron.debugger = plugin.adaptor_class(*args)
AttributeError: 'NoneType' object has no attribute 'adaptor_class'
Please ensure Voltron is installed correctly per the documentation: https://github.com/snare/voltron/wiki/Installation
What did I do wrong?
Thanks
I've tried to reproduce it and it seems that the install is failing on some of the dependencies. It would have thrown an error when you ran python setup.py install about the requests module not being available. If you then run the install again it will succeed on that dependency and fail on another one. If you run it 3 or 4 times it should get all the way through and successfully install. This has worked fine for me, and I can now load Voltron on a fresh machine. I'll investigate why the install is failing and fix it.
The AttributeError you're seeing indicates that the plugins aren't loaded properly, which says to me that your manual install method of copying the Voltron folder into site-packages hasn't worked as intended.
Yes! I did execute "python setup.py install" 3 times due to failures installing some modules.
Then should I not copy the Voltron folder into the site-packages folder manually?
If I don't copy the Voltron folder into the site-packages folder, the ImportError for voltron occurs.
You mean I did the installation wrong?
You should not need to copy it manually, that's what the install does. What happens if you do this:
> python -c "import voltron;print voltron"
In the "manual copied" status
=> <module 'voltron' from c:\python27\lib\site-packages\voltron__init__pyc'>
In the NOT "manual copied" status after python setup.py install
=> ImportError: No module named voltron
So, when you ran python setup.py install 3 times it succeeded on the 3rd time? Can you run it again and verify it runs successfully? I cannot reproduce the install failing to install the module given running it enough times for the install to appear to have succeeded.
I didn't "python setup.py install" in the same folder with voltron path..
On WinDBG
:013> .load pykd
0:013> !py --global c:\tmp\voltron-master\voltron\entry.py
C:\Python27\lib\site-packages\blessed-1.14.1-py2.7.egg\blessed\terminal.py:32: UserWarning: One or more of the modules: 'termios', 'fcntl', and 'tty' are not found on your platform 'win32'. The following methods of Terminal are dummy/no-op unless a deriving class overrides them: setraw, cbreak, kbhit, height, width
An error occurred while loading Voltron:
Traceback (most recent call last):
File "c:\tmp\voltron-master\voltron\entry.py", line 98, in
voltron.debugger = plugin.adaptor_class(*args)
AttributeError: 'NoneType' object has no attribute 'adaptor_class'
Please ensure Voltron is installed correctly per the documentation: https://github.com/snare/voltron/wiki/Installation
0:013> !py --global entry.py
An error occurred while loading Voltron:
Traceback (most recent call last):
File "c:\tmp\voltron-master\voltron\entry.py", line 98, in
voltron.debugger = plugin.adaptor_class(*args)
AttributeError: 'NoneType' object has no attribute 'adaptor_class'
0:013> !py
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
import voltron
print voltron
<module 'voltron' from 'C:\Python27\lib\site-packages\voltron-0.1.5-py2.7.egg\voltron\__init__.pyc'>
Have you removed all trace of your manually installed version of Voltron first?
I cannot reproduce the AttributeError and strongly suspect it's still related to your broken manual "install"
I will try again on clean system. Thanks.
The problem still exists.
I tried again on a clean system "WITHOUT MANUAL COPY":
executed "python setup.py install" 3 times
first, the requests module failed
second, flask failed
third, success
Success - it was automatically copied (voltron -> python)
python.exe -c "import voltron;print voltron"
=> <module 'voltron' from 'voltron\__init__.py'>
Same problem on windbg here.
Ok, in the logs you posted above you are importing the Voltron entry point from the source directory. Can you please try importing the one that is now installed in site-packages?
I'm just setting up a new environment to try to reproduce this
I still can't reproduce the error.
Another problem.
I installed voltron on another system.
[Windows 7 x64, WinDbg 6.12.0002.633 x86]
It seems to work normally,
but a Runtime Library error occurred.
The attached file is the screenshot.
I have no idea. Does other stuff work in PyKD now?
Yes, it seems to work fine.
I will try next time.
Thank you~! (I have to get back to work)
My error output
0:000> !py -g c:\Python27\Lib\site-packages\voltron\entry.py
C:\Python27\lib\site-packages\blessed\terminal.py:32: UserWarning: One or more of the modules: 'termios', 'fcntl', and 'tty' are not found on your platform 'win32'. The following methods of Terminal are dummy/no-op unless a deriving class overrides them: setraw, cbreak, kbhit, height, width
warnings.warn(_MSG_NOSUPPORT)
An error occurred while loading Voltron:
Traceback (most recent call last):
File "c:\Python27\Lib\site-packages\voltron\entry.py", line 98, in <module>
voltron.debugger = plugin.adaptor_class(*args)
AttributeError: 'NoneType' object has no attribute 'adaptor_class'
Please ensure Voltron is installed correctly per the documentation: https://github.com/snare/voltron/wiki/Installation
stty
stty is missing on Windows, so view.window_size will throw an error.
def window_size(self):
    height, width = subprocess.check_output(['stty', 'size']).split()
    height = int(height) - int(self.config.pad.pad_bottom)
    width = int(width) - int(self.config.pad.pad_right)
    return (height, width)
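For what it's worth, a possible stty-free sketch of the same method, assuming shutil.get_terminal_size is available (Python 3.3+, or the backports.shutil_get_terminal_size package on 2.7) - this is not Voltron's actual code:
import shutil

def window_size(self):
    # get_terminal_size works on Windows too; fall back to 80x24 if it cannot be determined
    width, height = shutil.get_terminal_size((80, 24))
    height = int(height) - int(self.config.pad.pad_bottom)
    width = int(width) - int(self.config.pad.pad_right)
    return (height, width)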
dbg_windbg.py
The dbg_windbg.py plugin currently has logic to check whether it is running in WinDbg or not by importing vtrace.
try:
    in_windbg = False
    import pykd
    try:
        import vtrace
    except:
        in_windbg = True
except ImportError:
    pass
However, the vtrace module may exist on the system (e.g. vivisect installs vtrace), in which case dbg_windbg.py will never be loaded.
If I change it to the following, the windbg plugin loads successfully, but voltron view still fails due to the stty issue. I haven't found a replacement for stty yet - what's your solution here? @snare
try:
    in_windbg = False
    import pykd
    if 'Microsoft (R) Windows Debugger Version' in pykd.dbgCommand('version'):
        in_windbg = True
except Exception:
    pass
Re: stty, I've only ever used Voltron with Git Bash or ConEmu, and the GNU userland installed by Git which I suppose includes stty. I'll investigate a workaround, but in the mean time I'd suggest installing Git for Windows and using Git Bash or ConEmu.
Re: importing vtrace, vivisect is never installed as far as I know. It is used in-situ within the downloaded directory. There's no setup.py, etc, so as far as I can see, vtrace will never be in your Python path. Under what circumstances are you encountering this issue? IIRC the reason it works the way it does is because pykd will be importable from within the VDB adapter if the pykd package is installed in Python site-packages, but vtrace should never be importable from a script running in pykd. Your solution looks like it would incorrectly identify that it is running in WinDbg when it is in fact running in VDB with the pykd package installed on the system.
Re: your error output from loading Voltron, can you please try installing the Voltron package again per the previous discussion with @lazyhack and post your install log here? It seems that the install fails on dependencies a couple of times and needs to be run 3 times in order to install successfully. I haven't looked at this issue yet.
It turns out I installed vivisect from https://github.com/williballenthin/vivisect/, which includes a vtrace. :(
FTR, using a full installation of Git for Windows, which includes stty, under bash, pip install voltron succeeds without error. voltron is usable, despite the error message from blessed\terminal. However, the blessed dependency imports fcntl, which is not available on Windows, and I couldn't find a resolution for this.
try:
    import termios
    import fcntl
    import tty
    HAS_TTY = True
except ImportError:
    _TTY_METHODS = ('setraw', 'cbreak', 'kbhit', 'height', 'width')
    _MSG_NOSUPPORT = (
        "One or more of the modules: 'termios', 'fcntl', and 'tty' "
        "are not found on your platform '{0}'. The following methods "
        "of Terminal are dummy/no-op unless a deriving class overrides "
        "them: {1}".format(sys.platform.lower(), ', '.join(_TTY_METHODS)))
    warnings.warn(_MSG_NOSUPPORT)
    HAS_TTY = False
All in all, I'm giving up on using voltron under Windows; it's troublesome and I don't want to install a bunch of git tools in my analysis VM ;)
Ahhh ok, I'll try to find a better method of detecting WinDbg and vtrace including support for Willi's fork of vivisect.
Windows/WinDbg support in general is quite new and needs some work. I've not yet investigated the blessed warning, but it doesn't actually affect the operation of Voltron so I've ignored it thus far. I may put some effort into supporting Windows without the Linux userland tools at some point, but my care factor for it is pretty low. However, I'll certainly be making sure it works properly under the new Windows 10 Linux userland without additional tools.
|
2025-04-01T04:35:32.223548
| 2017-12-19T21:06:26
|
283369448
|
{
"authors": [
"adammwood",
"curry684",
"dzolnierz",
"petitphp"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10890",
"repo": "snc/SncRedisBundle",
"url": "https://github.com/snc/SncRedisBundle/issues/381"
}
|
gharchive/issue
|
Single \Predis\Connection\Parameters instance should be wrapped in array when using sentinel replication.
The config below results in ERR unknown command 'SET' when trying to set session data. The parsed DSN is passed as an instance of \Predis\Connection\Parameters to \Predis\Client, which results in sending SET/GET to Redis Sentinel instead of the real Redis node.
snc_redis:
clients:
session:
type: predis
alias: session
dsn:
- redis://redis-sentinel:26379
options:
replication: sentinel
service: master
This issue is related to: https://github.com/nrk/predis/issues/476
Hello @dzolnierz
I just ran into the same issue. Did you find a solution/workaround ?
Our temporary workaround to this issue is to just duplicate the declaration, causing it to be correctly converted to an array.
snc_redis:
clients:
default:
type: predis
alias: default
dsn:
# These are duplicated as a workaround to: https://github.com/snc/SncRedisBundle/issues/381
- redis://redis-sentinel:26379
- redis://redis-sentinel:26379
@petitphp
Yeah, we do exactly the same thing as @adammwood. Duplicate the DSN.
Would any of you have a go at issuing a PR to fix this?
|
2025-04-01T04:35:32.239740
| 2018-05-18T07:39:58
|
324302909
|
{
"authors": [
"B-Galati",
"Kalliser",
"OskarStark",
"curry684",
"damijank",
"javiermadueno",
"monsieurchico"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10891",
"repo": "snc/SncRedisBundle",
"url": "https://github.com/snc/SncRedisBundle/issues/425"
}
|
gharchive/issue
|
[v2.1.3] Unable to connect to Redis Resource in Azure
Hello,
Yesterday I updated the bundle in my Azure web app and it seems there is a problem with the latest version's changes. It's unable to connect to the Redis resource in Azure, showing this message:
predis SELECT failed: NOAUTH Authentication required.
The REDIS_URL is well defined, and when installing version 2.1.2 it works properly.
Kind regards.
I got the same problem here!
Must stem from https://github.com/snc/SncRedisBundle/pull/413.
@B-Galati any clue?
Btw it was actually broken before as they were evaluated at the wrong time in the wrong way. So you may have inadvertently been using this feature 'wrong' but accidentally made it work that way. If either of you can provide a bit more insight on how you configured things we can nail this down quickly hopefully.
Hello @B-Galati, this is the exception trace (I've replaced the Redis URL with URL-HIDDEN-FOR-SECURITY):
Predis\Connection\ConnectionException:
`SELECT` failed: NOAUTH Authentication required. [redis://URL-HIDDEN-FOR-SECURITY:6379]
at vendor\predis\predis\src\Connection\AbstractConnection.php:155
at Predis\Connection\AbstractConnection->onConnectionError('`SELECT` failed: NOAUTH Authentication required.', 0)
(vendor\predis\predis\src\Connection\StreamConnection.php:263)
at Predis\Connection\StreamConnection->connect()
(vendor\predis\predis\src\Connection\AbstractConnection.php:180)
at Predis\Connection\AbstractConnection->getResource()
(vendor\predis\predis\src\Connection\StreamConnection.php:288)
at Predis\Connection\StreamConnection->write('*6
$3
SET
$46
schengen_23c125a2e48cb927378d38b79f9deaef.lock
$13
5b03f3552f6e6
$2
PX
$6
300001
$2
NX
')
(vendor\predis\predis\src\Connection\StreamConnection.php:394)
at Predis\Connection\StreamConnection->writeRequest(object(StringSet))
(vendor\predis\predis\src\Connection\AbstractConnection.php:110)
at Predis\Connection\AbstractConnection->executeCommand(object(StringSet))
(vendor\snc\redis-bundle\Client\Predis\Connection\ConnectionWrapper.php:176)
at Snc\RedisBundle\Client\Predis\Connection\ConnectionWrapper->executeCommand(object(StringSet))
(vendor\predis\predis\src\Client.php:331)
at Predis\Client->executeCommand(object(StringSet))
(vendor\predis\predis\src\Client.php:314)
at Predis\Client->__call('set', array('schengen_23c125a2e48cb927378d38b79f9deaef.lock', '5b03f3552f6e6', 'PX', 300001, 'NX'))
(vendor\snc\redis-bundle\Session\Storage\Handler\RedisSessionHandler.php:138)
at Snc\RedisBundle\Session\Storage\Handler\RedisSessionHandler->Snc\RedisBundle\Session\Storage\Handler\{closure}(object(Client), 'schengen_23c125a2e48cb927378d38b79f9deaef.lock', '5b03f3552f6e6', 300001)
(vendor\snc\redis-bundle\Session\Storage\Handler\RedisSessionHandler.php:146)
at Snc\RedisBundle\Session\Storage\Handler\RedisSessionHandler->lockSession('23c125a2e48cb927378d38b79f9deaef')
(vendor\snc\redis-bundle\Session\Storage\Handler\RedisSessionHandler.php:204)
at Snc\RedisBundle\Session\Storage\Handler\RedisSessionHandler->read('23c125a2e48cb927378d38b79f9deaef')
at session_start()
(vendor\symfony\http-foundation\Session\Storage\NativeSessionStorage.php:142)
at Symfony\Component\HttpFoundation\Session\Storage\NativeSessionStorage->start()
(vendor\symfony\http-foundation\Session\Session.php:57)
at Symfony\Component\HttpFoundation\Session\Session->start()
(vendor\symfony\security-csrf\TokenStorage\SessionTokenStorage.php:78)
at Symfony\Component\Security\Csrf\TokenStorage\SessionTokenStorage->hasToken('https-get_quote')
(vendor\symfony\security-csrf\CsrfTokenManager.php:72)
at Symfony\Component\Security\Csrf\CsrfTokenManager->getToken('get_quote')
(vendor\symfony\form\Extension\Csrf\Type\FormTypeCsrfExtension.php:83)
at Symfony\Component\Form\Extension\Csrf\Type\FormTypeCsrfExtension->finishView(object(FormView), object(Form), array('block_name' => null, 'disabled' => false, 'label' => null, 'label_format' => null, 'translation_domain' => null, 'auto_initialize' => true, 'trim' => true, 'required' => true, 'property_path' => null, 'mapped' => true, 'by_reference' => true, 'inherit_data' => false, 'compound' => true, 'method' => 'POST', 'action' => '', 'post_max_size_message' => 'The uploaded file was too large. Please try to upload a smaller file.', 'error_mapping' => array(), 'invalid_message' => 'This value is not valid.', 'invalid_message_parameters' => array(), 'allow_extra_fields' => false, 'extra_fields_message' => 'This form should not contain extra fields.', 'csrf_protection' => true, 'csrf_field_name' => '_token', 'csrf_message' => 'The CSRF token is invalid. Please try to resubmit the form.', 'csrf_token_manager' => object(CsrfTokenManager), 'csrf_token_id' => null, 'attr' => array(), 'data_class' => 'Schengen\\Application\\Product\\GetQuoteRequest', 'empty_data' => object(Closure), 'error_bubbling' => true, 'label_attr' => array(), 'upload_max_size_message' => object(Closure), 'validation_groups' => null, 'constraints' => array(), 'data' => object(GetQuoteRequest)))
(vendor\symfony\form\ResolvedFormType.php:174)
at Symfony\Component\Form\ResolvedFormType->finishView(object(FormView), object(Form), array('block_name' => null, 'disabled' => false, 'label' => null, 'label_format' => null, 'translation_domain' => null, 'auto_initialize' => true, 'trim' => true, 'required' => true, 'property_path' => null, 'mapped' => true, 'by_reference' => true, 'inherit_data' => false, 'compound' => true, 'method' => 'POST', 'action' => '', 'post_max_size_message' => 'The uploaded file was too large. Please try to upload a smaller file.', 'error_mapping' => array(), 'invalid_message' => 'This value is not valid.', 'invalid_message_parameters' => array(), 'allow_extra_fields' => false, 'extra_fields_message' => 'This form should not contain extra fields.', 'csrf_protection' => true, 'csrf_field_name' => '_token', 'csrf_message' => 'The CSRF token is invalid. Please try to resubmit the form.', 'csrf_token_manager' => object(CsrfTokenManager), 'csrf_token_id' => null, 'attr' => array(), 'data_class' => 'Schengen\\Application\\Product\\GetQuoteRequest', 'empty_data' => object(Closure), 'error_bubbling' => true, 'label_attr' => array(), 'upload_max_size_message' => object(Closure), 'validation_groups' => null, 'constraints' => array(), 'data' => object(GetQuoteRequest)))
(vendor\symfony\form\Extension\DataCollector\Proxy\ResolvedTypeDataCollectorProxy.php:111)
at Symfony\Component\Form\Extension\DataCollector\Proxy\ResolvedTypeDataCollectorProxy->finishView(object(FormView), object(Form), array('block_name' => null, 'disabled' => false, 'label' => null, 'label_format' => null, 'translation_domain' => null, 'auto_initialize' => true, 'trim' => true, 'required' => true, 'property_path' => null, 'mapped' => true, 'by_reference' => true, 'inherit_data' => false, 'compound' => true, 'method' => 'POST', 'action' => '', 'post_max_size_message' => 'The uploaded file was too large. Please try to upload a smaller file.', 'error_mapping' => array(), 'invalid_message' => 'This value is not valid.', 'invalid_message_parameters' => array(), 'allow_extra_fields' => false, 'extra_fields_message' => 'This form should not contain extra fields.', 'csrf_protection' => true, 'csrf_field_name' => '_token', 'csrf_message' => 'The CSRF token is invalid. Please try to resubmit the form.', 'csrf_token_manager' => object(CsrfTokenManager), 'csrf_token_id' => null, 'attr' => array(), 'data_class' => 'Schengen\\Application\\Product\\GetQuoteRequest', 'empty_data' => object(Closure), 'error_bubbling' => true, 'label_attr' => array(), 'upload_max_size_message' => object(Closure), 'validation_groups' => null, 'constraints' => array(), 'data' => object(GetQuoteRequest)))
(vendor\symfony\form\ResolvedFormType.php:167)
at Symfony\Component\Form\ResolvedFormType->finishView(object(FormView), object(Form), array('block_name' => null, 'disabled' => false, 'label' => null, 'label_format' => null, 'translation_domain' => null, 'auto_initialize' => true, 'trim' => true, 'required' => true, 'property_path' => null, 'mapped' => true, 'by_reference' => true, 'inherit_data' => false, 'compound' => true, 'method' => 'POST', 'action' => '', 'post_max_size_message' => 'The uploaded file was too large. Please try to upload a smaller file.', 'error_mapping' => array(), 'invalid_message' => 'This value is not valid.', 'invalid_message_parameters' => array(), 'allow_extra_fields' => false, 'extra_fields_message' => 'This form should not contain extra fields.', 'csrf_protection' => true, 'csrf_field_name' => '_token', 'csrf_message' => 'The CSRF token is invalid. Please try to resubmit the form.', 'csrf_token_manager' => object(CsrfTokenManager), 'csrf_token_id' => null, 'attr' => array(), 'data_class' => 'Schengen\\Application\\Product\\GetQuoteRequest', 'empty_data' => object(Closure), 'error_bubbling' => true, 'label_attr' => array(), 'upload_max_size_message' => object(Closure), 'validation_groups' => null, 'constraints' => array(), 'data' => object(GetQuoteRequest)))
(vendor\symfony\form\Extension\DataCollector\Proxy\ResolvedTypeDataCollectorProxy.php:111)
at Symfony\Component\Form\Extension\DataCollector\Proxy\ResolvedTypeDataCollectorProxy->finishView(object(FormView), object(Form), array('block_name' => null, 'disabled' => false, 'label' => null, 'label_format' => null, 'translation_domain' => null, 'auto_initialize' => true, 'trim' => true, 'required' => true, 'property_path' => null, 'mapped' => true, 'by_reference' => true, 'inherit_data' => false, 'compound' => true, 'method' => 'POST', 'action' => '', 'post_max_size_message' => 'The uploaded file was too large. Please try to upload a smaller file.', 'error_mapping' => array(), 'invalid_message' => 'This value is not valid.', 'invalid_message_parameters' => array(), 'allow_extra_fields' => false, 'extra_fields_message' => 'This form should not contain extra fields.', 'csrf_protection' => true, 'csrf_field_name' => '_token', 'csrf_message' => 'The CSRF token is invalid. Please try to resubmit the form.', 'csrf_token_manager' => object(CsrfTokenManager), 'csrf_token_id' => null, 'attr' => array(), 'data_class' => 'Schengen\\Application\\Product\\GetQuoteRequest', 'empty_data' => object(Closure), 'error_bubbling' => true, 'label_attr' => array(), 'upload_max_size_message' => object(Closure), 'validation_groups' => null, 'constraints' => array(), 'data' => object(GetQuoteRequest)))
(vendor\symfony\form\Form.php:1021)
at Symfony\Component\Form\Form->createView()
(src\Controller\PurchaseFunnelController.php:60)
at App\Controller\PurchaseFunnelController->step1(object(Request))
(vendor\symfony\http-kernel\HttpKernel.php:149)
at Symfony\Component\HttpKernel\HttpKernel->handleRaw(object(Request), 1)
(vendor\symfony\http-kernel\HttpKernel.php:66)
at Symfony\Component\HttpKernel\HttpKernel->handle(object(Request), 1, true)
(vendor\symfony\http-kernel\Kernel.php:190)
at Symfony\Component\HttpKernel\Kernel->handle(object(Request))
(public\index.php:37)
@javiermadueno Thank you, could you also give us your config? And how URL-HIDDEN-FOR-SECURITY is formatted, for example : redis://secret@hostname/1.
Without that it's gonna be quite hard to help. thanks again.
@monsieurchico Could you also provide more information please ?
Hello, I have the same problem in v2.1.3, my configuration is the following :
snc_redis:
clients:
default:
type: predis
alias: default
dsn: "redis://%env(REDIS_AUTH)%@%env(REDIS_HOST)%:%env(REDIS_PORT)%"
logging: "%kernel.debug%"
options:
connection_persistent: true
session:
type: predis
alias: session
dsn: "redis://%env(REDIS_AUTH)%@%env(REDIS_HOST)%:%env(REDIS_PORT)%/1"
logging: false
monolog:
type: predis
alias: monolog
dsn: "redis://%env(REDIS_AUTH)%@%env(REDIS_HOST)%:%env(REDIS_PORT)%/4"
logging: false
stats:
type: predis
alias: stats
dsn: "redis://%env(REDIS_AUTH)%@%env(REDIS_HOST)%:%env(REDIS_PORT)%/5"
logging: false
monolog:
client: monolog
key: monolog
session:
client: session
ttl: 604800
I got the same error as javierdansmuero and it disappeared when I reverted the package back to v2.1.2.
I use predis in v1.1.1
I reproduced the issue and it looks like the previous behavior was a bug.
Anyway it's a BC break so I guess we should fix it.
The quick fix I identified is the following: given @Kalliser's example, REDIS_AUTH should be prefixed with : if it only contains a secret. For example REDIS_AUTH=:secret instead of REDIS_AUTH=secret.
My bad.
It looks like a bug in predis/predis, which uses parse_url to create parameters -> https://github.com/nrk/predis/blob/v1.1/src/Connection/Parameters.php#L92. If the : is missing then the password goes into $parsed->user and so is not detected here: https://github.com/nrk/predis/blob/111d100ee389d624036b46b35ed0c9ac59c71313/src/Connection/Parameters.php#L112.
The Predis project looks quite dead these days, so I guess we should do a fix on our side, no?
This should be fixed in 2.1.4, can you confirm?
If anyone should stumble upon this issue...
I had the same problem today and I was going nuts. Then, I used the secondary key just to be sure I tried everything, and it worked!
It was a freakishly dumb error: the password (key) is run through urldecode() by SncRedis and my primary password has a + sign in it, which is converted to space and Azure Redis said NOAUTH Authentication required. The secondary password has no + sign in it.
I sure hope this will help someone save time.
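The pitfall can be illustrated with Python's urllib.parse, whose unquote_plus behaves like PHP's urldecode() here (the key value below is made up):
import urllib.parse

password = "abc+def"                        # hypothetical Azure access key containing '+'
print(urllib.parse.unquote_plus(password))  # 'abc def' -- '+' becomes a space, so auth fails
print(urllib.parse.unquote(password))       # 'abc+def' -- percent-decoding only, '+' preserved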
What about adding this to the docs ?
Has been there since 2016 actually.
https://github.com/snc/SncRedisBundle/commit/9ffce08d73b294e711287831cd83ad46cd2e7465
Well, yeah... :rofl: Thanks @curry684
Obviously I did not read that part, and honestly I don't know if it crosses anybody's mind to go to the documentation in the case the server says the authentication is invalid. To quote Monk: "I could be wrong now, but I don't think so..." :wink:
Maybe it wouldn't be a bad idea to add this info in the error message? Just a thought...
Yeah I know, it's one of those classic "oh DUH" moments that I'm not sure we can do anything to alleviate. Best thing I could think of indeed was doing the Symfony DX thing and extend the access denied exception with some remarks about this. Theoretically we could even only do it when auth is denied and urldecode($pass) !== $pass.
Theoretically we could even only do it when auth is denied and urldecode($pass) !== $pass.
nice idea!
|
2025-04-01T04:35:32.251252
| 2022-10-16T14:59:13
|
1410514885
|
{
"authors": [
"TpmKranz",
"c-cube"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10892",
"repo": "sneeuwballen/zipperposition",
"url": "https://github.com/sneeuwballen/zipperposition/issues/93"
}
|
gharchive/issue
|
Unquoted predicate names in TPTP output
Predicate names that should be quoted because they contain non-alphanumeric symbols are output verbatim, thus breaking parsing.
Example TPTP output:
ga_left_unit_max.txt
pNat*Nat___<=__ is the big offender here.
Input TIP problem, if of interest:
mytotalnumbers_Nat__E1_ga_left_unit_max1315634022635723058.txt
Command line:
zipperposition --input=tip --output=tptp --mode=fo-complete-basic --induction mytotalnumbers_Nat__E1_ga_left_unit_max1315634022635723058.txt
It's been a while, do you know what the escaping rules for TPTP are? Anything not in [A-Za-z0-9]+ (with underscores perhaps)?
If I interpret the syntax correctly, predicate names should just be atomic_words, which must be escaped if not matching [a-z][A-Za-z0-9_]*. I'm not sure that #94 is the right approach for that since it only seems to check for very specific cases outside that regex. Is regex matching not an option?
Sorry that I'm not familiar enough with OCaml to whip up a PR myself.
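For reference, a minimal sketch of that quoting rule in Python (the actual fix would of course need to be in OCaml, and the escaping of quotes and backslashes is my reading of the TPTP grammar):
import re

ATOMIC_WORD = re.compile(r"[a-z][A-Za-z0-9_]*$")

def tptp_escape(name: str) -> str:
    """Quote a TPTP atomic_word unless it already matches the lower_word grammar."""
    if ATOMIC_WORD.match(name):
        return name
    # single-quote the name, escaping backslashes and single quotes
    return "'" + name.replace("\\", "\\\\").replace("'", "\\'") + "'"

print(tptp_escape("max"))              # max
print(tptp_escape("pNat*Nat___<=__"))  # 'pNat*Nat___<=__'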
|
2025-04-01T04:35:32.348855
| 2019-04-05T09:12:54
|
429656922
|
{
"authors": [
"ClemDoum"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10893",
"repo": "snipsco/snips-nlu",
"url": "https://github.com/snipsco/snips-nlu/issues/780"
}
|
gharchive/issue
|
[WIP]: NLU shouldn't be stochastic
For now the seed mechanism in the NLU is broken in many ways:
the seed should not be in the config; it should rather be in the shared_resources (or at least somewhere in the __init__ of the ProcessingUnits)
right now, because of the following scikit-learn bug, the intent classification can't be deterministic: https://github.com/scikit-learn/scikit-learn/pull/13422
Roadmap
wait for the 0.21 release of scikit-learn which will include the SGDClassifier fix
change the current NLU api so that the random seed is not in the configuration anymore
This is work in progress; I have a branch with almost everything ready ;)
Exact duplicate of #779
|
2025-04-01T04:35:32.350361
| 2023-02-22T19:14:21
|
1595680449
|
{
"authors": [
"snoopyjc"
],
"license": "Artistic-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10894",
"repo": "snoopyjc/pythonizer",
"url": "https://github.com/snoopyjc/pythonizer/issues/299"
}
|
gharchive/issue
|
foreach as statement modifier with a , expression generates bad code
foreach as a statement modifier with a comma (,) expression generates bad code. For example (from Exporter.pm):
s/^&//, $export_cache->{$_} = 1
foreach (@$exports, @{"$pkg\::EXPORT_OK"});
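Roughly, the Perl above strips one leading & from each name and marks it in the export cache; a hand-written Python equivalent (not pythonizer's actual output, and ignoring Perl's $_ aliasing back into the source arrays) would be something like:
exports = ["&foo", "bar"]                 # illustrative stand-in for @$exports
exports_ok = ["&baz"]                     # ... and @{"$pkg\::EXPORT_OK"}
export_cache = {}
for name in [*exports, *exports_ok]:
    if name.startswith("&"):              # s/^&//
        name = name[1:]
    export_cache[name] = 1
print(export_cache)                       # {'foo': 1, 'bar': 1, 'baz': 1}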
Fixed in v1.027
|
2025-04-01T04:35:32.404166
| 2024-04-05T16:08:32
|
2228383562
|
{
"authors": [
"Yoshi-Egawa",
"sfc-gh-stan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10895",
"repo": "snowflakedb/snowpark-python",
"url": "https://github.com/snowflakedb/snowpark-python/pull/1372"
}
|
gharchive/pull-request
|
SNOW-1304211: Pylance throws errors when using copy_options with copy_into_location method
Please answer these questions before submitting your pull requests. Thanks!
What GitHub issue is this PR addressing? Make sure that there is an accompanying issue to your PR.
Fixes https://github.com/snowflakedb/snowpark-python/issues/1363
Fill out the following pre-review checklist:
[ ] I am adding a new automated test(s) to verify correctness of my new code
[ ] I am adding new logging messages
[ ] I am adding a new telemetry message
[ ] I am adding new credentials
[ ] I am adding a new dependency
Please describe how your code solves the related issue.
src/snowflake/snowpark/dataframe_writer.py: The copy_into_location method was updated in three instances. The copy_options parameter was changed from Optional[str] to Optional[Dict[str, Any]]. This change allows for a wider range of types to be accepted as copy options, providing more flexibility when calling the method. [1] [2] [3]
I have read the CLA Document and I hereby sign the CLA
@sfc-gh-stan
I suggested 2 additional type hint fixes.
Thank you for your suggestion. I'd like to propose a slightly different fix for the blocks type.
The current suggestion sets block: Literal[True]. However, I believe it should be block: bool to maintain type consistency with the _internal_collect_with_tag_no_telemetry function.
The block argument in the _internal_collect_with_tag_no_telemetry function is used as a bool type. Therefore, setting the block type to bool in the copy_into_location function as well will ensure type consistency.
src/snowflake/snowpark/dataframe.py
def _internal_collect_with_tag_no_telemetry(
    self,
    *,
    statement_params: Optional[Dict[str, str]] = None,
    block: bool = True,  # <-- here
    data_type: _AsyncResultType = _AsyncResultType.ROW,
    log_on_exception: bool = False,
    case_sensitive: bool = True,
) -> Union[List[Row], AsyncJob]:
    # When executing a DataFrame in any method of snowpark (either public or private),
    # we should always call this method instead of collect(), to make sure the
    # query tag is set properly.
    return self._session._conn.execute(
        self._plan,
        block=block,
        data_type=data_type,
        _statement_params=create_or_update_statement_params_with_query_tag(
            statement_params or self._statement_params,
            self._session.query_tag,
            SKIP_LEVELS_THREE,
        ),
        log_on_exception=log_on_exception,
        case_sensitive=case_sensitive,
    )

_internal_collect_with_tag = df_collect_api_telemetry(
    _internal_collect_with_tag_no_telemetry
)
@sfc-gh-stan
Thank you for your suggestion. I have implemented your suggestion with a slight modification.
I have added a default value to the block keyword argument. This is because keyword arguments are optional, and not providing a value would result in an error.
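For clarity, a minimal sketch of the signature shape this thread converges on (names and defaults are illustrative, not the actual snowpark source): copy_options typed as Optional[Dict[str, Any]] and block as a keyword-only bool with a default.
from typing import Any, Dict, Optional

def copy_into_location(
    location: str,
    *,
    copy_options: Optional[Dict[str, Any]] = None,  # was Optional[str]
    block: bool = True,                              # bool (not Literal[True]), with a default
) -> None:
    ...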
Tests kicked off by #1377, will merge after merge gates pass.
|
2025-04-01T04:35:32.416363
| 2022-06-20T15:48:32
|
1277116588
|
{
"authors": [
"ChuliangXiao",
"sfc-gh-jdu",
"sfc-gh-kwagner"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10896",
"repo": "snowflakedb/snowpark-python",
"url": "https://github.com/snowflakedb/snowpark-python/pull/382"
}
|
gharchive/pull-request
|
add orderBy alias of sort
Please answer these questions before submitting your pull requests. Thanks!
What GitHub issue is this PR addressing? Make sure that there is an accompanying issue to your PR.
None
Fill out the following pre-review checklist:
[ ] I am adding a new automated test(s) to verify correctness of my new code
[ ] I am adding new logging messages
[ ] I am adding new credentials
[ ] I am adding a new dependency
Please describe how your code solves the related issue.
Please write a short description of how your code change solves the related issue.
Hey @ChuliangXiao, thanks for your suggestion and we think this would be a useful alias. Could you move this alias here, and add a simple test here? Thanks!
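For reference, the alias itself is a one-liner; a minimal self-contained sketch (not the real DataFrame class) of the idiom plus a trivial check:
class DataFrame:
    def sort(self, *cols):
        return "sorted by " + ", ".join(cols)

    orderBy = sort  # alias: df.orderBy(...) behaves exactly like df.sort(...)

df = DataFrame()
assert df.orderBy("a", "b") == df.sort("a", "b")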
@ChuliangXiao can you sign the CLA as well? You just have to copy the CLA text and paste it as a comment on this PR before we can run your tests/accept your PR
Snowflake Open Source Contribution
Terms and Conditions
By accepting and agreeing to this contributor license agreement, I understand and agree that my Contribution (as defined below) is public and that a record of the Contribution, including my full name and email address among other information, will be maintained indefinitely and may be redistributed consistent with this project, compliance with the open source license(s) involved, and maintenance of authorship attribution.
DEFINITIONS.
"You" (or "Your") shall mean the copyright owner or legal entity authorized by the copyright owner that is making this Agreement with Snowflake. For legal entities, the entity making a Contribution and all other entities that control, are controlled by, or are under common control with that entity are considered to be a single Contributor. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"Contribution" shall mean any original work of authorship, including any modifications or additions to an existing work, that is intentionally submitted by You to Snowflake (the โWorkโ). For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to Snowflake or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Snowflake for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by You as "Not a Contribution."
GRANT OF COPYRIGHT LICENSE. Subject to the terms and conditions of this Agreement, You hereby grant to Snowflake and to recipients and/or users of software distributed and/or made available by Snowflake a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Your Contributions and such derivative works.
GRANT OF PATENT LICENSE. Subject to the terms and conditions of this Agreement, You hereby grant to Snowflake and to recipients and/or users of software distributed and/or made available by Snowflake a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by You that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s) with the Work to which such Contribution(s) were submitted. If any entity institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted to that entity under this Agreement for that Contribution or Work shall terminate as of the date such litigation is filed.
REPRESENTATIONS.
You represent that You are legally entitled to grant the above licenses.
You represent that each of Your Contributions is Your original creation.
You represent that none of Your Contributions includes any third party copyrights, patents, trade secrets, licenses or other restrictions.
NO SUPPORT. You are not expected to provide support for Your Contributions, except to the extent You desire to provide support. You may provide support for free, for a fee, or not at all. Unless required by applicable law or agreed to in writing, You provide Your Contributions on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
NOTICE TO SNOWFLAKE. You agree to notify Snowflake of any facts or circumstances of which you become aware that would make these representations inaccurate in any respect.
GOVERNING LAW & JURISDICTION. This Agreement shall be governed exclusively by, and construed exclusively in accordance with, the laws of the United States and the State of Delaware, without regard to its conflict of laws provisions. The state and federal courts located in Wilmington, Delaware shall have exclusive jurisdiction to adjudicate any dispute arising out of or relating to this Agreement.
ENTIRE AGREEMENT. This Agreement constitutes the entire agreement between the parties as to its subject matter and supersedes all prior and contemporaneous agreements, proposals or representations, written or oral, concerning the subject matter of this Agreement. No modification, amendment or waiver of any provision of this Agreement shall be effective unless in writing, accepted, and agreed to by the party against whom the modification, amendment or waiver is to be asserted. This Agreement does not supersede or amend any existing agreement between the parties for the purchase or use of either party's products or services.
GENERAL. No failure or delay by You or Snowflake in exercising any right under this Agreement shall constitute a waiver of that right. Other than as expressly stated herein, the remedies herein are in addition to, and not exclusive of, any other remedies of a party at law or in equity. Should any provision of this Agreement be held by a court to be unenforceable, such provision shall be modified by the court and interpreted so as to best to accomplish the objectives of the original provision to the fullest extent permitted by law, and the remaining provisions of this Agreement shall remain in full force and effect.
Sorry for the confusion, you need to copy, paste and comment
I have read the CLA Document and I hereby sign the CLA
I have read the CLA Document and I hereby sign the CLA
|
2025-04-01T04:35:32.422269
| 2017-07-07T04:21:39
|
241153693
|
{
"authors": [
"bgamari",
"izgzhen",
"snowleopard"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10897",
"repo": "snowleopard/hadrian",
"url": "https://github.com/snowleopard/hadrian/pull/351"
}
|
gharchive/pull-request
|
Parallelize stage0 build
This eliminates a rather significant duration of serial compilation from the
build.
I'm still not 100% certain we want to do this by default, but I thought it would be good to put it up for discussion.
Will build.sh -j have the same effect?
Will build.sh -j have the same effect?
It won't; this only applies to the initial bootstrap build of Cabal and ghc-cabal, which is currently serial.
From the issue content, I guess your answer would be "no"... but at least we should only invoke this when the user adds -j to build.sh.
Certainly a good point.
@bgamari We could also add -j conditionally, e.g. see
https://github.com/snowleopard/hadrian/blob/master/src/Settings/Builders/Make.hs#L8
That is, if Hadrian is run in single-threaded mode, we don't parallelise this bit.
Thanks for the hint, @snowleopard! How does this look?
@bgamari Wait, if I understand correctly, this is supposed to only apply to building ghc-cabal, right? But then I just found that we already do -j when building it. See here:
https://github.com/snowleopard/hadrian/blob/master/src/Settings/Packages/GhcCabal.hs#L29
Are you sure the change you are doing in this PR actually speeds up your build?
Ahh, indeed we do! I honestly didn't check the Hadrian side of things so consider this closed.
|
2025-04-01T04:35:32.437312
| 2021-06-30T15:19:33
|
933823385
|
{
"authors": [
"bard",
"drwpow"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10898",
"repo": "snowpackjs/snowpack",
"url": "https://github.com/snowpackjs/snowpack/issues/3522"
}
|
gharchive/issue
|
RFC: Support %PUBLIC_URL% in src/serviceWorkerRegistration.{js,ts}
Motivation
Snowpack supports %PUBLIC_URL% in HTML files, and mentions it's inspired by Create React App. There's another place where Create React App uses %PUBLIC_URL%, namely src/serviceWorkerRegistration.{js,ts}:
export function register(config?: Config) {
function doRegister() {
const swUrl = `${process.env.PUBLIC_URL}/service-worker.js`
Proposed solution
Possible solutions
Replace %PUBLIC_URL% in src/serviceWorkerRegistration.{js,ts} as well.
Alternatives considered
Remove %PUBLIC_URL% from the code base and hardcode the value in both public/index.html and src/serviceWorkerRegistration.{js,ts}.
Risks, downsides, and/or tradeoffs
None I can see.
Detailed design
No response
Open questions
No response
Help make it happen!
[ ] I am willing to submit a PR to implement this change.
[ ] I am willing to submit a PR to implement this change, but would need some guidance.
[X] I am not willing to submit a PR to implement this change.
Hm. That's a good question. I think when it gets to transforming values in JS, Snowpack tries to encourage you to write JS as close as possible to what your final app could be. We just feel in general that the less your code looks like production, the more problems you have! That thinking is kinda what led to creating Snowpack in the first place: letting all your import statements persist in the end (and as someone that's spent hundreds of hours debugging webpack, it's an amazing tool but when things go wrong they go really wrong).
All that said, I think env vars are a good workaround now that is probably much easier and more flexible than having "magic string replacement" in JavaScript like this.
You raise a really good point that we do want to support more CRA-like things! We'll think about it for a future add, but env vars may be good for now.
I think that CRA's thinking in this particular case isn't about transforming any value in any JS, only about transforming this specific value (%PUBLIC_URL%) in this specific file (src/serviceWorkerRegistration) because, if you rely on it for HTML and then activate the service worker, you end up with a broken site. There's a dependency between the two.
I do understand however that this is a CRA-inspired, not CRA-compatible feature, and that create-snowpack-app doesn't even come with a service worker template, so not really a bug. Close at will. :)
|
2025-04-01T04:35:32.441847
| 2017-12-04T23:20:08
|
279187215
|
{
"authors": [
"alexanderdean",
"yalisassoon"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10899",
"repo": "snowplow/snowplow-javascript-tracker",
"url": "https://github.com/snowplow/snowplow-javascript-tracker/issues/624"
}
|
gharchive/issue
|
Add trackConsentWithdrawn method
@yalisassoon to specify the arguments to be provided.
Nudge @yalisassoon
Under the hood this should use the consent document schema as a context:
{
"$schema": "http://iglucentral.com/schemas/com.snowplowanalytics.self-desc/schema/jsonschema/1-0-0#",
"description": "Schema for consent document context",
"self": {
"vendor": "com.snowplowanalytics.snowplow",
"name": "consent_document",
"format": "jsonschema",
"version": "1-0-0"
},
"type": "object",
"properties": {
"id": {
"type": "string",
"maxLength": 36
},
"version": {
"type": "string",
"maxLength": 36
},
"name": {
"type": "string",
"maxLength": 60
},
"description": {
"type": "string",
"maxLength": 10000
}
},
"required": ["id", "version"],
"additionalProperties": false
}
The event JSON itself should have a single additional field - an all flag to indicate if a user withdraws all consent. (I.e. across every document that they've previously agreed to.)
{
"$schema": "http://iglucentral.com/schemas/com.snowplowanalytics.self-desc/schema/jsonschema/1-0-0#",
"description": "Schema for consent withdrawn",
"self": {
"vendor": "com.snowplowanalytics.snowplow",
"name": "consent_withdrawn",
"format": "jsonschema",
"version": "1-0-0"
},
"type": "object",
"properties": {
"all": {
"type": "boolean"
}
},
"additionalProperties": false
}
So the trackConsentWithdrawn method should take the following arguments:
all (true/false)
id
version
name
description
In general we'd only expect the id, version, name and description fields to be filled in if the all field is set to false. (If set to true don't expect those fields to be filled in.)
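Putting the two schemas together, the tracker call would end up emitting something like the following self-describing JSON (field values are illustrative; the iglu URIs follow from the self blocks above):
consent_withdrawn_event = {
    "schema": "iglu:com.snowplowanalytics.snowplow/consent_withdrawn/jsonschema/1-0-0",
    "data": {"all": False},
}
consent_document_context = {
    "schema": "iglu:com.snowplowanalytics.snowplow/consent_document/jsonschema/1-0-0",
    "data": {
        "id": "1234",
        "version": "5",
        "name": "cookie_policy",
        "description": "Consent document for cookies",
    },
}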
SGTM - @mhadam can you create the associated tickets in Iglu Central?
|
2025-04-01T04:35:32.442816
| 2015-01-30T16:12:40
|
56052143
|
{
"authors": [
"alexanderdean"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10900",
"repo": "snowplow/snowplow",
"url": "https://github.com/snowplow/snowplow/issues/1370"
}
|
gharchive/issue
|
EmrEtlRunner: add group-by for S3DistCp of enriched and shredded events to S3
For discussion.
We're not going to do this, closing
|
2025-04-01T04:35:32.460346
| 2023-04-28T05:26:00
|
1687922793
|
{
"authors": [
"BECATRUE",
"choiwh03"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10901",
"repo": "snu-quiqcl/swift",
"url": "https://github.com/snu-quiqcl/swift/pull/157"
}
|
gharchive/pull-request
|
Addition of qiwi icon to readme
Add image qiwi.
See also: #156
Oh, on my laptop the QIWI text is not shown because the background color is black. How about adding a gray background color?
And also, I think the following code is better.
<p align="center">
<img width="40%" alt="image" src="https://user-images.githubusercontent.com/65724072/235059956-6531f005-9a70-4b52-8d0c-c4477b96336f.svg">
</p>
[](https://github.com/snu-quiqcl/swift/actions/workflows/pylint.yml)
[](https://github.com/snu-quiqcl/swift/actions/workflows/unittest.yml)
Like the below image..
I think there was an update, but I can't see it from this branch, so please check it out! @choiwh03
|
2025-04-01T04:35:32.463131
| 2018-03-30T01:29:46
|
309951008
|
{
"authors": [
"DifferentSC",
"taegeonum"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10902",
"repo": "snuspl/mist",
"url": "https://github.com/snuspl/mist/pull/1090"
}
|
gharchive/pull-request
|
[MIST-1089] Give unique vertex id for each vertex in AvroDag
This PR addresses #1089 via
Give unique vertex id for each vertex in AvroDag
LGTM. I'm merging it
|
2025-04-01T04:35:32.486709
| 2023-06-10T08:29:32
|
1750871590
|
{
"authors": [
"jiangshengkai1999",
"soanagno"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10904",
"repo": "soanagno/wakenet",
"url": "https://github.com/soanagno/wakenet/issues/2"
}
|
gharchive/issue
|
can't generate wake dataset using curl model
I just changed the inputs_curl.json, setting "train_net" to 1 and "make_data" to true, and the outcome is: floris.tools.floris_interface.FlorisInterface INFO Model identified as curl requires use of underlying grid points
floris.tools.floris_interface.FlorisInterface INFO Assuming model resolution
floris.tools.floris_interface.FlorisInterface INFO 250 100 75
Nearest value to 90.00 is 87.65
and I can't solve this.
The Curl dataset was made using a modified version of FLORIS, since it was
not possible to tweak the dimensions in version 2.4.
|
2025-04-01T04:35:32.488601
| 2019-08-13T23:33:30
|
480415524
|
{
"authors": [
"soapdog"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10905",
"repo": "soapdog/patchfox",
"url": "https://github.com/soapdog/patchfox/issues/43"
}
|
gharchive/issue
|
"Copy to compose tab" contextual-menu feature
It is time for Patchfox to integrate deeper into the browser. This item is about changing the contextual menu for the selection context type to add an item to copy the selected text to the compose window from Patchfox.
Tricky bit
The user may have more than one compose window open. To which one do I send the text? How do I differentiate them and let the user know?
Resolved by copying the data to the clipboard. The user then can paste in whichever place they want.
|
2025-04-01T04:35:32.513809
| 2016-11-26T10:13:58
|
191798280
|
{
"authors": [
"DiegoHuang",
"darrachequesne"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10906",
"repo": "socketio/socket.io-redis",
"url": "https://github.com/socketio/socket.io-redis/issues/150"
}
|
gharchive/issue
|
subEvent seems doesnt work
Hi,
I set option.subEvent to 'RedisPub', but it still receives events from 'message'.
Sample code is below; if I'm using the socket.io-redis module incorrectly, please tell me. :)
Setup:
var adapter = require('socket.io-redis')({
pubClient: pubClient,
subClient: pubClient,
subEvent: 'RedisPub'
});
const msgpack = require('msgpack-js');
adapter.subClient.on('message', function (channel, msg) {
const args = msgpack.decode(msg);
if (adapter.uid != args.shift())
logger.info({'data': args}, 'Received data from other Nodes')
});
adapter.subClient.on('RedisPub', function (channel, msg) {
const args = msgpack.decode(msg);
if (adapter.uid != args.shift())
logger.info({'Channels.RedisPub remote message': args})
});
io.adapter(adapter);
Publish Message:
_nsp.emit(data);
Hi! AFAIK, you can't use user-defined subscription events, you have to choose among message, pmessage, ...
Please see: https://github.com/NodeRedis/node_redis#subscriber-events
Please reopen if needed!
|
2025-04-01T04:35:32.515839
| 2023-11-17T15:19:28
|
1999367477
|
{
"authors": [
"chicoxyzzy",
"elibroftw"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10907",
"repo": "socketsupply/socket-examples",
"url": "https://github.com/socketsupply/socket-examples/issues/80"
}
|
gharchive/issue
|
Please check for installed dependencies.
I already use React Native and Tauri, so I already have Android and Windows build dependencies. Please don't treat my PC as a junkyard.
On Windows, some SDKs and clang++ are necessary to build Socket applications. I don't think React Native provides these deps, and Tauri uses Rust, not C++. The clang++ dep is still necessary.
You should provide UI instructions then, because I got an email from you guys and, to be honest, I'm not going to abandon Tauri without building an executable. I don't know if this is like Kotlin Multiplatform or like Tauri 2.0.
|
2025-04-01T04:35:32.536744
| 2022-07-23T02:29:53
|
1315533656
|
{
"authors": [
"M4x1mumReZ",
"alex-free",
"datboi2008",
"krystalgamer",
"socram8888"
],
"license": "WTFPL",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10908",
"repo": "socram8888/tonyhax",
"url": "https://github.com/socram8888/tonyhax/issues/133"
}
|
gharchive/issue
|
DDR 3rd and 4th mix stuck at now loading screen
Before opening this kind of issue, please ensure:
Are you using a compatible console version?
Are you using CD-R media, and not CD-RW?
Did you copy all the files in the expected format to the memory card?
tonyhax version: Which version are you running? Please ensure you are running the latest stable, or a newer beta.
Installation method: How did you get tonyhax on the memory card?
Entry point game: Which game are you using to launch tonyhax?
Console model: Console product code, written on the bottom of the unit, such as "SCPH-7502"
Integrity check: If it boots, does the built-in integrity check succeed?
BIOS version: If you can get to boot, which version of the BIOS does it report?
Target game: If the bug happens when launching a game, what's its name and game code? Example: "Spyro 3 (SCES-02835)"
Bug explanation:
Details:
Tonyhax version: 1.4.3
Ps1 version: SCPH-7501 (NTSC-U/C)
Ps1 Bios version: v1.4
Game used to run tonyhax: THPS3 (SLUS-01419)
I got Tony Hax onto my memory card using a ps2 with UlaunchElf
I followed this guide: https://www.youtube.com/watch?v=01gVgTQLP9U
Games with Problems:
DDR 3rd Mix (I have not tried 3rd Mix Plus or 4th Mix, but they use the same engine so I'm guessing they have the same issue)
Bumping this, as it seems to be the exact same issue as https://github.com/alex-free/tonyhax/issues/10 . DDR Best Hits Japan is getting stuck at the same exact loading screen in the same way. Reproduced by me with a stock SCPH-1001. Reproduced by @M4x1mumReZ with a SCPH-7502.
Have you tried on an emulator? I am working so cannot at the moment.
Have you tried on an emulator? I am working so cannot at the moment.
Unreproducible on emulation using DuckStation+ Boot CD, only on real hardware.
The same can also be said for my stock SCPH-1002, so this must be an anti-piracy issue I suppose.
The only way you could defeat it is to maybe eliminate the routine altogether.
How are you booting into tonyhax?
I've booted into Tonyhax International via both cheat cartridge and boot CD with swap trick.
Used two models for this: SCPH-1002 and SCPH-7502.
I've booted into Tonyhax International via both cheat cartridge and boot CD with swap trick.
I have confirmed this same issue is present in Tonyhax v1.4.4 AND Tonyhax International. I used the Tonyhax v1.4.4 Boot CD via CD Player Swap Trick on my SCPH-1001.
Yup, will certainly do.
Yup, will certainly do.
I created a fork of CDRDAO that forces byte-swapping CD audio files (which is required for CDRDAO to burn any kind of CD containing CD audio correctly on every system I've tested, from Mac back in the day to Linux right now) in every write mode. This means that I can burn with the generic-mmc-raw driver and the CD audio will also work.
The actual official CDRDAO is not capable of burning in raw mode AND byteswapping the CD audio files. The --swap argument or even explicitly specifying '--driver generic-mmc-raw:0x20000' is completely ignored by the official CDRDAO. You can only use the swap options (and have them applied to the disc burned) if you use the generic-mmc driver.
I went ahead and hacked on cdrdao's source code a bit. I got it to force byte-swapping of the CD audio samples in my own fork, so finally everything works. Keep in mind that if you don't have CDRDAO treat the audio as byteswapped, you get insanely loud static/white noise instead of the intended CDDA music. See my original dilemma?
Well finally everything works and I can proceed to test out this game as well. I can finally make a proper DDR 2nd Remix burn from Linux using just open source software.
Here's my fork by the way:
https://github.com/alex-free/cdrdao
Here's my 'hacks on cdrdao' to get this working:
https://github.com/alex-free/cdrdao/commit/6e544d25de0c61b294e2733ae012f4b3ebf3e0fe
Please make sure the website specifies this fork, because the official CDRDAO will produce invalid CD audio tracks burned to the disc in the wrong byte order, which causes static/white noise.
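For anyone wanting to sanity-check a rip without patching cdrdao, the byte-swap itself is trivial; a small Python sketch (file names are just placeholders) of swapping each 16-bit CDDA sample:
import array

def byteswap_cdda(data: bytes) -> bytes:
    samples = array.array("h")            # signed 16-bit PCM samples
    samples.frombytes(data)
    samples.byteswap()                    # swap low/high byte of every sample
    return samples.tobytes()

with open("track02.bin", "rb") as f:      # placeholder name for a ripped audio track
    swapped = byteswap_cdda(f.read())
with open("track02-swapped.bin", "wb") as f:
    f.write(swapped)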
I have confirmed that by using my cdrdao fork with the generic-mmc-raw driver to burn each CD-R, all of the below games now work as expected on my stock SCPH-1001 using the Tonyhax v1.4.4 Boot CD via CD Player Swap Trick:
Dance Dance Revolution Best Hits.
Dance Dance Revolution 3rd Mix.
Dance Dance Revolution 4th Mix.
Proving the EDC check is what causes this (just like https://github.com/socram8888/tonyhax/issues/121#issuecomment-1341365357) , and that it can be bypassed with known burning methods.
great work @alex-free
do you have any plans in upstreaming your patches?
great work @alex-free do you have any plans in upstreaming your patches?
Hey, didn't you do the tekken 3 exploit stuff? I got Tekken 2 and Tekken 3 exploits into tonyhax international because of your previous work. https://alex-free.github.io/tonyhax-international#save-game-exploit . Thanks for that.
My approach with cdrdao was a 'brute-force I know how it needs to function internally' type modification. It's probably too hacky as-is for upstream. I was originally going for a more proper patch set but it got complicated.
glad someone used it :D
anyways it's great to start for a better CDDA on cdrdao
I might come up with some exploitable saves in the near future, I'm busy right now.
So Dance Dance Revolution Japan and USA versions (the 1st DDR game) have the APv1 style detection (that only triggers if you have a non-stealth modchip spamming the SCEX code). This might be bypassed already in the current Tonyhax; I have not tested on real hardware for that. But more importantly, they also have the EDC check at sector 12 just like the newer DDR games.
You know, it wouldn't be that hard to simply see if the EDC is invalid in each sector, and then report which sector has invalid EDC data indicating an EDC check. I'll make a program which can detect this (using proper clean game rips as input) soon; maybe I'll integrate such functionality into https://alex-free.github.io/aprip.
glad someone used it :D
anyways it's great to start for a better CDDA on cdrdao
You inspired me to go back to CDRDAO and figure out the actual bug that was causing this. I threw away my hacky changes and actually figured out the bug.
My CDRDAO fork does not ignore the --swap argument when using the generic-mmc-raw driver; the official CDRDAO does, however. The --swap argument is required for seemingly most, but not all, PSX game rips which contain CD audio. Essentially, if you get static when playing back a CD with audio tracks that you burned with CDRDAO, the audio samples need to be 'byte-swapped' when sent to the CD drive during the burn. You can do this with the official CDRDAO with the --swap option and the generic-mmc driver, but you couldn't before with the generic-mmc-raw driver due to a bug I did find.
I did do a pull request for the official CDRDAO. Thanks for pushing me to look at this properly again when I had more time.
I need help building your CDRDAO fork to work with Linux. Currently, I'm using Linux Mint.
I need help building your CDRDAO fork to work with Linux. Currently, I'm using Linux Mint.
What issue are you having?
I need help building your CDRDAO fork to work with Linux. Currently, I'm using Linux Mint.
What issue are you having?
Compiling the fork under Linux.
|
2025-04-01T04:35:32.546532
| 2019-04-05T17:23:44
|
429859355
|
{
"authors": [
"Dylan-DPC",
"SuperiorJT",
"coveralls",
"kpp"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10909",
"repo": "sodiumoxide/sodiumoxide",
"url": "https://github.com/sodiumoxide/sodiumoxide/pull/327"
}
|
gharchive/pull-request
|
Update libsodium hashes due to jedisct1/libsodium#813
The binaries were updated, see jedisct1/libsodium#813
Pull Request Test Coverage Report for Build 908
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 94.914%
Totals
Change from base Build 906: 0.0%
Covered Lines: 2874
Relevant Lines: 3028
💛 - Coveralls
@SuperiorJT does this branch work for you?
👍
bors r+
bors: r+
|
2025-04-01T04:35:32.563468
| 2023-03-03T15:43:55
|
1608861570
|
{
"authors": [
"karol-bisztyga"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10910",
"repo": "software-mansion-labs/cairo",
"url": "https://github.com/software-mansion-labs/cairo/pull/26"
}
|
gharchive/pull-request
|
Test type check
https://github.com/software-mansion/protostar/issues/1552
For now, it works only for args and return values, I couldn't find a way to check if the test is marked with nopanic
I may be wrong about this but I couldn't find how and if the nopanic is handled by the cairo1 compiler. I also ran a simple test - I compiled a simple function with and without nopanic and the sierra/casm outputs were the same, so maybe it isn't supported yet if there's no info in sierra about it.
I added checking if the test is panicable
|
2025-04-01T04:35:32.611308
| 2019-08-18T16:12:01
|
482010636
|
{
"authors": [
"adamw",
"janjaali",
"oliverbecker-fashionid"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10911",
"repo": "softwaremill/elasticmq",
"url": "https://github.com/softwaremill/elasticmq/issues/267"
}
|
gharchive/issue
|
sendMessageBatch to FifoQueue does not guarantee message ordering
Dear elasticmq-team,
I think I found an issue in the fifo-queue-handling, which does break the message ordering. Below is a unittest I've written to demonstrate the issue.
TL;DR: if you use a SendMessageBatchRequest with a fifo-queue, elasticmq does not provide a sequenceNumber and the message ordering seems to be messed up.
I used the following build.sbt:
scalaVersion := "2.12.9"
lazy val testing = (project in file("."))
.settings(
name := "elasticmq-testing",
libraryDependencies ++= {
Seq(
"org.elasticmq" %% "elasticmq-rest-sqs" % "0.14.7" % Test,
"software.amazon.awssdk" % "sqs" % "2.7.26" % Test,
"org.scalactic" %% "scalactic" % "3.0.8" % Test,
"org.scalatest" %% "scalatest" % "3.0.8" % Test
)
}
)
Testcase:
import java.net.URI
import java.util.UUID
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, Materializer}
import org.elasticmq.rest.sqs.SQSRestServerBuilder
import org.scalatest.{Matchers, WordSpec}
import software.amazon.awssdk.auth.credentials.{AwsBasicCredentials, StaticCredentialsProvider}
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.sqs.SqsAsyncClient
import software.amazon.awssdk.services.sqs.model._
import scala.collection.JavaConverters._
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._
class FifoSpec extends WordSpec with Matchers {
private implicit val system: ActorSystem = ActorSystem(UUID.randomUUID().toString)
private implicit val dispatcher: ExecutionContext = system.dispatcher
private implicit val materializer: Materializer = ActorMaterializer.create(system)
"BatchSending messages to fifo queues" should {
"guarantee message ordering" in {
val sqsRestServer = SQSRestServerBuilder.start()
val serverAddress = sqsRestServer.waitUntilStarted().localAddress
val sqsAsyncClient = SqsAsyncClient.builder().
region(Region.of("elasticmq")).
credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create("x", "x"))).
endpointOverride(new URI(s"http://${serverAddress.getHostName}:${serverAddress.getPort}")).
build()
val queueUrl = sqsAsyncClient.createQueue(
CreateQueueRequest.builder().
queueName("test-queue.fifo").
attributes(Map(
QueueAttributeName.FIFO_QUEUE -> true.toString,
QueueAttributeName.CONTENT_BASED_DEDUPLICATION -> false.toString,
QueueAttributeName.VISIBILITY_TIMEOUT -> 1.seconds.toSeconds.toString
).asJava).build
).get().queueUrl()
// just checking the attributes wanted in our queue
sqsAsyncClient.getQueueAttributes(
GetQueueAttributesRequest.builder().
queueUrl(queueUrl).
attributeNames(
QueueAttributeName.FIFO_QUEUE,
QueueAttributeName.CONTENT_BASED_DEDUPLICATION
).build()
).get().attributes().asScala shouldBe Map(
QueueAttributeName.FIFO_QUEUE -> true.toString,
QueueAttributeName.CONTENT_BASED_DEDUPLICATION -> false.toString
)
// every message has the same group id, but deduplication id is different for each message
// see https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html#FIFO-queues-understanding-logic
val groupId = UUID.randomUUID().toString
val messageCount = 10
val sendMessageBatchRequestEntries = (1 to messageCount).map { counter =>
SendMessageBatchRequestEntry.
builder().
id(counter.toString).
messageBody(counter.toString).
messageDeduplicationId(UUID.randomUUID().toString).
messageGroupId(groupId).
build()
}
val sendMessageBatchResultEntries = sqsAsyncClient.sendMessageBatch(
SendMessageBatchRequest.builder().
queueUrl(queueUrl).
entries(sendMessageBatchRequestEntries.toList.asJava).
build()).get().successful()
sendMessageBatchResultEntries.asScala.map(_.md5OfMessageBody()).count(_.nonEmpty) shouldBe messageCount
// no sequenceNumber is given to the messages, although it should be for fifo queues
// ArrayBuffer(null, null, null, null, null, null, null, null, null, null)
// see https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessageBatchResultEntry.html
println(sendMessageBatchResultEntries.asScala.map(_.sequenceNumber()))
// just checking that {messageCount} messages are indeed in the queue
sqsAsyncClient.getQueueAttributes(
GetQueueAttributesRequest.builder().
queueUrl(queueUrl).
attributeNames(
QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES,
QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES_NOT_VISIBLE
).build()
).get().attributes().asScala shouldBe Map(
QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES -> messageCount.toString,
QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES_NOT_VISIBLE -> 0.toString
)
val receiveMessageResponse = sqsAsyncClient.receiveMessage(
ReceiveMessageRequest.builder().
queueUrl(queueUrl).
maxNumberOfMessages(10).build()
).get()
// this assertion fails with something like:
// ArrayBuffer("2", "5", "8", "7", "1", "4", "6", "9", "3", "10") was not equal to Vector("1", "2", "3", "4", "5", "6", "7", "8", "9", "10")
receiveMessageResponse.messages().asScala.map(_.body()) shouldBe (1 to messageCount).map(_.toString)
sqsRestServer.stopAndWait()
}
}
}
There are several comments in the code to show what I expected and what I received. If you use a "real world" AWS fifo-queue, the test works fine for me.
Could you perhaps verify that this is indeed a bug?
Thank you very much, elasticmq is a great tool!
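For what it's worth, an equivalent minimal repro against a standalone ElasticMQ (Python/boto3 for brevity; the endpoint, port and credentials below are assumptions for a default local server) looks like this:
import uuid
import boto3

sqs = boto3.client(
    "sqs",
    endpoint_url="http://localhost:9324",   # assumed default elasticmq-rest-sqs port
    region_name="elasticmq",
    aws_access_key_id="x",
    aws_secret_access_key="x",
)
queue_url = sqs.create_queue(
    QueueName="test-queue.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "false"},
)["QueueUrl"]

group_id = str(uuid.uuid4())
sqs.send_message_batch(
    QueueUrl=queue_url,
    Entries=[
        {
            "Id": str(i),
            "MessageBody": str(i),
            "MessageGroupId": group_id,
            "MessageDeduplicationId": str(uuid.uuid4()),
        }
        for i in range(1, 11)
    ],
)
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10).get("Messages", [])
print([m["Body"] for m in messages])        # a FIFO queue should yield 1..10 in order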
Thanks for the report! Could you maybe check if https://github.com/softwaremill/elasticmq/pull/268 solves the problem?
Adam
Maybe you should consider #269 as well for the check.
@oliverbecker-fashionid I wasn't able to reproduce the bug; it should be fixed now. Can you please close this issue?
Yes, thank you very much. This seems to be fixed indeed.
|
2025-04-01T04:35:32.637089
| 2018-08-06T18:21:29
|
348031650
|
{
"authors": [
"kellobri",
"nwstephens"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10912",
"repo": "sol-eng/data-science-lab",
"url": "https://github.com/sol-eng/data-science-lab/pull/1"
}
|
gharchive/pull-request
|
Add yum-builddep R
When testing, I needed to include this line as well, as seen here: https://rviews.rstudio.com/2018/03/21/multiple-versions-of-r/
Is it important enough to include in these instructions?
That old repo was for rstudio::conf 2018. It's OK if it's frozen.
|
2025-04-01T04:35:32.639328
| 2012-02-12T22:36:58
|
3193820
|
{
"authors": [
"nandykins",
"sol"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10913",
"repo": "sol/vimus",
"url": "https://github.com/sol/vimus/issues/5"
}
|
gharchive/issue
|
Dependency on cabal 0.14 necessary?
Is the dependency on cabal 0.14 really necessary? I had to build cabal from the darcs repository manually just to get the dependency to resolve. It refuses to build with stable cabal. Is this dependency really necessary?
Otherwise, that may be a hindrance to uploading it to hackage or similar since it can't be built automatically.
I'm a little bit puzzled. Where does this dependency come from?
Just checked with cabal-install == 0.10.2 and Cabal == <IP_ADDRESS>, and it worked.
@nandykins what to do with this ticket?
No idea. I'll re-check it tomorrow.
@nandykins I can't reproduce this. Please re-open, if this is still an issue.
|