| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
138020182 | Fixed validation of the external properties in the settings of the plugin
The Array.prototype.contains method produces some problems with the behaviour of the "for ... in ..." statement.
http://stackoverflow.com/questions/870032/setting-a-custom-property-with-dontenum
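For reference, the pattern discussed in that Stack Overflow thread looks like this (a sketch, not the plugin's actual patch): defining the helper with `Object.defineProperty` and `enumerable: false` keeps it out of `for ... in` loops.

```javascript
// Sketch of the fix discussed above (not the plugin's actual patch):
// enumerable: false is the key part — the method stays invisible to for...in.
Object.defineProperty(Array.prototype, "contains", {
  value: function (item) { return this.indexOf(item) !== -1; },
  enumerable: false,
  writable: true,
  configurable: true,
});

const arr = [1, 2, 3];
const seenKeys = [];
for (const k in arr) seenKeys.push(k); // only "0", "1", "2" — no "contains"
```

A plain `Array.prototype.contains = function ...` assignment would create an enumerable property and leak into every `for ... in` loop, which is the behaviour this pull request guards against.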
Big thanks to you :+1:
| gharchive/pull-request | 2016-03-02T23:03:05 | 2025-04-01T06:45:21.033238 | {
"authors": [
"pawelczak",
"segemun"
],
"repo": "pawelczak/EasyAutocomplete",
"url": "https://github.com/pawelczak/EasyAutocomplete/pull/180",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1044150118 | no_std example
Do you have a no_std example? I'm just trying to work out what "a fair bit of memory" is, to see what is feasible.
I don't have a no_std example handy right now, but I might be able to give you an idea of the memory requirements. For the MLX90641, the unprocessed calibration data is 1664 bytes. The included Mlx90641Calibration structure takes up around 3206 bytes (-ish, there might be structure padding as well). The memory needed for the MLX90640 is much higher, with the processed calibration structure taking up 7821 bytes. CameraDriver also has an internal buffer of either 384 bytes (for the '641) or 1536 bytes (for the '640) that it uses as the destination when reading data off of the camera, in addition to the either 768 or 3072 bytes needed for the output image from the camera.
So for initial calibration loading for the '641, you need at least 4870 bytes of memory, and to actually get an image you'd need at least 4358 bytes. The '640 comes in a lot heavier, needing over 12K just to get images off of the camera.
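Those totals can be reproduced from the per-item figures above (a quick sanity check, sketched in plain JavaScript for convenience; the byte counts are taken verbatim from the comment):

```javascript
// Back-of-envelope check of the totals quoted above (all sizes in bytes).
const mlx90641 = { rawCal: 1664, processedCal: 3206, driverBuf: 384, image: 768 };
const mlx90640 = { processedCal: 7821, driverBuf: 1536, image: 3072 };

// Calibration loading: raw data and processed structure held at the same time.
const calLoad641 = mlx90641.rawCal + mlx90641.processedCal;                     // 4870

// Steady-state imaging: processed calibration + driver buffer + output image.
const imaging641 = mlx90641.processedCal + mlx90641.driverBuf + mlx90641.image; // 4358
const imaging640 = mlx90640.processedCal + mlx90640.driverBuf + mlx90640.image; // 12429, i.e. over 12K
```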
A custom CalibrationData implementation can be written to reduce the calibration memory requirements, either by extracting and calculating each value from the source data as needed, or by using some sort of auxiliary storage for the calibration data. The former approach might work for slower frame rates, but I'd be concerned about calculating the per-pixel calibration values for every frame (some of those values are floats as well, which can be a lot slower depending on your device). The latter approach is probably what I'd choose, especially as there was some discussion about it in the official C++ library's repo: melexis/mlx90640-library#3
To be honest, I'd skipped over the buffer in CameraDriver, and will be creating an issue to track how to remove it and reduce the memory usage there. It'll take some thinking/planning and maybe some unsafe code as it'll be taking a buffer of u8 pairs as input, and writing f32 values to that buffer...hmm.
Thanks for the explanation. If you get around to a no_std example, would you please comment here so I get pinged? I looked very briefly at the code and thought there may be room for memory saving by using time in milliseconds as a u32 rather than seconds as floating point. That is a very uneducated and unresearched suggestion, so you probably know better.
| gharchive/issue | 2021-11-03T21:54:29 | 2025-04-01T06:45:21.039863 | {
"authors": [
"paxswill",
"pdgilbert"
],
"repo": "paxswill/mlx9064x-rs",
"url": "https://github.com/paxswill/mlx9064x-rs/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2424765194 | Can't Access Document ID Inside a Custom Server Component
Link to reproduction
No response
Payload Version
3.0.0-beta.67
Node Version
20.9.0
Next.js Version
15.0.0-canary.58
Describe the Bug
Currently, when using a custom server component inside a UI field, it's not possible to access the ID of a document. The only workaround is to use a client component. Accessing the document ID server side would be beneficial for fetching initial data and improving initial loading speed.
Reproduction Steps
Log the props of a custom server component inside a UI field.
You get the following props:
```js
{
  AfterInput: null,
  BeforeInput: null,
  CustomDescription: undefined,
  CustomError: undefined,
  CustomLabel: undefined,
  custom: undefined,
  descriptionProps: { description: undefined },
  disabled: false,
  errorProps: { path: '' },
  label: 'Offers',
  path: '',
  required: undefined,
  i18n: {
    ...
  },
  payload: {
    ...
  }
}
```
But none of these props include the current document ID.
Adapters and Plugins
No response
Hey @tobiasiv — unfortunately this is a side-effect of React Server Components and not Payload's implementation.
Due to the recursive, drawer-based UI of an editor like Payload, we need to render custom components a single time on the server. They are not re-rendered. Imagine opening an "edit drawer" in Payload—that is a client-side action, which would render your custom component and the ID would need to be the ID of the document that was rendered.
So you can imagine that as you navigate the admin panel, your id could change based on which document(s) you open.
For this reason, if you want to access the ID of the document, you'd need to use a client component.
At least for now.
In the future, if React / Next.js introduced a way to request new server components upon a client action (the opening of a drawer or similar) we could add more props into the mix.
We went down about 1 million rabbit holes to discover this and I think unfortunately there's not much that can be done as of now. But I'll tag this as a Feature Request so we can keep it on the radar and hopefully improve this logic at some point based on React / Next.js features becoming available!
| gharchive/issue | 2024-07-23T09:52:21 | 2025-04-01T06:45:21.073799 | {
"authors": [
"jmikrut",
"tobiasiv"
],
"repo": "payloadcms/payload",
"url": "https://github.com/payloadcms/payload/issues/7304",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2686000757 | interfaceName does not work for named tabs.
Describe the Bug
If I define an "interfaceName" for a named tab, no extra type interface gets generated.
Link to the code that reproduces this issue
https://github.com/ckruppe/Payload-issues-reproduction
Reproduction Steps
1.) Clone the Repo
2.) Install dependencies with yarn
3.) Collection config can be found under "src/server/payload/collections/Stores.ts"
4.) The collection defines a group interfaceName (StoreAdress) on line 82 and a tab interfaceName (ReferringMedia) on line 195.
5.) Run yarn generate:types
6.) Generated file can be found under "types/payload-types.ts"
7.) StoreAdress got generated; ReferringMedia did not.
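A minimal config sketch of the two cases (slug and field names here are illustrative, not taken from the linked repo):

```typescript
import type { CollectionConfig } from 'payload'

export const Stores: CollectionConfig = {
  slug: 'stores',
  fields: [
    {
      type: 'group',
      name: 'address',
      interfaceName: 'StoreAdress', // appears in payload-types.ts as expected
      fields: [{ name: 'city', type: 'text' }],
    },
    {
      type: 'tabs',
      tabs: [
        {
          name: 'referringMedia',
          interfaceName: 'ReferringMedia', // missing from the generated types
          fields: [{ name: 'source', type: 'text' }],
        },
      ],
    },
  ],
}
```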
Which area(s) are affected? (Select all that apply)
area: core
Environment Info
Binaries:
Node: 22.11.0
npm: 10.3.0
Yarn: 1.22.19
pnpm: 8.11.0
Relevant Packages:
payload: 3.1.0
next: 15.0.3
@payloadcms/db-mongodb: 3.1.0
@payloadcms/graphql: 3.1.0
@payloadcms/live-preview: 3.1.0
@payloadcms/live-preview-react: 3.1.0
@payloadcms/next/utilities: 3.1.0
@payloadcms/plugin-nested-docs: 3.1.0
@payloadcms/plugin-seo: 3.1.0
@payloadcms/richtext-lexical: 3.1.0
@payloadcms/richtext-slate: 3.1.0
@payloadcms/translations: 3.1.0
@payloadcms/ui/shared: 3.1.0
react: 19.0.0-rc-65a56d0e-20241020
react-dom: 19.0.0-rc-65a56d0e-20241020
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 32488
Available CPU cores: 16
Hello @ckruppe! Indeed I can reproduce this on the current version, but I have a PR https://github.com/payloadcms/payload/pull/9299 where this is already fixed!
@r1tsuu very nice. Thank you! :)
| gharchive/issue | 2024-11-23T13:09:48 | 2025-04-01T06:45:21.077947 | {
"authors": [
"ckruppe",
"r1tsuu"
],
"repo": "payloadcms/payload",
"url": "https://github.com/payloadcms/payload/issues/9467",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
333323656 | Respect defaultInputValue when provided
Hi, thanks a lot for this UI-less solution: it's been really easy to integrate with our existing UI lib as a substitution of a (really) old jQuery based one!
I'm having one single problem: I provide both a selectedItem and a defaultInputValue to downshift, and since inputValue and selectedItem are decoupled, I'd expect that in my initial rendering I would see defaultInputValue.
In practice this snippet of code in the constructor:
```js
if (state.selectedItem != null) {
  state.inputValue = this.props.itemToString(state.selectedItem)
}
```
prevents that, effectively coupling the two fields.
I'd suggest also checking whether this.props.defaultInputValue is null in that if condition.
Hmmm... But the only reason selectedItem would not be null is if you're specifying a defaultSelectedItem prop. The inputValue should reflect the selectedItem when the user hasn't made changes to the input value which is why that logic is the way it is. Can you show an example of the kind of UI experience you're trying to build so I can understand better why you want things to not work as they are?
I'm talking about the initial render only (where the default values make sense), I'll show you what I mean shortly.
Here a quick example.
I would like to maintain the decoupling you have when you're changing the input, but the selected item doesn't change yet:
even in the first render (here implemented by a pristine value in the state).
The user experience I'd like to give is: since users haven't interacted with the dropdown yet, I'd like to show them all the viable options.
Now the experience they have is: they see an input without options, and they have to actively clear the input before options are shown
Example: here (I just took one official example, and added a defaultSelectedItem).
That's very interesting. I'm inclined to agree with you. I'm thinking what might be better is an initialInputValue prop, or perhaps a more flexible prop that could allow you to specify the initial state of the entire component:
```jsx
const ui = <Downshift initialState={{isOpen: true, inputValue: '', selectedItem: items[3], highlightedIndex: 3}} />
```
What would you say to that?
My understanding was that it was exactly the role of the default_ props (defaultSelectedItem, defaultHighlightedIndex, defaultInputValue, defaultIsOpen), if not I completely missed their point..
Anyway, any solution looks equally promising (and the more flexible one could be useful more broadly).
That's part of the use case, but another part of the use case is what to do when downshift is reset. The distinction is slight and normally not noticeable. It's probably kinda confusing. But the reason the default isn't being picked up in this case is because the default doesn't make sense when there's a selected item. Default is interpreted to mean: "When there's no other value that makes sense." In this case there is a value that makes sense. And that value is the itemToString value of the default selected item. 😅
Sorry it's confusing. I think I'd merge a PR to add support for initialState. We'd definitely want tests and docs for it. I think it should be considered an advanced prop (for docs purposes).
I got the distinction. I'll try to get a PR with the initial_ props ASAP.
I'd go down the distinct props, as opposed to the initialState, just to keep the same API.
I'll try also to add a short paragraph in the docs to explain the distinction.
Would that be OK?
That sounds fine. Maybe we could get away with not having docs dedicated to each prop? I want to avoid confusion. Most people probably wont need these props so having a single section for "initial props" would probably be sufficient.
agreed!
I have a question: right now on reset only defaultHighlightedIndex is respected, but from your previous comment I thought the default props were meant to be used on reset.
This is the piece of code I'm talking about:
```js
reset = (otherStateToSet = {}, cb) => {
  otherStateToSet = pickState(otherStateToSet)
  this.internalSetState(
    ({selectedItem}) => ({
      isOpen: false,
      highlightedIndex: this.props.defaultHighlightedIndex,
      inputValue: this.props.itemToString(selectedItem),
      ...otherStateToSet,
    }),
    cb,
  )
}
```
Shouldn't it be something more like:
```js
reset = (otherStateToSet = {}, cb) => {
  otherStateToSet = pickState(otherStateToSet)
  this.internalSetState(
    (state) => {
      return {
        isOpen: this.props.defaultIsOpen,
        highlightedIndex: this.props.defaultHighlightedIndex,
        inputValue: this.props.defaultInputValue || this.props.itemToString(this.props.defaultSelectedItem),
        selectedItem: this.props.defaultSelectedItem,
        ...otherStateToSet,
      }
    },
    cb,
  )
}
```
where the current state is completely discarded in favour of the default props?
Yeah, you're right. It really should be using the defaults in the reset. That's a design flaw. I sure wish that we'd realized this before I released 2.0.0 last week 😅
I think this is worth releasing 3.0.0 though. So let's change this and push it as a breaking change. Could you do that?
I'm working on this at the moment and I've got a solution that I like.
The one difference from what we discussed is the changes to the reset function. I will document this, but the expectation of the reset function isn't to totally revert the state of downshift to the initial state, but instead to reset it to the last selected item. Based on how it's used internally and externally, that's the best expectation.
Practically what this means is that the selectedItem will remain the same, and the inputValue will be derived from that value. The highlightedIndex and isOpen state will be based on the defaultValue though.
If someone wanted to completely reset downshift, they could do so by passing overrides:
```js
reset({selectedItem: null, inputValue: ''})
// or, fun fact, we expose setState too, though this is undocumented:
// setState({isOpen: false, highlightedIndex: null, selectedItem: null, inputValue: ''})
```
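To make that behaviour concrete, here is a standalone sketch (an assumption about the semantics described above, not the actual downshift source): reset keeps the last selectedItem and re-derives inputValue from it, while isOpen and highlightedIndex fall back to the default* props, and a "full" reset goes through the overrides.

```javascript
// Behavioral sketch, not downshift's real implementation.
function reset(props, state, overrides = {}) {
  return {
    isOpen: props.defaultIsOpen,
    highlightedIndex: props.defaultHighlightedIndex,
    selectedItem: state.selectedItem,                    // kept
    inputValue: props.itemToString(state.selectedItem),  // re-derived
    ...overrides,                                        // escape hatch for a full reset
  };
}

const props = {
  defaultIsOpen: false,
  defaultHighlightedIndex: null,
  itemToString: (item) => (item == null ? '' : String(item)),
};

const afterReset = reset(props, { selectedItem: 'apple' });
const fullReset = reset(props, { selectedItem: 'apple' }, { selectedItem: null, inputValue: '' });
```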
All that said, perhaps reset isn't the right name for this function. If you have a better name for it let me know. Thanks!
:tada: This issue has been resolved in version 3.0.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/issue | 2018-06-18T15:38:22 | 2025-04-01T06:45:21.100872 | {
"authors": [
"EnoahNetzach",
"kentcdodds"
],
"repo": "paypal/downshift",
"url": "https://github.com/paypal/downshift/issues/467",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
231208012 | Add README.md "Table of Contents"
This PR is going to add a "Table of Contents" to the README document to improve its readability, so that users don't need to scroll up and down to find the section they need.
This seems super helpful, it is quite an extensive README. Thanks for adding this!
WDYT about 3 levels vs 4 levels?
3 levels
4 levels
With 4 levels it feels pretty vertically tall. When viewing the README, the only content "above the fold" is the title, badges, and table of contents. But it looks like we have the same outcome with 3 levels... Hrmm. The titles in the 4th level might also be helpful in identifying terminology ("using className" might be a helpful waypoint).
Thanks @PeterDaveHello this is super helpful!
But it looks like we have the same outcome with 3 levels, although it is 7 lines less... Hrmm. The titles in the 4th level might also be helpful in identifying terminology ("using className" might be a helpful waypoint).
@ajwhite makes a fair point.
I think that 3 levels is still enough information to glance over and find something I might need. I imagine that people new to glamorous are more likely to look for "Overriding component styles" than a specific subtopic of that, such as "using glamorous() composition".
Personally, I usually use a ToC to browse an unfamiliar topic and Ctrl-f when I'm looking for something specific. :man_shrugging:
This is super awesome! Thank you for making it! I would really prefer to not maintain this TOC though. Could we have it automated? A quick search showed doctoc as something we could use. I like it because it allows you to specify where the TOC goes and if you want to skip a heading.
We could hook it into the lint-staged config to run the CLI and git add for README.md. What do you all think?
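A possible wiring in package.json (a sketch: it assumes doctoc is installed as a devDependency and uses its --maxlevel option to cap the depth as discussed above):

```json
{
  "lint-staged": {
    "README.md": [
      "doctoc --maxlevel 3 README.md",
      "git add"
    ]
  }
}
```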
Oh wow, doctoc looks awesome, of course there's a tool for this 😂
And look at that.. they event have a setting for a max depth as discussed above.
An automated TOC sounds 👌 very neat.
Done.
Codecov Report
Merging #132 into master will not change coverage.
The diff coverage is n/a.
```diff
@@           Coverage Diff           @@
##           master   #132   +/-   ##
=====================================
  Coverage     100%   100%
=====================================
  Files          10     10
  Lines         130    130
  Branches       33     33
=====================================
  Hits          130    130
```
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b742428...6d48afb. Read the comment docs.
I would have expected an edit to the lint-staged config. Am I missing something?
@PeterDaveHello would you like to add yourself to the contributors table?
Maybe just add that when you need to update readme as that's not so important?
| gharchive/pull-request | 2017-05-25T00:28:00 | 2025-04-01T06:45:21.114316 | {
"authors": [
"PeterDaveHello",
"ajwhite",
"codecov-io",
"kentcdodds",
"kwelch",
"paulmolluzzo"
],
"repo": "paypal/glamorous",
"url": "https://github.com/paypal/glamorous/pull/132",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
246146184 | fix autocomplete arrow down highlight
What:
Fixes https://github.com/paypal/react-autocompletely/issues/31
Why:
First arrow down event was not respecting defaultHighlightedIndex prop.
How:
Use the defaultHighlightedIndex prop when the menu first opens.
Checklist:
[ ] Documentation
[ ] Tests
[x] Ready to be merged
[ ] Added myself to contributors table
Codecov Report
Merging #42 into master will decrease coverage by 0.16%.
The diff coverage is 0%.
```diff
@@            Coverage Diff             @@
##           master      #42      +/-   ##
==========================================
- Coverage   18.72%   18.55%   -0.17%
==========================================
  Files           8        8
  Lines         219      221       +2
  Branches       45       46       +1
==========================================
  Hits           41       41
- Misses        139      140       +1
- Partials       39       40       +1
```

| Impacted Files | Coverage Δ | |
|---|---|---|
| src/autocomplete.js | 15.94% <0%> (-0.48%) | :arrow_down: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0258fe8...a6ca4a3. Read the comment docs.
| gharchive/pull-request | 2017-07-27T19:35:15 | 2025-04-01T06:45:21.125252 | {
"authors": [
"codecov-io",
"souporserious"
],
"repo": "paypal/react-autocompletely",
"url": "https://github.com/paypal/react-autocompletely/pull/42",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2067112322 | 🛑 Departamento is down
In 94004c6, Departamento (http://cca.unb.br/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Departamento is back up in 4135843 after 12 minutes.
| gharchive/issue | 2024-01-05T10:37:32 | 2025-04-01T06:45:21.129723 | {
"authors": [
"pazkero"
],
"repo": "pazkero/status.cacic.bsb.br",
"url": "https://github.com/pazkero/status.cacic.bsb.br/issues/1985",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2092022296 | 🛑 Departamento is down
In bc8f4e7, Departamento (http://cca.unb.br/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Departamento is back up in 7efd578 after 40 minutes.
| gharchive/issue | 2024-01-20T11:39:32 | 2025-04-01T06:45:21.132130 | {
"authors": [
"pazkero"
],
"repo": "pazkero/status.cacic.bsb.br",
"url": "https://github.com/pazkero/status.cacic.bsb.br/issues/2226",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
325667161 | Cordova Not Available when getting SIM info
Expected behaviour
Hope to request permission succesfully and get SIM info.
Actual behaviour
When I try to use hasReadPermission and getSimInfo, they always return the error: Cordova Not Available.
requestReadPermission does not work either.
Here is my configuration you may want to know:
"ionic-native/sim": "^4.7.0",
"ionic-native": "^2.2.13",
"cordova-plugin-sim": "^1.3.3"
And I am sure cordova-plugin-sim is installed in android platform 6.2.3 correctly because I can see Sim.java in com.pbakondy.
I'm seeing this behaviour on
[ ] iOS device
[ ] iOS simulator
[x] Android device:
[ ] Android emulator
I am using
[x] cordova
[x] ionic
[ ] PhoneGap
[ ] PhoneGap Developer App
[ ] Intel XDK
[ ] Intel App Preview
[ ] Telerik
[ ] Other:
Hardware models
Huawei Honor V 10
OS versions
Android 8.0.0, and tried with Android 7.0.0 as well
I've checked these
[ ] It happens on a fresh Cordova CLI project as well.
[ ] I'm waiting for deviceready to fire.
[ ] My JavaScript has no errors (window.onerror catches nothing).
[ ] I'm using the latest cordova library, Android SDK, Xcode, etc.
So how can we reproduce this?
corodva 8.0.0
ionic 3.18.0
ionic/cli-plugin-cordova 1.6.2
cordova-android 6.2.3
I put this code in the root page right before checking login.
It shows two alerts with "Cordova not available", and one with "Permission denied" after opening the app.
Here is the code:
```js
this.sim.hasReadPermission().then(
  (info) => alert(info),
  (err) => alert(err)
);

this.sim.requestReadPermission().then(
  () => alert('Permission granted'),
  () => alert('Permission denied')
);

this.sim.getSimInfo().then(
  (info) => alert(info),
  (err) => alert(err)
);
```
Fixed
How did you fix it?
| gharchive/issue | 2018-05-23T11:40:41 | 2025-04-01T06:45:21.156624 | {
"authors": [
"wolfghoul",
"xDaly"
],
"repo": "pbakondy/cordova-plugin-sim",
"url": "https://github.com/pbakondy/cordova-plugin-sim/issues/82",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
316419458 | How to identify who has stopped recording?
I'm using Ionic 3 and plugin native Speech Recognition:
Plugin Ionic Native
My device is an Android 7.0.
So, how can I tell when recording has stopped?
The stopListening function is iOS-only. But what about Android?
Thanks in advance.
Hi @DeeSouza,
on Android the speech recognition (Google Assistant technology) auto-stops the recording when the user stops talking, while on iOS (Siri technology) you have to manually stop the recording through the stopListening method.
Maybe the answer to your question is that you have to branch your code by calling platform.is('ios') and platform.is('android'), and use stopListening only on iOS.
Hi @texano00,
Thank you for the answer.
But how would I do that when using the showPartial parameter?
I don't have my code right now, but it is like this:
```js
let isRecording = false;

function startListen() {
  const optionsAndroid = {
    showPartial: true,
    showPopup: false
  };

  this.speech.startListening(optionsAndroid).subscribe(
    matches => {
      isRecording = false;
      this.speechResult = matches[0];
      this.cd.detectChanges();
    }
  );
}
```
I don't know what your application does, but from the doc:
If you set showPartial to true on iOS the success callback will be called multiple times until stopListening() called.
I suppose that showPartial has the same effect on Android but i'm not sure.
On Monday I'll post my code and explain better.
Thanks!
My code:
```js
// Listen to the user
listenForSpeech() {
  this.androidOptions = {
    language: 'pt-BR',
    matches: 1,
    showPopup: false,
    showPartial: true // <-- SHOW PARTIAL
  };

  this.iosOptions = {
    matches: 1,
    language: 'pt-BR',
    showPartial: true // <-- SHOW PARTIAL
  };

  if (this.platform.is('android')) {
    return this.speech.startListening(this.androidOptions);
  } else if (this.platform.is('ios')) {
    return this.speech.startListening(this.iosOptions);
  }
}

// Triggered manually by the user
async manualSpeech(event) {
  this.isRecording = true;

  this.listenForSpeech().subscribe(
    async matches => this.searchManual(matches),
    error => {
      console.log(error);
      this.isRecording = false;
    }
  );
}

// Search query
async searchManual(matches) {
  this.speechResult = matches[0];

  if (matches && matches.length > 0) {
    this.zone.run(() => {
      this.isRecording = false;
      console.log('Send Message'); // <-- HERE
    });
  }
}
```
I'm using the showPartial parameter. If I say "The books is on the table", the console.log('Send Message'); will run 6 times.
But I want this to happen only once, when the speech stops, on Android.
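For what it's worth, one common workaround (a suggestion, not something from this thread) is to keep showPartial: true but debounce the partial matches, so the handler fires once shortly after results stop arriving. The helper below and the 800 ms quiet window are assumptions:

```javascript
// Hypothetical debounce helper: calls onFinal once with the latest matches
// after no new partial results have arrived for `quietMs` milliseconds.
function makeDebouncedHandler(onFinal, quietMs = 800) {
  let timer = null;
  let latest = null;
  return (matches) => {
    latest = matches;
    clearTimeout(timer);
    timer = setTimeout(() => onFinal(latest), quietMs);
  };
}

// Usage sketch: wrap the subscribe callback.
// this.speech.startListening(options).subscribe(makeDebouncedHandler(m => this.searchManual(m)));
```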
| gharchive/issue | 2018-04-20T21:38:29 | 2025-04-01T06:45:21.163377 | {
"authors": [
"DeeSouza",
"texano00"
],
"repo": "pbakondy/cordova-plugin-speechrecognition",
"url": "https://github.com/pbakondy/cordova-plugin-speechrecognition/issues/72",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
46558940 | What are the volumes in there ?
Nowhere does it say where I can mount a data volume for elasticsearch with docker run -v.
Digging around, I found out elasticsearch puts its stuff in /data... but is that enough?
thanks :)
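For reference, a mount along these lines should cover that directory (the image name and host path here are assumptions, not taken from the project's docs):

```shell
# Persist Elasticsearch data by mounting a host directory over /data
docker run -d \
  -v /var/data/elasticsearch:/data \
  pblittle/docker-logstash
```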
@abourget I'm not sure how I overlooked this question. Were you able to figure this out?
Closing due to inactivity. @abourget please reopen if I can do anything to help.
| gharchive/issue | 2014-10-22T20:50:11 | 2025-04-01T06:45:21.175998 | {
"authors": [
"abourget",
"pblittle"
],
"repo": "pblittle/docker-logstash",
"url": "https://github.com/pblittle/docker-logstash/issues/31",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1904691836 | 🛑 N-Able is down
In 671cd73, N-Able (https://rescue.pbscompany.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: N-Able is back up in f4426a7 after 9 minutes.
| gharchive/issue | 2023-09-20T10:31:38 | 2025-04-01T06:45:21.178741 | {
"authors": [
"pbs-itsupport"
],
"repo": "pbs-itsupport/upptime",
"url": "https://github.com/pbs-itsupport/upptime/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
135617287 | Something like ACTION_DONE?
Sometimes we need to change state when the promise is done/completed (either fulfilled or rejected), such as re-enabling a button. It would be helpful and keep things DRY if this library could provide such an action :-)
A '_FINALLY' would make sense to me yeh, I'm not sure why it wasn't added in the Promise A+ spec but it had some nice use cases! It should be made clear that this action is the very last to be dispatched for a set.
I'd rather not include this in the middleware. First, because I want it to follow the specifications and, second, because this is a specific request and I don't think all users of the middleware would benefit from this added feature.
Instead of dispatching a final action for either fulfilled or rejected, you could certainly have a reducer that checks for either the fulfilled or rejected action. Let me know if you need an example or if you would like to discuss further. 🙂
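For example, a reducer can treat both terminal actions as "done" (a sketch; SAVE is a hypothetical action whose type names assume the middleware's default _PENDING/_FULFILLED/_REJECTED suffixes):

```javascript
// Re-enable a button when the promise settles, fulfilled or rejected.
function buttonReducer(state = { disabled: false }, action) {
  switch (action.type) {
    case 'SAVE_PENDING':
      return { disabled: true };
    case 'SAVE_FULFILLED':
    case 'SAVE_REJECTED': // both mean the promise has settled
      return { disabled: false };
    default:
      return state;
  }
}
```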
@tomatau Yes! That would be better.
@pburtchaell Thanks! I understand your consideration. Currently I wrote a bunch of exactly the same "FULFILLED" and "REJECTED" reducers, which is the reason I propose this NFR :-)
@SummerWish Perhaps you should checkout the discussion in issue #35.
| gharchive/issue | 2016-02-23T03:13:56 | 2025-04-01T06:45:21.182874 | {
"authors": [
"SummerWish",
"pburtchaell",
"tomatau"
],
"repo": "pburtchaell/redux-promise-middleware",
"url": "https://github.com/pburtchaell/redux-promise-middleware/issues/62",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
148682207 | Something went wrong TypeError: Cannot read property 'canvas' of undefined
While trying to load my custom level (http://pcottle.github.io/learnGitBranching/?gist_level_id=be6d08de5273c85e0e151769d6891079) the following error occurred:
Everything seemed to still work okay until I got past the dialog boxes and into the level itself:
There's an extra repository tree there that doesn't seem to do anything.
I'm running Chrome 50 on Windows 7. Here's a link to the source gist: https://gist.github.com/Ajedi32/be6d08de5273c85e0e151769d6891079
Weird. Did you see any errors in the JS console? When did you make the level?
Just tried it on another computer. Chrome 50 on Windows 10. Same error.
Browser console is showing:
Can't reproduce on Firefox 38.
Ah yeah I have a repro locally as well. I don't think it's related to the canvas JS error, its something silent (the JS console doesn't show anything serious). I'll look this weekend!
Fixed it! Yeah, it was a weird race condition with Raphael not initializing fully yet, which meant one of the JS calls fataled.
I'll push the site with the fix now, but your level seems to not be working (I tried the commands and once I got to the final state, it didn't register as fixed). Maybe that's just due to bad JSON being in your level config :P
Had to clear my browser cache to get it to work, but everything seems to working now. Thanks!
| gharchive/issue | 2016-04-15T14:55:20 | 2025-04-01T06:45:21.228092 | {
"authors": [
"Ajedi32",
"pcottle"
],
"repo": "pcottle/learnGitBranching",
"url": "https://github.com/pcottle/learnGitBranching/issues/371",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
58417127 | Support stopping playback
There should be a convenient API on Player to stop playback and resume from the start.
Maybe once #5 is implemented, this can be considered fixed as well? With a seeking API stopping should be quite easy.
| gharchive/issue | 2015-02-20T22:06:19 | 2025-04-01T06:45:21.237416 | {
"authors": [
"est31",
"pcwalton"
],
"repo": "pcwalton/rust-media",
"url": "https://github.com/pcwalton/rust-media/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1415694063 | Update scalafmt-core to 3.6.0
Updates org.scalameta:scalafmt-core from 2.7.5 to 3.6.0.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scalameta", artifactId = "scalafmt-core" } ]
Or, add this to slow down future updates of this dependency:
```
dependencyOverrides = [{
  pullRequests = { frequency = "@monthly" },
  dependency = { groupId = "org.scalameta", artifactId = "scalafmt-core" }
}]
```
labels: library-update, early-semver-major, semver-spec-major, commit-count:1
Superseded by #486.
| gharchive/pull-request | 2022-10-19T23:19:39 | 2025-04-01T06:45:21.244731 | {
"authors": [
"scala-steward"
],
"repo": "pdalpra/computer-database",
"url": "https://github.com/pdalpra/computer-database/pull/480",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
508220510 | Enhancement/Suggestion: Static Binary/Rudimentary Build System
@pdonadeo
This suggestion is built on top of #7 (External Configuration).
I'd like to write a Makefile that uses pyinstaller to create a static binary and "install" the binary as /usr/bin/rofi-web-search and the external config as ~/.config/rofi-web-search/config.json or as ~/.config/rofi-web-search/config.yaml
Users would be able to clone the repository and run make install to compile and install the binary/config on their system.
I would also add a requirements.txt file and have the Makefile automagically fetch any dependencies required to build the binary.
I'd be happy to take on this work as I'll have a couple of long, boring flights the next three days.
Nope, sorry :)
As stated in #7 I want to keep this script simple. An external configuration is useful because one could keep different settings on different machines/users. But a static binary is too much. Why then? Python 3 is installed by default... everywhere in the Linux world.
| gharchive/issue | 2019-10-17T04:04:13 | 2025-04-01T06:45:21.275428 | {
"authors": [
"gsornsen",
"pdonadeo"
],
"repo": "pdonadeo/rofi-web-search",
"url": "https://github.com/pdonadeo/rofi-web-search/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
131888738 | Drupal Core updates to 7.42 plus content clarifications
In this pull request, I have only included the changes that came out of the update to version 7.42 of core. I created a dump file as well as an export file, using phpMyAdmin, as performed previously. I have tested these changes by re-installing a fresh version, successfully.
If you don't object, I'll update all of the modules soon, but avoid updating the themes, as they break the theme flow. If you are interested, let me know whether you want each module update in a new pull request or all together (preferred by me).
Do you want me to continue assisting the project or is it too problematic?
On Mar 28, 2016, at 10:50, Chris Geib notifications@github.com wrote:
Closed #4.
Please feel free to contribute. We have a lot going on, but will try to get to contributions more quickly.
| gharchive/pull-request | 2016-02-06T19:02:43 | 2025-04-01T06:45:21.282194 | {
"authors": [
"Steve-A-Orr",
"cgeib"
],
"repo": "pdxlibrary/Library-DIY",
"url": "https://github.com/pdxlibrary/Library-DIY/pull/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
752628268 | Implemented FlatList with Example for Large data array
WRT issue #94, I have implemented FlatList and also added an example.
Note
Not tested on an iOS device
For FlatList, the key should be a string, but that can be overcome by passing a keyExtractor that returns a string
Thanks @mehimanshupatil 👍
I like that you kept the ScrollView rendering method intact for existing users. Is this ready for merge or is more testing needed?
Thanks @peacechen
Let's wait until this weekend. Since we are implementing FlatList, I will try to fix #93. Also, can you test this on an iOS device?
@peacechen you can merge it, but I have not tested it on iOS.
Would love this to get merged. I'm also making a country selector :)
@kg-currenxie
Thanks for the reminder, and apologies for the delay. Please test this before I publish. You can use this in your package.json to pull directly from the repo:
"dependencies": {
  "react-native-modal-selector": "https://github.com/peacechen/react-native-modal-selector.git"
}
Wow, thanks for doing this so fast!
Found a small bug
renderFlatlistOption = ({ item, index, separators }) => {
  if (item.section) {
    return this.renderSection(item);
  }
  const numItems = this.props.data.length;
  this.renderOption(item, index === (numItems - 1), index === 0); // <------- missing return
}
My FlatList showed an empty list :) I debugged it, and after adding the return it works; the modal opens instantly and the initial options (images) load much faster!
Oh, and also, a small TS type error, but I can't figure out where it comes from:
OK @kg-currenxie, thanks for testing. You can submit a PR for the same, or else I will submit one this coming weekend.
Thanks @kg-currenxie for noting those bugs. I pushed a commit to fix the return in renderFlatlistOption().
@mehimanshupatil Can you confirm that the TS definition fix that kg proposed fixes the other problem?
Fix published in 2.0.4
| gharchive/pull-request | 2020-11-28T08:46:47 | 2025-04-01T06:45:21.292624 | {
"authors": [
"kg-currenxie",
"mehimanshupatil",
"peacechen"
],
"repo": "peacechen/react-native-modal-selector",
"url": "https://github.com/peacechen/react-native-modal-selector/pull/157",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
359346403 | Easy Peatio Installation 100% . Use our Shell Script
Check Our Peatio Latest Updated Repo and you can use our shell script to install peatio easily
https://github.com/algobasket/PeatioCryptoExchange
For any help
Skype : algobasket
Email : algobasket@gmail.com
Beware: The software is vulnerable!
Vulnerable, beware people!
@sha422 Joined 3 hours ago ! :+1:
Scammer!!!!!!!!!!!!!!!!!!!!!!!!!!
He stole me 0.5 BTC!!!!
I have the conversation screens!
SCAM
| gharchive/issue | 2018-09-12T07:19:15 | 2025-04-01T06:45:21.302329 | {
"authors": [
"MatheusGrijo",
"algobasket",
"sha422",
"therealredvoid",
"wadaxofficial"
],
"repo": "peatio/peatio",
"url": "https://github.com/peatio/peatio/issues/799",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2549117297 | The project is not stable yet? what do you mean by that?
Greetings, friend. I am programming in React Native and I am starting to implement RootEncoder on Android to stream over RTMP. A question: when you say here that the iOS project is not yet stable, what do you mean by that? My intention is to use it for RTMP on iOS as well, with the ability to flip the camera while streaming, support for H.264 and AAC, and the RTMP basics. Thank you very much.
Hello,
I mean that it is not really well tested on multiple devices (I only tested with an iPhone 6s, an iPhone 11 and an iPad gen 6 because I don't have more devices); anyway, RTMP and RTSP should work fine on all models.
Also, the functionality is limited. For now, you only have the equivalent of RtmpCamera, RtspCamera, RtmpDisplay and RtspDisplay.
Greetings again, friend. A question: are the RTMP functions available on iOS? Can you stream from iOS? I didn't understand when you told me "Also, the functionality is limited. For now, you only have the equivalent to RtmpCamera". I need the same as on Android: the RtmpCamera functionality (for camera capture) and the standard streaming functions, StartPublish, StopPublish, Mute, Unmute, flip camera.
Hello,
All of that works without problems. The only thing missing for RtmpCamera is filters; right now I believe only GreyScale is available.
| gharchive/issue | 2024-09-25T22:40:55 | 2025-04-01T06:45:21.319057 | {
"authors": [
"luigbren",
"pedroSG94"
],
"repo": "pedroSG94/RootEncoder-iOS",
"url": "https://github.com/pedroSG94/RootEncoder-iOS/issues/44",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
435178736 | feat(docz-core): add params to support dev server in the cloud
Description
I do a fair amount of development in the cloud. The current version does not work in the cloud due to a localhost security fix in webpack. I looked at how Create React App solved the issue and more or less borrowed what they did.
This is no longer necessary since the new v2.
| gharchive/pull-request | 2019-04-19T13:32:07 | 2025-04-01T06:45:21.322679 | {
"authors": [
"bcbrian",
"pedronauck"
],
"repo": "pedronauck/docz",
"url": "https://github.com/pedronauck/docz/pull/819",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2082592 | Support for Gzip + 304 handling
I've just added a $compress variable to the response that defaults to false.
If you set it to true, output gets gzipped.
304 as per http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html should not include a body, so I've added a conditional to make sure the response code isn't a 304 before echoing the body.
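Tonic is PHP, but the same idea can be sketched in Python; the names below are illustrative, not the library's API. A 304 response gets no body, and gzip is applied only when compression is opted into:

```python
import gzip

def build_output(status, body, compress=False):
    """Return (headers, payload) for a response, honoring 304 and opt-in gzip."""
    headers = {}
    if status == 304:
        # RFC 2616 section 10.3.5: a 304 response MUST NOT include a body.
        return headers, b""
    payload = body.encode("utf-8")
    if compress:
        headers["Content-Encoding"] = "gzip"
        payload = gzip.compress(payload)
    return headers, payload

headers, payload = build_output(304, "ignored")
print(payload)  # b''
```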
I've tried this before and people have had issues with Apache complaining about getting a compressed response back. I think it's safer to leave response compression to mod_deflate.
No issues here, but my use case probably isn't the common case. It doesn't compress by default, only if you set $response->compress = true;
Sounds like you already gave it some thought though, so I'll leave it in my fork. Thanks for the reply!
| gharchive/issue | 2011-10-28T18:53:05 | 2025-04-01T06:45:21.325790 | {
"authors": [
"JackWink",
"peej"
],
"repo": "peej/tonic",
"url": "https://github.com/peej/tonic/issues/56",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1992453211 | Performance plot has odd title
pdstools version checks
[X] I have checked that this issue has not already been reported.
[X] I have confirmed this bug exists on the latest version of pdstools.
Issue description
plotPredictorPerformance puts "Predictor PerformanceBin " in the title. I think I know why, but it is not helping end users. Let's drop the "Predictor PerformanceBin" part and just keep the configuration name.
The plot code is so layered that it's not trivial for me to make the change myself.
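As a stopgap outside the library, a caller could strip the prefix from the generated figure title before display; clean_title below is a hypothetical helper, not pdstools API:

```python
PREFIX = "Predictor PerformanceBin "

def clean_title(title):
    """Drop the boilerplate prefix and keep only the configuration name."""
    return title[len(PREFIX):] if title.startswith(PREFIX) else title

print(clean_title("Predictor PerformanceBin MyConfiguration"))  # MyConfiguration
```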
Reproducible example
blah
Expected behavior
simple title
Installed versions
---Version info---
pdstools: 3.2.5
Platform: macOS-10.16-x86_64-i386-64bit
Python: 3.8.8 (default, Apr 13 2021, 12:59:45)
[Clang 10.0.0 ]
---Dependencies---
plotly: 5.16.1
requests: 2.25.1
pydot: 1.4.2
polars: 0.19.5
pyarrow: 8.0.0
tqdm: 4.59.0
pyyaml:
aioboto3: 11.2.0
---Streamlit app dependencies---
streamlit: 1.20.0
quarto: 0.1.0
papermill: 2.4.0
itables: 1.5.2
pandas: 1.5.3
jinja2: 3.1.2
xlsxwriter: 3.1.1
Same for this of course: plotPredictorCategoryPerformance
Resolved with:
https://github.com/pegasystems/pega-datascientist-tools/commit/bdc8422a4cbc12ef2e079935f84ce67d1c64c222
| gharchive/issue | 2023-11-14T10:31:37 | 2025-04-01T06:45:21.351975 | {
"authors": [
"operdeck",
"yusufuyanik1"
],
"repo": "pegasystems/pega-datascientist-tools",
"url": "https://github.com/pegasystems/pega-datascientist-tools/issues/167",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2188336776 | List View Builder not working
I have provided the code of my document below. Here I have to use a list view builder, but it's not working. Can you please suggest what I am doing wrong?
{
"type": "scaffold",
"args": {
"appBar": {
"type": "app_bar",
"args": {
"backgroundColor": "#F4511E",
"title": {
"type": "row",
"args": {
"children": [
{
"type": "icon",
"args": {
"icon": {
"codePoint": "0xe041",
"fontFamily": "MaterialIcons"
},
"color": "#FFFFFF",
"size": 30
}
},
{
"type": "text",
"args": {
"text": "Transaction History",
"style": {
"color": "#FFFFFF"
}
}
}
]
}
}
}
},
"body": {
"type": "set_value",
"args": {
"entries": {
"data": [
{
"title": "ABC",
"subtitle": "XYZ"
}
]
},
"template": {
"type": "container",
"args": {
"padding": 20
},
"child": {
"type": "list_tile",
"args": {
"subtitle": {
"type": "text",
"args": {
"text": "${value['subtitle']}"
}
},
"title": {
"type": "text",
"args": {
"text": "${value['title']}"
}
}
}
}
},
"child": {
"type": "padding",
"args": {
"padding": [
20,
20,
20,
20
],
"child": {
"type": "column",
"args": {
"children": [
{
"type": "form",
"args": {
"child": {
"type": "row",
"args": {
"children": [
{
"type": "expanded",
"args": {
"child": {
"type": "text_form_field",
"id": "from_date_field",
"args": {
"onTap": "${FromDateSelection('save_context')}",
"controller": "${FromDateTextEditingController}",
"decoration": {
"labelText": "From Date"
}
}
}
}
},
{
"type": "sized_box",
"args": {
"width": 16
}
},
{
"type": "expanded",
"args": {
"child": {
"type": "text_form_field",
"id": "to_date_field",
"args": {
"onTap": "${ToDateSelection('save_context')}",
"controller": "${ToDateTextEditingController}",
"decoration": {
"labelText": "To Date"
}
}
}
}
}
]
}
}
}
},
{
"type": "sized_box",
"args": {
"height": 16
}
},
{
"type": "elevated_button",
"id": "submit_button",
"args": {
"style": {
"backgroundColor": "#f4511e"
},
"child": {
"type": "text",
"args": {
"text": "Go",
"style": {
"color": "#FFFFFF",
"fontWeight": "bold",
"fontSize": 20
}
}
}
}
},
{
"type": "sized_box",
"args": {
"height": 16
}
},
{
"type": "text",
"args": {
"text": "You can search maximum of 60 days",
"style": {
"fontSize": 20,
"color": "#F4511E"
}
}
},
{
"type": "container",
"args": {
"height": 1,
"color": "#000000"
}
},
{
"type": "sized_box",
"args": {
"height": 16
}
},
{
"type": "text",
"args": {
"text": "${ResponseDesc}",
"style": {
"fontSize": 30,
"color": "#F4511E"
}
}
},
{
"type": "expanded",
"args": {
"child": {
"type": "list_view",
"children": "${for_each(entries['data'], 'template')}"
}
}
}
]
}
}
}
}
}
}
}
}
A few mistakes in it:
child / children need to be inside of args
set_value now needs to wrap the values in a values arg
Example fix
{
"type": "scaffold",
"args": {
"appBar": {
"type": "app_bar",
"args": {
"backgroundColor": "#F4511E",
"title": {
"type": "row",
"args": {
"children": [
{
"type": "icon",
"args": {
"icon": {
"codePoint": "0xe041",
"fontFamily": "MaterialIcons"
},
"color": "#FFFFFF",
"size": 30
}
},
{
"type": "text",
"args": {
"text": "Transaction History",
"style": {
"color": "#FFFFFF"
}
}
}
]
}
}
}
},
"body": {
"type": "set_value",
"args": {
"values": {
"entries": {
"data": [
{
"title": "ABC",
"subtitle": "XYZ"
}
]
},
"template": {
"type": "container",
"args": {
"height": 200,
"padding": 20,
"child": {
"type": "list_tile",
"args": {
"subtitle": {
"type": "text",
"args": {
"text": "${value['subtitle']}"
}
},
"title": {
"type": "text",
"args": {
"text": "${value['title']}"
}
}
}
}
}
}
},
"child": {
"type": "padding",
"args": {
"padding": [
20,
20,
20,
20
],
"child": {
"type": "column",
"args": {
"children": [
{
"type": "form",
"args": {
"child": {
"type": "row",
"args": {
"children": [
{
"type": "expanded",
"args": {
"child": {
"type": "text_form_field",
"id": "from_date_field",
"args": {
"onTap": "${FromDateSelection('save_context')}",
"controller": "${FromDateTextEditingController}",
"decoration": {
"labelText": "From Date"
}
}
}
}
},
{
"type": "sized_box",
"args": {
"width": 16
}
},
{
"type": "expanded",
"args": {
"child": {
"type": "text_form_field",
"id": "to_date_field",
"args": {
"onTap": "${ToDateSelection('save_context')}",
"controller": "${ToDateTextEditingController}",
"decoration": {
"labelText": "To Date"
}
}
}
}
}
]
}
}
}
},
{
"type": "sized_box",
"args": {
"height": 16
}
},
{
"type": "elevated_button",
"id": "submit_button",
"args": {
"style": {
"backgroundColor": "#f4511e"
},
"child": {
"type": "text",
"args": {
"text": "Go",
"style": {
"color": "#FFFFFF",
"fontWeight": "bold",
"fontSize": 20
}
}
}
}
},
{
"type": "sized_box",
"args": {
"height": 16
}
},
{
"type": "text",
"args": {
"text": "You can search maximum of 60 days",
"style": {
"fontSize": 20,
"color": "#F4511E"
}
}
},
{
"type": "container",
"args": {
"height": 1,
"color": "#000000"
}
},
{
"type": "sized_box",
"args": {
"height": 16
}
},
{
"type": "text",
"args": {
"text": "${ResponseDesc}",
"style": {
"fontSize": 30,
"color": "#F4511E"
}
}
},
{
"type": "expanded",
"args": {
"child": {
"type": "list_view",
"args": {
"children": "${for_each(entries['data'], 'template')}"
}
}
}
}
]
}
}
}
}
}
}
}
}
| gharchive/issue | 2024-03-15T11:49:10 | 2025-04-01T06:45:21.362906 | {
"authors": [
"crisperit",
"naveen715"
],
"repo": "peiffer-innovations/json_dynamic_widget",
"url": "https://github.com/peiffer-innovations/json_dynamic_widget/issues/264",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2447961932 | 🛑 Beyond.pl is down
In eb1fecd, Beyond.pl (http://www.beyond.pl) was down:
HTTP code: 403
Response time: 637 ms
Resolved: Beyond.pl is back up in e091d1e after 11 minutes.
| gharchive/issue | 2024-08-05T08:41:24 | 2025-04-01T06:45:21.369930 | {
"authors": [
"pejotes"
],
"repo": "pejotes/upptime",
"url": "https://github.com/pejotes/upptime/issues/3736",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2629770333 | 🛑 Beyond.pl is down
In 1f5db31, Beyond.pl (http://www.beyond.pl) was down:
HTTP code: 403
Response time: 517 ms
Resolved: Beyond.pl is back up in 950f0e8 after 22 minutes.
| gharchive/issue | 2024-11-01T19:53:14 | 2025-04-01T06:45:21.372982 | {
"authors": [
"pejotes"
],
"repo": "pejotes/upptime",
"url": "https://github.com/pejotes/upptime/issues/4036",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1719775855 | 🛑 GR-Server-Fibercorp is down
In b236f14, GR-Server-Fibercorp ($API_SEF) was down:
HTTP code: 0
Response time: 0 ms
Resolved: GR-Server-Fibercorp is back up in cf5ca53.
| gharchive/issue | 2023-05-22T14:06:04 | 2025-04-01T06:45:21.429433 | {
"authors": [
"pellit"
],
"repo": "pellit/control_uptime",
"url": "https://github.com/pellit/control_uptime/issues/369",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2637307836 | Figure out what to do with dates
Currently the grammar doesn't support dates, and all of the functionality around that would need to be implemented.
We need to make a decision on what to do with this - should we out-source it to cftime, or integrate it so that pyudunits2 is a one-stop-shop for unit handling.
Note this also came up in https://github.com/ioos/compliance-checker/pull/1118/files#diff-e7a4b63dd33717c4e657b158dab5b6b9db5178a5ed9dc608f118dcbc25dec494R328.
cc @ocefpaf for your thoughts.
I'm inclined to let cftime handle that.
A few comments to share my experience when trying to use cftime as an alternative. Note that I may be doing something wrong!
Workaround 1: I could not find a way to handle only "units" as defined by the COARDS definition, only full date conversion, which requires a calendar; cf_units could do that with no calendar needed. I'm also using a private method from cftime, which is not ideal.
xref.: https://github.com/ioos/compliance-checker/pull/1118/files#diff-4af926901344d50518fc4386b31673c7bfa2a64a0752088963b2b1596ec32be8R2048
Workaround 2: We used is_long_time_interval() from cf-units to check for "bad" "months since"/"years since" in the units. That method is deprecated in cf-units, so we need to update that check regardless.
xref.: https://github.com/ioos/compliance-checker/pull/1118/files#diff-88d3dfc7498015aa900cc3050d5e5a9004adbf8d5b3d732185f90bd97de75862L1868
Workaround 3: We make use of is_convertible_to and is_time_reference from cf-units. That makes me wonder if we should wrap cftime in pyudunits2 for a smoother user experience when moving from cf-units.
Workaround 4: cftime is strict about 60 secs; cf-units was not! In a way, that is wrong and we should fix our test.
xref.: https://github.com/ioos/compliance-checker/pull/1118/files#diff-bbc2203122ce6acdb68da4bf8c0493966a5532a17f80421ff805c447106fd325R1301
Hope I'm giving you something useful and not just noise 😬
I'm in the process of updating the pyudunits2 grammar to support calendar units properly. It is clear that in udunits2 this was implemented after the grammar was defined, since it is quite a long way from reality. Note that I had tests in place knowing that there was work to do for the example of timezone https://github.com/pelson/pyudunits2/blob/f75f1afd27d4b66ef77fc9db9301ae914111d288/pyudunits2/tests/_grammar/test_parse.py#L246. Past me also warned current me about taking this implementation on... https://github.com/pelson/pyudunits2/blob/f75f1afd27d4b66ef77fc9db9301ae914111d288/pyudunits2/_expr_graph.py#L150. Past me wasn't wrong about it being gnarly :joy: .
In udunits2 the following specification is seemingly valid (but wild): hours since 2000-01-01 23:59:59.60.2. If you parse this with cftime it looks like it gets transformed incorrectly:
>>> dt = cftime._dateparse('hours since 2000-01-01 23:59:59.60.2', calendar='standard')
>>> dt.isoformat()
'2000-01-01T23:59:59.600000'
https://github.com/ioos/compliance-checker/pull/1118/files#diff-bbc2203122ce6acdb68da4bf8c0493966a5532a17f80421ff805c447106fd325L1301
I see in the CF spec this is covered explicitly:
A date/time in the excluded range must not be used as a reference date/time e.g. seconds since 2016-12-31 23:59:60 is not a permitted value for units
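A minimal sketch (an assumed helper, not part of pyudunits2 or the CF checker) of rejecting reference date-times whose seconds field falls in that excluded leap-second range:

```python
import re

def reference_seconds_ok(units):
    """Return False if the 'since' reference time has seconds >= 60."""
    m = re.search(r"since\s+\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:(\d{2})", units)
    if m is None:
        return True  # no time-of-day component to validate
    return int(m.group(1)) < 60

print(reference_seconds_ok("seconds since 2016-12-31 23:59:60"))  # False
print(reference_seconds_ok("hours since 2000-01-01 00:00:00"))    # True
```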
I'm thinking that in pyudunits2 we should either:
have a custom date-time representation, but completely avoid having any interpretation of that
avoid parsing the part after since if it looks like a date-time, and stop short of doing anything more than giving you a string
Having thought about this a little, I think I will make a date-time-timezone object, and stop there. It will be sufficient to know that you have a date-like unit, and then easy enough to build a datetime from it subsequently.
So over the last week I put in a large number of hours working on proper date parsing in pyudunits2. Unfortunately, it required deep re-factoring of the parser and I didn't manage to successfully reach the goal (and found many udunits2 bugs en-route). Frustratingly, I scrapped all that work, and went back to basics - I have now added support for parsing dates with timezones (the original issue that came up), as well as adding the ability to know if a unit is "time-like". From there, it should be easy for me to add the ability to determine if a unit is date-time based. Furthermore, it should be easy enough to expose the raw string that was parsed in that date specification, so that we can hand it over to tools like cftime. Some thought on how to expose that in an interface is still needed.
In short: you should now be able to at least parse and recognise units with calendars/date-time info. It may be slightly problematic for you at this moment though, since now it parses but there is no easy way to determine if the thing is convertible (and I have no doubt that there will be some deep errors when you try to do things like convert date-time units to something else).
I have no doubt that there will be some deep errors when you try to do things like convert date-time units to something else
In compliance-checker we are abusing the .is_convertible() method to compare units. Maybe converting date units should not be in scope here, to keep it simple? Right now we would get inconsistent behavior in compliance-checker:
cf_units.Unit("seconds since 1980-01-19").is_convertible("hours since 1980-01-19")
True
cf_units.Unit("seconds since 1980-01-19").is_convertible("hours")
False
ut_system.unit("seconds since 1980-01-19").is_convertible_to(ut_system.unit("hours since 1980-01-19"))
True
ut_system.unit("seconds since 1980-01-19").is_convertible_to(ut_system.unit("hours"))
True
TL;DR IMO we should fix compliance-checker to check if the units are dates and not just "time."
Just hit a corner case when parsing ISO dates. Let me know if I should open new issues, hold off for now, or report them here.
Passes
cf_units.Unit("days since 1970-01-01T00:00:00 UTC")
Unit('days since 1970-01-01T00:00:00', calendar='standard')
# Fails
ut_system.unit("days since 1970-01-01T00:00:00 UTC")
File inline:1
'days since 1970-01-01T00:00:00 UTC'
^
SyntaxError: mismatched input ':' expecting <EOF>
Just hit a corner case when parsing ISO dates. Let me know if I should open new issues, hold off for now, or report them here.
A comment is fine for now. If you find it is backing up, open an issue (and feel free to clump multiple issues into one if helpful)
Note to self: Date units look a lot like reference points, but it is a mistake to use units for this purpose.
To give an example, you can't use a date as a reference for a non time unit:
meters since 2000-01-01
(i.e. this isn't a measure of the distance moved since a certain date)
However:
seconds since 2000-01-01
Feels a lot like a date serialisation format... yet principally the two are equivalent in what they represent (the number of units since the given date).
Because we know how many time units there are in a date (if we know the calendar), we can shift the date reference and apply it to the unit value:
$ udunits2 -H 'hours since 2000-01-01' -W 'minutes since 2000-01-01'
1 hours since 2000-01-01 = 60 (minutes since 2000-01-01)
x/(minutes since 2000-01-01) = 60*(x/(hours since 2000-01-01))
Yet the equally convertible form for meters is not supported:
$ udunits2 -H 'meters since 2000-01-01' -W 'kilometers since 2000-01-01'
udunits2: Don't recognize "meters since 2000-01-01"
Clearly, when it comes to understanding calendars we have to be able to associate the unit with the date:
$ udunits2 -H 'hours since 2000-01-01' -W 'hours since 2000-01-02'
1 hours since 2000-01-01 = -23 (hours since 2000-01-02)
x/(hours since 2000-01-02) = (x/(hours since 2000-01-01)) - 24
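For the Gregorian portion of the standard calendar, that reference shift can be reproduced with nothing but stdlib datetime (a sketch that ignores the non-standard calendars cftime handles):

```python
from datetime import datetime

def shift_reference(value_hours, old_ref, new_ref):
    """Re-express `value_hours` hours since old_ref as hours since new_ref."""
    offset_hours = (old_ref - new_ref).total_seconds() / 3600.0
    return value_hours + offset_hours

print(shift_reference(1, datetime(2000, 1, 1), datetime(2000, 1, 2)))  # -23.0
```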
But if you accidentally forget that this date isn't really a "reference point", you can easily get unexpected results. Imagine you had been measuring the growth of a plant from a particular date, and naively tried to shift the reference point (a nonsensical question without more information [just like date shifting without a calendar is nonsensical]):
$ udunits2 -H 'mm / (day since 2000-01-01)' -W 'mm / (day since 2000-01-02)'
1 mm / (day since 2000-01-01) = 1 (mm / (day since 2000-01-02))
x/(mm / (day since 2000-01-02)) = (x/(mm / (day since 2000-01-01)))
Woops... your growth rate is date invariant. This should really have been represented as a unit of mm / day with the reference point being metadata that belongs outside of the unit. For this reason, I'm planning to prohibit date units other than those which are declared at the top level (e.g. days since 2000-01-01). Anything like ms per day since 2000-01-01 will be rejected (as will meters since 2000-01-01). There will be a special date unit class to represent this, as the operations permitted are not the same as those on a normal unit.
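One way such a restricted type could look (a sketch of the design intent, not pyudunits2's actual implementation):

```python
class DateUnit:
    """A date-based unit, 'unit since reference_date', with arithmetic forbidden."""

    def __init__(self, unit, reference_date):
        self.unit = unit
        self.reference_date = reference_date

    def __mul__(self, other):
        raise TypeError("a date unit cannot be combined with other units")

    # Division and right-hand forms are just as nonsensical.
    __rmul__ = __truediv__ = __rtruediv__ = __mul__

du = DateUnit("day", "2000-01-01")
try:
    du * "mm"
except TypeError as exc:
    print(exc)  # a date unit cannot be combined with other units
```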
Note that there is an ambiguity in days since 20000101 which will be preserved, and may continue to produce nonsensical results:
$ udunits2 -H 'mm / day since 20000101' -W 'mm/ day since 20000201'
1 mm / day since 20000101 = -99 (mm/ day since 20000201)
x/(mm/ day since 20000201) = (x/(mm / day since 20000101)) - 100
In the future, I may try to prohibit this by introducing a unit concept of relative scale or delta units. I think that is a bigger topic that I won't try to fix here though.
Note to self: Date units look a lot like reference points, but it is a mistake to use units for this purpose.
OK. Point taken 😄
Note that, IMO, it is a mistake to differentiate those two time formats in that manner. I'm old and I'm from a time when "time-units since some-date" was just called COARDS time units, and treated as time units like the rest. I'll change the logic in compliance-checker to not rely on any is_time_reference or similar. I agree with you that you should not implement that in pyudunits2.
I agree with you that you should not implement that in pyudunits2.
Hold your horses :smile:. I have a DateUnit in the pipeline for precisely the purpose you suggested. Just note that you can't multiply DateUnit with another unit, for example. Should have more detail this week.
I have now exposed a DateUnit and the ability to access the reference_date on that type. For now, I can only expose the raw value, but I plan to turn the type into an interpreted DateTime in the next few days.
>>> import pyudunits2
>>> us = pyudunits2.UnitSystem.from_udunits2_xml()
>>> u = us.unit('s since 2000')
>>> type(u)
<class 'pyudunits2.DateUnit'>
>>> u.reference_date.raw_content
'2000'
>>> print(u.unit)
s
BTW, I believe we should be good for a first release
I'd like to bed down this date implementation (for example, until I wrote the above, you couldn't access unit on the DateUnit), then we can proceed with an alpha release. Let's aim for the beginning of next week.
| gharchive/issue | 2024-11-06T07:51:25 | 2025-04-01T06:45:21.452077 | {
"authors": [
"ocefpaf",
"pelson"
],
"repo": "pelson/pyudunits2",
"url": "https://github.com/pelson/pyudunits2/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1053191565 | 🛑 Dinas Perhubungan is down
In 6f9fbd7, Dinas Perhubungan (https://dishub.bekasikota.go.id) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Dinas Perhubungan is back up in 93dee0d.
| gharchive/issue | 2021-11-15T04:09:36 | 2025-04-01T06:45:21.455802 | {
"authors": [
"rahadiana"
],
"repo": "pemkotbekasi/website-status",
"url": "https://github.com/pemkotbekasi/website-status/issues/1829",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1054397062 | 🛑 Dinas Perdagangan dan Perindustrian is down
In 41ac8b3, Dinas Perdagangan dan Perindustrian (https://disdagperin.bekasikota.go.id) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Dinas Perdagangan dan Perindustrian is back up in 32a914f.
| gharchive/issue | 2021-11-16T03:19:40 | 2025-04-01T06:45:21.458356 | {
"authors": [
"rahadiana"
],
"repo": "pemkotbekasi/website-status",
"url": "https://github.com/pemkotbekasi/website-status/issues/2300",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1447263360 | 🛑 Kecamatan Bantar Gebang is down
In 8265bfc, Kecamatan Bantar Gebang (https://kec-bantargebang.bekasikota.go.id) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Kecamatan Bantar Gebang is back up in 3d34d15.
| gharchive/issue | 2022-11-14T02:24:49 | 2025-04-01T06:45:21.461157 | {
"authors": [
"rahadiana"
],
"repo": "pemkotbekasi/website-status",
"url": "https://github.com/pemkotbekasi/website-status/issues/2698",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1043015753 | Kb personal dev
Merge in to dev
👍👌
| gharchive/pull-request | 2021-11-03T02:37:15 | 2025-04-01T06:45:21.516840 | {
"authors": [
"Kevin-Busquets",
"penguincto"
],
"repo": "penguintop/penguin",
"url": "https://github.com/penguintop/penguin/pull/24",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
472645782 | 传输图片时候报 json解析异常,有什么好的建议解决吗?
网页那边把图片base64处理后传给我,但是json报错,替换json解析工具可以解决吗?还是本身js交互传递的参数长度有限?
2019-07-25 11:20:29.116 E/JsBridgeDebug: JBArgumentParser::parse Exception
2019-07-25 11:20:29.117 E/JsBridgeDebug: org.json.JSONException: Unterminated string at character 10240 of {"id":957,"module":"@static","method":"saveImg","parameters":[{"type":1,"name":"957_a0","value":"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAKkAAACtCAYAAAG8Q2+vAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccl.....
看着像你传递了base64图片数据,可能超过了字符串限制,但是json被截断了。建议一次传递的数据量不要超过100w字符
网页那边把图片base64处理后传给我,但是json报错,替换json解析工具可以解决吗?还是本身js交互传递的参数长度有限?
2019-07-25 11:20:29.116 E/JsBridgeDebug: JBArgumentParser::parse Exception
2019-07-25 11:20:29.117 E/JsBridgeDebug: org.json.JSONException: Unterminated string at character 10240 of {"id":957,"module":"@static","method":"saveImg","parameters":[{"type":1,"name":"957_a0","value":"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAKkAAACtCAYAAAG8Q2+vAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccl.....
要么图片稍微压缩一下,尽量在1M以内?
谢谢大佬,我发现应该是跟android系统有关系,貌似比较新的如9.0的系统,传输长字符会被截断,低版本的手机是没这个问题的。
| gharchive/issue | 2019-07-25T03:26:02 | 2025-04-01T06:45:21.534747 | {
"authors": [
"pengwei1024",
"xinboljy"
],
"repo": "pengwei1024/JsBridge",
"url": "https://github.com/pengwei1024/JsBridge/issues/24",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
510877334 | Add updated group photo
thank you alice
thank you alice
| gharchive/pull-request | 2019-10-22T20:05:18 | 2025-04-01T06:45:21.543020 | {
"authors": [
"Pwpon500",
"kirubarajan"
],
"repo": "pennlabs/pennlabs.org",
"url": "https://github.com/pennlabs/pennlabs.org/pull/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
196613023 | [ENGOPS-2667] Removing antcontrib targets (part II)
@lgrill-pentaho @lucboudreau
Build Completed
:fire: This pull request has some issues. It would be preferable to fix them in order for it to be just perfect. See below for more details. Some links are also available below for further assistance in addressing those issues.
Build Commands
ant -Dtestreports.dir=bin/reports/unit-test -f engine/build.xml clean-all resolve jacoco && ant -Dtestreports.dir=bin/reports/integration-test -f engine/build.xml jacoco-integration checkstyle publish-local
ant -Dtestreports.dir=bin/reports/unit-test -f dbdialog/build.xml clean-all resolve jacoco && ant -Dtestreports.dir=bin/reports/integration-test -f dbdialog/build.xml jacoco-integration checkstyle publish-local
Cleanup Commands
rm -r ~/.ivy2/local || echo no publish local to remove
Changed files
dbdialog/build.xml
engine/build.xml
Newly Fixed Tests:
org.pentaho.di.trans.steps.textfileinput.TextFileInputMetaLoadSaveTest.testSerialization:
:large_blue_circle: java.lang.RuntimeException
java.lang.RuntimeException: Error validating dateFormatLocale
at org.pentaho.di.trans.steps.loadsave.LoadSaveTester.validateLoadedMeta(LoadSaveTester.java:176)
at org.pentaho.di.trans.steps.loadsave.LoadSaveTester.testRepoRoundTrip(LoadSaveTester.java:248)
at org.pentaho.di.trans.steps.loadsave.LoadSaveTester.testSerialization(LoadSaveTester.java:183)
at org.pentaho.di.trans.steps.textfileinput.TextFileInputMetaLoadSaveTest.testSerialization(TextFileInputMetaLoadSaveTest.java:159)
Caused by:
Unit test coverage change
These statistics help you identify how your changes have affected the coverage of the following files. If a file is not in this list, then its coverage was not affected by your changes. To get some help interpreting these metrics, please refer to Jacoco's documentation.
org.pentaho.di.core.injection.bean.BeanInjector
Branch Change: -1.27%:small_red_triangle_down:
Complexity Change: -2.00%:small_red_triangle_down:
org.pentaho.di.job.entries.getpop.MailConnectionMeta
Branch Change: + 1.00%
Complexity Change: + 1.41%
Instruction Change: + .30%
org.pentaho.di.trans.steps.checksum.CheckSumMeta
Branch Change: -1.35%:small_red_triangle_down:
Complexity Change: -1.45%:small_red_triangle_down:
Instruction Change: -.21%:small_red_triangle_down:
org.pentaho.di.trans.steps.constant.ConstantMeta
Instruction Change: + .10%
org.pentaho.di.trans.steps.csvinput.CsvInputMeta
Instruction Change: + .05%
org.pentaho.di.trans.steps.dbproc.DBProcMeta
Instruction Change: -.11%:small_red_triangle_down:
org.pentaho.di.trans.steps.groupby.GroupByMeta
Branch Change: + 3.12%
Complexity Change: + 3.16%
Instruction Change: + .14%
Line Change: + .37%
org.pentaho.di.trans.steps.ifnull.IfNullMeta
Branch Change: -2.00%:small_red_triangle_down:
Complexity Change: -1.89%:small_red_triangle_down:
Instruction Change: -.17%:small_red_triangle_down:
org.pentaho.di.trans.steps.insertupdate.InsertUpdateMeta
Instruction Change: + .05%
org.pentaho.di.trans.steps.memgroupby.MemoryGroupByMeta
Branch Change: -4.71%:small_red_triangle_down:
Complexity Change: -5.41%:small_red_triangle_down:
Instruction Change: -.18%:small_red_triangle_down:
Line Change: -.48%:small_red_triangle_down:
org.pentaho.di.trans.steps.replacestring.ReplaceStringMeta
Instruction Change: + .07%
org.pentaho.di.trans.steps.sort.SortRowsMeta
Instruction Change: + .08%
org.pentaho.di.trans.steps.synchronizeaftermerge.SynchronizeAfterMergeMeta
Instruction Change: -.04%:small_red_triangle_down:
org.pentaho.di.trans.steps.systemdata.SystemDataMeta
Branch Change: + 2.04%
Complexity Change: + 2.04%
Instruction Change: + .32%
Line Change: + .81%
org.pentaho.di.trans.steps.tableoutput.TableOutputMeta
Branch Change: + .91%
Complexity Change: + .82%
org.pentaho.di.trans.steps.webservices.WebServiceMeta
Branch Change: + 1.79%
Complexity Change: + 1.19%
Instruction Change: + .07%
org.pentaho.di.www.TransformationMap
Branch Change: -1.14%:small_red_triangle_down:
Complexity Change: -1.54%:small_red_triangle_down:
Integration test coverage change
These statistics help you identify how your changes have affected the coverage of the following files. If a file is not in this list, then its coverage was not affected by your changes. To get some help interpreting these metrics, please refer to Jacoco's documentation.
org.pentaho.di.trans.step.BaseStep
Complexity Change: + .18%
Instruction Change: + .20%
Line Change: + .38%
Method Change: + .49%
| gharchive/pull-request | 2016-12-20T08:28:11 | 2025-04-01T06:45:21.599391 | {
"authors": [
"abaskakau",
"wingman-pentaho"
],
"repo": "pentaho/pentaho-kettle",
"url": "https://github.com/pentaho/pentaho-kettle/pull/3265",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
269706384 | [BACKLOG-19684] Update the PDI respository login screen
@bmorrise
8.0 PR: https://github.com/pentaho/pentaho-kettle/pull/4565
8.0.0.0 PR: https://github.com/pentaho/pentaho-kettle/pull/4564
Build Completed
:white_check_mark: This pull request has passed all validations.
Build Commands
mvn -Dsurefire.runOrder=alphabetical -B -fn -f 'pom.xml' -pl 'plugins/repositories/core' -P '!assemblies' -amd clean install && mvn -B -f 'pom.xml' -pl 'plugins/repositories/core' -P '!assemblies' -amd site
Cleanup Commands
mvn -B -f 'pom.xml' -pl 'plugins/repositories/core' -P '!assemblies' -amd build-helper:remove-project-artifact
Changed files
plugins/repositories/core/src/main/resources-filtered/web/pentaho-repository-connect.html
plugins/repositories/core/src/main/resources/web/css/style.css
plugins/repositories/core/src/main/resources/web/lang/messages_en.properties
| gharchive/pull-request | 2017-10-30T18:28:25 | 2025-04-01T06:45:21.602729 | {
"authors": [
"ddiroma",
"wingman-pentaho"
],
"repo": "pentaho/pentaho-kettle",
"url": "https://github.com/pentaho/pentaho-kettle/pull/4563",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2326119401 | [BACKLOG-40936] - Fix for jdk-17 unit test for random-number-cc-gener…
…ator plugin
Note:
Frogbot also supports Contextual Analysis, Secret Detection, IaC and SAST Vulnerabilities Scanning. These features are included as part of the JFrog Advanced Security package, which isn't enabled on your system.
🐸 JFrog Frogbot
:white_check_mark: Build finished in 15m 50s
Build command:
mvn clean verify -B -e -Daudit -Djs.no.sandbox -pl \
plugins/random-cc-number-generator/impl
:ok_hand: All tests passed!
Tests run: 1, Failures: 0, Skipped: 0 Test Results
:information_source: This is an automatic message
Note:
Frogbot also supports Contextual Analysis, Secret Detection, IaC and SAST Vulnerabilities Scanning. These features are included as part of the JFrog Advanced Security package, which isn't enabled on your system.
🐸 JFrog Frogbot
:white_check_mark: Build finished in 13m 18s
Build command:
mvn clean verify -B -e -Daudit -Djs.no.sandbox -pl \
plugins/postgresql-db-bulk-loader/impl,plugins/random-cc-number-generator/impl
:ok_hand: All tests passed!
Tests run: 12, Failures: 0, Skipped: 0 Test Results
:information_source: This is an automatic message
| gharchive/pull-request | 2024-05-30T16:48:21 | 2025-04-01T06:45:21.612951 | {
"authors": [
"buildguy",
"wseyler"
],
"repo": "pentaho/pentaho-kettle",
"url": "https://github.com/pentaho/pentaho-kettle/pull/9388",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
642858080 | Synchronous closing for AiopgConnector
Closes #
Successful PR Checklist:
[ ] Tests
[ ] (not applicable?)
[ ] Documentation
[x] (not applicable?)
[x] Had a good time contributing?
[x] (Maintainers: add PR labels)
I now get a warning when conn.close is called:
$ python -m asyncio
asyncio REPL 3.8.2 (default, Apr 8 2020, 14:31:25)
[GCC 9.3.0] on linux
Use "await" directly instead of "asyncio.run()".
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
>>> from procrastinate import AiopgConnector
>>> conn = AiopgConnector(dsn="")
>>> await conn.execute_query_async("select 1")
>>> conn.close()
/home/elemoine/.virtualenvs/procrastinate/lib/python3.8/site-packages/aiopg/pool.py:308: ResourceWarning: Unclosed 1 connections in <aiopg.pool.Pool object at 0x7f71b889de50>
warnings.warn(
And the same warning when deleting conn:
>>> conn = AiopgConnector(dsn="")
>>> await conn.execute_query_async("select 1")
>>> del conn
/home/elemoine/.virtualenvs/procrastinate/lib/python3.8/site-packages/aiopg/pool.py:308: ResourceWarning: Unclosed 1 connections in <aiopg.pool.Pool object at 0x7f71b88b10a0>
warnings.warn(
| gharchive/pull-request | 2020-06-22T08:16:17 | 2025-04-01T06:45:21.654638 | {
"authors": [
"elemoine",
"ewjoachim"
],
"repo": "peopledoc/procrastinate",
"url": "https://github.com/peopledoc/procrastinate/pull/263",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1527021751 | Call screenshot endpoint for AA
Optimization to call screenshot endpoint and upload screenshots from terminal for AppAutomate
Closing since no longer required
| gharchive/pull-request | 2023-01-10T09:09:07 | 2025-04-01T06:45:21.686578 | {
"authors": [
"pankaj443"
],
"repo": "percy/percy-appium-java",
"url": "https://github.com/percy/percy-appium-java/pull/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
159807089 | PUHLEEZE keep the "File System" database option in place.
I fundamentally rely on local files for KeePass on my home Linux workstation, work Windows laptop, Android phone, and now Chromebook. I greatly prefer the minor inconvenience of manual file sync v. keeping my password DB on Google drive, dropbox, onedrive, or any other such globally accessible file store. :)
It looks like it's staying: https://github.com/perfectapi/CKP/issues/97
Might pay to change the wording around it so there's less confusion.
Closing this issue. I "undid" the deprecation and changed the wording.
| gharchive/issue | 2016-06-12T04:21:01 | 2025-04-01T06:45:21.692966 | {
"authors": [
"DukeyToo",
"dnlombard",
"thelollies"
],
"repo": "perfectapi/CKP",
"url": "https://github.com/perfectapi/CKP/issues/102",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1595646241 | request a new release
👋 it has been more than two years since last release.
Now it's been almost three years! :)
But yes, it's probably time. I'm taking a break from work right now and have been working on Perkeep again so I'll try to squeeze in time for a release.
@bradfitz any update on this? It would help to bring perkeep formula back to life
https://github.com/Homebrew/homebrew-core/pull/162430
| gharchive/issue | 2023-02-22T18:53:42 | 2025-04-01T06:45:21.699688 | {
"authors": [
"bradfitz",
"chenrui333"
],
"repo": "perkeep/perkeep",
"url": "https://github.com/perkeep/perkeep/issues/1656",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
337990115 | Revise the lazy lists section
The problem
The main problem with this section is nomenclature. Repeatedly this section refers to lazy lists, while most of the examples are actually lazy arrays. Anything which mixes in the Iterable role can be lazy, in fact, and that's not really well explained either. Also, it's indexed as Lazy (property of List), when it's actually a method of Iterable, and is-lazy is a property of Iterator, not Iterable.
There's also the error of referring to the role Iterable instead of the role Iterator, the one that actually implements all the lazy stuff.
Suggestions
Stop using the generic list and use specific denomination. Fix reference errors.
to lazy lists, while most of the examples are actually lazy arrays
FWIW, lowercased list refers to any Positional type, while List refers specifically to the List type.
| gharchive/issue | 2018-07-03T17:26:02 | 2025-04-01T06:45:21.709354 | {
"authors": [
"JJ",
"zoffixznet"
],
"repo": "perl6/doc",
"url": "https://github.com/perl6/doc/issues/2139",
"license": "Artistic-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
174950893 | Coercion types are now allowed as return types.
Since Rakudo commit b508576fc51cfa128a84ed4f302528a3f78bab03
Tests are here.
https://github.com/perl6/roast/blob/master/S06-signature/definite-return.t#L135-L209
Where do you think this should go? Could it get along well with #1225?
| gharchive/issue | 2016-09-04T14:14:30 | 2025-04-01T06:45:21.711238 | {
"authors": [
"JJ",
"LemonBoy",
"titsuki"
],
"repo": "perl6/doc",
"url": "https://github.com/perl6/doc/issues/884",
"license": "Artistic-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
369836661 | Add some information to the input section
👍 Thanks!
| gharchive/pull-request | 2018-10-13T20:09:46 | 2025-04-01T06:45:21.712106 | {
"authors": [
"uzluisf",
"zoffixznet"
],
"repo": "perl6/doc",
"url": "https://github.com/perl6/doc/pull/2381",
"license": "Artistic-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1770240492 | [Nuxt3] getSync2 is not a function
import { ray } from 'node-ray/web'
ray('something')
getSync2 is not a function
at Proxy.getOriginData (./node_modules/node-ray/dist/web.esm.mjs:2220:24)
at ./node_modules/node-ray/dist/web.esm.mjs:2258:34
at Array.forEach ()
at Proxy.sendRequest (./node_modules/node-ray/dist/web.esm.mjs:2257:14)
at Proxy.send (./node_modules/node-ray/dist/web.esm.mjs:2128:17)
at __vite_ssr_import_0__.watch.immediate (./composables/useRay.ts:10:43)
at callWithErrorHandling (./node_modules/@vue/runtime-core/dist/runtime-core.cjs.js:156:18)
at callWithAsyncErrorHandling (./node_modules/@vue/runtime-core/dist/runtime-core.cjs.js:164:17)
at doWatch (./node_modules/@vue/runtime-core/dist/runtime-core.cjs.js:1726:7)
Hi @DJafari.
How did you solve it?
@slavarazum
you must call ray on the client side only; you can disable SSR to check this
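A minimal sketch of that client-side-only guard (`safeRay` is a hypothetical wrapper; in Nuxt 3 you could also check `process.client` or `import.meta.client`):

```javascript
// Forward to ray() only when running in a browser; during server-side
// rendering (no window object) the call becomes a no-op.
const isClient = typeof window !== 'undefined';

function safeRay(...args) {
  if (!isClient) return;
  // ray(...args); // assumed: import { ray } from 'node-ray/web'
}

safeRay('something'); // safe to call on both server and client
```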
In my case I use the Node version of node-ray.
Trying to run it in a testing environment.
@slavarazum so i think you must open new issue
| gharchive/issue | 2023-06-22T19:02:47 | 2025-04-01T06:45:21.725687 | {
"authors": [
"DJafari",
"slavarazum"
],
"repo": "permafrost-dev/node-ray",
"url": "https://github.com/permafrost-dev/node-ray/issues/190",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1600414210 | POC: Julie investigating things around UI reorging
Checklist
[ ] Pull request has a descriptive title and context useful to a reviewer.
[ ] Pull request title follows the [<catalog_entry>] <commit message> naming convention using one of the following catalog_entry values: FEATURE, ENHANCEMENT, BUGFIX, BREAKINGCHANGE, IGNORE.
[ ] All commits have DCO signoffs.
[ ] Changes that impact the UI include screenshots and/or screencasts of the relevant changes.
@juliepagano Is this a PR we could close for the time being while it's in draft mode? (It's OK if the answer is "no, it helps to have it open")
| gharchive/pull-request | 2023-02-27T03:52:54 | 2025-04-01T06:45:21.730461 | {
"authors": [
"cndonovan",
"juliepagano"
],
"repo": "perses/perses",
"url": "https://github.com/perses/perses/pull/986",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1247476360 | Remark presentation: Touch gestures conflict with navigation buttons
Summary
Attempting to click the next/previous buttons in presentation view with a touchscreen often causes additional slide transitions.
Remark has built-in touchscreen gesture support, which is causing the issue.
Steps to reproduce
Open the example slide deck
Tap the go to next slide button twice with a touchscreen.
Swipe left on that button.
Expected behavior
Presentation should advance two slides, swiping left on the button should have no effect.
Observed behavior
The presentation often doesn't go to the next slide.
Version information
hematite 0.1.9
This has been fixed.
| gharchive/issue | 2022-05-25T04:57:53 | 2025-04-01T06:45:21.742955 | {
"authors": [
"personalizedrefrigerator"
],
"repo": "personalizedrefrigerator/jekyll-hematite-theme",
"url": "https://github.com/personalizedrefrigerator/jekyll-hematite-theme/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
270414228 | Don't install headers
I believe headers don't need to be installed.
Correct, we should not be installing anything other than the .so file. This should be changed as far upstream as possible (e.g. master if needed).
| gharchive/pull-request | 2017-11-01T18:57:38 | 2025-04-01T06:45:21.744016 | {
"authors": [
"jslee02",
"psigen"
],
"repo": "personalrobotics/dartpy",
"url": "https://github.com/personalrobotics/dartpy/pull/41",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
389610235 | Old style URL used
Example
https://github.com/personium/app-cc-home/blob/master/html/js/common.js#L595
Suggestion
Like in Unit Manager, all URLs should be handled in a systematic way.
Applied hotfix to demo-fi.
| gharchive/issue | 2018-12-11T06:07:58 | 2025-04-01T06:45:21.745399 | {
"authors": [
"dixonsiu"
],
"repo": "personium/app-cc-home",
"url": "https://github.com/personium/app-cc-home/issues/209",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
357024316 | Demo not working in Chromium
It seems that the demo still doesn't work in Chromium Version 68.0.3440.106 (Official Build) Built on Ubuntu, running on LinuxMint 19 (64-bit)
That may be because of MP4s used... There's no webm fallback.
Similar issue in FireFox and Chrome on Windows 10. I had to visit the video before it began running in Chrome and FireFox spits this out in the console, "Autoplay is only allowed when approved by the user, the site is activated by the user, or media is muted."
All major browsers should support MP4 at this point, I have not needed to use fallbacks on my own projects. Even works in Edge.
Anyways, try muting the video and see if that helps. You can also remove the audio track using a freeware audio tool such as Resolve. Even if no audio is present, there is still an audio track in a lot of cases.
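For reference, the muted-autoplay workaround mentioned above can look like this (the src value is just a placeholder):

```html
<!-- Browsers generally allow autoplay only when the media is muted;
     playsinline additionally helps on iOS Safari -->
<video src="background.mp4" autoplay muted loop playsinline></video>
```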
| gharchive/issue | 2018-09-05T00:39:32 | 2025-04-01T06:45:21.749099 | {
"authors": [
"FrankFlitton",
"Just-Johnny",
"ergonomicus"
],
"repo": "pespantelis/vue-videobg",
"url": "https://github.com/pespantelis/vue-videobg/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
530213463 | prepare 0.1.2 release
add changelog
add badges for docs & pypi
bump version
force networkx >= 2.4
thanks @peteboyd, just released 0.1.2 on pypi
| gharchive/pull-request | 2019-11-29T08:19:56 | 2025-04-01T06:45:21.783797 | {
"authors": [
"ltalirz"
],
"repo": "peteboyd/lammps_interface",
"url": "https://github.com/peteboyd/lammps_interface/pull/26",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
65323640 | Update README.md
Update 6.1 Rails.
Coverage increased (+0.04%) to 97.57% when pulling 06f3f98a385cd75ff26d4bbdc33da21c615173cd on florentferry:patch-1 into e06ea23b9e359697812ad47c050d58d199e9c149 on peter-murach:master.
Coverage increased (+0.04%) to 97.57% when pulling 06f3f98a385cd75ff26d4bbdc33da21c615173cd on florentferry:patch-1 into e06ea23b9e359697812ad47c050d58d199e9c149 on peter-murach:master.
Coverage increased (+0.04%) to 97.57% when pulling 06f3f98a385cd75ff26d4bbdc33da21c615173cd on florentferry:patch-1 into e06ea23b9e359697812ad47c050d58d199e9c149 on peter-murach:master.
Coverage increased (+0.04%) to 97.57% when pulling 06f3f98a385cd75ff26d4bbdc33da21c615173cd on florentferry:patch-1 into e06ea23b9e359697812ad47c050d58d199e9c149 on peter-murach:master.
Coverage increased (+0.04%) to 97.57% when pulling 06f3f98a385cd75ff26d4bbdc33da21c615173cd on florentferry:patch-1 into e06ea23b9e359697812ad47c050d58d199e9c149 on peter-murach:master.
Thanks!
| gharchive/pull-request | 2015-03-30T23:01:57 | 2025-04-01T06:45:21.804176 | {
"authors": [
"coveralls",
"florentferry",
"peter-murach"
],
"repo": "peter-murach/github",
"url": "https://github.com/peter-murach/github/pull/224",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1709898011 | [Security] Usage example is vulnerable
The usage example is susceptible to script injection
How to reproduce:
demo-repository-owner<<
octodemo
demo-repository-name<<
pm-automation-test-001
template<<
octodemo/template-demo-github-user-search";ls /;echo "
This was resolved with best practice examples for avoiding injection attacks when showing the data. It is expected that users would treat this as untrusted data but the examples definitely needed updating. Thank you for reporting 🙇
The use of echo '${{ env.parsed_data }}' is still vulnerable, as ${{ }} does string replacement before execution.
You can check how to fix on https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#understanding-the-risk-of-script-injections
Or the mitigations chapter on https://cycode.com/blog/github-actions-vulnerabilities/
Basically, once you have declared env, as you already did, reference the values as environment variables ($XXX) in the script instead of using string replacement (${{XXX}})
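A hypothetical workflow step illustrating the safe pattern (assuming parsed_data was already declared under env, as above):

```yaml
# The shell reads the value from its environment at run time, so GitHub
# never expands the untrusted content into the script text itself.
- name: Show parsed data safely
  run: echo "$parsed_data"
```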
| gharchive/issue | 2023-05-15T11:29:49 | 2025-04-01T06:45:21.808398 | {
"authors": [
"peter-murray",
"tr4l"
],
"repo": "peter-murray/issue-forms-body-parser",
"url": "https://github.com/peter-murray/issue-forms-body-parser/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
411189283 | optimize "over" and "mod" for "NativeBigInt"
This can be interesting for the "benchmark" to compare other libs with the native bigints. As the results are very close when your lib is using natives.
And only Division is two times slower, this change should fix it.
Coverage decreased (-0.2%) to 95.127% when pulling 38206db40e7904003d92a2e105bf9acbcc9956c7 on Yaffle:master into 4a5b73a2d6a9a7a8aa78e4a095cc55e4d498f691 on peterolson:master.
| gharchive/pull-request | 2019-02-17T13:45:54 | 2025-04-01T06:45:21.815628 | {
"authors": [
"Yaffle",
"coveralls"
],
"repo": "peterolson/BigInteger.js",
"url": "https://github.com/peterolson/BigInteger.js/pull/171",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
555300516 | Flags enum doesn't pass the IsDefined Check
A Flags enum will never pass an IsDefined check.
The problem is in PeterO.Cbor.PropertyMap.ObjectToEnum
Stacktrace
PeterO.Cbor.CBORException: Unrecognized enum value: 4161
at PeterO.Cbor.PropertyMap.ObjectToEnum(CBORObject obj, Type enumType)
at PeterO.Cbor.PropertyMap.TypeToObject(CBORObject objThis, Type t, CBORTypeMapper mapper, PODOptions options, Int32 depth)
at PeterO.Cbor.CBORObject.ToObject(Type t, CBORTypeMapper mapper, PODOptions options, Int32 depth)
at PeterO.Cbor.PropertyMap.ObjectWithProperties(Type t, IEnumerable`1 keysValues, CBORTypeMapper mapper, PODOptions options, Int32 depth)
This could be a way to fix it.
edit: cc: @peteroupc
Fixed in 491aa10.
| gharchive/issue | 2020-01-26T22:54:00 | 2025-04-01T06:45:21.817621 | {
"authors": [
"NickAcPT",
"peteroupc"
],
"repo": "peteroupc/CBOR",
"url": "https://github.com/peteroupc/CBOR/issues/42",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
868395150 | Stealth mode - Hide what this app is until you login to it.
I'd rather not have a public-facing site that makes it obvious what this app does...
So I want the ability to hide any references to 'Petio' on the login/auth screen. I also want to disable any media backdrop images on the login screen. Basically if you go to my petio url and you aren't logged in, you would have no idea the app is petio and it's for requesting media...
Customization is planned down the road.
Yes this will be addressed with customisation of the logo / meta title. As for media backdrops, we don't use media backdrops. Closing this issue as it is addressed in the planned customisation coming in later versions
| gharchive/issue | 2021-04-27T03:03:22 | 2025-04-01T06:45:21.825059 | {
"authors": [
"AshDyson",
"angrycuban13",
"beyondmeat"
],
"repo": "petio-team/petio",
"url": "https://github.com/petio-team/petio/issues/377",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1420802312 | BC break in 1.4 because of extension class renaming
#18 broke BC by renaming the extension GpsMessengerExtension to PetitPressGpsMessengerExtension.
This should be either documented as such, or better, reverted, maybe reintroduced in 2.x after proper deprecation.
That class is marked as final and should not be used outside of this bundle. Can you please give us more information about the issue? Why do you think it is BC break? Do you see some errors?
The change was needed as the PR adds new configuration option and I wanted to make this bundle more compatible with Symfony bundle best practices without renaming the main bundle class.
@HeahDude Thank you for your MR. This bundle has already version 2 available. I think we can close this issue.
| gharchive/issue | 2022-10-24T13:07:19 | 2025-04-01T06:45:21.827467 | {
"authors": [
"HeahDude",
"pulzarraider"
],
"repo": "petitpress/gps-messenger-bundle",
"url": "https://github.com/petitpress/gps-messenger-bundle/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
236013000 | Spread with jQuery XHR as argument transforms it into data
(This issue tracker is only for bug reports or feature requests, if this is neither, please choose appropriate channel from http://bluebirdjs.com/docs/support.html)
Please answer the questions the best you can:
What version of bluebird is the issue happening on?
Bluebird 3.5.0
What platform and version? (For example Node.js 0.12 or Google Chrome 32)
Google Chrome 58.0.3029.110 on macOS 10.12.4
Did this issue happen with earlier version of bluebird?
n/a
(Write description of your issue here, stack traces from errors and code that reproduces the issue are helpful)
Given the following code:
var Promise = require('bluebird');
var $ = require('jquery');
// Wraps a jQuery ajax call in a Bluebird promise
function myAjax(ajaxOptions, behavioralOptions) {
return new Promise(function (_resolve, _reject, _onCancel) {
var $ajax = $.ajax(ajaxOptions).then(function (data, textStatus, jqXHR) {
if (behavioralOptions.asArray) {
// optional because Bluebird only pays attention to the first argument of _resolve
_resolve([data, textStatus, jqXHR]);
} else {
_resolve(data);
}
}, function (jqXHR) {
_reject(jqXHR);
});
if ($.isFunction(_onCancel)) {
_onCancel(function () {
$ajax.abort();
});
}
});
}
when you call
myAjax({url: 'http://www.example.com'}, {asArray: true}).spread(function (data, textStatus, jqXHR) {
console.log(data, textStatus, jqXHR);
if (jqXHR === data) {
console.log('wtf!?');
}
});
jqXHR is exactly data.
my understanding is that .spread implicitly calls .all, which sees jqXHR as a thenable object and converts its .then(data, textStatus, jqXHR) response, dropping the other items and collapsing it back to just data.
this is unintuitive behavior.
i would like to see .spread decoupled from .all or at least an alternative helper method provided that does not call .all magically.
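That unwrapping can be reproduced without jQuery. A minimal sketch, using a hypothetical thenable in place of jqXHR:

```javascript
// Stand-in for jQuery's jqXHR: a "thenable" whose then() callback
// receives several arguments, like (data, textStatus, jqXHR).
const fakeJqXHR = {
  then(onFulfilled) {
    onFulfilled('the-data', 'success', fakeJqXHR);
  },
};

// Promise.all (which .spread calls implicitly) treats any thenable as a
// promise and unwraps it, keeping only the FIRST callback argument.
Promise.all(['plain-value', fakeJqXHR]).then(([plain, unwrapped]) => {
  console.log(plain);     // → 'plain-value'
  console.log(unwrapped); // → 'the-data' (the jqXHR wrapper is gone)
});
```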
That's how spread works and I don't think we're going to break that after over 4 years (not to mention compatibility with Q).
You can however use regular JavaScript spread which behaves the way you'd like:
myAjax({url: 'http://www.example.com'}, {asArray: true}).spread(([data, textStatus, jqXHR]) =>
I think it's more of a feature request than a bug, like having a splat that doesn't do an implicit all
| gharchive/issue | 2017-06-14T21:18:54 | 2025-04-01T06:45:21.833304 | {
"authors": [
"benjamingr",
"bisrael"
],
"repo": "petkaantonov/bluebird",
"url": "https://github.com/petkaantonov/bluebird/issues/1406",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2176936199 | T100 Motion Sensor
I can't seem to figure out how to add my Tapo T100 Motion Sensor. I added my hub and it comes up, but I can't add the motion sensor and it does not come up through the hub. How should I add this?
Thank You
I seem to be in the same boat as MatrixAustin. I have added the sensor to the app, waited a few hours and tried adding it to my HA. Add Device only seems to want to add a new hub, which I already have working. Rummaging around in the system options, I have enabled newly added entities and have clicked update, but alas no T100 is showing.
Thanks petretiandrea. I tried that and it didn't seem to work for me. I did find that rebooting HA worked though. Yay!
| gharchive/issue | 2024-03-08T23:55:43 | 2025-04-01T06:45:21.836599 | {
"authors": [
"MatrixAustin",
"MontyPud"
],
"repo": "petretiandrea/home-assistant-tapo-p100",
"url": "https://github.com/petretiandrea/home-assistant-tapo-p100/issues/722",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
751136265 | CRITICAL: CVE 2020-4269 vulnerability fix MERGE IMMEDIATELY
Do you have a bug bounty program?
https://english.stackexchange.com/questions/116800/one-letter-word-at-the-end-of-line-of-text/116802#116802
see u in court, sweaty :)
| gharchive/pull-request | 2020-11-25T21:47:47 | 2025-04-01T06:45:21.839599 | {
"authors": [
"JanPokorny",
"petrroll"
],
"repo": "petrroll/msc-thesis",
"url": "https://github.com/petrroll/msc-thesis/pull/1",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1890419113 | Renault - The number of jobs posted on Peviitor (143) and the number posted on the company website (140) aren't the same.
ENVIROMENT: production
URL: https://peviitor.ro/
Browser: Chrome
Device: laptop
OS: Windows 10
STEPS TO REPRODUCE:
Open "https://peviitor.ro/" in the browser.
Type "Renault" in the search bar.
Check the number of jobs available for Renault.
Open "https://alliancewd.wd3.myworkdayjobs.com/ro-RO/renault-group-careers?locationCountry=f2e609fe92974a55a05fc1cdc2852122&workerSubType=62e55b3e447c01871e63baa4ca0f9391&workerSubType=62e55b3e447c01140817bba4ca0f9891&workerSubType=62e55b3e447c01d10acebaa4ca0f9691".
Check the number of jobs displayed.
EXPECTED RESULTS:
The number of jobs posted on "https://peviitor.ro/" and the number of jobs posted on the company website should be the same.
ACTUAL RESULT:
There are 143 jobs posted on the Peviitor website, while the company website lists 140 available jobs.
Fixed!
| gharchive/issue | 2023-09-11T12:38:27 | 2025-04-01T06:45:21.845522 | {
"authors": [
"Ale230992",
"lalalaurentiu"
],
"repo": "peviitor-ro/based_scraper_py",
"url": "https://github.com/peviitor-ro/based_scraper_py/issues/128",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2243390785 | "Miercani", "Râjlețu-Govora" are listed in the drop-down menu
Description:
When the user types "Miercani" or "Râjlețu-Govora", the locations appear in the drop-down menu next to the commune and the county of which they are a part, according to the law
Precondition:
The website is up and running.
Step 1
Write in the search bar "Miercani"
Expected results
The location appears in the drop-down menu as "Sat Miercani, ARGEȘ (Uda)"
Step 2
Press "x" button.
Expected results
The location was deleted from the search bar.
| gharchive/issue | 2024-04-15T11:10:19 | 2025-04-01T06:45:21.848926 | {
"authors": [
"AdelinaGuliman"
],
"repo": "peviitor-ro/ui.orase",
"url": "https://github.com/peviitor-ro/ui.orase/issues/2707",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2246050292 | "Pietroasele" is listed in the drop-down menu
Description:
"Pietroasele" is listed in the drop-down menu alongside its corresponding county and commune, according to the law
Precondition:
The website is up and running.
Step 1
Type "Pietroasele" in the search bar.
Expected results
The location is listed in the drop-down menu as "Comuna Pietroasele, BUZĂU".
| gharchive/issue | 2024-04-16T13:21:30 | 2025-04-01T06:45:21.851596 | {
"authors": [
"Florin201092"
],
"repo": "peviitor-ro/ui.orase",
"url": "https://github.com/peviitor-ro/ui.orase/issues/2940",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2269486259 | ”Schitu Duca”, ”Blaga”, ”Dumitreștii Gălății”, ”Pocreaca” are listed in the drop-down menu
Description:
When the user types ”Schitu Duca”, ”Blaga”, ”Dumitreștii Gălății” or ”Pocreaca”, the locations appear in the drop-down menu alongside the commune and the county they belong to, according to the law.
Precondition:
The website is up and running.
Step 3
Write in the search bar "Blaga"
Expected results
The location appears in the drop-down menu as "Sat Blaga, IAȘI (Schitu Duca)".
Step 4
Press "x" button.
Expected results
The location was deleted from the search bar.
Step 5
Write in the search bar "Dumitreștii Gălății"
Expected results
The location appears in the drop-down menu as "Sat Dumitreștii Gălății, IAȘI (Schitu Duca)".
Step 6
Press "x" button.
Expected results
The location was deleted from the search bar.
Step 7
Write in the search bar "Pocreaca"
Expected results
The location appears in the drop-down menu as "Sat Pocreaca, IAȘI (Schitu Duca)".
| gharchive/issue | 2024-04-29T16:40:04 | 2025-04-01T06:45:21.858405 | {
"authors": [
"bogi1492"
],
"repo": "peviitor-ro/ui.orase",
"url": "https://github.com/peviitor-ro/ui.orase/issues/4596",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1058708210 | cluster-role is not required but always assumed
I'm trying to use this action to install helm charts. In my setup the AWS keys I'm using already give access to the cluster and no role needs to be assumed. However, when I run the workflow I get the error:
An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::1234567890:user/... is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::1234567890:role/blackbox-eks-admin
It seems that the role is always assumed, irrespective of whether it is defined.
Is there a way to not assume the role, or would that need a PR?
Did you edit the aws-auth configmap to add the role? @miguelaferreira
No I did not add or even configure any role for the action. The credentials I provide are enough and no role needs to be assumed.
Can you add the role in the aws-auth configmap and try?
Not really. That's not how I want to have the access set up.
I did add cluster-role-arn and it worked for me @miguelaferreira
| gharchive/issue | 2021-11-19T16:27:34 | 2025-04-01T06:45:21.865963 | {
"authors": [
"AwateAkshay",
"miguelaferreira"
],
"repo": "peymanmortazavi/eks-helm-deploy",
"url": "https://github.com/peymanmortazavi/eks-helm-deploy/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
365120147 | Extending stream method inventory
This is a continuation of:
https://github.com/micropython/micropython/issues/2180
https://github.com/micropython/micropython/issues/2622#issuecomment-275281602
(somewhat) https://github.com/micropython/micropython/issues/3396
Per the latest evolution/iteration, considering to implement:
stream.readline(sz=-1) to take another optional param, specifying "end of line value", i.e. stream.readline(sz=-1, end="\n"). The "end" param is not expected to be a keyword, but if it ever will be, it's named the same as print()'s argument.
stream.readline_into(buf, sz=-1, end="\n")
Change all "into" methods to accept not just buffer objects, but also BytesIO objects. This is an alternative to io.copy() proposal from https://github.com/micropython/micropython/issues/2622#issuecomment-275281602 .
So, the top-level decision point appears to be:
Whether we allow for (adhoc) buffering to be built into existing stream-like objects.
Or if there's a generic dual stream-buffer object which can be used even with unbuffered operations (and all current streams in uPy are unbuffered).
The current direction of thought follow choice 2, with BytesIO being such an object.
But fairly speaking, the fact that current streams in uPy are unbuffered is not ideal (won't compare favorably with benchmarks of buffered streams in other systems), so we may end up adding buffering directly to streams, and thus might rely on this buffering for some operations. But this buffering would be available only for some operations, e.g. read(), but not socket.recv(), the latter is not supposed to be buffered. But using data from recv() for buffering/stream access is still oftentimes useful, so the approach of having an external generic buffer still comes as beneficial.
(Then the next question will be how to avoid code, etc., duplication when implementing buffering for standard, e.g. file, streams - and that requires separate consideration and answers. Reusing BytesIO underlyingly can be one of the answers, unless there's a clear benefit to implement a more lightweight variant.)
stream.readline_into(buf, sz=-1, end="\n")
Change all "into" methods to accept not just buffer objects, but also BytesIO objects. This is an alternative to io.copy() proposal from micropython/micropython#2622 (comment) .
Instead of that 2., and following 3., now considering to do:
stream.readinto(BytesIO(), size=-1, until="\n")
until= still can be end=, but that's less clear...
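For illustration, the proposed readline(sz, end) semantics can be sketched in pure Python over a BytesIO. Note that `readline_until` is a hypothetical helper name made up for this sketch; it is not part of MicroPython's or CPython's API:

```python
import io

def readline_until(stream, size=-1, end=b"\n"):
    # Read bytes until `end` is seen (kept in the result, like readline's
    # trailing newline) or until `size` bytes have been read,
    # whichever comes first.
    out = bytearray()
    while size < 0 or len(out) < size:
        b = stream.read(1)
        if not b:          # EOF
            break
        out += b
        if out.endswith(end):
            break
    return bytes(out)

buf = io.BytesIO(b"key1=val1;key2=val2;rest")
first = readline_until(buf, end=b";")           # b"key1=val1;"
second = readline_until(buf, size=4, end=b";")  # size cap hit before ";"
```

A real implementation would read in larger chunks, but the size/end interaction is the part under discussion here.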
| gharchive/issue | 2018-09-29T10:29:17 | 2025-04-01T06:45:21.878677 | {
"authors": [
"pfalcon"
],
"repo": "pfalcon/micropython",
"url": "https://github.com/pfalcon/micropython/issues/17",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1636945245 | Pytorch 2.0 support
#606 & some bug fix for torch 1.9.0
@imos In the code, the parts without any particular notation are changes made to match the PyTorch 2.0 specification.
Codecov Report
Merging #612 (0563de1) into develop (e6566fb) will increase coverage by 0.00%.
The diff coverage is 100.00%.
:mega: This organization is not using Codecov’s GitHub App Integration. We recommend you install it so Codecov can continue to function properly for your repositories. Learn more
@@ Coverage Diff @@
## develop #612 +/- ##
========================================
Coverage 97.15% 97.15%
========================================
Files 56 56
Lines 2318 2320 +2
========================================
+ Hits 2252 2254 +2
Misses 66 66
| Impacted Files | Coverage Δ |
|---|---|
| pfhedge/nn/functional.py | 91.43% <ø> (ø) |
| pfhedge/_utils/hook.py | 100.00% <100.00%> (ø) |
| pfhedge/instruments/derivative/base.py | 93.02% <100.00%> (ø) |
| pfhedge/instruments/primary/base.py | 99.09% <100.00%> (ø) |
| pfhedge/nn/modules/hedger.py | 100.00% <100.00%> (ø) |
Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here.
/test
/test
| gharchive/pull-request | 2023-03-23T06:43:18 | 2025-04-01T06:45:21.934301 | {
"authors": [
"codecov-commenter",
"imos",
"masanorihirano"
],
"repo": "pfnet-research/pfhedge",
"url": "https://github.com/pfnet-research/pfhedge/pull/612",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
109949437 | with chainer.cuda.get_device(x) fails when x is 0-size cupy.ndarray
In [48]: x
Out[48]: array([], shape=(0, 10), dtype=float32)
In [49]: with chainer.cuda.get_device(b) as i:
....: i
....:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-49-d8b685ac1ab0> in <module>()
----> 1 with chainer.cuda.get_device(x) as i:
2 i
3
AttributeError: __exit__
One possible solution is to check if cuda.ndarray.device is None in get_device.
Do you have a minimal test code to check this bug?
Sorry I failed to edit the code that reproduces the error.
In [1]: import cupy
In [2]: x = cupy.array([]).reshape((0, 10))
In [3]: import chainer
In [4]: with chainer.cuda.get_device(x) as i:
...: pass
...:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-4-fa05828b6a46> in <module>()
----> 1 with chainer.cuda.get_device(x) as i:
2 pass
3
AttributeError: __exit__
More simply:
>>> x=cupy.ndarray((0,))
>>> with x.device as i:
... pass
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: __exit__
In cupy.cuda.memory, when size is zero, Memory doesn't set _device member.
https://github.com/pfnet/chainer/blob/master/cupy/cuda/memory.py#L23
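The suggested fix ("check if the device is None in get_device") could look roughly like the following pure-Python sketch. `DummyDevice`, `get_device_safe` and `FakeZeroSizeArray` are hypothetical names invented for illustration; they are not part of cupy's actual API:

```python
class DummyDevice:
    # Stand-in no-op device, used when an array carries no real device
    # (e.g. a zero-size allocation whose Memory never set `_device`).
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

def get_device_safe(arr):
    # Fall back to a usable context manager instead of returning None, so
    # `with get_device_safe(x):` never raises AttributeError: __exit__.
    device = getattr(arr, "device", None)
    return DummyDevice() if device is None else device

class FakeZeroSizeArray:
    device = None  # mimics the zero-size cupy.ndarray from this report

with get_device_safe(FakeZeroSizeArray()) as dev:
    entered_ok = isinstance(dev, DummyDevice)
```

The alternative fix mentioned in the last comment, always setting `_device` in `Memory` even for zero-size allocations, would remove the need for such a guard at the call site.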
| gharchive/issue | 2015-10-06T07:23:25 | 2025-04-01T06:45:21.937827 | {
"authors": [
"delta2323",
"unnonouno"
],
"repo": "pfnet/chainer",
"url": "https://github.com/pfnet/chainer/issues/490",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
170189305 | Implement tangent function.
This PR implements trigonometric tan function.
LGTM except a comment.
We need to test float16 and float64, but it's another issue because sin and cos also are not tested with these types.
OK, LGTM.
| gharchive/pull-request | 2016-08-09T15:01:24 | 2025-04-01T06:45:21.939833 | {
"authors": [
"takagi",
"unnonouno"
],
"repo": "pfnet/chainer",
"url": "https://github.com/pfnet/chainer/pull/1480",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
197718413 | Fix max pooling backward cpu
MaxPooling2D.backward_cpu() causes an error when I call the method more than once.
You can reproduce the error by the following code:
import numpy as np
from chainer import functions as F
x = np.ones((1, 1, 4, 4), dtype=np.float32)
h = F.max_pooling_2d(x, 2)
h.grad = np.ones_like(h.data)
h.backward()
try:
h.backward()
except IndexError as e:
print e # index 16 is out of bounds for axis 1 with size 16
try:
h.backward()
except IndexError as e:
print e # index 24 is out of bounds for axis 1 with size 16
We expect the code will print nothing, but the message as the comment will be shown.
The actual error message is like this:
File "C:\Users\sakurai\Anaconda2\lib\site-packages\chainer\functions\pooling\max_pooling_2d.py", line 91, in backward_cpu
gcol[indexes] = gy[0].ravel()
IndexError: index 16 is out of bounds for axis 1 with size 16
The current implementation of forward_cpu() has a probably unintended side effect at L88-89
indexes = self.indexes.ravel()
indexes += numpy.arange(0, indexes.size * kh * kw, kh * kw)
ndarray.ravel() might return the view of self.indexes instead of the copy and it will be broken by += .
To avoid this, I replaced ravel() with flatten() which always return the copy.
MaxPoolingND.backward_cpu() has the same problem so I fixed it as well.
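The view-vs-copy behavior at the heart of this bug is easy to demonstrate with a few lines of plain NumPy (a standalone sketch, independent of Chainer):

```python
import numpy as np

# ndarray.ravel() returns a *view* whenever the array is contiguous, so an
# in-place "+=" through that view silently mutates the original array.
# ndarray.flatten() always returns an independent copy and is safe to mutate.
indexes = np.zeros((2, 2), dtype=np.int32)

view = indexes.ravel()
view += np.arange(4, dtype=np.int32)   # corrupts `indexes` in place

safe = indexes.flatten()
safe += 100                            # `indexes` stays [[0, 1], [2, 3]]
```

This is exactly the pattern in forward_cpu(): the second backward call re-used the already-incremented self.indexes, producing out-of-bounds indices.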
There is a failed test, but it seems unrelated to this PR.
https://travis-ci.org/pfnet/chainer/jobs/186992459#L784-L808
FAIL: test_backward_cpu (test_theano_function.TestTheanoFunctionTwoOutputs_param_1) parameter: {'inputs': [{'shape': (3, 2), 'type': 'float32'}, {'shape': (2,), 'type': 'float32'}], 'outputs': [{'shape': (3, 2), 'type': 'float32'}, {'shape': (3, 2), 'type': 'float32'}]}
LGTM!
| gharchive/pull-request | 2016-12-27T15:29:00 | 2025-04-01T06:45:21.944101 | {
"authors": [
"okuta",
"ronekko"
],
"repo": "pfnet/chainer",
"url": "https://github.com/pfnet/chainer/pull/2062",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
209704105 | Add persistent() function to Link, Chain and ChainList class
Add persistent() function to Link, Chain and ChainList, in order to access the persistent parameters from outside of the instance of each class.
Could you add tests cases to tests/chainer_tests/test_link.py?
| gharchive/pull-request | 2017-02-23T09:14:59 | 2025-04-01T06:45:21.945265 | {
"authors": [
"muupan",
"takerum"
],
"repo": "pfnet/chainer",
"url": "https://github.com/pfnet/chainer/pull/2319",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
212969909 | Static-type CUDA kernel argument pointers
This PR improves performance of linear_launch() by static-typing kernel argument pointers.
One concern is that a dependency from function.pyx to core.pyx is introduced, because core.ndarray and core.Indexer must be visible to function.pyx in order to do static typing.
In my benchmark, linear_launch() is sped up by about 30%.
Thank you for reviewing! I followed your review.
OK!
| gharchive/pull-request | 2017-03-09T08:44:26 | 2025-04-01T06:45:21.946870 | {
"authors": [
"niboshi",
"okuta"
],
"repo": "pfnet/chainer",
"url": "https://github.com/pfnet/chainer/pull/2386",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
127863251 | Fix NPZ serializer with duplicated slashes
It fixes the problem that the NPZ serializer failed to deserialize optimizers, since np.savez silently removes leading slashes from entry keys as well as contiguous slashes within the keys.
Please add a test which checks if the deserializer of the optimizer works well with the npz serializer.
Added such a test.
LGTM except one comment.
fixed.
LGTM!
This PR fixes #895
I updated two files (npz.py, test_npz.py), but still got an error when loading the --resume optimizer:
File "train_imagenet.py", line 146, in
serializers.load_npz(args.resume, optimizer)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/chainer/serializers/npz.py", line 115, in load_npz
d.load(obj)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/chainer/serializer.py", line 79, in load
obj.serialize(self)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/chainer/optimizer.py", line 255, in serialize
state[key] = s(key, value)
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/chainer/serializers/npz.py", line 92, in call
dataset = self.npz[self.path + key]
File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/numpy/lib/npyio.py", line 228, in getitem
raise KeyError("%s is not a file in the archive" % key)
KeyError: '/inc3a/conv33b/W/v is not a file in the archive'
| gharchive/pull-request | 2016-01-21T07:32:39 | 2025-04-01T06:45:21.952245 | {
"authors": [
"beam2d",
"matrlot",
"unnonouno"
],
"repo": "pfnet/chainer",
"url": "https://github.com/pfnet/chainer/pull/878",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
406900015 | Make links in disclaimer clickable, all locales
See DisclaimerComponent, line 15, but the change has to happen in the corresponding localized strings too.
According to @bensinjin the about page is done but not the disclaimer page. The commits fixing the about page should not have been tagged with this story, since this story makes no reference to the about page. The commit message also doesn't make mention of about or disclaimer page.
The work to fix the disclaimer page is still outstanding.
| gharchive/issue | 2019-02-05T17:45:52 | 2025-04-01T06:45:21.960642 | {
"authors": [
"rasmus-storjohann-PG"
],
"repo": "pg-irc/pathways-frontend",
"url": "https://github.com/pg-irc/pathways-frontend/issues/448",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
62682100 | Enforce to login only when connected to ldap
I want a user (with user privileges) to be able to log in only when connected to LDAP. It does work if the user doesn't exist yet (obviously). But after the user logs in the first time and a local profile is created, they can log in without any trouble, without waiting for an LDAP connection.
You need to fully restart the system to check this, because if you simply log off, unplug the cable and try to log in, the enforcement works (the user won't connect), but that is not an option.
I've checked and unchecked all of those options and it still doesn't work. Also, I once got it working (or maybe it was a dream), but after I deleted the local user profile and tried to test from scratch it stopped working, even though the settings were the same.
I consider this a bug (unless I really don't understand something) and can replicate it on both the stable and non-stable versions.
Ok, it appears I didn't understand what "Disconnected/Connected" on startup means. It seems I have a different kind of trouble.
This particular issue can be closed.
| gharchive/issue | 2015-03-18T12:28:46 | 2025-04-01T06:45:22.013098 | {
"authors": [
"swamp-fish"
],
"repo": "pgina/pgina",
"url": "https://github.com/pgina/pgina/issues/249",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
903366981 | NPE in KeyRingInfo.getLastModified()
Please look at the following code. We've received bug reports that indicate an NPE in getLastModified().
It seems getMostRecentSignature() can return null as a result. It would be nice if PGPainless had @Nullable here (ideally on all public methods, where needed). It's very important for Kotlin usage, to prevent such bugs.
/**
* Return the date on which the key ring was last modified.
* This date corresponds to the date of the last signature that was made on this key ring by the primary key.
*
* @return last modification date.
*/
public Date getLastModified() {
PGPSignature mostRecent = getMostRecentSignature();
return mostRecent.getCreationTime();
}
private PGPSignature getMostRecentSignature() {
Set<PGPSignature> allSignatures = new HashSet<>();
if (mostRecentSelfSignature != null) allSignatures.add(mostRecentSelfSignature);
if (revocationSelfSignature != null) allSignatures.add(revocationSelfSignature);
allSignatures.addAll(mostRecentUserIdSignatures.values());
allSignatures.addAll(mostRecentUserIdRevocations.values());
allSignatures.addAll(mostRecentSubkeyBindings.values());
allSignatures.addAll(mostRecentSubkeyRevocations.values());
PGPSignature mostRecent = null;
for (PGPSignature signature : allSignatures) {
if (mostRecent == null || signature.getCreationTime().after(mostRecent.getCreationTime())) {
mostRecent = signature;
}
}
return mostRecent;
}
P.S. And of course, having warnings about @Nullable in the Javadocs would be super useful, like
@return last modification date or null if ...
Do you happen to have a test key available which caused this?
I can only imagine this to happen if there is no correct signature at all on the key, or if all signatures on the key were made in the future?
Do you happen to have a test key available which caused this?
unfortunately, I don't have such a key. We received anonymous error feedback (which is generated automatically) that we collect from users
I see. I will try to make the method NPE-proof nonetheless :)
The method should now no longer throw NPEs. If no signatures are found, the method falls back to the latest key-creation date.
Thank you!
| gharchive/issue | 2021-05-27T07:55:59 | 2025-04-01T06:45:22.027157 | {
"authors": [
"DenBond7",
"vanitasvitae"
],
"repo": "pgpainless/pgpainless",
"url": "https://github.com/pgpainless/pgpainless/issues/125",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1179506639 | PGPainless produces broken messages for some signature / data format combinations.
A signature over data can either be of SignatureType.BINARY_DOCUMENT or SignatureType.CANONICAL_TEXT_DOCUMENT.
The body of a literal data packet can either be ´StreamEncoding.BINARY´, ´StreamEncoding.TEXT´ or ´StreamEncoding.UTF8´.
It turns out, that PGPainless only produces valid output for (BINARY,BINARY) combination.
Others result in errors, or lead to interoperability problems.
This is being investigated in https://github.com/pgpainless/pgpainless/tree/canonicalizedDatza
UPDATE
Some of the error cases have been fixed:
[x] Binary Data, Binary Signature
[x] Binary Data, Text Signature
[ ] Text Data, Binary Signature
[x] Text Data, Text Signature
[ ] UTF8 Data, Binary Signature
[x] UTF8 Data, Text Signature
For now users are advised to avoid using StreamEncoding.TEXT/StreamEncoding.UTF8 in combination with DocumentSignatureType.BINARY_DOCUMENT.
All other combinations should be fine now.
After checking back with some other OpenPGP developers it was decided that the best way forward is to deprecate ProducerOptions.setEncoding(StreamEncoding).
This would mean that all messages would use StreamEncoding.BINARY by default, which produces valid signatures for all combinations with DocumentSignatureTypes.
| gharchive/issue | 2022-03-24T13:19:07 | 2025-04-01T06:45:22.031995 | {
"authors": [
"vanitasvitae"
],
"repo": "pgpainless/pgpainless",
"url": "https://github.com/pgpainless/pgpainless/issues/264",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1132301012 | The editor does not support pasting images
The editor does not support pasting images
fix v0.2.2
| gharchive/issue | 2022-02-11T10:29:37 | 2025-04-01T06:45:22.042770 | {
"authors": [
"phachon",
"zdwork"
],
"repo": "phachon/mm-wiki",
"url": "https://github.com/phachon/mm-wiki/issues/336",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1492388761 | General Issue: deprecate functions and arguments
Background Information
Functions and arguments which were deprecated in admiral 0.7.0 (or before) should be removed.
For functions and arguments which were deprecated in admiral 0.8.0 deprecate_warn() should be replaced with deprecate_stop() and the "deprecated" badge with the "defunct" badge.
Unit tests should be updated.
This issue could be combined with https://github.com/pharmaverse/admiraldev/issues/36.
Definition of Done
No response
I think we also need to look into setting up the devel to pre-release PR checklist to help remind us to check on stuff like this...since we missed deprecating and removing functions on this for v9
Already addressing this in issue #1723
| gharchive/issue | 2022-12-12T17:28:35 | 2025-04-01T06:45:22.093880 | {
"authors": [
"bms63",
"bundfussr",
"sadchla-codes"
],
"repo": "pharmaverse/admiral",
"url": "https://github.com/pharmaverse/admiral/issues/1635",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1775107248 | Closes #1695 Establish codeowners
Thank you for your Pull Request! We have developed this task checklist from the Development Process Guide to help with the final steps of the process. Completing the below tasks helps to ensure our reviewers can maximize their time on your code as well as making sure the admiral codebase remains robust and consistent.
Please check off each taskbox as an acknowledgment that you completed the task or check off that it is not relevant to your Pull Request. This checklist is part of the Github Action workflows and the Pull Request will not be merged into the devel branch until you have checked off each task.
[ ] Place Closes #<insert_issue_number> into the beginning of your Pull Request Title (Use Edit button in top-right if you need to update)
[ ] Code is formatted according to the tidyverse style guide. Run styler::style_file() to style R and Rmd files
[ ] Updated relevant unit tests or have written new unit tests, which should consider realistic data scenarios and edge cases, e.g. empty datasets, errors, boundary cases etc. - See Unit Test Guide
[ ] If you removed/replaced any function and/or function parameters, did you fully follow the deprecation guidance?
[ ] Update to all relevant roxygen headers and examples, including keywords and families. Refer to the categorization of functions to tag appropriate keyword/family.
[ ] Run devtools::document() so all .Rd files in the man folder and the NAMESPACE file in the project root are updated appropriately
[ ] Address any updates needed for vignettes and/or templates
[ ] Update NEWS.md if the changes pertain to a user-facing function (i.e. it has an @export tag) or documentation aimed at users (rather than developers)
[ ] Build admiral site pkgdown::build_site() and check that all affected examples are displayed correctly and that all new functions occur on the "Reference" page.
[ ] Address or fix all lintr warnings and errors - lintr::lint_package()
[ ] Run R CMD check locally and address all errors and warnings - devtools::check()
[ ] Link the issue in the Development Section on the right hand side.
[ ] Address all merge conflicts and resolve appropriately
[ ] Pat yourself on the back for a job well done! Much love to your accomplishment!
Looks like we are going to give this another try @millerg23 @jeffreyad , with @bms63 drawing up the documentation for it, are you both still okay being our pilot codeowners?
Looks like we are going to give this another try @millerg23 @jeffreyad , with @bms63 drawing up the documentation for it, are you both still okay being our pilot codeowners?
That's fine.
| gharchive/pull-request | 2023-06-26T15:58:38 | 2025-04-01T06:45:22.102651 | {
"authors": [
"jeffreyad",
"zdz2101"
],
"repo": "pharmaverse/admiral",
"url": "https://github.com/pharmaverse/admiral/pull/1973",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
66430360 | Won't connect to SRV Records
Whilst trying to connect to a server via client.getSession().connect(), the code seems to stall without any sign of life, though it is not terminated.
It turns out this is caused by MCProtocolLib not loading SRV Records. I'll need to write something to get A Records from SRV Records.
| gharchive/issue | 2015-04-05T11:43:24 | 2025-04-01T06:45:22.131464 | {
"authors": [
"phase"
],
"repo": "phase/minekraft",
"url": "https://github.com/phase/minekraft/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1203625375 | romaps: add iteration method Do
add iteration benchmarks
:warning: Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 90.53%. Comparing base (191e32b) to head (ed62bcd).
Report is 9 commits behind head on main.
:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
Additional details and impacted files
@@ Coverage Diff @@
## main #6 +/- ##
==========================================
+ Coverage 90.34% 90.53% +0.18%
==========================================
Files 6 6
Lines 259 264 +5
==========================================
+ Hits 234 239 +5
Misses 19 19
Partials 6 6
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
| gharchive/pull-request | 2022-04-13T18:09:15 | 2025-04-01T06:45:22.143998 | {
"authors": [
"codecov-commenter",
"phelmkamp"
],
"repo": "phelmkamp/immut",
"url": "https://github.com/phelmkamp/immut/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
314227470 | Update to support Angular 6 and RxJS 6
Upgrading a project to Angular 6 RC and encountered errors relating to the new breaking RxJS 6 changes, this should fix. Feel free to wait to merge until Angular 6 is fully released, this is just a stop-gap for our project.
Should also resolve #52
this fix has been available since "14 days ago" and is still not merged; what are you waiting for? Every time I run "npm i" in my project I need to fix it manually. This is annoying; please accept.
@colas74 Please respect that the maintainer of this project likely has other commitments, as well as no obligation to keep it maintained.
If you’d like, you may head over to the fork, clone it, then run npm install and npm run build, then cd into the dist directory and run npm link ..
Also keep in mind that I developed my changes based off of an older RC release of angular and have not tested them with the most recent release, so your results may vary. Odds are that the maintainer would like to wait until angular 6 is fully released to merge the pull request.
@computerwizjared afaik current errors aren't related to angular but rxjs (which is v6 released!)
@colas74 yes, npm link could be a solution, or simply copy the service file into your project, or you can also use this one: Efficient local storage module for Angular apps and PWA: simple API + performance + Observables + validation
@istiti My apologies, I didn't realize RxJS 6 came out 3 days ago. I am referencing an RC release in my PR, and haven't tested the stable release with my code yet.
Sorry for the delay - we're super busy over here at the moment!
I've merged the PR but reverted Angular back from v6 RC to the stable v5. The pipeable changes should be enough to get things going in ng6 on their own - please let me know if that is not the case! (you'll have to ignore peer dep warnings for a while longer sorry!).
I also removed the rxjs compat dependency - since the lib uses all pipeable operators now it should not be needed.
Published at 1.0.2
Thanks very much for the PR @computerwizjared!
...aaaand ng6 is out.
I've updated the deps (angular and rxjs) to 6.0.0 and published as a breaking change at v2.0.0
Good job, great thanks!
| gharchive/pull-request | 2018-04-13T19:38:46 | 2025-04-01T06:45:22.149821 | {
"authors": [
"colas74",
"computerwizjared",
"elwynelwyn",
"istiti"
],
"repo": "phenomnomnominal/angular-2-local-storage",
"url": "https://github.com/phenomnomnominal/angular-2-local-storage/pull/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2079989151 | Update sbt to 1.9.8
About this PR
📦 Updates org.scala-sbt:sbt from 1.9.2 to 1.9.8
📜 GitHub Release Notes - Version Diff
Usage
✅ Please merge!
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
⚙ Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scala-sbt", artifactId = "sbt" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "org.scala-sbt", artifactId = "sbt" }
}]
labels: library-update, early-semver-patch, semver-spec-patch, version-scheme:early-semver, commit-count:1
Superseded by #206.
| gharchive/pull-request | 2024-01-13T02:17:57 | 2025-04-01T06:45:22.154603 | {
"authors": [
"scala-steward"
],
"repo": "phenoscape/phenoscape-kb-web-ui",
"url": "https://github.com/phenoscape/phenoscape-kb-web-ui/pull/203",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
318168771 | RC Test: Resistance In A Wire 1.5.0-rc.3
@jessegreenberg, @ariel-phet, @emily-phet, @terracoda, @phet-steele, Resistance in a Wire 1.5.0-rc.2 is ready for RC testing.
NOTE: This is the SECOND rc, despite the naming (#buildproblems).
Link to sim
Link to iFrame
Test Matrix
Before beginning, please familiarize yourself with how a screen reader works. Here is a page with information about PhET's supported screen readers and documentation about how to use them:
Screen reader intro
The simulation to test has labels and/or descriptions. They can be "seen" here:
Link to a11y view
This view shows what is available to a screen reader user. The sim is on the left side in an iframe, while the right side has all of the accessible descriptions. Beneath the sim are real-time alerts that will be announced by the screen reader. Please make sure that the descriptions on the right accurately describe the simulation at all times.
PhET supports the following platforms for accessibility so please test these:
(These should be in the test matrix)
[x] JAWS with latest Windows, latest Firefox
[x] NVDA with latest Windows, latest Firefox
[x] VoiceOver with latest macOS, latest Safari
Please also verify
[x] stringTest=double (all strings doubled)
[x] stringTest=long (exceptionally long strings)
[x] stringTest=X (short strings)
[x] stringTest=rtl (right-to-left)
[x] stringTest=xss (test passes if sim does not redirect, OK if sim crashes or fails to fully start)
[x] showPointerAreas (touchArea=red, mouseArea=blue)
[x] Full screen test
[x] Screenshot test
[x] Does AR need to be notified for any LOL updates?
Accessibility strings that are not visible should not be translatable yet, do not worry if these do not change with string tests.
If any new issues are found, please note them in https://github.com/phetsims/resistance-in-a-wire/issues and reference this issue.
Critical screen reader information
JAWS might complain about Firefox Quantum. If that is the case, please test with https://www.mozilla.org/en-US/firefox/organizations/
The screen reader must be turned on and reading before the browser is open for it to work. See https://github.com/phetsims/a11y-research/issues/90
JAWS might not automatically switch to forms mode for some controls (like sliders and radio buttons). In this case, pressing "enter" should switch to forms mode. For example, see https://github.com/phetsims/resistance-in-a-wire/issues/138
The screen reader might focus the last UI component that had focus on sim load, typically on refresh. This is a "feature". For example, see phetsims/resistance-in-a-wire#139.
(Other potentially useful items)
Please make sure that the following issues are fixed:
[x] https://github.com/phetsims/joist/issues/482
[x] https://github.com/phetsims/scenery/issues/770
[x] https://github.com/phetsims/resistance-in-a-wire/issues/140 (verify the correct string)
[x] https://github.com/phetsims/resistance-in-a-wire/issues/142 (should just be a FF setting)
[x] https://github.com/phetsims/resistance-in-a-wire/issues/141
@ariel-phet please assign and prioritize
QA is done.
Awesome! Liam!
Thanks @lmulhall-phet, new rc (spot check) coming up!
| gharchive/issue | 2018-04-26T19:36:16 | 2025-04-01T06:45:22.168502 | {
"authors": [
"lmulhall-phet",
"terracoda",
"zepumph"
],
"repo": "phetsims/QA",
"url": "https://github.com/phetsims/QA/issues/118",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1173760476 | ReadingBlock should have their own responsePatterns option.
https://github.com/phetsims/scenery/blob/b042a44c8d06c9861d86544e3eb3d871bc22d060/js/accessibility/voicing/ReadingBlock.ts#L362
I see this line, which means we create garbage each time a ReadingBlock creates a response. In https://github.com/phetsims/ratio-and-proportion/issues/440 I want a different pattern anyways, so it is a good time to keep the options in sync with Voicing.
Perhaps we could try to inline this a bit so that ReadingBlock could just forward to the voicing options, so we don't have to duplicate. I'd like to see what @jessegreenberg thinks about it though
Seems very reasonable to me, either for ReadingBlock to have its own ResponsePacket or to forward to the supertype and avoid duplication. I haven't looked into the implementation deeply but the latter is probably best.
I discussed with @jessegreenberg. Steps for this issue:
Add these three to readingblock which just forward to the super option but are named for readingBlock*:
'voicingUtterance',
'voicingUtteranceQueue',
'voicingResponsePatternCollection',
Omit these from the API, including getters/setters for them:
'voicingNameResponse',
'voicingObjectResponse',
'voicingContextResponse',
'voicingHintResponse',
Align Voicing.collectAndSpeakResponse and collectReadingBlockResponses
Ask TS if hint responses for ReadingBlocks should respect the checkbox for hint responses.
Rename the readingBlockContent to readingBlockNameResponse, and run it off of voicingNameResponse within the response packet.
And also anything else.
Ok, still left to do after the above commits:
voicingUtteranceQueue is still supported, but it is pretty much the only option that isn't renamed to be called readingBlockUtteranceQueue. Do you think I should add that option to ReadingBlock and drop the support for the voicing option? It would mean a couple of changes in Voicing which is why I didn't (see speakResponse()). I wonder if it could be not confusing enough to just leave it as is.
Do we really need to create a brand new Utterance each time we speak a ReadingBlock? Could we get away with just a single Utterance per ReadingBlock instance?
@jessegreenberg, can you please review. How do things look to you?
RE: (1):
Index: js/accessibility/voicing/ReadingBlockUtterance.js
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/js/accessibility/voicing/ReadingBlockUtterance.js b/js/accessibility/voicing/ReadingBlockUtterance.js
--- a/js/accessibility/voicing/ReadingBlockUtterance.js (revision 9c7dd23ec82aabf2c872f14273946c2d1256f035)
+++ b/js/accessibility/voicing/ReadingBlockUtterance.js (date 1648843754595)
@@ -17,10 +17,10 @@
* @param {Focus} focus
* @param {Object} [options]
*/
- constructor( focus, options ) {
+ constructor( focus?, options? ) {
super( options );
- // @public (read-only)
+ // @public (change it babey
this.readingBlockFocus = focus;
}
}
Index: js/accessibility/voicing/ReadingBlock.ts
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
diff --git a/js/accessibility/voicing/ReadingBlock.ts b/js/accessibility/voicing/ReadingBlock.ts
--- a/js/accessibility/voicing/ReadingBlock.ts (revision 9c7dd23ec82aabf2c872f14273946c2d1256f035)
+++ b/js/accessibility/voicing/ReadingBlock.ts (date 1648843754592)
@@ -49,6 +49,7 @@
'voicingObjectResponse' |
'voicingContextResponse' |
'voicingHintResponse' |
+ 'voicingUtterance' |
'voicingResponsePatternCollection';
export type ReadingBlockOptions = SelfOptions &
@@ -160,6 +161,7 @@
// a different behavior.
( this as unknown as Node ).focusHighlight = new ReadingBlockHighlight( this );
+ this.voicingUtterance = new ReadingBlockUtterance();
( this as unknown as Node ).mutate( readingBlockOptions );
}
@@ -366,7 +368,7 @@
hintResponse: this.getReadingBlockHintResponse(),
ignoreProperties: this.voicingIgnoreVoicingManagerProperties,
responsePatternCollection: this._voicingResponsePacket.responsePatternCollection,
- utterance: null // we use a ReadingBlockUtterance below.
+ utterance: this.voicingUtterance // we use a ReadingBlockUtterance below.
} );
if ( content ) {
for ( let i = 0; i < displays.length; i++ ) {
@@ -381,10 +383,9 @@
const visualTrail = PDOMInstance.guessVisualTrail( rootToSelf, displays[ i ].rootNode );
const focus = new Focus( displays[ i ], visualTrail );
- const readingBlockUtterance = new ReadingBlockUtterance( focus, {
- alert: content
- } );
- this.speakContent( readingBlockUtterance );
+ content.readingBlockFocus = focus;
+
+ this.speakContent( content );
}
}
}
OK, pretty much that exact patch applied with some documentation changes. @zepumph is there anything else for this issue?
I also overrode the setter, but I'm not sure if that is complete. Can they still use the getter? I thought so, but want to make sure.
We reviewed this together. We can't override the setter like this because we need it for the implementation of ReadingBlock. Instead, we will override the setter and limit its argument so that it has to take a ReadingBlockUtterance. We will keep the option voicingUtterance disallowed. We don't expect the new setter to be used much but this way it will be clear that there are special things going on for the ReadingBlock Utterance.
This was done in the above commit. And overriding the method while changing its signature is working well:
@zepumph told me it would, I should have known he would be right.
@zepumph back to you, anything else?
Overriding the actual set voicingResponse made it so that the getter from get voicingResponse returned undefined. I don't understand why, but this is now fixed after the above commit.
The changes here seem incomplete, because now myReadingBlockNode.voicingUtterance still accepts just an Utterance:
I had good luck overriding the getter and the setter and just using a typecast to make sure all is good. But then I remembered about something called an assertion signature, and used it here. Tested with this code:
const readingBlockNode = new ( ReadingBlock( Node, 0 ) )();
readingBlockNode.voicingUtterance = new ReadingBlockUtterance( new Focus() );
readingBlockNode.voicingUtterance = new Utterance(); // ERROR
readingBlockNode.setVoicingUtterance( new Utterance() ); // ERROR
const x = readingBlockNode.getVoicingUtterance();
console.log( readingBlockNode.getVoicingUtterance() ); // Returns ReadingBlockUtterance
console.log( readingBlockNode.voicingUtterance ); // Returns ReadingBlockUtterance
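For reference, a hypothetical minimal sketch of that accessor-narrowing pattern — the class and method names mirror the discussion above, but none of this is the real scenery code:

```typescript
// Hypothetical minimal sketch — names mirror the scenery discussion,
// but this is NOT the real scenery implementation.
class Utterance {
  alert: string | null = null;
}

class ReadingBlockUtterance extends Utterance {
  constructor( public focus: string ) { super(); }
}

class VoicingNode {
  private _utterance: Utterance = new Utterance();

  setVoicingUtterance( utterance: Utterance ): void {
    this._utterance = utterance;
  }

  getVoicingUtterance(): Utterance {
    return this._utterance;
  }
}

class ReadingBlockNode extends VoicingNode {

  // Method parameters are bivariant in TypeScript, so an override can narrow
  // the accepted type; the runtime check backs up the compile-time contract.
  override setVoicingUtterance( utterance: ReadingBlockUtterance ): void {
    if ( !( utterance instanceof ReadingBlockUtterance ) ) {
      throw new Error( 'ReadingBlock requires a ReadingBlockUtterance' );
    }
    super.setVoicingUtterance( utterance );
  }

  override getVoicingUtterance(): ReadingBlockUtterance {
    return super.getVoicingUtterance() as ReadingBlockUtterance;
  }
}
```

With this shape, passing a plain Utterance is both a compile error and a runtime error, while the getter stays usable and returns the narrowed type.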
I see that makes a lot of sense and is a better fix for https://github.com/phetsims/scenery/issues/1386#issuecomment-1098588811. I also tested it and it works well.
Anything else?
| gharchive/issue | 2022-03-18T16:07:05 | 2025-04-01T06:45:22.242046 | {
"authors": [
"jessegreenberg",
"zepumph"
],
"repo": "phetsims/scenery",
"url": "https://github.com/phetsims/scenery/issues/1386",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
169640649 | OptionParser exception
I've recently started to see an exception when attempting to run the plugin through Eclipse Version: Mars.2 Release (4.5.2), Build id: 20160218-0600
Here's the full stack:
Exception in thread "main" java.lang.NoSuchMethodError: joptsimple.OptionParser.acceptsAll(Ljava/util/List;Ljava/lang/String;)Ljoptsimple/OptionSpecBuilder;
at org.pitest.mutationtest.commandline.OptionsParser.<init>(OptionsParser.java:122)
at org.pitest.mutationtest.commandline.MutationCoverageReport.main(MutationCoverageReport.java:36)
at org.pitest.pitclipse.pitrunner.PitRunner.runPIT(PitRunner.java:46)
at org.pitest.pitclipse.pitrunner.PitRunner.main(PitRunner.java:25)
I'm on vacation and unable to look at this for a week. Can I suggest you uninstall the plugin and install from the following update site http://eclipse.pitest.org/release-pre-juno/
Thanks, that resolves the issue.
Glad to hear it. I don't suppose the project exhibiting this issue is open source is it? Would help me write a test to prevent it happening again.
I'm afraid that the project isn't open source.
I'm wondering if the Eclipse installation may have been corrupted or perhaps the workspace, since I'm sure that I'd used it on that specific project before.
I think this and #66 are the same issue. If for some reason old versions of the plugins are there, I think it may load all of them. Classic behaviour for NoSuchMethodError. Investigating #66 now.
| gharchive/issue | 2016-08-05T16:03:49 | 2025-04-01T06:45:22.657447 | {
"authors": [
"jiteshvassa",
"philglover"
],
"repo": "philglover/pitclipse",
"url": "https://github.com/philglover/pitclipse/issues/67",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
564099500 | Implement actions/workflow_jobs endpoints
https://developer.github.com/v3/actions/workflow_jobs/
Official implementation progres.
https://github.com/google/go-github/issues/1399#issuecomment-579554516
https://github.com/google/go-github/pull/1421/files/1bae46c43dbbe92f6fe443da0d72012779f23bad..016438dea3817a8547a3d4dedcb2bc024331488d
The library itself now has official support for workflow jobs
| gharchive/issue | 2020-02-12T16:02:51 | 2025-04-01T06:45:22.674022 | {
"authors": [
"marcofranssen"
],
"repo": "philips-labs/garo",
"url": "https://github.com/philips-labs/garo/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1137205351 | add keywords field detection in ymf
Hi,
Because some tools like pandoc use the "keywords" field instead of "tags", could you add detection of this field in the YAML front matter header?
Thx
Hey @jerCarre,
I will try to add this soon(ish). In the meanwhile you could use some script to detect and convert it yourself :)
Thanks for opening this issue!
| gharchive/issue | 2022-02-14T12:05:28 | 2025-04-01T06:45:22.679161 | {
"authors": [
"Brend-Smits",
"jerCarre"
],
"repo": "philips-software/post-to-medium-action",
"url": "https://github.com/philips-software/post-to-medium-action/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1718205403 | 🚀 Feature: Add RSS feed
Bug Report Checklist
[X] I have tried restarting my IDE and the issue persists.
[X] I have pulled the latest main branch of the repository.
[X] I have searched for related issues and found none that matched my issue.
Overview
It would be great if we could generate this programmatically.
Additional Info
No response
I never developed an RSS feed but I am willing to take this on.
Hey @tjwds, do you have an example dataset you want to generate from? Atm I created a generator that reads from a single-depth JSON like 'src/data/events.json'. It can output this:
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
<item>
<title>Scripted Philadelphia: a Python, JavaScript, and Ruby joint meetup</title>
<link>https://www.eventbrite.com/e/scripted-philadelphia-a-python-javascript-and-ruby-joint-meetup-tickets-626403759507</link>
<description>Scripted Philadelphia: a Python, JavaScript, and Ruby joint meetup</description>
</item>
<item>
<title>Developments in React Router (and Remix) with Matt Brophy</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-developments-in-react-router-and-remix-tickets-627760688117</link>
<description>Developments in React Router (and Remix) with Matt Brophy</description>
</item>
<item>
<title>Contributing to Open Source Projects</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-contributing-to-open-source-projects-tickets-621853449407</link>
<description>Contributing to Open Source Projects</description>
</item>
<item>
<title>Technical.ly Super Meetup during Philly Tech Week</title>
<link>https://www.eventbrite.com/e/technically-super-meetup-during-philly-tech-week-tickets-611213515077</link>
<description>Technical.ly Super Meetup during Philly Tech Week</description>
</item>
<item>
<title>We're very excited to welcome Ken Rimple to share the latest developments in Next.js 13.x!</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-569920616907</link>
<description>We're very excited to welcome Ken Rimple to share the latest developments in Next.js 13.x!</description>
</item>
<item>
<title>We're very excited to welcome Alexa Stefanou from PhillyCHI to share her expertise with JavaScript Design Systems.</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-541868732967</link>
<description>We're very excited to welcome Alexa Stefanou from PhillyCHI to share her expertise with JavaScript Design Systems.</description>
</item>
<item>
<title>Social meeting / hack night</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-519965168817</link>
<description>Social meeting / hack night</description>
</item>
<item>
<title>JavaScript build systems + TypeScript projects</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-472565545267</link>
<description>JavaScript build systems + TypeScript projects</description>
</item>
<item>
<title>Linting & Mastodon!</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-451216569907</link>
<description>Linting & Mastodon!</description>
</item>
<item>
<title>Social meeting / sharing projects</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-420958005727</link>
<description>Social meeting / sharing projects</description>
</item>
<item>
<title>Social meeting / sharing projects</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-400972919797</link>
<description>Social meeting / sharing projects</description>
</item>
<item>
<title>Social meeting / sharing projects</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-387233926097</link>
<description>Social meeting / sharing projects</description>
</item>
<item>
<title>Social meeting / sharing projects</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-meetup-tickets-370871054307</link>
<description>Social meeting / sharing projects</description>
</item>
<item>
<title>Our very first meeting!</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-meetup-tickets-345760176997</link>
<description>Our very first meeting!</description>
</item>
<item>
<title>Scripted Philadelphia: a Python, JavaScript, and Ruby joint meetup</title>
<link>https://www.eventbrite.com/e/scripted-philadelphia-a-python-javascript-and-ruby-joint-meetup-tickets-626403759507</link>
</item>
<item>
<title>Developments in React Router (and Remix) with Matt Brophy</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-developments-in-react-router-and-remix-tickets-627760688117</link>
</item>
<item>
<title>Contributing to Open Source Projects</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-contributing-to-open-source-projects-tickets-621853449407</link>
</item>
<item>
<title>Technical.ly Super Meetup during Philly Tech Week</title>
<link>https://www.eventbrite.com/e/technically-super-meetup-during-philly-tech-week-tickets-611213515077</link>
</item>
<item>
<title>We're very excited to welcome Ken Rimple to share the latest developments in Next.js 13.x!</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-569920616907</link>
</item>
<item>
<title>We're very excited to welcome Alexa Stefanou from PhillyCHI to share her expertise with JavaScript Design Systems.</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-541868732967</link>
</item>
<item>
<title>Social meeting / hack night</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-519965168817</link>
</item>
<item>
<title>JavaScript build systems + TypeScript projects</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-472565545267</link>
</item>
<item>
<title>Linting & Mastodon!</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-451216569907</link>
</item>
<item>
<title>Social meeting / sharing projects</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-420958005727</link>
</item>
<item>
<title>Social meeting / sharing projects</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-400972919797</link>
</item>
<item>
<title>Social meeting / sharing projects</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-club-tickets-387233926097</link>
</item>
<item>
<title>Social meeting / sharing projects</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-meetup-tickets-370871054307</link>
</item>
<item>
<title>Our very first meeting!</title>
<link>https://www.eventbrite.com/e/philadelphia-javascript-meetup-tickets-345760176997</link>
</item>
</channel>
</rss>
I think I have a simple solution to generate the RSS feed xml.
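For anyone following along, a rough JavaScript sketch of that generator idea — mapping a flat events array to RSS `<item>` entries (the `title`/`link` field names are assumptions based on the pasted output, not the repo's actual schema). One thing worth handling: `&` must be escaped as `&amp;` for the feed to be valid XML.

```javascript
// Rough sketch: turn a flat events array into an RSS 2.0 document.
// The `title` and `link` field names are assumptions, not the repo's schema.
const escapeXml = (text) =>
  text.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');

function eventsToRss(events) {
  const items = events
    .map((event) => [
      '  <item>',
      `    <title>${escapeXml(event.title)}</title>`,
      `    <link>${escapeXml(event.link)}</link>`,
      `    <description>${escapeXml(event.title)}</description>`,
      '  </item>',
    ].join('\n'))
    .join('\n');

  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<rss version="2.0">',
    '<channel>',
    items,
    '</channel>',
    '</rss>',
  ].join('\n');
}
```

Writing the returned string to a static file at build time would keep the feed in sync with the events data.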
| gharchive/issue | 2023-05-20T15:31:40 | 2025-04-01T06:45:22.697885 | {
"authors": [
"chethtrayen",
"tjwds"
],
"repo": "philly-js-club/philly-js-club-website",
"url": "https://github.com/philly-js-club/philly-js-club-website/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1567297 | switch to disable stdout redirect
stdout (and stderr) are redirected (internally to an ostringstream), so they can be reported later as part of the test results.
Sometimes it is desirable to see the output immediately - especially while debugging.
A command line switch could be provided to allow this option.
Might be a good idea to capture it and display only if the test fails – a behaviour I've seen in other testing frameworks.
:+1: Very user-friendly behaviour
| gharchive/issue | 2011-09-05T13:07:29 | 2025-04-01T06:45:22.699725 | {
"authors": [
"ikanor",
"nabijaczleweli",
"philsquared"
],
"repo": "philsquared/Catch",
"url": "https://github.com/philsquared/Catch/issues/49",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2504580944 | Can't find correct download file
Hi, not sure if I'm just lost but I can't find the correct file to download. I can only find DrumsetPatterns-main which doesn't seem to open to the correct files as described in installation. Help please!
hi eboff, first off, which version of MuseScore are you using? Unfortunately DrumsetPatterns doesn't launch in MuseScore 4.4, and I'm working on a fix for it. It caught me a bit unawares :(
If you're using an earlier version, then all patterns files (*.dp) need to be placed in the same location. Typically this will be under the MuseScore "Plugins" directory, under a directory for the plugin. For example, on mine its
\MuseScore4\Plugins\DrumsetPatterns-1.0
Closing issue as no response.
| gharchive/issue | 2024-09-04T07:54:50 | 2025-04-01T06:45:22.713649 | {
"authors": [
"eboff",
"philxan"
],
"repo": "philxan/DrumsetPatterns",
"url": "https://github.com/philxan/DrumsetPatterns/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2140210623 | KSampler error
ok I know you and I just finished chatting from the previous issue but here comes another one! built an engine using these args: --batch_min 1 --batch_opt 1 --batch_max 1 --height_min 512 --height_opt 1024 --height_max 1024 --width_min 512 --width_opt 1024 --width_max 1024 --token_count_min 75 --token_count_opt 150 --token_count_max 500 --ckpt_path ..\..\..\..\Models\StableDiffusion\anythingv5.safetensors and the build went perfectly without a single error, but when I try to generate an image I get the error
Error occurred when executing KSampler:
__len__() should return >= 0
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\nodes.py", line 1380, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\nodes.py", line 1350, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\comfy\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\comfy\samplers.py", line 713, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\comfy\samplers.py", line 618, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\comfy\samplers.py", line 557, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\comfy\k_diffusion\sampling.py", line 154, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\comfy\samplers.py", line 281, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\comfy\samplers.py", line 271, in forward
return self.apply_model(*args, **kwargs)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\comfy\samplers.py", line 268, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\comfy\samplers.py", line 248, in sampling_function
cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\comfy\samplers.py", line 222, in calc_cond_uncond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\custom_nodes\comfy-trt-test\comfy_trt\node_unet.py", line 148, in apply_model
model_output = self.diffusion_model.forward(x=xc, timesteps=t, context=context, **extra_conds).float()
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\custom_nodes\comfy-trt-test\comfy_trt\node_unet.py", line 207, in forward
self.engine.allocate_buffers(feed_dict)
File "E:\StabilityMatrix-win-x64\data\Packages\ComfyUI\custom_nodes\comfy-trt-test\comfy_trt\utilities.py", line 212, in allocate_buffers
tensor = torch.empty(tuple(shape), dtype=numpy_to_torch_dtype_dict[dtype]).to(device=device)
so uhh, have fun!
very obscure error ...
can u share the comfy workflow
workflow.json
do be aware you'll probably be missing some custom nodes such as rgthree, one button prompt, WD14 tagger or comfy-image-saver
sorry there's too many nodes & models i dont have 😂
my guess of the problem is control net (not yet supported in tensorrt, even the original a1111 extension)
it may be too much to ask, but if u really have a lot free time, can u start from the basic workflow and slowly add things to see when it breaks
Yeah, sure, I'll try later when I get home
that's about as simple as I can get it and it still errors
hmmm it works on my pc 😂😂😂 i took the anything v5 same as u, with same batch_min, token_count_min, etc.
u have the latest comfy ? what about torch version ? cuda - cudnn - tensorrt version ? no xformers ? u see any onnx file and trt file created ? on my end i have a 1.6gb onnx file + 1.6gb trt file
hmmm it works on my pc 😂😂😂 i took the anything v5 same as u, with same batch_min, token_count_min, etc.
u have the latest comfy ? what about torch version ? cuda - cudnn - tensorrt version ? no xformers ? u see any onnx file and trt file created ? on my end i have a 1.6gb onnx file + 1.6gb trt file
latest comfy? yes, toch version would be 2.1.0+cu121, cuda would be 12.1.1_531.14, cudnn is 8.9.7.29_cuda12, tensorrt is 9.3.0.post12.dev1, xformers isn't installed and both my onnx and trt files are 1.7gb
hmmm all seem fine 🤔 ...
the only difference is i have comfy standalone not in stability matrix, but it shouldnt be a problem
maybe this is your case: NVIDIA/Stable-Diffusion-WebUI-TensorRT#230
try lower token_count_max and/or higher batch_max
no idea why it's works on my pc though 😅😅😅
gonna try lowering token_count_max to 200 and increasing batch_max to 4
well well well then, 1 of those 2 was causing the error, the above settings seem to work
gonna close the issue and hopefully this may help someone in the future lol
| gharchive/issue | 2024-02-17T15:48:51 | 2025-04-01T06:45:22.725687 | {
"authors": [
"dotada",
"phineas-pta"
],
"repo": "phineas-pta/comfy-trt-test",
"url": "https://github.com/phineas-pta/comfy-trt-test/issues/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2127536121 | Rails.application.routes is nil even when Phlex::Rails::Helpers::Routes is included
I have a Phlex component that subclasses ApplicationComponent (which includes Phlex::Rails::Helpers::Routes) and I cannot use any of the Rails route helpers directly. The only way I can use them in Phlex components is prefixed with Rails.application.routes.url_helpers., which is exactly what Phlex::Rails::Helpers::Routes is supposed to make unnecessary.
Rails.application.routes must be nil because I'm getting an `undefined method 'default_url_options' for nil:NilClass` error when I try to invoke the route helper method directly. Any help here would be greatly appreciated.
It seems Phlex component classes' initialize method does not have any idea what Rails.application.routes is. Moving the path computation logic out of the initialize method fixed it. Not sure if this is a bug or not, however.
Sure: https://github.com/tubsandcans/test_phlex_routes_helper
Once running, go to /articles to see the failed example of referencing the path helper in Phlex component's initialize().
Go to /articles/1 to see the success example of referencing the path helper in the Phlex component's template().
Thanks for the reproduction. The problem is the helper isn't available during initialisation because the component hasn't yet been passed the view context, which it gets from the controller when you render it. my_component = MyComponent.new creates the component without any context about where it's being rendered. render my_component will pass it a view context and ask it to render.
The trick is to pull the url access out into a method that's called from the template. Alternatively, you could use the before_template hook to do what you're doing in the initialiser, since that hook is triggered after the context is known.
Hope that helps. I'm answering from my phone so haven't been able to go into too much detail.
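To make the timing concrete, here's a plain-Ruby sketch of the lifecycle (no phlex gem involved; the class and helper names are invented for illustration) — the view context only arrives at render time, so anything computed in initialize runs too early:

```ruby
# Plain-Ruby sketch mimicking the Phlex lifecycle (names are made up):
# the view context is only injected at render time, so route helpers
# called in initialize would run before the component knows about Rails.
class FakeComponent
  def initialize
    @context = nil # no view context yet — a route helper here would raise
  end

  def render(view_context)
    @context = view_context # context injected by the renderer
    before_template
    template
    @output
  end

  def before_template
    # safe: the context exists by the time this hook fires
    @path = @context.article_path(1)
  end

  def template
    @output = "link: #{@path}"
  end
end
```

The same shape applies to the real component: move the path computation into before_template (or a method called from the template) and it sees a fully wired-up context.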
That all makes sense, thanks for the before_template suggestion!
| gharchive/issue | 2024-02-09T17:07:40 | 2025-04-01T06:45:22.750185 | {
"authors": [
"joeldrapper",
"tubsandcans"
],
"repo": "phlex-ruby/phlex-rails",
"url": "https://github.com/phlex-ruby/phlex-rails/issues/151",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1091216259 | Add linux arm64 target
The Tailwind CLI version 3.0.8 added support for arm64 Linux. Just added a case for that in the installer. Seems to work fine for me!
❤️❤️❤️🐥🔥
| gharchive/pull-request | 2021-12-30T17:07:37 | 2025-04-01T06:45:22.781022 | {
"authors": [
"bowmanmike",
"chrismccord"
],
"repo": "phoenixframework/tailwind",
"url": "https://github.com/phoenixframework/tailwind/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
328788288 | Build fails for platform ios
Expected Behaviour
Build should succeed
Actual Behaviour
Build fails with the following error message:
The following build commands failed:
CompileC $HOME/Library/Developer/Xcode/DerivedData/conFusion-fjxujedkuzvwjegzzbfrbzfoketu/Build/Intermediates/conFusion.build/Debug-iphonesimulator/conFusion.build/Objects-normal/x86_64/CDVBarcodeScanner.o conFusion/Plugins/phonegap-plugin-barcodescanner/CDVBarcodeScanner.mm normal x86_64 objective-c++ com.apple.compilers.llvm.clang.1_0.compiler
(1 failure)
(node:28890) UnhandledPromiseRejectionWarning: Error code 65 for command: xcodebuild with args: -xcconfig,$HOME/Downloads/conFusion/platforms/ios/cordova/build-debug.xcconfig,-workspace,conFusion.xcworkspace,-scheme,conFusion,-configuration,Debug,-sdk,iphonesimulator,-destination,platform=iOS Simulator,name=iPhone SE,build,CONFIGURATION_BUILD_DIR=$HOME/Downloads/conFusion/platforms/ios/build/emulator,SHARED_PRECOMPS_DIR=$HOME/Downloads/conFusion/platforms/ios/build/sharedpch
(node:28890) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:28890) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Build succeeds for platform android
Steps to Reproduce
add barcodescanner plugin following instructions from ionic: https://ionicframework.com/docs/native/barcode-scanner/#installation
run ionic cordova emulate ios
Platform and Version (eg. Android 5.0 or iOS 9.2.1)
Mac OS X 10.11.6
iOS 10.2 (on simulated iPhone SE)
(Android) What device vendor (e.g. Samsung, HTC, Sony...)
Cordova CLI version and cordova platform version
cordova 8.0.0
Installed platforms:
android 7.0.0
browser 5.0.3
ios 4.5.4
Plugin version
phonegap-plugin-barcodescanner 8.0.0 "BarcodeScanner"
Sample Code that illustrates the problem
const options: BarcodeScannerOptions = {
preferFrontCamera: false,
showFlipCameraButton: true,
showTorchButton: true,
torchOn: false,
disableAnimations: false,
disableSuccessBeep: false,
prompt: 'Scan a barcode',
orientation: 'portrait'
};
this.barcodeScanner.scan(options)
.then(result => {
console.log(result);
if (!result.cancelled) {
this.type = result.format;
this.barcode = result.text;
} else {
console.log('Barcode scan cancelled');
this.resetBarcodeData();
}
})
.catch(error => {
console.log(`${error}`);
this.resetBarcodeData();
});
Logs taken while reproducing problem
This is probably caused by using Xcode 8 or older to build the app.
Since the 1st of April, Xcode 9 is required to submit the apps to the App Store.
If you want to use Xcode 8 you can put the incompatible code (if (@available(iOS 11.0, *))) inside a #if __IPHONE_OS_VERSION_MAX_ALLOWED >= 110000
| gharchive/issue | 2018-06-03T00:31:06 | 2025-04-01T06:45:22.803840 | {
"authors": [
"heapifyman",
"jcesarmobile"
],
"repo": "phonegap/phonegap-plugin-barcodescanner",
"url": "https://github.com/phonegap/phonegap-plugin-barcodescanner/issues/679",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
167366526 | In iOS, notification appears, but notification event is not triggered on click of notification.
Hi,
I am using APN to send push notifications to iOS.
Expected Behaviour
On notification clicked, App should open and notifcation event handler should be triggered
Actual Behaviour
App opens but notification event is not fired.
Platform and Version (iOS 9.2.1)
Cordova CLI version and cordova platform version
cordova --version # 6.0.0
cordova platform version iOS # 4.0.1
Plugin version
cordova plugin version | grep phonegap-plugin-push # 1.5.3
Sample Push Data Payload
$body['aps'] = array(
'alert' => "test message",
'sound' => 'default',
'link_url' => $url,
);
Sample Code that illustrates the problem
I am trying to check if notification event is triggered
push.on('notification', function(data) {
// data.message,
// data.title,
document.getElementById("event"). value = "notification recieved";
// data.count,
// data.sound,
// data.image,
// data.additionalData
alert("here");
});
@NishatJahan can you give me more details on how you setup the push code?
you are not using latest version of the plugin, can you update and try again?
@jcesarmobile: Sorry, my plugin version is 1.7.4. I think its the latest one
@macdonst : I have tried both approaches, using GCM fro iOS push notification and using APN itself.
USING GCM:
I have create genetared .p12 certifiacte file and uploaded it in google developers console.
and my server code is,
'here is a message. message',
'title' => 'This is a title. title',
'subtitle' => 'This is a subtitle. subtitle',
'tickerText' => 'Ticker text here...Ticker text here...Ticker text here',
'vibrate' => 1,
'sound' => 1
);
$fields = array
(
'registration_ids' => $registrationIds,
'notification' => $msg
);
$headers = array
(
'Authorization: key=' . API_ACCESS_KEY,
'Content-Type: application/json'
);
$ch = curl_init();
curl_setopt( $ch,CURLOPT_URL, 'https://android.googleapis.com/gcm/send' );
curl_setopt( $ch,CURLOPT_POST, true );
curl_setopt( $ch,CURLOPT_HTTPHEADER, $headers );
curl_setopt( $ch,CURLOPT_RETURNTRANSFER, true );
curl_setopt( $ch,CURLOPT_SSL_VERIFYPEER, false );
curl_setopt( $ch,CURLOPT_POSTFIELDS, json_encode( $fields ) );
$result = curl_exec($ch );
curl_close( $ch );
echo $result;
?>
Here the output is,
{"multicast_id":5623416507480230413,"success":1,"failure":0,"canonical_ids":0,"results":[{"message_id":"0:1469511283640485%394e7de1394e7de1"}]}
But notification doesn't show up in iPhone.
Using APN
I have created pem file and placed in server and i have added provisioning profile in xCode. Server side code is,
$message,
'sound' => 'default',
'link_url' => $url,
);
// Encode the payload as JSON
$payload = json_encode($body);
// Build the binary notification
$msg = chr(0) . pack('n', 32) . pack('H*', $deviceToken) . pack('n', strlen($payload)) . $payload;
// Send it to the server
$result = fwrite($fp, $msg, strlen($msg));
if (!$result)
echo 'Message not delivered' . PHP_EOL;
else
echo 'Message successfully delivered' . PHP_EOL;
// Close the connection to the server
fclose($fp);
here notification arrives, but on click of notification notification event is not triggered. it just opens up the app.
Please tell me which is the preferred method and where the problem is.
@macdonst
this is the plugin code in my app,
var push = PushNotification.init({ "android": {"senderID": "1022715757975", icon: "notification", "iconColor": "#43B426"},
"ios": {"senderID": "1022715757975","alert": "true", "badge": "true", "sound": "true", "gcmSandbox": "true"}, "windows": {} } );
push.on('registration', function(data) {
alert(data.registrationId);
});
push.on('notification', function(data) {
// data.message,
// data.title,
// data.count,
// data.sound,
// data.image,
// data.additionalData
alert(JSON.stringify(data));
});
push.on('error', function(e) {
// e.message
});
@NishatJahan Do you have iOS10 on your device?
I can confirm this issue on iOS 10. With iOS9 everything was fine. On iOS10 when the app is open I get the notifications, but when the app is closed and a notification has been received, clicking on the notification will open the app, but it won't trigger the handler function.
I just tested on an older device, it works well there, so iOS 10 is the source of the problem.
I have opened a new ticket for my problem: #1228
@NishatJahan can you retest with 1.8.2?
Closing, inactivity. Please comment/re-open if you have more details.
| gharchive/issue | 2016-07-25T13:31:28 | 2025-04-01T06:45:22.816104 | {
"authors": [
"NishatJahan",
"NoNameProvided",
"jcesarmobile",
"macdonst"
],
"repo": "phonegap/phonegap-plugin-push",
"url": "https://github.com/phonegap/phonegap-plugin-push/issues/1112",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
140662243 | Use same request for both iOS and Android (Using GCM)
Expected Behaviour
Use same request to GCM Service, to send push to both Android and iOS
Actual Behaviour
I was able to use same request to send push messages to both platforms some days ago, but now I can't do it anymore.
"notification" object in JSON (for iOS devices) is overriding the "data" object for Android.
Steps to Reproduce
{
"registration_ids": ["iOSRegId","AndroidRegId"],
"data": {
"placeId": 407,
"image": "http://someimage",
"title": "Android Title",
"style": "picture",
"picture": "http://somePicture",
"message": "My Message",
"summaryText": "VMy Message",
"actions": [{
"icon": "",
"title": "actionTitle",
"callback": "myCallback"
}]
},
"notification": {
"title": "my title iOS",
"body": "my message iOS"
}
}
Sending this will result in receiving a notification with title "my title iOS" and message "my message iOS" on both iOS and Android
Platform and Version
Android 4.2.2, 4.4, .5.1.1
Sample Push Data Payload
Stated above
@ahlidap yeah, that won't work. When the Android OS sees the 'notification' object in your push data it will take over and do it's own handling of the incoming push object. In order for this plugin to works it's magic, i.e. setup JS call backs properly, the push should not include a notification object. Everything should be in data.
You'll have to segment your users between iOS and Android then send separate pushes.
+1
I'm also using the same payload for both.
Its working for android but not for iOS
"registration_ids" : tokens,
"collapse_key":"score_update",
"data" : {
"message":params.alertData.message,
"title" : "myapp",
"pageId" : "notifications",
"notId" : notId,
"timeStamp" : new Date().getTime()
}
})
@techSavvySaahil yes, I am aware of the problem see my earlier answer. This is an enhancement I want to be able to provide.
| gharchive/issue | 2016-03-14T12:35:50 | 2025-04-01T06:45:22.822080 | {
"authors": [
"ahlidap",
"leonardobazico",
"macdonst",
"techSavvySaahil"
],
"repo": "phonegap/phonegap-plugin-push",
"url": "https://github.com/phonegap/phonegap-plugin-push/issues/702",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.