| added (string; date 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us]; 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string; length 4 to 10) | metadata (dict) | source (string; 2 classes) | text (string; length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:39:41.344536
| 2016-08-09T12:21:48
|
170152426
|
{
"authors": [
"dstreppa",
"mtanda"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8732",
"repo": "mtanda/grafana-histogram-panel",
"url": "https://github.com/mtanda/grafana-histogram-panel/issues/26"
}
|
gharchive/issue
|
Average calculation in legend
First of all, thank you very much for this plugin @mtanda; it is very useful.
I noticed a problem regarding the average calculation in the legend. In my case I should have a voltage value close to 230 V, but Grafana shows me out-of-range, nonsensical values (greater than 3000 V). Maximum and minimum, on the other hand, work fine.
I think I have found the problem. The average calculation is bound to the bucket size parameter. If I change it, the average changes as well.
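The reported dependence on bucket size can be illustrated with a small sketch (hypothetical code, not the plugin's implementation): an average computed from bucket midpoints, weighted by counts, drifts away from the true mean as the bucket size grows.

```javascript
// Illustrative sketch (not the plugin's actual code): why an average
// computed from histogram buckets depends on the bucket size.
function bucketedAverage(values, bucketSize) {
  const counts = new Map();
  for (const v of values) {
    const bucket = Math.floor(v / bucketSize); // which bucket v falls into
    counts.set(bucket, (counts.get(bucket) || 0) + 1);
  }
  // Average over bucket midpoints, weighted by how many samples each holds.
  let sum = 0;
  for (const [bucket, count] of counts) {
    sum += (bucket * bucketSize + bucketSize / 2) * count;
  }
  return sum / values.length;
}

const samples = [228, 229, 230, 231, 232]; // voltages near 230 V
const trueMean = samples.reduce((a, b) => a + b, 0) / samples.length; // 230
// With a tiny bucket the estimate is close; with a huge one it is far off.
console.log(trueMean, bucketedAverage(samples, 1), bucketedAverage(samples, 1000));
```

With `bucketSize = 1` the bucketed estimate is 230.5; with `bucketSize = 1000` every sample falls into one bucket whose midpoint is 500, so the "average" becomes 500 even though min and max of the raw data are unchanged.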
Thanks for the report. I found a bug. I'll fix it.
I fixed the bug; I'll publish the new version to grafana.net.
|
2025-04-01T06:39:41.352804
| 2019-11-10T13:37:05
|
520601839
|
{
"authors": [
"flixoflax",
"mtgibbs"
],
"license": "WTFPL",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8733",
"repo": "mtgibbs/chartist-plugin-barlabels",
"url": "https://github.com/mtgibbs/chartist-plugin-barlabels/issues/7"
}
|
gharchive/issue
|
Question
Hey,
I was wondering if it is possible to position the barlabel above the bar when the value is positive and below the bar when the value is negative.
My code for the bar chart options
plugins: [ Chartist.plugins.ctBarLabels({ textAnchor: 'middle', labelClass: 'ct-label', showZeroLabels: true, labelOffset: { x: 0, y: -15 }, }) ],
positions the label of positive values with an offset of -15 slightly above the bars.
When I have negative values it looks like this:
I would like to put the labels below the bars; please consider adding example code on how to achieve this:
Best regards,
flix
Hmm... I don't think I ever handled this use-case. I'll have to sit down and work out a good solution. There's likely a way to do this without the plugin using CSS classes, but that kind of defeats the point. Is that data set private? Can you post the code that you're making that graph with, and its data set?
Hey, thanks for the quick response.
This is my code:
var data = {
  labels: ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"],
  series: [
    [6, 0, 1, -9, 2, -46, -2, 5, 2, 0, 6, 1],
    [1, 2, 6, -8, 1, 3, 1, 4, -1, 57, 1, 0]
  ]
};
var options = {
  textAnchor: 'middle',
  divisor: 4,
  seriesBarDistance: 10,
  chartPadding: { left: 40, top: 40, bottom: 40 },
  axisX: {
    showGrid: false,
    showLabel: true,
    labelClass: 'ct-label',
    labelOffset: { x: 0, y: 15 }
  },
  axisY: {
    showGrid: true,
    onlyInteger: true,
    labelClass: 'ct-label',
    labelOffset: { x: -15, y: 0 },
    labelInterpolationFnc: function(value) { return value + '€'; }
  },
  plugins: [
    Chartist.plugins.ctBarLabels({
      textAnchor: 'middle',
      labelClass: 'ct-label',
      showZeroLabels: true,
      labelOffset: { x: 0, y: -15 }
    })
  ]
};
new Chartist.Bar('#bar-chart', data, options).on('draw', function(data) {
  if (data.type == 'bar') {
    data.element.animate({
      y2: { dur: '0.4s', from: data.y1, to: data.y2 }
    });
  }
});
I could easily manage this problem if the dataset were consistent, but the dataset changes all the time, so I need a solution that fits my needs.
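One plugin-free direction can be sketched as follows: derive the vertical offset from the sign of each value and apply it when drawing labels. This is an illustrative sketch, not tested against Chartist; only the documented `draw` event is assumed, and the offset values simply mirror the -15 used above.

```javascript
// Sketch: choose the label's vertical offset from the sign of its value,
// so positive bars get labels above (-15) and negative bars below (+15).
function labelOffsetY(value, distance = 15) {
  return value >= 0 ? -distance : distance;
}

// Hypothetical hook-up inside a Chartist draw handler (browser-only, untested):
// chart.on('draw', function (data) {
//   if (data.type === 'bar') {
//     // position a text element at data.x2, data.y2 + labelOffsetY(data.value.y)
//   }
// });

console.log(labelOffsetY(6), labelOffsetY(-9)); // -15 15
```

Because the offset is computed per value, the approach keeps working when the dataset changes.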
Hey,
Were you able to solve my problem? ;)
Best regards
No, I'm sorry. I haven't had the time. Holidays are always tight for me. I wish you had caught me last month. I'll try to get this banged out, but the soonest I'd probably be able to take a look is two weekends from now. :(
Thanks for your chart data though, I'll get to it when I can.
Alright. Thanks :)
|
2025-04-01T06:39:41.545434
| 2016-11-06T09:43:41
|
187551409
|
{
"authors": [
"mado1987",
"raducrisan1",
"webleb"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8737",
"repo": "muaz-khan/RTCMultiConnection",
"url": "https://github.com/muaz-khan/RTCMultiConnection/issues/283"
}
|
gharchive/issue
|
ICE fails resolving relay when using desktop browser on one side and crosswalk or webview Android on the other side
I used the library in a project and it works fine between browsers on different networks (tried home network + 4G once). It even works between a desktop browser and a mobile browser (Chrome). Unfortunately I cannot make it work when I deploy the APK on Android. I use Crosswalk version 22. I deployed coturn on a DigitalOcean account and tried both turn and turns.
Looking at the web browser console on the desktop I see:
Browser to browser: the remote ICE candidates are displayed correctly; the last one, which is the one that matters, is the relay candidate. The connection was established.
Browser to Android WebView or Crosswalk: I see only one remote ICE candidate of type host, then a short pause, and then the connectivity attempt is dropped.
Three times out of many, many attempts it did work, and that was when the displayed remote candidate was a relay one. I wasn't able to keep it working consistently and cannot reproduce the success.
What shall I do to make it work?
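For anyone debugging a similar setup: a relay candidate only appears when the TURN server is reachable and the credentials are accepted. A minimal ICE configuration sketch follows; all hostnames, ports, and credentials are placeholders, not values from this thread.

```javascript
// Sketch of an RTCPeerConnection ICE configuration with STUN plus TURN over
// UDP/TCP and TURNS over TLS. All hostnames and credentials are placeholders.
const iceConfig = {
  iceServers: [
    { urls: "stun:stun.example.org:3478" },
    {
      urls: [
        "turn:turn.example.org:3478?transport=udp",
        "turn:turn.example.org:3478?transport=tcp",
        "turns:turn.example.org:5349",
      ],
      username: "exampleUser",
      credential: "exampleSecret",
    },
  ],
  // Forcing relay-only is a useful test: if the call fails with this set,
  // the TURN server itself is not relaying.
  iceTransportPolicy: "relay",
};

// In a browser this would be passed to the connection constructor:
// const pc = new RTCPeerConnection(iceConfig);
console.log(iceConfig.iceServers.length);
```

Setting `iceTransportPolicy: "relay"` forces all traffic through TURN, which quickly isolates whether the relay itself works before involving the Android side.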
Hi Muaz-Khan,
Were you able to create an Android app that either broadcasts or displays video content using RTCMultiConnection? If so, did it work when the two involved parties are in separate networks?
Hi @raducrisan1. The same thing happens to me. I have an Angular application and I also use Crosswalk 22. In the app console I sometimes see only host ICE candidates, and other times relay ones as well. Did you manage to find a solution?
Hi,
Yes. I created my own signaling server.
Cheers,
Radu
And what was the problem? Do you have a Skype ID, please? Thanks.
Hi,
How do I allow video calls through a WebView on Android and iOS?
|
2025-04-01T06:39:41.546891
| 2020-01-10T10:04:05
|
547984395
|
{
"authors": [
"Talbot3",
"valdrinnz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8738",
"repo": "muaz-khan/getStats",
"url": "https://github.com/muaz-khan/getStats/issues/27"
}
|
gharchive/issue
|
Is it possible to use this library in my KITE project?
How can I get stats such as bandwidth usage, packets lost, local/remote IP addresses and ports, and the type of connection, and use them in my KITE project?
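For reference, most of these figures are exposed through the standard `RTCPeerConnection.getStats()` report, which libraries like this one build on. A browser-independent sketch of summarizing such a report; the field names follow the W3C webrtc-stats identifiers, the entries below are mocked, and availability varies by browser.

```javascript
// Sketch: summarize a getStats() report. In a browser the entries come from
// (await pc.getStats()).forEach(...); here they are plain mocked objects.
function summarizeStats(statsEntries) {
  const summary = { packetsLost: 0, bytesReceived: 0, bytesSent: 0 };
  for (const s of statsEntries) {
    if (s.type === "inbound-rtp") {
      summary.packetsLost += s.packetsLost || 0;
      summary.bytesReceived += s.bytesReceived || 0;
    } else if (s.type === "outbound-rtp") {
      summary.bytesSent += s.bytesSent || 0;
    }
  }
  return summary;
}

// Mock entries shaped like webrtc-stats dictionaries:
const mockReport = [
  { type: "inbound-rtp", packetsLost: 3, bytesReceived: 12000 },
  { type: "outbound-rtp", bytesSent: 9000 },
];
console.log(summarizeStats(mockReport)); // { packetsLost: 3, bytesReceived: 12000, bytesSent: 9000 }
```

Local/remote addresses and the connection type live in `candidate-pair` and `local-candidate`/`remote-candidate` entries of the same report.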
Hi guys, I did it last year.
|
2025-04-01T06:39:41.559212
| 2023-02-21T23:26:33
|
1594226204
|
{
"authors": [
"AitorCantero",
"Joperezc"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8739",
"repo": "mucsci-students/2023sp-420-SNEK",
"url": "https://github.com/mucsci-students/2023sp-420-SNEK/issues/80"
}
|
gharchive/issue
|
Save and Load: New Json
Save and Load: New Json
[x] Save Parse New Json
Primary: @Joperezc
Secondary: @nrhoopes
Size: MEDIUM
Actual Size: MEDIUM
Actual Time: 1HR 30MIN
Updated JSON parse in the form:
{guessedWords, wordList, puzzleLetters, requiredLetter, currentPoints, maxPoints}
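A hypothetical instance of that shape, with all values invented for illustration (the actual field types in the project may differ):

```json
{
  "guessedWords": ["snake", "sneak"],
  "wordList": ["snake", "sneak", "sank", "keen"],
  "puzzleLetters": "aeknsv",
  "requiredLetter": "k",
  "currentPoints": 12,
  "maxPoints": 87
}
```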
|
2025-04-01T06:39:41.572766
| 2023-11-20T07:06:18
|
2001508819
|
{
"authors": [
"Aisuko",
"redstarxz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8740",
"repo": "mudler/LocalAI",
"url": "https://github.com/mudler/LocalAI/issues/1307"
}
|
gharchive/issue
|
local build run failed for rwkv backend
LocalAI version:
LocalAI version v1.40.0-38-g3e35b20 (3e35b20a0201db39ac7973c4e2b6b528f3d044b2)
Environment, CPU architecture, OS, and Version:
mac m1
Darwin Kernel Version 22.6.0: arm64
Describe the bug
Loading the rwkv model fails; the chat API request returns 500:
Request:
curl http://localhost:8989/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "outv4.bin",
"messages": [{"role": "user", "content": "How are you?"}],
"temperature": 0.9, "top_p": 0.8, "top_k": 80
}'
{"error":{"code":500,"message":"could not load model - all backends returned error: 17 errors occurred:\n\t* could not load model: rpc error: code = Canceled desc = \n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unknown desc = could not load model\n\t* could not load model: rpc error: code = Unknown desc = unable to load model\n\t* grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/stablediffusion. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n\t* grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/piper. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n\n","type":""}}%
To Reproduce
Expected behavior
The rwkv model loads successfully and the API returns success.
Logs
LOG :
3:00PM DBG [rwkv] Attempting to load
3:00PM DBG Loading model rwkv from outv4.bin
3:00PM DBG Loading model in memory from file: models/outv4.bin
3:00PM DBG Loading Model outv4.bin with gRPC (file: models/outv4.bin) (backend: rwkv): {backendString:rwkv model:outv4.bin threads:4 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0x1400046e1e0 externalBackends:map[] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
3:00PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/rwkv
3:00PM DBG GRPC Service for outv4.bin will be running at: '<IP_ADDRESS>:64113'
3:00PM DBG GRPC Service state dir: /var/folders/yj/wt9vbj1s34vb69qyywcgwxlh0000gn/T/go-processmanager2591975212
3:00PM DBG GRPC Service Started
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp <IP_ADDRESS>:64113: connect: connection refused"
3:00PM DBG GRPC(outv4.bin-<IP_ADDRESS>:64113): stderr 2023/11/20 15:00:07 gRPC Server listening at <IP_ADDRESS>:64113
3:00PM DBG GRPC Service Ready
3:00PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:} sizeCache:0 unknownFields:[] Model:outv4.bin ContextSize:512 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:4 LibrarySearchPath:/tmp/localai/backend_data/backend-assets/gpt4all RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:models/outv4.bin Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0}
3:00PM DBG [rwkv] Fails: could not load model: rpc error: code = Unknown desc = could not load model
Additional context
Hi @redstarxz, we have some examples of rwkv in https://github.com/go-skynet/model-gallery/blob/main/rwkv-raven-1b.yaml. Please check the example. I am not sure if outv4.bin's format is correct.
Thanks! Finally, I found the reason: for the rwkv backend, the token file name must match the model file name...
|
2025-04-01T06:39:41.588757
| 2024-04-16T09:54:53
|
2245629167
|
{
"authors": [
"Porco24",
"StarkSkywalker",
"mugiwara85"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8741",
"repo": "mugiwara85/CodeblockCustomizer",
"url": "https://github.com/mugiwara85/CodeblockCustomizer/issues/82"
}
|
gharchive/issue
|
Add new code syntax
I really hope that the plugin can support more syntax highlighting. I have a lot of HLSL code in my notes, but Obsidian does not support highlighting by default. I wonder if this plugin could be expanded to support more syntax in the future, such as supporting all the latest syntax from PrismJS.
Theoretically it could, but writing the grammar for syntax highlighting is very (and I mean really) complicated. It could take a lot of time to write the grammar for a single language, if you are not familiar with it. And I am not. If you want to write the grammar, I am happy to include it in the plugin.
@Porco24 Did you mean editing mode or reading mode? Or both? During the development of the current release, I had to do something with syntax highlighting. If reading mode were enough for you, that could probably be solved. But editing mode is really challenging. I will take a look; maybe there is a "relatively" easy way.
@mugiwara85 Hi, I mean both. If this feature were implemented, it would be very helpful to me. I hope to be able to download highlighting rules from the internet and import them into Obsidian.
Hi @StarkSkywalker, Some basics :)
The plugin does not add syntax highlighting to code blocks
Obsidian uses two separate methods for providing syntax highlighting. In editor mode, it uses CodeMirror 6, and in reading mode it uses Prism.js -> This is the reason that the syntax highlighting differs in editor mode and reading mode.
Prism.js supports the makefile language; that is why it works in reading mode, as shown below. And this is also the reason why it doesn't work in editor mode: CodeMirror 6 does not support it.
It is possible to create and add syntax highlighting for new languages in CodeMirror 6, but it is very complicated. You basically have to write the grammar for every language, and that is complicated and time-consuming.
But! You are in luck! For MakeFile there is a package, I can add. I'll check it out and contact you later.
Oh! Thank you so much, my friend! The joy is beyond words! Long live the spirit of the Internet! :)
I actually learned the basics first through your reply before I tried it, and it turned out just like you said it would.
@mugiwara85 Oh! Thank you so much, my friend! The joy is beyond words! Long live the spirit of the Internet! :)
I actually learned the basics first through your reply before I tried it, and it turned out just like you said it would in reading mode.
Don't worry about your English. It's good (I am also not a native English speaker). I just noticed that the package which adds Makefile syntax supports only basic syntax, but nevertheless it's more than nothing. I'll report back later if I find out how I can add it.
@StarkSkywalker Unfortunately, I have bad news for you. I just tried to install the package I mentioned last time, but it won't work. The reason is that, as it turned out, Obsidian uses CodeMirror 6, but not for everything. Specifically, for syntax highlighting it uses CodeMirror 5 modes, and the package is written for CodeMirror 6, so it won't work :( As far as I could tell, the package would only have added very basic syntax highlighting: basically comments, and that's it. It is also important to mention that makefile is apparently one of the hardest languages, as I couldn't find any grammar for it. Multiple people are asking, but there are just some custom implementations. That said, CodeMirror 5 syntax highlighting might work, and this "might" is really just a guess. Unfortunately, I couldn't find a list of which languages CodeMirror 6 supports, but CodeMirror 5 supports these: https://github.com/codemirror/codemirror5/tree/master/mode
Is there anything interesting for you there? I might be able to import that. (No guarantee.)
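To make the idea of a "mode" concrete: CodeMirror 5 modes are essentially rule-driven tokenizers. Below is a toy, self-contained sketch of the concept, not CodeMirror's actual API, and the makefile-flavoured rules are invented examples rather than a real grammar.

```javascript
// Toy sketch of how a simple-mode-style rule table drives highlighting.
// The rules below are invented illustrations, not a real makefile grammar.
const rules = [
  { regex: /^#.*/, token: "comment" },         // "# a comment"
  { regex: /^[\w.-]+(?=:)/, token: "target" }, // "all:" -> target name
  { regex: /^\t.*/, token: "recipe" },         // tab-indented command line
];

function tokenizeLine(line) {
  for (const rule of rules) {
    const m = line.match(rule.regex);
    if (m) return { token: rule.token, text: m[0] };
  }
  return { token: null, text: line };
}

console.log(tokenizeLine("# build everything").token); // comment
console.log(tokenizeLine("all: main.o").text);         // all
```

A real CodeMirror 5 mode wraps tables like this in a stream parser, which is why writing one per language is the time-consuming part.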
|
2025-04-01T06:39:41.603290
| 2018-07-04T10:17:26
|
338216743
|
{
"authors": [
"janhoeck",
"lukePeavey",
"markusgattol",
"nerdmax",
"oliviertassinari",
"rockmandash"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8742",
"repo": "mui-org/material-ui",
"url": "https://github.com/mui-org/material-ui/issues/12054"
}
|
gharchive/issue
|
[emotion] Unable to override TextField's label focus class.
First, thank you for this wonderful library.
Please see this minimal example
As you can see, the label color does not change on focus. Although my custom className comes later in the order, two combined classNames are stuck together, causing my custom className to be overridden.
Sorry for not pasting the code; I can't trigger focus when I copy.
I think this is a bug, please help, thanks.
@rockmandash Here is the right approach:
<TextField
id="pid"
label="test"
InputLabelProps={{
FormLabelClasses: {
root: css`
&.focused {
color: red;
}
`,
focused: "focused"
}
}}
/>
https://codesandbox.io/s/20mo9rkl0y
Maybe we should be adding an example with Emotion in the documentation? I have prepared this codesandbox. Do you want to work on it? https://codesandbox.io/s/yw93kl7y0j
And maybe you should use the theming of material-ui.
Actually, documenting react-emotion would be good too.
@oliviertassinari Wow, thank you for your fast reply!
It's working right now! Thank you so much.
Documenting the Emotion library is good. I don't know if I can work on it, but thank you so much for asking!
@oliviertassinari
Maybe we should be adding an example with Emotion in the documentation?
I think it makes sense to add a section to the style libraries guide on Emotion.
I'd be happy to work on this!
@lukePeavey Awesome. We already have the codesandbox: https://codesandbox.io/s/yw93kl7y0j.
We can add Emotion to the list https://material-ui.com/guides/interoperability/. By popularity, I would say after styled-components but before glamorous.
Looking at the code, does it mean one has to also use those JSS bits when he wants to use emotion? I'd rather not pull in JSS in addition to emotion but then maybe I'm missing something?
@markusgattol
Looking at the code, does it mean one has to also use those JSS bits when he wants to use emotion? I'd rather not pull in JSS in addition to emotion but then maybe I'm missing something?
What JSS bits are you referring to?
@lukePeavey import JssProvider from "react-jss/lib/JssProvider"; for example from link @oliviertassinari posted in his solution.
I see what you mean...
You need to configure the injection order so that Emotion styles are injected below the JSS styles. This is necessary to ensure that Emotion styles have higher priority than the default Material-UI styles (otherwise you need !important).
JSS is already included in your project as a dependency of Material-UI.
Reading through the docs for the last hour, I actually figured I'll not use Emotion because there's no need. JSS is included, as you said, and seems to be the first choice when it comes to branding, i.e. applying an individual style on top of components from Material-UI.
FYI, I've been using Preact and preact-material-components for a while, but the amount of work necessary and the ongoing breaking changes made me drop the entire stack and move back to React and Material-UI (the latter of which I hadn't used before, because I switched from React to Preact about 18 months ago for reasons of bundle size and speed, but that gap is becoming smaller and less important as time goes on).
@oliviertassinari should react-emotion be a separate top level section in the interop guide or sub section of emotion
@lukePeavey a sub section?
Just in case someone else finds this issue later:
I've created a component to simplify the process of overriding Material-UI's CSS.
You just need to wrap your whole application in this OverrideMaterialUICss component. This library is a wrapper component which only takes the children prop and renders it without any modification, just moving Material-UI's
|
2025-04-01T06:39:41.608965
| 2020-02-07T10:49:44
|
561562550
|
{
"authors": [
"hckhanh",
"oliviertassinari"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8743",
"repo": "mui-org/material-ui",
"url": "https://github.com/mui-org/material-ui/issues/19601"
}
|
gharchive/issue
|
Support "auto" for theme.spacing() - theme.spacing("auto")
[X] I have searched the issues of this repository and believe that this is not a duplicate.
Summary
I think it would be more convenient if the spacing function supported the "auto" value.
const useStyles = makeStyles((theme: Theme) => ({
searchBar: {
width: 1400,
margin: theme.spacing(1, "auto", 1, 4),
},
layoutButtons: {
marginRight: theme.spacing(4)
}
}));
Examples
In my case,
const useStyles = makeStyles((theme: Theme) => ({
searchBar: {
width: 1400,
marginRight: "auto",
marginBottom: theme.spacing(7),
marginLeft: theme.spacing(4),
},
layoutButtons: {
marginRight: theme.spacing(4)
}
}));
It can be shortened like this:
const useStyles = makeStyles((theme: Theme) => ({
searchBar: {
width: projectSearchBar,
margin: theme.spacing(0, "auto", 7, 4),
},
layoutButtons: {
marginRight: theme.spacing(4)
}
}));
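The requested pass-through behavior can be sketched with a tiny helper that multiplies numbers by the theme unit and leaves strings such as "auto" untouched. This is an illustration of the idea only, not MUI's actual implementation; an 8px default unit is assumed.

```javascript
// Sketch: a spacing() that resolves numbers against an 8px unit and lets
// string values such as "auto" pass through. Not MUI's real implementation.
function createSpacing(unit = 8) {
  return (...args) =>
    args
      .map((v) => (typeof v === "number" ? `${v * unit}px` : v))
      .join(" ");
}

const spacing = createSpacing();
console.log(spacing(0, "auto", 7, 4)); // "0px auto 56px 32px"
```

With this shape, `margin: spacing(0, "auto", 7, 4)` produces the same CSS as the four separate margin properties in the longer example.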
@hckhanh This sounds like a great idea. Do you want to work on it? :)
It would be a good opportunity to unify the behavior between
https://github.com/mui-org/material-ui/blob/07b725e54cdec560dab06f5a662d2869eca9ffb2/packages/material-ui-system/src/spacing.js#L77-L116
and
https://github.com/mui-org/material-ui/blob/07b725e54cdec560dab06f5a662d2869eca9ffb2/packages/material-ui/src/styles/createSpacing.js#L3-L34
Hi @oliviertassinari, I will make a PR for this.
We can support any string.
|
2025-04-01T06:39:41.612955
| 2021-02-09T21:08:15
|
804925067
|
{
"authors": [
"Andrew5569",
"maliboo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8744",
"repo": "mui-org/material-ui",
"url": "https://github.com/mui-org/material-ui/issues/24849"
}
|
gharchive/issue
|
Add inputRef to Select
[x] I have searched the issues of this repository and believe that this is not a duplicate.
Summary
It would be nice to have inputRef on Select so that it could easily work with libraries like react-hook-form
Examples
import React from "react";
import { useForm } from "react-hook-form";
import { Select } from "@material-ui/core";
const MySelect = () => {
const { register, handleSubmit } = useForm();
return (
<form onSubmit={handleSubmit(console.log)}> {/* should log { mySelect: "1" } on submit */}
<Select native inputRef={register} name="mySelect">
<option value="1">1</option>
<option value="2">2</option>
<option value="3">3</option>
</Select>
</form>
);
};
export default MySelect;
Motivation
As I mentioned, this would make it easier to work with libraries like react-hook-form.
Use it with <Controller> like this: https://codesandbox.io/s/rhf-mui-select-forked-7xl8z?file=/src/index.js:802-826
<Controller
render={({ ref, onChange }) => (
<Select inputRef={ref} onChange={onChange}>
<MenuItem value="">None</MenuItem>
// (...)
|
2025-04-01T06:39:41.618567
| 2021-09-15T14:40:18
|
997166737
|
{
"authors": [
"mnajdova",
"oliviertassinari",
"yleflour"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8745",
"repo": "mui-org/material-ui",
"url": "https://github.com/mui-org/material-ui/issues/28364"
}
|
gharchive/issue
|
[@mui/styled-engine-sc] The checks breaks styled-components' API
The current implementation of styled-engine-sc breaks styled-components' API when not in production
[x] The issue is present in the latest release.
[x] I have searched the issues of this repository and believe that this is not a duplicate.
Current Behavior
styled(MyComponent).attrs({})`` // Error, .attrs is not defined
Expected Behavior
According to the styled-components' API, this should be allowed
Steps to Reproduce
import React from "react";
import { AppBar as MuiAppBar } from "@mui/material";
import styled from "@mui/styled-engine-sc";
const AppBar = styled(MuiAppBar).attrs({
position: "static",
})`
box-shadow: none;
`;
If NODE_ENV=="production", this works
If NODE_ENV!="production", attrs is undefined
Cause
In @mui/styled-engine-sc/index.js there is a specific check done against the function's parameter. But this overrides the default styledFactory object, breaking the styled-components API.
Recommended action
I'm all for extra checks, but until there is a better way to do this (sorry, can't think of one right now), this condition should be disabled
We don't support .attrs(). I don't think that we should either, to maximize the interoperability between the different engines (not supported by emotion, goober, etc.). Could importing from styled-components directly if you really need this API work?
However, not supporting the behavior in prod, like in dev sounds better for the DX: no surprises.
Agreed, we don't want to support the different APIs. I will create a PR ensuring the same behavior is persisted in prod mode too.
|
2025-04-01T06:39:41.620905
| 2018-09-27T19:38:57
|
364624291
|
{
"authors": [
"colespencer1453",
"spirosikmd"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8746",
"repo": "mui-org/material-ui",
"url": "https://github.com/mui-org/material-ui/pull/13023"
}
|
gharchive/pull-request
|
[StepConnector] Customize connector based on internal states
Closes #13010
This can be accomplished with the following example:
<Stepper
connector={
<StepConnector
classes={{ lineActive: classes.lineActive, lineCompleted: classes.lineCompleted }}
/>
}
>
{/* ... */}
</Stepper>
or using createMuiTheme:
createMuiTheme({
overrides: {
MuiStepConnector: {
completed: {
'& span': {
borderColor: indigo[500],
},
},
},
},
});
Before
After
@oliviertassinari Would it be possible to include an error class here as well?
|
2025-04-01T06:39:41.622717
| 2024-04-19T22:28:22
|
2254126131
|
{
"authors": [
"oliviertassinari"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8747",
"repo": "mui/base-ui",
"url": "https://github.com/mui/base-ui/pull/333"
}
|
gharchive/pull-request
|
[core] Update monorepo
Propagate https://github.com/mui/material-ui/pull/41901
Preview: https://deploy-preview-333--base-ui.netlify.app/base-ui/getting-started/
On hold, waiting for #326 to be merged
|
2025-04-01T06:39:41.628883
| 2022-12-30T14:10:40
|
1514550568
|
{
"authors": [
"behrangsa",
"marciofaria-git",
"mnajdova"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8748",
"repo": "mui/material-ui",
"url": "https://github.com/mui/material-ui/issues/35673"
}
|
gharchive/issue
|
Syntax error: "@next/font" requires SWC although Babel is being used due to a custom babel config being present.
Duplicates
[X] I have searched the existing issues
Latest version
[X] I have tested the latest version
Steps to reproduce
Link to live example:
Steps:
Open the example on this page in StackBlitz:
Current behavior
npm install && npx next dev
warn preInstall No description field
warn preInstall No repository field
warn preInstall No license field
[1/4] Resolving dependencies
Completed in 0.146s
[2/4] Fetching dependencies
info pruneDeps Excluding 8 dependencies. For more information use `--verbose`.
Completed in 1.945s
[3/4] Linking dependencies
Completed in 3.256s
info security We found `install` scripts which turbo skips for security reasons. For more information see
https://turbo.sh/install-scripts.
└─ <EMAIL_ADDRESS>
success Saved lockfile "package-lock.json"
success Updated "package.json"
success Install finished in 5.417s
ready - started server on <IP_ADDRESS>:3000, url: http://localhost:3000
info - Disabled SWC as replacement for Babel because of custom Babel configuration ".babelrc" https://nextjs.org/docs/messages/swc-disabled
info - Using external babel configuration from /home/projects/xoribkgmv.github/.babelrc
error - ./src/theme.ts:1:1
Syntax error: "@next/font" requires SWC although Babel is being used due to a custom babel config being present.
Read more: https://nextjs.org/docs/messages/babel-font-loader-conflict
^C
~/projects/xoribkgmv.github 5m 21s
Expected behavior
No SyntaxError:
error - ./src/theme.ts:1:1
Syntax error: "@next/font" requires SWC although Babel is being used due to a custom babel config being present.
Read more: https://nextjs.org/docs/messages/babel-font-loader-conflict
Context
No response
Your environment
npx @mui/envinfo
npx @mui/envinfo
success Install finished in 3.908s
System:
OS: Linux 5.0 undefined
Binaries:
Node: 16.14.2 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 7.17.0 - /usr/local/bin/npm
Browsers:
Chrome: Not Found
Firefox: Not Found
npmPackages:
@emotion/react: latest => 11.10.5
@emotion/styled: latest => 11.10.5
@mui/base: 5.0.0-alpha.112
@mui/core-downloads-tracker: 5.11.2
@mui/icons-material: latest => 5.11.0
@mui/material: latest => 5.11.2
@mui/private-theming: 5.11.2
@mui/styled-engine: 5.11.0
@mui/system: 5.11.2
@mui/types: 7.2.3
@mui/utils: 5.11.2
@types/react: latest => 18.0.26
react: latest => 18.2.0
react-dom: latest => 18.2.0
typescript: latest => 4.9.4
```
</details>
The .babelrc file does not exist in the folder; it's strange that it is there when opened with StackBlitz. It works as expected in CodeSandbox, though. It shouldn't happen locally as this file does not exist.
Hi, if you are using Next.js 13, some features like @next/font use SWC to compile, so you need to configure SWC instead of Babel.
SWC documentation: https://swc.rs/docs/getting-started
If you share your .babelrc, I can help migrate it to SWC.
|
2025-04-01T06:39:41.869587
| 2024-07-04T15:36:51
|
2391117163
|
{
"authors": [
"GermanAizek",
"matttbe",
"ossama-othman"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8749",
"repo": "multipath-tcp/mptcpd",
"url": "https://github.com/multipath-tcp/mptcpd/pull/294"
}
|
gharchive/pull-request
|
Perhaps a more correct calculation sizeof for memcpy()
@matttbe, @mjmartineau,
am I correct in assuming that in this code memcpy uses a different size, making an underflow or overflow possible?
I don't know the code perfectly, so I'm asking you. If I made a mistake, cancel the PR changes.
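The general pitfall being asked about can be sketched like this (a generic illustration, not the actual mptcpd code — the struct names here are hypothetical stand-ins for IPv4/IPv6 address types):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical 4-byte and 16-byte address types, standing in for
   IPv4/IPv6 addresses; NOT the actual mptcpd structures. */
struct addr4  { unsigned char b[4];  };
struct addr16 { unsigned char b[16]; };

/* When copying into a larger destination, the memcpy size must match
   the SOURCE object: sizeof(*dst) here would read 16 bytes from a
   4-byte object (an over-read), while sizeof(*src) copies exactly 4. */
static void copy4_into_16(struct addr16 *dst, const struct addr4 *src)
{
    memset(dst, 0, sizeof *dst);        /* zero the full destination   */
    memcpy(dst, src, sizeof *src);      /* copy only the source's size */
}
```

For same-size copies (IPv6 into IPv6) both sizeof expressions coincide, so there is no bug to fix in that case — which is why such a change can't be applied blindly everywhere.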
Thanks for the PR, too! It's always good to have more eyes on the code. Much appreciated!
@GermanAizek thank you for this PR.
@ossama-othman thank you for the complete reply! I agree with you, we cannot replace the sizeof() for IPv6.
I guess we can then close this PR.
|
2025-04-01T06:39:41.926933
| 2023-10-08T19:01:26
|
1932008832
|
{
"authors": [
"GaneshSarla",
"munterkalmsteiner"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8750",
"repo": "munterkalmsteiner/core",
"url": "https://github.com/munterkalmsteiner/core/pull/167"
}
|
gharchive/pull-request
|
Remove assignment to the variable as the value is never used(1)
Breaking change
Proposed change
Type of change
[ ] Dependency upgrade
[ ] Bugfix (non-breaking change which fixes an issue)
[ ] New integration (thank you!)
[ ] New feature (which adds functionality to an existing integration)
[ ] Deprecation (breaking change to happen in the future)
[ ] Breaking change (fix/feature causing existing functionality to break)
[ ] Code quality improvements to existing code or addition of tests
Additional information
This PR fixes or closes issue: fixes #
This PR is related to issue:
Link to documentation pull request:
Checklist
[ ] The code change is tested and works locally.
[ ] Local tests pass. Your PR cannot be merged unless tests pass
[ ] There is no commented out code in this PR.
[ ] I have followed the development checklist
[ ] I have followed the perfect PR recommendations
[ ] The code has been formatted using Black (black --fast homeassistant tests)
[ ] Tests have been added to verify that the new code works.
If user exposed functionality or configuration variables are added/changed:
[ ] Documentation added/updated for www.home-assistant.io
If the code communicates with devices, web services, or third-party tools:
[ ] The manifest file has all fields filled out correctly.
Updated and included derived files by running: python3 -m script.hassfest.
[ ] New or updated dependencies have been added to requirements_all.txt.
Updated by running python3 -m script.gen_requirements_all.
[ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
[ ] Untested files have been added to .coveragerc.
To help with the load of incoming pull requests:
[ ] I have reviewed two other open pull requests in this repository.
Motivation: Removing dead stores, where a value is assigned to a variable but never used, is crucial for writing high-quality code. It not only makes the code easier to read and maintain but also ensures efficient use of resources. When unused variables clutter the code, it can confuse developers and make debugging more challenging. Additionally, performing calculations or retrieving values that are never used can lead to wasteful resource consumption, affecting the program's performance. Furthermore, dead stores may indicate a logic error, potentially causing unexpected behavior. By getting rid of these unused variables, developers can create more efficient and error-resistant code, ultimately leading to better software quality. In this particular code, I removed "device = entry.data[CONF_DEVICE]" as it is not used.
looks good
|
2025-04-01T06:39:42.031485
| 2022-05-20T20:41:59
|
1243598826
|
{
"authors": [
"chrismdann",
"marcelveldt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8751",
"repo": "music-assistant/hass-music-assistant",
"url": "https://github.com/music-assistant/hass-music-assistant/issues/206"
}
|
gharchive/issue
|
Heos by Marantz and Denon
It doesn't appear to work with HEOS.
What is not working ? Please define your steps...
Anything helpful in your logs that give me a clue ?
I had a quick peek at the source code of the Heos integration in HA, and that should theoretically work fine with Music Assistant. So it would really help if you provide a step-by-step walkthrough of what you did and where it went wrong, and whether you see any errors anywhere.
@chrismdann can you share some more info about what is not working? Otherwise I'll have to close this report.
closed due to no response
|
2025-04-01T06:39:42.042269
| 2022-08-25T11:07:35
|
1350725795
|
{
"authors": [
"OzGav",
"SeByDocKy",
"marcelveldt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8752",
"repo": "music-assistant/hass-music-assistant",
"url": "https://github.com/music-assistant/hass-music-assistant/issues/880"
}
|
gharchive/issue
|
Can't update MA from HACs
What version of Music Assistant has the issue?
2022.7.2
The problem
After analysing the logs in HACS... it seems that I am trying without success to update MA with patch-2022.7.3. It seems this file is no longer present on GitHub. Is it possible to have it available again in order to get a normal MA update?
How to reproduce
Just try to click on the update button
Relevant log output
022-08-24 15:53:34.883 ERROR (MainThread) [custom_components.hacs] <Integration music-assistant/hass-music-assistant> GitHub returned 404 for https://api.github.com/repos/music-assistant/hass-music-assistant/git/trees/patch-2022.7.3
"pushed_at": "2022-08-23T21:18:45",
"releases": true,
"selected_tag": "patch-2022.7.3",
"version_installed": "2022.7.2",
"last_fetched":<PHONE_NUMBER>.722832
Additional information
No response
What version of Home Assistant Core are your running
2022.8.6
What type of installation are you running?
Home Assistant OS
On what type of hardware are you running?
ODROID
You need to update MA to 2022.8.x as you are running HA 2022.8.6. That option should be in HACS?
No, I can't update anything... that's been the main problem for more than a month...
When I click on update I get this error in HACS pointing to the missing file...
No, we can't bring that file back. Besides that it is an old version too.
Can't you just remove Music Assistant completely from HA and HACS and reinstall ?
Or press the button "Update information" first ?
No, we can't bring that file back. Besides that it is an old version too. Can't you just remove Music Assistant completely from HA and HACS and reinstall ? Or press the button "Update information" first ?
I tried all your options... nothing worked, unfortunately... :(
Are you on HA version 2022.8.x ?
I don't know — the patch file is mentioned in ./storage/hacs.repositories... it is impossible to update it manually to another version...
Maybe try to update it manually ?
Download the zipfile for the latest release: https://github.com/music-assistant/hass-music-assistant/releases/download/2022.8.4/mass.zip
unpack it in the custom_components folder, overwriting the existing content
I already tried that option too :( :(
In that case the only option is to remove HACS completely and reinstall. Also delete all the HACS related files and folders.
We can't help you on this I'm afraid as it is strictly taken an HACS issue and not a MA issue.
I see... in the HACS issue, they said it's an MA issue (deleted file)...
It is not too uncommon that a release is replaced. Bad things happen and code needs to be guarded for that. I find it kind of strange that the whole thing is unrecoverable crashed due to the fact that we deleted a release 10 minutes after it was published (because it was faulty). I really do not understand why HACS keeps grabbing a tag that does not exists, in my opinion that is a bug in HACS.
This is why we have backups so we can restore after something bad happened ;-)
I have recreated that missing tag but I don't think that will fix the issue.
OK... I had to delete HACS, all the associated ./storage files, and also all integrations/frontend, one by one :( :( ...
But at least I got MA updated... so now for sure I will wait a while before the next MA update...
|
2025-04-01T06:39:42.051962
| 2015-02-26T08:52:47
|
59039349
|
{
"authors": [
"cnstoll",
"davidbarker",
"mikeswanson",
"yenchenlin1994"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8753",
"repo": "mutualmobile/MMWormhole",
"url": "https://github.com/mutualmobile/MMWormhole/issues/16"
}
|
gharchive/issue
|
Method "listenForMessageWithIdentifier" didn't work if I launch watch app from glance
Here is a simple example project:
https://github.com/yenchenlin1994/WormholeBugExample
(You have to change App groups to your own groups)
This app simply uses an NSTimer to update a counter label in the phone interface,
and then uses MMWormhole to try to sync the label's value to the watch interface.
It works totally fine when I choose the WormholeBugExample WatchKit App scheme
to run, and then manually open the phone app in the iOS simulator.
However, things change if I switch the scheme to Glance - WormholeBugExample WatchKit App and then follow the same process as above.
After I tap on the glance to launch my watch app, the label on the watch's interface doesn't correctly sync with the label on the phone interface.
In fact, it sometimes stops listening for the message at the beginning, or when the counter reaches a particular value (e.g. when it counts to 8).
How can I fix it?
You should move everything in awakeWithContext into willActivate. That'll help make sure the wormhole is active. It's also the best way to ensure that UI updates only happen when the interface controller is active. It's also a good idea to stop listening and/or nil out the wormhole in the didDeactivate method of the interface controller.
I've tried your solution,
but the problems I met remain the same.
I seem to be having similar issues when activating my WatchKit app from the glance. The WatchKit app works perfectly when I run it directly from Xcode, but if I run the glance scheme and then tap on it to open the WatchKit app, my callbacks don't run.
This has been discussed in the Apple Forums, and based on a response from an Apple employee, it's possible that it's only a simulator bug: https://devforums.apple.com/message/1111028
Yeah. My understanding is that it's an unfortunate (and fairly annoying until we have hardware) simulator bug.
|
2025-04-01T06:39:42.072319
| 2015-01-28T10:48:08
|
55740598
|
{
"authors": [
"cheft",
"tipiirai"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8754",
"repo": "muut/riotjs",
"url": "https://github.com/muut/riotjs/issues/242"
}
|
gharchive/issue
|
No have tag instance access api
grid and item is tag, nesting in html:
<grid>
<item field="name" label="εη§°" />
</grid>
grid cannot access the item instance, such as grid.children[0]
item cannot access the grid instance, such as item.parent, but item.parent is null
item.parent isn't null only when 'item' is defined inside 'grid'
children property is now implemented on v2.0.8.
on the nested tag you'll have access to parent property.
resolved on v2.0.8
|
2025-04-01T06:39:42.129516
| 2019-11-20T10:38:27
|
525718339
|
{
"authors": [
"mviereck",
"phil294"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8755",
"repo": "mviereck/x11docker",
"url": "https://github.com/mviereck/x11docker/issues/196"
}
|
gharchive/issue
|
Timeout waiting for log entry "containerrootrc=ready"
Hi! Not really sure what's happening here. It happens with any of xephyr, xpra, nxagent and hostdisplay, as if the docker image doesn't even get started. Here is a verbose log output upload: https://waritschlager.de/share/922650f43f77a6fe.txt
This seems to have broken with an x11docker update in the last few weeks/months, as I did not have any such problems beforehand. I can cycle through recent commits to find the culprit tomorrow.
Until then, best regards
Thank you for the bug report!
I found this error message:
/tmp/containerrootrc: 11: /tmp/containerrootrc: cannot create /x11docker/container.log: Permission denied
It is this line in containerrootrc:
exec 1>>/x11docker/container.log 2>&1
It only serves to redirect some log output. I'll think about a possible alternative.
The error message is confusing because the file exists for sure.
I got around to check the commits already:
bcb791cc9d581e1cb404761c52127ca63d26d110 is when the issue arose. The log says x11docker[14:18:12,517]: Waiting since 2s for /x11docker/containerrootrc.ready to have content, will wait up to 32000 seconds.. Do you need any more verbose logs from around this commit?
Part of the problem (seeing that this issue hasn't been opened yet) may be that I set the permissions on ~/.local and most other folders in $HOME to drwx------ 6 phi phi 4.0K Oct 21 14:08 .local. I think that makes the most sense, as the contents of this folder aren't any other user's (be it system or human) business. This is what had me open another issue once already (#131), where you fixed it by leveraging a /tmp file.
I also found that running with --cap-default fixes it. If you decide this is a wont-fix, I could personally live with using this flag, but I assume a proper error message would be great, at least.
In the end, I am not sure what exactly is causing this but I hope this will help you. Cheers
It also did not work after a chmod -R o+rX ~/.cache; however, one warning came up during this: chmod: changing permissions of '.cache/x11docker/x11docker-xfce-xfce4-terminal-55668200901/share/tini': Operation not permitted.
This is what the respective share folder looks like:
-rw-r--r-- 1 phi phi 3054 Nov 20 14:14 container.CMD.sh
-rw-r--r-- 1 phi phi 4412 Nov 20 14:14 containerrootrc
-rw-r--r-- 1 phi phi 4 Nov 20 14:14 container.user
-rw-rw-rw- 1 phi phi 293 Nov 20 14:14 environment
-rw-r--r-- 1 phi phi 2 Nov 20 14:14 exitcode
-rw-r--r-- 1 phi phi 0 Nov 20 14:14 journalctl.log
prw-rw-rw- 1 phi phi 0 Nov 20 14:14 message.fifo
-rw-rw-rw- 1 phi phi 704 Nov 20 14:14 stderr
-rw-rw-rw- 1 phi phi 0 Nov 20 14:14 stdout
-rw-r--r-- 1 phi phi 51 Nov 20 14:14 timetosaygoodbye
prw-rw-r-- 1 phi phi 0 Nov 20 14:14 timetosaygoodbye.fifo
-rwxr-xr-x 1 root root 0 Nov 20 14:14 tini
-rw-r--r-- 1 phi phi 69315 Nov 20 14:14 x11docker.log
-rw-r--r-- 1 phi phi 104 Nov 20 14:14 Xclientcookie
which looks fine I think (?)
Thanks for your investigation!
Part of the problem (seeing that this issue hasnt been opened yet) may be that I set the permissions on ~/.local and most other folders in $HOME to drwx------ 700. I think it makes the most sense as the contents of this folder arent any other user's business, be it system or human. This is what had had me open another issue once already (#131), where you fixed it by leveraging a /tmp file.
Ah, yes! I already thought that this bug looks somehow familar. So I reintroduced it.
As a first fix that maybe helps I've changed the file permission of x11docker.log to 666. I doubt that this is enough, but it would be nice if you try it out.
I also found that running with --cap-default fixes it.
The point is that x11docker runs the container with docker option --cap-drop=ALL. That disallows a lot of root privileges. But containerrootrc tries to access x11docker.log as root in container and cannot supersede your 700 settings on host.
Adding the capability to supersede access permissions should solve the issue (compare man capabilities):
x11docker -- --cap-add DAC_OVERRIDE -- x11docker/xfce xfce4-terminal
I am currently not sure what would be the best solution.
Always adding capability DAC_OVERRIDE regardless of the setup. Easy, but a security impact on systems that would not need it.
Somehow checking for 700 and only adding DAC_OVERRIDE in that case. Adds some complexity in the code.
Avoiding container root access on host files. Some work, but probably the cleanest solution.
I tried using the latest commit, still no change.
Nope, DAC_OVERRIDE on its own does not cut it, only cap-default.
I am not very sure about the details of this. But options 1 and 2 do not sound very elegant. And it would make sense to prevent root-owned files in the host environment. Those are always a bummer when it comes to archives, searches, deletions, etc., like when you want to back up your home folder but get errors somewhere deep nested inside .local/share/x11docker because it contains root files. Option 3 makes the most sense to me, but this is only intuition.
chmod -R o+rX ~/.cache
Sorry for asking, but what exactly does that do?
No worries: it means recursively giving read access (+4) to "other" users (not owner or group) to everything inside .cache. X stands for folder execution rights (+1), so they will be able to list folder contents, but it does not apply to files (so no scripts can be run, etc.). + means "adding permissions", so if any were present beforehand, those are combined. To set them exactly, use =.
Ok wait, the recent commit might have solved this (but everything is kinda laggy now). Just so you dont spend too much unnecessary digging here. I'll get back to you later.
Ok wait, the recent commit might have solved this (but everything is kinda laggy now).
Good! Maybe there is another timeout now. Please give me a fresh logfile and I'll look through it.
And it would make sense to prevent root files in the host environment. Those are always a bummer when it comes to archives, searches, deletions etc.
x11docker already avoids root files in HOME. root only appends lines to existing files. That does not change the file ownership. But this seems to be an issue with 700 for ~/.cache
I am surprised why you have a root-owned tini in the share folder:
-rwxr-xr-x 1 root root 0 Nov 20 14:14 tini
It should not be there. It is shared with the container with:
--volume '/usr/bin/docker-init':'/usr/local/bin/tini':ro \
In my own x11docker cache tini does not appear.
With latest commit:
Started with --cap-default from the command line: works fine, but is slow. Definitely laggier than before; the vscode editor is not really usable.
Started normally: Does not start (described timeout)
Started from a noninteractive shell, with no special options (no cap-default), using a xfce hotkey shortcut on the host system: Works fine, but also slow. I did not expect that. It is the opposite behavior of a past issue #177 where it was the shortcut action that failed. - I checked the user from such a shortcut job, it is also the normal one, "phi". Maybe a tty issue again?
Started as above via xfce hotkey, but with --cap-default: Same behavior
With previous commit, before your latest fix (at 25b8034f24238eed267784def8f2e33de5a489d1), everything seems the same as above. So nothing changed.
Here's a new verbose log file as requested, from latest commit's ./x11docker -v --xephyr x11docker/xfce xfce4-terminal, run from terminal: https://waritschlager.de/share/f43f639000f6f3c6.txt
Nothing seems to have changed much
I am surprised why you have a root-owned tini in the share folder
I thought I'd try to reproduce it by deleting the respective cache folder. But now when I run it (with cap-default), no folder is recreated inside .cache/x11docker anymore, so I cannot give you any more details.
Okay, regarding the lagginess: This might not be related to this issue at all, but I'll post the info here anyway:
The lags were introduced with 5a35b8107ca043d2f0dd8a2fafe97157164bbc5f. I only encounter them with Xephyr. The application I tested this with is xfce, with vscode as the application running inside. I don't know what is so special about it, but thunar, for instance, was not lagging. Please tell me if you want more info here.
While digging, I also realized that nxagent and hostdisplay seem to behave more fluently than xephyr and xpra. I don't know how nxagent works, but with hostdisplay it makes sense, as no key presses are proxied.
Thank you for your detailed investigation!
The lags got introduced with 5a35b81. Note that this is an earlier one than the one that broke everything described above. I only encounter those lags with Xephyr. The application I tested this with is xfce, with vscode as the application running inside. I dont know what is so special about it, but thunar for instance was not lagging.
Finally an issue that I can fix easily! In the commit you found I've enabled Xephyr option -glamor. From Xephyr -help:
-glamor Enable 2D acceleration using glamor
glamor should help to speed up some things, but obviously it can be problematic. I've disabled it yet. --xephyr should not be laggy anymore.
While digging, I also realized that nxagent and hostdisplay seem to behave more fluent than xephyr and xpra
--hostdisplay is the fastest option because no additional X server is involved. Unfortunately it costs some container isolation. A malicious application could access your host system.
--xpra is the slowest option. But it is a preferred default of x11docker because it provides some nice features, e.g. graphical clipboard support.
Furthermore it is the only seamless solution for --gpu beside the insecure --hostdisplay.
However, if security/container isolation is not a concern, --hostdisplay --gpu is the fastest setup with the lowest overhead.
I thought I'd try to reproduce it by deleting the respective cache folder. But now when I run it (with cap-default), no folder is recreated inside .cache/x11docker anymore, so I cannot give you any more details.
The cache folder only exists while the container is running. If you don't have a cache folder while x11docker is running, something very basic is going wrong.
I just tried to reproduce the 700 issue with chmod -R 700 ~/.cache/x11docker.
Surprisingly I have no issues at all and cannot reproduce your issue. x11docker just starts up well. I'll look closer how to reproduce it.
Huh, sorry - I don't know how the version mismatch happened. I redid it with the latest commit from yesterday, for sure this time: https://waritschlager.de/share/32492a8420106529.txt
That's odd. I doubt that it is a tty issue again. I would see that in the log.
Maybe you have two x11docker on your system, e.g. one in a cloned git folder and one in /usr/bin?
No, this is not the case. xfce shortcut and interactive bash definitely behave differently (one working, the other not). I removed all the times and pids from it with some wild regex and skipped display numbers etc. and below are the notable log output differences when run as xfce shortcut. As expected, the only real difference seems to be that the container.log permission error is gone.
8a9
> DEBUGNOTE: check_host(): Command tty failed. Guess if running on console: no
45a47
> DEBUGNOTE: check_host(): Command tty failed. Guess if running on console: no
124c126
< Running in a terminal: yes
---
> Running in a terminal: no
371,377d372
< grep -x -q 'x11docker/xfce' < /home/phi/.cache/x11docker/docker.imagelist || grep -x -q 'x11docker/xfce:latest' < /home/phi/.cache/x11docker/docker.imagelist || {
< docker inspect x11docker/xfce >>/home/phi/.cache/x11docker/x11docker-xfce-/share/container.log 2>&1 || {
< echo 'Image x11docker/xfce not found locally.' >&2
< echo 'Do you want to pull it from docker hub?' >&2
< askyesno && Dockerpull=yes || error "Image 'x11docker/xfce' not available locally and not pulled from docker hub."
< }
< }
382a378
> env DISPLAY=':0.0' DBUS_SESSION_BUS_ADDRESS='unix:path=/run/user/1000/bus' bash -c "notify-send 'x11docker: Pulling image x11docker/xfce from docker hub'" 2>/dev/null &
1475c1471,1495
< /tmp/containerrootrc: 11: /tmp/containerrootrc: cannot create /x11docker/container.log: Permission denied
---
> mkdir: created directory '/var/run/dbus'
> mkdir: created directory '/tmp/.ICE-unix'
> mkdir: created directory '/tmp/.X11-unix'
> mkdir: created directory '/tmp/.font-unix'
> srwxrwxrwx 1 1000 1001 0 Nov 23 08:10 /X113
>
> ==> /home/phi/.cache/x11docker/x11docker-xfce-/message.log <==
> DEBUGNOTE: Running containerrootrc: Setup as root in container
>
> ==> /home/phi/.cache/x11docker/x11docker-xfce-/share/container.log <==
> lrwxrwxrwx 1 root root 5 Nov 23 08:10 /tmp -> /X113
> mkdir: created directory '/fakehome'
>
> ==> /home/phi/.cache/x11docker/x11docker-xfce-/message.log <==
> DEBUGNOTE: containerrootrc: Container libc: glibc
>
> ==> /home/phi/.cache/x11docker/x11docker-xfce-/share/container.log <==
> removed '/etc/shadow'
>
> ==> /home/phi/.cache/x11docker/x11docker-xfce-/message.log <==
> x11docker: Container system ID: debian
>
>
> ==> /home/phi/.cache/x11docker/x11docker-xfce-/share/container.log <==
> chown: changing ownership of '/tmp/chowntestfile': Operation not permitted
1551d1572
< DEBUGNOTE: waitforlogentry(): tailstderr: Found log entry "x11docker=ready" in store.info.
1553,1592c1574,1586
< DEBUGNOTE: waitforlogentry(): containerrc: Waiting since 11s for log entry "containerrootrc=ready" in store.info
< DEBUGNOTE: waitforlogentry(): containerrc: Waiting since 12s for log entry "containerrootrc=ready" in store.info
...
---
> DEBUGNOTE: waitforlogentry(): tailstderr: Found log entry "x11docker=ready" in store.info.
> DEBUGNOTE: waitforlogentry(): containerrc: Found log entry "containerrootrc=ready" in store.info.
So it fails here: https://github.com/mviereck/x11docker/blob/master/x11docker#L4282. $(convertpath share $Containerlogfile), which resolves to /x11docker/container.log, isn't accessible, because /x11docker itself cannot be traversed into. I put an ls -l / at that position and here is the output:
total 16
srwxrwxrwx 1 1000 1001 0 Nov 23 08:57 X120
drwxr-xr-x 2 root root 4096 Jul 14 08:49 bin
drwxr-xr-x 2 root root 6 May 13 2019 boot
drwxr-xr-x 5 root root 360 Nov 23 08:57 dev
drwxr-xr-x 40 root root 4096 Nov 23 08:57 etc
drwxr-xr-x 2 root root 6 May 13 2019 home
drwxr-xr-x 8 root root 107 Jul 14 08:49 lib
drwxr-xr-x 2 root root 34 Jul 8 03:30 lib64
drwxr-xr-x 2 root root 6 Jul 8 03:30 media
drwxr-xr-x 2 root root 6 Jul 8 03:30 mnt
drwxr-xr-x 2 root root 6 Jul 8 03:30 opt
dr-xr-xr-x 374 root root 0 Nov 23 08:57 proc
drwx------ 2 root root 37 Jul 8 03:30 root
drwxr-xr-x 3 root root 60 Nov 23 08:57 run
drwxr-xr-x 2 root root 4096 Jul 8 03:30 sbin
drwxr-xr-x 2 root root 6 Jul 8 03:30 srv
dr-xr-xr-x 13 root root 0 Nov 23 08:57 sys
drwxrwxrwt 2 root root 29 Nov 23 08:57 tmp
drwxr-xr-x 10 root root 105 Jul 8 03:30 usr
drwxr-xr-x 11 root root 139 Jul 8 03:30 var
drwxrwx--- 2 1000 1001 4096 Nov 23 08:57 x11docker
/tmp
ls: cannot access '/x11docker/container.log': Permission denied
and $PWD is /tmp, $USER is phi, $UID is empty, id says uid=0(root) gid=0(root) groups=0(root), and cat /etc/passwd does not contain phi. So the user $USER doesn't exist...?!
On my host system, $UID is (as on most systems) 1000.
Thanks for removing the glamor option! Everything is smooth again.
The cache folder only exists while the container is running.
Oh, huh. The cache folders I described above were present without any container running. So I guess they were leftovers from failed run attempts. It shouldn't matter; this doesn't happen anymore right now.
No, this is not the case. xfce shortcut and interactive bash definitely behave differently (one working, the other not).
That's really odd. I have no good idea why there is a difference. That also indicates that 700 is not the core issue. As been said, I cannot reproduce the issue if I set my own cache to 700.
The only idea I have yet is some sort of >>redirection issue. But I would not know why it only happens in terminal, but not with a shortcut.
echo "exec 1>>$(convertpath share $Containerlogfile) 2>&1"
Could you try to just disable this line? It only serves to redirect some output into the logfile, so it would not hurt essentially. Though, ls fails, too:
ls: cannot access '/x11docker/container.log': Permission denied
$PWD is /tmp and $USER is phi and $UID is empty and id says uid=0(root) gid=0(root) groups=0(root) and cat /etc/passwd also does not contain phi. So the user $USER doesnt exist..?!
The entries in /etc/passwd and etc/group are done shortly after that in containerrootrc.
The cache folders I described above were present without any container running. So I guess they were leftovers from failed run attempts. Should not matter, this doesnt happen anymore right now.
I also get those leftover folders. It seems x11docker does not get enough time to clean up if I shut down the system while x11docker is running. You can use sudo x11docker --cleanup to remove all leftovers (and currently running x11docker sessions).
I did a test in an old Manjaro VM.
I've set ~/.cache/x11docker to 700. It works well.
I could not update Manjaro due to some package conflicts. But I assume that would not change anything.
So I cannot reproduce your issue and have no idea why x11docker fails on your system in terminal only.
Would it be ok for you to just use --cap-default and close the ticket?
Though, if you have an idea and want to investigate further, I am happy to look at, too.
Sure. I'll get back to you when I come accross a meaningful hint. Thank you for the help!
The latest commit runs containerrootrc with flag --privileged. You should not need --cap-default anymore.
x11docker's root setup in container now has no restrictions. This allow to drop --cap-default and the container command will run without privileges at all.
It would be nice if you try this out.
yup, works now out of the box :-)
good job
Great! :-)
Finally solved, although we did not find the exact spot where it previously failed.
|
2025-04-01T06:39:42.156658
| 2020-07-16T16:26:32
|
658354634
|
{
"authors": [
"EugeneKudr",
"mw99"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8756",
"repo": "mw99/DataCompression",
"url": "https://github.com/mw99/DataCompression/issues/25"
}
|
gharchive/issue
|
Can't unarchive .zip
I try to unzip my .zip file (downloaded from the Internet) using the .unzip() command and get nil. Also, I cannot open the file zipped by .zip() on my computer. With gzip everything is OK. Please help!
I seem to have figured out that this solution is not suitable for pkzip files, but I need to process this type of file. What can I do?
Hi. Your best bet is called "minizip". Maybe google that together with Swift and you should find what you are looking for. Good luck.
|
2025-04-01T06:39:42.179984
| 2020-05-12T17:56:53
|
616848380
|
{
"authors": [
"bayk",
"condensed-io",
"vinayaga07"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8757",
"repo": "mwcproject/mwc-qt-wallet",
"url": "https://github.com/mwcproject/mwc-qt-wallet/issues/322"
}
|
gharchive/issue
|
[QT-Wallet] In "Accounts" page, UI needs updates.
Description:
In Accounts page, observed the following issues,
The ACCOUNTS page needs a description.
The page UI needs to be aligned; there is a lot of empty space on the right side of the table.
Please refer to the screenshot for more details.
for the width issue, can you evenly space the columns horizontally to take up 100% width always?
thanks @vinayaga07 for bringing up some design considerations !
These are the current default values, so it seems the issue is addressed.
Non-default values we can't fix.
|
2025-04-01T06:39:42.191950
| 2017-10-05T02:26:11
|
262986584
|
{
"authors": [
"CryptoKlizO",
"mwlang"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8758",
"repo": "mwerner/bittrex",
"url": "https://github.com/mwerner/bittrex/pull/13"
}
|
gharchive/pull-request
|
A few small fixes
handle parsing timestamps when value not supplied (return nil instead of exception).
handle open and closed order on the Order class.
gracefully handle a symbol not returning data for Quote class.
additional attributes for Order class.
Specs updated. Plus, can see code in action on my crypto project
WHAt is this
a few small fixes ?
is this me being hacked?
CryptOKlizO
-------- Original Message --------
Subject: Re: [mwerner/bittrex] A few small fixes (#13)
Local Time: November 1, 2017 1:07 PM
UTC Time: November 1, 2017 8:07 PM
From: <EMAIL_ADDRESS>
To: mwerner/bittrex <EMAIL_ADDRESS>
Subscribed: <EMAIL_ADDRESS>
Merged #13.
β
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub, or mute the thread.
Hello wtf?
|
2025-04-01T06:39:42.286483
| 2019-09-28T18:40:29
|
499806791
|
{
"authors": [
"danmartyn",
"mxcl"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8759",
"repo": "mxcl/PromiseKit",
"url": "https://github.com/mxcl/PromiseKit/issues/1098"
}
|
gharchive/issue
|
Struggling with chaining network requests
Hi there, I've just started using PromiseKit, and am struggling with chaining requests together instead of embedding them...
I'm using PromiseKit 6.11.0 and PMKFoundation 3.3.3 (using SwiftPM in Xcode 11) targeting iOS 13 in a sample project.
I'm using the JSONPlaceholder API to try and get things working. I've created a networking engine that has 2 methods:
import Foundation
import PromiseKit
class JSONPlaceholderNetworkEngine: NetworkEngine {
func getPosts() -> Promise<GetPostsRequest.Response> {
return execute(GetPostsRequest())
}
func getPost(id: Int) -> Promise<GetPostRequest.Response> {
return execute(GetPostRequest(id: id))
}
}
The base class execute method looks like this:
func execute<Request: NetworkRequest>(_ request: Request) -> Promise<Request.Response> {
guard let urlRequest = request.urlRequest(schemeOverride: scheme, hostOverride: host, additionalHeaders: defaultHeaders) else {
return Promise(error: NetworkError.badURLRequest)
}
return session.dataTask(.promise, with: urlRequest).compactMap { (data, response) -> Request.Response? in
guard self.validate(response) else { throw NetworkError.invalidResponse }
return try request.jsonDecoder.decode(Request.Response.self, from: data)
}
}
NetworkRequest's have an associated type, Response that is Decodable.
What I'm trying to accomplish: in the ViewController viewDidLoad method, load the posts using getPosts(), then once they are loaded, pick a random one and load its details using getPost(id: randomPost.id), like this:
jsonNetworkEngine.getPosts().then { posts in
let randomPost = posts.randomElement()!
print("Got \(posts.count) posts! Choosing post \(randomPost.id) to load details:")
self.jsonNetworkEngine.getPost(id: randomPost.id)
}.then { post in
print(post)
}.catch { error in
print("Error: \(error)")
}
But it errors at the first line jsonNetworkEngine.getPosts().then { posts in highlighting the { and saying Unable to infer complex closure return type; add explicit type to disambiguate.
I can make it work if I write the code like this:
jsonNetworkEngine.getPosts().done { posts in
let randomPost = posts.randomElement()!
print("Got \(posts.count) posts! Choosing post \(randomPost.id) to load details:")
self.jsonNetworkEngine.getPost(id: randomPost.id).done { post in
print(post)
}.catch { error in
print("Error: \(error)")
}
}.catch { error in
print("Error: \(error)")
}
But that seems to defeat the whole idea of not ending up in "callback hell" if the subsequent calls all need to be embedded?
Please see our
Troubleshooting Guide
Ah, it was the return type! I did read the trouble shooting guide and was trying to specify the type but that still wasn't working. Somehow missed the return part though. Thanks for the incredibly fast reply!
|
2025-04-01T06:39:42.288238
| 2016-03-09T09:00:31
|
139516525
|
{
"authors": [
"lammertw",
"nathanhosselton"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8760",
"repo": "mxcl/PromiseKit",
"url": "https://github.com/mxcl/PromiseKit/pull/373"
}
|
gharchive/pull-request
|
Fix Bolts framework reference in PMKiOSCategoryTests
I was unable to run the PMKiOSCategoryTests. It seemed that the Bolts.framework was missing in the Copy Files build phase.
I believe I ended up addressing this myself recently when I ran into the same issue without remembering that this PR existed. So my apologies.
Closed by https://github.com/mxcl/PromiseKit/commit/3b079f6dd6f0c67d85c6ef02f9ed42804dfe4171
|
2025-04-01T06:39:42.296386
| 2018-10-01T17:38:41
|
365567511
|
{
"authors": [
"Akshay-N-Shaju",
"lukeoliff"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8761",
"repo": "my-first-pr/hacktoberfest-2018",
"url": "https://github.com/my-first-pr/hacktoberfest-2018/pull/33"
}
|
gharchive/pull-request
|
updated code/ dir with new python file
Fixes or introduces:
Proposed Changes
Updated readme file
Added a new .py file inside code/.
Thanks for contributing (and putting it in alphabetical order)!
|
2025-04-01T06:39:42.299426
| 2018-10-09T00:46:16
|
368000720
|
{
"authors": [
"joshcanhelp",
"sirocanabarro"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8762",
"repo": "my-first-pr/hacktoberfest-2018",
"url": "https://github.com/my-first-pr/hacktoberfest-2018/pull/349"
}
|
gharchive/pull-request
|
Update README.md
:warning: This is a Pull Request Template :warning:
Check off all the things you've done by placing an *x* within the square brackets.
Proposed Changes
[ ] I've forked the repository.
[ ] I've created a branch and made my changes in it.
[ ] I've read the CODE OF CONDUCT and abide to it.
[ ] I've read the CONTRIBUTING.md
[ ] I understand opening a PULL REQUEST doesn't mean it will be merged for sure.
Nice work, thank you!
|
2025-04-01T06:39:42.317292
| 2016-10-02T13:17:06
|
180511877
|
{
"authors": [
"harawata",
"luoxn28"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8764",
"repo": "mybatis/mybatis-3",
"url": "https://github.com/mybatis/mybatis-3/issues/797"
}
|
gharchive/issue
|
Can we change StatementHandler.query(Statement, ResultHandler) into StatementHandler.query(Statement)?
The parameter ResultHandler is not used in the implementing classes. As a test, I removed it in my local project and everything worked.
Modifying an existing public interface could break backward compatibility, so we avoid it unless there is a very good reason.
|
2025-04-01T06:39:42.461635
| 2023-01-07T00:45:10
|
1523359007
|
{
"authors": [
"EV-Builder",
"EdieLemoine"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8765",
"repo": "myparcelnl/woocommerce",
"url": "https://github.com/myparcelnl/woocommerce/issues/943"
}
|
gharchive/issue
|
Bug on Export
Plugin version
latest
WooCommerce version
latest
WordPress version
latest
PHP version
8.1 (not sure)
What went wrong?
"0 - data.shipments[0].recipient.cc - Unknown error: {"fields":["data.shipments[0].recipient.cc"],"human":["data.shipments[0].recipient.cc"]}. Please contact MyParcel."
On export
Reproduction steps
Navigate to ...
Click on ...
See ...
Relevant log output
2023-01-07T00:05:50+00:00 DEBUG *** Creating shipments started ***
2023-01-07T00:05:50+00:00 DEBUG Shipment data for order 170.
2023-01-07T00:05:50+00:00 DEBUG export_order: "0 - data.shipments[0].recipient.cc - Unknown error: {\"fields\":[\"data.shipments[0].recipient.cc\"],\"human\":[\"data.shipments[0].recipient.cc\"]}. Please contact MyParcel."
2023-01-07T00:07:45+00:00 DEBUG *** Creating shipments started ***
2023-01-07T00:07:45+00:00 DEBUG Shipment data for order 170.
2023-01-07T00:07:45+00:00 DEBUG export_order: "0 - data.shipments[0].recipient.cc - Unknown error: {\"fields\":[\"data.shipments[0].recipient.cc\"],\"human\":[\"data.shipments[0].recipient.cc\"]}. Please contact MyParcel."
2023-01-07T00:23:07+00:00 DEBUG *** Creating shipments started ***
2023-01-07T00:23:07+00:00 DEBUG Shipment data for order 170.
2023-01-07T00:23:08+00:00 DEBUG export_order: "0 - data.shipments[0].recipient.cc - Unknown error: {\"fields\":[\"data.shipments[0].recipient.cc\"],\"human\":[\"data.shipments[0].recipient.cc\"]}. Please contact MyParcel."
2023-01-07T00:26:32+00:00 DEBUG *** Creating shipments started ***
2023-01-07T00:26:32+00:00 DEBUG Shipment data for order 170.
2023-01-07T00:26:32+00:00 DEBUG export_order: "0 - data.shipments[0].recipient.cc - Unknown error: {\"fields\":[\"data.shipments[0].recipient.cc\"],\"human\":[\"data.shipments[0].recipient.cc\"]}. Please contact MyParcel."
2023-01-07T00:27:26+00:00 DEBUG *** Creating shipments started ***
2023-01-07T00:27:26+00:00 DEBUG Shipment data for order 186.
2023-01-07T00:27:26+00:00 DEBUG export_order: "0 - data.shipments[0].recipient.cc - Unknown error: {\"fields\":[\"data.shipments[0].recipient.cc\"],\"human\":[\"data.shipments[0].recipient.cc\"]}. Please contact MyParcel."
2023-01-07T00:27:52+00:00 DEBUG *** Creating shipments started ***
2023-01-07T00:27:52+00:00 DEBUG Shipment data for order 186.
2023-01-07T00:27:52+00:00 DEBUG export_order: "0 - data.shipments[0].recipient.cc - Unknown error: {\"fields\":[\"data.shipments[0].recipient.cc\"],\"human\":[\"data.shipments[0].recipient.cc\"]}. Please contact MyParcel."
Additional context
none
Hi @EV-Builder, this should not occur anymore in the v5.0.0 beta versions of the plugin.
Please read this issue for more information and how to report bugs in the new version.
|
2025-04-01T06:39:42.515126
| 2018-06-07T17:44:54
|
330371841
|
{
"authors": [
"mysticatea",
"rafegoldberg"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8767",
"repo": "mysticatea/npm-run-all",
"url": "https://github.com/mysticatea/npm-run-all/issues/135"
}
|
gharchive/issue
|
Unable to get local issuer certificate
When I try to yarn add -D npm-run-all I get an error:
[1/4] π Resolving packages...
error: An unexpected error occurred: "https://registry.yarnpkg.com/npm-run-all: unable to get local issuer certificate".
info: If you think this is a bug, please open a bug report with the information provided in "/Users/z00221y/Sites/exp-deploy-vue/yarn-error.log".
info: Visit https://yarnpkg.com/en/docs/cli/add for documentation about this command.
I also tried installing via the repo's remote .git URL, but to no avail:
[1/4] π Resolving packages...
error: Couldn't find package<EMAIL_ADDRESS>required by "https://github.com/mysticatea/npm-run-all.git" on the "npm" registry.
info: Visit https://yarnpkg.com/en/docs/cli/add for documentation about this command.
error: Couldn't find package<EMAIL_ADDRESS>required by "https://github.com/mysticatea/npm-run-all.git" on the "npm" registry.
error: Couldn't find package<EMAIL_ADDRESS>required by "https://github.com/mysticatea/npm-run-all.git" on the "npm" registry.
Here are some pertinent details re: my local environment:
macOS
Node
NPM
Yarn
v10.12.6
v8.11.2
v6.1.0
v1.6.0
Thank you for this issue.
However, I couldn't reproduce it.
Please open an issue on the yarn repo.
|
2025-04-01T06:39:42.528239
| 2016-10-17T19:23:11
|
183503794
|
{
"authors": [
"Jarlotee",
"coveralls",
"mzabriskie"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8768",
"repo": "mzabriskie/axios",
"url": "https://github.com/mzabriskie/axios/pull/491"
}
|
gharchive/pull-request
|
Fix proxy bugs
This should fix two issues related to using the proxy options.
Hostname is preferred over host
Adds the accompanying host header to the correct url
@mzabriskie found a few bugs while prototyping with your library
Coverage remained the same at 94.393% when pulling ce1ecdae7a035c144b3726e976c3d1a98caa7bd0 on Jarlotee:patch-1 into 3f8b128da4ab11e34f0b880381f9395b2ab0e22f on mzabriskie:master.
Thanks for the PR!
|
2025-04-01T06:39:42.544264
| 2024-05-04T15:09:30
|
2279065257
|
{
"authors": [
"dignifiedquire",
"divagant-martian",
"matheus23"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8770",
"repo": "n0-computer/iroh",
"url": "https://github.com/n0-computer/iroh/pull/2264"
}
|
gharchive/pull-request
|
feat: Update from default-net to rebranded netdev
Description
Upgrades from default-net v0.20 to netdev v0.25, which is simply a rebrand of the original default-net
Allows depending on system-configuration version 0.6 instead of 0.5.1 downstream. This may help iOS compilation.
Breaking Changes
Would need to check if/how much the default-net types were exposed in the API.
Notes & open questions
Not sure yet if it fixes our iOS problems.
Change checklist
[x] Self-review.
[x] Documentation updates if relevant. (no mention of default_net in docs)
[ ] Tests if relevant.
[ ] All breaking changes documented.
Changed items in the public API
===============================
-pub iroh_net::net::interfaces::IpNet::V4(netdev::ip::Ipv4Net)
+pub iroh_net::net::interfaces::IpNet::V4(default_net::ip::Ipv4Net)
-pub iroh_net::net::interfaces::IpNet::V6(netdev::ip::Ipv6Net)
+pub iroh_net::net::interfaces::IpNet::V6(default_net::ip::Ipv6Net)
interfaces should probably not even be exposed as part of the public api, but in any case, here is what changed
This fixes our iOS build together with the instructions from here: https://iroh.computer/docs/examples/ios-starter#add-the-system-configuration-framework
It seems like it's been resolved and was released in version 0.16.
iroh-net already depends on version 0.20 at the moment, so should the macOS logic be adjusted, or is the original issue not actually fixed?
yeah, I think this just never got rechecked after that fix was done
I'd be happy to contribute a change that removes the logic that's possibly subsumed by the upstream fix, but don't have a macOS device to test on :confused:
|
2025-04-01T06:39:42.545978
| 2018-07-11T16:02:43
|
340307697
|
{
"authors": [
"n00py",
"s3gm3nt4ti0nf4ult"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8771",
"repo": "n00py/WPForce",
"url": "https://github.com/n00py/WPForce/pull/13"
}
|
gharchive/pull-request
|
Users list initial feature
So this is my initial feature, providing mutually exclusive options for usernames (you can use the command line or a text file).
Suggested:
migration to python3
multiprocessing instead of threading with pool of threads
Thanks!
|
2025-04-01T06:39:42.551617
| 2018-08-21T06:29:21
|
352402159
|
{
"authors": [
"nbgl",
"wilko77"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8772",
"repo": "n1analytics/anonlink",
"url": "https://github.com/n1analytics/anonlink/issues/153"
}
|
gharchive/issue
|
greedy solver performs poorly
The problem
The interplay between entitymatch.calculate_filter_similarity and entitymatch.greedy_solver is not very good.
The output of calculate_filter_similarity is a list, sorted by similarity score in descending order on a per row basis.
However, the greedy solver does only one pass over the similarity matrix, assigning mappings on a first-come-first-serve basis. Thus, it can happen that it maps A to B although there might be another similarity of B to C with a higher similarity score.
This mismatch leads to poor matching results.
A more sensible approach is to sort the similarity matrix purely by similarity score, irrespective of the row or column.
Comparison of matching results with properly sorted similarity matrix on a febrl generated dataset
example code:
sparse_matrix = calculate_filter_similarity(filters_a, filters_b, k, thresh)
sparse_matrix = sorted(sparse_matrix, key=lambda tup: tup[1], reverse=True)
mapping = greedy_solver(sparse_matrix)
Results without the second line (as in calculate_mapping_greedy):
precision: 0.457
recall: 0.957
vs results with sorted similarity matrix:
precision: 0.692
recall: 0.99986
Proposal
It makes sense to be able to parameterize the calculate_filter_similarity function to control the structure of the similarity matrix.
I don't know if we need the current structure of the similarity matrix anywhere, but in any case, as the greedy solver is our only sensible solver, it would make sense to change the default to an ordering that suits it.
Or, we sort the matrix again in the solver. Which means we essentially sort the matrix twice.
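A minimal runnable sketch of the proposed behaviour (hypothetical pair data and a simplified solver, not the actual anonlink implementation): sort the whole similarity list purely by score first, then assign one-to-one mappings greedily.

```python
def greedy_solver(similarities):
    """Greedy one-to-one matching over ((row, col), score) tuples."""
    mapping = {}
    used_cols = set()
    # Sort by similarity score descending, irrespective of row or column.
    for (row, col), score in sorted(similarities, key=lambda t: t[1], reverse=True):
        if row not in mapping and col not in used_cols:
            mapping[row] = col
            used_cols.add(col)
    return mapping

pairs = [((0, 1), 0.8), ((1, 1), 0.9), ((1, 2), 0.5)]
# Row 1 claims column 1 first (0.9 > 0.8); row 0's only candidate is taken.
print(greedy_solver(pairs))  # {1: 1}
```

With a per-row ordering instead, row 0 could have claimed column 1 first on a first-come-first-served basis, which is exactly the mismatch described above.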
Related to #135. Fixed in the new API in #136.
|
2025-04-01T06:39:42.554579
| 2014-12-19T09:45:07
|
52464483
|
{
"authors": [
"BIGjuevos",
"MathiasLund",
"noma4i"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8773",
"repo": "n1k0/casperjs",
"url": "https://github.com/n1k0/casperjs/issues/1111"
}
|
gharchive/issue
|
back button
Hi!
I know it has been addressed before, but I can't seem to get my page to go back and I'm not that good with casperjs yet...
My script is like this:
casper.wait(15000, function() {
if(this.exists(x("(//tr[@class='pager'])[1]//a[preceding-sibling::span[not(span)]][1]"))) {
loop();
} else {
this.capture("billede1.png");
this.then(function() {
this.back();
console.log("attempting to go back");
});
casper.wait(15000, function() {
this.capture("billede2.png");
});
}
});
and it works very well apart from the fact that it doesn't go back :( the two screenshots are identical..
does anyone have an idea what to do?
thanks in advance!
I appreciate your help :)
CasperJS is built on top of PhantomJS, so I believe you are able to utilize this: http://phantomjs.org/api/webpage/method/go-back.html
As part of a cleanup effort:
Looks to be a stale help request. Assuming you've moved on to better and cooler things. Please feel free to re-open if this is still an active issue for you and we'll try our best to help out.
|
2025-04-01T06:39:42.556481
| 2016-03-31T23:00:35
|
145040476
|
{
"authors": [
"BIGjuevos",
"istr",
"mickaelandrieu"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8774",
"repo": "n1k0/casperjs",
"url": "https://github.com/n1k0/casperjs/pull/1506"
}
|
gharchive/pull-request
|
move to archived versions of phantomjs since bitbucket throttles
Let's try pulling in phantomjs from somewhere else, and see what happens. This requires some minor tweaks to the travis yml to like gz files as well.
@istr @mickaelandrieu Builds are now pulling from a GitHub release. Also updated the travis config to support both bz2 and gz compressed tar files for future use.
Sounds good to me :) thank you !
@istr any objections to merging this? I think this should go in before anything else so we can stabilize the builds.
Ok, thank you. Merging this now (sorry, but I always try to work from past to current).
|
2025-04-01T06:39:42.604259
| 2023-12-11T01:50:46
|
2034675346
|
{
"authors": [
"FatihKoz",
"rbadger12"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8775",
"repo": "nabeelio/phpvms",
"url": "https://github.com/nabeelio/phpvms/issues/1715"
}
|
gharchive/issue
|
Flight Import Not Calculating Distances
Describe the bug
When importing a CSV with our routes, the system will process all routes without issue but fills in the distance with 0nm. I have been told that v7 should auto-calculate if the CSV field for distance is blank; this is not currently happening. I have included a line of our CSV file as it stands.
"EJV,814,,814,1,KHHR,KMCE,,,,,24000,,61,C,,,,,,,,1,B350,,,,"
Which version are you using? The bug report template kindly asks you to provide this info :)
If you are on beta5 update to latest dev and try again please.
Should be closed, as the OP has not provided the requested info and the issue cannot be re-created in the latest dev.
|
2025-04-01T06:39:42.605788
| 2019-03-06T12:17:55
|
417775289
|
{
"authors": [
"StrongGesmbH",
"nabinked"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8776",
"repo": "nabinked/NToastNotify",
"url": "https://github.com/nabinked/NToastNotify/issues/58"
}
|
gharchive/issue
|
Toastr not loading css from cdn
After updating to the latest version, toastr is only loading the JS file from the CDN, not the CSS file.
Can you provide more details? I can't reproduce this.
I had an issue where the styling disappeared, so I took a look at the network traffic and saw that no CSS file was loaded from the CDN.
But I resolved my issue by creating my own style sheet.
|
2025-04-01T06:39:42.638652
| 2021-10-19T03:37:25
|
1029822012
|
{
"authors": [
"Imberflur",
"nagisa"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8777",
"repo": "nagisa/rust_tracy_client",
"url": "https://github.com/nagisa/rust_tracy_client/pull/23"
}
|
gharchive/pull-request
|
Relax the literal requirement of the create_plot macro so that it can be used with stringify
This matches what concat! uses https://doc.rust-lang.org/stable/core/macro.concat.html
Also, concat! will produce a friendly customized error if a literal is not used:
Compared to:
or with multiple macro layers:
I'm sorry, but I won't be able to take a careful look into this PR for a couple more weeks (currently undergoing a job change)
Actually, never mind. I'll merge this now and release a version with this in place and we can release a breaking change later down the line if this turns out to have been a mistake.
@nagisa I'm happy to wait
tracy-client v0.12.6 is up.
|
2025-04-01T06:39:42.657248
| 2021-06-28T11:08:26
|
931452603
|
{
"authors": [
"DenShlk",
"kabumere"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8779",
"repo": "nambicompany/expandable-fab",
"url": "https://github.com/nambicompany/expandable-fab/issues/21"
}
|
gharchive/issue
|
Transparent background for option label
Hi there! Thanks for the beautiful view!
I'd like to make the label background transparent (to show only text), I tried to set "label_backgroundColor" to "android: transparent" (#0000), but it didn't work.
I also tried to disable the label, but when I set "label_text" empty, there is still a black rectangle.
So, summing up, my ask is to add the ability to use transparent background for labels.
Hey @DenShlk,
Thanks for bringing this issue to our attention. Please try the latest version v1.1.1 now available on Maven Central (or clone the latest commits from the master branch on this repo). Please let me know if the latest update fixes your problem.
Hi! It fixed my issue, thanks for the quick update!
|
2025-04-01T06:39:42.683783
| 2024-09-24T21:42:28
|
2546445100
|
{
"authors": [
"nandagopalan",
"neo7337"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8780",
"repo": "nandlabs/golly-docs",
"url": "https://github.com/nandlabs/golly-docs/issues/13"
}
|
gharchive/issue
|
[FEATURE] Documentation extraction from package
Create a script that extracts package readme from https://github.com/nandlabs/golly. This needs to happen for every release.
We also need to maintain the golly docs link for old releases.
@neo7337 any suggestions ?
Each package would contain a README.md file; we can traverse and parse the documentation based on that, and we will also know which documentation belongs to which package. Won't this work? @nandagopalan
Well, for some packages you may want to have the README.md for technical information but not want it to be documented, right?
I guess for that purpose we can create a marker file.
|
2025-04-01T06:39:42.685323
| 2021-05-20T23:04:32
|
897534832
|
{
"authors": [
"archcorsair",
"nandorojo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8781",
"repo": "nandorojo/moti",
"url": "https://github.com/nandorojo/moti/pull/71"
}
|
gharchive/pull-request
|
(docs): Replace usage of <View> with <MotiView>
Since MotiView is now importable directly, replaced all instances of <View> with <MotiView>
Awesome thanks!
|
2025-04-01T06:39:42.691318
| 2022-11-26T21:59:05
|
1465264590
|
{
"authors": [
"denisdefreyne",
"ntkme"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8783",
"repo": "nanoc/nanoc",
"url": "https://github.com/nanoc/nanoc/pull/1629"
}
|
gharchive/pull-request
|
Write dart sass importer in idiomatic ruby
This PR rewrites the dart sass custom importer in idiomatic ruby code.
Thanks! This refactoring totally makes sense.
|
2025-04-01T06:39:42.696499
| 2018-10-20T16:44:33
|
372229646
|
{
"authors": [
"CathalT"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8784",
"repo": "nanocurrency/raiblocks",
"url": "https://github.com/nanocurrency/raiblocks/pull/1317"
}
|
gharchive/pull-request
|
Fix MSVC linker error using rai_bootstrap_weights
see #1316
No problem! Glad to help. I'm mainly a Windows dev myself, so it was definitely self-beneficial, haha.
|
2025-04-01T06:39:42.735622
| 2024-05-23T16:57:17
|
2313429270
|
{
"authors": [
"willbradshaw"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8785",
"repo": "naobservatory/mgs-workflow",
"url": "https://github.com/naobservatory/mgs-workflow/issues/18"
}
|
gharchive/issue
|
Update MultiQC version
A recent MultiQC update broke the pipeline. I've switched back to a past version and it's running fine, but it would be nice to stay current.
Still need to update the pipeline to successfully use the new version.
Given that we're not using any of the fancier features of MultiQC and are essentially using it as a FastQC data aggregator, and might be switching to Falco anyway, I'm no longer sure it makes sense to use MultiQC here at all. I expect we'll revisit this after we've made the changes discussed in #74 and #78.
|
2025-04-01T06:39:42.742884
| 2021-08-23T02:34:58
|
976563711
|
{
"authors": [
"kovalev94",
"tkspuk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8786",
"repo": "napalm-automation-community/napalm-huawei-vrp",
"url": "https://github.com/napalm-automation-community/napalm-huawei-vrp/issues/12"
}
|
gharchive/issue
|
A misprint in open method
You have a misprint in the open method. At line 126 you wrote "huawei_telent" instead of "huawei_telnet". This causes an error when you try to use a telnet connection instead of SSH.
Thanks. This is a typing error; I will fix it.
|
2025-04-01T06:39:42.746128
| 2017-03-03T09:39:57
|
211639257
|
{
"authors": [
"kderynski",
"mirceaulinic"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8787",
"repo": "napalm-automation/napalm-junos",
"url": "https://github.com/napalm-automation/napalm-junos/pull/127"
}
|
gharchive/pull-request
|
Added commit confirm functionality
Related to napalm-automation/napalm-base#213
Hey @mirceaulinic,
Yes, I have tested it. commit_config() without additional argument confirmed=0 doesn't change current behaviour.
BTW I will create PR in a few minutes with brief description of this method in documentation.
Looking at this, I'd like to test the following scenario:
connect using the config_lock optional arg set as False
commit confirmed x minutes
is the config DB still locked?
confirm the commit
is the config DB unlocked?
@mirceaulinic I have tested the proposed scenario and the config DB was unlocked during the commit-confirmed period. I have added a small fix and now it works as it should: the config DB is locked while the device waits for confirmation.
Thanks @kderynski
|
2025-04-01T06:39:42.753644
| 2022-01-17T12:13:55
|
1105781272
|
{
"authors": [
"alisterburt",
"kevinyamauchi"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8788",
"repo": "napari/napari",
"url": "https://github.com/napari/napari/pull/3963"
}
|
gharchive/pull-request
|
switch pd.append to pd.concat
Description
This is a proposed fix for #3962. Pandas is deprecating DataFrame.append in favor of pd.concat. Currently, causes a FutureWarning to be emitted when items are appended to the _FeatureTable.
Type of change
[x] Bug-fix (non-breaking change which fixes an issue)
References
Closes #3962
How has this been tested?
[x] the test suite for my feature covers cases x, y, and z
Final checklist:
[ ] My PR is the minimum possible work for the desired functionality
[ ] I have commented my code, particularly in hard-to-understand areas
[ ] I have made corresponding changes to the documentation
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] If I included new strings, I have used trans. to make them localizable.
For more information see our translations guide.
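For illustration, a minimal sketch of the substitution this PR makes (hypothetical frame contents, not the actual _FeatureTable code): the deprecated DataFrame.append call is replaced with pd.concat, which produces the same appended rows without the FutureWarning.

```python
import pandas as pd

# Hypothetical feature table and new rows to append.
table = pd.DataFrame({"label": ["a"], "value": [1]})
new_rows = pd.DataFrame({"label": ["b"], "value": [2]})

# Old (deprecated, emits FutureWarning):
#   table = table.append(new_rows, ignore_index=True)
# New: concatenate the frames and rebuild a fresh integer index.
table = pd.concat([table, new_rows], ignore_index=True)
print(table["label"].tolist())  # ['a', 'b']
```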
merging now to get this in for the release!
|
2025-04-01T06:39:42.790371
| 2024-06-26T17:09:10
|
2375874161
|
{
"authors": [
"SajjadPourali",
"SirMangler"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8789",
"repo": "narrowlink/udp-stream",
"url": "https://github.com/narrowlink/udp-stream/pull/9"
}
|
gharchive/pull-request
|
UdpListener: Use 'poll_send_to' instead of 'poll_send' to allow multiple connections
Currently, listen servers are only able to accept connections from one peer address. This is because currently when the UdpListener accepts a connection, it calls UdpSocket::connect which restricts that socket sending/recv'ing to that peer address indefinitely.
This isn't necessary as tokio's UdpSocket offers poll_send_to, meaning the listener socket doesn't need to become exclusive and can use the stored peer_addr to accept and manage multiple connections.
This was tested using the "echo-udp" example with multiple instances of netcat.
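The connected/unconnected distinction described above can be sketched with Python's socket API (a minimal illustration of the same idea tokio exposes via poll_send on a connected socket vs. poll_send_to on an unconnected one; not the Rust code from this PR):

```python
import socket

# An unconnected UDP socket can serve many peers by replying to the
# address returned from recvfrom() - the equivalent of poll_send_to.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))
listener.settimeout(2)
addr = listener.getsockname()

peer_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer_a.sendto(b"from A", addr)
peer_b.sendto(b"from B", addr)

# Reply to each sender individually, whatever order the datagrams arrive in.
for _ in range(2):
    data, who = listener.recvfrom(1024)
    listener.sendto(b"ack " + data, who)

peer_a.settimeout(2)
peer_b.settimeout(2)
ack_a = peer_a.recvfrom(1024)[0]
ack_b = peer_b.recvfrom(1024)[0]
```

Had the listener called connect() on the first peer's address, it would have been locked to that one peer, which is exactly the limitation this change removes.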
As you mentioned, connect is not necessary here.
Thanks for the PR.
I noticed that it made an issue for the client, let me investigate more.
Just fixed on #11
Interesting that I haven't encountered this for some reason, I've been testing a lot since the PR. Glad you caught it however!
|
2025-04-01T06:39:42.792451
| 2023-11-09T17:11:34
|
1986067655
|
{
"authors": [
"Patraskon",
"narzoul"
],
"license": "0BSD",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8790",
"repo": "narzoul/ForceD3D9On12",
"url": "https://github.com/narzoul/ForceD3D9On12/issues/3"
}
|
gharchive/issue
|
DDraw7.SetDisplayMode
DDraw7.SetDisplayMode
Action not supported.
I have no idea what this is, but as the readme says, no support is provided for D3D9On12 here.
|
2025-04-01T06:39:42.801591
| 2019-08-04T18:06:24
|
476578837
|
{
"authors": [
"kasbah",
"vssystemluba"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8791",
"repo": "nasa-jpl/open-source-rover",
"url": "https://github.com/nasa-jpl/open-source-rover/issues/144"
}
|
gharchive/issue
|
Add Kitspace compatible electronics BOMs?
Hey, I noticed in the readme it says:
The Bill of Materials folder contains (currently just one) Bill of materials file for a specific vendor. We are searching for better ways to help with the ordering process
This is exactly what my open source project kitspace.org is for! It lets you buy across the 5 currently supported distributors with a single click.
To mirror your PCB designs there we need to add a few files: a manifest and the electronics BOMs (can be csv, xlsx or ods) separated into PCB files. When you update the repo on GitHub in the future it will automatically sync your changes. We do have a BOM export script for KiCad as well.
I would be glad to work with you on this and put this project up. Are you interested?
@kasbah we're very interested in anything that helps folks get their parts easier, faster, or less expensively!!!
Would you be willing to take our BOM and build out a project on kitspace to show us what that would look like? It'd be great to get an updated BOM with easier carts / ordering for folks.
Do note that we build our master parts list / BOM dynamically from our build instruction files. There may need to be a little thought going forwards about maintaining accuracy and consistency, but even as a first pass having something like a kitspace project (even for a static BOM as it exists today) would definitely add value over what we already have available.
I'm going to close out this issue here since it's not an issue against our repository per se, but I'd love to see what you can come up with in kitspace for this project. We can continue to chat on this issue or on the OSR forum (https://www.tapatalk.com/groups/jpl_opensource_rover/).
(And FYI if we get a kitspace project / BOM to an acceptable place, we will definitely link to it from our main repository!)
Awesome! @kevinb456, would you be up for generating the BOMs and sending the appropriate pull-requests or should I do it?
I think as a first step we'll add two .csv to the Bill of Materials Files folder one for Arduino_uno_sheild (sic) and one for Control Board. We should be able to use our KiCad scripts and then reconcile that with your Digikey BOM. One issue is that you haven't been using the actual PCB schematic references in any BOMs? I am sure we can figure it out though.
Maybe we can set up a script for you to re-generate these in the future if any of the electronics design change and also incorporate that into your build.
Interested by your approach in general. In my other job we develop an open source microscope (openflexure.org) and we are developing a tool called git-building to generate BOMs from markdown descriptions. It seems like a very similar approach except you are using .tex. I like that your "master parts list" is a spreadsheet (we have been using yaml for a similar purpose but I like this better).
|
2025-04-01T06:39:42.810980
| 2019-10-31T21:02:37
|
515743947
|
{
"authors": [
"skliper"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8792",
"repo": "nasa/cFE",
"url": "https://github.com/nasa/cFE/pull/388"
}
|
gharchive/pull-request
|
Integration Candidate 20191030
Describe the contribution
Fixes #361, fixes #373, fixes #374, fixes #381
Testing performed
Steps taken to test the contribution:
Checked out bundle with OSAL and cFE ic-20191030 branches
make ENABLE_UNIT_TESTS=TRUE SIMULATION=native prep
make
make install
make test
Built without warnings, all tests passed except osal_timer_UT (nominal result on linux)
executed cfe, successful startup with no warnings
Expected behavior changes
Resolved potential lockup bug
Resolved anomalous messages produced during app delete
System(s) tested on:
cFS dev server
OS: Ubuntu 16.04
Versions: bundle with OSAL and cFE ic-20191030 branches
Additional context
None
Contributor Info
Jacob Hageman - NASA/GSFC
CCB 20191106 - approved for merge to master
|
2025-04-01T06:39:42.885559
| 2019-02-26T03:37:26
|
414415373
|
{
"authors": [
"IvyGongoogle",
"nashory",
"rita-qingyu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8793",
"repo": "nashory/DeLF-pytorch",
"url": "https://github.com/nashory/DeLF-pytorch/issues/2"
}
|
gharchive/issue
|
Pretrained weights trained on landmark dataset
I share the pretrained weight trained on the landmark dataset.
Please download it from the following url:
https://drive.google.com/open?id=1dbdaDyVeIb53iGh4Uk5kA4in9-uURoLM
Hi, thank you for sharing the trained model. Is this keypoints model trained on the full-version and the finetune model trained on the clean-version?
Hello, when I use your 'pretrained_model/ldmk/pca/pca.h5' to extract dimension-reduced DeLF, it shows error:
...
loaded weights from module "pool" ...
load model from "pretrained_model/ldmk/model/keypoint/ckpt/fix.pth.tar"
load PCA parameters...
Traceback (most recent call last):
File "extract/extractor.py", line 432, in <module>
extractor = FeatureExtractor(extractor_config)
File "extract/extractor.py", line 112, in __init__
self.pca_mean = h5file['.']['pca_mean'].value
AttributeError: 'Dataset' object has no attribute 'value'
How to fix it? Can you give me some advice?
@nashory , Hello, when I use your 'pretrained_model/ldmk/pca/pca.h5' to extract dimension-reduced DeLF, it shows error:
...
loaded weights from module "pool" ...
load model from "pretrained_model/ldmk/model/keypoint/ckpt/fix.pth.tar"
load PCA parameters...
Traceback (most recent call last):
File "extract/extractor.py", line 432, in <module>
extractor = FeatureExtractor(extractor_config)
File "extract/extractor.py", line 112, in __init__
self.pca_mean = h5file['.']['pca_mean'].value
AttributeError: 'Dataset' object has no attribute 'value'
How to fix it? Can you give me some advice?
I use np.array(x) to replace x.value, and now it works.
|
2025-04-01T06:39:42.888229
| 2022-04-09T15:45:33
|
1198688334
|
{
"authors": [
"HevandroMP",
"nasso"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8794",
"repo": "nasso/urmusic",
"url": "https://github.com/nasso/urmusic/issues/20"
}
|
gharchive/issue
|
Audio doesn't play in Opera browser
I've always used this site in Opera, but after a while it didn't work anymore
I think this may be related to #19 ?
|
2025-04-01T06:39:43.058647
| 2021-06-28T09:37:46
|
931375405
|
{
"authors": [
"Whip",
"farfromrefug"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8795",
"repo": "nativescript-community/ui-material-components",
"url": "https://github.com/nativescript-community/ui-material-components/issues/311"
}
|
gharchive/issue
|
Can't get the demo to work
Which platform(s) does your issue occur on?
Android 10, Real device
Please, tell us how to recreate the issue in as much detail as possible.
I downloaded the code from this repo and ran the demo (plain js) app. I got the following errors:
ERROR in /Users/vipulk/Documents/MaterialDemo/tsconfig.json
[tsl] ERROR
TS6053: File '../tsconfig' not found.
ERROR in ./app.ts
Module build failed (from ../node_modules/ts-loader/index.js):
Error: error while parsing tsconfig.json
at Object.loader (/Users/vipulk/Documents/MaterialDemo/node_modules/ts-loader/dist/index.js:19:18)
ERROR in ../tsconfig.json
TS6053: File '../tsconfig' not found.
and then after Gradle build
Error executing Static Binding Generator: Couldn't find '/Users/vipulk/Documents/MaterialDemo/platforms/android/build-tools/sbg-bindings.txt' bindings input file. Most probably there's an error in the JS Parser execution. You can run JS Parser with verbose logging by executing "node '/Users/vipulk/Documents/MaterialDemo/platforms/android/build-tools/jsparser/js_parser.js' enableErrorLogging".
Upon executing the suggested command, nothing happens and retrying produces same error.
@Whip the demo code depends on the whole repo structure. Seeing your logs, you only extracted the demo from the repo. It won't work.
My bad. Okay I've included the entire repo. Navigated to demo folder
cd demo
ns run android
I get this:
npm WARN deprecated<EMAIL_ADDRESS>Please see https://github.com/lydell/urix#deprecated
npm WARN deprecated<EMAIL_ADDRESS>this library is no longer supported
npm WARN deprecated<EMAIL_ADDRESS>Browserslist 2 could fail on reading Browserslist >3.0 config used in other tools.
npm WARN deprecated<EMAIL_ADDRESS>https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated<EMAIL_ADDRESS>Chokidar 2 will break on node v14+. Upgrade to chokidar 3 with 15x less dependencies.
npm WARN deprecated<EMAIL_ADDRESS>fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2.
npm WARN deprecated<EMAIL_ADDRESS>Deprecated. Please use https://github.com/webpack-contrib/mini-css-extract-plugin
npm WARN deprecated<EMAIL_ADDRESS>Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details.
npm WARN deprecated<EMAIL_ADDRESS>request has been deprecated, see https://github.com/request/request/issues/3142
npm WARN deprecated<EMAIL_ADDRESS><EMAIL_ADDRESS>is no longer maintained and not recommended for usage due to the number of issues. Because of the V8 engine whims, feature detection in old core-js versions could cause a slowdown up to 100x even if nothing is polyfilled. Please, upgrade your dependencies to the actual version of core-js.
npm ERR! code 1
npm ERR! path /Users/vipulk/Documents/Material-Components/packages/core
npm ERR! command failed
npm ERR! command sh -c node postinstall.js
npm ERR! internal/modules/cjs/loader.js:883
npm ERR! throw err;
npm ERR! ^
npm ERR!
npm ERR! Error: Cannot find module '/Users/vipulk/Documents/Material-Components/packages/core/postinstall.js'
npm ERR! at Function.Module._resolveFilename (internal/modules/cjs/loader.js:880:15)
npm ERR! at Function.Module._load (internal/modules/cjs/loader.js:725:27)
npm ERR! at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
npm ERR! at internal/main/run_main_module.js:17:47 {
npm ERR! code: 'MODULE_NOT_FOUND',
npm ERR! requireStack: []
npm ERR! }
npm ERR! /Users/vipulk/.npm/_logs/2021-06-28T10_07_56_038Z-debug.log
Unable to install dependencies. Make sure your package.json is valid and all dependencies are correct. Error is: Command npm failed with exit code 1
I've attached the log file mentiond as well.
2021-06-28T10_07_56_038Z-debug.log
Sorry if I'm still being a noob.
@Whip you need to build the project first. The demo is made to show you code. However, it is also meant for us to develop with, so it relies on a locally built project. See the last question of the FAQ https://github.com/nativescript-community/ui-material-components#faq
It still didn't work. The error I'm getting is
Cannot find module<EMAIL_ADDRESS>Require stack:
- /Users/vipulk/Documents/Material-Components/demo/hooks/before-prepare/nativescript-community-ui-material-core.js
- /usr/local/lib/node_modules/nativescript/lib/common/services/hooks-service.js
- /usr/local/lib/node_modules/nativescript/lib/common/yok.js
- /usr/local/lib/node_modules/nativescript/lib/bootstrap.js
- /usr/local/lib/node_modules/nativescript/lib/nativescript-cli.js
- /usr/local/lib/node_modules/nativescript/bin/tns
This happens on the last step npm run demo.android. I had setup a fresh copy of the repo for running the commands you linked.
@Whip ok I know what's going on! will fix it and push right away
@Whip i pushed a fix and updated the readme. Instead of running npm run tsc you need to run npm run build.all
Worked. Thanks for all your support
|
2025-04-01T06:39:43.061032
| 2024-08-30T19:15:43
|
2498020974
|
{
"authors": [
"JayMatsushiba"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8796",
"repo": "natpac/ecol_erosion_extinction_risk_sharks",
"url": "https://github.com/natpac/ecol_erosion_extinction_risk_sharks/issues/1"
}
|
gharchive/issue
|
Add Figure 5 code
Fig. 5. Bright and dark spots of the global range-size rarity-weighted Red List Index of sharks, rays, and chimaeras values in 2020 across four biogeographic scales
Definition of Done
[x] Rmarkdown added with code for generating RLI values for each biogeographic scales (hex, EEZ, LME, FAO)
[ ] Adding final figure outputs
New task:
[ ] Organize figure generation code (currently a mess), preferably as a package for replicability
|
2025-04-01T06:39:43.066258
| 2015-01-29T20:55:41
|
55955447
|
{
"authors": [
"JohnSmall",
"adamalbrecht",
"conzett",
"natritmeyer",
"sherbhachu",
"vmpj"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8797",
"repo": "natritmeyer/site_prism",
"url": "https://github.com/natritmeyer/site_prism/issues/101"
}
|
gharchive/issue
|
SitePrism::NoUrlMatcherForPage following documentation example
The documentation shows the following example:
class Account < SitePrism::Page
set_url "/accounts/{id}{?query*}"
end
It goes on to say that the following test would pass:
@account_page = Account.new
@account_page.load(id: 22, query: { token: "ca2786616a4285bc" })
expect(@account_page.current_url).to end_with "/accounts/22?token=ca2786616a4285bc"
expect(@account_page).to be_displayed
I have a very similar setup I'm trying to test, and I am getting the following error:
Failure/Error: expect(index_page).to be_displayed
SitePrism::NoUrlMatcherForPage:
SitePrism::NoUrlMatcherForPage
# /Users/gconzett/.gem/ruby/2.0.0/gems/site_prism-2.6/lib/site_prism/page.rb:14:in `displayed?'
The page object is as follows:
require_relative '../sections/widget_section.rb'
require_relative '../sections/widget_fields_section.rb'
class WidgetIndexPage < SitePrism::Page
set_url '/widgets'
section :fields, WidgetFieldsSection, 'form'
sections :widgets, WidgetSection, 'tbody tr'
element :create_widget_button, '#create-widget'
end
And the spec itself has this simple assertion which fails:
feature 'Creating a widget' do
scenario 'with valid input' do
index_page = WidgetIndexPage.new
index_page.load
expect(index_page).to be_displayed
end
end
Unfortunately this leads to the above mentioned error. Do I have to define some kind of regex matcher? The code leads me to believe that it should just use the string for the URL. Any info would be appreciated. Thanks!
+1 I'm having the same issue
+1 I have this too, running 2.2.0.
The page I am visiting is...
http://my.website.url/product
and the page object has...
set_url "#{my.website.url}/product"
Ah, this is happening because the README.md as shown on github reflects a couple of pull requests that have been merged in; but I haven't pushed an updated version of the site_prism gem :/ I'll be doing that over the next few days :)
Ah, when is the new gem going to be released? I just updated to version 2.6 from 2.2 and it didn't fix the problem. Should we take the latest commit from here and use that?
Well there you go. 2.7 fixed this issue for me. Thx Nat! This is a great gem!
|
2025-04-01T06:39:43.070136
| 2016-03-17T07:36:24
|
141503562
|
{
"authors": [
"deviantmk",
"kozlovic"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8798",
"repo": "nats-io/gnatsd",
"url": "https://github.com/nats-io/gnatsd/issues/225"
}
|
gharchive/issue
|
case of a node failure
Hi
So let's say, in a cluster of 3 nodes, we push a message but a node fails before the message is delivered. Is there a chance that the message would be lost, or do the other 2 nodes already have it and will deliver it (once or twice)?
Or in other words: when a message is pushed to nats.io, is the message acknowledged when it is present in all 3 nodes, or just on the one node it was pushed to, before being redistributed to the rest of the nodes?
There is no acknowledgment at all in NATS. This is a "fire-and-forget" mechanism, in that when a message is sent, there is no guarantee that it is going to be delivered. At the application level, one can use Request (with timeout) to ensure that the receiver gets the message, and resend if the reply is not received within the timeout.
At the server level, it's just a send. So if your application sends to server A, and A sends to B and C (assuming there is an interest on both), there is no guarantee that B and/or C receives it, and therefore no guarantee that subscribers attached to it receive it either. Again, the sender can get around that using request/reply if guaranteed delivery is desired.
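The request-with-timeout-and-resend pattern described above can be sketched generically (a hedged illustration, not the NATS client API; the flaky transport below is a stand-in that simulates a lost first message):

```python
import asyncio

async def request_with_retry(send_request, payload, timeout=1.0, retries=3):
    # Resend until a reply arrives within the timeout, as the comment
    # above describes for application-level delivery guarantees.
    for _ in range(retries):
        try:
            return await asyncio.wait_for(send_request(payload), timeout)
        except asyncio.TimeoutError:
            continue
    raise TimeoutError("no reply after %d attempts" % retries)

# Demo transport: the first request "gets lost" (never replies in time).
calls = {"n": 0}
async def flaky(payload):
    calls["n"] += 1
    if calls["n"] == 1:
        await asyncio.sleep(10)  # simulate a dropped message
    return b"reply:" + payload

reply = asyncio.run(request_with_retry(flaky, b"hello", timeout=0.1))
```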
Note that we have a persistence layer in the works.
thanks
|
2025-04-01T06:39:43.115657
| 2022-05-27T22:46:46
|
1251339422
|
{
"authors": [
"boreddude13",
"ripienaar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8799",
"repo": "nats-io/natscli",
"url": "https://github.com/nats-io/natscli/issues/476"
}
|
gharchive/issue
|
Token authentication bug
The logic used to choose token or username does not work properly; the connection reports: Connection: nats: Authorization Violation. When the CLI attempts to nats.Connect() it is calling this function under the hood to provide natsOpts: https://github.com/nats-io/jsm.go/blob/5a299917dacd0d9daafc9609afae4eed6107c816/natscontext/context.go#L294
The switch logic in this function will never set a token though for the created natsConnection since the CLI treats user/token the same. I was able to verify this is the case building the cli from source and removing the c.User (https://github.com/nats-io/jsm.go/blob/5a299917dacd0d9daafc9609afae4eed6107c816/natscontext/context.go#L298-L299). With this removed the token is properly set in the NATS connection and I can properly interface with my NATS server expecting a token.
Had a few bugs related to this, I think I will add a specific flag for tokens.
Looking again now I am not so sure this is the problem, we dont just always pass User from the CLI, it only sets the user if both a user and password is set, else it pass a token:
https://github.com/nats-io/natscli/blob/0f49ced81752e6eae21b3ee078ba4851114be158/cli/util.go#L749-L753
Given that, the context will do the right thing since it will only have the token option set.
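A minimal Python sketch of that selection rule (names are illustrative, not the actual nats.go option types): a credential is treated as user+password only when both are present; a lone user value is passed through as a token.

```python
def connect_options(user=None, password=None):
    # Mirrors the CLI logic quoted above: user+password only when both
    # are set; otherwise a lone value is treated as a token.
    if user and password:
        return {"user": user, "password": password}
    if user:
        return {"token": user}
    return {}
```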
I guess what you can't do though is have a context with a user+password set and then override that with a token. Is that what you were trying to do?
Closing this as working with the slight caveat, let me know if you have further issues
|
2025-04-01T06:39:43.122815
| 2023-08-07T05:05:32
|
1838689044
|
{
"authors": [
"doronbehar",
"natsukagami"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8800",
"repo": "natsukagami/mpd-mpris",
"url": "https://github.com/natsukagami/mpd-mpris/pull/42"
}
|
gharchive/pull-request
|
Add mpd-mpris.desktop
As also discussed in https://github.com/natsukagami/mpd-mpris/pull/37 .
Thanks!
|
2025-04-01T06:39:43.128559
| 2018-02-06T07:08:49
|
294653484
|
{
"authors": [
"minghz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8801",
"repo": "naturomics/CapsNet-Tensorflow",
"url": "https://github.com/naturomics/CapsNet-Tensorflow/pull/56"
}
|
gharchive/pull-request
|
Plot vector distribution
Plotting the major variables of CapsNet in histograms and distribution plots.
This will be used later for estimating the accuracy needed for fixed point number representation.
Sorry, I meant to PR into my own fork
|
2025-04-01T06:39:43.132245
| 2017-10-10T19:35:18
|
264349847
|
{
"authors": [
"SmokinLeather",
"markheath"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8802",
"repo": "naudio/NAudio",
"url": "https://github.com/naudio/NAudio/issues/247"
}
|
gharchive/issue
|
Naudio WPF Demo 8 band Equalizer Frequency ranges
Hi Mark, et al,
I'm working with the Naudio WPF Demo 8 band Equalizer. It works nicely. But I can't figure out the audible frequency bands (0-22KHz range) for each of the band sliders.
The demo code looks like this...
bands = new EqualizerBand[]
{
new EqualizerBand {Bandwidth = 0.8f, Frequency = 100, Gain = 0},
new EqualizerBand {Bandwidth = 0.8f, Frequency = 200, Gain = 0},
new EqualizerBand {Bandwidth = 0.8f, Frequency = 400, Gain = 0},
new EqualizerBand {Bandwidth = 0.8f, Frequency = 800, Gain = 0},
new EqualizerBand {Bandwidth = 0.8f, Frequency = 1200, Gain = 0},
new EqualizerBand {Bandwidth = 0.8f, Frequency = 2400, Gain = 0},
new EqualizerBand {Bandwidth = 0.8f, Frequency = 4800, Gain = 0},
new EqualizerBand {Bandwidth = 0.8f, Frequency = 9600, Gain = 0},
};
So assuming standard audio spans from 0Hz to 22050 Hz, how does the Frequency parameter in the EqualizerBand constructor map to the 0-22KHz range?
The frequency parameter is in Hz, and you can give it any value (obviously staying below Nyquist frequency - half the sample rate). I kept the maximum band fairly low in this example to avoid issues with lower sample rates, but no reason why you can't go higher.
Mark,
Thanks for the quick reply.
Ok, so for audio with a 44.1K sample rate I could set Frequency as high as 22050?
Example
new EqualizerBand {Bandwidth = 0.8f, Frequency = 22050, Gain = 0},
What I might do is get the sample rate in the ISampleProvider constructor and calc the Frequency for each band accordingly.
I found this link that has a good explanation of how to choose the Equalizer bands frequencies.
https://www.presonus.com/learn/technical-articles/equalizer-terms-and-tips
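For example, octave-spaced band centers capped below the Nyquist frequency can be computed like this (a sketch only; the band layout is illustrative and this is not NAudio API code):

```python
def eq_band_frequencies(sample_rate, start=100.0, bands=8):
    # Band centers double each octave and must stay below the
    # Nyquist frequency (half the sample rate), per the discussion above.
    nyquist = sample_rate / 2.0
    freqs = []
    f = start
    while len(freqs) < bands and f < nyquist:
        freqs.append(f)
        f *= 2
    return freqs
```

At a 44.1 kHz sample rate this yields eight octave bands from 100 Hz up to 12.8 kHz, all safely below 22050 Hz; at lower sample rates the list is automatically shortened.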
With this info, I'm closing this issue. If anyone has anything to add, please do comment below.
|
2025-04-01T06:39:43.145906
| 2023-05-02T21:03:57
|
1693098305
|
{
"authors": [
"glennmatthews",
"itdependsnetworks"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8803",
"repo": "nautobot/nautobot",
"url": "https://github.com/nautobot/nautobot/pull/3679"
}
|
gharchive/pull-request
|
Change REST API to use HyperlinkedModelSerializer
Closes: #DNE
What's Changed
Change the base REST API serializer inheritance from ModelSerializer to HyperlinkedModelSerializer. This wasn't quite a drop-in replacement because, for whatever reason, DRF's HyperlinkedModelSerializer, HyperlinkedRelatedField, and HyperlinkedIdentityField classes are not aware of URL namespacing, and so they default to trying to automatically reverse URLs like <modelname>-detail instead of the <applabel>-api:<modelname>-detail we need. I've overridden what I believe to be all of the places where this was being used incorrectly.
This lets us remove the explicit url field from most of our serializers as a bonus.
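The naming difference boils down to the following (a plain-Python illustration of the two view-name shapes, not DRF code; the model and app-label values are hypothetical examples):

```python
def drf_default_view_name(model_name):
    # What DRF's hyperlinked fields try to reverse by default.
    return f"{model_name}-detail"

def namespaced_view_name(app_label, model_name):
    # The namespaced form needed when API URLs live under "<applabel>-api".
    return f"{app_label}-api:{model_name}-detail"
```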
TODO
[x] Explanation of Change(s)
[ ] Added change log fragment(s) (for more information see the documentation)
[ ] Attached Screenshots, Payload Example
[ ] Unit, Integration Tests
[ ] Documentation Updates (when adding/changing features)
[ ] Example Plugin Updates (when adding/changing features)
[ ] Outline Remaining Work, Constraints from Design
Can you confirm/deny that the effect of this is that if as a plugin developer, I use BaseModelSerializer, it will automatically generate a field called "url" and it will correctly link.
Can you confirm/deny that the effect of this is that if as a plugin developer, I use BaseModelSerializer, it will automatically generate a field called "url" and it will correctly link.
That should be correct, yes.
To be clear that is not the primary intent of this PR, but it is a bonus. I'll add more details tomorrow.
|
2025-04-01T06:39:43.180604
| 2018-05-25T10:56:36
|
326476388
|
{
"authors": [
"coveralls",
"jongmoon",
"younkue"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8815",
"repo": "naver/egjs-view360",
"url": "https://github.com/naver/egjs-view360/pull/211"
}
|
gharchive/pull-request
|
refactor(SpriteImage): Use image tag & transform property
Issue
#210
Details
Use image tag & transform property instead of background property enhancement
Coverage decreased (-0.2%) to 92.852% when pulling 3a302da0fe8153bfa4e8e3902a2a166fccb4af85 on jongmoon:SpriteImage#210 into 6adcc218507258844050137c9145bdfe5f336abe on naver:master.
LGTM
|
2025-04-01T06:39:43.188912
| 2018-04-02T11:15:34
|
310460035
|
{
"authors": [
"RoySRose",
"gsmba6"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8816",
"repo": "naver/pinpoint",
"url": "https://github.com/naver/pinpoint/issues/3965"
}
|
gharchive/issue
|
Pinpoint- configuration support
Description
----> I've downloaded - pinpoint-collector-1.7.2-SNAPSHOT,pinpoint-web-1.7.2-SNAPSHOT,pinpoint-agent-1.7.2-SNAPSHOT.tar,hbase-1.2.6-bin.tar
----> Without any modification, I've started HBASE and started the collector and web in TOMCAT version 9.0.6
----> I extracted the agent and placed it in the same folder where all the files are located.
----> wrote one small JAVA program to wait for half an hour
----> executed with the below command [java test -javaagent:/root/Documents/pinpointagent/pinpointagent.jar - DpinPoint.applicationaName=testapp -DpinPoint.agentId=count]
Note: the program executed successfully.
I've also run the HBASE table creation and I can see the 15 tables in the HBASE UI
Environment
ROOT user
Linux Lin 4.13.0-kali1-amd64
Additional Info
Now I'm stuck and don't know how to move further. Could anyone support on this? Thanks in advance.
Hello, @gsmba6
correct me if I'm wrong.
You have completed setting up HBASE.
successfully executed PINPOINT-AGENT, PINPOINT-COLLECTOR, PINPOINT-WEB
small JAVA program running without any problems.
I believe there isn't anything else to take care of besides monitoring your application (in this case the small JAVA program) through PINPOINT-WEB (for the installation of WEB, take a look at here)
Depending on the configuration that you configured for PINPOINT-WEB in TOMCAT, you can access PINPOINT-WEB, and the first screen that you see will be something like this.
@RoySRose thanks for the quick turn around.
My issue is that the Pinpoint web UI launches, but I can't see the application name or ID in the web UI. It's blank, and my application dropdown is disabled.
@gsmba6
I don't think I can be a much help without any logs or further information.
Since you have downloaded all
pinpoint-collector-1.7.2-SNAPSHOT
pinpoint-web-1.7.2-SNAPSHOT
pinpoint-agent-1.7.2-SNAPSHOT.tar
hbase-1.2.6-bin.tar
The only thing that I can think of is https://github.com/naver/pinpoint/issues/2273
@RoySRose - Issue resolved.
I've appended the Tomcat Catalina option in the .sh file.
But my Eclipse uses its own configuration; I gave the java agent details in the VM arguments, the issue was resolved, and my agent appeared in the web UI.
I can monitor the transactions.
|
2025-04-01T06:39:43.200801
| 2019-09-06T07:17:29
|
490170626
|
{
"authors": [
"RoySRose",
"Xylus",
"jicai"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8817",
"repo": "naver/pinpoint",
"url": "https://github.com/naver/pinpoint/issues/5972"
}
|
gharchive/issue
|
where can i get the code of test-pinpoint-demo?
Prerequisites
Please check the FAQ, and search existing issues for similar questions before creating a new issue. YOU MAY DELETE THIS PREREQUISITES SECTION.
I have checked the FAQ, and issues and found no answer.
where can i get the code of test-pinpoint-demo?
thank you.
Hello, @jicai
what do you mean by test-pinpoint-demo
@RoySRose
http://<IP_ADDRESS>:10123/#/main
Are the APIGateway, Shopping-Api, Shopping-Order projects open source?
@jicai
Oh. the demo source code isn't currently an opensource.
@Xylus
any plans?
@jicai Can you access https://github.com/Xylus/pinpoint-demo-apps?
It's set to public so you'll be able to clone and play around with it.
@Xylus
I got it, Thank you.
|
2025-04-01T06:39:43.336301
| 2020-12-31T10:42:30
|
776914425
|
{
"authors": [
"NedDerick",
"daniel-dx"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8840",
"repo": "ncform/ncform",
"url": "https://github.com/ncform/ncform/issues/222"
}
|
gharchive/issue
|
Capturing time fields
Thanks for a brilliant library!
I'm trying to generate a from that allows the user to capture a "from-date/time" and a "to-date/time".
What is the best way to do this with NCForm?
If I define the fields with a widget of type "datetime" - they get shown a nice date-picker (I don't mind the date part), but I can't find a way to allow them to capture the time.
I tried setting the widget to "datetime-local" - but then VueJs thinks that there is no component called ncform-datetime-local?
If there is a workaround for this - I would gladly contribute an extra page in the documentation to describe how to do it, so that others can find this easily.
Try this:
{
"type": "object",
"properties": {
"name": {
"type": "string",
"default": "dx: {{+new Date()}}",
"ui": {
"widget": "date-picker"
}
},
"email": {
"type": "string"
},
"age": {
"type": "integer"
},
"adult": {
"type": "boolean"
}
}
}
You can paste the code to Playground
Thanks for the help Daniel. If I paste it into playground, I get the date-picker and this time the date is initialized from the dx-expression, but the "time-part" is still missing.
Is there a way to capture the hours and minutes ?
{
"type": "object",
"properties": {
"name": {
"type": "string",
"default": "dx: {{+new Date()}}",
"ui": {
"widget": "date-picker",
"widgetConfig": {
"type": "datetime"
}
}
},
"email": {
"type": "string"
},
"age": {
"type": "integer"
},
"adult": {
"type": "boolean"
}
}
}
If you don't need second, try this:
{
"type": "object",
"properties": {
"name": {
"type": "string",
"default": "dx: {{+new Date()}}",
"ui": {
"widget": "date-picker",
"widgetConfig": {
"type": "datetime",
"format": "yyyy-MM-dd HH:mm"
}
}
},
"email": {
"type": "string"
},
"age": {
"type": "integer"
},
"adult": {
"type": "boolean"
}
}
}
Thanks Daniel. That worked absolutely perfectly. Really cool!!!
I have one last question, but I'll create another issue for that one - since it;s a little different.
|
2025-04-01T06:39:43.341447
| 2017-11-13T20:44:09
|
273571323
|
{
"authors": [
"blakek",
"nchaulet"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8841",
"repo": "nchaulet/node-geocoder",
"url": "https://github.com/nchaulet/node-geocoder/issues/240"
}
|
gharchive/issue
|
Discussion: Cache responses
Would caching results be positively received if added? If so, I'd like to work on a PR for this.
My idea for implementation of the cache would be similar to that of dataloader. There would be a cache option added to the options object that would accept any object with an API similar to Map.
According to the Google Maps ToS, the cache cannot persist for more than 30 days. Caching is acceptable if used "for the purpose of improving the performance...due to network latency". Because of the 30-day limitation (and because many would probably be interested in caching for fewer days), I'm thinking the options would also have a cacheTimeout option. cacheTimeout's value would be a Number representing the number of milliseconds from the time of caching that the entry would be considered stale and a network request would happen.
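A minimal sketch of the proposed Map-like cache with a cacheTimeout, written in Python for illustration (the TTLCache class and its API are hypothetical, not part of node-geocoder):

```python
import time

class TTLCache:
    """Map-like cache whose entries go stale after ttl_ms milliseconds."""

    def __init__(self, ttl_ms):
        self.ttl_ms = ttl_ms
        self._store = {}  # key -> (value, stored_at_ms)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if (time.monotonic() * 1000) - stored_at > self.ttl_ms:
            # Stale entry: behave as a miss so the caller makes a network request.
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() * 1000)

# Usage: consult the cache before geocoding over the network.
cache = TTLCache(ttl_ms=24 * 60 * 60 * 1000)  # well under the 30-day ToS limit
cache.set("1600 Amphitheatre Pkwy", {"lat": 37.42, "lon": -122.08})
```

A wrapper around the geocoder would call `get` first and only fall through to the provider on a miss or a stale entry.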
Thanks in advance!
Thanks for opening the discussion on this subject.
I am not sure about implementing the cache inside the library, but I think it could be nice to have an F.A.Q. or an explanation in the readme of how you can implement caching for node-geocoder.
Let me know your opinions on that.
I hadn't thought about that. Adding an example to the readme would certainly be simpler. I can make a PR if you're interested. Maybe a small example under the "More" section?
Yes, looks like a good idea, maybe 👍
|
2025-04-01T06:39:43.343566
| 2020-03-24T11:12:23
|
586880388
|
{
"authors": [
"jvolker",
"nchaulet",
"pa-lem"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8842",
"repo": "nchaulet/node-geocoder",
"url": "https://github.com/nchaulet/node-geocoder/issues/297"
}
|
gharchive/issue
|
HERE: API updates - appId and appCode not supported anymore
Due to recent updates, appId and appCode are not supported anymore (for new HERE projects; old projects still work for now, but will be deprecated).
We need to make a standard fetch with url and token for example:
https://geocode.search.hereapi.com/v1/geocode?q=<YOUR_SEARCH>&apiKey=<YOUR_API_KEY>
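For illustration, a small sketch of building that new-style URL (the helper name is hypothetical; only the base URL and the q/apiKey parameters come from the HERE example above):

```python
from urllib.parse import urlencode

def build_geocode_url(query, api_key):
    """Build the new-style HERE geocoding URL (apiKey, not appId/appCode)."""
    base = "https://geocode.search.hereapi.com/v1/geocode"
    return base + "?" + urlencode({"q": query, "apiKey": api_key})
```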
Thanks to report the issue going to fix this soon
Duplicate of #286
Resolved by #302
|
2025-04-01T06:39:43.355479
| 2019-03-07T20:43:42
|
418504184
|
{
"authors": [
"codecov-io",
"quaggoth"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8843",
"repo": "ncr-devops-platform/nagios-foundation",
"url": "https://github.com/ncr-devops-platform/nagios-foundation/pull/92"
}
|
gharchive/pull-request
|
output sha512 hashes with release archives
Resolves #91
Codecov Report
Merging #92 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #92 +/- ##
=======================================
Coverage 79.08% 79.08%
=======================================
Files 11 11
Lines 483 483
=======================================
Hits 382 382
Misses 101 101
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 17aee12...5d4c58c. Read the comment docs.
|
2025-04-01T06:39:43.370301
| 2015-04-12T21:56:40
|
67964766
|
{
"authors": [
"ncw",
"sfenwick"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8844",
"repo": "ncw/stressdisk",
"url": "https://github.com/ncw/stressdisk/issues/4"
}
|
gharchive/issue
|
Problem on Mac OS 10.6.8 with new drive
First time user of stressdisk, so not sure where the issue is with this:
Mac Pro (2010), Mac OS 10.6.8
New WD Black 4TB in external eSATA dock. Dock has a fan and drive is room temp to the touch.
I started stressdisk yesterday with the command:
./stressdisk -duration=24h0m0s -logfile="stressdisk.log" run /Volumes/NewDisk/
I was writing fine, about 175MB/s:
2015/04/11 14:19:54 Writing file "/Volumes/NewDisk/TST_0024" size<PHONE_NUMBER>
2015/04/11 14:20:00 Writing file "/Volumes/NewDisk/TST_0025" size<PHONE_NUMBER>
2015/04/11 14:20:05 Writing file "/Volumes/NewDisk/TST_0026" size<PHONE_NUMBER>
2015/04/11 14:20:11 Writing file "/Volumes/NewDisk/TST_0027" size<PHONE_NUMBER>
2015/04/11 14:20:16 Writing file "/Volumes/NewDisk/TST_0028" size<PHONE_NUMBER>
As it tried to fill the disk, it slowed way, way down:
2015/04/12 03:13:40 Writing file "/Volumes/NewDisk/TST_3989" size<PHONE_NUMBER>
2015/04/12 03:13:56 Writing file "/Volumes/NewDisk/TST_3990" size<PHONE_NUMBER>
2015/04/12 08:48:03 Writing file "/Volumes/NewDisk/TST_3991" size<PHONE_NUMBER>
2015/04/12 10:52:18 Writing file "/Volumes/NewDisk/TST_3992" size<PHONE_NUMBER>
2015/04/12 12:03:40 Writing file "/Volumes/NewDisk/TST_3993" size<PHONE_NUMBER>
2015/04/12 13:14:52 Writing file "/Volumes/NewDisk/TST_3994" size<PHONE_NUMBER>
2015/04/12 14:26:27 Writing file "/Volumes/NewDisk/TST_3995" size<PHONE_NUMBER>
We're past the 24 hours (started 4/11 14:17:42, currently 14:50:45) and it's now reading at about 25MB/s, which I assume is the read random portion of the test. Since it finished writing after the timer should have expired, will it ever stop? Also, are any error messages flushed out of whatever stream they are written to, so that if there are no errors visible I can assume that there have been none?
I'll let it continue to run for now. If I recall, if I kill it now and restart it, will it detect the disk as full and just read the existing files?
As the disk becomes full it will have to write fragmented files most likely, so it isn't entirely unexpected it slows down. This might be caused by something else - maybe seeking back to the catalogue - I'm not sure.
The -duration flag only refers to the read portion of the test, so it should stop 24 hours after it finished writing the test files.
Any errors are summarised in the stats which are printed every minute and they will be printed at the end of the run too.
And yes, if you stop the process, it will detect the disks as full and carry on.
How did the test go?
I'm going to close this I haven't heard from you in a while. If you have more info then please re-open.
Thanks
Nick
|
2025-04-01T06:39:43.380582
| 2017-08-18T10:20:29
|
251202676
|
{
"authors": [
"codecov-io",
"ndaidong"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8845",
"repo": "ndaidong/article-parser",
"url": "https://github.com/ndaidong/article-parser/pull/63"
}
|
gharchive/pull-request
|
v2.1.1
Switch to async/await syntax
Codecov Report
Merging #63 into master will increase coverage by 0.35%.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #63 +/- ##
=========================================
+ Coverage 97.54% 97.9% +0.35%
=========================================
Files 23 23
Lines 489 525 +36
=========================================
+ Hits 477 514 +37
+ Misses 12 11 -1
Impacted Files
Coverage Δ
src/main.js
100% <100%> (ø)
:arrow_up:
src/parsers/index.js
96.82% <100%> (+2.08%)
:arrow_up:
src/utils/standalizeArticle.js
100% <0%> (ø)
:arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 88ab41c...94474a9. Read the comment docs.
|
2025-04-01T06:39:43.385701
| 2011-11-26T17:20:32
|
2356094
|
{
"authors": [
"duckinator",
"fredreichbier"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8846",
"repo": "nddrylliog/rock",
"url": "https://github.com/nddrylliog/rock/issues/337"
}
|
gharchive/issue
|
Global variables are mixed up
See this gist: https://gist.github.com/1395995 the two strings are swapped
Oooh, fun bug :D
|
2025-04-01T06:39:43.389357
| 2022-03-20T18:40:14
|
1174644281
|
{
"authors": [
"ndmitchell",
"shayne-fletcher"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8847",
"repo": "ndmitchell/hlint",
"url": "https://github.com/ndmitchell/hlint/pull/1362"
}
|
gharchive/pull-request
|
Add support for language specifier GHC2021
CmdLine.getExtensions now considers the data constructor GHC2021 (in addition to Haskell98 and Haskell2010).
Thanks!
|
2025-04-01T06:39:43.470378
| 2022-07-20T20:27:25
|
1311859168
|
{
"authors": [
"gagdiez",
"maxhr",
"volovyk-s"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8848",
"repo": "near/create-near-app",
"url": "https://github.com/near/create-near-app/issues/1851"
}
|
gharchive/issue
|
Suggestion - remove rust installation script
I think automatic installation of the Rust toolchain should be completely removed from this tool, because:
This is too low-level a system thing and it's always dangerous
If someone wants to develop in Rust, they should already have the toolchain
Even if we have a dev who is seeing Rust for the very first time, the best way to learn a new language's workflows and toolchains is from the actual Rust docs site, where they'll find explanations of what rustup, targets and so on are. We shouldn't do any magic installations for them
We should only have a message saying that you need the wasm32-unknown-unknown target, with a link to the Rust docs page about it.
@volovyk-s @gagdiez wdyt?
I agree, lets simply detect if the user has Rust installed, and warn them if not.
Ok, we can get rid of this magic. But let's make sure we point users to the Rust doc that will help them to install everything they need.
|
2025-04-01T06:39:43.472570
| 2021-05-18T15:27:30
|
894506343
|
{
"authors": [
"gonta71",
"nagisa",
"volovyk-s"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8849",
"repo": "near/near-cli",
"url": "https://github.com/near/near-cli/issues/762"
}
|
gharchive/issue
|
SIGSEGV error when using Ledger
Probably originated from Ledger itself.
To reproduce: near send acc1.near acc2.near --useLedgerKey. Reproducable on both near-cli 1.6.0 and near-cli 2.0.0.
The transaction is successfully executed, the error occurs after it.
Also related to #758.
I believe this and #758 are the same issue, so closing this as a duplicate of it.
|
2025-04-01T06:39:43.473691
| 2021-11-14T23:20:42
|
1053069373
|
{
"authors": [
"ilblackdragon",
"stefanopepe"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8850",
"repo": "near/near-wallet",
"url": "https://github.com/near/near-wallet/issues/2250"
}
|
gharchive/issue
|
Allow custom HD path beyond single number
Currently when recovering wallet, user can only change the last number in the derivation paths.
Users might have keys on other derivation paths; the wallet should support that.
This may be confusing for some users and is normally not offered by other wallets. Would it be acceptable if this can be managed via dev console, e.g. a function that takes as arguments the path integers?
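To make the request concrete, here is a sketch of parsing a full user-supplied derivation path into its component indices rather than accepting only the last number (hypothetical helper, Python for illustration; 397 is the SLIP-44 coin type registered for NEAR):

```python
def parse_hd_path(path):
    """Parse an HD derivation path like "m/44'/397'/0'/0'/1'" into
    a list of (index, hardened) pairs."""
    parts = path.strip().split("/")
    if parts[0] != "m":
        raise ValueError("path must start with 'm'")
    out = []
    for part in parts[1:]:
        hardened = part.endswith("'") or part.endswith("h")  # both markers are common
        index = int(part.rstrip("'h"))
        out.append((index, hardened))
    return out
```

A dev-console function taking the path integers, as suggested above, could be built on top of a parser like this.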
|
2025-04-01T06:39:43.478531
| 2022-02-25T13:44:13
|
1150460758
|
{
"authors": [
"esaminu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8851",
"repo": "near/near-wallet",
"url": "https://github.com/near/near-wallet/pull/2507"
}
|
gharchive/pull-request
|
Feat farming validator UI for wallet testing
Internal testing branch for the new stake farming feature as implemented on #2398
Updated LOCKUP_ACCOUNT_ID_SUFFIX on render preview to lockup.devnet. We can now use https://testnet.lockup.tech/ to deploy lockups for testing. Just update wallet.testnet.near.org to near-wallet-pr-2507.onrender.com when logging in and signing transactions.
Validators for testing:
zentriav2.factory.colorpalette.testnet
domanodes.factory.colorpalette.testnet
testnet preview url
To test, the following farm validators can be used:
zentriav2.factory.colorpalette.testnet
domanodes.factory.colorpalette.testnet
To test lockup staking, use https://testnet.lockup.tech/ to add a lockup to your account and update wallet.testnet.near.org to near-wallet-pr-2512.onrender.com when logging in and signing transactions.
|
2025-04-01T06:39:43.489096
| 2023-10-10T14:07:21
|
1935455423
|
{
"authors": [
"Tguntenaar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8852",
"repo": "near/neardevhub-widgets",
"url": "https://github.com/near/neardevhub-widgets/pull/291"
}
|
gharchive/pull-request
|
Feature/migrate GitHub to existing framework
Resolves #265
Still a WIP
Testnet Preview
Acceptance Criteria:
[x] GitHub plugin aligns with the new framework (https://github.com/near/neardevhub-widgets/issues/253). So an admin can enable, configure, or remove the plugin in the community settings page.
[x] If a community admin enables the GitHub plugin, the GitHub tab shows up on the community page navigation.
[x] Admin can configure the plugin within the community settings page, not from the old GitHub tab. Remove any instances from the GitHub tab to configure the page from the tab.
[x] If a community admin enables the GitHub plugin, they can click configure to customize the tab name as well as the existing GitHub-specific fields, including:
[x] Title*
[x] GitHub Repository URL*. Note: We will need to add a prefix here (similar to the about links section) that says: "https://github.com/" to make it clear what part of the GitHub URL the user needs to enter.
[x] Ticket type* (Issue, Pull Request)
[x] Ticket State*
[x] Description*
[x] New Column* (plus the supporting fields)
[x] Community admin should have a way to preview GitHub board from the configure settings.
[ ] If an admin enables the GitHub plugin, the GitHub field(s) with * above are required
[ ] There is no disruption to old/existing GitHub plugins
[ ] There is migration option provided for existing communities to transition to the new framework
Notes:
In order to edit the tab name, issue #283 has to be finished, since it implements that functionality.
I'm closing this draft, it is already handled by PR 344
|
2025-04-01T06:39:43.513955
| 2023-02-27T08:13:00
|
1600691397
|
{
"authors": [
"vtheskeleton"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8853",
"repo": "neatnik/omg.lol",
"url": "https://github.com/neatnik/omg.lol/issues/616"
}
|
gharchive/issue
|
[Bug] Pressing control in sign up address checking box causes re-checking
Bug Description
If you press the Control key on your keyboard while focused on the address lookup box, it will check again even though the content has not changed
Steps to Reproduce
head to https://home.omg.lol/sign-up
enter something in the box
press ctrl
observe it check again
This is the most minor thing ever, but I noticed it, so you're getting an issue
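One common fix for this class of bug is to remember the last checked value and skip the lookup when it hasn't changed; a Python sketch of the idea (all names hypothetical, not omg.lol's actual code):

```python
class AddressChecker:
    """Re-run the lookup only when the input value actually changed,
    so modifier keys like Ctrl don't trigger duplicate checks."""

    def __init__(self, lookup):
        self.lookup = lookup
        self.last_value = None
        self.calls = 0

    def on_key_event(self, current_value):
        if current_value == self.last_value:
            return None  # e.g. a bare Ctrl press: input unchanged, skip the check
        self.last_value = current_value
        self.calls += 1
        return self.lookup(current_value)
```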
|
2025-04-01T06:39:43.523036
| 2024-05-29T19:54:48
|
2324143406
|
{
"authors": [
"kildre"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8854",
"repo": "nebari-dev/jhub-apps",
"url": "https://github.com/nebari-dev/jhub-apps/pull/311"
}
|
gharchive/pull-request
|
Home page updates
Updates to the home page, mainly stylistic changes, but also text updates and iconography.
Reference Issues or PRs
https://gitlab.jatic.net/jatic/team-metrostar/t-e-platform/-/issues/595
This update addresses the issues in the attached ticket.
What does this implement/fix?
This implementation fixes stylistic and design issues
Put an x in the boxes that apply
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds a feature)
[ ] Breaking change (fix or feature that would cause existing features not to work as expected)
[ ] Documentation Update
[x] Code style update (formatting, renaming)
[x] Refactoring (no functional changes, no API changes)
[ ] Build related changes
[ ] Other (please describe):
Testing
[x] Did you test the pull request locally?
[x] Did you add new tests?
Documentation
Access-centered content checklist
Text styling
[ ] The content is written with plain language (where relevant).
[ ] If there are headers, they use the proper header tags (with only one level-one header: H1 or # in markdown).
[ ] All links describe where they link to (for example, check the Nebari website).
[ ] This content adheres to the Nebari style guides.
Non-text content
[ ] All content is represented as text (for example, images need alt text, and videos need captions or descriptive transcripts).
[ ] If there are emojis, there are not more than three in a row.
[ ] Don't use flashing GIFs or videos.
[ ] If the content were to be read as plain text, it still makes sense, and no information is missing.
Any other comments?
Code-wise looks great, just 1 thing to clean up. Also, please include the following in this PR:
Fix the header opacity
Fix the row spacing between App rows
Ensure the home navigation button can be themed
✓, ✓, ✓
|
2025-04-01T06:39:43.524911
| 2024-03-13T15:51:17
|
2184350542
|
{
"authors": [
"krassowski",
"marcelovilla"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8855",
"repo": "nebari-dev/nebari-docs",
"url": "https://github.com/nebari-dev/nebari-docs/pull/419"
}
|
gharchive/pull-request
|
Fix typo causing bad rendering (extra backtick)
Before:
After:
Good catch! Thanks @krassowski!
|
2025-04-01T06:39:43.535997
| 2023-11-25T14:43:35
|
2010684136
|
{
"authors": [
"CorvusYe",
"codecov-commenter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8856",
"repo": "nebula-contrib/ngbatis",
"url": "https://github.com/nebula-contrib/ngbatis/pull/267"
}
|
gharchive/pull-request
|
feat: support specify space by param, fix #265
For example:
Boolean spaceFromParam(@Param("specifySpace") String specifySpace);
<select id="spaceFromParam" space="${specifySpace}" spaceFromParam="true">
RETURN true;
</select>
If specifySpace is not the current space, NgBatis will automatically switch to it.
This can reduce the use of `USE <space>` statements in nGQL.
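The switching behavior described above can be sketched in a few lines (a Python toy model; the session dict and ensure_space helper are hypothetical, not NgBatis's actual API):

```python
def ensure_space(session, wanted):
    """Switch the session to `wanted` only when it differs from the
    current space, so no redundant USE statements are issued."""
    if session["space"] != wanted:
        session["statements"].append(f"USE {wanted}")
        session["space"] = wanted
    return session
```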
Codecov Report
Attention: 23 lines in your changes are missing coverage. Please review.
Comparison is base (d781052) 0.00% compared to head (16da500) 0.00%.
Report is 1 commits behind head on master.
Files
Patch %
Lines
...ebula/contrib/ngbatis/io/MapperResourceLoader.java
0.00%
9 Missing :warning:
.../org/nebula/contrib/ngbatis/proxy/MapperProxy.java
0.00%
7 Missing :warning:
...g/nebula/contrib/ngbatis/config/ParseCfgProps.java
0.00%
4 Missing :warning:
...org/nebula/contrib/ngbatis/models/MethodModel.java
0.00%
3 Missing :warning:
:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
Additional details and impacted files
@@ Coverage Diff @@
## master #267 +/- ##
======================================
Coverage 0.00% 0.00%
======================================
Files 75 75
Lines 2568 2583 +15
Branches 278 279 +1
======================================
- Misses 2568 2583 +15
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
2025-04-01T06:39:43.538769
| 2024-07-04T15:38:49
|
2391119937
|
{
"authors": [
"pelletencate"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8857",
"repo": "nebulab/erb-formatter",
"url": "https://github.com/nebulab/erb-formatter/pull/56"
}
|
gharchive/pull-request
|
Improve formatting of multiline ERB
This PR does 2 things:
Subtract the indentation level from Syntax Tree line width
This could be improved even further: the line length should take the ERB start- and end-tag overhead into consideration
Change the way multiline ERB is formatted. It will do the following:
If formatted ruby yields more than 1 line, then:
The ERB tag start and end will be on separate lines
The entire Ruby code will be indented with an additional 2 spaces
(Due to the fact that I was adding to repetitive logic, I chose to optimize the case statement. If this is undesirable and we prefer even more repetition, I'm OK with having that refactored back)
Oh @elia I completely missed #7 and went my own way with this one.
|
2025-04-01T06:39:43.582354
| 2018-04-25T07:41:00
|
335129095
|
{
"authors": [
"nedbat"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8858",
"repo": "nedbat/coveragepy",
"url": "https://github.com/nedbat/coveragepy/issues/655"
}
|
gharchive/issue
|
there is no flexibility with different python versions.
Originally reported by Abel Asefa (Bitbucket: abelandk, GitHub: abelandk)
When the main process is running with Python 3 but some other subprocesses were running with Python 2, it could not get the correct result and produced a SyntaxError.
#!python
File ".../python/python3.5.2/python3.5/site-packages/coverage/parser.py", line 363, in __init__
self.code = compile_unicode(text, filename, "exec")
File ".../python/python3.5.2/lib/python3.5/site-packages/coverage/phystokens.py", line 286, in compile_unicode
code = compile(source, filename, mode)
File ".../constructor.py", line 130
except TypeError, exc:
^
SyntaxError: invalid syntax
During handling of the above exception, another exception occurred:
File "/lab/python/python3.5.2/lib/python3.5/site-packages/coverage/control.py", line 1095, in html_report
return reporter.report(morfs)
File "/lab/python/python3.5.2/lib/python3.5/site-packages/coverage/html.py", line 139, in report
self.report_files(self.html_file, morfs, self.config.html_dir)
File "/lab/python/python3.5.2/lib/python3.5/site-packages/coverage/report.py", line 91, in report_files
report_fn(fr, self.coverage._analyze(fr))
File "/lab/python/python3.5.2/lib/python3.5/site-packages/coverage/control.py", line 968, in _analyze
return Analysis(self.data, it)
File "/lab/python/python3.5.2/lib/python3.5/site-packages/coverage/results.py", line 19, in __init__
self.statements = self.file_reporter.lines()
File "/lab/python/python3.5.2/lib/python3.5/site-packages/coverage/python.py", line 186, in lines
return self.parser.statements
File "/lab/python/python3.5.2/lib/python3.5/site-packages/coverage/python.py", line 181, in parser
self._parser.parse_source()
File "/lab/python/python3.5.2/lib/python3.5/site-packages/coverage/parser.py", line 237, in parse_source
self._raw_parse()
File "/lab/python/python3.5.2/lib/python3.5/site-packages/coverage/parser.py", line 206, in _raw_parse
self.raw_statements.update(self.byte_parser._find_statements())
File "/lab/python/python3.5.2/lib/python3.5/site-packages/coverage/parser.py", line 96, in byte_parser
self._byte_parser = ByteParser(self.text, filename=self.filename)
File "/lab/python/python3.5.2/lib/python3.5/site-packages/coverage/parser.py", line 367, in __init__
filename, synerr.msg, synerr.lineno
coverage.misc.NotPython: Couldn't parse '.../python2/yaml/constructor.py' as Python source: 'invalid syntax' at line 130
Bitbucket: https://bitbucket.org/ned/coveragepy/issue/655
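The failure boils down to compile() rejecting Python-2-only syntax under a Python 3 interpreter; a small sketch of detecting that case up front rather than crashing mid-report:

```python
def parseable_in_this_python(source, filename="<string>"):
    """Return True if `source` compiles under the *running* interpreter.
    Python-2-only syntax such as `except TypeError, exc:` fails under 3.x."""
    try:
        compile(source, filename, "exec")
        return True
    except SyntaxError:
        return False
```

Note that coverage.py also offers an `[report] ignore_errors = True` configuration option that makes reporting skip files it cannot parse.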
@abelandk: any more information?
|
2025-04-01T06:39:43.587120
| 2022-09-09T09:10:48
|
1367526018
|
{
"authors": [
"Ahmad-A0",
"FahadulShadhin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8859",
"repo": "neetcode-gh/leetcode",
"url": "https://github.com/neetcode-gh/leetcode/pull/1086"
}
|
gharchive/pull-request
|
Adding 678-Valid-Parenthesis-String.cpp
File(s) Modified: 678-Valid-Parenthesis-String.cpp
Language(s) Used: C++
Submission URL: _https://leetcode.com/submissions/detail/795416800/_
Thanks, @FahadulShadhin!
|
2025-04-01T06:39:43.600484
| 2024-07-23T22:32:08
|
2426229379
|
{
"authors": [
"neilenns"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8860",
"repo": "neilenns/streamdeck-trackaudio",
"url": "https://github.com/neilenns/streamdeck-trackaudio/issues/163"
}
|
gharchive/issue
|
Actions don't show warning when station is removed from TrackAudio
To reproduce:
Connect TrackAudio
Add SEA_GND
Add a station state button that reflects SEA_GND, notice that it doesn't show the warning icon which is correct
Delete SEA_GND from TrackAudio
Result: SEA_GND station state button still looks ok
Expected result: SEA_GND station state button should show the warning icon
This will require changes on TrackAudio as well
|
2025-04-01T06:39:43.619412
| 2014-04-18T02:48:51
|
31778205
|
{
"authors": [
"ZackMattor",
"jwarkentin"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8861",
"repo": "nekuz0r/node-arp",
"url": "https://github.com/nekuz0r/node-arp/issues/5"
}
|
gharchive/issue
|
Sometimes incorrectly parsing output
I have some code that is currently calling getMAC() about once a second. Every once in a while it will return 'eth0' instead of the MAC address. It doesn't happen very often compared to how often it's called, but it happens a few times a day. This is running on a Debian Linux machine, btw.
We're hitting this problem too! No clue why it seems to be doing this. The first time I tried it returned the interface name every time, but on the second try it returns the correct mac address... If you have any insights into this let me know and I'll be happy to open a PR. We'll probably keep investigating.
Figured it out! PR incoming ;)
https://github.com/nekuz0r/node-arp/pull/8
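One robust way to avoid this class of bug is to extract the MAC address by pattern rather than by column position, so an interface name like 'eth0' can never be returned by mistake; a Python sketch for illustration (hypothetical helper, not node-arp's actual code):

```python
import re

# Six colon-separated hex groups, e.g. aa:bb:cc:dd:ee:ff
MAC_RE = re.compile(r"(?:[0-9a-fA-F]{1,2}:){5}[0-9a-fA-F]{1,2}")

def extract_mac(arp_output):
    """Pull the MAC address out of `arp -n <ip>`-style output by pattern,
    returning None when the entry is incomplete."""
    match = MAC_RE.search(arp_output)
    return match.group(0) if match else None
```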
|
2025-04-01T06:39:43.629691
| 2019-09-26T06:48:23
|
498693187
|
{
"authors": [
"evias",
"ivyfung1"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8862",
"repo": "nemfoundation/nem2-explorer",
"url": "https://github.com/nemfoundation/nem2-explorer/issues/103"
}
|
gharchive/issue
|
Suggestion: Rename of "Node" drop-down selection
I suggest changing "Node" to "Network Type / Network Name" or "Generation hash" to reflect that those nodes could be from different networks.
Maybe Network:Node?
e.g. Mainnet:http://<IP_ADDRESS>:3000/
hey there @ivyfung1 ; I would agree that there could be some information about the network name in the list.
But I think it is important to keep the node IPs there, as this is the relevant information (i.e., which node the explorer is using to read the data I see).
|
2025-04-01T06:39:43.633739
| 2020-12-03T20:40:30
|
756575454
|
{
"authors": [
"cryptoBeliever"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8863",
"repo": "nemgrouplimited/symbol-desktop-wallet",
"url": "https://github.com/nemgrouplimited/symbol-desktop-wallet/issues/769"
}
|
gharchive/issue
|
Harvesting confirmation view password input
Harvesting confirmation view password input is not visible and we need to scroll.
For other transaction types input is always visible.
Please watch video: https://share.getcloudapp.com/JruqbPel
Same with harvesting activation.
@NikolaiYurchenko could you check?
@NikolaiYurchenko I think we have still inconsistent/bad looking transaction confirmation pop-ups.
create aggregate pop-up -> https://share.getcloudapp.com/yAuZJbYn?utm_source=show
create aggregate (with multisig) -> https://share.getcloudapp.com/d5uPgp2L (three scrolls)
how it looks on cosigner side (to sign) -> https://share.getcloudapp.com/p9urewoE (details cut off at the top)
how it looks on cosigner side (which already signed tx) -> https://share.getcloudapp.com/NQuKW19m
Fixed.
|
2025-04-01T06:39:43.645893
| 2020-04-24T14:04:49
|
606338908
|
{
"authors": [
"MathiasHaudgaard",
"alexrainman",
"juan-restrepo",
"mihanizm56",
"nemtsov"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8864",
"repo": "nemtsov/json-mask",
"url": "https://github.com/nemtsov/json-mask/issues/72"
}
|
gharchive/issue
|
Mask not working as expected
I have the following json
{
"tom": 3,
"data1": {
"first": "Hello",
"second": {"third": "salude", "fourth": "nvm"}
},
"data2": {
"first": "Bye",
"second": {"third": "Cheers", "fourth": "nvm"}
}
}
And I'm applying the following mask on it.
"data1(first,second/third),data2(first,second/third)"
I expected to see
{
"data1": {
"first": "Hello",
"second": {"third": "salude"}
},
"data2": {
"first": "Bye",
"second": {"third": "Cheers"}
}
}
but I'm only getting
{
"data1": {
"first": "Hello",
"second": {"third": "salude"}
}
}
Can you help me out?
@MathiasHaudgaard thanks for letting me know! That appears to be a bug. I can reproduce it.
Ok, please let me know when it's fixed. Thanks! :smiley:
@nemtsov It seems like the error comes from the buildTree function. It exits the scope in the wrong order. When it comes to ")" it pops "/" and then it can't exit the last scope because
if (peek && peek.tag === "/") doesn't match with "("
I'm bumping into the same error right now. Are there any updates here?
any updates?
Is the library abandonware?
Hey folks, although I donβt have a ton of time to work on this library myself, Iβm more than happy to review PRs and help guide people interested in making contributions to it.
This the only bug i could find so far. Can you fix it?
What I found is that if / is not at the end of the query, whatever comes next is ignored.
Not working:
data1(first,second/third),data2(first,second/third)
Working:
data1(first,second),data2(first,second/third)
This is the only library that supports partial responses in plain JS. It is a shame just to abandon it.
This is resolved in v1.0.4 thanks for reporting it @MathiasHaudgaard
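To make the expected masking semantics concrete, here is a Python sketch that applies an already-parsed mask tree to data (the parsing of the compact string syntax is omitted; apply_mask is a hypothetical helper, not json-mask's implementation):

```python
def apply_mask(data, mask):
    """Apply a pre-parsed mask tree to `data`. `mask` maps each kept
    field to either None (keep the whole value) or a nested mask dict."""
    if not isinstance(data, dict):
        return data
    out = {}
    for key, submask in mask.items():
        if key in data:
            out[key] = data[key] if submask is None else apply_mask(data[key], submask)
    return out

# The mask "data1(first,second/third),data2(first,second/third)" parses to:
mask = {
    "data1": {"first": None, "second": {"third": None}},
    "data2": {"first": None, "second": {"third": None}},
}
```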
|
2025-04-01T06:39:43.656714
| 2020-04-25T14:07:23
|
606776947
|
{
"authors": [
"Solom00n",
"conker84",
"moxious",
"mroiter-larus"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8865",
"repo": "neo4j-contrib/neo4j-streams",
"url": "https://github.com/neo4j-contrib/neo4j-streams/issues/305"
}
|
gharchive/issue
|
CDC and Kafka Log Compaction
Hi everyone!! This issue is somewhat similar to #272, but the focus here is on the Neo4j Streams Source for CDC in tandem with Kafka Log Compaction rather than on the Neo4j Streams Procedures.
When using Change Data Capture with Kafka, one of the most common use cases is managing Kafka partitions as logs. In this scenario, Log Compaction is somehow necessary for CDC: we need a complete dataset but we must also avoid unlimited growth in the size of partitions. From the Kafka documentation:
Log compaction ensures that Kafka will always retain at least the last known value for each message key within the log of data for a single topic partition. It addresses use cases and scenarios such as restoring state after application crashes or system failure or reloading caches after application restarts during operational maintenance.
On data retention:
So far we have described only the simpler approach to data retention where old log data is discarded after a fixed period or when the log reaches some predetermined size. This works well for temporal event data such as logging where each record stands alone. However, an important class of data streams are the log of changes to keyed, mutable data (for example, the changes to a database table).
The current implementation of Neo4j Streams prevents this use case, since the message key is a combination of the transaction id and the id of the event inside the transaction, as shown in the following code (KafkaEventRouter.kt, line 86, sendEvent method):
private fun sendEvent(partition: Int, topic: String, event: StreamsTransactionEvent) {
if (log.isDebugEnabled) {
log.debug("Trying to send a transaction event with txId ${event.meta.txId} and txEventId ${event.meta.txEventId} to kafka")
}
val producerRecord = ProducerRecord(topic, partition, System.currentTimeMillis(),
"${event.meta.txId + event.meta.txEventId}-${event.meta.txEventId}",
JSONUtils.writeValueAsBytes(event))
send(producerRecord)
}
As far as I know, this will generate a sequence of unique keys, preventing Log Compaction. Instead, choosing the node or relationship id as the message key ("${event.payload.id}") would enable Log Compaction on the Kafka side. However, I'm pretty sure there are specific reasons for creating the key that way: maybe the message keys are used to know where we are in the Neo4j log, to recover after a failure, but I'm not sure.
Can you explain why the keys are generated in this way? Is it possible to enable Log Compaction in Neo4j Streams right now? If the answer is no, I think this is a must-have feature for Change Data Capture. Thanks !!
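The effect of the key choice can be simulated in a few lines of Python. This is an illustration of compaction semantics only, not actual Kafka or Neo4j Streams code, and the sample keys are hypothetical:

```python
def compact(log):
    """Simulate Kafka log compaction: retain only the last record per key."""
    latest = {}
    for key, value in log:
        latest[key] = value            # later records shadow earlier ones
    return list(latest.items())

# Keys derived from (txId, txEventId): every record has a unique key,
# so compaction can never discard anything and the partition grows forever.
tx_keyed = [("100-0", "nodeA v1"), ("101-0", "nodeA v2"), ("102-0", "nodeA v3")]
assert len(compact(tx_keyed)) == 3

# Keys derived from the entity id: compaction keeps only the latest state,
# which is exactly the "restore state after a crash" use case.
entity_keyed = [("nodeA", "v1"), ("nodeA", "v2"), ("nodeA", "v3")]
assert compact(entity_keyed) == [("nodeA", "v3")]
```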
Considered alternatives
While writing this issue, I actually came up with a possible alternative solution (the same also applies to the issue #304): one can simply create a microservice using Kafka Streams to add custom logic and cope with order guarantees and log compaction. However, I am still convinced that these problems should be addressed within Neo4j Streams.
@Solom00n thanks for the great analysis, we can do it via a configuration param in order to keep both options
@mroiter-larus is this issue close-able?
@moxious this had been put on hold that i know of
Left open pending duplicate functionality PR agaist 3.5 branch. Issue is closeable when 3.5 branch is merged with same fix/improvement.
|
2025-04-01T06:39:43.672909
| 2016-03-21T09:14:45
|
142292257
|
{
"authors": [
"IngvarKofoed",
"zhenlineo"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8866",
"repo": "neo4j/neo4j-dotnet-driver",
"url": "https://github.com/neo4j/neo4j-dotnet-driver/issues/28"
}
|
gharchive/issue
|
Can't run integration unittests
I have spent several hours trying to get the integration unit tests to work. I'm running on a clean Windows 10 with Visual Studio 2015 Community and JRE 8. I'm running VS, console, and PS as administrator.
I have tracked down the issue to the point where the PS script tries to start the Neo4j service while debugging the unit test. Then I tried running the PS scripts manually and got the same error. Here is the log:
PS C:\Sources\neo4j\original\neo4j-dotnet-driver\Neo4j.Driver\Neo4j.Driver.IntegrationTests\bin\target\neo4j\neo4j-commu
nity-3.0.0-RC1\bin> Invoke-Neo4j start -v
VERBOSE: Neo4j Root is 'C:\Sources\neo4j\original\neo4j-dotnet-driver\Neo4j.Driver\Neo4j.Driver.IntegrationTests\bin\target\neo4j\neo4j-community-3.0.0-RC1'
VERBOSE: Neo4j Server Type is 'Community'
VERBOSE: Neo4j Version is '3.0.0-RC1'
VERBOSE: Neo4j Database Mode is ''
VERBOSE: Start command specified
VERBOSE: Neo4j Windows Service Name is neo4j
VERBOSE: Starting the service. This can take some time...
Invoke-Neo4j : Failed to start service 'Neo4j Graph Database - neo4j (neo4j)'.
At line:1 char:1
Invoke-Neo4j start -v
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Invoke-Neo4j
It works if I get it to run as console:
PS C:\Sources\neo4j\original\neo4j-dotnet-driver\Neo4j.Driver\Neo4j.Driver.IntegrationTests\bin\target\neo4j\neo4j-community-3.0.0-RC1\bin> Invoke-Neo4j console -v
VERBOSE: Neo4j Root is 'C:\Sources\neo4j\original\neo4j-dotnet-driver\Neo4j.Driver\Neo4j.Driver.IntegrationTests\bin\target\neo4j\neo4j-community-3.0.0-RC1'
VERBOSE: Neo4j Server Type is 'Community'
VERBOSE: Neo4j Version is '3.0.0-RC1'
VERBOSE: Neo4j Database Mode is ''
VERBOSE: Console command specified
VERBOSE: Java detected at 'C:\Program Files\Java\jre1.8.0_73\bin\java.exe'
VERBOSE: Java version detected as '1.8'
VERBOSE: Starting Neo4j as a console with command line C:\Program Files\Java\jre1.8.0_73\bin\java.exe -cp
"C:\Sources\neo4j\original\neo4j-dotnet-driver\Neo4j.Driver\Neo4j.Driver.IntegrationTests\bin\target\neo4j\neo4j-community-3.0.0-RC1/lib/;C:\Sources\neo4j\original\neo4j-dotnet-driver\Neo4j.D
river\Neo4j.Driver.IntegrationTests\bin\target\neo4j\neo4j-community-3.0.0-RC1/plugins/" -server -Dorg.neo4j.config.file=conf/neo4j.conf -Dlog4j.configuration=file:conf/log4j.properties
-Dneo4j.ext.udc.source=zip-powershell -Dorg.neo4j.cluster.logdirectory=data/log -Dorg.neo4j.config.file=conf/neo4j.conf -XX:+UseG1GC -XX:-OmitStackTraceInFastThrow -XX:hashCode=5
-XX:+AlwaysPreTouch -XX:+UnlockExperimentalVMOptions -XX:+TrustFinalNonStaticFields -XX:+DisableExplicitGC -Dunsupported.dbms.udc.source=zip -Dfile.encoding=UTF-8
org.neo4j.server.CommunityEntryPoint
2016-03-21 09:06:23.118+0000 INFO Starting...
2016-03-21 09:06:31.680+0000 INFO Started.
2016-03-21 09:06:34.633+0000 INFO Remote interface available at http://localhost:7474/
I have also tried starting the service manually from Windows Services, but got this message:
The Neo4j Graph Database - neo4j service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs.
I would really like to get them running, so I can contribute to the project :)
Any suggestions?
Hello, I looked into your problem yesterday and it seems you have an issue with the Write-Error command used in our PowerShell script. The build also has some problems using the recent PowerShell script, so I am wondering if I should just switch to using the bat file instead, to avoid these changes disturbing the normal driver code. I am working on this problem and will come back to you once I have a solution or workaround. I will ping you when the build is ready for your PR.
Finally thanks a lot for the PRs.
Thanks for your feedback! Using the BAT file might fix the issues I have seen :)
As you might have seen by now, I have created a PR that does not use PS or the Windows Service for running the Neo4j server. I have tested this on 3 Windows PCs and it worked on all of them; when using the original code, the integration tests did not work on any of the 3 PCs. You could consider using this when not running on the build server (e.g. local testing) and use the original when running on the build server :)
https://github.com/neo4j/neo4j-dotnet-driver/pull/33
As the PR is already merged, I will close this issue now.
Feel free to reopen it if you run into problems again.
|
2025-04-01T06:39:43.683560
| 2018-06-15T17:41:59
|
332860657
|
{
"authors": [
"michael-simons",
"wem"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8867",
"repo": "neo4j/neo4j-ogm",
"url": "https://github.com/neo4j/neo4j-ogm/issues/495"
}
|
gharchive/issue
|
Feature request: General async support
Since there is a release of the async bolt driver. It would be very useful to have it in OGM too.
https://neo4j.com/docs/developer-manual/current/drivers/sessions-transactions/#_asynchronous_programming
Thanks for your input.
Your option right now is using OGM through SDN and its Spring-based async support: Async query results.
We have also spiked some ideas around full reactive support: https://github.com/michael-simons/neo4j-reactive-java-client
I'm closing this issue for the time being. This doesn't mean we're not implementing async, we are tracking this already and having several such tickets around doesn't help. Thank you.
|
2025-04-01T06:39:43.691152
| 2017-06-05T12:21:26
|
233571109
|
{
"authors": [
"klobuczek"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8868",
"repo": "neo4jrb/neo4j",
"url": "https://github.com/neo4jrb/neo4j/issues/1391"
}
|
gharchive/issue
|
Applying each after variable length relationship ignores rel_length
When each is called after an association with variable length, the rel_length parameter is ignored.
Additional information which could be helpful if relevant to your issue:
Code example (inline, gist, or repo)
[31] pry(main)> >> r.subordinates(rel_length: { min: 0 }).each.count
Role#subordinates
MATCH (previous:`Role`)
WHERE (ID(previous) = {ID_previous})
OPTIONAL MATCH (previous)<-[rel1:`manager`]-(next:`Role`)
RETURN
ID(previous),
collect(next) | {:ID_previous=>14930}
HTTP REQUEST: 4ms POST http://localhost:7474/db/data/transaction/commit (1 bytes)
=> 6
[32] pry(main)> >> r.subordinates(rel_length: { min: 0 }).count
Role#subordinates
MATCH (role14930)
WHERE (ID(role14930) = {ID_role14930})
MATCH (role14930)<-[rel1:`manager`*0..]-(result_subordinates:`Role`)
RETURN count(result_subordinates) AS result_subordinates | {:ID_role14930=>14930}
HTTP REQUEST: 7ms POST http://localhost:7474/db/data/transaction/commit (1 bytes)
=> 635
Runtime information:
Neo4j database version:
neo4j gem version: 8.0.17
neo4j-core gem version: 7.1.2
For reference a failing spec:
describe 'bug' do
before(:each) do
clear_model_memory_caches
delete_db
stub_active_node_class('Person') do
has_many :out, :knows, model_class: 'Person', type: nil
end
stub_active_node_class('Company') do
has_one :out, :ceo, type: :ceo, model_class: 'Person'
end
end
let(:ceo) { Person.create }
let(:company) { Company.create(ceo: ceo) }
it 'should not drop rel_length' do
expect(ceo.knows(rel_length: { min: 0 }).count).to eq(1)
expect(ceo.knows(rel_length: { min: 0 }).to_a.count).to eq(1)
ceo_via_association = company.ceo
expect(ceo_via_association).to eq(ceo)
expect(ceo_via_association.knows(rel_length: { min: 0 }).count).to eq(1)
expect(ceo_via_association.knows(rel_length: { min: 0 }).to_a.count).to eq(1) # fails returning 0 as the automated 1 + n prevention kicks in and ignores the parameters of knows
end
end
|
2025-04-01T06:39:43.746272
| 2016-12-11T18:02:56
|
194845942
|
{
"authors": [
"dlants",
"expipiplus1",
"thejohnfreeman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8869",
"repo": "neomake/neomake",
"url": "https://github.com/neomake/neomake/issues/832"
}
|
gharchive/issue
|
Preserve line-breaks in error messages
Neomake loses line breaks when displaying errors. Using Haskell as an example:
Neomake w/ hdevtools:
Neomake w/ ghcmod:
Expected behavior
ghcmod-vim preserves line breaks when displaying errors:
History
This seems to be a well-known problem.
When Haskell Makers were first being added to Neomake, there was no attempt to preserve line breaks. However, there were problems capturing every line of a multi-line error, much like Syntastic experienced. @benekastah referenced #15, which was eventually resolved, but it appears the accepted solution was to join lines into one.
In the ghcmod-vim project, @toonn expressed a preference for using that plugin with Syntastic for a few reasons which included preserving line breaks. @expipiplus1 suggested Neomake, but admitted it "still munges everything into a single line". He cited a Syntastic issue on the subject of preserving line breaks, in which @lcd047 imparted some crucial wisdom:
What ghcmod-vim shows are valid error lines intermixed with invalid ones. You'll notice that going to, say, line 2 in the quickfix list and pressing Enter doesn't take you to the error line in the source. Whether this is better than syntastic's UI or not is up to debate.
@expipiplus1 offered an alternative:
ghcmod-vim could work around this by making each individual line from an error its own proper error message
The downside of this approach is that :lnext won't jump to the next error until after you get through all the lines of a multi-line message.
The code to get ghcmod-vim's behavior exists, but it's a bit gnarly. Messages are first saved to a file, then read with readfile (which corrects null characters to newlines), and then lines after the first are added to the quickfix list without file or line number.
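That line-splitting approach can be sketched in Python for illustration (the real implementation would be vimscript feeding setqflist(); the helper name here is made up): the first line of a multi-line message keeps its location, and continuation lines become "invalid" entries that are displayed but carry no jump target.

```python
def to_qf_entries(filename, lnum, message):
    """Split a multi-line compiler message into quickfix-style entries.

    The first line carries the real location; continuation lines become
    "invalid" entries (no file/line), so they are still shown in the list
    while :lnext-style navigation skips over them as jump targets.
    """
    lines = message.split("\n")
    entries = [{"filename": filename, "lnum": lnum, "text": lines[0], "valid": 1}]
    for cont in lines[1:]:
        entries.append({"filename": "", "lnum": 0, "text": cont, "valid": 0})
    return entries

entries = to_qf_entries(
    "Main.hs", 7,
    "Couldn't match type\n  expected: Int\n  actual: [Char]")
assert len(entries) == 3
assert entries[0]["valid"] == 1 and entries[1]["valid"] == 0
```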
These seem to be two non-ideal alternatives for multi-line messages (to use "invalid" lines or not), each with their own drawbacks but both, in my opinion, preferable to the status quo. A capable framework could provide all three as options to the user. If I wanted to pursue that route, where would I find the best spot in the code to hook a single-message-to-possibly-singleton-list-of-quickfix-items function?
Steps to reproduce
" Use one of the below settings and :Neomake.
let g:neomake_haskell_enabled_makers = ['hdevtools']
let g:neomake_haskell_enabled_makers = ['ghcmod']
Output of the ":verb NeomakeInfo" command
Neomake debug information
Async support: 0
Current filetype:
Enabled makers
For the current filetype (with :Neomake): []
NOTE: the current buffer does not have a filetype.
For the project (with :Neomake!): []
NOTE: you can define g:neomake_enabled_makers to configure it.
Settings
g:neomake_haskell_enabled_makers = ['ghcmod']
shell: /bin/bash
shellcmdflag: -c
Windows: 0
:version
VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Nov 24 2016 16:44:48)
Included patches: 1-1689
Extra patches: 8.0.0056
Modified by <EMAIL_ADDRESS>
Compiled by <EMAIL_ADDRESS>
Huge version with GTK2-GNOME GUI. Features included (+) or not (-):
+acl +conceal +file_in_path +linebreak -mouse_sysmouse +python3 +tcl +wildmenu
+arabic +cryptv +find_in_path +lispindent +mouse_urxvt +quickfix +terminfo +windows
+autocmd +cscope +float +listcmds +mouse_xterm +reltime +termresponse +writebackup
+balloon_eval +cursorbind +folding +localmap +multi_byte +rightleft +textobjects +X11
+browse +cursorshape -footer +lua +multi_lang +ruby +timers -xfontset
++builtin_terms +dialog_con_gui +fork() +menu -mzscheme +scrollbind +title +xim
+byte_offset +diff +gettext +mksession +netbeans_intg +signs +toolbar +xsmp_interact
+channel +digraphs -hangul_input +modify_fname +packages +smartindent +user_commands +xterm_clipboard
+cindent +dnd +iconv +mouse +path_extra +startuptime +vertsplit -xterm_save
+clientserver -ebcdic +insert_expand +mouseshape +perl +statusline +virtualedit +xpm
+clipboard +emacs_tags +job +mouse_dec +persistent_undo -sun_workshop +visual
+cmdline_compl +eval +jumplist +mouse_gpm +postscript +syntax +visualextra
+cmdline_hist +ex_extra +keymap -mouse_jsbterm +printer +tag_binary +viminfo
+cmdline_info +extra_search +langmap +mouse_netterm +profile +tag_old_static +vreplace
+comments +farsi +libcall +mouse_sgr -python -tag_any_white +wildignore
system vimrc file: "$VIM/vimrc"
user vimrc file: "$HOME/.vimrc"
2nd user vimrc file: "~/.vim/vimrc"
user exrc file: "$HOME/.exrc"
system gvimrc file: "$VIM/gvimrc"
user gvimrc file: "$HOME/.gvimrc"
2nd user gvimrc file: "~/.vim/gvimrc"
system menu file: "$VIMRUNTIME/menu.vim"
fall-back for $VIM: "/usr/share/vim"
Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -pthread -I/usr/include/gtk-2.0 -I/usr/lib/x86_64-linux-gnu/gtk-2.0/inclu
de -I/usr/include/gio-unix-2.0/ -I/usr/include/cairo -I/usr/include/pango-1.0 -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/
pixman-1 -I/usr/include/libpng12 -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/libpng12 -I/usr/include/pango-1.0 -I/usr/include/harfbuzz -
I/usr/include/pango-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/freetype2 -D_REENTRANT -DORBI
T2=1 -pthread -I/usr/include/libgnomeui-2.0 -I/usr/include/gnome-keyring-1 -I/usr/include/libbonoboui-2.0 -I/usr/include/libxml2 -I/usr/i
nclude/libgnome-2.0 -I/usr/include/libbonobo-2.0 -I/usr/include/bonobo-activation-2.0 -I/usr/include/orbit-2.0 -I/usr/include/libgnomecan
vas-2.0 -I/usr/include/gail-1.0 -I/usr/include/libart-2.0 -I/usr/include/gtk-2.0 -I/usr/lib/x86_64-linux-gnu/gtk-2.0/include -I/usr/inclu
de/gio-unix-2.0/ -I/usr/include/cairo -I/usr/include/pango-1.0 -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/pixman-1 -I/usr
/include/libpng12 -I/usr/include/pango-1.0 -I/usr/include/harfbuzz -I/usr/include/pango-1.0 -I/usr/include/freetype2 -I/usr/include/gdk-p
ixbuf-2.0 -I/usr/include/libpng12 -I/usr/include/gnome-vfs-2.0 -I/usr/lib/x86_64-linux-gnu/gnome-vfs-2.0/include -I/usr/include/gconf/2 -
I/usr/include/dbus-1.0 -I/usr/lib/x86_64-linux-gnu/dbus-1.0/include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include
-Wdate-time -g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1
Linking: gcc -L. -Wl,-Bsymbolic-functions -Wl,-z,relro -fstack-protector -rdynamic -Wl,-export-dynamic -Wl,-E -Wl,-Bsymbolic-functions
-fPIE -pie -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -o vim -lgtk-x11-2.0 -lgdk-x11-2.0 -lpangocairo-1.0 -latk-1.0 -lcairo -lgdk_pixbuf-
2.0 -lgio-2.0 -lpangoft2-1.0 -lpango-1.0 -lgobject-2.0 -lglib-2.0 -lfontconfig -lfreetype -lgnomeui-2 -lSM -lICE -lbonoboui-2 -lgnome-2
-lpopt -lbonobo-2 -lbonobo-activation -lORBit-2 -lgnomecanvas-2 -lart_lgpl_2 -lgtk-x11-2.0 -lgdk-x11-2.0 -lpangocairo-1.0 -latk-1.0 -lcai
ro -lgio-2.0 -lpangoft2-1.0 -lpango-1.0 -lfontconfig -lfreetype -lgdk_pixbuf-2.0 -lgnomevfs-2 -lgconf-2 -lgthread-2.0 -lgmodule-2.0 -lgob
ject-2.0 -lglib-2.0 -lSM -lICE -lXpm -lXt -lX11 -lXdmcp -lSM -lICE -lm -ltinfo -lnsl -lselinux -lacl -lattr -lgpm -ldl -L/usr/lib -ll
ua5.2 -Wl,-E -fstack-protector-strong -L/usr/local/lib -L/usr/lib/x86_64-linux-gnu/perl/5.22/CORE -lperl -ldl -lm -lpthread -lcrypt -L
/usr/lib/python3.5/config-3.5m-x86_64-linux-gnu -lpython3.5m -lpthread -ldl -lutil -lm -L/usr/lib/x86_64-linux-gnu -ltcl8.6 -ldl -lz -lpt
hread -lieee -lm -lruby-2.3 -lpthread -lgmp -ldl -lcrypt -lm
I wonder if it'd be possible to replace lnext with a version which skips to the next qf line with a location.
One potential solution here is to extend neomake with a new command that can pretty-print the current error. This seems like a good way to work around the fact that vim quickfix can only store a single line.
Another idea could be to propose this as an enhancement to neovim - perhaps they would be willing to move quickfix in this direction.
|
2025-04-01T06:39:43.755394
| 2024-03-07T08:57:10
|
2173328288
|
{
"authors": [
"jcsp"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8870",
"repo": "neondatabase/neon",
"url": "https://github.com/neondatabase/neon/issues/7043"
}
|
gharchive/issue
|
pageserver: clean up ancestral layers after split, old index_part objects
Per the "Cleaning up parent-shard layers" section in #6358 -- currently after a shard split, layers from the parent shards are not deleted until the whole tenant is eventually deleted.
We should implement an occasional online scrub routine that checks which of these are referenced by children, and cleans them up.
It likely makes sense to combine this work with cleaning up old-generation index_part.json objects, as these older objects will likely reference parent shard layers -- we should first define the criteria for cleaning up old indices, and then use the still-alive indices as the source of references for cleaning up parent layers.
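The scrub described above is essentially mark-and-sweep over remote storage. A minimal Python sketch with hypothetical names (not the pageserver's actual types or layer-key format):

```python
def sweep_parent_layers(parent_layers, live_child_indices):
    """Mark-and-sweep sketch: a parent-shard layer may be deleted once no
    still-alive child index_part references it."""
    referenced = set()
    for index in live_child_indices:
        referenced.update(index["layers"])   # mark: collect live references
    # sweep: anything unreferenced by every live child index is garbage
    return [layer for layer in parent_layers if layer not in referenced]

# Two children still reference L1 and L2; L3 is deletable.
deletable = sweep_parent_layers(
    ["L1", "L2", "L3"],
    [{"layers": ["L1"]}, {"layers": ["L2", "C1"]}],
)
assert deletable == ["L3"]
```

This ordering also motivates the combined cleanup: old-generation index_part objects must be classified as dead first, so that only still-alive indices contribute to the mark phase.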
### Tasks
- [ ] https://github.com/neondatabase/neon/pull/7925
- [ ] https://github.com/neondatabase/neon/pull/8196
- [ ] https://github.com/neondatabase/cloud/issues/14024
This will be enabled in staging here: https://github.com/neondatabase/aws/pull/1654
Then we'll let it soak for at least a week before proceeding to prod.
|
2025-04-01T06:39:43.764700
| 2022-12-23T14:12:55
|
1509392862
|
{
"authors": [
"SomeoneToIgnore",
"hlinnaka",
"koivunej"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8871",
"repo": "neondatabase/neon",
"url": "https://github.com/neondatabase/neon/pull/3202"
}
|
gharchive/pull-request
|
Replace 'tar' crate with 'tokio-tar'
The synchronous 'tar' crate has required us to use block_in_place and SyncIoBridge to work together with the async I/O in the client connection. Switch to 'async-tar' crate that uses async I/O natively.
As part of this, move the CopyDataWriter implementation to postgres_backend_async.rs. Even though it's only used in one place currently, it's in principle generally applicable whenever you want to use COPY out.
'async-tar' is a fork of the 'tar' crate, just replacing sync functions with corresponding async ones. I'm not sure how well maintained it is, but a crate like this doesn't really need much change or maintenance. If we go with this, it might be good to take a close look at 'async-tar' and compare if it's missing any fixes that have been made on 'tar' crate.
I'm surprised by how many dependencies the 'async-tar' crate pulls in. It depends on 'async-std'; is that really necessary, or could we update 'async-tar' to rely on the corresponding std functionality instead?
It depends on 'async-std'
Let's not pull that in then, it's yet another async runtime and we have tokio for that.
Could something like https://crates.io/crates/tokio-tar be a better choice?
It depends on 'async-std'
Let's not pull that in then, it's yet another async runtime and we have tokio for that.
I see.
Could something like https://crates.io/crates/tokio-tar be a better choice?
Tried that now. There's one problem with it:
error[E0477]: the type `AbortableWrite<'a, W>` does not fulfill the required lifetime
--> pageserver/src/basebackup.rs:44:9
|
44 | ar: Builder<AbortableWrite<'a, W>>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: type must satisfy the static lifetime as required by this binding
--> /home/heikki/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-tar-0.3.0/src/builder.rs:15:46
|
15 | pub struct Builder<W: Write + Unpin + Send + 'static> {
| ^^^^^^^
tokio_tar::Builder requires the writer to be 'static. The reason for that is that it has a mechanism to write the tar EOF block (1024 bytes of zeros), if you just drop the Builder without calling finish. The plain tar crate has that too, and it's straightforward there: the Drop implementation writes the EOF marker. In the async_tar, the Drop implementation uses block_on to do the same. But in tokio_tar, it uses a channel and a separately spawned task to do it.
That's hacky IMHO. The EOF-block-writing-task executes at some not-well defined time after the drop has happened. And it's super annoying for us, because we actually don't want the EOF marker to be written at all. We actually work hard to skip it, that's exactly why we have the AbortableWrite hack.
So ideally tokio-tar just didn't have that Drop implementation, and then it wouldn't require 'static, and we could also remove AbortableWrite. I'm tempted to fork it and do that. Perhaps the upstream project would be interested in a PR for that too; IMHO it's a bad idea to write the EOF block on drop anyway.
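The EOF marker in question is easy to observe with Python's standard tarfile module (used here purely for illustration, not as part of the Rust code under discussion):

```python
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"hello"
    info = tarfile.TarInfo(name="greeting.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

raw = buf.getvalue()
# A properly finished tar archive ends with at least two 512-byte blocks
# of zeros: the EOF marker that the Builder's Drop impl insists on writing,
# and exactly what an abortable/resumable stream wants to omit.
assert raw[-1024:] == b"\0" * 1024
```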
Switched to a modified version of 'tokio-tar' without the Drop implementation and the 'static requirement.
Looking further, https://github.com/vorot93/tokio-tar/pull/9 does look quite important in case the work does not complete in one poll; however, that's only on the read path, which we don't use.
Looking at tokio-tar, the AsyncWrite usage of slices is not cancellation safe. I don't know if it was cancellation safe before, but now it definitely is not. I think, however, the only place it's cancelled is with the shutdown waiter, and then the writer is not restarted again. Maybe a comment or two could be added about this near... I'll try to find that callsite.
We don't use that Drop path indeed, and I've created a PR that should explicitly help us with that: https://github.com/neondatabase/tokio-tar/pull/1 , there are more details about the upstreaming activities, let's see if they ever get released.
For now, our fork can do that: https://github.com/neondatabase/neon/pull/3239
|
2025-04-01T06:39:43.768132
| 2024-05-24T12:10:19
|
2315251768
|
{
"authors": [
"Shridhad",
"duskpoet"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8872",
"repo": "neondatabase/neonctl",
"url": "https://github.com/neondatabase/neonctl/pull/218"
}
|
gharchive/pull-request
|
fix: connection string uri scheme to postgresql
Use postgresql scheme for connection string as postgres causes issue with some tools
Validate database name passed to connection-string command
:tada: This PR is included in version 1.29.4 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|