| id (string) | text (string) | source (string) | created (timestamp) | added (timestamp) | metadata (dict) |
|---|---|---|---|---|---|
123692831 | Camera Problem Marshmallow
I have a problem connecting to the camera; the message is: "Sorry, the Android camera encountered a problem." The error persists after restarting the device.
My device is a Moto G XT1033.
Thanks for your report. At the moment I can't reproduce this, because I haven't got Marshmallow on any phone yet.
I am just using ZXing internally. Could you check whether their app works for you?
That's https://play.google.com/store/apps/details?id=com.google.zxing.client.android
I have installed the ZXing client; still the same error. Maybe you want to check the FreeOTP code, its barcode reader works fine.
http://freeotp.fedorahosted.org/
I meant that you could install the ZXing Barcode Scanner and check if it also crashes.
I have the ZXing Barcode Scanner installed because I use it, and ZXing works fine.
I think this is caused by Android M's runtime permissions.
Reference: https://github.com/journeyapps/zxing-android-embedded/issues/89
Okay, I'll try. Thanks, by the way.
I will try to organise a Marshmallow phone and push out an update soon.
It works now. To fix this issue, just go to Android Settings -> App Permissions and grant camera permission to this app.
Thanks.
@riespandi could you please give this beta https://github.com/0xbb/otp-authenticator/releases/tag/v0.1.3-beta.1 a try?
Failed to install :-(
@riespandi ok thanks anyway
Okay. Happy coding
| gharchive/issue | 2015-12-23T17:06:38 | 2025-04-01T04:32:12.025845 | {
"authors": [
"0xbb",
"riespandi"
],
"repo": "0xbb/otp-authenticator",
"url": "https://github.com/0xbb/otp-authenticator/issues/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1559576107 | latest android studio
Since Android Studio's new version (Electric Eel) came out, its build structure has changed and it no longer shows the option of the root build.gradle.
| gharchive/issue | 2023-01-27T11:12:18 | 2025-04-01T04:32:12.105884 | {
"authors": [
"salman-ux70"
],
"repo": "10clouds/FluidBottomNavigation-android",
"url": "https://github.com/10clouds/FluidBottomNavigation-android/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1233604310 | Try: GH Actions summary
Description of the Change
This PR utilizes the new Markdown support of GH Actions summary (suggested by @jeffpaul) to include the test result in the job summary: https://github.com/10up/cypress-wp-utils/actions/runs/2319016114
To generate the markdown report, this PR uses mochawesome reporter. The reporter generates JSON and HTML reports. The report is pretty and solid:
Credits
Props @jeffpaul, @dinhtungdu
Very excited to see this and the resulting easier-to-interpret e2e test results!
| gharchive/pull-request | 2022-05-12T08:07:04 | 2025-04-01T04:32:12.108669 | {
"authors": [
"dinhtungdu",
"jeffpaul"
],
"repo": "10up/cypress-wp-utils",
"url": "https://github.com/10up/cypress-wp-utils/pull/63",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
149885870 | Demonstrate production shells can always be traced to individuals
Figure out how to have the base container image we're using for Concourse jobs log hijackings in an individually-attributable way.
This needs some active research to know what we can/should do, but the team aesthetic is to avoid falling back to jumpboxes.
So I did a couple of hours of research last night / this morning and came up with this POC demo.
It uses Concourse to provision containers for hijacking on the fly, based on authentication from UAA, and prevents hijacking any other containers. This allows us to use MFA via SAML to gate access to all shells!
Check it out:
Step 1: Get a one-time passcode from staging UAA
Visit https://login.green.18f.gov/passcode, authenticate, and copy the passcode. This is the same flow one uses for cf login --sso.
Step 2: Use it to get a shell
fly -t shelldemo hijack -j shell/YOUR-PASSCODE_HERE /bin/sh
If you actually want to test this out, let me know, and I'll give you the url and credentials to the shelldemo instance I spun up in EC2.
Step 3: That's it!
The rest of Concourse works exactly as expected; only requests to the special shell pipeline and hijack API calls are intercepted, and everything else is passed through to Concourse without being altered.
How does it work?
It uses nginx+lua (openresty) to proxy access to the concourse API. The nginx configuration running this demo is here: https://gist.github.com/cnelson/73cb07b994405738847b3274f365d115
Thoughts?
I dig it. Ideally we'd roll the nginx bits into a bosh release that can be added on top of stock concourse. Though, I'm not sure it entirely solves the traceability? @mogul do we have to show the who/when on request? Maybe this is something we can get nginx to dump into its log and use awslogs to gather up into CloudWatch?
Ideally, yeah, they're going to want to see that it's logged. But the point is proven, we can come up with a chokepoint where those logs get generated without having a static jumpbox, so woohoo!
We can ask UAA who the passcode was for here: https://gist.github.com/cnelson/73cb07b994405738847b3274f365d115#file-nginx-conf-L65 by calling the /userinfo API. From there, we can use nginx to make an HTTP request anywhere we want to log that this happened. That could go to a SIEM, slack, whatever.
Nice. One thing that was cool in Netflix's system was that the person asking for the shell could supply a reason that would get logged when the one-use-only key was actually used. Can you think of a trivial way offhand to add that given this POC? If so, note it, otherwise let's assume that'll be a different story in the Icebox someday.
Either way I think we have enough info to write the story when this reaches Grooming now! One AC I'd add is to ensure this is something we can document and contribute to the community. I'd even say this is blog-worthy once it's done.
We don't have to rely on the built-in /passcode page in UAA for this. We could build our own interface that generates those tokens so instead of going to /passcode you could go to shell-request.cloud.gov or whatever which would be a tiny uaa-invite style app that does whatever we want before issuing them the passcode.
I dig it. Ideally we'd roll the nginx bits into a bosh release that can be added on top of stock concourse
One AC I'd add is to ensure this is something we can document and contribute to the community.
As long as we make this a bosh-release, with the appropriate bits broken out as properties (URL to UAA, client-id, etc.), I think this would be very easy to release in a way anyone in the CF community could use.
That's good, because I mentioned it in their #security channel and pointed them here. :)
On this topic: It'd be super-cool if activity inside a production shell was actually logged off-host. Is there a modern open source equivalent to the moribund Enterprise Audit Shell (EASH)? It's been abandoned for a long time, but I see that someone seems to be starting a modern fork of it on GitHub: https://github.com/asquelt/eas
(I’m assuming something else fills that vacuum these days which has more community/scrutiny going on, but I'm just not aware of it.)
Well I wasn't going to mention it until I was sure it was possible, but with this solution we have nginx in the middle of the websocket, so we could log all shell activity if we wanted :)
@jacobian says auditd is the way people handle that use-case nowadays.
auditd works at the kernel level. We will have to research how it handles namespaces if we want to be able to log each container individually.
Host level is fine as a start. In any case, consider all of it out of scope of this story since it's non-trivial and not critical for compliance. I have a stash of security improvements beyond mere compliance somewhere and will add it there.
(For future reference: This was one of the expected GSA LATO remediations.)
@LinuxBozo Mind posting an update here?
Just to echo @afeld, and not to put any pressure but just to be realistic, if we don't have this by July 21st, it will be highly problematic for the next ATO (the bridge ATO between the 90-day and the FedRAMP P-ATO).
Going over things with @clovett3 and Rajat, as I understood it, the remediation was for MFA, regardless of the implementation. Currently, we are leveraging MFA via GitHub authentication to Concourse, and I walked them through that process (also documented here: https://docs.cloud.gov/ops/troubleshooting-bosh/)
So how would we audit individual users making use of this facility to get a shell? Is there an audit log for Concourse anywhere, and a filtered view of these kinds of requests in particular that we can refer people to?
| gharchive/issue | 2016-04-20T21:04:57 | 2025-04-01T04:32:12.183320 | {
"authors": [
"LinuxBozo",
"NoahKunin",
"afeld",
"cnelson",
"mogul"
],
"repo": "18F/cg-atlas",
"url": "https://github.com/18F/cg-atlas/issues/12",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1323373074 | Give common options to sliders
There should be reasonable defaults to configure the behavior of the slider like:
[ ] Target
[x] Direction
[x] Speed
Don't remember what target is
| gharchive/issue | 2022-07-31T06:14:10 | 2025-04-01T04:32:12.247326 | {
"authors": [
"18nguyenl"
],
"repo": "18nguyenl/Billboard",
"url": "https://github.com/18nguyenl/Billboard/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2215280655 | User page
created user-page
Good job
| gharchive/pull-request | 2024-03-29T12:52:13 | 2025-04-01T04:32:12.262584 | {
"authors": [
"1Cezzo",
"zahidak1999"
],
"repo": "1Cezzo/idatt2105-project-frontend",
"url": "https://github.com/1Cezzo/idatt2105-project-frontend/pull/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2288035571 | [FEAT]: Create Issue forms and make the issue options more visually appealing
Describe the Feature you want to end
Currently, the issue template for this repo seems basic, and it is also less visually appealing. 👇
Changes I will make
Expected behavior
Screenshots (optional)
Additional context (optional )
I will add issue forms for the respective issue types to make it easier for contributors to raise them.
Here is an example of how a Bug Issue Form will look 👇
Kindly assign me this issue under GSSOC'24
good idea!!
I am Agamjot Singh. Under GSSoC'24, I would like to work on this issue. Please assign it to me!
@1Shubham7 Kindly add the GSSOC24 Label along with level to this issue.
| gharchive/issue | 2024-05-09T16:22:52 | 2025-04-01T04:32:12.274554 | {
"authors": [
"1Shubham7",
"agxmbedi",
"dakshsinghrathore"
],
"repo": "1Shubham7/Wife-Predictor-v2",
"url": "https://github.com/1Shubham7/Wife-Predictor-v2/issues/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1412435647 | Create separate folder for different language and add programs.
Example:
Create folder Javascript and create file hello-world.js.
├── Javascript
└── hello-world.js
@nirajpdn add dart programming language: https://github.com/1teacher1/Hacktoberfest2022/pull/36
| gharchive/issue | 2022-10-18T01:23:47 | 2025-04-01T04:32:12.301053 | {
"authors": [
"devkishor8007",
"nirajpdn"
],
"repo": "1teacher1/Hacktoberfest2022",
"url": "https://github.com/1teacher1/Hacktoberfest2022/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1412551824 | Feat functions api javascript lang snippet
Description
(A brief summary of the contribution and the problem it solves. Try to keep the description short and only include relevant details)
Testing
(Describe steps to test out and verify the changes. You can also include images or screenshots of the result for better understanding)
Checklist
[ ] Followed the contribution guidelines
[ ] The code is commented properly to give appropriate context.
@2002Bishwajeet I created another PR, please check it out
Please resolve merge conflicts. Also, I can see you have not synced up with master. Please do that.
@2002Bishwajeet could I cut a new branch?
Should be fine with this branch; you have some merge conflicts. Google how to merge the branch and resolve conflicts 😉. If you still get stuck, let me know what's disturbing you. Also, you could reach out to me on the Appwrite Discord server and we will solve the issues there together 😄
I just made the changes but I don't know why the PR was not created @2002Bishwajeet
| gharchive/pull-request | 2022-10-18T04:25:30 | 2025-04-01T04:32:12.378450 | {
"authors": [
"2002Bishwajeet",
"RJ025"
],
"repo": "2002Bishwajeet/awesome-appwrite-snippets",
"url": "https://github.com/2002Bishwajeet/awesome-appwrite-snippets/pull/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1524598404 | Feat/#6 roll paper create design
Description of added/modified features
Checklist
[x] Did you add the issue number in front of the branch name?
[x] Did you run all unit tests and confirm that previously passing tests are unaffected?
[x] Did you describe the additions/modifications?
Great work!
Great work.
| gharchive/pull-request | 2023-01-08T17:18:34 | 2025-04-01T04:32:12.393650 | {
"authors": [
"junvhui",
"minseok1015",
"yunhobb"
],
"repo": "2022-Winter-Bootcamp-Team-K/frontend",
"url": "https://github.com/2022-Winter-Bootcamp-Team-K/frontend/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
222588894 | ue_site not found
I have a simple hello world that I put in MyProject/Content/Scripts/ue_site.py, but UnrealEnginePython fails to find it and crashes at runtime.
[2017.04.18-20.48.42:701][ 0]LogPython: Python VM initialized: 3.5.3 (default, Mar 21 2017, 17:21:33)
[GCC 6.3.1 20161221 (Red Hat 6.3.1-1)]
[2017.04.18-20.48.42:701][ 0]LogPython: Python Scripts search path: ../../../../Documents/Unreal Projects/MyProject/Content/Scripts
[2017.04.18-20.48.42:701][ 0]LogPython:Error: No module named 'ue_site'
Can you try with an empty ue_site.py ?
I tried with an empty ue_site.py but it gives the same error.
I also tried putting an empty ue_site.py in:
MyProject/Plugins/UnrealEnginePython/Content
MyProject/Plugins/UnrealEnginePython/Content/Scripts
MyProject/Content
MyProject
I also tried putting an empty ue_site.zip in my project's root folder and in the Content folder, but it always gives the same error.
Fixed by changing line 76 of UnrealEnginePython.cpp, from:
char *scripts_path = TCHAR_TO_UTF8(*FPaths::Combine(*FPaths::GameContentDir(), UTF8_TO_TCHAR("Scripts")));
to:
char *scripts_path = TCHAR_TO_UTF8(*FPaths::ConvertRelativePathToFull(FPaths::Combine(*FPaths::GameContentDir(), UTF8_TO_TCHAR("Scripts"))) );
Why is this not merged into the tree yet?
| gharchive/issue | 2017-04-19T01:16:33 | 2025-04-01T04:32:12.397309 | {
"authors": [
"hartsantler",
"sabhiram",
"unbit"
],
"repo": "20tab/UnrealEnginePython",
"url": "https://github.com/20tab/UnrealEnginePython/issues/149",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Whitespace tokenizer code (important)
var a = "@bg adobe.png 1.5"
console.log(a.split(/\s+/))
Based on a regular expression, multiple consecutive spaces no longer affect code execution.
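The same idea, sketched in Python (the `tokenize` helper name is mine, not part of EasyAvg): splitting on the regex `\s+` keeps the lexer robust to runs of spaces.

```python
import re

def tokenize(line: str):
    """Split a script line into tokens on runs of whitespace,
    so extra spaces between arguments do not change the result."""
    # Strip first, so leading/trailing spaces don't produce empty tokens.
    return re.split(r"\s+", line.strip())

print(tokenize("@bg   adobe.png    1.5"))  # → ['@bg', 'adobe.png', '1.5']
```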
@lzr2006
@TianCan666
| gharchive/issue | 2022-02-09T07:03:17 | 2025-04-01T04:32:12.414317 | {
"authors": [
"2439905184"
],
"repo": "2439905184/EasyAvg-React",
"url": "https://github.com/2439905184/EasyAvg-React/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
874502060 | Add Vercel to static-app-hosting.md
Vercel offers both app hosting and static hosting. I updated the entry in app-hosting.md and added one to static-app-hosting.md as well.
Thanks for the contrib and update 👍
| gharchive/pull-request | 2021-05-03T11:49:06 | 2025-04-01T04:32:12.422860 | {
"authors": [
"255kb",
"lemedro"
],
"repo": "255kb/stack-on-a-budget",
"url": "https://github.com/255kb/stack-on-a-budget/pull/223",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
380546468 | Fix onPlace listener
Make on block place detection functional
Fixed: https://github.com/2B2TMCBE/bedrockRemoval/commit/f9471b8314a028576b08250a00ab910d05b86d57
| gharchive/issue | 2018-11-14T05:51:40 | 2025-04-01T04:32:12.428975 | {
"authors": [
"andrew121410",
"maxxie114"
],
"repo": "2B2TMCBE/bedrockRemoval",
"url": "https://github.com/2B2TMCBE/bedrockRemoval/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
623527254 | Can't win edax level 1
I didn't dive much into the details, but following the README, I ran othello-zero v117 against Edax level 1, and othello-zero lost miserably. It didn't even capture one corner.
A B C D E F G H
1 ●─●─●─●─●─●─●─●
2 ●─●─●─●─●─●─●─●
3 ●─●─●─●─●─●─●─●
4 ●─●─●─●─○─○─○─○
5 ●─○─●─●─●─●─●─○
6 ●─●─●─●─●─●─●─●
7 ●─●─●─●─●─●─●─●
8 ●─●─●─●─●─●─●─●
Same with level 2.
It did, however, win level 0, and somehow level 3, but not levels 1, 2, or 4+.
Considering these facts, I'm unfortunately not quite sure about the "othello-zero is close to Edax Lv.5" statement.
I also wonder if you have been checking othello-zero vs Edax on numerous different openings, not just the default one.
●Black is othello
○White is Edax
A newer othello version is not always stronger than an older one (this is the biggest difference between AlphaGo Zero and AlphaZero). In fact, V067 is much stronger than V117. I will upload all checkpoints from V001 to V117 if you need them.
"Is stronger" doesn't mean "always wins", but "mostly wins". For some reason, though, with a given othello version and Edax level you always get exactly the same game. For example, if you let othello V067 play against Edax Level 4 a hundred times, those hundred games are identical. I made a new commit which enables the "randomness" feature of Edax, so you can get different games every time and compare othello and Edax more fairly.
Thank you for trying this old and abandoned project. I made it just for fun, not for academic purposes. Some conclusions may be too hasty, but I believe the main algorithm works.
Best.
●Black is othello
○White is Edax
Sorry, I forgot to say that on my dark terminal the colors are inverted, so I inverted them in your code; otherwise black looked white and vice versa in my terminal, which was very confusing.
So Edax is ● here (and it looks white in my terminal), and it is being run on level 1 here.
In fact, V067 is much stronger than V117.
Ok, I'll try to check it out.
Thank you for trying this old and abandoned project. I make it just for fun, not for academic purposes. Some conclusions may be too hasty, But I believe the main algorithm works.
Thank you for a nice and easy introduction to AlphaZero. I'm just a bit disappointed that it's a lot weaker than I wished it would be, especially given that Edax reaches levels 21-23 in no time...
A new commit which enables the "randomness" feature of Edax is online. A new release containing all checkpoints is also online.
I can make PR into your repo with those improvements if you believe this is worthy.
Of course. Thank you so much.
I'm just a bit disappointed that it's a lot weaker than I wished it would have been.
I totally agree with you. I don't think othello-zero V117 or V067 is an invincible beast. Instead, othello-zero is a little baby. AlphaZero trains itself to play chess with over 40 million self-play games. For othello-zero, the number is only 0.2 million (V117), 200 times fewer. It would take me about five years to complete the whole training on my personal computer.
If I had enough computing power, I would change some of the config and rewrite part of the train-loop code to make othello-zero's algorithm more robust. V117 should have been stronger than V067.
| gharchive/issue | 2020-05-23T00:04:27 | 2025-04-01T04:32:12.435708 | {
"authors": [
"2Bear",
"ixanezis"
],
"repo": "2Bear/othello-zero",
"url": "https://github.com/2Bear/othello-zero/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
96360667 | Remove jsPathsToConcat*ModulesJs files from js:move-separate
The jsPathsToConcatBeforeModulesJs and jsPathsToConcatAfterModulesJs options allow concatenating js files matching a pattern to the beginning and end of main.js respectively.
It seems reasonable to exclude the listed patterns/files from the bundle of the js:move-separate task.
They are not included there anyway :)
gulp.src('./markup/' + tarsConfig.fs.staticFolderName + '/js/separate-js/**/*.js')
I can drop a file into /js/separate-js and point to it in one of the jsPathsToConcat* options.
As a result, the file will end up both in the common bundle, concatenated with the rest of the js, and in the separate bundle.
@owanturist so, a kind of foolproofing?
@artem-malko call it whatever you like =)
@owanturist OK, go ahead)
| gharchive/issue | 2015-07-21T16:30:44 | 2025-04-01T04:32:12.453748 | {
"authors": [
"artem-malko",
"owanturist"
],
"repo": "2gis/tars",
"url": "https://github.com/2gis/tars/issues/94",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1167102426 | Expose prometheus with basic auth
This is the beginning of implementing idea 1 from the
list @georiganaelena made in
https://github.com/2i2c-org/infrastructure/issues/328#issuecomment-1016601966.
We have one prometheus running per cluster, but manage many clusters.
A single grafana that can connect to all such prometheus clusters
will help with monitoring as well as reporting. So we need to expose
it as securely as possible to the external world, as it can contain
private information.
In this case, we're using https + basic auth provided by
nginx-ingress
(https://kubernetes.github.io/ingress-nginx/examples/auth/basic/)
to safely expose prometheus to the outside world. We can then
use a grafana that knows these username / passwords to access this
prometheus instance. Each cluster needs its own username / password
(generated with pwgen 64 1), so users in one cluster can not access
prometheus for another cluster.
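On the client side this is plain HTTP basic auth; a minimal Python sketch of building the Authorization header a client (such as the central grafana) would send, where the endpoint path in the comment is illustrative:

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Build the Authorization header for a basic-auth-protected
    prometheus endpoint, e.g. /api/v1/query."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```

Any HTTP client can pass this header when querying the exposed prometheus; grafana does the equivalent internally when a data source is configured with basic auth.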
Ref https://github.com/2i2c-org/infrastructure/issues/328
I used a tiny helper script to generate these encrypted secret files:
#!/bin/bash
set -euo pipefail
F="${1}/enc-support.secret.yaml"
echo "prometheusAuth:" > $F
echo " username: $(pwgen 64 1)" >> $F
echo " password: $(pwgen 64 1)" >> $F
sops -i -e $F
echo "Done $F"
Then ran find . -type d | xargs -L1 ./gen.bash to run it for all dirs inside config/clusters.
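For reference, a rough Python equivalent of that helper using only the standard library (`secrets` in place of `pwgen`; the YAML layout mirrors the script above, and `sops -i -e` would still be run on the resulting file):

```python
import secrets
import string

def random_token(length: int = 64) -> str:
    """Generate a pwgen-style alphanumeric secret of the given length."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def render_secret_yaml() -> str:
    """Render the enc-support.secret.yaml body, pre-encryption."""
    return (
        "prometheusAuth:\n"
        f"  username: {random_token()}\n"
        f"  password: {random_token()}\n"
    )
```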
However, meom-ige and farallon still don't have a support cluster, so this can't be used there yet.
Note - I've already run deploy-support for all these :)
Hmmm, the support chart could also do the trick jupyterhub does and generate secrets in a Secret file - if prometheus could accept accessing the content from there.
It may not be what makes sense, just floating the idea to ensure it is considered as an option. Somewhere, the grafana server would need to have access to those secrets in the other clusters, so it would probably still be relevant to have them centrally.
@consideRatio yeah, that's the other option - but that would require giving kubernetes API access to every single cluster to the centralized grafana, which I would very much prefer to not do.
I've added docs on what needs to happen with DNS, and I think it's already happened for all the clusters mentioned here.
I think what's left here is to automate adding all these as a data source to one centralized grafana.
I think this is ready to go!
Still need to write code that'll populate central grafana with datasources, but let's get this merged until then.
Still need to write code that'll populate central grafana with datasources
I think the upcoming work is actually described here: https://github.com/2i2c-org/infrastructure/issues/328#issuecomment-1082542191, let me know if you disagree.
Btw, thanks for all this work, @yuvipanda!!
| gharchive/pull-request | 2022-03-12T00:51:39 | 2025-04-01T04:32:12.461526 | {
"authors": [
"consideRatio",
"damianavila",
"yuvipanda"
],
"repo": "2i2c-org/infrastructure",
"url": "https://github.com/2i2c-org/infrastructure/pull/1091",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1389577236 | Update pangeo images to latest versions
We have had several reports of performance issues on the Pangeo hubs that might be fixed by using a new image
https://discourse.pangeo.io/t/dask-cluster-stays-idle-for-a-long-time-before-computing/2742
https://discourse.pangeo.io/t/dask-not-completing-large-operations-on-sose-data/2788
Some questions
Is it possible to make an announcement to the hub users that the image is being updated?
If this update breaks user code, how should we respond? It would be great to allow users to select their own images (a feature I've been requesting for a long time).
Is there a way to automate or otherwise obliviate the need for me to make PRs like this every time an update is needed? The release cadence of Pangeo docker images is approximately two per month. This reflects the pace of innovation in the Pangeo software ecosystem.
Is there a way to automate or otherwise obliviate the need for me to make PRs like this every time an update is needed? The release cadence of Pangeo docker images is approximately two per month. This reflects the pace of innovation in the Pangeo software ecosystem.
We have a way to automatically open PRs for new tags that @sgibson91 set up. Perhaps we can add a date filter to it, so it happens every time there's a date tagged release of the pangeo images?
Perhaps we can add a date filter to it, so it happens every time there's a date tagged release of the pangeo images?
I had already implemented that - I can dig the config out of the commits. This was all set up, but I disabled it because I received feedback that changing the image under people wasn't favourable
The action I wrote only automates this PR-opening process - so yes, you can rollback.
Here's an example of the PRs it opens https://github.com/2i2c-org/infrastructure/pull/1629
To be clear, we would need individual users to be able to roll back to earlier images, not the entire hub at once. The use case here is that a user has a project that depends crucially on a specific library version that is present in, say 2022.06 but not 2022.09. Even if the default image goes forward to 2022.09, that user needs to be able to choose 2022.06 when they are booting their server.
There is an inherent tension between the desire to get the latest and greatest packages onto the hub and the need to preserve stability for users. I'm just trying to brainstorm different ways we might resolve this tension.
A dropdown that people can use to select from all available image tags, automatically populated, seems like it would be killer. We should actually be able to do that now https://github.com/jupyterhub/kubespawner/pull/607 has landed.
We'll have to make an API call to the dockerhub to get the list of tags, and provide them as options.
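A sketch of what that call could look like (Docker Hub's public v2 tags endpoint; the repo name in the docstring is only an example), with parsing split out so it can be exercised without network access:

```python
import json
from urllib.request import urlopen

def tags_url(namespace: str, repo: str, page_size: int = 100) -> str:
    """Docker Hub v2 endpoint listing the tags of an image."""
    return ("https://hub.docker.com/v2/repositories/"
            f"{namespace}/{repo}/tags?page_size={page_size}")

def parse_tag_names(payload: str) -> list:
    """Extract tag names from the JSON body returned by the tags endpoint."""
    return [t["name"] for t in json.loads(payload)["results"]]

def fetch_tags(namespace: str, repo: str) -> list:
    """Fetch one page of tags, e.g. fetch_tags("pangeo-data", "pangeo-notebook")."""
    with urlopen(tags_url(namespace, repo)) as resp:
        return parse_tag_names(resp.read().decode())
```

The resulting names could then populate the dropdown options in the spawner profile list.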
We should actually be able to do that now
If this is the case, I would love to see it implemented asap. This has been one of our main feature requests from Pangeo world for 2.5 years (https://github.com/jupyterhub/kubespawner/issues/307).
@rabernat looks like I missed a tiny feature there when I implemented it, https://github.com/jupyterhub/kubespawner/pull/640 is needed to enable this. Tiny patch, hopefully it gets merged soon.
| gharchive/pull-request | 2022-09-28T15:40:11 | 2025-04-01T04:32:12.470673 | {
"authors": [
"rabernat",
"sgibson91",
"yuvipanda"
],
"repo": "2i2c-org/infrastructure",
"url": "https://github.com/2i2c-org/infrastructure/pull/1735",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
106566948 | Unhelpful error when saving a docbook file
I have been editing a DocBook article and used command formatting in a couple of places. Now, when I come to save the file, I get the error:
Oops!
Document is invalid systemId: myarticle.docbook; lineNumber: 234; columnNumber: 19; element "command" not allowed here; expected the element end-tag or element "address", "anchor", "annotation", "bibliography", "bibliolist", "blockquote", "bridgehead", "calloutlist", "caution", "classsynopsis", "cmdsynopsis", "constraintdef", "constructorsynopsis", "destructorsynopsis", "epigraph", "equation", "example", "fieldsynopsis", "figure", "formalpara", "funcsynopsis", "glossary", "glosslist", "important", "index", "indexterm", "informalequation", "informalexample", "informalfigure", "informaltable", "itemizedlist", "literallayout", "mediaobject", "methodsynopsis", "msgset", "note", "ns:include", "orderedlist", "para", "procedure", "productionset", "programlisting", "programlistingco", "qandaset", "refentry", "refsect1", "remark", "revhistory", "screen", "screenco", "screenshot", "section", "segmentedlist", "sidebar", "simpara", "simplelist", "simplesect", "synopsis", "table", "task", "tip", "toc", "variablelist" or "warning" (with xmlns:ns="/2001/XInclude")
I have looked at the payload sent to the server and it is pseudo-HTML, so I imagine the line number relates to an intermediate file. I tracked it down, with a bit of guesswork and trial and error, to this:
<pre>
<code class="command">sample code</code></pre>
I removed the formatting on this line and it saved. I think I started with an inline command format and then changed to using the formatted style. It could do with validating on the client, or returning errors that relate to what the user sees.
Can you provide steps to reproduce or provide the payload sent to the server that produced this error?
To reproduce:
Create a new article
In the first paragraph set the inline style to "Command"
Add some text
Change the paragraph format to "Formatted"
Save
Error
Document is invalid systemId: /Products/Enviroindex/untitled.docbook; lineNumber: 4; columnNumber: 13; element "command" not allowed here; expected element
Payload
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Untitled Article</title>
<style type="text/css"><![CDATA[@import url("https://wiki.3roundstones.com/callimachus/1.4/assets/ckeditor/plugins/codesnippet/lib/highlight/styles/github.css");
body { background-color: #f5f5f5; }
body.cke_editable.cke_show_blocks > * { background-color: #fff; padding-bottom: 5px;}
]]></style>
</head>
<body>
<h1>Untitled Article</h1>
<pre>
<code class="command">command</code></pre>
</body>
</html>
| gharchive/issue | 2015-09-15T14:22:30 | 2025-04-01T04:32:12.487088 | {
"authors": [
"edwardsph",
"jamesrdf"
],
"repo": "3-Round-Stones/callimachus",
"url": "https://github.com/3-Round-Stones/callimachus/issues/229",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1895598479 | 🛑 WEB - web.treinta.co is down
In 1c13b64, WEB - web.treinta.co ($WEB_WEB_TREINTA_CO_URL) was down:
HTTP code: 403
Response time: 618 ms
Resolved: WEB - web.treinta.co is back up in f19c389 after 8 minutes.
| gharchive/issue | 2023-09-14T03:37:35 | 2025-04-01T04:32:12.491157 | {
"authors": [
"DanielVelasquezTreinta"
],
"repo": "30SAS/uptime",
"url": "https://github.com/30SAS/uptime/issues/827",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1922006811 | 🛑 MS - links is down
In 8210550, MS - links ($MS_LINKS_URL) was down:
HTTP code: 404
Response time: 145 ms
Resolved: MS - links is back up in c2f86ed after 11 minutes.
| gharchive/issue | 2023-10-02T14:13:39 | 2025-04-01T04:32:12.493308 | {
"authors": [
"DanielVelasquezTreinta"
],
"repo": "30SAS/uptime",
"url": "https://github.com/30SAS/uptime/issues/917",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
682123922 | Slicers Not Displaying Text, but still work, only on mac
When I open an Excel file that has a slicer on a pivot table:
_f, err := excelize.OpenFile("excel/DistanceDashboard.xlsx")
and then stream it to Chrome running on a Mac:
f.Write(w)
The slicers do not display any content in Chrome on the Mac; they still work, but the selection buttons are not visible.
Streaming the same file to Chrome on Windows works fine.
Thanks for your feedback, could you provide a spreadsheet xlsx attachment without confidential info?
Attached is the xlsx; it was generated using Excelize. On Mac, the slicers do not show the buttons.
If I insert new slicers, the same happens: the slicers do technically work, but the buttons are not shown.
I did try on different machines, and I did update to the latest Excel version.
Thanks for your help.
NonDisplayingSlicerButtons.xlsx
Could you provide the original spreadsheet file that was opened by excelize, and detailed code to reproduce this issue?
| gharchive/issue | 2020-08-19T19:34:58 | 2025-04-01T04:32:12.502645 | {
"authors": [
"TheoBo",
"xuri"
],
"repo": "360EntSecGroup-Skylar/excelize",
"url": "https://github.com/360EntSecGroup-Skylar/excelize/issues/693",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
254773781 | how to move an end point of a line and animate
What are the different ways you could move one end point of a Line and animate it?
The only way I know of so far is to construct a new Line object with the desired points and Transform between them. But is there a proper way to do this without constructing a new Line object?
I typically use the Line.put_start_and_end_on method. So you could write
self.play(line.put_start_and_end_on, line.get_start(), DESIRED_END_POINT)
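Under the hood such an animation just interpolates the moving endpoint each frame; a dependency-free sketch of the idea (not manim's actual implementation):

```python
# Dependency-free sketch of what the animation does: the moving endpoint is
# linearly interpolated between its old and new positions each frame.
def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

old_end, new_end = (1.0, 0.0, 0.0), (1.0, 2.0, 0.0)
# Eleven frames sweeping t from 0.0 to 1.0.
frames = [lerp(old_end, new_end, t / 10) for t in range(11)]
print(frames[0], frames[-1])
```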
excellent! thank you!
| gharchive/issue | 2017-09-01T23:46:34 | 2025-04-01T04:32:12.522151 | {
"authors": [
"3b1b",
"chuck1"
],
"repo": "3b1b/manim",
"url": "https://github.com/3b1b/manim/issues/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1331461487 | fix: fail to run inside the bin directory
Solution:
convert $0 to an absolute path first.
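The fix can be sketched like this (illustrative; the actual patch in the linked commit may differ):

```shell
#!/bin/sh
# Resolve the directory containing this script to an absolute path, so a
# relative invocation like ./bin/app can still locate sibling resources
# (e.g. ../lib) regardless of the caller's working directory.
script_dir=$(cd "$(dirname "$0")" && pwd)
echo "$script_dir"
```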
Thank you!
| gharchive/pull-request | 2022-08-08T07:38:10 | 2025-04-01T04:32:12.558125 | {
"authors": [
"3noch",
"yihuang"
],
"repo": "3noch/nix-bundle-exe",
"url": "https://github.com/3noch/nix-bundle-exe/pull/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
224106242 | Enhancement request - have option import Swagger into 3scale and display on Portal
Auto import is very cool as is. It would be better if it imported the Swagger to 3scale as an Active Doc and displayed it on the Dev Portal.
thanks @tnscorcoran
it should be available with this PR #28
Once someone reviews and accepts it
| gharchive/issue | 2017-04-25T11:30:23 | 2025-04-01T04:32:12.560341 | {
"authors": [
"picsoung",
"tnscorcoran"
],
"repo": "3scale/3scale-cli",
"url": "https://github.com/3scale/3scale-cli/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
848810550 | From discord to telegram but not telegram to discord
Is your feature request related to a problem? Please describe.
No
Describe the function you'd like
I want an option so I can send from Discord to Telegram but not vice versa.
This already exists, instead of using [[gateway.inout]] use [[gateway.in]] to only receive from this gateway or [[gateway.out]] to only send to this gateway.
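A minimal config sketch of that one-way setup (gateway, account, and channel names here are hypothetical):

```toml
# One-way bridge: messages flow from Discord into Telegram only.
[[gateway]]
name = "one-way"
enable = true

    # Discord side only feeds messages into the gateway.
    [[gateway.in]]
    account = "discord.mydiscord"
    channel = "general"

    # Telegram side only receives messages from the gateway.
    [[gateway.out]]
    account = "telegram.mytelegram"
    channel = "-1001234567890"
```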
| gharchive/issue | 2021-04-01T21:49:33 | 2025-04-01T04:32:12.595424 | {
"authors": [
"42wim",
"borjita2019"
],
"repo": "42wim/matterbridge",
"url": "https://github.com/42wim/matterbridge/issues/1443",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
372553175 | Clean-up message send logic.
Four down (#511, #517, #519, #521), three more to go for #510 and its accompanying (p)refactor.
This is a fairly easy one that deals mostly with breaking up the single Send() method into a main Send() "selector" function and two implementation ones (the existing sendWebhook() and a new sendRTM(), with a further lower-level utility function as well).
The only other significant change is part of this new sendRTM() method: instead of using the slack client of the bridge to do REST calls to Slack we actually use the RTM websocket.
The two other remaining PRs after this one will be:
Large-scale rewrite of the incoming event handling: splitting out the massive handleMessageEvent() method into multiple smaller ones with distinct purposes.
Rewrite of the Connect() method for simplification and with warning for use of legacy-tokens.
👍
| gharchive/pull-request | 2018-10-22T14:49:16 | 2025-04-01T04:32:12.598432 | {
"authors": [
"42wim",
"Helcaraxan"
],
"repo": "42wim/matterbridge",
"url": "https://github.com/42wim/matterbridge/pull/531",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
219892722 | Prohibit information loss
1.) Connection lost Message:
"You have lost connection to the 4Minitz server.
You will lose unsaved data if you close your browser.
Click icon for reconnection attempt.
If the problem persists, contact the server admin."
[ ] Check usability on smartphone!
2.) Use window.onbeforeunload()
See http://stackoverflow.com/a/1704783/2580805
message: "Do you really want to leave 4Minitz?"
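A minimal sketch of that guard (browser API; window is stubbed here so the snippet stays self-contained):

```javascript
// Minimal sketch of the beforeunload guard described above. In the browser
// this hangs off the real window object; a stub keeps the snippet runnable.
const win = {};
win.onbeforeunload = function () {
  // Returning a string asks the browser to show a leave-confirmation dialog
  // (modern browsers display a generic message instead of this exact text).
  return "Do you really want to leave 4Minitz?";
};
console.log(win.onbeforeunload());
```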
Team estimation: 1d (or less)
| gharchive/issue | 2017-04-06T13:06:57 | 2025-04-01T04:32:12.659251 | {
"authors": [
"derwok"
],
"repo": "4minitz/4minitz",
"url": "https://github.com/4minitz/4minitz/issues/206",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2374248443 | xlsx file preview: date format does not display correctly
Component name and version: @js-preview/excel 1.7.8
Runtime environment: node 21.3.0
Browser: Google Chrome
react18
Bundler and version: webpack5
Description:
A date in 2022/1/13 format in the xlsx file should display as January 13, 2022, but it is actually shown as 44574. The cell in the spreadsheet is set to a custom date format.
I ran into the same problem.
You'll have to handle it yourself: modify the value in beforeTransformData.
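The 44574 in the report is an Excel serial date (days counted in the 1900 date system). A Python sketch of the conversion you might apply in a hook like beforeTransformData (illustrative; not vue-office's actual API):

```python
from datetime import date, timedelta

def excel_serial_to_date(serial: int) -> date:
    # Excel's 1900 date system: the 1899-12-30 epoch absorbs both the
    # offset and the historical (fictitious) 1900-02-29 leap day.
    return date(1899, 12, 30) + timedelta(days=serial)

print(excel_serial_to_date(44574))  # → 2022-01-13
```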
| gharchive/issue | 2024-06-26T04:36:01 | 2025-04-01T04:32:12.678636 | {
"authors": [
"501351981",
"GSsharon",
"SnartL"
],
"repo": "501351981/vue-office",
"url": "https://github.com/501351981/vue-office/issues/309",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2285711385 | OpenOcean - There is no connection to the safe and network is not detected automatically
Open app OpenOcean on Aurora /Arbitrum- https://app.safe.global/apps/open?safe=aurora:0x0a8f15cc539946F2F736576FdE71F704aFCA949b&appUrl=https%3A%2F%2Fgnosis.openocean.finance
Current result: Ethereum is the default network and there is no option to connect Safe
Aurora list
Ethereum list
Expected result: I suggest disabling the app and contacting the team
@foxfanfan @cformcn could you fix the app?
Disabled the app until this is fixed.
| gharchive/issue | 2024-05-08T14:11:41 | 2025-04-01T04:32:12.689474 | {
"authors": [
"katspaugh",
"liliya-soroka"
],
"repo": "5afe/safe-apps-list",
"url": "https://github.com/5afe/safe-apps-list/issues/350",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1155019529 | [Issue] Exporting selected results fails
Unable to select specific scan results for export; it reports that no action was selected.
Exporting directly without selecting anything works fine.
Right now you can export after searching, or export everything directly; exporting a selection is indeed a bit broken. I'll fix it later.
| gharchive/issue | 2022-03-01T08:24:02 | 2025-04-01T04:32:12.702537 | {
"authors": [
"5wimming",
"Zhang21"
],
"repo": "5wimming/ASE",
"url": "https://github.com/5wimming/ASE/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2546966056 | feat: manage sizes
This adds the full create, read, update, and delete functionality for sizes, following the approach used in manage categories (#13). A route was created within the admin routes where sizes are managed with a similar panel, along with custom hooks for each CRUD operation.
A few things that came up while implementing this feature:
The backend has no minimum-length validations. This means we can register sizes/categories/products with the name 'a'.
Somewhere in the admin panel we should be able to view the logically deleted sizes/categories/products.
ChatGPT makes pretty good icons.
@Fedesan14 I agree about one MR per CRUD operation. Here I just wanted to move fast because we're short on time :face_exhaling:
Looks perfect, Eze. A few comments:
In another MR we should add a listing to view the items that have been deleted, with the option to restore them.
I had never tried ChatGPT for creating icons, interesting. In any case, if you ever need icons I recommend sites like Tabler Icons and PhosphorIcons :ok_hand:
| gharchive/pull-request | 2024-09-25T05:23:56 | 2025-04-01T04:32:12.728806 | {
"authors": [
"EzeSosa",
"delgadomatias"
],
"repo": "7-Seven-Up/megastore-frontend",
"url": "https://github.com/7-Seven-Up/megastore-frontend/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1334159210 | Labeling finished but it won't run
I changed the path and it ran halfway, but then it still errored out.
One of the classes has too few samples to generate the test set from; change the test set size n_test.
How do I know how many samples each class has?
Update the code; I added output of the total count for each class first.
Oh, I see. The code has 3 classes but I only have 1 and 0.
By the way, I have over 6000 images; how come class 0 only has 800 and class 1 only has 280?
By the way, I have over 6000 images; how come class 0 only has 800 and class 1 only has 280?
Images with an aspect ratio over 2.5 are discarded, and unsupported formats that can't be opened are discarded too.
Could a filename that's too long cause it to fail to open? I have over 4000 images and they're all jpg.
Could the code I added myself have broken the whole program?
I forced it to support jpeg: for x in os.listdir(root) if x.lower().endswith('.jpg') or x.lower().endswith('.png') or x.lower().endswith('.jpeg')]
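A cleaner equivalent of that extension filter (an illustrative sketch, not the crawler's actual code):

```python
import os

ALLOWED_EXTS = {".jpg", ".jpeg", ".png"}

def list_images(root):
    # Equivalent to the chained endswith() checks quoted above, but
    # case-insensitive and easier to extend with new formats.
    return sorted(x for x in os.listdir(root)
                  if os.path.splitext(x)[1].lower() in ALLOWED_EXTS)
```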
By the way, clicking through 4000 images one by one would be exhausting.
| gharchive/issue | 2022-08-10T06:48:15 | 2025-04-01T04:32:12.799320 | {
"authors": [
"7eu7d7",
"clark-2468"
],
"repo": "7eu7d7/pixiv_AI_crawler",
"url": "https://github.com/7eu7d7/pixiv_AI_crawler/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
462419462 | activestorageのゴミが残っていないかを検証する
refs: #1241
IDs around 9382, 9293, 10118, 11014, and 7064 are in scope.
https://sentry.husq.tk/sentry/backend-production/issues/6460
irb(main):002:0> User.where(id: [9382, 9293, 10118, 11014, 7064]).pluck(:iidxid)
(36.3ms) SELECT "users"."iidxid" FROM "users" WHERE "users"."id" IN ($1, $2, $3, $4, $5) [["id", 9382], ["id", 9293], ["id", 10118], ["id", 11014], ["id", 7064]]
=> ["3390-6546", "1677-6216", "2659-1449", "0866-3099", "2571-2245"]
For now, all of them had images.
We need a process that checks whether every blob's key exists in GCS and deletes the ones that aren't needed.
Verify with the following one-liner:
list = File.read('./hoge').split("\n"); no_key = []; ActiveStorage::Blob.find_each { |b| list.include?(b.key) ? list.delete(b.key) : no_key.push(b) };
irb(main):014:0> list = File.read('./hoge').split("\n"); no_key = []; ActiveStorage::Blob.find_each { |b| list.include?(b.key) ? list.delete(b.key) : no_key.push(b) };
ActiveStorage::Blob Load (10.1ms) SELECT "active_storage_blobs".* FROM "active_storage_blobs" ORDER BY "active_storage_blobs"."id" ASC LIMIT $1 [["LIMIT", 1000]]
=> nil
irb(main):015:0> no_key
=> []
irb(main):016:0> list
=> ["JNLnmeB2h3MQQfhTrPFzD2ag"]
Deleted the object for JNLnmeB2h3MQQfhTrPFzD2ag, so this is done.
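The core of the reconciliation reduces to two array differences; a plain-Ruby sketch with hypothetical sample keys:

```ruby
# Plain-Ruby sketch of the reconciliation above: given the key listing from
# GCS (hard-coded sample values here) and the keys recorded in
# active_storage_blobs, each side's orphans are just array differences.
gcs_keys  = ["kept-key", "JNLnmeB2h3MQQfhTrPFzD2ag"]
blob_keys = ["kept-key"]

orphan_objects = gcs_keys - blob_keys  # objects in GCS with no blob record
missing_files  = blob_keys - gcs_keys  # blob records whose object is gone
puts orphan_objects.inspect
```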
| gharchive/issue | 2019-06-30T15:53:38 | 2025-04-01T04:32:12.825470 | {
"authors": [
"8398a7"
],
"repo": "8398a7/abilitysheet",
"url": "https://github.com/8398a7/abilitysheet/issues/1245",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1910566024 | feat: allow setting font to none
When set to none, the default monospace font which is used for raw code blocks will be used. The default is left as Cascadia Code so that this is not a breaking change.
#import "../ansi-render.typ": *
#set page(width: auto, height: auto, margin: 10pt)
#let demo-text = "\u{1b}[38;2;255;0;0mThis text is red.\u{1b}[0m
\u{1b}[48;2;0;255;0mThis background is green.\u{1b}[0m
\u{1b}[38;2;255;255;255m\u{1b}[48;2;0;0;255mThis text is white on a blue background.\u{1b}[0m
\u{1b}[1mThis text is bold.\u{1b}[0m
\u{1b}[4mThis text is underlined.\u{1b}[0m
\u{1b}[38;2;255;165;0m\u{1b}[48;2;255;255;0mThis text is orange on a yellow background.\u{1b}[0m
"
#ansi-render(
demo-text,
inset: 5pt,
radius: 3pt,
theme: terminal-themes.vscode,
font: "invalid font",
)
#ansi-render(
demo-text,
inset: 5pt,
radius: 3pt,
theme: terminal-themes.vscode,
font: none,
)
#ansi-render(
demo-text,
inset: 5pt,
radius: 3pt,
theme: terminal-themes.vscode,
font: "JetBrains Mono",
)
I changed its behavior to fall back on a specific font when set to none, because there's an issue with raw(str) rendering empty newlines (not sure if it's a bug in typst) and with the default font in raw (#2231).
However, I do like the idea of using the same setting as raw has, so feel free to make another PR if you have a solution.
I know there are issues with the default font (I also noticed the ascender being wrong, and the font has been unmaintained for six years), but I thought it would still be the best fit for none given it's the only monospace font that comes bundled with the typst binary. Users can of course always set the font to DejaVu Sans Mono themselves, but that requires them to know what the default font is, and that also might change with future typst versions.
As for the newline problem, you're right there seems to be something fishy going on. raw("a\nb") works as expected, but raw("a");raw("\n");raw("b") stays on one line, and raw("a");raw("\n\n");raw("b") only inserts one newline. I guess we could split the string by newlines and use linebreak() for every \n instead :shrug:
I changed to raw in the latest commit, which probably fixed the issue.
Okay so you manually do the line breaks now and always use raw, but looks good. Thanks!
I still opened an issue on Typst https://github.com/typst/typst/issues/2240 because the behavior is inconsistent between using text("\n") and raw("\n")
| gharchive/pull-request | 2023-09-25T03:02:24 | 2025-04-01T04:32:12.838660 | {
"authors": [
"8LWXpg",
"RubixDev"
],
"repo": "8LWXpg/typst-ansi-render",
"url": "https://github.com/8LWXpg/typst-ansi-render/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1025160708 | What would you think about additional functionality from the standard C4 library?
One thing I really like about the base Plantuml-stdlib is the ability to add version tags to show systems changing (planned deprecation, future systems). I think we could leverage this right out of the box with some additional configuration options on c4sharp's API, but I wasn't sure whether you were trying to keep this library super simple.
Or
@jtreher wow, dude! You read my mind... I already wrote some code to start this idea some time ago! Fill free to submit a PR to contemplate this feature!
https://github.com/8T4/c4sharp/blob/38d0696c7796dd9e490f4f727538341ddf33f67d/src/components/C4Sharp/Models/Structure.cs#L19
This issue was implemented. Please see Release 4.1.x! Thank you for your contribution.
| gharchive/issue | 2021-10-13T12:03:32 | 2025-04-01T04:32:12.841635 | {
"authors": [
"jtreher",
"yanjustino"
],
"repo": "8T4/c4sharp",
"url": "https://github.com/8T4/c4sharp/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1947197945 | Progress bar won't show on same line
Hi,
This is a great plugin, thanks. I have one issue, though - I can't get the bar to be on the same line as the bullet and text. Is this a known issue? I've disabled all other extensions and it still happens.
Thanks,
Alex
Looks like you're missing some CSS somehow. Not sure why. You need to add this in a CSS code block on your [[roam/css]] page:
.roam-block>span>div.dont-focus-block {
display: inline-block;
}
Hi, I added that but it hasn't helped.
I removed the extension and re-added it, then added the code you suggested as a css code block:
Any ideas?
Thanks
Looks like Roam changed their HTML; try this:
.roam-block>span>span.dont-focus-block {
display: inline-block;
}
Amazing, that worked, thanks!
Is there a way to configure it so that the top-level block/checkbox is not included in the count? Ideally, I would like it to only display the checkboxes underneath it. This way, the progress bar is a representation of all the individual actions that make up the item I am tracking.
No, that's not possible right now, unfortunately. Glad the CSS worked though.
| gharchive/issue | 2023-10-17T11:31:42 | 2025-04-01T04:32:12.855356 | {
"authors": [
"8bitgentleman",
"alexzadeh"
],
"repo": "8bitgentleman/roam-depot-todo-progress-bar",
"url": "https://github.com/8bitgentleman/roam-depot-todo-progress-bar/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1219221914 | 🛑 Mailsystem Internal is down
In 35c0d91, Mailsystem Internal ($MAIL_INT) was down:
HTTP code: 504
Response time: 10598 ms
Resolved: Mailsystem Internal is back up in c88e492.
| gharchive/issue | 2022-04-28T20:03:50 | 2025-04-01T04:32:12.857584 | {
"authors": [
"8ear"
],
"repo": "8ear/upptime",
"url": "https://github.com/8ear/upptime/issues/518",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
352860128 | Improving how we write tests
I'm thinking about what should go into the testing section's table of contents.
How to name tests
When looking for tests related to a certain trigger, or for tests within a single module, it can be hard to tell what a test is trying to do if the intent behind the test names is inconsistent.
English and Korean names are mixed across our tests, which makes it hard to find the test for a given case.
Writing them in Korean rather than English seems better, e.g. "test_자동투자_켜기".
Making test names more consistent
Right now test_<some scenario>, test_<some method>, test_<some state>, and so on are all possible. But there are cases where a name like test_<some scenario> asserts only on a single method, and then it is hard to know whether side effects are well tested. Tests for the same scenario can also end up scattered or duplicated.
How about distinguishing them, e.g. test_unit_<method>_<state> versus test_scenario_<scenario>?
Test granularity: how about managing unit tests and scenario tests separately?
Sometimes we add a method and create a test just for that method. It would be better to manage scenario tests so that when a method is added we can simply extend the relevant scenario test, checking its implications and side effects.
I expect this would also serve as a kind of history tracking.
Mock
Several approaches are currently mixed together; this could be improved.
Decorators are used in the following cases:
Wrapping the test class
Wrapping individual test methods
When most of the individual tests in a test class need a common mock, we use mock_method = mock.patch('path'). This applies to the following cases. Reference
Putting it in setUp
Usually used so that the mocked method is not shared between tests when checking whether a method was called, e.g. with .assert_called().
Putting it in setUpTestData or setUpClass
Usually used so that heavyweight methods, or methods with external dependencies, do not actually run.
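The setUp-based patching described above can be sketched as follows (a generic unittest example, not this codebase's actual tests):

```python
import unittest
from unittest import mock

class Gateway:
    def charge(self, amount):
        raise RuntimeError("real network call")  # must never run in tests

class PaymentTest(unittest.TestCase):
    def setUp(self):
        # Patching in setUp (rather than setUpClass) gives each test its own
        # mock, so assert_called() state is never shared between tests.
        patcher = mock.patch.object(Gateway, "charge", return_value="ok")
        self.mock_charge = patcher.start()
        self.addCleanup(patcher.stop)

    def test_charge_called_once(self):
        self.assertEqual(Gateway().charge(1000), "ok")
        self.mock_charge.assert_called_once()
```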
Declaring fixtures
There are cases where real model values need to be extracted into fixtures for testing.
Factory
Request inside unit test - test cases for internal/external APIs
Defining constants used in tests
I'd prefer the name after test_ to be the method name. That way the IDE can find the tests for a given method, which makes refactoring easy.
So I tend to write the method name and add a description in Korean, like test_auto_invest_자동투자끄기.
| gharchive/issue | 2018-08-22T08:52:51 | 2025-04-01T04:32:12.864647 | {
"authors": [
"Kirade",
"silry"
],
"repo": "8percent/styleguide",
"url": "https://github.com/8percent/styleguide/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2388971179 | how to run
Can you please tell me what steps you are following after cloning the repo?
https://www.youtube.com/watch?v=ZoWems8EqL4
| gharchive/issue | 2024-07-03T15:54:32 | 2025-04-01T04:32:12.907041 | {
"authors": [
"brijr",
"monty97"
],
"repo": "9d8dev/next-wp",
"url": "https://github.com/9d8dev/next-wp/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1836351219 | acme: regex not mathing as much as possible
Consider the following text:
abc xyz
The following Edit command:
Edit ,x/[a-z]+/ s/.*/T&/g
Should result in
Tabc Txyz
But it seems like x is capturing single characters only instead of the full 3-character group, so the command results in:
TaTbTc TxTyTz
I tried to find the problem to fix it myself but could not find where the bug is. It seems to work fine in go sam and plan9port acme.
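For comparison, the expected longest-match behavior can be illustrated with an analogous greedy substitution in Python (illustrative only; sam's structural regular expressions are not Python's re):

```python
import re

# A greedy [a-z]+ should consume each whole run of letters, so the
# substitution fires once per word, not once per character.
text = "abc xyz"
result = re.sub(r"[a-z]+", lambda m: "T" + m.group(0), text)
print(result)  # → Tabc Txyz
```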
I think I got it:
https://github.com/pedramos/9fans/commit/c70c0dbca34fe7f2a2ce603b2387cff6a640961b.patch
| gharchive/issue | 2023-08-04T08:57:43 | 2025-04-01T04:32:12.909342 | {
"authors": [
"pedramos"
],
"repo": "9fans/go",
"url": "https://github.com/9fans/go/issues/108",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2168120330 | 🛑 coderank is down
In d6b6e09, coderank (https://profile.codersrank.io/user/9renpoto/) was down:
HTTP code: 503
Response time: 491 ms
Resolved: coderank is back up in b49fb96 after 27 minutes.
| gharchive/issue | 2024-03-05T01:56:22 | 2025-04-01T04:32:12.911703 | {
"authors": [
"9renpoto"
],
"repo": "9renpoto/upptime",
"url": "https://github.com/9renpoto/upptime/issues/1493",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
657166935 | Is it possible to set a default selected item on a chart?
A question for the author: without any click happening, is there a way to set a default selected index on the chart via a method or property?
I'm not sure if this is the effect you want: https://api.highcharts.com.cn/highcharts#Point.select
See the related issue #932
The effect I'm asking about: after the chart has finished rendering, can an Objective-C method or property automatically set a particular data point as selected and show the vertical crosshair line?
That feature doesn't exist; Highcharts doesn't seem to provide such an API.
Then may I ask whether AAChartKit has a way to set the width of each bar in a column chart?
Then may I ask whether AAChartKit has a way to set the width of each bar in a column chart?
For customizing column/bar chart widths, see issue https://github.com/AAChartModel/AAChartKit/issues/537
Then, after the chart has finished rendering, can AAChartKit get the exact position (frame) of each data point or bar?
Then, after the chart has finished rendering, can AAChartKit get the exact position (frame) of each data point or bar?
That's not available either. You can only get the callback events for the user's finger sliding over or tapping the chart; the callback event provides the following information:
@interface AAMoveOverEventMessageModel : NSObject
@property (nonatomic, copy) NSString *name;
@property (nonatomic, strong) NSNumber *x;
@property (nonatomic, strong) NSNumber *y;
@property (nonatomic, copy) NSString *category;
@property (nonatomic, strong) NSDictionary *offset;
@property (nonatomic, assign) NSUInteger index;
@end
A sample of the event information printed to the console is as follows:
2020-07-16 15:52:46.459878+0800 AAChartKit-ProDemo[29990:5410796] 👌👌👌👌 selected point series element name: 2019
user finger moved over!!!,get the move over event message: {
category = C;
index = 6;
name = 2019;
offset = {
plotX = "198.79166666667";
plotY = "324.906987093903";
};
x = 6;
y = "18.6";
}
2020-07-16 15:55:13.688314+0800 AAChartKit-ProDemo[29990:5410796] 👌👌👌👌 selected point series element name: 2017
user finger moved over!!!,get the move over event message: {
category = Swift;
index = 1;
name = 2017;
offset = {
plotX = "45.875";
plotY = "463.1780151312862";
};
x = 1;
y = "6.9";
}
2020-07-16 15:55:14.571877+0800 AAChartKit-ProDemo[29990:5410796] 👌👌👌👌 selected point series element name: 2018
user finger moved over!!!,get the move over event message: {
category = Python;
index = 2;
name = 2018;
offset = {
plotX = "76.458333333333";
plotY = "448.8121940364931";
};
x = 2;
y = "5.7";
}
2020-07-16 15:55:15.598733+0800 AAChartKit-ProDemo[29990:5410796] 👌👌👌👌 selected point series element name: 2018
user finger moved over!!!,get the move over event message: {
category = R;
index = 10;
name = 2018;
offset = {
plotX = "321.125";
plotY = "423.6720071206053";
};
x = 10;
y = "8.6";
}
2020-07-16 15:55:16.334395+0800 AAChartKit-ProDemo[29990:5410796] 👌👌👌👌 selected point series element name: 2017
user finger moved over!!!,get the move over event message: {
category = MATLAB;
index = 11;
name = 2017;
offset = {
plotX = "351.70833333333";
plotY = "430.8549176680018";
};
x = 11;
y = "9.6";
}
The effect I'm asking about: after the chart has finished rendering, can an Objective-C method or property automatically set a particular data point as selected and show the vertical crosshair line?
See the same question:
https://github.com/AAChartModel/AAChartKit-Swift/issues/345
| gharchive/issue | 2020-07-15T08:31:23 | 2025-04-01T04:32:12.948901 | {
"authors": [
"AAChartModel",
"qinzhibo"
],
"repo": "AAChartModel/AAChartKit",
"url": "https://github.com/AAChartModel/AAChartKit/issues/939",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
500362990 | #166: Fix path order for loading dependencies
This should fix the problem with loading dependencies described here #166
Any chance of merging it?
How does this compare to #114, @amis92? Since both PRs purport to fix the problem, which one should we merge?
This one can be closed; I don't need this merge anymore. While waiting for this PR to be accepted, I completely rewrote CodeGeneration.Roslyn, because I spotted a couple of other issues and the code was over-complicated.
Thank you, @cezarypiatek
| gharchive/pull-request | 2019-09-30T15:46:03 | 2025-04-01T04:32:12.962947 | {
"authors": [
"AArnott",
"cezarypiatek"
],
"repo": "AArnott/CodeGeneration.Roslyn",
"url": "https://github.com/AArnott/CodeGeneration.Roslyn/pull/170",
"license": "MS-PL",
"license_type": "permissive",
"license_source": "github-api"
} |
208674500 | Add support for MSBuild Core (aka dotnet build)
This migrates the library projects to the new .NET SDK (aka .NET Core) project type of VS2017 and NB.GV now works in dotnet build.
We should be very close to this working on linux/OSX as well, although we have problems with finding the native binaries on those systems.
Closes #56
Am I correct, given this PR, that [this doc]
(https://github.com/AArnott/Nerdbank.GitVersioning/blob/master/doc/dotnet-cli.md) is now out of date? There it says "Sorry. DNX and dotnet CLI don't support extensible versioning systems." Can I use NB.GV with a dotnet build vs2017+csproj setup?
Quite right, @el2iot2. Thank you for calling that out. I have updated the doc
| gharchive/pull-request | 2017-02-18T21:56:12 | 2025-04-01T04:32:12.965885 | {
"authors": [
"AArnott",
"el2iot2"
],
"repo": "AArnott/Nerdbank.GitVersioning",
"url": "https://github.com/AArnott/Nerdbank.GitVersioning/pull/111",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2109384037 | Getting started guide for new users
resolution_duplicate | by saw562@nci.org.au
We should have a way of directing users new to accessdev towards the user documentation and/or a 'getting started' type guide that tells them how to set up the machine & the differences from accesscollab.
This should be the first thing they see when they log onto the machine, e.g. a message like
================================================
Welcome to Accessdev!
For help getting started please go to http://accessdev.nci.org.au/getting-started
To read the general user guides go to http://accessdev.nci.org.au/help
You can also ask for help by emailing access_help@nf.nci.org.au
=================================================
The links don't necessarily need to be served by accessdev, they could be redirects e.g. to the trac.nci wiki, the idea is to have a memorable shortcut for people to type out rather than https://trac.nci.org.au/trac/access/wiki/accessdev/...
We could display the message to only new users using a profile.d script that e.g. only runs if the home directory is empty or something along those lines.
Once a user has read the getting started guide and followed its instructions they should be able to run a UMUI test job (e.g. ssh & svn access has been set up) and know how to migrate a job from accesscollab to accessdev (how to copy across any hand edit files & such needed to run)
Issue migrated from trac:78 at 2024-01-31 17:31:08 +1100
ibc599 changed reviews which not transferred by tractive
ibc599 changed status from new to assigned
ibc599 changed owner from `` to Admin
@scott.wales@bom.gov.au changed priority from major to blocker
@scott.wales@bom.gov.au changed owner from Admin to saw562
@scott.wales@bom.gov.au changed resolution from `` to duplicate
@scott.wales@bom.gov.au commented
Dupe of #48
@scott.wales@bom.gov.au changed status from assigned to accepted
@scott.wales@bom.gov.au changed status from accepted to closed
@scott.wales@bom.gov.au commented
Milestone accessdev v1.0 deleted
@scott.wales@bom.gov.au removed milestone (was accessdev v1.0)
| gharchive/issue | 2014-02-20T14:47:31 | 2025-04-01T04:32:12.998552 | {
"authors": [
"penguian"
],
"repo": "ACCESS-NRI/accessdev-Trac-archive",
"url": "https://github.com/ACCESS-NRI/accessdev-Trac-archive/issues/78",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2402230019 | [Snyk] Security upgrade django from 3.2.25 to 4.2.14
This PR was automatically created by Snyk using the credentials of a real user.
Snyk has created this PR to fix 4 vulnerabilities in the pip dependencies of this project.
Snyk changed the following file(s):
services/workshop/requirements.txt
[!IMPORTANT]
Check the changes in this PR to ensure they won't cause issues with your project.
Max score is 1000. Note that the real score may have changed since the PR was raised.
This PR was automatically created by Snyk using the credentials of a real user.
Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.
Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.
For more information:
🧐 View latest project report
📜 Customise PR templates
🛠 Adjust project settings
📚 Read about Snyk's upgrade logic
Learn how to fix vulnerabilities with free interactive lessons:
🦉 Denial of Service (DoS)
🦉 Directory Traversal
Checkmarx One – Scan Summary & Details – dff96d5f-24db-4e33-ab82-69f87988aef5
New Issues

| Severity | Issue | Source File / Package | Checkmarx Insight |
| --- | --- | --- | --- |
|  | CVE-2023-42282 | Npm-ip-1.1.8 | Vulnerable Package |
|  | CVE-2023-45288 | Go-golang.org/x/net-v0.0.0-20200324143707-d3edc9973b7e | Vulnerable Package |
|  | CVE-2023-50782 | Python-cryptography-40.0.2 | Vulnerable Package |
|  | CVE-2023-52428 | Maven-com.nimbusds:nimbus-jose-jwt-9.25.6 | Vulnerable Package |
|  | CVE-2023-6481 | Maven-ch.qos.logback:logback-core-1.2.7 | Vulnerable Package |
|  | CVE-2024-1597 | Maven-org.postgresql:postgresql-42.4.0 | Vulnerable Package |
|  | CVE-2024-22243 | Maven-org.springframework:spring-web-5.3.13 | Vulnerable Package |
|  | CVE-2024-22257 | Maven-org.springframework.security:spring-security-core-5.6.0 | Vulnerable Package |
|  | CVE-2024-22259 | Maven-org.springframework:spring-web-5.3.13 | Vulnerable Package |
|  | CVE-2024-22262 | Maven-org.springframework:spring-web-5.3.13 | Vulnerable Package |
|  | CVE-2024-23672 | Maven-org.apache.tomcat.embed:tomcat-embed-websocket-9.0.55 | Vulnerable Package |
|  | CVE-2024-23672 | Maven-org.apache.tomcat.embed:tomcat-embed-core-9.0.55 | Vulnerable Package |
|  | CVE-2024-24549 | Maven-org.apache.tomcat.embed:tomcat-embed-core-9.0.55 | Vulnerable Package |
|  | CVE-2024-26130 | Python-cryptography-40.0.2 | Vulnerable Package |
|  | CVE-2024-27088 | Npm-es5-ext-0.10.62 | Vulnerable Package |
|  | CVE-2024-27351 | Python-Django-4.1.13 | Vulnerable Package |
|  | CVE-2024-29180 | Npm-webpack-dev-middleware-3.7.3 | Vulnerable Package |
|  | CVE-2024-31573 | Maven-org.xmlunit:xmlunit-core-2.8.3 | Vulnerable Package |
|  | CVE-2024-34069 | Python-Werkzeug-2.0.3 | Vulnerable Package |
|  | CVE-2024-34750 | Maven-org.apache.tomcat.embed:tomcat-embed-core-9.0.55 | Vulnerable Package |
|  | CVE-2024-37890 | Npm-ws-5.2.3 | Vulnerable Package |
|  | CVE-2024-37890 | Npm-ws-6.2.2 | Vulnerable Package |
|  | CVE-2024-4068 | Npm-braces-2.3.2 | Vulnerable Package |
|  | CVE-2024-4068 | Npm-braces-3.0.2 | Vulnerable Package |
|  | CVE-2024-4340 | Python-sqlparse-0.2.4 | Vulnerable Package |
|  | Cx89a94f30-7a24 | Python-sqlparse-0.2.4 | Vulnerable Package |
|  | JWT_No_Signature_Verification | /services/web/src/utils.js: 21 | Attack Vector |
|  | Process_Control | /services/community/vendor/go.mongodb.org/mongo-driver/x/mongo/driver/auth/internal/gssapi/sspi_wrapper.c: 12 | Attack Vector |
|  | Process_Control | /services/community/vendor/github.com/globalsign/mgo/internal/sasl/sspi_windows.c: 17 | Attack Vector |
|  | CSRF | /services/workshop/crapi/mechanic/views.py: 131 | Attack Vector |
|  | CVE-2024-21520 | Python-djangorestframework-3.14.0 | Vulnerable Package |
|  | CVE-2024-28849 | Npm-follow-redirects-1.15.2 | Vulnerable Package |
|  | CVE-2024-28863 | Npm-tar-6.1.11 | Vulnerable Package |
|  | CVE-2024-29041 | Npm-express-4.18.2 | Vulnerable Package |
|  | CVE-2024-29415 | Npm-ip-1.1.8 | Vulnerable Package |
|  | CVE-2024-35195 | Python-requests-2.30.0 | Vulnerable Package |
|  | CVE-2024-39249 | Npm-async-2.6.4 | Vulnerable Package |
|  | CVE-2024-4067 | Npm-micromatch-3.1.10 | Vulnerable Package |
|  | CVE-2024-4067 | Npm-micromatch-4.0.2 | Vulnerable Package |
|  | Denial_Of_Service_Resource_Exhaustion | /services/community/vendor/github.com/lib/pq/url.go: 33 | Attack Vector |
|  | Filtering_Sensitive_Logs | /services/workshop/crapi/shop/tests.py: 143 | Attack Vector |
|  | Filtering_Sensitive_Logs | /services/workshop/crapi/apps.py: 65 | Attack Vector |
|  | Filtering_Sensitive_Logs | /services/workshop/crapi/shop/tests.py: 93 | Attack Vector |
|  | Filtering_Sensitive_Logs | /services/workshop/crapi/shop/tests.py: 126 | Attack Vector |
|  | Filtering_Sensitive_Logs | /services/workshop/crapi/apps.py: 72 | Attack Vector |
|  | Filtering_Sensitive_Logs | /services/workshop/crapi/shop/tests.py: 107 | Attack Vector |
|  | Filtering_Sensitive_Logs | /services/workshop/crapi/shop/tests.py: 163 | Attack Vector |
|  | Filtering_Sensitive_Logs | /services/workshop/crapi/shop/tests.py: 122 | Attack Vector |
|  | Filtering_Sensitive_Logs | /services/workshop/utils/helper.py: 5 | Attack Vector |
|  | Filtering_Sensitive_Logs | /services/workshop/utils/helper.py: 5 | Attack Vector |
|  | Missing_HSTS_Header | /services/identity/src/main/java/com/crapi/config/JwtAuthEntryPoint.java: 44 | Attack Vector |
|  | Missing_HSTS_Header | /services/workshop/crapi_site/settings.py: 1 | Attack Vector |
|  | Privacy_Violation | /services/identity/src/main/java/com/crapi/utils/SMTPMailServer.java: 56 | Attack Vector |
|  | Privacy_Violation | /services/identity/src/main/java/com/crapi/service/Impl/UserRegistrationServiceImpl.java: 60 | Attack Vector |
|  | Privacy_Violation | /services/identity/src/main/java/com/crapi/utils/SMTPMailServer.java: 56 | Attack Vector |
|  | Privacy_Violation | /services/community/api/config/Initialize_mongo.go: 27 | Attack Vector |
|  | Privacy_Violation | /services/community/api/config/Initialize_mongo.go: 27 | Attack Vector |
|  | Privacy_Violation | /services/community/api/config/Initialize_postgres.go: 36 | Attack Vector |
|  | Privacy_Violation | /services/community/vendor/github.com/jinzhu/gorm/field.go: 25 | Attack Vector |
|  | Privacy_Violation | /services/community/vendor/github.com/jinzhu/gorm/errors.go: 18 | Attack Vector |
|  | Privacy_Violation | /services/community/vendor/go.mongodb.org/mongo-driver/x/mongo/driver/auth/mongodbcr.go: 92 | Attack Vector |
|  | Privacy_Violation | /services/community/vendor/go.mongodb.org/mongo-driver/x/mongo/driver/auth/scram.go: 32 | Attack Vector |
|  | SSL_Verification_Bypass | /services/workshop/crapi/shop/views.py: 128 | Attack Vector |
|  | SSL_Verification_Bypass | /services/workshop/crapi/merchant/views.py: 77 | Attack Vector |
|  | SSL_Verification_Bypass | /services/community/api/auth/token.go: 56 | Attack Vector |
|  | SSRF | /services/workshop/crapi/user/models.py: 61 | Attack Vector |
|  | SSRF | /services/identity/src/main/java/com/crapi/controller/AuthController.java: 77 | Attack Vector |
|  | SSRF | /services/identity/src/main/java/com/crapi/controller/ChangeEmailController.java: 43 | Attack Vector |
Attack Vector
Unsafe_Object_Binding
/services/identity/src/main/java/com/crapi/controller/AuthController.java: 147
Attack Vector
Unsafe_Object_Binding
/services/identity/src/main/java/com/crapi/controller/ProfileController.java: 98
Attack Vector
Unsafe_Object_Binding
/services/identity/src/main/java/com/crapi/controller/ChangeEmailController.java: 43
Attack Vector
Unsafe_Object_Binding
/services/identity/src/main/java/com/crapi/controller/AuthController.java: 77
Attack Vector
Unsafe_Object_Binding
/services/identity/src/main/java/com/crapi/controller/AuthController.java: 77
Attack Vector
Unsafe_Object_Binding
/services/identity/src/main/java/com/crapi/controller/AuthController.java: 132
Attack Vector
Unsafe_Object_Binding
/services/identity/src/main/java/com/crapi/controller/UserController.java: 60
Attack Vector
Use_of_Cryptographically_Weak_PRNG
/services/gateway-service/main.go: 131
Attack Vector
Use_of_Cryptographically_Weak_PRNG
/services/gateway-service/main.go: 84
Attack Vector
Use_of_Cryptographically_Weak_PRNG
/services/community/vendor/go.mongodb.org/mongo-driver/x/mongo/driver/topology/topology.go: 380
Attack Vector
Use_of_Cryptographically_Weak_PRNG
/services/community/vendor/go.mongodb.org/mongo-driver/x/mongo/driver/topology/topology.go: 336
Attack Vector
CVE-2022-30636
Go-golang.org/x/crypto-v0.0.0-20200709230013-948cd5f35899
Vulnerable Package
Heap_Inspection
/services/identity/src/main/java/com/crapi/model/SeedUser.java: 31
Attack Vector
Heap_Inspection
/services/identity/src/main/java/com/crapi/entity/UserPrinciple.java: 35
Attack Vector
Heap_Inspection
/services/identity/src/main/java/com/crapi/entity/User.java: 36
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/AuthController.java: 77
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/ProfileController.java: 62
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/AuthController.java: 95
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/service/Impl/ProfileServiceImpl.java: 210
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/ProfileController.java: 147
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/service/Impl/ProfileServiceImpl.java: 208
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/service/Impl/ProfileServiceImpl.java: 209
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/ChangeEmailController.java: 43
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/AuthController.java: 77
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/AuthController.java: 77
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/AuthController.java: 54
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/ChangeEmailController.java: 43
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/AuthController.java: 77
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/AuthController.java: 54
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/ChangeEmailController.java: 43
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/AuthController.java: 77
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/ChangeEmailController.java: 43
Attack Vector
Log_Forging
/services/identity/src/main/java/com/crapi/controller/AuthController.java: 77
Attack Vector
Use_Of_Hardcoded_Password
/services/workshop/utils/mock_methods.py: 84
Attack Vector
Use_Of_Hardcoded_Password
/services/workshop/utils/mock_methods.py: 54
Attack Vector
Use_Of_Hardcoded_Password
/services/workshop/utils/mock_methods.py: 42
Attack Vector
Use_Of_Hardcoded_Password
/services/workshop/crapi/apps.py: 72
Attack Vector
Use_Of_Hardcoded_Password
/services/workshop/crapi/apps.py: 65
Attack Vector
Use_Of_Hardcoded_Password
/services/identity/src/test/java/com/crapi/service/Impl/OtpServiceImplTest.java: 240
Attack Vector
Use_Of_Hardcoded_Password
/services/identity/src/test/java/com/crapi/service/Impl/UserServiceImplTest.java: 534
Attack Vector
Use_Of_Hardcoded_Password
/services/identity/src/test/java/com/crapi/service/Impl/UserServiceImplTest.java: 555
Attack Vector
Use_Of_Hardcoded_Password
/services/identity/src/test/java/com/crapi/service/Impl/ProfileServiceImplTest.java: 361
Attack Vector
Use_Of_Hardcoded_Password
/services/identity/src/test/java/com/crapi/service/Impl/OtpServiceImplTest.java: 240
Attack Vector
Use_Of_Hardcoded_Password
/services/identity/src/test/java/com/crapi/service/Impl/OtpServiceImplTest.java: 240
Attack Vector
Use_Of_Hardcoded_Password
More results are available on AST platform
| gharchive/pull-request | 2024-07-11T04:37:51 | 2025-04-01T04:32:13.210417 | {
"authors": [
"acn-tesch"
],
"repo": "ACN-APPSAS/crAPI",
"url": "https://github.com/ACN-APPSAS/crAPI/pull/31",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
104534349 | Missing Push Notification Entitlement
Hi,
I recently included AFNetworking in an app and I received the following warning from Apple when I submitted the app for review:
Missing Push Notification Entitlement - Your app appears to include API used to register with the Apple Push Notification service, but the app signature's entitlements do not include the "aps-environment" entitlement. If your app uses the Apple Push Notification service, make sure your App ID is enabled for Push Notification in the Provisioning Portal, and resubmit after signing your app with a Distribution provisioning profile that includes the "aps-environment" entitlement. See "Provisioning and Development" in the Local and Push Notification Programming Guide for more information. If your app does not use the Apple Push Notification service, no action is required. You may remove the API from future submissions to stop this warning. If you use a third-party framework, you may need to contact the developer for information on removing the API.
This app has been submitted to Apple many times before and this is the first time I got this warning, so it seems reasonable to assume that this is being caused by AFNetworking. Is this a known issue? Is there a way to get rid of this warning without removing all of AFNetworking?
Thanks,
Zsombor
I've just started getting this warning email on an app that doesn't use AFNetworking and where I can't work out what change I made to cause the problem - I suspect it is a false positive in Apple's testing and will be silently fixed by them at some point.
Really annoying that there doesn't seem to be any feedback available about what API is triggering the problem.
This Apple forum thread has some indication that it might be an XCode 6.4 issue - https://forums.developer.apple.com/message/42918#42918 - and that lots of other people are seeing it.
Thanks Richard, you are right, this is not an AFNetworking problem after all.
| gharchive/issue | 2015-09-02T17:39:19 | 2025-04-01T04:32:13.271839 | {
"authors": [
"richardgroves",
"zpapp"
],
"repo": "AFNetworking/AFNetworking",
"url": "https://github.com/AFNetworking/AFNetworking/issues/2938",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2234881375 | May I ask if it's possible to apply for a CVE for this project if a security vulnerability is found?
Description
Yes of course, what is involved to apply for CVE?
GitHub can assign us one if we have a disclosure we need to file
| gharchive/issue | 2024-04-10T06:51:34 | 2025-04-01T04:32:13.280500 | {
"authors": [
"hackgoofer",
"ntindle",
"sunriseXu"
],
"repo": "AI-Engineer-Foundation/agent-protocol",
"url": "https://github.com/AI-Engineer-Foundation/agent-protocol/issues/116",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1669769378 | Issue when running
File "E:\autogpt\my_folder\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "E:\autogpt\my_folder\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "E:\autogpt\Auto-GPT\autogpt\__main__.py", line 572, in <module>
main()
File "E:\autogpt\Auto-GPT\autogpt\__main__.py", line 396, in main
agent.start_interaction_loop()
File "E:\autogpt\Auto-GPT\autogpt\__main__.py", line 448, in start_interaction_loop
assistant_reply = chat.chat_with_ai(
File "E:\autogpt\Auto-GPT\autogpt\chat.py", line 95, in chat_with_ai
) = generate_context(prompt, relevant_memory, full_message_history, model)
File "E:\autogpt\Auto-GPT\autogpt\chat.py", line 43, in generate_context
current_tokens_used = token_counter.count_message_tokens(current_context, model)
File "E:\autogpt\Auto-GPT\autogpt\token_counter.py", line 24, in count_message_tokens
encoding = tiktoken.encoding_for_model(model)
File "E:\autogpt\my_folder\lib\site-packages\tiktoken\model.py", line 75, in encoding_for_model
return get_encoding(encoding_name)
File "E:\autogpt\my_folder\lib\site-packages\tiktoken\registry.py", line 63, in get_encoding
enc = Encoding(**constructor())
File "E:\autogpt\my_folder\lib\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
File "E:\autogpt\my_folder\lib\site-packages\tiktoken\load.py", line 115, in load_tiktoken_bpe
return {
File "E:\autogpt\my_folder\lib\site-packages\tiktoken\load.py", line 115, in <dictcomp>
return {
ValueError: not enough values to unpack (expected 2, got 1)
I updated tiktoken, but the problem is still not solved.
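For reference, this particular ValueError inside load_tiktoken_bpe usually means the cached BPE file on disk is truncated or corrupted (e.g. an interrupted download), not that the tiktoken version is wrong; clearing the cache forces a clean re-download. A hedged sketch — the fallback cache location is an assumption based on tiktoken's default behavior, with the TIKTOKEN_CACHE_DIR / DATA_GYM_CACHE_DIR environment variables taking precedence:

```python
import os
import shutil
import tempfile

def tiktoken_cache_dir() -> str:
    # tiktoken caches downloaded BPE files; unless an env var overrides it,
    # it falls back to <system temp dir>/data-gym-cache
    return (
        os.environ.get("TIKTOKEN_CACHE_DIR")
        or os.environ.get("DATA_GYM_CACHE_DIR")
        or os.path.join(tempfile.gettempdir(), "data-gym-cache")
    )

def clear_tiktoken_cache() -> bool:
    """Delete the cache so tiktoken re-downloads clean BPE files on next use."""
    cache = tiktoken_cache_dir()
    if os.path.isdir(cache):
        shutil.rmtree(cache)
        return True
    return False
```

After clearing, re-running the program should trigger a fresh download of the `cl100k_base` encoding.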
This project does not use AutoGPT. How about discussing it at https://github.com/AI-LLM/prompt-patterns/discussions/4 instead?
| gharchive/issue | 2023-04-16T08:22:48 | 2025-04-01T04:32:13.287920 | {
"authors": [
"luweigen",
"xai26285"
],
"repo": "AI-LLM/prompt-patterns",
"url": "https://github.com/AI-LLM/prompt-patterns/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2564445268 | add : mongodb integration
Add MongoDB Export Support
Purpose:
Add support for exporting data from MongoDB databases to the VDF format.
Key Changes:
Implemented the ExportMongoDB class that inherits from the ExportVDB base class.
Added functionality to connect to a MongoDB Atlas instance, retrieve data from a specified collection, and export it to Parquet files.
Implemented logic to handle various BSON data types and flatten nested documents.
Added support for detecting the vector dimension automatically if not provided.
Integrated the new MongoDB export functionality into the command-line interface.
Impact:
This change will allow users to export data from MongoDB databases to the VDF format, enabling them to leverage the VDF ecosystem for vector search, embeddings, and other machine learning tasks.
✨ Generated with love by Kaizen ❤️
Original Description
# Add MongoDB Export Functionality
Purpose:
Add support for exporting data from MongoDB databases to the VDF format.
Key Changes:
Introduced a new ExportMongoDB class that inherits from the base ExportVDB class.
Implemented methods to connect to a MongoDB database, fetch data from a specified collection, and export the data to Parquet files.
Added support for handling various BSON data types (ObjectId, Binary, Regex, Timestamp, Decimal128, Code) during the flattening process.
Integrated the new MongoDB export functionality into the command-line interface.
Impact:
Users can now export data from MongoDB databases to the VDF format, enabling seamless integration with the VDF ecosystem and downstream applications.
✨ Generated with love by Kaizen ❤️
Original Description
# Add MongoDB Export Functionality
Purpose:
Introduces a new feature to export data from MongoDB into a specified format.
Key Changes:
Added .cfg and environment-related entries to .gitignore.
Updated requirements.txt to include pymongo.
Created mongodb_export.py for handling MongoDB data exports.
Implemented argument parsing for MongoDB connection and export parameters.
Enhanced utility functions to support MongoDB-specific data handling.
Impact:
This addition allows users to seamlessly export data from MongoDB, enhancing the tool's versatility.
✨ Generated with love by Kaizen ❤️
Original Description
# Add MongoDB Export Functionality
Purpose:
Introduce functionality to export data from MongoDB to a specified format.
Key Changes:
Added .cfg and environment-related entries to .gitignore.
Updated requirements.txt to include pymongo for MongoDB support.
Implemented ExportMongoDB class for handling MongoDB data exports.
Added command-line argument parsing for MongoDB connection and export parameters.
Integrated data flattening and exporting to Parquet format.
Impact:
This enhancement allows users to seamlessly export data from MongoDB, improving data integration capabilities.
✨ Generated with love by Kaizen ❤️
Original Description
# Add MongoDB Export Functionality
Purpose:
Adds the ability to export data from a MongoDB database to the VDF format.
Key Changes:
Added a new ExportMongoDB class that inherits from the ExportVDB base class.
Implemented methods to connect to a MongoDB database, fetch data from a specified collection, and export the data to Parquet files.
Included support for handling various BSON data types (ObjectId, Binary, Regex, Timestamp, Decimal128, Code) during the flattening process.
Added a new mongodb subparser to the command-line interface to allow users to specify MongoDB connection details and export options.
Impact:
This change will enable users to export data from MongoDB databases to the VDF format, allowing for easier integration with the VDF ecosystem and downstream applications.
✨ Generated with love by Kaizen ❤️
Original Description
- [ ] export script
- [ ] import script
[!IMPORTANT]
Adds MongoDB export functionality with BSON handling and Parquet export in mongodb_export.py.
MongoDB Export Integration:
Adds ExportMongoDB class in mongodb_export.py for exporting data from MongoDB.
Implements make_parser() and export_vdb() methods for argument parsing and export logic.
Handles BSON type conversions and data flattening in flatten_dict().
Exports data to Parquet format with vector dimension detection in get_data().
Configuration:
Adds MONGODB to DBNames in names.py.
Updates db_metric_to_standard_metric in util.py to include MongoDB distance metrics.
Dependencies:
Adds pymongo to requirements.txt.
This description was created by for 6788f900fc2e64c21ba17d05d2844fab454aa712. It will automatically update as commits are pushed.
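The flattening step described above (nested documents plus BSON scalar types, exported toward Parquet) can be sketched roughly like this — a hypothetical helper for illustration, not the PR's actual `flatten_dict`; real BSON scalars such as ObjectId or Decimal128 are stood in for by the generic string-fallback branch so the sketch runs without pymongo:

```python
from typing import Any, Dict

def flatten_dict(doc: Dict[str, Any], parent: str = "", sep: str = ".") -> Dict[str, Any]:
    """Flatten nested dicts into dotted keys; coerce non-tabular values."""
    flat: Dict[str, Any] = {}
    for key, value in doc.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            # recurse into nested documents: {"a": {"b": 1}} -> {"a.b": 1}
            flat.update(flatten_dict(value, name, sep))
        elif isinstance(value, (list, tuple)):
            # keep arrays (e.g. embedding vectors) intact for a Parquet list column
            flat[name] = list(value)
        elif isinstance(value, (str, int, float, bool)) or value is None:
            flat[name] = value
        else:
            # BSON scalars (ObjectId, Timestamp, Decimal128, ...) fall back to str
            flat[name] = str(value)
    return flat
```

The flattened rows can then be collected into batches and written out with a Parquet writer, with the vector column's length giving the dimension when it is not supplied explicitly.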
Thanks for contributing to Vector-io!
Please also give a short readme or how-to for exporting data from mongo, as it is a bit harder than a normal VectorDB (connection string vs. looking up fields like admin password from the portal). Thanks.
Got your comments! Will do the needful!
@dhruv-anand-aintech
| gharchive/pull-request | 2024-10-03T16:14:17 | 2025-04-01T04:32:13.311314 | {
"authors": [
"dhruv-anand-aintech",
"vipul-maheshwari"
],
"repo": "AI-Northstar-Tech/vector-io",
"url": "https://github.com/AI-Northstar-Tech/vector-io/pull/110",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2629296813 | VALIDATE THE E-MAIL AND ADD METRICS TO MAKE THE PASSWORD STRONGER
Describe the bug
The e-mail field in the sign-up form accepts arbitrary strings, so we need to validate the e-mails.
I would like to impose a regular expression check for valid e-mails.
As the password plays a vital role in protecting one's personal information, there should be guidelines such as a minimum number of digits, etc. In the present model there is only one condition: that the password should be 6 characters long.
What would I do if I am assigned the task:
Would continue the rule that the password length should be 6 or more.
Make sure that the password contains a combination of uppercase letters, lowercase letters, digits, and special characters.
Make sure that commonly used passwords are not used.
No sequences of characters (e.g. abcd, 1234).
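The rules above can be pinned down as a small validator. This is illustrative only — the e-mail regex, the common-password list, and the sequence length are assumptions, and WordWise's form would implement the equivalent checks client-side; the sketch just makes the rules concrete:

```python
import re

# Illustrative subset; a real deployment would use a larger deny-list.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(email: str) -> bool:
    return EMAIL_RE.fullmatch(email) is not None

def has_sequence(pw: str, run: int = 4) -> bool:
    """True if pw contains `run` consecutive ascending characters (abcd, 1234)."""
    for i in range(len(pw) - run + 1):
        chunk = pw[i:i + run]
        if all(ord(chunk[j + 1]) - ord(chunk[j]) == 1 for j in range(run - 1)):
            return True
    return False

def is_strong_password(pw: str) -> bool:
    if len(pw) < 6 or pw.lower() in COMMON_PASSWORDS:
        return False
    # require at least one char from each class
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    if not all(re.search(c, pw) for c in classes):
        return False
    return not has_sequence(pw)
```

Hooking both checks into the form's submit handler would block the invalid submissions described in the reproduction steps.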
To Reproduce
Steps to reproduce the behavior:
Go to SignUp
Enter invalid email
Enter a vague password
See error: on clicking Submit, the form is submitted anyway.
Expected behavior
Invalid e-mails and weak passwords should not be accepted by the form
Additional context
Hope you assign me this issue @ANSHIKA-26, I am eager to work on it.
@ANSHIKA-26, please assign the issue so that I can start working on the PR.
Thank you
This issue had been raised previously; can you recheck whether it's already working or not? If not, kindly attach a video of the same, since the PR solving this issue was already merged. If the issue still exists, it will be assigned to you. As of now, the issue is not assigned to you, so kindly do not work on it. Thanks.
| gharchive/issue | 2024-11-01T15:39:50 | 2025-04-01T04:32:13.410235 | {
"authors": [
"ANSHIKA-26",
"sindhuja184"
],
"repo": "ANSHIKA-26/WordWise",
"url": "https://github.com/ANSHIKA-26/WordWise/issues/1610",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
249237085 | Added Graphloc API
Hi, I'm adding a new geolocation free GraphQL API I've been working on. You can check it out here: https://graphloc.com
@geraldoramos Great work 👍 and thank you for submission.
Thank you @IvanGoncharov
| gharchive/pull-request | 2017-08-10T05:29:26 | 2025-04-01T04:32:13.471301 | {
"authors": [
"IvanGoncharov",
"geraldoramos"
],
"repo": "APIs-guru/graphql-apis",
"url": "https://github.com/APIs-guru/graphql-apis/pull/15",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
920822428 | Fix type in README.md
bellow -> below
🙂
@timdotbiz Thanks a lot 👍
| gharchive/pull-request | 2021-06-14T22:27:35 | 2025-04-01T04:32:13.472442 | {
"authors": [
"IvanGoncharov",
"timdotbiz"
],
"repo": "APIs-guru/graphql-faker",
"url": "https://github.com/APIs-guru/graphql-faker/pull/151",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
785400751 | Simple way to get the distance from camera to marker
Feature request: simple way to get position, orientation, distance of marker. Where is this data?
There doesn’t appear to be a simple way to get distance from the camera to the marker.
I have found how to get the position from the Arcontroller. This object also includes line and vertex which aren’t documented clearly.
https://github.com/AR-js-org/AR.js-Docs/blob/master/docs/ui-events.md#get-distance-from-marker
| gharchive/issue | 2021-01-13T19:58:24 | 2025-04-01T04:32:13.497278 | {
"authors": [
"diskgrinder",
"nicolocarpignoli"
],
"repo": "AR-js-org/AR.js",
"url": "https://github.com/AR-js-org/AR.js/issues/207",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
284562710 | CLColorConvert and NEColorConvert output does not match for some input-output combinations
Hi,
The outputs for NEColorConvert and CLColorConvert do not match when trying to convert a YUV image to RGB space. This is observed for conversions from UYVY422/YUYV422/IYUV/NV12/NV21 to RGB888/RGBA888.
A sample code illustrating the mismatch for conversion from UYVY422 image to RGB888 is attached here. From the log, it can be seen that some bytes are 255 in NEON while the corresponding CL output is some other number. Similar mismatch happens for other combinations also.
Is this a known issue? Is there an overflow handling parameter for these classes? Kindly help to resolve this.
Thanks,
Rohit
acl_colorconvert.txt
colorConvert_CL_and_NEON_mismatch.txt
RGB888->RGBA8888 and RGBA8888->RGB888 conversions seem to be proper on both CL and NEON.
I am also facing the following issues in the ColorConvert kernels on NEON and CL.
RGB888 => YUV444:
For some input pixels, the output chroma pixels from NEON and CL mismatch.
For example, for R= 33, G = 81, B = 46, output pixels are Y = 68, U = 115, V = 105 on NEON but Y = 68, U = 117, V = 106 on CL. Note that there is a difference of 2 in U data.
Another example: R = 37, G = 126, B = 246 gives Y = 115, U = 198, V = 77 on NEON and Y = 115, U = 198, V = 79 on CL, a difference of 2 in V data.
In both above examples, CL seems to be giving the proper pixels. Could you please look into this and confirm whether there's an issue on NEON (or CL) side?
RGBA8888 => YUV444 on NEON:
Here also, on NEON side there is a difference of 2 in U/V data.
RGBA8888 => YUV444 on CL:
Here, unlike RGB888 => YUV444, the CL output is not at all proper. I am attaching both CL and NEON outputs here for different resolutions. While there is a difference of +/-2 from the expected output on the NEON side (No. 3 above), certain pixels on the CL side differ from the expected output by a large margin.
Request to look into these also and resolve the concerns.
Thanks,
Rohit
Log files:
colorConvert_CL_and_NEON_RGB_to_YUV4_mismatch_8x8.txt
colorConvert_CL_and_NEON_RGBA_to_YUV4_issue_8x8.txt
colorConvert_CL_and_NEON_RGBA_to_YUV4_issue_16x8.txt
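For what it's worth, the reported CL values are consistent with a BT.709 full-range RGB→YUV matrix, and the ±2 discrepancies look like rounding/truncation differences in the fixed-point implementations. A floating-point reference for cross-checking both backends — the BT.709 assumption should be verified against the actual kernels:

```python
def rgb_to_yuv_bt709(r: int, g: int, b: int):
    """Floating-point BT.709 full-range reference (rounding left explicit)."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    u = (b - y) / 1.8556 + 128.0   # Cb
    v = (r - y) / 1.5748 + 128.0   # Cr
    return y, u, v
```

For R=33, G=81, B=46 this gives Y≈68.3, U≈116.0, V≈105.6 — the NEON (115/105) and CL (117/106) chroma values sit on either side of the unrounded reference, which is why a different rounding mode on each backend can produce a difference of up to 2.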
Hi Rohit,
I've run the code provided on different devices with different configurations and I was not able to reproduce the issue.
In order to help you further with this issue, can you please provide the Scons build parameters and the device (OS, Arch) you are using to test? Also, are you using the latest ACL version (17.12)? If you are running on an Android device, can you provide the NDK revision you've used?
Kind regards,
Chris
Hi Chris,
We are not using Scons to build the compute library. We are using a proprietary tool chain to build the compute library on an Ubuntu machine. We are using ACL version 17.10 currently. Are there any fixes for color convert kernels in 17.12? We are also not running on Android device but a Linux device. We can't share more details about the device as of now.
Please let me know if you need more information from our side.
Thanks,
Rohit
Hi Rohit,
I've tested your code on v17.12 on multiple 64-bit Linux platforms and I was not able to reproduce the issue. In fact, I do not see any mismatches between NEON and CL outputs.
Also, it's not clear to me how you can test cases 2), 3) and 4) given that both NEON and CL ColorConvert kernels do not support those conversions. How do you run those?
Another question would be, what version of the OpenCL library are you using?
Hi @mdigiorgio
Thank you for the response.
According to the available documentation, void configure(const IImage * input, IMultiImage * output) function in NEColorConvert supports RGB888/RGBA8888 => YUV444 on neon (cases 2 and 3).
Similarly void configure(const ICLImage *input, ICLMultiImage * output) in CLColorConvert supports RGBA8888 => YUV444 conversion (case 4).
We will get back with the OpenCL version.
Hi @mdigiorgio the OpenCL version we are using is 1.2
Hi @manjunathahg,
Thanks for the information. Two points:
in the code you have provided, you are using format UYVY422, not YUV444.
The configure methods you mentioned are not the ones that get called in your examples. In fact, in the example you declare srcImage and dstImage as Image / CLImage objects, whereas the ones you mentioned are called when you use an IMultiImage / ICLMultiImage object as destination. This makes sense because you want a conversion to a planar format, so the output should be a multi planar image.
Hope this helps.
Closing, reopen if needed
| gharchive/issue | 2017-12-26T14:22:59 | 2025-04-01T04:32:13.517342 | {
"authors": [
"cristian-szabo-arm",
"manjunathahg",
"mdigiorgio",
"mpekatsoula",
"rohit-unnimadhavan"
],
"repo": "ARM-software/ComputeLibrary",
"url": "https://github.com/ARM-software/ComputeLibrary/issues/321",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
325882509 | Update example instructions
Instructions updated to include the full sequence needed with command syntax to connect to Mbed Cloud
cc: @szysas @KalleVayrynen @ansondtx20 @bridadan
Nice work @dlfryar!
| gharchive/pull-request | 2018-05-23T21:37:00 | 2025-04-01T04:32:13.536290 | {
"authors": [
"bridadan",
"dlfryar"
],
"repo": "ARMmbed/java-coap",
"url": "https://github.com/ARMmbed/java-coap/pull/19",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
124715129 | PolarSSL 1.2 does not build with Visual Studio 2010
Recent versions of PolarSSL 1.2 (1.2.18 and earlier) do not build under Visual Studio 2010, although they do build on later versions of Visual Studio - 2013 and 2015.
Other later versions of mbed TLS are unaffected.
ARM Internal Ref: IOTSSL-590
Unless I'm mistaken, you fixed that in 1.2.19, so I'm closing that bug now.
| gharchive/issue | 2016-01-04T08:30:29 | 2025-04-01T04:32:13.673127 | {
"authors": [
"ciarmcom",
"mpg",
"sbutcher-arm"
],
"repo": "ARMmbed/mbedtls",
"url": "https://github.com/ARMmbed/mbedtls/issues/385",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
378639111 | PSA: Phase 1: Test using opaque private ECDSA key in TLS client
Description
Task description
Test usage of PK_OPAQUE_PSA in ssl_set_own_cert()
adapt ssl_client/server2.c
might need internal adaptations in TLS to work
add test using that option to ssl-opt.sh
make sure it's run in all.sh
Scope restriction: the ECDSA part of the ECDHE-ECDSA key exchange, TLS 1.2
Execution of the task
did only ssl_client2.c as doing server too would require some code duplication while waiting for #1272, and we care mostly about client side so far.
turned out no internal adaptation in the TLS module was needed
test added to ssl-opt.sh as planned
making sure the added test (in fact, all available tests) is run in all.sh in a PSA-enabled build is part of the base PR so nothing more's needed here
Note: This branch is based on #2162 - only new commits compare to that PR need reviewing here.
Status
READY - though it will need rebasing before merge.
Requires Backporting
NO
Migrations
NO
Steps to test or reproduce
build with MBEDTLS_USE_PSA_CRYPTO enabled in config.h (as well as its dependencies)
run tests/ssl-opt.sh -f Opaque
cc: @gilles-peskine-arm - I'm not requesting a formal review from you, but if you feel like checking how we're using your APIs, feel free to have a look at a92c360 and let us know if you see any problem.
@hanno-arm Thanks for your review! I believe I've addressed your feedback, please review again!
@hanno-arm @AndrzejKurek I've rebased over the lastest #2162 (rebased on #2184) so that I could eliminated redundancies with #2184. I've also simplified the history while at it. The previous history is still available at: https://github.com/mpg/mbedtls/tree/iotssl-2574-pk-opaque-tls-0 Please review again.
@hanno-arm Thanks for your review! I believe I've addressed your feedback. Please review again.
Note: I did not rebase over the most recent #2162 because the recent changes in that branch do not affect anything here, so it wouldn't bring any value now, and everything will need rebasing later anyway.
@hanno-arm @AndrzejKurek I rebased on #2162 (itself rebased on recent #2206), used the utility function that I recently moved from here to #2162, and cleaned up the history. Please review again.
A local run of all.sh -k -r passed on cc93b01 except for the following independent issues:
Mbed Crypto exported: will be removed soon
test that triggers a GnuTLS bug: addressed by #2209
@mpg I took the liberty to rebase this on top of the current psa-integration-utilities and changed the target branch to development-psa.
| gharchive/pull-request | 2018-11-08T09:11:18 | 2025-04-01T04:32:13.681889 | {
"authors": [
"hanno-arm",
"mpg"
],
"repo": "ARMmbed/mbedtls",
"url": "https://github.com/ARMmbed/mbedtls/pull/2183",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
571269935 | Handshake tests refactor
This PR contains a refactor of SSL tests, to simplify adding new tests and to reduce code duplication, with the future aim being a single handshake function testing buffer resizing in all cases.
This goal is reached by passing a structure with options to the handshake function. These have a default value, so that specialised tests only have to overwrite a couple of them in order to work.
Furthermore, a polymorphic-like mechanism has been introduced to the test cases, so that one calls another, from specialised to more general ones. Thanks to that, dependencies and default values can be provided all on appropriate layers. Also, the .data files are cleaner thanks to that.
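The options-struct-with-defaults idea can be sketched like this — field names are invented for illustration; the real options live in a C struct used by the SSL handshake test helper:

```python
from dataclasses import dataclass, replace

# Illustrative field names only - not the PR's actual struct layout.
@dataclass
class HandshakeOptions:
    client_min_version: str = "tls12"
    server_max_version: str = "tls12"
    app_bytes_1: int = 256        # app data sent client -> server
    app_bytes_2: int = 256        # app data sent server -> client
    resize_buffers: bool = True   # exercise buffer resizing by default

def specialised_test_options() -> HandshakeOptions:
    # A specialised test overrides only the fields it cares about
    # (e.g. exchange no app data); everything else keeps its default,
    # so adding a new test does not duplicate the whole setup.
    return replace(HandshakeOptions(), app_bytes_1=0, app_bytes_2=0)
```

The "polymorphic-like" layering then amounts to each specialised test building its options on top of a more general test's options, so dependencies and defaults live at the appropriate layer.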
I'm currently investigating CI issues.
The only failure was due to the 1n-1 split happening also in the SSLv3 test.
I changed the number of bytes exchanged between server and client to 0 both in the first commit, merging app data (in the .data file), and in the last, refactor commit, by adding a || version == MBEDTLS_SSL_MINOR_VERSION_0 to the if.
I also updated the raised issue.
| gharchive/pull-request | 2020-02-26T11:33:54 | 2025-04-01T04:32:13.684370 | {
"authors": [
"AndrzejKurek"
],
"repo": "ARMmbed/mbedtls",
"url": "https://github.com/ARMmbed/mbedtls/pull/3069",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1985183132 | 🛑 Weather App API is down
In 969ef27, Weather App API (https://weather-app-backend-y96o.onrender.com/api/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Weather App API is back up in f1692b2 after 10 minutes.
| gharchive/issue | 2023-11-09T09:15:24 | 2025-04-01T04:32:13.706512 | {
"authors": [
"ASJordi"
],
"repo": "ASJordi/website-activity-status",
"url": "https://github.com/ASJordi/website-activity-status/issues/530",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2045456533 | 🛑 URL Shortener API is down
In f4dfd3d, URL Shortener API (https://api-url-zgau.onrender.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: URL Shortener API is back up in 936ca98 after 1 hour, 6 minutes.
| gharchive/issue | 2023-12-18T00:37:30 | 2025-04-01T04:32:13.709175 | {
"authors": [
"ASJordi"
],
"repo": "ASJordi/website-activity-status",
"url": "https://github.com/ASJordi/website-activity-status/issues/714",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2087365489 | 🛑 API Words is down
In 9b74f83, API Words (https://simple-api-words.onrender.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: API Words is back up in 0d273ca after 28 minutes.
| gharchive/issue | 2024-01-18T02:40:52 | 2025-04-01T04:32:13.711608 | {
"authors": [
"ASJordi"
],
"repo": "ASJordi/website-activity-status",
"url": "https://github.com/ASJordi/website-activity-status/issues/917",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2091964971 | 🛑 Weather App API is down
In ceca08c, Weather App API (https://weather-app-backend-y96o.onrender.com/api/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Weather App API is back up in a5a8d9b after 21 minutes.
| gharchive/issue | 2024-01-20T08:31:25 | 2025-04-01T04:32:13.714276 | {
"authors": [
"ASJordi"
],
"repo": "ASJordi/website-activity-status",
"url": "https://github.com/ASJordi/website-activity-status/issues/942",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
514882798 | Enhance charts further
[x] Improve on the implementation of ChartBox
[x] Add average expense line to expense timeline
[x] Create new chart type for expense timeline
[x] Add tag-view option for expense timeline
[x] Add HeatMap chart type
Average expense line functionality deferred to v2.0
Tag-view option for expense timeline implemented as fully fledged option for all chart types.
| gharchive/issue | 2019-10-30T18:17:21 | 2025-04-01T04:32:13.861042 | {
"authors": [
"weiijiie"
],
"repo": "AY1920S1-CS2103T-F12-4/main",
"url": "https://github.com/AY1920S1-CS2103T-F12-4/main/issues/110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
507034236 | Exchange class to handle Currency conversion
Make a utility class to handle currency conversion for the convert command.
Will extend CurrencyUtil to handle non-default to non-default currency conversion.
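As a rough illustration of the routing idea (the class name, rates, and method signature below are invented for this sketch, not the actual CurrencyUtil API), non-default to non-default conversion can go through the default currency:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: rates are expressed relative to a default base currency.
public class ExchangeSketch {
    private static final Map<String, Double> RATE_TO_BASE = new HashMap<>();
    static {
        RATE_TO_BASE.put("SGD", 1.0);   // default currency
        RATE_TO_BASE.put("USD", 1.35);  // 1 USD = 1.35 SGD (illustrative rate)
        RATE_TO_BASE.put("EUR", 1.50);  // 1 EUR = 1.50 SGD (illustrative rate)
    }

    // Convert via the default currency: amount -> base -> target.
    public static double convert(double amount, String from, String to) {
        double inBase = amount * RATE_TO_BASE.get(from);
        return inBase / RATE_TO_BASE.get(to);
    }

    public static void main(String[] args) {
        // 100 USD -> 135 SGD -> roughly 90 EUR
        System.out.println(convert(100, "USD", "EUR"));
    }
}
```

With this shape, only the rates relative to the default currency need to be stored; any pair of currencies is then covered by one multiplication and one division.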
| gharchive/issue | 2019-10-15T06:28:45 | 2025-04-01T04:32:13.862085 | {
"authors": [
"krusagiz"
],
"repo": "AY1920S1-CS2103T-W12-2/main",
"url": "https://github.com/AY1920S1-CS2103T-W12-2/main/issues/130",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
723743604 | chore: Update Add command help text
Closes #64
Codecov Report
Merging #65 into master will not change coverage.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## master #65 +/- ##
=========================================
Coverage 72.30% 72.30%
Complexity 410 410
=========================================
Files 72 72
Lines 1278 1278
Branches 130 130
=========================================
Hits 924 924
Misses 316 316
Partials 38 38
| Impacted Files | Coverage Δ | Complexity Δ |
|---|---|---|
| .../java/seedu/address/logic/commands/AddCommand.java | 100.00% <100.00%> (ø) | 8.00 <1.00> (ø) |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e89cd1a...effedc5. Read the comment docs.
| gharchive/pull-request | 2020-10-17T12:10:35 | 2025-04-01T04:32:13.869936 | {
"authors": [
"chrisgzf",
"codecov-io"
],
"repo": "AY2021S1-CS2103T-F11-1/tp",
"url": "https://github.com/AY2021S1-CS2103T-F11-1/tp/pull/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
723762263 | Add delete attendanceRecord command
Resolves #138
Codecov Report
Merging #152 into master will increase coverage by 0.10%.
The diff coverage is 84.41%.
@@ Coverage Diff @@
## master #152 +/- ##
============================================
+ Coverage 80.57% 80.68% +0.10%
- Complexity 920 934 +14
============================================
Files 136 138 +2
Lines 2693 2770 +77
Branches 309 322 +13
============================================
+ Hits 2170 2235 +65
Misses 440 440
- Partials 83 95 +12
| Impacted Files | Coverage Δ | Complexity Δ |
|---|---|---|
| ...ddress/logic/commands/DeleteAttendanceCommand.java | 81.35% <81.35%> (ø) | 10.00 <10.00> (?) |
| ...ss/logic/parser/DeleteAttendanceCommandParser.java | 94.11% <94.11%> (ø) | 3.00 <3.00> (?) |
| ...va/seedu/address/logic/parser/TutorsPetParser.java | 100.00% <100.00%> (ø) | 30.00 <0.00> (+1.00) |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ee93a4f...0d9ec4d. Read the comment docs.
| gharchive/pull-request | 2020-10-17T13:45:56 | 2025-04-01T04:32:13.878569 | {
"authors": [
"codecov-io",
"junlong4321"
],
"repo": "AY2021S1-CS2103T-T10-4/tp",
"url": "https://github.com/AY2021S1-CS2103T-T10-4/tp/pull/152",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1006931703 | Update DG.md
Add additional Use Cases, NFRs
Codecov Report
Merging #36 (aef6055) into master (dc2fc36) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #36 +/- ##
=========================================
Coverage 70.91% 70.91%
Complexity 442 442
=========================================
Files 82 82
Lines 1389 1389
Branches 142 142
=========================================
Hits 985 985
Misses 356 356
Partials 48 48
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update dc2fc36...aef6055. Read the comment docs.
| gharchive/pull-request | 2021-09-25T02:16:26 | 2025-04-01T04:32:13.905002 | {
"authors": [
"Tanishq4331",
"codecov-commenter"
],
"repo": "AY2122S1-CS2103T-W08-3/tp",
"url": "https://github.com/AY2122S1-CS2103T-W08-3/tp/pull/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1042171598 | Update DG
Part of https://github.com/AY2122S1-CS2103T-W16-2/tp/issues/206.
Preview here
Codecov Report
Merging #248 (19814b0) into master (6217ca6) will increase coverage by 0.02%.
The diff coverage is 80.00%.
@@ Coverage Diff @@
## master #248 +/- ##
============================================
+ Coverage 77.48% 77.50% +0.02%
Complexity 1149 1149
============================================
Files 139 139
Lines 3344 3348 +4
Branches 374 374
============================================
+ Hits 2591 2595 +4
Misses 657 657
Partials 96 96
| Impacted Files | Coverage Δ |
|---|---|
| ...22s1_cs2103t_w16_2/btbb/commons/core/Messages.java | 0.00% <ø> (ø) |
| ..._2/btbb/logic/commands/order/DoneOrderCommand.java | 91.30% <ø> (ø) |
| .../btbb/logic/commands/order/UndoneOrderCommand.java | 91.30% <ø> (ø) |
| ...122s1_cs2103t_w16_2/btbb/model/order/Deadline.java | 91.66% <ø> (ø) |
| ...tbb/model/predicate/ValueWithinRangePredicate.java | 92.85% <ø> (ø) |
| ...ava/ay2122s1_cs2103t_w16_2/btbb/ui/MainWindow.java | 0.00% <ø> (ø) |
| ...16_2/btbb/model/predicate/PredicateCollection.java | 91.17% <60.00%> (ø) |
| ...6_2/btbb/logic/commands/order/AddOrderCommand.java | 100.00% <100.00%> (ø) |
... and 1 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 7a71ca7...19814b0. Read the comment docs.
| gharchive/pull-request | 2021-11-02T10:53:21 | 2025-04-01T04:32:13.920946 | {
"authors": [
"codecov-commenter",
"sivayogasubramanian"
],
"repo": "AY2122S1-CS2103T-W16-2/tp",
"url": "https://github.com/AY2122S1-CS2103T-W16-2/tp/pull/248",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1179356301 | Update DG for list command
Update list command in DG
Closes #114
@LeeEnEn hi en en! might wanna fix these first to let the checks run. cya in a bit!
@domlimm failed cause of missing new line at eof 😠
I have fixed it.
| gharchive/pull-request | 2022-03-24T11:05:46 | 2025-04-01T04:32:13.925660 | {
"authors": [
"LeeEnEn",
"domlimm"
],
"repo": "AY2122S2-CS2103-F11-2/tp",
"url": "https://github.com/AY2122S2-CS2103-F11-2/tp/pull/134",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1419784758 | Improve regex to prevent bugs or malicious use
[ ] findPrefixPosition() checks for whitespaces
[ ] give users a way to escape slashes, if title contains a slash for example
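One way the slash-escaping item could work (purely a sketch; `findPrefixPosition` itself is not shown, and the placeholder trick is an assumption, not the project's design): mask backslash-escaped slashes before prefix parsing and restore them afterwards.

```java
public class EscapeSlashSketch {
    // A character that cannot appear in normal user input, used as a stand-in.
    private static final String PLACEHOLDER = "\u0000";

    // Mask "\/" so prefix parsing only sees real delimiters as bare slashes.
    public static String protectEscapedSlashes(String input) {
        return input.replace("\\/", PLACEHOLDER);
    }

    // After parsing, turn the masked characters back into literal slashes.
    public static String restoreSlashes(String parsedField) {
        return parsedField.replace(PLACEHOLDER, "/");
    }

    public static void main(String[] args) {
        String masked = protectEscapedSlashes("Watch 24\\/7 stream");
        System.out.println(masked.contains("/"));   // false
        System.out.println(restoreSlashes(masked)); // Watch 24/7 stream
    }
}
```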
Pushed back to v1.5
| gharchive/issue | 2022-10-23T12:50:29 | 2025-04-01T04:32:13.941233 | {
"authors": [
"parth-io"
],
"repo": "AY2223S1-CS2103-F14-4/tp",
"url": "https://github.com/AY2223S1-CS2103-F14-4/tp/issues/148",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1410803524 | Update package name in test to uninurse
Update package name in test as it was missed out on the previous commit
Codecov Report
Base: 71.15% // Head: 71.15% // No change to project coverage :thumbsup:
Coverage data is based on head (cb3d8ca) compared to base (a098224).
Patch has no changes to coverable lines.
Additional details and impacted files
@@ Coverage Diff @@
## master #177 +/- ##
=========================================
Coverage 71.15% 71.15%
Complexity 491 491
=========================================
Files 91 91
Lines 1605 1605
Branches 178 178
=========================================
Hits 1142 1142
Misses 413 413
Partials 50 50
Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here.
:umbrella: View full report at Codecov.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
| gharchive/pull-request | 2022-10-17T03:00:42 | 2025-04-01T04:32:13.946344 | {
"authors": [
"BlopApple",
"codecov-commenter"
],
"repo": "AY2223S1-CS2103T-T12-4/tp",
"url": "https://github.com/AY2223S1-CS2103T-T12-4/tp/pull/177",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1637047002 | DG use cases need to cut down similar ones.
E.g. list vehicles and list customers.
Removing the duplicates will shorten it.
Good issue!
| gharchive/issue | 2023-03-23T08:13:28 | 2025-04-01T04:32:13.950054 | {
"authors": [
"9fc70c892",
"junlee1991"
],
"repo": "AY2223S2-CS2103-W17-4/tp",
"url": "https://github.com/AY2223S2-CS2103-W17-4/tp/issues/87",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1649694739 | [PE-D][Tester D] Invalid Phone number can still be added
Assuming a Singapore context, each phone number can only have exactly 8 digits, but when I keyed in 7 digits it worked. I put low severity since this problem can be easily rectified.
Labels: severity.Low type.FunctionalityBug
original: sembcorpp/ped#1
Can confirm that it is a bug.
Change validation to 8 digits starting with 6, 8 and 9.
Maybe we should also update in the UG that our product is targeted towards Singaporeans ( if we are going to limit the phone number like such )
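The tightened check described above could be a one-line regex change; a sketch is below (the `isValidPhone` method name mirrors the usual AB3 convention but is an assumption, not the project's actual code):

```java
public class PhoneValidationSketch {
    // Exactly 8 digits, starting with 6, 8 or 9 (Singapore landline/mobile prefixes).
    private static final String VALIDATION_REGEX = "[689]\\d{7}";

    public static boolean isValidPhone(String test) {
        return test.matches(VALIDATION_REGEX);
    }

    public static void main(String[] args) {
        System.out.println(isValidPhone("91234567")); // true
        System.out.println(isValidPhone("1234567"));  // false: only 7 digits
        System.out.println(isValidPhone("71234567")); // false: invalid prefix
    }
}
```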
| gharchive/issue | 2023-03-31T16:47:06 | 2025-04-01T04:32:13.952469 | {
"authors": [
"Jarrett0203",
"glyfy",
"nus-se-script"
],
"repo": "AY2223S2-CS2103T-T12-3/tp",
"url": "https://github.com/AY2223S2-CS2103T-T12-3/tp/issues/118",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1649796211 | [PE-D][Tester C] Invalid restrictions imposed on IC
Invalid restrictions imposed on IC
The image below illustrates the restrictions of the IC that were imposed by the developers.
However, this limits the usage of the application purely to people with identification numbers that follow this pattern.
There exists a group of people, for instance certain foreigners, with identification numbers that might not follow this pattern.
In cases like this, the GP clinic will not be able to add them to their list of patients.
Effects
Functionality of the product is compromised, as it can no longer serve as an effective patient information management system.
Hence the severity has been labelled as High.
Labels: type.FunctionalityBug severity.High
original: hongshenggg/ped#4
Thank you for pointing this out. Based on our research, every foreigner who works or studies in SG will have a FIN, and according to Wikipedia: "The structure of the NRIC number/FIN is @xxxxxxx#, where @ and # are letters." So our system allows adding a foreigner's FIN, since FINs also have 7 digits.
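Following that @xxxxxxx# structure, a simple format check covering both NRIC and FIN prefixes could look like this (the prefix set [STFGM] is an assumption, and a real validator would also verify the checksum letter):

```java
public class IcValidationSketch {
    // One uppercase prefix letter, exactly 7 digits, one uppercase suffix letter.
    // S/T prefixes are NRIC series; F/G/M prefixes are FIN series (assumed set).
    private static final String IC_REGEX = "[STFGM]\\d{7}[A-Z]";

    public static boolean isValidIc(String test) {
        return test.toUpperCase().matches(IC_REGEX);
    }

    public static void main(String[] args) {
        System.out.println(isValidIc("S1234567A")); // NRIC: true
        System.out.println(isValidIc("G7654321K")); // FIN:  true
        System.out.println(isValidIc("A1234567B")); // unknown prefix: false
    }
}
```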
| gharchive/issue | 2023-03-31T18:01:01 | 2025-04-01T04:32:13.956259 | {
"authors": [
"Jiayan-Lim",
"nus-se-script"
],
"repo": "AY2223S2-CS2103T-W09-3/tp",
"url": "https://github.com/AY2223S2-CS2103T-W09-3/tp/issues/244",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2628239461 | [JunBoon] jobCompany Integration app fails to start when JSON is modified such that a job has a non-existing company
Perhaps a more graceful way is to assume that the JSON file is corrupted and start with a fresh JSON file?
Alright, I can just catch the CompanyNotFound Exception when loading the address book and return a fresh one if the address book was corrupted.
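A minimal sketch of that catch-and-reset idea (the `Loader` interface and the string return values are stand-ins for the real storage and address book types, which aren't shown in this thread):

```java
// Sketch of the fallback described above: if the saved data references a
// company that no longer exists, treat the file as corrupted and start fresh.
public class StorageFallbackSketch {

    public static class CompanyNotFoundException extends RuntimeException {}

    // Stand-in for the real JSON-deserialisation step.
    public interface Loader {
        String load() throws CompanyNotFoundException;
    }

    public static String loadOrFresh(Loader loader) {
        try {
            return loader.load();
        } catch (CompanyNotFoundException e) {
            return "fresh address book"; // corrupted data: fall back to empty book
        }
    }

    public static void main(String[] args) {
        System.out.println(loadOrFresh(() -> "saved address book"));
        System.out.println(loadOrFresh(() -> { throw new CompanyNotFoundException(); }));
    }
}
```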
| gharchive/issue | 2024-11-01T03:39:27 | 2025-04-01T04:32:13.957792 | {
"authors": [
"Green-Tea-123",
"KengHian"
],
"repo": "AY2425S1-CS2103-F13-4/tp",
"url": "https://github.com/AY2425S1-CS2103-F13-4/tp/issues/189",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2645269790 | [PE-D][Tester A] Target User reasoning not specified
from the developer guide, your target user is specifically Prudential Insurance Agents
here are some of my concerns:
What issues are prudential agents facing?
Are there any specialized features that only prudential agents can make use of, and not other insurance agents?
It would be nice if these were stated in the DG.
Labels: severity.Medium type.DocumentationBug
original: Jaynon/ped#13
add in DG
| gharchive/issue | 2024-11-08T22:17:27 | 2025-04-01T04:32:13.960155 | {
"authors": [
"Justincjr",
"nus-se-script"
],
"repo": "AY2425S1-CS2103T-T14-1/tp",
"url": "https://github.com/AY2425S1-CS2103T-T14-1/tp/issues/312",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2644729295 | [PE-D][Tester A] Wrong naming of application in user guide
I believe AddressBook should be renamed to TrackMate here instead.
Labels: severity.Low type.DocumentationBug
original: Incogdino/ped#6
Will update AddressBook data to TrackMate data.
| gharchive/issue | 2024-11-08T17:42:21 | 2025-04-01T04:32:13.961904 | {
"authors": [
"Fui03",
"nus-se-script"
],
"repo": "AY2425S1-CS2103T-W08-1/tp",
"url": "https://github.com/AY2425S1-CS2103T-W08-1/tp/issues/301",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1468158735 | fix FP exception flags handling with softfloat
Fix FP Exception flags handling with softfloat package.
This patch can fix #160
This implementation was integrated when merging NP21/W.
Please confirm.
| gharchive/pull-request | 2022-11-29T14:27:16 | 2025-04-01T04:32:13.963974 | {
"authors": [
"AZO234",
"amuramatsu"
],
"repo": "AZO234/NP2kai",
"url": "https://github.com/AZO234/NP2kai/pull/161",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
865277753 | Cache responses from the A+ API
Description of the PR
This adds caching for A+ responses. It also makes use of the cache by downloading all submission results from the API endpoint of an individual submission, which fixes the issue of incorrect statuses in submissions. The cache file can get relatively large, so there's some logic for doing IO in a background thread and combining multiple file writes when there's a burst of insertions into the cache.
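The write-coalescing part of that description can be sketched with a dirty flag plus a periodic background flusher (the names, and the idea of serialising the whole map per flush, are illustrative assumptions rather than the plugin's actual code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch: bursts of put() calls set a dirty flag; one background
// thread flushes at most once per interval, merging many writes into one.
public class CoalescingCacheSketch {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final AtomicBoolean dirty = new AtomicBoolean(false);

    public void put(String url, String response) {
        cache.put(url, response);
        dirty.set(true); // no file IO on the caller's thread
    }

    // Called periodically from a background thread.
    public boolean flushIfDirty() {
        if (dirty.getAndSet(false)) {
            writeToDisk(); // one write covers every insertion since the last flush
            return true;
        }
        return false;
    }

    private void writeToDisk() {
        // serialize `cache` to the JSON file here
    }

    public static void main(String[] args) {
        CoalescingCacheSketch c = new CoalescingCacheSketch();
        c.put("https://example/api/1", "{}");
        c.put("https://example/api/2", "{}"); // burst of two insertions...
        System.out.println(c.flushIfDirty()); // ...covered by one write: true
        System.out.println(c.flushIfDirty()); // nothing new since then: false
    }
}
```

The key property is that a burst of N insertions costs at most one disk write per flush interval instead of N.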
Beware in testing:
Loading the exercises initially will take some time, since it's doing quite a few requests when no cache exists yet. There is no indication of progress, so it might appear as if the exercises are simply broken. As such, I would also consider this blocked by #609.
Testing
unit
integration
e2e
manual
- [x] new unit tests created
- [x] all unit tests pass
- [ ] new integration tests created
- [x] all integration tests pass
- [ ] new e2e tests created
- [x] all e2e tests pass
- [x] manual testing went well
Have you updated the TESTING.md or other relevant documentation on your branch?
[x] Yes.
[x] Not yet. I will do it next.
[x] Not relevant.
Do you think it would be possible to show the assignment tree first with "stupid" proxy objects constructed from the data got from the "get all exercises" API call, and then "realize" them with the actual data when it's there?
Good idea! Will do
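The "stupid proxy, realized later" idea could look roughly like this (a hypothetical class; the real tree nodes and A+ API types are not shown here):

```java
// Hypothetical sketch: the tree shows name/id from the cheap list call
// immediately, and details are filled in once the per-exercise API
// response arrives.
public class ExerciseProxySketch {
    private final long id;
    private final String name;       // cheap: from the "get all exercises" call
    private volatile String details; // expensive: null until realized

    public ExerciseProxySketch(long id, String name) {
        this.id = id;
        this.name = name;
    }

    public long getId() {
        return id;
    }

    public boolean isRealized() {
        return details != null;
    }

    // Called once the full per-exercise response has been fetched (and cached).
    public void realize(String fullDetails) {
        this.details = fullDetails;
    }

    // The assignment tree can render immediately with a placeholder.
    public String display() {
        return isRealized() ? name + ": " + details : name + " (loading...)";
    }

    public static void main(String[] args) {
        ExerciseProxySketch ex = new ExerciseProxySketch(42, "Exercise 1");
        System.out.println(ex.display()); // Exercise 1 (loading...)
        ex.realize("10/10 points");
        System.out.println(ex.display()); // Exercise 1: 10/10 points
    }
}
```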
Just a minor issue, but crossed my mind. There's .gitignore file in each project's .idea folder. Should a-plus-cache.json be added there?
Yeah that would be good. It will work fine when switching machines, but it will generate some pointless diffs though. Is there some IntelliJ API for that or shall I just append a line to it?
I don't know, but...
Actually, could or should that file being stored somewhere else like where IntelliJ itself stores caches/settings? For example, it seems there's a folder named sonarlint in %APPDATA%\JetBrains\IdeaIC2021.1 folder (so called system directory of IntelliJ IDEA), and there are data used by SonarLint plugin.
Even though cached server responses are kind of related to a certain project, on the other hand, they are not, if you got what I mean.
That's definitely a possibility assuming there are no permission issues with writing there (which there shouldn't be).
It's true that it's kinda not project related. After all, the same URL should yield the same result regardless of which course a project is using. If that approach is taken, should the code also then have a singleton class available from every project to reflect that?
Possibly... I'll consider that when reviewing your code. 👍
| gharchive/pull-request | 2021-04-22T18:13:38 | 2025-04-01T04:32:13.980186 | {
"authors": [
"OlliKiljunen",
"nikke234"
],
"repo": "Aalto-LeTech/intellij-plugin",
"url": "https://github.com/Aalto-LeTech/intellij-plugin/pull/615",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2083818379 | refactor: tiny misc.
Current dependencies on/for this PR:
main
PR #704 👈
PR #705
This stack of pull requests is managed by Graphite.
| gharchive/pull-request | 2024-01-16T12:17:16 | 2025-04-01T04:32:13.983423 | {
"authors": [
"MartinBernstorff"
],
"repo": "Aarhus-Psychiatry-Research/psycop-common",
"url": "https://github.com/Aarhus-Psychiatry-Research/psycop-common/pull/704",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1592244397 | Assembly resolution differences
I have a legacy Delphi 6 application, which is extended by some .NET Framework 4.8 modules using the NuGet package DllExport. That project basically works the same as this one.
I'm trying to move these .NET modules to .NET 7, using DNNE. I'm really, really close, but I'm running into behavioral differences in Assembly.Load.
The .NET modules use DevExpress with WPF for the UI. At some point, WPF tries to load a given fully qualified type. This fails in code running under DNNE, but succeeds when running in a standalone .NET 7 project. I recompiled WPF; after putting a breakpoint on the spot where it fails and faking the load using the debugger, everything works like a charm. This suggests to me that it's not a problem of missing dependencies.
Are there any known differences in the host this project makes and the default executable host?
I'm not sure if the above makes sense or not. Please let me know what I can do to get this issue diagnosed.
> I'm not sure if the above makes sense or not. Please let me know what I can do to get this issue diagnosed.
I get the fact that there are behavioral differences, but the precise issue being faced is difficult for me to fully understand. My first instinct would be to use fuslogvw.exe to help narrow down how they are being loaded for .NET Framework and then use dotnet-trace to understand what is going on in .NET 7.
> Are there any known differences in the host this project makes and the default executable host?
I assume this is referring to a process where .NET 7 is hosted. Yes, DNNE loads the assembly into its own Assembly Load Context. This is the current design of the .NET hosting API, although a new option is being added in .NET 8.
The differences I mean are the settings between a "normal" .NET 7 WPF application and the situation where the Delphi 6 process loads .NET 7 with DNNE.
Ah. Yes, there are fundamental differences between the two. The first is that I don't think the WPF workload is going to be included in the TPA, so referencing any WPF assemblies is going to be tough without using a custom AssemblyLoadContext to help find the WPF assemblies. This will also need to help find Devexpress assemblies unless they are adjacent to the exporting managed assembly.
The dotnet-trace tool mentioned above will help sort out where assemblies are being looked for and what the custom AssemblyLoadContext will need to do in order to find the other assemblies.
I will do further digging with dotnet-trace, although it seems all DLLs are loaded fine.
But can you give me any pointers on where to start looking for host configuration differences?
@mterwoord I'm relatively sure the logic I mentioned above is the culprit - the lack of WPF assemblies in the TPA. The entry point for coreclr initialization is here, but starts in DNNE from here. For a normal WPF application many more properties are filled out by reading the .runtimeconfig.json file and passing it down.
Were you able to collect a trace? Stepping through the apphost start-up for a WPF application is interesting as an academic exercise, but I'm not convinced that is worth the effort. Please let me know if you were having trouble collecting the traces. Recall that DNNE loads its assembly in a separate ALC, so handling ALCs for DNNE is likely to be a requirement regardless of how the runtime is initialized.
Perhaps @elinor-fung has another avenue of investigation or thoughts.
the lack of WPF assemblies in the TPA
If your library references the .NET Desktop SDK, the generated .runtimeconfig.json should include the corresponding framework (Microsoft.WindowsDesktop.App) such that WPF assemblies will be in the TPA.
The trace would be helpful to see what is actually going on. Like @AaronRobinsonMSFT, I expect this is ALC-related - even though the DLLs are loaded fine, they may not be in the ALC your code is expecting.
Other thoughts:
Does loading DevExpress.Xpf.Core.WpfSvgPalette explicitly work?
AssemblyLoadContext.GetLoadContext tells you which ALC contains the specified Assembly instance, so you can check if your types are coming from different ALCs
Reflection APIs like the two in your repro infer the active ALC when resolving types and assemblies. You can explicitly set the reflection context that will be used.
Did some digging.
The .runtimeconfig.json references the Desktop SDK.
What I found out before, and which is confirmed by a dotnet-trace run, is that my code (a method exported by DNNE) starts off in an "IsolatedComponentLoadContext", and somehow, at some point, it's also utilizing the Default context. The situation where I get issues shows that the code is somehow running from the Default context instead of the IsolatedComponentLoadContext.
Is there any way to get stack traces of the loader events i'm looking for? Ie, find out where the transition is being made?
Other option would be to somehow get rid of the IsolatedComponentLoadContext and only have 1 ALC, which is also the default one.
I did some further digging. It seems that some DevExpress module initializer triggers code in the Default ALC.
I now tricked my main entrypoint into giving control to the Default ALC, and then everything works. I will close this ticket now. I'm happy to give more insights if needed/wanted by anyone.
| gharchive/issue | 2023-02-20T17:37:17 | 2025-04-01T04:32:14.006480 | {
"authors": [
"AaronRobinsonMSFT",
"elinor-fung",
"mterwoord"
],
"repo": "AaronRobinsonMSFT/DNNE",
"url": "https://github.com/AaronRobinsonMSFT/DNNE/issues/154",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2499013823 | Service Not Available - Custom backEnd
Hello friend, good evening.
I was very happy to see your message on REDDIT, about your project to download entire MOODLE courses.
However, I was unable to connect, even though the server (https://moodle-ead.unipampa.edu.br) and course are available online.
I am attaching the error message.
If you can help me, I would be very grateful.
Congratulations on the project!
Regards.
Gerson..
Hello! Thanks for reporting this issue. Unfortunately, I can't see any error messages attached in this comment, so I can't really debug the tool without further information!
| gharchive/issue | 2024-08-31T19:14:54 | 2025-04-01T04:32:14.009154 | {
"authors": [
"Aathish04",
"gersontk777"
],
"repo": "Aathish04/moodlearchiver",
"url": "https://github.com/Aathish04/moodlearchiver/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2219801414 | Sbt cross-build is broken
Describe the bug
The project uses the sbt-matrix plugin for cross-building, but the GitHub Actions workflows use the ++ notation. Also, some combinations seem to fail to build.
Also some combinations seem to fail to build.
Depends on #193
This item depends on:
AbsaOSS/atum-service#193
By Dependent Issues.
| gharchive/issue | 2024-04-02T07:44:19 | 2025-04-01T04:32:14.079757 | {
"authors": [
"benedeki"
],
"repo": "AbsaOSS/atum-service",
"url": "https://github.com/AbsaOSS/atum-service/issues/184",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1632059882 | Fix viewer search path on MacOS
Fix #1292
Add the module path as part of the search path.
This fixes the setup in the build area, and should fix the install as well.
This is on Ventura, using a clang build setup.
@kwokcb This might deserve some deeper investigation, as it's not an issue that's been reported on MacOS before. Does this only apply to developer builds, or does it apply to the shipping builds of the viewer in releases? If we can narrow this down further, then we should be able to implement a more targeted solution.
Hi @jstone-lucasfilm,
This does not occur with the installed folder, as MaterialXView is in a subfolder below libraries. For builds
with my current configuration, MaterialXView is at the same level as libraries, so it always fails.
If anyone sets up MaterialXView at the same level as libraries then it will fail, so I think this is a suitable patch.
It also feels strange to always only look for libraries in the parent folder of the binary.
Hi @kwokcb
What does the output layout look like on Ventura? Are you creating an application bundle?
For builds which only create one build area, there is no subfolder for the build type (e.g. Release/RelWithDebugInfo/Debug).
Thus the executable is in the same folder as libraries. I was using -GXcode which created the separate build areas and am no longer doing so (I'm basically trying to build exactly as CI does as the multiple build case does not create Python executables with the correct architecture for M1/M2 etc.).
@ashwinbhat, neither the build area nor the install creates bundles, so it's always possible to move files around and have libraries not found. I guess this would be the best way to handle this, but it's not currently being done.
@jstone-lucasfilm , this sounds like simple logic to add. As the Graph Editor also follows the same logic it will also need to be updated as it is also failing when in the same folder as libraries. I have created a utility which starts from any root to do the searching.
| gharchive/pull-request | 2023-03-20T12:58:52 | 2025-04-01T04:32:14.098911 | {
"authors": [
"ashwinbhat",
"jstone-lucasfilm",
"kwokcb"
],
"repo": "AcademySoftwareFoundation/MaterialX",
"url": "https://github.com/AcademySoftwareFoundation/MaterialX/pull/1293",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1582801390 | Fix for reading memory mapped files with DWA compression
Hi, I mentioned this issue at a TSC meeting a couple of months ago, sorry for the delay in creating a PR.
I use a custom class derived from Imf::IStream for reading .exr files that supports memory mapping on Windows and Linux/macOS. I noticed I was getting crashes when reading DWA compressed files, and after looking into it discovered that the DWA code was writing into read only data by casting away the const. I guess for regular I/O this didn't cause an issue since it is buffered, but it does cause a crash with memory mapped data. For example in Visual Studio:
Exception thrown: write access violation.
> OpenEXR-3_2_d.dll!Imf_3_2::Xdr::read<Imf_3_2::CharPtrIO,char const *>(const char * & in, unsigned __int64 & v) Line 522 C++
OpenEXR-3_2_d.dll!Imf_3_2::DwaCompressor::uncompress(const char * inPtr, int inSize, Imath_3_1::Box<Imath_3_1::Vec2<int>> range, const char * & outPtr) Line 2260 C++
OpenEXR-3_2_d.dll!Imf_3_2::DwaCompressor::uncompress(const char * inPtr, int inSize, int minY, const char * & outPtr) Line 2218 C++
OpenEXR-3_2_d.dll!Imf_3_2::`anonymous namespace'::LineBufferTaskIIF::execute() Line 844 C++
IlmThread-3_2_d.dll!IlmThread_3_2::`anonymous namespace'::NullThreadPoolProvider::addTask(IlmThread_3_2::Task * t) Line 372 C++
IlmThread-3_2_d.dll!IlmThread_3_2::ThreadPool::addTask(IlmThread_3_2::Task * task) Line 700 C++
IlmThread-3_2_d.dll!IlmThread_3_2::ThreadPool::addGlobalTask(IlmThread_3_2::Task * task) Line 723 C++
OpenEXR-3_2_d.dll!Imf_3_2::ScanLineInputFile::readPixels(int scanLine1, int scanLine2) Line 1704 C++
OpenEXR-3_2_d.dll!Imf_3_2::InputFile::readPixels(int scanLine1, int scanLine2) Line 900 C++
OpenEXR-3_2_d.dll!Imf_3_2::InputPart::readPixels(int scanLine1, int scanLine2) Line 65 C++
OpenEXR-3_2_d.dll!Imf_3_2::RgbaInputFile::readPixels(int scanLine1, int scanLine2) Line 1356 C++
OpenEXRTest.exe!`anonymous namespace'::writeReadScanLines(const char * fileName, int width, int height, Imf_3_2::Compression compression, const Imf_3_2::Array2D<Imf_3_2::Rgba> & p1) Line 353 C++
OpenEXRTest.exe!testExistingStreams(const std::string & tempDir) Line 1007 C++
OpenEXRTest.exe!main(int argc, char * * argv) Line 245 C++
[External Code]
This PR includes both a fix for the DWA code and adds support for memory mapped I/O to the tests. Specifically:
Fix the DWA code by using a small temporary buffer
Add memory map support to testExistingStreams
Modify testExistingStreams to run the tests with all compression types
Move the utility function WidenFilename() to ImfMisc.h/.cpp for use with Windows filenames
Change WidenFilename() to use std::wstring_convert()
It would also be nice to change char* IStream::readMemoryMapped() to const char* IStream::readMemoryMapped() since the memory should be read only, or that could be a separate PR.
Thanks!
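The crash mode is easy to reproduce outside OpenEXR. Here is a purely illustrative Python sketch (the actual fix is in the C++ DWA code) showing that a write into a read-only memory mapping fails immediately, where buffered file I/O might silently tolerate a cast-away-const write:

```python
import mmap
import os
import tempfile

# Illustrative only: create a file, map it read-only, and show that any
# write through the mapping is rejected -- which is why casting away
# const and writing into memory-mapped input data crashes.
fd, path = tempfile.mkstemp()
os.write(fd, b"DWA compressed scanline data")
os.close(fd)

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    assert mm[:3] == b"DWA"   # reading the mapping is fine
    try:
        mm[0] = 0             # any write raises immediately
        write_rejected = False
    except TypeError:
        write_rejected = True
    mm.close()

os.remove(path)
```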
It looks like the CI is failing from a network timeout:
Warning: Failed to download action 'https://api.github.com/repos/actions/checkout/tarball/ac593985615ec2ede58e132d2e21d2b1cbd6127c'. Error: The SSL connection could not be established, see inner exception.
Warning: Back off 19.262 seconds before retry.
Warning: Failed to download action 'https://api.github.com/repos/actions/checkout/tarball/ac593985615ec2ede58e132d2e21d2b1cbd6127c'. Error: The SSL connection could not be established, see inner exception.
Warning: Back off 26.111 seconds before retry.
Error: The SSL connection could not be established, see inner exception.
Thanks for the fix, and especially for the tests! Just to confirm, this should be ABI compatible, right? It looks like you've exposed a previously hidden symbol WidenFilename, but that's the only API/ABI change?
Yes, the only change to the API is exposing the WidenFilename() function. The function only has two lines of code and is sort of Windows specific, so it could also be left out of the public API and just copied to where it is needed.
It looks like WidenFilename was already duplicated in test/OpenEXRTest/TestUtilFStream.h. Would you mind swapping that out for your version in ImfMisc.h? Better to reduce the duplication
Sure thing, I removed the function in test/OpenEXRTest/TestUtilFStream.h and updated the PR.
| gharchive/pull-request | 2023-02-13T17:53:38 | 2025-04-01T04:32:14.105073 | {
"authors": [
"cary-ilm",
"darbyjohnston"
],
"repo": "AcademySoftwareFoundation/openexr",
"url": "https://github.com/AcademySoftwareFoundation/openexr/pull/1333",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
155212101 | Move temporary folders out of $HOME
Quoting @oscarrenalias in #75:
When creating temporary folders as in TEMP_CERT_PATH, we may want to use something like mktmp to create temporary folders in the correct place for doing so (/tmp or /var/tmp, let the shell decide) instead of creating them in places like $HOME/docker_certs, as users may be led to believe that this is an official folder.
Alternatively, we could create the folder relative to the script's current path, so that everything remains within the adop directory.
Hi Nic,
Don't forget the diff between /tmp and /var/tmp, e.g. TTL and location!
</Linux_101>
cheers,
Rob
Apologies for the typo Nick.
That's why we should use something like mktmp, right?
I have worked a bit on this and will make a pull request soon. The main point is that for Windows to work on one of the steps (docker cp ${TMP_FOLDER} jenkins-slave:${CERT_PATH}), the temporary folder cannot be in /tmp: docker run from Git Bash appears to convert the path to a Windows path, turning /tmp/... into C:\tmp, and then fails because C:\tmp does not exist :(
I have now created a PR for this: #104. Please let me know what you think; any feedback is very welcome.
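For reference, a hedged Python sketch of the mktemp-style behaviour being discussed (the actual project scripts are shell; the prefix name is illustrative):

```python
import os
import tempfile

# Let the platform decide where temporary data lives instead of
# hard-coding $HOME/docker_certs; tempfile honours TMPDIR/TEMP/TMP,
# so the shell/user controls the root of the temporary location.
cert_dir = tempfile.mkdtemp(prefix="docker_certs_")
assert os.path.isdir(cert_dir)
assert cert_dir.startswith(tempfile.gettempdir())
os.rmdir(cert_dir)
```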
| gharchive/issue | 2016-05-17T09:16:48 | 2025-04-01T04:32:14.120811 | {
"authors": [
"josequaresma",
"nickdgriffin",
"oscarrenalias",
"robwells57"
],
"repo": "Accenture/adop-docker-compose",
"url": "https://github.com/Accenture/adop-docker-compose/issues/77",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
203211393 | PHPCast: No song playing getPlaying API request returns a 500 error.
When there is no song playing the getPlaying API endpoint constantly reports a 500 error. This needs to be handled where the response will explain that there is no song playing.
This issue has now been fixed and rolled out.
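A hypothetical sketch of the shape of such a fix (the real endpoint is PHP; names and payload fields are illustrative): return an explicit "nothing playing" payload instead of letting the missing song bubble up as a 500 error.

```python
# Illustrative only -- not the actual PHPCast code. When no song is
# playing, respond with a well-formed payload explaining the state
# rather than raising an internal server error.
def get_playing(current_song):
    if current_song is None:
        return {"status": 200, "playing": False,
                "message": "No song is currently playing"}
    return {"status": 200, "playing": True, "song": current_song}
```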
| gharchive/issue | 2017-01-25T20:19:40 | 2025-04-01T04:32:14.133273 | {
"authors": [
"AceXintense"
],
"repo": "AceXintense/PHPCast",
"url": "https://github.com/AceXintense/PHPCast/issues/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1465750858 | Bug: Changes not being saved
I had to blow away my WTF directory because of a different issue, but when I tried to put the Rehack.lua file back it keeps getting reset to the Default single "Welcome page" every time I reload/log. I've tried manually adding new pages and they don't "stick" either. I have no lua error code, and can't figure out why the file keeps getting rewritten
This is on Retail, with 2.4.2 installed. I also tried deleting my installation of REHack and then reinstalling it from the 2.4.2 zip here on GitHub.
I also tried with versions 2.4.1, 2.4.2 and got the same result.
It is not connected with addon version at all. Make sure that REHack.lua is not set to read only.
Forgot to mention that. It was the first thing I checked, even though the file I copied over had just had a new page added before I had to reset everything.
Well, even though it was clearly NOT marked as Read only in the Properties dialog....something with permissions was screwed up. I just decided to do a recursive removal of Read only from the retail folder. Miraculously, I can now make changes to it in game that stick.
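A hypothetical helper mirroring the manual fix above (names are illustrative — the actual fix was done through Windows Explorer): recursively clear the read-only bit so the game can rewrite its SavedVariables files.

```python
import os
import stat

# Illustrative sketch: walk the folder tree and add owner-write
# permission to every file, equivalent to recursively removing
# "Read only" from the Properties dialog.
def clear_readonly(root):
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            p = os.path.join(dirpath, name)
            os.chmod(p, os.stat(p).st_mode | stat.S_IWUSR)
```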
| gharchive/issue | 2022-11-28T03:37:37 | 2025-04-01T04:32:14.138891 | {
"authors": [
"AcidWeb",
"nancikennedy"
],
"repo": "AcidWeb/REHack",
"url": "https://github.com/AcidWeb/REHack/issues/3",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
1960375832 | Tools for solidity server keeps crashing in WSL - windows
I have tried some of the commands here but it's not working
Running 'woke lsp --port 43343'
[Error - 10:08:21 PM] Traceback (most recent call last):
File "/home/fave/.local/lib/python3.10/site-packages/woke/lsp/server.py", line 268, in _task_done_callback
task.result()
File "/home/fave/.local/lib/python3.10/site-packages/woke/lsp/lsp_compiler.py", line 187, in run
await self.__compilation_loop()
File "/home/fave/.local/lib/python3.10/site-packages/woke/lsp/lsp_compiler.py", line 977, in __compilation_loop
await self.__compile(self.__discovered_files)
File "/home/fave/.local/lib/python3.10/site-packages/woke/lsp/lsp_compiler.py", line 848, in __compile
self.__ir_reference_resolver.run_post_process_callbacks(
File "/home/fave/.local/lib/python3.10/site-packages/woke/ast/ir/reference_resolver.py", line 174, in run_post_process_callbacks
callback(callback_params)
File "/home/fave/.local/lib/python3.10/site-packages/woke/ast/ir/meta/import_directive.py", line 165, in _post_process
assert referenced_declaration is not None
AssertionError
[Info - 10:08:21 PM] Connection to server got closed. Server will restart.
[Error - 10:09:11 PM] Traceback (most recent call last):
File "/home/fave/.local/lib/python3.10/site-packages/woke/lsp/server.py", line 268, in _task_done_callback
task.result()
File "/home/fave/.local/lib/python3.10/site-packages/woke/lsp/lsp_compiler.py", line 187, in run
await self.__compilation_loop()
File "/home/fave/.local/lib/python3.10/site-packages/woke/lsp/lsp_compiler.py", line 977, in __compilation_loop
await self.__compile(self.__discovered_files)
File "/home/fave/.local/lib/python3.10/site-packages/woke/lsp/lsp_compiler.py", line 848, in __compile
self.__ir_reference_resolver.run_post_process_callbacks(
File "/home/fave/.local/lib/python3.10/site-packages/woke/ast/ir/reference_resolver.py", line 174, in run_post_process_callbacks
callback(callback_params)
File "/home/fave/.local/lib/python3.10/site-packages/woke/ast/ir/meta/import_directive.py", line 165, in _post_process
assert referenced_declaration is not None
AssertionError
[Info - 10:09:11 PM] Connection to server got closed. Server will restart.
[Error - 10:10:00 PM] Traceback (most recent call last):
File "/home/fave/.local/lib/python3.10/site-packages/woke/lsp/server.py", line 268, in _task_done_callback
task.result()
File "/home/fave/.local/lib/python3.10/site-packages/woke/lsp/lsp_compiler.py", line 187, in run
await self.__compilation_loop()
File "/home/fave/.local/lib/python3.10/site-packages/woke/lsp/lsp_compiler.py", line 977, in __compilation_loop
await self.__compile(self.__discovered_files)
File "/home/fave/.local/lib/python3.10/site-packages/woke/lsp/lsp_compiler.py", line 848, in __compile
self.__ir_reference_resolver.run_post_process_callbacks(
File "/home/fave/.local/lib/python3.10/site-packages/woke/ast/ir/reference_resolver.py", line 174, in run_post_process_callbacks
callback(callback_params)
File "/home/fave/.local/lib/python3.10/site-packages/woke/ast/ir/meta/import_directive.py", line 165, in _post_process
assert referenced_declaration is not None
AssertionError
[Info - 10:10:00 PM] Connection to server got closed. Server will restart.
When I use `woke lsp --port 54942` it says this:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/fave/.local/bin/woke:8 in <module> │
│ │
│ 5 from woke.cli.__main__ import main │
│ 6 if __name__ == '__main__': │
│ 7 │ sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) │
│ ❱ 8 │ sys.exit(main()) │
│ 9 │
│ │
│ /home/fave/.local/lib/python3.10/site-packages/click/core.py:1157 in __call__ │
│ │
│ 1154 │ │
│ 1155 │ def __call__(self, *args: t.Any, **kwargs: t.Any) -> t.Any: │
│ 1156 │ │ """Alias for :meth:`main`.""" │
│ ❱ 1157 │ │ return self.main(*args, **kwargs) │
│ 1158 │
│ 1159 │
│ 1160 class Command(BaseCommand): │
│ │
│ /home/fave/.local/lib/python3.10/site-packages/rich_click/rich_group.py:21 in main │
│ │
│ /home/fave/.local/lib/python3.10/site-packages/click/core.py:1078 in main │
│ │
│ 1075 │ │ try: │
│ 1076 │ │ │ try: │
│ 1077 │ │ │ │ with self.make_context(prog_name, args, **extra) as ctx: │
│ ❱ 1078 │ │ │ │ │ rv = self.invoke(ctx) │
│ 1079 │ │ │ │ │ if not standalone_mode: │
│ 1080 │ │ │ │ │ │ return rv │
│ 1081 │ │ │ │ │ # it's not safe to `ctx.exit(rv)` here! │
│ │
│ /home/fave/.local/lib/python3.10/site-packages/click/core.py:1688 in invoke │
│ │
│ 1685 │ │ │ │ super().invoke(ctx) │
│ 1686 │ │ │ │ sub_ctx = cmd.make_context(cmd_name, args, parent=ctx) │
│ 1687 │ │ │ │ with sub_ctx: │
│ ❱ 1688 │ │ │ │ │ return _process_result(sub_ctx.command.invoke(sub_ctx)) │
│ 1689 │ │ │
│ 1690 │ │ # In chain mode we create the contexts step by step, but after the │
│ 1691 │ │ # base command has been invoked. Because at that point we do not │
│ │
│ /home/fave/.local/lib/python3.10/site-packages/click/core.py:1434 in invoke │
│ │
│ 1431 │ │ │ echo(style(message, fg="red"), err=True) │
│ 1432 │ │ │
│ 1433 │ │ if self.callback is not None: │
│ ❱ 1434 │ │ │ return ctx.invoke(self.callback, **ctx.params) │
│ 1435 │ │
│ 1436 │ def shell_complete(self, ctx: Context, incomplete: str) -> t.List["CompletionItem"]: │
│ 1437 │ │ """Return a list of completions for the incomplete value. Looks │
│ │
│ /home/fave/.local/lib/python3.10/site-packages/click/core.py:783 in invoke │
│ │
│ 780 │ │ │
│ 781 │ │ with augment_usage_errors(__self): │
│ 782 │ │ │ with ctx: │
│ ❱ 783 │ │ │ │ return __callback(*args, **kwargs) │
│ 784 │ │
│ 785 │ def forward( │
│ 786 │ │ __self, __cmd: "Command", *args: t.Any, **kwargs: t.Any # noqa: B902 │
│ │
│ /home/fave/.local/lib/python3.10/site-packages/click/decorators.py:33 in new_func │
│ │
│ 30 │ """ │
│ 31 │ │
│ 32 │ def new_func(*args: "P.args", **kwargs: "P.kwargs") -> "R": │
│ ❱ 33 │ │ return f(get_current_context(), *args, **kwargs) │
│ 34 │ │
│ 35 │ return update_wrapper(new_func, f) │
│ 36 │
│ │
│ /home/fave/.local/lib/python3.10/site-packages/woke/cli/lsp.py:53 in run_lsp │
│ │
│ 50 │ config = WokeConfig() │
│ 51 │ config.load_configs() # load ~/.woke/config.toml and ./woke.toml │
│ 52 │ │
│ ❱ 53 │ asyncio.run(run_server(config, port)) │
│ 54 │
│ │
│ /usr/lib/python3.10/asyncio/runners.py:44 in run │
│ │
│ 41 │ │ events.set_event_loop(loop) │
│ 42 │ │ if debug is not None: │
│ 43 │ │ │ loop.set_debug(debug) │
│ ❱ 44 │ │ return loop.run_until_complete(main) │
│ 45 │ finally: │
│ 46 │ │ try: │
│ 47 │ │ │ _cancel_all_tasks(loop) │
│ │
│ /usr/lib/python3.10/asyncio/base_events.py:649 in run_until_complete │
│ │
│ 646 │ │ if not future.done(): │
│ 647 │ │ │ raise RuntimeError('Event loop stopped before Future completed.') │
│ 648 │ │ │
│ ❱ 649 │ │ return future.result() │
│ 650 │ │
│ 651 │ def stop(self): │
│ 652 │ │ """Stop running the event loop. │
│ │
│ /home/fave/.local/lib/python3.10/site-packages/woke/cli/lsp.py:28 in run_server │
│ │
│ 25 │ │ writer.close() │
│ 26 │ │ logger.info("Client disconnected") │
│ 27 │ │
│ ❱ 28 │ server = await asyncio.start_server(client_callback, port=port) │
│ 29 │ logger.info(f"Started LSP server on port {port}") │
│ 30 │ │
│ 31 │ async with server: │
│ │
│ /usr/lib/python3.10/asyncio/streams.py:85 in start_server │
│ │
│ 82 │ │ │ │ │ │ │ │ │ │ loop=loop) │
│ 83 │ │ return protocol │
│ 84 │ │
│ ❱ 85 │ return await loop.create_server(factory, host, port, **kwds) │
│ 86 │
│ 87 │
│ 88 if hasattr(socket, 'AF_UNIX'): │
│ │
│ /usr/lib/python3.10/asyncio/base_events.py:1519 in create_server │
│ │
│ 1516 │ │ │ │ │ try: │
│ 1517 │ │ │ │ │ │ sock.bind(sa) │
│ 1518 │ │ │ │ │ except OSError as err: │
│ ❱ 1519 │ │ │ │ │ │ raise OSError(err.errno, 'error while attempting ' │
│ 1520 │ │ │ │ │ │ │ │ │ 'to bind on address %r: %s' │
│ 1521 │ │ │ │ │ │ │ │ │ % (sa, err.strerror.lower())) from None │
│ 1522 │ │ │ │ completed = True │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
OSError: [Errno 98] error while attempting to bind on address ('0.0.0.0', 54942): address already in use
Hey @0xfave,
the assertion error should be fixed in Tools for Solidity 1.10.4. Regarding the second error, it seems that port 54942 was already being used on your device.
Thank you
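The second failure is a generic EADDRINUSE condition, easy to reproduce in a few lines of Python (illustrative sketch, unrelated to the woke codebase itself): binding a second listener to a port that already has one raises OSError.

```python
import socket

# Illustrative sketch of "address already in use": the first socket
# grabs a free port and listens; the second bind to the same port fails.
first = socket.socket()
first.bind(("127.0.0.1", 0))      # let the OS pick a free port
first.listen()
port = first.getsockname()[1]

second = socket.socket()
try:
    second.bind(("127.0.0.1", port))
    address_in_use = False
except OSError:
    address_in_use = True
finally:
    second.close()
    first.close()
```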
| gharchive/issue | 2023-10-25T02:09:25 | 2025-04-01T04:32:14.145629 | {
"authors": [
"0xfave",
"michprev"
],
"repo": "Ackee-Blockchain/tools-for-solidity-vscode",
"url": "https://github.com/Ackee-Blockchain/tools-for-solidity-vscode/issues/66",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
525953551 | fix(versions): update Activiti/activiti-cloud-acceptance-scenarios versions into master
UpdateBot pushed maven dependency: org.activiti.cloud.acc:activiti-cloud-acceptance-tests-dependencies to: 7.1.103
UpdateBot commands:
updatebot push-version --kind maven org.activiti.cloud.acc:activiti-cloud-acceptance-tests-dependencies 7.1.103
| gharchive/pull-request | 2019-11-20T17:23:54 | 2025-04-01T04:32:14.168383 | {
"authors": [
"jx-activiti-cloud"
],
"repo": "Activiti/activiti-cloud-acceptance-scenarios",
"url": "https://github.com/Activiti/activiti-cloud-acceptance-scenarios/pull/683",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
481194770 | fix(versions): update Activiti/activiti-cloud-app-service versions into develop
UpdateBot pushed maven dependency: org.activiti.cloud.common:activiti-cloud-service-common-dependencies to: 7.1.49
UpdateBot commands:
updatebot push-version --kind maven org.activiti.cloud.common:activiti-cloud-service-common-dependencies 7.1.49
| gharchive/pull-request | 2019-08-15T15:04:08 | 2025-04-01T04:32:14.171865 | {
"authors": [
"jx-activiti-cloud"
],
"repo": "Activiti/activiti-cloud-app-service",
"url": "https://github.com/Activiti/activiti-cloud-app-service/pull/136",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
559252630 | fix(versions): update Activiti/activiti-cloud-notifications-service-graphql versions into develop
UpdateBot pushed maven dependency: org.activiti.cloud.query:activiti-cloud-query-dependencies to: 7.1.222
UpdateBot commands:
updatebot push-version --kind maven org.activiti.cloud.query:activiti-cloud-query-dependencies 7.1.222
| gharchive/pull-request | 2020-02-03T18:22:56 | 2025-04-01T04:32:14.173391 | {
"authors": [
"jx-activiti-cloud"
],
"repo": "Activiti/activiti-cloud-notifications-service-graphql",
"url": "https://github.com/Activiti/activiti-cloud-notifications-service-graphql/pull/248",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
498817014 | update org.activiti.cloud.dependencies:activiti-cloud-dependencies to 7.1.156
UpdateBot pushed maven dependency: org.activiti.cloud.dependencies:activiti-cloud-dependencies to: 7.1.156
UpdateBot commands:
updatebot push-version --kind maven org.activiti.cloud.dependencies:activiti-cloud-dependencies 7.1.156 --merge false
| gharchive/pull-request | 2019-09-26T11:05:08 | 2025-04-01T04:32:14.175011 | {
"authors": [
"jx-activiti-cloud"
],
"repo": "Activiti/ttc-connectors-dummytwitter",
"url": "https://github.com/Activiti/ttc-connectors-dummytwitter/pull/231",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
417746740 | update org.activiti.cloud.dependencies:activiti-cloud-dependencies to 7.1.22
UpdateBot pushed maven dependency: org.activiti.cloud.dependencies:activiti-cloud-dependencies to: 7.1.22
UpdateBot commands:
updatebot push-version --kind maven org.activiti.cloud.dependencies:activiti-cloud-dependencies 7.1.22 --merge false
| gharchive/pull-request | 2019-03-06T11:05:24 | 2025-04-01T04:32:14.176597 | {
"authors": [
"jx-activiti-cloud"
],
"repo": "Activiti/ttc-connectors-dummytwitter",
"url": "https://github.com/Activiti/ttc-connectors-dummytwitter/pull/91",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2221890303 | Added project_id key format
Added a new key format for the JSON key project_id without deleting the old one, modrinth_project_id. It looks for project_id in the JSON first and, if it does not find it, falls back to modrinth_project_id for compatibility with the old JSON format. This was done for future compatibility with new remote types such as CurseForge or others.
use json.optString(...) instead of json.opt("project_id").toString()
here we go
will be fixed in an upcoming update
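A hypothetical Python rendering of the lookup order described above (the real project is Java and uses org.json's optString; the function name is illustrative):

```python
# Illustrative sketch: prefer the new "project_id" key, falling back to
# the legacy "modrinth_project_id" key for old JSON files.
def resolve_project_id(obj):
    value = obj.get("project_id", "")
    return value if value else obj.get("modrinth_project_id", "")
```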
| gharchive/pull-request | 2024-04-03T04:51:15 | 2025-04-01T04:32:14.252701 | {
"authors": [
"AdamCalculator",
"HiWord9"
],
"repo": "AdamCalculator/DynamicPack",
"url": "https://github.com/AdamCalculator/DynamicPack/pull/52",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1368884074 | [ADDITION] Zip Co
Category
Fintech
Software to be added
Zip Co
Supporting Material
URL: https://www.linkedin.com/company/zip-co-limited/
Description: Buy Now, Pay Later (BNPL) provider available at over 90,000 locations
Size:
HQ: Sydney
LinkedIn: https://www.linkedin.com/company/zip-co-limited/
See Record on Airtable:
https://airtable.com/app0Ox7pXdrBUIn23/tblYbuZoILuVA0X3L/rectLe8jpQQ6v2Z5Y
Enough info in Airtable, adding to list
| gharchive/issue | 2022-09-11T09:15:38 | 2025-04-01T04:32:14.257449 | {
"authors": [
"AdamXweb"
],
"repo": "AdamXweb/awesome-aussie",
"url": "https://github.com/AdamXweb/awesome-aussie/issues/25",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1337972245 | Shrinking a tab enough to hide some content renders it inaccessible until it's extended again
When a tab is shrunk enough to obscure any of the internal content, the tab content should become a scrollable area.
I'm thinking about ditching the custom tab widget approach and replace it with our own egui_dock::Tab with an API similar to one of egui::Window.
Closed by #6
| gharchive/issue | 2022-08-13T14:02:36 | 2025-04-01T04:32:14.258671 | {
"authors": [
"Adanos020"
],
"repo": "Adanos020/egui_dock",
"url": "https://github.com/Adanos020/egui_dock/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1393901014 | Create linear search hacktoberfest
please accept my pr
follow rules
| gharchive/pull-request | 2022-10-02T21:13:55 | 2025-04-01T04:32:14.264450 | {
"authors": [
"AdarshAddee",
"BUNNY2210"
],
"repo": "AdarshAddee/Hacktoberfest2022_for_Beginers",
"url": "https://github.com/AdarshAddee/Hacktoberfest2022_for_Beginers/pull/219",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |