174590709 | placeholder does not work in ember 2.7.0
These styles will not work any more:
https://github.com/AddJam/ember-content-editable/blob/master/addon/styles/addon.css#L1-L4
Not sure why, but if you change [contenteditable=true] to .ember-content-editable or any other class name that the user specified, the placeholder will work again.
Hello,
Looks like this bug is back. Adding the "placeholder" property doesn't do anything.
I have checked the merge referenced here, and my CSS is updated as per the merge; still, the placeholder doesn't work. I am running on Ember 2.11.
Any ideas?
Hey @nightire. I have taken over this addon. I am not aware of any issues with the latest Ember versions when using placeholders. Would you mind opening a new issue if this is still a problem, as this issue is a year old by now.
| gharchive/issue | 2016-09-01T18:17:09 | 2025-04-01T04:32:14.267091 | {
"authors": [
"danidr",
"nightire",
"st-h"
],
"repo": "AddJam/ember-content-editable",
"url": "https://github.com/AddJam/ember-content-editable/issues/25",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1213264971 | Feature: Collections
Description
People now have "favourites", which is a sort of collection. Users would be able to create other curated lists of scripts in their own collections.
We would likely want to move favourites to be a collection rather than its own model. This indicates we'd also want some form of public/private flag, because people might not want to share their favourites but may want to share other collections.
| gharchive/issue | 2022-04-23T10:24:24 | 2025-04-01T04:32:14.393714 | {
"authors": [
"AdmiralGT"
],
"repo": "AdmiralGT/botc-scripts",
"url": "https://github.com/AdmiralGT/botc-scripts/issues/158",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
495981666 | AEM Author system/console gets 403
Required Information
[ ] AEM Version, including Service Packs, Cumulative Fix Packs, etc: AEM 6.3 SP2.CFP2
[ ] ACS AEM Commons Version: 4.3.2
[ ] Reproducible on Latest? yes
Expected Behavior
http://localhost:4502/system/console/configMgr
should be available
Actual Behavior
The Author system console is not found after a restart. Only on the author instance, http://localhost:4502/system/console/configMgr
gives this error:
Resource at '/system/console/configMgr' not found: No resource found
To resolve it we restart the instance and try to log in to the system/console URL immediately;
that only works if we catch it in time.
If I remove the ACS Commons package and restart, the issue goes away.
Do the error.logs show anything?
No, that is the thing: there are no good details on the issue. Here are the latest logs from when the issue happens.
https://download.3sharecorp.com/customer/IR/auth-logs-configs.201910918.tar.gz
I had an Adobe ticket as well; if you are able to view it, here is the link.
https://daycare.day.com/content/home/irc/irc_us/customer_services/189805.html#post0041
Norm Raby | ROM Team Engineer
Normand.Raby@3sharecorp.com | Twitter: https://twitter.com/3SHARE | LinkedIn: https://www.linkedin.com/company/1621497/ | Facebook: https://www.facebook.com/3share/
Disclaimer The information in this email and any attachments may contain proprietary and confidential information that is intended for the addressee(s) only. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution, retention or use of the contents of this information is prohibited. When addressed to our clients or vendors, any information contained in this e-mail or any attachments is subject to the terms and conditions in any governing contract. If you have received this e-mail in error, please immediately contact the sender and delete the e-mail.
David G, do you have anything else on this? Is there something else you know of, or anything we can do to troubleshoot this?
Can you be more specific how one can reproduce this? A very simple step-by-step explanation would be great.
I gave all the information I have. This only happens on the author instance for one client. It does not happen all the time. Sometimes a restart resolves it. Removing ACS Commons and restarting always resolves it.
The only thing which can go wrong here is that the webconsole itself is not bound/active. The configMgr is not a dedicated servlet, but an essential part of the webconsole.
Were you able to access the webconsole at all?
Is there an accurate way to monitor whether /system/console is working?
I get different results sometimes. Currently I have an author instance that gives me a blank page for anything under /system/console. If I view the page source I get:
(embedded screenshot of the page source; image not preserved)
I've experienced this as well, and we also use ACS Commons. To fix this I've been reinstalling the latest service pack, which seems to restore functionality.
| gharchive/issue | 2019-09-19T19:19:27 | 2025-04-01T04:32:14.424985 | {
"authors": [
"davidjgonzalez",
"joerghoh",
"mschmid42",
"normraby"
],
"repo": "Adobe-Consulting-Services/acs-aem-commons",
"url": "https://github.com/Adobe-Consulting-Services/acs-aem-commons/issues/2052",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
351364430 | Support nested ternary
A nested ternary in HTL / Sightly causes an error.
Nested ternary operators are allowed in the grammar and work without any issues for me, e.g.:
${'something' ? 'a' : ('1' ? 'x' : 'x')} // prints a
${'' ? 'a' : ('1' ? 'x' : 'x')} // prints x
Nested expressions have to be enclosed in parentheses. If that's not the case, then I think you should post a snippet that does not work.
Thanks, I did not realize nested expressions had to be enclosed in parentheses. Closing the issue.
| gharchive/issue | 2018-08-16T20:53:03 | 2025-04-01T04:32:14.427013 | {
"authors": [
"karollewandowski",
"robotnewyork"
],
"repo": "Adobe-Marketing-Cloud/htl-spec",
"url": "https://github.com/Adobe-Marketing-Cloud/htl-spec/issues/68",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2653520565 | Updated dotnet sdk to 9
fixed some vulnerabilities
Pull Request Test Coverage Report for Build 11806981564
Details
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 96.25%
Totals
Change from base Build 11712736684: 0.0%
Covered Lines: 122
Relevant Lines: 124
💛 - Coveralls
| gharchive/pull-request | 2024-11-12T22:40:02 | 2025-04-01T04:32:14.433507 | {
"authors": [
"Adolfok3",
"coveralls"
],
"repo": "Adolfok3/AuthorizationInterceptor",
"url": "https://github.com/Adolfok3/AuthorizationInterceptor/pull/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
759647125 | jdk15 does not run on test-ibmcloud-rhel6-x64-1
When jdk15 tests land on test-ibmcloud-rhel6-x64-1, java -version fails with the message:
13:13:27 Run /home/jenkins/workspace/Test_openjdk15_j9_sanity.functional_x86-64_linux_xl/openjdkbinary/j2sdk-image/bin/java -version
13:13:27 =JAVA VERSION OUTPUT BEGIN=
13:13:27 /home/jenkins/workspace/Test_openjdk15_j9_sanity.functional_x86-64_linux_xl/openjdkbinary/j2sdk-image/bin/java: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /home/jenkins/workspace/Test_openjdk15_j9_sanity.functional_x86-64_linux_xl/openjdkbinary/j2sdk-image/bin/../lib/libjli.so)
Sample run: https://ci.adoptopenjdk.net/job/Test_openjdk15_j9_sanity.functional_x86-64_linux_xl/29/console
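The GLIBC_2.14 error above means the binary was linked against a newer glibc than RHEL6 ships (2.12). As a minimal sketch — the helper name and path are illustrative, not from the actual job — the glibc symbol versions a library requires can be listed like this:

```shell
# List the GLIBC version tags referenced by a binary or shared library.
# RHEL6 ships glibc 2.12, so any GLIBC_2.13+ tag in the output means the
# build cannot run on that machine. grep -a treats the binary as text.
list_glibc_tags() {
  grep -aoh 'GLIBC_[0-9][0-9.]*' "$1" | sort -u -V
}
```

Running something like `list_glibc_tags .../j2sdk-image/lib/libjli.so` on the failing machine should show GLIBC_2.14 among the required versions.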
This is OpenJ9 specific. We switched some of the builds back to building on CentOS6 due to an end-user concern. The statement from OpenJ9 is clear that they do not support running on that level.
The question therefore becomes whether we should stop testing on it completely, or find a way to stop the OpenJ9 JDK15+ builds from being scheduled on this system. (FYA @smlambert)
One way would presumably be to make jdk release be one of the Jenkins machine label attributes. Bit of a maintenance issue each time a new release comes along though.
Yep ... it'll be a pain ;-)
Just hit this again testing the openj9-0.24.0-m2 jdk15 builds. It would be good not to risk landing on this machine when we do the Jan 2021 release builds.
and again: https://ci.adoptopenjdk.net/view/Failing Test Jobs/job/Test_openjdk15_j9_extended.system_x86-64_linux_xl/147/
Should be fixed by https://github.com/AdoptOpenJDK/openjdk-build/pull/2372 which has now been merged
| gharchive/issue | 2020-12-08T17:43:40 | 2025-04-01T04:32:14.442811 | {
"authors": [
"andrew-m-leonard",
"lumpfish",
"sxa"
],
"repo": "AdoptOpenJDK/openjdk-infrastructure",
"url": "https://github.com/AdoptOpenJDK/openjdk-infrastructure/issues/1746",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
650700079 | TreeBinAssert.java fails on Corretto
Describe the bug
TreeBinAssert.java fails against a build of Corretto's jdk11.
This bug was fixed upstream in openjdk11 by this code change; however, this change did not get imported into Corretto (despite its age).
Could this be a bug with Corretto's merge system?
Corretto bug raised here.
To Reproduce
https://trss.adoptopenjdk.net/output/test?id=5efea4b42e55416417c06c11
Expected behavior
The test should pass.
Additional context
Add any other context about the problem here.
We have already had another case where the test code we take is not the correct version for the repo that is used to build Corretto. In order to be able to match the repo, the build pipeline should pass the SHA of the jdk11u repo to the test pipelines; otherwise we have to make the assumption that it's built from tip.
I believe this is the case, and that the fix would be to update how corretto build pipeline is called.
See https://github.com/AdoptOpenJDK/openjdk-build/issues/1674
Here the Corretto team asserts that "Our master and develop branches are still based on 11.0.7, but after 11.0.8 releases, they will be updated (and this error will be fixed)"
So I don't expect this issue to go away until after the release.
https://github.com/AdoptOpenJDK/openjdk-tests/pull/1876 has addressed the issue with matching test material to jdk repo.
| gharchive/issue | 2020-07-03T17:06:45 | 2025-04-01T04:32:14.448208 | {
"authors": [
"adamfarley",
"smlambert"
],
"repo": "AdoptOpenJDK/openjdk-tests",
"url": "https://github.com/AdoptOpenJDK/openjdk-tests/issues/1866",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
366830071 | Perf tests post stage failed when running odm tests
https://ci.adoptopenjdk.net/view/Test_perf/job/openjdk8_hs_perftest_x86-64_linux
groovy.lang.MissingPropertyException: No such property: testTarName for class: groovy.lang.Binding
at groovy.lang.Binding.getVariable(Binding.java:63)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:264)
It seems benchmark_test_output_dir is hard-coded, while testTarName is a passed-in parameter.
def benchmark_test_output_dir = 'jvmtest/performance/odm/ilog_wodm881/leftoverResults';
if (fileExists(benchmark_test_output_dir)) {
sh "tar -zcf ${testTarName} benchmark_test_output.tar.gz"
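For reference, tar -zcf takes the output archive name first, followed by the inputs. A minimal sketch of packaging leftover results (the directory and file names below are illustrative stand-ins for the real benchmark output):

```shell
# tar -zcf <output-archive> <inputs...>: the archive name comes first.
# 'leftoverResults' stands in for the benchmark output directory above.
mkdir -p leftoverResults
echo "sample benchmark output" > leftoverResults/result.txt
tar -zcf benchmark_test_output.tar.gz leftoverResults
tar -ztf benchmark_test_output.tar.gz   # list the archive contents
```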
Close in favor of https://github.com/AdoptOpenJDK/openjdk-tests/pull/737
| gharchive/issue | 2018-10-04T14:53:45 | 2025-04-01T04:32:14.450398 | {
"authors": [
"sophia-guo"
],
"repo": "AdoptOpenJDK/openjdk-tests",
"url": "https://github.com/AdoptOpenJDK/openjdk-tests/issues/617",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2568236063 | ы
PR description
Changelog
Why / Balance
Bugfix
Link to the Discord post
Technical information
Media
Requirements
[ ] I have read and follow the Pull Request Guidelines. I understand that otherwise my PR may be closed at the maintainer's discretion.
[ ] I have added screenshots/videos to this pull request demonstrating its changes in-game, or this pull request does not require an in-game demonstration
Breaking changes
Changelog
:cl: Котя
fix: Now when hovering over a prone character, bullets will hit them even if the character is under a table or under items.
It would be good to have a link to that discussion in Discord.
And it would be good to have this for stun batons too, if that's not already done.
I tested it; it works fine. But again, it still needs to be fixed for melee hits too, and then it'll be great. At least for the one done via LMB.
| gharchive/pull-request | 2024-10-05T17:44:58 | 2025-04-01T04:32:14.479601 | {
"authors": [
"FaDeOkno",
"Mirokko",
"Schrodinger71"
],
"repo": "AdventureTimeSS14/space_station_ADT",
"url": "https://github.com/AdventureTimeSS14/space_station_ADT/pull/610",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1633584058 | Initializing Quiz through Start Button
GIVEN I am taking a code quiz
WHEN I click the start button
THEN a timer starts and I am presented with a question
Added timer functionality that initiates when the start button is pressed; working on presenting questions.
The Code Quiz can now be initialized by clicking the 'Start' Button and the first question is able to be displayed properly.
| gharchive/issue | 2023-03-21T10:09:39 | 2025-04-01T04:32:14.514530 | {
"authors": [
"AegeanGrey"
],
"repo": "AegeanGrey/code-quiz",
"url": "https://github.com/AegeanGrey/code-quiz/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1399452097 | Update setup.py
Should we add all the requirements to the install_requires list?
Which one is missing?
No, it's not required.
Okay
| gharchive/issue | 2022-10-06T11:50:33 | 2025-04-01T04:32:14.576138 | {
"authors": [
"Agent-Hellboy",
"kailashchoudhary11"
],
"repo": "Agent-Hellboy/pyytdata",
"url": "https://github.com/Agent-Hellboy/pyytdata/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1962643014 | feat: upgrade native sdk n423_w4182_0.3.0
Update native sdk n423_w4182_0.3.0 dependencies
This pull request was triggered by a bot; you can check out this branch and update it.
Examples built:
https://download.agora.io/demo/test/agora_rtc_engine_android_6.2.4-sp.030.web_20231026042018.zip
https://download.agora.io/demo/test/agora_rtc_engine_ios_6.2.4-sp.030.web_20231026042213.zip
https://download.agora.io/demo/test/agora_rtc_engine_macos_6.2.4-sp.030.web_20231026041626.zip
https://download.agora.io/demo/test/agora_rtc_engine_windows_6.2.4-sp.030.web_20231026041601.zip
This comment was posted by a bot; do not edit it directly
Examples built:
https://download.agora.io/demo/test/agora_rtc_engine_android_6.2.4-sp.030.web_20231026044302.zip
https://download.agora.io/demo/test/agora_rtc_engine_ios_6.2.4-sp.030.web_20231026044645.zip
https://download.agora.io/demo/test/agora_rtc_engine_macos_6.2.4-sp.030.web_20231026044108.zip
https://download.agora.io/demo/test/agora_rtc_engine_windows_6.2.4-sp.030.web_20231026043841.zip
https://download.agora.io/demo/test/agora_rtc_engine_web_6.2.4-sp.030.web_20231026043609.zip
Examples built:
https://download.agora.io/demo/test/agora_rtc_engine_android_6.2.4-sp.030.web_20231101031745.zip
https://download.agora.io/demo/test/agora_rtc_engine_macos_6.2.4-sp.030.web_20231101031355.zip
https://download.agora.io/demo/test/agora_rtc_engine_windows_6.2.4-sp.030.web_20231101031419.zip
https://download.agora.io/demo/test/agora_rtc_engine_web_6.2.4-sp.030.web_20231101031052.zip
| gharchive/pull-request | 2023-10-26T03:42:48 | 2025-04-01T04:32:14.596502 | {
"authors": [
"littleGnAl"
],
"repo": "AgoraIO-Extensions/Agora-Flutter-SDK",
"url": "https://github.com/AgoraIO-Extensions/Agora-Flutter-SDK/pull/1406",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1609635577 | Disappearing cards
Cards on the progress screen seem to be vanishing while scrolling on mobile. I haven't noticed it happen on desktop. I think disabling pointer-events on the element may solve this issue.
Fixed in branch merged by #30.
| gharchive/issue | 2023-03-04T06:21:13 | 2025-04-01T04:32:14.645781 | {
"authors": [
"AhamSammich"
],
"repo": "AhamSammich/lets-play-koikoi",
"url": "https://github.com/AhamSammich/lets-play-koikoi/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2632605359 | Uncaught ImportError in utils.py line 9
Aider version: 0.61.0
Python version: 3.12.4
Platform: Windows-11-10.0.22631-SP0
Python implementation: CPython
Virtual environment: No
OS: Windows 11 (64bit)
Git version: git version 2.45.2.windows.1
An uncaught exception occurred:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "__main__.py", line 7, in <module>
sys.exit(main())
^^^^^^
File "main.py", line 623, in main
main_model = models.Model(
^^^^^^^^^^^^^
File "models.py", line 734, in __init__
res = self.validate_environment()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "models.py", line 938, in validate_environment
res = litellm.validate_environment(model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "llm.py", line 23, in __getattr__
self._load_litellm()
File "llm.py", line 30, in _load_litellm
self._lazy_module = importlib.import_module("litellm")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "__init__.py", line 10, in <module>
from litellm.caching.caching import Cache, DualCache, RedisCache, InMemoryCache
File "__init__.py", line 1, in <module>
from .caching import Cache, LiteLLMCacheType
File "caching.py", line 40, in <module>
from litellm.types.utils import all_litellm_params
File "utils.py", line 9, in <module>
from openai.types.completion_usage import (
ImportError: cannot import name 'CompletionTokensDetails' from 'openai.types.completion_usage' (C:\Users\sriva\AppData\Local\Programs\Python\Python312\Lib\site-packages\openai\types\completion_usage.py)
Thanks for trying aider and filing this issue.
This looks like a duplicate of #1690. Please see the comments there for more information, and feel free to continue the discussion within that issue.
I'm going to close this issue for now. But please let me know if you think this is actually a distinct issue and I will reopen this issue.
| gharchive/issue | 2024-11-04T11:50:17 | 2025-04-01T04:32:14.655689 | {
"authors": [
"Srivamshi-ai",
"paul-gauthier"
],
"repo": "Aider-AI/aider",
"url": "https://github.com/Aider-AI/aider/issues/2241",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
303141708 | Fix upload logs crash. Set log file size limit
Asana task: https://app.asana.com/0/361770107085503/559464521227450/f
Fix the log rotation flow. Add a size limit of 1 MB for the log file.
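A minimal, language-agnostic sketch of size-capped rotation (the 1 MB limit comes from the description above; the function and file names are illustrative — the real implementation lives in the app's JS logging code):

```shell
# Rotate the log when it exceeds the size limit: keep one old copy and
# truncate the active file. Sketch only, not the app's actual code.
LIMIT=$((1024 * 1024))   # 1 MB
rotate_if_big() {
  [ -f "$1" ] || return 0
  size=$(wc -c < "$1")
  if [ "$size" -gt "$LIMIT" ]; then
    mv "$1" "$1.1"      # keep a single rotated copy
    : > "$1"            # start a fresh, empty log
  fi
}
```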
There is no time to implement Winston, as we would need a custom transport since Winston doesn't support React Native file systems.
Can you be more specific about what is wrong with this PR? We need it in today.
Based on verbal convo with @thehobbit85, he approved this PR as long as we replace the logging functionality with Winston in the next release.
| gharchive/pull-request | 2018-03-07T15:25:12 | 2025-04-01T04:32:14.658605 | {
"authors": [
"paullinator",
"vsashyn"
],
"repo": "Airbitz/edge-react-gui",
"url": "https://github.com/Airbitz/edge-react-gui/pull/523",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
161971177 | make sure it works for multiple languages
The only thing I could think of that might break are the getBlocksWith... functions, but it'd be best to test everything once they add testable localization.
Should be resolved now that onChange uses the block type (which should always be English 📦)
| gharchive/issue | 2016-06-23T16:52:01 | 2025-04-01T04:32:14.666294 | {
"authors": [
"Airhogs777"
],
"repo": "Airhogs777/sb3-theme",
"url": "https://github.com/Airhogs777/sb3-theme/issues/35",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1938127869 | Fix type hints issue from Rohmu 1.2.0
About this change - What it does
Fixes the issues that appeared with Rohmu 1.2.0.
Note that it requires the following changes on Rohmu's side: https://github.com/Aiven-Open/rohmu/pull/150, and hence it will not pass until a new Rohmu version is released.
Resolves: #xxxxx
Why this way
Codecov Report
Merging #606 (cdb8208) into main (c6d3db4) will decrease coverage by 0.10%.
The diff coverage is 89.18%.
:exclamation: Current head cdb8208 differs from pull request most recent head eeb6034. Consider uploading reports for the commit eeb6034 to get more accurate results
@@ Coverage Diff @@
## main #606 +/- ##
==========================================
- Coverage 91.35% 91.26% -0.10%
==========================================
Files 32 32
Lines 4710 4727 +17
==========================================
+ Hits 4303 4314 +11
- Misses 407 413 +6
Files                          Coverage Δ
pghoard/basebackup/base.py     92.25% <100.00%> (ø)
pghoard/basebackup/chunks.py   96.85% <100.00%> (+0.01%) :arrow_up:
pghoard/restore.py             89.61% <100.00%> (ø)
pghoard/transfer.py            98.69% <100.00%> (+<0.01%) :arrow_up:
pghoard/basebackup/delta.py    95.29% <84.61%> (-0.66%) :arrow_down:
pghoard/common.py              93.20% <87.50%> (-0.21%) :arrow_down:
... and 2 files with indirect coverage changes
| gharchive/pull-request | 2023-10-11T15:41:20 | 2025-04-01T04:32:14.686393 | {
"authors": [
"Mulugruntz",
"codecov-commenter"
],
"repo": "Aiven-Open/pghoard",
"url": "https://github.com/Aiven-Open/pghoard/pull/606",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1756029120 | Update README.md
If you have time, please help me.
https://github.com/Issa-N/AttractingContributors-FavoriteBook.git
Thanks.
Please open the pull request again because of the conflict.
So, I can't merge.
| gharchive/pull-request | 2023-06-14T04:25:32 | 2025-04-01T04:32:14.728960 | {
"authors": [
"Ak99-S091",
"Issa-N"
],
"repo": "Ak99-S091/AttractingContributors",
"url": "https://github.com/Ak99-S091/AttractingContributors/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1718384224 | Write script to merge unstable to master for feature release
As per the release instructions:
1. Make sure your local master is up-to-date: git checkout master && git pull
2. Make sure your local unstable is up-to-date: git checkout unstable && git pull
3. Create a merge commit that selects the state of unstable and push it: git merge -s ours master && git push
4. Fast-forward master to the merge commit: git checkout master && git merge unstable && git push
5. Update the version number in package.json and package-lock.json on unstable to some provisional new version number, and push it.
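The merge flow in steps 1–4 above can be sketched as one script. This is a dry-run sketch only — the pushes and the version bump are left out, and it assumes local master and unstable branches with upstreams configured:

```shell
# Sketch of the feature-release merge flow (steps 1-4 above, pushes omitted).
release_merge() {
  set -e
  git checkout master   && git pull --ff-only   # step 1
  git checkout unstable && git pull --ff-only   # step 2
  # Step 3: merge commit that keeps unstable's state relative to master
  git merge -s ours master -m "Merge branch 'master' into unstable"
  # Step 4: fast-forward master to that merge commit
  git checkout master && git merge --ff-only unstable
}
```

After the run, master and unstable point at the same merge commit, which is what makes the subsequent pushes fast-forwards.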
This task also requires handling GitHub secrets to enable secure pushes to the remote.
Prerequisites:
New version
Current branch
And figure out others...
Added shell script to perform the above steps and checks in between. Integration with GitHub action is pending
The script has the following features:
Exit on first command failure. Achieved using the set -e option.
Check if $VERSION is set.
Set git configs to the ACTOR of the GitHub action, i.e., whoever initiates the action. Achieved using the code snippet below:
# Set git configs
git config --global user.name "${GITHUB_ACTOR}"
git config --global user.email "${GITHUB_ACTOR}@users.noreply.github.com"
Check if the branch is set to unstable.
Checks if the version has been successfully updated in package.json and package-lock.json.
| gharchive/issue | 2023-05-21T06:09:09 | 2025-04-01T04:32:14.733142 | {
"authors": [
"AkMo3"
],
"repo": "AkMo3/cytoscape.js",
"url": "https://github.com/AkMo3/cytoscape.js/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
104076767 | Missed golden cookies
Hi, nice add-on. I only miss one feature: to see how many golden cookies I have missed. You can see it by typing "Game.missedGoldenClicks" (without quotes) in the console. Other add-ons have it, so I am sure you could add it too.
Ah, so you do already know about that stat command. I guess I can add it but maybe I'll have it as a setting or hidden as I know my OCD would not be happy with how many I've missed, lol.
Finally added thanks to @Alhifar. Will be released in Version 2.3
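For reference, a hedged sketch of reading that stat defensively (the `Game` global and `missedGoldenClicks` field come from the issue above; the helper name is made up):

```javascript
// Hypothetical helper: read the missed-golden-cookies stat if it exists.
const missedGoldenCookies = (game) =>
  game && typeof game.missedGoldenClicks === "number"
    ? game.missedGoldenClicks
    : null;

// In the browser console on the game page:
//   missedGoldenCookies(Game);
```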
| gharchive/issue | 2015-08-31T15:34:11 | 2025-04-01T04:32:14.746417 | {
"authors": [
"Aktanusa",
"erriperry"
],
"repo": "Aktanusa/CookieMonster",
"url": "https://github.com/Aktanusa/CookieMonster/issues/41",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
198241532 | iOS 10.2 crashes spike related to CFNetwork
Hello. In my app I use only Alamofire for networking.
Right now the live version of the app is using Alamofire 4.1.0 and the one waiting for review is in 4.2.0
I have noticed a significant increase in crashes on iOS 10.2.
All the crashes are related to CFNetwork. Most of them have redacted stack traces.
I am pasting below some sample stack traces.
Have you noticed anything similar?
My crash-free users before 10.2 were around 99.7%; now I am at 97.8%, and I need to find what is causing the issue.
Thank you.
Crashed: com.apple.CFNetwork.Connection
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000000000000110
Crashed: com.apple.CFNetwork.Connection
0 CFNetwork 0x18524131c TCPIOConnection::copyProperty(__CFString const*) + 44
1 CFNetwork 0x1851117dc SPDYConnection::_onqueue_closeStream(SPDYStream*) + 236
2 CFNetwork 0x1851117dc SPDYConnection::_onqueue_closeStream(SPDYStream*) + 236
3 CFNetwork 0x1851116cc ___ZN14SPDYConnection19startEnqueuedStreamEP10SPDYStream_block_invoke_2 + 28
4 libdispatch.dylib 0x1839221fc _dispatch_call_block_and_release + 24
5 libdispatch.dylib 0x1839221bc _dispatch_client_callout + 16
6 libdispatch.dylib 0x1839303dc _dispatch_queue_serial_drain + 928
7 libdispatch.dylib 0x1839259a4 _dispatch_queue_invoke + 652
8 libdispatch.dylib 0x18393234c _dispatch_root_queue_drain + 572
9 libdispatch.dylib 0x1839320ac _dispatch_worker_thread3 + 124
10 libsystem_pthread.dylib 0x183b2b2a0 _pthread_wqthread + 1288
11 libsystem_pthread.dylib 0x183b2ad8c start_wqthread + 4
Crashed: com.apple.NSURLConnectionLoader
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000000012d42c68
Crashed: com.apple.NSURLConnectionLoader
0 libobjc.A.dylib 0x1904c80a0 objc_retain + 16
1 CFNetwork 0x1921b5054 <redacted> + 240
2 CFNetwork 0x192110ce8 <redacted> + 348
3 CFNetwork 0x1921e5990 <redacted> + 104
4 CFNetwork 0x1921e591c <redacted> + 36
5 CFNetwork 0x19217b12c <redacted> + 332
6 CFNetwork 0x19217afa0 <redacted> + 60
7 CFNetwork 0x19217af38 <redacted> + 268
8 CFNetwork 0x1920ecec0 <redacted> + 116
9 CFNetwork 0x19207f110 <redacted> + 48
10 CFNetwork 0x19207f044 <redacted> + 220
11 CFNetwork 0x19207d3d0 <redacted> + 128
12 CFNetwork 0x1921b48fc <redacted> + 1904
13 CFNetwork 0x1921b40bc <redacted> + 144
14 CFNetwork 0x1921b600c <redacted> + 28
15 libdispatch.dylib 0x1909021bc <redacted> + 16
16 libdispatch.dylib 0x19090dab0 <redacted> + 376
17 CFNetwork 0x1922b22a8 <redacted> + 36
18 CoreFoundation 0x191951c18 CFArrayApplyFunction + 68
19 CFNetwork 0x1922b218c <redacted> + 136
20 CFNetwork 0x1922b34b4 <redacted> + 312
21 CFNetwork 0x1922b3220 <redacted> + 64
22 CoreFoundation 0x191a26b5c <redacted> + 24
23 CoreFoundation 0x191a264a4 <redacted> + 524
24 CoreFoundation 0x191a240a4 <redacted> + 804
25 CoreFoundation 0x1919522b8 CFRunLoopRunSpecific + 444
26 CFNetwork 0x1921578f0 <redacted> + 336
27 Foundation 0x19258ce68 <redacted> + 1024
28 libsystem_pthread.dylib 0x190b0d850 <redacted> + 240
29 libsystem_pthread.dylib 0x190b0d760 _pthread_start + 282
30 libsystem_pthread.dylib 0x190b0ad94 thread_start + 4
These don't seem to have anything directly related to Alamofire. I suggest you report these crashes to Apple and see what they say. Also, asking a question on StackOverflow will get more eyes on these issues than you'd get here. I'm closing this, but please let us know if you learn anything more about this apparent increase in crashes on iOS 10.2.
Hello, in our latest release we used version 4.2 of Alamofire with iOS 10.2 as the base SDK.
All the CFNetwork crashes are gone!
Thanks!
I'm not using the Alamofire SDK (I use AFNetworking 3.1), and I still get lots of these same crashes!
Got this crash while using Alamofire 4.4.0.
Apart from Alamofire I use Mixpanel/OneSignal/Crashlytics,
I mean, other libs which use the network.
It might be some issue with them, but I saw this issue posted for Alamofire.
0  CFNetwork               TCPIOConnection::copyProperty(__CFString const*) + 44
1  CFNetwork               SPDYConnection::_onqueue_closeStream(SPDYStream*) + 236
2  CFNetwork               SPDYConnection::_onqueue_closeStream(SPDYStream*) + 236
3  CFNetwork               ___ZN14SPDYConnection19startEnqueuedStreamEP10SPDYStream_block_invoke_2 + 28
4  libdispatch.dylib       _dispatch_call_block_and_release + 24
5  libdispatch.dylib       _dispatch_client_callout + 16
6  libdispatch.dylib       _dispatch_queue_serial_drain + 928
7  libdispatch.dylib       _dispatch_queue_invoke + 884
8  libdispatch.dylib       _dispatch_root_queue_drain + 540
9  libdispatch.dylib       _dispatch_worker_thread3 + 124
10 libsystem_pthread.dylib _pthread_wqthread + 1096
| gharchive/issue | 2016-12-31T18:03:36 | 2025-04-01T04:32:14.770102 | {
"authors": [
"EugeneGoloboyar",
"cliapis",
"jshier",
"julianL0veios"
],
"repo": "Alamofire/Alamofire",
"url": "https://github.com/Alamofire/Alamofire/issues/1886",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
265158373 | How to send multiple header
let allHeader: [String: NSString] = ["Authorization": "bearer \(bearerToken!)" as NSString, "latitude": String(self.latitude) as NSString, "longitude": String(self.longitude) as NSString]
Alamofire.request(Constants.web_api, method: .post, parameters: nil, encoding: JSONEncoding.default,headers: allHeader).responseJSON { (responseData) -> Void in
I tried it like that but got the error "Extra argument 'method' in call".
I found the solution:
let allHeader: HTTPHeaders = [
"Authorization": bearerToken,
"latitude": String(self.latitude),
"longitude": String(self.longitude)
]
| gharchive/issue | 2017-10-13T03:21:44 | 2025-04-01T04:32:14.772847 | {
"authors": [
"akmalshukri"
],
"repo": "Alamofire/Alamofire",
"url": "https://github.com/Alamofire/Alamofire/issues/2323",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
819250963 | Delta Tail Logo
Add new icon(s) to the library
Please fill out the table below with the desired information per icon
| name | category | description |
| --- | --- | --- |
| tail-DL | logo | Delta Air Lines |
Icon art
The icon(s) per this request, has the new art been completed?
[x] Yes
[ ] No
If YES, does the new icon art follow all the recommendations from the Auro Icon Design Guidelines?
[x] Yes
[ ] No
If YES, the icon(s) in the request, have they been added to the UiKit Abstract repo?
[ ] Yes
[ ] No
Attach exported SVG files to this issue
See the options below for attaching a file to this request.
I’m confused as to the state of things. Can this tail be added to the lib?
| gharchive/issue | 2021-03-01T20:45:40 | 2025-04-01T04:32:14.786955 | {
"authors": [
"blackfalcon",
"gusnaughton"
],
"repo": "AlaskaAirlines/Icons",
"url": "https://github.com/AlaskaAirlines/Icons/issues/66",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2671650191 | [Perf] Avoid deserialization of previously added blocks
Motivation
Validators often optimistically request blocks (given that MAX_BLOCKS_BEHIND is only 1). As a result, validators often receive a block which is incredibly expensive to deserialize, even though they already advanced.
This PR avoids deserialization if we already advanced to request.end_height.
As a future optimization, we can also meddle with the deserialization of DataBlocks so we can deserialize only some blocks if we already advanced.
Test Plan
[ ] Local network failed, seemingly at the point where we switch from Block syncing to Consensus syncing
[ ] Ran deployed network
This PR breaks our fragile syncing logic. After resetting a validator ledger, they're unable to sync to tip. Given that the performance benefit is relatively small, this topic can be revisited when syncing is more extensively documented and modeled.
| gharchive/pull-request | 2024-11-19T10:36:23 | 2025-04-01T04:32:14.933365 | {
"authors": [
"vicsn"
],
"repo": "AleoNet/snarkOS",
"url": "https://github.com/AleoNet/snarkOS/pull/3444",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1376109830 | feat: reimplement #[type = ..] on enums for enum generation
This is a reimplementation of #24. I tried merging that PR into main but I had some merge issues because the PR is old.
All the tests pass and you can now add #[ts(type = "enum")] or #[ts(type = "const enum")] to generate an enum like
#[ts(type = "const enum")]
enum Dog {
Woof,
Bark,
Howl
}
generates a typescript enum like
const enum Dog { Woof, Bark, Howl }
Credits to @JakubKoralewski for implementing this.
Thanks, really appreciate it! The tests currently seem to fail, do you know why?
The problem seems to be running cargo test --no-default-features. I'll check it out.
Ok, I think I fixed it. The tests were using attribute macros from serde like serde(rename_all = ...) instead of ts(rename_all = ...) without having the right directives enabled. I changed them to the latter, so now it should pass.
@NyxCode is there any possibility for this one to be merged in when you have capacity?
@NyxCode This PR hasn't been active since 2022. Considering that and the fact that TS enums are generally considered to be really bad, should this PR be closed?
@escritorio-gustavo Agreed, especially with your current work on enums in mind, which might require some deep refactors.
If @pintariching is still interested in this, it'd probably make sense to re-write this based on the current master once enum flattening lands.
Great, given that I'll also close #24, since this is a reimplementation of that
| gharchive/pull-request | 2022-09-16T15:31:30 | 2025-04-01T04:32:14.937836 | {
"authors": [
"NyxCode",
"escritorio-gustavo",
"lorenzolewis",
"pintariching"
],
"repo": "Aleph-Alpha/ts-rs",
"url": "https://github.com/Aleph-Alpha/ts-rs/pull/118",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2647817518 | compile-time checked #[ts(optional)]
Builds on top of #366.
Gets rid of all string manipulation in the macro
Produces a compile-time error when #[ts(optional)] is used on a non-Option field. The error is not great yet - it's probably possible to improve this though.
#[ts(optional = nullable)]
d: i32,
error[E0308]: mismatched types
--> ts-rs/tests/integration/optional_field.rs:94:10
|
94 | #[derive(TS)]
| ^^
| |
| expected `PhantomData<Option<_>>`, found `PhantomData<i32>`
| expected due to this
|
= note: expected struct `PhantomData<std::option::Option<_>>`
found struct `PhantomData<i32>`
This is achieved (technically semver-breaking) by adding an associated type, OptionInnerType, to the TS trait. This type is set to Self for all types, except for Option<T>, where it is set to T.
With that, the behaviour of #[ts(optional)] never errors by itself. The mechanism to produce an error if it's used on a non-Option field is completely separate.
This is done by emitting this code:
use std::marker::PhantomData;
let actual: PhantomData<#ty> = PhantomData;
let must: PhantomData<Option<_>> = actual;
If #ty is not an Option, that code fails to compile, and produces the error as seen above.
Alternative way to provoke a compile-time error:
fn a<T>(_: std::option::Option<T>) {}
fn b<T>(x: #ty) { a(x) }
Notably, what unfortunately does not work is something like this:
let true = <#ty as TS>::IS_OPTION;
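For illustration, here is the PhantomData coercion described above as a standalone, compilable sketch (outside the derive macro, with Option<i32> standing in for a concrete field type):

```rust
use std::marker::PhantomData;

/// The check the derive emits, extracted for one concrete field type.
/// This function only compiles because `Option<i32>` really is an `Option`.
pub fn check_field_is_option() {
    let actual: PhantomData<Option<i32>> = PhantomData;
    let _must: PhantomData<Option<_>> = actual;
}

// For a non-`Option` field the same two lines are a compile error:
//     let actual: PhantomData<i32> = PhantomData;
//     let _must: PhantomData<Option<_>> = actual; // error[E0308]: mismatched types
```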
I massively improved the error. For
#[derive(TS)]
#[ts(export, export_to = "optional_field/", optional)]
struct OptionalStruct {
// `#[ts(optional)]` on a type that isn't `Option<T>` does nothing
#[ts(optional = nullable)]
d: i32,
}
we now get
error[E0277]: `#[ts(optional)]` can only be used on fields of type `Option`
--> ts-rs/tests/integration/optional_field.rs:104:5
|
104 | #[ts(optional = nullable)]
| ^
|
= help: the trait `IsOption` is not implemented for `i32`
= note: `#[ts(optional)]` was used on a field of type i32, which is not permitted
= help: the trait `IsOption` is implemented for `std::option::Option<T>`
There is a weird note attached to the error though, which I don't know how to get rid of:
note: required by a bound in `<OptionalStruct as TS>::inline::check_that_field_is_option`
--> ts-rs/tests/integration/optional_field.rs:94:10
|
94 | #[derive(TS)]
| ^^ required by this bound in `check_that_field_is_option`
...
104 | #[ts(optional = nullable)]
| - required by a bound in this function
= note: this error originates in the derive macro `TS` (in Nightly builds, run with -Z macro-backtrace for more info)
With how good the error message is overall, I don't think this is a problem
| gharchive/pull-request | 2024-11-11T00:12:55 | 2025-04-01T04:32:14.944416 | {
"authors": [
"NyxCode",
"gustavo-shigueo"
],
"repo": "Aleph-Alpha/ts-rs",
"url": "https://github.com/Aleph-Alpha/ts-rs/pull/367",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
92410946 | Outline on .trumbowyg-editor causes hidden insert marker
Chrome Version 43.0.2357.81 (64-bit) (not verified in other browsers)
When you focus the text area, the input marker doesn't show up until you start typing.
If you remove the outline in the inspector and focus, it shows the character, albeit with the ugly outline.
That's a Chrome bug :/ I can't do anything.
| gharchive/issue | 2015-07-01T15:17:21 | 2025-04-01T04:32:14.946332 | {
"authors": [
"Alex-D",
"Eein"
],
"repo": "Alex-D/Trumbowyg",
"url": "https://github.com/Alex-D/Trumbowyg/issues/155",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1888743573 | 🛑 Home is down
In 96eb234, Home ($SECRET_URL_HOME) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Home is back up in 979c041 after 2 hours, 31 minutes.
| gharchive/issue | 2023-09-09T13:27:47 | 2025-04-01T04:32:14.985454 | {
"authors": [
"AlexanderKlimashevskiy"
],
"repo": "AlexanderKlimashevskiy/upptime",
"url": "https://github.com/AlexanderKlimashevskiy/upptime/issues/223",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
822869937 | 🛑 Home is down
In b553a04, Home ($SECRET_URL_HOME) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Home is back up in 3e8ebd0.
| gharchive/issue | 2021-03-05T08:29:56 | 2025-04-01T04:32:14.987679 | {
"authors": [
"AlexanderKlimashevskiy"
],
"repo": "AlexanderKlimashevskiy/upptime",
"url": "https://github.com/AlexanderKlimashevskiy/upptime/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1212027180 | change compile to implementation
Fix Bug Language
If this can't be merged, how do I apply a patch in my project?
+1 to get this approved. patch-package is yuck.
+1 @AlexanderZaytsev, it solves the 0.68+ compile problem
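For context, the change in question replaces the deprecated Gradle `compile` configuration with `implementation` (required by newer Gradle / Android Gradle Plugin versions). A sketch of what the affected `build.gradle` dependency block looks like after the change (the exact dependency line is an assumption, not copied from the diff):

```groovy
dependencies {
    // before (deprecated, removed in Gradle 7):
    // compile project(':react-native-i18n')

    // after:
    implementation project(':react-native-i18n')
}
```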
| gharchive/pull-request | 2022-04-22T08:43:14 | 2025-04-01T04:32:14.989036 | {
"authors": [
"HeavenMin",
"YvesBoah",
"idanlevi1",
"martnd"
],
"repo": "AlexanderZaytsev/react-native-i18n",
"url": "https://github.com/AlexanderZaytsev/react-native-i18n/pull/291",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
409136338 | bluetooth detection not working
The pulseaudio module does not detect bluetooth.
relevant portion of pactl info sinks:
Sink #6
State: RUNNING
Name: bluez_sink.40_EF_4C_39_89_B2.a2dp_sink
Description: Razer Leviathan
Driver: module-bluez5-device.c
Sample Specification: s16le 2ch 44100Hz
Channel Map: front-left,front-right
Owner Module: 31
Mute: no
Volume: front-left: 19643 / 30% / -31.40 dB, front-right: 19643 / 30% / -31.40 dB
balance 0.00
Base Volume: 65536 / 100% / 0.00 dB
Monitor Source: bluez_sink.40_EF_4C_39_89_B2.a2dp_sink.monitor
Latency: 60133 usec, configured 45317 usec
Flags: HARDWARE DECIBEL_VOLUME LATENCY
Properties:
bluetooth.protocol = "a2dp_sink"
device.description = "Razer Leviathan"
device.string = "40:EF:4C:39:89:B2"
device.api = "bluez"
device.class = "sound"
device.bus = "bluetooth"
device.form_factor = "speaker"
bluez.path = "/org/bluez/hci0/dev_40_EF_4C_39_89_B2"
bluez.class = "0x240414"
bluez.alias = "Razer Leviathan"
device.icon_name = "audio-speakers-bluetooth"
Ports:
speaker-output: Speaker (priority: 0, available)
Active Port: speaker-output
Formats:
pcm
pactl info:
Server String: /run/user/1000/pulse/native
Library Protocol Version: 32
Server Protocol Version: 32
Is Local: yes
Client Index: 704
Tile Size: 65472
User Name: redacteduser
Host Name: redactedhost
Server Name: pulseaudio
Server Version: 12.2-rebootstrapped
Default Sample Specification: s16le 2ch 44100Hz
Default Channel Map: front-left,front-right
Default Sink: bluez_sink.40_EF_4C_39_89_B2.a2dp_sink
Default Source: alsa_input.pci-0000_00_1f.3.analog-stereo
Cookie: a098:0fbb
Built Waybar from d0370ac, installed dependencies (from Fedora 29 repos, excepting wlroots 0.3.0 and sway 1.0 rc1 built from source in the past week) until meson found everything (including optional dependencies).
Yep, but you can still define bluez_sink.40_EF_4C_39_89_B2.a2dp_sink in your config with an icon :)
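For anyone looking for that workaround: in the Waybar config this amounts to adding the sink name as a key under the pulseaudio module's `format-icons`. A sketch only; the exact key-matching behavior is my reading of the suggestion above, so treat it as an assumption:

```json
"pulseaudio": {
    "format": "{volume}% {icon}",
    "format-icons": {
        "bluez_sink.40_EF_4C_39_89_B2.a2dp_sink": "BT",
        "default": ""
    }
}
```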
The documentation and code suggest this is an existing feature, and the a2dp_sink string the code checks for / tries to check for is present. Is that code/feature abandoned?
I appreciate you sharing a workaround, looks like I'll need to do that for every Bluetooth device I connect.
Oh yeah forgot that feature, but you want a special icon, right? Because what is implemented is the ability to use a class
I think he is talking about the format-bluetooth options of the pulseaudio module, that is broken.
Correct. Edited issue for clarity
| gharchive/issue | 2019-02-12T06:10:20 | 2025-04-01T04:32:15.012815 | {
"authors": [
"Alexays",
"FlorianFranzen",
"dkrieger"
],
"repo": "Alexays/Waybar",
"url": "https://github.com/Alexays/Waybar/issues/169",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2086430332 | Symbol lookup error
Every time I try to run Waybar, I get this error:
waybar: symbol lookup error: /usr/lib/libtracker-sparql-3.0.so.0: undefined symbol: g_assertion_message_cmpint
I am using the Hyprland desktop environment.
Steps to reproduce:
Run Waybar from terminal
I think the provided steps to reproduce are not sufficient to get a proper context.
Can you please bisect your config and try to identify which Waybar module is exactly causing the problem? Can you provide a minimal Waybar config to reproduce? AFAICS the libtracker-sparql library is not directly used by Waybar nor Hyprland so it has probably to do with one module your config is calling.
Hi, did you find a solution for this?
| gharchive/issue | 2024-01-17T15:26:44 | 2025-04-01T04:32:15.014994 | {
"authors": [
"apiraino",
"calamitywoah",
"ziova"
],
"repo": "Alexays/Waybar",
"url": "https://github.com/Alexays/Waybar/issues/2841",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1744640266 | 122 - Implement SearchManagerContext and useSearchManager
Issue Number
#122
This PR is the latest codebase for the search results (relevant closed PRs 115(note), 116(note), 118(note)).
Purpose/Implementation Notes
API Integration for the refinebio search endpoint.
Using the context API and the refinebio API search endpoints, implemented the SearchManagerContext and its hook useSearchManager which includes necessary methods to perform searches.
Please review the latest UI here.
NOTE: Currently using the api.search.get method in /api directory, but it will be swapped with the refinebio-js helper's api.search.get method later on.
Types of changes
Breaking change (fix or feature that would cause existing functionality to not work as expected)
The files that need to be reviewed in this PR (the initial commit of this PR starts at 6bae79b and ends at 7eb1bdb - those commits includes the minor UI adjustments/typos and I included the detailed descriptions per commit for your easy review 🔎):
[ Newly added ]
Context: SearchManagerContext
Hook: useSearchManager
Helpers: formatURLString (this file is also included in PR 125), fetchSearch
[ Modified ]
Pages: _app, search, experiment (commit c619a37)
Config: options.js
Components:
Pagination (commit 4d068c6)
PageSizes (commit 9cd0ff6)
SearchBulkAction (commit 1f3e1ef)
NonDownloadableExperiment (commit 585d8d9),
SearchFilterList (commit ee5ada1)
SearchFilter (commit 7315f12, 9f5815d)
SearchCardHeader (commit bec5c90)
SearchCardBody (commit 35a0e0b)
SearchCardFooter (commit 0787554)
HomeHero (commit 6136b87)
Other: next.config.js (commit a6cca83, c5b9266)
NOTE I merged the dev branch.
Functional tests
List out the functional tests you've completed to verify your changes work locally.
Checklist
[x] Lint and unit tests pass locally with my changes
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] I have added necessary documentation (if appropriate)
[ ] Any dependent changes have been merged and published in
@davidsmejia, as requested in today's 1:1, I prepared the PRs for you to review tomorrow during the time you allocated for PR review.
3 of 3
This is the last PR for review. This is the latest search result page. I also updated my initial PR comment and included a lit of files for you to review. Let me know if you have any questions!
NOTE:
After this PR review, PR 123, and then 131 should be reviewed.
Due to the dataset manager implementation and API integration (currently in progress), some of the older PRs (for the dataset/download pages) will be be outdated and closed.
p should not exist on the client side in the new version, we should only have offset and limit
I used p (the currently selected page number) and size (the number of results currently shown on the page) as parameters, not only to match the refinebio-frontend implementation, but also to create a user-friendly URL. For instance, the interpretation of p would be easier for users to understand than offset, since it's commonly used (e.g., when a user wants to go to page 5 by simply altering the URL). What do you think 💬 ?
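To illustrate that mapping (a hypothetical helper, not the actual code in this PR), the user-facing p/size pair converts to the API's offset/limit like so:

```javascript
// Hypothetical helper: convert user-friendly URL params (p = 1-based page,
// size = results per page) into the API's offset/limit pair.
const toApiPagination = ({ p = 1, size = 10 } = {}) => ({
  limit: size,
  offset: (p - 1) * size,
})
```

So `?p=5&size=10` becomes `offset=40&limit=10`, keeping the URL readable while the API still receives offset/limit.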
I also do not understand what's going on in src/helpers/getSearchQueryForAPI.js. If this is to support previous existing urls we should make that clearer in the code.
This helper is used to filter out any parameters (e.g., empty, sortby) that are only used on the client side when making API requests to the server. In the helper file, I'll include a comment describing what it does and improve it as needed 👍
Other than that looks good!
🥳 🎉
| gharchive/pull-request | 2023-06-06T21:30:18 | 2025-04-01T04:32:15.121712 | {
"authors": [
"nozomione"
],
"repo": "AlexsLemonade/refinebio-web",
"url": "https://github.com/AlexsLemonade/refinebio-web/pull/126",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1875723028 | Save normalized counts in AnnData to X
According to the CZI schema, the raw counts should be saved to the X layer, unless normalized data is also included. In the event that normalized data is present the raw counts should be moved to raw.X and the normalized data stored in X. Right now, we are always storing the raw counts in X regardless of the presence of normalized counts. According to our design discussion notes about these changes, we would like to make this change, we should just be sure to document it!
For filtered and unfiltered counts before adding in normalized counts, do we want to have counts as X or as raw.X?
filtered and unfiltered store as X and then switch to the normalized counts being X in the normalized object only
I think the best way to do this is to add a check for the presence of the logcounts assay in the SCE object before converting to AnnData. Then we can assign the X assay by adding a variable to scpcaTools::sce_to_anndata() that lets us dictate which assay to save as the X layer in the AnnData object. Something like:
if("logcounts" %in% assayNames(sce)){
x_assay <- "logcounts"
} else {
x_assay <- "counts"
}
Closed by #439
| gharchive/issue | 2023-08-31T15:05:44 | 2025-04-01T04:32:15.125449 | {
"authors": [
"allyhawkins"
],
"repo": "AlexsLemonade/scpca-nf",
"url": "https://github.com/AlexsLemonade/scpca-nf/issues/429",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1901477779 | Production Deploy
Production Deploy
Change Log
Fix the button style overwritten issue
🗒️ We'll deploy the update to production after the minor tweaks on the contribution card, thus closing this PR.
| gharchive/pull-request | 2023-09-18T18:04:22 | 2025-04-01T04:32:15.127036 | {
"authors": [
"nozomione"
],
"repo": "AlexsLemonade/scpca-portal",
"url": "https://github.com/AlexsLemonade/scpca-portal/pull/441",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1851795427 | POWR2 - massive amounts of PTR requests
Hi,
I have a massive amount of PTR requests from the POWR2 device at my Pi-hole.
It is one request every second.
Which logs could help here?
Thanks
Hyper
It just stopped spamming PTR and I have no clue why 🤷♂️
| gharchive/issue | 2023-08-15T16:53:13 | 2025-04-01T04:32:15.128462 | {
"authors": [
"HyperCriSiS"
],
"repo": "AlexxIT/SonoffLAN",
"url": "https://github.com/AlexxIT/SonoffLAN/issues/1218",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
777385458 | Uncaught Thread Exception
Hi,
I am using a Sonoff Pow R2 and added it to Home Assistant (Hass). However, the switch often becomes unavailable after a short time.
If I restart Hass, it comes available and then goes unavailable again. Here are the logs from Hass.
Source: custom_components/sonoff/sonoff_local.py:185
First occurred: 8:44:26 AM (1 occurrences)
Last logged: 8:44:26 AM
Uncaught thread exception
Traceback (most recent call last):
File "/usr/local/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.8/site-packages/zeroconf/__init__.py", line 1750, in run
self._service_state_changed.fire(
File "/usr/local/lib/python3.8/site-packages/zeroconf/__init__.py", line 1508, in fire
h(**kwargs)
File "/config/custom_components/sonoff/sonoff_local.py", line 185, in _zeroconf_handler
state = json.loads(data)
File "/usr/local/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.8/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 1 column 188 (char 187)
Please go to file /config/custom_components/sonoff/sonoff_local.py and add line:
_LOGGER.warning(data)
before line 185:
state = json.loads(data)
And then show the logs when the error occurs
Logger: custom_components.sonoff.sonoff_local
Source: custom_components/sonoff/sonoff_local.py:184
Integration: Sonoff (documentation, issues)
First occurred: 9:02:03 AM (68 occurrences)
Last logged: 9:03:20 AM
b'{"alarmVValue":[-1,-1],"alarmCValue":[-1,-1],"alarmPValue":[-1,-1],"switch":"on","startup":"on","pulse":"off","pulseWidth":500,"sledOnline":"on","power":42.72,"voltage":229.48,"current":0.20,"ssid":"Uknown","bssid":"88:d2:74:ff:55:8b"}'
b'{"alarmVValue":[-1,-1],"alarmCValue":[-1,-1],"alarmPValue":[-1,-1],"switch":"on","startup":"on","pulse":"off","pulseWidth":500,"sledOnline":"on","power":42.77,"voltage":229.09,"current":0.20,"ssid":"Uknown","bssid":"88:d2:74:ff:55:8b"}'
b'{"alarmVValue":[-1,-1],"alarmCValue":[-1,-1],"alarmPValue":[-1,-1],"switch":"on","startup":"on","pulse":"off","pulseWidth":500,"sledOnline":"on","power":42.91,"voltage":228.95,"current":0.20,"ssid":"Uknown","bssid":"88:d2:74:ff:55:8b"}'
b'{"alarmVValue":[-1,-1],"alarmCValue":[-1,-1],"alarmPValue":[-1,-1],"switch":"on","startup":"on","pulse":"off","pulseWidth":500,"sledOnline":"on","power":42.87,"voltage":229.29,"current":0.20,"ssid":"Uknown","bssid":"88:d2:74:ff:55:8b"}'
b'{"alarmVValue":[-1076222718,-1076222718],"alarmCValue":[-1076222718,-1076222718],"alarmPValue":[-1076222718,-1076222718],"switch":"on","startup":"on","pulse":"off","pulseWidth":2147483647.-2147483549,"sledOnline":"on","power":2147483647.-2147483549,"voltage":2147483647.-2147483549,"current":0.20,"ssid":"xxxxxx","bssid":"xx:xx:xx:xx:xx:xx"}'
I don't see the voltage and power updated. I set them to refresh every 1 s.
Here is my configuration.yaml for Sonoff Pow R2.
sonoff:
sensors: [temperature, humidity, power, current, voltage, rssi]
force_update: [temperature, humidity, power, current, voltage, rssi]
scan_interval: '00:00:01' # (optional) default 5 minutes
devices:
xxx:
devicekey: xxx
name: Master Fan Switch
1 second is too low an interval. If you are using auto mode, you can remove any force_update. With a cloud connection, the POW will send data updates in real time.
Dear Alex.
Thanks for your feedback. I just wonder how to set auto mode? I want to use the local LAN only, no cloud.
In addition, I want to set scan_interval = 1s because I want to set the status of my fan based on the voltage. For instance, if the current >= 0.1 A --> fan on, else off.
In general, I am facing 2 issues:
My power consumption sensors do not work. Here is the configuration.
platform: template
sensors:
sonoff_today_consumption:
friendly_name: Today consumption
unit_of_measurement: kWh
value_template: "{{ state_attr('switch.sonoff_xxx', 'consumption').0 }}"
sonoff_ten_days_consumption:
friendly_name: 10 days consumption
unit_of_measurement: kWh
value_template: "{% set p=state_attr('switch.sonoff_xxx', 'consumption') %}{{ p[:10]|sum if p }}"
My Sonoff Pow R2 is often unavailable.
Many thanks for your help!
Consumption loads from cloud
Fixed in latest release
Thanks Alex!
| gharchive/issue | 2021-01-02T01:53:18 | 2025-04-01T04:32:15.146884 | {
"authors": [
"AlexxIT",
"trannamhung"
],
"repo": "AlexxIT/SonoffLAN",
"url": "https://github.com/AlexxIT/SonoffLAN/issues/333",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1419104257 | bt mesh entities don't reflect the status changed by Mijia app but can be only on/off
Has anyone faced this same issue?
"bt mesh switch entities don't reflect the status changed by Mijia app but can be only set on / off"
I am using the updated firmware for the Mi smart home hub ZNDMWG02LM and installed one Xiaomi hub 2 ZSWG01CM.
My HA server worked well before, and this issue just appeared around 6 days ago.
{
"home_assistant": {
"installation_type": "Home Assistant Supervised",
"version": "2022.10.5",
"dev": false,
"hassio": true,
"virtualenv": false,
"python_version": "3.10.5",
"docker": true,
"arch": "x86_64",
"timezone": "Asia/Taipei",
"os_name": "Linux",
"os_version": "5.15.0-52-generic",
"supervisor": "2022.10.0",
"host_os": "Ubuntu 20.04.5 LTS",
"docker_version": "20.10.20",
"chassis": "desktop",
"run_as_root": true
},
"custom_components": {
"tapo": {
"version": "1.2.16",
"requirements": [
"plugp100==2.1.18"
]
},
"aqara_gateway": {
"version": "0.0.14",
"requirements": [
"paho-mqtt>=1.5.0"
]
},
"xiaomi_miio_raw": {
"version": "2022.8.0.0",
"requirements": [
"construct==2.10.56",
"python-miio>=0.5.12"
]
},
"sonoff": {
"version": "3.3.1",
"requirements": [
"pycryptodome>=3.6.6"
]
},
"xiaomi_miio_fan": {
"version": "2022.8.0.0",
"requirements": [
"construct==2.10.56",
"python-miio>=0.5.12"
]
},
"var": {
"version": "0.15.0",
"requirements": []
},
"hacs": {
"version": "1.28.2",
"requirements": [
"aiogithubapi>=22.2.4"
]
},
"tuya_v2": {
"version": "1.2",
"requirements": [
"tuya-iot-py-sdk==0.2.2"
]
},
"xiaomi_cloud_map_extractor": {
"version": "v2.2.0",
"requirements": [
"pillow",
"pybase64",
"python-miio",
"requests",
"pycryptodome"
]
},
"webrtc": {
"version": "v2.3.1",
"requirements": []
},
"xiaomi_gateway3": {
"version": "2.1.2",
"requirements": [
"zigpy>=0.33.0"
]
},
"line_notify": {
"version": "1.0.1",
"requirements": []
},
"tapo_control": {
"version": "3.7.0",
"requirements": [
"pytapo==2.3",
"onvif-zeep-async==1.2.0"
]
},
"hello_miai": {
"version": "1.0.0",
"requirements": []
},
"dohome": {
"version": "0.2.0",
"requirements": []
},
"xiaomi_miio_cooker": {
"version": "2022.8.0.0",
"requirements": [
"construct==2.10.56",
"python-miio>=0.5.12"
]
},
"nodered": {
"version": "1.1.2",
"requirements": []
},
"xiaomi_miot": {
"version": "0.7.0",
"requirements": [
"construct==2.10.56",
"python-miio>=0.5.6",
"micloud>=0.3"
]
},
"yeelight_bt": {
"version": "1.3.0",
"requirements": [
"bleak>=0.18.0",
"bleak-retry-connector>=2.1.3"
]
},
"notify_line": {
"version": "2022.7.1",
"requirements": []
},
"zhimi": {
"version": "1.0.1",
"requirements": [
"miservice>=2.0.1"
]
}
},
"integration_manifest": {
"domain": "xiaomi_gateway3",
"name": "Xiaomi Gateway 3",
"config_flow": true,
"documentation": "https://github.com/AlexxIT/XiaomiGateway3",
"issue_tracker": "https://github.com/AlexxIT/XiaomiGateway3/issues",
"codeowners": [
"@AlexxIT"
],
"dependencies": [
"http"
],
"requirements": [
"zigpy>=0.33.0"
],
"version": "2.1.2",
"iot_class": "local_push",
"is_built_in": false
},
"data": {
"version": "9459a0f",
"options": {
"host": "***",
"token": "***",
"telnet_cmd": "{\"method\":\"set_ip_info\",\"params\":{\"ssid\":\"\\\"\\\"\",\"pswd\":\"123123 ; passwd -d admin ; echo enable > /sys/class/tty/tty/enable; telnetd\"}}",
"ble": true,
"stats": true,
"debug": [
"true"
],
"buzzer": true,
"memory": true,
"zha": false
},
"errors": [],
"device": {
"type": "mesh",
"model": 3789,
"fw_ver": null,
"available": true,
"decode_time": 2,
"encode_time": 3,
"entities": {
"channel_1": "off",
"channel_2": "off",
"mesh": {
"state": "2022-10-22T05:43:12+00:00",
"value": "2022-10-22T05:43:12.780703+00:00"
}
},
"gateways": [
"54ef4424c031",
"54ef4431600e",
"54ef4423bfed"
],
"stats": {
"mac": "b460ede09cc6",
"available": true,
"msg_received": 5,
"last_msg": "3.p.1"
},
"unique_id": "b460ede09cc6"
},
"logger": null
}
}
I encountered the same situation.
Is problem still actual?
Is problem still actual?
thanks a lot, but
yes, it is still there.
The diagnostics are below for your reference.
If necessary, I can share my Mijia app devices with you and let you log into my HA system to check it.
{
"home_assistant": {
"installation_type": "Home Assistant OS",
"version": "2023.1.7",
"dev": false,
"hassio": true,
"virtualenv": false,
"python_version": "3.10.7",
"docker": true,
"arch": "aarch64",
"timezone": "Asia/Taipei",
"os_name": "Linux",
"os_version": "5.15.90",
"supervisor": "2023.01.1",
"host_os": "Home Assistant OS 9.5",
"docker_version": "20.10.22",
"chassis": "embedded",
"run_as_root": true
},
"custom_components": {
"notify_line": {
"version": "2022.7.1",
"requirements": []
},
"zhimi": {
"version": "1.0.1",
"requirements": [
"miservice>=2.0.1"
]
},
"line_notify": {
"version": "1.0.1",
"requirements": []
},
"dohome": {
"version": "0.2.0",
"requirements": []
},
"tuya_v2": {
"version": "1.2",
"requirements": [
"tuya-iot-py-sdk==0.2.2"
]
},
"sonoff": {
"version": "3.3.1",
"requirements": [
"pycryptodome>=3.6.6"
]
},
"webrtc": {
"version": "v3.0.1",
"requirements": []
},
"xiaomi_miio_cooker": {
"version": "2022.11.0.0",
"requirements": [
"construct==2.10.56",
"python-miio>=0.5.12"
]
},
"hello_miai": {
"version": "1.0.0",
"requirements": []
},
"yeelight_bt": {
"version": "1.3.0",
"requirements": [
"bleak>=0.18.0",
"bleak-retry-connector>=2.1.3"
]
},
"xiaomi_miot": {
"version": "0.7.5",
"requirements": [
"construct==2.10.56",
"python-miio>=0.5.6",
"micloud>=0.3"
]
},
"frigate": {
"version": "3.0.0",
"requirements": []
},
"xiaomi_gateway3": {
"version": "3.0.1",
"requirements": [
"zigpy>=0.42.0"
]
},
"nodered": {
"version": "1.1.2",
"requirements": []
},
"hacs": {
"version": "1.30.1",
"requirements": [
"aiogithubapi>=22.10.1"
]
},
"tapo_control": {
"version": "4.2.1",
"requirements": [
"pytapo==2.9.2",
"onvif-zeep-async==1.2.0"
]
},
"aqara_gateway": {
"version": "0.0.14",
"requirements": [
"paho-mqtt>=1.5.0"
]
},
"xiaomi_cloud_map_extractor": {
"version": "v2.2.0",
"requirements": [
"pillow",
"pybase64",
"python-miio",
"requests",
"pycryptodome"
]
},
"xiaomi_miio_fan": {
"version": "2022.8.0.0",
"requirements": [
"construct==2.10.56",
"python-miio>=0.5.12"
]
},
"var": {
"version": "0.15.0",
"requirements": []
},
"xiaomi_miio_raw": {
"version": "2022.12.0.0",
"requirements": [
"construct==2.10.56",
"python-miio>=0.5.12"
]
}
},
"integration_manifest": {
"domain": "xiaomi_gateway3",
"name": "Xiaomi Gateway 3",
"config_flow": true,
"documentation": "https://github.com/AlexxIT/XiaomiGateway3",
"issue_tracker": "https://github.com/AlexxIT/XiaomiGateway3/issues",
"codeowners": [
"@AlexxIT"
],
"dependencies": [
"http"
],
"requirements": [
"zigpy>=0.42.0"
],
"version": "3.0.1",
"iot_class": "local_push",
"is_built_in": false
},
"data": {
"version": "41fa3b1",
"options": {
"host": "",
"token": ""
},
"errors": [],
"device": {
"type": "mesh",
"model": 2007,
"fw_ver": null,
"available": true,
"decode_time": 51,
"encode_time": 52,
"entities": {
"switch": "off"
},
"gateways": [
"54ef444128f0",
"54ef443f1e94"
],
"unique_id": "5ce50cef2ebc"
},
"logger": null
}
}
I can see you using multiple gateways. In my experience, this can be a problem. These gateways have very glitchy Mesh software when there are multiple gateways.
Are you using the latest firmware? 1.5.4?
I can see you using multiple gateways. In my experience, this can be a problem. These gateways have very glitchy Mesh software when there are multiple gateways. Are you using the latest firmware? 1.5.4?
Sorry, I should have informed you beforehand.
I have already updated all the gateways; please check the chart below.
Looks like you have some "blind gateway" still running?
This makes it impossible for the integration to pull the status of the device, since it never reaches the gateway.
Maybe you can try to disable gateways that aren't marked as auxiliary gateways or hubs and try again.
It looks like you still have the blind gateway turned on.
This integration can only read devices connected to the multi-mode gateway.
If a device connects to the blind gateway, it cannot be controlled.
You can try turning off the blind gateway first.
Looks like you have some "blind gateway" still running? This makes it impossible for the integration to pull the status of the device, since it never reaches the gateway. Maybe you can try to disable gateways that aren't marked as auxiliary gateways or hubs and try again.
Thanks for your suggestion.
But my question is: what if it just connects to the Xiaomi central gateway directly?
So I suspect the Xiaomi central gateway is the root cause.
Yes, the black gateway ZSWG01CM is the root cause.
I have the same issue, all mesh devices (I have mesh switches and mesh bulbs) can only send status from HA to MiHome, but cannot receive the status change from MiHome. If I disconnect the black gateway, then everything works well, but this is not a solution.
Same as #975
https://github.com/AlexxIT/XiaomiGateway3/releases/tag/v3.2.0
| gharchive/issue | 2022-10-22T05:44:47 | 2025-04-01T04:32:15.180435 | {
"authors": [
"AlexxIT",
"kingwap99",
"oimag",
"super-new13",
"xrh0905"
],
"repo": "AlexxIT/XiaomiGateway3",
"url": "https://github.com/AlexxIT/XiaomiGateway3/issues/849",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
874041271 | Chocolate Distribution Problem
Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is.
Would like to add Python solution for chocolate distribution problem
Link to the question : https://practice.geeksforgeeks.org/problems/chocolate-distribution-problem3825/1
Describe the solution you'd like
A clear and concise description of what you want to happen.
Consider an array A, M students, and let N be the size of the array
Sort the array
Maximum and minimum are the last and first elements of the array
Store the difference of max and min in min_diff
Iterate through the array
Since the difference has to be minimal, find the minimum of min_diff and the difference between A[i + M - 1] and A[i]
and store it in min_diff
Return min_diff
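The steps above can be sketched in Python (the function name and sample input are illustrative):

```python
def min_diff_distribution(packets, m):
    """Distribute packets of chocolates among m students so that the
    difference between the largest and smallest packet given out is minimal."""
    n = len(packets)
    if m == 0 or n == 0 or m > n:
        return 0
    packets = sorted(packets)
    # Worst case: difference between the overall max and min.
    min_diff = packets[-1] - packets[0]
    # Every window of m consecutive sorted values is a candidate assignment.
    for i in range(n - m + 1):
        min_diff = min(min_diff, packets[i + m - 1] - packets[i])
    return min_diff

print(min_diff_distribution([7, 3, 2, 4, 9, 12, 56], 3))  # → 2
```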
Do you want to work on it
Mention the one language you want to be assigned for this issue.
Python
Please assign me this issue as a GSSOC 21 participant
Please assign this to me in C++
| gharchive/issue | 2021-05-02T20:50:21 | 2025-04-01T04:32:15.222196 | {
"authors": [
"Srishti013",
"dsrao711"
],
"repo": "Algo-Phantoms/Algo-Tree",
"url": "https://github.com/Algo-Phantoms/Algo-Tree/issues/1855",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
831257002 | Issue 528
Hey @tarun26091999 @rudrakshi99
I have added the code for the longest increasing subsequence in python for the issue - 528.
Please look at the code and merge it.
#528
change the readme
| gharchive/pull-request | 2021-03-14T20:52:33 | 2025-04-01T04:32:15.223508 | {
"authors": [
"Jagruthi13",
"tarun26091999",
"yasharth291"
],
"repo": "Algo-Phantoms/Algo-Tree",
"url": "https://github.com/Algo-Phantoms/Algo-Tree/pull/1012",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2582670396 | [NEW ALGORITHM] Hopcroft-Karp Algorithm (Maximum Bipartite Matching)
Issue will be closed if:
You mention more than one algorithm. You can create a separate issue for each algorithm once the current one is completed.
You propose an algorithm that is already present or has been mentioned in a previous issue.
You create a new issue without completing your previous issue.
Note: These actions will be taken seriously. Failure to follow the guidelines may result in the immediate closure of your issue.
Name:
[Hopcroft-Karp Algorithm (Maximum Bipartite Matching)]
About:
Propose a new algorithm to be added to the repository
The Hopcroft-Karp algorithm finds the maximum matching in a bipartite graph. It is widely used in network flow algorithms and scheduling.
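As a sketch of the proposal (written in Python rather than this repository's C, with an adjacency-list input format chosen for illustration), the BFS-layering plus layered-DFS structure of Hopcroft-Karp looks like this:

```python
from collections import deque

INF = float("inf")

def hopcroft_karp(adj, n_left, n_right):
    """Maximum matching in a bipartite graph in O(E * sqrt(V)).
    adj[u] lists the right-side neighbours of left vertex u (0-indexed)."""
    match_l = [-1] * n_left   # match_l[u] = right vertex matched to u
    match_r = [-1] * n_right  # match_r[v] = left vertex matched to v
    dist = [0] * n_left

    def bfs():
        # Layer the graph starting from all free left vertices.
        q = deque()
        for u in range(n_left):
            if match_l[u] == -1:
                dist[u] = 0
                q.append(u)
            else:
                dist[u] = INF
        found = False
        while q:
            u = q.popleft()
            for v in adj[u]:
                w = match_r[v]
                if w == -1:
                    found = True          # a free right vertex is reachable
                elif dist[w] == INF:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return found

    def dfs(u):
        # Follow the BFS layering to find a shortest augmenting path.
        for v in adj[u]:
            w = match_r[v]
            if w == -1 or (dist[w] == dist[u] + 1 and dfs(w)):
                match_l[u], match_r[v] = v, u
                return True
        dist[u] = INF
        return False

    matching = 0
    while bfs():
        for u in range(n_left):
            if match_l[u] == -1 and dfs(u):
                matching += 1
    return matching
```

A C version would replace the lists with fixed arrays and an explicit queue, but the two-phase structure stays the same.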
Labels:
new algorithm, gssoc-ext, hacktoberfest, level1
Assignees:
[x] Contributor in GSSoC-ext
[x] Want to work on it
@amanver45 assigned
| gharchive/issue | 2024-10-12T07:59:14 | 2025-04-01T04:32:15.227061 | {
"authors": [
"amanver45",
"pankaj-bind"
],
"repo": "AlgoGenesis/C",
"url": "https://github.com/AlgoGenesis/C/issues/588",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2608697188 | Create Program.c
For issue no #1176
not assigned
| gharchive/pull-request | 2024-10-23T13:49:41 | 2025-04-01T04:32:15.227865 | {
"authors": [
"aniruddhaadak80",
"pankaj-bind"
],
"repo": "AlgoGenesis/C",
"url": "https://github.com/AlgoGenesis/C/pull/1178",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
137221221 | Replace T with argument of Type
Fixed #12
It replaces T with cellType(viewType) because T is a reference to the superclass of cellType.
If you find my mistake, close this PR.
Thanks.
Thanks for the PR!
I'm a bit surprised as I thought https://github.com/AliSoftware/Reusable/pull/11/files already addressed that issue. Will take a look at this in details this week!
Ok I see now, #11 only changed it in dequeue… methods but not in register…, gotcha!
Well spotted sir, will merge tonight once at home!
Version 2.2.1 is now available on CocoaPods!
Thanks again for the Pull Request :+1:
Thank you for updating. :) :+1:
| gharchive/pull-request | 2016-02-29T11:05:05 | 2025-04-01T04:32:15.238586 | {
"authors": [
"AliSoftware",
"narirou"
],
"repo": "AliSoftware/Reusable",
"url": "https://github.com/AliSoftware/Reusable/pull/13",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
712818999 | Add Sift and engagio
fixes #3403
Thanks!
| gharchive/pull-request | 2020-10-01T12:57:24 | 2025-04-01T04:32:15.239454 | {
"authors": [
"AliasIO",
"honjes"
],
"repo": "AliasIO/wappalyzer",
"url": "https://github.com/AliasIO/wappalyzer/pull/3404",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1986988249 | What Features are Used in the Geometric Pre-training Tasks in GeoLayoutLM?
I appreciate the work done with GeoLayoutLM, but I find the explanation of the geometric pre-training tasks in the paper not very clear. The paper mentions that the input of these tasks is the text-segment features {Bi}, which can be any of the five features {Hi}, {Mvi}, {Fvi}, {Mt,b(i)}, {Fi,b,(i)}; pre-training tasks such as P = Softmax(Linear([Bi, Bj])) (Direction and Distance Modeling) are then set up on top of them. However, it is not specified which of the five features is the Bi used in these tasks. Could you please provide more clarity on this matter?
The geometric pre-training objective is applied to each of the 5 features respectively. The final objective is the sum of them, plus an MVLM loss.
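For illustration only, here is a minimal pure-Python sketch of P = Softmax(Linear([Bi, Bj])) as described in the thread; this is not the authors' implementation, and the number of relation classes and the weight/bias shapes are assumptions. The idea: concatenate the two segment features, apply a linear layer, and normalize over relation classes.

```python
import math

def pairwise_relation_probs(b_i, b_j, weights, bias):
    """Sketch of P = Softmax(Linear([B_i, B_j])) for one segment pair.
    weights: one row of length len(b_i) + len(b_j) per relation class."""
    pair = b_i + b_j  # concatenate the two segment feature vectors
    logits = [sum(w * x for w, x in zip(row, pair)) + b
              for row, b in zip(weights, bias)]
    m = max(logits)                      # subtract max for numeric stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Two hypothetical relation classes over 2-dim segment features:
print(pairwise_relation_probs([1.0, 2.0], [3.0, 4.0],
                              [[0.1] * 4, [0.2] * 4], [0.0, 0.0]))
```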
Thank you very much for the prompt reply
By the way, do the features of the mask tokens not participate in these geometric pre-training tasks?
By the way, do the features of the mask tokens not participate in these geometric pre-training tasks?
They still probably participate in geometric pre-training tasks, since their coordinates are not masked.
Thank you very much for the prompt reply
| gharchive/issue | 2023-11-10T06:17:16 | 2025-04-01T04:32:15.242295 | {
"authors": [
"ccx1997",
"minhoooo1"
],
"repo": "AlibabaResearch/AdvancedLiterateMachinery",
"url": "https://github.com/AlibabaResearch/AdvancedLiterateMachinery/issues/68",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2733945890 | Add test on language model settings page
Adds test link and green message on success, red on error.
If this looks ok, I can also add a test link on the API Service settings page as shown here: #436
I tested this PR on my setup with Anthropic, OpenAI, Gemini, Groq, and Ollama.
I think I need to add tests for the new test controller action. And probably the model and service too.
Nice improvements! I'm merging this in now.
| gharchive/pull-request | 2024-12-11T20:37:00 | 2025-04-01T04:32:15.288078 | {
"authors": [
"krschacht",
"mattlindsey"
],
"repo": "AllYourBot/hostedgpt",
"url": "https://github.com/AllYourBot/hostedgpt/pull/584",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
217362133 | Sort by column header only
I want the table to be sorted when a user clicks on the header line, not on the rows. When I enabled dataSort, clicking on rows also sorts the table. Is this supported?
@shaohui-liu2000
Did you resolve this problem?
| gharchive/issue | 2017-03-27T20:15:49 | 2025-04-01T04:32:15.302554 | {
"authors": [
"reactionic127",
"shaohui-liu2000"
],
"repo": "AllenFang/react-bootstrap-table",
"url": "https://github.com/AllenFang/react-bootstrap-table/issues/1177",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2576464856 | Add button to download entire query metadata
Among the query options like Duplicate I think it would be nice to have an option to download the entire metadata set.
After chatting with scientists I think this is YAGNI
| gharchive/issue | 2024-10-09T16:56:40 | 2025-04-01T04:32:15.316022 | {
"authors": [
"SeanLeRoy"
],
"repo": "AllenInstitute/biofile-finder",
"url": "https://github.com/AllenInstitute/biofile-finder/issues/263",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1465323025 | Refactor package
Standard python package structure with setup.py, parallax root code folder
Relative imports within package
Test code moved to scripts outside package
Fixed all import * instances
Some cleanup for lib and helper (more needed)
PEP8-compliant file and variable naming
Add startup script
Add license file (presumed AISL)
Should not be merged until #5 is complete.
Probably easiest to review by looking at diffs one commit at a time.
Should not be merged until #5 is complete.
Do you mean #6 ?
Thanks; #6 is correct
Thanks Luke! I pushed some changes on top of this branch:
reverted the license to MIT (per AIND open science guidelines)
fixed merge conflicts with master
fixed some bugs introduced by snake case; mostly these were due to Qt widget methods no longer being overridden (e.g. key_press_event())
removed some unused functions from lib and helper
Let me know what you think!
Everything looks good to me!
| gharchive/pull-request | 2022-11-27T03:56:25 | 2025-04-01T04:32:15.320914 | {
"authors": [
"campagnola",
"chronopoulos"
],
"repo": "AllenNeuralDynamics/parallax",
"url": "https://github.com/AllenNeuralDynamics/parallax/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
css3transform NPM package contains extra content, can it be cleaned up?
I just scanned my project with OmniDiskSweeper and found that css3transform is a whopping 60 MB, all of it taken up by react/example/node_modules.
At first I thought my npm was broken, but it turned out the downloaded package really is that large: https://registry.npmjs.org/css3transform/-/css3transform-1.1.5.tgz
Ah, thanks. It has been cleaned up and pushed to npm~~
| gharchive/issue | 2017-06-07T09:26:03 | 2025-04-01T04:32:15.325062 | {
"authors": [
"dntzhang",
"imyzf"
],
"repo": "AlloyTeam/AlloyTouch",
"url": "https://github.com/AlloyTeam/AlloyTouch/issues/68",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
986212314 | Submit job along with job definition and dependecies as jars to job service
Is your feature request related to a problem? Please describe.
Currently with job service, the job definition and dependent classes are part of and built along with Alluxio. When debugging an issue, one needs to recompile the whole project, restart the whole cluster running job service with updated Alluxio binaries, and rerun the updated job. This is cumbersome and error-prone.
Describe the solution you'd like
Allow the job definition to be decoupled from the job execution, i.e. the job master accepts an arbitrary executable jar along with all its dependencies, and schedules the job to be run on job workers. The job service does not need to be restarted.
Describe alternatives you've considered
None.
Urgency
Medium.
Additional context
This issue is from discussion with @maobaolong. They encountered the problem running stress benchmarks, when they needed to tweak the benchmarks to accommodate their cluster.
FYI @yuzhu @LuQQiu.
With this feature request, we are almost bordering on a general purpose execution framework. Not sure we want job service to be that, or reinvent the wheel while there are many excellent implementations already.
Since it is a such a major shift in the job service responsibility, it will require some discussion and planning.
FYI @apc999 @madanadit
@maobaolong do you have some more usecase other than benchmarking at this point for this feature request?
| gharchive/issue | 2021-09-02T06:20:05 | 2025-04-01T04:32:15.328948 | {
"authors": [
"dbw9580",
"yuzhu"
],
"repo": "Alluxio/alluxio",
"url": "https://github.com/Alluxio/alluxio/issues/14016",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
574379244 | Port changes of [#11061] to branch-2.2
Improves the expression and formats slightly. Also mention in IDE integration.
pr-link: Alluxio/alluxio#11061
change-id: cid-3850948560d5325da060916849ae38b48a91b6f8
The auto cherry-pick of this PR failed so I'm manually doing this. Seems to be due to a conflict on whitespacing.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/8627/
Test FAILed.
alluxio-bot, test this please
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/8628/
Test PASSed.
alluxio-bot, merge this please
| gharchive/pull-request | 2020-03-03T02:47:16 | 2025-04-01T04:32:15.332975 | {
"authors": [
"AmplabJenkins",
"gpang",
"jiacheliu3"
],
"repo": "Alluxio/alluxio",
"url": "https://github.com/Alluxio/alluxio/pull/11099",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
858618004 | Support open file for override
With this PR, the following bash commands can execute successfully.
echo hello>mountPoint/file
echo world>mountPoint/file
Codecov Report
Merging #13236 (4cc3491) into master (4c4ec92) will decrease coverage by 26.54%.
The diff coverage is 8.13%.
@@ Coverage Diff @@
## master #13236 +/- ##
=============================================
- Coverage 43.87% 17.32% -26.55%
+ Complexity 8932 2625 -6307
=============================================
Files 1359 1359
Lines 78176 78240 +64
Branches 9500 9502 +2
=============================================
- Hits 34297 13555 -20742
- Misses 40967 63490 +22523
+ Partials 2912 1195 -1717
| Impacted Files | Coverage Δ | Complexity Δ |
|---|---|---|
| ...ava/alluxio/client/block/stream/BlockInStream.java | 0.00% <0.00%> (-55.79%) | 0.00 <0.00> (-35.00) |
| ...xio/client/block/stream/BlockWorkerDataReader.java | 0.00% <0.00%> (ø) | 0.00 <0.00> (ø) |
| .../java/alluxio/client/file/AlluxioFileInStream.java | 0.00% <0.00%> (-74.54%) | 0.00 <0.00> (-42.00) |
| ...xio/network/protocol/databuffer/NioDataBuffer.java | 0.00% <0.00%> (ø) | 0.00 <0.00> (ø) |
| ...main/java/alluxio/worker/block/io/BlockReader.java | 0.00% <ø> (-100.00%) | 0.00 <0.00> (-1.00) |
| .../alluxio/worker/block/io/LocalFileBlockReader.java | 0.00% <ø> (-65.63%) | 0.00 <0.00> (-9.00) |
| ...in/java/alluxio/fuse/AlluxioJniFuseFileSystem.java | 0.00% <0.00%> (ø) | 0.00 <0.00> (ø) |
| ...ation/fuse/src/main/java/alluxio/fuse/StackFS.java | 0.00% <0.00%> (ø) | 0.00 <0.00> (ø) |
| ...n/java/alluxio/cli/docgen/MetricsDocGenerator.java | 0.00% <0.00%> (ø) | 0.00 <0.00> (ø) |
| ...common/src/main/java/alluxio/conf/PropertyKey.java | 97.77% <100.00%> (-1.67%) | 40.00 <0.00> (-19.00) |
| ... and 664 more | | |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 915a687...4cc3491. Read the comment docs.
@LuQQiu Reverted the last commit to the original OpenFileEntries and CreateFileEntries. PTAL.
Codecov Report
Merging #13236 (49ab5f7) into master (4c4ec92) will increase coverage by 0.00%.
The diff coverage is 18.60%.
@@ Coverage Diff @@
## master #13236 +/- ##
==========================================
Coverage 43.87% 43.87%
- Complexity 8932 8941 +9
==========================================
Files 1359 1359
Lines 78176 78277 +101
Branches 9500 9507 +7
==========================================
+ Hits 34297 34343 +46
- Misses 40967 41019 +52
- Partials 2912 2915 +3
| Impacted Files | Coverage Δ | Complexity Δ |
|---|---|---|
| ...xio/client/block/stream/BlockWorkerDataReader.java | 0.00% <0.00%> (ø) | 0.00 <0.00> (ø) |
| ...xio/network/protocol/databuffer/NioDataBuffer.java | 0.00% <0.00%> (ø) | 0.00 <0.00> (ø) |
| ...main/java/alluxio/worker/block/io/BlockReader.java | 100.00% <ø> (ø) | 1.00 <0.00> (ø) |
| .../alluxio/worker/block/io/LocalFileBlockReader.java | 63.33% <ø> (-2.30%) | 8.00 <0.00> (-1.00) |
| ...in/java/alluxio/fuse/AlluxioJniFuseFileSystem.java | 0.00% <0.00%> (ø) | 0.00 <0.00> (ø) |
| ...ation/fuse/src/main/java/alluxio/fuse/StackFS.java | 0.00% <0.00%> (ø) | 0.00 <0.00> (ø) |
| ...n/java/alluxio/cli/docgen/MetricsDocGenerator.java | 0.00% <0.00%> (ø) | 0.00 <0.00> (ø) |
| .../java/alluxio/client/file/AlluxioFileInStream.java | 75.34% <40.00%> (+0.80%) | 45.00 <5.00> (+3.00) |
| ...ava/alluxio/client/block/stream/BlockInStream.java | 56.25% <83.33%> (+0.46%) | 36.00 <3.00> (+1.00) |
| ...common/src/main/java/alluxio/conf/PropertyKey.java | 99.44% <100.00%> (+<0.01%) | 59.00 <0.00> (ø) |
| ... and 29 more | | |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 915a687...49ab5f7. Read the comment docs.
alluxio-bot, merge this please
| gharchive/pull-request | 2021-04-15T08:10:22 | 2025-04-01T04:32:15.368544 | {
"authors": [
"LuQQiu",
"codecov-commenter",
"codecov-io",
"maobaolong"
],
"repo": "Alluxio/alluxio",
"url": "https://github.com/Alluxio/alluxio/pull/13236",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
232941636 | [SMALLFIX] Improve efficiency of async path cache
@calvinjia
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/15585/
Test PASSed.
Merged build finished. Test PASSed.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/15586/
Test PASSed.
@gpang One minor comment, looks good otherwise.
@calvinjia Updated. Thanks!
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/15587/
Test PASSed.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/15588/
Test PASSed.
| gharchive/pull-request | 2017-06-01T16:45:24 | 2025-04-01T04:32:15.374205 | {
"authors": [
"AmplabJenkins",
"calvinjia",
"gpang"
],
"repo": "Alluxio/alluxio",
"url": "https://github.com/Alluxio/alluxio/pull/5526",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
281601569 | [ALLUXIO-3069] Async Partial Caching
https://alluxio.atlassian.net/browse/ALLUXIO-3069
Note this is merging to a temporary async-caching branch
This is the client side change. Points of interest:
Removal of synchronous partial and passive caching logic from FileInStream
Addition of a AsyncCacheRequest client -> worker RPC
Removal of InStreamOptions as a user facing option (never made sense), now it is a holder for some state and provides a basic utility method
Define responsibilities of AlluxioBlockStore and BlockInStream in creating a BlockInStream object
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/17939/
Build result: FAILURE [...truncated 3684 lines of Jenkins archiving log...]
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/17958/
Test PASSed.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/17975/
Failed Tests: 162
org.alluxio:alluxio-tests: 162
- alluxio.cli.fs.command.CatCommandIntegrationTest: cat, catWildcard
- alluxio.cli.fs.command.ChecksumCommandIntegrationTest: checksum
- alluxio.cli.fs.command.CopyFromLocalCommandIntegrationTest: copyFromLocalLarge, copyFromLocal, copyFromLocalFileToDstPath, copyFromLocalOverwrite, copyFromLocalTestWithFullURI
- alluxio.cli.fs.command.CopyToLocalCommandIntegrationTest: copyToLocal, copyToLocalLarge, copyToLocalWildcard, copyToLocalWildcardExistingDir, copyToLocalWildcardHier, copyToLocalRelativePathDir, copyToLocalDir, copyToLocalRelativePath
- alluxio.cli.fs.command.CpCommandIntegrationTest: copyToLocal, copyToLocalLarge, copyFromLocalLarge, copyFileNew, copyFromLocal, copyToLocalWildcard, copyDirNew, copyFromLocalFileToDstPath, copyDirExisting, copyFromLocalOverwrite, copyFileExisting, copyToLocalWildcardExistingDir, copyToLocalWildcardHier, copyWildcard, copyFromLocalTestWithFullURI, copyToLocalDir
- alluxio.cli.fs.command.HeadCommandIntegrationTest: headSmallFile, headFileWithUserSpecifiedBytesWithUnit, headFileWithUserSpecifiedBytes, headWildcard, headLargeFile
- alluxio.cli.fs.command.LoadCommandIntegrationTest: loadFileWithLocalOption
- alluxio.cli.fs.command.PersistCommandTest: persistMultiFilesAndDirs, persistWithAncestorPermission, persistTwice, persistDirectory, persist, persistMultiFiles
- alluxio.cli.fs.command.TailCommandIntegrationTest: tailFileWithUserSpecifiedBytes, tailLargeFile, tailFileWithUserSpecifiedBytesWithUnit, tailSmallFile, tailWildcard
- alluxio.client.BufferedBlockInStreamIntegrationTest: readTest1, readTest2, readTest3, skip
- alluxio.client.FileInStreamIntegrationTest: eofSeek, concurrentRemoteRead, readTest1, readTest2, readTest3, readEndOfFile, seek, skip, remoteReadLargeFile
- alluxio.client.FileOutStreamAsyncWriteIntegrationTest: asyncWrite
- alluxio.client.FileOutStreamIntegrationTest: outOfOrderWrite[0-2], writeBytes[0-2], longWrite[0-2], writeSpecifyLocal[0-2], writeByteArray[0-2], writeTwoByteArrays[0-2]
- alluxio.client.IsolatedFileSystemIntegrationTest: lockBlockTest2, lockBlockTest3, unlockBlockTest1, unlockBlockTest2, unlockBlockTest3
- alluxio.client.LocalBlockInStreamIntegrationTest: readTest1, readTest2, readTest3, seek, skip
- alluxio.client.RemoteReadIntegrationTest: readTest1[0-1], readTest2[0-1], readTest3[0-1], readTest7[0-1], incompleteFileReadCancelsRecache[0-1], seekAroundLocalBlock[0-1], seek[0-1], skip[0-1], completeFileReadTriggersRecache[0-1], heartbeat1[0-1], readMultiBlockFile[0-1]
- alluxio.client.UnderStorageReadIntegrationTest: read, seek, skip, readMultiBlockFile
- alluxio.client.concurrent.FileInStreamConcurrencyIntegrationTest: FileInStreamConcurrency
- alluxio.hadoop.FileSystemStatisticsTest: bytesReadStatistics
- alluxio.hadoop.HdfsFileInputStreamIntegrationTest (each listed twice): readTest1, readTest2, readTest3, readTest4, seekNegative, available, positionedReadNoCache, ufsSeek, inMemSeek, positionedReadNoCacheNoPartialCache, seekNegativeUfs, readFullyTest1, readFullyTest2, seekPastEof, positionedReadCache, seekPastEofUfs
- alluxio.hadoop.fs.DFSIOIntegrationTest
- alluxio.proxy.FileSystemClientRestApiTest: download
- alluxio.proxy.s3.S3ClientRestApiTest: getSmallObject, completeMultipartUpload, uploadPart, putLargeObject, putSmallObject, getLargeObject
- alluxio.worker.block.meta.TierPromoteIntegrationTest: promoteBlock[0], promoteBlock[1]
- alluxio.worker.block.meta.TieredStoreIntegrationTest: deleteWhileRead, promoteBlock
Test FAILed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/17978/
Test PASSed.
Merged build finished. Test PASSed.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/17980/
Build result: FAILURE [...truncated 2515 lines...]
Test FAILed.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/17982/
Build result: FAILURE [...truncated 2515 lines...]
Test FAILed.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/17989/
Build result: FAILURE [...truncated 2518 lines...]
Test FAILed.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/17990/
Test PASSed.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/17992/
Test PASSed.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/17996/
Failed Tests: 18
org.alluxio:alluxio-core-client-fs: 1
- alluxio.client.block.AlluxioBlockStoreTest: getInStreamUfs
org.alluxio:alluxio-tests: 17
- alluxio.cli.fs.command.LoadCommandIntegrationTest: loadFileWithLocalOption, loadDir, loadFile
- alluxio.client.RemoteReadIntegrationTest: readTest1[0-1], readTest2[0-1], readTest3[0-1], completeFileReadTriggersRecache[0-1], readMultiBlockFile[0-1]
- alluxio.client.UnderStorageReadIntegrationTest: read, readMultiBlockFile
- alluxio.hadoop.HdfsFileInputStreamIntegrationTest: positionedReadCache
- alluxio.worker.block.meta.TieredStoreIntegrationTest: promoteBlock
Test FAILed.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/18001/
Build result: FAILURE [...truncated 830 lines...]
Test FAILed.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/18002/
Failed Tests: 1
org.alluxio:alluxio-core-client-fs: 1
alluxio.client.block.AlluxioBlockStoreTest.getInStreamUfs
Test FAILed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/18005/
Test PASSed.
Merged build finished. Test PASSed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/18006/Build result: FAILURE[...truncated 928 lines...]
Test FAILed.
Merged build finished. Test FAILed.
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/18007/
Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/18014/
Test PASSed.
Merged build finished. Test PASSed.
| gharchive/pull-request | 2017-12-13T02:11:05 | 2025-04-01T04:32:15.505944 | {
"authors": [
"AmplabJenkins",
"calvinjia"
],
"repo": "Alluxio/alluxio",
"url": "https://github.com/Alluxio/alluxio/pull/6606",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
421709901 | Add baseurl support to docs - 1.4
Replace .. with {{site.baseurl}}. The value for site.baseurl can be
set within _config.yml. This makes building docs for other locations
simpler.
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/Alluxio-Pull-Request-Builder/2455/
Test FAILed.
| gharchive/pull-request | 2019-03-15T21:24:20 | 2025-04-01T04:32:15.509411 | {
"authors": [
"AmplabJenkins",
"ZacBlanco"
],
"repo": "Alluxio/alluxio",
"url": "https://github.com/Alluxio/alluxio/pull/8581",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
171023741 | Remove Singleton annotation from service providers
We should instead bind them in modules:
bind(type.service()).to(type.serviceProvider()).in(Singleton.class);
This is extremely useful for testing services.
Also note that not all services are inherently singletons, e.g. local timer. Maybe we can add this metadata to ServiceType interface. Default can be Scopes.SINGLETON which can be changed to Scopes.NO_SCOPE.
done in 0.2.7
| gharchive/issue | 2016-08-13T19:30:30 | 2025-04-01T04:32:15.519332 | {
"authors": [
"AlmasB"
],
"repo": "AlmasB/FXGL",
"url": "https://github.com/AlmasB/FXGL/issues/220",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
304048921 | Debug Draw
Hello! Is there a way to turn on debug draw in fxgl ? If it is, so how i can do this?
You can enable debug views in Dev mode via ctrl+0, then select view -> debug. It only shows hit boxes, nothing else.
Let me know what things you want to be able to see.
I wanted to see fixtures for player, like on this example (ground sensor):
Done in 0.5.2
| gharchive/issue | 2018-03-10T07:06:55 | 2025-04-01T04:32:15.523121 | {
"authors": [
"AlmasB",
"Polkasa"
],
"repo": "AlmasB/FXGL",
"url": "https://github.com/AlmasB/FXGL/issues/516",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1300254159 | Crash: DappBrowserFragment.java line 665 com.afcashapp.app.ui.DappBrowserFragment.expandCollapseView
@JamesSmartCell @justindg How can this happen, do you have any ideas?
| gharchive/issue | 2022-07-11T06:12:17 | 2025-04-01T04:32:15.553333 | {
"authors": [
"seabornlee"
],
"repo": "AlphaWallet/alpha-wallet-android",
"url": "https://github.com/AlphaWallet/alpha-wallet-android/issues/2714",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
657293907 | All-encompassing QR code scanner #1723
Closes #1723, #1826
@hboon also can how can i determin if detected address is not Ethereum wallet address?
@hboon also can how can i determin if detected address is not Ethereum wallet address? ...
@vladyslav-iosdev you mean if it’s a contract address and not wallet? We can’t. We can only do a simple length and string check to make sure it’s an Ethereum address. While we might do something smarter like querying the address to figure out if it’s a token/contract, sometimes the user might really want to watch it like a “wallet”, so can’t really be sure. Also a contract might be a smart contract wallet :p
So have to present all options that applies to Ethereum addresses. I haven’t checked the code yet, but remember the comment I made about providing configurable “source” options like UIImagePicker. We probably will use this same code but with specific options in existing screens like Send screen where is address to send to or EIP681 but never prompt the user to add a custom contract. Similarly in the import wallet screen.
@vladyslav-iosdev are you waiting for me to review this?
@vladyslav-iosdev are you waiting for me to review this?
actually I don't know, I worked on it too long ago and I don't remember; it's not finished here yet. But anyway, you can check it.
@vladyslav-iosdev sorry, my bad. I'll check it out.
So the universal scanner (for this PR at least) is only activated in the Wallet tab, right? The scanner functionality in other screens remain unchanged?
yes, right.
So the universal scanner (for this PR at least) is only activated in the Wallet tab, right? The scanner functionality in other screens remain unchanged?
yes, right.
That's great. More self-contained :) Thanks.
Can you try with this EIP681 link:
ethereum:0x89d24a6b4ccb1b6faa2625fe562bdd9a23260359/transfer?address=0x007bEe82BDd9e866b2bd114780a47f2261C684E3&uint256=1.3e18
If I scan it with the universal scanner, it shows the amount to transfer as 1.3e18. but it should be 1.3. If you scan the same link when in the DAI card, it shows 1.3 as expected.
@hboon done, now should be ok.
Looks good, but seems to be failing a test. Can you help fix it? After that we just need to wait for AlphaWallet/QRCodeReaderViewController#2 to be ready.
damn, right, I missed the test. I will return to it after finishing the bartercard task.
Looks good, but seems to be failing a test. Can you help fix it? After that we just need to wait for AlphaWallet/QRCodeReaderViewController#2 to be ready.
@hboon i pushed tests fixes.
@vladyslav-iosdev I think the commit the Podfile should point to should be 30d1a2a7d167d0d207ae0ae3a4d81bcf473d7a65 instead, according to https://github.com/AlphaWallet/QRCodeReaderViewController/commits/alphawallet?
@vladyslav-iosdev I think the commit the Podfile should point to should be 30d1a2a7d167d0d207ae0ae3a4d81bcf473d7a65 instead, according to https://github.com/AlphaWallet/QRCodeReaderViewController/commits/alphawallet?
oh, yep, u right. done.
Sorry I missed this:
Can you remove this part :branch=>'alphawallet' from the Podfile? I don't know what would happen if we specify both the branch and commit hash.
Can you remove this part :branch=>'alphawallet' from the Podfile? I don't know what would happen if we specify both the branch and commit hash.
done
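For reference, a Podfile entry pinned to a single commit (with the :branch option removed) might look like the sketch below. The repo and commit hash come from the discussion above; the exact pod name is an assumption:

```ruby
# Hypothetical Podfile fragment: pin the pod to one specific commit,
# instead of specifying both :branch and :commit together.
pod 'QRCodeReaderViewController',
    :git => 'https://github.com/AlphaWallet/QRCodeReaderViewController.git',
    :commit => '30d1a2a7d167d0d207ae0ae3a4d81bcf473d7a65'
```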
| gharchive/pull-request | 2020-07-15T11:58:29 | 2025-04-01T04:32:15.563538 | {
"authors": [
"hboon",
"vladyslav-iosdev"
],
"repo": "AlphaWallet/alpha-wallet-ios",
"url": "https://github.com/AlphaWallet/alpha-wallet-ios/pull/2037",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1428674213 | Update back navigation for create lock screen #5699
Closes #5699
Tests not building. Maybe Xcode 13?
Tests not building. Maybe Xcode 13?
checking
| gharchive/pull-request | 2022-10-30T08:34:13 | 2025-04-01T04:32:15.565260 | {
"authors": [
"hboon",
"oa-s"
],
"repo": "AlphaWallet/alpha-wallet-ios",
"url": "https://github.com/AlphaWallet/alpha-wallet-ios/pull/5700",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2567993138 | 🛑 Samaras Mining is down
In 68c0b19, Samaras Mining (https://samarasmining.com) was down:
HTTP code: 523
Response time: 407 ms
Resolved: Samaras Mining is back up in aebdb57 after 21 minutes.
| gharchive/issue | 2024-10-05T12:44:50 | 2025-04-01T04:32:15.568647 | {
"authors": [
"Altair47"
],
"repo": "Altair47/3nt-upptime",
"url": "https://github.com/Altair47/3nt-upptime/issues/1454",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2113504398 | 🛑 Veritas MTC is down
In caaded6, Veritas MTC (https://veritasmtc.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Veritas MTC is back up in d6a6d7c after 24 minutes.
| gharchive/issue | 2024-02-01T21:20:18 | 2025-04-01T04:32:15.570900 | {
"authors": [
"Altair47"
],
"repo": "Altair47/3nt-upptime",
"url": "https://github.com/Altair47/3nt-upptime/issues/913",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2622581914 | Added support for macOS
Added support for macOS and fixed some minor bugs.
Updated the source code
make a release so i can test it.
make a release so i can test it.
https://github.com/G1aD05/SkyOS-bugfix/releases/tag/bugfix
maybe add how to run on macOS into the readme now? (make another pull request)
| gharchive/pull-request | 2024-10-30T00:27:04 | 2025-04-01T04:32:15.572651 | {
"authors": [
"G1aD05",
"webbrowser11"
],
"repo": "Alter-Net-codes/SkyOS",
"url": "https://github.com/Alter-Net-codes/SkyOS/pull/16",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
Could you provide a directly runnable pipeline?
For example, without having to cp config.cmake and modify it yourself: just put one image and a small model locally, and run everything with a single shell script?
Or, could a single shell script pull down the model and image, so that cloning the repo is no longer needed?
We took the second approach, and revised and reorganized the README.md based on your experience compiling and running it last time.
| gharchive/issue | 2023-09-06T13:49:51 | 2025-04-01T04:32:15.637032 | {
"authors": [
"Alwaysssssss",
"ChunelFeng"
],
"repo": "Alwaysssssss/nndeploy",
"url": "https://github.com/Alwaysssssss/nndeploy/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
176756132 | Bamboo integration
Hi,
Is there any documentation how to integration the plugin with bamboo?
Regards,
Oleg
It should be like any other plugin.
We did not test it with bamboo but it probably depends on the build system (maven/ant/sonar-scanner) you are using inside bamboo.
You have to deploy the plugin to bamboo server and then provide the relevant properties, please read the documentation for the build system you are using.
I was able to integrate sonar, bitbucket and bamboo via https://github.com/tomasbjerre/pull-request-notifier-for-bitbucket
| gharchive/issue | 2016-09-13T21:34:24 | 2025-04-01T04:32:15.641277 | {
"authors": [
"kingoleg",
"t-8ch"
],
"repo": "AmadeusITGroup/sonar-stash",
"url": "https://github.com/AmadeusITGroup/sonar-stash/issues/75",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1960035407 | generator: Add missing service definitions
Adds definitions for:
AggregationService
CompositionService
JobService
LicenseService
LogService
TelemetryService
I hope you can forgive my tardiness in reviewing this pull request! Thank you so much for your contribution, and I hope you have the opportunity to support this project with additional meaningful contributions in the future!
| gharchive/pull-request | 2023-10-24T20:36:59 | 2025-04-01T04:32:15.645578 | {
"authors": [
"AmateurECE",
"pmundt"
],
"repo": "AmateurECE/redfish-codegen",
"url": "https://github.com/AmateurECE/redfish-codegen/pull/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
366917472 | As a project maintainer, I want to have the LUIS training model in source control so it can be backed up and deployed with the app
LUIS models trained in the luis portal can be exported as a text representation. There are also tools available to describe the training of a luis model using special file formats that can be built into a normal luis backup file. These luis files can then be deployed to the luis instance and rebuilt from scratch. Following this practice allows the luis model history to be captured, allows each app environment to use separate instances of LUIS, and allows changes to the luis model to be applied during normal app deploys.
Acceptance criteria:
Luis training model is represented in source control
Both the portal driven training model and the CLI based definition are considered and the better approach is used
LUIS model is deployed during app deployment
This item will not be needed until a use case is identified that requires MS LUIS
| gharchive/issue | 2018-10-04T18:35:35 | 2025-04-01T04:32:15.653354 | {
"authors": [
"m4thfr34k",
"maxnorth"
],
"repo": "AmericanRedCross/damage-assessment-bot",
"url": "https://github.com/AmericanRedCross/damage-assessment-bot/issues/119",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2466194611 | snowflake-ml-python 1.6.1
snowflake-ml-python 1.6.1
Destination channel: Defaults
Links
PKG-5513
dev_url: https://github.com/snowflakedb/snowflake-ml-python/tree/1.6.1
conda_forge: https://github.com/conda-forge/snowflake-ml-python-feedstock/blob/main/recipe/meta.yaml
pypi: https://pypi.org/project/snowflake-ml-python/1.6.1
pypi inspector: https://inspector.pypi.io/project/snowflake-ml-python/1.6.1
Explanation of changes:
updated to 1.6.1
quick quibble,
# For fsspec[http] in conda
is also true for aiohttp which explains why we have it in our recipe and it doesn't appear in pyproject.toml (in the PyPI tarball).
We should probably have a similar comment to the one for requests so that n00bs don't scratch their heads for a bit.
quick quibble,
# For fsspec[http] in conda
is also true for aiohttp which explains why we have it in our recipe and it doesn't appear in pyproject.toml (in the PyPI tarball).
We should probably have a similar comment to the one for requests so that n00bs don't scratch their heads for a bit.
Good idea. I can see that the most recent fsspec[http] no longer requires requests (https://inspector.pypi.io/project/fsspec/2024.6.1/packages/90/b6/eba5024a9889fcfff396db543a34bef0ab9d002278f163129f9f01005960/fsspec-2024.6.1.tar.gz/fsspec-2024.6.1/pyproject.toml#line.62), but I would leave it there for this round and add a comment for aiohttp, the same as for requests. @psteyer
| gharchive/pull-request | 2024-08-14T15:35:27 | 2025-04-01T04:32:15.739270 | {
"authors": [
"ifitchet",
"lorepirri",
"psteyer"
],
"repo": "AnacondaRecipes/snowflake-ml-python-feedstock",
"url": "https://github.com/AnacondaRecipes/snowflake-ml-python-feedstock/pull/34",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1095800398 | sqlite 3.37.2
Update sqlite to 3.37.2
Version change: bump version number from 3.37.0 to 3.37.2
Requirements from conda-forge: https://github.com/conda-forge/sqlite-feedstock/blob/master/recipe/meta.yaml
dev_url: https://sqlite.org/src/dir?ci=trunk
Actions:
Add pread64 CFLAGS in build.sh (align with conda-forge recipe https://github.com/conda-forge/sqlite-feedstock/commit/05ca616665cb40669322fb1f21a2b822e0c0c6fb)
Update year
Align left selectors for readability
Result:
all-succeeded
@varlackc I've updated home and doc urls with https
| gharchive/pull-request | 2022-01-06T23:29:54 | 2025-04-01T04:32:15.743067 | {
"authors": [
"anaconda-pkg-build",
"skupr-anaconda"
],
"repo": "AnacondaRecipes/sqlite-feedstock",
"url": "https://github.com/AnacondaRecipes/sqlite-feedstock/pull/9",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1897359939 | Move infrastructure to repo and set up pipelines
This pull request moves the Bicep infrastructure-as-code from the azure repository to this repository, following convention from analog-core. Further, it builds upon the existing Github Actions workflows to also build this new infrastructure, along with deploying.
This also enables deployments to production.
Overall the direction is right. I have added various comments that you should address.
One thought: Should we already adapt our future pipeline design flow for Shifty so we do not have to redo it later? See suggested future flow in AnalogIO/coffeecard_app#474 under Releasing. The basic idea is that the main branch always deploys to the dev environment. A new Github release triggers a deployment to the prod env.
Sounds like a great idea to start adapting for release-based workflow. I will get on that
| gharchive/pull-request | 2023-09-14T21:54:15 | 2025-04-01T04:32:15.748956 | {
"authors": [
"duckth"
],
"repo": "AnalogIO/shifty-webapp",
"url": "https://github.com/AnalogIO/shifty-webapp/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
393848062 | Add polygon error when use cesium with excel-export.
When I import the Node.js module excel-export, adding a polygon in Cesium throws an error.
My code:
import excelExport from 'excel-export'
var viewer = new Cesium.Viewer('cesiumContainer');
let polygon = viewer.entities.add({
name: 'area polygon',
polygon: {
hierarchy: [new Cesium.Cartesian3(2000,1789,2000), new Cesium.Cartesian3(2000,1799,2000), new Cesium.Cartesian3(2000,1789,2050)],
perPositionHeight: true,
material: Cesium.Color.fromCssColorString('rgba(217, 48, 53, 1.0)'),
outline: true,
outlineColor: Cesium.Color.fromCssColorString('rgba(217, 48, 53, 1.0)'),
outlineWidth: 2
}
})
cesium version: 1.50
browser: electron
This is strange. Does that code work if you remove the first line?
The only thing is that this polygon happens to be under the surface I think. If you set a height instead of perPositionHeight and use viewer.zoomTo you can see it. Here's a live Sandcastle.
If you can provide more information it would be helpful. Running the unminified version of CesiumJS and posting the full error message here would help a lot too.
no longer crashes
| gharchive/issue | 2018-12-24T09:52:51 | 2025-04-01T04:32:15.754541 | {
"authors": [
"OmarShehata",
"dminor112",
"hpinkos"
],
"repo": "AnalyticalGraphicsInc/cesium",
"url": "https://github.com/AnalyticalGraphicsInc/cesium/issues/7439",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
177709890 | Don't reproject Web Mercator imagery tiles unnecessarily
Previously, Cesium would reproject all Web Mercator imagery tiles to Geographic on load. This had two costs:
It took some time (not much, probably).
It introduced some blurring in the imagery because destination pixels were not aligned with source texels.
And then, of course, Cesium would reproject those Geographic images onto the globe with the normal 3D perspective projection. So effectively, Cesium reprojected imagery tiles twice.
After this PR, Cesium includes Web Mercator texture coordinates with the terrain tiles, so Web Mercator tiles can be projected directly onto the screen without going via Geographic in most cases. Some notes:
The imagery looks a bit sharper after this change.
I can't think of any reason that interpolating texture coordinates linearly in Web Mercator space is any less accurate than interpolating in Geographic space. In fact I think interpolating in Web Mercator should be a bit better because it's conformal-ish.
Terrain meshes are now bigger; there's an extra 12-bit compressed float per vertex, which currently means an extra whole float per vertex. I spent way more time than I should have trying to avoid this. I thought I could eliminate the height attribute in most cases, especially when using 3D only. And I wrote some really hairy code to support this (in the terrain3 branch if you want to gaze upon the horror). But in the end it was just too hard and I gave up.
The Web Mercator coordinates are currently only used for terrain tiles that fall entirely within the Web Mercator bounds (+/- ~85 degrees latitude). For tiles that cross that line, we still reproject to geographic. We could avoid this by making sure that our terrain meshes always include vertices at +/- 85 degrees latitude (i.e. add them when we create the mesh) so a triangle never crosses that line. Triangles that cross the Web Mercator boundary cause problems when interpolating texture coordinates for fragments.
The imagery looks a bit sharper after this change.
Do you have a nice before/after (maybe with an image diff if useful) we could use for twitter and the release blog?
Terrain meshes are now bigger; there's a an extra 12-bit compressed float per vertex, which currently means an extra whole float per vertex...
Ouch. @bagnell and I will brainstorm since the trend is to use less memory, not more. Maybe it is unavoidable. Maybe we can optimize something else. Maybe this is the best we can do.
The Web Mercator coordinates are currently only used for terrain tiles that fall entirely within the Web Mercator bounds (+/- ~85 degrees latitude). For tiles that cross that line, we still reproject to geographic. We could avoid this by making sure that our terrain meshes always include vertices at +/- 85 degrees latitude (i.e. add them when we create the mesh) so a triangle never crosses that line. Triangles that cross the Web Mercator boundary cause problems when interpolating texture coordinates for fragments.
This could be done at runtime with quick reject tests, right? Do you think that would be cleaner than the current approach, which is...complicated. Or are you suggesting a breaking terrain spec change and doing this offline? The later is not out of the question, it would just have to be longer-term.
@bagnell can you please also review this when you are available?
Ouch. @bagnell and I will brainstorm since the trend is to use less memory, not more. Maybe it is unavoidable. Maybe we can optimize something else. Maybe this is the best we can do.
Here are some ideas:
Include the vertical geographic and/or web mercator texture coordinate, but not both unless we actually have two different imagery layers with two different projections. Aside from the complexity of managing optional vertex attributes like this, we also need to worry about other things that use the texture coordinates, like the water effect.
Eliminate the height. It's used for upsampling, and it's used in 2D and CV. We could handle the upsampling case by storing the heights separately (they don't need to be sent to the GPU). And of course 2D and CV don't matter if you're using the 3D-only option. But this is a lot of complexity and most applications won't benefit because they don't disable 2D and CV. We could at least let every app get some benefit by creating meshes for 2D and CV on the CPU (when displaying those views) rather than using vertex shader displacement.
We could at least let every app get some benefit by creating meshes for 2D and CV on the CPU (when displaying those views) rather than using vertex shader displacement.
Or maybe by using a separate buffer for the heights when rendering in this view.
This could be done at runtime with quick reject tests, right? Do you think that would be cleaner than the current approach, which is...complicated. Or are you suggesting a breaking terrain spec change and doing this offline? The later is not out of the question, it would just have to be longer-term.
I think it can be reasonably done at runtime. For heightmap terrain it's easy, for quantized mesh it's a bit of work, but not too hard. I'm not sure the overall complexity would go down here though. ;)
Before:
After:
Diff (used https://huddle.github.io/Resemble.js/):
It's most obvious in contrasty bits, so mostly the labels in these screenshots. Save the two images and flip back and forth between them to really see it. On contrastier maps (like Bing Maps Roads, for example), lots of map features look noticeably sharper. Overall, it's admittedly fairly subtle, though.
Ok I think this is ready. Of course, it's up to you guys if you think the improvement is worth the extra float. I tried and failed to measure any performance or memory usage difference with this change. Actually, the memory usage of the GPU process in Chrome was consistently 10 meg (out of 500+!) lower after this change, possibly just because there's less texture creation now and perhaps Chrome doesn't clean up old ones very aggressively?
I'd be interested in the options I mentioned above for eliminating the height or the two sets of texture coordinates in order to reduce the amount of vertex data, but it's not straightforward and I think it's more important to do some other things first, like improve the tile load process.
Gif of both versions:
I modified the Path example to fly over the continental U.S. with the camera pointed straight down to test performance. I ran MSI Afterburner in the background to watch results, keeping as many other things constant as I could (no other windows open, opening from an empty chrome, etc.).
VRAM usage was ~10mb lower on the new version, but RAM usage was ~100mb higher. GPU and CPU usage was about the same.
1.25:
noReproject:
@bagnell could you please evaluate this at the bug bash?
This looks good to me. For the extra float, I'll add the ideas from @kring above to an issue.
Thanks again @kring. I moved the CHANGES.md update to 1.27 in master.
| gharchive/pull-request | 2016-09-19T06:09:12 | 2025-04-01T04:32:15.770203 | {
"authors": [
"bagnell",
"denverpierce",
"kring",
"pjcozzi"
],
"repo": "AnalyticalGraphicsInc/cesium",
"url": "https://github.com/AnalyticalGraphicsInc/cesium/pull/4339",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
149193399 | npm doesn't need a path to the module in package.json
Fix for #10
Coverage remained the same at 90.253% when pulling d8519cff43e359431900a60e1e9046d5962aee95 on fix-npm-run-coverage into 56ea6d3f53ea361e2c2b3626773ef134b3b292d0 on master.
Thanks @lasalvavida!
@pjcozzi Did you mean to close this without merging?
No, thanks!
| gharchive/pull-request | 2016-04-18T15:56:55 | 2025-04-01T04:32:15.773285 | {
"authors": [
"coveralls",
"lasalvavida",
"pjcozzi"
],
"repo": "AnalyticalGraphicsInc/gltf-pipeline",
"url": "https://github.com/AnalyticalGraphicsInc/gltf-pipeline/pull/45",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1527824504 | Documentation Improvement
Proposed feature
I think the details provided in the README file might not be sufficient for a new contributor to start making contributions.
We can make changes to the file that make contributing easier, with a step-by-step contributing process.
Possible use cases for the feature
This would be helpful for first-time and new contributors to start contributing.
The project would also receive contributions from many new contributors.
I would like to work on this issue and contribute to the project.
I assigned the issue to you.
| gharchive/issue | 2023-01-10T18:22:58 | 2025-04-01T04:32:15.801370 | {
"authors": [
"Anamika1-cpu",
"AnishDubey27"
],
"repo": "Anamika1-cpu/BlogsBee",
"url": "https://github.com/Anamika1-cpu/BlogsBee/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2352007417 | Homework11: Anastasiya Solodukho
Added Homework # 11
@Stml89, I fixed. Please, check.
| gharchive/pull-request | 2024-06-13T20:36:09 | 2025-04-01T04:32:15.808947 | {
"authors": [
"AnastasiyaSo"
],
"repo": "AnastasiyaSo/PythonTestProject",
"url": "https://github.com/AnastasiyaSo/PythonTestProject/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
522718004 | Carousel UWP with Listview items Swipe not working correctly
If I swipe on UWP, most of the time it's not working correctly. The Carousel hangs in between views.
Is there any undocumented renderer I have to preserve or add to the additional assemblies?
Hi @bPoller2810, unfortunately not
This feature requires writing custom renderer for UWP (like it's done for iOS and Android)
If you wish, you can make PR :)
| gharchive/issue | 2019-11-14T09:07:53 | 2025-04-01T04:32:15.851781 | {
"authors": [
"AndreiMisiukevich",
"bPoller2810"
],
"repo": "AndreiMisiukevich/CardView",
"url": "https://github.com/AndreiMisiukevich/CardView/issues/308",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
58370997 | Support for switching ruby lambda literals
It'd be cool if we could switch between
-> (a, b) { a.do_stuff_with(b) }
and
lambda do |a, b|
a.do_stuff_with(b)
end
Though there's gonna have to be some finagling to make it not overlap with the block-syntax splitjoiner
(thanks for the awesome plugin, btw :smile:)
Sorry it took me a while to get to this.
It would be possible to transform the arrow-block into a lambda, but I'm not sure under what circumstances we want which. I guess both syntaxes are perfectly valid for both one-line and multiline formats, right? Is this a matter of coding style?
If it is, one thing I'm considering is adding an option, similar to the one in #67. Maybe something like:
" Arrow lambda when splitting, but normal "lambda" when joining
let g:splitjoin_ruby_arrow_lambdas = 'Sj'
" Arrow lambda when joining, but normal "lambda" when splitting
let g:splitjoin_ruby_arrow_lambdas = 'sJ'
Does this look like a reasonable option to you? Or do you think it should always be one or the other for some reason? Or is it different depending on circumstances (in which case, maybe it could be added as a rule in switch.vim instead, so you can switch between them on a case-by-case basis)?
This is wrong. Multi-line lambda in ruby is written like this:
lambda { |x|
...
}
Not with do end like blocks.
that's just like, your opinion, man
https://github.com/bbatsov/ruby-style-guide#single-line-blocks
https://github.com/bbatsov/rubocop/blob/master/lib/rubocop/cop/style/block_delimiters.rb#L44
Oh right, sorry, it's only when you pass multiline lambda as a parameter. For example when using scope in Rails. You need to surround it with () or use curly braces otherwise the block will be treated as a parameter for scope.
It just messed up in my head.
yeah, you're right on that.
Sorry, again, for taking so long to address this.
For starters, there was a separate issue related to this, actually: https://github.com/AndrewRadev/switch.vim/issues/27
What I implemented there was a way to toggle between the arrow style and lambdas. That said, it would still be a useful tool to have in splitjoin, I assume not many people use the lambda syntax anyway. I'll leave this open and hope I can get to this a bit sooner this time.
Small correction just in case:
I assume not many people use the lambda syntax anyway
Ruby style guide recommends using lambda for multiline blocks: https://github.com/bbatsov/ruby-style-guide#lambda-multi-line.
| gharchive/issue | 2015-02-20T15:53:07 | 2025-04-01T04:32:15.864490 | {
"authors": [
"AndrewRadev",
"astyagun",
"firedev",
"glittershark"
],
"repo": "AndrewRadev/splitjoin.vim",
"url": "https://github.com/AndrewRadev/splitjoin.vim/issues/65",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
294671476 | Fix #11
Hello,
I tried to fix issue #11 by adding a label for all authorized sessions (jira_authorized_sessions_gauge).
The anonymous sessions are therefore the difference between the total number of sessions (jira_total_sessions_gauge) and the authorized ones.
See:
Thank you!
| gharchive/pull-request | 2018-02-06T08:31:21 | 2025-04-01T04:32:15.872515 | {
"authors": [
"AndreyVMarkelov",
"PatrickSchuster"
],
"repo": "AndreyVMarkelov/jira-prometheus-exporter",
"url": "https://github.com/AndreyVMarkelov/jira-prometheus-exporter/pull/12",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2581358859 | 🛑 Harbor is down
In 9f11fcd, Harbor (https://harbor.androz2091.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Harbor is back up in 41a9929 after 2 hours, 9 minutes.
| gharchive/issue | 2024-10-11T13:15:12 | 2025-04-01T04:32:15.882668 | {
"authors": [
"Androz2091"
],
"repo": "Androz2091/status",
"url": "https://github.com/Androz2091/status/issues/1036",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2523167914 | 🛑 Immich is down
In d09f4ea, Immich (https://photos.androz2091.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Immich is back up in 5a97c41 after 8 minutes.
| gharchive/issue | 2024-09-12T19:16:15 | 2025-04-01T04:32:15.885184 | {
"authors": [
"Androz2091"
],
"repo": "Androz2091/status",
"url": "https://github.com/Androz2091/status/issues/814",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2198258113 | 🛑 Home Assistant is down
In e3a8167, Home Assistant (https://ha.andywebservices.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Home Assistant is back up in 2e1a728 after 6 minutes.
| gharchive/issue | 2024-03-20T18:25:24 | 2025-04-01T04:32:15.891605 | {
"authors": [
"andrewmzhang"
],
"repo": "AndyWebServices/upptime",
"url": "https://github.com/AndyWebServices/upptime/issues/50",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1832153780 | Bug in the final result of my instance
Hello;
By executing your code on this instance :
14: 2,5
2: 14,3,5
3: 2,4,5
4: 3,5
5: 14,2,3,4,6
6: 5,7,8
7: 6,8,9,10,11,12,13
8: 6,7,9,10,11,12,13
9: 7,8
10: 7,8,11
11: 7,8,10,12
12: 7,8,11,13
13: 7,8,12
Note that to get a clear result, I replaced node '1' with node '14', because you use node '1' to represent the "parallel" label.
I get this result: P(1(7,8),0(9,P(10,11,12,13)),6,3,14,4,2,5)
But what i was expected is this result:
P(P(14,2,3,4),5,6,0(7,8),1(9,P(10,11,12,13)))
There are two issues:
The two labels '0' and '1' are inverted (following your description)
The final result is wrong; the module P(14,2,3,4) is not detected
Best
Hi!
How did you calculate the expected modular decomposition?
I would say that the first point is an error in the readme rather than the code - surely the label '1' on an internal vertex of the MDT should indicate that the corresponding strong module is a series module (and not parallel, as stated). That is, any two vertices $x$ and $y$ with a '1'-labeled lca are adjacent in the underlying graph. For example, this is the case for 7 and 8 in your graph. I'll fix the readme!
The second point is more subtle: it is true that $\{14,2,3,4\}$ is a module, and it is prime (it induces a $P_4$ in the graph). But the root is also prime-labeled (as the full vertex set is a prime module), and the function reduceMD, so to say, "merges" these two prime vertices in the output. I'm honestly uncertain if this is correct or not (in other cases, e.g. a complete graph, this merging should be done for sure). If I recall correctly, the original paper is very vague about how an unreduced MDT should be transformed to a reduced MDT. When I find the time I'll consult the literature and see if I can find a fix! You're also more than welcome to see if you can find a solution and send a pull request.
Cheers!
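For reference, the disputed module claim can be checked directly from the adjacency list in the original report. Below is a minimal sketch (not part of the ModularDecomposition code) of the standard module test: a vertex set M is a module iff every vertex outside M is adjacent to either all of M or none of M.

```python
# Graph from the issue (symmetric adjacency list, node '1' renamed to '14').
adj = {
    14: {2, 5}, 2: {14, 3, 5}, 3: {2, 4, 5}, 4: {3, 5},
    5: {14, 2, 3, 4, 6}, 6: {5, 7, 8},
    7: {6, 8, 9, 10, 11, 12, 13}, 8: {6, 7, 9, 10, 11, 12, 13},
    9: {7, 8}, 10: {7, 8, 11}, 11: {7, 8, 10, 12},
    12: {7, 8, 11, 13}, 13: {7, 8, 12},
}

def is_module(adj, vertices):
    """A set M is a module iff no outside vertex is adjacent to only part of M."""
    m = set(vertices)
    for v in set(adj) - m:
        overlap = adj[v] & m
        if overlap and overlap != m:  # v 'splits' M -> not a module
            return False
    return True

print(is_module(adj, {14, 2, 3, 4}))  # True: only 5 sees it, and 5 sees all of it
print(is_module(adj, {7, 8}))         # True: the series module discussed above
```

This confirms that $\{14,2,3,4\}$ is indeed a (prime, $P_4$-inducing) module, so the open question is only how reduceMD should represent it under a prime-labeled root.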
| gharchive/issue | 2023-08-01T23:10:29 | 2025-04-01T04:32:15.990817 | {
"authors": [
"AnnaLindeberg",
"Nasaku03"
],
"repo": "AnnaLindeberg/ModularDecomposition",
"url": "https://github.com/AnnaLindeberg/ModularDecomposition/issues/1",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
472014295 | Problem with “serverless-offline” plugin
Following guide verbatim - problem encountered at
https://serverless-stack.com/chapters/add-a-create-note-api.html
error thrown at first “serverless” command:
$ serverless invoke local create --path mocks/create-event.json
Serverless Error ---------------------------------------
Serverless plugin "serverless-offline" initialization errored: Unexpected token )
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: linux
Node Version: 6.5.0
Serverless Version: 1.48.2
Enterprise Plugin Version: 1.3.2
Platform SDK Version: 2.0.4
I realise this may be an issue with the serverless-offline plugin, however it’s currently a blocker for anyone attempting to follow the serverless-stack tutorial
Update: Following the steps on a new system works. Seems the problem was specific to my environment, closing
| gharchive/issue | 2019-07-23T23:41:48 | 2025-04-01T04:32:15.995109 | {
"authors": [
"jjgrayston"
],
"repo": "AnomalyInnovations/serverless-stack-com",
"url": "https://github.com/AnomalyInnovations/serverless-stack-com/issues/369",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
342506731 | Datetime overlay in a corner
Is your feature request related to a problem? Please describe.
My xiaoyi's do not always show the correct time, maybe one of them didn't have internet connection so it starts off from 1970-01-01 0:00, or there's the one that no matter how I set the timezone will always show a time that's 2 hours in the past.
Describe the solution you'd like
A textual, basic configurable (size/color/position) overlay showing the current system date and time
Describe alternatives you've considered
As of now, I have a php script fetching a screenshot of the current display, add an overlay via GD and then returning it to the HTTP request to the server, but it's molasses, naturally
Additional context
Attached is an example of what I get with my PHP script (sorry for the lacking freehand-mouse drawing skills in the censored area)
If you can generate the clock face as a video stream we can play like a
camera feed, I believe we can do this with the "layers" branch.
When I first got the idea of adding a clock, I did figure that I could generate a rtsp stream of said clock, but I immediately dismissed it as overkill and too much intricate.
So you say that having a video stream is the only feasible way to overlay a text clock on the quad display?
Unless you have a utility that displays a live clock with the configuration
you want, yes.
There are several scripts that allow to have a live digital clock in the form of a text constantly showing on the screen, for example:
https://www.commandlinefu.com/commands/view/11336/create-a-continuous-digital-clock-in-linux-terminal
maybe there can be a small overlay where a screen instance is used to place a roughly 100px x 20px rectangle in the top-left corner over the top-left camera?
Feature enhancement request outside project scope
| gharchive/issue | 2018-07-18T21:58:17 | 2025-04-01T04:32:16.027352 | {
"authors": [
"Anonymousdog",
"ephestione"
],
"repo": "Anonymousdog/displaycameras",
"url": "https://github.com/Anonymousdog/displaycameras/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1041748174 | Update read-me installation/usage instructions from template
https://github.com/asdf-vm/asdf-plugin-template
Thank you, @mathew-fleisch !
| gharchive/pull-request | 2021-11-01T23:36:57 | 2025-04-01T04:32:16.062888 | {
"authors": [
"Antiarchitect",
"mathew-fleisch"
],
"repo": "Antiarchitect/asdf-helm-cr",
"url": "https://github.com/Antiarchitect/asdf-helm-cr/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
330418434 | Play and Pause Option
Work so far on this needed option for external media playback in our halls is much appreciated
Not really a fault issue, but there is a need to be able to play a video and leave the final frame on screen by pausing after playback. VLC is currently being used as it offers this option, but users find VLC complicated. We would like to use this option for HLC training videos that work by leaving an image on screen for a seamless presentation. The simplicity this software offers is much appreciated by end users.
Since Soundbox development was finished, I never bothered to request the option, but at this early stage in the development of OnlyM it seems opportune to request this option, which we need for training with HLC material.
regards Robin
@Roblinproductions thanks for feedback. The final frame of a video is often empty, so I'd welcome some thoughts about what the basic requirement is. Thanks, Antony
@omaha4 - when I use the sign language app and set a video to "freeze" it does indeed stop on the last frame of the video which is empty - a black screen. How does this differ from what OnlyM already provides (the ability to revert to a black backdrop when the video finishes)?
Ahh, let's say after paragraph 2 of the Watchtower, or some other sign language pub, there is a picture that is bookmarked, and in our group without cameras I need to hold on that picture and talk about it; I can do just that with freeze. It's like having an extra person; the freeze command was set before the meeting. To Roblinproductions' point, if his HLC training videos are not black on the last frame of the video, and indeed have a picture on it, and they need to discuss it, then freezing, or pausing it, would be the option they select for that video. VLC can do this, but it is so deep in the options, you wouldn't know about it if you didn't read the branch's instructions, and even then, it is a universal option, not a case-by-case option. Remember, not all videos are going to be blank on the last frame.
Thanks, that's useful. If I understand correctly, some videos (specifically some jw sign-language videos and perhaps some HLC videos) are designed to conclude with a still image (presumably shown in the last few frames) where the intent is that the video should be frozen at that point while further discussion continues. Furthermore, it's important to be able to specify in advance whether a particular video should be frozen at the end or allowed to conclude normally. Right?
That's the main point. Sign Language videos have bookmarks that allow for complex customization in the playlist, but that is a topic for another day. But there are videos out there, hearing and deaf, that don't black out on the last frame (I seem to remember this very subject almost burnt me in VLC a while back). It is for these videos that being able to individually select pause or stop would be useful to some.
Thanks. There are 2 sets of similar requirements here (HLC and Sign Language). Sadly, both are edge cases as far as the core purpose of OnlyM is concerned, and adding support for them risks feature creep (support for bookmarks, freeze to a specific frame, etc). I want to keep OnlyM simple and recognise that this will limit its application in some scenarios. I may add a per-video "freeze" option to allow the video to pause on the final frame, but that's as far as I think it will go. Thank for your valuable input.
Indeed, I was using sign language as an example as an example of what could be done. JW Sign language is doing an awesome job of filling our needs ( it actually took 4 years of patiently waiting for the customizable playlist feature to make it to JWSL :) ). If you use VLC for your videos, remember to have the
Appreciate you considering the option. A per video freeze function would solve our issue. Thank you
#17 implemented
| gharchive/issue | 2018-06-07T20:11:00 | 2025-04-01T04:32:16.075578 | {
"authors": [
"AntonyCorbett",
"Roblinproductions",
"omaha4"
],
"repo": "AntonyCorbett/OnlyM",
"url": "https://github.com/AntonyCorbett/OnlyM/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2573144001 | Give some interactive scroll animation on the top of page
When I scroll (see above), you can see in the header that the line also increases.
@Anu27n
I added the PR; plz review it
@Telomelonia reviewed , star this repo thank you
| gharchive/issue | 2024-10-08T13:02:41 | 2025-04-01T04:32:16.077491 | {
"authors": [
"Anu27n",
"Telomelonia"
],
"repo": "Anu27n/603work",
"url": "https://github.com/Anu27n/603work/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
901461008 | changed vvm to pysolcx
This is a draft and requires review
note: this code was copied and modified from https://github.com/ApeWorX/ape-vyper/blob/main/ape_vyper/compiler.py
WARNING: No project files detected in '/home/somatt/work/ape/token/contracts'
WARNING: No project files detected in '~/ape/token/contracts'
need to update workflow yaml file - doesn't have module updated, h/t @fubuloubu
need to add testing similarly to ape-vyper h/t @fubuloubu
error
> progress_bar = tqdm(total=total_size, unit="iB", unit_scale=True)
E TypeError: 'NoneType' object is not callable
Run flake8 ./<MODULE_NAME> ./tests ./setup.py
flake8 ./<MODULE_NAME> ./tests ./setup.py
shell: /usr/bin/bash -e {0}
env:
pythonLocation: /opt/hostedtoolcache/Python/3.8.10/x64
LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.8.10/x64/lib
/home/runner/work/_temp/0e8cef5f-3d75-4fda-bb45-bc0decf783a5.sh: line 1: MODULE_NAME: No such file or directory
Error: Process completed with exit code 1.
it looks like MODULE_NAME is not set
https://github.com/ApeWorX/ape-solidity/blob/755c50de78647a5ad7ebe82a6b4ab6df01216909/ape_solidity/compiler.py#L92
mypy.....................................................................Failed
- hook id: mypy
- exit code: 1
ape_solidity/compiler.py:92: error: Item "None" of "Optional[Any]" has no attribute "select"
Found 1 error in 1 file (checked 1 source file)
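The mypy error above is the usual Optional-narrowing issue: the flagged line calls .select(...) on a value typed as Optional, which may be None. A hedged, self-contained illustration of the pattern and its fix (the names here are hypothetical, not the actual ape-solidity code):

```python
from typing import Optional

class VersionManager:
    def select(self, version: str) -> str:
        return f"selected {version}"

def compile_with(manager: Optional[VersionManager]) -> str:
    # Calling manager.select(...) without a check is what mypy rejects:
    # Item "None" of "Optional[...]" has no attribute "select".
    if manager is None:  # explicit narrowing satisfies the type checker
        raise RuntimeError("no version manager available")
    return manager.select("0.8.0")

print(compile_with(VersionManager()))  # selected 0.8.0
```

After the `is None` guard, mypy narrows the type to `VersionManager`, so the attribute access type-checks.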
need to update workflow yaml file - doesn't have module updated, h/t @fubuloubu
https://github.com/ApeWorX/ape-solidity/blob/755c50de78647a5ad7ebe82a6b4ab6df01216909/.github/workflows/publish.yaml#L30
hook into github template system
Fixes https://github.com/ApeWorX/ape/issues/69
| gharchive/pull-request | 2021-05-25T22:43:01 | 2025-04-01T04:32:16.136571 | {
"authors": [
"ShadeUndertree",
"sabotagebeats"
],
"repo": "ApeWorX/ape-solidity",
"url": "https://github.com/ApeWorX/ape-solidity/pull/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
674361918 | Question: Regarding changing the ADAS algorithms of Apollo
Hello,
I am doing a project which requires me to develop a software program. This program has to send software updates that modify Apollo's ADAS algorithms (similar to OTA updates), and I have to validate whether the changes work using the LGSVL simulator.
I use Apollo 3.0
@ashwinsarvesh thanks for using Apollo. The function is not supported in Apollo yet. The team will keep you updated if there's an update on that. Thanks again. This ticket is being closed. Please feel free to raise other issues if needed.
Hello @jinghaomiao
Thank you for your reply.
I know that it is not supported by Apollo. But I am trying to connect with people who are working on a similar topic.
If this is not the right forum to ask this question, could you tell me where to post this question :)
| gharchive/issue | 2020-08-06T14:33:46 | 2025-04-01T04:32:16.147430 | {
"authors": [
"ashwinsarvesh",
"jinghaomiao"
],
"repo": "ApolloAuto/apollo",
"url": "https://github.com/ApolloAuto/apollo/issues/12056",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
282600315 | [How to Run Offline Perception Visualizer]libGL error: failed to load driver: swrast
I try to run offline perception visualizer according to the document how_to_run_offline_perception_visualizer.md.
When it came to the last step, run the visualizer with command:
/apollo/bazel-bin/modules/perception/tool/offline_visualizer_tool/offline_lidar_visualizer_tool
There is no window popping up; error messages:
The visualization window did not come up because of the libGL error: failed to load driver: swrast
By checking the glfw_viewer.cc code,
bool GLFWViewer::WindowInit() {
  if (!glfwInit()) {
    AERROR << "Failed to initialize glfw !\n";
    return false;
  }
  window_ = glfwCreateWindow(win_width_, win_height_, "opengl_visualizer",
                             nullptr, nullptr);
  if (window_ == nullptr) {
    AERROR << "Failed to create glfw window!\n";
    glfwTerminate();
    return false;
  }
It seems that the function glfwCreateWindow gives the libGL error message; it can't create a window.
I think it may be related to the GPU driver. But I have installed the GPU driver in docker according to
how_to_run_perception_module_on_your_local_computer.md. I've used the deviceQuery to test it, the result is:
So what's the problem?
Do you use command "./NVIDIA-Linux-x86_64-375.39.run --no-opengl-files -a -s" on your host ubuntu 14.04?
@chucklqsun My host ubuntu is 16.04, and the NVIDIA driver is 384.90, I installed the driver using .run file without any parameter, and the driver in docker is 384.90, I install it with parameter "--no-opengl-files -a -s"
Do you mean that I need to install the driver again on my host pc with parameter "--no-opengl-files -a -s"?
Yes, use version 375.39 and with parameters. And you may use ubuntu 14.04, which is recommended by Apollo Team.
@chucklqsun So you think it might be a GPU driver version problem? I remember that the doc says it's not essential to use 375.39; you just need to keep the GPU driver version in docker the same as the one on the host.
@hzdsdhr Maybe you can refer here: https://github.com/ApolloAuto/apollo/issues/1687
My host machine is also Ubuntu16.04
$ uname -a
Linux xxxxxx 4.4.0-104-generic #127-Ubuntu SMP Mon Dec 11 12:16:42 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
My host and docker Nvidia driver version are both 384.98, not recommend 375.39
xxx@in_dev_docker:/apollo$ nvidia-smi
Mon Dec 18 11:52:48 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.98 Driver Version: 384.98 |
|-------------------------------+----------------------+----------------------+
@hzdsdhr did you solve your issue? if so, could you share your solution?
@hzdsdhr @hadiTab
Any updates? did anyone solve this problem?
reinstalling drivers with --no-opengl-files fixed the problem for me
| gharchive/issue | 2017-12-16T05:00:41 | 2025-04-01T04:32:16.156571 | {
"authors": [
"Durant35",
"chucklqsun",
"hadiTab",
"hzdsdhr",
"snuffysasa"
],
"repo": "ApolloAuto/apollo",
"url": "https://github.com/ApolloAuto/apollo/issues/1832",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
355557920 | Error While Running Localization on Localization demo data
Hi,
I am running Localization demo offline. I am following the commands given at link.
While playing Rosbag following error is coming:
$ rosbag play *.bag
[ INFO] [1535630577.277322062]: Opening 000.bag
[ INFO] [1535630577.315954598]: Opening 001.bag
[ INFO] [1535630577.414806626]: Opening 002.bag
[ INFO] [1535630577.529829252]: Opening 003.bag
[FATAL] [1535630577.530225819]: Error reading from file: wanted 4 bytes, read 0 bytes
So we decided to run ros bags one by one but we get the following error:
$ rosbag play 000.bag
[ INFO] [1535630840.956989545]: Opening 000.bag
Waiting 0.2 seconds after advertising topics... done.
Hit space to toggle paused, or 's' to step.
[ERROR] [1535630841.215605037]: Client [/localization] wants topic /apollo/sensor/gnss/corrected_imu to have datatype/md5sum [pb_msgs/CorrectedImu/81aef4a818ce273a8af85a440ccdb0f7], but our version has [pb_msgs/Imu/bdef0ba51869607ed95736d41e80c1f5]. Dropping connection.
[PAUSED] Bag Time: 1514423656.648570 Duration: 4.375880 / 59.870157
It looks like a mismatch between the published msg type and the subscribed msg type for Imu.
Please help!
Closed due to inactivity. If the problem persists, pls feel free to reopen it or create a new one and refer to it.
| gharchive/issue | 2018-08-30T12:13:54 | 2025-04-01T04:32:16.159178 | {
"authors": [
"Jyothikumar-b",
"daohu527"
],
"repo": "ApolloAuto/apollo",
"url": "https://github.com/ApolloAuto/apollo/issues/5568",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1410192969 | Added my second playlist
Added my second playlist
Update the forked stream and then make your required changes!
| gharchive/pull-request | 2022-10-15T14:58:05 | 2025-04-01T04:32:16.178019 | {
"authors": [
"Apoorv-cloud",
"prabaltripathi"
],
"repo": "Apoorv-cloud/1_Hacktoberfest-22",
"url": "https://github.com/Apoorv-cloud/1_Hacktoberfest-22/pull/190",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
341600119 | Fix Solus OS dependency
ONLYOFFICE DesktopEditor AppImage fails on Solus OS because the following dependencies are omitted:
libkeyutils.so.1
libGLU.so.1
These libraries aren't included in the AppImage because libkeyutils.so.1 is in the excludelist and libglu1-mesa (the libGLU.so.1 library) is in the excludedeblist.
They are probably there for a reason - so I'd ask you to ship the ONLYOFFICE AppImage with those included for a while, and see if it leads to any issues with other users. It would be great if you'd report back here after a while how it went.
I have the same problem and still not solved. Thank you!
I have the same problem and still not solved. Thank you!
Which operating system and version are you running with which ONLYOFFICE AppImage and version?
| gharchive/pull-request | 2018-07-16T17:02:45 | 2025-04-01T04:32:16.207717 | {
"authors": [
"agolybev",
"probonopd",
"tudorels"
],
"repo": "AppImage/pkg2appimage",
"url": "https://github.com/AppImage/pkg2appimage/pull/338",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1262574844 | In-App events Status code failure 404
Report
Plugin Version
via 6.6.0
On what Platform are you having the issue?
ios and android
What did you do?
Initialized the SDK and validated it worked by checking for the Success string in the log.
Included the following In-App event:
appsFlyer.logEvent(
'af_login',
{},
(success) => {
console.log({ success })
},
(error) => {
console.log({ error })
},
)
What did you expect to happen?
Event to be logged (console.log({ success }))
What happened instead?
Got this error:
{
"error": "Status code failure 404",
}
Please provide any other relevant information.
I'm using Expo v44 with EAS Build.
@jorgeruvalcaba, Did you find the solution to your issue?
If yes, can you please share?
Any update on this issue?
@amit-kremer93
Can you let us know if this is fixed, or if there is any workaround? I still see this in version 6.10.3.
| gharchive/issue | 2022-06-07T00:27:23 | 2025-04-01T04:32:16.247152 | {
"authors": [
"jorgeruvalcaba",
"pratik-adh",
"singhagam1",
"thaycacac"
],
"repo": "AppsFlyerSDK/appsflyer-react-native-plugin",
"url": "https://github.com/AppsFlyerSDK/appsflyer-react-native-plugin/issues/390",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Dependencies/updates
Update dependencies to their newest versions
Breaks building for Node 12.x and 14.x
Closes #17 and #18
| gharchive/pull-request | 2022-11-26T18:59:23 | 2025-04-01T04:32:16.254640 | {
"authors": [
"Apsysikal"
],
"repo": "Apsysikal/bachome",
"url": "https://github.com/Apsysikal/bachome/pull/32",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
277680380 | Aptomi keeps creating new policy generations + possible race
Policy was already pre-populated (not sure about the state)
Aptomi keeps creating new policy generations when you feed the same policy over API
[aptomi] aptomictl policy apply --username Sam -f examples/03-twitter-analytics/policy 19:03:17 ☁ master ☂ 𝝙 ⚡ ✭ 𝝙
&{{policy-update-result} 15 [action-component-update/cluster-us-east#main#twitter_stats#prod#tweeviz action-component-update/cluster-us-east#main#twitter_stats#prod#root action-component-endpoints/cluster-us-east#main#twitter_stats#prod#tweeviz action-post-process/]}
[aptomi] aptomictl policy apply --username Sam -f examples/03-twitter-analytics/policy 19:03:17 ☁ master ☂ 𝝙 ⚡ ✭ 𝝙
&{{policy-update-result} 16 [action-component-update/cluster-us-east#main#twitter_stats#prod#root action-component-update/cluster-us-east#main#twitter_stats#prod#tweeviz action-component-endpoints/cluster-us-east#main#twitter_stats#prod#tweeviz action-post-process/]}
[aptomi] aptomictl policy apply --username Sam -f examples/03-twitter-analytics/policy 19:03:17 ☁ master ☂ 𝝙 ⚡ ✭ 𝝙
&{{policy-update-result} 17 []}
[aptomi] aptomictl policy apply --username Sam -f examples/03-twitter-analytics/policy 19:03:17 ☁ master ☂ 𝝙 ⚡ ✭ 𝝙
&{{policy-update-result} 18 []}
...
[aptomi] aptomictl policy apply --username Sam -f examples/03-twitter-analytics/policy 19:03:17 ☁ master ☂ 𝝙 ⚡ ✭ 𝝙
&{{policy-update-result} 42 []}
[aptomi] aptomictl policy apply --username Sam -f examples/03-twitter-analytics/policy 19:03:17 ☁ master ☂ 𝝙 ⚡ ✭ 𝝙
&{{policy-update-result} 43 []}
A separate problem is -- note policy generations #15 and #16. Managed to create them somehow with two consecutive API calls (less
than a second between them). Seems like this situation should be impossible (the diff is the same, really). Maybe there is a major bug with
saving/updating the desired state somewhere. Or a race condition.
Once fixed, can we PLEASE add a unit test, which will specifically check that policy update is correctly
handled and no duplicate policies are being produced?
The root cause of this bug was duplicate prod dependency objects, created by changing the version in the initial one using sed (which created a new file with -changed- in the name). Fixed by adding validation for duplicate objects in both the API and the CLI.
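The duplicate-object validation mentioned in the fix can be sketched in a few lines (a language-agnostic illustration in JavaScript, not Aptomi's actual Go code):

```javascript
// Minimal sketch: reject a batch of policy objects containing duplicates.
// The (kind, namespace, name) identity is an assumption for illustration,
// not Aptomi's actual object schema.
function validateNoDuplicates(objects) {
  const seen = new Set();
  for (const obj of objects) {
    const key = `${obj.kind}/${obj.namespace}/${obj.name}`;
    if (seen.has(key)) {
      throw new Error(`duplicate object: ${key}`);
    }
    seen.add(key);
  }
  return objects;
}
```

Running the same check in both the CLI (before upload) and the API (before applying) catches duplicated dependency files early, so the engine never sees two objects with the same identity.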
| gharchive/issue | 2017-11-29T09:18:36 | 2025-04-01T04:32:16.257696 | {
"authors": [
"Frostman",
"ralekseenkov"
],
"repo": "Aptomi/aptomi",
"url": "https://github.com/Aptomi/aptomi/issues/198",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
How to resolve the conflict between contextIsolation and require?
webPreferences: {
nodeIntegration: true,
contextIsolation: true,
preload: join(__dirname, '../preload/index.js'),
devTools: isDev
}
With contextIsolation set to true:
Dev build: the UI renders, IPC methods can be called, but require cannot be used
Release build: blank screen
With contextIsolation set to false:
Dev build: the UI renders, but IPC methods cannot be called
Release build: the UI renders, but IPC methods cannot be called
How can I solve this problem?
The frontend doesn't use require; does import count as well? Where should I make the change?
Your frontend has imports like import { ipcRenderer } from 'electron'. The frontend should be completely isolated from electron, so don't import anything from electron directly.
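With contextIsolation enabled, the usual pattern is to expose a narrow API from the preload script via contextBridge, so the frontend never imports from electron. The sketch below uses plain-object stand-ins for Electron's APIs so the wiring can run outside Electron; in a real preload script they come from require('electron'):

```javascript
// Stand-ins for Electron's contextBridge and ipcRenderer, for illustration
// only. In a real preload script these come from require('electron'), and
// ipcRenderer.send returns undefined; the mock returns the message so the
// example is checkable.
const ipcRenderer = { send: (channel, ...args) => ({ channel, args }) };
const window = {};
const contextBridge = {
  exposeInMainWorld: (name, api) => { window[name] = api; },
};

// preload/index.js: expose only the channels the frontend needs.
contextBridge.exposeInMainWorld('electronAPI', {
  doSomething: (payload) => ipcRenderer.send('do-something', payload),
});

// Frontend code then calls window.electronAPI.doSomething(...) and never
// imports anything from 'electron' directly.
const result = window.electronAPI.doSomething({ ok: true });
```

This keeps the renderer isolated while still giving it a controlled path to IPC, which is the setup contextIsolation expects.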
| gharchive/issue | 2022-07-22T04:57:54 | 2025-04-01T04:32:16.281608 | {
"authors": [
"ArcherGu",
"veryws"
],
"repo": "ArcherGu/fast-vite-nestjs-electron",
"url": "https://github.com/ArcherGu/fast-vite-nestjs-electron/issues/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2254955197 | Add a cache Service
A Service which allows using a cache:
$cache->get('key', static fn () => $value); // Return stored value or the one returned by closure
Implement https://www.php-fig.org/psr/psr-16/
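The proposed get-or-compute behavior can be sketched like this (a JavaScript illustration of the semantics; the actual Service would be PHP):

```javascript
// Minimal in-memory cache with get-or-compute semantics: return the stored
// value if present; otherwise run the closure, store its result, return it.
class Cache {
  constructor() {
    this.store = new Map();
  }

  get(key, producer) {
    if (this.store.has(key)) {
      return this.store.get(key);
    }
    const value = producer();
    this.store.set(key, value);
    return value;
  }
}
```

A full PSR-16 implementation would also provide set/delete/clear/has and TTL handling; this sketch covers only the get-or-compute call shown above. Note that PSR-16's own get($key, $default) takes a default value rather than a closure, so the closure form is this issue's proposed extension.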
| gharchive/issue | 2024-04-21T07:59:08 | 2025-04-01T04:32:16.282841 | {
"authors": [
"Gashmob"
],
"repo": "Archict/core",
"url": "https://github.com/Archict/core/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2439180783 | games: Fix icons for newly added ACs not showing up and update games.json
Changes:
games.ts
getLogo() function:
Fix icons for newly added ACs by adding them to the function, with NACE (NetEase) preceding ACE (Tencent) in order to clear up confusion between both ACs.
games.json
Wuthering Waves:
add note stating that it's playable on GeForce NOW.
add note that workarounds are required to run the game on Proton (these are hacky workarounds, and will likely get you banned). The game's status remains broken.
Honkai: Star Rail:
add note stating that it's playable on GeForce NOW.
ANOTHER EDEN:
Add identified ACs (fixes #1692)
Lost Light:
Change game status to Broken based on recent reports. (completely fixes #1688)
Marvel Rivals:
Add back NEAC (kernel mode AC) as the game will likely use it in conjunction w/ NACE (anti-tamper) due to the game's competitive nature.
Snowbreak (overseas server):
Add better URL slug (w/ "overseas" removed to be less specific).
Add link to the game on Steam.
Minor title change. Reverted.
Snowbreak (Chinese server):
Add better URL slug.
Change game's status to Broken
Minor title change.
Add note that workarounds are required to run the game on Proton (these are hacky workarounds, and will likely get you banned). The game's status remains broken.
Once Human
Change NEAC to NACE as NEAC driver wasn't found in the game files
Update game status to Running
Added tinkering notes.
If you want clarification on why WuWa and SCZ CN should both be considered broken despite having workarounds, see @cybik's comments on #1384
@Starz0r I've made changes and added them to my comment. If I need to change anything else, please let me know.
| gharchive/pull-request | 2024-07-31T05:45:43 | 2025-04-01T04:32:16.523859 | {
"authors": [
"WhippuSIF"
],
"repo": "AreWeAntiCheatYet/AreWeAntiCheatYet",
"url": "https://github.com/AreWeAntiCheatYet/AreWeAntiCheatYet/pull/1694",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |