| id (string) | text (string) | source (string) | created (timestamp[s]) | added (string) | metadata (dict) |
|---|---|---|---|---|---|
74988505 | Object view error checking and path parsing
Add an ipfs permalink for ipns data:
Data does not display a big empty box when empty:
Error handling: (see #49)
These things are moving towards #43
btw, the font issues only appear when i run in dev-mode, looking at this too
right now, the links + data boxes are a bit odd -- doesn't feel like a single object. wonder if pushing them together may be a good visual, something like:
but with better indicators / headers or whatever
Some alignment improvements, proposed grouping-of-objects.
Changed 'go back' into 'Parent object', which is more to the point.
Not using 'window.location.hash = something' anymore, but context.router.transitionTo()
Moved path parsing into its own utils/path.js for use in other parts of the app
@krl these changes look great to me -- thanks for the screenshots.
i really like the combined links + data box. + love that you clearly tell the user that it's empty. :+1:
the permalink probably should be outside still, sorry for not being clear. it could also have a "copy" button like other places in the ui.
i'd prefer -- when there is data -- for the data to occupy the whole box, instead of having the word "Data" and the actual data nested one more box deeper. looks too complex.
For example:
totally right that we need to clarify the difference between the two segments to people, so maybe we could do something like:
where the xml protobuf json links change the encoding in place, and the "external link" is a link to /object/data?arg=<key>&enc=<encoding> (in the api route for now, later on we can see about exposing read only things like /object/data to the normal gateway, or something).
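As a sketch of the external-link format described above (the `/object/data` route and `enc` parameter come from the comment itself; the base path and function name here are illustrative placeholders, not webui code), building such a URL might look like:

```python
from urllib.parse import urlencode

def object_data_url(key: str, encoding: str, base: str = "/api/v0") -> str:
    """Build the external link /object/data?arg=<key>&enc=<encoding>
    described in the comment. The base path is a placeholder."""
    query = urlencode({"arg": key, "enc": encoding})
    return f"{base}/object/data?{query}"

print(object_data_url("QmExampleKey", "json"))
```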
we can probably address these things in a separate PR-- whatever you prefer. I'm good to merge as is or fix.
(( disclaimer for my pedantic stylistic nits: sorry to go back and forth on it, since i designed lots of the UX here (along with a designer) i'm going to claim responsibility for it. most of it doesn't matter but some things (even small) like the placement of boxes have strong implications in terms of people's understanding of things. i think right now i have the most cohesive perspective into all the ipfs touch points and can probably make the most informed decisions. i hope to defer all this to a full time designer once we get one. ))
Still not 100% sure about the headings, IANRAD either, but i think this addresses your stylistic comments.
Pedantic is good!
Btw, json/protobuf/xml viewer is also something i think we should have here.
LGTM
thanks for making the design changes! :+1:
| gharchive/pull-request | 2015-05-10T19:47:01 | 2025-04-01T06:44:33.342529 | {
"authors": [
"jbenet",
"krl"
],
"repo": "ipfs/webui",
"url": "https://github.com/ipfs/webui/pull/50",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2666734917 | 🛑 NowBoarding (tokyo) is down
In adff5ba, NowBoarding (tokyo) (https://tokyo.nowboarding.club/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NowBoarding (tokyo) is back up in ce5a152 after 6 minutes.
| gharchive/issue | 2024-11-17T23:37:22 | 2025-04-01T06:44:33.345174 | {
"authors": [
"iplanwebsites"
],
"repo": "iplanwebsites/up",
"url": "https://github.com/iplanwebsites/up/issues/2829",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1718612758 | 🛑 Lovely (demo) is down
In 684e5c7, Lovely (demo) (https://demo.lovely.link/) was down:
HTTP code: 500
Response time: 2804 ms
Resolved: Lovely (demo) is back up in 1926975.
| gharchive/issue | 2023-05-21T19:21:23 | 2025-04-01T06:44:33.347603 | {
"authors": [
"iplanwebsites"
],
"repo": "iplanwebsites/up",
"url": "https://github.com/iplanwebsites/up/issues/822",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1509811041 | Help wanted identifying channels
Source
https://www.skylink.cz/televize-pres-satelit/programy
Channels
See below
Notes
I'm working on a guide for skylink.cz and need some assistance identifying channels. In addition to the channels listed on their website I've found a lot of other data, but unfortunately that data doesn't show the channel name. I've been able to identify some of it but need help with the rest. Most are in Czech or Slovakian.
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=42
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=44
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=45
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=46
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=47
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=48
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=49
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=50
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=51
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=52
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=53
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=54
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=96
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=98
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=105
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=122
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=132
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=141
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=149
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=152
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=168
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=169
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=170
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=171
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=172
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=174
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=182
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=184
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=185
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=186
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=187
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=189
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=190
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=191
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=194
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=196
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=197
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=198
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=199
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=200
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=201
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=203
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=216
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=218
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=235
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=236
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=238
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=239
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=243
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=245
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=246
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=247
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=249
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=252
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=254
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=259
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=260
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=266
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=269
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=273
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=276
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=288
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=305
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=307
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=324
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=329
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=354
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=356
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=360
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=361
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=365
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=366
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=367
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=368
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=376
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=379
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=383
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=384
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=387
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=393
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=395
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=399
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=400
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=401
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=405
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=406
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=407
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=408
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=409
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=448
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=471
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=495
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=528
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=699
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=902
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=108
English language, looks like a children's channel
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=204
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=244
Eurosport ?
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=232
One of the many MTV channels?
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=95
Adult channel
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=20
Shown on their website as "MTV Europe CZ", but we don't have that in the database
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=296
Shown on their website as "Náš region TV". Is that known by another name or does it need adding to the database?
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=13
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=14
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=207
Viasat Explore / History / Nature. The database has local versions for Hungary, Poland and Russia, as well as the parent Swedish channel. Are these another set of local versions that need adding to the database?
I also need SK/CZ version of Viasat Explore / History / Nature. They are indeed missing. Do you create PR to database or should I?
Some of your questions I was able to identify. I can look into more if needed:
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=108
BabyTV.uk
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=204
Eurosport1.fr
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=244
Eurosport2.fr
https://services.mujtvprogram.cz/tvprogram2services/services/tvprogrammelist_mobile.php?channel_cid=232
MTVLive.uk
I thought it might be BabyTV, but when I checked another source the programme names/times didn't seem to match. I hadn't spotted that there was already a source for Eurosport 1/2, so those were just a duplicate with slightly less detail, and I didn't need them in the end.
I've done the PR for the EPG. Hopefully @freearhey will get a chance to do the rest of the setup in the next couple of days.
I also need SK/CZ version of Viasat Explore / History / Nature. They are indeed missing. Do you create PR to database or should I?
Are you ok to do it? I've left them in the channels XML commented out if you want to update that as well.
| gharchive/issue | 2022-12-23T22:04:47 | 2025-04-01T06:44:33.395710 | {
"authors": [
"RevGear",
"m1k1o"
],
"repo": "iptv-org/epg",
"url": "https://github.com/iptv-org/epg/issues/1570",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
1456912199 | Fix: Sony Channels are Viasat now
Hi @freearhey,
Maybe xmltv_ids should be modified, I don't know.
Thanks
Btw:
About channel name changes https://hu.wikipedia.org/wiki/Viasat_2 https://hu.wikipedia.org/wiki/Viasat_Film
Fix them
| gharchive/pull-request | 2022-11-20T13:54:58 | 2025-04-01T06:44:33.398268 | {
"authors": [
"smrtprjcts"
],
"repo": "iptv-org/epg",
"url": "https://github.com/iptv-org/epg/pull/1370",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
687312915 | initialize.php does not escape slashes in URLs correctly
Bug description
What happened?
system.json looks like this:
{"broadcastServiceUriPush":"http://testcenter-broadcasting-service:3000","broadcastServiceUriSubscribe":"ws://localhost/bs"}
What should happen?
{"broadcastServiceUriPush":"http://testcenter-broadcasting-service:3000","broadcastServiceUriSubscribe":"ws://localhost/bs"}
Call: Endpoint, Payload, Headers etc.
Here is the call in entrypoint.sh:
php /var/www/html/scripts/initialize.php \
  --user_name=$SUPERUSER_NAME \
  --user_password=$SUPERUSER_PASSWORD \
  --workspace=$WORKSPACE_NAME \
  --test_login_name=$TEST_LOGIN_NAME \
  --test_login_password=$TEST_LOGIN_PASSWORD \
  --test_person_codes="xxx yyy" \
  --broadcast_service_uri_push=$BROADCAST_SERVICE_URI_PUSH \
  --broadcast_service_uri_subscribe=$BROADCAST_SERVICE_URI_SUBSCRIBE
It doesn't seem to cause any problems, though.
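For context (an assumption about the cause, not stated in the issue): PHP's `json_encode` escapes forward slashes as `\/` by default unless the `JSON_UNESCAPED_SLASHES` flag is passed, which is the usual source of this symptom. Per the JSON spec both spellings decode to the same string, as this Python sketch illustrates:

```python
import json

# JSON text with escaped slashes (as json_encode emits by default) vs. plain slashes.
escaped = '{"broadcastServiceUriPush":"http:\\/\\/testcenter-broadcasting-service:3000"}'
plain = '{"broadcastServiceUriPush":"http://testcenter-broadcasting-service:3000"}'

# "\/" is a legal JSON escape for "/", so both documents decode identically;
# the difference only matters to consumers comparing the raw text.
assert json.loads(escaped) == json.loads(plain)
print(json.loads(escaped)["broadcastServiceUriPush"])
```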
| gharchive/issue | 2020-08-27T14:58:32 | 2025-04-01T06:44:33.412095 | {
"authors": [
"paflov",
"rhenck"
],
"repo": "iqb-berlin/testcenter-backend",
"url": "https://github.com/iqb-berlin/testcenter-backend/issues/147",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
833802550 | The previews of links to the IR Anthology display ACL Anthology as the source.
(See the small "ACL Anthology" there. This should be changed to IR Anthology.)
While at it, please also search for other occurrences of "ACL Anthology" in the code base and handle them accordingly, if needed.
The remaining files that contain ACL are all in the info or faq directory. These should not be visible right now, so I think we are covered for now.
./content/faq/linking.md:Title: How do I link to publications in the ACL Anthology?
./content/faq/linking.md:Both the DOIs and the canonical Anthology URLs embed the 8-character ACL Anthology Identifier.
./content/faq/attachments.md:We often also process a week or so after major *ACL conferences.
./content/faq/software.md:Title: What software does the ACL Anthology use?
./content/faq/software.md:The ACL Anthology is built using completely open-source software, with further developments and customization done by the ACL Anthology staff.
./content/faq/doi.md:Title: How does the ACL Anthology use Digital Object Identifiers (DOIs)?
./content/faq/doi.md:The ACL uses DOIs to refer to scholarly works persistently in accordance with
./content/faq/doi.md:cross-disciplinary standards. Prior to 2012, ACL materials were assigned DOIs by
./content/faq/doi.md:the Association of Computational Machinery (ACM). Since 2014, ACL has assigned
./content/faq/doi.md:its own DOIs to materials using a convention based off of the ACL Anthology
./content/faq/doi.md:Identifiers. In 2015, ACL had to change DOI prefix due to accounting practices
./content/faq/doi.md:better cross-reference ACL materials.*
./content/faq/doi.md:Note while the ACL Anthology hosts third-party materials coming from sister
./content/faq/doi.md:societies, these materials are hosted courtesy the ACL but are not assigned DOIs
./content/faq/doi.md:by the ACL due to costs and copyright limitations; the DOI information above is
./content/faq/doi.md:only applicable to ACL sponsored events by ACL, its chapters or SIGs.
./content/faq/related-work.md:Title: What other resources are there based on the ACL Anthology?
./content/faq/related-work.md:The [ACL Anthology Reference Corpus](http://acl-arc.comp.nus.edu.sg/) provides **plain text and other metadata** for a controlled subset of the ACL Anthology.
./content/faq/related-work.md:The following people have written reflections on the ACL Anthology:
./content/faq/related-work.md:> [The ACL Anthology: Current State and Future Directions](https://aclweb.org/anthology/W18-2504/)
./content/faq/related-work.md:Many other [publications based on the ACL Anthology](https://scholar.google.de/scholar?q=acl+anthology) exist.
./content/faq/volunteer.md:Title: How can I help improve the ACL Anthology?
./content/faq/copyright.md:Title: What is the copyright for materials in the ACL Anthology?
./content/faq/copyright.md:The ACL materials that are hosted in the Anthology are licensed to the general public under a liberal usage policy that allows unlimited reproduction, distribution and hosting of materials on any other website or medium, for non-commercial purposes.
./content/faq/copyright.md:Prior to 2016, all ACL materials are licensed using the [Creative Commons 3.0 BY-NC-SA](https://creativecommons.org/licenses/by-nc-sa/3.0/) (Attribution, Non-Commercial, Share-Alike) license.
./content/faq/copyright.md:Note that these policies only cover ACL materials.
./content/faq/_index.md:subtitle: Other information about the ACL Anthology
./content/info/reflections-ksj.md:Title: ACL 1993—1994
./content/info/reflections-ksj.md:As an organisation, the ACL has clearly evolved over the forty years since its
./content/info/reflections-ksj.md:interest over the same period is a tricky and interesting question.) The ACL's
./content/info/reflections-ksj.md:ACL's community was not very large in its first two decades, partly because
./content/info/reflections-ksj.md:important period in the development of the ACL, and both for these reasons and
./content/info/reflections-ksj.md:do. The day-to-day operation of the ACL had to be kept going, and at the same
./content/info/reflections-ksj.md:gotten from the fact that the ACL office was in the basement of Don and Betty
./content/info/reflections-ksj.md:Walker's house in Morristown (NJ), and that much of the information about ACL
./content/info/reflections-ksj.md:head. Betty, as an ACL employee, was very fortunately familiar with the manifold
./content/info/reflections-ksj.md:details of ACL's office-driven activities, notably of the way membership,
./content/info/reflections-ksj.md:at a distressing and demanding time for her, the ACL's day-to-day operations
./content/info/reflections-ksj.md:during 1993 and 1994 would never have been kept going. The ACL owes a great debt
./content/info/reflections-ksj.md:fourth, to determine the state of the ACL's finances and establish them on a
./content/info/reflections-ksj.md:Don had already begun the process of creating an independent ACL office with an
./content/info/reflections-ksj.md:already familiar as a past President with the ACL, accepted our proposal.
./content/info/reflections-ksj.md:The ACL Conference is a major financial commitment, as well as a logistic one,
./content/info/reflections-ksj.md:event for the participating community, it was an important showcase for the ACL
./content/info/reflections-ksj.md:death, was understanding the ACL's finances. Though I had had one meeting with
./content/info/reflections-ksj.md:while not rich, the ACL was perfectly viable financially.
./content/info/reflections-ksj.md:Carberry of running conferences, were drafted in as mentors: the ACL could not
./content/info/reflections-ksj.md:the ACL was a modest organisation, without all the apparatus of full-scale
./content/info/reflections-ksj.md:achieved by 1993. He had fostered the ACL so that it could profit from the
./content/info/reflections-ksj.md:growth in scale and range of the ACL's activities, with nearly forty items
./content/info/reflections-ksj.md:write, many documents, ranging from how to cost ACL conferences via revised ACL
./content/info/reflections-ksj.md:constitutions to emergency ACL Meeting resolutions. Equally, Don had initiated
./content/info/reflections-ksj.md:policy changes, for example, about holding ACL conferences outside North
./content/info/reflections-ksj.md:the ACL has advanced successfully since the mid nineties suggests that Don had
./content/info/reflections-ksj.md:laid a good foundation for the latest phase of the ACL's life, and that the many
./content/info/reflections-ksj.md:needed to be done during 1993-4, with the ACL's first non-North American
./content/info/reflections-ksj.md:happening in the last decade since, and what does ACL's future look like?
./content/info/reflections-ksj.md:In the last decade, ACL has restructured itself formally and become more clearly
./content/info/reflections-ksj.md:decade, notably in dialogue work, though ACL still has some way to go to be as
./content/info/reflections-ksj.md:The ACL has responded, as its conferences and journal show, to the trends of the
./content/info/reflections-ksj.md:need good models for it. The ACL has shown that it can successfully embrace both
./content/info/reflections-ksj.md:--- Karen Spärck Jones, ACL President 1994
./content/info/ids.md:- The old format had a fixed number of identifiers for venues (for example, "P" was for "ACL").
./content/info/ids.md:Typically, an acronym will be selected ("lrec" for [LREC](http://lrec.org), "acl" for [ACL](http://www.aclweb.org/), etc.).
./content/info/ids.md:The canonical URLs for the Anthology are formed by appending the ACL ID to the Anthology URL.
./content/info/ids.md:For large or prominent events related to ACL or its sister organizations that recur on a yearly basis, we used a separate lettered prefix (e.g., ACL, which is "P").
./content/info/ids.md: Each conference proceedings may have up to 999 papers; conferences with more papers than this upper limit should consult the ACL Anthology Editor on how to split the proceedings into separate volumes.
./content/info/ids.md: If a workshop exceeds 99 papers in a year, please consult with the ACL Anthology Editor.
./content/info/development.md:export ACLANTHOLOGY=$(pwd)/acl-anthology
./content/info/development.md:export PYTHONPATH+=:$ACLANTHOLOGY/bin
./content/info/development.md: <booktitle>Proceedings of <fixed-case>ACL</fixed-case> 2018, Student Research Workshop</booktitle>
./content/info/corrections.md:The staff of the ACL Anthology can process requests for both many types of corrections to data in the ACL Anthology.
./content/info/volunteer.md:Title: Help Shape the ACL Anthology!
./content/info/volunteer.md:The ACL Anthology is a treasure. It is the repository of our community's
./content/info/credits.md:Thanks to the **[ACL Anthology](https://www.aclweb.org/anthology/)** for releasing their software stack to build the ACL Anthology website, which we adapted to build the IR Anthology.
./content/info/contrib.md:subtitle: General information on submitting proceedings to the ACL Anthology (for event chairs)
./content/info/contrib.md:This page contains general information about submitting the proceedings of a conference to the ACL Anthology.
./content/info/contrib.md:(For information about the complete conference management process, particularly for ACL conference publications chairs, please refer to [the official documentation](https://acl-org.github.io/ACLPUB/).)
./content/info/contrib.md: Its basic form is the conference acronym, such as ACL, NAACL, JEP/TALN/RECITAL, and so on.
./content/info/contrib.md: A [slugified](https://en.wikipedia.org/wiki/Clean_URL#Slug) version of this acronym, containing only numerals and lowercase ASCII letters, is used in the URL for the venue's page on the Anthology (e.g., [ACL → acl](https://www.aclweb.org/anthology/venues/acl), [JEP/TALN/RECITAL → jeptalnrecital](https://www.aclweb.org/anthology/venues/jeptalnrecital)), and also forms a component of the [Anthology ID]({{< relref "ids.md" >}}).
./content/info/contrib.md: Note: a common mistake is to include the year in the venue identifier, e.g., ACL2020 (perhaps in part because Softconf often uses this).
./content/info/contrib.md:After your conference management software has collected all the camera-ready papers and associated attachments, you will arrange all the volumes of your proceedings into [ACLPUB](https://github.com/acl-org/ACLPUB) format, as described in the [ACLPUB → Anthology documentation](https://acl-org.github.io/ACLPUB/anthology.html).
./content/info/contrib.md:If you used [Softconf](https://www.softconf.com)'s [STARTv2 conference management system](https://www.softconf.com/about/start-v2-mainmenu-26), the situation is easy for you, since ACLPUB is integrated.
./content/info/contrib.md:For meetings using EasyChair, you will need to first convert to ACLPUB format using [our easy2acl scripts](https://github.com/acl-org/easy2acl).
./content/info/contrib.md:The end result in either case is a `data` directory containing ACLPUB proceedings, one for each conference (again, see [the ACLPUB -> Anthology instructions](https://acl-org.github.io/ACLPUB/anthology.html)).
./content/info/contrib.md:+ https://github.com/acl-org/ACLPUB/blob/master/doc/authors/ACL-copyright-form.pdf
./content/info/contrib.md:Forms should be signed by authors and saved using the ACL Anthology identifiers as names.
./content/info/contrib.md:If you need to assign ISBN numbers, please provide the exact titles of each volume to be assigned an ISBN and forward this information to Priscilla Rasmussen, ACL Business Manager.
./content/info/reflections-aj.md:years of ACL (which coincide with the period between ACL 1982 held at Penn and
./content/info/reflections-aj.md:ACL 2002 also held at Penn) or twenty more years from ACL 2002. As you will see
./content/info/reflections-aj.md:for the first couple of years of ACL (until about early 64) I did not officially
./content/info/reflections-aj.md:belong to ACL. This was not because I disliked something about ACL or the people
./content/info/reflections-aj.md:initial name of the current ACL. More importantly, I guess I was not sure myself
./content/info/reflections-aj.md:perhaps, therefore, not needing any special name. Of course, once I joined ACL I
| gharchive/issue | 2021-03-17T13:55:29 | 2025-04-01T06:44:33.416902 | {
"authors": [
"najtin",
"potthast"
],
"repo": "ir-anthology/ir-anthology",
"url": "https://github.com/ir-anthology/ir-anthology/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2276618461 | enforce irpa extension on save
avoids user error like trying to use save_module_parameters("my_mod.safetensors", myModule)
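A minimal sketch of the kind of check this PR describes (the function name `save_module_parameters` comes from the PR text; the enforcement logic below is illustrative, not the actual iree-turbine implementation):

```python
import os

def enforce_irpa_extension(path: str) -> str:
    """Reject save paths that do not end in .irpa, catching mistakes like
    save_module_parameters("my_mod.safetensors", myModule)."""
    _root, ext = os.path.splitext(path)
    if ext != ".irpa":
        raise ValueError(f"expected a .irpa file, got {path!r}")
    return path
```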
Something is off with the commits on this PR after a force push.
| gharchive/pull-request | 2024-05-02T22:34:50 | 2025-04-01T06:44:33.425393 | {
"authors": [
"ScottTodd",
"dan-garvey"
],
"repo": "iree-org/iree-turbine",
"url": "https://github.com/iree-org/iree-turbine/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2701441667 | error: 'func.func' op exceeded stack allocation limit of 32768 bytes for function. Got 229376 bytes
What happened?
For the given IR
module {
func.func @"torch-jit-export"(%arg2: !torch.vtensor<[16,256,56,56],f32>) -> !torch.vtensor<[16,256,56,56],f32> attributes {torch.onnx_meta.ir_version = 3 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.producer_name = "pytorch", torch.onnx_meta.producer_version = "0.3"} {
%10 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<256x4x3x3xf32>} : () -> !torch.vtensor<[256,4,3,3],f32>
%11 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<256xf32>} : () -> !torch.vtensor<[256],f32>
%12 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<256xf32>} : () -> !torch.vtensor<[256],f32>
%13 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<256xf32>} : () -> !torch.vtensor<[256],f32>
%14 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<256xf32>} : () -> !torch.vtensor<[256],f32>
%529 = torch.operator "onnx.Conv"(%arg2, %10) {torch.onnx.dilations = [1 : si64, 1 : si64], torch.onnx.group = 64 : si64, torch.onnx.kernel_shape = [3 : si64, 3 : si64], torch.onnx.pads = [1 : si64, 1 : si64, 1 : si64, 1 : si64], torch.onnx.strides = [1 : si64, 1 : si64]} : (!torch.vtensor<[16,256,56,56],f32>, !torch.vtensor<[256,4,3,3],f32>) -> !torch.vtensor<[16,256,56,56],f32>
%530 = torch.operator "onnx.BatchNormalization"(%529, %11, %12, %13, %14) {torch.onnx.epsilon = 9.99999974E-6 : f32, torch.onnx.momentum = 0.899999976 : f32} : (!torch.vtensor<[16,256,56,56],f32>, !torch.vtensor<[256],f32>, !torch.vtensor<[256],f32>, !torch.vtensor<[256],f32>, !torch.vtensor<[256],f32>) -> !torch.vtensor<[16,256,56,56],f32>
%531 = torch.operator "onnx.Relu"(%530) : (!torch.vtensor<[16,256,56,56],f32>) -> !torch.vtensor<[16,256,56,56],f32>
return %531 : !torch.vtensor<[16,256,56,56],f32>
}
}
getting error as
../model.torch_onnx.mlir:10:12: error: 'func.func' op exceeded stack allocation limit of 32768 bytes for function. Got 229376 bytes
%531 = torch.operator "onnx.Relu"(%530) : (!torch.vtensor<[16,256,56,56],f32>) -> !torch.vtensor<[16,256,56,56],f32>
^
Steps to reproduce your issue
command:
iree-compile --iree-hal-target-backends=llvm-cpu --iree-llvmcpu-target-cpu=host -o abc.vmfb model.torch_onnx.mlir
Version: IREE compiler version 3.1.0rc20241127 @ 615e7ff5ad0b97a214802f6b1b8a965dd05c1722
What component(s) does this issue relate to?
Compiler
Version information
No response
Additional context
No response
Can you share more information about the program(s) that failed and put that information in the issue title? It's difficult to prioritize issues and expand test coverage without that context.
We have ONNX model zoo models regnetz_* which are failing with the above error
Okay, RegNetZ, a CNN-based model.
Paper: https://arxiv.org/abs/2103.06877
sample model zoo source (contains many for regnet, regnetx, regnety, regnetz, etc.): https://github.com/onnx/models/blob/main/Computer_Vision/regnetz_040_Opset16_timm/regnetz_040_Opset16.onnx
| gharchive/issue | 2024-11-28T09:21:57 | 2025-04-01T06:44:33.431234 | {
"authors": [
"ScottTodd",
"pdhirajkumarprasad"
],
"repo": "iree-org/iree",
"url": "https://github.com/iree-org/iree/issues/19333",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1286636598 | [Bufferization] Incorrect caching of subspan ops with the same buffer with different types
What happened?
The dispatch with the fused reduction and elementwise op references the same buffer with two different shapes, and the bufferization's findOrCreateSubspanBuffer() returns the wrong one.
We have two subspans for <1001xf32> and <1x1001xf32> which are mapped to the same input buffer. During bufferization, memref<1001xf32> is created for the first subspan, then found again for the second subspan and returned. Since the types are different, the verifier reports an error later.
$ iree-opt --iree-codegen-iree-comprehensive-bufferize dispatch_72.mlir
dispatch_72.mlir:20:10: error: 'linalg.generic' op expected operand rank (1) to match the result rank of indexing_map #0 (2)
%9 = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> ()>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%5, %8 : tensor<1x1001xf32>, tensor<f32>) outs(%3 : tensor<1x1001xf32>) {
^
dispatch_72.mlir:20:10: note: see current operation: "linalg.generic"(%4, %12, %7) ({
^bb0(%arg0: f32, %arg1: f32, %arg2: f32):
%17 = "arith.divf"(%3, %arg1) : (f32, f32) -> f32
%18 = "arith.mulf"(%arg0, %17) : (f32, f32) -> f32
"linalg.yield"(%18) : (f32) -> ()
}) {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> ()>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"], operand_segment_sizes = dense<[2, 1]> : vector<2xi32>} : (memref<1001xf32>, memref<f32>, memref<1x1001xf32>) -> ()
$ cat dispatch_72.mlir
module {
func.func @main_dispatch_72() {
%c5120 = arith.constant 5120 : index
%c0 = arith.constant 0 : index
%cst = arith.constant 0.000000e+00 : f32
%cst_0 = arith.constant 1.000000e+00 : f32
%0 = hal.interface.binding.subspan set(0) binding(0) type(storage_buffer) offset(%c5120) alignment(64) : !flow.dispatch.tensor<readonly:1001xf32>
%1 = hal.interface.binding.subspan set(0) binding(0) type(storage_buffer) offset(%c5120) alignment(64) : !flow.dispatch.tensor<readonly:1x1001xf32>
%2 = hal.interface.binding.subspan set(0) binding(1) type(storage_buffer) offset(%c0) alignment(64) : !flow.dispatch.tensor<writeonly:1x1001xf32>
%3 = flow.dispatch.tensor.load %2, offsets = [0, 0], sizes = [1, 1001], strides = [1, 1] : !flow.dispatch.tensor<writeonly:1x1001xf32> -> tensor<1x1001xf32>
%4 = flow.dispatch.tensor.load %0, offsets = [0], sizes = [1001], strides = [1] : !flow.dispatch.tensor<readonly:1001xf32> -> tensor<1001xf32>
%5 = flow.dispatch.tensor.load %1, offsets = [0, 0], sizes = [1, 1001], strides = [1, 1] : !flow.dispatch.tensor<readonly:1x1001xf32> -> tensor<1x1001xf32>
%6 = bufferization.alloc_tensor() : tensor<f32>
%7 = linalg.fill ins(%cst : f32) outs(%6 : tensor<f32>) -> tensor<f32>
%8 = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> ()>], iterator_types = ["reduction"]} ins(%4 : tensor<1001xf32>) outs(%7 : tensor<f32>) attrs = {lowering_config = #iree_codegen.lowering_config<tile_sizes = [[0]]>} {
^bb0(%arg0: f32, %arg1: f32):
%10 = arith.addf %arg0, %arg1 : f32
linalg.yield %10 : f32
} -> tensor<f32>
%9 = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> ()>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%5, %8 : tensor<1x1001xf32>, tensor<f32>) outs(%3 : tensor<1x1001xf32>) {
^bb0(%arg0: f32, %arg1: f32, %arg2: f32):
%10 = arith.divf %cst_0, %arg1 : f32
%11 = arith.mulf %arg0, %10 : f32
linalg.yield %11 : f32
} -> tensor<1x1001xf32>
flow.dispatch.tensor.store %9, %2, offsets = [0, 0], sizes = [1, 1001], strides = [1, 1] : tensor<1x1001xf32> -> !flow.dispatch.tensor<writeonly:1x1001xf32>
return
}
}
Steps to reproduce your issue
No response
What component(s) does this issue relate to?
No response
Version information
No response
Additional context
No response
@okkwon Hey Okwan, can you help add a priority to this?
A PR is under review: https://github.com/iree-org/iree/pull/9652.
| gharchive/issue | 2022-06-28T01:24:13 | 2025-04-01T06:44:33.437023 | {
"authors": [
"alliemartens",
"okkwon"
],
"repo": "iree-org/iree",
"url": "https://github.com/iree-org/iree/issues/9641",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1316844455 | [Util/Analysis] FloatRangeValueElement computation breaks with additional TiedOpInterface ops
What happened?
As a cleanup, I implemented TiedOpInterface on additional ops such as LinalgOp. This breaks two test cases. It seems that the range computation in compiler/src/iree/compiler/Dialect/Util/Analysis/Attributes/Range.cpp no longer works correctly. However, from reading the interface documentation, it makes sense for LinalgOps to implement this interface. So I suspect that there is a bug somewhere in the FloatRangeValueElement computation (or the traversal in Explorer.cpp).
For more details and a test case to reproduce, see #9887.
Steps to reproduce your issue
What component(s) does this issue relate to?
No response
Version information
No response
Additional context
No response
from your PR: https://github.com/iree-org/iree/pull/9887#issuecomment-1194040267
[Solver] update changed [FloatRangeValueElement] value %arg1 with state fp-range: [0.000000, 0.000000, TRUNC]
+ queuing tied operand %0
? working on %0
== emitting op linalg.init_tensor
WALK: mlir-asm-printer: Verifying operation: builtin.module
%0 = linalg.init_tensor [5, 1] : tensor<5x1xf32>
UNRECOGNIZED OP: mlir-asm-printer: Verifying operation: func.func
%0 = linalg.init_tensor [5, 1] : tensor<5x1xf32> (signalling pessimistic fixpoint for mlir-asm-printer: Verifying operation: builtin.module
%1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<5x1xf32>) -> tensor<5x1xf32>)
looks like the issue is the walk here:
https://github.com/iree-org/iree/blob/20c95cfc76d2e211adc9a4dffaf767b6cb5d6aae/compiler/src/iree/compiler/Dialect/Util/Analysis/Attributes/Range.cpp#L179-L182
I think failing when the walk hits init_tensor is fine - that means that the potential value is undefined - but the walk should be stopping at the fill which totally overwrites the out.
There's a few ways to fix this up. One would be to have the walk check for fill, set the value, and then return WalkResult::skip() to skip the tied operands. The code doesn't handle this today but could easily be changed by checking for skip and continuing (vs how it only checks for interruption and breaks):
https://github.com/iree-org/iree/blob/20c95cfc76d2e211adc9a4dffaf767b6cb5d6aae/compiler/src/iree/compiler/Dialect/Util/Analysis/Explorer.cpp#L723
->
auto fnResult = fn(resultValue);
if (fnResult.wasInterrupted()) break;
if (fnResult.wasSkipped()) continue;
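A stripped-down sketch of the distinction the proposed fix introduces (Python pseudocode, not the actual MLIR/Explorer API): a walk callback can advance normally, skip descending into a value's tied operands, or interrupt the whole walk.

```python
INTERRUPT, SKIP, ADVANCE = "interrupt", "skip", "advance"

def walk_results(values, children, fn):
    """Depth-first walk over result values. fn may return:
    ADVANCE   - keep walking, including this value's tied operands,
    SKIP      - record this value but do not descend into its operands,
    INTERRUPT - stop the whole walk immediately."""
    visited = []

    def visit(value):
        result = fn(value)
        visited.append(value)
        if result == INTERRUPT:
            return False  # propagate interruption upward
        if result == SKIP:
            return True   # handled here; skip tied operands
        for child in children.get(value, []):
            if not visit(child):
                return False
        return True

    for value in values:
        if not visit(value):
            break
    return visited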
@benvanik Do we have an assignee to fix this? P1?
not sure - I responded on the PR that this came from and this should be able to be fixed there.
Fixed with #9887.
| gharchive/issue | 2022-07-25T13:29:58 | 2025-04-01T06:44:33.443759 | {
"authors": [
"alliemartens",
"benvanik",
"matthias-springer"
],
"repo": "iree-org/iree",
"url": "https://github.com/iree-org/iree/issues/9889",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2384916317 | [Flow] Fix dominance error in FormScalarDispatches
Issue here https://github.com/iree-org/iree/issues/17759
When performing horizontal fusion, ops that were clonable (but not used by the fusion group) were ignored. If these ops were dependent on values produced by 'root ops', then the root op would get moved into the region. The logic here probably needs to be refactored because there are probably missed horizontal fusion opportunities. This probably needs a more in-depth analysis, however.
The current logic is that after finding a root, rootOp, that can be put into a dispatch, it would search backwards for further root ops, skipping over clonable ops, and stopping when encountering a non-clonable and non-root operation (Note: a root op is not clonable. see isScalarOperation). In order for this to work properly, none of the clonable ops can use values defined by roots that will be horizontally fused.
In the example below, assume %0 and %2 both satisfy isSliceRoot. First, %2 would be found as the main root; then %1 would be skipped over because it is clonable (assumed to be moved into the dispatch, but it is not). Finally, %0 would be horizontally fused, breaking the use-def chain.
%0 = linalg.generic ...
%1 = tensor.extract %0[] ...
%2 = linalg.generic ...
...
uses of %1 and %2
Unlike when forming the slice to move into the dispatch, the horizontal fusion root is used outside of the dispatch. So to fix this case, we must keep track of uses between where the root is and where it will be moved to. This ensures that it doesn't get moved past any uses.
| gharchive/pull-request | 2024-07-01T23:38:41 | 2025-04-01T06:44:33.447423 | {
"authors": [
"IanWood1"
],
"repo": "iree-org/iree",
"url": "https://github.com/iree-org/iree/pull/17785",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2538943405 | ThemeSelector Update Phase2
ThemeSelector now has a new property: ThemeSource.
ThemeToggleButton actually controls ThemeSource if it is not set.
Instead of maintaining a nullable boolean as internal state, ThemeToggleButton now just looks for the next available Theme in ThemeSource.
ThemeToggleButton always respects RequestThemeVariant on initialization, if it is set.
Introduce a new Control, ThemeSelector. It behaves like a combobox.
Further fix of #340
In Controller+ThreeState mode, if the system defaults to Dark theme, the toggle can switch between three states. If the system defaults to Light theme, the toggle can only switch between two states.
| gharchive/pull-request | 2024-09-20T14:15:47 | 2025-04-01T06:44:33.451598 | {
"authors": [
"rabbitism",
"zdpcdt"
],
"repo": "irihitech/Ursa.Avalonia",
"url": "https://github.com/irihitech/Ursa.Avalonia/pull/418",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2041017102 | Refactoring Nomenclature
Proposed Changes
Rename all occurrences of virtlet in variable and function names to provider
Rename all occurrences of virtlet in structure to libvirtprovider
Rename all occurrences of virtlet in description and flags to libvirt-provider
Rename all occurrences of ori in structure to iri
Fixes https://github.com/ironcore-dev/libvirt-provider/issues/74
@so-sahu you need to sign your commits :)
| gharchive/pull-request | 2023-12-14T06:39:22 | 2025-04-01T06:44:33.461042 | {
"authors": [
"hardikdr",
"so-sahu"
],
"repo": "ironcore-dev/libvirt-provider",
"url": "https://github.com/ironcore-dev/libvirt-provider/pull/76",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
615230066 | Explore string_grouper for fast fuzzy matching
Maybe needs to be ported to cudf
https://github.com/Bergvca/string_grouper
This article has a good point about Levenshtein distance and how the cost of all-pairs comparison grows quadratically
https://bergvca.github.io/2017/10/14/super-fast-string-matching.html.
Maybe we could consider removing it for 3.0.
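The approach string_grouper takes instead — comparing tf-idf weighted character n-grams with cosine similarity — avoids the pairwise edit-distance blow-up. A minimal stdlib-only sketch of the idea (not string_grouper's actual implementation, which uses sparse matrix multiplication):

```python
from collections import Counter
from math import sqrt

def ngrams(s, n=3):
    # Pad with spaces so short strings still produce at least one gram.
    s = f" {s.lower()} "
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def cosine(a, b):
    # Cosine similarity between the trigram count vectors of a and b.
    ga, gb = ngrams(a), ngrams(b)
    dot = sum(ga[g] * gb[g] for g in ga)
    norm = sqrt(sum(v * v for v in ga.values())) * sqrt(sum(v * v for v in gb.values()))
    return dot / norm if norm else 0.0
```

Each string is vectorized once, so a whole column can be matched with one (sparse) matrix product rather than n² Levenshtein computations — that is what makes the approach fast enough to consider porting to cudf.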
| gharchive/issue | 2020-05-09T18:03:07 | 2025-04-01T06:44:33.495178 | {
"authors": [
"argenisleon"
],
"repo": "ironmussa/Optimus",
"url": "https://github.com/ironmussa/Optimus/issues/919",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
952299753 | I need a 1.17 jar
I need a 1.17 jar, but I can't find it. I hope the author will give me a link, thanks.
The reason you can't find it is because there's no 1.17 variant developed yet. Hope this helps and have a nice day!
#124
GitHub issues are reserved for problems with the mod, not for version requests.
| gharchive/issue | 2021-07-25T15:37:16 | 2025-04-01T06:44:33.507366 | {
"authors": [
"9ikj",
"DereC4",
"TheDevilCard",
"irtimaled"
],
"repo": "irtimaled/BoundingBoxOutlineReloaded",
"url": "https://github.com/irtimaled/BoundingBoxOutlineReloaded/issues/125",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
300149785 | Change locale
Hi, thanks for the plugin.
How we can change locale? Maybe is some add_filter for this?
Hi @vitalijm, what do you mean by locale? Do you mean language or something?
Yes, language. I can't find where or how to show recaptcha in other language.
The language of the reCAPTCHA widget should be automatically loaded based on the user's locale.
There seems to be a way to fix the language though: https://developers.google.com/recaptcha/docs/display
Is this what you are looking for?
I need something like:
add_action( 'wpcf7_enqueue_scripts', 'custom_recaptcha_enqueue_scripts', 11 );
function custom_recaptcha_enqueue_scripts() {
wp_deregister_script( 'google-recaptcha' );
$url = 'https://www.google.com/recaptcha/api.js';
$url = add_query_arg( array(
'onload' => 'recaptchaCallback',
'render' => 'explicit',
'hl' => 'fr-CA' ), $url );
wp_register_script( 'google-recaptcha', $url, array(), '2.0', true );
}
Where most important part of code is "'hl' => 'fr-CA'", with this I can now modify recaptcha display language.
I currently don't have the bandwidth to work on this plugin. If you wish you can create a PR to add an ACF option to modify the hl parameter when fetching api.js.
The list of language codes is here: https://developers.google.com/recaptcha/docs/language
| gharchive/issue | 2018-02-26T08:40:51 | 2025-04-01T06:44:33.512630 | {
"authors": [
"irvinlim",
"vitalijm"
],
"repo": "irvinlim/acf-recaptcha",
"url": "https://github.com/irvinlim/acf-recaptcha/issues/30",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
589126442 | Query props is empty
Describe the bug
Query props is empty. Strange.
Steps to reproduce
Repo here: https://github.com/markolofsen/i18next-demo
$ yarn dev
http://localhost:3000/en
If remove localeSubpaths: localeSubpathVariations[localeSubpaths], in i18.js then ok, but language detection from url is brokening.
Expected behaviour
Income props should be:
query: {
lang: 'en',
}
Now I'm getting query {}
You copied the example dir directly instead of learning from it: most likely process.env.LOCALE_SUBPATHS was undefined.
| gharchive/issue | 2020-03-27T13:16:36 | 2025-04-01T06:44:33.535673 | {
"authors": [
"isaachinman",
"markolofsen"
],
"repo": "isaachinman/next-i18next",
"url": "https://github.com/isaachinman/next-i18next/issues/665",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1566468086 | Remove IPInfoDB Integration
The IPInfoDB service has been decommissioned and James let us know that the location tracking feature is not widely used anyway.
Pull Request Check List
[ ] Unit Tests & Regression Tests Added (Optional)
[ ] Removed Unnecessary Logs/System.Outs/Comments/TODOs
[ ] Added enough Logging to monitor expected behaviour change
[ ] Security - Injection - everything run by an interpreter (SQL, OS...) is either validated or escaped
[ ] Security - Data Exposure - PII is not stored or sent unencrypted
[ ] Security - Data Exposure - Test any altered or created endpoints using swagger
[ ] Security - Access Control - Check authorisation on every new endpoint
[ ] Security - New dependency - configured sensibly not relying on defaults
[ ] Security - New dependency - Searched for any know vulnerabilities
[ ] Security - New dependency - Signed up team to mailing list
[ ] Security - New dependency - Added to dependency list
[ ] DB schema changes - postgres-rutherford-create-script updated
[ ] DB schema changes - upgrade script created matching create script
[ ] Updated Release Procedure & Documentation (& Considered Implications to Previous Versions)
[ ] Peer-Reviewed
We're now looking at replacing this with MaxMind file-based lookup, since we need this for our Plausible setup anyhow. I think these changes got merged into the Isaac CS upstream.
| gharchive/pull-request | 2023-02-01T16:57:24 | 2025-04-01T06:44:33.541112 | {
"authors": [
"jsharkey13",
"seanmorton"
],
"repo": "isaacphysics/isaac-api",
"url": "https://github.com/isaacphysics/isaac-api/pull/511",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
96964973 | end event for a test occurs after the next test has already started
Basically with
test('a', function(t) {
  t.on('end', END);
});
test('b', function(t) {
  // ...this test will start before the END function of test a runs...
});
This is not so useful. I'm using t.on('end') because when tests end I need to do some cleanup before the next test. Since the test is quite async, and uses a plan, I don't really know when everything is done, not without reimplementing the plan...
I can dump cleanup for test a at the beginning of test b, and I'm about to do that, but I think that this behaviour is not right.
Runnable reproduction in test form:
var tap = require('tap');
var run = null;
// timeout to work around #152
tap.test('parent', {timeout: 1000}, function(t) {
tap.test('a', function(t) {
console.assert(run === null); run = 'a';
t.plan(1);
t.on('end', function() {
console.error('END: a == %j', run);
console.assert(run === 'a'); run = null;
});
setImmediate(function() { t.assert(true); });
});
tap.test('b', function(t) {
console.assert(run === null); run = 'b';
t.plan(1);
t.on('end', function() {
console.error('END: b == %j', run);
console.assert(run === 'b'); run = null;
});
setImmediate(function() { t.assert(true); });
});
t.end();
});
output:
% TAP_BAIL=1 node _.js
TAP version 13
# Subtest: parent
1..0
ok 1 - parent # time=10.589ms
# Subtest: a
1..1
ok 1 - expect truthy value
ok 2 - a # time=10.239ms
# Subtest: b
not ok 1 - AssertionError: false == true
---
at:
file: console.js
line: 87
column: 23
function: Console.assert
name: AssertionError
actual: false
expected: true
operator: ==
generatedMessage: true
test: b
message: 'AssertionError: false == true'
stack: |
Console.assert (console.js:87:23)
_.js:20:13
Test.test (node_modules/tap/lib/test.js:346:27)
Test._processQueue (node_modules/tap/lib/test.js:716:13)
Test.<anonymous> (node_modules/tap/lib/test.js:336:10)
Test.end (node_modules/tap/lib/test.js:701:10)
...
Bail out! # AssertionError: false == true
Bail out! # AssertionError: false == true
END: a == "a"
From observation, it appears that either the end event is emitted in a next tick, and shouldn't, or that when a test assertion is made that matches the plan, causing a test to end, that the next test should be started in a next tick.
Doing the latter would help with stack depths on failure. I sometimes deliberately introduce a setImmediate() into sync sub-tests because otherwise a whole sequence of sub-tests runs on a single long stack, so when a subtest at the end fails and dumps a stack trace, it can be excessively long and uninteresting.
It's not a nextTick issue. It's that the parent is attaching a listener to the end event when the child is created, and in that handler, starting the next child test.
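A toy emitter (a Python sketch, not tap's real implementation) makes the ordering problem — and why a separate, later event fixes it — concrete: listeners fire in registration order, and the harness registers before the user's test body runs.

```python
class Emitter:
    def __init__(self):
        self._listeners = {}

    def on(self, event, fn):
        self._listeners.setdefault(event, []).append(fn)

    def emit(self, event):
        # Listeners run synchronously, in registration order.
        for fn in list(self._listeners.get(event, [])):
            fn()

log = []

# Buggy ordering: the harness registers its 'end' listener at child
# creation time, before the user's test body registers theirs...
child = Emitter()
child.on("end", lambda: log.append("harness: start next test"))
child.on("end", lambda: log.append("user: cleanup"))  # registered later
child.emit("end")
# ...so the next test starts before the user's cleanup has run.

# Fixed ordering: the harness listens on a second event that is only
# emitted after 'end' has fully dispatched.
fixed = Emitter()
fixed.on("afterEnd", lambda: log.append("harness(after): start next test"))
fixed.on("end", lambda: log.append("user: cleanup"))
fixed.emit("end")
fixed.emit("afterEnd")
```

This mirrors the shape of the patch below: the parent moves from 'end' to an 'afterEnd' event, so every user-attached 'end' handler completes before the next child starts.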
This patch should fix it, but I haven't created a test for this, and the day is getting on towards done:
diff --git a/lib/test.js b/lib/test.js
index 55ff1c7..3a1353e 100644
--- a/lib/test.js
+++ b/lib/test.js
@@ -361,7 +361,7 @@ Test.prototype.test = function test (name, extra, cb) {
child.on('complete', function (res) {
results = res
})
- child.on('end', function () {
+ child.on('afterEnd', function () {
if (child._threw && child._ok) {
child._ok = false
extra.error = child._threw
@@ -741,8 +741,10 @@ Test.prototype.done = Test.prototype.end = function end () {
this.emit('complete', final)
var self = this
- if (!this._endEmitted)
+ if (!this._endEmitted) {
this.emit('end')
+ this.emit('afterEnd')
+ }
this._endEmitted = true
}
| gharchive/issue | 2015-07-24T04:53:20 | 2025-04-01T06:44:33.545472 | {
"authors": [
"isaacs",
"sam-github"
],
"repo": "isaacs/node-tap",
"url": "https://github.com/isaacs/node-tap/issues/153",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
2327115603 | Isar vs Hive (what should we be using now?)
I am confused on what to use. On Isar faq page I find this.
Isar vs Hive
The answer is easy: Isar was started as a replacement for Hive and is now at a state where I recommend always using Isar over Hive.
On the hive quick start page I find this:
Before you start: Consider using [Isar](https://isar.dev/) a Flutter database by the author of Hive that is superior in every way!
And in the github page of hive I find this:
🐝 To bee or not to bee: Hive or Isar?
It's not always black and yellow! 🖤💛 Both Hive and Isar have their sweet spots. Hive is a lightweight wrapper around Isar so if you are looking for a simple key-value store, Hive might be enough. Isar is the way to go if you need queries, relations, and more advanced features.
I am building a shopping list/ todo list. Super simple. What should I be using? I would think hive because it is just a key value store. But after reading the other pages I am not sure anymore. Can we have clarification on this?
Hive, in my opinion, is better for clean architecture because you don't have to use the generator and all the annotations in your core model classes.
consider this product model using dart_mappable package to generate toMap and fromMap
@MappableClass(discriminatorKey: 'type')
sealed class Product with ProductMappable {
const Product(this.id, {required this.isAvailable});
final int id;
final bool isAvailable;
}
@MappableClass()
class AvailableProduct extends Product with AvailableProductMappable {
const AvailableProduct(super.id) : super(isAvailable: true);
}
@MappableClass()
class UnAvailableProduct extends Product with UnAvailableProductMappable {
const UnAvailableProduct(super.id) : super(isAvailable: false);
}
Using this code in the core folder of your app is good practice because it depends only on dart_mappable and not on your services (Hive, Isar, Sqflite) or logic (Bloc, Riverpod)...
then you add the following code to make it compatible with Hive
class ProductAdapter extends TypeAdapter<Product> {
@override
Product read(BinaryReader reader) {
final map = reader.readMap().map((key, value) => MapEntry(key.toString(), value));
return ProductMapper.fromMap(map);
}
@override
int get typeId => 10;
@override
void write(BinaryWriter writer, Product obj) {
writer.writeMap(obj.toMap());
}
}
If you now want to change, let's say to Sqflite, you don't need to change your core product models.
To bee or not to bee:
I suggest you find the most recent version of either hive or isar that works for your requirement. Do not bank on any future functionality being delivered to bail you out. Now, if you're able to fork and self-maintain, go crazy. I use hive for exactly the reasons @CardosoShlomo cites. But I don't have much faith in future support.
| gharchive/issue | 2024-05-31T07:08:53 | 2025-04-01T06:44:33.553528 | {
"authors": [
"978bobs",
"CardosoShlomo",
"OnClickListener2048",
"vanlooverenkoen"
],
"repo": "isar/hive",
"url": "https://github.com/isar/hive/issues/1292",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2270145985 | Help Wanted - How does one generate minimum code examples from NNSmith bug reports
Hi, apologies for this issue. I am working with NNSmith (and Neuri) and have several generated bug reports from fuzzing. However, when looking at bug reports developed as a result of NNSmith, like this one here, there's a minimal code sample that can be used to trigger and reproduce the bug encountered. However, I can't seem to figure out how to generate such examples utilizing NNSmith. Is there an additional utility, or a technique to do so? I can see the model.pth and pkl files, but would like a nudge or some guidance in how to utilize them to create a reproducible code sample, if that makes sense?
Sorry again for the call for help.
@talha-ahsan Hi, I think that nnsmith.report_syn can generate the crash code
Thanks for the issue. @syheliel is right we have nnsmith.report_syn to synthesize a bug report for PyTorch compilers. It's not perfect but should be useful in many cases (at least at the time we develop it). :D
Apologies for the incredibly long delay, I must've missed the notification. I'll start working with report_syn, thank you!
| gharchive/issue | 2024-04-29T23:12:14 | 2025-04-01T06:44:33.565079 | {
"authors": [
"ganler",
"syheliel",
"talha-ahsan"
],
"repo": "ise-uiuc/nnsmith",
"url": "https://github.com/ise-uiuc/nnsmith/issues/139",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2666489561 | Where to fetch data using this component?
Hi, first of all, thanks for this solution, it works really well.
I have a question: where is the correct place to put the data fetcher? Since the root component of all pages is inside the TransitionRouter which is a client component, the pages will not be server components. Am I missing something?
You can wrap any server component with a client component, and it will still behave like a server component. Have a look on the next.js site in case 👍 https://nextjs.org/docs/app/building-your-application/rendering/composition-patterns
@danielmorena If you create a client component just to use the TransitionRouter, then everything (page, layout) can be server components without any problem. This is how all demos are made. Let me know if this answers your question.
| gharchive/issue | 2024-11-17T20:42:28 | 2025-04-01T06:44:33.613845 | {
"authors": [
"danielmorena",
"edoardolunardi",
"ismamz"
],
"repo": "ismamz/next-transition-router",
"url": "https://github.com/ismamz/next-transition-router/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1611347106 | 🛑 mercadosMadrid.isolucion.co is down
In c259c5e, mercadosMadrid.isolucion.co (https://mercadosMadrid.isolucion.co) was down:
HTTP code: 0
Response time: 0 ms
Resolved: mercadosMadrid.isolucion.co is back up in 3362d8d.
| gharchive/issue | 2023-03-06T12:46:24 | 2025-04-01T06:44:33.636738 | {
"authors": [
"isolucionco"
],
"repo": "isolucionco/EstadoIsolucionHosting",
"url": "https://github.com/isolucionco/EstadoIsolucionHosting/issues/1127",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2224376688 | 🛑 apiiso01com.isolucion.com is down
In f492523, apiiso01com.isolucion.com (https://apiiso01com.isolucion.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: apiiso01com.isolucion.com is back up in fb38b6f after 22 days, 23 hours, 18 minutes.
| gharchive/issue | 2024-04-04T03:35:30 | 2025-04-01T06:44:33.640025 | {
"authors": [
"isolucionco"
],
"repo": "isolucionco/EstadoIsolucionHosting",
"url": "https://github.com/isolucionco/EstadoIsolucionHosting/issues/2609",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1370647350 | 🛑 sigo.icontec.org/isolucionv4 is down
In 349dca1, sigo.icontec.org/isolucionv4 (https://sigo.icontec.org/isolucionv4) was down:
HTTP code: 0
Response time: 0 ms
Resolved: sigo.icontec.org/isolucionv4 is back up in d678778.
| gharchive/issue | 2022-09-12T23:28:06 | 2025-04-01T06:44:33.643471 | {
"authors": [
"isolucionco"
],
"repo": "isolucionco/EstadoIsolucionHosting",
"url": "https://github.com/isolucionco/EstadoIsolucionHosting/issues/655",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
589242337 | Switch to Apache 2.0 license
Switch from MIT to Apache 2.0.
All contributors agree with this change.
Coverage decreased (-1.6%) to 75.532% when pulling 09ea3b0933e78dc80f2b73f8f0d3c236bfe11ac5 on bettio:switch-to-apache2 into 5847604f95ad236b87233fd57ffc58e8bf9999c1 on ispirata:master.
| gharchive/pull-request | 2020-03-27T16:12:16 | 2025-04-01T06:44:33.659487 | {
"authors": [
"bettio",
"coveralls"
],
"repo": "ispirata/cfxxl",
"url": "https://github.com/ispirata/cfxxl/pull/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
401778746 | new service entries or editing the existing one does not get propagated to istio proxies
Describe the bug
Changes from adding new service entries or editing existing ones do not get propagated to Istio proxies running as sidecars.
Expected behavior
Changes will get propagated to istio proxy containers.
Steps to reproduce the bug
Create a simple deployment by applying the following:
apiVersion: v1
kind: Service
metadata:
name: myclient
labels:
app: myclient
spec:
ports:
- port: 80
name: http
selector:
app: myclient
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
name: myclient
spec:
replicas: 1
strategy: {}
template:
metadata:
labels:
app: myclient
spec:
containers:
- command:
- /bin/sleep
- infinity
image: tutum/curl
imagePullPolicy: IfNotPresent
name: myclient
resources: {}
create a service entry by applying the following:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: banix
spec:
hosts:
- banix.default.global
location: MESH_INTERNAL
ports:
- name: http1
number: 8080
protocol: http
resolution: DNS
addresses:
- 127.255.0.14
endpoints:
- address: 169.22.214.227
ports:
http1: 15443
If you run the istioctl command below:
istioctl proxy-status
the output will contain something like the following:
myclient-5768b9778d-n2hr8.default SYNCED SYNCED SYNCED (50%) SYNCED istio-pilot-7c6cc5b69-6npmh 1.1.0
wait a few minutes until you see the output change to:
myclient-5768b9778d-n2hr8.default SYNCED SYNCED SYNCED (50%) NOT SENT istio-pilot-7c6cc5b69-6npmh 1.1.0
From now on, changes to the service entry that affect the route entry will not get sync'ed to the proxy in the myclient pod.
Simply edit the service entry and change the address from 127.255.0.14 to something else. Notice that the myclient pod still thinks the address is 127.255.0.14.
Version
with daily builds, including release-1.1-20190114-09-15 and release-1.1-20190120-09-16
Installation
Using helm templates.
Environment
Linux
Cluster state
still seeing it in istio 1.1 official release - today.
what's your problem? @iftachsc
| gharchive/issue | 2019-01-22T14:02:54 | 2025-04-01T06:44:33.683286 | {
"authors": [
"hzxuzhonghu",
"iftachsc",
"mbanikazemi"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/11145",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
569479123 | 502's when host contains port
Bug description
Hey,
I've noticed some people requesting some of our services with a port on the end of the host header, for example https://www.some-service.com:443, this translates to having a Host: some-service.com:443.
Envoy is then rejecting this (we have REGISTRY_ONLY) with a 502.
I've tried adding some-service.com:443 to the VirtualService hosts, but it doesn't work.
I could start looking at rewriting the host header to remove it at our edge, but would be easier if this worked in the VirtualService?
Thoughts?
Expected behavior
Adding some-host.com:443 to the VirtualService hosts, should make it routable.
Steps to reproduce the bug
Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)
1.4.5
How was Istio installed?
Environment where bug was observed (cloud vendor, OS, etc)
Usually we set up virtual hosts with and without the port included. I think there may be some situations where that doesn't happen, though; the logic is fairly complicated. Can you post the route snippet?
So my setup might be causing this, but so we're very clear we see this:
user -- https://www-preprod.autotrader.co.uk:443 --> nginx -- http://app.some-app with Host: www-preprod.autotrader.co.uk:443 -- envoy --> 502.
And I'm not able to add the 443 to the virtualservice:
Error: 1 error occurred:
* VirtualService/consumer-gateway/consumer-gateway-app: domain name "autotrader.co.uk:443" invalid (label "uk:443" invalid)
Ah got it, so the mismatch is because we have a different port in the host than the routes, not just that it contains the port. Thanks!
I probably need to mutate it at my edge then!
Interestingly @howardjohn this doesn't register in the metrics as a BlackHoleCluster, despite it 502'ing
FYI in investigating this; I've stumbled on the fact with telemetry v2, blackholecluster metrics no longer work: https://github.com/istio/istio/issues/21385
Closing as I've solved this at our edge with nginx (linked issue)
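For reference, the edge fix can be a one-line header override (a sketch, assuming an nginx reverse proxy; `app.some-app` is the illustrative upstream name from the flow above, not a real config):

```nginx
location / {
    # $host is the request host *without* the port, so forwarding it
    # upstream strips the ":443" suffix that Envoy was rejecting.
    proxy_set_header Host $host;
    proxy_pass http://app.some-app;
}
```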
| gharchive/issue | 2020-02-23T11:13:51 | 2025-04-01T06:44:33.691035 | {
"authors": [
"Stono",
"howardjohn"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/21384",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
756874313 | Why can't destination rules and virtual services with same hostname be applied simultaneously?
(This is used to request new product features, please visit https://discuss.istio.io for questions on using Istio)
Describe the feature request
Is it possible to route traffic by applying traffic routing rules recursively?
Here is the situation: we have several independent routing rules, each of them can be described by one destination rule and one virtual service.
For example, we have three rules, named ruleA, ruleB and ruleC. According to ruleA, traffic matching conditionA1 will be routed to subsetA1 and traffic matching conditionA2 will be routed to subsetA2. Likewise, we have subsetB1 for conditionB1, subsetB2 for conditionB2 and subsetC1 for conditionC1, subsetC2 for conditionC2.
In that case, for traffic that matches conditionA1 and conditionB1 and conditionC1, we want it to be routed to the intersection of subsetA1, subsetB1 and subsetC1.
Describe alternatives you've considered
We have considered two solutions.
The first one is to enumerate all possible combinations of the different routing rules in one destination rule and one virtual service. As the number of rules grows, the complexity and redundancy of the configuration becomes more and more unacceptable. In our case here, with four routing rules, each of which has more than 10 available subsets, the configuration is unreadable.
The second one is to define routing rules separately, then chain one virtual service to another. In that case, we would have a clear configuration for each routing rule. However, destination rules with the same hostname are overwritten in proxy.sidecarscope. This solution turned out to be impractical.
Affected product area (please put an X in all that apply)
[ ] Docs
[ ] Installation
[X] Networking
[ ] Performance and Scalability
[ ] Extensions and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
Affected features (please put an X in all that apply)
[ ] Multi Cluster
[ ] Virtual Machine
[ ] Multi Control Plane
Additional context
Can you give an example to help understand?
For example, two individual destination rules and virtual services are defined for the reviews service.
We define the first pair of vs and dr so that traffic can be routed to different versions of reviews service. While the second pair of vs and dr is defined so that all traffic from a specific user is routed to a specific group of hosts.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews-route
spec:
hosts:
- reviews.prod.svc.cluster.local
http:
- match:
- uri:
prefix: "/wpcatalog"
route:
- destination:
host: reviews.prod.svc.cluster.local
subset: v2
- match:
- uri:
prefix: "/catalog"
route:
- destination:
host: reviews.prod.svc.cluster.local
subset: v1
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: reviews-destination
spec:
host: reviews.prod.svc.cluster.local
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: product-route
spec:
hosts:
- reviews.prod.svc.cluster.local
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: reviews.prod.svc.cluster.local
subset: user1
- match:
- headers:
end-user:
exact: jack
route:
- destination:
host: reviews.prod.svc.cluster.local
subset: user2
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: product-destination
spec:
host: reviews.prod.svc.cluster.local
subsets:
- name: user1
labels:
userGroup: user1
- name: user2
labels:
userGroup: user2
In order to make both of them effective, by the current design, we have to define them by listing all possible combinations in one VirtualService and DestinationRule, as follows:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: cross-destination
spec:
host: reviews.prod.svc.cluster.local
subsets:
- name: user1-v1
labels:
userGroup: user1
version: v1
- name: user2-v1
labels:
userGroup: user2
version: v1
- name: user1-v2
labels:
userGroup: user1
version: v2
- name: user2-v2
labels:
userGroup: user2
version: v2
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: cross-route
spec:
hosts:
- reviews.prod.svc.cluster.local
http:
- match:
- headers:
end-user:
exact: jason
uri:
prefix: "/wpcatalog"
route:
- destination:
host: reviews.prod.svc.cluster.local
subset: user1-v2
- match:
- headers:
end-user:
exact: jack
uri:
prefix: "/wpcatalog"
route:
- destination:
host: reviews.prod.svc.cluster.local
subset: user2-v2
- match:
- headers:
end-user:
exact: jason
uri:
prefix: "/catalog"
route:
- destination:
host: reviews.prod.svc.cluster.local
subset: user1-v1
- match:
- headers:
end-user:
exact: jack
uri:
prefix: "/catalog"
route:
- destination:
host: reviews.prod.svc.cluster.local
subset: user2-v1
And when it comes to more possibilities, the complexity and redundancy of the configuration is quite obvious, as the picture below shows:
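The combinatorial growth of the flattened configuration can be sketched with a short script (the rule and subset names are the illustrative ones from above, not part of the Istio API):

```python
from itertools import product

# Two independent routing dimensions, each with its own subsets,
# mirroring the "version" and "userGroup" DestinationRules above.
rules = {
    "version": ["v1", "v2"],
    "userGroup": ["user1", "user2"],
}

# Flattening the rules into one DestinationRule requires one subset
# per combination of labels across all dimensions, so the subset
# count is the product of the per-rule subset counts.
combined_subsets = [
    dict(zip(rules.keys(), combo)) for combo in product(*rules.values())
]

print(len(combined_subsets))  # 2 x 2 -> 4 combined subsets
```

With four rules of 10+ subsets each, as in the reported case, this product exceeds 10,000 subsets, which is why the flattened configuration becomes unreadable.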
IC, seems dup of https://github.com/istio/istio/issues/22997
IC, seems dup of #22997
Thanks for your input. It seems that combine #28914 and #22997 together, the use case we have is fully covered.
Lets track this in https://github.com/istio/istio/issues/22997. Thanks!
| gharchive/issue | 2020-12-04T07:25:04 | 2025-04-01T06:44:33.701693 | {
"authors": [
"LynnnnnnnYang",
"howardjohn",
"hzxuzhonghu"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/29406",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
770273536 | Update to golang 1.16
TODO:
[ ] Bump binary in docker image
[x] ensure all tests pass
[x] Ensure benchmark is reasonable
[ ] Make use of new features
[ ] https://tip.golang.org/doc/go1.16#library-embed
agent binary size decreased: 38 MB -> 34 MB
--- FAIL: TestConvertResources (0.15s)
--- FAIL: TestConvertResources/tls (0.01s)
conversion_test.go:48: 1 error occurred:
* Gateway/gateway/default: spec.listeners.tls.routeOverride.certificate: Invalid value: "null": spec.listeners.tls.routeOverride.certificate in body must be of type string: "null"
FAIL
Single unit test failure; I think it's just relying on implementation details of some error messages. Should be an easy fix.
name old time/op new time/op delta
RouteGeneration/gateways-6 47.5ms ± 1% 43.5ms ± 1% -8.50% (p=0.008 n=5+5)
RouteGeneration/gateways-shared-6 193ms ± 2% 176ms ± 1% -8.65% (p=0.008 n=5+5)
RouteGeneration/empty-6 921µs ± 2% 870µs ± 2% -5.54% (p=0.008 n=5+5)
RouteGeneration/tls-6 934µs ± 4% 881µs ± 3% -5.64% (p=0.016 n=5+5)
RouteGeneration/telemetry-6 929µs ± 2% 876µs ± 3% -5.67% (p=0.008 n=5+5)
RouteGeneration/virtualservice-6 1.94ms ± 1% 1.83ms ± 2% -5.22% (p=0.008 n=5+5)
ClusterGeneration/gateways-6 11.6ms ± 1% 11.0ms ± 1% -5.24% (p=0.008 n=5+5)
ClusterGeneration/gateways-shared-6 11.5ms ± 1% 11.0ms ± 3% -4.38% (p=0.008 n=5+5)
ClusterGeneration/empty-6 7.19ms ± 1% 6.73ms ± 2% -6.39% (p=0.008 n=5+5)
ClusterGeneration/tls-6 6.99ms ± 2% 6.55ms ± 1% -6.32% (p=0.008 n=5+5)
ClusterGeneration/telemetry-6 4.82ms ± 1% 4.58ms ± 1% -4.95% (p=0.008 n=5+5)
ClusterGeneration/virtualservice-6 1.18ms ± 2% 1.13ms ± 1% -3.95% (p=0.008 n=5+5)
ListenerGeneration/gateways-6 17.5ms ± 2% 15.9ms ± 0% -9.43% (p=0.008 n=5+5)
ListenerGeneration/gateways-shared-6 34.7µs ± 2% 32.0µs ± 1% -7.60% (p=0.008 n=5+5)
ListenerGeneration/empty-6 1.41ms ± 3% 1.37ms ± 1% ~ (p=0.056 n=5+5)
ListenerGeneration/tls-6 1.33ms ± 1% 1.32ms ± 1% ~ (p=0.310 n=5+5)
ListenerGeneration/telemetry-6 1.52ms ± 2% 1.49ms ± 1% ~ (p=0.056 n=5+5)
ListenerGeneration/virtualservice-6 109µs ± 1% 101µs ± 1% -7.13% (p=0.008 n=5+5)
NameTableGeneration/gateways-6 31.2ns ± 4% 31.5ns ± 1% ~ (p=0.310 n=5+5)
NameTableGeneration/gateways-shared-6 31.9ns ± 5% 31.7ns ± 2% ~ (p=0.841 n=5+5)
NameTableGeneration/empty-6 1.38µs ± 1% 1.20µs ± 1% -13.21% (p=0.008 n=5+5)
NameTableGeneration/tls-6 1.38µs ± 3% 1.18µs ± 2% -14.80% (p=0.008 n=5+5)
NameTableGeneration/telemetry-6 1.38µs ± 2% 1.16µs ± 3% -15.83% (p=0.008 n=5+5)
NameTableGeneration/virtualservice-6 1.38µs ± 3% 1.16µs ± 1% -15.96% (p=0.008 n=5+5)
SecretGeneration/secrets-10-6 36.4µs ± 1% 34.0µs ± 1% -6.62% (p=0.008 n=5+5)
SecretGeneration/secrets-1000-6 3.83ms ± 2% 3.42ms ± 2% -10.67% (p=0.008 n=5+5)
EndpointGeneration/1/100-6 285µs ± 1% 279µs ± 1% -1.99% (p=0.008 n=5+5)
EndpointGeneration/10/10-6 94.0µs ± 2% 88.5µs ± 6% -5.90% (p=0.008 n=5+5)
EndpointGeneration/100/10-6 735µs ± 3% 651µs ± 5% -11.40% (p=0.008 n=5+5)
EndpointGeneration/1000/1-6 681µs ± 7% 648µs ± 5% ~ (p=0.095 n=5+5)
name old kb/msg new kb/msg delta
RouteGeneration/gateways-6 2.67k ± 0% 2.67k ± 0% ~ (all equal)
RouteGeneration/gateways-shared-6 12.6k ± 0% 12.6k ± 0% ~ (all equal)
RouteGeneration/empty-6 31.8 ± 0% 31.8 ± 0% -0.13% (p=0.008 n=5+5)
RouteGeneration/tls-6 31.8 ± 0% 31.8 ± 0% -0.13% (p=0.008 n=5+5)
RouteGeneration/telemetry-6 31.8 ± 0% 31.8 ± 0% -0.13% (p=0.008 n=5+5)
RouteGeneration/virtualservice-6 80.7 ± 0% 80.7 ± 0% +0.05% (p=0.008 n=5+5)
ClusterGeneration/gateways-6 293 ± 0% 293 ± 0% +0.03% (p=0.008 n=5+5)
ClusterGeneration/gateways-shared-6 293 ± 0% 293 ± 0% +0.03% (p=0.008 n=5+5)
ClusterGeneration/empty-6 270 ± 0% 270 ± 0% -0.11% (p=0.008 n=5+5)
ClusterGeneration/tls-6 265 ± 0% 265 ± 0% ~ (all equal)
ClusterGeneration/telemetry-6 120 ± 0% 120 ± 0% -0.17% (p=0.008 n=5+5)
ClusterGeneration/virtualservice-6 29.4 ± 0% 29.4 ± 0% +0.07% (p=0.008 n=5+5)
ListenerGeneration/gateways-6 1.23k ± 0% 1.23k ± 0% ~ (all equal)
ListenerGeneration/gateways-shared-6 2.17 ± 0% 2.17 ± 0% -0.18% (p=0.008 n=5+5)
ListenerGeneration/empty-6 11.6 ± 0% 11.6 ± 0% -0.43% (p=0.008 n=5+5)
ListenerGeneration/tls-6 8.49 ± 0% 8.49 ± 0% +0.05% (p=0.008 n=5+5)
ListenerGeneration/telemetry-6 11.6 ± 0% 11.6 ± 0% -0.43% (p=0.008 n=5+5)
ListenerGeneration/virtualservice-6 4.96 ± 0% 4.96 ± 0% +0.02% (p=0.008 n=5+5)
NameTableGeneration/gateways-6 0.00 0.00 ~ (all equal)
NameTableGeneration/gateways-shared-6 0.00 0.00 ~ (all equal)
NameTableGeneration/empty-6 0.00 0.00 ~ (all equal)
NameTableGeneration/tls-6 0.00 0.00 ~ (all equal)
NameTableGeneration/telemetry-6 0.00 0.00 ~ (all equal)
NameTableGeneration/virtualservice-6 0.00 0.00 ~ (all equal)
SecretGeneration/secrets-10-6 46.0 ± 0% 46.0 ± 0% +0.02% (p=0.008 n=5+5)
SecretGeneration/secrets-1000-6 4.60k ± 0% 4.60k ± 0% ~ (all equal)
EndpointGeneration/1/100-6 5.79 ± 0% 5.79 ± 0% ~ (all equal)
EndpointGeneration/10/10-6 2.83 ± 0% 2.83 ± 0% ~ (all equal)
EndpointGeneration/100/10-6 26.2 ± 0% 26.2 ± 0% +0.11% (p=0.008 n=5+5)
EndpointGeneration/1000/1-6 26.6 ± 0% 26.6 ± 0% -0.04% (p=0.008 n=5+5)
name old resources/msg new resources/msg delta
RouteGeneration/gateways-6 1.00k ± 0% 1.00k ± 0% ~ (all equal)
RouteGeneration/gateways-shared-6 2.00 ± 0% 2.00 ± 0% ~ (all equal)
RouteGeneration/empty-6 2.00 ± 0% 2.00 ± 0% ~ (all equal)
RouteGeneration/tls-6 2.00 ± 0% 2.00 ± 0% ~ (all equal)
RouteGeneration/telemetry-6 2.00 ± 0% 2.00 ± 0% ~ (all equal)
RouteGeneration/virtualservice-6 1.00 ± 0% 1.00 ± 0% ~ (all equal)
ClusterGeneration/gateways-6 1.00k ± 0% 1.00k ± 0% ~ (all equal)
ClusterGeneration/gateways-shared-6 1.00k ± 0% 1.00k ± 0% ~ (all equal)
ClusterGeneration/empty-6 410 ± 0% 410 ± 0% ~ (all equal)
ClusterGeneration/tls-6 410 ± 0% 410 ± 0% ~ (all equal)
ClusterGeneration/telemetry-6 410 ± 0% 410 ± 0% ~ (all equal)
ClusterGeneration/virtualservice-6 104 ± 0% 104 ± 0% ~ (all equal)
ListenerGeneration/gateways-6 2.00 ± 0% 2.00 ± 0% ~ (all equal)
ListenerGeneration/gateways-shared-6 2.00 ± 0% 2.00 ± 0% ~ (all equal)
ListenerGeneration/empty-6 6.00 ± 0% 6.00 ± 0% ~ (all equal)
ListenerGeneration/tls-6 6.00 ± 0% 6.00 ± 0% ~ (all equal)
ListenerGeneration/telemetry-6 6.00 ± 0% 6.00 ± 0% ~ (all equal)
ListenerGeneration/virtualservice-6 3.00 ± 0% 3.00 ± 0% ~ (all equal)
NameTableGeneration/gateways-6 0.00 0.00 ~ (all equal)
NameTableGeneration/gateways-shared-6 0.00 0.00 ~ (all equal)
NameTableGeneration/empty-6 1.00 ± 0% 1.00 ± 0% ~ (all equal)
NameTableGeneration/tls-6 1.00 ± 0% 1.00 ± 0% ~ (all equal)
NameTableGeneration/telemetry-6 1.00 ± 0% 1.00 ± 0% ~ (all equal)
NameTableGeneration/virtualservice-6 1.00 ± 0% 1.00 ± 0% ~ (all equal)
SecretGeneration/secrets-10-6 10.0 ± 0% 10.0 ± 0% ~ (all equal)
SecretGeneration/secrets-1000-6 1.00k ± 0% 1.00k ± 0% ~ (all equal)
EndpointGeneration/1/100-6 100 ± 0% 100 ± 0% ~ (all equal)
EndpointGeneration/10/10-6 10.0 ± 0% 10.0 ± 0% ~ (all equal)
EndpointGeneration/100/10-6 10.0 ± 0% 10.0 ± 0% ~ (all equal)
EndpointGeneration/1000/1-6 1.00 ± 0% 1.00 ± 0% ~ (all equal)
name old alloc/op new alloc/op delta
RouteGeneration/gateways-6 17.0MB ± 0% 16.8MB ± 0% -1.23% (p=0.008 n=5+5)
RouteGeneration/gateways-shared-6 69.3MB ± 0% 68.4MB ± 0% -1.31% (p=0.008 n=5+5)
RouteGeneration/empty-6 605kB ± 0% 604kB ± 0% -0.04% (p=0.016 n=5+4)
RouteGeneration/tls-6 605kB ± 0% 604kB ± 0% ~ (p=0.056 n=5+5)
RouteGeneration/telemetry-6 606kB ± 0% 606kB ± 0% ~ (p=0.841 n=5+5)
RouteGeneration/virtualservice-6 890kB ± 0% 884kB ± 0% -0.76% (p=0.008 n=5+5)
ClusterGeneration/gateways-6 5.01MB ± 0% 4.96MB ± 1% -1.02% (p=0.008 n=5+5)
ClusterGeneration/gateways-shared-6 5.01MB ± 0% 4.97MB ± 0% -0.96% (p=0.008 n=5+5)
ClusterGeneration/empty-6 2.79MB ± 0% 2.76MB ± 0% -0.94% (p=0.008 n=5+5)
ClusterGeneration/tls-6 2.77MB ± 0% 2.75MB ± 0% -0.71% (p=0.008 n=5+5)
ClusterGeneration/telemetry-6 2.03MB ± 0% 2.02MB ± 0% -0.94% (p=0.008 n=5+5)
ClusterGeneration/virtualservice-6 505kB ± 0% 501kB ± 0% -0.98% (p=0.008 n=5+5)
ListenerGeneration/gateways-6 7.11MB ± 1% 7.10MB ± 0% ~ (p=0.310 n=5+5)
ListenerGeneration/gateways-shared-6 14.9kB ± 0% 14.9kB ± 0% ~ (p=0.421 n=5+5)
ListenerGeneration/empty-6 744kB ± 0% 747kB ± 0% +0.38% (p=0.008 n=5+5)
ListenerGeneration/tls-6 714kB ± 0% 717kB ± 0% +0.40% (p=0.008 n=5+5)
ListenerGeneration/telemetry-6 781kB ± 0% 783kB ± 0% +0.31% (p=0.008 n=5+5)
ListenerGeneration/virtualservice-6 61.4kB ± 0% 61.3kB ± 0% -0.17% (p=0.000 n=4+5)
NameTableGeneration/gateways-6 0.00B 0.00B ~ (all equal)
NameTableGeneration/gateways-shared-6 0.00B 0.00B ~ (all equal)
NameTableGeneration/empty-6 264B ± 0% 264B ± 0% ~ (all equal)
NameTableGeneration/tls-6 264B ± 0% 264B ± 0% ~ (all equal)
NameTableGeneration/telemetry-6 265B ± 0% 264B ± 0% ~ (p=0.167 n=5+5)
NameTableGeneration/virtualservice-6 264B ± 0% 264B ± 0% ~ (all equal)
SecretGeneration/secrets-10-6 60.3kB ± 0% 60.2kB ± 0% -0.28% (p=0.008 n=5+5)
SecretGeneration/secrets-1000-6 6.01MB ± 0% 6.00MB ± 0% ~ (p=0.651 n=5+5)
EndpointGeneration/1/100-6 101kB ± 0% 99kB ± 0% -1.58% (p=0.008 n=5+5)
EndpointGeneration/10/10-6 14.7kB ± 0% 14.5kB ± 0% -1.03% (p=0.008 n=5+5)
EndpointGeneration/100/10-6 63.4kB ± 0% 63.2kB ± 0% -0.27% (p=0.008 n=5+5)
EndpointGeneration/1000/1-6 53.2kB ± 0% 53.1kB ± 0% ~ (p=0.087 n=5+5)
name old allocs/op new allocs/op delta
RouteGeneration/gateways-6 210k ± 0% 210k ± 0% ~ (p=0.079 n=5+5)
RouteGeneration/gateways-shared-6 907k ± 0% 908k ± 0% ~ (p=1.000 n=5+5)
RouteGeneration/empty-6 6.11k ± 0% 6.12k ± 0% ~ (p=0.794 n=5+4)
RouteGeneration/tls-6 6.12k ± 0% 6.12k ± 0% ~ (p=0.714 n=5+5)
RouteGeneration/telemetry-6 6.13k ± 0% 6.13k ± 0% ~ (p=0.611 n=5+5)
RouteGeneration/virtualservice-6 11.1k ± 0% 11.1k ± 0% -0.01% (p=0.029 n=4+4)
ClusterGeneration/gateways-6 81.4k ± 0% 81.4k ± 0% ~ (p=1.000 n=5+5)
ClusterGeneration/gateways-shared-6 81.6k ± 0% 81.6k ± 0% ~ (p=1.000 n=5+5)
ClusterGeneration/empty-6 41.8k ± 0% 41.8k ± 0% -0.01% (p=0.008 n=5+5)
ClusterGeneration/tls-6 41.4k ± 0% 41.4k ± 0% -0.01% (p=0.008 n=5+5)
ClusterGeneration/telemetry-6 33.4k ± 0% 33.4k ± 0% ~ (p=0.286 n=4+5)
ClusterGeneration/virtualservice-6 8.31k ± 0% 8.31k ± 0% ~ (p=0.167 n=5+5)
ListenerGeneration/gateways-6 85.4k ± 0% 85.0k ± 0% -0.42% (p=0.008 n=5+5)
ListenerGeneration/gateways-shared-6 215 ± 0% 214 ± 0% ~ (p=0.095 n=5+4)
ListenerGeneration/empty-6 14.5k ± 0% 14.2k ± 0% -2.24% (p=0.000 n=4+5)
ListenerGeneration/tls-6 14.2k ± 0% 13.9k ± 0% -2.21% (p=0.029 n=4+4)
ListenerGeneration/telemetry-6 15.1k ± 0% 14.8k ± 0% -2.17% (p=0.000 n=4+5)
ListenerGeneration/virtualservice-6 831 ± 0% 824 ± 0% -0.84% (p=0.008 n=5+5)
NameTableGeneration/gateways-6 0.00 0.00 ~ (all equal)
NameTableGeneration/gateways-shared-6 0.00 0.00 ~ (all equal)
NameTableGeneration/empty-6 6.00 ± 0% 6.00 ± 0% ~ (all equal)
NameTableGeneration/tls-6 6.00 ± 0% 6.00 ± 0% ~ (all equal)
NameTableGeneration/telemetry-6 6.00 ± 0% 6.00 ± 0% ~ (all equal)
NameTableGeneration/virtualservice-6 6.00 ± 0% 6.00 ± 0% ~ (all equal)
SecretGeneration/secrets-10-6 189 ± 0% 189 ± 0% ~ (all equal)
SecretGeneration/secrets-1000-6 18.0k ± 0% 18.0k ± 0% ~ (p=0.556 n=4+5)
EndpointGeneration/1/100-6 1.81k ± 0% 1.71k ± 0% -5.47% (p=0.008 n=5+5)
EndpointGeneration/10/10-6 187 ± 0% 178 ± 0% -4.81% (p=0.008 n=5+5)
EndpointGeneration/100/10-6 192 ± 0% 182 ± 0% -5.01% (p=0.000 n=5+4)
EndpointGeneration/1000/1-6 25.6 ± 2% 25.0 ± 0% ~ (p=0.095 n=5+4)
wow, big increase
ETA for golang 1.16 is 2/1. Istio release date is 2/9 with a code freeze of 2/1. If go releases late that’ll delay istio. I can see reasons for/against. I'm fine with it myself, but delaying the release if we are waiting on go could be problematic.
Agreed, I am on the fence. If we do it for 1.16, lets start updating master to the beta/rc of golang so the update in code freeze is minimal. FWIW k8s had the exact same dilemma with go 1.15 and they went for the upgrade - but I have no clue if they are happy with that decision and what different factors they have (for one, longer release cycle - I don't think go 1.15 will go out of support for us)
cc @shamsher31 @brian-avery @ericvn @stevenctl Thoughts? Looking at https://endoflife.date/go 1.15 will be suported until 1.17 is released (ETA August)
John, do you know anyone on the k8s team we could ask about that experience? Some of the features would be nice to get and I know there's been work in the x509 library which we use.
Looking back at Istio releases, we typically move up a Golang version every 2 releases. This makes sense since they release on a roughly 6-month cycle to our 3-month cycle. Continuing with that, we would release Istio 1.9 on Golang 1.15 (we released 1.8 on 1.15). What this does is make the Istio end of support roughly line up with the Golang end of support every other release. For example, Golang 1.13 (used in Istio 1.4 and 1.5) end of support would have been 8/11/20 and the Istio 1.5 end of support was 8/24/20. If we tried to move up a release so Istio 1.9 and 1.10 used Golang 1.16, we are better on the back end, but have an issue if Golang isn't released on time. And we would hit a similar problem in August with the release of Golang 1.19 and Istio 1.11.
I'm probably more comfortable leaving us on our current schedule, where we are close on the back-end dates and not putting a release at risk, which implies we stay on Golang 1.15 for two Istio releases as we have done previously.
@howardjohn Is this complete and able to be closed?
yes
| gharchive/issue | 2020-12-17T18:30:49 | 2025-04-01T06:44:33.716498 | {
"authors": [
"ericvn",
"howardjohn",
"jacob-delgado"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/29693",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
871446962 | [release-1.10] convert gateway test to be workload agnostic
Manual cherrypick required.
#32095 failed to apply on top of branch "release-1.10":
Applying: remove SkippedForMulticluster
Applying: convert gateway test to be workload agnostic
Using index info to reconstruct a base tree...
M pkg/test/framework/components/echo/echotest/run.go
M pkg/test/framework/components/istio/ingress.go
M pkg/test/framework/components/istio/ingress/interface.go
M tests/integration/pilot/common/routing.go
M tests/integration/pilot/common/traffic.go
M tests/integration/security/sds_ingress/util/util.go
M tests/integration/security/util/authn/authn_util.go
Falling back to patching base and 3-way merge...
Auto-merging tests/integration/security/util/authn/authn_util.go
Auto-merging tests/integration/security/sds_ingress/util/util.go
Auto-merging tests/integration/pilot/common/traffic.go
Auto-merging tests/integration/pilot/common/routing.go
CONFLICT (content): Merge conflict in tests/integration/pilot/common/routing.go
Auto-merging pkg/test/framework/components/istio/ingress/interface.go
Auto-merging pkg/test/framework/components/istio/ingress.go
Auto-merging pkg/test/framework/components/echo/echotest/run.go
CONFLICT (content): Merge conflict in pkg/test/framework/components/echo/echotest/run.go
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0002 convert gateway test to be workload agnostic
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".
@stevenctl are you planning to backport manually?
| gharchive/issue | 2021-04-29T20:00:40 | 2025-04-01T06:44:33.719596 | {
"authors": [
"howardjohn",
"istio-testing"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/32546",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
302393397 | Need tracing e2e test
Istio currently doesn't have anything that verifies tracing data in Zipkin that is generated per request (including ingress and Mixer spans). One should be added.
Doug, OK for me to try taking this for the fixit week?
@smawson absolutely. I was going to start on it if I was blocked on current paths later this week, but am happy to have someone else own this. Enjoy!
@douglas-reid There is an existing zipkin_test.go under the pilot e2e tests that seems to verify zipkin tracing, is more needed beyond what that is doing?
@smawson I think we want one that uses a more complex app deployment than the pilot e2e setup for a few reasons:
Validating Mixer tracing
User reports of often only getting one span per trace
Future work on configuring trace sampling (ability to turn off tracing for particular destinations, setting tracing to 1%).
I don't believe the zipkin_test.go will cover all of that.
| gharchive/issue | 2018-03-05T17:35:04 | 2025-04-01T06:44:33.722744 | {
"authors": [
"douglas-reid",
"smawson"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/3947",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
354742114 | Example of using certmanager
Describe the feature request
It would be very helpful to have an example of using the certmanager option in the documentation of Istio.
Additional context
Currently, we have the options for leveraging it, but there is no corresponding example/task in the documentation.
@geeknoid can you move this to the istio.io repo or @EddiG would you mind filing this issue here: https://github.com/istio/istio.github.io/issues/new
Cheers
-steve
| gharchive/issue | 2018-08-28T13:53:34 | 2025-04-01T06:44:33.725757 | {
"authors": [
"EddiG",
"sdake"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/8312",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
271674839 | bazel_to_go: do not overwrite the files if the generated ones are same
What this PR does / why we need it:
Currently bazel_to_go.py always overwrites the generated files, but that could be problematic: sometimes the generated files have meaningless differences, like the names of the source files. With this PR, bazel_to_go.py checks their contents and makes the copy only when there are changes, and it explicitly excludes the source-file names from the comparison.
With this PR, I think presubmit should now always invoke bazel_to_go and then check whether there are meaningful changes in the generated files -- presubmit should fail in such a case; that would be done by another PR.
Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): updates #1515
Special notes for your reviewer:
Release note:
none
@mandarjog, please try this in your environment -- I actually haven't set up an environment with different source paths yet.
/retest
@jmuk there are always changes in the FileDescriptorProto, so just checking proto.RegisterFile is not enough.
I have verified that this is indeed the case.
😢 It's harder to ignore that part of the descriptor bytes in the go file with this scheme; let me think.
@mandarjog
added a few more patterns, I think this captures your points. Please take another look.
I'm confused - which files are the problem? The .pb.go files have relative paths; simply cp from bazel-genfiles is enough and git will not find changes.
ask @mandarjog for the details; I've heard that some relative paths can be different on some platforms (like starting from bazel-genfiles or including local-fastbuild or not, etc). I haven't seen this in my environment though.
@jmuk: The following test failed, say /retest to rerun them all:
Test name
Commit
Details
Rerun command
prow/e2e-suite-rbac-no_auth.sh
906a40ec46dc28ba81975f4f4758f13d394268c9
link
/test e2e-suite-rbac-no_auth
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@jmuk: The following tests failed, say /retest to rerun them all:
Test name
Commit
Details
Rerun command
prow/e2e-suite-rbac-no_auth.sh
906a40ec46dc28ba81975f4f4758f13d394268c9
link
/test e2e-suite-rbac-no_auth
prow/e2e-suite-rbac-auth.sh
906a40ec46dc28ba81975f4f4758f13d394268c9
link
/test e2e-suite-rbac-auth
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/lgtm
@jmuk it works on a Mac now. Thanks.
/test e2e-suite-rbac-auth e2e-suite-rbac-no_auth
| gharchive/pull-request | 2017-11-07T01:01:47 | 2025-04-01T06:44:33.739581 | {
"authors": [
"costinm",
"istio-testing",
"jmuk",
"mandarjog",
"yutongz"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/1570",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
547231672 | Update istioctl webhook injector service name
Fixes https://github.com/istio/istio/issues/20003
Injection is now handled by Pilot by default
@howardjohn: The following test failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
pilot-multicluster-e2e_istio
e6f16d52eff294f0bb35f7a4d54c73ce31e7a011
link
/test pilot-multicluster-e2e_istio
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
| gharchive/pull-request | 2020-01-09T03:51:18 | 2025-04-01T06:44:33.744134 | {
"authors": [
"howardjohn",
"istio-testing"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/20019",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
800832247 | WIP: Fix loop in DNS traffic
Fixes https://github.com/istio/istio/issues/30309
Not ready for review yet
Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all
/test all
@howardjohn: The following test failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
release-notes_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test release-notes_istio
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@howardjohn: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
release-notes_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test release-notes_istio
lint_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test lint_istio
@howardjohn: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
release-notes_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test release-notes_istio
lint_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test lint_istio
integ-pilot-multicluster-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-pilot-multicluster-tests_istio
@howardjohn: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
release-notes_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test release-notes_istio
lint_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test lint_istio
integ-pilot-multicluster-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-pilot-multicluster-tests_istio
unit-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test unit-tests_istio
@howardjohn: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
release-notes_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test release-notes_istio
lint_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test lint_istio
integ-pilot-multicluster-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-pilot-multicluster-tests_istio
unit-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test unit-tests_istio
integ-telemetry-mc-k8s-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-telemetry-mc-k8s-tests_istio
@howardjohn: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
release-notes_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test release-notes_istio
lint_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test lint_istio
integ-pilot-multicluster-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-pilot-multicluster-tests_istio
unit-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test unit-tests_istio
integ-telemetry-mc-k8s-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-telemetry-mc-k8s-tests_istio
integ-pilot-k8s-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-pilot-k8s-tests_istio
/test all
@howardjohn: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
lint_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test lint_istio
integ-pilot-multicluster-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-pilot-multicluster-tests_istio
unit-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test unit-tests_istio
integ-telemetry-mc-k8s-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-telemetry-mc-k8s-tests_istio
integ-pilot-k8s-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-pilot-k8s-tests_istio
release-notes_istio
eefca6190da35a331f54d3a4d754e94a604516eb
link
/test release-notes_istio
@howardjohn: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
integ-pilot-multicluster-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-pilot-multicluster-tests_istio
unit-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test unit-tests_istio
integ-telemetry-mc-k8s-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-telemetry-mc-k8s-tests_istio
integ-pilot-k8s-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-pilot-k8s-tests_istio
release-notes_istio
eefca6190da35a331f54d3a4d754e94a604516eb
link
/test release-notes_istio
lint_istio
eefca6190da35a331f54d3a4d754e94a604516eb
link
/test lint_istio
@howardjohn: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
integ-pilot-multicluster-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-pilot-multicluster-tests_istio
integ-telemetry-mc-k8s-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-telemetry-mc-k8s-tests_istio
integ-pilot-k8s-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-pilot-k8s-tests_istio
release-notes_istio
eefca6190da35a331f54d3a4d754e94a604516eb
link
/test release-notes_istio
lint_istio
eefca6190da35a331f54d3a4d754e94a604516eb
link
/test lint_istio
unit-tests_istio
eefca6190da35a331f54d3a4d754e94a604516eb
link
/test unit-tests_istio
@howardjohn: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
integ-pilot-multicluster-tests_istio
0868bed00822d41daeddcabcf48359f4d8401fa4
link
/test integ-pilot-multicluster-tests_istio
release-notes_istio
eefca6190da35a331f54d3a4d754e94a604516eb
link
/test release-notes_istio
lint_istio
eefca6190da35a331f54d3a4d754e94a604516eb
link
/test lint_istio
unit-tests_istio
eefca6190da35a331f54d3a4d754e94a604516eb
link
/test unit-tests_istio
integ-pilot-k8s-tests_istio
eefca6190da35a331f54d3a4d754e94a604516eb
link
/test integ-pilot-k8s-tests_istio
@stevenctl I think it would be great to have in 1.9. However, given it's not a trivial change to the iptables, it may be risky to get into a 1.9.1+ patch. So ideally it would be 1.9.0 if possible; your call? I know it's very late.
Given John's comment I agree with Steven that we should prioritize this and try and get this into 1.9.0.
In response to a cherrypick label: #30647 failed to apply on top of branch "release-1.9":
Applying: Fix DNS loop when there is a local DNS proxy
Using index info to reconstruct a base tree...
M tests/integration/pilot/common/routing.go
M tools/istio-iptables/pkg/cmd/root.go
M tools/istio-iptables/pkg/cmd/run.go
M tools/istio-iptables/pkg/config/config.go
Falling back to patching base and 3-way merge...
Auto-merging tools/istio-iptables/pkg/config/config.go
CONFLICT (content): Merge conflict in tools/istio-iptables/pkg/config/config.go
Auto-merging tools/istio-iptables/pkg/cmd/run.go
Auto-merging tools/istio-iptables/pkg/cmd/root.go
Auto-merging tests/integration/pilot/common/routing.go
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 Fix DNS loop when there is a local DNS proxy
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".
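The recovery flow that `git am` hints at above can be demonstrated end to end. This is a self-contained sketch in a throwaway repo (the real case is istio's release-1.9 branch, and the file names and commit message below are taken from this log): the patch fails to apply, the conflict is resolved by hand, and `git am --continue` finishes the cherry-pick.

```shell
# Demo: a mailbox patch conflicts on apply; resolve, stage, then continue.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci
base_branch=$(git symbolic-ref --short HEAD)
echo "DNSCapture = false" > config.go
git add config.go && git commit -qm "base"
# The fix we want to cherry-pick, exported as a mailbox patch.
git checkout -qb fix
echo "DNSCapture = true" > config.go
git commit -qam "Fix DNS loop when there is a local DNS proxy"
git format-patch -q -1 -o "$repo/patches" HEAD
# The release branch has diverged, so the patch conflicts.
git checkout -q "$base_branch"
echo "DNSUpstream = true" > config.go
git commit -qam "diverged"
git am -3 "$repo"/patches/0001-* || true   # stops with a merge conflict
# Resolve by hand, stage the file, then continue as the hint suggests.
echo "DNSCapture = true" > config.go
git add config.go
git am --continue
git log -1 --pretty=%s   # → Fix DNS loop when there is a local DNS proxy
```

`git am --skip` and `git am --abort` (also shown in the log above) are the alternatives when the patch should be dropped or the whole cherry-pick rolled back.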
In response to a cherrypick label: new issue created for failed cherrypick: #30699
| gharchive/pull-request | 2021-02-04T00:58:50 | 2025-04-01T06:44:33.806457 | {
"authors": [
"howardjohn",
"istio-testing",
"jacob-delgado"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/30647",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
839047108 | Remove thrift filter support in pilot
This was added a few releases back as experimental and was never fully
completed. After discussion with the original author and the networking
WG, we have decided to remove this as there is no owner to drive its
completion.
Note: https://github.com/aeraki-framework/aeraki supports Thrift if
desired.
[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
Pull Request Attributes
Please check any characteristics that apply to this pull request.
[ ] Does not have any changes that may affect Istio users.
@howardjohn: The following test failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
lint_istio
695e141c217655989c9127c6fc439389a82f8465
link
/test lint_istio
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
| gharchive/pull-request | 2021-03-23T19:26:31 | 2025-04-01T06:44:33.813173 | {
"authors": [
"howardjohn",
"istio-testing"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/31661",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
921188886 | cache blackhole and passthrough clusters
These two clusters only depend on meshconfig.
[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[x] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
Pull Request Attributes
Please check any characteristics that apply to this pull request.
[x] Does not have any user-facing changes. This may include CLI changes, API changes, behavior changes, performance improvements, etc.
@hzxuzhonghu: The following test failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
gencheck_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test gencheck_istio
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@hzxuzhonghu: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
gencheck_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test gencheck_istio
integ-ipv6-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-ipv6-k8s-tests_istio
@hzxuzhonghu: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
gencheck_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test gencheck_istio
integ-ipv6-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-ipv6-k8s-tests_istio
integ-distroless-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-distroless-k8s-tests_istio
@hzxuzhonghu: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
gencheck_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test gencheck_istio
integ-ipv6-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-ipv6-k8s-tests_istio
integ-distroless-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-distroless-k8s-tests_istio
integ-pilot-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-pilot-k8s-tests_istio
@hzxuzhonghu: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
gencheck_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test gencheck_istio
integ-ipv6-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-ipv6-k8s-tests_istio
integ-distroless-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-distroless-k8s-tests_istio
integ-pilot-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-pilot-k8s-tests_istio
integ-security-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-security-k8s-tests_istio
@hzxuzhonghu: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
gencheck_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test gencheck_istio
integ-ipv6-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-ipv6-k8s-tests_istio
integ-distroless-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-distroless-k8s-tests_istio
integ-pilot-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-pilot-k8s-tests_istio
integ-security-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-security-k8s-tests_istio
integ-pilot-multicluster-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-pilot-multicluster-tests_istio
@hzxuzhonghu: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
gencheck_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test gencheck_istio
integ-ipv6-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-ipv6-k8s-tests_istio
integ-distroless-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-distroless-k8s-tests_istio
integ-pilot-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-pilot-k8s-tests_istio
integ-security-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-security-k8s-tests_istio
integ-pilot-multicluster-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-pilot-multicluster-tests_istio
integ-security-multicluster-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-security-multicluster-tests_istio
@hzxuzhonghu: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
gencheck_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test gencheck_istio
integ-ipv6-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-ipv6-k8s-tests_istio
integ-distroless-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-distroless-k8s-tests_istio
integ-pilot-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-pilot-k8s-tests_istio
integ-security-k8s-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-security-k8s-tests_istio
integ-pilot-multicluster-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-pilot-multicluster-tests_istio
integ-security-multicluster-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-security-multicluster-tests_istio
unit-tests_istio
812cc610efbc23b03053a9e138270c7ac06c6b82
link
/test unit-tests_istio
@hzxuzhonghu: The following tests failed, say /retest to rerun all failed tests:
Test name
Commit
Details
Rerun command
integ-pilot-multicluster-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-pilot-multicluster-tests_istio
integ-security-multicluster-tests_istio
a2d3ac07a9b4f498448bb3315aa78723e03da100
link
/test integ-security-multicluster-tests_istio
unit-tests_istio
812cc610efbc23b03053a9e138270c7ac06c6b82
link
/test unit-tests_istio
integ-telemetry-k8s-tests_istio
812cc610efbc23b03053a9e138270c7ac06c6b82
link
/test integ-telemetry-k8s-tests_istio
I will test #33338 to have a look at what the performance improvement is.
I think it's better to run a benchmark to see if the complexity is worth it. I tried this in the past, but it did not give significant benefits, so I left it. Isn't this automatically handled by cluster caching?
Yes, if we have the full CDS cache, then this is not needed.
| gharchive/pull-request | 2021-06-15T09:19:38 | 2025-04-01T06:44:33.870451 | {
"authors": [
"hzxuzhonghu",
"istio-testing",
"ramaraochavali"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/33435",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1333260930 | Promoted istioctl x uninstall to istioctl uninstall
fixes #40339
Signed-off-by: Tong Li litong01@us.ibm.com
Please provide a description of this PR:
I'm thinking we should mark the experimental command deprecated and then remove it in the next minor release to give people time to move their scripts over. That's something the UX team likely has input/rules for.
@ericvn I've talked to @esnible and used softGraduatedCmd method, please see the latest commit.
In response to a cherrypick label: new pull request created: #40471
| gharchive/pull-request | 2022-08-09T13:28:32 | 2025-04-01T06:44:33.873229 | {
"authors": [
"ericvn",
"istio-testing",
"litong01"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/40362",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2398851799 | Drop/replace unused logging flag in istio-cni chart.
In the istio-cni chart:
values.cni.logLevel currently does nothing, mark it as deprecated.
values.global.logging.level is the value to use.
also introduce values.cni.logging.level as a component-level override for values.global.logging.level, to keep things consistent.
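The precedence described above can be sketched as a values file. The keys come from this PR's description, but the `default:debug` scope syntax and the chart path in the comment are assumptions to verify against the chart's values schema:

```shell
# Hypothetical values file showing the override order: the new component-level
# cni.logging.level wins over global.logging.level, and cni.logLevel is dead.
# Render it against the chart with e.g.:
#   helm template istio-cni manifests/charts/istio-cni -f /tmp/cni-logging-values.yaml
cat > /tmp/cni-logging-values.yaml <<'EOF'
global:
  logging:
    level: "default:info"    # mesh-wide default
cni:
  logLevel: ""               # deprecated by this PR: no longer does anything
  logging:
    level: "default:debug"   # component-level override of global.logging.level
EOF
grep 'level' /tmp/cni-logging-values.yaml
```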
@bleggett: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Test name
Commit
Details
Required
Rerun command
release-notes_istio
066fa5c14ec42c5669b1b377292c94dcbda58643
link
true
/test release-notes
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/test release-notes
| gharchive/pull-request | 2024-07-09T18:14:54 | 2025-04-01T06:44:33.878985 | {
"authors": [
"bleggett",
"istio-testing"
],
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/51972",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
247460072 | DO NOT MERGE! Fast Forward stable to 0754fe4a71945196649216c86dce2a888cb22295.
This PR will be merged automatically once checks are successful.
This change is
Jenkins job mixer/postsubmit passed
Jenkins job mixer/e2e-suite passed
| gharchive/pull-request | 2017-08-02T17:10:16 | 2025-04-01T06:44:33.881058 | {
"authors": [
"istio-testing"
],
"repo": "istio/mixer",
"url": "https://github.com/istio/mixer/pull/992",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
167497805 | Add to appstore
Can someone add this to the App Store? I have no macOS currently, just an AMD PC ;__;
I have submitted it to Beta Review, which should at least let us know what issues will arise/need to be fixed before it can be submitted to the AppStore... @istornz if it passes, I have no problem submitting it and including your copyright notice with it. I already pay the yearly fee for other projects anyways.
I have a paid account too. When the porting will be more advanced I will put the app online on the App Store.
any news about this? i just set up my own server and it would be very handy to download iPokeGo from the appstore <3
| gharchive/issue | 2016-07-26T00:48:34 | 2025-04-01T06:44:33.896564 | {
"authors": [
"HypeZ85",
"istornz",
"jankcat",
"zubr93"
],
"repo": "istornz/iPokeGo",
"url": "https://github.com/istornz/iPokeGo/issues/28",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
996036728 | restrict adding lowrole users to admin
dev
JIRA
Main JIRA ticket
Code reviewers
[ ] @github_username
Second Level Review
[ ] @github_username
Summary of issue
ToDo
Summary of change
ToDo
Testing approach
ToDo
CHECK LIST
[ ] CI passed
[ ] Code coverage >=95%
[ ] PR is reviewed manually again (to make sure you have 100% ready code)
[ ] All reviewers agreed to merge the PR
[ ] I've checked new feature as logged in and logged out user if needed
[ ] PR meets all conventions
What a masterpiece! Some really good work)
💩 thanks
| gharchive/pull-request | 2021-09-14T13:42:10 | 2025-04-01T06:44:33.910245 | {
"authors": [
"DmytroZelinsky"
],
"repo": "ita-social-projects/EPlast-Client",
"url": "https://github.com/ita-social-projects/EPlast-Client/pull/585",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
581111470 | BUG[FOOTER] The scrolling page is displayed in landing page after clicking on the 'My cabinet' link
Environment:
OS: Windows 10Pro, V.1909
Browser: Chrome 80.0.3987.122 / 64-bit
Steps to reproduce:
Go to 'Green city' application https://ita-social-projects.github.io/GreenCityClient/#/welcome
Scroll down to the bottom of the page
Click on 'My cabinet' link
Verify the displayed page
Actual result: The scrolling page is displayed on the landing page after clicking the 'My cabinet' link
Expected result: The 'Log in' form is displayed for an unlogged user on the landing page after clicking the 'My cabinet' link
#384
Video link:
https://drive.google.com/open?id=1dA_KE_pdkCOs5p9kzK-jgL6uFTPoNBmb
Bug not fixed.
A page appears instead of the 'Sign In' pop-up.
In the footer it says 'My habits' instead of 'My cabinet'.
The old Authorization functionality will be deleted; afterwards a link will be added so the button opens the Auth modal window instead of the old Auth page.
| gharchive/issue | 2020-03-14T08:44:43 | 2025-04-01T06:44:33.915016 | {
"authors": [
"KaterynaIlkiv",
"KimAT25",
"traviscolt"
],
"repo": "ita-social-projects/GreenCity",
"url": "https://github.com/ita-social-projects/GreenCity/issues/512",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1921517298 | 🎨 Hero Page Done #2
Description
Please include a summary of the changes and the related issue. Please also include relevant motivation and context. List any dependencies that are required for this change.
Reference the issue related to this PR as shown below. Every issue has it's own number you can find the issue numbers here.
Fixes # (issue) - Example: Fixes #10, #20 NOTE: GitHub issue numbers should be prefixed with # to aid with issue tracking.
Type of change
Please delete options that are not relevant.
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
Screenshot (optional)
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
[ ] Test A
[ ] Test B
Test Configuration: (optional)
Firmware version:
Hardware:
Toolchain:
SDK:
Checklist
[ ] My code follows the style guidelines of this project
[ ] I have performed a self-review of my code
[ ] I have commented my code, particularly in hard-to-understand areas
[ ] I have made corresponding changes to the documentation
[ ] My changes generate no new warnings
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] New and existing unit tests pass locally with my changes
[ ] Any dependent changes have been merged and published in downstream modules
Good work, but I'm closing this PR as the issue is already done; please pick another issue.
| gharchive/pull-request | 2023-10-02T09:00:51 | 2025-04-01T06:44:33.921762 | {
"authors": [
"JAPHETHNYARANGA",
"iamkinglana"
],
"repo": "italanta/elewa-website-html",
"url": "https://github.com/italanta/elewa-website-html/pull/133",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
231947350 | Access for private parties
Hello,
is access planned for private parties to verify information? For example, if I need to send a letter to Mr. MARIO ROSSI:
can I request his address?
can I validate the address by providing the data myself?
Thanks
At the moment a private individual can only query their own data.
| gharchive/issue | 2017-05-29T07:32:40 | 2025-04-01T06:44:33.927663 | {
"authors": [
"andmattia",
"gcarbonin"
],
"repo": "italia/anpr",
"url": "https://github.com/italia/anpr/issues/187",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
266864275 | Retroactive cancellation
With reference to issue 369, I quote the sentence: "the cancellation cannot be retroactive (for example, the municipality's migration to ANPR on 10-10-2017; cancellation for emigration to a non-migrated municipality effective 8-10-2017);"
If the case in the example occurs, migration on 10/10 and cancellation effective 8/10, how should we proceed?
Filipazzi - Maggioli
The effective date of the cancellation is 10-10: the person entered ANPR on 10-10.
It follows that the municipality of immigration registers the person on the 8th and the municipality of emigration cancels them on the 10th.
No comment.
Filipazzi - Maggioli
I'll refrain from commenting too, but allow me to point out that we are dealing with a municipality that has not yet migrated, so by law the version of Decree 223/89 in force before the ANPR amendments applies, namely this one:
http://www.normattiva.it/uri-res/N2Ls?urn:nir:stato:decreto.del.presidente.della.repubblica:1989-05-30;223!vig=2015-08-01
Article 18, paragraph 2: "...for the purposes of the corresponding registry cancellation, to be carried out, with the same effective date referred to in paragraph 1, within two working days."
By law, as I read it, we should cancel them as of the 8th.
We are dealing with cancellations transmitted after the migration. I know it's unwelcome, but that record has passed through ANPR: the data has been provided externally, and other municipalities may have queried it.
Consequently, it cannot vanish into thin air as if it had never entered ANPR.
Look, I sympathize with you from the IT point of view, but I didn't write the law, and the 2015 amendment establishes, in Article 3, paragraph 2:
2. Until the national resident population registry takes over, a municipality that has not migrated carries out all registry procedures in compliance with the provisions of Presidential Decree no. 223 of 30 May 1989 in force before the entry into force of this decree. The same provisions also continue to apply to registry procedures that jointly involve a migrated and a non-migrated municipality.
If I have understood the problem, please find the IT solution. The point is that the continuity of residence must be maintained, i.e. there can be neither gaps nor overlaps, meaning
people cannot vanish into thin air for a couple of days, nor can they be resident in two different municipalities on the same day.
I know that once we have all migrated we will no longer have this problem, but for now....
Thanks
Raffaella
If I had been able to have my say, the transitional article would have been very different. In any case, repeating to the point of exhaustion that, for the Ministry, operations with a retroactive date cannot exist: a person who has entered ANPR CANNOT DISAPPEAR.
In the case at hand, a completely different flow should be designed so as to guarantee the traceability of the information and, at the same time, keep the municipalities operational.
Take this as a proposed solution that someone will have to approve. Suppose the cancellation reaches the municipality within the prescribed time limits, with reference to a date prior to the migration. Had it not been for this "technical" delay, the municipality would not have transferred the subject, since their record would have been archived.
In this case, one could provide for an "annulment of the migration" of that person, who would subsequently be cancelled from the municipality's LOCAL registry.
That way everything would fall back into place.
Caso di oggi:
Una persona residente viene a rifare la carta d'identità il 18 /10.
Il 19/10 arriva al protocollo la richiesta di cancellazione (apr4) da comune non subentrato.
Data di decorrenza di cancellazione 17/10.
Oggi 20/10 faccio la cancellazione.
Errore bloccante in quanto risulta un'operazione sulla persona con data successiva alla data di cancellazione.
Ho tempo 5 giorni per rispondere al comune di emigrazione
Che faccio?
Grazie
Raffaella
Non voglio porre in secondo piano l'urgenza di Spinea, ma solo collegare anche questa precedente segnalazione al problema #264 . Sono alcune casistiche, ma non le uniche.
| gharchive/issue | 2017-10-19T14:31:04 | 2025-04-01T06:44:33.936965 | {
"authors": [
"MarcoMigliori",
"Raffive",
"bfilipazzi",
"gcarbonin"
],
"repo": "italia/anpr",
"url": "https://github.com/italia/anpr/issues/370",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1457668831 | 🛑 CGO Express! is down
In 854ff86, CGO Express! (https://cgoxgptw.culturalgo.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: CGO Express! is back up in 0465161.
| gharchive/issue | 2022-11-21T10:12:45 | 2025-04-01T06:44:33.945511 | {
"authors": [
"itbm-services"
],
"repo": "itbm-services/monitoring",
"url": "https://github.com/itbm-services/monitoring/issues/72",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1351954586 | Fix displayed liquidation price for ETHUSD offer
Due to the way the liquidation price is calculated for quanto payout curves, after we merge #2742 we will display incorrect liquidation prices for ETHUSD offers.
Check the thread linked below for full context and proposed solutions.
Originally posted by @luckysori in https://github.com/itchysats/itchysats/pull/2742#discussion_r953713156
I don't see a way to compute this in the backend and display it in the frontend without going back and forth. I'll compute the liquidation price in the frontend only.
| gharchive/issue | 2022-08-26T08:44:21 | 2025-04-01T06:44:33.949677 | {
"authors": [
"bonomat",
"luckysori"
],
"repo": "itchysats/itchysats",
"url": "https://github.com/itchysats/itchysats/issues/2754",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1300374706 | Temporary increase wait time for state changes
This is an attempt to temporarily fix failing macos test job on CI - an
underlying regression makes the tests not finish within previously working
constraints, particularly in the rollover case.
bors r+
| gharchive/pull-request | 2022-07-11T08:24:10 | 2025-04-01T06:44:33.950778 | {
"authors": [
"da-kami",
"klochowicz"
],
"repo": "itchysats/itchysats",
"url": "https://github.com/itchysats/itchysats/pull/2394",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1073762484 | A few frontend improvements
fix: improves reference price being undefined
fix: moves the alert box below the grid to allow for longer messages
old:
fix: @da-kami thought this was a caching problem 🙈
fix: make the input full width
fix: makes the button a fixed width
new:
bors r+
| gharchive/pull-request | 2021-12-07T21:18:07 | 2025-04-01T06:44:33.953985 | {
"authors": [
"bonomat",
"da-kami"
],
"repo": "itchysats/itchysats",
"url": "https://github.com/itchysats/itchysats/pull/826",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
977149396 | Outdated permissions for S3 remote
According to this DVC page, the necessary permissions for using an S3 remote are s3:ListBucket, s3:GetObject, s3:PutObject, s3:DeleteObject. However, I cannot dvc push with these permissions only. Instead, I get an access denied error during transfer from cache to remote which looks like the following:
ERROR: failed to transfer '<PATH_TO_FILE>' - Failed to upload <PATH_TO_FILE> to <PATH_TO_S3_KEY>: An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied
It seems that at least this CreateMultipartUpload permission is lacking in the doc page. It would be nice to review if any other permissions are missing as well and then update the docs.
Actually, it turns out that my problem was that DVC does not authenticate by environment variables like aws-cli and boto3 do. I'll open a new issue for that.
| gharchive/issue | 2021-08-23T15:20:10 | 2025-04-01T06:44:33.999739 | {
"authors": [
"felipegssantos"
],
"repo": "iterative/dvc.org",
"url": "https://github.com/iterative/dvc.org/issues/2746",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
461729150 | document DVC API
dvc.api.open
dvc.api.get_content
please more insights here
@Suor can you describe it briefly, please :)
@jorgeorpinel @shcheklein can you please explain this a bit more. I would like to take this up.
@Suor ? can you give a summary or is there a link? we should probably put docstrings around APIs before we release it.
Sorry, I am unable to find dvc.api.get_content. Do you mean get_url or __getattr__?
@Naba7 it's read now, I believe. There are only three public methods in dvc.api. Those should be described. And let's create a ticket on iterative/dvc to update docstrings for those APIs.
They have some short docstrings, I will update them based on future docs or discussion here if we decide to do that.
So, what is this about? There are three public things in dvc.api now:
read(path, repo=None, rev=None, remote=None, mode="r", encoding=None) - returns the contents of an artifact as a bytes object or a string.
get_url(path, repo=None, rev=None, remote=None) - returns a URL of an artifact.
open(path, repo=None, rev=None, remote=None, mode="r", encoding=None) - opens an artifact as a file, may only be used as context manager:with dvc.api.open("path/to/data.csv", remote="my-s3", encoding="utf-8") as f:
for line in f:
process(line)
Arguments always mean the same:
path - a path to an artifact, relative to repo root,
repo - a path or git url of a repo,
rev - revision, i.e. a branch, a tag, a sha. This only works with an url in repo,
remote - a name of a remote to fetch artifact from/give url to
mode - a mode with which we open a file, the only sensible options are r/rt and rb
encoding - an encoding used to decode contents to a string
mode and encoding mirror their namesakes builtin open() has.
This issue is meant for improving docstrings around dvc.api Link
k, thanks @Suor . @Naba7 now we need to come with a good place and a format for it. Probably, we need a separate top-level section. API reference similar to command reference we have.
Shall we describe read, open, and get_url in different sections inside API Reference?
We can have another section for describing path, repo, rev, remote, mode, encoding.
@Naba7 yes, it should be a separate section per call. At least for now.
path, repo, rev, remote, mode, encoding - are not APIs. repo is, but it's not officially released yet.
Can you take a look for some good projects that have APIs documented to come up with a good page template for this?
How about this format angularJS or microsoft or Azure?
We can use these formats and write an introductory page listing all the APIs and linking them to GitHub
@Suor are repo=None, rev=None, remote=None default values actually those? Or do they get turned into repo='.', rev='HEAD', remote=(read from config file)? Probably important to document (both in docstring and) in the API ref.
@shcheklein re
Probably, we need a separate top-level section. API reference similar to command reference we have.
Agree, perhaps at docs path /api, and the index page for that section could explain what the API is and how to start using it. Actually it's not that obvious! I'm not 100% sure what we mean by "the DVC API", for example. Is it a Python library people can install separately?
that's right. I would say that it's not separate though, it's the same DVC package.
Sorry, I didn't understand it. I thought the DVC API was the same as the DVC package installed via binary files or other means, and that it was a separate package running on top of git that we could use locally or on remote machines.
@Naba7 ?? I'm not sure I understand this. DVC api is just a python module defined in the DVC itself. No need to install any extras to being able to use it.
Suggested intro. text for the new /api-reference section index page:
The DVC API is part of the dvc Python module installed along with DVC. You may include it in your Python code with the following line:
from dvc import api as dvcapi
Then list read, open, and get_url in a bullet list, linked to their own pages (which should also have left pane navigation items).
@shcheklein Okay. I get it now. I thought for getting DVC-api we need to download it separately.
@jorgeorpinel As @shcheklein said, repo is also an API, so we should include it in the intro.
And
... some of the core functions of DVC such as add, push, pull, commit, checkout, etc., ...
Writing one-liner for read,open, get_url, repo(?) such as:
read[link] - returns the contents of an artifact as a bytes object or a string.
get_url[link] - returns a URL of an artifact.
open[link] - opens an artifact as a file.
repo[link] - contains all the core command-files that are used in Python scripts.
Other follow-up pages may contain a more detailed description followed by examples. What more can we include here?
I think that's good enough to start a PR. Please let us know, thanks!
@jorgeorpinel
I would say simply use import dvc.api instead of from dvc import api as dvcapi, more straightforward and almost the same length:
import csv
import pickle
import dvc.api
# Loading from content
model = pickle.loads(dvc.api.read("some-model.pkl", repo="https://github.com/..."))
# Loading using file descriptor
with dvc.api.open("dataset.csv", repo=...) as fd:
reader = csv.reader(fd)
for row in reader:
# ...
# Obtaining an url
resource_url = dvc.api.get_url("path/to/resource.ext", repo=..., remote="s3")
@Naba7 I would start with a Usage section covering the shortest and most common cases, then continue with the complete API listing.
Or another layout: Install, Usage, Methods sections. Then each method goes on its separate page, with full operation and params description, more examples. The point is making it glanceable and copy-pastable.
I think the layout : Install, Usage, Methods and describing each methods is better.
@shcheklein @jorgeorpinel If you agree to this, I will start working on the same.
@Naba7 yep, I like the idea. So, we can start with three levels:
Python API is the top most
it includes Install, Usage, Method Reference
Method Reference includes one page per each method with simple example. And we need to discuss the structure for it.
@jorgeorpinel any thoughts on this?
Agree. Looking forward to see a first version PR 🙂
I am sorry. I won't be able to work further on this PR.
@Naba7 np! thank you for all your contributions ;)
They have some short docstrings, I will update them based on future docs or discussion here if we decide to do that.
@Suor please see iterative/dvc/issues/3092
@Suor are repo=None, rev=None, remote=None default values actually None? Or do they get turned into repo='.', rev='HEAD', remote=(read from config file)?
@Suor can you confirm about this Q ^ please? Thanks
| gharchive/issue | 2019-06-27T20:18:45 | 2025-04-01T06:44:34.022267 | {
"authors": [
"Naba7",
"Suor",
"jorgeorpinel",
"shcheklein"
],
"repo": "iterative/dvc.org",
"url": "https://github.com/iterative/dvc.org/issues/463",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1408070451 | TEMPERATURE CALCULATOR
Tech Stack:
Python
Tkinter
Description
This calculator is used to convert temperatures from Celsius to Kelvin and Fahrenheit, and vice versa
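The conversions themselves are simple linear formulas. A minimal sketch of the core logic (without the Tkinter UI; function names are illustrative, not taken from the actual project) could be:

```python
def celsius_to_fahrenheit(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

def fahrenheit_to_celsius(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

def celsius_to_kelvin(c):
    """Convert degrees Celsius to Kelvin."""
    return c + 273.15

def kelvin_to_celsius(k):
    """Convert Kelvin to degrees Celsius."""
    return k - 273.15

print(celsius_to_fahrenheit(100))  # 212.0
print(celsius_to_kelvin(0))        # 273.15
```

A Tkinter front end would then just call these functions on the value the user enters.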
@its-puneet assign it to me.
@its-puneet please assign this issue.
@its-puneet please assign this issue to me.
Do it brother
Please assign me , I want to do it.
@its-puneet please assign me, I want to do it.
| gharchive/issue | 2022-10-13T16:05:12 | 2025-04-01T06:44:34.079393 | {
"authors": [
"CrypticRevenger",
"Nirvik07",
"its-puneet"
],
"repo": "its-puneet/Geeky-Coders",
"url": "https://github.com/its-puneet/Geeky-Coders/issues/162",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2563542516 | Ulyanov A.M. 6312 Lab 2
Full name: Ulyanov A.M. @Sanechek600
Group number: 6312
Brief description of the source code: This is a module written by one of my thesis students a few years ago. His thesis was devoted to analyzing image super-resolution. Specifically, this code performs geometric alignment of objects in a frame using various methods. However, there is no great need to dig into this module's business logic to see the problems with the code's formatting, as well as with its logical and physical decomposition.
Passed
| gharchive/pull-request | 2024-10-03T09:21:30 | 2025-04-01T06:44:34.101146 | {
"authors": [
"alxmcs"
],
"repo": "itsecd/prog-instruments-labs",
"url": "https://github.com/itsecd/prog-instruments-labs/pull/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1379516150 | Bookstore: connect to API
💯 Achievements.
Understand the concept of state management.
Use store, actions, and reducers in React.
Connect React and Redux.
Connect an existing API via Redux and use the data to display something in a UI.
Ensure there are no linter errors.
Use correct Gitflow.
Make sure that you documented your work in a professional way.
Cheers and Happy coding!👏👏👏 to you as well.
| gharchive/pull-request | 2022-09-20T14:26:15 | 2025-04-01T06:44:34.106749 | {
"authors": [
"itsmraga-hub"
],
"repo": "itsmraga-hub/Bookstore-CMS",
"url": "https://github.com/itsmraga-hub/Bookstore-CMS/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1233466388 | Optimize by decreasing br_mis_pred
Attempt to reduce the number of places where the CPU needs to decide which branch to take
Enable compiler-specific (g++, clang++) pragma(s) for hinting safe loop unrolling
Eventually target increasing the throughput of the harpocrates::{encrypt, decrypt} routines
See latest benchmark details https://github.com/itzmeanjan/harpocrates/pull/2/commits/2d46ffd20325d716a5a5a204b6d1300092165299?short_path=b335630#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5
For understanding how exactly this PR attains what it attains
Notice, change in br_pred & br_mis_pred field values | br_pred -> total branch predictions & br_mis_pred -> branch mispredictions
Also note, instructions issued per cycle
Now if you compare throughput, it should make better sense
| gharchive/pull-request | 2022-05-12T05:33:16 | 2025-04-01T06:44:34.119429 | {
"authors": [
"itzmeanjan"
],
"repo": "itzmeanjan/harpocrates",
"url": "https://github.com/itzmeanjan/harpocrates/pull/2",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1437437391 | chore(GH-29): add Code Climate maintainability badge
Fixes https://github.com/ivanilves/travelgrunt/issues/29
| gharchive/pull-request | 2022-11-06T15:11:56 | 2025-04-01T06:44:34.152452 | {
"authors": [
"ivanilves"
],
"repo": "ivanilves/travelgrunt",
"url": "https://github.com/ivanilves/travelgrunt/pull/31",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1950154995 | 🛑 2witter is down
In 5715d54, 2witter (https://2witter.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: 2witter is back up in 0c38516 after 5 days, 14 hours, 26 minutes.
| gharchive/issue | 2023-10-18T16:22:14 | 2025-04-01T06:44:34.154829 | {
"authors": [
"ivanwo"
],
"repo": "ivanwo/2witter-uptime",
"url": "https://github.com/ivanwo/2witter-uptime/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1967291815 | List optional status for parameters from ivcap services get action
Just walking through some documentation and realized that it would be useful if the parameters had optional vs required listed for each of the service arguments:
$ ivcap services get @1
Example output:
ID urn:ivcap:service:00000000-0000-0000-0000-000000000000
Name cv-pipeline-v0-pm1
Description todo
Status ???
Provider ID urn:ivcap:provider:00000000-0000-0000-0000-000000000000
Account ID urn:ivcap:account:00000000-0000-0000-0000-000000000000
Parameters ┌──────────────────┬────────────────────────────────┬────────────┬─────────┐
│ NAME │ DESCRIPTION │ TYPE │ DEFAULT │
├──────────────────┼────────────────────────────────┼────────────┼─────────┤
│ image │ Image to use for processing. │ artifact │ ??? │
├──────────────────┼────────────────────────────────┼────────────┼─────────┤
│ images │ Collection of images to run in │ collection │ ??? │
│ │ ference over. │ │ │
├──────────────────┼────────────────────────────────┼────────────┼─────────┤
│ log_version_info │ Log version info then terminat │ bool │ ??? │
│ │ e. │ │ │
└──────────────────┴────────────────────────────────┴────────────┴─────────┘
Good point. I thought that was indirectly taken care of by 'default' but that is most likely a) too meta, and b) most likely wrong.
| gharchive/issue | 2023-10-30T01:25:27 | 2025-04-01T06:44:34.162765 | {
"authors": [
"a-stacey",
"maxott"
],
"repo": "ivcap-works/ivcap-cli",
"url": "https://github.com/ivcap-works/ivcap-cli/issues/15",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
87310842 | Setup for opam packaging.
Just came across this library looking for something like async_parallel but for lwt.
I've just created most of what's required for an opam package minus pushing it up to opam. For that I'd need a tag created matching the version number of the library 0.1.0.
Oh, thanks. I was striving to do it myself, but never got a good kick for goodness. I will push it into opam.
Done https://github.com/ocaml/opam-repository/pull/4215
Thanks :+1:
| gharchive/pull-request | 2015-06-11T11:30:32 | 2025-04-01T06:44:34.170014 | {
"authors": [
"ivg",
"tmcgilchrist"
],
"repo": "ivg/parallel",
"url": "https://github.com/ivg/parallel/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1309564976 | pub.dev button in README does not link to pub.dev page
The pub.dev button in README.md should link to the lan_scanner pub.dev page
Fixed, thank you.
| gharchive/issue | 2022-07-19T13:57:04 | 2025-04-01T06:44:34.179649 | {
"authors": [
"guyluz11",
"ivirtex"
],
"repo": "ivirtex/lan_scanner",
"url": "https://github.com/ivirtex/lan_scanner/issues/6",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
509577901 | Input File Example
Can you attach an example .abs file for frc_demo, please?
I have attached an abs file and its aligned ppm, like frgc's format. They are in the samples folder.
| gharchive/issue | 2019-10-20T10:53:42 | 2025-04-01T06:44:34.180698 | {
"authors": [
"Sudarsono",
"gdahia"
],
"repo": "ivision-ufba/depth-face-detection",
"url": "https://github.com/ivision-ufba/depth-face-detection/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1952167126 | Svg content is not rendering to PDF
Images are not rendering in the PDF. In my case, I am trying to convert a canvas to PDF. All the text is fine, but in place of the images I get empty space.
Hi @rahit-ip, please provide an example using codesandbox or similar in which the issue is reproducible. I'd also suggest enable the logging as follow to check for any possible insights.
const options = {
overrides: {
canvas: {
logging: true
}
}
}
Since this is probably related to the html2canvas package, I'd go also in the meantime check for similar issues in their repo https://github.com/niklasvh/html2canvas/issues.
| gharchive/issue | 2023-10-19T12:42:30 | 2025-04-01T06:44:34.182558 | {
"authors": [
"ivmarcos",
"rahit-ip"
],
"repo": "ivmarcos/react-to-pdf",
"url": "https://github.com/ivmarcos/react-to-pdf/issues/91",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
389412931 | Add snake_case to variable names like prometheus/cloudwatch_exporter
Hi!
prometheus/cloudwatch_exporter exports the metric_names with underscores, for example for ELB RequestCount, currently we are exporting it like:
aws_elb_requestcount_sum
instead of:
aws_elb_request_count_sum
Some projects, like GitLab, use underscores in queries because it's the "official" style
That piece of code transforms names to snake_case
Example:
# HELP aws_elb_httpcode_backend_4_xx_sum Help is not implemented yet.
# TYPE aws_elb_httpcode_backend_4_xx_sum gauge
aws_elb_httpcode_backend_4_xx_sum{Name="test-elb",environment="test-elb-ppnamx"} 0
# HELP aws_elb_httpcode_backend_5_xx_sum Help is not implemented yet.
# TYPE aws_elb_httpcode_backend_5_xx_sum gauge
aws_elb_httpcode_backend_5_xx_sum{Name="test-elb",environment="test-elb-ppnamx"} 0
# HELP aws_elb_request_count_sum Help is not implemented yet.
# TYPE aws_elb_request_count_sum gauge
aws_elb_request_count_sum{Name="test-elb",environment="test-elb-ppnamx"} 5
# HELP yace_cloudwatch_requests_total Help is not implemented yet.
# TYPE yace_cloudwatch_requests_total counter
yace_cloudwatch_requests_total 4
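The linked exporter code does this conversion with a regular expression in Go; a rough Python sketch of the same idea (the exact exporter behaviour may differ slightly) is:

```python
import re

def to_snake_case(name):
    # Insert an underscore between a lowercase letter and the uppercase
    # letter that follows it, and between a digit and a following letter,
    # then lowercase everything.
    s = re.sub(r"([a-z])([A-Z])", r"\1_\2", name)
    s = re.sub(r"([0-9])([A-Za-z])", r"\1_\2", s)
    return s.lower()

print(to_snake_case("RequestCount"))          # request_count
print(to_snake_case("HTTPCode_Backend_4XX"))  # httpcode_backend_4_xx
```

The digit rule is what turns HTTPCode_Backend_4XX into httpcode_backend_4_xx, matching the sample output above.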
Cool i like that! Thanks! Could you please change README.md as well?
Cool i like that! Thanks! Could you please change README.md as well?
Also, the regexp import is missing as a dependency in prometheus.go. The Go test is red.
done!
I will merge it in the next few days and release it with some other changes. Sorry for the long wait.
| gharchive/pull-request | 2018-12-10T17:42:39 | 2025-04-01T06:44:34.217619 | {
"authors": [
"sanchezpaco",
"tsupertramp"
],
"repo": "ivx/yet-another-cloudwatch-exporter",
"url": "https://github.com/ivx/yet-another-cloudwatch-exporter/pull/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1323988550 | 🛑 05Takagi is down
In 427b6cc, 05Takagi (https://lixingyong.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: 05Takagi is back up in ba0bcb4.
| gharchive/issue | 2022-08-01T07:54:50 | 2025-04-01T06:44:34.226948 | {
"authors": [
"iwyang"
],
"repo": "iwyang/check",
"url": "https://github.com/iwyang/check/issues/1621",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1456794434 | 🛑 06IILeeのBlog is down
In ab6da60, 06IILeeのBlog (https://www.iilee.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: 06IILeeのBlog is back up in 8f81030.
| gharchive/issue | 2022-11-20T06:51:35 | 2025-04-01T06:44:34.229318 | {
"authors": [
"iwyang"
],
"repo": "iwyang/check",
"url": "https://github.com/iwyang/check/issues/2168",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2085561693 | 🛑 05Takagi is down
In 757227f, 05Takagi (https://lixingyong.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: 05Takagi is back up in ea5f491 after 19 minutes.
| gharchive/issue | 2024-01-17T07:20:58 | 2025-04-01T06:44:34.231550 | {
"authors": [
"iwyang"
],
"repo": "iwyang/check",
"url": "https://github.com/iwyang/check/issues/5219",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2418957082 | 🛑 21Eurkon is down
In 426bb51, 21Eurkon (https://blog.eurkon.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: 21Eurkon is back up in c833809 after 9 minutes.
| gharchive/issue | 2024-07-19T13:42:15 | 2025-04-01T06:44:34.233816 | {
"authors": [
"iwyang"
],
"repo": "iwyang/check",
"url": "https://github.com/iwyang/check/issues/8419",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2484605028 | 🛑 21Eurkon is down
In 9010302, 21Eurkon (https://blog.eurkon.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: 21Eurkon is back up in dddc8ab after 9 minutes.
| gharchive/issue | 2024-08-24T13:39:31 | 2025-04-01T06:44:34.236073 | {
"authors": [
"iwyang"
],
"repo": "iwyang/check",
"url": "https://github.com/iwyang/check/issues/9195",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1806681061 | 🛑 Ixtanet SAT is down
In d6e7f81, Ixtanet SAT ($MEDIA_SAT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Ixtanet SAT is back up in 8ab63ae.
| gharchive/issue | 2023-07-16T18:44:29 | 2025-04-01T06:44:34.238291 | {
"authors": [
"aalonzolu"
],
"repo": "ixtanet/upptime",
"url": "https://github.com/ixtanet/upptime/issues/504",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
396148040 | driver loading?
Hi,
I think a userspace driver is the way to go, but how would such a driver get autoloaded by the kernel? Any pointers on where to look?
Maybe this userspace driver could even be packaged as a UMH kernel module, i.e. a so-called ELF module?
I created this question as an issue because I didn't know where else to ask!
It doesn't -- the user space driver is linked into the application using it.
Or for drivers being used by several applications: there is some service running with the driver that offers some kind of interface for other applications (i.e., TAPS in the future)
| gharchive/issue | 2019-01-05T08:59:40 | 2025-04-01T06:44:34.277797 | {
"authors": [
"emmericp",
"thomasmey"
],
"repo": "ixy-languages/ixy-languages",
"url": "https://github.com/ixy-languages/ixy-languages/issues/3",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
338104818 | update data use method createOrUpdate()
I have a problem when updating data using the method "createOrUpdate()": every time it creates a new entry, but I just want to update existing data.
Can you show some of the entities? Have you tried to debug the database and the method to see what it is doing?
Sorry, I had not set the ID; this caused the issue.
Figured. So it can be closed?
| gharchive/issue | 2018-07-04T01:51:24 | 2025-04-01T06:44:34.345017 | {
"authors": [
"fylih",
"j256"
],
"repo": "j256/ormlite-android",
"url": "https://github.com/j256/ormlite-android/issues/99",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
87500081 | Clean up remaining taskSkip flags.
The correct way to disable a task by default is enabled = false in the task creation.
The correct way for a user to enable or disable a task permanently is
j2objcTASKNAME {
enabled = true
}
or similar in their build.gradle.
The correct way for a user to disable a task temporarily is:
./gradlew build -x TASKTOSKIP
Doc pull request here: https://github.com/j2objc-contrib/j2objc-gradle/pull/148
| gharchive/issue | 2015-06-11T21:12:05 | 2025-04-01T06:44:34.347021 | {
"authors": [
"advayDev1",
"confile"
],
"repo": "j2objc-contrib/j2objc-gradle",
"url": "https://github.com/j2objc-contrib/j2objc-gradle/issues/145",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2593239868 | 🛑 Docu sign is down
In 5d0ee2e, Docu sign (https://sign.j551n.com) was down:
HTTP code: 530
Response time: 77 ms
Resolved: Docu sign is back up in acea7dd after 12 minutes.
| gharchive/issue | 2024-10-16T23:17:14 | 2025-04-01T06:44:34.360018 | {
"authors": [
"j551n-ncloud"
],
"repo": "j551n-ncloud/Monitor",
"url": "https://github.com/j551n-ncloud/Monitor/issues/106",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
530163506 | pose information for real_val
Hi, thanks for sharing the code.
I tried to run benchmark.py but it requires pose information in the real_val/scene_*/ directory. I downloaded NOCS-REAL275-additional.zip, but it seems that it only contains pose information for the real_train dataset.
Could you share the pose information for the real_val dataset, or tell me how I can obtain it myself?
Thank you so much!
Hi,
The real_val/scene_*/ data can be found in the original NOCS dataset. In the Datasets section of their README, you can find Real Dataset: Test.
Hi, I downloaded the Real Dataset: Test dataset from NOCS, but I cannot find the *_pose.txt files required in benchmark.py:
gt_path = img_path + "_pose.txt"
The file types in the dataset are: *_color.png *_coord.png *_depth.png *_mask.png *_meta.txt.
Hi, I have the same problem with pose information for real_val. When I run train.py, it can't find the file pose.txt in My_NOCS/data/real_val/scene/. Can you tell me how you solved the problem?
Thank you very much!
Hi, I have the same problem with pose information for real_val. When I run train.py, it can't find the file pose.txt in My_NOCS/data/real_val/scene/. Can you tell me how you solved the problem?
Thank you very much!
Hi, please check with the MyNOCS/data_pose folder, copy the _pose.txt files into corresponding folders under My_NOCS/data/. Hope this helps : )
Hi, I rechecked the My_NOCS/data_pose folder, but there is only a train/ folder and a 'real_train/' folder. So, I can't find the corresponding pose folder for My_NOCS/data/real_val/.
Hi, I rechecked the My_NOCS/data_pose folder, but there is only a train/ folder and a 'real_train/' folder. So, I can't find the corresponding pose folder for My_NOCS/data/real_val/.
Hi, I also encountered the same problem as you. Have you solved it? Thank you very much.
Hi, I have the same problem with pose information for real_val. When I run train.py, it can't find the file pose.txt in My_NOCS/data/real_val/scene/. Can you tell me how you solved the problem?
Thank you very much!
Hi, please check with the MyNOCS/data_pose folder, copy the _pose.txt files into corresponding folders under My_NOCS/data/. Hope this helps : )
Hi, If it is copying the _pose.txt files into corresponding folders, does this mean that real_val and real_train are the same? Thank you so much.
Hi, I have the same problem with pose information for real_val. When I run train.py, it can't find the file pose.txt in My_NOCS/data/real_val/scene/. Can you tell me how you solved the problem?
Thank you very much!
Hi, please check with the MyNOCS/data_pose folder, copy the _pose.txt files into corresponding folders under My_NOCS/data/. Hope this helps : )
Hi, If it is copying the _pose.txt files into corresponding folders, does this mean that real_val and real_train are the same? Thank you so much.
Hi, I also tried this, but found that the two folders are different. I remember that the real_train and real_val folders contain different content.
Hi, I have the same problem with pose information for real_val. When I run train.py, it can't find the file pose.txt in My_NOCS/data/real_val/scene/. Can you tell me how you solved the problem?
Thank you very much!
Hi, please check with the MyNOCS/data_pose folder, copy the _pose.txt files into corresponding folders under My_NOCS/data/. Hope this helps : )
Hi, If it is copying the _pose.txt files into corresponding folders, does this mean that real_val and real_train are the same? Thank you so much.
Hi, I also tried this, but found that the two folders are different. I remember that the real_train and real_val folders contain different content.
Thank you for your answer. Yes, it should not be the same. Looking forward to the author's update.
Hi, I have the same problem with pose information for real_val. When I run train.py, it can't find the file pose.txt in My_NOCS/data/real_val/scene/. Can you tell me how you solved the problem?
Thank you very much!
Hi, please check with the MyNOCS/data_pose folder, copy the _pose.txt files into corresponding folders under My_NOCS/data/. Hope this helps : )
Hi, If it is copying the _pose.txt files into corresponding folders, does this mean that real_val and real_train are the same? Thank you so much.
Hi,I also tried this, but found that the two folders are different. I remember that real_train and real_val folders contain different content
Thank you for your answer. Yes, it should not be the same. Looking forward to the author's update.
Hello, I've run into the same problem now. Could you please tell me how to find the pose information for real_val?
Thanks again for your kind help!
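For anyone hitting this, the copy step described above can be scripted. A minimal Python sketch — note the folder layout and the `_pose.txt` suffix are taken from this thread, so adjust the paths to your own checkout:

```python
import shutil
from pathlib import Path


def copy_pose_files(src_root, dst_root):
    """Copy every *_pose.txt under src_root into the matching folder under dst_root."""
    src_root, dst_root = Path(src_root), Path(dst_root)
    copied = []
    for pose_file in src_root.rglob("*_pose.txt"):
        # mirror the relative path (e.g. real_val/scene_1/0000_pose.txt)
        target = dst_root / pose_file.relative_to(src_root)
        target.parent.mkdir(parents=True, exist_ok=True)  # create the scene folder if missing
        shutil.copy2(pose_file, target)
        copied.append(target)
    return copied
```

For example, `copy_pose_files('My_NOCS/data_pose', 'My_NOCS/data')` would mirror every pose file into the matching scene folder.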
| gharchive/issue | 2019-11-29T05:34:05 | 2025-04-01T06:44:34.378398 | {
"authors": [
"JiazeWang",
"j96w",
"lx935484038",
"mymrblack",
"sunshantong"
],
"repo": "j96w/6-PACK",
"url": "https://github.com/j96w/6-PACK/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
295456952 | Add typographic defaults to theme
Values should use a standard scale:
http://jxnblk.com/modular/
https://polished.js.org/docs/#modularscale
Theme attributes that use the modular scale should now be set to the number of steps on the scale, and primitives that use these theme attributes should resolve the actual CSS value/unit using the polished helper:
```js
fontSize: ${({ fontSize, modular = true, theme }) =>
  (modular && modularScale(theme.typography[fontSize] || fontSize, modularScaleBase, modularScaleRatio)) || fontSize};
```
There also needs to be an 'escape hatch' from the scale, hence the modular Boolean prop.
This makes the following three primitives possible:
```jsx
<Text />                                   // where theme.typography.fontSize = 0
<Text fontSize={1} />
<Text fontSize='16.5px' modular={false} />
```
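The modular-scale arithmetic behind these values is just base × ratio^steps. A quick sketch of how scale steps would resolve to pixel sizes (the base and ratio here are illustrative assumptions, not the library's defaults):

```python
def modular_scale(steps, base=16.0, ratio=1.25):
    """Resolve a number of scale steps to a pixel size: base * ratio ** steps."""
    return base * ratio ** steps


# step 0 is the base size; each step up multiplies by the ratio
sizes = {step: round(modular_scale(step), 2) for step in range(0, 4)}
# -> {0: 16.0, 1: 20.0, 2: 25.0, 3: 31.25}
```

With these assumed values, `theme.typography.fontSize = 0` resolves to the 16px base and `fontSize={1}` to 20px, while the `modular={false}` escape hatch bypasses the calculation entirely.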
| gharchive/issue | 2018-02-08T10:22:04 | 2025-04-01T06:44:34.409085 | {
"authors": [
"lukehedger"
],
"repo": "jaakmusic/primitives",
"url": "https://github.com/jaakmusic/primitives/issues/26",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1773510145 | Add link about CDI
https://jakarta.ee/specifications/cdi/3.0/
https://jakarta.ee/specifications/cdi/3.0/jakarta-cdi-spec-3.0.pdf
A bean is a source of contextual objects which define application state and/or logic. These objects
are called contextual instances of the bean. The container creates and destroys these instances and
associates them with the appropriate context. Contextual instances of a bean may be injected into
other objects (including other bean instances) that execute in the same context. A bean may bear
metadata defining its lifecycle and interactions with other beans.
A bean comprises the following attributes:
• A (nonempty) set of bean types
• A (nonempty) set of qualifiers
• A scope
• Optionally, a bean name
• A set of interceptor bindings
• A bean implementation
https://github.com/jabrena/spring-boot-user-beans/pull/94
| gharchive/issue | 2023-06-25T22:01:34 | 2025-04-01T06:44:34.417751 | {
"authors": [
"jabrena"
],
"repo": "jabrena/spring-boot-user-beans",
"url": "https://github.com/jabrena/spring-boot-user-beans/issues/80",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1816914416 | [Device Support Request] Garza Smart TRV
I initially created this issue on the zha-device-handlers repository.
I have a "Garza Smart TRV" that is reported as a "TS0601" manufactured by "_TZE200_mz5y07w2".
It seems that adding the device info to the class "Saswell_Thermostat_TZE200" of this file makes it work fine.
I downloaded your repo to my ZHA quirks folder, modified said class to add my device info, re-paired it, and it is working.
Pushed
https://github.com/jacekk015/zha_quirks/commit/17033c8de6ef036af6f5b148b046e3f8e4076548
| gharchive/issue | 2023-07-22T22:31:29 | 2025-04-01T06:44:34.420280 | {
"authors": [
"Mr-Alicates",
"jacekk015"
],
"repo": "jacekk015/zha_quirks",
"url": "https://github.com/jacekk015/zha_quirks/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1239563882 | http://jack960330.gitee.io/qhjack-blog/2022/05/17/转入hexo,第一篇博客,部署/
123
@jack9603301
testt
这是测试
This is a test
This is a test
Awesome, awesome!
"authors": [
"LeacHar",
"jack9603301"
],
"repo": "jack9603301/qhjack-blog",
"url": "https://github.com/jack9603301/qhjack-blog/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2238752804 | 2.1 onward does not work at all
Nothing happens when the game crashes on 2.1 or 2.12.
This mod doesn't appear to have been updated for version 2.12. It does not create logs when a CTD happens. Looking at the RED4ext logs, I found this message about CTD Helper:
[RED4ext] [warning] CTD Helper (version: 0.1.3) is incompatible with the current patch. The requested runtime of the plugin is 3.0.72.54038
Could this please be updated as it is very useful for troubleshooting CTDs caused by mods? Thanks for all your work on this @jackhumbert, its appreciated!
Found version 0.1.4 of CTD Helper under Releases here on GitHub. I tested that version and received similar errors in the RED4ext logs:
[RED4ext] [warning] CTD Helper (version: 0.1.4) is incompatible with the current patch. The requested runtime of the plugin is 3.0.74.63017
Please update as I am trying to troubleshoot an intermittent CTD issue in my modded CP 2077 and this tool has been extremely useful in the past to troubleshoot possible bad behaving mods. Thanks!
| gharchive/issue | 2024-04-11T23:08:28 | 2025-04-01T06:44:34.441111 | {
"authors": [
"Montep00",
"darkone0513"
],
"repo": "jackhumbert/ctd_helper",
"url": "https://github.com/jackhumbert/ctd_helper/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
721384290 | package doesn't work with latest expo sdk 39.0.0
modal not showing
Upgraded to expo-sdk 40.0.0
works perfectly now!
| gharchive/issue | 2020-10-14T11:40:14 | 2025-04-01T06:44:34.445734 | {
"authors": [
"jacklam718",
"wasiqkhan"
],
"repo": "jacklam718/react-native-modals",
"url": "https://github.com/jacklam718/react-native-modals/issues/237",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
189563975 | How to apply wheelzoom with multi image (grid image)?
This JS seems to only work on the first image when there are multiple images.
I have it!
Thanks
| gharchive/issue | 2016-11-16T02:16:19 | 2025-04-01T06:44:34.446876 | {
"authors": [
"ngtrian"
],
"repo": "jackmoore/wheelzoom",
"url": "https://github.com/jackmoore/wheelzoom/issues/23",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
179503794 | Set CallTip Border Color
I know it is possible to set ForeColor and BackColor of a CallTip via
```csharp
((Scintilla)this).Styles[Style.CallTip].ForeColor = Color.Gray;  // text color
((Scintilla)this).Styles[Style.CallTip].BackColor = Color.Black; // background color
```
However, this does not change the color of the CallTip border.
Is there a way to do that?
Thanks!
Short answer, no.
| gharchive/issue | 2016-09-27T14:09:08 | 2025-04-01T06:44:34.460643 | {
"authors": [
"h3tch",
"jacobslusser"
],
"repo": "jacobslusser/ScintillaNET",
"url": "https://github.com/jacobslusser/ScintillaNET/issues/278",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2199109739 | 🛑 Noiseless Storage (EU-FI) is down
In 64c974c, Noiseless Storage (EU-FI) ($NSSTORAGE_URI) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Noiseless Storage (EU-FI) is back up in 7b24139 after 7 minutes.
| gharchive/issue | 2024-03-21T04:47:52 | 2025-04-01T06:44:34.469593 | {
"authors": [
"jae1911"
],
"repo": "jae1911/status.777.tf",
"url": "https://github.com/jae1911/status.777.tf/issues/716",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
451814379 | Update dependencies
Update axios due to a security vulnerability and then update various devDependencies.
Closes #49
Closes #50
Closes #51
Closes #52
Closes #53
Closes #54
:tada: This PR is included in version 1.4.1 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2019-06-04T06:11:13 | 2025-04-01T06:44:34.472950 | {
"authors": [
"jaebradley"
],
"repo": "jaebradley/wakatime-cli",
"url": "https://github.com/jaebradley/wakatime-cli/pull/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
534139780 | Enable multiplexing of "/api/traces" endpoint
Requirement - what kind of business use case are you trying to solve?
cmd/collector/app/http_handler.go defines the /api/traces HTTP endpoint allowing callers to save spans.
The current use of this endpoint is for external AWS authenticated callers. However, there is a requirement now for services within the organization's internal network to make direct calls to the collector to save spans; with a new HTTP endpoint /api/internal/traces.
In effect, this multiplexes the external and internal endpoints to the same "saveSpan" handler:
```
/api/traces ----------\
                        --------> Save Span Handler
/api/internal/traces --/
```
Why HTTP?
Our internal collectors support HTTP, TChannel and gRPC endpoints for submitting spans, so a natural question is, why choose to expose this additional endpoint in HTTP instead of TChannel or gRPC?
Jaeger clients were designed to talk to agents running on hosts, listening on Thrift-over-UDP endpoints for submitting spans, as well as exposing other HTTP endpoints, such as for querying sampling rates.
Hence our Jaeger clients currently only support calling HTTP endpoints and not TChannel or gRPC. It is worth noting there have been expressions of interest for Jaeger clients to support calling gRPC endpoints, however, it was a non-trivial change due to taking on additional dependencies.
Problem - what in Jaeger blocks you from solving the requirement?
The APIHandler struct within cmd/collector/app/http_handler.go implements the RegisterRoutes interface function which registers the /api/traces endpoint to the *APIHandler.saveSpan handler function.
The RegisterRoutes function takes in a single router *mux.Router parameter, and the function is responsible for knowing exactly what endpoints to expose and handler functions to map to those endpoints.
There, however, does not appear to be a way to support the above requirement of configuring more that one endpoint to the same handler.
Proposal - what do you suggest to solve the problem or improve the existing situation?
The proposal is to export the saveSpan function to:
```go
func (aH *APIHandler) SaveSpan(w http.ResponseWriter, r *http.Request)
```
... enabling embedding of the APIHandler in our internal repository, essentially "subclassing" this struct like so:
```go
type InternalAPIHandler struct {
	app.APIHandler
}

func NewInternalAPIHandler(apih *app.APIHandler) *InternalAPIHandler {
	return &InternalAPIHandler{APIHandler: *apih}
}

func (iapih *InternalAPIHandler) RegisterRoutes(router *mux.Router) {
	iapih.APIHandler.RegisterRoutes(router)
	router.HandleFunc("/api/internal/traces", iapih.SaveSpan).Methods(http.MethodPost)
}
```
Alternatives considered
RegisterRoutes can register multiple endpoints and does not assume registering of just a single handler; hence, changing the interface to pass multiple endpoint paths is not a valid option, and it would also break the interface for other implementers and callers.
Another possibility is to hardcode the additional endpoint within RegisterRoutes like so:
```go
router.HandleFunc("/api/traces", aH.saveSpan).Methods(http.MethodPost)
router.HandleFunc("/api/internal/traces", aH.saveSpan).Methods(http.MethodPost)
```
However, this use case of both an authenticated and unauthenticated endpoint for saving spans is specific to our organization and it's unlikely other users would have the same use case.
Any open questions to address
Is it safe to have multiple endpoints calling the same APIHandler.SaveSpan function? Are there any concurrency-related concerns such as race conditions or data corruption?
sgtm
| gharchive/issue | 2019-12-06T16:53:40 | 2025-04-01T06:44:34.501484 | {
"authors": [
"albertteoh",
"yurishkuro"
],
"repo": "jaegertracing/jaeger",
"url": "https://github.com/jaegertracing/jaeger/issues/1967",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
645503697 | Use gRPC instead of tchannel in hotrod
Related to #2036
Related to https://github.com/jaegertracing/jaeger/issues/2129
This PR removes the dependency on tchannel-go from hotrod and instead uses gRPC. It also introduces https://github.com/opentracing-contrib/go-grpc dependency to trace the calls.
The number of reported spans is the same as before.
@pavolloffay Is it possible to add a TLS testcase to improve the coverage?
The test coverage seems odd.
None of the following files are changed in this PR:
```
cmd/query/app/server.go                   | 93.18% <88.23%> (-1.63%)
cmd/query/app/flags.go                    | 100.00% <100.00%> (ø)
plugin/storage/badger/spanstore/reader.go
```
Anyway, there are no tests in hotrod.
| gharchive/pull-request | 2020-06-25T12:00:29 | 2025-04-01T06:44:34.504916 | {
"authors": [
"objectiser",
"pavolloffay"
],
"repo": "jaegertracing/jaeger",
"url": "https://github.com/jaegertracing/jaeger/pull/2307",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1049862871 | Add remote gRPC option for storage plugin
Which problem is this PR solving?
Resolves #3377
Short description of the changes
Adds the option to host a gRPC storage API on a remote endpoint
using regular gRPC. Previously the plugin system only supported
local socket connections through the go-hashicorp plugin system.
I couldn't find how the existing plugin was tested end-to-end, so couldn't write better tests. If more tests are needed, please point me in the right direction.
Also wasn't sure about the CLI option naming.
@yurishkuro for the integration tests I will need to either:
modify examples/memstore-plugin to be able to accept a gRPC address cli flag to listen on (and then either start the plugin or the grpc server depending on the flag)
or
create a new examples/memstore-remote to only act as a gRPC server (not the plugin)
Option 1 is less code changes but the examples become a bit more muddled. Do you have any preference if we want a separate example for this?
Hmm I like that idea. Let me try it out.
Thanks for the detailed review @yurishkuro. I think I addressed all the issues
| gharchive/pull-request | 2021-11-10T13:55:09 | 2025-04-01T06:44:34.508793 | {
"authors": [
"cevian"
],
"repo": "jaegertracing/jaeger",
"url": "https://github.com/jaegertracing/jaeger/pull/3383",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1213714431 | fix: close grpc client when in close
Signed-off-by: vuuihc zshyou@gmail.com
Which problem is this PR solving?
The gRPC client's Close is not called on exit.
Short description of the changes
Close the gRPC client on exit.
I missed that Close is called in RunAndThen; I'll close this PR.
| gharchive/pull-request | 2022-04-24T17:23:37 | 2025-04-01T06:44:34.510390 | {
"authors": [
"vuuihc"
],
"repo": "jaegertracing/jaeger",
"url": "https://github.com/jaegertracing/jaeger/pull/3639",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
143288777 | Behaviors - Declare method not scoped to behavior
Is it possible to declare a method in a behavior that will be available at the document level?
doc.behaviorDefinedMethod();
Rather than needing to scope down to the behavior first?
You can create a behavior that adds extra methods to the document. However, from what I remember it's only possible in v2. If you want to do it in 1.2 you have to create a module.
| gharchive/issue | 2016-03-24T16:08:45 | 2025-04-01T06:44:34.516038 | {
"authors": [
"jagi",
"jimmiebtlr"
],
"repo": "jagi/meteor-astronomy",
"url": "https://github.com/jagi/meteor-astronomy/issues/311",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1476811258 | BiggerStreamPreview - Cant Compile [TypeError]
```
TypeError: Cannot destructure property 'Dispatcher' of 'external_BoundedLibrary_namespaceObject.WebpackModules.getByProps(...)' as it is undefined.
    at eval (betterdiscord://plugins/BiggerStreamPreview.plugin.js:102:5)
    at buildPlugin (betterdiscord://plugins/BiggerStreamPreview.plugin.js:247:12)
    at eval (betterdiscord://plugins/BiggerStreamPreview.plugin.js:254:7)
    at Object.requireAddon (<anonymous>:4:39192)
    at Object.loadAddon (<anonymous>:4:9163)
    at Object.loadAddon (<anonymous>:4:36864)
    at Object.loadAllAddons (<anonymous>:4:11637)
    at Object.initialize (<anonymous>:4:6059)
    at Object.initialize (<anonymous>:4:36059)
    at Object.startup (<anonymous>:4:396505)
```
@Tharki-God WE NEED YOU HELP pLEASE 😃
Download [this](https://github.com/jaimeadf/BetterDiscordPlugins/blob/82815643e624f93e7069f95280d4a4a94e84841d/dist/BiggerStreamPreview/BiggerStreamPreview.plugin.js)
Thanks, however this version isn't part of any branch of the repo and its update URL still points to the previous version, so how will future updates work? Thanks
| gharchive/issue | 2022-12-05T15:24:02 | 2025-04-01T06:44:34.520556 | {
"authors": [
"KaKi87",
"LcyDev",
"SourCandy80",
"Tharki-God"
],
"repo": "jaimeadf/BetterDiscordPlugins",
"url": "https://github.com/jaimeadf/BetterDiscordPlugins/issues/393",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1801706654 | Add Website Link to About Section and Host the Website
Description of the new feature.
The current "about" section of the project does not include a link to the website. As a contributor at GSSOC'23, I propose adding a website link to the "about" section and hosting the website to make it easily accessible to users.
Describe briefly how you would implement this new feature.
I want to host it on render.
Screenshots Example (if applicable)
No response
Additional information
No response
Contributor terms
[X] I agree to follow the Contributing Instructions
Hey @SAUMYXA ! Go Ahead!
Thankyou @jain-rishabh-21 !!
Hey @SAUMYXA! I want some more information on this! Can you ping me on linkedin or discord?
| gharchive/issue | 2023-07-12T20:29:34 | 2025-04-01T06:44:34.525807 | {
"authors": [
"SAUMYXA",
"jain-rishabh-21"
],
"repo": "jain-rishabh-21/DoctorVerse",
"url": "https://github.com/jain-rishabh-21/DoctorVerse/issues/118",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
660069802 | feature: support more types
Having only int and str as type is meagre and should be extended to e.g. bool.
Bools are supported, just don't put a type.
**OPTIONS**
* test
* flags: -t --test
* desc: Also run the tests
~~~bash
[[ "$test" == "true" ]] && echo "RUNNING TESTS"
~~~
Added a boolean example to the docs: https://github.com/jakedeichert/mask#optional-flags
| gharchive/issue | 2020-07-18T10:40:57 | 2025-04-01T06:44:34.533523 | {
"authors": [
"cherusk",
"jakedeichert"
],
"repo": "jakedeichert/mask",
"url": "https://github.com/jakedeichert/mask/issues/67",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2092303441 | Inconsistent record/replay
Hi developers,
Great tool! I've tried your tool with the following code but got inconsistent traces.
use wasm_bindgen::prelude::*;
#[wasm_bindgen(module = "/name.js")]
extern "C" {
fn name(src: String) -> String;
}
#[wasm_bindgen]
extern "C" {
// Use `js_namespace` here to bind `console.log(..)` instead of just
// `log(..)`
#[wasm_bindgen(js_namespace = console)]
fn log(s: &str);
// The `console.log` is quite polymorphic, so we can bind it with multiple
// signatures. Note that we need to use `js_name` to ensure we always call
// `log` in JS.
#[wasm_bindgen(js_namespace = console, js_name = log)]
fn log_u32(a: u32);
// Multiple arguments too!
#[wasm_bindgen(js_namespace = console, js_name = log)]
fn log_many(a: &str, b: &str);
}
#[wasm_bindgen]
pub fn greet(a: &str) -> String {
let res = format!("Hello, {}!", name(a.to_string()));
log(&res);
res
}
```js
import init, { create, process, math, greet } from './pkg/demo.js';

async function run() {
    await init();
    console.log(greet("Hello"));
    console.log(greet("World"));
}

run();
```
I got the following traces:
[Expected]
IF;0;wbg;__wbg_name_3abd14ecb7c22373
IF;1;wbg;__wbg_log_24dc0bd9d833d95d
EC;45;__wbindgen_add_to_stack_pointer;-16
ER
EC;24;__wbindgen_malloc;5,1
ER
EC;11;greet;1048560,1114120,5
L;0;memory;1114120;72
L;0;memory;1114121;101
L;0;memory;1114122;108
L;0;memory;1114123;108
L;0;memory;1114124;111
IC;0;__wbg_name_3abd14ecb7c22373
EC;24;__wbindgen_malloc;9,1
EC;33;__wbindgen_free;1114136,5,1
IR;0;__wbg_name_3abd14ecb7c22373;
L;0;memory;1048500;9
L;0;memory;1048496;40
L;0;memory;1048498;17
L;0;memory;1114152;72
L;0;memory;1114153;101
L;0;memory;1114154;108
L;0;memory;1114155;108
L;0;memory;1114156;111
L;0;memory;1114157;82
L;0;memory;1114158;117
L;0;memory;1114159;115
L;0;memory;1114160;116
IC;1;__wbg_log_24dc0bd9d833d95d
IR;1;__wbg_log_24dc0bd9d833d95d;
ER
EC;45;__wbindgen_add_to_stack_pointer;16
ER
EC;33;__wbindgen_free;1114168,17,1
ER
EC;45;__wbindgen_add_to_stack_pointer;-16
ER
EC;24;__wbindgen_malloc;5,1
ER
EC;11;greet;1048560,1114120,5
L;0;memory;1114120;87
L;0;memory;1114121;111
L;0;memory;1114122;114
L;0;memory;1114123;108
L;0;memory;1114124;100
IC;0;__wbg_name_3abd14ecb7c22373
EC;24;__wbindgen_malloc;9,1
EC;33;__wbindgen_free;1114136,5,1
IR;0;__wbg_name_3abd14ecb7c22373;
L;0;memory;1114152;87
L;0;memory;1114153;111
L;0;memory;1114154;114
L;0;memory;1114156;100
L;0;memory;1114160;116
IC;1;__wbg_log_24dc0bd9d833d95d
IR;1;__wbg_log_24dc0bd9d833d95d;
ER
EC;45;__wbindgen_add_to_stack_pointer;16
ER
EC;33;__wbindgen_free;1114168,17,1
ER
[Actual]
IF;0;wbg;__wbg_name_3abd14ecb7c22373
IF;1;wbg;__wbg_log_24dc0bd9d833d95d
EC;45;__wbindgen_add_to_stack_pointer;-16
ER
EC;24;__wbindgen_malloc;5,1
ER
EC;11;greet;1048560,1114120,5
L;0;memory;1114120;72
L;0;memory;1114121;101
L;0;memory;1114122;108
L;0;memory;1114123;108
L;0;memory;1114124;111
IC;0;__wbg_name_3abd14ecb7c22373
EC;24;__wbindgen_malloc;9,1
EC;33;__wbindgen_free;1114136,5,1
IR;0;__wbg_name_3abd14ecb7c22373;
L;0;memory;1048500;9
L;0;memory;1048496;40
L;0;memory;1048498;17
L;0;memory;1114152;72
L;0;memory;1114153;101
L;0;memory;1114154;108
L;0;memory;1114155;108
L;0;memory;1114156;111
L;0;memory;1114157;82
L;0;memory;1114158;117
L;0;memory;1114159;115
L;0;memory;1114160;116
IC;1;__wbg_log_24dc0bd9d833d95d
IR;1;__wbg_log_24dc0bd9d833d95d;
ER
EC;45;__wbindgen_add_to_stack_pointer;16
ER
EC;33;__wbindgen_free;1114168,17,1
ER
EC;45;__wbindgen_add_to_stack_pointer;-16
ER
EC;24;__wbindgen_malloc;5,1
ER
EC;11;greet;1048560,1114120,5
IC;0;__wbg_name_3abd14ecb7c22373
EC;24;__wbindgen_malloc;9,1
EC;33;__wbindgen_free;1114136,5,1
IR;0;__wbg_name_3abd14ecb7c22373;
L;0;memory;1114152;87
L;0;memory;1114153;111
L;0;memory;1114154;114
L;0;memory;1114156;100
L;0;memory;1114160;116
IC;1;__wbg_log_24dc0bd9d833d95d
IR;1;__wbg_log_24dc0bd9d833d95d;
ER
EC;45;__wbindgen_add_to_stack_pointer;16
ER
EC;33;__wbindgen_free;1114168,17,1
ER
It seems that the generator put the second memory load ("World") in the wrong location.
```js
import fs from 'fs'
import path from 'path'

let instance
let imports = {}
imports['wbg'] = {}

let global_0 = -1
imports['wbg']['__wbg_name_3abd14ecb7c22373'] = () => {
  global_0++
  switch (global_0) {
    case 0:
      instance.exports.__wbindgen_malloc(9,1)
      instance.exports.__wbindgen_free(1114136,5,1)
      new Uint8Array(instance.exports.memory.buffer)[1048500] = 9
      new Uint8Array(instance.exports.memory.buffer)[1048496] = 40
      new Uint8Array(instance.exports.memory.buffer)[1048498] = 17
      new Uint8Array(instance.exports.memory.buffer)[1114152] = 72
      new Uint8Array(instance.exports.memory.buffer)[1114153] = 101
      new Uint8Array(instance.exports.memory.buffer)[1114154] = 108
      new Uint8Array(instance.exports.memory.buffer)[1114155] = 108
      new Uint8Array(instance.exports.memory.buffer)[1114156] = 111
      new Uint8Array(instance.exports.memory.buffer)[1114157] = 82
      new Uint8Array(instance.exports.memory.buffer)[1114158] = 117
      new Uint8Array(instance.exports.memory.buffer)[1114159] = 115
      new Uint8Array(instance.exports.memory.buffer)[1114160] = 116
      break
    case 1:
      instance.exports.__wbindgen_malloc(9,1)
      instance.exports.__wbindgen_free(1114136,5,1)
      new Uint8Array(instance.exports.memory.buffer)[1114152] = 87
      new Uint8Array(instance.exports.memory.buffer)[1114153] = 111
      new Uint8Array(instance.exports.memory.buffer)[1114154] = 114
      new Uint8Array(instance.exports.memory.buffer)[1114156] = 100
      new Uint8Array(instance.exports.memory.buffer)[1114160] = 116
      break
  }
  if ((global_0 >= 0) && global_0 < 2) {
    return undefined
  }
}

let global_1 = -1
imports['wbg']['__wbg_log_24dc0bd9d833d95d'] = () => {
  global_1++
  switch (global_1) {
    case 0:
      // THIS SECTION IS IN THE WRONG LOCATION
      new Uint8Array(instance.exports.memory.buffer)[1114120] = 87
      new Uint8Array(instance.exports.memory.buffer)[1114121] = 111
      new Uint8Array(instance.exports.memory.buffer)[1114122] = 114
      new Uint8Array(instance.exports.memory.buffer)[1114123] = 108
      new Uint8Array(instance.exports.memory.buffer)[1114124] = 100
      // THIS SECTION IS IN THE WRONG LOCATION
      break
  }
  if ((global_1 >= 0) && global_1 < 2) {
    return undefined
  }
}

export function replay(wasm) {
  instance = wasm.instance
  instance.exports.__wbindgen_add_to_stack_pointer(-16)
  instance.exports.__wbindgen_malloc(5,1)
  new Uint8Array(instance.exports.memory.buffer)[1114120] = 72
  new Uint8Array(instance.exports.memory.buffer)[1114121] = 101
  new Uint8Array(instance.exports.memory.buffer)[1114122] = 108
  new Uint8Array(instance.exports.memory.buffer)[1114123] = 108
  new Uint8Array(instance.exports.memory.buffer)[1114124] = 111
  instance.exports.greet(1048560,1114120,5)
  instance.exports.__wbindgen_add_to_stack_pointer(16)
  instance.exports.__wbindgen_free(1114168,17,1)
  instance.exports.__wbindgen_add_to_stack_pointer(-16)
  instance.exports.__wbindgen_malloc(5,1)
  instance.exports.greet(1048560,1114120,5)
  instance.exports.__wbindgen_add_to_stack_pointer(16)
  instance.exports.__wbindgen_free(1114168,17,1)
}

export function instantiate(wasmBinary) {
  return WebAssembly.instantiate(wasmBinary, imports)
}

if (process.argv[2] === 'run') {
  const p = path.join(path.dirname(import.meta.url).replace(/^file:/, ''), 'index.wasm')
  const wasmBinary = fs.readFileSync(p)
  instantiate(wasmBinary).then((wasm) => replay(wasm))
}
```
Hi,
thanks for your interest. Even though this repository is currently public, there is no stable release yet, so bugs like the one you mentioned can occur frequently. We are actively developing this tool and have a few things on our bucket list. I will keep you updated on the fixing of this bug.
Thanks for the report! I will try to look into this this week.
Hi @aoli-al, can you provide a repository that contains the code example you provided, making it easier to reproduce the problem? I think I can try to make it work with the code you provided, but a runnable code repository would make the debugging process much simpler.
Thanks for the PR. It would be best to reduce this test case to the core issue and then create a minimal node test case that reproduces the bug. Once we figure out the core issue, we can fix the bug. Offline test cases are intended to test the overall wasm-r3 pipeline, not the record and replay logic.
Hmm but I think this case is good enough. I'll take a look at the offline test case and make a node case out of it.
Yes, this is the minimum example I can get. The core issue is that when you call greet, it calls console.log, which is an import call. This will set lastFunctionReturn here.
While calling greet the second time, replay will check lastFunctionReturn and put the following memory load section in the function return section of console.log.
```js
new Uint8Array(instance.exports.memory.buffer)[1114120] = 87
new Uint8Array(instance.exports.memory.buffer)[1114121] = 111
new Uint8Array(instance.exports.memory.buffer)[1114122] = 114
new Uint8Array(instance.exports.memory.buffer)[1114123] = 108
new Uint8Array(instance.exports.memory.buffer)[1114124] = 100
```
The correct version of the replay function should be:
```js
instance.exports.__wbindgen_add_to_stack_pointer(-16)
instance.exports.__wbindgen_malloc(5,1)
new Uint8Array(instance.exports.memory.buffer)[1114120] = 72
new Uint8Array(instance.exports.memory.buffer)[1114121] = 101
new Uint8Array(instance.exports.memory.buffer)[1114122] = 108
new Uint8Array(instance.exports.memory.buffer)[1114123] = 108
new Uint8Array(instance.exports.memory.buffer)[1114124] = 111
instance.exports.greet(1048560,1114120,5)
instance.exports.__wbindgen_add_to_stack_pointer(16)
instance.exports.__wbindgen_free(1114168,17,1)
instance.exports.__wbindgen_add_to_stack_pointer(-16)
instance.exports.__wbindgen_malloc(5,1)
new Uint8Array(instance.exports.memory.buffer)[1114120] = 87
new Uint8Array(instance.exports.memory.buffer)[1114121] = 111
new Uint8Array(instance.exports.memory.buffer)[1114122] = 114
new Uint8Array(instance.exports.memory.buffer)[1114123] = 108
new Uint8Array(instance.exports.memory.buffer)[1114124] = 100
instance.exports.greet(1048560,1114120,5)
instance.exports.__wbindgen_add_to_stack_pointer(16)
instance.exports.__wbindgen_free(1114168,17,1)
```
I think the main issue here is how to associate the correct bytes with the imported function calls.
I think that with the kind of logic in commit https://github.com/jakobgetz/wasm-r3/commit/4f05d7a70b0ed12b843fc63fd523e0428ac43fdd, this case is solved. The node tests and the given case are passing, but one offline test and some of the online tests are failing, I think due to some other reasons. Will try to solve them after the vacation and write a detailed writeup on this.
| gharchive/issue | 2024-01-21T01:37:23 | 2025-04-01T06:44:34.549225 | {
"authors": [
"aoli-al",
"doehyunbaek",
"jakobgetz"
],
"repo": "jakobgetz/wasm-r3",
"url": "https://github.com/jakobgetz/wasm-r3/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
689946808 | FIX Container request hanging
Often when a request is sent to a container it is carried out, but the UI does not receive a response from Dashboard-Server. This makes the UI loader hang.
Instead of having a dedicated response topic, it would be better to only have overview and stat channels and include a status and message attribute in the JSON response from Dashboard Server
Eureka! I'm quite confident that I discovered the bug. The issue was identified early on as the SocketServer not "receiving" the latest events on the "command-responses-" topic.
The socket server looks for the response topics using a prefix. It looks for all the topics starting with "command-responses-", and the dashboard servers emit "command-responses-{Servername}-{SelfContainerId}". However, I believe it only looks once, when the consumer is created, at which topics contain this prefix. That means that if a topic is created after the consumer is created, the consumer never realizes that the new response topic exists.
I'm still unsure of how to properly fix this. One solution was proposed by Omvk97, but regardless of which solution is decided upon, it will require quite a bit of work. I'll discuss this with Mads.
After having played around with the consumer and threading in the socket server, I cannot find a way to "restart" the consumer and its thread with an updated list of topics to subscribe to. The closest I got was updating the assignments of the consumer, which takes effect immediately; however, there appeared to be no way to gracefully stop the consumer's task of consuming.
This means I'm now leaning towards starting a "fresh" consumer every time a command request is issued, limiting the consume task to 5 or 10 seconds, and simply terminating afterwards, as the "result" would be too late for the user anyway. The UI could show an error and allow the user to "reissue" the request. The consumer would listen to only the correct topic.
On a side note, I'm not sure that I agree with the removal of the command-responses topic as a concept, as it would leave commands such as rename, restart, etc. without a response.
After a conversation with @mads5606 I will now attempt to implement a "fresh" consumer of the response topic every time a request command is issued instead of the background service currently implemented.
During the process of implementing a fresh consumer, I found a way to get the "status" of the current consumer, and therefore to know when it should be restarted and whether it subscribes to the right topics. This means that I changed the plan back to a background worker that only restarts gracefully when a subscription topic is new; otherwise it runs as usual. It appears to be working, but there is a slight issue: background workers are supposed to be "forgotten" after they have been set up, so I turned it into a "singleton", which is not great code practice. I will therefore discuss the setup with Mads before creating a pull request.
The current implementation is located at: https://github.com/jakobhviid/Dashboard-Interface-Docker/tree/feature/issue-9-response-topic-bug
After a brief discussion with Mads, it was agreed that a simple solution is preferred over a complicated one. Therefore the background service instance remains available as a singleton, even though this is not the intention of a background service.
I have wrapped up the fixes in a pull request. The final implementation allows the background service with the Kafka consumer (running in its own async task) to be cancelled and started again. When it is started again, it uses the updated list of subscription topics. For now it is only possible to add to this list, as there is currently no use case for removing topics.
The methods outside of the background service can use a simple helper to check whether a topic is currently being subscribed to. If the topic is not subscribed to, they update the list of desired topics and then "restart" the consumer.
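A hedged asyncio sketch of that final design: a singleton-style background worker whose consume task can be cancelled and restarted with an updated, add-only topic list. All names (`ResponseConsumerService`, `ensure_subscribed`, `_run`) are illustrative, not the project's real API, and the Kafka loop is faked:

```python
# Restartable background consumer: cancelling the consume task and
# starting a new one is how the subscription list gets refreshed.
import asyncio

class ResponseConsumerService:
    def __init__(self):
        self.topics = []      # desired subscription topics (add-only)
        self._task = None

    async def _run(self):
        # Stand-in for the Kafka consume loop over self.topics.
        while True:
            await asyncio.sleep(0.01)

    def is_subscribed(self, topic):
        return topic in self.topics

    async def ensure_subscribed(self, topic):
        if self.is_subscribed(topic):
            return                      # already covered, nothing to do
        self.topics.append(topic)
        if self._task is not None:
            self._task.cancel()         # gracefully stop the old loop
            try:
                await self._task
            except asyncio.CancelledError:
                pass
        self._task = asyncio.create_task(self._run())  # restart with new list

async def demo():
    svc = ResponseConsumerService()
    await svc.ensure_subscribed("command-responses-server1-abc")
    await svc.ensure_subscribed("command-responses-server2-def")
    print(svc.topics)
    svc._task.cancel()

asyncio.run(demo())
```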
| gharchive/issue | 2020-09-01T08:40:18 | 2025-04-01T06:44:34.557544 | {
"authors": [
"Omvk97",
"dstoft"
],
"repo": "jakobhviid/Dashboard-Interface-Docker",
"url": "https://github.com/jakobhviid/Dashboard-Interface-Docker/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
135802950 | Repo Detail Edit Listbox formatting and error messages
We discussed this on #426, but just so it isn't forgotten:
[x] The listboxes on the repository detail page are not formatted as well as they used to be.
[x] The error message, which is derived from the name of one of the listbox group properties, is not very nice - see attached image.
Commits https://github.com/jakubgarfield/Bonobo-Git-Server/commit/174d17cb911ff6cf18ba5edc479f3a856f4beaca and https://github.com/jakubgarfield/Bonobo-Git-Server/commit/a64b615db48cac12da1c4a187379f7a83d600da1
| gharchive/issue | 2016-02-23T17:22:38 | 2025-04-01T06:44:34.560366 | {
"authors": [
"RedX2501",
"willdean"
],
"repo": "jakubgarfield/Bonobo-Git-Server",
"url": "https://github.com/jakubgarfield/Bonobo-Git-Server/issues/481",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1305468358 | Common binary data reading example or gallary?
I think it would be good to have some examples of reading common binary formats, like https://formats.kaitai.io/
Now that I've finally put up the website, we have a community showcase! https://binrw.rs/community/#showcase
Thanks for the reminder!
| gharchive/issue | 2022-07-15T01:33:28 | 2025-04-01T06:44:34.607326 | {
"authors": [
"WindSoilder",
"jam1garner"
],
"repo": "jam1garner/binrw",
"url": "https://github.com/jam1garner/binrw/issues/133",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
394770238 | Use Nuxt.js
[x] Create the api.plusarchive.com repository (the API that serves the data)
[x] Implement api.plusarchive.com with Yii
[x] Rename this repository to admin.plusarchive.com
[x] Create the plusarchive.com repository and make an initial commit
[x] Add links to api.plusarchive.com and plusarchive.com in the README
Done.
| gharchive/issue | 2018-12-29T07:28:22 | 2025-04-01T06:44:34.612509 | {
"authors": [
"jamband"
],
"repo": "jamband/admin.plusarchive.com",
"url": "https://github.com/jamband/admin.plusarchive.com/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |