Dataset schema: id (string, length 4–10), text (string, length 4–2.14M), source (string, 2 classes), created (timestamp[s], ranging 2001-05-16 21:05:09 to 2025-01-01 03:38:30), added (string date, ranging 2025-04-01 04:05:38 to 2025-04-01 07:14:06), metadata (dict).
2507215603
Error thrown when setting dynamic current and vehicle not charging, expected? I've created an automation that uses the service "set circuit dynamic limit" for my Easee One charger. I always get this error when executing it while the car is not charging; is this to be expected? Logger: homeassistant.components.automation.limit_dynamic_current Source: helpers/script.py:525 integration: Automation (documentation, issues) First occurred: 10:04:46 AM (1 occurrences) Last logged: 10:04:46 AM Limit dynamic current: Error executing script. Unexpected error for call_service at pos 3: 'dynamicCircuitCurrentP1' Traceback (most recent call last): File "/config/custom_components/easee/controller.py", line 760, in check_circuit_current or charger_data.state[compare_p2] != current_p2 ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/pyeasee/utils.py", line 55, in __getitem__ if type(self._storage[key]) == str and validate_iso8601(self._storage[key]): ~~~~~~~~~~~~~^^^^^ KeyError: 'dynamicCircuitCurrentP2' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 525, in _async_step await getattr(self, handler)() File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 764, in _async_call_service_step response_data = await self._async_run_long_action( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/src/homeassistant/homeassistant/helpers/script.py", line 727, in _async_run_long_action return await long_task ^^^^^^^^^^^^^^^ File "/usr/src/homeassistant/homeassistant/core.py", line 2763, in async_call response_data = await coro ^^^^^^^^^^ File "/usr/src/homeassistant/homeassistant/core.py", line 2806, in _execute_service return await target(service_call) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/custom_components/easee/services.py", line 582, in circuit_execute_set_current circuit = controller.check_circuit_current( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/config/custom_components/easee/controller.py", line 766, in check_circuit_current charger_data.config[compare_p1] != current_p1 ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/pyeasee/utils.py", line 55, in __getitem__ if type(self._storage[key]) == str and validate_iso8601(self._storage[key]): ~~~~~~~~~~~~~^^^^^ KeyError: 'dynamicCircuitCurrentP1 Closed as it's an Easee issue, not a charger-card issue
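The trace suggests the lookup itself is the failure: while no car is charging, the charger's reported state simply has no dynamicCircuitCurrentP1/P2 keys, so indexing them raises KeyError. Below is a minimal sketch of that failure mode and a defensive workaround; the function name and structure are hypothetical, not the integration's actual code.

```python
def check_circuit_current(state, current_p1, current_p2):
    # While the car is idle the dynamic-circuit keys are absent, so a plain
    # state["dynamicCircuitCurrentP1"] raises KeyError; .get() returns None instead.
    reported_p1 = state.get("dynamicCircuitCurrentP1")
    reported_p2 = state.get("dynamicCircuitCurrentP2")
    # Only skip the update when both reported values already match the request.
    if reported_p1 == current_p1 and reported_p2 == current_p2:
        return None  # nothing to change
    return (current_p1, current_p2)

# Missing keys no longer crash the service call:
print(check_circuit_current({}, 16, 16))  # -> (16, 16)
```

With this guard, calling the service while the charger reports no dynamic-circuit values would schedule the update instead of raising.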
gharchive/issue
2024-09-05T09:07:19
2025-04-01T06:46:01.299207
{ "authors": [ "jacoscar" ], "repo": "tmjo/charger-card", "url": "https://github.com/tmjo/charger-card/issues/67", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
223549714
Fix example in readme. Replaces process.env.HOME with require('os').homedir(). Thank you!
gharchive/pull-request
2017-04-22T08:27:48
2025-04-01T06:46:01.312626
{ "authors": [ "HarrySarson", "eush77" ], "repo": "tmpvar/repl.history", "url": "https://github.com/tmpvar/repl.history/pull/13", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1571960521
Running the project under pm2 with xvfb-run -s "-ac -screen 0 1280x1024x24" pm2 start bin/www -i 4 produces the following error: 0|www | Starting video generation: -------start--------- 0|www | TypeError: Cannot read property 'ARRAY_BUFFER' of null 0|www | at FFTransition.createBuffer (/volumn/ffmpeg/FFCreator/lib/animate/transition.js:73:45) 0|www | at FFTransition.createTransition (/volumn/ffmpeg/FFCreator/lib/animate/transition.js:61:10) 0|www | at FFTransition.bindGL (/volumn/ffmpeg/FFCreator/lib/animate/transition.js:40:10) 0|www | at /volumn/ffmpeg/FFCreator/lib/core/renderer.js:122:57 0|www | at arrayEach (/volumn/ffmpeg/node_modules/lodash/_arrayEach.js:15:9) 0|www | at forEach (/volumn/ffmpeg/node_modules/lodash/forEach.js:38:10) 0|www | at Renderer.transBindGL (/volumn/ffmpeg/FFCreator/lib/core/renderer.js:122:5) 0|www | at Renderer.start (/volumn/ffmpeg/FFCreator/lib/core/renderer.js:55:10) 0|www | at processTicksAndRejections (internal/process/task_queues.js:95:5) How can I solve this? https://tnfe.github.io/FFCreator/#/qa/qa Please read the QA. Removing some of the parameters fixes this bug: xvfb-run pm2 start bin/www -i 4
gharchive/issue
2023-02-06T05:52:04
2025-04-01T06:46:01.602483
{ "authors": [ "drawcall", "f1748x", "zdu-strong" ], "repo": "tnfe/FFCreator", "url": "https://github.com/tnfe/FFCreator/issues/322", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2196463442
Some notes on issues encountered while running. Local environment: macOS. Python version must be 3.10; version 3.11 gives dependency errors. Node.js version: v14.21.3. Added two dependencies to package.json: "webpack": "^4.46.0", "webpack-cli": "^4.0.0"; I also pinned my local swiper dependency to "swiper": "=5.4.5". Created a fonts directory under server/public/static and put all the font files in it. Changed line 78 of server/service/video.js to comp.setFont('./public/static/demo/wryh.ttf'); it looks for the public folder relative to the current directory. mark
gharchive/issue
2024-03-20T01:42:12
2025-04-01T06:46:01.605391
{ "authors": [ "drawcall", "jiandao7114" ], "repo": "tnfe/shida", "url": "https://github.com/tnfe/shida/issues/27", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
75303969
Unit test for Django Resource containing an ImageField
Hi, I am trying to write a unit test for my resource, but I am getting the error: {"error": "Expecting property name enclosed in double quotes: line 1 column 2 (char 1)"}
Here is my model:
    class MyModel(models.Model):
        input_image = models.ImageField('Input image', upload_to='process')
My resource:
    class MyModelResource(DjangoResource):
        preparer = FieldsPreparer(fields={
            'input_image': 'input_image_url',
            'id': 'id',
        })
        def is_authenticated(self):
            return True
        @skip_prepare
        def list(self):
            return list(MyModel.objects.all().values('id'))
        def update(self):
            raise MethodNotAllowed()
        def delete(self):
            raise MethodNotAllowed()
        def detail(self, pk):
            return MyModel.objects.get(id=pk)
        def create(self):
            form = MyModelForm(self.request, self.data, self.request.FILES)
            if form.is_valid():
                obj = form.save()
My unit test:
    class CountProcessResourceTest(TestCase):
        def setUp(self):
            self.client = Client()
            self.img_url = 'https://www.djangoproject.com/s/img/small-fundraising-heart.png'
            image = urllib.urlopen(self.img_url)
            self.image = SimpleUploadedFile(name='test_image.jpg', content=image.read(), content_type='image/png')
            self.object = CountProcess.objects.create(input_image=self.image)
        def test_create(self):
            api_url = reverse('api_mymodel_list')
            t = {"input_image": self.image}
            response = self.client.post(api_url, t, content_type='application/json')
            print response.request
            print response.content
What am I doing wrong? Silvio, try using @property with a dict; it works for me and makes it easier to use the thumbnail attribute.
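The reported error is a JSON-parsing failure, and it matches what happens when a dict payload is posted while declaring content_type='application/json': the body the server receives is the dict's str() form, which uses single quotes and is not valid JSON. A small illustrative sketch of the mismatch (the usual fix in a test is to send json.dumps(...) as the body, or to drop the explicit content type and let the test client do a multipart upload for files):

```python
import json

# Roughly what arrives when a dict is sent under a JSON content type:
body = str({"input_image": "test_image.jpg"})  # "{'input_image': 'test_image.jpg'}"
try:
    json.loads(body)
except ValueError as e:
    # Same class of message as in the issue:
    # "Expecting property name enclosed in double quotes: line 1 column 2 (char 1)"
    print(e)

# Serializing explicitly produces a body the server can actually parse:
payload = json.dumps({"input_image": "https://example.com/test_image.png"})
assert json.loads(payload)["input_image"].endswith("test_image.png")
```

Note that even with valid JSON, an actual file upload still needs multipart handling on the server side; a JSON body can only carry a reference such as a URL.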
gharchive/issue
2015-05-11T17:49:46
2025-04-01T06:46:01.610997
{ "authors": [ "guilhermetavares", "silviomoreto" ], "repo": "toastdriven/restless", "url": "https://github.com/toastdriven/restless/issues/48", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
52243888
Periodic splines? Do we want to use periodic splines? Why not specify them using trig interpolants? I'm concerned about the limitation to fourth-order accuracy for smooth boundaries. Perhaps (at the risk of complexity) there needs to be a periodic-closed-curve class with subclassed implementations. The periodic splines are there because I was using them for various experiments. I've found them useful for being able to specify arbitrary smooth-ish boundaries. They probably should be done using trig interpolants; the current code was based on some older stuff I used to try to save time. Any parameterised closed curve should probably be considered periodic. It makes sense to me to create a periodic/parameterised subclass of closed curve.
gharchive/issue
2014-12-17T14:11:25
2025-04-01T06:46:01.637355
{ "authors": [ "ehkropf", "tobydriscoll" ], "repo": "tobydriscoll/conformalmapping", "url": "https://github.com/tobydriscoll/conformalmapping/issues/50", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
124572882
initial support for filter rules Addresses https://github.com/toddjordan/ember-cli-dynamic-forms/issues/1 @jeremywrowe do you mind reviewing this? I refactored the component a bit to separate main functions out of the lifecycle hooks and make things a bit more clear. Will do Looks good, just some small tweaks / questions.
gharchive/pull-request
2016-01-02T03:14:20
2025-04-01T06:46:01.666584
{ "authors": [ "jeremywrowe", "toddjordan" ], "repo": "toddjordan/ember-cli-dynamic-forms", "url": "https://github.com/toddjordan/ember-cli-dynamic-forms/pull/12", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
137077709
Add support for require object and $onInit controller method Technically these are features of directives, not components, but it would require too much fragile patching in versions prior to 1.5. Directives can live fine without them, but they are essential for inter-component communication; components have no alternative to link and are thus barely usable without them. Can we add them, please? Do you think polyfilling this would be achievable? Unsure if there are things here that rely on Angular internals. @toddmotto Absolutely. I believe it has to be something like this. It seems to be quite close to what $compile does in 1.5. Awesome. Do you think hi-jacking the Controller would be better and then running this.oninit first? Yes, I'm quite sure about that, that's what is done under the hood. All required controllers are available as instance properties when $onInit runs on the instance; that's its purpose. I fixed it; it should be preLink to maintain the proper order of $onInit execution. Sounds good to me, let's get this added! Ok. It looks like there's nothing to break down here. Hi there, on my project we are really interested in using this polyfill, mainly for the $onInit method. Have you scheduled the merge of that pull request? Thank you very much. Hey. Super busy at the moment, will try to land it this week; you can pull in the code from the PR for now as a temporary solution if you'd like! -- @toddmotto
gharchive/issue
2016-02-28T17:55:55
2025-04-01T06:46:01.674221
{ "authors": [ "bisubus", "pablolazaro", "toddmotto" ], "repo": "toddmotto/angular-component", "url": "https://github.com/toddmotto/angular-component/issues/17", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
265748955
Detail view: edit start/stop times User can edit start and stop times by tapping on a start/stop time in edit view When editing start time, show previous slot as well When editing stop time, show next slot Don't allow editing stop time for currently running time slot Use handle to slide up/down Popup is constrained to the sides (20pt from the top and 16 from the bottom), not to a certain height, so the height should change based on the device size. Increase the time slots' heights proportionally so they'll cover the popup. I see your point but then we should change this logic everywhere (eg in edit view, popups etc). Let's keep it as it is for now but I'll think about it. Ok, another thing: we can't draw over the status bar, so we can't put that white overlay there. What I'll do, if it's ok with you, is lower the popup a little bit so you can kind of see the app header. @Odrakir Yes, sure. 👍 And sorry, I know that we can't put anything on top of the status bar, but somehow I missed reorganizing the layers. One more thing: does the color of the handle match the category of the slot you are trying to edit, or is it always yellow? It should match the color of the slot the user tries to edit.
gharchive/issue
2017-10-16T12:17:42
2025-04-01T06:46:01.689663
{ "authors": [ "Odrakir", "pe0ny" ], "repo": "toggl/superday", "url": "https://github.com/toggl/superday/issues/814", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
441601000
Choose / Edit Project & Customer, Delete Task-Entry Describe the new feature I would like to be able to choose a project for a new time track I would like to be able to edit an existing time track (meaning the recorded time, and when the timer starts to run) I would like to have a delete button in case I accidentally start a timer and want to remove it How does this help you? Improves my productivity by saving me from switching to Toggl to achieve this. If we couldn't add this feature, is there a compromise you can think of? No @xstable afaik you can already add/remove a project for new time entries or any running time entry We'll look to add a delete option 👍 we have issues for those things already: https://github.com/toggl/toggl-button/issues/1463 and https://github.com/toggl/toggl-button/issues/1399 oh wait, we don't have issues for deleting and editing stopped entries. i'll just rename this one
gharchive/issue
2019-05-08T07:55:55
2025-04-01T06:46:01.693467
{ "authors": [ "dooart", "shantanuraj", "xstable" ], "repo": "toggl/toggl-button", "url": "https://github.com/toggl/toggl-button/issues/1395", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
511655017
Todoist - Tags not pulling through Windows 10 Chrome 77 Button 1.44.1 Relevant integration (if any): Todoist 🐛 Describe the bug Tags are not pulling through on time entries to Toggl from Todoist. Expected behaviour Tags will appear in Toggl from Todoist time entries. Steps to reproduce Stop a time entry that includes tags. Steps to reproduce the behaviour: Start a time entry with Todoist integration that includes a tag Finish the timer Other details or context This is how tags show now on the main list (assumption that tags in Todoist are named labels) This is how it appears in task details view: User report User Report
gharchive/issue
2019-10-24T01:43:21
2025-04-01T06:46:01.698184
{ "authors": [ "dianetoggl" ], "repo": "toggl/toggl-button", "url": "https://github.com/toggl/toggl-button/issues/1529", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
492816151
Add checksum verification for application updates 💻 Environment Platform: Lib (Windows/Linux; macOS is using Sparkle) 📒 Description The problem is that sometimes a corrupted installer process can be launched "successfully" (it does not fail during process start), so it is indistinguishable from a non-corrupted one. This can be fixed by adding checksum verification. We need to: [ ] Add a checksum field to updates.json. [ ] Add checksum verification to our app. Part 1 done with: https://github.com/toggl/toggldesktop-branding/pull/52/
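The second checklist item could be sketched roughly as below; this is an illustrative sketch only, the digest algorithm and field/helper names are assumptions, and the real client would read the expected digest from the checksum field added to updates.json.

```python
import hashlib

def sha256_of(path):
    # Stream the file in chunks so large installers don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path, expected_sha256):
    # Refuse to launch the installer when its digest doesn't match the
    # checksum published alongside the update; a corrupted download is then
    # caught even if the installer process would still "start" successfully.
    return sha256_of(path) == expected_sha256
```

The caller would abort the update (and presumably re-download) whenever verify_update returns False.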
gharchive/issue
2019-09-12T13:44:57
2025-04-01T06:46:01.700868
{ "authors": [ "IndrekV", "skel35" ], "repo": "toggl/toggldesktop", "url": "https://github.com/toggl/toggldesktop/issues/3310", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
817262315
About Learning Rate Scaling Hi, I'm interested in the actual implementation of the Learning Rate Scaling you mentioned in the paper, but I can't find its exact location in the code. I did find a method, get_scale_factor(opt), in main_train.py, but I can't find any call to it; could you point it out? Hi, we simply change the learning rate for different layers in the generator. This is done here.
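From the reply, the technique amounts to per-layer learning rates in the generator's optimizer. A hedged pure-Python sketch of the idea follows; the names and the exact scaling scheme are assumptions, not ConSinGAN's actual code. In PyTorch, dicts like these (each extended with a "params" entry holding that stage's parameters) would be passed as parameter groups to the optimizer.

```python
def stage_lrs(num_stages, base_lr=5e-4, lr_scale=0.1):
    # Earlier (coarser) stages get a geometrically smaller learning rate,
    # so the newest stage trains at base_lr while older stages barely move.
    return [
        {"stage": i, "lr": base_lr * lr_scale ** (num_stages - 1 - i)}
        for i in range(num_stages)
    ]

for group in stage_lrs(3):
    print(group)
```

With three stages and lr_scale=0.1, the newest stage trains at 5e-4 and the oldest at 5e-6.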
gharchive/issue
2021-02-26T11:05:34
2025-04-01T06:46:01.702795
{ "authors": [ "IridiumH", "tohinz" ], "repo": "tohinz/ConSinGAN", "url": "https://github.com/tohinz/ConSinGAN/issues/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1809184735
Remember The Sidebar Setting After Closing and Reopening The Plugin Describe the solution you'd like If a user collapses the sidebar, remember the setting when the user next reopens the plugin. Additional context +1 for this! 👋 +1 Would also love to see that implemented. I work on a relatively small screen, so I keep Token Studio quite shrunk and open it only when I need it. It's a bit frustrating that the sidebar takes most of the space on start and doesn't let me click through tokens without collapsing it first. Added to Featurebase Roadmap: 🪙 Token organization enhancements
gharchive/issue
2023-07-18T06:04:25
2025-04-01T06:46:01.706533
{ "authors": [ "DevTomanuel", "UdayHyma", "davidmcneroe86", "jkazimierczak", "keeganedwin" ], "repo": "tokens-studio/figma-plugin", "url": "https://github.com/tokens-studio/figma-plugin/issues/2072", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
633308051
Feature/issue 4574 simplification 👏 Resolved Issues close #4574 📝 Related Issues ⛏ Details of Changes Aggregated chart generation procedure in Bar/Doughnut/Line into one 📸 Screenshots No CSS/appearance change is made. The chart display seems to be off even when the window size is not changed... cause unknown.... @y-chan Huh, I wonder where that broke. In any case, thank you for pointing it out!
gharchive/pull-request
2020-06-07T11:05:45
2025-04-01T06:46:01.730182
{ "authors": [ "Nekoya3", "mcdmaster" ], "repo": "tokyo-metropolitan-gov/covid19", "url": "https://github.com/tokyo-metropolitan-gov/covid19/pull/4754", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
807745763
Toggle dark mode This is a feature request. Is it planned to add a feature to support toggling the theme? This could also be saved in local storage so the site remembers the user's choice. Adding a toggle button would break the default UI; it should be a plugin. If vuepress-next supports dark mode, this theme will be moved to a toggling plugin. For now, you can change it via: <html> -> auto, <html theme="light"> -> light, <html theme="dark"> -> dark
gharchive/issue
2021-02-13T13:02:26
2025-04-01T06:46:01.734039
{ "authors": [ "nandi95", "tolking" ], "repo": "tolking/vuepress-theme-default-prefers-color-scheme", "url": "https://github.com/tolking/vuepress-theme-default-prefers-color-scheme/issues/29", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1446458867
Advanced Wireless Inventory not working across dimensions I have a crafting terminal set up next to a level 4 beacon and connected it to the advanced wireless terminal, but it's not letting me access it across dimensions. I have the same issue, and it doesn't even work in the same dimension further than 64 blocks away, even when the beacon is touching the terminal and is activated properly. This is quite annoying; hoping for a fix soon. Thank you! Strange, it works on Fabric 1.3.2. Does it work on Medieval Minecraft? In my studies and use, the area that the wireless terminal is bound to needs to be chunkloaded, or else when the terminal calls for it, it has no idea it exists or where to go. I recommend chunkloading the entire area that involves your chests and Tom's Storage components. How would I do that? I recommend the Open Parties and Claims mod, which allows you to force-load chunks; I force-load chunks from my JourneyMap with this mod. If you have both mods installed, simply open your JourneyMap, shift-right-click on the chunk square you want to claim, then shift-right-click on it again to force-load it; force-loaded chunks will have dotted lines throughout the claimed chunk box on the map. If you ever want to see how I did it, let me know; we could get together on Discord and I can share my screen and show you what I do and how I have mine set up. That would be helpful. Open Xaero's map, claim the chunks where your beacon is, and make them force-loaded by shift-clicking while selecting the chunks to load. This is very much still a problem; I am using it too and even have a chunk-loader mod along with a map loaded in that area, and in the Nether it says it is out of range even with a beacon set up and the entire house loaded along with the beacon. The chunk-loader mod is Chunk Loader, and I am also using Xaero's map to claim a chunk with it. Added an indicator to the terminal if it sees the beacon; make sure the beacon is in an 8-block radius. It still doesn't work for me over long distances in the overworld, nor in other dimensions; the chunks are force-loaded.
gharchive/issue
2022-11-12T13:48:22
2025-04-01T06:46:01.754766
{ "authors": [ "FoxyTails1987", "H3lpMeMan", "ItzDingus", "KinyWolf", "Nemo12123", "Pultex", "Xipher81", "tom5454" ], "repo": "tom5454/Toms-Storage", "url": "https://github.com/tom5454/Toms-Storage/issues/165", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
652168163
Loop detected when there is no loop A setup like this gives an error saying "loop is detected", when that's clearly not the case. Also, only items from the first chest are accessible. The inventory cables are designed for long-range one-to-one connections. Use the Inventory Connector and Inventory Trims to connect them like this.
gharchive/issue
2020-07-07T09:34:40
2025-04-01T06:46:01.756498
{ "authors": [ "DedMaxim", "tom5454" ], "repo": "tom5454/Toms-Storage", "url": "https://github.com/tom5454/Toms-Storage/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
827935001
nil DNU #isBlockNode from PPFormatter >> #needsParenthesisFor: Occurs reproducibly when pretty-printing Object>>compositeAnimations (part of hpi-swa/animations). 10 March 2021 4:42:27.56515 pm VM: Win32 - Smalltalk Image: Squeak6.0alpha [latest update: #20249] SecurityManager state: Restricted: false FileAccess: true SocketAccess: true Working Dir C:\Users\Christoph\OneDrive\Dokumente\Squeak Trusted Dir C:\Users\Christoph\OneDrive\Dokumente\Squeak\Christoph Untrusted Dir C:\Users\Christoph\OneDrive\Dokumente\My Squeak UndefinedObject (Object) >> #doesNotUnderstand: #isBlockNode Receiver: nil Arguments and temporary variables: aMessage: isBlockNode exception: MessageNotUnderstood: UndefinedObject>>isBlockNode resumeValue: nil Receiver's instance variables: nil PPFormatter >> #needsParenthesisFor: Receiver: a PPFormatter Arguments and temporary variables: aNode: {registry := AnimAnimationRegistry value} Receiver's instance variables: stream: a WriteStream indent: 1 parents: a Dictionary({[#()]}->{AnimAnimationRegistry value ifNil: [#()] ifNo...etc... preFormatCache: a Dictionary() comments: an OrderedCollection() PPFormatter >> #visitNode: Receiver: a PPFormatter Arguments and temporary variables: aNode: {registry := AnimAnimationRegistry value} needsParens: nil Receiver's instance variables: stream: a WriteStream indent: 1 parents: a Dictionary({[#()]}->{AnimAnimationRegistry value ifNil: [#()] ifNo...etc... preFormatCache: a Dictionary() comments: an OrderedCollection() PPFormatter >> #visitMessageNode: Receiver: a PPFormatter Arguments and temporary variables: aNode: {(registry := AnimAnimationRegistry value) == nil} multiLine: nil isInCascade: false Receiver's instance variables: stream: a WriteStream indent: 1 parents: a Dictionary({[#()]}->{AnimAnimationRegistry value ifNil: [#()] ifNo...etc... 
preFormatCache: a Dictionary() comments: an OrderedCollection() MessageNode >> #accept: Receiver: {(registry := AnimAnimationRegistry value) == nil} Arguments and temporary variables: aVisitor: a PPFormatter Receiver's instance variables: comment: nil pc: nil receiver: {registry := AnimAnimationRegistry value} selector: {==} precedence: 2 special: 0 arguments: {{nil}} sizes: #(nil) equalNode: nil caseErrorNode: nil originalReceiver: {registry := AnimAnimationRegistry value} originalSelector: #== originalArguments: {{nil}} [] in PPFormatter >> #preFormat: Receiver: a PPFormatter Arguments and temporary variables: aNode: {(registry := AnimAnimationRegistry value) == nil} formatter: {a PPFormatter} Receiver's instance variables: stream: a WriteStream indent: 1 parents: a Dictionary({[#()]}->{AnimAnimationRegistry value ifNil: [#()] ifNo...etc... preFormatCache: a Dictionary() comments: an OrderedCollection() FullBlockClosure >> #cull: Receiver: [closure] in PPFormatter >> #preFormat: Arguments and temporary variables: firstArg: {(registry := AnimAnimationRegistry value) == nil} Receiver's instance variables: outerContext: PPFormatter >> #preFormat: startpcOrMethod: ([] in PPFormatter>>#preFormat: "a CompiledBlock(3847756)") numArgs: 0 receiver: a PPFormatter [] in Dictionary >> #at:ifAbsentPut: Receiver: a Dictionary() Arguments and temporary variables: key: {(registry := AnimAnimationRegistry value) == nil} aBlock: [closure] in PPFormatter >> #preFormat: Receiver's instance variables: tally: 0 array: #(nil nil nil nil nil) Dictionary >> #at:ifAbsent: Receiver: a Dictionary() Arguments and temporary variables: key: {(registry := AnimAnimationRegistry value) == nil} aBlock: [closure] in Dictionary >> #at:ifAbsentPut: Receiver's instance variables: tally: 0 array: #(nil nil nil nil nil) Dictionary >> #at:ifAbsentPut: Receiver: a Dictionary() Arguments and temporary variables: key: {(registry := AnimAnimationRegistry value) == nil} aBlock: [closure] in PPFormatter >> 
#preFormat: Receiver's instance variables: tally: 0 array: #(nil nil nil nil nil) PPFormatter >> #preFormat: Receiver: a PPFormatter Arguments and temporary variables: aNode: {(registry := AnimAnimationRegistry value) == nil} formatter: {a PPFormatter} Receiver's instance variables: stream: a WriteStream indent: 1 parents: a Dictionary({[#()]}->{AnimAnimationRegistry value ifNil: [#()] ifNo...etc... preFormatCache: a Dictionary() comments: an OrderedCollection() PPFormatter >> #willBeMultiLine: Receiver: a PPFormatter Arguments and temporary variables: aNode: {(registry := AnimAnimationRegistry value) == nil} text: nil Receiver's instance variables: stream: a WriteStream indent: 1 parents: a Dictionary({[#()]}->{AnimAnimationRegistry value ifNil: [#()] ifNo...etc... preFormatCache: a Dictionary() comments: an OrderedCollection() [] in PPFormatter >> #isMultiLineMessage: Receiver: a PPFormatter Arguments and temporary variables: < Receiver's instance variables: stream: a WriteStream indent: 1 parents: a Dictionary({[#()]}->{AnimAnimationRegistry value ifNil: [#()] ifNo...etc... preFormatCache: a Dictionary() comments: an OrderedCollection() [] in OrderedCollection (Collection) >> #anySatisfy: Receiver: an OrderedCollection({(registry := AnimAnimationRegistry value) == nil} {[#()]}) Arguments and temporary variables: aBlock: {(registry := AnimAnimationRegistry value) == nil} each: [closure] in PPFormatter >> #isMultiLineMessage: Receiver's instance variables: array: {{(registry := AnimAnimationRegistry value) == nil} . {[#()]}} firstIndex: 1 lastIndex: 2 OrderedCollection >> #do: Receiver: an OrderedCollection({(registry := AnimAnimationRegistry value) == nil} {[#()]}) Arguments and temporary variables: aBlock: [closure] in OrderedCollection (Collection) >> #anySatisfy: index: 1 Receiver's instance variables: array: {{(registry := AnimAnimationRegistry value) == nil} . 
{[#()]}} firstIndex: 1 lastIndex: 2 OrderedCollection (Collection) >> #anySatisfy: Receiver: an OrderedCollection({(registry := AnimAnimationRegistry value) == nil} {[#()]}) Arguments and temporary variables: aBlock: [closure] in PPFormatter >> #isMultiLineMessage: Receiver's instance variables: array: {{(registry := AnimAnimationRegistry value) == nil} . {[#()]}} firstIndex: 1 lastIndex: 2 PPFormatter >> #isMultiLineMessage: Receiver: a PPFormatter Arguments and temporary variables: aNode: {AnimAnimationRegistry value ifNil: [#()] ifNotNil: [:registry | regis...etc... relevantParts: an OrderedCollection({(registry := AnimAnimationRegistry value) ...etc... Receiver's instance variables: stream: a WriteStream indent: 1 parents: a Dictionary({[#()]}->{AnimAnimationRegistry value ifNil: [#()] ifNo...etc... preFormatCache: a Dictionary() comments: an OrderedCollection() PPFormatter >> #visitMessageNode: Receiver: a PPFormatter Arguments and temporary variables: aNode: {AnimAnimationRegistry value ifNil: [#()] ifNotNil: [:registry | regis...etc... multiLine: nil isInCascade: false Receiver's instance variables: stream: a WriteStream indent: 1 parents: a Dictionary({[#()]}->{AnimAnimationRegistry value ifNil: [#()] ifNo...etc... preFormatCache: a Dictionary() comments: an OrderedCollection() MessageNode >> #accept: Receiver: {AnimAnimationRegistry value ifNil: [#()] ifNotNil: [:registry | registry compositeAnimations...etc... Arguments and temporary variables: aVisitor: a PPFormatter Receiver's instance variables: comment: nil pc: nil receiver: {(registry := AnimAnimationRegistry value) == nil} selector: {ifTrue:ifFalse:} precedence: 3 special: 17 arguments: an OrderedCollection({[#()]} {[:registry | registry compositeAnimations...etc... sizes: #(nil nil) equalNode: nil caseErrorNode: nil originalReceiver: {AnimAnimationRegistry value} originalSelector: #ifNil:ifNotNil: originalArguments: an OrderedCollection({[#()]} {[:registry | registry compositeAnimations...etc... 
--- The full stack ---

```
UndefinedObject (Object) >> #doesNotUnderstand: #isBlockNode
PPFormatter >> #needsParenthesisFor:
PPFormatter >> #visitNode:
PPFormatter >> #visitMessageNode:
MessageNode >> #accept:
[] in PPFormatter >> #preFormat:
FullBlockClosure >> #cull:
[] in Dictionary >> #at:ifAbsentPut:
Dictionary >> #at:ifAbsent:
Dictionary >> #at:ifAbsentPut:
PPFormatter >> #preFormat:
PPFormatter >> #willBeMultiLine:
[] in PPFormatter >> #isMultiLineMessage:
[] in OrderedCollection (Collection) >> #anySatisfy:
OrderedCollection >> #do:
OrderedCollection (Collection) >> #anySatisfy:
PPFormatter >> #isMultiLineMessage:
PPFormatter >> #visitMessageNode:
MessageNode >> #accept:
[] in PPFormatter >> #visitNode:
Dictionary >> #at:ifPresent:ifAbsent:
PPFormatter >> #visitNode:
PPFormatter >> #visitReturnNode:
ReturnNode >> #accept:
[] in PPFormatter >> #visitNode:
Dictionary >> #at:ifPresent:ifAbsent:
PPFormatter >> #visitNode:
[] in [] in PPFormatter >> #visitBlockNode:
OrderedCollection >> #do:
[] in PPFormatter >> #visitBlockNode:
PPFormatter >> #indent:around:
PPFormatter >> #visitBlockNode:
BlockNode >> #accept:
[] in PPFormatter >> #visitNode:
Dictionary >> #at:ifPresent:ifAbsent:
PPFormatter >> #visitNode:
[] in PPFormatter >> #visitMethodNode:
PPFormatter >> #indent:around:
PPFormatter >> #visitMethodNode:
MethodNode >> #accept:
PPFormatter class >> #formatString:class:noPattern:notifying:
PPFormatter class >> #format:in:notifying:
Browser (CodeHolder) >> #sourceStringPrettifiedAndDiffed
Browser >> #selectedMessage
[] in Browser >> #contents
[] in Browser (CodeHolder) >> #editContentsWithDefault:
Dictionary >> #at:ifAbsent:
Browser (CodeHolder) >> #editContentsWithDefault:
Browser >> #contents
PluggableTextMorphPlus (PluggableTextMorph) >> #getText
PluggableTextMorphPlus (PluggableTextMorph) >> #update:
PluggableTextMorphPlus >> #update:
[] in Browser (Object) >> #changed:
DependentsArray >> #do:
Browser (Object) >> #changed:
Browser >> #changed:
Browser (Object) >> #contentsChanged
Browser (CodeHolder) >> #contentsChanged
Browser (CodeHolder) >> #togglePrettyPrint
Browser (CodeHolder) >> #setContentsSymbol:
PluggableDropDownListMorph >> #basicSelection:
-- and more not shown --
```

Should be fixed now, please reopen if you manage to reproduce it :)

Unfortunately, it still occurs on my end (just ran the install script again).

Are you sure you have committed everything? 😅

Hmm, tests on CI are green for this. What method are you currently trying it out on? Did you `get; load` the install script?

I'm very sorry for the late answer - first I had network problems with my image and then I was on holiday ... Anyway: yes, of course, I missed the `get`, now it works on my end as well! 👍

No problem at all, thank you for the feedback :)
gharchive/issue
2021-03-10T15:43:02
2025-04-01T06:46:01.796933
{ "authors": [ "LinqLover", "tom95" ], "repo": "tom95/poppy-print", "url": "https://github.com/tom95/poppy-print/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
245926997
Fixes window inner size calc for hidpi windows (X11). X11 always returns the geometry in pixel units. Since `window.get_inner_size` returns the size in points in the other window manager implementations, X11 should also return points instead of pixels. This commit also fixes https://github.com/tomaka/glutin/issues/903 Thanks
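For readers following along, the pixels-to-points conversion described in this PR is just a division by the HiDPI factor. A minimal illustration (Python used here for clarity; winit itself is Rust, and the function name is made up):

```python
def inner_size_points(pixel_width, pixel_height, hidpi_factor):
    """Convert a physical (pixel) size, as X11 reports it, into logical
    points, matching what get_inner_size returns on other platforms."""
    return (pixel_width / hidpi_factor, pixel_height / hidpi_factor)

# e.g. a 3840x2160 window on a 2x HiDPI display is 1920x1080 points
```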
gharchive/pull-request
2017-07-27T05:07:09
2025-04-01T06:46:01.799237
{ "authors": [ "tomaka", "umurgdk" ], "repo": "tomaka/winit", "url": "https://github.com/tomaka/winit/pull/245", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
328232030
Added a GitHub PULL_REQUEST_TEMPLATE I figured this could be useful for making sure contributors are aware of what changes they have to make. Thanks! This is a really good idea, since about 50% of the time I have to tell someone to add a CHANGELOG entry. I've made some changes to better reflect my expectations. Hopefully from now on things will be a bit more efficient around here.
gharchive/pull-request
2018-05-31T17:37:18
2025-04-01T06:46:01.800571
{ "authors": [ "dannyfritz", "francesca64" ], "repo": "tomaka/winit", "url": "https://github.com/tomaka/winit/pull/542", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
307154210
Wrong Content-Length during record after modifying body with a ResponseTransformer

Hello, I need to replace an absolute URL in a response with WireMock in proxy mode. During recording, after the body is modified by a ResponseTransformer, I get a wrong (not updated) Content-Length. When I try to update Content-Length in the ResponseTransformer, the final result is that headers are missing, e.g. Set-Cookie.

Steps to reproduce (wiremock-standalone-2.14.0.jar):

1. Prepare an HTTP server with this response - headers: `Set-Cookie: IReallyNeedThisCookie`, body: `test for headers issue` - or use http://www.mocky.io/v2/5ab204022e000081004cbf2f
2. Prepare a WireMock ResponseTransformer and add it as `--extensions`:

```java
import com.github.tomakehurst.wiremock.common.FileSource;
import com.github.tomakehurst.wiremock.extension.Parameters;
import com.github.tomakehurst.wiremock.http.*;

import java.util.regex.Pattern;

public class ReplaceResponseTransformer extends SuperResponseTransformer {

    static Pattern mockyPattern = Pattern.compile("for headers");

    @Override
    public Response transform(Request request, Response response, FileSource files, Parameters parameters) {
        String body = response.getBodyAsString();
        if (body != null && mockyPattern.matcher(body).find()) {
            body = body.replaceAll(mockyPattern.pattern(), "for HTTP HEADERS");
            return Response.Builder.like(response).body(body)
                    .headers(updateContentLength(response.getHeaders(), body.length()))
                    .build();
        }
        return response;
    }

    private HttpHeaders updateContentLength(HttpHeaders headers, int bodyLength) {
        HttpHeaders newHttpHeaders = HttpHeaders.noHeaders();
        for (HttpHeader header : headers.all()) {
            if (header.key().equalsIgnoreCase("Content-Length")) {
                newHttpHeaders.plus(new HttpHeader(header.key(), bodyLength + ""));
            } else {
                newHttpHeaders.plus(header);
            }
        }
        return newHttpHeaders;
    }

    @Override
    public boolean applyGlobally() {
        return true;
    }
}
```

3. Start WireMock:

```
/usr/bin/java -cp my-extension-0.0.1.jar:wiremock-standalone-2.14.0.jar \
  com.github.tomakehurst.wiremock.standalone.WireMockServerRunner --verbose \
  --match-headers Set-Cookie --extensions my.extensions.ReplaceResponseTransformer \
  --https-port 6443 --https-keystore wiremock.keystore --enable-browser-proxying
```

4. Start recording: `wireMock.startStubRecording("http://www.mocky.io/v2/5ab204022e000081004cbf2f");`
5. Call WireMock: `curl 'https://localhost:6443/' -v -k`

```
GET / HTTP/1.1
Host: localhost:6443
User-Agent: curl/7.47.0
Accept: */*

< HTTP/1.1 200 OK
< Vary: Accept-Encoding, User-Agent
< Transfer-Encoding: chunked
< Server: Jetty(9.2.z-SNAPSHOT)
< Connection #0 to host localhost left intact
test for HTTP HEADERS issue
```

The response body is as expected, but there is no Set-Cookie header. If you remove `updateContentLength` from ReplaceResponseTransformer, the returned Content-Length is shorter than the response with the applied transformation, so the content is trimmed.

Closing this due to its age and the fact that the recorder has changed quite significantly in the meantime. Please reopen if still an issue.
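An editorial note on the transformer above: Content-Length is defined in octets, while `body.length()` in the Java snippet counts characters, so the two diverge for any non-ASCII body. A small illustration of the recomputation (Python for brevity; the function name is hypothetical and not part of WireMock's API):

```python
def recompute_content_length(body, encoding="utf-8"):
    """Content-Length is measured in bytes (octets), so encode the
    transformed body first and count bytes, not characters."""
    return len(body.encode(encoding))

ascii_body = "test for HTTP HEADERS issue"
utf8_body = "tëst för HTTP HEADERS issue"  # non-ASCII chars take >1 byte in UTF-8

# For pure ASCII, chars == bytes; with non-ASCII characters they diverge,
# so a character count would understate the header and truncate the body.
```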
gharchive/issue
2018-03-21T08:29:13
2025-04-01T06:46:01.809642
{ "authors": [ "piotrbo", "tomakehurst" ], "repo": "tomakehurst/wiremock", "url": "https://github.com/tomakehurst/wiremock/issues/907", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
225945015
fix deepSize so that empty arrays and objects have a size of 1 #632 Some tests had to be adapted wrt. expected sizes or expected distances, but all of these made sense to me. Hi Tom, I rebased this to your current master.
gharchive/pull-request
2017-05-03T10:40:27
2025-04-01T06:46:01.810843
{ "authors": [ "zuckel" ], "repo": "tomakehurst/wiremock", "url": "https://github.com/tomakehurst/wiremock/pull/660", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1558782163
Feature/mac m1 emul support. Added `platform` to the Docker Compose file for Apple Silicon M chips.

Will be part of the 0.2.0 release instead.
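For readers unfamiliar with the change: Compose lets you pin a service to an emulated architecture via the `platform` key. A hypothetical fragment (the service name and image here are illustrative, not taken from the facetorch repo):

```yaml
services:
  facetorch:                  # illustrative service name
    image: example/facetorch:latest
    platform: linux/amd64     # run the amd64 image under emulation on Apple Silicon
```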
gharchive/pull-request
2023-01-26T21:09:38
2025-04-01T06:46:01.820444
{ "authors": [ "tomas-gajarsky" ], "repo": "tomas-gajarsky/facetorch", "url": "https://github.com/tomas-gajarsky/facetorch/pull/31", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
728758824
RFC Change default logrotate settings for /var/mail/maillog Change default logrotate settings for /var/mail/maillog [x] I would like to contribute to the project (code, documentation, advocacy, integration, ...) Description At the moment, the log files /var/log/mail/mail.info /var/log/mail/mail.warn /var/log/mail/mail.err are rotated weekly, four times (see /etc/logrotate.d/rsyslog) On the other hand, /var/log/mail/mail.log is rotated daily by default and only once. So there are only two days back in the logs. To keep it uniform and avoid confusion about "missing" logs, I propose changing that also to an interval of 4. So someone can set LOGROTATE_INTERVAL to weekly and got the same amount of logs for mail.log as for the other mail.* logs. Even better would be, to change the default value of LOGROTATE_INTERVAL to weekly also. Possible impacts Changing default settings is always tricky. The only impact I see, is the increased usage of disk space for the additional logs. I like this. Streamlining logs is a good idea. Can you provide a PR? I'll review and merge it. What about changing the default value of LOGROTATE_INTERVAL to weekly? I can't see any negative impacts to be honest, therefore I see nothing wrong with it Hm, I responded to the PR before I noticed the issue. I generally don't like to change defaults but I do agree that the rotations should be streamlined. As I recall the reason for keeping one file for mail.log is that it is processed by the reports. Keeping one file means that the previous day (or week or month) is available for the summary, but nothing more. If that is changed we need to double-check that the reports still produce the expected results. When we are looking at this I have often wondered if mail.info really is useful. I certainly never use it. Perhaps it can be removed? Let's continue here for now ;-) Maybe @aendeavor can mark the PR as draft. I haven't found a way to do so. 
Second, why do you change the number of retained log files and (more specifically) why do you want to keep 7 days, 5 weeks and 12 months respectively?

- daily => rotate 7 times (1 week)
- weekly => rotate 5 times (1 month)
- monthly => rotate 12 times (1 year)

Initially I wanted to set it also to rotate 4 times. But I found it odd, when setting daily, to keep 4 days (why 4 and not the last week, 7?). Same goes for weekly: why 4 weeks and not keep the whole month (5)? That was the intention behind that. Another thing that came to my mind: why does LOGROTATE_INTERVAL only affect /var/log/mail.log and not the rest of the log files? I find it odd to have the possibility to change one rotate setting while the rest is hard-coded to 4. One variable to manage them all equally would be best imho. It was before my time, but I think the original idea was that mail.log is used for the pflogsumm report on rotation. The other logs are just kept for manual analysis. That is most likely why they are treated differently. The pflogsumm uses /var/log/mail/mail.log.1 and that should be the most recently rotated file, so I think it is safe to change the rotate setting to keep additional copies. Your suggestion is perhaps as good as any. Not sure if we need to parameterize it, most installations are probably happy with the defaults. What's the current status here? For now, I would only change the rotate amount from 1 to 4 for /var/log/mail/mail.log (then it's aligned with the other logs) and leave the rest unchanged. I very much agree :) Done with https://github.com/tomav/docker-mailserver/pull/1667
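The retention scheme proposed above would translate into logrotate stanzas along these lines (a hedged sketch - the path follows the docker-mailserver layout discussed in this thread, but the file the image actually generates may differ):

```conf
# LOGROTATE_INTERVAL=daily => keep 1 week
/var/log/mail/mail.log {
  daily
  rotate 7
  compress
  missingok
}

# LOGROTATE_INTERVAL=weekly would instead use:
#   weekly
#   rotate 5     # keep ~1 month
#
# LOGROTATE_INTERVAL=monthly would use:
#   monthly
#   rotate 12    # keep 1 year
```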
gharchive/issue
2020-10-24T10:21:44
2025-04-01T06:46:01.843938
{ "authors": [ "aendeavor", "casperklein", "erik-wramner" ], "repo": "tomav/docker-mailserver", "url": "https://github.com/tomav/docker-mailserver/issues/1666", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
153604030
Webmin Integration. Hi, just had this idea while working on the rainloop docker image: it would be awesome to have a web admin interface to control postfix and dovecot. But I guess this would have to be directly integrated in docker-mailserver and not as an extra container. What do you think? No, we don't want to integrate such tools in this image because we don't want to have a big and unmaintainable image. We want to be focused on the core mail service (smtp and associated filters + client protocols like imap). A webmail is a client from our point of view, because generally it uses imap (like outlook, mail.app, thunderbird) to connect to the server. Webmin can help beginners but it won't help users to understand how a server works and how it should be configured using config files. Personally, I started using linux with webmin and I did not learn a thing the first weeks. We'll be happy to have such tools in our ecosystem, and to recommend them in our pages if they work well, so let us know how you figured it out. Alright, no problem. It was just for ease of use. But I doubt I can integrate it in a separate container. Even using volumes? Maybe, but it will make things a lot more complicated. And I don't think I can do it if I don't create volumes on your docker-mailserver image. I created a Dockerfile with rainloop, check it out :) @matze19999 that would be easier with a link. I assume that your container administers docker-mailserver? https://github.com/matze19999/Postfix_RainLoopOnDocker Yeah :) Right, it doesn't administer docker-mailserver from the outside, but builds an adapted image with rainloop included. I have no problem with that of course, though I agree with Tom about not having it in this project. However, make sure you test it properly as it may conflict with how things are done. I assume that rainloop writes to the native Postfix configuration files and docker-mailserver also updates them from time to time.
You may end up losing configuration when one tool overwrites the changes done by the other tool. I've tested it and it doesn't touch the postfix configuration or cause any issues 👌🏽 Perhaps I missed your point. If it is just a webmail client, why don't you simply run it beside docker-mailserver as another container in the same compose file? That is much cleaner and you don't have to rebuild the image. In fact you should be able to use the official rainloop image (if there is one, couldn't find it) in that way without having to build anything. If it is a web-based admin tool then it has to modify the configuration or there is no point to it. If it does modify the configuration it may run into conflicts.
gharchive/issue
2016-05-07T16:39:44
2025-04-01T06:46:01.850266
{ "authors": [ "erik-wramner", "manthis", "matze19999", "tomav" ], "repo": "tomav/docker-mailserver", "url": "https://github.com/tomav/docker-mailserver/issues/178", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
190125975
Add support for dovecot-antispam plugin http://wiki2.dovecot.org/Plugins/Antispam The antispam plugin allows you to retrain the spam filter by simply moving emails in and out of the Spam folder. Can this be added to the container? Running the sa-learn process by hand or via cron is a little clumsy. Looks nice, but the software has not seen a release since 2013. Might look into it in February. @mathuin @tronicum could you suggest a PR for this feature? It would be very nice to have it. Why not use the successor with sieve? AntispamWithSieve It would be a nice opportunity to implement a global sieve filter as well... :) Sounds good, could you provide a PR? Ah, forgot about it. I'll have a look into it tomorrow. Is this still relevant? No activity since Feb 2018. And no response since August, closing.
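For reference, the AntispamWithSieve approach mentioned above is configured via Dovecot's imapsieve plugin. The usual shape, following the Dovecot wiki HowTo (abbreviated and untested here - paths and folder names are assumptions that depend on the installation), looks like:

```conf
plugin {
  sieve_plugins = sieve_imapsieve sieve_extprograms

  # Train as spam when a message is moved into the Junk folder
  imapsieve_mailbox1_name = Junk
  imapsieve_mailbox1_causes = COPY
  imapsieve_mailbox1_before = file:/usr/lib/dovecot/sieve/report-spam.sieve

  # Train as ham when a message is moved out of Junk
  imapsieve_mailbox2_name = *
  imapsieve_mailbox2_from = Junk
  imapsieve_mailbox2_causes = COPY
  imapsieve_mailbox2_before = file:/usr/lib/dovecot/sieve/report-ham.sieve

  sieve_pipe_bin_dir = /usr/lib/dovecot/sieve
  sieve_global_extensions = +vnd.dovecot.pipe
}
```

The two sieve scripts then pipe the message to a shell wrapper around `sa-learn --spam` / `sa-learn --ham`, which replaces the manual cron-driven retraining the issue complains about.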
gharchive/issue
2016-11-17T18:28:10
2025-04-01T06:46:01.854293
{ "authors": [ "17Halbe", "erik-wramner", "johansmitsnl", "mathuin", "tronicum" ], "repo": "tomav/docker-mailserver", "url": "https://github.com/tomav/docker-mailserver/issues/384", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
56818047
empty result for example? Hi, I installed and created a file example.js with the example code in it:

```js
var microdata = require('node-microdata-scraper');
var url = 'https://raw.github.com/mhausenblas/schema-org-rdf/master/examples/Thing/Product/Product.microdata';

microdata.parseUrl(url, function(err, json) {
  if (!err && json) {
    console.log("hello world");
    console.log(json);
  }
});
```

When I run `node example.js` all I get is an empty array: `[]`. I checked the URL from the example, which is still online. Also, I can see my own `console.log("hello world")` in the callback function. Am I doing something wrong here? Cheers, Johannes

Hi @jhercher, let me know if it works now.

Hi tomav, now I get an error:

```
$ node example.js
module.js:340
    throw err;
    ^
Error: Cannot find module 'node-microdata-scraper'
    at Function.Module._resolveFilename (module.js:338:15)
    at Function.Module._load (module.js:280:25)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at Object.<anonymous> (/Volumes/#/#/#/node-microdata-scraper/example.js:1:79)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Function.Module.runMain (module.js:497:10)
```

node-microdata-scraper is missing, as you can see in the error message. Try `npm install node-microdata-scraper` before you retry.

Ah, thanks! It's working now.
gharchive/issue
2015-02-06T14:30:55
2025-04-01T06:46:01.857916
{ "authors": [ "jhercher", "tomav" ], "repo": "tomav/node-microdata-scraper", "url": "https://github.com/tomav/node-microdata-scraper/issues/3", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
132360815
fix(olHelper): ratio support for ImageWMS layers. Included the ratio configuration option from the ol.source.ImageWMS interface. Fixes #236 @hjaekel Thx for your PR. I've seen some tests fail in Travis. Could you please check and, if needed, fix them? Then I'm fine merging your PR in. @hjaekel merged it in, thx for contributing!! :smile:
gharchive/pull-request
2016-02-09T08:43:24
2025-04-01T06:46:01.859447
{ "authors": [ "hjaekel", "juristr" ], "repo": "tombatossals/angular-openlayers-directive", "url": "https://github.com/tombatossals/angular-openlayers-directive/pull/237", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
72588303
PHP Fatal error: Maximum function nesting level of '100' reached, aborting! Increase xdebug.max_nesting_level. Done @ 23dc783d1c44e10deda86157c7c6c534a8cfac63.
gharchive/issue
2015-05-02T05:18:39
2025-04-01T06:46:01.927974
{ "authors": [ "tomzx" ], "repo": "tomzx/php-semver-checker", "url": "https://github.com/tomzx/php-semver-checker/issues/58", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1674475252
Propose Architecture and Probe <-> Server API I put this into the README for no other place seemed better and so that we can have a PR over this. It's written in a definitive style for brevity but, please, read it as a proposal. I'll raise some questions/caveats below. @PabloMansanet Do you want to take a quick look? We might want to move on and start experimenting / implementation. Sorry, I totally missed this because I'm not getting slack notifications 😅 I thought we had registered this repository with the bot! Please go ahead with experimentation. I'm about to leave now but I'll have more thoughts tomorrow. Looking good as a first approach though!
gharchive/pull-request
2023-04-19T08:55:56
2025-04-01T06:46:01.929725
{ "authors": [ "PabloMansanet", "goodhoko", "skywhale" ], "repo": "tonarino/acoustic_profiler", "url": "https://github.com/tonarino/acoustic_profiler/pull/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2512206505
TON INTERNAL WALLET SEND TO JETTON. Hi, I sent my staked TON to my wallet, but it ended up at a jetton wallet. Can anyone please help me? I have attached all the details. @Ihramraz Thanks for posting the issue. Your query would be best dealt with by the support team. Please see the link below to our dedicated support line: Support: Help Center Note: Click on the live chat icon at the bottom corner of the page to start a conversation
gharchive/issue
2024-09-08T05:27:24
2025-04-01T06:46:01.934570
{ "authors": [ "Ihramraz", "Khan-arsh" ], "repo": "toncenter/tonweb", "url": "https://github.com/toncenter/tonweb/issues/259", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2540450652
BlockingIOError: [Errno 11] write could not complete without blocking

```
Traceback (most recent call last):
  File "/home/draeician/git/gogoanime/printer/printer.py", line 76, in render
  File "/home/draeician/git/gogoanime/printer/printer.py", line 112, in __render
  File "/home/draeician/git/gogoanime/printer/console/linux_console.py", line 1176, in set_text
BlockingIOError: [Errno 11] write could not complete without blocking

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.12/threading.py", line 1075, in _bootstrap_inner
  File "/usr/lib/python3.12/threading.py", line 1012, in run
  File "/home/draeician/git/gogoanime/printer/printer.py", line 82, in render
  File "/home/draeician/git/gogoanime/utils/debugging.py", line 6, in debug_log
OSError: [Errno 24] Too many open files: 'debug.log'
```

Issue comes up with large series, for example boruto-naruto-next-generations-dub has over 200 episodes.

Should be fixed by 2fe88fbfc10f0fc0119920f1626e3bc0baf9c652
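The `OSError: [Errno 24] Too many open files` in the second traceback typically means `debug_log` opened the file on every call without closing it. A common fix (a generic sketch, not the project's actual patch) is to configure a single shared handler once and reuse it:

```python
import logging

# Configure once at startup: a single file handle is reused for every
# message, instead of open('debug.log') leaking a descriptor per call.
logger = logging.getLogger("debug")
logger.setLevel(logging.DEBUG)
handler = logging.FileHandler("debug.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
logger.addHandler(handler)

def debug_log(message):
    logger.debug(message)
```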
gharchive/issue
2024-09-21T18:19:01
2025-04-01T06:46:01.938097
{ "authors": [ "draeician", "tonder0812" ], "repo": "tonder0812/gogoanime", "url": "https://github.com/tonder0812/gogoanime/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
521071069
AWS region not being honored Checklist [x] Upgrade Jets: Are you using the latest version of Jets? This allows Jets to fix issues fast. There's a jets upgrade command that makes this a simple task. There's also an Upgrading Guide: http://rubyonjets.com/docs/upgrading/ [x ] Reproducibility: Are you reporting a bug others will be able to reproduce and not asking a question. If you're unsure or want to ask a question, do so on https://community.rubyonjets.com [x] Code sample: Have you put together a code sample to reproduce the issue and make it available? Code samples help speed up fixes dramatically. If it's an easily reproducible issue, then code samples are not needed. If you're unsure, please include a code sample. My Environment Software Version Operating System OSX Jets 2.3.4 Ruby 2.5.7 Expected Behaviour The developer guide / deploy guides should show you how to honor the region in AWS https://rubyonjets.com/quick-start/ https://rubyonjets.com/reference/jets-deploy/ Current Behavior If you follow the developer guide, you will see that the jets defaults to us-east-1 instead of what is defined in the ~/.aws/config file like the jets-deploy says it should. Step-by-step reproduction instructions Pre-test Make sure that the aws cli app is NOT installed or at least not in your PATH. You can verify this git:(master) ✗ aws configure get region zsh: command not found: aws Test Setup the ~/.aws/config as described in the docs Run the deploy command. It will use us-east-1 Code Sample You can see if the aws cli isn't installed it skips over the config: https://github.com/tongueroo/jets/blob/112d9ff4f91c4dafc160d63371f15aaaafe0ec89/lib/jets/aws_info.rb#L17 Solution Suggestion Either update the docs to say the CLI is required or don't call the aws cli tool (preferred, b/c you don't need to start another process just to read from a text file). What should be the expected behaviour if there is no AWS CLI installed and no ~/.aws/config is found? 
Should the deployment fail, or just go ahead and deploy on us-east-1? Maybe we can deploy it on us-east-1 and then display a message to notify the user about this? I don't have a strong opinion about this. My problem is that the docs say to add ~/.aws/config but then don't honor what is in the file if the cli isn't installed. This doesn't make sense because the docs don't say the AWS client is required. I think you should either: require the AWS client to be installed, or use Ruby to directly read from that file instead of starting up another process (making your deploy slower). I have just one problem with the second option: are there any other parts of Jets which also depend on the AWS CLI being installed (maybe @tongueroo can chime in)? Because if so, then updating the docs to require the AWS CLI makes more sense. Either way we can make a small PR on it. I would update the docs and the code: https://github.com/tongueroo/jets/blob/112d9ff4f91c4dafc160d63371f15aaaafe0ec89/lib/jets/aws_info.rb#L17 This should be throwing an error if the client isn't present. It seems like the intention was for it to be optional (based on the code and preceding comment).

RE: AWS region not being honored

Dug into this. Unable to reproduce. Was able to deploy to different regions with different aws profiles in ~/.aws/config:

```
$ AWS_PROFILE=tung-west aws configure get region
us-west-2
$ AWS_PROFILE=tung-east aws configure get region
us-east-1
$
$ AWS_PROFILE=tung-east jets deploy
...
$ AWS_PROFILE=tung-east jets url
API Gateway Endpoint: https://0m17ffdv9l.execute-api.us-east-1.amazonaws.com/dev
$ AWS_PROFILE=tung-west jets deploy
...
$ AWS_PROFILE=tung-west jets url
API Gateway Endpoint: https://ui6ce467hc.execute-api.us-west-2.amazonaws.com/dev
$
$ AWS_PROFILE=tung-east aws apigateway get-rest-apis
{
    "items": [
        {
            "id": "0m17ffdv9l",
            "name": "demo-dev",
            "createdDate": 1577135079,
            "binaryMediaTypes": [
                "multipart/form-data"
            ],
            "apiKeySource": "HEADER",
            "endpointConfiguration": {
                "types": [
                    "EDGE"
                ]
            }
        }
    ]
}
$ AWS_PROFILE=tung-west aws apigateway get-rest-apis
{
    "items": [
        {
            "id": "ui6ce467hc",
            "name": "demo-dev",
            "createdDate": 1577134926,
            "binaryMediaTypes": [
                "multipart/form-data"
            ],
            "apiKeySource": "HEADER",
            "endpointConfiguration": {
                "types": [
                    "EDGE"
                ]
            }
        }
    ]
}
$
```

Details of jets deploy: https://gist.github.com/tongueroo/7b244c11c27ec882c9bb17c9d73af00f

Wondering if AWS_REGION might be set? When it is set, it takes higher precedence than what's in ~/.aws/config: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#config-settings-and-precedence So then it would always deploy to the same region, regardless of what's in ~/.aws/config. It's actually a useful way to deploy to multiple regions without having to create another AWS_PROFILE. IE:

```
AWS_REGION=us-east-1 jets deploy
AWS_REGION=us-west-2 jets deploy
```

RE: require the AWS client to be installed

Yes. There are places where Jets shells out to the aws cli. Would like to make it optional. Thought it was working. Unsure if it's worth the time. Think will just update the docs to say install it to move on.

RE: Use ruby to directly read from that file instead of starting up another process (making your deploy slower).

Sure. Not shelling out to the aws cli is preferable. Will consider a PR for it. Tip: Used a2ikm/aws_config in the tongueroo/aws-mfa-secure gem to achieve this there. So that's how it would be done. But in the aws-mfa-secure gem itself, it shells out as a trick to grab temporary credentials because the gem itself overrides aws-sdk-core credentials logic to achieve what it needs to do.
FWIW, the aws-mfa-secure decoration logic only triggers if you actually configure your ~/.aws/config to use it.

Both should probably be updated so that they don't shell out to the aws cli at all. Believe Jets is the easier one to update actually. Will consider PRs for both.

@tongueroo, your output shows:

```
$ AWS_PROFILE=tung-west aws configure get region
us-west-2
```

This should be returning `command not found: aws`. Once you have verified that the aws cli is NOT installed, try running `jets deploy` and see which region it deploys to.

Oh I see. So the aws cli is not installed and Jets doesn't respect the region because it fails on shelling out. Makes sense. Cool, will just add a note to make sure the aws cli is installed.

Why not update the code too?

Sure. Will consider PRs. Otherwise will get to it in time. No sweat either way.
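The "read the file directly" suggestion in this thread is straightforward since ~/.aws/config is INI-formatted. A sketch of the lookup order - AWS_REGION environment variable first, then the profile's section - shown in Python for illustration (Jets itself would do this in Ruby, e.g. with the a2ikm/aws_config gem mentioned above):

```python
import configparser
import os

def resolve_region(config_text, profile="default"):
    """Resolve the AWS region the way the CLI does: AWS_REGION wins,
    then ~/.aws/config. Non-default profiles live in '[profile NAME]'
    sections of the config file."""
    if os.environ.get("AWS_REGION"):
        return os.environ["AWS_REGION"]
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    section = profile if profile == "default" else "profile " + profile
    if parser.has_option(section, "region"):
        return parser.get(section, "region")
    return None  # caller decides: error out, or fall back to us-east-1

sample = """
[default]
region = us-east-1

[profile tung-west]
region = us-west-2
"""
```

No subprocess is started, so the aws cli no longer needs to be installed for the region lookup to work.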
gharchive/issue
2019-11-11T16:47:56
2025-04-01T06:46:01.960396
{ "authors": [ "KevinColemanInc", "gokaykucuk", "tongueroo" ], "repo": "tongueroo/jets", "url": "https://github.com/tongueroo/jets/issues/403", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
851193175
check_extra7102 - missing public IPs

Hi, check_extra7102 in line 36 tries to get all public IPs. However, it is only getting the elastic IPs. This is also documented in the used describe-addresses reference. The solution is to use describe-network-interfaces instead and filter for the public IPs. I usually use the following command:

```
aws ec2 describe-network-interfaces --query "NetworkInterfaces[*].Association.PublicIp"
```

Hi @as-km, thanks for your comment. I have been looking at your command and the one that is used in Prowler and I see the following, as you mention:

- In Prowler: `aws ec2 describe-addresses --query 'Addresses[*].PublicIp'`. In the describe-addresses doc: "Describes the specified Elastic IP addresses or all of your Elastic IP addresses." (Console: EC2 / Network & Security / Elastic IPs). I see that this includes all Elastic IPs, including those not associated.
- Your suggestion: `aws ec2 describe-network-interfaces --query "NetworkInterfaces[*].Association.PublicIp"`. In the describe-network-interfaces doc: "Describes one or more of your network interfaces." (Console: EC2 / Network & Security / Network Interfaces). I see that this includes all Elastic and Public IPs that are associated.

That means that usage of describe-network-interfaces may make Prowler more accurate for used interfaces. Would you mind sending a fix to branch 2.4? If you can't, please let me know. Thanks again!
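The difference between the two queries can be seen by filtering the describe-network-interfaces response shape directly. A sketch with illustrative data (the response shape follows the EC2 API; the IDs and addresses are made up):

```python
def public_ips(response):
    """Equivalent of --query "NetworkInterfaces[*].Association.PublicIp":
    collect the public IP of every ENI that actually has an association,
    which covers both Elastic IPs and auto-assigned public IPs in use."""
    ips = []
    for eni in response.get("NetworkInterfaces", []):
        association = eni.get("Association")
        if association and "PublicIp" in association:
            ips.append(association["PublicIp"])
    return ips

sample_response = {
    "NetworkInterfaces": [
        {"NetworkInterfaceId": "eni-1", "Association": {"PublicIp": "203.0.113.10"}},
        {"NetworkInterfaceId": "eni-2"},  # private-only interface, no Association
    ]
}
```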
gharchive/issue
2021-04-06T08:42:32
2025-04-01T06:46:01.964741
{ "authors": [ "as-km", "toniblyx" ], "repo": "toniblyx/prowler", "url": "https://github.com/toniblyx/prowler/issues/768", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
128404866
Can't give multiple configuration files on command line

Hello, I am trying to load several configuration files at once:

```
tmuxp load wiki-canada.yaml wiki-grapher.yaml
```

However, tmuxp seems to concatenate them:

```
(E) [16:01:33] tmuxp.cli cli.command_load():421 wiki-canada.yaml wiki-grapher.yaml not found.
```

Is this a bug or a feature?

Neither, it's functionality that doesn't exist yet. Interested in making a PR? If not I can swing around to it.

@madprog shipped. 0.10.0 is on PyPI.

Thank you!
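Supporting multiple files is mostly a CLI-parsing change: accept a variadic argument and loop over it, instead of treating `wiki-canada.yaml wiki-grapher.yaml` as a single filename. A generic sketch (using Python's stdlib argparse for illustration; tmuxp's actual CLI is built on click):

```python
import argparse

def parse_load_args(argv):
    """Accept one or more config files, e.g. `tmuxp load a.yaml b.yaml`,
    so each file can be loaded in turn rather than being concatenated
    into one nonexistent filename."""
    parser = argparse.ArgumentParser(prog="tmuxp-load-sketch")
    parser.add_argument("configs", nargs="+")
    return parser.parse_args(argv).configs
```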
gharchive/issue
2016-01-24T15:31:19
2025-04-01T06:46:01.974645
{ "authors": [ "madprog", "tony" ], "repo": "tony/tmuxp", "url": "https://github.com/tony/tmuxp/issues/133", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
284340902
Protobuf module conflicts with exprotobuf

The top-level module name Protobuf is also the top-level module for exprotobuf (https://github.com/bitwalker/exprotobuf/blob/master/lib/exprotobuf.ex). This makes it difficult to use protobuf-elixir in projects that also have a secondary dependency on exprotobuf. I know it's a lot to ask to rename it, but I wanted to bring it up here. It's likely that others have or will run into this also. In my case I'm unable to use https://github.com/peburrows/diplomat because of its dependency on exprotobuf. I had a similar problem, but based on how libs are named on hex and how .ex files are named, the problem is that exprotobuf is using the wrong namespace for its modules. It's hard to solve this problem for the moment because Erlang/Elixir doesn't support features like renaming modules on import. I found a similar issue: https://github.com/elixir-lang/elixir/issues/5232
gharchive/issue
2017-12-24T03:54:59
2025-04-01T06:46:01.977527
{ "authors": [ "amatalai", "cjab", "tony612" ], "repo": "tony612/protobuf-elixir", "url": "https://github.com/tony612/protobuf-elixir/issues/21", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
541809261
Add Swift Package Manager support (Fixes #176) Fix #176 by creating Package.swift. Until the next version is tagged, you will need to specify the commit instead. @tonymillion, please merge or comment on some of these PRs, this is a useful repository. Not ready to merge - I messed up
gharchive/pull-request
2019-12-23T15:53:43
2025-04-01T06:46:01.981132
{ "authors": [ "turtlemaster19" ], "repo": "tonymillion/Reachability", "url": "https://github.com/tonymillion/Reachability/pull/177", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
1264121987
[ADD] wantsTurboFrame for GET requests

I'm doing a search function and would like to submit the form as GET so that it updates the URL as well. Since there is a wantsTurboStream that examines the Accept header, does it make sense to add a wantsTurboFrame that examines for the Turbo-Frame header? This way, the backend will know to send a partial instead of the full render.

Please ignore the above, it appears the Turbo middleware is already handling Turbo Frames.

You can detect if the request was made targeting a Turbo Frame by checking if there's a Turbo-Frame header in the request (Turbo adds that). Something like this:

```php
if (request()->headers->has('Turbo-Frame')) {
    // handle differently...
}
```
gharchive/issue
2022-06-08T03:00:32
2025-04-01T06:46:01.986033
{ "authors": [ "bilogic", "tonysm" ], "repo": "tonysm/turbo-laravel", "url": "https://github.com/tonysm/turbo-laravel/issues/69", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
183342303
Try HTML5 placeholder Polyfill https://github.com/ginader/HTML5-placeholder-polyfill Internet Explorer 9 and lower - really? Even Microsoft stopped supporting them, and according to caniuse.com IE 8 + IE 9 together are 0.5% of global users. However, I don't know much about mobile browser versions - are iOS Safari < 4.0 or Android Browser < 2.0 still commonly in use? @Julix91 I created this issue as a "note" actually. You're right - no need to support old browsers today! It's better to focus on new browsers while keeping in mind some working fallback for older ones.
gharchive/issue
2016-10-17T07:16:31
2025-04-01T06:46:01.988480
{ "authors": [ "Julix91", "tonystar" ], "repo": "tonystar/float-label-css", "url": "https://github.com/tonystar/float-label-css/issues/9", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
59844197
Promoted search results label missing padding Minor style tweak but the promoted search results label needs some padding or the overall settings fly-out width needs increasing. Already fixed in February https://github.com/torchbox/wagtail/commit/e5e79d2e2b2f64e88660d485fb40a3e0eb200895
gharchive/issue
2015-03-04T19:01:12
2025-04-01T06:46:02.073102
{ "authors": [ "davecranwell", "tmsndrs" ], "repo": "torchbox/wagtail", "url": "https://github.com/torchbox/wagtail/issues/1040", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
117581263
Cannot overwrite image attributes in template with empty values For certain images in my templates I want to set an empty alt attribute. According to the documentation this is done with {% image obj.image alt="" %} But this does not work, and I think it should. If I overwrite an attribute this way, wagtail should not make any checks of my values and just accept them. This might be related to the use of a mutable dict default for the attrs kwarg of ImageNode: https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailimages/templatetags/wagtailimages_tags.py#L34 This should probably be None with a check in the init to set it to an empty dict if a dict isn't passed in. The problem is in the implementation of Rendition.img_tag https://github.com/torchbox/wagtail/blob/0f0310858b015a62fcaf342108b76ea038b5bd45/wagtail/wagtailimages/models.py#L448 - it's only designed to support passing in extra attributes, not replacing the default ones (and the documentation is consistent with this). The ability to override the default attributes would be a reasonable thing to add, though. (@nealtodd, the use of attrs={} in the tag is harmless, because that property never gets altered anywhere in the ImageNode implementation. I admit it did make me go 'eww' on first glance though :-) ) Actually, that might not be the cause (but should be addressed anyway, and I think there are some other uses of mutable defaults elsewhere. PR coming up for those). I may be tracing through the code path wrongly, but it looks like any attributes matching the image's own ones don't replace them, they get added as well, with the image's own ones coming first (and the browser parsing meaning they win out): https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailimages/models.py#L451 Snap! :) In Chrome Dev Tools one would not see the duplicate attributes – so I checked with wget, and actually the overwritten attribute appears twice in the HTML. Chrome is apparently always taking the first attribute if it's set several times in HTML, no matter if the latter ones are empty or have a value! Hence overwriting attributes should not work at all (at least in Chrome?), and this should be fixed. Django contains a very useful and very poorly documented utility called flatatt that is useful in this case, which I've used to solve this issue in #1938
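The fix discussed here (letting caller-supplied attributes replace the defaults instead of being emitted alongside them) boils down to merging the two dicts before serializing. A rough sketch of that behaviour in Python; `build_attrs` is illustrative, not Wagtail's actual implementation:

```python
def build_attrs(defaults: dict, overrides: dict) -> str:
    # Overrides replace defaults instead of being emitted alongside them,
    # so alt="" really produces a single, empty alt attribute.
    merged = {**defaults, **overrides}
    return " ".join(f'{k}="{v}"' for k, v in sorted(merged.items()))

# The template-supplied alt="" wins over the image's own default alt text.
tag = build_attrs({"alt": "photo", "src": "a.jpg"}, {"alt": ""})
```

With the merge, `alt=""` yields exactly one (empty) alt attribute, so Chrome's first-attribute-wins parsing no longer matters.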
gharchive/issue
2015-11-18T12:58:20
2025-04-01T06:46:02.079396
{ "authors": [ "andycrafft", "gasman", "nealtodd", "timheap" ], "repo": "torchbox/wagtail", "url": "https://github.com/torchbox/wagtail/issues/1933", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1339867132
Updating torch.range to torch.arange torch.range is now deprecated and should be replaced with torch.arange, which is consistent with Python's built-in range. torch.range also was producing errors with high-valued ranges, see #377 Yeah but what about older versions of pytorch? Do we break them? We should run a full Jenkins build. Since when is it deprecated? Do you have a link? @turian torch.range has been deprecated in favour of torch.arange since torch v0.1 https://github.com/pytorch/pytorch/releases/tag/v0.1.12 -- so we should be good for backward compatibility :) Okay. Why are tests still breaking? Tests were breaking because the deprecated function call Trainer.test(test_dataloader=) was removed from Lightning (formerly PTL) in version 1.6, see https://github.com/Lightning-AI/lightning/pull/10325. test_dataloader was deprecated in v1.4.0, see https://github.com/Lightning-AI/lightning/pull/7431. Replacing Trainer.test(test_dataloader=) with Trainer.test(dataloader=) will work for ptl>=1.4
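For readers unsure what actually changed: torch.range included its end point, while torch.arange is end-exclusive like Python's built-in range. A dependency-free sketch of the difference (`legacy_range` is just a stand-in for the deprecated behaviour, not real torch code):

```python
def legacy_range(start, end, step=1):
    """Stand-in for the deprecated torch.range: the end point is included."""
    out, x = [], start
    while x <= end:
        out.append(x)
        x += step
    return out

# torch.arange(0, 5) matches Python's built-in range: end-exclusive.
arange_like = list(range(0, 5))   # [0, 1, 2, 3, 4]
range_like = legacy_range(0, 5)   # [0, 1, 2, 3, 4, 5] -- one extra element
```

The off-by-one between the two is exactly why a drop-in swap is safe only after checking whether any call site relied on the inclusive end point.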
gharchive/pull-request
2022-08-16T06:17:59
2025-04-01T06:46:02.087420
{ "authors": [ "jorshi", "turian" ], "repo": "torchsynth/torchsynth", "url": "https://github.com/torchsynth/torchsynth/pull/378", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
521954798
:bug: PG pool error listeners causing "MaxListenersExceededWarning" node warning My recent commit "Added error handling to connection pools for PostgreSQL" introduced a potential memory leak where the pool would accumulate an error event listener each time it is used. This resulted in a node warning in the console like (node:19396) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 22 error listeners added to [Pool]. Use emitter.setMaxListeners() to increase limit Removed the error listeners from the individual pool-using code and added the listener to the pool creation function, which now checks whether an error listener is already present before adding another to the pool. This should prevent too many error listeners being added to the pool. Really sorry that I introduced a bug like this. But I think it was found and fixed pretty quickly. nervous grin Thank you! Good fix
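The guard described above (attach the error listener once, at pool creation, and only if none is present) can be sketched with a toy emitter. node-sqlagent itself is JavaScript; this Python sketch only illustrates the pattern:

```python
class Emitter:
    """Tiny stand-in for Node's EventEmitter, just enough for the demo."""

    def __init__(self):
        self.listeners = {}

    def on(self, event, fn):
        self.listeners.setdefault(event, []).append(fn)

    def listener_count(self, event):
        return len(self.listeners.get(event, []))


def ensure_error_listener(pool: Emitter, handler) -> None:
    # Only attach the handler if the pool has no error listener yet,
    # so repeated pool lookups cannot accumulate duplicates.
    if pool.listener_count("error") == 0:
        pool.on("error", handler)


pool = Emitter()
for _ in range(25):  # simulate 25 uses of the pool
    ensure_error_listener(pool, lambda err: None)
```

Without the `listener_count` check, the loop above would register 25 handlers, which is exactly the situation Node's MaxListenersExceededWarning flags.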
gharchive/pull-request
2019-11-13T04:38:18
2025-04-01T06:46:02.175994
{ "authors": [ "Aidan-Chey", "petersirka" ], "repo": "totaljs/node-sqlagent", "url": "https://github.com/totaljs/node-sqlagent/pull/41", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1698355431
Failed to add key I always see this error; will there be a way to fix it? I have the same issue. Same here. I even put in my own 12pb key and nope. I HAVE FIXED THE ISSUE. Just DM me on Discord for how to fix it, it's too complicated to say here: Keeboy99#2878 You need to add your own base_keys and everything will work. https://github.com/totoroterror/warp-cloner#configuration You need to add your own base_keys and everything will work. https://github.com/totoroterror/warp-cloner#configuration How?
gharchive/issue
2023-05-06T01:11:39
2025-04-01T06:46:02.181420
{ "authors": [ "Fortress937", "Keeboy99", "danaenayati", "totoroterror" ], "repo": "totoroterror/warp-cloner", "url": "https://github.com/totoroterror/warp-cloner/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2393685728
🛑 MSP-HIT is down In dbf7dbd, MSP-HIT (https://hit.hanati.co.kr/) was down: HTTP code: 0 Response time: 0 ms Resolved: MSP-HIT is back up in 2b88979 after 8 minutes.
gharchive/issue
2024-07-06T19:35:01
2025-04-01T06:46:02.214911
{ "authors": [ "touguy" ], "repo": "touguy/uptime", "url": "https://github.com/touguy/uptime/issues/567", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
342393484
Client RPCs are not type safe The generated client stubs take a generic grpc::Encodable rather than the concrete expected Prost type. Is there a reason for that? The way it works right now, those stubs accept any Prost Protobuf message, which can lead to subtle and difficult-to-track-down bugs like #16. To me it seems like the stubs should be changed to only accept their expected type. Sorry about the lack of clarity. Also, I was wrong: it's not all client RPCs that suffer from lack of type safety. It does happen for client streaming RPCs, though. For an example of how to "reproduce" this error, open tower-grpc-interop/src/client.rs, go to the Testcase::client_streaming match branch and replace the util::client_payload(27182)s that are the elements of the stream to send to the server with pb::Empty {}. This should not only make the test fail (on current master it already fails, see #71, #16), it shouldn't even compile because you're passing the wrong type to the client streaming RPC. But it compiles fine, it just silently sends the wrong Protobuf type over the wire. If this were type safe, #16 would have been caught by the compiler and no Wireshark debugging would have been necessary.
gharchive/issue
2018-07-18T16:02:05
2025-04-01T06:46:02.225620
{ "authors": [ "per-gron" ], "repo": "tower-rs/tower-grpc", "url": "https://github.com/tower-rs/tower-grpc/issues/73", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
984363259
Card: Fate Dispenser, Ol'Howard Bigger change containing handling of Improvement cards and correctly setting controllers for attachments. In this PR, Improvements should be properly handled. Also, the controller for all attachments should be correctly set to the controller of the parent, and Totems' triggering set to anyone's Shaman. The only Improvement implemented in this PR is Fate Dispenser; Totems for testing: Mother Bear's rage or Spirit Trail
gharchive/pull-request
2021-08-31T22:02:58
2025-04-01T06:46:02.236634
{ "authors": [ "mmeldo" ], "repo": "townteki/townsquare", "url": "https://github.com/townteki/townsquare/pull/881", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2253512636
Some attributes have the wrong data types Hello, after setting up the adapter the following entries appear in the log, and it continues like this through the night. Thanks in advance. GitHub should not be used for support requests; that is what the forum is for. Point 1: check the min and max values and the current value in the ems-esp customizations, and change the values if necessary. Point 2: I cannot reproduce this. Stop the instance, delete the object structure, and restart. Point 1: These are all default values coming from the heating system. The min/max values for "burnmaxpower" were assigned incorrectly, but the other values are all strings. Point 2: When I delete the object structure and restart, it looks like the first screenshot. Of course I have already tried that. Here is a fresh log file: iobroker.2024-04-19.log, taken after deleting the object structure. Did you set the attributes in the ems-esp settings correctly, following the documentation? No, this is the first adapter I've had that has a problem with data types. For clarity and processing, string/boolean are better anyway. To use the adapter you have to set the Formatting Options as described (just like every other user). The API output depends on the configured language; why else did I write the documentation?! I will add a check in the next version.
gharchive/issue
2024-04-19T17:25:43
2025-04-01T06:46:02.253637
{ "authors": [ "mattreim", "tp1de" ], "repo": "tp1de/ioBroker.ems-esp", "url": "https://github.com/tp1de/ioBroker.ems-esp/issues/56", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2274519683
Clarify that the initial value is meaningless when receiving input with cin In the code that multiplies the input value by 10, the variable x is overwritten by cin >> x, but since x's initial value is 10, I thought some people might assume it is related to the "times 10", so I added a comment. This might be slightly related to "declaration and assignment" in #18 (because if we teach declaration only, the assignment part of this code could simply be removed). How about changing the part that says "Using the cin command, you can read input from the console." to "Using the cin command, you can read input from the console and assign it to a variable", to make it explicit that an assignment takes place? How about changing the part that says "Using the cin command, you can read input from the console." to "Using the cin command, you can read input from the console and assign it to a variable", to make it explicit that an assignment takes place? ↑ I think doing that would be fine too (it's a bit of a hassle, but please open another PR for it...)
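The point being documented translates directly to other languages: an initial value that the input statement immediately overwrites has no effect on the result. A rough Python analog of the C++ lesson code (reading from a literal instead of the console):

```python
x = 10        # initial value: NOT related to the "times 10" below,
              # it is discarded by the next assignment
x = int("7")  # analogous to `cin >> x`: overwrites whatever x held
result = x * 10
```

Whatever `x` starts as, the answer depends only on the input, which is exactly what the proposed comment makes explicit.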
gharchive/pull-request
2024-05-02T03:27:53
2025-04-01T06:46:02.362521
{ "authors": [ "ErrorSyntax1", "Takeno-hito", "ZOI-dayo" ], "repo": "traP-jp/pg-basic", "url": "https://github.com/traP-jp/pg-basic/pull/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1678271922
Update trace4cats-fs2,fs2-kafka Updates [io.janstenpickle:trace4cats-fs2](https://github.com/trace4cats/trace4cats) [io.janstenpickle:trace4cats-testkit](https://github.com/trace4cats/trace4cats) [com.github.fd4s:fs2-kafka](https://github.com/fd4s/fs2-kafka) Any chance to get this merged and released soon?
gharchive/pull-request
2023-04-21T10:16:01
2025-04-01T06:46:02.375353
{ "authors": [ "fpeyron", "shagoon" ], "repo": "trace4cats/trace4cats-kafka", "url": "https://github.com/trace4cats/trace4cats-kafka/pull/139", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
206472392
Create hr.yml Added a Croatian translation. Thank you :D
gharchive/pull-request
2017-02-09T11:06:32
2025-04-01T06:46:02.386218
{ "authors": [ "hdpero", "tractorcow" ], "repo": "tractorcow/silverstripe-fluent", "url": "https://github.com/tractorcow/silverstripe-fluent/pull/249", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
2101137232
0xCa8EBfB8e1460Aaac7c272CB9053B3D42412AAc2 add GAU logo
gharchive/pull-request
2024-01-25T20:43:23
2025-04-01T06:46:02.386959
{ "authors": [ "Kerimsaid" ], "repo": "traderjoe-xyz/joe-tokenlists", "url": "https://github.com/traderjoe-xyz/joe-tokenlists/pull/1118", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1172427714
Slippage and MEV prevention in trades Create a function that calculates amountIn and amountOut values for buy/sell with a specific slippage. See this question for details: https://ethereum.stackexchange.com/questions/99404/pancakeswap-anti-front-run First for Uniswap v2, as it's a dog-eat-dog frontrun battle on BNB Chain. See also https://tradingstrategy.ai/docs/glossary.html#term-0 Can you make this kind of unit test: prepare a trade where I set the max slippage to 1%, making the signed transaction ready; then trade in the pool so that the price moves more than 1%; now try to broadcast the transaction so that it reverts (there would be too much slippage, so the trade is not allowed to execute). As a bonus, the trade analyzer should be able to tell if the trade was reverted because of slippage. Though I'm not sure how we can pick this up from the transaction receipt; I believe it is going to be quite hard. The "real" EVM nodes do not store the revert reason for a very long time (a few hundred blocks) and one might need to replay the transaction. https://snakecharmers.ethereum.org/web3py-revert-reason-parsing/
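A minimal sketch of the requested calculation, using integer basis points (on-chain token amounts are integers, and integer math avoids float rounding); the function names are illustrative, not the eventual library API:

```python
def amount_out_min(quoted_out: int, slippage_bps: int) -> int:
    """Smallest output accepted for a sell: quote reduced by the tolerance.

    This value is what gets passed as amountOutMin to a Uniswap v2 style
    router, so the swap reverts if the pool price moved more than allowed.
    """
    return quoted_out * (10_000 - slippage_bps) // 10_000


def amount_in_max(quoted_in: int, slippage_bps: int) -> int:
    """Largest input we are willing to spend for a buy."""
    return quoted_in * (10_000 + slippage_bps) // 10_000


min_out = amount_out_min(1_000_000, 100)  # 1% tolerance = 100 basis points
max_in = amount_in_max(1_000_000, 100)
```

With a 1% tolerance on a 1,000,000-unit quote, the router would be told to accept no less than 990,000 out (or pay no more than 1,010,000 in), which is what makes the frontrun-moved trade revert instead of executing at a bad price.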
gharchive/issue
2022-03-17T14:13:49
2025-04-01T06:46:02.390718
{ "authors": [ "miohtama" ], "repo": "tradingstrategy-ai/eth-hentai", "url": "https://github.com/tradingstrategy-ai/eth-hentai/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
978512715
Histogram Series overlap and hide smaller values (regression in 3.5.0) Lightweight Charts Version: 3.5.0 It worked in 3.4.0 Steps/code to reproduce: Worked in https://codesandbox.io/s/lightweightchartsdemo-forked-5xq2p?file=/src/index.js Does not work (same code) in https://codesandbox.io/s/lightweightchartsdemo-forked-qlg1u Actual behavior: The largest bar series on a given value overlaps the others, and covers up all smaller ones Expected behavior: The bars are rendered from largest in the back to smallest in the front Screenshots: CodeSandbox/JSFiddle/etc link: Worked in https://codesandbox.io/s/lightweightchartsdemo-forked-5xq2p?file=/src/index.js Does not work (same code) in https://codesandbox.io/s/lightweightchartsdemo-forked-qlg1u @timocov I think this is related to the 3.5.0 release - it works as expected on the one right before it Duplicate of #812 This is intended behaviour, you need to change the order of creating series to make it work as you want. Just updated the release notes to avoid further confusion https://github.com/tradingview/lightweight-charts/releases/tag/v3.5.0. What if the bars alternate in tallest height? For instance, in series 1 the values go 0 10 5 and in series two they go 0 5 10, and we render series one first. Series two will completely overlap series one in that case. There needs to be specific logic on each tick render to render them in order, by value, unless I'm misunderstanding that Actually, what is the difference between the two versions you provided? It seems that they are the same? Ha, that's because I added a hist.reverse() to the not working link to confirm that it would work :) I just reset it - it should be back to "not working" - which is, as you said, just the order the series were added Ha, that's because I added a hist.reverse() to the not working link to confirm that it would work :) So, what is the issue now?
🙂 OK I just updated to a minimum viable example https://codesandbox.io/s/lightweightchartsdemo-forked-qlg1u?file=/src/index.js:1877-1913 If you add two series that aren't strictly increasing, one of them will overlap. You can see that in the third datapoint, where the blue line overlaps the red line. The order in which series are added is not guaranteed to be the ordering of values on the series. Histograms will overlap if the tallest one alternates.
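Since the library paints series in creation order, the per-bar front-to-back ordering the reporter asks for would have to be computed per time index: draw the tallest series first so the smaller bars end up on top. A toy sketch of that ordering logic (not lightweight-charts code, just the idea):

```python
def draw_order_per_bar(series: dict) -> list:
    """For each time index, list series names tallest-first.

    Painting in that order puts the tallest bar at the back and the
    smallest last (in front), so no bar is hidden.
    """
    n = len(next(iter(series.values())))
    order = []
    for i in range(n):
        names = sorted(series, key=lambda name: series[name][i], reverse=True)
        order.append(names)
    return order


# The example from the thread: red = 0 10 5, blue = 0 5 10.
order = draw_order_per_bar({"red": [0, 10, 5], "blue": [0, 5, 10]})
```

At the second datapoint red (10) must be painted before blue (5), and at the third the order flips, which is exactly the case a fixed creation order cannot handle.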
gharchive/issue
2021-08-24T21:49:06
2025-04-01T06:46:02.401335
{ "authors": [ "jonluca", "timocov" ], "repo": "tradingview/lightweight-charts", "url": "https://github.com/tradingview/lightweight-charts/issues/826", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1090012407
Update to go1.17 What does this PR do? This PR: Updates the Go version to 1.17 Updates the Traefik dependency version to v2.5.6 An update of golangci-lint could also be useful: https://github.com/traefik/mesh/blob/d69475264c5d0200647cb626933df330900a6f59/Dockerfile#L24 I will do the golangci-lint update, thanks for pointing this out! By the way, I do not understand why the integration tests are not executed?! Any ideas? /sem-approve /sem-approve I opened another PR because the semaphore workflow is triggered. Superseded by https://github.com/traefik/mesh/pull/806
gharchive/pull-request
2021-12-28T16:44:39
2025-04-01T06:46:02.404867
{ "authors": [ "kevinpollet", "ldez" ], "repo": "traefik/mesh", "url": "https://github.com/traefik/mesh/pull/806", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2418440405
A way to explain why a request matches a certain router when router rule conflicts happen Welcome! [X] Yes, I've searched similar issues on GitHub and didn't find any. [X] Yes, I've searched similar issues on the Traefik community forum and didn't find any. What did you expect to see? We have 3k+ rules. One of them is bad and will receive more traffic than we intended, but it did not cause trouble immediately after it was created. Some months later, we have a new request that was routed to this router. Since the API does not exist in the end service, we got a 404 response. The question is: which router stole our request? Some hacks we have tried: list all routes and search for bad ones by eye, i.e., kubectl get ingressroutes.traefik.io -A -o yaml | grep some-magic-regex. rely on some external monitoring system to find which service returned a 404 response recently. A prometheus example looks like sum(rate(some_metric{code="404"}[1m])) > 0 There are many other occasions when rule conflicts happen. I believe many people will not realize there is a conflict until days or even months later. As far as I can tell, nginx does not have a similar feature either. My worst experience debugging a conflict in nginx was reading the doc of location many times. Without an efficient debug tool, it is always time-consuming work. I know that to avoid conflicts completely, one needs to write precise rules instead of vague rules. However this is not the case for us, since we are migrating from nginx to Traefik and have a lot of location ~ or location / rules. Hello @rpstw and thanks for your interest in Traefik, We are not sure to fully understand every detail of your use case, is this issue a feature request? Could you please elaborate on this, what solution would you expect? Did you consider using tracing to see where the request is going? create rule a; months later some request wrongly matches a; there are thousands of rules, how do we find a quickly? The problem is solving an existing conflict.
It's hard to find suspects without relying on an external system like metrics, tracing or access logs. I agree with @rtribotte that identifying potential conflicts is hard. Many database systems have a built-in way to help users understand their queries. An example shows why a document has a score of 1.6943598 during an Elasticsearch query: { "_index":"my-index-000001", "_id":"0", "matched":true, "explanation":{ "value":1.6943598, "description":"weight(message:elasticsearch in 0) [PerFieldSimilarity], result of:", "details":[ { "value":1.6943598, "description":"score(freq=1.0), computed as boost * idf * tf from:", "details":[ { "value":2.2, "description":"boost", "details":[] }, Similarly, an "explain" telling us which router will handle the request could be helpful. However, I'm not sure how helpful it would be, so comments are welcome. Hello @rpstw, I would not make an equivalence between explaining a query result against a database and the rule conflict issue. Traefik handles the request with the first matching handler. Unlike databases, rule evaluation is not complex, there are no heuristics except the router's priority, and Traefik would look over and evaluate every rule if needed. Since you know the request you want to evaluate (to check if it is handled by the correct router), it is as simple as making the request and checking which router handled it. Then, why not use access logs or tracing for that? The "why" is relying on an external system that we, and maybe many others, don't have. However, I realized a so-called explain functionality is basically equivalent to the access log. The only difference is interactive vs. long-lasting, so maybe it's not a must-have. I'll try the access log for now. Thanks for the great explanation @kevinpollet.
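For what such an "explain" would amount to: replaying first-match evaluation over the routers, highest priority first, and reporting the winner. This toy sketch uses simple path prefixes in place of Traefik's real rule syntax, and shows how a vague catch-all created long ago can capture a request meant for a newer, more specific rule:

```python
def explain_match(routers, path):
    """Return the first router whose prefix rule matches, highest priority first.

    Toy model of first-match routing: `rule` here is a plain path prefix,
    standing in for Traefik's real rule language.
    """
    for router in sorted(routers, key=lambda r: r["priority"], reverse=True):
        if path.startswith(router["rule"]):
            return router
    return None


routers = [
    {"name": "exact-api", "rule": "/api/v1", "priority": 20},
    {"name": "catch-all", "rule": "/", "priority": 1},  # the "vague" rule
]

# A new endpoint that no precise rule covers falls through to the catch-all.
winner = explain_match(routers, "/api/v2/new-endpoint")
```

Running this replay for a concrete request is essentially what the access-log approach gives you after the fact, which is why the thread concludes the two are equivalent.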
gharchive/issue
2024-07-19T09:15:42
2025-04-01T06:46:02.414985
{ "authors": [ "kevinpollet", "rpstw", "rtribotte" ], "repo": "traefik/traefik", "url": "https://github.com/traefik/traefik/issues/10918", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1373948939
Cannot use a plugin that doesn't require configuration Welcome! [X] Yes, I've searched similar issues on GitHub and didn't find any. [X] Yes, I've searched similar issues on the Traefik community forum and didn't find any. What did you do? I'm trying to load a plugin that doesn't require any configuration In the traefik configuration file I have to declare something like below to bypass: middlewares: dummy_plugin: plugin: dummy_plugin: key: value What did you see instead? With the configuration middlewares: dummy_plugin: plugin: dummy_plugin: {} plugin: missing plugin configuration: dummy_plugin What version of Traefik are you using? from v2.5 to latest What is your environment & configuration? http: routers: whoami: middlewares: - dummy_plugin entrypoints: - http service: whoami rule: Host(`domain.com`) services: whoami: loadBalancer: servers: - url: http://whoami passHostHeader: false middlewares: dummy_plugin: plugin: esi: {} Add more configuration information here. If applicable, please paste the log output in DEBUG level No response Thank you! 🙏 Closed by #9338.
gharchive/issue
2022-09-15T05:35:42
2025-04-01T06:46:02.419989
{ "authors": [ "darkweak", "traefiker" ], "repo": "traefik/traefik", "url": "https://github.com/traefik/traefik/issues/9337", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1450982590
Custom plugin not working Welcome! [X] Yes, I've searched similar issues on GitHub and didn't find any. [X] Yes, I've searched similar issues on the Traefik community forum and didn't find any. What did you do? I tested custom plugin in EKS traefik deployment, but it's not working. helm chart config: additionalArguments: - "--log.level=DEBUG" - "--experimental.plugins.example.modulename=github.com/traefik/plugindemo" - "--experimental.plugins.example.version=v0.2.2" the pod error message: time="2022-11-08T15:42:07Z" level=debug msg="loading of plugin: example: github.com/traefik/plugindemo@v0.2.2" time="2022-11-08T15:42:12Z" level=error msg="Plugins are disabled because an error has occurred." error="failed to download plugin github.com/traefik/plugindemo: failed to call service: Get \"https://plugins.traefik.io/public/download/github.com/traefik/plugindemo/v0.2.2\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" the network is ok, I can wget https://plugins.traefik.io/public/download/github.com/traefik/plugindemo/v0.2.2 in the pod Then I test different version of traefik docker image: Custom plugin working fine: v2.7.0 v2.8.0 v2.8.1 v2.8.3 Custom plugin not working: v2.8.4 v2.8.8 v2.9.4 Conclusion: The latest working version is Traefik v2.8.3 What did you see instead? time="2022-11-08T15:42:07Z" level=debug msg="loading of plugin: example: github.com/traefik/plugindemo@v0.2.2" time="2022-11-08T15:42:12Z" level=error msg="Plugins are disabled because an error has occurred." error="failed to download plugin github.com/traefik/plugindemo: failed to call service: Get \"https://plugins.traefik.io/public/download/github.com/traefik/plugindemo/v0.2.2\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" What version of Traefik are you using? v2.9.4 What is your environment & configuration? 
additionalArguments: - "--log.level=DEBUG" - "--experimental.plugins.example.modulename=github.com/traefik/plugindemo" - "--experimental.plugins.example.version=v0.2.2" If applicable, please paste the log output in DEBUG level time="2022-11-08T15:42:07Z" level=debug msg="loading of plugin: example: github.com/traefik/plugindemo@v0.2.2" time="2022-11-08T15:42:12Z" level=error msg="Plugins are disabled because an error has occurred." error="failed to download plugin github.com/traefik/plugindemo: failed to call service: Get \"https://plugins.traefik.io/public/download/github.com/traefik/plugindemo/v0.2.2\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Hello @scrazy77, Thanks for your interest in Traefik! To debug this further, can you share with us the version of the HelmChart you are using? Can you also share the command and the values.yaml file? Chart.yaml apiVersion: v2 name: traefik description: A Traefik based Kubernetes ingress controller type: application version: 18.3.0 appVersion: 2.9.4 keywords: - traefik - ingress home: https://traefik.io/ sources: - https://github.com/traefik/traefik - https://github.com/traefik/traefik-helm-chart maintainers: - name: emilevauge email: emile@vauge.com - name: dtomcej email: daniel.tomcej@gmail.com - name: ldez email: ldez@traefik.io - name: mloiseleur email: michel.loiseleur@traefik.io - name: charlie-haley email: charlie.haley@traefik.io icon: https://raw.githubusercontent.com/traefik/traefik/v2.3/docs/content/assets/img/traefik.logo.png annotations: artifacthub.io/changes: | - ⬆️ Update Traefik appVersion to 2.9.4 values.yaml # Default values for Traefik image: name: traefik # defaults to appVersion tag: v2.9.4 pullPolicy: IfNotPresent hub: enabled: false deployment: enabled: true kind: Deployment replicas: 2 terminationGracePeriodSeconds: 60 minReadySeconds: 0 annotations: {} labels: {} podAnnotations: {} podLabels: {} additionalContainers: [] additionalVolumes: [] initContainers: [] 
shareProcessNamespace: false imagePullSecrets: [] lifecycle: {} podDisruptionBudget: enabled: false ingressClass: enabled: false isDefaultClass: false experimental: plugins: enabled: true plugin-requestid: modulename: github.com/pipe01/plugin-requestid version: v1.0.0 traefikxrequeststart: modulename: github.com/EasySolutionsIO/traefikxrequeststart version: v0.0.3 kubernetesGateway: enabled: false gateway: enabled: true ingressRoute: dashboard: enabled: true annotations: {} labels: {} entryPoints: ["traefik"] rollingUpdate: maxUnavailable: 0 maxSurge: 1 readinessProbe: failureThreshold: 1 initialDelaySeconds: 2 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 2 livenessProbe: failureThreshold: 3 initialDelaySeconds: 2 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 2 providers: kubernetesCRD: enabled: true allowCrossNamespace: false allowExternalNameServices: true allowEmptyServices: false namespaces: - "default" - "traefik" kubernetesIngress: enabled: true allowExternalNameServices: false allowEmptyServices: false namespaces: - "default" - "traefik" publishedService: enabled: false volumes: [] additionalVolumeMounts: [] logs: general: level: ERROR access: enabled: true filters: {} fields: general: defaultmode: keep names: {} headers: defaultmode: drop names: {} metrics: prometheus: entryPoint: metrics tracing: {} globalArguments: - "--global.checknewversion" additionalArguments: - "--log.level=DEBUG" - "--experimental.plugins.plugin-requestid.modulename=github.com/pipe01/plugin-requestid" - "--experimental.plugins.plugin-requestid.version=v1.0.0" - "--experimental.plugins.traefikxrequeststart.modulename=github.com/EasySolutionsIO/traefikxrequeststart" - "--experimental.plugins.traefikxrequeststart.version=v0.0.3" env: [] envFrom: [] ports: traefik: port: 9000 expose: false exposedPort: 9000 protocol: TCP web: port: 8000 expose: true exposedPort: 80 protocol: TCP websecure: port: 8443 expose: true exposedPort: 443 protocol: TCP http3: enabled: false tls: 
enabled: true options: "" certResolver: "" domains: [] middlewares: [] metrics: port: 9100 expose: false exposedPort: 9100 protocol: TCP tlsOptions: {} tlsStore: {} service: enabled: true single: true type: LoadBalancer annotationsTCP: {} annotationsUDP: {} labels: {} spec: {} loadBalancerSourceRanges: [] externalIPs: [] autoscaling: enabled: false persistence: enabled: false name: data accessMode: ReadWriteOnce size: 128Mi path: /data annotations: {} certResolvers: {} hostNetwork: false rbac: enabled: true namespaced: false podSecurityPolicy: enabled: false serviceAccount: name: "" serviceAccountAnnotations: {} resources: {} nodeSelector: {} tolerations: [] topologySpreadConstraints: [] securityContext: capabilities: drop: [ALL] readOnlyRootFilesystem: true runAsGroup: 65532 runAsNonRoot: true runAsUser: 65532 podSecurityContext: fsGroup: 65532 extraObjects: [] Hello @scrazy7, Thanks for sharing! So far, I should say that the issue looks to be related (only) to a network issue. Unfortunately, this means that we will probably not be able to reproduce it. Thus, you are mentioning having this error (Plugins are disabled because an error has occurred) only starting with the v2.8.4. Can you please share with us the full debug logs trace of execution with the v2.8.3? v2.8.4 > kubectl logs traefik-78cbff6d78-6k787 -n traefik time="2022-11-16T06:25:25Z" level=info msg="Configuration loaded from flags." 
time="2022-11-16T06:25:25Z" level=info msg="Traefik version 2.8.4 built on 2022-09-02T14:42:59Z" time="2022-11-16T06:25:25Z" level=debug msg="Static configuration loaded {\"global\":{\"checkNewVersion\":true},\"serversTransport\":{\"maxIdleConnsPerHost\":200},\"entryPoints\":{\"metrics\":{\"address\":\":9100/tcp\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":\"10s\"},\"respondingTimeouts\":{\"idleTimeout\":\"3m0s\"}},\"forwardedHeaders\":{},\"http\":{},\"http2\":{\"maxConcurrentStreams\":250},\"udp\":{\"timeout\":\"3s\"}},\"traefik\":{\"address\":\":9000/tcp\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":\"10s\"},\"respondingTimeouts\":{\"idleTimeout\":\"3m0s\"}},\"forwardedHeaders\":{},\"http\":{},\"http2\":{\"maxConcurrentStreams\":250},\"udp\":{\"timeout\":\"3s\"}},\"web\":{\"address\":\":8000/tcp\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":\"10s\"},\"respondingTimeouts\":{\"idleTimeout\":\"3m0s\"}},\"forwardedHeaders\":{},\"http\":{},\"http2\":{\"maxConcurrentStreams\":250},\"udp\":{\"timeout\":\"3s\"}},\"websecure\":{\"address\":\":8443/tcp\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":\"10s\"},\"respondingTimeouts\":{\"idleTimeout\":\"3m0s\"}},\"forwardedHeaders\":{},\"http\":{\"tls\":{}},\"http2\":{\"maxConcurrentStreams\":250},\"udp\":{\"timeout\":\"3s\"}}},\"providers\":{\"providersThrottleDuration\":\"2s\",\"kubernetesIngress\":{\"namespaces\":[\"default\",\"traefik\"]},\"kubernetesCRD\":{\"namespaces\":[\"default\",\"traefik\"],\"allowExternalNameServices\":true}},\"api\":{\"dashboard\":true},\"metrics\":{\"prometheus\":{\"buckets\":[0.1,0.3,1.2,5],\"addEntryPointsLabels\":true,\"addServicesLabels\":true,\"entryPoint\":\"metrics\"}},\"ping\":{\"entryPoint\":\"traefik\",\"terminatingStatusCode\":503},\"log\":{\"level\":\"DEBUG\",\"format\":\"common\"},\"accessLog\":{\"format\":\"common\",\"filters\":{},\"fields\":{\"defaultMode\":\"keep\",\"headers\":{\"defaultMode\":\"drop\"}}},\"pilot\":{\"dashboard\":true},\"experimental\":{\"plugins\":{\"pl
ugin-requestid\":{\"moduleName\":\"github.com/pipe01/plugin-requestid\",\"version\":\"v1.0.0\"}}}}" time="2022-11-16T06:25:25Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://doc.traefik.io/traefik/contributing/data-collection/\n" time="2022-11-16T06:25:25Z" level=warning msg="Traefik Pilot is deprecated and will be removed soon. Please check our Blog for migration instructions later this year." time="2022-11-16T06:25:25Z" level=debug msg="loading of plugin: plugin-requestid: github.com/pipe01/plugin-requestid@v1.0.0" time="2022-11-16T06:25:30Z" level=error msg="Plugins are disabled because an error has occurred." error="failed to download plugin github.com/pipe01/plugin-requestid: failed to call service: Get \"https://plugin.pilot.traefik.io/public/download/github.com/pipe01/plugin-requestid/v1.0.0\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" time="2022-11-16T06:25:30Z" level=debug msg="Configured Prometheus metrics" metricsProviderName=prometheus time="2022-11-16T06:25:30Z" level=info msg="Starting provider aggregator aggregator.ProviderAggregator" time="2022-11-16T06:25:30Z" level=debug msg="Starting TCP Server" entryPointName=metrics time="2022-11-16T06:25:30Z" level=debug msg="Starting TCP Server" entryPointName=traefik time="2022-11-16T06:25:30Z" level=info msg="Starting provider *traefik.Provider" time="2022-11-16T06:25:30Z" level=debug msg="*traefik.Provider provider configuration: {}" time="2022-11-16T06:25:30Z" level=info msg="Starting provider *ingress.Provider" time="2022-11-16T06:25:30Z" level=debug msg="*ingress.Provider provider configuration: {\"namespaces\":[\"default\",\"traefik\"]}" time="2022-11-16T06:25:30Z" level=info msg="ingress label selector is: \"\"" providerName=kubernetes time="2022-11-16T06:25:30Z" level=info msg="Creating in-cluster Provider client" Hello @scrazy77, Thank you for the information. 
We'll try to reproduce the error on our side, and we'll keep you updated according to the results. In the meantime, you can try to install Traefik using the latest version of the Helm Chart (20.2.1) instead of the version 18.3.0 you are currently using. It brings some modifications to the Traefik configuration. Hi @nmengin : I tried helm chart v20.2.1 today, and the results are still the same: traefik v2.8.3 loads all plugins and works well. traefik v2.8.4 can't load plugins. There is only 1 difference between 2.8.3 and 2.8.4: in 2.8.4, Traefik logs the error when an error occurs during the download of a plugin instead of stopping Traefik. https://github.com/traefik/traefik/compare/v2.8.3...v2.8.4 So I think you have a problem with your network, you have something randomly slowing down your network, but it's not related to a specific version of Traefik. Hi @nmengin Thanks for the help. I think it shouldn't be a "random" network problem, because I can reproduce it easily in every version > v2.8.4, and versions <= v2.8.3 all work fine. All tests ran in the same AWS EKS cluster & same node group & same VPC network... Very strange... and no idea.... I had the same issue this morning with image version v2.9.4; the problem went away after updating to v2.9.5. Using the plugin URL in curl or the browser resulted in the correct file being downloaded. Plugins are disabled because an error has occurred."
error="failed to download plugin github.com/TRIMM/traefik-maintenance: failed to call service: Get \"https://plugins.traefik.io/public/download/github.com/TRIMM/traefik-maintenance/v1.0.1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I tried v2.9.5, still the same: time="2022-11-22T06:36:30Z" level=debug msg="loading of plugin: plugin-requestid: github.com/pipe01/plugin-requestid@v1.0.0" time="2022-11-22T06:36:35Z" level=error msg="Plugins are disabled because an error has occurred." error="failed to download plugin github.com/pipe01/plugin-requestid: failed to call service: Get \"https://plugins.traefik.io/public/download/github.com/pipe01/plugin-requestid/v1.0.0\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" There is no difference between v2.9.4 and v2.9.5 around the plugin topics. https://github.com/traefik/traefik/compare/v2.9.4...v2.9.5 The different feedback leads me to the same conclusion: a networking issue; one piece of the network is slow. I tested with different environments and I never reproduced the problem. The current timeout is 5 seconds, which is already a large timeout. Where are your applications deployed (geographically)? Amazon EKS Tokyo Region I have built the master version and tested it. All plugins load fine! Thanks! > kubectl logs traefik-65bfff4694-6l7rb -n traefik time="2022-11-24T05:07:23Z" level=info msg="Configuration loaded from flags."
time="2022-11-24T05:07:23Z" level=info msg="Traefik version 81a5b1b4c8946dfa9cbee298c2f4a263df407b0f built on 2022-11-23_03:01:37AM" time="2022-11-24T05:07:23Z" level=debug msg="Static configuration loaded {\"global\":{\"checkNewVersion\":true},\"serversTransport\":{\"maxIdleConnsPerHost\":200},\"entryPoints\":{\"metrics\":{\"address\":\":9100/tcp\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":\"10s\"},\"respondingTimeouts\":{\"idleTimeout\":\"3m0s\"}},\"forwardedHeaders\":{},\"http\":{},\"http2\":{\"maxConcurrentStreams\":250},\"udp\":{\"timeout\":\"3s\"}},\"traefik\":{\"address\":\":9000/tcp\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":\"10s\"},\"respondingTimeouts\":{\"idleTimeout\":\"3m0s\"}},\"forwardedHeaders\":{},\"http\":{},\"http2\":{\"maxConcurrentStreams\":250},\"udp\":{\"timeout\":\"3s\"}},\"web\":{\"address\":\":8000/tcp\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":\"10s\"},\"respondingTimeouts\":{\"idleTimeout\":\"3m0s\"}},\"forwardedHeaders\":{},\"http\":{},\"http2\":{\"maxConcurrentStreams\":250},\"udp\":{\"timeout\":\"3s\"}},\"websecure\":{\"address\":\":8443/tcp\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":\"10s\"},\"respondingTimeouts\":{\"idleTimeout\":\"3m0s\"}},\"forwardedHeaders\":{},\"http\":{\"tls\":{}},\"http2\":{\"maxConcurrentStreams\":250},\"udp\":{\"timeout\":\"3s\"}}},\"providers\":{\"providersThrottleDuration\":\"2s\",\"kubernetesIngress\":{\"namespaces\":[\"default\",\"traefik\"]},\"kubernetesCRD\":{\"namespaces\":[\"default\",\"traefik\"],\"allowExternalNameServices\":true}},\"api\":{\"dashboard\":true},\"metrics\":{\"prometheus\":{\"buckets\":[0.1,0.3,1.2,5],\"addEntryPointsLabels\":true,\"addServicesLabels\":true,\"entryPoint\":\"metrics\"}},\"ping\":{\"entryPoint\":\"traefik\",\"terminatingStatusCode\":503},\"log\":{\"level\":\"DEBUG\",\"format\":\"common\"},\"accessLog\":{\"format\":\"common\",\"filters\":{},\"fields\":{\"defaultMode\":\"keep\",\"headers\":{\"defaultMode\":\"drop\"}}},\"experimental\":{\"plugins\":
{\"plugin-requestid\":{\"moduleName\":\"github.com/pipe01/plugin-requestid\",\"version\":\"v1.0.0\"},\"traefikxrequeststart\":{\"moduleName\":\"github.com/EasySolutionsIO/traefikxrequeststart\",\"version\":\"v0.0.3\"}}}}" time="2022-11-24T05:07:23Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://doc.traefik.io/traefik/contributing/data-collection/\n" time="2022-11-24T05:07:23Z" level=debug msg="loading of plugin: plugin-requestid: github.com/pipe01/plugin-requestid@v1.0.0" time="2022-11-24T05:07:30Z" level=debug msg="loading of plugin: traefikxrequeststart: github.com/EasySolutionsIO/traefikxrequeststart@v0.0.3"
gharchive/issue
2022-11-16T06:49:19
2025-04-01T06:46:02.444636
{ "authors": [ "ldez", "nmengin", "ronaldtb", "rtribotte", "scrazy77" ], "repo": "traefik/traefik", "url": "https://github.com/traefik/traefik/issues/9512", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1167356532
Update errorpages.md Correct instead spelling [ ] Added/updated tests [ ] Added/updated documentation Hello, "in lieu of" is an English expression https://dictionary.cambridge.org/dictionary/english/in-lieu-of
gharchive/pull-request
2022-03-12T18:07:26
2025-04-01T06:46:02.448690
{ "authors": [ "ezzahraoui", "ldez" ], "repo": "traefik/traefik", "url": "https://github.com/traefik/traefik/pull/8835", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
166982384
Not displaying anything (empty activity) Hi, I downloaded and ran the sample app, but it is not displaying anything, only an empty screen. What am I missing? Change the year to 2017. It works
gharchive/issue
2016-07-22T07:00:25
2025-04-01T06:46:02.449524
{ "authors": [ "Joe4545" ], "repo": "traex/CalendarListview", "url": "https://github.com/traex/CalendarListview/issues/49", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1223469262
Makefile: add make dev; refine env rebuilding Now that we have setup.cfg, our Makefile should also track it. Blocked on #268.
gharchive/pull-request
2022-05-02T23:42:38
2025-04-01T06:46:02.452087
{ "authors": [ "woodruffw" ], "repo": "trailofbits/pip-audit", "url": "https://github.com/trailofbits/pip-audit/pull/267", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1344471446
Merge https://github.com/trallnag/prometheus-fastapi-instrumentator/pull/157 please It would be great to avoid having False printed everywhere, ruining the logs. It's merged. You can close this issue. Thanks for the hint
gharchive/issue
2022-08-19T13:48:48
2025-04-01T06:46:02.468121
{ "authors": [ "chbndrhnns", "haf", "trallnag" ], "repo": "trallnag/prometheus-fastapi-instrumentator", "url": "https://github.com/trallnag/prometheus-fastapi-instrumentator/issues/177", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
2042453993
B-17955 Create new field 'Calculated total SIT days' in sit display Summary This is adding a new field in the SIT information display under the "Move task order" tab. The new label is "Calculated total SIT days" and it displays a value representing the calculated total SIT days for that shipment. calculatedTotalDaysInSIT is created in the backend to represent this data and is added under the sitStatus object. This new field is not stored in the DB and is calculated upon a GET request. http://officelocal:3000/swagger-ui/ghc.html#/mtoShipment/listMTOShipments How to test Create a move with a HHG shipment and follow the normal flow to create a SIT service item. Once approved, as a TOO the label and the value should be visible under the Move task order tab. (See image below for placement. Also added in review/edit modals.) Screenshots So... Curious here. What's the difference between this added value and the value that's under the "Total days used"? They seem to always be the same. One is just a UI calculation and the other is a backend calculation that we are passing through? Actually these seem to be the same calculation in pkg/services/mto_shipment/shipment_sit_status.go:

    shipmentSITStatus.TotalSITDaysUsed = CalculateTotalDaysInSIT(shipmentSITs, today)
    shipmentSITStatus.CalculatedTotalDaysInSIT = CalculateTotalDaysInSIT(shipmentSITs, today)

Do we know why there's redundant use of these values? Good question Daniel! That value TotalSITDaysUsed is expected to change in a different story we are working on and will not have the same functionality as the CalculatedTotalDaysInSIT once that story is finished. Could you link to that BL item or expand on what the change will be? Just curious on what that change will be prior to approving. Or even a brief synopsis in your description would be fine, too. So this is the link: https://www13.v1host.com/USTRANSCOM38/story.mvc/Summary?oidToken=Story%3A869482 Currently the value for TotalSITDaysUsed is incorrect and needs to be fixed in the above story. So the logic for TotalSITDaysUsed will be changed to reflect the correct calculation, which will be different from CalculatedTotalDaysInSIT. Most likely a clamped version of the calculation. Thanks for the link. Sounds good & makes sense.
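As a rough illustration of the kind of "total days in SIT" calculation discussed above, here is a hypothetical Python sketch; it is not the actual Go code in shipment_sit_status.go, and the period representation is an assumption:

```python
from datetime import date

def total_days_in_sit(sit_periods, today):
    """Sum days across SIT periods; an open period counts up to `today`.

    Each period is an (entry_date, departure_date_or_None) tuple. This is
    only an illustrative sketch of the calculation described in the PR.
    """
    total = 0
    for entry, departure in sit_periods:
        # an ongoing SIT period has no departure date yet
        end = departure if departure is not None else today
        if end > entry:
            total += (end - entry).days
    return total

periods = [(date(2023, 1, 1), date(2023, 1, 11)),  # 10 days, closed
           (date(2023, 2, 1), None)]               # open, counts to today
print(total_days_in_sit(periods, today=date(2023, 2, 6)))  # 15
```

A "clamped" variant, as hinted at in the thread, would simply cap the result at the shipment's authorized SIT days.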
gharchive/pull-request
2023-12-14T20:42:50
2025-04-01T06:46:02.481675
{ "authors": [ "danieljordan-caci", "taeJungCaci" ], "repo": "transcom/mymove", "url": "https://github.com/transcom/mymove/pull/11653", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
300701889
Document how to avoid SQL injections Description We are adding explicit documentation to our backend guide about how to avoid SQL injection. Additionally, a new item has been added to the PR template to help everyone remember. Reviewer Notes The link to the documentation won't work until this PR is merged 😄 Verification Steps [ ] The documentation is understandable and properly captures our prior discussion re: avoiding SQL injection. References Pivotal story for this change @breanneboland Yeah, that's a good idea.
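The guide itself isn't reproduced here, but the pattern such documentation typically recommends is standard: pass user input as bound parameters, never interpolate it into the SQL string. A minimal illustrative sketch (hypothetical table and function names, not taken from the mymove guide):

```python
import sqlite3

def find_user_unsafe(conn, name):
    # vulnerable: user input is interpolated directly into the SQL string
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # safe: the driver binds the value, so quotes in `name` cannot break out
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

print(find_user_safe(conn, "alice"))         # [(1,)]
print(find_user_safe(conn, "x' OR '1'='1"))  # [] -- injection attempt finds nothing
```

With the interpolated version, the same `x' OR '1'='1` input matches every row, which is exactly the class of bug the documentation and PR-template checkbox are meant to catch.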
gharchive/pull-request
2018-02-27T16:31:50
2025-04-01T06:46:02.484274
{ "authors": [ "jim" ], "repo": "transcom/mymove", "url": "https://github.com/transcom/mymove/pull/190", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
776595428
MB-6044: use an internally wrapped TruncateAll function Description This lays the groundwork for future work that changes how the TruncateAll command will be called. We are preparing to use a more limited PostgreSQL role when we run the application than the role we currently use. The new role will not have the TRUNCATE permission. Therefore, we will add some logic in PopTestSuite.TruncateAll to allow a more privileged user to make the call to suite.db.TruncateAll() while allowing the rest of the tests to run as the less privileged user. I wanted to make those changes in a separate PR to keep the changes as small and atomic as possible, so I am making the changes to how we call TruncateAll in this PR. The other changes will be in other PRs. In addition, I added error checking to any TruncateAll calls that did not previously check for errors. Setup Add any steps or code to run in this section to help others prepare to run your code:

    docker kill $(docker ps --quiet --filter ancestor=postgres:12.2)
    docker rm $(docker ps --all --quiet --filter ancestor=postgres:12.2)
    make db_test_reset && make db_test_migrate && make db_e2e_up
    DB_PORT_TEST=5433 DB_NAME_TEST=test_db NO_DB=1 APPLICATION=app scripts/run-server-test

Code Review Verification Steps [x] Request review from a member of a different team. References Jira story for this change Messages :book: :link: MB-6044 Generated by :no_entry_sign: dangerJS against 77b6c32fdf87d92ba18511c50c72fcc421dce58e
gharchive/pull-request
2020-12-30T19:04:04
2025-04-01T06:46:02.488558
{ "authors": [ "carterjones", "robot-mymove" ], "repo": "transcom/mymove", "url": "https://github.com/transcom/mymove/pull/5566", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1588703166
Update readme.md for Typo Replaces demo-persistence.ts with demo-conversation.ts in the readme demo. Thanks @waynejohny 🙏
gharchive/pull-request
2023-02-17T03:41:27
2025-04-01T06:46:02.561651
{ "authors": [ "transitive-bullshit", "waynejohny" ], "repo": "transitive-bullshit/chatgpt-api", "url": "https://github.com/transitive-bullshit/chatgpt-api/pull/354", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
607474780
Error with offspect -tms when creating cache-file from .cnt-file Describe the bug SmartMove protocol: When trying to create a new CacheFile from cnt-files and documentation.txt, no module 'offspect.input.tms.contralateral_mep' is found and no cache file is created. When replacing contralateral_mep with cmep in the code, a different error occurs: the channel label does not get accepted. To Reproduce Run offspect tms -t test.hdf5 -f VvNn_VvNn_1970-01-01_00-00-01.cnt VvNn\ 1970-01-01_00-00-01.cnt documentation.txt -r contralateral_mep -c Ch1 -pp 100 100 -e 1 from a command prompt in the folder with said .cnt-file Alternatively, run offspect tms -t test.hdf5 -f VvNn_VvNn_1970-01-01_00-00-01.cnt VvNn\ 1970-01-01_00-00-01.cnt documentation.txt -r cmep -c Ch1 -pp 100 100 -e 1 from a command prompt in the folder with said .cnt-file Expected behavior A cache file is created from both .cnt-files and documentation.txt Desktop: OS: Windows 10 Additional context I checked the channel labels via Python & .get_channel_info. For the emg-cnt, the result was ['Ch1', 'Ch2', 'Ch3', ..., 'PO8', 'Oz'], for eeg ['Fp1', 'Fpz', 'Fp2', ..., 'PO8', 'Oz']. Replacing Ch1 in the code with any of these did not change the outcome. -r contralateral_mep is deprecated, and failing is expected behavior. I fixed the unexpected keyword in the most recent develop commit. But now I get raise Exception(f"Received an invalid libeep file handle") Exception: Received an invalid libeep file handle This other bug is weird, because libeep as well as offspect tests pass. The invalid libeep file handle is caused by eemagine's binary, which I only wrapped. Weirdly enough, eep-peek VvNn_VvNn_2000-12-31_23-59-59.cnt and eep-peek VvNn_VvNn_2000-12-31_23-59-59.cnt run fine without exception.
Yet,

    from libeep import peek
    fname = "VvNn_VvNn_1970-01-01_00-00-01.cnt"
    peek(fname)

fails with the exception, even though it should be behaviourally identical to the main called successfully by eep-peek:

    def main():
        import sys
        filename = sys.argv[1]
        peek(filename)

Okay, that was because I used file expansion from the terminal. Can you check whether the files VvNn_VvNn_1970-01-01_00-00-01.cnt and VvNn\ 1970-01-01_00-00-01.cnt do even exist? Similarly, offspect tms -t test.hdf5 -f "VvNn 2000-12-31_23-59-59.cnt" VvNn_VvNn_2000-12-31_23-59-59.cnt documentation.txt -r cmep -c Ch1 -pp 100 100 -e 1 runs without exception now, but doesn't find any traces. Please note that I use different filenames. Adapt the snippets accordingly. @FelixQuirm Please confirm and close if it works now. Works for me now as well, thanks!
gharchive/issue
2020-04-27T11:36:41
2025-04-01T06:46:02.589940
{ "authors": [ "FelixQuirm", "agricolab" ], "repo": "translationalneurosurgery/tool-offspect", "url": "https://github.com/translationalneurosurgery/tool-offspect/issues/28", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2415864414
CT static API support This PR enables Tessera to support a CT Static API personality. There are broadly two things necessary to be able to support that personality:

1. Formatting of EntryBundle is different and incompatible: tlog-tiles specifies a length/value scheme, whereas CT Static API specifies just the value and relies on the fact that the value in this case is self-describing.
2. Each log entry contains its own index in the log. The SCT returned to the submitter contains the sequence number of the entry in the log in the CtExtensions field of the SCT struct. What's not obvious from the spec is that this must also be part of the Merkle leaf (otherwise compatibility with RFC6962 would be broken). This means that we don't actually know the LeafData or its MerkleLeafHash until we've sequenced it.

Beyond enabling a CT Static API personality to be built, other large goals for this PR are:

- To make it hard/impossible to build new transparency log ecosystems which attempt to use these patterns outside of CT.
- Avoid "overly invasive" changes spreading throughout Tessera, instead containing them as close as possible to the personality: non-CT logs shouldn't have to pay a performance cost simply because the support exists.
- Contain changes to the sequencing side, leaving integration concerned only with managing Merkle specifics.

Toward #41 Codecov Report Attention: Patch coverage is 23.23944% with 109 lines in your changes missing coverage. Please review. Please upload report for BASE (main@81ca927). Learn more about missing BASE report.
Files | Patch % | Lines
ctonly/ct.go | 0.00% | 66 Missing :warning:
storage/gcp/gcp.go | 47.05% | 16 Missing and 2 partials :warning:
ct_only.go | 0.00% | 12 Missing :warning:
entry.go | 43.75% | 7 Missing and 2 partials :warning:
storage/queue.go | 75.00% | 2 Missing and 1 partial :warning:
storage/integrate.go | 50.00% | 0 Missing and 1 partial :warning:

Additional details and impacted files

    @@ Coverage Diff @@
    ##             main      #73   +/-   ##
    =======================================
      Coverage        ?   43.55%
    =======================================
      Files           ?       14
      Lines           ?     1102
      Branches        ?        0
    =======================================
      Hits            ?      480
      Misses          ?      550
      Partials        ?       72

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here. Summarizing our conversation here for the record. The logic makes sense to me, so I'll approve this PR. We could improve naming though to make it easier to read the code: There are 2 things called entry: tessera.Entry and storage.entry. And storage.entry stores a tessera.Entry in a data field. Not to mention storage.SequencedEntry. There are 3 things called index: tessera.Entry.internal.Index (an integer pointer), tessera.Entry.Index (a method interface), storage.Entry.index (a future method).
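The bundle-encoding difference described in the PR can be sketched roughly as follows. The 16-bit prefix width here is an illustrative assumption, not the exact tlog-tiles wire format; the point is that a length/value bundle can be split without understanding the entries, whereas CT Static API omits the prefix because its TLS-serialized entries are self-describing:

```python
import struct

def encode_length_value(entries):
    # tlog-tiles style: each entry is prefixed with its length
    # (16-bit big-endian prefix chosen for illustration only)
    out = b""
    for e in entries:
        out += struct.pack(">H", len(e)) + e
    return out

def decode_length_value(blob):
    # splitting the bundle needs no knowledge of the entry format
    entries, i = [], 0
    while i < len(blob):
        (n,) = struct.unpack_from(">H", blob, i)
        entries.append(blob[i + 2 : i + 2 + n])
        i += 2 + n
    return entries

bundle = encode_length_value([b"entry-1", b"entry-two"])
print(decode_length_value(bundle))  # [b'entry-1', b'entry-two']
```

A CT-style bundle would instead be the bare concatenation of the entries, and a reader would have to parse each entry's own structure to find where the next one begins.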
gharchive/pull-request
2024-07-18T09:37:20
2025-04-01T06:46:02.622304
{ "authors": [ "AlCutter", "codecov-commenter", "phbnf" ], "repo": "transparency-dev/trillian-tessera", "url": "https://github.com/transparency-dev/trillian-tessera/pull/73", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2176267388
Documentation Add some documentation for the current version of box. This is very much a work in progress; however, it should describe the first dev0 version to be released on PyPI. Codecov Report All modified and coverable lines are covered by tests :white_check_mark: Project coverage is 96.76%. Comparing base (10a893b) to head (ee714da). Additional details and impacted files

    @@ Coverage Diff @@
    ##             main      #25   +/-   ##
    =======================================
      Coverage   96.76%   96.76%
    =======================================
      Files          21       21
      Lines        1114     1114
    =======================================
      Hits         1078     1078
      Misses         36       36

:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.
gharchive/pull-request
2024-03-08T15:32:12
2025-04-01T06:46:02.628849
{ "authors": [ "codecov-commenter", "trappitsch" ], "repo": "trappitsch/box", "url": "https://github.com/trappitsch/box/pull/25", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2339678645
🛑 TechinAsia_WordPress_API is down In 9eb6224, TechinAsia_WordPress_API (https://www.techinasia.com/wp-json/techinasia/2.0/posts) was down: HTTP code: 403 Response time: 228 ms Resolved: TechinAsia_WordPress_API is back up in 9c8cd0c after 10 minutes.
gharchive/issue
2024-06-07T06:23:25
2025-04-01T06:46:02.631544
{ "authors": [ "traqy" ], "repo": "traqy/upptime", "url": "https://github.com/traqy/upptime/issues/10853", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2068778483
🛑 TechinAsia_WordPress_API is down In 873f2bc, TechinAsia_WordPress_API (https://www.techinasia.com/wp-json/techinasia/2.0/posts) was down: HTTP code: 403 Response time: 231 ms Resolved: TechinAsia_WordPress_API is back up in b1255c3 after 16 minutes.
gharchive/issue
2024-01-06T18:53:18
2025-04-01T06:46:02.634115
{ "authors": [ "traqy" ], "repo": "traqy/upptime", "url": "https://github.com/traqy/upptime/issues/2524", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2083272150
🛑 TechinAsia is down In 06ba833, TechinAsia (https://www.techinasia.com) was down: HTTP code: 403 Response time: 731 ms Resolved: TechinAsia is back up in abce9f8 after 10 minutes.
gharchive/issue
2024-01-16T08:17:35
2025-04-01T06:46:02.636477
{ "authors": [ "traqy" ], "repo": "traqy/upptime", "url": "https://github.com/traqy/upptime/issues/3112", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2122524944
🛑 TechinAsia_Laravel_API is down In 327ff2f, TechinAsia_Laravel_API (https://www.techinasia.com/api/2.0/companies) was down: HTTP code: 403 Response time: 231 ms Resolved: TechinAsia_Laravel_API is back up in b561cbc after 10 minutes.
gharchive/issue
2024-02-07T09:06:50
2025-04-01T06:46:02.639076
{ "authors": [ "traqy" ], "repo": "traqy/upptime", "url": "https://github.com/traqy/upptime/issues/4368", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2135901798
🛑 TechinAsia_WordPress_API is down In 6cd582d, TechinAsia_WordPress_API (https://www.techinasia.com/wp-json/techinasia/2.0/posts) was down: HTTP code: 403 Response time: 172 ms Resolved: TechinAsia_WordPress_API is back up in bb2a8cb after 16 minutes.
gharchive/issue
2024-02-15T08:19:44
2025-04-01T06:46:02.641880
{ "authors": [ "traqy" ], "repo": "traqy/upptime", "url": "https://github.com/traqy/upptime/issues/4850", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2175262117
🛑 TechinAsia_WordPress_API is down In 4851254, TechinAsia_WordPress_API (https://www.techinasia.com/wp-json/techinasia/2.0/posts) was down: HTTP code: 403 Response time: 174 ms Resolved: TechinAsia_WordPress_API is back up in 50c4f6b after 7 minutes.
gharchive/issue
2024-03-08T04:32:41
2025-04-01T06:46:02.644484
{ "authors": [ "traqy" ], "repo": "traqy/upptime", "url": "https://github.com/traqy/upptime/issues/6123", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2194725642
🛑 TechinAsia is down In cb88db7, TechinAsia (https://www.techinasia.com) was down: HTTP code: 403 Response time: 724 ms Resolved: TechinAsia is back up in 37e7342 after 7 minutes.
gharchive/issue
2024-03-19T11:36:02
2025-04-01T06:46:02.646826
{ "authors": [ "traqy" ], "repo": "traqy/upptime", "url": "https://github.com/traqy/upptime/issues/6766", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1587258256
Add LCP patches from the upstream Also, fix the numbering of vpp-patches Draft for now b/c I want to make sure LCP binapis are fully working first. Removing the draft status b/c the changes turned out to be enough
gharchive/pull-request
2023-02-16T08:36:12
2025-04-01T06:46:02.651754
{ "authors": [ "ivan4th" ], "repo": "travelping/fpp-vpp", "url": "https://github.com/travelping/fpp-vpp/pull/28", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
98505976
APT source whitelist request for ppa:openmw/openmw

    deb http://ppa.launchpad.net/openmw/openmw/ubuntu precise main
    deb-src http://ppa.launchpad.net/openmw/openmw/ubuntu precise main

This ppa also has libsdl2, which could provide the source needed for: travis-ci/apt-package-whitelist#366 travis-ci/travis-ci#3799 In case someone needs libsdl2-dev, here is a simple workaround. If you are using CMake, it will automatically find its location because of SDL2DIR:

    install:
      - export MVDIR=`pwd`
      - cd ~
      - mkdir sdl2install
      - export SDL2DIR="`pwd`/sdl2install"
      - wget https://www.libsdl.org/release/SDL2-2.0.3.tar.gz
      - tar xzf SDL2-2.0.3.tar.gz
      - cd SDL2-2.0.3
      - ./configure --prefix=$SDL2DIR
      - make
      - make install

Any news on this? The workaround provided by @ouned is cool, but having to fetch and compile sdl2, sdl2-ttf, sdl2-image, sdl2-mixer, etc. makes builds very long. @PuKoren I guess you can't just use trusty now? @ouned: ho, I didn't know using trusty would solve this. I ended up using docker to build and test my project, thanks for your answer
gharchive/issue
2015-08-01T04:25:57
2025-04-01T06:46:02.654801
{ "authors": [ "LavenderMoon", "PuKoren", "maqifrnswa", "ouned" ], "repo": "travis-ci/apt-source-whitelist", "url": "https://github.com/travis-ci/apt-source-whitelist/issues/88", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
211978666
Introduce range-based table name lookup and (temporary) cutoff-based archived log spoofing, the plan being: Create new tables like the old tables:

    -- not quite like this, but close:
    CREATE TABLE logs2 (LIKE logs INCLUDING ALL);
    CREATE TABLE log_parts2 (LIKE log_parts INCLUDING ALL);

Configure the table lookup ranges such that once a future log id is passed, all writes are made to logs2 and log_parts2, e.g.:

    log_record = database.latest_log_record
    some_log_id = log_record.id + 1000
    some_job_id = log_record.job_id + 1000

    mapping = {
      logs: {
        log_id: [
          { range: [0, some_log_id], table: 'logs' },
          { range: [some_log_id, INT_MAX], table: 'logs2' }
        ],
        job_id: [
          { range: [0, some_job_id], table: 'logs' },
          { range: [some_job_id, INT_MAX], table: 'logs2' }
        ]
      },
      log_parts: {
        log_id: [
          { range: [0, some_log_id], table: 'log_parts' },
          { range: [some_log_id, INT_MAX], table: 'log_parts2' }
        ],
        active: :log_parts2
      }
    }

Once the oldest records in the logs2 table have been archived (some_log_id), set the cutoff for archive spoofing to be within the range [some_job_id, INT_MAX] so that any reads for logs before the cutoff do not need to access the database at all. Once there are no reads or writes accessing the older tables, we can choose to do invasive actions such as performing VACUUM FULL logs; VACUUM FULL log_parts. Problems I don't believe it's OK to use the cutoff-based archived log spoofing long-term, as this will mean that any jobs before the cutoff that are restarted will not return the updated log content. We may choose to circumvent this by creating new log records rather than mutating the old ones, but such a solution is probably full of fun gotchas given the current assumptions about log record mutation. Should there be user-facing messaging about our use of cutoff-based archived log spoofing while we are performing maintenance?
If so, is a broadcast sufficient, or should the travis-logs HTTP API be updated so that travis-api can bubble up such information at the per-job level? Nope! Not like this, at least.
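The range-to-table mapping above boils down to a simple first-matching-range lookup, sketched here in Python for illustration (the real travis-logs code is Ruby, and the names and cutoff value below are hypothetical):

```python
# Range-based table name lookup: each (base table, key) maps to a list of
# half-open id ranges, and the first range containing the id decides which
# physical table serves the read/write.
INT_MAX = 2**31 - 1
SOME_LOG_ID = 1_000_000  # hypothetical cutoff chosen ahead of the live sequence

MAPPING = {
    ("logs", "log_id"): [
        ((0, SOME_LOG_ID), "logs"),
        ((SOME_LOG_ID, INT_MAX), "logs2"),
    ],
}

def table_for(base_table, key, value):
    """Return the physical table whose configured id range covers value."""
    for (lo, hi), table in MAPPING[(base_table, key)]:
        if lo <= value < hi:
            return table
    raise KeyError(f"no range covers {key}={value}")
```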
gharchive/pull-request
2017-03-05T19:57:58
2025-04-01T06:46:02.676885
{ "authors": [ "meatballhat" ], "repo": "travis-ci/travis-logs", "url": "https://github.com/travis-ci/travis-logs/pull/79", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
620710307
Add TBox for CLTV prediction

This template runs CLTV prediction in a similar way to: https://cloud.google.com/solutions/machine-learning/clv-prediction-with-offline-training-intro

Dataset: Online Retail Data Set provided by UCI.

- Aggregate on customer_id.
- Use records before the threshold to create features. The target value is CLTV, aggregated over the entire time range from the oldest record to the latest.
- Split the aggregated customer records 8:2 into train & test sets.
- Using the train set, train a regressor that predicts lifetime value from the limited information collected before the threshold.
- Predict lifetime value for the test set, and evaluate the accuracy of the prediction.

|<------------- calculate lifetime value ------------->|
|<- create feature ->|
|--------------------|---------------------------------|-> time
^                    ^                                 ^
oldest           threshold                          latest

@chezou Updated. PTAL
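The threshold split described above can be sketched on toy records (the field names and values here are made up for illustration; the actual box runs against the UCI dataset inside Treasure Data):

```python
# Features see only pre-threshold activity; the target (lifetime value)
# aggregates over the entire time range from oldest to latest.
from datetime import date

orders = [
    ("c1", date(2011, 1, 5), 10.0),
    ("c1", date(2011, 6, 1), 20.0),
    ("c2", date(2011, 2, 1), 5.0),
]
threshold = date(2011, 3, 1)

features, target = {}, {}
for customer, day, amount in orders:
    if day < threshold:
        # pre-threshold spend becomes a feature
        features[customer] = features.get(customer, 0.0) + amount
    # lifetime value accumulates over the whole range
    target[customer] = target.get(customer, 0.0) + amount
```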
gharchive/pull-request
2020-05-19T06:20:36
2025-04-01T06:46:02.705993
{ "authors": [ "takuti" ], "repo": "treasure-data/treasure-boxes", "url": "https://github.com/treasure-data/treasure-boxes/pull/265", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2103658906
feature: doctest support Did you check the tree-sitter docs? [x] I have read all the tree-sitter docs if it relates to using the parser Is your feature request related to a problem? Please describe. Doctests are typically embedded inside docstrings in Python code, e.g. def factorial(n): """Return the factorial of n, an exact integer >= 0. >>> [factorial(n) for n in range(6)] [1, 1, 2, 6, 24, 120] """ Today, the grammar does not parse docstrings and is oblivious to the doctests potentially lurking inside. Describe the solution you'd like The grammar should parse these doctests so that we can have proper editing support. Describe alternatives you've considered No alternatives have been considered. Additional context Some more discussion on the topic: https://github.com/microsoft/pylance-release/discussions/4196 https://www.reddit.com/r/neovim/comments/zbmqcc/is_it_possible_to_have_rust_doc_test_comments/ This would be better suited for injections, so you can then filter out the actual docstrings with queries, and inject "pydoc" into them. Take a look at jsdoc or luadoc for inspirations if you'd like to embark on that! But nothing is actionable here for that, so I'll close this out Documentation on language injections: https://tree-sitter.github.io/tree-sitter/syntax-highlighting#language-injection.
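As background on what such an injection would target, the Python standard library can already locate doctest examples inside a docstring; the sketch below shows the content that a query would need to isolate (the tree-sitter grammar/query side is separate and not shown):

```python
import doctest

# Extract the doctest examples embedded in a docstring with the stdlib
# parser: each Example carries the source expression and its expected output.
docstring = """Return the factorial of n, an exact integer >= 0.

>>> [factorial(n) for n in range(6)]
[1, 1, 2, 6, 24, 120]
"""

examples = doctest.DocTestParser().get_examples(docstring)
sources = [example.source.strip() for example in examples]
wants = [example.want.strip() for example in examples]
```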
gharchive/issue
2024-01-27T16:25:05
2025-04-01T06:46:02.725223
{ "authors": [ "amaanq", "malthe" ], "repo": "tree-sitter/tree-sitter-python", "url": "https://github.com/tree-sitter/tree-sitter-python/issues/251", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1468983138
feat: add maxValue and minValue prop to AreaChart, BarChart, and LineChart Adds an optional maxValue and minValue prop to the following components: AreaChart, BarChart, and LineChart. This allows for more flexibility in setting the domain of the YAxis component from Recharts. This is needed for a use case where the domain might be a percentage (0-100) and the data is only a small percentage. Cheers! Hi @mitrotasios or @christopherkindl! 👋 sorry to bother you directly, but any thoughts on this enhancement? Something we could consider adding? @mitrotasios perfect, I like minValue and maxValue 👍 I've updated the PR and added some stories to showcase/test the props. awesome thanks! How urgent is this for you? Of course, happy to contribute! Awesome library. How urgent is this for you? Moderate urgency. We'd like to have the feature sooner rather than later to help complete a product feature, but it's not critical 👍 cool, we'll try to merge this as soon as we can. Would you maybe like to hop on a call at any time? We would love to hear feedback and more about your use case in general. Feel free to email me: achi@tremor.so Hi @mitrotasios just checking in on this. What is the Tremor release cadence? And is there anything I can help with that may be blocking this from merging into main? Hey @samrose3. There is no regular release cadence unfortunately because we work full time and our schedule changes here and there. We will most likely release this tomorrow though. hey @samrose3, I made a few changes to allow only numeric values as inputs for the minValue and maxValue props. We'd like to keep the API framework agnostic and avoid using recharts-specific inputs as we are considering migrating to another chart library eventually. I hope you understand. Will merge this to beta now. Thanks a lot for the work! 🤗 @samrose3 JFYI, we will probably merge this to main tomorrow, but you can already use the beta version by npm installing @tremor/react@beta-feat-1.3.0 if you like :) Woohoo!!
Happy to help and thank you for accepting the contribution 🚀
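The semantics the new props add can be sketched framework-agnostically (illustrative Python, not Tremor's TypeScript internals): an explicit minValue/maxValue overrides the bound that would otherwise be derived from the data.

```python
# Illustration of the prop behavior: explicit bounds win over data-derived
# bounds, so a small percentage series can still be shown on a 0-100 axis.
def y_domain(values, min_value=None, max_value=None):
    lo = min(values) if min_value is None else min_value
    hi = max(values) if max_value is None else max_value
    return [lo, hi]
```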
gharchive/pull-request
2022-11-30T03:11:59
2025-04-01T06:46:02.743017
{ "authors": [ "mitrotasios", "samrose3" ], "repo": "tremorlabs/tremor", "url": "https://github.com/tremorlabs/tremor/pull/217", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
971251823
Print formatted code closes #13 This PR does not address the cursor movement along the formatted code. The cursor just moves in a straight line; honestly, not sure why, but it looks worse. Looks good on my end. Note: commit f084d0d does not fix cursor alignment. It is set up for alignment, but no work was done to properly align the cursor based on any font. The cursor is now misaligned again because it depended on font size. WordList styling was removed in commit 2940a87. Oh right, whoops
gharchive/pull-request
2021-08-15T23:49:59
2025-04-01T06:46:02.762225
{ "authors": [ "mineugene", "trewjames" ], "repo": "trewjames/sourcetype", "url": "https://github.com/trewjames/sourcetype/pull/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2181199826
Microsoft Entra Authentication We use Microsoft Entra Authentication for our Azure Elastic pool DBs. There is no option for this in the login form. Hi, you can enter a connection string in the source textbox instead of using the Connect dialog to build the connection string. I think this will allow you to use Entra. DBA Dash is using the latest SqlClient, so it should support this. This link has some connection string examples. Let me know if it works. It would be nice to have the configuration GUI allow for the selection of "Microsoft Entra ID with MFA" in the drop-down, so we can put in the EID user name and then it would do the normal browser pop-up to allow the password and MFA for getting the token. I think this is OAuth. We have this working. We used the command line DBADashConfig -c to set them up; example connection string below. We've given the VM running the DBA Dash server a managed identity and then granted that identity the required access on the Azure SQL Servers we want to monitor: "ConnectionString": "Data Source=***********.database.windows.net;Encrypt=True;Trust Server Certificate=True;Authentication=ActiveDirectoryManagedIdentity;Application Name=DBADash", New connection dialog coming soon... Included in 3.10 🚀
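The connection string above is just semicolon-separated keyword/value pairs. A hypothetical helper that assembles one (illustration only; DBA Dash itself is .NET and uses Microsoft.Data.SqlClient, so this is not its actual code):

```python
# Build a SqlClient-style connection string from parts. The keyword names
# mirror the working example above; the helper and defaults are made up.
def build_connection_string(server,
                            auth="ActiveDirectoryManagedIdentity",
                            app_name="DBADash"):
    parts = {
        "Data Source": server,
        "Encrypt": "True",
        "Trust Server Certificate": "True",
        "Authentication": auth,
        "Application Name": app_name,
    }
    return ";".join(f"{key}={value}" for key, value in parts.items())
```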
gharchive/issue
2024-03-12T09:59:49
2025-04-01T06:46:04.618175
{ "authors": [ "DavidWiseman", "EpitomeOfDeath", "jacobgexigo", "rgrwilloughby" ], "repo": "trimble-oss/dba-dash", "url": "https://github.com/trimble-oss/dba-dash/issues/847", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2057520842
Modus Table: row selection issue when density changes. Prerequisites [X] I have searched for duplicate or closed issues [X] I have read the contributing guidelines Describe the issue There is an issue with the row selection when the density property changes. Steps to reproduce: Select a few rows. Change the density value. See the preselected rows are loaded. Select a row. See the previously selected rows are selected again. Reduced test cases No response What operating system(s) are you seeing the problem on? No response What browser(s) are you seeing the problem on? No response What is the issue regarding? @trimble-oss/modus-web-components What version of npm package are you using? No response Priority Medium What product/project are you using Modus Components for? e-builder What is your team/division name? e-builder Are you willing to contribute? Yes Are you using Modus Web Components in production? No response Also, the pagination change event is being triggered by the change to density. Should we trigger a density change event? @cjwinsor any advice on this one? Should this be fixed? @apaddock This is an issue and should be fixed; it is most likely related to state management between the tanstack table and our internal state handling, assuming we are managing that state. If not, we will need to dig a bit deeper. https://user-images.githubusercontent.com/assets/168108000/70f29dc0-b241-46fb-958f-b7b5dc3e2b13 @cjwinsor The problem seems to be with the storybook. I have tested it locally and it seems to be working fine. The issue occurs not only with density changes but with all control items. @yohernandez Are you able to reproduce outside of storybook, or was this only identified as an issue in storybook?
@cjwinsor No, I am not. It's probably an issue in Storybook. @cjwinsor In the modus table component, a watch decorator is used for rowSelectionOptions, which is causing this problem. To resolve this, we can leave the preSelected row as [] in the storybook, or we can remove this function? (modus-table.tsx) @prashanthr6383 Can you explain more about leaving preSelected row as []? Do you mean how it's set in the storybook? @cjwinsor Yes. Whenever a property gets changed, the storybook updates all the properties instead of just the ones that we need. This is why when we change other properties in the storybook, it updates a particular property and sets the default values for the remaining properties. In our case we set the preSelected value as 0, and if some property gets changed, the storybook sets it back to the default value. Can we remove the preselected value if it is not necessary? Let's see what we can do to correct the behavior in the storybook; it seems to be fine outside of storybook. By default we will set the preselected as [] and add a new example to demonstrate the preselection behavior.
gharchive/issue
2023-12-27T15:45:41
2025-04-01T06:46:04.630088
{ "authors": [ "apaddock", "cjwinsor", "prashanth-offcl", "prashanthr6383", "yohernandez" ], "repo": "trimble-oss/modus-web-components", "url": "https://github.com/trimble-oss/modus-web-components/issues/1983", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
321149257
Add mdc-radio-group component mdc-radio buttons should be placed inside of an <mdc-radio-group> unless the DOM structure would make that impossible (e.g., radio-buttons inside of table cells). The mdc-radio-group should have a value property that reflects the currently selected radio-button inside of the group. Individual radio-buttons inside of a radio-group will inherit the name of the group. Releasing as part of v0.39.1 bug fix release. It's the best method to resolve #1300
gharchive/issue
2018-05-08T11:34:02
2025-04-01T06:46:04.632356
{ "authors": [ "trimox" ], "repo": "trimox/angular-mdc-web", "url": "https://github.com/trimox/angular-mdc-web/issues/952", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1112073130
Move SDK sample code to being injected We should not have sample application code embedded directly in the documentation. Instead, we should have a folder called samples which we can then code inject. This will help ensure the documentation and the codebase do not go out of sync with each other. @MichaelEdwardBlack need any help on this issue? @fundthmcalculus I think I should be good on this. I believe most of it has been resolved. There are just a few things here and there that I'll get as I update to 1.4 I'll close this issue then.
gharchive/issue
2022-01-24T01:46:23
2025-04-01T06:46:04.663105
{ "authors": [ "MichaelEdwardBlack", "fundthmcalculus" ], "repo": "trinsic-id/sdk", "url": "https://github.com/trinsic-id/sdk/issues/397", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
312001324
Parsing a Float error Hi, I was using a mappable model like this struct BalanceData: Mappable { var balance = Float() init?(map: Map){ } mutating func mapping(map: Map) { balance <- map["balance"] } } And this is the response I am parsing into that model { "status": 200, "message": "OK", "error": "", "data": { "balance": 797.76 } } This was working just great until I update to Xcode 9.3 and macOS High Sierra this morning. Now BalanceData.balance is getting 0 instead of 797.76, I have to change var balance = Float() to Double() in model in order to get the correct value. Seeing this same issue with our models after moving to Xcode 9.3 (Swift 4.1). I've been able to verify that the behavior doesn't exist on Xcode 9.2 (Swift 4.0.3). We have data from the network coming in as follows: 1 : 2 elements - key : "someCoordinateX" - value : 94.3 Doing the follow operations results in different values based on the Xcode version the project is compiled with. var someCoordinateX: Float? map[kSomeCoordinateXKey].currentValue // this shows 94.3 // Xcode 9.2 someCoordinateX <- map[kSomeCoordinateXKey] // this shows 94.3 // Xcode 9.3 someCoordinateX <- map[kSomeCoordinateXKey] // this shows nil This is with ObjectMapper v. 5.0.0 I can confirm this issue as well. It won't map longitude and latitude for one of our models. fwiw I ran into a similar issue and noticed it doesn't happen for me when running on iOS 11 sim but does on iOS 10 sim Looks like there has been an update to fix this in this ObjectMapper release https://github.com/Hearst-DD/ObjectMapper/releases/tag/3.2.0 I would assume we just need an update to this project to point to that release
gharchive/issue
2018-04-06T14:39:26
2025-04-01T06:46:04.725702
{ "authors": [ "alexpersian", "mattsellars", "michael-mckenna", "quetool", "wemped" ], "repo": "tristanhimmelman/AlamofireObjectMapper", "url": "https://github.com/tristanhimmelman/AlamofireObjectMapper/issues/242", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2220317681
How can I use perf_analyzer with the Python backend? I am using the Python backend, and inside model.py I am calling a sentiment classification model from a Hugging Face pipeline. The model takes one argument: the text for which the user wants a sentiment classification. How can I use perf_analyzer to test concurrent requests and collect metrics? Below is the curl command to get a response from my model:

curl --location --request POST 'http://localhost:8000/v2/models/sentiment/infer' \
  --header 'Content-Type: application/json' \
  --data-raw '{ "inputs": [ { "name": "text", "shape": [1], "datatype": "BYTES", "data": ["I really enjoyed this"] } ] }'

Profiling Python models would be like profiling any other Triton model. Please refer to the quick start guide for more info: https://github.com/triton-inference-server/client/blob/main/src/c%2B%2B/perf_analyzer/docs/quick_start.md Please see the link below for how to provide your own data file: https://github.com/triton-inference-server/client/blob/main/src/c%2B%2B/perf_analyzer/docs/input_data.md

@Tabrizian Thanks for the clarification, now I am able to use perf_analyzer for my use case.
Model Analyzer] Initializing GPUDevice handles [Model Analyzer] Using GPU 0 NVIDIA A100-SXM4-40GB with UUID GPU-d9a0447f-f8fa-9d2f-79fc-ecf2567dacc2 [Model Analyzer] WARNING: Overriding the output model repo path "./rerenker_output1" [Model Analyzer] Starting a local Triton Server [Model Analyzer] Loaded checkpoint from file /model_repositories/checkpoints/0.ckpt [Model Analyzer] GPU devices match checkpoint - skipping server metric acquisition [Model Analyzer] [Model Analyzer] Starting quick mode search to find optimal configs [Model Analyzer] [Model Analyzer] Creating model config: reranker_config_default [Model Analyzer] [Model Analyzer] Creating model config: bge_reranker_v2_onnx_config_default [Model Analyzer] [Model Analyzer] Profiling reranker_config_default: client batch size=1, concurrency=24 [Model Analyzer] Profiling bge_reranker_v2_onnx_config_default: client batch size=1, concurrency=8 [Model Analyzer] [Model Analyzer] perf_analyzer took very long to exit, killing perf_analyzer [Model Analyzer] perf_analyzer did not produce any output. [Model Analyzer] Saved checkpoint to model_repositories/checkpoints/1.ckpt [Model Analyzer] Creating model config: reranker_config_0 [Model Analyzer] Setting instance_group to [{'count': 1, 'kind': 'KIND_GPU'}] [Model Analyzer] Setting max_batch_size to 1 [Model Analyzer] Enabling dynamic_batching [Model Analyzer] [Model Analyzer] Creating model config: bge_reranker_v2_onnx_config_0 [Model Analyzer] Setting instance_group to [{'count': 1, 'kind': 'KIND_GPU'}] [Model Analyzer] Setting max_batch_size to 1 [Model Analyzer] Enabling dynamic_batching [Model Analyzer] [Model Analyzer] Profiling reranker_config_0: client batch size=1, concurrency=2 [Model Analyzer] Profiling bge_reranker_v2_onnx_config_0: client batch size=1, concurrency=2 [Model Analyzer] [Model Analyzer] perf_analyzer took very long to exit, killing perf_analyzer [Model Analyzer] perf_analyzer did not produce any output. 
[Model Analyzer] No changes made to analyzer data, no checkpoint saved. Traceback (most recent call last): File "/opt/app_venv/bin/model-analyzer", line 8, in sys.exit(main()) File "/opt/app_venv/lib/python3.10/site-packages/model_analyzer/entrypoint.py", line 278, in main analyzer.profile( File "/opt/app_venv/lib/python3.10/site-packages/model_analyzer/analyzer.py", line 124, in profile self._profile_models() File "/opt/app_venv/lib/python3.10/site-packages/model_analyzer/analyzer.py", line 233, in _profile_models self._model_manager.run_models(models=models) File "/opt/app_venv/lib/python3.10/site-packages/model_analyzer/model_manager.py", line 145, in run_models self._stop_ma_if_no_valid_measurement_threshold_reached() File "/opt/app_venv/lib/python3.10/site-packages/model_analyzer/model_manager.py", line 239, in _stop_ma_if_no_valid_measurement_threshold_reached raise TritonModelAnalyzerException( model_analyzer.model_analyzer_exceptions.TritonModelAnalyzerException: The first 2 attempts to acquire measurements have failed. Please examine the Tritonserver/PA error logs to determine what has gone wrong.
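For the BYTES "text" input from the curl example earlier in this thread, a perf_analyzer --input-data file could be generated roughly like this. The {"content": ..., "shape": ...} layout follows the input_data docs linked above, but double-check it against your client version:

```python
import json

# Sketch: write a custom input-data file for perf_analyzer matching the
# single-string BYTES input "text" (names taken from the curl example).
payload = {
    "data": [
        {"text": {"content": ["I really enjoyed this"], "shape": [1]}}
    ]
}

with open("input.json", "w") as f:
    json.dump(payload, f, indent=2)
```

A run could then look like `perf_analyzer -m sentiment --input-data input.json --concurrency-range 1:4` (standard perf_analyzer flags; adjust the model name and range to your setup).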
gharchive/issue
2024-04-02T11:51:09
2025-04-01T06:46:04.744537
{ "authors": [ "Tabrizian", "riyajatar37003", "sumittagadiya" ], "repo": "triton-inference-server/model_analyzer", "url": "https://github.com/triton-inference-server/model_analyzer/issues/853", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2359977211
[Pipeliner] NFC: Expose Pipeliner infrastructure for use by other target backends Non-functional changes to expose lib/Dialect/TritonGPU/Transforms/Pipeliner infrastructure for use by other target backends. See use here https://github.com/triton-lang/triton/pull/4148. Sorry for the delay, @pawelszczerbuk could you take a look and give post-commit comments if any?
gharchive/pull-request
2024-06-18T14:14:12
2025-04-01T06:46:04.782112
{ "authors": [ "ThomasRaoux", "sjw36" ], "repo": "triton-lang/triton", "url": "https://github.com/triton-lang/triton/pull/4155", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2546039798
[AMD] Adjust shared layout order in pipeline pass The batch dimension should be the slowest one; other cases are not supported by the MFMA/WMMA/MMA pipeline. For reviewers: this change ensures that the batch dimension is the slowest in the shared layout, which is a limitation of most shared -> mma conversions. We have a related assert in the distributed -> shared layout converter: https://github.com/triton-lang/triton/blob/main/lib/Conversion/TritonGPUToLLVM/MemoryOpToLLVM.cpp#L27 and we do the same thing with order in other transformations, like here: https://github.com/triton-lang/triton/blob/main/lib/Dialect/TritonGPU/Transforms/ReduceDataDuplication.cpp#L64 I've hit this issue while experimenting with FMA dot3d kernels. Do you have any perf analysis for this case? Not yet; this never happened before. This can happen only when applying dot to 3d tensors, and as far as I know, there are no target kernels using them.
gharchive/pull-request
2024-09-24T18:08:06
2025-04-01T06:46:04.784914
{ "authors": [ "binarman" ], "repo": "triton-lang/triton", "url": "https://github.com/triton-lang/triton/pull/4796", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
811196625
Use new make target csharedlib and csharedlib_opt This PR requires the upstream PR https://bitbucket.org/tgrassi/krome/pull-requests/92/allow-building-a-shared-library-with-c to be merged first. Pull Request Test Coverage Report for Build 654935645 0 of 0 changed or added relevant lines in 0 files are covered. No unchanged relevant lines lost coverage. Overall coverage increased (+1.1%) to 90.0% Totals Change from base Build 588927478: 1.1% Covered Lines: 9 Relevant Lines: 10 💛 - Coveralls
gharchive/pull-request
2021-02-18T15:20:09
2025-04-01T06:46:04.794173
{ "authors": [ "coveralls", "sloede" ], "repo": "trixi-framework/KROME.jl", "url": "https://github.com/trixi-framework/KROME.jl/pull/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2035861303
Create Downgrade.yml Similar to https://github.com/SciML/SciMLBase.jl/pull/553. The basic intention is to check whether our lower compat bounds are still accurate. This sounds like a good idea to me. I chose to just test threaded runs without coverage tracking since that's reasonably fast and covers a good amount of different aspects. See https://github.com/cjdoris/julia-downgrade-compat-action @jlchan Could you please figure out the minimum version of StartUpDG.jl (or other dependencies) required to let tests pass? Currently, only the DGMulti tests fail, see https://github.com/trixi-framework/Trixi.jl/actions/runs/7177988031/job/19545377350?pr=1771#step:8:5324 Odd, the failures are with OrdinaryDiffEq.jl, StaticArrays.jl, and Static.jl. I'll try to figure out what broke, but it's not immediately clear. There is also /home/runner/work/Trixi.jl/Trixi.jl/examples/dgmulti_2d/elixir_euler_fdsbp_periodic.jl elixir_euler_fdsbp_periodic.jl: Error During Test at /home/runner/work/Trixi.jl/Trixi.jl/test/test_trixi.jl:228 Got exception outside of a @test LoadError: UndefKeywordError: keyword argument `N` not assigned Stacktrace: [1] StartUpDG.RefElemData(elem::NodesAndModes.Quad, approx_type::SummationByPartsOperators.PeriodicDerivativeOperator{Float64, 2, 2, SummationByPartsOperators.FastMode, SummationByPartsOperators.Fornberg1998, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}) @ StartUpDG ~/.julia/packages/StartUpDG/dgtGs/src/RefElemData.jl:142 @ranocha the issue with StartUpDG.jl is that I added the SummationByPartsOperators.jl extension in 0.17.7, so to get the FDSBP tests to pass, we need 0.17.7.
Hopefully fixed the failures via https://github.com/trixi-framework/Trixi.jl/pull/1771/commits/f134e7a65af7668244acb04e1dfc5ecf4d40b9c7. Let's wait for CI. Hm... it looks like Downgrade.yml is still failing after setting the StartUpDG.jl compat bound to 0.17.7 (the most recent release). I guess this is related to another package's lower compat bound? @ranocha since I can't reproduce the CI errors for DGMulti locally, I was planning on just manually setting all compat bounds to the most recent versions and then backtracking them to their current versions one by one. Do you mind if I do that in this PR or would you prefer that in a new one? Feel free to do so here. I will have a look. It looks like all commits after 8a81714 did not change the test failures. It was working locally, but I didn't test the multi-threaded time integration... The CI failure seems to be related to LoopVectorization.jl and StrideArrays.jl. Maybe try bumping one of these two (even higher)? Feel free to go ahead with it. I don't have the bandwidth to do so this week. I gave it another try and found that https://github.com/JuliaSIMD/StrideArrays.jl/issues/77 was the issue. Thus, bumping StrideArrays to v0.1.26 fixed the problem locally for me.
gharchive/pull-request
2023-12-11T14:53:50
2025-04-01T06:46:04.803357
{ "authors": [ "JoshuaLampert", "jlchan", "ranocha" ], "repo": "trixi-framework/Trixi.jl", "url": "https://github.com/trixi-framework/Trixi.jl/pull/1771", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1588939235
Reset watch_history_color How can I reset the watch_history_color to the default (not blue)? If I set it to undef, I lose the video details text entirely. By setting it to "black".
gharchive/issue
2023-02-17T08:28:56
2025-04-01T06:46:04.804990
{ "authors": [ "Flashwalker", "trizen" ], "repo": "trizen/youtube-viewer", "url": "https://github.com/trizen/youtube-viewer/issues/425", "license": "artistic-2.0", "license_type": "permissive", "license_source": "bigquery" }
1483915704
Question about performance The documentation says that reading 4,000,000 cells per second is supported. Under what circumstances was the test performed? For example, random-access read? Sequential read? Traversing columns in a specific order while iterating each row? I wrote a program that reads cells into memory by iterating each row and reading each cell in the same order (not a sequential read within each row). It took 10 seconds to read them all. Neither CPU nor disk I/O is really high (CPU < 15%, disk I/O < 0.2% in Task Manager under Windows 11). The program is built using MinGW and runs on Windows 11. I have no idea what prevents the program from making better use of the CPU. Any hint will be appreciated. Thanks! There's a file Benchmark.cpp; I've never run it, but it will give you an idea of how the tests were conducted.
Then random read from the map is back to being crazy fast Command: CEXTXLSXBENCH3 0.485968 seconds str=100223, int=27026, reals= 72751, total = 200000 here’s a quick test, it just a quick proof of concept void XlsxUnitTests::bench3() { auto hasher = [](size_t row, size_t col) { std::size_t h1 = std::hash<size_t>{}(row); std::size_t h2 = std::hash<size_t>{}(col); return h1 ^ (h2 << 1); }; XlsxPerfTimer timer; timer.start(); OpenXLSX::XLDocument doc; doc.open(wstr_to_utf8(benchPath)); auto wks = doc.workbook().worksheet("Sample-spreadsheet-file"); std::unordered_map<size_t, OpenXLSX::XLCellValue> map; map.reserve(wks.rows().rowCount() * wks.rows().begin()->cellCount()); for (size_t irow = 0; auto & row : wks.rows()) { irow++; const std::vector<OpenXLSX::XLCellValue>& values = row.values(); for (size_t icol = 0; const auto & value : values) { icol++; map[hasher(irow, icol)] = value; } } std::vector<std::string> strs; std::vector<int> ints; std::vector<double> reals; std::random_device rd; std::mt19937 gen(rd()); std::uniform_int_distribution<> distrCol(1, 8); std::uniform_int_distribution<> distrRow(1, 50000); for (int idx = 0; idx < 200000; idx++) { const auto& value = map[hasher(distrRow(gen), distrCol(gen))]; switch (value.type()) { case OpenXLSX::XLValueType::String: strs.push_back(value.get<std::string>()); break; case OpenXLSX::XLValueType::Integer: ints.push_back(value.get<int>()); break; case OpenXLSX::XLValueType::Float: reals.push_back(value.get<double>()); break; } } timer.end(); auto total = strs.size() + ints.size() + reals.size(); acutPrintf(_T("\nstr=%d, int=%d, reals= %d, total = %d"), strs.size() , ints.size() , reals.size(), total); } Indeed! reading random by cell is indeed much much slower, down to about 2000 read per second. I found that if I read sequential and put the values in an std::unordered_map. 
Then random read from the map is back to being crazy fast Command: CEXTXLSXBENCH3 0.485968 seconds str=100223, int=27026, reals= 72751, total = 200000 here’s a quick test, it just a quick proof of concept void XlsxUnitTests::bench3() { auto hasher = [](size_t row, size_t col) { std::size_t h1 = std::hash<size_t>{}(row); std::size_t h2 = std::hash<size_t>{}(col); return h1 ^ (h2 << 1); }; XlsxPerfTimer timer; timer.start(); OpenXLSX::XLDocument doc; doc.open(wstr_to_utf8(benchPath)); auto wks = doc.workbook().worksheet("Sample-spreadsheet-file"); std::unordered_map<size_t, OpenXLSX::XLCellValue> map; map.reserve(wks.rows().rowCount() * wks.rows().begin()->cellCount()); for (size_t irow = 0; auto & row : wks.rows()) { irow++; const std::vector<OpenXLSX::XLCellValue>& values = row.values(); for (size_t icol = 0; const auto & value : values) { icol++; map[hasher(irow, icol)] = value; } } std::vector<std::string> strs; std::vector<int> ints; std::vector<double> reals; std::random_device rd; std::mt19937 gen(rd()); std::uniform_int_distribution<> distrCol(1, 8); std::uniform_int_distribution<> distrRow(1, 50000); for (int idx = 0; idx < 200000; idx++) { const auto& value = map[hasher(distrRow(gen), distrCol(gen))]; switch (value.type()) { case OpenXLSX::XLValueType::String: strs.push_back(value.get<std::string>()); break; case OpenXLSX::XLValueType::Integer: ints.push_back(value.get<int>()); break; case OpenXLSX::XLValueType::Float: reals.push_back(value.get<double>()); break; } } timer.end(); auto total = strs.size() + ints.size() + reals.size(); acutPrintf(_T("\nstr=%d, int=%d, reals= %d, total = %d"), strs.size() , ints.size() , reals.size(), total); } Thanks!! After trying the approach of bench 1 and 3, I found that the way of calling the api does affect the performance. Using the rows() call seem to load everything into memory and make it super fast to read. Finally get it done. Thank you
gharchive/issue
2022-12-08T07:25:27
2025-04-01T06:46:04.824999
{ "authors": [ "AlanYiNew", "CEXT-Dan" ], "repo": "troldal/OpenXLSX", "url": "https://github.com/troldal/OpenXLSX/issues/198", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1447725969
🛑 PiHole Backup is down In 6685a23, PiHole Backup (https://pihole-backup.tronflix.app/admin/login.php) was down: HTTP code: 520 Response time: 206 ms Resolved: PiHole Backup is back up in ae984fd.
gharchive/issue
2022-11-14T09:47:19
2025-04-01T06:46:04.827834
{ "authors": [ "tronyx" ], "repo": "tronyx/upptime", "url": "https://github.com/tronyx/upptime/issues/3197", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1458101999
🛑 Bazarr is down

In ed06b7b, Bazarr (https://tronflix.app/bazarr/system/status) was down:

HTTP code: 521
Response time: 51 ms

Resolved: Bazarr is back up in 5832555.
gharchive/issue
2022-11-21T15:08:12
2025-04-01T06:46:04.830253
{ "authors": [ "tronyx" ], "repo": "tronyx/upptime", "url": "https://github.com/tronyx/upptime/issues/3645", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1907082101
🛑 SABnzbd is down

In 8c82dc9, SABnzbd (https://tronflix.app/sabnzbd/) was down:

HTTP code: 302
Response time: 116 ms

Resolved: SABnzbd is back up in 85ceac8 after 46 minutes.
gharchive/issue
2023-09-21T14:11:58
2025-04-01T06:46:04.832607
{ "authors": [ "tronyx" ], "repo": "tronyx/upptime", "url": "https://github.com/tronyx/upptime/issues/4148", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
974438224
Chain emmet invocations

Hey 👋 and thanks for the lib ❤️

I've got a small issue where I'm not sure if it's intended or not. There's another issue which mentions something similar, but sadly it was not clearly stated how it was resolved in the end.

After I invoke an emmet abbreviation, my cursor gets placed in between the tags. Now if I start typing, I would expect to see some IntelliSense suggestions again (emmet or otherwise), but nothing is shown. You can try out what I mean in the StackBlitz. Thanks for your time!

https://user-images.githubusercontent.com/5793380/130040789-3dc4d526-fd5f-4719-8f80-f3b65871a453.mov

I think it is caused by the Monaco editor somehow keeping the text between the tags selected until you move the cursor out. Same behavior on StackBlitz, too.

Seems to be different behavior between monaco-editor and VS Code; I have no idea how to fix this.

Hey 👋, a colleague of mine stumbled over a Monaco editor setting which seems to fix this issue nicely. See this SO answer for more information. What it comes down to is setting the snippetsPreventQuickSuggestions toggle to false:

```js
const editor = monaco.editor.create(element, {
  value: value,
  language: myLanguageId,
  theme: myThemeId,
  suggest: {
    snippetsPreventQuickSuggestions: false
  }
});
```

I'll close this issue with this comment as the problem seems to be resolved.

Kind regards

Seems it can't be integrated into this plugin, but it's still useful when building a fully functional editor, thanks.
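For context, the toggle in question lives under the `suggest` key of the editor construction options. A minimal sketch of where it sits, with the actual monaco/emmet-monaco-es calls left commented out since they need a browser environment (the `value` and `language` choices here are just illustrative placeholders):

```javascript
// Sketch only — assumes monaco-editor and emmet-monaco-es are loaded in a browser:
// import * as monaco from "monaco-editor";
// import { emmetHTML } from "emmet-monaco-es";
// emmetHTML(monaco); // enable emmet expansion for HTML

const editorOptions = {
  value: "<div></div>",
  language: "html",
  suggest: {
    // Let quick suggestions keep firing while a snippet placeholder
    // (the spot between the freshly expanded tags) is still active.
    snippetsPreventQuickSuggestions: false,
  },
};

// const editor = monaco.editor.create(document.getElementById("app"), editorOptions);
```

The key point is that the option is nested inside `suggest`; setting it at the top level of the options object has no effect.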
gharchive/issue
2021-08-19T09:02:52
2025-04-01T06:46:04.837405
{ "authors": [ "simerlec", "troy351" ], "repo": "troy351/emmet-monaco-es", "url": "https://github.com/troy351/emmet-monaco-es/issues/98", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }