Dataset columns: added (string, dates 2025-04-01 04:05:38 to 2025-04-01 07:14:06), created (timestamp[us], dates 2001-10-09 16:19:16 to 2025-01-01 03:51:31), id (string, lengths 4 to 10), metadata (dict), source (string, 2 classes), text (string, lengths 0 to 1.61M)
2025-04-01T04:10:24.564465
2024-03-07T13:52:58
2173918284
{ "authors": [ "Kallinteris-Andreas", "emlynw", "jwbirkbeck", "reginald-mclean" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14143", "repo": "Farama-Foundation/Metaworld", "url": "https://github.com/Farama-Foundation/Metaworld/issues/467" }
gharchive/issue
Incorrect rendering of goal spheres in reach-v2 tasks

Hi All, I'm raising this for the reach-v2 environment, but it might apply to other tasks. When rendering all the available tasks using the code below, the red sphere does not change position. This leads users to mistakenly conclude that the target position is the same across tasks. I believe this is because the reset_model method only updates self._target_pos, which has no impact on the rendering within MuJoCo. The reward function is therefore correct across tasks, but the rendering is incorrect. If this is correct, a likely fix is to add a line to the end of the reset_model method:

```python
self._set_pos_site('goal', self._target_pos)
```

As a related issue, the potential confusion is made worse by the goal parameter being a fixed constant (from the init, self.goal = np.array([-0.1, 0.8, 0.2])). Under 'least surprise', I think self.goal should provide the user with the same information as self._target_pos.

```python
import metaworld

ml1 = metaworld.ML1('reach-v2')
env = ml1.train_classes['reach-v2'](render_mode='human')

while True:
    for task in ml1.train_tasks:
        env.set_task(task)
        env.reset()
```

Hi, thanks for posting this. I was using the push-v2 environment and didn't realize that the sphere was meant to move. I tried adding self._set_pos_site('goal', self._target_pos) to the reset_model method, but the sphere still didn't move between episodes. Adding it to the evaluate_state method seems to work, though.

What mujoco version are you running?

2.3.7

This is definitely something to look into; I will try to look into it tomorrow or over the weekend. (Reginald McLean, Lead Maintainer of Meta-World)

Hi @emlynw and @jwbirkbeck, thanks for letting us know about this issue. I am going to ask you to test the environments that you are using by making a slight modification to the reset function in sawyer_xyz_env.py in metaworld/envs/mujoco/sawyer_xyz/. It should look like this after adding one line:

```python
def reset(self, seed=None, options=None):
    self.curr_path_length = 0
    obs, info = super().reset()
    self.model.site('goal').pos = self._target_pos
    mujoco.mj_forward(self.model, self.data)
    self._prev_obs = obs[:18].copy()
    obs[18:36] = self._prev_obs
    obs = np.float64(obs)
    return obs, info
```

Adding self.model.site('goal').pos = self._target_pos into sawyer_xyz_env.py causes issues with other environments that don't have a 'goal' key, such as the coffee environments. As @jwbirkbeck said, adding it into the specific environment's reset_model method works, and this doesn't affect the other environments (I had to add it as the first line of the method, not at the end).

Ah yep, the goal is different.

The following list of environments will pass with that fix: ['basketball-v2', 'box-close-v2', 'dial-turn-v2', 'door-close-v2', 'door-open-v2', 'hand-insert-v2', 'drawer-close-v2', 'drawer-open-v2', 'hammer-v2', 'lever-pull-v2', 'peg-insert-side-v2', 'pick-place-wall-v2', 'pick-out-of-hole-v2', 'reach-v2', 'push-back-v2', 'push-v2', 'pick-place-v2', 'plate-slide-v2', 'plate-slide-side-v2', 'plate-slide-back-v2', 'plate-slide-back-side-v2', 'peg-unplug-side-v2', 'soccer-v2', 'stick-push-v2', 'stick-pull-v2', 'push-wall-v2', 'reach-wall-v2', 'shelf-place-v2', 'sweep-into-v2', 'sweep-v2', 'window-open-v2', 'window-close-v2']

This is going to be fixed by PR #473.
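For readers who land here first, here is a minimal sketch of the per-environment workaround discussed above, factored into a helper. The helper name is mine, not Metaworld's; the site assignment and mj_forward call are taken from the thread:

```python
import mujoco

def sync_goal_site(env):
    """Mirror env._target_pos into the MuJoCo 'goal' site so the rendered
    sphere matches the reward target. Per the thread, call this at the START
    of the environment's reset_model, not at the end."""
    env.model.site('goal').pos = env._target_pos
    mujoco.mj_forward(env.model, env.data)
```

Note that, as pointed out above, this only applies to environments that actually define a 'goal' site; environments without one (e.g. the coffee tasks) would fail on the site lookup.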
2025-04-01T04:10:24.566093
2024-10-19T09:41:10
2598958500
{ "authors": [ "sundarielango95" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14144", "repo": "Farama-Foundation/Miniworld", "url": "https://github.com/Farama-Foundation/Miniworld/issues/116" }
gharchive/issue
[Question] Is there a way to get a top view of the environment as the observation, rather than the agent's POV?

I want to change the observation that gets sent from the environment to the RL model. Is there a way to do this? Please let me know.

NVM, I just had to use env.render_top_view()
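A minimal sketch of that answer, assuming a current gymnasium-based Miniworld install; the environment id and the unwrapped access are assumptions on my part, while render_top_view() is the method named above:

```python
import gymnasium as gym
import miniworld  # noqa: F401 -- importing registers the MiniWorld-* envs

env = gym.make("MiniWorld-OneRoom-v0")  # hypothetical choice of environment
obs, info = env.reset()                 # obs is the agent's first-person view
top = env.unwrapped.render_top_view()   # RGB array with the bird's-eye view
print(top.shape)
```

You could then feed top to your RL model instead of (or alongside) the default first-person observation.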
2025-04-01T04:10:24.573157
2019-12-30T16:02:24
543929042
{ "authors": [ "Mesqualito", "yfain" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14145", "repo": "Farata/angulartypescript", "url": "https://github.com/Farata/angulartypescript/issues/21" }
gharchive/issue
ObservableMedia deprecated?

My compiler complains about https://github.com/Farata/angulartypescript/blob/master/code-samples/Angular7/chapter7/ng-auction/src/app/home/home.component.ts. It seems ObservableMedia is deprecated: https://github.com/angular/flex-layout/issues/989. Do I have to update my code accordingly to be able to follow your book?

I believe it was fixed in the Angular8 version of the book code samples.

I solved it for myself here by using:

```typescript
constructor(private media: MediaObserver, private productService: ProductService) {
  this.products$ = this.productService.getAll();
  this.columns$ = this.media.asObservable().pipe(
    map(mc => <number>this.breakpointsToColumnsNumber.get(mc[0].mqAlias))
  );
}
```

Of course, I am not sure if this is very clever; I am a beginner ;-) I didn't find the relevant changes in your repo, and it stopped me at this point from carrying on with learning, or I would have to step back to Angular 6. In the upcoming chapters the code for your book is divided into server and client, so maybe I'll find it somewhere in the next days :-) Thank you for your good work; the book is very recommendable, a good companion for learning in an ever-changing vocation, with lots of details and valuable hints!
2025-04-01T04:10:24.577446
2021-07-08T15:20:21
939971564
{ "authors": [ "SoaresMG" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14146", "repo": "Farfetch/react-carousel", "url": "https://github.com/Farfetch/react-carousel/pull/23" }
gharchive/pull-request
Sync beta

Sync beta and master

:tada: This PR is included in version 1.2.0 :tada:

The release is available on:
- npm package (@latest dist-tag)
- GitHub release

Your semantic-release bot :package::rocket:
2025-04-01T04:10:24.578429
2023-08-18T17:23:07
1857046834
{ "authors": [ "Farfi55" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14147", "repo": "Farfi55/CookedUp", "url": "https://github.com/Farfi55/CookedUp/issues/10" }
gharchive/issue
MeatPatty left to burn after Bot's recipe expires

A possible solution is to add a new state that removes burning ingredients from the stoveCounter, which should have a higher priority than the other states. This would also mean creating a priority system for player states.

Fixed as suggested.
2025-04-01T04:10:24.600364
2020-02-28T21:16:55
573018346
{ "authors": [ "RickCarlino", "muenchris", "whitecapsO" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14148", "repo": "FarmBot/farmbot_os", "url": "https://github.com/FarmBot/farmbot_os/issues/1165" }
gharchive/issue
[Farmware] Did you stop supporting the local webserver on port 27347?

Expected Behavior
Connecting to the local webserver with REST should give access, as documented here: https://hexdocs.pm/farmbot/Farmbot.BotState.Transport.HTTP.html

Actual Behavior
The webserver on the Pi refuses the connection.

Steps to Reproduce
Try a REST call with a valid token in the Bearer header to http://farmbotIP:27347/api/v1/bot/state

@muenchris This was permanently removed from FBOS starting with v8. We recommend using AMQP or FarmBotJS for streaming state updates in v8.

Do these two APIs work locally, or do I have to install the whole FB Cloud infrastructure on-site? The beauty of the previous APIs was that I could run emergency code locally when the connection to the cloud is unreliable or down.

@muenchris FarmBotJS and AMQP require a connection to the home server (usually my.farm.bot unless you are self-hosting). If you wish to self-host your infrastructure on a LAN for increased connectivity, the instructions can be found here. Please note that the only OS distribution we officially support is Ubuntu, and the only supported hardware architecture is x86_64 (ARM is not supported).

Any special version of Ubuntu? Is your Farmware no longer using CeleryScript? Does that mean all move and water commands come from the cloud, nothing local anymore? If so, I guess I have to find a way to use my own controller for the bot; I don't want to be dependent on a cloud. Also, the data I create is my data, and it will not be shared with cloud services that I do not give explicit permission to (very similar to the personal-data rules of the GDPR in Europe). These regulations are not yet very popular in the US, but they will come once privacy becomes more of an issue here.

"Any special version of Ubuntu?" They were last tested and working on a fresh install of Ubuntu 18.10.

"Is your Farmware no longer using CeleryScript?" It is still using CeleryScript, but over Unix sockets. It is best to use a wrapper like Farmware Tools. This will allow compatibility with future versions if the underlying platform must change.

"Does that mean all move and water commands come from the cloud? Nothing local anymore?" Farmware still exists in FBOS v9. The underlying APIs have changed, though.

"I don't want to be dependent on a cloud. Also the data I create is my data." We agree entirely. You don't need to use our servers if you don't want to. Really, it's OK! That's why we always have and always will give users the option to self-host their devices. It's also why we leave our entire source code available for inspection and modification under a permissive open-source license (something that many cloud and IoT companies are unwilling to do for their customers). The software you see on GitHub is the same software that is running in the cloud. The only caveat to the self-hosting route is that support is provided on a best-effort basis. FarmBot only has two full-time developers, so we cannot spend too much of our time helping people get set up or training people on Docker and Rails. With that being said, plenty of folks are self-hosting their devices just fine, and we will always provide this as an option to our customers.

Thank you for your excellent support and answers. I have worked many years at Siemens and with KUKA and would love to experiment with the FarmBot. It's a great test case for our industrial IoT solution, but that solution is fully based on .NET and requires at least some way to get to the Linux on the Pi. Unix sockets are something we are using on other projects, so I might go this route next (running my IoT solution side by side with yours on the Pi and communicating via Unix sockets), if this is documented somewhere? Again, thanks, and keep up the good work.

@muenchris Unlike TCP sockets, Unix sockets are not accessible over a network, and they should be considered an internal API (they are an interprocess communication tool provided by the OS). I would highly recommend against directly interacting with the Unix sockets, because if you base your entire application around a private API, it may break later on. If you use Farmware Tools or FarmBotJS, it is easier for us to provide clean upgrades, keeping the same external API while possibly changing the underlying infrastructure.

The reality with on-device software is that it has eaten up huge chunks of our developer hours (sometimes entire work weeks). I only have evidence of 8 other people actually authoring Farmware. This GitHub issue is a good example: we removed the local webserver over a year ago and you are the first person to even notice. We've spent countless weeks implementing these features, hunting bugs, and making compromises elsewhere in the system to allow for on-device scripting support. I can't say it makes sense from a business perspective to continue encouraging people to write on-device software, at least with the way it is currently architected. Conversely, telling people to use wrappers like FarmBotJS does make sense, because it is the tool that we use internally to build things like the Web App. The Web App is essentially just a GUI front end to FarmBotJS, and if you write an application on top of it, you can be highly certain that FarmBotJS will continue to work as intended (it's what the Web App uses, after all).

Moving forward, we are thinking that the Farmware v3 library may be substantially different from the current system, which has been plagued with problems and poorly adopted by third-party developers. I had an interesting conversation with @whitecapso in the forum where they propose some ideas about what a next-gen Farmware system could look like, along with comparisons to some other platforms that do similar things. In v3 of Farmware, I could see us transitioning to an integrated IDE (such as Monaco) in the web app that gives users a choice of a few popular languages well suited to embedded systems (such as MicroPython or Lua). The idea behind the integrated editor is that you would not need to host your Farmware code externally or on GitHub, and you would not need to install an IDE or runtime on your host device. This would be a huge step in a different direction from the current Farmware system, and I am skeptical of the usefulness of providing an upgrade path for an API that has only been used by 9 people (keeping in mind we have shipped literally thousands of FarmBots at this point). It would be better to give the v3 system a clean slate and not drag legacy problems into the new system.

I welcome your feedback and ideas on this, although I think it may be better to move the conversation over to the forum, since GitHub issues are better suited for troubleshooting and your main questions have been answered. It would also be good to hear more feedback from the other folks who have written third-party integrations, since we want to build a solution that is useful for third-party development.

I do understand your point. My problem is that I am in a rural environment with very spotty internet. The risk that my crop is not watered because of "internet issues" does not inspire high confidence. Down the road I envision fields of FarmBots, completely autonomous, only "monitored" via the cloud, but with all the Farmware running even if the cloud is down. If this is to scale, the "on-premise" solution needs to be easier to set up than it is right now; people with an interest in farming are not necessarily robotics engineers. If the next generation of Farmware goes towards this goal, I am all in. If not, how can I help make this happen?

Hi guys, please forgive me for jumping in. I read the conversation and had some thoughts on future architectures for folks with poor internet connectivity. The easiest solution is to have the web app run on a local web server, with the option of Ethernet or local WiFi connectivity to the FBOS on the Pi. This would require an easier way to deploy, configure, and update the web app and associated libraries for non-technical folks. It would be nice if the web app ran on the Pi and you could access it over the FB WiFi (similar to how the config app runs), but I'm not sure the Pi is powerful enough to run both the FBOS and the web app. The Pi could then be shipped with all the software installed. Both of these solutions would require a new upgrade process and would no doubt create the maintenance headaches that come with local deployments, so a use-at-your-own-risk / best-efforts approach may initially be needed.

Dealing with Network Partitions
"The risk that my crop is not watered because of 'internet issues' is not generating high confidence." This issue has been fixed on 1.5 devices: adding an RTC chip to your device will prevent farm events from failing to execute, and it is possible to upgrade older devices. We've intentionally cut the internet off of devices in the warehouse for days on end and things are OK as long as the RTC chip has a good battery. The problem before was that FarmBot's only means of keeping time was via an NTP server, which led to clock skew (failed farm events, invalid SSL certs, etc.). We've still got a way to go, but the addition of RTC support has greatly reduced failures during network partitions.

FarmBot Product Road Map
"I envision fields of farmbots, completely autonomous." We're not focused on that use case. Although we will continue to design the software to handle network partitions, we're still focused on small to large automated gardens at home and in schools where there is a decent internet connection. We plan to introduce "workspaces" where multiple users control multiple devices, but again, this is from the perspective of a home/school user operating at garden scale.

Self-Hosting as a Secondary Focus at FarmBot
"...the 'on-premise' solution needs to be easier to set up." "This would require an easier way to deploy, configure and update the web app and associated libraries for non-technical folks." Before I respond to this, I would like to re-state what I said in a previous message (in case anyone is joining the conversation late, finding it off of Google, etc.): FarmBot will always allow customers to self-host their devices, and the web server code will ALWAYS be open source. With that being said, cloud users are our main priority, and the device is architected with that assumption in mind. Let me explain this in more detail, since the proposal for an autonomous, 100% off-grid FarmBot comes up pretty often in discussions.

When we founded FarmBot, Inc. we had a vision of creating a consumer-facing robot that would gain mainstream adoption, not just among developers but also among non-technical people. I don't think self-hosting has a place in the mainstream, and I don't think it ever has been or ever will be a solution for non-technical folks. With the understanding that FarmBot aspires to be a consumer product rather than a product for "power users", I don't see a future in self-hosting as the first option for users. We will always allow self-hosting for power users or people who do not want us to store their data, but it's never going to be our main focus. If a motivated developer wishes to create a more streamlined "FarmBot Easy Installer", I would absolutely support their efforts, but we can't provide development support for such a project because, again, our main focus is non-technical users on the official server at my.farm.bot. That's not to say we don't support self-hosted users; it just means that the majority of our time and effort will be spent where the majority of our customers are, which is at my.farm.bot.

Consider the following example: given a large list of self-hosted software packages, it is nearly impossible to come up with an example of one being successfully deployed by a non-technical user, even among the ones that have amazingly simple installers. I highly doubt a non-technical family member (especially a "digital non-native") would be able to successfully and safely deploy any of them. Running a server has serious security implications that end users are not aware of. This is one of the reasons that projects like WordPress have (I think unfairly) gained a reputation as "insecure".

Alternatives to Self-Hosting
"If the next generation of Farmware goes towards this goal, I am all in. If not, how can I help make this happen?" The most helpful thing for us would be to hear how you plan to use Farmware and what sort of features would make it useful to you. We are still in the planning phase at this point, and input from users has been extremely helpful. Having the integrated editor will be a big win for users, since it eliminates the need to host Farmware externally on sites like GitHub and is one less whitelist exception that enterprise/school users need to add to their firewall. Furthermore, if someone in the community wishes to create a self-hosted installer to simplify the self-hosting process, we would be happy to answer questions. What sort of things would you do with such a solution?
2025-04-01T04:10:24.606698
2023-05-26T22:16:11
1728299525
{ "authors": [ "FarzamMohammadi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14149", "repo": "FarzamMohammadi/ado-express", "url": "https://github.com/FarzamMohammadi/ado-express/issues/107" }
gharchive/issue
UI Adjustments

- Put "Source Environment" input before "Target Environment"
- Auto-scroll down in the terminal on new inputs
- Add "have a good day!" to the websocket messages after deployment, to give it that full retro feel
- Make the terminal selectable via tab (and, if possible, controllable by the up and down buttons)
- Terminal auto-scroll is stopped when the user manually scrolls up past the 87% scroll-height threshold (scroll height is counted from top to bottom)
2025-04-01T04:10:24.673971
2022-06-01T03:15:41
1254796225
{ "authors": [ "reconman", "saikai26" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14150", "repo": "Fate-Grand-Automata/FGA", "url": "https://github.com/Fate-Grand-Automata/FGA/issues/1196" }
gharchive/issue
Fail to press the skill

Preparation
[X] I tested the latest release
[X] I looked at other issues (even the closed ones)
[X] I read the Troubleshooting Guide

FGO server: JP
FGA build number: 1676

Describe the bug
Auto doesn't click skills and proceeds as if it had, like it wants to click where the NP should be and so on.

Video
Uploading 2022-06-01 11-57-27.mp4…

Device model: LDPlayer 4.0.81(32)
Android version: 7
Screen size: 1600x900
RAM: 6

https://user-images.githubusercontent.com/106642178/171320887-bd48e761-62ab-45ed-8b13-ff15048a9dd7.mp4

That's Skill Confirmation and part of the Troubleshooting Guide.
2025-04-01T04:10:24.687817
2023-03-09T04:47:03
1616406731
{ "authors": [ "FearedFusionX" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14151", "repo": "FearedFusionX/Ro-Status", "url": "https://github.com/FearedFusionX/Ro-Status/issues/215" }
gharchive/issue
⚠️ Avatar API Endpoint has degraded performance

In b58a263, Avatar API Endpoint (https://avatar.roblox.com/v1/avatar-rules) experienced degraded performance:
HTTP code: 200
Response time: 1145 ms

Resolved: Avatar API Endpoint performance has improved in fc4a408.
2025-04-01T04:10:24.694970
2022-03-18T03:24:39
1173144687
{ "authors": [ "LaynePeng", "dylan-fan", "excellent-lixl", "hainingzhang", "jarviszeng-zjc", "snowzjx" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14152", "repo": "FederatedAI/FATE-Community", "url": "https://github.com/FederatedAI/FATE-Community/issues/28" }
gharchive/issue
Create a new repo for SIG InterOp

To host the documents and specifications of SIG InterOp, we plan to create a new repo for this purpose. The implementation code is still kept in the FATE main repo.

+LGTM
+1
+1
+1
+1
LGTM

Created InterOp repo.
2025-04-01T04:10:24.741330
2024-04-04T06:23:00
2224599560
{ "authors": [ "FelixNgFender", "MengLinMaker" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14154", "repo": "FelixNgFender/Mu2Mi", "url": "https://github.com/FelixNgFender/Mu2Mi/issues/40" }
gharchive/issue
feat: add local inference services

https://www.acorn.io/resources/blog/introducing-cog-and-containerizing-machine-learning-models

Hey there, cool project. I happen to be working on something kinda similar (for piano-to-MIDI transcription): Musidi. I suggest taking a look at parallel processing for audio-related AI inferencing. With this approach, compute no longer becomes the bottleneck; instead, bandwidth and cold starts are the new bottlenecks. This would massively speed up your transcription. I don't know what model you chose, but the accuracy isn't great. BTW, I'm randomly scouring the internet for people working on similar stuff.

Hi, I really appreciate the suggestion. Could you tell me more about this parallel-processing idea? In the current setup, I send my processing tasks off to Replicate (a GPU/model-provider SaaS), which kicks off the processing and immediately returns the task ID (non-blocking, asynchronous style), and AFAIK there is no limit to how many tasks I can send to Replicate at a time (even though there is a rate limit, albeit a very high one). So I'm not too sure how to approach parallel processing when moving to local services. I'm intending to use Cog to package the local models, BTW (an open-source model-as-a-container tool by Replicate). I'm using Basic Pitch by Spotify. The accuracy isn't great because it's trained to be a general-purpose model, I think. I will experiment with the parameters to come up with "profiles" for different instruments.

I deployed a modified version of ByteDance's piano transcriber to AWS Lambda, using CPU inferencing in parallel. It's surprisingly much faster than deploying to Replicate. According to Replicate, typical inference time would be 6 minutes for this model (audio size unknown); AWS Lambda brings it down to under 30 seconds for an 8-minute audio. However, AWS Lambda has low-bandwidth issues. Ideally, I'm trying to find a way to combine both GPU and parallel inferencing.

Are you chopping the audio into parts, sending them off for processing, and aggregating them in a final Lambda? That sounds interesting. Replicate has issues with cold starts (up to 3-4 minutes in some rare cases) for less popular models, so I'm looking at other options. My vision is to make Mu2Mi self-hostable, so all the models and other components should fit on a single host. For the bandwidth issues, perhaps you can check out scaling out with Modal. They are a PaaS with a "serverless GPU" niche. They provide some nice Python primitives to build out your ML workflows, and they allow you to mix and match CPU/GPU for any part of your workflow. I have never used them, so these are just points I read on their page. Ray Serve is another open-source alternative; this one is really nice, based on my experience.

Yeah, Modal seems very promising. The deployment process seems a little cumbersome (Modal's CEO doesn't like containers). It's on my to-do list. Self-hostable is an interesting vision. The challenge is that optimising AI models is very hardware-specific. I know that ONNX Runtime does some runtime optimisations by checking the host's hardware info. How do you plan to make it self-hostable? Also, interestingly, Spotify's Basic Pitch demo is more accurate than Mu2Mi for some reason. Maybe there's some pitch configuration? I notice some of the lower-pitch notes are being omitted. Oh, it also seems like Ray has security issues; I have to look into it more, though.

I plan to make it self-hostable through Docker Compose, as Cog can generate a Dockerfile for a pre-defined model. It irons out the kinks of using GPU/CUDA cross-platform. One of my concerns is the size of the generated Docker containers: one large model may take up to 20 GB, and times six that's a whole 120 GB SSD, lol. I think Docker has some caching mechanism for similar build-image layers, though, but I still need to look into that. "Oh it seems like Ray has security issues": oh, I didn't know that, lol. I'll reconsider Ray then. As for the Basic Pitch demo, I'll take a look at Spotify's demo's parameters. I'm currently quite busy at school until May, so that may have to wait a little.

A 20 GB container sounds absurdly big. How big is your largest model? I believe the Docker cache mainly applies to docker builds; multi-stage Docker builds will then share the same base-layer image when building. The largest container I had was just over 2 GB, and the model was 150 MB plus PyTorch with Librosa. Eventually I quantised the model to 75 MB in float16 and used ONNX Runtime, which fits into the 250 MB limit of AWS Lambda (in a Docker container, that would be around 1.25 GB). Of course, I didn't include any GPU libraries, though.
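To make the "chop, fan out, stitch" idea above concrete, here is an illustrative sketch; transcribe_chunk, the chunk sizes, and the note-event format are placeholders of mine, not Mu2Mi's or Musidi's actual code, and in production each chunk would go to its own Lambda/worker rather than a local thread pool:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def transcribe_parallel(audio: np.ndarray, sr: int, transcribe_chunk,
                        chunk_s: int = 30, overlap_s: int = 2):
    """Split audio into overlapping chunks, transcribe them concurrently,
    then shift each chunk's note events back to absolute time."""
    hop = (chunk_s - overlap_s) * sr
    starts = range(0, len(audio), hop)
    chunks = [audio[i:i + chunk_s * sr] for i in starts]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(transcribe_chunk, chunks))
    merged = []
    for i, notes in zip(starts, results):
        offset = i / sr
        # notes assumed to be (onset_s, offset_s, pitch) tuples
        merged.extend((on + offset, off + offset, p) for on, off, p in notes)
    return merged  # de-duplication in the overlap regions omitted
```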
2025-04-01T04:10:24.764976
2018-09-03T05:11:31
356359925
{ "authors": [ "FengZhenhua", "MyYaYa", "protossw512" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14155", "repo": "FengZhenhua/Wing-Loss", "url": "https://github.com/FengZhenhua/Wing-Loss/issues/2" }
gharchive/issue
Reimplement question

Hi Feng, thanks for your excellent work on Wing Loss. I'm reimplementing the simple CNN-6 network. After reading your paper, I trained on the AFLW full-protocol dataset; my steps are below.

1. PDB, with Procrustes analysis and PCA. I duplicate images in the under-populated bins (making every bin equal, but for bins with too few images I set an upper bound of 20 duplicates).
2. Data augmentation: random flip, random rotation (-30, 30), random small box jitter, random small image-brightness changes, and so on.
3. CNN-6 network. Firstly, I use batch normalization (why I use BN is explained later). The input image is normalized to (-1, 1), and the output landmarks of the network are normalized to (0~1).
4. Hyperparameters: learning rate 3e-5 to 3e-7 with cosine warm restarts (SGDR), weight decay 5e-4, SGD with momentum 0.9.

Under these settings, the training loss and training NME are good, but on the validation dataset the NME is not good, even poor. I guess it's overfitting, but I think the data is enough and CNN-6 is simple enough, so how does this overfitting happen? As shown in the figure, training NME per batch is good, but eval NME bounces up and down around 0.10. Here is what I want to know:

1. How do you process the AFLW full-protocol dataset properly?
2. I see the model .mat file. Is the output of the network in the range 0 to 64, not normalized to 0~1?
3. I also tried not using BN, but the performance was even worse.

Hi, for your implementation:

1. It sounds like your PDB and data augmentation are correct.
2. For the CNN-6 network, I did not use BN because the network with BN performs worse than the one without.
3. I rescale the intensity of all the pixels of an input image to [0,1] rather than [-1,1], but I do not think this is a big difference.
4. For the output landmarks, I did not rescale them to (0~1); they are between 1 and 64. If you rescale them to (0~1), you should increase the learning rate, perhaps from 3e-5~3e-7 to 1e-3~1e-5.

For your question "How do you process the AFLW full-protocol dataset properly?", I do not quite understand what it means. Could you please explain it further? Hope my answer is helpful for you. Zhenhua

Thanks! The question "How do you process the AFLW dataset properly?" meant how you do PDB and data augmentation. You have told me that what I did is correct, so this question is answered nicely. Fortunately, I found the key reason for my reimplementation's worse performance: it's the random flip. I use TF's Dataset class for feeding data and also use the Dataset's built-in functions for augmentation. Though the output of my Dataset checks out as correct, training behaves like the picture I showed. Now that I have removed the random flip, the performance is back to normal. When I find the actual reason for the degradation caused by random flip, I will explain it here. And lastly, can you tell me the insight behind increasing the learning rate for a smaller output range?

I found the reason why the random flip makes things worse: it's the order of the landmarks. After flipping, the left-eye landmarks end up representing the right eyes. It's so stupid; sorry for bothering you.

@MyYaYa Great to hear that you have solved your problem. For the learning rate, I think it is easy to explain: if you use outputs between 0 and 100, the gradients are amplified 100 times compared with outputs between 0 and 1. Hence, the learning rate should be increased when you use a smaller output range.

A further reminder for image flips: you also have to carefully check the flipped landmarks, not only in order but also in location.

@FengZhenhua Thanks, I will take care of it. Now I close the issue.

@MyYaYa Can I ask what your test NME on the 300W dataset is?
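Since the bug above (flipping the image without remapping the left/right landmark indices) is such a common pitfall, here is a small illustrative sketch; the FLIP_PAIRS below are hypothetical, and the real pairing depends on the dataset's landmark ordering:

```python
import numpy as np

# Hypothetical left<->right index pairs; derive the real ones from your
# dataset's landmark definition (e.g. AFLW's 19-point scheme).
FLIP_PAIRS = [(0, 5), (1, 4), (2, 3)]

def flip_with_landmarks(image: np.ndarray, pts: np.ndarray):
    """Horizontally flip an image and its (N, 2) landmark array, fixing
    both the x-coordinates AND the left/right landmark semantics."""
    flipped = image[:, ::-1].copy()
    pts = pts.copy()
    pts[:, 0] = image.shape[1] - 1 - pts[:, 0]  # mirror x-coordinates
    for l, r in FLIP_PAIRS:
        pts[[l, r]] = pts[[r, l]]               # swap left/right indices
    return flipped, pts
```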
2025-04-01T04:10:24.768518
2019-04-21T20:17:17
435544090
{ "authors": [ "Fenny", "Ilario42" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14156", "repo": "Fenny/ChromecastJS", "url": "https://github.com/Fenny/ChromecastJS/issues/4" }
gharchive/issue
Update the data?

Hello Friend! Great project, the only one that worked! I modified the code according to my needs (I hope this is not a problem). I need to update the vars Media.content, Media.poster, Media.title and Media.description about every 20 seconds. How can I do that? Can you give me a hand? Thank you; Happy Easter!

Hi, did you try to change the global variables as shown in the description?
cc.Media.poster
cc.Media.title
cc.Media.description

@Fenny Yes, cc.Media was changed, but the Chromecast does not refresh the info.

I will look into it this week. If it's possible, I will update the code.

@Fenny I await your update.

@Ilario42 I didn't manage to figure out how to update those variables while casting. I'm closing this for now until someone knows how to fix it.
2025-04-01T04:10:24.779790
2022-06-16T15:51:23
1273756158
{ "authors": [ "gustavojra", "sgoodlett" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14157", "repo": "FermiQC/Molecules.jl", "url": "https://github.com/FermiQC/Molecules.jl/issues/23" }
gharchive/issue
Symmetry TODO list

Write tests for:
- generate_symel_class_map
- rotate_symels_to_mol
- get_euler_angles
- dc_mat
- get_atom_mapping
- where_you_go
- symtext_from_file

Other items:
- Come up with better names for functions, and standardize abbreviations (e.g. ctab vs. chartab)
- Add symtext functions for cubic point groups
- Pretty printing for symels and symtext
- Adjust to changes in Molecule struct (going from Vector{Atoms} to a Fermi-like thingy)

Chartab! tf is ctab

I already changed all the functions to take in Vector{Atom}, if that's what you mean. You can make them compatible with Molecule as well; it would be nice, but not crucial.
2025-04-01T04:10:24.784117
2024-03-17T23:41:50
2190940145
{ "authors": [ "1C0D", "Fevol" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14158", "repo": "Fevol/obsidian-typings", "url": "https://github.com/Fevol/obsidian-typings/issues/38" }
gharchive/issue
another type for files in the explorer:

```typescript
interface View {
    fileItems: fileItem[];
}

interface fileItem {
    collapsible: boolean;
    collapsed: boolean;
    el: HTMLElement;
    selfEl: HTMLElement;
    coverEl: HTMLElement;
    collapseEl: HTMLElement;
    innerEl: HTMLElement;
    childrenEl: HTMLElement;
    info: any;
    view: View;
    file: TFile | TFolder;
    tagEl: HTMLElement;
    parent: fileItem;
}
```

I did this:

```typescript
interface fileItem {
    collapsible: boolean;
    collapsed: boolean;
    el: HTMLElement;
    selfEl: HTMLElement;
    coverEl: HTMLElement;
    collapseEl: HTMLElement;
    innerEl: HTMLElement;
    childrenEl: HTMLElement;
    info: any;
    view: View;
    file: TFile | TFolder;
    tagEl: HTMLElement;
    parent: fileItem;
}
```

It was enough for my needs.

Thanks for the info! This is now released with v1.0.4.
2025-04-01T04:10:24.786123
2018-12-18T17:32:17
392261166
{ "authors": [ "FezVrasta", "tamias" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14159", "repo": "FezVrasta/react-popper", "url": "https://github.com/FezVrasta/react-popper/pull/261" }
gharchive/pull-request
fix: avoid flow error for call to warning() (#260)

https://github.com/FezVrasta/react-popper/issues/260

Passing this.props.getReferenceRef to warning() causes an error with flow-typed, because warning() expects a boolean as its first argument. Avoid the error by casting this.props.getReferenceRef to a boolean.

Thanks! Could you also install the warning flow-typed definitions so we don't get these problems in the future?

Well, that didn't do what I wanted it to do... I'll start over.
2025-04-01T04:10:24.788057
2022-02-11T17:41:46
1132904297
{ "authors": [ "bsb808", "mabelzhang" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14160", "repo": "Field-Robotics-Lab/dave", "url": "https://github.com/Field-Robotics-Lab/dave/issues/191" }
gharchive/issue
Write a tutorial on using Ignition Fuel models

Maybe this already exists externally, but now that we have our DAVE collection, it would be cool to have a wiki tutorial illustrating how to use the models from the collection.

We have a pretty comprehensive tutorial on the Ignition website here: https://ignitionrobotics.org/api/gazebo/6.5/meshtofuel.html. The wiki page should at a minimum link to that, as it gives information about how the model directory should be structured, which is useful for uploading models. It also gives an example of how to use a model in a world file; it's literally a one-step process of copy-pasting the URI from the webpage and throwing it into the SDF <include><uri>.
2025-04-01T04:10:24.830400
2024-08-01T00:01:14
2441151187
{ "authors": [ "Filimoa", "xyzdeclan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14163", "repo": "Filimoa/open-parse", "url": "https://github.com/Filimoa/open-parse/issues/58" }
gharchive/issue
Table Extraction Tool

Description: Another tool for PDF table extraction was released recently; maybe it could be an option to embed? https://github.com/ai8hyf/TF-ID

I will look into this. It'd be helpful if they published more benchmarks of their work; I'm also concerned about the relatively small amount of data it was trained on.
2025-04-01T04:10:24.852938
2018-10-16T19:39:06
370767767
{ "authors": [ "ShaunEdiger", "wheresrhys" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14164", "repo": "Financial-Times/serverless-plugin-healthcheck", "url": "https://github.com/Financial-Times/serverless-plugin-healthcheck/pull/3" }
gharchive/pull-request
doc(configuration): Add missing documentation for stage

Steps to reproduce
1. Follow the instructions in the README to configure a project for healthchecks
2. sls deploy

Expected Result
The plugin will detect the healthchecks specified within the lambda function events.

Actual Result
The plugin does not detect the healthchecks specified within the lambda function events. Healthcheck processing is skipped entirely (HealthCheck: no lambda to check).

What's going on
There is code within the plugin that toggles whether a lambda function will be included for healthcheck processing. It appears to be undocumented.

What this pull request does
The acceptable configuration values have been added to the README. There are also a few typo corrections.

@ShaunEdiger We no longer use this package at FT. If you want the npm project transferred to you, let me know<EMAIL_ADDRESS>
2025-04-01T04:10:24.859797
2018-03-01T21:54:37
301573858
{ "authors": [ "ogvolkov", "rnicholus" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14165", "repo": "FineUploader/fine-uploader", "url": "https://github.com/FineUploader/fine-uploader/pull/1977" }
gharchive/pull-request
Fix reported path when file name also occurs in parent directory name(s)

Brief description of the changes
Fixes the reported path (qqpath) in the case where the file name in question appears in one of its parent directory names; see #1976. The original code used indexOf, so if the first occurrence was not the file name itself, it stripped way too much from the path.

What browsers and operating systems have you tested these changes on?
Chrome 64.0.3282.167, FF 58.0.2, Edge 41.16299.248.0

Have you written unit tests? If not, explain why.
Unit tests are present, both for the normal case (where I used the value the current code returns as a reference) and for the case where a directory name is the same as the file name.

Just to be clear, the only change was fullPath.indexOf(name) -> fullPath.lastIndexOf(name)?

Yes, the functional change is only that. However, to have it tested I've extracted it as a separate function, which is also a bit cleaner IMO.

Thanks for your work on this! I'm going to release this as 5.15.7 shortly.
2025-04-01T04:10:24.870715
2017-02-14T08:08:41
207444637
{ "authors": [ "kangza", "rnicholus" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14166", "repo": "FineUploader/react-fine-uploader", "url": "https://github.com/FineUploader/react-fine-uploader/issues/62" }
gharchive/issue
fine-uploader: how to clear or refresh the dropzone in React

I want to clear or refresh the dropzone in the onComplete callback. I'm using react-fine-uploader in React. I tried $('.react-fine-uploader-gallery-delete-button').click(); but that deletes the file; I would just like to clear the dropzone. Thanks.

Please don't post the same question both here and on Stack Overflow.

Sorry man.

Where are you from, @kangza?

@rnicholus, I'm from Thailand, and you?

I figured you might be. I have a special place in my heart for Thailand. Please post your code and a more detailed description of your problem, and I'll see what I can do to help.

I use fine-uploader in React. I need to clear the dropzone area in the onComplete action:

```javascript
callbacks: {
    onComplete: (id, name, response) => {
        self.fileInfo = response;
        self.setState({ file: response });
        // Here I want a function that just clears the file from the UI.
        // $(".react-fine-uploader-gallery-delete-button").click() deletes
        // the file, but I just want to clear, not delete.
    }
}
```

You get my question?

No problem, I fixed your code formatting a bit. So, when a file is successfully uploaded, you want that file to disappear from the gallery/UI. Is this correct?

Yep man.

Ok, that's an interesting use-case. And, even more interesting, a setStatus method was added to the API in Fine Uploader 5.14.0-beta2. This will allow you to set the status of a completed file as "deleted" without actually deleting the file from your server. For example:

```javascript
callbacks: {
    onComplete: (id, name, response) => {
        self.fileInfo = response;
        self.setState({ file: response });
        uploadWrapper.methods.setStatus(id, 'deleted');
    }
}
```

...and that will cause the file to disappear from the <Gallery />. Note that you will need to have the 5.14.0-beta2 version installed locally for this to work.

Thanks man. How do I update the version? Can I use npm update?

I would suggest adding an entry in your package.json for fine-uploader explicitly: "fine-uploader": "5.14.0-beta3".

Man, I use this in React; what name do I import from fine-uploader? The FineUploader object from fine-uploader-wrappers, right?

```javascript
import FineUploader from 'fine-uploader-wrappers';

callbacks: {
    onComplete: (id, name, response) => {
        self.fileInfo = response;
        self.setState({ file: response });
        FineUploader.methods.setStatus(id, 'deleted');
    }
}
```

Is uploadWrapper an object from fine-uploader-wrappers, right?

```javascript
// Fine Uploader
import FineUploader from 'fine-uploader-wrappers';
import Gallery from 'react-fine-uploader';
import styles from 'react-fine-uploader/gallery/gallery.css';

callbacks: {
    onComplete: (id, name, response) => {
        self.fileInfo = response;
        self.setState({ file: response });
        FineUploader.methods.setStatus(id, 'deleted');
    }
}
```

I've got a problem with setStatus; did I forget something? Thank you very much.
2025-04-01T04:10:24.892591
2023-01-03T18:38:13
1517776533
{ "authors": [ "DASPRiD", "Finomnis" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14167", "repo": "Finomnis/tokio-graceful-shutdown", "url": "https://github.com/Finomnis/tokio-graceful-shutdown/issues/50" }
gharchive/issue
Question about dangling awaits during shutdown

Hi there, during some recent testing I ran into an "interesting" issue with SIGINT and similar shutdowns. The exact situation I identified is the following: deep inside one of my subsystems, I do an async lookup_host(), which has a timeout of apparently 20 seconds, and is awaited there. When hitting CTRL+C, all subsystems shut down and I get tokio_graceful_shutdown::toplevel] Shutdown successful.. At this point the program does not terminate, though; it only exits after the lookup_host() future resolves/rejects. Do you happen to have an idea of how to force tokio not to wait for such things before shutting down?

Theoretically, after the timeout of handle_shutdown_requests is over, it should cancel all the pending futures ... Would you mind providing a reproducible example? This behaviour sounds like a bug and requires fixing :)

Hmm. It seems that lookup_host() internally uses spawn_blocking. Sadly, it's almost impossible to handle spawn_blocking gracefully during shutdown. Let me construct a minimal example for further discussion.

I was just looking into lookup_host() as well for reproduction, and figured out the same thing. I really wonder how this could be resolved nicely. The only thing I found so far would be shutdown_timeout() or shutdown_background(), but I'm unsure how to do this with externally provided runtimes. Spawning another runtime inside of Toplevel would probably work, but that would override the user's decision of what runtime to spawn. I wonder if there is a config option of tokio::main that kills background tasks on exit; that would probably be the best option.

This seems to exhibit the behaviour you described:

```rust
use env_logger::{Builder, Env};
use miette::Result;
use std::time::Duration;
use tokio::select;
use tokio_graceful_shutdown::{SubsystemHandle, Toplevel};

async fn hanging_task(subsys: SubsystemHandle) -> Result<()> {
    let hanging_task = tokio::task::spawn_blocking(|| {
        std::thread::sleep(std::time::Duration::from_secs(10));
    });
    select! {
        e = hanging_task => e.unwrap(),
        _ = subsys.on_shutdown_requested() => ()
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<()> {
    // Init logging
    Builder::from_env(Env::default().default_filter_or("debug")).init();

    Toplevel::new()
        .catch_signals()
        .start("Hanging Task", hanging_task)
        .handle_shutdown_requests(Duration::from_millis(2000))
        .await
        .map_err(Into::into)
}
```

Yeah, that seems to be pretty much it. I'm screening the tokio docs and code, but haven't found anything for it yet.

I could add something along those lines in handle_shutdown_requests:

```rust
use std::{thread, time::Duration};

use env_logger::{Builder, Env};
use miette::Result;
use tokio::select;
use tokio_graceful_shutdown::{SubsystemHandle, Toplevel};

async fn hanging_task(subsys: SubsystemHandle) -> Result<()> {
    let hanging_task = tokio::task::spawn_blocking(|| {
        thread::sleep(Duration::from_secs(10));
    });
    select! {
        e = hanging_task => e.unwrap(),
        _ = subsys.on_shutdown_requested() => {
            // Spawn thread that kills leftover tasks if necessary
            thread::spawn(|| {
                thread::sleep(Duration::from_millis(1000));
                println!("Shutdown seems to hang. Killing program ...");
                std::process::exit(1);
            });
        },
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<()> {
    // Init logging
    Builder::from_env(Env::default().default_filter_or("debug")).init();

    Toplevel::new()
        .catch_signals()
        .start("Hanging Task", hanging_task)
        .handle_shutdown_requests(Duration::from_millis(200))
        .await
        .map_err(Into::into)
}
```

What do you think?

I think that's a sane way to handle it, as long as the kill timeout is configurable. Though technically we already have the timeout defined in handle_shutdown_requests.

The only problem is that it isn't guaranteed that Toplevel is used like this; it could be used in a bunch of ways, including nested somewhere in the program. And in that case we probably don't want to provoke an exit(1) if the shutdown hangs. We could add an .exit_program_on_hang() option that can be appended to .handle_shutdown_requests(). Might have to sleep over it, though. I'm not perfectly happy with this yet; it seems too hacky.

Sure thing, take your time (not too much please, I definitely want this resolved ;)). I agree that it should be an opt-in feature, though.

@DASPRiD Until I/we come up with a solution, you can work around it like this:

```rust
use std::{thread, time::Duration};

use env_logger::{Builder, Env};
use miette::Result;
use tokio::select;
use tokio_graceful_shutdown::{SubsystemHandle, Toplevel};

async fn hanging_task(subsys: SubsystemHandle) -> Result<()> {
    let hanging_task = tokio::task::spawn_blocking(|| {
        thread::sleep(Duration::from_secs(10));
    });
    select! {
        e = hanging_task => e.unwrap(),
        _ = subsys.on_shutdown_requested() => (),
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<()> {
    // Init logging
    Builder::from_env(Env::default().default_filter_or("debug")).init();

    Toplevel::new()
        .catch_signals()
        .start("Hanging Task", hanging_task)
        .handle_shutdown_requests(Duration::from_millis(200))
        .await?;

    // Spawn thread that kills the program if shutdown hangs
    thread::spawn(|| {
        thread::sleep(Duration::from_millis(1000));
        log::error!("Shutdown seems to hang. Killing program ...");
        std::process::exit(1);
    });

    Ok(())
}
```

Feel free to post further thoughts or ideas; I'm open to everything on this problem. I'm kinda stumped right now on how to handle this properly.

I agree that this is definitely not the nicest solution. Considering that such a blocking thread should usually just be ignored and an exit should happen immediately, this feels like an issue on the tokio side. Granted, though, I only ran into this issue because I had to restart my program because of network issues, and that's exactly when I noticed this problem, of course. As long as the network just works, the shutdown is normal. So this is definitely an edge case.

The best solution I can come up with right now is to put the above chunk of code in a function, like:

```rust
fn kill_on_hang(timeout: Duration) {
    thread::spawn(move || {
        thread::sleep(timeout);
        log::error!("Shutdown seems to hang. Killing program ...");
        std::process::exit(1);
    });
}
```

So that people can use it like this:

```rust
use std::{thread, time::Duration};

use env_logger::{Builder, Env};
use miette::Result;
use tokio::select;
use tokio_graceful_shutdown::{SubsystemHandle, Toplevel};

async fn hanging_task(subsys: SubsystemHandle) -> Result<()> {
    let hanging_task = tokio::task::spawn_blocking(|| {
        thread::sleep(Duration::from_secs(10));
    });
    select! {
        e = hanging_task => e.unwrap(),
        _ = subsys.on_shutdown_requested() => (),
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<()> {
    // Init logging
    Builder::from_env(Env::default().default_filter_or("debug")).init();

    Toplevel::new()
        .catch_signals()
        .start("Hanging Task", hanging_task)
        .handle_shutdown_requests(Duration::from_millis(200))
        .await?;

    kill_on_hang(Duration::from_secs(1));

    Ok(())
}
```

But it doesn't feel right. I agree that the solution should be on tokio's side somewhere. And it is, if you use tokio's runtime directly and can call Runtime::shutdown_background(). It just sucks that tokio::Runtime's default drop behaviour is to wait indefinitely.

@DASPRiD In case you want to report it to tokio, this is the minimal reproducible example:

```rust
use std::{thread, time::Duration};

#[tokio::main]
async fn main() {
    tokio::task::spawn_blocking(|| {
        thread::sleep(Duration::from_secs(10));
    });
}
```

I should do that, thanks a lot! I'll reference this issue over there.

Edited my previous comment to add why I believe it should behave like this.

I've created an issue on the tokio repository. Maybe you can chime in there as well if you have anything to add :heart:

@DASPRiD I prototyped a solution in tokio:

Cargo.toml:

```toml
[patch.crates-io]
tokio = { version = "1.23.1", git = "https://github.com/Finomnis/tokio", branch = "main_arg_ignore_hanging_threads" }

[dependencies]
tokio-graceful-shutdown = "0.12.1"
tokio = { version = "1.23.1", features = ["full"] }
env_logger = "0.10.0"
miette = { version = "5.5.0", features = ["fancy"] }
```

```rust
use std::{thread, time::Duration};

use env_logger::{Builder, Env};
use miette::Result;
use tokio::select;
use tokio_graceful_shutdown::{SubsystemHandle, Toplevel};

async fn hanging_task(subsys: SubsystemHandle) -> Result<()> {
    let mut hanging_task = tokio::task::spawn_blocking(|| {
        thread::sleep(Duration::from_secs(100));
    });
    let joinhandle_ref = &mut hanging_task;
    select! {
        e = joinhandle_ref => e.unwrap(),
        _ = subsys.on_shutdown_requested() => (),
    }
    Ok(())
}

#[tokio::main(ignore_hanging_threads = true)]
async fn main() -> Result<()> {
    // Init logging
    Builder::from_env(Env::default().default_filter_or("debug")).init();

    Toplevel::new()
        .catch_signals()
        .start("Hanging Task", hanging_task)
        .handle_shutdown_requests(Duration::from_millis(2000))
        .await
        .map_err(Into::into)
}
```

What are your thoughts?

That looks quite good to me. The parameter name is maybe a little ambiguous, though, as this is quite specific to tokio-spawned threads. Apart from that, it looks like a really clean solution.

Yah, it's still open for debate. Now I just need two things: tests, and agreement with the tokio devs. The second one could take a while; tokio is notoriously understaffed, with many open pull requests. They are doing an excellent job, to clarify, but there are just too many people wanting things done :D

Maybe ignore_pending_threads instead of ignore_hanging_threads? Or detach_threads_on_exit? I'm not sure; I hate naming things :D

I feel you on that; coming up with short but still descriptive names is… hard. Maybe something like abandon_pending_threads?

Sounds good, might just use it. Gotta leave for today, though; can't promise when I'll get to it. Greetz

@DASPRiD I opened a PR for the first part of the fix, FYI: https://github.com/tokio-rs/tokio/pull/5360

Thanks for keeping me in the loop. I assume a second PR would add that to the tokio::main macro?

Exactly :)

@DASPRiD It might be unlikely that we get the parameter for #[tokio::main] merged. I'm currently fighting for the config option for Runtime; that one might happen. Wish me luck ;)

I see that there was no movement on the PR yet; I really hope they get around to it eventually :)

I'm not sure; they seem to be quite busy ... I'll ask again in their Discord soon.

@DASPRiD After a long discussion they rejected the pull request and instead proposed a solution on our side. Their proposal was to wrap the runtime in a newtype that calls shutdown_background() in its Drop implementation. Here is the documentation on what #[tokio::main] actually expands to: https://docs.rs/tokio/latest/tokio/attr.main.html#using-the-multi-thread-runtime

With that, this is how such a newtype would look:

```rust
use std::{future::Future, thread, time::Duration};

use env_logger::{Builder, Env};
use miette::Result;
use tokio::{runtime::Runtime, select};
use tokio_graceful_shutdown::{SubsystemHandle, Toplevel};

struct RuntimeWithInstantShutdown(Option<Runtime>);

impl RuntimeWithInstantShutdown {
    pub fn new() -> Self {
        Self(Some(
            tokio::runtime::Builder::new_multi_thread()
                .enable_all()
                .build()
                .unwrap(),
        ))
    }

    pub fn block_on<F: Future>(&self, future: F) -> F::Output {
        self.0.as_ref().unwrap().block_on(future)
    }
}

impl Drop for RuntimeWithInstantShutdown {
    fn drop(&mut self) {
        self.0.take().unwrap().shutdown_background()
    }
}

async fn hanging_task(subsys: SubsystemHandle) -> Result<()> {
    let hanging_task = tokio::task::spawn_blocking(|| {
        thread::sleep(Duration::from_secs(10));
    });
    select! {
        e = hanging_task => e.unwrap(),
        _ = subsys.on_shutdown_requested() => (),
    }
    Ok(())
}

fn main() -> Result<()> {
    // Init logging
    Builder::from_env(Env::default().default_filter_or("debug")).init();

    RuntimeWithInstantShutdown::new().block_on(async {
        Toplevel::new()
            .catch_signals()
            .start("Hanging Task", hanging_task)
            .handle_shutdown_requests(Duration::from_millis(200))
            .await
            .map_err(Into::into)
    })
}
```

@Finomnis Interesting, thanks a lot for your effort! Do you have any plans to add a macro for that within this library to make it easier for users?

Not really, to be honest. It's different enough that it shouldn't be part of this crate, in my opinion.
2025-04-01T04:10:24.919891
2024-11-19T18:50:21
2673193544
{ "authors": [ "nohe427" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14168", "repo": "FirebaseExtended/compass-ai-travel-planning-sample-flutter", "url": "https://github.com/FirebaseExtended/compass-ai-travel-planning-sample-flutter/issues/23" }
gharchive/issue
Emulator hub is already running

When starting IDX, we get a message that the Firebase emulator hub is already running. We are unable to query our Data Connect instance that is running locally. Is there a different part of this startup experience we are missing?

@rodydavis @peterfriese - FYI

@kroikie - FYI
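A possible way to recover, not from this report but as a guess: stop whatever process is still holding the emulator hub's port (assumed here to be the default, 4400) and restart the emulators.

```bash
# Kill the stale emulator hub process (port 4400 assumed), then restart.
lsof -ti :4400 | xargs kill
firebase emulators:start
```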
2025-04-01T04:10:24.923210
2024-02-03T18:19:32
2116629867
{ "authors": [ "Coding-Solo", "Wallman" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14169", "repo": "FirebaseExtended/firebase-video-samples", "url": "https://github.com/FirebaseExtended/firebase-video-samples/issues/53" }
gharchive/issue
Sign In With Apple sheet is not dismissed on login with anonymous auth

Applies to the auth-account-linking/final project.

Expected Behavior
When signing in with Apple, the sheet should be dismissed.

Actual Behavior
Sheet stays open.

Steps to Reproduce the Problem
Sign In With Apple

Specifications
iOS 17.2.1

@peterfriese

@Wallman Add this to your SignInWithAppleButton's onCompletion handler:

```swift
if case .success(_) = result {
    dismiss()
}
```

So that it looks something like this (assuming you are using the auth-account-linking/final project without modification):

```swift
SignInWithAppleButton(.signIn) { request in
    viewModel.handleSignInWithAppleRequest(request)
} onCompletion: { result in
    viewModel.handleSignInWithAppleCompletion(result)
    if case .success(_) = result {
        dismiss()
    }
}
```

I'm making a PR to fix this issue :)
2025-04-01T04:10:25.006568
2020-10-09T03:24:30
717819907
{ "authors": [ "campioncino", "dackers86", "mobiledev2014", "pengw00" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14170", "repo": "FirebaseExtended/flutterfire", "url": "https://github.com/FirebaseExtended/flutterfire/issues/3819" }
gharchive/issue
No notifications received in background on iOS

I am using Flutter 1.22 stable, Xcode 12, Firebase 7.0.3 and iOS 14 on the device, but the notification only works in the foreground. Please help me find the solution.

Hi @mobiledev2014. Could you provide an example of the code you are using so we can attempt to replicate the issue? Thanks

I think you might have missed adding the click_action attribute when sending the message from the backend.

I have the same (old) issue. There's no way to get the Firebase message in the background on iOS. The messages are correct and they arrive when the app is in the foreground. That's the message:

```json
{
  "to": "dAMWn_x6TACR6clI_Oe62s:APA91bFtfnR9VrXl0J_HNB8WAdLJdOp8p8FZsqzDLqSwqsOS8RlepqTVJp2zzETJj78AAwVSfz4PYmjqoPpnpUWL72DpdWfkyeXkaWmyPjqYfgF1_7qWB6X5w4MIW7_kKs8ESuFvGAFg",
  "body": "this is a body",
  "title": "this is a title",
  "priority": "high",
  "data": {
    "click_action": "FLUTTER_NOTIFICATION_CLICK",
    "id": "1",
    "status": "done"
  }
}
```

When the app is in the background, there's no way to get the message, but when you put the app in the foreground you get the message back.

Have you ever solved this problem? A few months ago I followed this solution: https://github.com/FirebaseExtended/flutterfire/pull/2016. Many updates, but the same problems?

Hi @campioncino. You are correct, https://github.com/FirebaseExtended/flutterfire/pull/2016 is related; please follow it for any updates. FYI, it's my understanding this will be part of the messaging rework, please see https://twitter.com/elliothesp/status/1314479419667476480
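One common cause, not confirmed in this thread: the payload above is data-only, and iOS does not deliver data-only messages to a backgrounded app without extra setup. A sketch of a payload that iOS will show in the background adds a notification block (field names per FCM's legacy HTTP API; the token is a placeholder):

```json
{
  "to": "<device token>",
  "priority": "high",
  "content_available": true,
  "notification": {
    "title": "this is a title",
    "body": "this is a body"
  },
  "data": {
    "click_action": "FLUTTER_NOTIFICATION_CLICK",
    "id": "1",
    "status": "done"
  }
}
```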
2025-04-01T04:10:25.010778
2019-04-15T14:06:22
433302063
{ "authors": [ "ywwg" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14171", "repo": "FireflyArtsCollective/ffagc", "url": "https://github.com/FireflyArtsCollective/ffagc/issues/138" }
gharchive/issue
Fix dangerous reuse of html templates

There are a bunch of places where a single HTML template file is reused for multiple account types. This is super dangerous and could result in committee-only data being accidentally visible to artists if, for instance, we mess up a single if statement. Good examples are _funding_table.html.erb and grant_submissions/edit.html.erb. We should really split this stuff apart so this risk is reduced.

Fix for funding_table: 0cd3f987de25b73394846e283ce7b3b61cc48f5d

edit.html wasn't bad; discuss was, though!
2025-04-01T04:10:25.017932
2023-12-31T09:49:10
2061004664
{ "authors": [ "ALEKSEYR554", "dangeredwolf" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14173", "repo": "FixTweet/FxTwitter", "url": "https://github.com/FixTweet/FxTwitter/issues/582" }
gharchive/issue
Can't GET from API via code but can via browser

My Ruby code for this is very simple:

```ruby
require 'faraday'

conn = Faraday::Connection.new 'https://api.fxtwitter.com/ALEKSEYR554_'
p conn.get
```

and every time I get:

```
SSL_connect returned=1 errno=0 peeraddr=<IP_ADDRESS>:443 state=error: unexpected eof while reading (OpenSSL::SSL::SSLError)
```

no matter what I do, but opening the API link in a browser works, and a GET via the Telegram Bot API works with no problem. I tried disabling the SSL check when sending the request and using Firefox's cacert.pem; nothing worked. I don't know if the problem is on my side or the server's, because I've been struggling with it for about a week. So maybe something went wrong with the server; if not, I'll keep looking for a solution.

We're using Cloudflare if that helps at all. I'm not terribly familiar with Ruby or Faraday, so I'm not sure if there's additional logging you can turn on and look at, or if there are any known compatibility issues between Cloudflare's free SSL certs and Faraday.
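One thing worth trying, purely as a guess given the Cloudflare detail: some edges close connections from clients that don't look like a browser, which can surface as exactly this "unexpected eof" error. A sketch sending an explicit User-Agent:

```ruby
require 'faraday'

# Sketch: set a browser-like User-Agent in case the edge is dropping
# connections from unidentified clients (value is illustrative).
conn = Faraday.new('https://api.fxtwitter.com',
                   headers: { 'User-Agent' => 'Mozilla/5.0 (compatible; my-test-client)' })
p conn.get('/ALEKSEYR554_')
```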
2025-04-01T04:10:25.020921
2018-11-05T17:18:23
377500079
{ "authors": [ "Fizzadar", "mansya" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14174", "repo": "Fizzadar/pyinfra", "url": "https://github.com/Fizzadar/pyinfra/issues/155" }
gharchive/issue
New logo/icon proposal

Good day sir. I am a graphic designer and I am interested in designing a logo for your good project. I will be doing it as a gift, for free. I just need your permission first before I begin my design. Hoping for your positive feedback. Thanks

That sounds great, a logo for pyinfra would be awesome :)

I have designed the logo for this project. What do you think? Do you like it? If yes, let me know, and if there are changes, please let me know.

@mansya I think it looks great! Thank you :) I'd love to use just the icon if possible - do you have a PSD available?

Hi sir, I have sent you pull requests, and I have also uploaded all the files to this drive. Please check.
2025-04-01T04:10:25.032125
2024-04-23T14:20:05
2259018053
{ "authors": [ "agregoryfs", "matthewelwell" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14175", "repo": "Flagsmith/flagsmith", "url": "https://github.com/Flagsmith/flagsmith/issues/3827" }
gharchive/issue
Add Reporting on overall Feature Management Program

Is your feature request related to a problem? Please describe.
There is no insight into the overall analytics behind the use of feature flags within an organization or project.

Describe the solution you'd like.
Set up basic metrics in the product to look at the number of features active, the average lifespan of a feature, or the number of team members using feature flagging. This can be expanded to include additional metrics like the number of split tests, A/B tests, identity overrides active, segment overrides active, etc.

Describe alternatives you've considered
A Reporting API to give access to this outside of the UI.

Additional context
No response

How do we display / present / share the data?
To start with, let's just create a dashboard in Flagsmith to display this data. The dashboard should be scoped to the organisation, with filters for projects and environments. The data should be captured at regular intervals to allow us to track the data over time.

Which metrics do we want to report on?

### V1 Metrics
- num of projects
- num of environments
- num of users
- num of features created in the last x days
- num of features deleted in the last x days
- num of features enabled
- num of change requests in the last x days
- num of change requests committed in the last x days
- num of segments
- num of segment overrides
- num of identity overrides

### V1.5 (requires versioning)
- num of features updated in the past x days
- num of stale features

### Maybe V2
- average time from feature creation to use in a production environment
- unused feature flags, i.e. usage is 0 for x days; this could help identify features that aren't actually being used

Which metrics should we allow users to drill into, and how?
For now, we just need to consider that we will want to drill into certain metrics. For example, the number of features enabled, although this is technically already possible.

Filter options
- Time period
- Project
- Environment
- Tag
- Assigned user / group
2025-04-01T04:10:25.038981
2024-09-09T08:07:35
2513227798
{ "authors": [ "matthewelwell" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14176", "repo": "Flagsmith/flagsmith", "url": "https://github.com/Flagsmith/flagsmith/issues/4599" }
gharchive/issue
Unexpected behaviour creating identity overrides for control value of MV features

To reproduce:

1. Create an MV feature
2. Create an identity override for the MV feature which selects the control value

Note that when testing, we should also validate the behaviour when updating an override for an MV feature.

Expected behaviour:
Retrieving the features for the identity always returns the control value.

Actual behaviour:
I haven't quite worked out the full behaviour yet. It seems like, because the override still exists and has a "feature_state_value", it is using that. What I can't work out is at which point(s) the FE is sending the "feature_state_value" - this might be on creation of the override, or perhaps on update, but it seems inconsistent.

Possible solutions:
When creating an override for an MV feature, the FE should always send a full complement of "multivariate_feature_state_values". When creating an override for the control value, the "percent_allocation" should be set to 0 for all values.

Other solutions considered:
When creating an override for an MV feature, the FE could always send the value of the MV option as the "feature_state_value". Having considered this, it would be a very bad idea since, if the values of those options change, the override would still be returning an old value.

Having tested the possible solution given above, this doesn't seem to work, and it looks like the API uses the value from feature_state_value. This is likely because the 'control' value is just the feature_state_value on a multivariate feature state. Based on this, I think we likely need to look at this from an API perspective before looking at it from the FE.
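For illustration, a sketch of the "full complement" payload described in the possible solution above. Only the field names quoted in this issue are from the source; the "id" keys and overall request shape are placeholders, not the confirmed API:

```json
{
  "feature_state_value": "<control value>",
  "multivariate_feature_state_values": [
    { "id": 1, "percent_allocation": 0 },
    { "id": 2, "percent_allocation": 0 }
  ]
}
```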
2025-04-01T04:10:25.051970
2023-10-04T10:39:58
1925914558
{ "authors": [ "codecov-commenter", "gagantrivedi" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14177", "repo": "Flagsmith/flagsmith", "url": "https://github.com/Flagsmith/flagsmith/pull/2826" }
gharchive/pull-request
feat(tasks/queue-size): Implement queue_size

Thanks for submitting a PR! Please check the boxes below:

- [x] I have run pre-commit to check linting
- [x] I have filled in the "Changes" section below
- [x] I have filled in the "How did you test this code" section below
- [x] I have used a Conventional Commit title for this Pull Request

Changes
Implement a queue size to limit low-priority tasks from overwhelming the queue.

Plan and SQL:

```sql
explain analyze SELECT COUNT(*) AS "__count"
FROM "task_processor_task"
WHERE (NOT "task_processor_task"."completed"
       AND "task_processor_task"."num_failures" < 3
       AND "task_processor_task"."task_identifier" = 'edge_request_forwarder.forward_identity_request');
```

```
                                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=69541.34..69541.35 rows=1 width=8) (actual time=5.017..5.018 rows=1 loops=1)
   ->  Index Scan using incomplete_tasks_idx on task_processor_task  (cost=0.38..68634.33 rows=362804 width=0) (actual time=0.018..5.013 rows=4 loops=1)
         Filter: ((task_identifier)::text = 'edge_request_forwarder.forward_identity_request'::text)
         Rows Removed by Filter: 1
 Planning Time: 0.305 ms
 Execution Time: 5.037 ms
(6 rows)
```

How did you test this code?
Adds unit test cases.

Codecov Report
Attention: 5 lines in your changes are missing coverage. Please review.

Comparison is base (929afeb) 95.53% compared to head (9267a92) 95.53%. Consider uploading reports for the commit 1d15e17 to get more accurate results.

Additional details and impacted files

```
@@           Coverage Diff           @@
##             main    #2826   +/-   ##
=======================================
  Coverage   95.53%   95.53%
=======================================
  Files         994      994
  Lines       28072    28095   +23
=======================================
+ Hits        26818    26840   +22
- Misses       1254     1255    +1
```

| Files | Coverage Δ |
|---|---|
| api/edge_api/identities/edge_request_forwarder.py | 100.00% <100.00%> (ø) |
| api/task_processor/exceptions.py | 100.00% <100.00%> (ø) |
| api/task_processor/models.py | 95.41% <100.00%> (+2.96%) :arrow_up: |
| .../task_processor/test_unit_task_processor_models.py | 100.00% <100.00%> (ø) |
| api/task_processor/decorators.py | 88.40% <28.57%> (-5.45%) :arrow_down: |

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
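A sketch of how such an option might be consumed, based only on this PR's description; the queue_size argument name on the decorator is an assumption, not confirmed API:

```python
# Hypothetical usage: cap how many unprocessed instances of a low-priority
# task may sit in the queue before new ones are rejected.
from task_processor.decorators import register_task_handler


@register_task_handler(queue_size=1000)  # argument name assumed
def forward_identity_request(payload: dict) -> None:
    ...
```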
2025-04-01T04:10:25.064601
2020-11-26T13:44:32
751602906
{ "authors": [ "Sloox", "axelzuziak-gogo", "pawelpasterz" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14178", "repo": "Flank/flank", "url": "https://github.com/Flank/flank/issues/1358" }
gharchive/issue
Optimize slack release notification

Author the user story for this feature
As a developer, I wish to optimize the GitHub Action that Flank is currently using to send a release message to Slack. We can either swap to a better, well-documented solution or attempt to optimize the current solution so it works faster.

Describe the solution
We can either:

1. Have a precompiled binary or Docker image (needs to be external and requires a Docker account) so the commands don't need to be run on each release. (It's not the end of the world that they run on each release, as releases are so infrequent.)
2. Adhere to better-documented procedures on how to create GitHub Actions: https://docs.github.com/en/free-pro-team@latest/actions/creating-actions

Describe alternatives considered
We can follow the current procedure with minor optimizations, such as removal of dependencies and automation, which shouldn't be too difficult.

@Sloox https://github.com/features/packages will also fix the issues with the slack release notification.

> Will also fix the issues with the slack release notification

@Sloox Can you add an SSD document here?

Closing as it's no longer valid. See: #1595
2025-04-01T04:10:25.072669
2021-04-29T18:18:35
871306292
{ "authors": [ "DillonB07", "FlashyReese" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14179", "repo": "FlashyReese/CommandAliases", "url": "https://github.com/FlashyReese/CommandAliases/issues/16" }
gharchive/issue
/tellraw won't work with CommandAliases

My plan was to have a tellraw run as a SERVER command so that all players can use it with a simple command. But the JSON file stops that, as the command in question also uses JSON text components:

```
tellraw @a [{"text":"Click here for ","color":"yellow"},{"text":"BlueMap","bold":true,"color":"dark_blue","clickEvent":{"action":"open_url","value":"https://rotf.lol/farmtasticraftsbluemap/"}},{"text":" Click here for","bold":true,"color":"yellow"},{"text":" ","bold":true,"color":"dark_blue"},{"text":"Dynmap","bold":true,"color":"aqua","clickEvent":{"action":"open_url","value":"https://rotf.lol/farmtasticraftsdynmap/"}}]
```

Would there be a workaround for this?

You need to escape the JSON itself when placing it as a string. If you can upload the entire command alias JSON file, I can help you out.

Also, it may already be there and I missed it, but maybe this should be added to the wiki?
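To illustrate the escaping the maintainer describes: inside the alias JSON file, every quote of the embedded tellraw components must be backslash-escaped. The surrounding "command" field here is hypothetical; only the escaping itself is the point (payload shortened):

```json
{
  "command": "tellraw @a [{\"text\":\"Click here for \",\"color\":\"yellow\"},{\"text\":\"BlueMap\",\"bold\":true,\"color\":\"dark_blue\",\"clickEvent\":{\"action\":\"open_url\",\"value\":\"https://rotf.lol/farmtasticraftsbluemap/\"}}]"
}
```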
2025-04-01T04:10:25.079599
2016-07-24T05:34:59
167218456
{ "authors": [ "Flet", "zeke" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14180", "repo": "Flet/standard-engine", "url": "https://github.com/Flet/standard-engine/issues/113" }
gharchive/issue
Use a default eslint and eslintConfig

The linter constructor doesn't seem to work without options:

```
❯ trymodule standard-engine
Package 'standard-engine' was loaded and assigned to 'standard_engine' in the current scope
REPL started...
> standard_engine
{ cli: [Function: Cli], linter: [Function: Linter] }
> const standard = standard_engine.linter()
Error: opts.eslint option is required
    at new Linter (/Users/zeke/.trymodule/node_modules/standard-engine/index.js:32:27)
    at Object.Linter (/Users/zeke/.trymodule/node_modules/standard-engine/index.js:26:41)
    ...
```

Passing in eslint and an empty object works though:

```
~/forks/standard-engine master* 25s
❯ node
> const eslint = require('.')
undefined
> const standard = require('.').linter({eslint: eslint, eslintConfig: {}})
```

I think a nicer behavior would be to allow the options to be undefined, defaulting to vanilla eslint if unspecified.

This seems sensible; however, standard-engine isn't really meant to be run directly on its own. It's really just the glue that connects a specific version of eslint with a specific config. The config itself drives which version of eslint is needed, as rules can be added/changed every version. This also allows using other eslint implementations like babel-eslint.

Thanks, @Flet. I was a bit off-track, then realized standard has a programmatic API that suits my purposes: https://github.com/Flet/standard-engine/issues/114 Closing!
2025-04-01T04:10:25.087859
2023-06-30T21:45:52
1783196117
{ "authors": [ "Flix6x", "nhoening" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14181", "repo": "FlexMeasures/flexmeasures", "url": "https://github.com/FlexMeasures/flexmeasures/pull/755" }
gharchive/pull-request
chore: Upgrade dependencies after v0.14

Description
We are upgrading dependencies once in a while, to stay up to date. After v0.14, this is the first attempt, but it is not ready yet.

- use Flask-Classful branch
- various smaller improvements (or more pins) which would help us finally move to Flask 2.2

But: the current blocker for that is Flask-Login < 0.6.2. I debugged that a lot but found no solution or open issue.

Forecasting/scheduling jobs are failing now (see below). I installed previous versions of openturns, statsmodels, pandas, timely-beliefs, sktime and pyomo, but that did not help. Here I need ideas. @Flix6x maybe check the diff of app.txt or even the error logs (in GitHub Actions or locally) for an idea.

How to test
make test

Here are the current failures:

```
421 passed
18 failed
- flexmeasures/api/v3_0/tests/test_sensor_schedules.py:99 test_trigger_and_get_schedule_with_unknown_prices[message0]
- flexmeasures/api/v3_0/tests/test_sensor_schedules.py:162 test_trigger_and_get_schedule[message0-Test battery]
- flexmeasures/api/v3_0/tests/test_sensor_schedules.py:162 test_trigger_and_get_schedule[message1-Test charging station]
- flexmeasures/data/tests/test_forecasting_jobs.py:50 test_forecasting_an_hour_of_wind
- flexmeasures/data/tests/test_forecasting_jobs.py:88 test_forecasting_two_hours_of_solar_at_edge_of_data_set
- flexmeasures/data/tests/test_forecasting_jobs.py:174 test_failed_forecasting_insufficient_data
- flexmeasures/data/tests/test_forecasting_jobs.py:191 test_failed_forecasting_invalid_horizon
- flexmeasures/data/tests/test_forecasting_jobs.py:207 test_failed_unknown_model
- flexmeasures/data/tests/test_forecasting_jobs_fresh_db.py:21 test_forecasting_three_hours_of_wind
- flexmeasures/data/tests/test_forecasting_jobs_fresh_db.py:52 test_forecasting_two_hours_of_solar
- flexmeasures/data/tests/test_forecasting_jobs_fresh_db.py:85 test_failed_model_with_too_much_training_then_succeed_with_fallback[failing-test-1]
- flexmeasures/data/tests/test_forecasting_jobs_fresh_db.py:85 test_failed_model_with_too_much_training_then_succeed_with_fallback[linear-OLS-2]
- flexmeasures/data/tests/test_scheduling_jobs.py:17 test_scheduling_a_battery
- flexmeasures/data/tests/test_scheduling_jobs.py:99 test_assigning_custom_scheduler[False]
- flexmeasures/data/tests/test_scheduling_jobs.py:99 test_assigning_custom_scheduler[True]
- flexmeasures/data/tests/test_scheduling_jobs_fresh_db.py:12 test_scheduling_a_charging_station
- flexmeasures/data/tests/test_scheduling_repeated_jobs.py:240 test_allow_trigger_failed_jobs
- flexmeasures/data/tests/test_scheduling_repeated_jobs_fresh_db.py:31 test_requeue_failing_job
```

Further Improvements
We need to at least fix these failures.

> Here I need ideas.

From my initial inspection of the logs, I suspect an rq downgrade may be interesting to try out.

> From my initial inspection of the logs, I suspect an rq downgrade may be interesting to try out.

No, that does not seem to help. I tried a lot of other downgrades, too, and couldn't hit the spot.

Still suspecting a redis-related issue, I found out that tests passed locally after downgrading fakeredis to fakeredis==2.10.3. Changelogs suggest that these issues may be playing a role, and the following line change in app.py fixes the issue:

```diff
- redis_conn = FakeStrictRedis()
+ redis_conn = FakeStrictRedis(host="redis", port="1234")
```

Thanks for finding this. @Flix6x We could thus merge this PR if you approve.
Even if we are still held back regarding Flask 2.2, the one larger move in here is pandas==2.0.3, which could help FlexMeasures with speed, so it is quite relevant.

https://github.com/FlexMeasures/flexmeasures/pull/755/commits/c54b832c56b5cd80732fe599e544f4c057875fb9 still seems problematic to our pipeline.

> c54b832 still seems problematic to our pipeline.

Terrible, as it works fine locally for me. Trying to see if a new PR would run the tests. Not being able to reproduce locally what GitHub Actions is seeing is frustrating ...
2025-04-01T04:10:25.101409
2017-04-13T16:19:40
221613467
{ "authors": [ "funkyApe", "funkyMonkeyX", "schmalls", "vladnega" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14182", "repo": "FlexSearch/FlexSearch", "url": "https://github.com/FlexSearch/FlexSearch/issues/52" }
gharchive/issue
Support For Searching All Fields

I have looked through the documentation and can't find whether there is a way to search all fields in the document. On our existing NHibernate.Search implementation we added an AllFieldsBridge which concatenates the values of every field in the object and creates a new field called * which is not stored, but allows doing queries on all fields in the document. Is this something that could be done through a plugin easily?

Hi @schmalls, thank you for your interest in FlexSearch. The same approach can be taken in FlexSearch using PreIndex Scripts. You will need to create a non-storing field, and on the preIndex event concatenate all fields into the newly created one. Here is a link showing how preIndex scripts are used. Let me know if you need more information.

@vladnega Thank you for the quick response. I will have to spend a little time learning F# and then I'll give it a shot. Once I have something working, I'll post it here for others to see.

@vladnega What do you mean by "create a non-storing field"?

@funkyApe, I was under the impression that FlexSearch still had a field setting for deciding whether it's stored or not stored. I double-checked in the code and I can see that this option has somehow been removed along the way. Sorry for the confusion. I will add support over the next few days for having a field that is not stored, i.e. it can only be searched, but its contents cannot be retrieved. In the meantime, the functionality of searching across all fields can be achieved by concatenating all fields into a normal Text field (which is searchable and stored) using a PreIndex Script.

@funkyApe, I've released version 0.8.5 of FlexSearch which includes a new field type called SearchOnly. As the name suggests, it only allows users to search against it, but it doesn't store the data, which thus cannot be retrieved/displayed.

Hi @vladnega, I just tried the new version but it doesn't work for me. http://localhost:9800 is not reachable. It seems that the server is started successfully. How do I get it to work?

@funkyMonkeyX, I have raised issue #55 with this problem. I should get it fixed today.

@funkyMonkeyX, I have just pushed release 0.8.6 that should fix this issue. Could you take a look?

So far the version works, but how do I configure and use search fields? ^^ I have created a new index with 2 text fields and one search field, but cannot find documents with the search field (tried anyof, like and so on; it seems that it returns data only with matchall). Where can I define which fields are used by one search field? Is it possible to configure it per field? Is the usage only possible with preSearch scripts?

@funkyMonkeyX, you have to populate the SearchOnly field manually. It doesn't automatically merge the values from all fields. You will have to write a PreIndex Script to merge these values. The below piece of code should work, assuming your index is made of:

- field1 - Text
- field2 - Text
- combo - SearchOnly

```fsharp
module Script

open FlexSearch.Api.Model
open Helpers
open System

let preIndex (document : Document) =
    let mergedString =
        document.Fields
        |> Seq.where (fun kv -> kv.Key <> "combo")
        |> Seq.map (fun kv -> kv.Value)
        |> Seq.fold (fun acc value -> acc + " " + value) ""
    document.Set("combo", mergedString)
    // The above is the equivalent of:
    // document.Set("combo", document.Get("field1") + " " + document.Get("field2"))
```

Please refer to this link on where to put your script.fsx file. Bear in mind you will have to reopen the index in order for FlexSearch to pick up the script.
@vladnega Thank you for the sample code. How does the document.Set("combo", mergedString) actually specify that it is a SearchOnly field?

It doesn't specify that it's a SearchOnly field. The field should have already been created. This script only runs when you index something; it doesn't create the field for you. You create that field using the Index Management UI tool. Check out the Index Management UI section on the homepage for how to create the fields using the UI.

Ok. That makes much more sense now. It is hard, coming from a pure Lucene.NET background, to learn how other systems are set up.

Great, @schmalls. Let me know if it works for you so I can close the issue.

I will go ahead and close the issue. I can update later if I have to tweak the code a little.
2025-04-01T04:10:25.106520
2018-06-04T18:02:18
329158708
{ "authors": [ "Dschee", "EternalBlack", "Jeehut", "lazie" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14183", "repo": "Flinesoft/BartyCrouch", "url": "https://github.com/Flinesoft/BartyCrouch/issues/99" }
gharchive/issue
'too many arguments -- limit is 4096' exception

Hi! I am trying to test this in my project, which contains over 7000 files. After running a command like `bartycrouch code -p Classes/ -l .`, I got this exception:

```
Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'too many arguments (7102) -- limit is 4096'.
```

It seems to be an OS limit? How can I deal with this situation? Thank you.

Hi there and wow, that's many files! I didn't even know there is a file limit; the big question is where that file limit arises. I should definitely dive into this question some time. But for the time being, an easy solution would be to separate those files into at least two different subfolders and run BartyCrouch twice, once for every subfolder.

@lazie I just validated this issue and it actually happens and can't be fixed very easily. I will try to find a nice solution though. I have a workaround in mind that will probably work, but it's quite hacky, so I'll wait and search for a better solution first. I'll keep you updated.

I currently unfortunately also experience this. `bartycrouch update -x` gives me:

```
2019-05-22 09:32:58.741 bartycrouch[47331:688397] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'too many arguments (7167) -- limit is 4096'
*** First throw call stack:
(
  0   CoreFoundation    0x00007fff53352cfd __exceptionPreprocess + 256
  1   libobjc.A.dylib   0x00007fff7d9fca17 objc_exception_throw + 48
  2   CoreFoundation    0x00007fff53352b2f +[NSException raise:format:] + 201
  3   Foundation        0x00007fff55545619 -[NSConcreteTask launchWithDictionary:error:] + 773
  4   bartycrouch       0x000000010d719dac $s8SwiftCLI4TaskC6launch030_19E0ACB72F1C972020BFBD69850F9J1FLLyyF + 364
  5   bartycrouch       0x000000010d719949 $s8SwiftCLI4TaskC7runSyncs5Int32VyF + 25
  6   bartycrouch       0x000000010d71c9e9 $s8SwiftCLI3run_9arguments9directoryySS_SaySSGSSSgtKFTf4xnn_n + 393
  7   bartycrouch       0x000000010d718bd9 $s8
[1]    47331 abort      bartycrouch update -x
```

`bartycrouch lint -x` gives me:

```
Starting Task 'Lint' ...
[1]    47453 illegal hardware instruction  bartycrouch lint -x
```

Currently kinda stuck and had to disable BartyCrouch until this is fixed.

Yeah, this is still planned; the fix isn't trivial though, so it might take a while. Until then, instead of disabling BartyCrouch, you could also run it separately for different folders in your project, each of which doesn't exceed the limit.

Hey everyone, I just released version 4.1.1 of BartyCrouch which attempts to fix this issue. Please try it out and report any issues you might run into. It should be available soon on Homebrew, or if you can't wait, just install it with Mint: mint install Flinesoft/BartyCrouch
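A sketch of the per-subfolder workaround suggested above; the folder layout is assumed, and the command form is taken from this thread:

```bash
# Run BartyCrouch once per subfolder so no single invocation passes
# more than the 4096-argument limit to the underlying task.
for dir in Classes/*/; do
  bartycrouch code -p "$dir" -l .
done
```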
2025-04-01T04:10:25.109813
2015-02-14T23:52:05
57711322
{ "authors": [ "drodriguez", "ryanolsonk" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14184", "repo": "Flipboard/FLEX", "url": "https://github.com/Flipboard/FLEX/pull/45" }
gharchive/pull-request
Add editor for NSDate values.

The argument input for dates is a UIDatePicker, set to use the Gregorian calendar and the UTC time zone (the locale is still the current one). Unfortunately, UIDatePicker doesn't give an option for showing seconds. Works in the object explorer and the defaults explorer.

The changes around ArgumentInputViewFactory and DefaultEditorVC allow introspecting the value for its class and showing the right editor (otherwise the JSON editor is used by default).

This change is awesome. Thank you! Merging.

Thanks!
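A minimal sketch of the picker configuration described above; this is illustrative, not the PR's actual code:

```objc
#import <UIKit/UIKit.h>

// Configure a date picker with a Gregorian calendar and UTC time zone,
// while leaving the locale at the current default.
UIDatePicker *picker = [[UIDatePicker alloc] init];
picker.calendar = [NSCalendar calendarWithIdentifier:NSCalendarIdentifierGregorian];
picker.timeZone = [NSTimeZone timeZoneWithName:@"UTC"];
```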
2025-04-01T04:10:25.118853
2017-04-29T10:02:08
225247030
{ "authors": [ "FloEdelmann", "fxedel" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14185", "repo": "FloEdelmann/open-fixture-library", "url": "https://github.com/FloEdelmann/open-fixture-library/pull/105" }
gharchive/pull-request
Switch to shrink-ray for compression

Closes #87. This enables the brotli and zopfli compression algorithms for supported browsers.

Travis is failing due to build errors of the node-brotli dependency. There are prebuilt binaries available, but apparently their location has changed. I think we'll just have to retry later, maybe with an updated version of shrink-ray.

The problem is caused by zopfli, not brotli. But zopfli's documentation says:

> Compress gzip files 5% better compared to gzip. It is considerably slower than gzip (~100x) so you may want to use it only for static content and cached resources.

So we would also have to implement a caching solution, or else only few files could be compressed... I'm not sure whether it's worth taking that effort to optimize transferred data size, which isn't causing problems at the moment, to gain only 5% optimization.

Since this doesn't seem to be fixed anytime soon and your point is valid, I'll close this now.
2025-04-01T04:10:25.126718
2016-05-17T11:14:03
155234551
{ "authors": [ "HM-Wen", "stamariajerome" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14186", "repo": "Floobits/atom-term3", "url": "https://github.com/Floobits/atom-term3/issues/59" }
gharchive/issue
Package Installation

Installing term3 to C:\Users\Jerome Sta. Maria\.atom\packages failed:

```
<EMAIL_ADDRESS> install C:\Users\JEROME~1.MAR\AppData\Local\Temp\apm-install-dir-116417-2724-11vry7u\node_modules\term3\node_modules\ptyw.js
node-gyp rebuild

C:\Users\JEROME~1.MAR\AppData\Local\Temp\apm-install-dir-116417-2724-11vry7u\node_modules\term3\node_modules\ptyw.js>if not defined npm_config_node_gyp (node "C:\Users\Jerome Sta. Maria\AppData\Local\atom\app-1.7.3\resources\app\apm\node_modules\npm\bin\node-gyp-bin\..\..\node_modules\node-gyp\bin\node-gyp.js" rebuild ) else (node rebuild )
npm WARN deprecated <EMAIL_ADDRESS>: react-tools is deprecated. For more information, visit https://fb.me/react-tools-deprecated
npm WARN deprecated <EMAIL_ADDRESS>: graceful-fs v3.0.0 and before will fail on node releases >= v7.0. Please update to <EMAIL_ADDRESS> as soon as possible. Use 'npm ls graceful-fs' to find it in the tree.
npm ERR! Windows_NT 6.2.9200
npm ERR! argv "C:\Users\Jerome Sta. Maria\AppData\Local\atom\app-1.7.3\resources\app\apm\bin\node.exe" "C:\Users\Jerome Sta. Maria\AppData\Local\atom\app-1.7.3\resources\app\apm\node_modules\npm\bin\npm-cli.js" "--globalconfig" "C:\Users\Jerome Sta. Maria\.atom\.apm\.apmrc" "--userconfig" "C:\Users\Jerome Sta. Maria\.atom\.apmrc" "install" "C:\Users\JEROME~1.MAR\AppData\Local\Temp\d-116417-2724-eahpkq\package.tgz" "--target=0.36.8" "--arch=ia32"
npm ERR! node v0.10.40
npm ERR! npm  v2.13.3
npm ERR! code ELIFECYCLE
npm ERR! <EMAIL_ADDRESS> install: node-gyp rebuild
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the <EMAIL_ADDRESS> install script 'node-gyp rebuild'.
npm ERR! This is most likely a problem with the ptyw.js package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     node-gyp rebuild
npm ERR! You can get their info via:
npm ERR!     npm owner ls ptyw.js
npm ERR! There is likely additional logging output above.
```

Does anyone know why I'm getting this error when I try to install term3 via the CLI?

Did you fix this problem yet? I came across the same problem as yours. I have installed graceful-fs@4+, and react-tools is deprecated. What can I do???
2025-04-01T04:10:25.146467
2024-05-02T15:02:01
2275768606
{ "authors": [ "Steve-Mcl", "knolleary" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14187", "repo": "FlowFuse/nr-project-nodes", "url": "https://github.com/FlowFuse/nr-project-nodes/pull/69" }
gharchive/pull-request
Add a 2 minute session expiry on the mqtt connection

Fixes #68

This updates the project nodes to connect with a non-clean session, meaning session state is not immediately expired when the node disconnects. Alongside this, it sets the session expiry to 2 minutes so we don't queue up messages indefinitely.

The choice of a 2 minute session expiry means we'll handle brief connectivity blips without discarding messages. If the project nodes are disconnected for more than 2 minutes, we'll stop queuing up messages for them (something we didn't do at all previously).

Pulling and testing with e2e flows that operate across the broker.
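For illustration only (this is not the PR's code, and the broker URL is a placeholder), an MQTT v5 connection with these semantics would look roughly like this with MQTT.js:

```javascript
const mqtt = require('mqtt')

// Connect with a persistent (non-clean) session so the broker briefly
// keeps queuing messages across reconnects, but expire that session
// after 2 minutes so messages are not queued indefinitely.
const client = mqtt.connect('mqtts://broker.example.com', {
  protocolVersion: 5,
  clean: false,
  properties: {
    sessionExpiryInterval: 120 // seconds
  }
})
```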
2025-04-01T04:10:25.148583
2024-02-13T23:43:32
2133310250
{ "authors": [ "HenryHengZJ", "MMX-cpu" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14188", "repo": "FlowiseAI/Flowise", "url": "https://github.com/FlowiseAI/Flowise/issues/1724" }
gharchive/issue
[BUG] Trying to run the vector upsert feature, getting error 401

Trying to run the vector upsert feature and getting:

```
Error: command "insertmany" failed with the following errors: [{"message":"Request failed with status code 401"}]
```

Running OpenAI embeddings with AstraDB, using valid credentials.

Should be resolved with this PR: https://github.com/FlowiseAI/Flowise/pull/2071
2025-04-01T04:10:25.149868
2023-11-11T23:13:04
1989152262
{ "authors": [ "HenryHengZJ", "Zochory" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14189", "repo": "FlowiseAI/Flowise", "url": "https://github.com/FlowiseAI/Flowise/pull/1217" }
gharchive/pull-request
Removal of older models from OpenAI

https://platform.openai.com/docs/models/

I think these models are marked as legacy, but they can still be used. We can remove them once they are completely removed from OpenAI.

Closing for now; will revisit once they are completely out, as some people are still using these models.
2025-04-01T04:10:25.155636
2024-10-07T12:02:40
2570242282
{ "authors": [ "HenryHengZJ", "PylotLight", "thiagolealassis" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14190", "repo": "FlowiseAI/Flowise", "url": "https://github.com/FlowiseAI/Flowise/pull/3317" }
gharchive/pull-request
Optimize basename handling and ensure dynamic runtime definition (custom basepath)

As part of this contribution, I've implemented a series of modifications in Flowise to allow it to support a custom BasePath (which corresponds to the basename property in React Router). This is an essential feature for enabling deployments where the application needs to run under a specific path, such as behind a reverse proxy like NGINX, without requiring changes to the underlying Flowise backend.

The main challenge was that Flowise's frontend did not natively support running under a BasePath. Without this capability, any attempt to deploy Flowise under a custom path (e.g., https://flowise.domain.com/custom/chatflows) would break the application, as all assets and routes would assume the root URL. In environments where Flowise needs to coexist with other services on the same domain, this becomes a significant limitation.

To solve this, I introduced logic to dynamically set the basename in React's router based on a configurable BasePath. The change was isolated to the frontend, where the BasePath can now be managed without impacting the backend. Specifically, I introduced a mechanism to substitute the %BASE_HREF% variable in index.html at runtime, allowing the application to correctly resolve assets and routes when deployed under any given path.

On the backend side, I ensured that this works seamlessly with NGINX, where NGINX strips the BasePath from the request and forwards it to Flowise as usual. This solution ensures that Flowise can now be deployed with a custom BasePath, enabling greater flexibility for deployments in various environments, such as microservices architectures or multi-tenant setups.

This feature is crucial for improving Flowise's adaptability in enterprise-level deployments, where the application may need to share a single domain with other services. It also helps avoid potential conflicts and ensures that Flowise remains robust and scalable in a broader range of scenarios. I'm confident that this enhancement will significantly benefit teams looking to integrate Flowise into more complex infrastructures, and I encourage its adoption to make Flowise even more versatile in production environments.

Thanks @thiagolealassis! I tried to set BASE_HREF as /flowise. When I open http://localhost:3000/flowise, I get an error. Are you able to reproduce?

We also have this issue, as the assets are trying to load from a root path which is inaccessible. Can we finish off this PR to get this to work, or is there another way to get a custom prefix/basepath for assets to work?
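To make the NGINX part concrete, here is a sketch of the kind of reverse-proxy block the description implies; the /flowise/ prefix and upstream port are assumptions for illustration:

```nginx
# Serve Flowise under a /flowise/ prefix; the trailing slash on
# proxy_pass strips the matched prefix before forwarding, matching the
# "NGINX strips the BasePath" behaviour described above.
location /flowise/ {
    proxy_pass http://localhost:3000/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```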
2025-04-01T04:10:25.287724
2018-08-10T12:03:08
349489688
{ "authors": [ "ali-hardan", "looknear" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14191", "repo": "FolioReader/FolioReader-Android", "url": "https://github.com/FolioReader/FolioReader-Android/issues/252" }
gharchive/issue
Is there a free FolioReader-based app for normal users?

Does a free app based on FolioReader exist for normal users who simply want to read epubs?

Yes, here is a sample: https://www.e-vrit.co.il/
2025-04-01T04:10:25.294354
2018-02-06T21:26:12
294918112
{ "authors": [ "ShaMan123", "atroppe" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14192", "repo": "Foliotek/Croppie", "url": "https://github.com/Foliotek/Croppie/issues/466" }
gharchive/issue
Resizer Touch events

I downloaded an instance of Croppie a week ago. The touch resize events didn't work. After inspection I found that the code was missing, but here I see it. Is this new?

```js
if (vr) {
    vr.addEventListener('mousedown', mouseDown);
    vr.addEventListener('touchstart', mouseDown);
}
if (hr) {
    hr.addEventListener('mousedown', mouseDown);
    hr.addEventListener('touchstart', mouseDown);
}
```

I am also struggling with this issue. I came across a PR in which these lines of code were added to the Croppie library, but it does not seem to be present in the latest released version. Any update as to which version of Croppie is needed in order to support touch events?

I abandoned this library for a different solution. However, it's easy to fix.

I tried editing my local copy but it did not work. I am using the Angular package referenced above, so I suspect I would need the Angular package to support the version released after the change was made to the Croppie library.

Check out my PR. Changed files: croppie.js, croppie.css. This is an old version though; it's been a long time since I worked with it. Hope it helps. I have no clue about Angular.
2025-04-01T04:10:25.299114
2021-06-05T07:39:27
912137575
{ "authors": [ "let4be" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14193", "repo": "Folyd/robotstxt", "url": "https://github.com/Folyd/robotstxt/issues/4" }
gharchive/issue
Document test dependencies

While following README.md and trying to build and run the official Google tests, I get this error on the make stage:

```
/usr/bin/ld: cannot find -labsl::container
collect2: error: ld returned 1 exit status
```

What should I install in order to fix this and run the tests?

```
uname -a
5.8.0-53-generic #60-Ubuntu SMP Thu May 6 07:46:32 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
```

Hm... on another try, during `cmake ..`, it gives me this:

```
-- Found Python: /usr/bin/python3.8 (found version "3.8.6") found components: Interpreter
-- Configuring done
CMake Error at CMakeLists.txt:62 (add_executable):
  Target "robots-test" links to target "absl::container" but the target was
  not found. Perhaps a find_package() call is missing for an IMPORTED
  target, or an ALIAS target is missing?
```

I found this in CMakeLists.txt.in:

```cmake
ExternalProject_Add(abseilcpp
  GIT_REPOSITORY https://github.com/abseil/abseil-cpp.git
  GIT_TAG master
  GIT_PROGRESS 1
  SOURCE_DIR "${CMAKE_CURRENT_BINARY_DIR}/libs/abseil-cpp-src"
  BINARY_DIR "${CMAKE_CURRENT_BINARY_DIR}/libs/abseil-cpp-build"
  CONFIGURE_COMMAND ""
  BUILD_COMMAND ""
  INSTALL_COMMAND ""
  TEST_COMMAND ""
)
```

It's never a good idea to link to the master branch of a git repository; you should always pin the versions of your dependencies.

Changing https://github.com/Folyd/robotstxt/blob/d46c028d63f15c52ec5ebd321db7782b7c033e81/tests/CMakeLists.txt.in to:

```cmake
ExternalProject_Add(abseilcpp
  GIT_REPOSITORY https://github.com/abseil/abseil-cpp.git
  GIT_TAG 20200923.2
  GIT_PROGRESS 1
  SOURCE_DIR "${CMAKE_CURRENT_BINARY_DIR}/libs/abseil-cpp-src"
  BINARY_DIR "${CMAKE_CURRENT_BINARY_DIR}/libs/abseil-cpp-build"
  CONFIGURE_COMMAND ""
  BUILD_COMMAND ""
  INSTALL_COMMAND ""
  TEST_COMMAND ""
)

ExternalProject_Add(googletest
  GIT_REPOSITORY https://github.com/google/googletest.git
  GIT_TAG release-1.10.0
  GIT_PROGRESS 1
  SOURCE_DIR "${CMAKE_CURRENT_BINARY_DIR}/libs/gtest-src"
  BINARY_DIR "${CMAKE_CURRENT_BINARY_DIR}/libs/gtest-build"
  CONFIGURE_COMMAND ""
  BUILD_COMMAND ""
  INSTALL_COMMAND ""
  TEST_COMMAND ""
)
```

and changing https://github.com/Folyd/robotstxt/blob/d46c028d63f15c52ec5ebd321db7782b7c033e81/tests/CMakeLists.txt#L63 to:

```cmake
target_link_libraries(robots-test absl::base absl::container absl::strings gtest_main dl)
```

fixed the issue!
2025-04-01T04:10:25.307475
2024-01-26T10:28:19
2101968123
{ "authors": [ "Bonajo", "homberghp" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14194", "repo": "FontysVenlo/codestripper", "url": "https://github.com/FontysVenlo/codestripper/issues/10" }
gharchive/issue
Allow all tags to have a payload, so all can drop a replacement for self

You are still tied to the old Java CodeStripper tags for the replacewith feature. Implementing this could drop that need. If you allow all tags to have a payload by default, e.g.

```
//cs:remove:start:// TODO added
```

then you can easily (with a regex ;-)) drop the old-style tags from all solutions. That is, replace all

```
//_Start Solution::replacewith::_//TODO
```

with

```
//cs:remove:start://TODO
```

where the regex would be ///Start Solution::replacewith::/. A sed -i script can then do the trick (a sketch is given at the end of this thread).

There is built-in support for the legacy tags. The reason I don't support a payload on the remove tag (for now) is because having tags at the start and the end makes it difficult to see what is being replaced. I do agree that if there is only one line that gets added, it is easier to have a payload.
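A sketch of the sed migration alluded to above; the tag spellings are adapted from this issue, and the file glob is an assumption:

```bash
# Rewrite the legacy CodeStripper tag to the new payload-carrying form,
# in place, across all Java sources.
find . -name '*.java' -exec \
  sed -i 's|//_Start Solution::replacewith::_//|//cs:remove:start://|g' {} +
```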
2025-04-01T04:10:25.316258
2021-11-16T19:59:15
1055267683
{ "authors": [ "cmungall", "ddooley" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14195", "repo": "FoodOntology/foodon", "url": "https://github.com/FoodOntology/foodon/issues/163" }
gharchive/issue
incorrect equivalence axiom for hominy

Discovered by @wdduncan in the ENVO build: https://github.com/EnvironmentOntology/envo/pull/1240

http://purl.obolibrary.org/obo/FOODON_03310226

hominy equivalentTo 'corn food product' or 'corn (on-the-cob, kernel or parts)'

This is clearly incorrect and is causing equivalence between named classes.

My immediate recommendation is that after fixing this, use robot to fail fast if equivalencies are detected. See http://robot.obolibrary.org/reason#equivalent-class-axioms and https://douroucouli.wordpress.com/2018/09/04/debugging-ontologies-using-owl-reasoning-part-2-unintentional-entailed-equivalence/

My next recommendation would be to be very careful with equivalence axioms:

- Only use them if the term conforms to a documented DP
- If the term doesn't have an "or" in the label, be suspicious of any class forming an "A or B" equivalence axiom

See also https://douroucouli.wordpress.com/2019/07/29/ontotip-dont-over-specify-owl-definitions/ and https://douroucouli.wordpress.com/2019/07/08/ontotip-write-simple-concise-clear-operational-textual-definitions/, specifically S11: it is a red flag if text and logical definitions don't match.

> Hominy is a food produced from dried maize (corn) kernels that have been treated with an alkali, in a process called nixtamalization (nextamalli is the Nahuatl word for "hominy").

!= 'corn food product' or 'corn (on-the-cob, kernel or parts)'

Consider a DP for produced-by-process. Remember it's better to have no equivalence axiom than an incorrect one.

I think this is a case of two errors - the "or" is supposed to be an "and", and the equivalency mistakenly got applied to the wrong entity. This equivalency is removed; I'll get it rolled out to OntoBee etc. Thanks for the robot tip; I'll explore what the equivalency report yields. I'm thinking we want inferred equivalencies but not explicit ones, which I presume occur when one accidentally puts a disjunction between atomic entities in the equivalency expression.

Also, I've run the robot equivalency report and found a number of other equivalency errors related to the new plant_part_import.owl file we did, so thanks for that tip. Going forward, releases will be tested against this report.
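For reference, a sketch of the kind of ROBOT invocation the recommendation points at; the input filename is assumed, and the flags are per the linked robot documentation:

```bash
# Fail the build if the reasoner entails any equivalence between
# named classes.
robot reason --reasoner ELK \
  --equivalent-classes-allowed none \
  --input foodon-edit.owl \
  --output reasoned.owl
```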
2025-04-01T04:10:25.317319
2021-02-05T11:03:18
802071415
{ "authors": [ "forty-rachel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14196", "repo": "FoodStandardsAgency/lit-fetch", "url": "https://github.com/FoodStandardsAgency/lit-fetch/issues/70" }
gharchive/issue
Consider moving the preview to before the filter

Will need to think about how this works - probably need a preview button.

By this I meant specifically on the Welcome page - probably clear, but I just didn't want to accidentally cause work!
2025-04-01T04:10:25.325050
2023-02-19T19:12:02
1590817050
{ "authors": [ "ForJadeForest", "matveymotyvin" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14198", "repo": "ForJadeForest/ImageSearchLightningCLIP", "url": "https://github.com/ForJadeForest/ImageSearchLightningCLIP/issues/1" }
gharchive/issue
Model

What model architectures have you used to encode text and images?

Hi, thanks for your interest in this work. The distillation model is a small ViT and a small transformer. These model architectures are similar to the original CLIP model (e.g. ViT-B/32). By the way, the distillation code is in the master branch, and the code for the app is in the main branch. You can check out the master branch to see the implementation details. Thanks!

Hello, can you share the full version of the code to start the model distillation process on a computer (for experiments)? I would be very grateful if you could send it by email: <EMAIL_ADDRESS>

Sure, but it will take me some time to recall the code, and I will reorganize a new version of the code in this repository.

Thank you so much, I will be waiting!!!

Now you can check the new version of the code! If you have any further questions, please feel free to ask 👏.
2025-04-01T04:10:25.328074
2023-12-15T12:13:42
2043623213
{ "authors": [ "ForNeVeR", "Lehonti", "ygra" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14199", "repo": "ForNeVeR/xaml-math", "url": "https://github.com/ForNeVeR/xaml-math/pull/480" }
gharchive/pull-request
Refactored to do away with reassignment in SourceSpan.GetHashCode

This is an idea for how we could get rid of variable reassignments in the future, making the code more functional (as opposed to procedural).

Do you really believe it's worth changing this? I even have an argument in favor of the previous approach: it makes the code linear, almost as if the type had some linearity (FP-oriented folks tend to love linear types, don't they?). I mean that you cannot access the previous value of the hash code after each line, so you are sure you haven't mixed up the step values. Also, adding a new member in the middle is trivial with the old approach, but would require changing the variable names in the new one. And it may be my bias, but I find the old approach a bit easier to read. In the new one, I had to eyeball all the lines to check that step1 wasn't reused in the calculation of step3, which wasn't even possible in the old approach.

So far, I tend to reject this change, though I am open to further discussion if you see other cases where this refactoring would be beneficial.

In this particular case I'd say HashCode.Combine(Length, Source, SourceName) is probably the clearest, although it requires referencing Microsoft.Bcl.HashCode on .NET Framework.
2025-04-01T04:10:25.377812
2024-03-18T11:31:43
2191953720
{ "authors": [ "cburstedde", "donnaaboise" ], "license": "bsd-2-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14200", "repo": "ForestClaw/forestclaw", "url": "https://github.com/ForestClaw/forestclaw/pull/326" }
gharchive/pull-request
(gauges) Don't overcount gauges

Gauges on the boundary between patches or blocks were getting over-counted. Change comparisons to xll <= g.x < xur. This may mean that gauges exactly on an upper boundary will not get counted.

This is a bit puzzling since our search code makes sure that each point is found on at most one process and at most one patch. Would you be able to double-check our domain_search_points code @hannesbrandt? https://github.com/ForestClaw/forestclaw/pull/326

I don't think it is a problem with p4est. I was incorrectly counting a gauge as belonging to more than one block when the gauge appears right on the boundary. I don't think there was any way that p4est could determine this, since we don't tell p4est exactly where the gauge is.
2025-04-01T04:10:25.398773
2015-09-18T14:20:28
107211490
{ "authors": [ "alexlande", "kenwheeler" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14201", "repo": "FormidableLabs/component-playground", "url": "https://github.com/FormidableLabs/component-playground/pull/37" }
gharchive/pull-request
Bringing build/test/lint setup up to our modern standards. Will fix lint in a later PR for clarity.

- Removed gulp
- Using eslint-config-defaults
- Got tests running
- Fixed some prop-based React warnings

cc @baer @alexlande

Rad :+1:
2025-04-01T04:10:25.403867
2016-08-24T11:44:03
172930460
{ "authors": [ "kenwheeler", "simonswiss" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14202", "repo": "FormidableLabs/react-music", "url": "https://github.com/FormidableLabs/react-music/pull/15" }
gharchive/pull-request
Adjusts beatInterval to produce the correct tempo value

Currently, the demo song has a tempo of 190, which translates into an actual 95 BPM. This PR amends this by changing the value of beatInterval in the Song component.

The steps are just double-timed. If you set it to 95, you would just want to reduce your steps, e.g. 4 becomes 2.

Yeah, makes sense. I just thought that for demo purposes it was weird to have a tempo of 190 actually producing a BPM of 95. But yeah, dividing the steps by two or doubling up the resolution does the trick too :)
2025-04-01T04:10:25.409026
2020-09-01T13:54:30
690163969
{ "authors": [ "Corjen", "JoviDeCroock", "artecoop", "kitten", "stern-shawn" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14203", "repo": "FormidableLabs/urql", "url": "https://github.com/FormidableLabs/urql/issues/951" }
gharchive/issue
TypeScript problem with types in exchange-multipart-fetch

Using "urql": "1.10.0", "@urql/exchange-multipart-fetch": "0.1.8" and TypeScript 4.0.2, there is a compile-time error when adding the multipartFetchExchange to the exchanges on createClient:

```
Type 'import("/node_modules/@urql/core/dist/types/types").Exchange' is not assignable to type 'import("/node_modules/urql/node_modules/@urql/core/dist/types/types").Exchange'.
  Types of parameters 'input' and 'input' are incompatible.
    Type 'import("/node_modules/urql/node_modules/@urql/core/dist/types/types").ExchangeInput' is not assignable to type 'import("/node_modules/@urql/core/dist/types/types").ExchangeInput'.
      Types of property 'client' are incompatible.
        Type 'import("/node_modules/urql/node_modules/@urql/core/dist/types/client").Client' is not assignable to type 'import("/node_modules/@urql/core/dist/types/client").Client'.
          Types have separate declarations of a private property 'createOperationContext'.
```

Could you try running yarn why @urql/core or listing out your @urql/core versions with npm? This would imply that you have a duplicate equivalent of @urql/core floating around. The one in urql doesn't seem deduplicated, judging from the error.

That was the issue. npm ci resolved the situation. Thanks an ultra lot @JoviDeCroock

Thank you, @JoviDeCroock! I just ran into the same issue, but with adding cacheExchange and devtoolsExchange, and this saved me from a ton of red squiggles in my IDE 🎉

Hi! Can confirm this is still an issue in a yarn workspace environment. Adding

```json
"workspaces": {
  "nohoist": [
    "@urql/**"
  ]
},
```

to packages/a/package.json fixed it for now :)

@Corjen that may lead to more issues in the future 😅 I'd still instead recommend using yarn-deduplicate, or even a resolution on @urql/core, to make sure that you only have one installation of the package and one entry for it in your lock file.

Never mind 😄 I removed the nohoist property, did a reinstall, and it's working now.
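For completeness, the "resolution" alternative mentioned above would look roughly like this in the root package.json; the version range is a placeholder, not taken from this thread:

```json
{
  "resolutions": {
    "@urql/core": "1.x"
  }
}
```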
2025-04-01T04:10:25.522860
2015-11-24T06:28:36
118541735
{ "authors": [ "Foxandxss", "haruyama" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14205", "repo": "Foxandxss/angular-toastr", "url": "https://github.com/Foxandxss/angular-toastr/issues/143" }
gharchive/issue
0.4.3 requires "angular: '>=1.3.0'" https://github.com/Foxandxss/angular-toastr/blob/0.4.3/bower.json ... "version": "1.6.0", ... "angular": ">=1.3.0" ... Oh, it created that tag from master, that is indeed wrong. Will have to fix it. Created 0.4.4, forgot to bump bower there, but it doesn't really matter. If it is a problem for you, please re-open an issue.
2025-04-01T04:10:25.524032
2018-09-06T19:04:13
357785345
{ "authors": [ "Foxboron", "jelly" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14206", "repo": "Foxboron/devtools-repro", "url": "https://github.com/Foxboron/devtools-repro/issues/38" }
gharchive/issue
repro - Verify installed packages (pacman -Q) versus BUILDINFO packages Currently the tool sets up a build env but does not verify if it's 100% correct; this can be done by verifying the installed packages against the BUILDINFO required installed packages. Fixed with #40 Whoops, wrong closed issue :o
2025-04-01T04:10:25.554405
2020-01-28T02:03:41
555940617
{ "authors": [ "Frando" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14209", "repo": "Frando/hypercore-protocol-rs", "url": "https://github.com/Frando/hypercore-protocol-rs/issues/1" }
gharchive/issue
Let rust and node handshake The handshake does not work at the moment. See this issue in datrs/hypercore for a high-level overview. This issue is only concerned with the handshake. I managed to track down where exactly the handshake fails: It's always the client (initiator) that crashes. And both in Rust and in Node it happens at the same place. It happens when receiving the S token and then calling into the SymmetricState and its DecryptAndHash function. There, the cipher's DecryptWithAd function is called, and this decryption fails. In Rust, the error occurs here in symmetricstate.rs when called from handshakestate.rs when receiving the S token In Node, the error occurs here in symmetric-state.js when called from handshake-state.js when receiving the S token So - either the input parameters to the decrypt function are different, or the XChaCha20 impls differ. I managed to get this to work :sweat_smile: still not with the released versions, but it works! This was fixed quite a while ago
2025-04-01T04:10:25.595465
2024-04-21T10:19:02
2255004041
{ "authors": [ "luziusmeisser", "samclassix" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14210", "repo": "Frankencoin-ZCHF/frankencoin-dapp", "url": "https://github.com/Frankencoin-ZCHF/frankencoin-dapp/issues/34" }
gharchive/issue
Too many requests We should move to the latest version of wagmi, monitor the number of requests we are issuing and check with walletconnect why we see this error. Strangely, our API usage in cloud.walletconnect.com is 0 despite the url having the correct project id in its requests: It also looks like some requests connect to cloudflare-eth.com, which is the default provider for viem, which is often used in concert with wagmi, but apparently requires its own configuration. These are my discoveries so far Currently, there are some adverse side effects associated with fetching data or making API calls within React components. React components should "primarily" focus on rendering and remain detached from such tasks. Whenever React refreshes a component that implements data fetching or API calls, it automatically triggers these actions, leading to potential issues. At a certain scale, Wagmi encountered a '429 Too Many Requests' error response. As a workaround, it switched to the backup service provided by cloudflare-eth.com, attempting to request the same data as before. When might our data have changed? Given our reliance on the Ethereum blockchain, any relevant changes to our data could occur with each new block. Wagmi implements a roughly 5-second fetching interval for the latest block height (eth_blockNumber): { "jsonrpc": "2.0", "id": 548, // request id "method": "eth_blockNumber" } Component called BlockUpdater To stay updated with the latest block height changes, let's introduce a new component called BlockUpdater. This component will execute a one-time solution for fetching new block states for the application, employing either a Full Update or Lazy Loading approach. import { useBlockNumber } from "wagmi"; import { useEffect } from "react"; export default function BlockUpdater() { const {error, data} = useBlockNumber(); useEffect(() => { if (error) return; console.log(`New block found: ${data}`); // call redux slice A fetch // call redux slice B fetch // call redux slice C fetch // call redux slice D fetch }, [error, data]) return <></>; } Get state from store Each component requiring access to the Redux store retrieves data from the store or slices using the useSelector hook and an object tree parameter, e.g.: import React from 'react' import { useSelector } from 'react-redux' export function Counter() { const count = useSelector(state => state.counter.value) ... We could consider the following steps: Implement Redux Store - A store holds the whole state tree of your application. (https://redux.js.org/api/store) Implement Redux Slices - The state produced by combineReducers() namespaces the states of each reducer under their keys [...] (https://redux.js.org/api/combinereducers#state-slices) Remove/detach data fetching from components and rely on the Redux Store for state rendering Each component needs to access the Redux state and refrain from directly pulling data within itself. Transition to global state management rather than component-specific states. Those changes will eliminate the constant and unnecessary fetching of data, which is highly likely to have remained unchanged. Update Extended Ponder Backend for positions indexing. Integrated Redux Store in dApp Integrated Redux Slice for positions, 1 request via apollo-client (no-hook) filter, open, closed, denied, original/clones, collateral Integrated Redux Slice for prices (ERC20Info type with coingecko data, UPSERT policy) Init Redux Slice for Account, used for user address, account data, allowance, ... A minimal Redux Toolkit sketch of such a slice follows below.
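As a concrete illustration of the slice step above, here is a minimal Redux Toolkit sketch — the slice name, state shape, and Position fields are hypothetical, not taken from the actual dApp code:

```ts
import { createSlice, PayloadAction } from "@reduxjs/toolkit";

// Hypothetical position shape; the real dApp's type will differ.
interface Position {
  address: string;
  collateral: string;
  isOriginal: boolean;
}

interface PositionsState {
  list: Position[];
}

const initialState: PositionsState = { list: [] };

const positionsSlice = createSlice({
  name: "positions", // illustrative slice name
  initialState,
  reducers: {
    // Upsert-style policy: replace state only when fetched data changed,
    // so components re-render only on real updates.
    setPositions(state, action: PayloadAction<Position[]>) {
      state.list = action.payload;
    },
  },
});

export const { setPositions } = positionsSlice.actions;
export default positionsSlice.reducer;
```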
Implementation of BlockUpdater component Implementation of BlockUpdater policies, address changes, ERC20Infos changes, each block, 10 blocks, 100 blocks UPSERT: no overwriting, create or update policy, only dispatch and update state if changed EACH BLOCK Update Policy upsert positions upsert account data, if address is defined 10 BLOCKS Update Policy upsert coingecko prices ERC20Infos aka. all relevant ERC20 token addresses atm: all mintable (ZCHF), all collateral tokens used in positions upsert ERC20Infos triggers -> prices updates ADDRESS CHANGES Update Policy upsert account data (a sketch of wiring these policies into the effect is shown below) Improvement Before sometimes hundreds of requests requesting the same data requesting data which didn't change again react render management for components which fetch data triggered in a component components trigger data fetch too many requests error message After 1 request per block, via BlockUpdater, to own Ponder Indexer Service fetches all position data at once, due to extending Ponder Indexer Service 1 request per block, for account data (NOT READY YET), via Ponder Indexer or SmartContract calls 6 requests per 10 blocks, for coingecko data, upsert policy The performance improvements are enormous! Have a look at the network traffic (timing scale) Most of the time just fetching "eth_blockNumber" Just 6 requests to coingecko (could also be just ONE, but the free tier doesn't allow it) Positions filtered by collateral and originals No need to fetch data at the positions page, and you have access to all position data stored in the redux store. Open Positions By Collateral (Best for overview) Open Positions By Originals (Best for circular dependency between originals and clones) Original Clone A Clone B Clone C ... Network Traffic over time 25min time frame constant request flow, due to controlled requesting of updates ca. 150kB transferred 6 requests to coingecko only Controlled Fetching and Enforced Update Policies should be solved with the current state of the dev branch. As soon as it's stable and testing has gone well, it should be applied to main to stop flooding with requests until the API access key limits are exhausted. If limits are reached, testing on dev does not work either, because the main and dev applications share the same API keys. @luziusmeisser can we have a separate WalletConnect API key for the dev namespace? It would improve development and testing by separating the two namespaces by API key as well. Also good for tracking. Maybe we should move all API calls to Alchemy so we have all the stats in one place? Feel free to set up different keys there (production frontend, dev frontend, production ponder, dev ponder).
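A sketch of how the block-interval policies above could be wired into the BlockUpdater effect — the thunk names are hypothetical, and this assumes a wagmi version where useBlockNumber returns a bigint:

```tsx
import { useEffect } from "react";
import { useAccount, useBlockNumber } from "wagmi";
import { useDispatch } from "react-redux";
// Hypothetical slice thunks; the real dApp's action names will differ.
import { fetchAccount, fetchPositions, fetchPrices } from "./slices";

export default function BlockUpdater() {
  const dispatch = useDispatch();
  const { address } = useAccount();
  const { data: blockNumber, error } = useBlockNumber({ watch: true });

  useEffect(() => {
    if (error || blockNumber === undefined) return;
    dispatch(fetchPositions()); // EACH BLOCK policy
    if (address) dispatch(fetchAccount(address)); // each block, if connected
    if (blockNumber % 10n === 0n) dispatch(fetchPrices()); // 10 BLOCKS policy
  }, [error, blockNumber, address, dispatch]);

  return <></>;
}
```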
2025-04-01T04:10:25.617881
2024-05-09T19:44:09
2288339562
{ "authors": [ "Sanaz01", "sckott", "tefirman" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14211", "repo": "FredHutch/shiny-cromwell", "url": "https://github.com/FredHutch/shiny-cromwell/issues/94" }
gharchive/issue
Download Troubleshoot output Can there be a feature to download the output of "Troubleshoot a Workflow" in the Troubleshoot tab? See also #7 (b/c both are about downloading details of that page/section) Should be easy enough, though I think we'd want to add this as we change interfaces across the app to download outputs. Still think this is medium priority, but I wonder if implementing this will become easier after the GUI update associated with v1.2 (i.e. as a visualization rather than a download), so I'm setting this as v1.3 for now. Yeah, I think there's an opportunity to help the user troubleshoot in the browser, let's see what we can do
2025-04-01T04:10:25.755180
2016-09-09T17:26:03
176063354
{ "authors": [ "dhcodes", "oneofmygithub" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14212", "repo": "FreeCodeCamp/FreeCodeCamp", "url": "https://github.com/FreeCodeCamp/FreeCodeCamp/issues/10609" }
gharchive/issue
Why is this failing? Challenge [iterate-over-arrays-with-map] var oldArray = [1,2,3,4,5]; // Only change code below this line. var newArray = oldArray.map(function(val){ return val + 3; }); @oneofmygithub I believe this is a duplicate of issue #10548. Thanks for reporting. Closing as duplicate. Yes, thank you. I know it's a duplicate, but I can't find the answer in the duplicate issue. @oneofmygithub that's because it's a bug in the challenge. It won't work correctly until the issue is fixed. I referenced issue #10548 so you could follow up there as it is worked on. I see. Thanks
2025-04-01T04:10:25.759883
2016-09-24T08:57:28
179020006
{ "authors": [ "BKinahan", "RebusGlider", "atjonathan", "hallaathrad", "zarruk" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14213", "repo": "FreeCodeCamp/FreeCodeCamp", "url": "https://github.com/FreeCodeCamp/FreeCodeCamp/issues/10892" }
gharchive/issue
Text typo Challenge Use the Twitchtv JSON API has an issue. User Agent is: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36. "UPDATE: Due to a change in conditions on API usage explained here, use the sample API response we provide at" The link to the sample API response is missing. @RebusGlider, thanks for catching this! This is weird, I cannot find this sentence in the correct file. cc/ @FreeCodeCamp/issue-moderators Thinking this may be related to https://github.com/FreeCodeCamp/FreeCodeCamp/issues/10740. Please reopen if we're dealing with a different issue. @atjonathan The change, with the link error, is here: https://github.com/FreeCodeCamp/FreeCodeCamp/blob/backup/master/seed/challenges/01-front-end-development-certification/intermediate-ziplines.json#L175 Note the branch. Reopening as help wanted. Note you will need to add the line from master to staging and fix the syntax errors. So this means that we have to complete this challenge without calling the JSON API, using the sample API response they're giving instead? Did I understand correctly?
2025-04-01T04:10:25.772087
2015-08-25T03:36:54
102942219
{ "authors": [ "alyzeia", "bugron" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14214", "repo": "FreeCodeCamp/FreeCodeCamp", "url": "https://github.com/FreeCodeCamp/FreeCodeCamp/issues/2651" }
gharchive/issue
"Your image should be 100 pixels wide" error Challenge http://freecodecamp.com/challenges/waypoint-size-your-images has an issue. Please describe how to reproduce it, and include links to screenshots if possible. "Your image should be 100 pixels wide" is still crossed out even though the preview image is smaller. @alyzeia thanks for posting this issue! I've just passed test with you code. Please see Help I've Found a Bug.
2025-04-01T04:10:25.775545
2016-04-15T12:38:06
148649726
{ "authors": [ "LvntE", "raisedadead" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14215", "repo": "FreeCodeCamp/FreeCodeCamp", "url": "https://github.com/FreeCodeCamp/FreeCodeCamp/issues/8134" }
gharchive/issue
test issue Challenge Prioritize One Style Over Another has an issue. User Agent is: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0. Please describe how to reproduce this issue, and include links to screenshots if possible. My code: <style> body { background-color: black; font-family: Monospace; color: green; } .pink-test { color: pink; } </style> <h1 class="pink-test">Hello World!</h1> I think this should pass the challenge Thanks for reporting this. Your code is incorrect: .pink-test ?? should be something else; refer to the instructions. Please consult the Help Chat Room for any assistance with coding. Happy Coding!
2025-04-01T04:10:25.782001
2015-08-05T16:58:48
99250610
{ "authors": [ "BerkeleyTrue", "Koriban", "benmcmahon100" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14216", "repo": "FreeCodeCamp/freecodecamp", "url": "https://github.com/FreeCodeCamp/freecodecamp/issues/1566" }
gharchive/issue
404 Error on moving between Responsive Design with Bootstrap lessons 1 and 2 Moving from Waypoint: Mobile Responsive Images to Waypoint: Center Text with Bootstrap produces a 404: We couldn't find a challenge with that name. Please double check the name. System Info: Windows 10 Microsoft Edge @Koriban Hey do you have any errors in your browser console? Here ya go :) in a bit of a rush so just a screenie. @Koriban It is as I feared. Edge has issues with web workers so you'll need to use another browser. Sorry about the inconvenience. @benmcmahon100 Let's keep this issue open to track Edge compatibility. Not sure what can be done at this moment. @benmcmahon100 Did you find any relevant articles on M$ Edge and web workers? @BerkeleyTrue Here's the problem, second table -> http://caniuse.com/#search=web workers I see shared web workers are not working, is that the issue? It says it's a security issue which suggests that we are using a shared one. Yeah we have a shared one. The fix is to create a web worker for each script if the user is on Edge Alright, we'll put this on the back burner for now. We won't fix this for now. Hopefully when bonfires get rewritten it will fix this issue but that won't happen for a while
2025-04-01T04:10:25.801538
2015-03-01T11:44:13
59392969
{ "authors": [ "akallabeth", "tomas-korec", "weberhofer" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14217", "repo": "FreeRDP/FreeRDP", "url": "https://github.com/FreeRDP/FreeRDP/issues/2432" }
gharchive/issue
Implementation of errors listed in error.h Hi, I use FreeRDP with Remmina and I think this combination is much better than the Windows RDP Client. So good job! :) However, there are some things I didn't like that much, so I decided to tell you about them, so FreeRDP and Remmina can improve. I think that informing the user about errors that happen can be improved. I have created this issue: https://github.com/FreeRDP/Remmina/issues/498 and giox069 (his comment: https://github.com/FreeRDP/Remmina/issues/498#issuecomment-75594933 ) discovered that FreeRDP returns the same error code for any type of error. Could you implement more error codes, so Remmina can inform the user about the type of error that happened? Thank you! Tomas see also #4555 We're quite limited in finding out what exactly caused an error, but #4555 is a first step and has been integrated today. This has improved, closing.
2025-04-01T04:10:25.804287
2016-09-22T07:15:12
178538907
{ "authors": [ "duhow", "jondo" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14218", "repo": "FreeRDP/FreeRDP", "url": "https://github.com/FreeRDP/FreeRDP/issues/3509" }
gharchive/issue
Help page should document how to leave fullscreen It was difficult for me to find out about Ctrl-Alt-Enter. (I finally found it in #1036). So I suggest extending the corresponding help line to /f Fullscreen mode (leave with Ctrl-Alt-Enter) See https://github.com/FreeRDP/FreeRDP/issues/3071#issuecomment-173616053 for someone else with the same problem. I think this can be closed, as -toggle-fullscreen shows this. @duhow, thanks for the hint, I was not aware of that! However, I suggest also adding this text (Alt+Ctrl+Enter toggles fullscreen) to the help of /f, for better discoverability.
2025-04-01T04:10:25.823670
2019-05-04T12:10:11
440316610
{ "authors": [ "akallabeth", "hardening", "shivamsinghgit" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14219", "repo": "FreeRDP/FreeRDP", "url": "https://github.com/FreeRDP/FreeRDP/issues/5374" }
gharchive/issue
Middle mouse click autoscroll problem - Server 2008 R2 I am facing the auto-scroll problem when taking an RDP session to Windows Server 2008 R2. When clicking on the mouse wheel in a 2008 R2 RDP session, autoscroll is activated in Excel 2007 as expected, but it's not possible to release the scroll, or control the speed. Please provide a solution. Thanks in advance. Seriously? Could you please at least read the issue template? I am no fortune teller and from the few things mentioned can not even guess what your problem is... If you are not understanding the problem then you can ask for clarification; don't behave like you are the boss here, and I don't take tension, this problem is not at your level. hmm, looks like you're correct, sorry. when first reading this your second sentence with the excel description was not there. (sorry, maybe missed it, but was really annoyed by not having that info ;)) anyway, the rest of the information in https://github.com/FreeRDP/FreeRDP/wiki/BugReporting is not meant as punishment for bug reporters but necessary information to reproduce and find a solution, specifically the /buildconfig of your version (what are you actually using?) and your command line (leave out passwords and such sensitive stuff) @akallabeth you're really nice, and BTW you're kinda the boss here ;) Actually I am new on GitHub so really sorry. @shivamsinghgit looks like we started on the wrong track. so, git? no need for using that, just add the missing details about what you're using (aka which freerdp version/build) and how you connect (command line). Actually I am taking the RDP session of Windows Server 2008 R2 and I open Excel 2007 and start auto-scroll using the mouse wheel, and a second later try to stop at a certain point of my Excel, but it's not possible to release the scroll and my machine gets stuck; I can't even perform any other task on that RDP session. any update? still waiting for the missing information... Actually, configure Windows Server 2008 R2 and then take the RDP of that machine. After opening Excel 2007 on that RDP session, press the mouse wheel for auto-scroll.
And after a few seconds try to stop that scroll. In my case I am not able to stop, and I have to close the RDP session to do so. @shivamsinghgit ok, please read the whole thread and the link I sent you again. This is not the information requested. freerdp version you are using (xfreerdp /buildconfig) your command line you use for connecting Microsoft Remote Desktop Connection. I am taking the RDP session from another machine which supports Remote Desktop Protocol 10.2. ok, not responding to questions, closing.
2025-04-01T04:10:25.843791
2018-06-23T14:55:25
335106671
{ "authors": [ "Captainkirkdawson", "richpomfret", "smrr723" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14220", "repo": "FreeUKGen/FreeCENMigration", "url": "https://github.com/FreeUKGen/FreeCENMigration/issues/491" }
gharchive/issue
Search crashes There are approximately 100 crashes per day of the FR2 search app ActionController::UrlGenerationError SearchQueriesController#create 23 Jun 2018, 08:40 Error message ActionController::UrlGenerationError: No route matches {:action=>"show", :controller=>"search_queries", :id=>nil} missing required keys: [:id] Stack trace: ActionController::UrlGenerationError: No route matches {:action=>"show", :controller=>"search_queries", :id=>nil} missing required keys: [:id] …/freecen2/production/app/helpers/application_helper.rb: 201:in `google_advert' /home/apache/hosts/freecen2/production/app/views/search_queries/_form_freecen.html.erb: 194:in `_app_views_search_queries__form_freecen_html_erb__2477835353264903732_17357765060' /home/apache/hosts/freecen2/production/app/views/search_queries/new.html.erb: 14:in `_app_views_search_queries_new_html_erb__2173609002657814540_17357921340' …roduction/app/controllers/search_queries_controller.rb: 72:in `create' Deployed to production. @smrr723 to check the fix in the log before closing. Can't see any errors similar to the above in the passenger.log on colobus, so the fix seems to have worked.
2025-04-01T04:10:25.855125
2023-05-05T01:19:56
1696882624
{ "authors": [ "xldistance" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14221", "repo": "FreedomIntelligence/LLMZoo", "url": "https://github.com/FreedomIntelligence/LLMZoo/issues/25" }
gharchive/issue
Cannot launch web app properly The CLI is running normally, but logging on to http://localhost:21888 shows {"detail":"Not Found"}
2025-04-01T04:10:25.878136
2022-01-27T07:49:39
1115881881
{ "authors": [ "jefim" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14222", "repo": "FrendsPlatform/Frends.Excel", "url": "https://github.com/FrendsPlatform/Frends.Excel/issues/5" }
gharchive/issue
Frends.PowerShell.RunScript NB! The Frends.PowerShell repo already has code for these tasks, but it is older than the community one, so we should overwrite it. The following should be noted: The task should have its own NuGet The task has to have good unit test coverage, at least let's try to see if the host machine can allow this No integration tests are required besides just testing in unit tests No need for performance tests, PowerShell being the main bottleneck We need to add and check SonarQube linting and go through warnings on that Need to set up actions workflows Update the readme with new task names, testing section, and such This was created in the Excel repo by mistake
2025-04-01T04:10:25.899070
2017-03-30T15:26:56
218235778
{ "authors": [ "bravo-kernel", "dereuromark" ], "license": "WTFPL", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14224", "repo": "FriendsOfCake/awesome-cakephp", "url": "https://github.com/FriendsOfCake/awesome-cakephp/pull/258" }
gharchive/pull-request
Add Crud to the REST/API section Since this might not be evident to (new) users, also promote JSON API a bit more. It's only 1 suggestion @VarCI-bot, the other two are whitespace cleaning. It is already in the CRUD section, please move it here then if that is the better location. I realize it is in the skeleton section already, but I feel it deserves to be in two places as it is both a skeleton plugin and an API plugin (IMO). Could an exception for a duplicate entry be tolerated here to better aid the end user? I guess that is reasonable. Thank you kindly, anybody want to merge this?
2025-04-01T04:10:25.904019
2017-07-18T06:24:42
243611624
{ "authors": [ "alcohol", "julienfalque", "keradus" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14225", "repo": "FriendsOfPHP/PHP-CS-Fixer", "url": "https://github.com/FriendsOfPHP/PHP-CS-Fixer/issues/2919" }
gharchive/issue
Deprecation causes unexpected ErrorException The PHP version you are using: <EMAIL_ADDRESS>> php -v PHP 7.0.20 (cli) (built: Jun 23 2017 07:46:30) ( NTS ) Copyright (c) 1997-2017 The PHP Group Zend Engine v3.0.0, Copyright (c) 1998-2017 Zend Technologies with Xdebug v2.5.5, Copyright (c) 2002-2017, by Derick Rethans PHP CS Fixer version you are using: <EMAIL_ADDRESS>> bin/php-cs-fixer -v PHP CS Fixer version 2.3.2 by Fabien Potencier and Dariusz Ruminski Config <EMAIL_ADDRESS>> cat .php_cs.dist <?php $finder = new PhpCsFixer\Finder(); $config = new PhpCsFixer\Config('***'); $finder ->in('src') ->exclude('src/frontend') ; $config ->setRules([ '@PSR2' => true, '@Symfony' => true, '@PHP70Migration' => true, 'concat_space' => ['spacing' => 'one'], 'no_useless_else' => true, 'not_operator_with_space' => true, 'ordered_imports' => ['sortAlgorithm' => 'alpha'], ]) ->setFinder($finder) ; return $config; Command <EMAIL_ADDRESS>> bin/php-cs-fixer fix --allow-risky yes You are running php-cs-fixer with xdebug enabled. This has a major impact on runtime performance. Loaded config *** from "***/.php_cs.dist". Using cache file ".php_cs.cache". [ErrorException] Passing "replacements" at the root of the configuration is deprecated and will not be supported in 3.0, use "replacements" => array(...) option instead. fix [--path-mode PATH-MODE] [--allow-risky ALLOW-RISKY] [--config CONFIG] [--dry-run] [--rules RULES] [--using-cache USING-CACHE] [--cache-file CACHE-FILE] [--diff] [--format FORMAT] [--stop-on-violation] [--show-progress SHOW-PROGRESS] [--] [<path>]... I'm not even using replace myself... I'm guessing the migration fixer does, but why does it throw an exception? @SpacePossum provided a fix for the deprecation notices in built-in rulesets. But I'm still wondering how these silenced notices can still be converted into an ErrorException: the error handler checks the error_reporting level before doing so; this should not occur for silenced errors. I see you are running the tool with Xdebug enabled, is it possible that you have the xdebug.scream setting enabled? I'm able to reproduce the ErrorException issue when enabling this setting. Thanks @SpacePossum for the hint ;) You are correct. I do have xdebug.scream turned on in my current environment it seems. I think the issue is on my side :-) Great to hear, closing due to discussion and opened PR ;)
2025-04-01T04:10:25.905963
2020-10-22T09:50:09
727231862
{ "authors": [ "SpacePossum" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14226", "repo": "FriendsOfPHP/PHP-CS-Fixer", "url": "https://github.com/FriendsOfPHP/PHP-CS-Fixer/pull/5195" }
gharchive/pull-request
YodaStyle - statements in braces should be treated as variables in strict mode closes https://github.com/FriendsOfPHP/PHP-CS-Fixer/issues/5154 There is still another bug: if (($a ?? '') !== 'something') { return; } gets fixed (default config), while if (($a[1] ?? '') !== 'something') { return; } is not
2025-04-01T04:10:25.909277
2024-11-15T14:05:08
2666499376
{ "authors": [ "RafaelKr", "schneider-felix" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14227", "repo": "FriendsOfShopware/shopware-cli", "url": "https://github.com/FriendsOfShopware/shopware-cli/issues/438" }
gharchive/issue
GitHub release note links are broken (404) The links in the release notes follow this scheme: https://github.com/FriendsOfShopware/FroshTools/blob/2.4.2/628f9db But when opening any of those links I'm taken to a GitHub 404 page. See: https://github.com/FriendsOfShopware/FroshTools/releases I created PR #439 that will fix this issue. Sadly, existing changelogs will not be touched by this change. I hope I was able to help anyway.
2025-04-01T04:10:26.004066
2023-09-26T08:19:55
1912964395
{ "authors": [ "Charlie-XIAO", "Connor-Shen", "yueshengbin" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14228", "repo": "FudanDISC/DISC-LawLLM", "url": "https://github.com/FudanDISC/DISC-LawLLM/issues/3" }
gharchive/issue
Consultation on the designed template "These candidate documents, along with the user input, are formulated using our designed template and then fed into the DISC-LawLLM" May I ask whether the template format used here and the specific SFT training method will be published? The details of the retrieved template are in Figure 3 (Model Input), and you can use our dataset to train your model with your own template. Closing as completed.
2025-04-01T04:10:26.024732
2022-08-20T16:49:58
1345221773
{ "authors": [ "kayagokalp", "otrho", "simonr0204" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14229", "repo": "FuelLabs/sway", "url": "https://github.com/FuelLabs/sway/issues/2600" }
gharchive/issue
return statement with struct fails to compile If s is a struct, then the statement return s; fails to compile, with Internal compiler error: Verification failed: Function doesnt_compile return type must match its RET instructions. However, returning an expression s compiles. Repro here Note that this appears to have been introduced as of 0.20.0 Out of curiosity, I did a little investigation on this one. Seems like this issue popped up after merging #2363. Following the provided repro, I saw that in verify_ret we are trying to compare two types (one of them Type::Pointer and the other Type::Struct), which always returns "not equal". Should we get the pointed-to type while we are making a comparison between a Type::Pointer and some other Type?
2025-04-01T04:10:26.029205
2024-02-18T12:28:09
2140964886
{ "authors": [ "CLAassistant", "GearedPaladin" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14230", "repo": "FuelLabs/sway", "url": "https://github.com/FuelLabs/sway/pull/5627" }
gharchive/pull-request
Add missing docs to primitive_conversions library in standard library Description Checklist [ ] I have linked to any relevant issues. [x] I have commented my code, particularly in hard-to-understand areas. [x] I have updated the documentation where relevant (API docs, the reference, and the Sway book). [ ] I have added tests that prove my fix is effective or that my feature works. [ ] I have added (or requested a maintainer to add) the necessary Breaking* or New Feature labels where relevant. [x] I have done my best to ensure that my PR adheres to the Fuel Labs Code Review Standards. [x] I have requested a review from the relevant team or maintainers. Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it. @crodas @bitzoic please review
2025-04-01T04:10:26.050642
2023-01-25T01:40:44
1555928184
{ "authors": [ "EliteMasterEric", "YoshiCrafter29", "doggogit" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14231", "repo": "FunkinCrew/funkin-resources", "url": "https://github.com/FunkinCrew/funkin-resources/pull/31" }
gharchive/pull-request
Fix Codename Engine description I noticed the Codename description didn't really describe the engine correctly lol Looks good to me, had trouble trying to describe it exactly so a description from the author is perfect. I'd add a "Famous for" but it hasn't been used for any important mods yet Cam might remove the "Famous for" as he did that for Forever and Psych. Well, not remove, but might request for it not to be added I guess.
2025-04-01T04:10:26.090297
2023-09-08T23:24:09
1888480442
{ "authors": [ "AzazelHD", "Funky-Fr3sh", "FunkyFr3sh", "RebelliousX", "SilentMRG", "elishacloud", "emxd" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14232", "repo": "FunkyFr3sh/cnc-ddraw", "url": "https://github.com/FunkyFr3sh/cnc-ddraw/issues/241" }
gharchive/issue
Commandos Hello. Thanks for making this tool. I'm trying to use it with Commandos 1 and its expansion. What should I notice in-game after installing it? I can't really notice anything, so I don't know if I'm doing something wrong You could press Alt+Enter to check if it switches to windowed mode. Or you can press Ctrl+Tab and check if that unlocks your cursor. Or use cnc-ddraw config.exe and enable windowed mode in there, then start the game and check if it's windowed. So, this tool is mainly to allow old games (those that use DirectDraw) to be displayed in windowed or borderless mode? Yeah, borderless and windowed mode are two things people usually like about cnc-ddraw. But it does also add support for shaders that improve the image quality or add some retro CRT/scanline effects. It can also fix your mouse sensitivity (needs Borderless/Fullscreen upscaled enabled). But it does fix compatibility issues as well, some games don't even start without cnc-ddraw (or run very slow). There are also games that crash on Alt+Tab without cnc-ddraw. I see. Thanks so much ^^ Sorry for coming here again. I'm having some issues. Without cnc, the mouse works right, but I can't use borderless mode. With cnc, I can use borderless, but the mouse goes weird. Like it had lag. I tried enabling and disabling "adjust the mouse sensitivity", but I can't make it behave like without cnc. Do you have any ideas? There are all kinds of reasons for a slow cursor: You enabled v-sync Borderless does also add some delay to the cursor, that's normal in games that have a software cursor You chose a shader that's too heavy and your hardware can't handle it Also, make sure you never enable v-sync and borderless at the same time. V-sync is off. And I don't know how to disable shaders. There is no "none" option. Simply remove the folder? I am not sure, but cnc is doing something with resolutions. The resolutions (in game) with and without cnc are totally different If you didn't change the shader then it's probably fine, "nearest neighbor", "bilinear" and "bicubic/Catmull-rom" are not going to lag What if you use "Fullscreen upscaled" instead of "Borderless", does it feel normal? "Fullscreen upscaled" does the same as "borderless", both upscale the game. But "Fullscreen upscaled" is real fullscreen so it doesn't add any delay. So, I'm on Windows 10, and I have on that feature called "night light" (sorry, my OS isn't in English), which applies like a yellow filter to the screen. If I use fullscreen upscaled, that filter is gone. I don't know why, but I don't like playing at night without that filter. Yeah, that's fine, it was mainly just meant to be a test. Does the cursor feel normal with "Fullscreen upscaled"? Nope, the cursor is still laggy. I don't know how to describe it apart from saying that it is not as smooth as without cnc. All the other settings are at default. Only changed the window mode Alright, I'll try to set the game up and do some tests just to make sure it's still working as intended (been a long time since I last tried it) I don't know if it will matter, but I have the GOG version. Thanks for your time Yeah, there are multiple bugs. It does run better with the game speed limiter disabled, but I'll have to do some more debugging (will do that tomorrow, getting late here) Yeah, I tried that too. It is smoother, but not as smooth as without cnc. Thanks so much, and sorry for the trouble. Take your time lol, don't worry ^^ Did you disable "Adjust Mouse sensitivity"? It seems to run the same as the original for me with that off and no speed limiter.
BTW, this problem only happens in the menus (the gameplay is fine). In Beyond The Call Of Duty they've fixed it. Even in-game I have lag. Look, maybe you could try this and hopefully find the issue. Do you have Afterburner + RivaTuner? If not, download them, and in the Afterburner settings, click on frametime - show on screen display - graph (it is on text by default) And then make RTSS show in-game. Whenever you move the cursor, the frametime varies a lot. I don't know if this is the "lag" I have an FPS overlay in cnc-ddraw, but what you see there is normal. Old games only redraw the screen if something changed. So if you don't move the mouse or move it slowly it will show a low fps. https://github.com/FunkyFr3sh/cnc-ddraw/assets/8355237/a38025a7-c8b4-4327-950e-250cf26476ed https://github.com/FunkyFr3sh/cnc-ddraw/assets/47080553/f60b444a-82f1-4d61-9b7d-5b4a4da7c762 Mmmm, I've noticed that your ddraw.dll is bigger than mine, and I don't have those dxwrapper files. I googled it and I got another GitHub repo. Could it be that? Oh, you can ignore those files, I just wanted to try some other ddraw.dll wrappers and check if they also have the same issue (they do). It did even happen on Windows XP without any ddraw.dll wrapper. You could try to play around with the settings a bit, but I guess it's best if you try it with all on default for now (until we've figured out the issue). Keep it on Fullscreen and make sure the game speed limit is off Thanks for your time ^^ Hi. Little update. I tried dxwrapper (https://github.com/elishacloud/dxwrapper) and the mouse runs smoother. Do you think I could use the ddraw.dll from there with your cnc? Or will it crash? No, that's not possible. You can use only one at a time. And BTW, dxwrapper is disabled by default (just dropping the ddraw.dll into the folder isn't enough to activate it), you need to enable it via the settings file What settings are you enabling in the dxwrapper.ini file? Okay, I can reproduce it now on Windows 10. Don't have a solution yet, but I'll post here once I've made some progress Yeah, I read the instructions and the settings. Their readme has the requirements for it. For Commandos they say to enable FullscreenWindowMode and set ColorMode (or something similar) to 16. But dxwrapper does something weird, because the resolution in RivaTuner is not the same as with cnc You also need to set this, otherwise the other settings are doing nothing. It does not run well with dxwrapper though, at least for me it's unplayable atm [Compatibility] Dd7to9 = 1 Ye ye, I didn't mention it but sure, I enabled dd7to9 ^^' Here's a test build @AzazelHD , I'm getting a 60 fps cursor now with it: cnc-ddraw_Commandos_<IP_ADDRESS>.zip OMG it goes super smooth <3 <3 <3 Thank you so much, I really appreciate it Hello everybody! Enjoying the topic to ask a question... Could someone please tell me how I can make the videos (I'm referring to those historical videos) appear? There are none of these videos, and no videos with the company logo appear when starting the game. I'm using the GOG version of Behind Enemy Lines. Of course, I'm using the latest version of cnc-ddraw. Thanks in advance!
Does it work without cnc-ddraw? If it also doesn't work without cnc-ddraw then my guess is a problem with the codec; you might need to run the game in compatibility mode Windows XP SP2 (this way Windows enables some older codecs) I haven't tested it without cnc-ddraw yet. As soon as it's on my PC I'll test it. Regarding the codec, I don't think this is the problem, as I have installed the latest version of "K-Lite Codec Pack Mega", and I can watch the videos by accessing them directly from the folder; however, in the game no video will play, and instead I get a black screen with some dots that flash quickly. Other than that, the game works very well and I even finished it. I'm going to play Beyond the Call of Duty for the next few days and see whether or not videos will play. Edit: just tested, works fine for me using the "comandos_w10.exe" in the game folder. Tested on Win7 and Win10 Wait a minute... Does "comandos_w10.exe" work on Windows 7?! I thought about using this executable but I thought it wouldn't work because I'm on Windows 7 and the executable is for Windows 10. I'll test this later too. Yes, it does work fine on Windows 7, I always use the _w10 one We could try to use a different codec for testing, just to see what happens. I re-encoded one video for you, try to extract the files into your game folder and run "enable-xvid-vfw.reg" to enable the Xvid codec. commandos-video-test.zip This is one of the codecs installed by K-Lite Codec Pack Mega. I received an error related to this codec every time I tried to play Ys I & II Chronicles+, and after installing K-Lite the problem was resolved. So, in my humble opinion I believe it is not a problem with the codecs. Honestly, this is a mystery; worse than the videos not being displayed is the lack of a warning related to the error. At least Ys warned that there was an error about a missing codec. Hm, I don't really know what else the problem could be. The game plays the videos just fine and there seems to be nothing unusual about them (cnc-ddraw also fully supports them, including upscaling). The codec is also nothing special, it's just the old "Cinepak Video" one that a lot of other old games are using as well. I just installed Beyond the Call of Duty and I'm not getting any videos either. I installed the latest version of K-Lite: https://codecguide.com/ And I did a fresh install, and left all the settings at their defaults. So, I don't know what could be causing this. I took a look in the System32 folder and there are the dlls from the package you uploaded, the same dlls that were reinstalled by K-Lite. To give you an idea, the creation date of the dlls in your package is the same as the ones that K-Lite installs. That thing about the videos not appearing doesn't make sense... Anyway, at least I can watch the videos through the folder. And sorry for posting these things here, it's just that for me, this wasn't a problem with cnc-ddraw, that's why I didn't open an issue. Thank you for your attention and help! Yeah, the dll files are just the original Xvid ones from the official installer. I only included them just in case you don't have them yet. BTW, the video in my zip is the only one that needs these dlls.
The original videos are using "iccvid.dll", maybe you don't have that one? I just searched the System32 folder for iccvid.dll and didn't find anything, so I guess I don't have it. Could this be the problem? I guess that might be the problem, yeah. I do have the file on my Win7 Surely this is the problem. And now? How do I install this dll? Is there a program that installs it? K-Lite does not install it. I think it's part of the Windows installation; that's probably also the reason why mine is the German one (Deutsch). I also tried to delete the file; it doesn't let me delete it, so it probably is a Windows component. Maybe in your country the file wasn't included in the Windows installation due to license issues? Windows installs different files for different languages. I don't know if there is an official download link for it, you'll have to google for it. Here is my file and the registry key to enable the codec: cinepak-codec.zip Man, I really thank you for all your help! You wouldn't even have to care about it, but you did and tried to help me in the best way possible. BUT, unfortunately it didn't work. I tried several ways: I threw the dll in the game folder and registered it, then I deleted the keys and threw the dll in the System32 folder, and finally, I found this: http://www.probo.com/cinemapak.php I downloaded the 32-bit version and used these keys:
ddraw.txt I still don't have a proper solution for this game, but you can try these two here, maybe one of them will work better: cnc-ddraw_Commandos_<IP_ADDRESS>_sleep4.zip cnc-ddraw_Commandos_<IP_ADDRESS>_sleep3.zip The game seems to slowdown only if you move the mouse. Wouldn't filtering WM_MOUSE messages help? The game seems to slowdown only if you move the mouse. Wouldn't filtering WM_MOUSE messages help? The game should not be slowing down with the main release of cnc-ddraw. Only the test builds here have that issue. The game expects a certain delay when refreshing the screen, if there is no delay (like in the main release) then it would redraw the cursor multiple times in a row and it appears like the game runs at a low framerate. If you add too much delay then the game will freeze sometimes. TBH, I think I'll never be able to get this working properly - It's just pure luck and it will behave differently on each system. (Other ddraw wrapper fail as well) It's playable at least with the current default settings - closing this here now, sorry stupid I know it is a bit late to answer this, but I am running the Steam version of the game without any modifications. I didn't use CNC-DDraw yet. I am using latest Windows 11 here. But to fix the playback issue, if you installed K-Lite Codec Pack, then search from the start menu "LAV Video" program, it should be installed with K-Lite Codec Pack. Then go to formats and disable CINEPAK video decoding. Then videos will start playing in all 4 Commandos games { BEL, BCD, 2 MoC and 3 DB }. Oh, I just downloaded the GOG version, videos work too there. I really don't like the GOG version, since it is kind of outdated compared to the Steam version. The GOG versions of BEL and BCD has max resolution of 1024x768 while the Steam version have 1280x720 wide screen support out-of-the-box. But regarding videos playback, the above fix should do it for both versions. I think Commandos 2 Men of Courage and 3 Destination Berlin games don't need this since they use different codec. Sorry for the double posts. But to fix the playback issue, if you installed K-Lite Codec Pack, then search from the start menu "LAV Video" program, it should be installed with K-Lite Codec Pack. Then go to formats and disable CINEPAK video decoding. Then videos will start playing in all 4 Commandos games { BEL, BCD, 2 MoC and 3 DB }. Thank you for that! I'll try as soon as I reinstall these titles. I also have black screen with audio in Expendable (1998) videos, maybe the problem is the same as with Commandos. Strangely, they are the only games where videos are not shown.
2025-04-01T04:10:26.144365
2022-11-08T08:40:03
1439738231
{ "authors": [ "hsvgbkhgbv", "yqlwqj" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14234", "repo": "Future-Power-Networks/MAPDN", "url": "https://github.com/Future-Power-Networks/MAPDN/issues/20" }
gharchive/issue
For: Issue for "AttributeError: 'SQDDPG' object has no attribute 'agent_importance_vec'" Hi, I am trying to reproduce the results and have the following issue in "sqddpg_33_0_l1.out" while doing "source train_case33.sh 0 l1 reproduction" Traceback (most recent call last): File "train.py", line 115, in train.run(stat, i) File "/home/ben/ben/MAPDN/utilities/trainer.py", line 111, in run self.behaviour_net.train_process(stat, self) File "/home/ben/ben/MAPDN/models/model.py", line 213, in train_process value = self.value(state_, action_pol) File "/home/ben/ben/MAPDN/models/sqddpg.py", line 108, in value return self.marginal_contribution(obs, act) File "/home/ben/ben/MAPDN/models/sqddpg.py", line 66, in marginal_contribution subcoalition_map, grand_coalitions, individual_map = self.sample_grandcoalitions(batch_size) # shape = (b, n_s, n, n) File "/home/ben/ben/MAPDN/models/sqddpg.py", line 38, in sample_grandcoalitions agent_importance_vec = self.agent_importance_vec.unsqueeze(0).expand(batch_size*self.sample_size, self.n_) File "/home/ben/anaconda3/envs/mapdn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1266, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format(type(self).__name__, name)) AttributeError: 'SQDDPG' object has no attribute 'agent_importance_vec' Could you help with this? Thanks Hi, I've fixed the bug. Please try it again. Best, Jianhong
2025-04-01T04:10:26.151711
2021-01-22T16:58:46
792161729
{ "authors": [ "codecov-io", "firstof9" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14235", "repo": "FutureTense/keymaster", "url": "https://github.com/FutureTense/keymaster/pull/48" }
gharchive/pull-request
Add Release Drafter Proposed change Add release drafter github actions Type of change [ ] Dependency upgrade [ ] Bugfix (non-breaking change which fixes an issue) [ ] New feature (which adds functionality) [ ] Breaking change (fix/feature causing existing functionality to break) [x] Code quality improvements to existing code or addition of tests Codecov Report Merging #48 (1cfef8c) into main (3a76bdb) will not change coverage. The diff coverage is n/a. @@ Coverage Diff @@ ## main #48 +/- ## ======================================= Coverage 94.32% 94.32% ======================================= Files 8 8 Lines 581 581 ======================================= Hits 548 548 Misses 33 33
2025-04-01T04:10:26.166465
2022-08-15T18:34:11
1339335679
{ "authors": [ "ThomasThelen", "misimpso" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14236", "repo": "FuzzFoundation/WKTPlot", "url": "https://github.com/FuzzFoundation/WKTPlot/issues/29" }
gharchive/issue
Error importing WKTPlot Describe the bug When attempting to import WKTPlot, I get the error shown in the screenshots section. It appears that I'm missing wktplot.common. I included a screenshot of the installation directory to show that the package is missing the common folder. To Reproduce Steps to reproduce the behavior: 1. Open a new terminal 2. pip install WKTPlot 3. python3 4. from wktplot import WKTPlot 5. See the error below Screenshots Traceback (most recent call last): File "/Users/thomas/sdss-demo/roads.py", line 4, in <module> from wktplot import WKTPlot File "/Users/thomas/sdss-demo/venv/lib/python3.9/site-packages/wktplot/__init__.py", line 1, in <module> from .plots.standard import WKTPlot # noqa: F401 File "/Users/thomas/sdss-demo/venv/lib/python3.9/site-packages/wktplot/plots/standard.py", line 5, in <module> from wktplot.common.file_utils import get_random_string, sanitize_text ModuleNotFoundError: No module named 'wktplot.common' Desktop: OS: macOS Version 11.6.2 Thanks for creating this issue. Looks like the folder got skipped over when the package was built for some reason. I will look into this tonight. This should be fixed with this latest PR - https://github.com/FuzzFoundation/WKTPlot/pull/30 Release v2.3.1 is now published with this fix: https://github.com/FuzzFoundation/WKTPlot/releases/tag/v2.3.1
2025-04-01T04:10:26.172507
2016-05-27T17:39:16
157256718
{ "authors": [ "Fyrd", "cvrebert", "leandross2" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14239", "repo": "Fyrd/caniuse", "url": "https://github.com/Fyrd/caniuse/issues/2535" }
gharchive/issue
add selector first-line i.e. https://developer.mozilla.org/en-US/docs/Web/CSS/::first-line Duplicate of #1109
2025-04-01T04:10:26.180585
2018-08-25T11:43:36
354006252
{ "authors": [ "Malvoz", "Schweinepriester", "jrbasso", "vineettalwar" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14240", "repo": "Fyrd/caniuse", "url": "https://github.com/Fyrd/caniuse/issues/4461" }
gharchive/issue
Priority hints API Priority Hints introduces the importance attribute, giving developers control to indicate a resource's relative importance to the browser. Specification: https://wicg.github.io/priority-hints/ Explainer: https://github.com/WICG/priority-hints/blob/master/EXPLAINER.md Example usage: https://github.com/WICG/priority-hints/blob/master/EXAMPLES.md Google developers blog intro: https://developers.google.com/web/updates/2018/08/web-performance-made-easy#experimental_priority_hints Chrome status: https://www.chromestatus.com/feature/5273474901737472 I will vouch for this one also; I was looking for it as well. https://developers.google.com/web/updates/2019/02/priority-hints Web.dev page about it: https://web.dev/priority-hints/ Via https://github.com/Fyrd/caniuse/pull/6214#issuecomment-1092269091 importance was renamed to fetchpriority as part of the standards process. I think every part is covered via MDN by now: fetchpriority (https://github.com/Fyrd/caniuse/issues/6213#issuecomment-1124335029) => https://caniuse.com/?search=fetchpriority "the priority attribute on the RequestInfo of fetch" => https://caniuse.com/mdn-api_request_priority The data specifically refers to that: https://github.com/mdn/browser-compat-data/blob/49d96ed28c8ab1184be01328c3d7c596b1ef365a/api/Request.json#L1107-L1109 Not saying this isn't useful as a combined feature on caniuse, just for the record.
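For anyone checking support by hand, here is a minimal sketch of both sides of the renamed feature (the URLs are placeholders, not from any spec test):

// Hint that the hero image should be fetched early; the fetchPriority
// IDL attribute reflects the fetchpriority content attribute.
const hero = document.createElement('img');
hero.src = '/hero.jpg'; // placeholder URL
hero.fetchPriority = 'high';
document.body.append(hero);

// The same hint on a fetch() request, via the priority member of RequestInit.
fetch('/api/noncritical', { priority: 'low' }) // placeholder URL
  .then((res) => res.json())
  .then((data) => console.log(data));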
2025-04-01T04:10:26.186664
2018-12-13T22:30:23
390888920
{ "authors": [ "Fyrd", "Malvoz", "Naismith", "Schweinepriester", "boaz-amit", "herohamp", "ofekshmuely", "yareckon" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14241", "repo": "Fyrd/caniuse", "url": "https://github.com/Fyrd/caniuse/issues/4679" }
gharchive/issue
Explainer for the <portal> HTML element: https://github.com/WICG/portals/blob/master/explainer.md Specification: https://wicg.github.io/portals/ Chrome dev summit explainer video: https://www.youtube.com/watch?v=Ai4aZ9Jbsys&t=17m00s Chrome status: https://www.chromestatus.com/feature/4828882419056640 Chrome Canary now supports Portals behind a feature flag. I feel this should be added. https://web.dev/hands-on-portals/ +1 +1 https://www.youtube.com/watch?v=X2zqwMBBvIs This is currently a proprietary Chrome feature. It's not a standard or even a standard in the making: https://wicg.github.io/portals/ It is not a W3C Standard nor is it on the W3C Standards Track @boaz-amit while I don't know the W3C Process by heart, most (if not all) proposals start off as non-standard. However, that is irrelevant, as browsers are free to implement non-standard features. I feel this should be added: while it is not a web standard, it is a feature that will be used by developers. Now available at https://caniuse.com/#feat=portals
2025-04-01T04:10:26.188847
2020-06-25T09:25:29
645394428
{ "authors": [ "Fyrd", "atjn" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14242", "repo": "Fyrd/caniuse", "url": "https://github.com/Fyrd/caniuse/issues/5495" }
gharchive/issue
Update test for registerProtocolHandler I suggest rewriting the code that registers test handlers on the test page to: navigator.registerProtocolHandler("caniusetest", "https://tests.caniuse.com/rph.html?val=%s", "Caniuse handler test"); and navigator.registerProtocolHandler("web+caniusetest", "https://tests.caniuse.com/rph.html?val=%s", "Caniuse handler test"); The existing code uses insecure http in the URL and a dash in the protocol name. Both of these cause the registration to fail in modern browsers, even when the browser supports custom protocols. Thanks, has now been updated!
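A minimal sketch of guarding such a call, since modern browsers throw a SecurityError for disallowed schemes or non-https handler URLs (the warning message is just illustrative):

// Custom schemes must use the web+ prefix and an https handler URL,
// otherwise registerProtocolHandler() throws instead of registering.
if ('registerProtocolHandler' in navigator) {
  try {
    navigator.registerProtocolHandler(
      'web+caniusetest',
      'https://tests.caniuse.com/rph.html?val=%s',
      'Caniuse handler test' // title argument, ignored by newer browsers
    );
  } catch (err) {
    console.warn('Protocol handler registration rejected:', err);
  }
}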
2025-04-01T04:10:26.194420
2016-04-19T13:00:09
149455335
{ "authors": [ "DamirSvrtan", "Fyrd", "LukasReschke" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14243", "repo": "Fyrd/caniuse", "url": "https://github.com/Fyrd/caniuse/pull/2459" }
gharchive/pull-request
Add Same Site Cookie attribute as feature This will be shipped with Chrome 51 as per the Intent to Ship. Firefox also voiced public support. So it may be handy to have this to track which browsers support this security hardening over time. Thanks! Would you happen to know how to easily test support for this? Thanks for the merge. 🚀 "Easily" is always relative when it comes to security features. At the moment I'm adding support for this to @ownCloud (https://github.com/owncloud/core/pull/24092), but since this is only targeted for master it won't be possible to use our demo instance to verify this. In Chrome the feature can be quickly checked for existence by going to the Cookies resource overview and checking whether the SameSite column is in there: I don't know whether other browsers will also add an overview like that. Another somewhat reliable way to test this is to iframe a page that uses either lax or strict cookies (make sure to not have any X-Frame-Options that may block this) and verify that in the iframe the user is indeed not logged in and no cookies are sent. Anyways, I'll keep close track of adoption on this one, as we intend to add this feature to our next release. So as other browsers add support for this as well (something that we can easily check using an installed ownCloud), I'll update this entry 😄 Is there a reason why I can't find this feature on caniuse.com? I may be looking in the wrong place, not sure. Thnx! @DamirSvrtan The feature data had not yet been reviewed, but it has now and is available at http://caniuse.com/#feat=same-site-cookie-attribute @Fyrd thnx!
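For reference, a minimal sketch of serving such a cookie with plain Node.js, which could back the iframe test described above (cookie name and value are illustrative):

// Built-in http module only; no framework assumed.
const http = require('http');

http.createServer((req, res) => {
  // With SameSite=Strict the cookie is never sent on cross-site
  // requests, so the framed page should appear logged out.
  res.setHeader('Set-Cookie',
    'session=abc123; SameSite=Strict; Secure; HttpOnly; Path=/');
  res.end('cookie set');
}).listen(8080);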
2025-04-01T04:10:26.198435
2016-12-12T09:16:07
194923636
{ "authors": [ "Fyrd", "Schweinepriester", "mkurz" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14244", "repo": "Fyrd/caniuse", "url": "https://github.com/Fyrd/caniuse/pull/3048" }
gharchive/pull-request
Safari TP enabled HTML interactive form validation Tweet: https://twitter.com/chris_dumez/status/806601644855074818 Mentioned in: https://webkit.org/blog/7093/release-notes-for-safari-technology-preview-19/ Ticket: https://bugs.webkit.org/show_bug.cgi?id=165123 Commit: https://trac.webkit.org/changeset/209060/trunk/Source Now there is a blog post as well: https://webkit.org/blog/7099/html-interactive-form-validation/ Another post: http://developer.telerik.com/topics/web-development/cross-browser-html5-form-validation-finally-now/ I didn't update this in #3038 because I wasn't sure, due to the wording of the changelog, and because the bug mentioned in http://caniuse.com/#feat=form-validation wasn't marked as a resolved duplicate. Great to see form validation finally in Safari, though :) Thanks!
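A minimal sketch of poking the interactive validation UI from script, handy for a quick manual support check (it assumes a form like <form id="f"><input required></form> exists on the page):

const form = document.getElementById('f'); // assumed markup, see above

// reportValidity() runs constraint validation and, where supported,
// shows the browser's native bubble on the first invalid control.
if (form && typeof form.reportValidity === 'function') {
  console.log('form valid?', form.reportValidity());
} else {
  console.log('reportValidity() not available in this browser');
}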
2025-04-01T04:10:26.199776
2019-11-27T09:59:42
529239371
{ "authors": [ "Fyrd", "maxgomes92" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14245", "repo": "Fyrd/caniuse", "url": "https://github.com/Fyrd/caniuse/pull/5199" }
gharchive/pull-request
Update Promise.prototype.finally firefox support I tested on Browserstack. Version 69 is where support begins. Also tested in Browserstack and it works in my test: https://tests.caniuse.com/?feat=promise-finally Can you share the test you see failing?
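A minimal sketch of the kind of check the test page runs; it both feature-detects and exercises finally() so a broken shim would also fail:

// Detect the method, then confirm it runs on both settled states.
const supported = typeof Promise.prototype.finally === 'function';
console.log('Promise.prototype.finally supported?', supported);

if (supported) {
  Promise.resolve('ok')
    .finally(() => console.log('runs on success'))
    .then((v) => console.log('value passed through:', v));

  Promise.reject(new Error('boom'))
    .finally(() => console.log('runs on failure too'))
    .catch((e) => console.log('rejection preserved:', e.message));
}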
2025-04-01T04:10:26.201170
2023-06-06T09:20:23
1743446329
{ "authors": [ "dannyfriar", "jgiannuzzi" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14246", "repo": "G-Research/fasttrackml", "url": "https://github.com/G-Research/fasttrackml/issues/51" }
gharchive/issue
Experiment tags/labels to allow grouping experiments logically Superseded by #83
2025-04-01T04:10:26.234863
2015-04-20T12:19:43
69574354
{ "authors": [ "Sebobo", "sutiyonodoang" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14247", "repo": "GBKS/Wookmark-jQuery", "url": "https://github.com/GBKS/Wookmark-jQuery/issues/191" }
gharchive/issue
mysql data load Dear All, is there an example API to load images from MySQL? Thx in advance. There is an example php script in example-api on how to create a super simple server. You just have to replace the static $data part with your own code to get the data from your db. We cannot provide that. How about dynamic data load? That's what you have to provide yourself. Ok, thx :)
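For completeness, a minimal sketch of the client side once such a server exists, assuming the endpoint returns a JSON array of objects with a url field (the endpoint path and option values are illustrative, loosely following the plugin's documented usage pattern):

// Fetch image records from the hypothetical endpoint, append them
// to the grid, then lay them out with the Wookmark jQuery plugin.
$.getJSON('/example-api/images.php', function (items) {
  var $container = $('#tiles');
  items.forEach(function (item) {
    $container.append('<li><img src="' + item.url + '"></li>');
  });
  $('#tiles li').wookmark({
    container: $container, // assumed option names, check the README
    offset: 5,
    itemWidth: 210
  });
});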
2025-04-01T04:10:26.259996
2023-01-08T19:15:43
1524641050
{ "authors": [ "GDay", "coveralls", "msabatier" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14248", "repo": "GDay/django-q2", "url": "https://github.com/GDay/django-q2/pull/59" }
gharchive/pull-request
More explicit log messages in exception handling Improve log messages in case of exceptions. Also log the name of the task that has been created from a schedule; this is useful for understanding the lifecycle of a particular task. Coverage: 90.324% (+0.006%) from 90.318% when pulling 851b52bb7b09547c84a3858fd3aa371c4d4b3f12 on msabatier:q2_exception_logs into 31e82ad028eda093fc07b75aabcef8bc8f1d7011 on GDay:master. Looks great! Thanks @msabatier
2025-04-01T04:10:26.348040
2023-03-18T18:41:39
1630509412
{ "authors": [ "jinjianrong", "nyanmisaka" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14249", "repo": "GPUOpen-Drivers/AMDVLK", "url": "https://github.com/GPUOpen-Drivers/AMDVLK/issues/317" }
gharchive/issue
Missing VK_EXT_image_drm_format_modifier vulkan: physical device does not support DRM format modifiers VK_EXT_image_drm_format_modifier is required for importing images from dma-bufs that contain modifiers. The NV proprietary driver, Intel ANV and RADV have supported this for a while. Since these two related extensions are already supported in AMDVLK, it would be great if we could have modifier support: VK_EXT_external_memory_dma_buf VK_KHR_external_memory_fd Thanks! We have been working on the extension. @jinjianrong Nice to hear that! Any plans on adding modifier support in the KMD for gfx8 Polaris? This generation still has a wide user base, but due to the big differences from gfx9, the developers from Google only added modifier support for gfx9+. https://patchwork.freedesktop.org/patch/247020/ https://patchwork.freedesktop.org/series/80262/ https://gitlab.freedesktop.org/mesa/mesa/-/issues/5882 This can be closed by https://github.com/GPUOpen-Drivers/AMDVLK/releases/tag/v-2023.Q3.1
2025-04-01T04:10:26.364642
2022-11-01T11:31:26
1431283627
{ "authors": [ "sc336", "st--" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14250", "repo": "GPflow/GPflow", "url": "https://github.com/GPflow/GPflow/pull/2012" }
gharchive/pull-request
Sc336/api documentation refresh PR type: doc improvement Related issue(s)/PRs: Contributes to #1823 Summary A few comments were in places where Sphinx wasn't picking them up. A few others were unclear enough that I had to ask for help understanding them; I've updated these to make them clearer. Fully backwards compatible: yes PR checklist [ ] New features: code is well-documented [x] detailed docstrings (API documentation) [ ] notebook examples (usage demonstration) [ ] The bug case / new feature is covered by unit tests [ ] Code has type annotations [x] Build checks [x] I ran the black+isort formatter (make format) [ ] I locally tested that the tests pass (make check-all) [x] Release management [ ] RELEASE.md updated with entry for this change [x] New contributors: I've added myself to CONTRIBUTORS.md @uri-granta (as an aside, it would be nice if the pretty printing were accessible via a public method!) gpflow.utilities.print_summary() ?
2025-04-01T04:10:26.367245
2020-02-09T07:17:46
562135435
{ "authors": [ "nprindle" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14251", "repo": "GRarer/Aurora", "url": "https://github.com/GRarer/Aurora/pull/27" }
gharchive/pull-request
Move CI minification dependencies to separate package.json Currently, node_modules is around 282M in size. This includes all of the webpack-related dependencies for aggressively minifying the build output into a single index.html, even though only the CI needs to do this, not developers. This splits those dependencies off into ci/package.json and ci/package-lock.json, which the CI will use instead of the top-level ones when minifying before deploying. This more than halves the size of node_modules, to around 128M. This also changes the Travis config to use npm ci instead of npm install when installing dependencies, as it is more deterministic, and it should speed up pull request checks by ~10 seconds.
2025-04-01T04:10:26.489302
2022-09-08T20:04:13
1366925154
{ "authors": [ "mtfishman" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14254", "repo": "GTorlai/PastaQ.jl", "url": "https://github.com/GTorlai/PastaQ.jl/pull/295" }
gharchive/pull-request
Update for latest syntax for Boson/Qudit operator definitions This updates PastaQ for the latest syntax for creating operators for Boson/Qudit site types, which was improved in https://github.com/ITensor/ITensors.jl/pull/963. The new syntax is described here: https://github.com/ITensor/ITensors.jl/issues/957#issuecomment-1217207889 (it replaces the previous ITensors._op syntax). @GTorlai, the current test failures seem to be AD issues introduced in a recent version of Zygote; I'll merge, and we can try setting an upper bound on the compat entry for Zygote to a previous version that was working.
2025-04-01T04:10:26.496827
2024-10-17T15:49:06
2595129924
{ "authors": [ "lorikrammer" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14255", "repo": "GW-HIVE/PredictMod", "url": "https://github.com/GW-HIVE/PredictMod/issues/309" }
gharchive/issue
dkNET Proposal Mazumder_LOI_rm1.docx Thank you for submitting your Letter of Intent (LOI) for the dkNET AI Pilot Funding Program. We look forward to receiving your full proposal submission through EasyChair. Please take note of the following guidelines for submitting your proposal: Proposals must be submitted through the EasyChair platform. Please ensure your submission follows the instructions in Section V, Application and Submission Information. Application forms: https://drive.google.com/file/d/18Ty7rRba1uJdv3UEbWCJE0SgOXP3UkqP/view The final dkNET application should be one (1) PDF document. Please upload your PDF file to the same submission # as your LOI submission. Help and Support: If you need any assistance with your submission, please contact us at <EMAIL_ADDRESS>. The deadline for submission is November 12, 2024 [applications should be submitted by 9 pm PST (12 am EST)]. If you have any questions or need further information, please do not hesitate to reach out. We look forward to receiving your submission! Best regards, Ko-Wei Tasks [ ] Please complete the Internal Pod 3 PI Endorsement & FCOI forms and send those directly to Emma, so she can create the proposal in MyResearch. [ ] Proposal setup & checklist This proposal should be ready for the internal routing process by October 25th, 2024 - the budget should be final and all required documents should be attached to the proposal in MyResearch. If you need additional proposal editing services, please contact REU Support. Please use SciENcv to prepare NIH formats. Checklist, Drive, and Slack channels created. Science portion sent to Raja for review 11.4.2024. Routing package sent to Emma 11.4.2024.
2025-04-01T04:10:26.506704
2020-09-03T22:29:04
692449289
{ "authors": [ "GabrielOlvH", "Gbergz", "Katorone" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:14256", "repo": "GabrielOlvH/Industrial-Revolution", "url": "https://github.com/GabrielOlvH/Industrial-Revolution/issues/34" }
gharchive/issue
[1.16.2] Installing this mod with 'KubeJS-Fabric' causes custom recipes not to load properly. Title^ Adding recipes through KubeJS with this mod installed makes the added recipes not work, but after a '/reload' they work like 25% of the time. Also, added recipes get shown twice in REI - that is, when an added recipe does work, which it likely won't. It's really hard to explain, but essentially, when I load into a world and check for a custom recipe, it sometimes doesn't load. And if it loads (which is rare), it gets shown twice in REI. But then after a '/reload' or relog the recipe is gone again. This doesn't happen when the mod is removed, meaning everything works as intended if this mod is removed. Example of double REI recipes: Versions: Minecraft: 1.16.2 Fabric API: 0.19.0+build.398-1.16 IndRev: 1.6.2-BETA KubeJS: 1.4.1 Other information: This is reproducible for me on an empty instance with only Fabric API, Fabric Language Kotlin & KubeJS, and of course Industrial Revolution, with a custom recipe. - Gbergz Is there any log output that could be related to this? I've been sitting trying to figure out what could be causing it for like 3 hours now. Logs say nothing. I've gotten help from AK9 and we were both unsure, until I decided to remove mods one by one until the problem went away. Can you please test if the issue is fixed with this version? Just tried it; the same issue persists, unfortunately. In your log, are you seeing errors similar to these? https://github.com/KubeJS-Mods/KubeJS-Fabric/issues/3 But pertaining to Industrial Revolution? Yes, but only this line: [ForkJoinPool.commonPool-worker-3/WARN]: Parsing error loading recipe techreborn:centrifuge/calciumcarbonate_cell net.minecraft.class_151: Non [a-z0-9_.-] character in namespace of location: {fluid:"techreborn:calcium_carbonate",holder:"techreborn:cell"} I tried tracking this down but could not find anything. This is possibly a KubeJS issue; I'd recommend opening an issue on their repository. If it ends up being an issue on my end, I'll happily fix it!