added: string (date, from 2025-04-01 04:05:38 to 2025-04-01 07:14:06)
created: timestamp[us] (date, from 2001-10-09 16:19:16 to 2025-01-01 03:51:31)
id: string (length 4 to 10)
metadata: dict
source: string (2 classes)
text: string (length 0 to 1.61M)
2025-04-01T06:37:58.054300
2016-05-29T03:46:56
157367719
{ "authors": [ "Daniel15", "davidfowl" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3918", "repo": "aspnet/KestrelHttpServer", "url": "https://github.com/aspnet/KestrelHttpServer/issues/893" }
gharchive/issue
Using command-line options in RC2? My blog is currently running on ASP.NET 5 RC1. I have commands for live vs staging configured in project.json: "commands": { "web-live": "Microsoft.AspNet.Server.Kestrel --server.urls http://unix:/run/dan-live.sock", "web-staging": "Microsoft.AspNet.Server.Kestrel --server.urls http://unix:/run/dan-staging.sock" }, And then I run a command like this to actually run the site: /var/www/.dnx/runtimes/dnx-mono.1.0.0-rc1-final/bin/dnx --appbase /var/www/dan.cx/live/site/approot/packages/Daniel15.Web/1.0.0/root Microsoft.Dnx.ApplicationHost --configuration Release web-live With RC2, it looks like my site compiles into a .exe file that I can execute directly via mono: % mono Daniel15.Web.exe Hosting environment: Production Content root path: /var/www/dan.cx/staging/site Now listening on: http://localhost:5000 Application started. Press Ctrl+C to shut down. However, I can't figure out how to specify the server.urls. Do I need to have separate config files for staging vs live? How do I specify which one to use? (note that my site is currently not compatible with .NET Core, so I am using Mono + .NET Framework 4.5.1) https://github.com/aspnet/Hosting/issues/737 Thanks @davidfowl! Once I added the config: var config = new ConfigurationBuilder() .AddCommandLine(args) .Build(); var host = new WebHostBuilder() .UseConfiguration(config) // ...... .Build(); It worked as expected :smile:
2025-04-01T06:37:58.055737
2015-08-07T19:18:43
99711006
{ "authors": [ "halter73", "lodejard" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3919", "repo": "aspnet/KestrelHttpServer", "url": "https://github.com/aspnet/KestrelHttpServer/pull/155" }
gharchive/pull-request
Gracefully handle exceptions thrown from OnStarting callbacks If OnStarting is being called after the app func has completed, return a 500. If OnStarting is being called due to a call to write, throw from write. Addresses #25 :shipit: :tanabata_tree:
2025-04-01T06:37:58.060371
2016-06-06T05:08:43
158601511
{ "authors": [ "Eilon", "azimmerer", "davidfowl" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3920", "repo": "aspnet/Mvc", "url": "https://github.com/aspnet/Mvc/issues/4814" }
gharchive/issue
RC2 postback very slow on first execution I recently upgraded my ASP.NET 5 application to ASP.NET Core 1.0. The postbacks are very slow: it takes 45 seconds for the postback object to be created and for the first breakpoint in the postback method to be reached. After that the breakpoint is hit and the request is executed normally. If I execute the same method again, the postback is very fast. It only happens if I post back objects created by Entity Framework with dependencies on other objects. For instance: public class PostbackObject { public virtual Object1 Object1 {get;set;} public int Object1ID {get;set;} } If I comment out Object1 public class PostbackObject { //public virtual Object1 Object1 {get;set;} public int Object1ID {get;set;} } I don't have any issues. See https://github.com/aspnet/Mvc/issues/4666 @rynowak does this sound the same as #4666 and this is a dup? It is the same. The fix from #4666 worked for me. Thank you.
2025-04-01T06:37:58.079884
2016-09-29T17:56:36
180118758
{ "authors": [ "DamianEdwards", "Eilon", "gdoron", "mkArtakMSFT" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3921", "repo": "aspnet/Mvc", "url": "https://github.com/aspnet/Mvc/issues/5340" }
gharchive/issue
Add an option to inline CSS files inside a Razor page Hi, I just wrote a small TagHelper that inlines CSS files, as it's one of the recommendations of PageSpeed by Google: https://developers.google.com/speed/docs/insights/InlineCSS https://developers.google.com/speed/docs/insights/OptimizeCSSDelivery It would be nice if ASP.NET had something like that baked into the framework and the existing LinkTagHelper, which would surely be much more robust than my extremely simple couple-of-minutes work: [HtmlTargetElement("embedCss")] public class EmbedCssTagHelper : TagHelper { private readonly IMemoryCache _memoryCache; private readonly IHostingEnvironment _hostingEnvironment; private const string CacheKeyPrefix = "EmbedCss_"; public string Href { get; set; } public EmbedCssTagHelper(IMemoryCache memoryCache, IHostingEnvironment hostingEnvironment) { _memoryCache = memoryCache; _hostingEnvironment = hostingEnvironment; } public override void Process(TagHelperContext context, TagHelperOutput output) { var cacheKey = CacheKeyPrefix + Href; output.TagName = "style"; var content = _memoryCache.GetOrCreate(cacheKey, entry => { if (_hostingEnvironment.IsStaging() || _hostingEnvironment.IsProduction()) { Href = Href.Replace(".css", ".min.css"); } var path = _hostingEnvironment.WebRootPath + Href; var result = File.ReadAllText(path); // Surely there are other ways, like watching the file system for changes... // If you could reference a resource that shows how to do that it would be great! entry.SetAbsoluteExpiration(TimeSpan.FromMinutes(1)); entry.SetValue(result); return result; }); output.Content.SetHtmlContent(content); output.TagMode = TagMode.StartTagAndEndTag; } } What do you think about it @Eilon? BTW, while writing the TagHelper was easy, I couldn't figure out why my TagHelper didn't fire until I added [HtmlTargetElement("embedCss")]; what should the element name in the HTML be if it isn't specified? @DamianEdwards can you share thoughts on how important a feature like this is? Especially considering HTTP2, where making multiple requests shouldn't be as expensive as it is today from a latency perspective. Still worthwhile. I did it on live.asp.net. HTTP2 is a while away from normal, especially given it requires Windows Server 2016 for IIS. We should have this built in to both the link and script tag helpers. @DamianEdwards Awesome, thanks! One of the big problems we're concerned about is that embedding CSS is tricky when you have URLs inside the CSS (e.g. to images). The URLs have to be re-pathed based on the current request's URL. The code would have to parse the CSS and re-path every URL inside it. This would affect the ability to cache the CSS because there can't be one canonical copy of it - there would have to be a copy for every URL where it's used (or at least for every "group" of similar URLs). That's a very good point. We could simply say "not supported" in that case, but I agree it's less than ideal. There are many cases when this would still be useful, as plenty of CSS doesn't contain URLs to images or fonts, but it's not clear what the correct behavior here should be.
@Eilon @DamianEdwards It's easily solved by having only URI "absolute relative" (totally made this term up right now) paths in the CSS files: Good: /static/images/kitchen.jpeg @font-face { font-family: 'MyWebFont'; src: url('/static/fonts/webfont.woff2') format('woff2') } Bad: ../images/kitchen.jpeg @font-face { font-family: 'MyWebFont'; src: url('../fonts/webfont.woff2') format('woff2') } In fact, this is how we currently use CSS on our website. We create the website in a very componentized fashion, like a SPA, with tons of partial views, and each partial view has its own independent SCSS file. So we can't afford tons of HTTP requests for CSS files for each and every partial view on a page (there can be a dozen per page). The only drawback of this is that on every request for the page, all the CSS needs to be included in the HTML and can't be cached locally on the client browser. I thought about ways to solve it: every page loads the global CSS file (all the SCSS files, combined) asynchronously and sets a cookie with the last time it was received / ETAG. When rendering the page, using <embedCss>, we'll check the cookie; if its ETAG is the newest, we won't include inline CSS but rely on the external CSS file. We didn't implement it yet as we are busy with writing our entire client side from scratch, but that's the idea. @gdoron using only absolute URLs is a "restriction," not a "solution" πŸ˜„ Using relative paths in CSS is extremely common because in "normal" CSS files it's the safest to use. It's quite normal for the author of the CSS file to require some of their own images, yet not know the final path where the CSS file and related images will be placed. For example, Bootstrap.css has tons of relative url() values. But anyway, we're definitely not saying this is a bad feature - as @DamianEdwards said it's quite interesting. The concern is that there's a lot of work required to call it "done". Not the least of which is doing actual browser perf measurements to see whether this feature even improves performance in several common scenarios. (We all know that just making various "site optimization" checklists happy is not necessarily actually good for the site.) @Eilon @DamianEdwards I agree, using relative paths in CSS is extremely common and useful BUT only for external resources (like the Bootstrap you mentioned). This is because the authors of a CSS framework can't enforce a specific website, application, and folder structure, so they must use relative paths to link the fonts and images. Inlining a large chunk of CSS is actually a bad pattern and discouraged by PageSpeed: Be careful when inlining data URIs in CSS files. While selective use of small data URIs in your CSS may make sense, inlining large data URIs can cause the size of your above-the-fold CSS to be larger, which will slow down page render time. So it really only makes sense to inline homemade CSS, which doesn't benefit at all from using relative paths, and sometimes it's even more resilient to use absolute paths, since you can move the CSS file around and it won't break anything (for example, serving from a CDN or moving the .min.css to a different folder). Not the least of which is doing actual browser perf measurements to see whether this feature even improves performance in several common scenarios.
Make sure to test on mobile devices and slow networks; inlined resources have a huge impact there because of browsers' parallel request limits and TCP slow start. See more on performance and mobile: a presentation and a YouTube lecture, both by Ilya Grigorik, a web performance engineer at Google and co-chair of the W3C Web Performance Working Group. @Eilon Curious to know whether you agree with the above or not. @gdoron those are all good points, but still no plans to have this feature built-in. If you end up building this tag helper you can release a package on NuGet, and we'd be happy to link to it. @Eilon Problem is, I think it shouldn't really be a new TagHelper but a boolean property on LinkTagHelper. Should I really copy-paste the entire code (with great features) from LinkTagHelper? That's bad practice. Is there a DRY way of extending TagHelpers in MVC Core? To include this in the framework I think we'd want to try resolving some of the issues mentioned earlier. As of right now I think some of these issues are a "pit of failure" where it's very easy to lead a user into doing things that won't work. As far as extending the current tag helpers, it's a matter of whether the tag helper in question was designed to support any particular extensibility, and in this case what's being discussed is a whole new feature, so I'm not surprised that it doesn't have that extensibility. @Eilon So what do you suggest? It seems like the community is blocked by the framework, unless we are going to copy-paste the entire LinkTagHelper class (and its dependencies), which is really bad for maintenance. Thanks. @gdoron I looked at LinkTagHelper and I'm not sure this is a good fit for the existing tag helper. The existing tag helper is all about doing file globbing (not very useful here) and resource/CDN fallback (clearly not applicable here). So this sounds to me like it fits best as a separate tag helper. If this tag helper requires some core pieces of functionality from some existing tag helper we would certainly be open to exposing some of that functionality somewhere - it's a matter of identifying what should be made reusable. Do you have thoughts on that? @Eilon I think you're probably right regarding LinkTagHelper not fitting CSS embedding, but that's (at least from my understanding) the only way we can still use the known HTML element <link> and just have a flag to embed the content. Regarding something common, I guess the only thing that might be common is the FileVersionProvider, which is in a separate class anyway. BTW, do you think it's a good idea to store the CSS file content in the MemoryCache and invalidate it when the content changes? Or is it better to read it from disk every time? @gdoron you can have multiple tag helpers targeting the same element, though you're right that it wouldn't be the same as having just one extra switch. Regarding loading from disk vs. using a memory cache, the best way to answer that is to not implement caching, use a profiler to measure the behavior, and then optimize whatever the slowest part is, and then finally measure again to see that the behavior improved in the expected way. Then repeat. Closing this as we have no plans to do this.
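The re-pathing problem Eilon raises is mechanical: each relative url() in a stylesheet has to be resolved against the stylesheet's public directory before its rules are inlined into a page served from a different URL. A minimal sketch of that idea in Python (the regex and helper are illustrative assumptions, not the MVC implementation):

```python
import posixpath
import re

# Matches url(...) values that are not absolute, root-relative, or data: URIs.
URL_RE = re.compile(r"url\(\s*['\"]?(?!https?:|data:|/)([^'\")]+)['\"]?\s*\)")

def repath_css(css: str, css_public_dir: str) -> str:
    """Resolve relative url() references against the stylesheet's public directory."""
    def fix(match: re.Match) -> str:
        absolute = posixpath.normpath(posixpath.join(css_public_dir, match.group(1)))
        return f"url('{absolute}')"
    return URL_RE.sub(fix, css)

print(repath_css("src: url('../fonts/webfont.woff2') format('woff2')", "/static/css"))
# -> src: url('/static/fonts/webfont.woff2') format('woff2')
```

This resolves everything to site-absolute paths, which is essentially the convention gdoron advocates above.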
2025-04-01T06:37:58.093703
2017-12-06T15:38:52
279798454
{ "authors": [ "Tarig0", "mkArtakMSFT", "pranavkm" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3922", "repo": "aspnet/Mvc", "url": "https://github.com/aspnet/Mvc/issues/7114" }
gharchive/issue
Storing an empty list in TempData cookies results in a JArray object when read I have code that may store an empty list object in the TempData property of the controller. When using the default TempData provider (cookies), after I redirect to the MVC action that consumes the data, it will be read as a JArray instead of the intended List type. @Tarig0, can you please share sample code to repro the issue? Just had a thought: could we add generics so we can get an empty list? TempData<List<Int>>("Test");

| Key Exists | Value | Result |
| --- | --- | --- |
| No | null | Throw key not found |
| Yes | null | null |
| Yes | [] | Empty list |
| Yes | ["Hello","World"] | Throw cast error |
| Yes | [1,2,3,4] | List with values |

The problem is any change to the interface ITempDataDictionary would be a breaking change, and we wouldn't be able to make it until the next major release. But yes, perhaps the idea is that you'd pass the type of the value to the dictionary to support more complex types, among other things.
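A sketch of the lookup semantics the table above proposes, written in Python purely for illustration (the function and store shapes are invented, not an ASP.NET API):

```python
def temp_data_get(store: dict, key: str, element_type: type):
    """Hypothetical typed TempData read following the table above."""
    if key not in store:
        raise KeyError(key)          # key missing -> throw key not found
    value = store[key]
    if value is None:
        return None                  # stored null -> null
    if not all(isinstance(v, element_type) for v in value):
        raise TypeError("stored values do not match the requested element type")
    return value                     # [] -> empty list; [1, 2, 3, 4] -> list with values

print(temp_data_get({"Test": []}, "Test", int))  # []
```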
2025-04-01T06:37:58.107437
2017-03-11T01:41:09
213494838
{ "authors": [ "SergeyRudakov" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3923", "repo": "aspnet/Proxy", "url": "https://github.com/aspnet/Proxy/issues/55" }
gharchive/issue
Connection timeout on packet loss Hello, I'm using the proxy on an Azure VM connected to an internal network via VPN. It works all right most of the time; however, periodically I experience TCP packet loss (with TCP retransmission) when sending POST requests. This leads to a TaskCanceledException (operation timed out) in the following line: HttpResponseMessage responseMessage = await _httpClient.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, context.RequestAborted); I set the HttpClient timeout period to 15 minutes (which is more than necessary), but it seems that when I start getting packet send problems followed by (RST, ACK), the connection gets stuck forever and terminates on timeout. Sometimes I get the same timeout error after 15-20 seconds even though I set ReceiveHeadersTimeout and SendTimeout to 900 seconds. I never see the same problem when I run the proxy on the local network. With VPN, about 2-5 of 100 POST requests fail. I'm not sure if I will have the same problem on the full .NET 4.5. I experienced similar results with a .NET 4.5 proxy and direct website POSTs. I guess it is expected behavior to have a timeout on the DNS query (15 seconds) no matter what timeout is set for the entire HttpClient. Closing this issue.
2025-04-01T06:37:58.119253
2024-01-07T09:50:10
2069062611
{ "authors": [ "assafelovic", "maorkuriel" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3924", "repo": "assafelovic/gpt-researcher", "url": "https://github.com/assafelovic/gpt-researcher/issues/326" }
gharchive/issue
Cannot install openai~=1.3.3 and openai~=1.6.1 macOS Sonoma 14.2.1 Apple M2 Pro Can't install the requirements, getting an error ERROR: Cannot install openai~=1.3.3 and openai~=1.6.1 because these package versions have conflicting dependencies. Full log: ✝ ξ‚° Documents/Github/gpt-researcher ξ‚° ξ‚  master ξ‚° pip install -r requirements.txt DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/commix-3.9.dev0-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/dnsgen-1.0.4-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/xnLinkFinder-4.1-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/ghauri-1.2.7-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/LinkFinder-1.0-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/urless-1.0-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/wafw00f-2.2.0-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/py_altdns-1.0.2-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/Interlace-1.9.8-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/cloud_enum-0.0.0-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation..
Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/DNSValidator-0.1-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/cmseek-1.1.3-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 DEPRECATION: Loading egg at /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/waymore-1.28-py3.12.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330 Collecting asyncio==3.4.3 (from -r requirements.txt (line 2)) Using cached asyncio-3.4.3-py3-none-any.whl (101 kB) Collecting beautifulsoup4==4.12.2 (from -r requirements.txt (line 3)) Using cached beautifulsoup4-4.12.2-py3-none-any.whl (142 kB) Requirement already satisfied: colorama==0.4.6 in /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages (from -r requirements.txt (line 4)) (0.4.6) Collecting duckduckgo_search==4.1.1 (from -r requirements.txt (line 5)) Using cached duckduckgo_search-4.1.1-py3-none-any.whl.metadata (19 kB) Collecting md2pdf==1.0.1 (from -r requirements.txt (line 6)) Using cached md2pdf-1.0.1.tar.gz (6.4 kB) Preparing metadata (setup.py) ... done Collecting openai~=1.3.3 (from -r requirements.txt (line 7)) Using cached openai-1.3.9-py3-none-any.whl.metadata (17 kB) Collecting playwright==1.40.0 (from -r requirements.txt (line 8)) Using cached playwright-1.40.0-py3-none-macosx_11_0_arm64.whl.metadata (3.6 kB) ERROR: Cannot install openai~=1.3.3 and openai~=1.6.1 because these package versions have conflicting dependencies. The conflict is caused by: The user requested openai~=1.3.3 The user requested openai~=1.6.1 To fix this you could try to: 1. loosen the range of package versions you've specified 2. remove package versions to allow pip attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts Same Issue with Docker image ✘ ✝ ξ‚° Documents/Github/gpt-researcher ξ‚° ξ‚  master ξ‚° docker-compose up WARN[0000] The "TAVILY_API_KEY" variable is not set. Defaulting to a blank string. [+] Running 1/1 ! 
gpt-researcher Warning 2.0s [+] Building 53.7s (12/14) docker:desktop-linux => [gpt-researcher internal] load .dockerignore 0.0s => => transferring context: 81B 0.0s => [gpt-researcher internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 1.04kB 0.0s => [gpt-researcher internal] load metadata for docker.io/library/python:3.11.4-slim-bullseye 2.7s => [gpt-researcher auth] library/python:pull token for registry-1.docker.io 0.0s => [gpt-researcher install-browser 1/3] FROM docker.io/library/python:3.11.4-slim-bullseye@sha256:40319d0a897896e746edf877783ef39685d44e90e1e6de8 3.0s => => resolve docker.io/library/python:3.11.4-slim-bullseye@sha256:40319d0a897896e746edf877783ef39685d44e90e1e6de8d964d0382df0d4952 0.0s => => sha256:40589f858a36548f3a99431b7b0d983ba27e5350bd0a032910f956dc14bc77a3 1.06MB / 1.06MB 0.4s => => sha256:ebfe57ae3ec3cd387f0f58f3246abacb367501246ba38834c134db2e117a9922 12.03MB / 12.03MB 1.5s => => sha256:40319d0a897896e746edf877783ef39685d44e90e1e6de8d964d0382df0d4952 1.65kB / 1.65kB 0.0s => => sha256:295605814c6beef84ee8d2bc80e42348ba4c4d0bb01425c6d5262c3849d3ba48 1.37kB / 1.37kB 0.0s => => sha256:a52167001c4fe71875dd1c847ca252944583d73eec2fb93451a11b0024f5161e 6.94kB / 6.94kB 0.0s => => sha256:41f92d5a73b9bee296c7b4a3817b28098b22fb60112608b42bb03570ca296115 30.06MB / 30.06MB 1.5s => => sha256:0cffb447bcdc453d0e4f501f9b8a46080ccf47c1945e63b39801bdea23c38cdf 242B / 242B 0.8s => => sha256:f09dc7ca5e15b356f3b324046a9fac4bba7214fd246ff86d6be85d380930cffa 3.38MB / 3.38MB 1.4s => => extracting sha256:41f92d5a73b9bee296c7b4a3817b28098b22fb60112608b42bb03570ca296115 0.9s => => extracting sha256:40589f858a36548f3a99431b7b0d983ba27e5350bd0a032910f956dc14bc77a3 0.0s => => extracting sha256:ebfe57ae3ec3cd387f0f58f3246abacb367501246ba38834c134db2e117a9922 0.3s => => extracting sha256:0cffb447bcdc453d0e4f501f9b8a46080ccf47c1945e63b39801bdea23c38cdf 0.0s => => extracting sha256:f09dc7ca5e15b356f3b324046a9fac4bba7214fd246ff86d6be85d380930cffa 0.2s => [gpt-researcher internal] load build context 0.1s => => transferring context: 2.91MB 0.0s => [gpt-researcher install-browser 2/3] RUN apt-get update && apt-get satisfy -y "chromium, chromium-driver (>= 115.0)" && chromium 32.4s => [gpt-researcher install-browser 3/3] RUN apt-get install -y firefox-esr wget && wget https://github.com/mozilla/geckodriver/releases/down 10.4s => [gpt-researcher gpt-researcher-install 1/4] RUN mkdir /usr/src/app 0.1s => [gpt-researcher gpt-researcher-install 2/4] WORKDIR /usr/src/app 0.0s => [gpt-researcher gpt-researcher-install 3/4] COPY ./requirements.txt ./requirements.txt 0.0s => ERROR [gpt-researcher gpt-researcher-install 4/4] RUN pip install -r requirements.txt 5.0s ------ > [gpt-researcher gpt-researcher-install 4/4] RUN pip install -r requirements.txt: 1.027 Collecting asyncio==3.4.3 (from -r requirements.txt (line 2)) 1.365 Downloading asyncio-3.4.3-py3-none-any.whl (101 kB) 1.466 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 101.8/101.8 kB 973.0 kB/s eta 0:00:00 1.582 Collecting beautifulsoup4==4.12.2 (from -r requirements.txt (line 3)) 1.660 Downloading beautifulsoup4-4.12.2-py3-none-any.whl (142 kB) 1.694 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 143.0/143.0 kB 4.7 MB/s eta 0:00:00 1.803 Collecting colorama==0.4.6 (from -r requirements.txt (line 4)) 1.869 Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB) 1.999 Collecting duckduckgo_search==4.1.1 (from -r requirements.txt (line 5)) 2.069 Downloading duckduckgo_search-4.1.1-py3-none-any.whl (26 kB) 2.159 Collecting md2pdf==1.0.1 
(from -r requirements.txt (line 6)) 2.228 Downloading md2pdf-1.0.1.tar.gz (6.4 kB) 2.247 Preparing metadata (setup.py): started 3.211 Preparing metadata (setup.py): finished with status 'done' 3.313 Collecting openai~=1.3.3 (from -r requirements.txt (line 7)) 3.384 Downloading openai-1.3.9-py3-none-any.whl (221 kB) 3.414 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 221.4/221.4 kB 7.8 MB/s eta 0:00:00 3.570 Collecting playwright==1.40.0 (from -r requirements.txt (line 8)) 3.641 Downloading playwright-1.40.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (37.0 MB) 4.715 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 37.0/37.0 MB 31.5 MB/s eta 0:00:00 4.739 ERROR: Cannot install openai~=1.3.3 and openai~=1.6.1 because these package versions have conflicting dependencies. 4.739 4.739 The conflict is caused by: 4.739 The user requested openai~=1.3.3 4.739 The user requested openai~=1.6.1 4.739 4.739 To fix this you could try to: 4.739 1. loosen the range of package versions you've specified 4.739 2. remove package versions to allow pip attempt to solve the dependency conflict 4.739 4.739 ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts 4.984 4.984 [notice] A new release of pip is available: 23.1.2 -> 23.3.2 4.984 [notice] To update, run: pip install --upgrade pip ------ failed to solve: process "/bin/sh -c pip install -r requirements.txt" did not complete successfully: exit code: 1 @maorkuriel Thanks, it is resolved now. The issue was caused by a PR bot.
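The resolver error itself is mechanical: ~=1.3.3 means >=1.3.3,<1.4 while ~=1.6.1 means >=1.6.1,<1.7, so the two pins that ended up in requirements.txt describe disjoint ranges. A quick check with the packaging library illustrates why no version can satisfy both (a sketch, independent of pip's internals):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# The two compatible-release pins from requirements.txt, combined as a resolver would.
combined = SpecifierSet("~=1.3.3") & SpecifierSet("~=1.6.1")

print(Version("1.3.9") in combined)  # False: excluded by >=1.6.1
print(Version("1.6.1") in combined)  # False: excluded by <1.4
print(list(combined))                # the two specifiers, whose ranges do not overlap
```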
2025-04-01T06:37:58.128761
2023-07-12T18:59:27
1801583358
{ "authors": [ "OumYacout", "bcgit2023", "jortegac", "madiha1ahmed", "neoOpus", "thinkJD", "wwzeng1" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3925", "repo": "assafelovic/gpt-researcher", "url": "https://github.com/assafelovic/gpt-researcher/issues/34" }
gharchive/issue
openai.error.InvalidRequestError: The model: gpt-4 does not exist I thought this would work with GPT-3.5 and also thought that OpenAI made it possible for anyone to use the GPT-4 API, but after several hours of tinkering with this I am not able to get it to work. I see the webpage, but it keeps throwing one error after another (I fixed them all), and I am unable to make it work because of this error. I have the same problem. Hi! I’m one of the founders of Sweep, a GitHub app that solves issues (like small bugs) by writing pull requests. This looks like a good issue for Sweep https://github.com/sweepai/sweep to try. It might need more details from the maintainers though. We have onboarding instructions here; I’m also happy to help you onboard directly :) I have the same error, "openai.error.InvalidRequestError: The model: gpt-4 does not exist"; it just stopped working. The model is configured here https://github.com/assafelovic/gpt-researcher/blob/master/config/config.py#L25 by accessing an environment variable: self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4") If you are having issues with access to gpt-4, you can set an environment variable with a model name that you do have access to, e.g. gpt-3.5-turbo. For example: export SMART_LLM_MODEL="gpt-3.5-turbo" Hi! By making the following modifications, I was able to resolve the issue: In gpt-researcher/config/config.py, substitute "gpt-4" in line 25 with "gpt-3.5-turbo-16k". In gpt-researcher/config/config.py, substitute 8000 in line 27 with 4000. Upon executing this modified script, I encountered another error: "json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 58)". So, if you also get the same error, all you need to do is: in gpt-researcher/agent/research_agent.py line 92, substitute "return json.loads(result)" with "return result". This avoids the double-parsing, as result itself is a JSON object. Hope this helps! @madiha1ahmed I did it your way and it worked. Thanks! It works for me now. Thanks
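The env-var override shown above generalizes to a small guard. A sketch against the current (>=1.0) OpenAI SDK rather than the legacy one in the traceback, with the fallback model name as an assumption:

```python
import os
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Same pattern as config.py line 25: the env var wins, gpt-4 is the default.
smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")

# Fall back when the account has no access to the requested model.
available = {m.id for m in client.models.list()}
if smart_llm_model not in available:
    smart_llm_model = "gpt-3.5-turbo"
print(f"using {smart_llm_model}")
```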
2025-04-01T06:37:58.148432
2024-02-23T01:04:04
2150204341
{ "authors": [ "outlaw-dame", "snarfed" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3926", "repo": "assemblee-virtuelle/activitypods", "url": "https://github.com/assemblee-virtuelle/activitypods/issues/184" }
gharchive/issue
Integrate Granary as Middleware for AT Protocol Interoperability Summary This proposal outlines a plan to integrate Granary as middleware within ActivityPods to enable interoperability with the AT Protocol. The goal is to bridge ActivityPods, which utilises ActivityPub and Solid standards, with the emerging AT Protocol, thereby enhancing cross-platform communication and user interaction across decentralised social networks. https://github.com/snarfed/granary Background Granary is a library that converts data between different social web formats, including ActivityStreams, microformats, HTML, and JSON. Extending Granary to support AT Protocol (https://github.com/bluesky-social/atproto) and integrating it as middleware in ActivityPods can achieve seamless interoperability between ActivityPods-based applications and AT Protocol platforms. Objectives Enhance Interoperability: Enable ActivityPods applications to communicate with AT Protocol platforms, breaking down silos in the decentralised social web. User Empowerment: Allow users to interact across different platforms without duplicating their social graph or content. Developer Support: Provide developers with tools to build applications that can operate across different decentralised social networking protocols. Proposed Approach Phase 1: Planning and Analysis Understand the protocols (ActivityPub, Solid, AT Protocol) and identify key integration points. Map common entities and actions between ActivityStreams and AT Protocol lexicons. Phase 2: Extension of Granary Develop AT Protocol support in Granary, focusing on conversion logic for core entities like users and posts. Handle extensions and custom fields specific to ActivityStreams and AT Protocol. Phase 3: Middleware Integration Design and implement middleware architecture within ActivityPods to utilise Granary for data conversion. Adapt ActivityPods' API endpoints for AT Protocol communication, focusing on authentication and data flow. Phase 4: Testing and Validation Conduct comprehensive testing, including unit, functional, and interoperability tests. Set up a test environment for real-world scenario testing. Phase 5: Documentation and Deployment Document the integration process, API usage, and contribution guidelines. Develop a deployment strategy and monitor the integration's performance. Request for Comments I invite the ActivityPods community to discuss this proposal. Feedback on the approach, potential challenges, and additional benefits is welcome. Collaboration is key to achieving interoperability and enhancing the decentralised social web. Conclusion Integrating Granary as middleware to achieve interoperability with AT Protocol represents a significant step towards a more interconnected and user-centric decentralised social web. This proposal aims to start a collaborative effort towards this goal, leveraging the strengths of ActivityPods, Granary, and the broader open-source community. Thank you for considering this proposal. I look forward to your feedback and the opportunity to work together on this exciting integration. I understand this is a large undertaking. I in no way believe this will be addressed or completed anytime soon, but the possibilities are incredible. ActivityPods would garner rapid adoption. Adding this to provide context for Granary and what it covers.
Granary is a library and REST API that fetches and converts between a wide variety of social data sources and formats: Facebook, Flickr, GitHub, Instagram, Mastodon, and Twitter native APIs Instagram and Facebook scraped HTML ActivityStreams 1.0 and 2.0 JSON, including ActivityPub HTML and JSON with microformats2 Atom, RSS 2.0, JSON Feed Plain XML Bluesky/AT Protocol Nostr, with many NIPs Free yourself from silo API chaff and expose the sweet social data foodstuff inside in standard formats and protocols! Thanks for the idea and vote of support @outlaw-dame! Granary can indeed convert social data to Bluesky format, ie the app.bsky.* lexicons, which can then be served over AT Protocol. Actually supporting AT Protocol itself, ie acting as a full-fledged PDS, is a bigger lift: you need to store and serve MST nodes and a repo commit chain over websocket. You can do that with eg https://github.com/snarfed/arroba , which integrates well with granary, or you can write your data to a standalone PDS, either one you run (the official one is open source) or someone else's like the official https://bsky.social/ . How would this be used to map the data/PDS to the Pod Store, which is the ActivityPods backend and where user data is housed? You'd integrate arroba into the ActivityPods backend to create ATProto users and repos, and create, update, and delete records in those repos when you do the corresponding things for native ActivityPods users. You could optionally also write a new implementation of arroba's Storage class that stores repo data in the ActivityPods data store. However, it looks like ActivityPods is JavaScript, and arroba is Python, so you'd have to use cross-language bindings, which can sometimes be awkward. The official PDS is JavaScript, so you might want to look at using that instead.
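For a sense of the conversion step discussed above, a sketch using granary's Bluesky module (the input is ActivityStreams 1, granary's common interchange format; treat the exact call shape and output as assumptions to verify against granary's docs):

```python
from granary import bluesky

# An ActivityStreams 1 note, granary's common interchange format.
as1_note = {
    "objectType": "note",
    "content": "Hello from ActivityPods!",
}

# Convert to an app.bsky.feed.post record suitable for writing into an AT Protocol repo.
record = bluesky.from_as1(as1_note)
print(record.get("$type"))  # expected: app.bsky.feed.post
```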
2025-04-01T06:37:58.153733
2024-08-14T21:09:55
2466847958
{ "authors": [ "coveralls", "epage" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3927", "repo": "assert-rs/snapbox", "url": "https://github.com/assert-rs/snapbox/pull/358" }
gharchive/pull-request
fix(filter): Take redactions into account for Array elides Verified this against cargo in rewriting some tests Fixes #352 Pull Request Test Coverage Report for Build<PHONE_NUMBER>9 Details 7 of 7 (100.0%) changed or added relevant lines in 1 file are covered. No unchanged relevant lines lost coverage. Overall coverage remained the same at 51.251% Totals Change from base Build<PHONE_NUMBER>7: 0.0% Covered Lines: 1393 Relevant Lines: 2718 πŸ’› - Coveralls
2025-04-01T06:37:58.158496
2022-01-29T05:00:35
1118031191
{ "authors": [ "KENNYSOFT", "scordio" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3928", "repo": "assertj/assertj-core", "url": "https://github.com/assertj/assertj-core/pull/2476" }
gharchive/pull-request
Breaking change: Extracting within SoftAssertions now throws an assertion error if actual is null Check List: Unit tests : YES Javadoc with a code example (on API only) : NA PR meets the contributing guidelines : Should be, but feel free to notify me if not! #2401 introduced extracting to throw an assertion error if actual is null, but it's not effective when using SoftAssertions. So I've fixed it. Actually, I think I should also see related #2411 and #2412. I will proceed if my approach is right. Weirdly, the Pull request list page doesn't show this. @scordio Can I get some review? Sure @KENNYSOFT, I already had a first look. I would like to check our Byte Buddy configuration; it might be that we can solve this issue directly there. I'll play with BB during the weekend and get back to you. Wow, sounds great! Looking forward to seeing the issue resolved in any form. @KENNYSOFT I'm still working on this topic and I might need a few more days for it. Thanks for the update. Hope you finish well. By the way, I've now learned JUnit's @ParameterizedTest and it resolves my actual issue in a different way; I used SoftAssertions in the loop to check all violations within the given list, but now the loop is iterated by JUnit and I just use a simple Assertion. πŸ˜‰
2025-04-01T06:37:58.181540
2021-03-12T16:54:06
830293041
{ "authors": [ "andrewmilligan", "zstumgoren" ], "license": "ISC", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3929", "repo": "associatedpress/harvester", "url": "https://github.com/associatedpress/harvester/issues/8" }
gharchive/issue
Docs should recommend creating a new GCP project? AFAICT a GCP project can only have one OAuth consent form. If another app is configured to use OAuth in a user's sole GCP project, it means Harvester would need to share that same form. This appears to be possible (i.e. you can add multiple Redirect URIs and Authorized domains), but I believe they would need to share the same user verification screen and related metadata about user policy, etc. Might be a good idea to instruct users to create a new GCP project specifically for Harvester. Yes, this is a great point. The new version of the docs recommends creating a new project, so I'm going to close this: https://harvester.readthedocs.io/en/latest/google_credentials/#google-cloud-platform-project
2025-04-01T06:37:58.190285
2024-10-29T12:35:27
2621117068
{ "authors": [ "AGalabov", "paulwongx", "pincman" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3930", "repo": "asteasolutions/zod-to-openapi", "url": "https://github.com/asteasolutions/zod-to-openapi/issues/267" }
gharchive/issue
All types are inferred as string Type inference error for request and response: I tried to build an OpenAPI using this library in combination with hono.js and Next.js. However, after defining the schema, I discovered that when calling the API from the Next.js client, type errors occur. All request parameters and response types have become string types, including date and number types. response type error export const postSchema = z .object({ id: z.string(), title: z.string(), thumb: z.string(), summary: z.string().nullable().optional(), keywords: z.string().nullable().optional(), description: z.string().nullable().optional(), slug: z.string().nullable().optional(), body: z.string(), createdAt: z.coerce.date(), updatedAt: z.coerce.date(), }) .strict(); app .openapi( createRoute({ tags: ['Post operations'], method: 'get', path: '/:item', request: { params: z.object({ item: z.string(), }), }, responses: { 200: { content: { 'application/json': { schema: postSchema, }, }, description: 'Post query result', }, }, }), async (c) => { try { const { item } = c.req.param(); const result = await queryPostItem(item); return c.json(result) as any; } catch (error) { return c.json({ error }, 500); } }, ) const result = await apiClient.api.posts[':item'].$get({ param: { item: params.item } }); if (!result.ok) return notFound(); const post = await result.json(); // ... <div> <span> <AiOutlineCalendar /> </span> <time className="tw-ellips"> {!isNil(post.updatedAt) ? formatChineseTime(post.updatedAt) : formatChineseTime(post.createdAt)} </time> </div> request type error export const postPaginateQuerySchema = z.object({ page: z.coerce.number().optional().openapi({ type: 'number' }), limit: z.coerce.number().optional().openapi({ type: 'number' }), orderBy: z.enum(['asc', 'desc']).optional(), }); export const postPaginateResultSchema = z.object({ items: z.array(postSchema), meta: z.object({ itemCount: z.coerce.number(), totalItems: z.coerce.number(), perPage: z.coerce.number(), totalPages: z.coerce.number(), currentPage: z.coerce.number(), }), }); export type PostPaginate = z.infer<typeof postPaginateResultSchema>; const res = await apiClient.api.posts.$get({ query: { page, limit }, }); const { items, meta } = (await res.json()) as any as PostPaginate; if (meta.totalPages && meta.totalPages > 0 && page > meta.totalPages) { return redirect('/'); } // ... @pincman this is not an issue with our library in terms of your integration. The date: z.date() does validate dates. However, in terms of "specification" (OpenAPI) there is no date representation - there is only a string with some specific formatting, but it is still a string. The second screenshot with page and limit is probably correct - I think the documentation would say that it is a number. However, in general over HTTP requests the query parameters are all strings. So I imagine what you are seeing is caused by the Next.js (or something else on your stack) type system and it has nothing to do with our library. I am going to close this issue, but if you happen to notice that it is wrongly generated, feel free to open up a new issue or reopen this one. Good luck! I'm encountering the same issue. @pincman how did you go about resolving response values that are dates/numbers being converted to strings?
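The point about query parameters generalizes beyond TypeScript: the wire format is untyped text, and coercion (z.coerce) is what turns it back into numbers. The same effect, illustrated in Python for intuition:

```python
from urllib.parse import parse_qs

params = parse_qs("page=2&limit=10")
print(params)                  # {'page': ['2'], 'limit': ['10']} -- everything is str
page = int(params["page"][0])  # the manual equivalent of z.coerce.number()
print(page + 1)                # 3
```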
2025-04-01T06:37:58.233472
2023-12-08T04:34:49
2031912531
{ "authors": [ "MichaReiser" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3931", "repo": "astral-sh/ruff", "url": "https://github.com/astral-sh/ruff/pull/9051" }
gharchive/pull-request
Fix handling of trailing target comment Summary This PR fixes an issue where Ruff moved a trailing target comment past the statement end: c = b[dddddd, aaaaaa] = ( a[ aaaaaaa, bbbbbbbbbbbbbbbbbbb ] # comment 2 ) = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Before c = b[dddddd, aaaaaa] = a[ aaaaaaa, bbbbbbbbbbbbbbbbbbb ] = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # comment 2 Now c = b[dddddd, aaaaaa] = ( a[aaaaaaa, bbbbbbbbbbbbbbbbbbb] # comment 2 ) = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx This matches Black's formatting. Test Plan Added a test. Current dependencies on/for this PR: main PR #9051 πŸ‘ˆ This stack of pull requests is managed by Graphite.
2025-04-01T06:37:58.248154
2024-08-06T07:33:43
2450164897
{ "authors": [ "ulucs", "zanieb" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3932", "repo": "astral-sh/uv", "url": "https://github.com/astral-sh/uv/issues/5808" }
gharchive/issue
PYTHONPATH support for uv Python Package (Sort-of related to, and would fix: https://github.com/astral-sh/uv/issues/4450) Python supports a myriad of methods to declare dependency locations, not all of which can be correctly identified by the uv Python package. For example, PYTHONPATH: ❯ PYTHONPATH=$VENV_PATH/lib/python3.10/site-packages python Python 3.10.14 (main, Mar 19 2024, 21:46:16) [Clang 16.0.6 ] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import uv >>> uv.find_uv_bin() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "$VENV_PATH/lib/python3.10/site-packages/uv/__init__.py", line 30, in find_uv_bin raise FileNotFoundError(path) FileNotFoundError: /Users/(thats_me)/.local/bin/uv which throws, even though the uv package is located in the environment. Expected Behavior uv.find_uv_bin() correctly identifies that its binary should be located in $VENV_PATH/bin and returns $VENV_PATH/bin/uv Reasonable Other Behavior uv.find_uv_bin() falls back to fetching the binary location from $PATH, and gives the responsibility of correctly setting it to the process that populated $PYTHONPATH Forgive my ignorance here, but isn't PYTHONPATH just a module search path? I don't quite see how we can "correctly" infer the virtual environment bin path from just the PYTHONPATH.
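The "reasonable other behavior" from the report, falling back to $PATH, can be approximated in a small wrapper today. A sketch (the wrapper is hypothetical; uv.find_uv_bin is the function from the report):

```python
import shutil

import uv

def find_uv() -> str:
    """Locate the uv binary, falling back to $PATH as the report suggests."""
    try:
        return uv.find_uv_bin()
    except FileNotFoundError:
        path = shutil.which("uv")
        if path is None:
            raise  # nothing on $PATH either; re-raise the original error
        return path

print(find_uv())
```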
2025-04-01T06:37:58.256043
2024-08-22T04:55:34
2479838038
{ "authors": [ "wimglenn" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3933", "repo": "astral-sh/uv", "url": "https://github.com/astral-sh/uv/issues/6407" }
gharchive/issue
Incorrect precedence of UV_INDEX_URL vs --index-url in reqs file If an index url has been specified directly in the requirements file using -i or --index-url, this should take precedence over the env var. Currently uv will use UV_INDEX_URL if it's set, and ignore the one in the requirements file. Reproducer: $ uv --version uv 0.3.1 (be17d132a 2024-08-21) $ python3 -VV Python 3.12.4 (v3.12.4:8e8a4baf65, Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-13<IP_ADDRESS>)] $ python3 -m venv .venv --without-pip $ vi reqs.txt $ cat reqs.txt --index-url https://test.pypi.org/simple uv-reqs-example==0.1 $ UV_INDEX_URL=https://pypi.org/simple uv pip install -r reqs.txt Γ— No solution found when resolving dependencies: ╰─▢ Because uv-reqs-example was not found in the package registry and you require uv-reqs-example==0.1, we can conclude that your requirements are unsatisfiable. $ UV_INDEX_URL=https://example.org/ uv pip install -r reqs.txt β ΄ Resolving dependencies... error: HTTP status server error (500 Internal Server Error) for url (https://example.org/uv-reqs-example/) This is in contrast to pip, where the index url from the environment has lower priority than the one in the requirements file: $ uv pip install pip Resolved 1 package in 308ms Prepared 1 package in 573ms Installed 1 package in 11ms + pip==24.2 $ PIP_INDEX_URL=https://example.org/ .venv/bin/pip install -r reqs.txt Looking in indexes: https://test.pypi.org/simple Collecting uv-reqs-example==0.1 (from -r reqs.txt (line 2)) Downloading https://test-files.pythonhosted.org/packages/78/20/c8431c9645d1390acce2f2da698539f473818e77a168d135c09279dd7bf5/uv_reqs_example-0.1-py3-none-any.whl.metadata (58 bytes) Downloading https://test-files.pythonhosted.org/packages/78/20/c8431c9645d1390acce2f2da698539f473818e77a168d135c09279dd7bf5/uv_reqs_example-0.1-py3-none-any.whl (986 bytes) Installing collected packages: uv-reqs-example Successfully installed uv-reqs-example-0.1 This makes uv + UV_INDEX_URL somewhat dangerous to use as a drop-in replacement for pip + PIP_INDEX_URL. A failure to install is not the worst scenario; the worst is that you succeed in installing potentially totally different packages from a totally different index (note: checksums in the reqs file could protect you from that). I'm intending to convince you that retaining compatibility with pip is a better choice than documenting this as an incompatibility. Consider when there is a custom default index set. This is pretty common in corporate environments, where PyPI packages may need to be cached, whitelisted, or shadowed with patched versions, e.g. by using an internal devpi-server. The custom index is set as the default index (for all users) by the /etc/pip.conf file, owned by root and placed by a sysadmin or config mgmt. Now, uv doesn't look at /etc/pip.conf, nor does it allow a global default configuration; it only considers the user's ~/.config/uv/uv.toml. The next-best option for the sysadmin to set a sensible default would probably be setting UV_INDEX_URL (perhaps in /etc/environment, or by wrapping the uv executable). However, if UV_INDEX_URL takes precedence over the index specified by requirements files, that's an unfortunate situation - users would have to unset the env var (which they might not even know about) to allow using the index which the requirements file was attempting to provide. As I mentioned earlier, the simple failure mode is that packages are not found, but the more confusing and hard-to-debug failure mode is if the installation succeeds but uses the wrong packages (because it was using the wrong index).
Switching from pip to uv pip should not be so error-prone. The failure mode I describe might sound like a weird edge-case, but could actually be quite common, especially with devpi-server, where "team-specific" indices could inherit and mirror a default index, to host a mix of their own/patched local versions layered over the default versions. ... and the one in a requirements.txt in a file was picked up (which could be from another project or similar). I'm not sure what you mean here? Unless I'm mistaken, it is part of the requirements file format that the file can control the index-url. This is not dangerous, because requirements files and their custom indices cannot be used while resolving dependencies from an index; the requirements file would be installed directly with -r. At least, it's not any more dangerous than a direct url requirement, as specified in PEP 508.
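The checksum point is worth spelling out: with hashes pinned, installing from the wrong index fails hard instead of silently succeeding with different packages. The underlying check is just a digest comparison; a sketch of the idea (pip's real mechanism is --require-hashes):

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> None:
    """Reject an artifact whose bytes don't match the pinned digest."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"hash mismatch for {path}: got {digest}")
```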
2025-04-01T06:37:58.258417
2024-10-10T12:36:47
2578724790
{ "authors": [ "Super1Windcloud", "charliermarsh" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3934", "repo": "astral-sh/uv", "url": "https://github.com/astral-sh/uv/issues/8088" }
gharchive/issue
When I run uv pip sync requirements.txt, the environment is not updated (Windows 11), but pyproject.toml is. So I think the command uv add -r requirements.txt can be used instead. Yeah, I think you're looking for uv add here as you mentioned above.
2025-04-01T06:37:58.261716
2024-02-17T10:19:09
2139931115
{ "authors": [ "erlend-sh", "olivier-lacroix", "zanieb" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3935", "repo": "astral-sh/uv", "url": "https://github.com/astral-sh/uv/pull/1581" }
gharchive/pull-request
Add benchmark of pixi with uv Summary Adds Pixi as one of the tools benchmarked against uv. Pixi can generate multi-platform lock files. Here, lock files are limited to the platform the benchmark is executed on. Pixi can install packages from conda and/or PyPI. Here, packages are installed from PyPI. Still in draft as generating lock files currently fails due to https://github.com/prefix-dev/pixi/issues/817 This seems redundant now that they're retiring rip in favor of uv. For anyone else like me wondering what you’re referring to: https://prefix.dev/blog/uv_in_pixi Very ecosystem-conscious move by the Prefix team! πŸ‘
2025-04-01T06:37:58.263174
2024-04-19T00:24:33
2251831636
{ "authors": [ "zanieb" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3936", "repo": "astral-sh/uv", "url": "https://github.com/astral-sh/uv/pull/3131" }
gharchive/pull-request
Revert "Rewrite uv-auth (#2976)" This reverts commit c0efeeddf6d738991d8f3149168ce57c52073f4e. As an alternative to the in-progress fix at https://github.com/astral-sh/uv/pull/3130, we could revert the pull request at #2976. #3130 instead.
2025-04-01T06:37:58.272419
2024-07-06T17:11:39
2393633447
{ "authors": [ "SuperFluffy", "noot" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3937", "repo": "astriaorg/astria", "url": "https://github.com/astriaorg/astria/pull/1242" }
gharchive/pull-request
fix(sequencer)!: store native asset ibc->trace mapping in init_chain Summary We need to store the native asset ibc-to-"trace" mapping in the state; otherwise, queries for the native asset using the ID will fail. For example, get_bridge_account_info fails right now where the asset is the native asset. Changes Store the native asset ibc->trace mapping in init_chain. Also enforce that the native asset is in "trace" form, as otherwise we won't be able to map from ibc to trace form for the asset, since we don't know the trace form. Breaking changes This is unfortunately breaking, since the ibc->trace mapping is stored in app state. Added an exclamation mark, as in fix(sequencer)!, because this is breaking
2025-04-01T06:37:58.277715
2023-03-07T18:15:31
1613970454
{ "authors": [ "RodrigoTomeES", "Suven", "gnomeria", "nikolaxhristov" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3938", "repo": "astro-community/astro-critters", "url": "https://github.com/astro-community/astro-critters/issues/73" }
gharchive/issue
Any way to make it also remove the external stylesheet from the HTML? Hi, thank you for building this tool. I was wondering how we could remove something like <link href=/_astro/index.66b179d0.css rel=stylesheet> from the built HTML once it's already been inlined? Critters only inlines a small portion of the CSS, the one that is displayed above-the-fold. The rest is needed by your website. @nikolaxhristov maybe this feature makes sense when all your styles are critical. True, this is the issue to track it https://github.com/nikolaxhristov/critters/issues/3 and it is related to https://github.com/nikolaxhristov/critters/issues/2 which needs to get fixed first. @nikolaxhristov Critters only inlines a small portion of the CSS, the one that is displayed above-the-fold. The rest is needed by your website. It is recommended to leave those intact. Actually, that is a false assumption, isn't it? The critters README states: It also means Critters inlines all CSS rules used by your document, rather than only those needed for above-the-fold content. @Suven Yes, but it also means that Critters includes more CSS than just that above the fold. @Suven I don't always give the most correct assumptions :D With the fact in mind that Critters inlines everything needed for the initial render, the only case where the external CSS is still needed is for client-side-hydrated components, right? πŸ€” Maybe a config option for this plugin to remove the link tags would be nice, or am I overlooking something? If you agree, I would try and see if I am able to provide a PR. All PRs are warmly welcomed.
2025-04-01T06:37:58.281128
2023-11-24T16:15:41
2009952504
{ "authors": [ "mpgreg", "sunank200" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3939", "repo": "astronomer/ask-astro", "url": "https://github.com/astronomer/ask-astro/issues/174" }
gharchive/issue
Upsert does not remove successfully upserted documents https://github.com/astronomer/ask-astro/blob/515f3386c4eac8aa4ddcdc3ad12c46b52e4aad8a/airflow/include/tasks/extract/utils/weaviate/ask_astro_weaviate_hook.py#L328C9-L340C1 If no errors occur in upsert, this block is never reached. It needs to move into its own function that is called both after the rollback and at line 405 when there are no errors. For a suggested fix, see: https://github.com/mpgreg/ask-astro/blob/ed65a354013b5ce2170f98448bd510ed3c4201be/airflow/include/utils/weaviate/hooks/weaviate.py#L350 https://github.com/mpgreg/ask-astro/blob/ed65a354013b5ce2170f98448bd510ed3c4201be/airflow/include/utils/weaviate/hooks/weaviate.py#L301 https://github.com/mpgreg/ask-astro/blob/ed65a354013b5ce2170f98448bd510ed3c4201be/airflow/include/utils/weaviate/hooks/weaviate.py#L464-L489 @mpgreg Isn't this section doing the rollback: https://github.com/astronomer/ask-astro/blob/main/airflow/include/tasks/extract/utils/weaviate/ask_astro_weaviate_hook.py#L328?
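A hedged Python sketch of the suggested refactor (every name here is an assumption for illustration, not the actual hook's API):

# Hypothetical shape of the fix: pull the cleanup currently inlined at
# L328-340 into one helper and invoke it on both the rollback path and
# the success path (around line 405).
def _cleanup_upserted(self, upserted: list) -> None:
    """Placeholder for the removal logic currently inlined at L328-340."""
    ...

def batch_upsert(self, documents):
    upserted, errors = self._batch_insert(documents)  # assumed internals
    if errors:
        self._rollback(errors)
        self._cleanup_upserted(upserted)  # today: only reachable on this path
        return errors
    self._cleanup_upserted(upserted)      # the missing success-path call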
2025-04-01T06:37:58.289499
2023-01-23T22:14:08
1553916656
{ "authors": [ "chrishronek", "tatiana" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3940", "repo": "astronomer/astronomer-cosmos", "url": "https://github.com/astronomer/astronomer-cosmos/issues/93" }
gharchive/issue
Add support for "source" select/exclude to DbtDag & DbtTaskGroup parsers See dbt docs on cli usage examples here $ dbt run --select source:snowplow+ # run all models that select from Snowplow sources Ultimately, in our parsers, we should be able to have a new parameter that looks something like this: # (Either the select or exclude parameter would be specified with the snowplow source - not both) jaffle_shop = DbtTaskGroup( ... select={'sources': ['snowplow+']} # run all models that select from Snowplow sources exclude={'sources': ['snowplow+']} # run all models except those that select from Snowplow sources ) Complementing, as of Cosmos 1.x, this functionality only works on LoadMode.DBT_LS. We should also support it when the DAG/TaskGroup uses LoadMode.DBT_MANIFEST and LoadMode.CUSTOM.
2025-04-01T06:37:58.296651
2024-02-06T02:57:18
2119838010
{ "authors": [ "jbandoro" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3941", "repo": "astronomer/astronomer-cosmos", "url": "https://github.com/astronomer/astronomer-cosmos/pull/836" }
gharchive/pull-request
Add support for InvocationMode.DBT_RUNNER for local execution mode Description This PR adds dbtRunner programmatic invocation for ExecutionMode.LOCAL. I decided not to make a new execution mode (e.g. ExecutionMode.LOCAL_DBT_RUNNER) with its own set of child operators, but instead added an additional config, ExecutionConfig.invocation_mode, where InvocationMode.DBT_RUNNER can be specified. This is so that users who are already using local execution mode can use dbt runner and see performance improvements. The dbtRunnerResult makes it easy to know whether the dbt run was successful, and logs do not need to be parsed but are still logged in the operator. Performance Testing After #827 was added, I modified it slightly to use the postgres adapter instead of sqlite, because the latest dbt-core version the sqlite adapter supports is 1.4, while programmatic invocation requires >=1.5.0. I got the following results comparing subprocess to dbt runner for 10 models:

InvocationMode.SUBPROCESS: Ran 10 models in 23.77661895751953 seconds NUM_MODELS=10 TIME=23.77661895751953
InvocationMode.DBT_RUNNER: Ran 10 models in 8.390100002288818 seconds NUM_MODELS=10 TIME=8.390100002288818

So using InvocationMode.DBT_RUNNER is almost 3x faster, and it can speed up DAG runs when there are a lot of models that execute relatively quickly, since there seems to be a 1-2s speed-up per task. One thing I found while working on this is that a manifest is stored in the result if you parse a project with the runner, and it can be reused in subsequent commands to avoid reparsing. This could be a useful way of caching the manifest if we use dbt runner for dbt ls parsing, and could speed up the initial render as well. At first I thought it would be easy to have this also work for virtualenv execution, since I assumed the entire execute method ran in the virtualenv. That is not the case: the virtualenv operator creates a virtualenv and then passes the executable path to a subprocess. It may be possible to make this work for virtualenv, but that is better suited to a follow-up PR. Related Issue(s) closes #717 Breaking Change? None Checklist [x] I have made corresponding changes to the documentation (if required) [x] I have added tests that prove my fix is effective or that my feature works - added unit tests and integration tests. @jlaneve I'm closing this PR and opening up #850 because I couldn't update the GH action and have it run with the updates here on my forked branch.
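For context, the programmatic-invocation API this PR wraps looks roughly like the following (dbt-core >= 1.5; the project path and error handling are illustrative):

from dbt.cli.main import dbtRunner, dbtRunnerResult

runner = dbtRunner()
res: dbtRunnerResult = runner.invoke(["run", "--project-dir", "/path/to/project"])

# Success is a structured field -- no parsing of subprocess output needed.
if not res.success:
    raise RuntimeError(f"dbt invocation failed: {res.exception}")

# "dbt parse" returns the Manifest in res.result; seeding a runner with it
# avoids reparsing on subsequent commands (the caching idea mentioned above).
manifest = runner.invoke(["parse"]).result
warm_runner = dbtRunner(manifest=manifest)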
2025-04-01T06:37:58.304260
2022-04-21T11:24:01
1210894621
{ "authors": [ "danielhoherd", "pankajkoti", "phanikumv" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3942", "repo": "astronomer/astronomer-providers", "url": "https://github.com/astronomer/astronomer-providers/issues/257" }
gharchive/issue
Fix failing example_emr_eks_pi_job example DAG The example_emr_eks_pi_job DAG is failing as part of our integration test runs. From the investigation logs, the Bash Operator running the shell script appears to swallow some errors instead of raising them, leaving the task state marked successful. Understand the DAG, debug the issue across its tasks, and make it work. Set the shell script to raise on failure in this PR: https://github.com/astronomer/astronomer-providers/pull/259 However, the CloudFormation template is not succeeding, with the failure reason below: AWS::EKS::Nodegroup/ManagedNodeGroup: CREATE_FAILED – "Resource handler returned message: \"[Issue(Code=AsgInstanceLaunchFailures, Message=You've reached your quota for maximum Fleet Requests for this account. Launching EC2 instance failed., ResourceIds=[eks-ng-a0731f88-2ec026b0-8620-cc3d-82e3-dee3ed01e00a]), Issue(Code=NodeCreationFailure, Message=Instances failed to join the kubernetes cluster, ResourceIds=[DUMMY_11448ab7-f2d5-42a7-a6cc-0d74ccb2e8e9, DUMMY_1bb6f31e-92c8-43d5-9ca3-8e815cbff947, DUMMY_2a5d16f9-291c-4d58-bf35-2eb5c0db88f5, DUMMY_4e4b490f-ef51-4d5b-87a5-3a5141ac6f9c, DUMMY_77a3e8d7-3af0-49e6-afa5-7c5ebc9a0eeb, DUMMY_7e88602e-c7f8-4d6a-943b-5f5078a4eb3f, DUMMY_a129f26a-1a81-4b5e-9c84-2285d8734a8f, DUMMY_abb67e19-d841-4bd0-ad05-5418f73149d1, DUMMY_ae850f0d-0693-4772-9d9b-4f4d442bab0e, DUMMY_ff345f8f-3468-4f2b-b8cf-ac1560c4150d])] (Service: null, Status Code: 0, Request ID: null, Extended Request ID: null)\" (RequestToken: 98ef1611-2341-fd34-ef08-4e8970c34e47, HandlerErrorCode: GeneralServiceException)" Adding @bharanidharan14 as an assignee too, as he is trying this on his local machine. @dstandish suggested trying a smaller instance, but we are still facing the same issue. He further suggested checking with @danielhoherd and/or speaking to AWS. I will connect with @danielhoherd and check if he can help us here. In parallel, I have created an AWS case for our account: https://us-east-1.console.aws.amazon.com/support/home?region=us-east-2#/case/?displayId=9973194951&language=en I don't have any special knowledge about this. The error seems pretty clear though: "You've reached your quota for maximum Fleet Requests for this account." My first step would be to make that request that @pankajkoti made. We've run into this kind of thing in prod-cloud in GCP, and requesting an increase is the only solution. We are not sending the VIRTUAL_CLUSTER_ID to the example_delete_eks_cluster_and_role_policies shell script from the DAG. Due to this, if a virtual cluster already exists the delete shell script isn't cleaning up properly. [2022-05-02, 17:42:13 UTC] {subprocess.py:74} INFO - Running command: ['bash', '-c', 'sh $AIRFLOW_HOME/dags/example_delete_eks_cluster_and_role_policies.sh '] [2022-05-02, 17:42:13 UTC] {subprocess.py:85} INFO - Output: [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - usage: aws [options] <command> <subcommand> [<subcommand> ...] 
[parameters] [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - To see help text, you can run: [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - aws help [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - aws <command> help [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - aws <command> <subcommand> help [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - [2022-05-02, 17:42:19 UTC] {subprocess.py:89} INFO - aws: error: argument --id: expected one argument aws emr-containers delete-virtual-cluster --id $VIRTUAL_CLUSTER_ID VIRTUAL_CLUSTER_ID is coming as none during run-time create_emr_virtual_cluster_func should delete existing virtual clusters and then create one. [2022-05-02, 17:42:10 UTC] {example_emr_eks_containers_job.py:60} ERROR - Error while creating EMR virtual cluster Traceback (most recent call last): File "/usr/local/airflow/dags/example_emr_eks_containers_job.py", line 50, in create_emr_virtual_cluster_func response = client.create_virtual_cluster( File "/usr/local/lib/python3.9/site-packages/botocore/client.py", line 395, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/local/lib/python3.9/site-packages/botocore/client.py", line 725, in _make_api_call raise error_class(parsed_response, operation_name) botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the CreateVirtualCluster operation: A virtual cluster already exists in the given namespace
2025-04-01T06:37:58.373001
2019-09-11T16:35:36
492345303
{ "authors": [ "larrybradley", "pllim" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3943", "repo": "astropy/regions", "url": "https://github.com/astropy/regions/issues/298" }
gharchive/issue
write_ds9 silently overwrites output file https://github.com/astropy/regions/blob/35bab74340b12053942d2bc6ebd071dce340c605/regions/io/ds9/write.py#L62 This behavior is not always desirable. I propose you add an overwrite keyword like the one astropy's unified I/O uses. Added in v0.5
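A minimal Python sketch of the guard such an overwrite keyword typically adds (illustrative only, not the actual regions implementation; the serializer name is an assumption):

import os

def write_ds9(regions, filename, overwrite=False):
    """Write regions to a DS9 region file, refusing to clobber by default."""
    if os.path.exists(filename) and not overwrite:
        raise OSError(f"{filename} already exists; pass overwrite=True to replace it")
    output = ds9_objects_to_string(regions)  # assumed serializer
    with open(filename, "w") as fh:
        fh.write(output)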
2025-04-01T06:37:58.386844
2019-08-05T01:59:34
476623590
{ "authors": [ "AvinashBukkittu", "ZhitingHu" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3944", "repo": "asyml/texar-pytorch", "url": "https://github.com/asyml/texar-pytorch/pull/138" }
gharchive/pull-request
Enhancement of Convolution Networks This PR:

- Corrects a few mistakes in the docs for Convolution Networks
- Adds a data_format capability. Users can now use channels_first and channels_last tensors. This option can be passed to the forward function; if not passed, it is picked up from the hyperparameters
- Handles the difference in data_format requirements between the convolution network and mask_sequences [this was an existing issue that had gone unnoticed]
- Makes other_conv_kwargs and other_pool_kwargs accept a list as well as a dict. If a dict, the same property is applied to all layers; if a list, individual layers get their own kwargs
- Fixes a bug: when num_dense_layers < 0, in_features of the logits layer now equals the out_features of the Flatten layer. Adds a test case for this.

This PR closes #136 Is the new issue about time_major=False included in this PR? Yes, it is included. I have updated the description of this PR to mention this issue.
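A hedged Python sketch of the data_format handling described above (PyTorch-style; the helper name is illustrative, not Texar's actual internals):

import torch

def to_channels_first(x: torch.Tensor, data_format: str) -> torch.Tensor:
    """nn.Conv1d expects [batch, channels, time]; permute channels_last input."""
    if data_format == "channels_last":  # [batch, time, channels]
        return x.permute(0, 2, 1).contiguous()
    return x  # already channels_first

# mask_sequences-style utilities operate on [batch, time, ...] tensors, so
# channels_first data has to be permuted back before masking -- the mismatch
# this PR reconciles.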
2025-04-01T06:37:58.396118
2022-05-20T17:26:38
1243432022
{ "authors": [ "magicmatatjahu", "naman-tiwari" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3945", "repo": "asyncapi/design-system", "url": "https://github.com/asyncapi/design-system/pull/34" }
gharchive/pull-request
docs: update to make styling more consistent I have updated the Readme part by adding some icons aside two headings i.e. Environment and How to setup. @naman-tiwari Hi! Please apply my and Missy suggestions, and then we will accept it and merge :)
2025-04-01T06:37:58.397457
2023-05-03T22:27:05
1694933072
{ "authors": [ "asyncedd" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3946", "repo": "asyncedd/dots.nvim", "url": "https://github.com/asyncedd/dots.nvim/issues/96" }
gharchive/issue
sumneko_lua lags the editor As stated above, sumneko_lua (lua_ls) slows the editor down. Somehow it really lags mini plugins? Not just mini but also noice.nvim? Otherwise, diagnostics slow the editor down when leaving? nope, fixed in 50aa8ccb8cfcec2aef2fca202cd1b7aa1dca450e
2025-04-01T06:37:58.399421
2022-02-27T15:06:39
1153283479
{ "authors": [ "mhelleborg", "rogeralsing" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3947", "repo": "asynkron/protoactor-dotnet", "url": "https://github.com/asynkron/protoactor-dotnet/pull/1488" }
gharchive/pull-request
Restructure Remove netstandard 2.1 Restructure solution structure Remove Proto.Remote.GrpcNet and Proto.Remote.GrpcCore in favor of just Proto.Remote containing all grpcnet code Also, any idea why some tests consistently fail? or maybe just CI acting up Also, any idea why some tests consistently fail? or maybe just CI acting up I did rewrite the tests to run concurrently. Might need higher timeouts because of that, since they might get CPU throttled I've disabled everything else but the two failing tests. Still fails. Very unclear why. One just seems to get stuck, not doing anything?
2025-04-01T06:37:58.403674
2023-08-08T11:10:38
1841096233
{ "authors": [ "GMMDMDIDEMS", "mike-gimelfarb" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3948", "repo": "ataitler/pyRDDLGym", "url": "https://github.com/ataitler/pyRDDLGym/pull/209" }
gharchive/pull-request
Fixes #208 Changes the function argument type in get_instance(self, num: int) in the ExampleManager class from 'int' to 'str'. This allows loading instance files with characters in the filename, e.g. 'instance1c.rddl' in the MountainCar domain. Hi, Thanks for the fix. We should really start thinking of moving the competition domains over to rddlrepository soon. What do you mean exactly by moving the competition domains over to rddlrepository? There are some inconsistencies due to the fix, e.g. in the README.md EnvInfo.get_instance(0) we still use an int. I could offer to fix all inconsistencies and additionally I would rename the argument from num to name, so it gets def get_instance(self, name: str):. Let me know what you think and if it would help you. Indeed, there are still some incompatibilities. Originally, we intended (and still do) for instance numbers to be integers, following custom in defining rddl instances. The 'c' was appended to the instance to denote that they are part of the official IPPC probabilistic planning competition we held earlier this year. This means those domain files with 1c, 2c will eventually be removed from pyRDDLGym. That said, there is no reason why users should not be allowed to use either integer or string numbering going forward. There is currently a major design overhaul underway of the pyRDDLGym front end to enhance user friendliness for the future, so I think a number of these changes will need to be eventually incorporated as part of this overhaul. If you would like to help, please feel free to contribute a PR. Note we are currently waiting on #211, which is a big change to the front end, so hopefully it would not conflict. If you think any aspect of the front end can be further improved please feel free to suggest improvements or make a PR :)
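A small Python sketch of a signature that stays backward compatible with integer numbering while accepting names like '1c' (the helper name is an assumption):

from typing import Union

def get_instance(self, name: Union[int, str]) -> str:
    """Accept legacy ints (0, 1, ...) or strings ('1c') for instance lookup."""
    filename = f"instance{name}.rddl"  # the f-string coerces ints to str
    return self._load_instance_file(filename)  # assumed helper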
2025-04-01T06:37:58.457187
2022-07-22T08:53:53
1314695302
{ "authors": [ "giacomocavalieri" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3953", "repo": "atedeg/mdm", "url": "https://github.com/atedeg/mdm/pull/95" }
gharchive/pull-request
Add monadic utility functions This PR adds some monadic utility functions to improve the readability of code written using MTL-style effects. As a side note: while writing the code I noticed WartRemover getting really angry with the monadic unless and when because it kept inferring the type Any; we have to keep an eye on this I think we'll have to disable this Wart to keep the code readable:

when (updatedOrder needsMoreOf product) (emit(MissingProduct(product)))
// would turn into
when[<type annotation>] (updatedOrder needsMoreOf product) (emit(MissingProduct(product)))
// The type annotation is quite ugly and breaks the flow of the sentence when reading the code
2025-04-01T06:37:58.462520
2024-04-03T21:48:22
2223992924
{ "authors": [ "35gh", "corgan2222" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3954", "repo": "ateodorescu/home-assistant-ipmi", "url": "https://github.com/ateodorescu/home-assistant-ipmi/issues/34" }
gharchive/issue
Integration stopped working after update to HA 2024.4.0 Hi, After updating to HA 2024.4.0, all integration entities but state became unavailable. I see these suspicious errors in the logs; not sure whether they are related: Logger: homeassistant Source: util/async_.py:35 First occurred: 23:34:13 (1 occurrence) Last logged: 23:34:13 Error doing job: Task exception was never retrieved Traceback (most recent call last): File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 629, in async_add_entities await add_func(coros, entities, timeout) File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 535, in _async_add_and_update_entities tasks = [create_eager_task(coro) for coro in coros] ^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/src/homeassistant/homeassistant/util/async_.py", line 35, in create_eager_task loop=loop or get_running_loop(), ^^^^^^^^^^^^^^^^^^ RuntimeError: no running event loop Logger: py.warnings Source: runner.py:189 First occurred: 23:34:13 (1 occurrence) Last logged: 23:34:13 /usr/local/lib/python3.12/threading.py:299: RuntimeWarning: coroutine 'EntityPlatform._async_add_entity' was never awaited def __enter__(self): Thanks! I can confirm this bug after updating to 2024.0. Debug output: This happens to both of my servers (Dell and ASRock). It looks like only the SDR list items are affected. The power state and the power switch are available. 1.6.0 brought back the sensors for me, thank you.
2025-04-01T06:37:58.479240
2020-02-13T09:45:19
564559260
{ "authors": [ "atifaziz" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3955", "repo": "atifaziz/LINQBridge", "url": "https://github.com/atifaziz/LINQBridge/issues/28" }
gharchive/issue
Move ExtensionAttribute into a separate assembly At this point there are a lot of libraries that employ an ExtensionAttribute definition hack to support .NET 2.0, which leads to some well-known problems when System.Runtime.CompilerServices.ExtensionAttribute gets redefined in more than one assembly. For example, let's consider the following situation I faced recently: my project referenced both LinqBridge and Json.NET, which has its own built-in *internal* copy of LinqBridge (with ExtensionAttribute defined as *internal* too). This resulted in an "error CS0656: Missing compiler required member 'System.Runtime.CompilerServices.ExtensionAttribute..ctor'" and made me recompile Json.NET to reference LinqBridge explicitly. So, what about moving the ExtensionAttribute definition into a completely separate assembly, maybe even into a completely separate project? AFAIR, LinqBridge is the most notable project which uses this tricky technique, and I hope other libraries would eventually switch to that assembly instead of defining their own copy of the attribute class, so there would be only one assembly with ExtensionAttribute for .NET 2.0, providing the "standard" implementation of the hack. See also: http://stackoverflow.com/questions/11025100 http://devhawk.net/2012/06/20/ambiguous-extensionattribute-errors/ Original issue reported on code.google.com by <EMAIL_ADDRESS> on 20 Dec 2013 at 3:18 You should take up this issue with Json.NET. The idea of LINQBridge was to provide .NET 3.5-isms for .NET 2.0, which, besides LINQ, includes extension methods. In fact, the ExtensionAttribute was internalized in 1.1 and then deliberately brought back in 1.2 (see issue #10). Original comment by azizatif on 20 Dec 2013 at 6:10 Changed state: WontFix > The idea of LINQBridge was to provide .NET 3.5-isms for .NET 2.0, which, > besides LINQ, includes extension methods. LINQ is the most known application of extension methods, but not the only one. It may be a good idea to split "providing extension methods for .NET 2.0" and "providing LINQ to Objects for .NET 2.0", because they are *different* language features, where the latter is based on the former. There may be other libraries that would like to use the extension methods facility but have nothing to do with LinqBridge. If they all declare their own ExtensionAttribute, they may become unusable together with each other. > In fact, the ExtensionAttribute was internalized in 1.1 and then > deliberately brought back in 1.2 (see issue #10 ). I didn't mean it should be internalized. It should be public, but moved out of the LinqBridge assembly into a separate DLL that would be shipped with it. Something like this:

* LinqBridge.dll references ExtensionAttribute.dll
* LibraryThatNeedsExtensionMethodsWithLinq.dll references LinqBridge.dll
* LibraryThatNeedsExtensionMethodsButNotLinq.dll references ExtensionAttribute.dll

Original comment by <EMAIL_ADDRESS> on 20 Dec 2013 at 8:14 I understand where you're coming from. You can also extend the same argument to Action and Func delegates. In any event, this would change the direction with which LINQBridge was conceived. Today, it is a project in twilight and the only effort made would be, I reckon, towards a showstopper bug. Anything else would require considerable resources that are scarce, like volunteered free time. That said, LINQBridge is open source and if you are confident it needs to make the split, it can be forked (under the same license), changed and re-published on NuGet under an alternate Id. 
Original comment by azizatif on 21 Dec 2013 at 6:38 > You can also extend the same argument to Action and Func delegates. This would be overkill. Moreover, multiple Action and Func delegates may be declared in different namespaces, one for each library that wants them, and they would still be compatible with each other, unlike ExtensionAttribute, which MUST reside in a system namespace. > LINQBridge is open source and if you are confident it needs to make > the split, it can be forked (under the same license), changed and > re-published on NuGet under an alternate Id. The key point was to create a common assembly for everyone who wants to employ extension methods in .NET 2.0, so that assembly must come from an authoritative source like LinqBridge. Even if I fork the project, it will not convince anybody to use that assembly. > Today, it is a project in twilight Sad but true. Almost no one cares about .NET 2.0, but some libraries still try to support it. If there were a common and widely accepted way of using extension methods that prevented conflicts between these libraries (caused by ExtensionAttribute), then their authors would not simply terminate that support because they got tired of bug reports from users of the old runtime. Anyway, thank you for your answers. I'm also sorry for my not very perfect language. Original comment by <EMAIL_ADDRESS> on 21 Dec 2013 at 8:51
2025-04-01T06:37:58.523500
2017-10-01T09:33:33
261904038
{ "authors": [ "fabianishere" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3956", "repo": "atlarge-research/opendc-simulator", "url": "https://github.com/atlarge-research/opendc-simulator/issues/5" }
gharchive/issue
Interpolate task progress Expected Behaviour The progress of tasks update every tick according to the speed of the machines. Actual Behaviour At the moment, the progress of tasks will only update at a specific interval (10 ticks) instead of per tick. Implemented in #21
2025-04-01T06:37:58.531382
2019-04-26T18:19:00
437790084
{ "authors": [ "igarashitm" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3957", "repo": "atlasmap/atlasmap", "url": "https://github.com/atlasmap/atlasmap/issues/895" }
gharchive/issue
Introduce editable combobox Introduce an editable combobox which allows the user to type and filter before choosing one from a list:

- delimiter for Concatenate/Split transformation
- transformation name
- field name

What else? out of date
2025-04-01T06:37:58.547318
2021-11-07T15:59:25
1046783273
{ "authors": [ "atodorov", "bruns6077" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3958", "repo": "atodorov/django-s3-cache", "url": "https://github.com/atodorov/django-s3-cache/issues/9" }
gharchive/issue
init.py Change from storages.backends import s3boto to from storages.backends.s3boto3 import S3Boto3Storage, replace s3boto.S3BotoStorage with S3Boto3Storage, and update the repository please; it's outdated. Change from storages.backends import s3boto to from storages.backends.s3boto3 import S3Boto3Storage, replace s3boto.S3BotoStorage with S3Boto3Storage, and update the repository please; it's outdated. Pull request(s) are welcome!
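A hedged sketch of the requested change (only the import path and storage class come from the issue; the usage around them is an assumption):

# Before: deprecated boto2-based backend
# from storages.backends import s3boto
# storage = s3boto.S3BotoStorage(bucket_name="my-cache-bucket")

# After: boto3-based backend from django-storages
from storages.backends.s3boto3 import S3Boto3Storage

storage = S3Boto3Storage(bucket_name="my-cache-bucket")  # bucket name is illustrative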
2025-04-01T06:37:58.570163
2015-12-28T15:49:46
124083240
{ "authors": [ "i90rr", "mnquintana" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3959", "repo": "atom/atom", "url": "https://github.com/atom/atom/issues/10202" }
gharchive/issue
[ Keyboard usability enhancement ] Search box for Tree View Hi, Often a project is made up of tens if not hundreds or even thousands of files and directories, it would be quite handy to have a box to search (as-you-type) for them. Cheers Thanks for the suggestion! This looks like a duplicate of https://github.com/atom/tree-view/issues/159 – feel free to subscribe there for updates. You can already live search for files in your project in Atom already though, via the fuzzy-finder: https://atom.io/docs/latest/getting-started-atom-basics#opening-a-file-in-a-project Also, please take a look at the Contributing guide for a guide on submitting bug reports (including searching for duplicates first).
2025-04-01T06:37:58.573316
2016-02-25T23:13:55
136541875
{ "authors": [ "landocloud", "lee-dohm" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3960", "repo": "atom/atom", "url": "https://github.com/atom/atom/issues/10976" }
gharchive/issue
Feature Request: Distribute Atom through the Mac App Store IANAL. Please can anyone confirm whether the Atom MIT license is compatible with the Mac App Store? Now that Electron 0.34.0 supports the Mac App Store https://github.com/atom/electron/blob/master/docs/tutorial/mac-app-store-submission-guide.md, is it possible to submit Atom to the Mac App Store so that the app auto-updates? As of Atom 1.0 to 1.6, the OS X auto-update framework is borked as per https://github.com/atom/atom/issues/2860 Thanks! To my knowledge, there is no restriction against open source software on the Mac App Store. On the other hand, an application on the Mac App Store is restricted to a sandbox that precludes reading from and writing to files outside certain areas. Since editing files anywhere on your system is one of the features of Atom, until that requirement changes I don't believe we'll be pursuing publishing on the Mac App Store. Thanks for your feedback!
2025-04-01T06:37:58.582152
2017-01-14T15:44:24
200811203
{ "authors": [ "hucan7", "lee-dohm" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3961", "repo": "atom/atom", "url": "https://github.com/atom/atom/issues/13620" }
gharchive/issue
Build Failed Prerequisites [x] Put an X between the brackets on this line if you have done all of the following: Reproduced the problem in Safe Mode: http://flight-manual.atom.io/hacking-atom/sections/debugging/#using-safe-mode Followed all applicable steps in the debugging guide: http://flight-manual.atom.io/hacking-atom/sections/debugging/ Checked the FAQs on the message board for common solutions: https://discuss.atom.io/c/faq Checked that your issue isn't already filed: https://github.com/issues?utf8=βœ“&q=is%3Aissue+user%3Aatom Checked that there is not already an Atom package that provides the described functionality: https://atom.io/packages Description I want to install atom by building the source code in Linux environment, but get failed. Steps to Reproduce git clone https://github.com/atom/atom.git cd atom npm config set python /.tool/bin/python -g script/build wait for a few minutes after printing "Installing apm" on the screen. the error will be displayed. Expected behavior: [What you expect to happen] Install successfully Actual behavior: [What actually happens] The error info as following: Node: v6.9.4 Npm: v4.1.1 Installing script dependencies Installing apm module.js:327 throw err; ^ Error: Cannot find module '../build/Release/git.node' at Function.Module._resolveFilename (module.js:325:15) at Function.Module._load (module.js:276:25) at Module.require (module.js:353:17) at require (internal/module.js:12:17) at Object.<anonymous> (/.tool/atom/apm/node_modules/atom-package-manager/node_modules/git-utils/lib/git.js:8:16) at Object.<anonymous> (/.tool/atom/apm/node_modules/atom-package-manager/node_modules/git-utils/lib/git.js:371:4) at Module._compile (module.js:409:26) at Object.Module._extensions..js (module.js:416:10) at Module.load (module.js:343:32) at Function.Module._load (module.js:300:12) child_process.js:506 throw err; ^ Error: Command failed: /.tool/atom/apm/node_modules/atom-package-manager/bin/apm --loglevel=error install at checkExecSyncError (child_process.js:483:13) at Object.execFileSync (child_process.js:503:13) at module.exports (/.tool/atom/script/lib/install-atom-dependencies.js:15:16) at Object.<anonymous> (/.tool/atom/script/bootstrap:28:1) at Module._compile (module.js:570:32) at Object.Module._extensions..js (module.js:579:10) at Module.load (module.js:487:32) at tryModuleLoad (module.js:446:12) at Function.Module._load (module.js:438:3) at Module.require (module.js:497:17) Reproduces how often: [What percentage of the time does it reproduce?] After getting the error, I run clear, and build again, still get the same error. Versions OS: RHEL 5.7 Python version: v2.6.9 Node version: v6.9.4 Npm version: v4.1.1 Additional Information Any additional information, configuration or data that might be necessary to reproduce the issue. Is the version of python you have installed at /.tool/bin/python v2 or v3? It's v2, v2.6.9
2025-04-01T06:37:58.586453
2017-06-01T09:45:32
232823091
{ "authors": [ "q4w56", "rsese" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3962", "repo": "atom/atom", "url": "https://github.com/atom/atom/issues/14692" }
gharchive/issue
Rename file should replace the tab with the renamed file. Prerequisites [X] Put an X between the brackets on this line if you have done all of the following: ... Description Rename file should replace the tab with the renamed file. Steps to Reproduce Right click a file in tab or file tree sidebar, Click rename Enter the new name Expected behavior: The active tab should be the newly renamed file. Actual behavior: The active tab remained the old un-renamed file. And I have to close it and open the renamed file. Versions Atom: 1.16.0 OS: ubuntu 16.04 Thanks for the report! Can you re-open this in https://github.com/atom/tree-view if there's no existing issue already so we have it in the right place? Also, can you clarify your steps to reproduce a bit? I'm not quite sure if you're renaming the file you currently have open or if you're renaming some other file. This is a duplicate of atom/tree-view#264
2025-04-01T06:37:58.590215
2018-05-22T04:27:28
325136963
{ "authors": [ "bauripalash", "rsese" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3963", "repo": "atom/atom", "url": "https://github.com/atom/atom/issues/17383" }
gharchive/issue
Display Progress of Download of a package Summary It would be a good idea to show a progress bar of how much has been downloaded when installing a package via apm or the Atom UI Motivation When installing a package via apm or the Atom UI, it sometimes takes a long time, which makes me wonder: is it really downloading something, or did it crash? Additional context It would be helpful for many people. I hope this feature will be added in a future release. Thanks for contributing! We noticed that this looks like a duplicate of https://github.com/atom/apm/issues/148 so you can subscribe there if you'd like. Because we treat our issues list as the Atom team's backlog, we close duplicates to focus our work and not have to touch the same chunk of code for the same reason multiple times. This is also why we may mark something as duplicate that isn't an exact duplicate but is closely related. For information on how to use GitHub's search feature to find out if something is a duplicate before filing, see the How Can I Contribute? section of the Atom CONTRIBUTING guide.
2025-04-01T06:37:58.593798
2019-05-09T10:31:25
442166552
{ "authors": [ "doakey3", "rsese" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3964", "repo": "atom/atom", "url": "https://github.com/atom/atom/issues/19288" }
gharchive/issue
Auto Indent Empty Line Above In Geany, notice that when an indented line is sent to a line below, indentation is added to the newline above so that indented text can quickly be added to that newline: In Atom, the indentation must be added manually: My feature request is for the geany-like behavior to be implemented by default in atom. Thanks for the suggestion! For future issues, please fill out the issue template - the information and format of the templates are super helpful for us when triaging issues. And as mentioned in the template, the team is currently very unlikely to prioritize feature requests right now - but with Atom's customizability and with community packages, you can often get the functionality you need without requesting changes to Atom itself. In this particular case, I poked around and found this package that looks like it can help: https://atom.io/packages/atom-cursor-indent With your example and this package: Since this functionality is already provided by a package, we'll go ahead and close this issue.
2025-04-01T06:37:58.597842
2015-06-28T17:33:27
91619174
{ "authors": [ "50Wliu", "Narrat", "fusion809" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3965", "repo": "atom/atom", "url": "https://github.com/atom/atom/issues/7511" }
gharchive/issue
Incorrect Arch Linux installation instructions Hi, I know I could go about this with a pull request but it's only a single word addition to the Linux.md file so I felt a developer with write permissions to the GitHub repo could just read this and make the change. At the moment the Linux.md file reads this in its Arch Linux dependency installation instructions:

sudo pacman -S gconf base-devel git nodejs libgnome-keyring python2
export PYTHON=/usr/bin/python2

before building Atom. The amendment I propose is to add npm after nodejs in the first of these lines, in accordance with the official Node.js wiki installation instructions. I have tested this on a 32-bit Manjaro Linux virtual machine (which is as close to Arch as I can effectively work with -- Arch is over my head): without npm in this line, the actual build later on (script/build) generated errors stating that it could not find npm. I hope this helps, Brenton As this particular problem got fixed with #8101, shouldn't this be closed? Thanks @Narrat :).
2025-04-01T06:37:58.616702
2015-10-21T20:49:29
112675988
{ "authors": [ "Ben3eeE", "MartyGentillon", "MethodGrab", "MorganMarshall", "benogle", "calebmeyer", "damieng", "jerone", "maxbrunsfeld", "mnquintana", "paulcbetts", "sonokamome", "vvs" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3966", "repo": "atom/atom", "url": "https://github.com/atom/atom/issues/9247" }
gharchive/issue
Atom Beta isn't Side by Side on Windows Expected According to the beta info page Atom Beta should "run side-by-side with Atom stable". To me that reads like I should be able to have both the beta and stable installed (and even running) simultaneously on the same system (like with Chrome's stable & beta releases) and there should be separate links to run each build. Actual On Windows (7 x64), when I install the beta (1.1.0-beta1), it replaces the context menu "Open with Atom" shortcut and the start menu shortcuts with the beta so there is no apparent way to run stable without reinstalling. Reinstalling stable works but it replaces all the shortcuts with links to the stable version so there is no way to run the beta without re-installing. /cc @maxbrunsfeld @nathansobo cc @raelyard Ok, I had to stop work on this for the time being because of some PathLength issues on windows. We need to upgrade our build infrastructure to use npm 3, which does a better job of reducing path nesting. Until that time, I'm reopening this. With the file associations and shell integration having the beta executable named something different - atombeta.exe for example - would also make this work better. Thoughts? having the beta executable named something different - atombeta.exe for example - would also make this work better. :+1: Ah, I didn't realize that. We already rename a bunch of stuff based on the channel (stable vs beta) in the build scripts, so this would just be one more. Now the shell integration is coming out of the installer and into Atom settings we should make sure we use productName instead of atom.exe everywhere when we do that work in #5901 Included in the forthcoming shell integration options. @damieng what's the process/timeline for getting both installed side by side on windows? It's checked into master now on both atom/atom and atom/settings-view and should ship in Atom 1.10. Reopening because I am not able to install Atom beta side by side with Atom stable on Windows 7. Installing beta removes the stable install. See below for a full list of what I did to try and install Atom 1.12.0-beta4 side by side with Atom 1.11.2. I can test it out on Windows 10 later if required to see if stable and beta side by side works there. Originally reported in: https://github.com/atom/atom/issues/13016 so not alone seeing this. Windows 7 - Trying to install stable and beta side by side. Uninstalled Atom 1.12-beta4 using Programs and Features found in the Control panel. Removed the .atom folder from %USERPROFILE%. Removed the atom folder from %localappdata%. It contained update.exe and .dead. Expected: Atom to be uninstalled. Actual: Atom pin is still on the taskbar, clicking it gives an error that it might have been moved, renamed or deleted asking me to remove the pin. Unpinned Atom from the taskbar by clicking Yes on the dialog. Installed Atom 1.11.2 using AtomSetup.exe downloaded from atom.io. Expected and Actual: Atom 1.11.2 starts after install, all settings lost. Answered yes on the telemetry consent and unchecked the Show On Start option for the welcome guide. Opened the settings-view using Ctrl+, Navigated to the System tab and checked all three options. Pinned Atom to the taskbar. Closed Atom 1.11.2. Checked %localappdata\atom it contains the app-1.11.2 folder. Installed Atom 1.12.0-beta4 using AtomSetup.exe downloaded from atom.io. Atom 1.12.0-beta4 starts. Atom does not ask for telemetry consent or show the welcome guide. Atom 1.11.2 is not in %localappdata%\atom. Only app-1.12.0-beta4. 
Atom is still pinned to the taskbar and can open from there but it has the icon from stable. Open with Atom context menu works, icon from beta, and opens Atom 1.12.0-beta4. Checked the System tab; all three options are checked. Atom Beta is expected to use the same config and settings as regular Atom right now so I wouldn't expect it to show the welcome guide or telemetry consent. The real problem right now is that beta and non-beta on Windows share the same setup-id in Squirrel so one is considered an upgrade to the other. One option for now would be to use the beta zip and unpack that somewhere. That should allow you to run side-by-side. If you want the beta to also use a separate config then you should be able to create an empty .atom folder in the folder above where you unpacked the beta, e.g. c:\apps\atombeta c:\apps\.atom I see exactly the same happening as @Ben3eeE is describing. While installing any version of Atom (release, beta or dev) the previous Atom installation is being removed from %LOCALAPPDATA%\atom\. Windows 7 EN 64-bits Just installed Atom 1.18 beta (Windows 10, x64) and it has deleted the stable Atom version (1.17). If the issue is not going to be fixed, it would be good to remove the wrong message on the Atom beta page that the beta can be used side-by-side. Heya Vlad, no, it works on other platforms as far as the side-by-side install goes. It's just on Windows where this is currently an issue. Side-by-side installs work fine in UNIX and UNIX-like environments. They're still working on it. In the meantime, refer to @damieng 's suggestion: Atom Beta is expected to use the same config and settings as regular Atom right now so I wouldn't expect it to show the welcome guide or telemetry consent. The real problem right now is that beta and non-beta on Windows share the same setup-id in Squirrel so one is considered an upgrade to the other. One option for now would be to use the beta zip and unpack that somewhere. That should allow you to run side-by-side. If you want the beta to also use a separate config then you should be able to create an empty .atom folder in the folder above where you unpacked the beta, e.g. c:\apps\atombeta c:\apps\.atom Hi all, I've written the first version of a tool designed to solve this problem: https://github.com/atom/avm Here's how to get started:

npm install -g atom-version-manager

## Install the stable version:
avm switch stable

## Switch to the beta
avm switch beta

The initial run of these commands will take a while as it downloads Atom and installs it, but from then on switching between the two will be very fast (i.e. 2-3 sec or so). Let me know if this helps! At a minimum, it would be nice to mention something about this on the Atom beta website. It's really annoying to discover after the fact that it blew away my stable Atom install. Guys, tons of people are wasting a lot of time on this. Like everyone is suggesting, please remove the side-by-side sales pitch from the site, at least on the Windows side. From the looks of this thread it hasn't worked in years. It's causing a lot of confusion and cursing.
2025-04-01T06:37:58.619418
2016-06-30T03:20:17
163072903
{ "authors": [ "50Wliu", "Ben3eeE", "Postem1", "sunnyvempati" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3967", "repo": "atom/fuzzy-finder", "url": "https://github.com/atom/fuzzy-finder/issues/226" }
gharchive/issue
Indexing Project..0 Whenever I start the Fuzzy Finder, it gives a screen stating: Indexing project..0. How do I get it to work properly? https://github.com/atom/fuzzy-finder/issues/205? +1 Closing as a duplicate of #205 - feel free to subscribe there for updates.
2025-04-01T06:37:58.652958
2022-07-24T17:40:53
1315966772
{ "authors": [ "bettio" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3968", "repo": "atomvm/AtomVM", "url": "https://github.com/atomvm/AtomVM/pull/333" }
gharchive/pull-request
ESP32: add support to CMake buildsystem used on esp-idf >= 4.x These changes are made under both the "Apache 2.0" and the "GNU Lesser General Public License 2.1 or later" license terms (dual license). SPDX-License-Identifier: Apache-2.0 OR LGPL-2.1-or-later tested: [x] network driver [x] socket driver
2025-04-01T06:37:58.674802
2023-07-05T19:11:18
1790126110
{ "authors": [ "gkc" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3969", "repo": "atsign-foundation/at_client_sdk", "url": "https://github.com/atsign-foundation/at_client_sdk/issues/1084" }
gharchive/issue
functional tests occasionally failing Describe the bug Seeing messages like this in logs SEVERE|2023-07-05 19:04:35.767602|LocalSecondary (@aliceπŸ› )|exception in llookup:Could not read value from box. Maybe your box is corrupted. For example in this PR, test run 1 failed but test run 2 succeeded (but can still see the warnings about hive box maybe being corrupted even though the test run succeeded). Suspect there is a race condition with how and when the hive boxes are being cleaned up / recreated in the functional test pack Steps to reproduce run the functional tests Expected behavior functional tests should pass or fail consistently Setting to P0 because (a) worrying messages are worrying (b) happens frequently, which means it is a big thief of time
2025-04-01T06:37:58.687602
2022-07-01T01:06:00
1290780210
{ "authors": [ "VJag", "cconstab", "gkc", "murali-shris", "srieteja" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3970", "repo": "atsign-foundation/at_libraries", "url": "https://github.com/atsign-foundation/at_libraries/issues/189" }
gharchive/issue
Update TLS connection to optionally output TLS keys to file Is your feature request related to a problem? Please describe. Update the TLS connection to optionally output TLS keys to a file. This allows you to "see" inside the TLS packets using Wireshark and diagnose issues. Describe the solution you'd like Open to suggestions on implementation, but it would be nice just to include a low-level dev library that includes this feature, so testing can be done by including a dev library in pubspec.yaml. This would dump the TLS keys in the directory where the binary is being run. The additional lines of code required to do this are: In at_lookup_impl.dart

var secConConnect = SecurityContext();
var keyfile = File('keysfile');
secConConnect.setTrustedCertificates('caroot/rootcacert.pem');
var secureSocket = await SecureSocket.connect(host, int.parse(port),
    context: secConConnect,
    keyLog: (line) => keyfile.writeAsStringSync(line, mode: FileMode.append));

And in monitor_client.dart

var secConConnect = SecurityContext();
var keyfile = File('keysfile');
secConConnect.setTrustedCertificates('caroot/rootcacert.pem');
var secureSocket = await SecureSocket.connect(host, int.parse(port),
    context: secConConnect,
    keyLog: (line) => keyfile.writeAsStringSync(line, mode: FileMode.append));

replacing the secureSocket connection that had no SecurityContext(). It would be nice to abstract SecureSocket.connect so only one change would affect both pieces of code, and then that abstraction could perhaps be used in the secondary server code as well. Describe alternatives you've considered I did consider pushing all the way through via command line options or by adding method options, but I think that holds the danger of leaving it in place before going to a prod build.. But open to them or other ideas.. Additional context Screen shot of the resulting Wireshark diagnostics Flowchart for proposed changes

flowchart TD
    A[Start] --> B[CreateSecureSocket]
    B --> C[Read preference]
    C --> D{decryptPackets?}
    D -->|No| E[Create secure socket without security context]
    D -->|Yes| F[Create secure socket with security context]
    E --> G[End]
    F --> G

https://api.flutter.dev/flutter/dart-io/SecureSocket/connect.html files to modify on client side at_client/lib/src/manager/monitor.dart at_lookup/lib/src/at_lookup_impl.dart wireshark TLS decryption https://wiki.wireshark.org/TLS#tls-decryption The above-listed PRs contain the implementation for creating sockets with a security context and the changes necessary to support this new implementation. What needs to be worked on further: Unit tests in at_client_sdk need minor changes to the way they mock the MonitorOutboundConnectionFactory class, as this class uses a new implementation to create secure sockets. When testing this I only see one TLS connection dumping the keys .. The monitor connection.. We need to dump keys for all connections. The test for the rootcacerts file also does not error if the file is not there. Plus, if the file for the keys is not there it does not get opened (it is being opened in append mode). Ok I have spotted the problem in at_libraries and posted a branch with a 'fix' @srieteja see what you think I am not too sure how the monitor connection gets picked up ?? As I see nowhere in monitor_client.dart where the code uses the TLS dumping socket connection, does that happen somewhere else now ??
the branch is 'tlsdump' I tested sshnp's using dependency_overrides:

dependency_overrides:
  at_lookup:
    git:
      url: https://github.com/atsign-foundation/at_libraries.git
      path: at_lookup
      ref: tlsdump

The monitor uses the SecureSocketUtil in 'at_client/lib/src/manager/monitor.dart'. This was a more feasible place to do this as we needed access to the AtClientPreferences. @cconstab Is my fix ok? @srieteja if so I will raise a PR @cconstab I was able to understand the problem but I was unable to understand the fix. Perhaps you could send me your TLS keys file with the fix (so that I could understand the diff) or we could jump on a quick call? Hope I'm not interrupting your weekend :) Yup it's just a single line change to remove the bool. The rest is just editor noise Yes. The thing that is bugging me is that even though the false statement was removed, the default value for decryptPackets is false in the SecureSocketConfig class. I'm just trying to understand how removing the false statement is affecting the functioning. @cconstab I just debugged it and understood your fix and was able to observe the bug. Also if it's okay with you I would like to push a commit into your branch resolving the case with the rootcerts availability check. That's great thanks When testing this I only see one TLS connection dumping the keys .. The monitor connection.. We need to dump keys for all connections. @cconstab sorry for the delay, I forgot to push the change into the branch. I put in a fix to throw an error when the certs don't exist. Reg the other thing: I used append mode instead of write, as append does not overwrite data from previous sessions, and append does create a new file when a file does not already exist. I think this has been completed but waiting for @cconstab to approve the PR Ok I have tested this and approved the PR ... Thanks folks!
2025-04-01T06:37:58.695148
2017-09-29T00:32:23
261499750
{ "authors": [ "rafael-atticlabs" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3971", "repo": "attic-labs/noms", "url": "https://github.com/attic-labs/noms/issues/3747" }
gharchive/issue
SimplifyType is too expensive Now that the dust is starting to settle after changing values to be backed by bytes, SimplifyType is rising to the top of a lot of profiles. The algorithm is unavoidably costly. I think a good approach here is to try to avoid recomputing a simplified type that we've already computed a bunch (e.g. when creating large collections). Thinking about this more, I can imagine three approaches to mitigating the cost of SimplifyType:

1. Find some way to implement it which is less costly in terms of the number of memory allocations
2. Memoize the work so that we don't keep recomputing the same simplified type for the same (or very similar) input types
3. Avoid doing it at all if the type doesn't need to be simplified

I think maybe a good first place to start is (3) above. I think there's probably a 60-80% base case here, which is that SimplifyType gets called every time the sequence chunker produces a new node in a prolly tree. What gets simplified is all of the elements of the sequence. It's probably more often than not that a) each element is exactly the same type and b) any given element is already simplified. I think simply detecting this case and avoiding the call to SimplifyType is a good place to start and will help with the majority of cases.
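To make approaches (2) and (3) concrete, a hedged, language-agnostic sketch (written in Python for brevity; the actual codebase is Go, and every name below is illustrative):

_simplify_cache = {}  # keyed by a cheap, stable type identity

def simplify_type_cached(t):
    # (3) fast path: a type already in simplified form is returned as-is,
    # assuming a cheap is_simplified flag computed at construction time.
    if t.is_simplified:
        return t
    # (2) memoization: the sequence chunker feeds in the same element
    # types over and over when building large collections.
    key = t.identity_hash()  # assumed cheap identity
    cached = _simplify_cache.get(key)
    if cached is None:
        cached = simplify_type(t)  # the existing expensive routine
        _simplify_cache[key] = cached
    return cached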
2025-04-01T06:37:58.702482
2018-02-08T12:41:31
295497691
{ "authors": [ "attzonko", "seLain" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3972", "repo": "attzonko/mattermost_bot", "url": "https://github.com/attzonko/mattermost_bot/issues/9" }
gharchive/issue
Bot logs in before joining any team? If a bot tries to log in before it joins any team, there will be an error in MattermostAPIv4.load_initial_data. Should mattermost_bot allow this bot to log in? A possible way to do that:

def load_initial_data(self):
!     self.teams = self.get('/users/me/teams')
+     self.teams_channels_ids = {}
+     if len(self.teams) == 0:
+         return
      self.default_team_id = self.teams[0]['id']
-     self.teams_channels_ids = {}
      for team in self.teams:
          self.teams_channels_ids[team['id']] = []
          # get all channels belonging to each team
          for channel in self.get_channels(team['id']):
              self.teams_channels_ids[team['id']].append(channel['id'])

Or should mattermost_bot throw a defined exception? Need to check if not having a default_team_id in this scenario would be a problem. Have you had a chance to test this proposal @seLain? The default_team_id is necessary for MattermostAPI (APIv3) when calling webhooks. In MattermostAPIv4, there is not much usage of it since the APIv4 webhook is not supported in mattermost_bot yet .... Maybe we should postpone this issue and think about it more globally. Along with the deprecation of APIv3, MattermostAPIv4 does not have to extend MattermostAPI (APIv3) anymore. Further, there should be more enhancements needed for mattermost_bot in supporting APIv4. At that time, I believe this issue will be resolved consequently. After all, a bot must be added to at least one team. Otherwise it can do almost nothing. This issue can be easily skipped this way in normal cases. :stuck_out_tongue: Totally agree. We should update the documentation to specify that the bot user must be added to at least one team prior to actually running the bot.
2025-04-01T06:37:58.776358
2016-10-25T21:57:49
185245136
{ "authors": [ "benjaoming", "eliasdorneles" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3973", "repo": "audreyr/cookiecutter-pypackage", "url": "https://github.com/audreyr/cookiecutter-pypackage/pull/263" }
gharchive/pull-request
Remove auto-generated comment :) I guess the generated docs files are quite modified compared to what sphinx autogenerates, so we can remove this :) Apparently I end all sentences with smileys right now :) Right, makes sense!
2025-04-01T06:37:58.787722
2023-01-04T02:56:04
1518212717
{ "authors": [ "hoangthanh212" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3974", "repo": "aura-nw/verify-contract", "url": "https://github.com/aura-nw/verify-contract/pull/10" }
gharchive/pull-request
Merge develop to serenity (#9) Update ci.yml fix deployment ts (#8) Update ci.yml Co-authored-by: kienvc<EMAIL_ADDRESS>@doquockhanhan: Merge next week.
2025-04-01T06:37:58.841570
2023-04-05T21:43:29
1656332861
{ "authors": [ "aureliano" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3975", "repo": "aureliano/caravela", "url": "https://github.com/aureliano/caravela/issues/15" }
gharchive/issue
Call internationalization Add internationalization configuration as a parameter of both the check-for-updates and update methods. Implemented by e3daf30e81aac6c506dfb080e207c6509613d663.
2025-04-01T06:37:58.887298
2019-04-29T16:01:33
438390215
{ "authors": [ "TimDaub" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3976", "repo": "austintgriffith/burner-wallet", "url": "https://github.com/austintgriffith/burner-wallet/pull/164" }
gharchive/pull-request
Use erc721-balance to quickly query badges This PR integrates erc721-balance by @vrde. Here's a page that explains what it does: https://vrde.github.io/erc721-benchmark/ TL;DR: It queries ERC721 tokens super quickly by taking advantage of batch calls against the JSON-RPC API. This is especially noticeable when you have lots of badges. Note: Code is very alpha and may contain bugs. Surely @vrde would be interested to iterate with you on this :) <EMAIL_ADDRESS>contains a bug where accounts with a balance of 0 cannot be retrieved. Fixed it here: https://github.com/vrde/erc721-balance/pull/2/files Let's wait for this to be published on npm before merging this PR.
2025-04-01T06:37:58.889692
2024-06-16T12:32:00
2355734645
{ "authors": [ "eudoxia0", "tim-de" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3977", "repo": "austral/austral", "url": "https://github.com/austral/austral/issues/600" }
gharchive/issue
Incompatible types in typeclass instances are not caught by the checker While looking into why the standard library wasn't building correctly for me, I found that readByte for StandardInput was trying to call an incompatible mono in the generated C code. In the source I found that the implementations of the readByte and writeByte functions in the instances for StandardInput and StandardError respectively did not have the correct types in the function definition. The typeclass definition demands that the instance type parameter matches the stream type:

-- standard/src/IO/IO.aui
typeclass ByteInputStream(T: Type) is
    generic [R: Region]
    method readByte(stream: &![T, R]): Option[Nat8];
end;

But the implementation does not follow this:

-- standard/src/IO/Terminal.aum
instance ByteInputStream(StandardInput) is
    generic [R: Region]
    method readByte(stream: &![StandardOutput, R]): Option[Nat8] is
        let stdin: Address[Nat8] := getStdin();
        let res: Int32 := fgetc(stdin);
        if res = EOF then
            return None();
        else
            return toNat8(res);
        end if;
    end;
end;

I have created a PR to fix the discrepancy in this code, but surely this should be getting caught by the type checker? Yes, this is absolutely a bug in the type checker. Thanks for reporting and fixing the code!
2025-04-01T06:37:58.913995
2023-11-08T12:44:07
1983523625
{ "authors": [ "codecov-commenter", "sergiught" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3979", "repo": "auth0/go-auth0", "url": "https://github.com/auth0/go-auth0/pull/308" }
gharchive/pull-request
ESD-32354: Add disable_self_service_change_password to AD connection options
πŸ”§ Changes
Adds support for disable_self_service_change_password on AD Connection Options.
πŸ“š References
https://github.com/auth0/terraform-provider-auth0/issues/870
πŸ”¬ Testing
πŸ“ Checklist
[x] All new/changed/fixed functionality is covered by tests (or N/A)
[x] I have added documentation for all new/changed functionality (or N/A)

Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (ba83c16) 94.80% compared to head (09ffb2c) 94.81%.
Additional details and impacted files

@@           Coverage Diff           @@
##             main     #308   +/-   ##
=======================================
  Coverage   94.80%   94.81%
=======================================
  Files          46       46
  Lines        8916     8921     +5
=======================================
+ Hits         8453     8458     +5
  Misses        361      361
  Partials      102      102

Files                         Coverage Ξ”
management/connection.go      72.50% <ΓΈ> (ΓΈ)
management/management.gen.go  100.00% <100.00%> (ΓΈ)

:umbrella: View full report in Codecov by Sentry.
2025-04-01T06:37:58.928602
2024-01-22T00:21:39
2092852228
{ "authors": [ "LeoDog896", "miunau" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3980", "repo": "auth70/bodyguard", "url": "https://github.com/auth70/bodyguard/issues/1" }
gharchive/issue
Extra +s are being appended to formData's strings When sending in formData with some structure such as { key: "this is a test" }, all of the spaces in the string are replaced with + after being processed. Hi! Sorry this took a while! You can now pass convertPluses: true as a config option to form and softForm to enable this behaviour. Otherwise, if it were the default, it would interfere with actual plus signs. If you want to avoid this altogether you can use enctype="multipart/form-data" in your form, which will encode strings properly. Thanks! No prob :) I also added file upload support in 1.5.0
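For background on why this happens (general form-encoding behaviour, not specific to this library): in application/x-www-form-urlencoded data a '+' encodes a space, so the spaces arrive as '+' on the wire and a decoder is expected to turn them back into ' '. A small self-contained illustration in Go:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// url.ParseQuery implements the form-urlencoded rules,
	// so the '+' characters decode back to spaces.
	v, _ := url.ParseQuery("key=this+is+a+test")
	fmt.Println(v.Get("key")) // prints: this is a test
}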
2025-04-01T06:37:58.955412
2021-08-16T00:43:03
971265351
{ "authors": [ "codecov-commenter", "jon-whit" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3981", "repo": "authorizer-tech/access-controller", "url": "https://github.com/authorizer-tech/access-controller/pull/37" }
gharchive/pull-request
fix: resolve namespace cfg snapshot timestamps uniquely per namespace
The changes herein ensure that namespace config snapshots are resolved and propagated uniquely per namespace. Namespace config changes can happen at different times between any two different namespaces, and therefore we must propagate the snapshot timestamps independently in the request context. Fixes #36.

Codecov Report
Merging #37 (9579d47) into master (e914a98) will increase coverage by 1.87%. The diff coverage is 100.00%.

@@            Coverage Diff            @@
##           master      #37      +/-  ##
=========================================
+ Coverage   76.07%   77.94%   +1.87%
=========================================
  Files           8        8
  Lines        1028     1147     +119
=========================================
+ Hits          782      894     +112
- Misses        182      189       +7
  Partials       64       64

Impacted Files                                    Coverage Ξ”
internal/access-controller.go                     75.87% <100.00%> (+1.51%) :arrow_up:
internal/namespace.go                             100.00% <100.00%> (ΓΈ)
internal/tree.go                                  100.00% <0.00%> (ΓΈ)
internal/hashring.go                              100.00% <0.00%> (ΓΈ)
internal/client-router.go                         100.00% <0.00%> (ΓΈ)
internal/healthchecker.go                         100.00% <0.00%> (ΓΈ)
internal/relation-tuple.go                        100.00% <0.00%> (ΓΈ)
internal/namespace-manager/postgres/manager.go    57.21% <0.00%> (+1.47%) :arrow_up:

Continue to review full report at Codecov.
Legend: Ξ” = absolute <relative> (impact), ΓΈ = not affected, ? = missing data
Powered by Codecov. Last update e914a98...9579d47.

Sweet. I'll merge tonight and cut a patch release for it. Thanks!
2025-04-01T06:37:58.960444
2023-02-10T23:22:21
1580478005
{ "authors": [ "Bhashit", "ecordell", "thomasklein94" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3982", "repo": "authzed/spicedb-operator", "url": "https://github.com/authzed/spicedb-operator/issues/146" }
gharchive/issue
extraPodAnnotations doesn't apply to the migration pod
I deployed the following in a namespace that has istio injection enabled:

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  config:
    datastoreEngine: postgres
    extraPodAnnotations:
      sidecar.istio.io/inject: "false"
  secretName: dev-spicedb-config

I was trying to add the annotation with extraPodAnnotations to the generated pods so that the sidecar doesn't get injected, since Istio adds a sidecar to every pod created in that namespace. Because the sidecar keeps running, the pod never reaches the Completed stage, preventing the operator from progressing further (and creating the spicedb pods).
Since this migration pod is also generated by the operator, shouldn't the additional annotations apply to the migration pod as well? I know I can either deploy the spicedb cluster to a namespace that doesn't inject sidecars, or configure the Istio operator to never inject anything in the pods that match the labels for spicedb. But it may be simpler, and perhaps more consistent, to apply the same annotations to the migration pod as well. Another possible solution could be for the operator to only check the status of the migration container within the pod, instead of checking the pod status before moving on.
Right now, extraPodLabels only applies to the deployment pods, not the jobs. There's a PR in progress (https://github.com/authzed/spicedb-operator/pull/135) that attempts to address this generically - that would look something like:

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  config:
    datastoreEngine: postgres
  secretName: dev-spicedb-config
  patches:
    - kind: Job
      patch:
        spec:
          template:
            metadata:
              annotations:
                sidecar.istio.io/inject: "false"

Thanks for the very quick response (once again, since you did the same earlier this morning). Looking forward to it. While the PR #135 looks exciting, until the work there has completed, consider merging PR #147. @ecordell any plans to cut a new release with #147 merged in?
2025-04-01T06:37:58.964665
2022-09-29T08:29:16
1390470616
{ "authors": [ "jakub-lesniak-mck", "vroldanbet" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3983", "repo": "authzed/spicedb", "url": "https://github.com/authzed/spicedb/issues/850" }
gharchive/issue
SpiceDb crashing unexpectedly
I came across a weird behaviour with SpiceDB - it crashed with exit code 2. Here is the tail of the log:

{"level":"error","module":"pgx","args":[],"err":"ERROR: relation \"relation_tuple_transaction\" does not exist (SQLSTATE 42P01)","pid":7266,"sql":"\n\tSELECT COALESCE(\n\t\t(SELECT MIN(id) FROM relation_tuple_transaction WHERE timestamp >= TO_TIMESTAMP(FLOOR(EXTRACT(EPOCH FROM NOW() AT TIME ZONE 'utc') *<PHONE_NUMBER> /<PHONE_NUMBER>) *<PHONE_NUMBER> /<PHONE_NUMBER>) AT TIME ZONE 'utc'),\n\t\t(SELECT MAX(id) FROM relation_tuple_transaction)\n\t),\n\t5000000000 - CAST(EXTRACT(EPOCH FROM NOW() AT TIME ZONE 'utc') *<PHONE_NUMBER> as bigint) %<PHONE_NUMBER>;","time":"2022-09-28T11:52:42Z","message":"Query"}
panic: interface conversion: interface {} is nil, not decimal.Decimal
goroutine 786 [running]:
github.com/authzed/spicedb/internal/datastore/common/revisions.(*CachedOptimizedRevisions).OptimizedRevision(0xc000b087c0, {0x2117760?, 0xc000e74280?})
    /home/runner/work/spicedb/spicedb/internal/datastore/common/revisions/optimized.go:72 +0x4d3
github.com/authzed/spicedb/internal/datastore/proxy.hedgingProxy.OptimizedRevision.func1({0x2117760?, 0xc000e74280?}, 0x0?)
    /home/runner/work/spicedb/spicedb/internal/datastore/proxy/hedging.go:179 +0x65
created by github.com/authzed/spicedb/internal/datastore/proxy.newHedger.func1
    /home/runner/work/spicedb/spicedb/internal/datastore/proxy/hedging.go:78 +0x27b
paw-marketplace-spicedb-1 exited with code 2

It is being run inside a docker-compose like so:

pg-spicedb:
  image: postgres:14-alpine
  healthcheck:
    test: [ 'CMD-SHELL', 'pg_isready -U postgres -d spicedb' ]
    interval: 10s
    timeout: 5s
    retries: 5
  labels:
    - 'traefik.enable=false'
  expose:
    - '5432'
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    POSTGRES_DB: spicedb
  volumes:
    - pg-spicedb-data:/var/lib/postgresql/data
spicedb:
  image: quay.io/authzed/spicedb:v1.9.0
  depends_on:
    - pg-spicedb
  labels:
    - 'traefik.enable=false'
  ports:
    - 50051:50051
  volumes:
    - ./libs/authz/schema.local.yml:/schema.yml
  environment:
    - SPICEDB_GRPC_PRESHARED_KEY=localadmin
    - SPICEDB_GRPC_ENABLED=1
    - SPICEDB_GRPC_ADDR=:50051
    - SPICEDB_GRPC_NO_TLS=1
    - SPICEDB_METRICS_ENABLED=0
    - SPICEDB_DASHBOARD_ENABLED=0
    - SPICEDB_DATASTORE_ENGINE=postgres
    - SPICEDB_DATASTORE_CONN_URI=postgresql://postgres:postgres@pg-spicedb:5432/spicedb?sslmode=disable
    - SPICEDB_DATASTORE_BOOTSTRAP_FILES=/schema.yml
    - SPICEDB_DATASTORE_BOOTSTRAP_OVERWRITE=1
    - SPICEDB_TELEMETRY_ENDPOINT=
  command: serve

πŸ‘‹πŸ» It seems like you are running v1.9.0. In that version, if an error happened while checking for optimized revisions, it was not checked and would panic: https://github.com/authzed/spicedb/blob/ae4552ed89f0561f71893da2feeb7feb1e767e71/internal/datastore/common/revisions/optimized.go#L71-L72 You can see that this was fixed in https://github.com/authzed/spicedb/pull/740, which is part of release v1.12.0. Feel free to reopen if that does not fix your problem! πŸ™‡πŸ»
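For reference, the crash pattern shown in that stack trace is a nil type assertion performed after an unchecked error; a minimal stand-alone Go illustration (not the actual SpiceDB code):

package main

import (
	"errors"
	"fmt"
)

// fetchRevision stands in for the datastore lookup: on failure it
// returns a nil value together with the error.
func fetchRevision() (interface{}, error) {
	return nil, errors.New(`relation "relation_tuple_transaction" does not exist`)
}

func main() {
	v, err := fetchRevision()
	if err != nil {
		fmt.Println("handled:", err) // the post-#740 behavior: surface the error
		return
	}
	// Without the check above, asserting on the nil interface would panic
	// with "interface conversion: interface {} is nil".
	fmt.Println(v.(int64))
}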
2025-04-01T06:37:58.967902
2023-04-04T02:46:31
1653071654
{ "authors": [ "mtsmfm", "riywo" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3984", "repo": "autifyhq/actions-web-test-run", "url": "https://github.com/autifyhq/actions-web-test-run/pull/31" }
gharchive/pull-request
Surround URL replacements option with quotes Internal ticket: https://app.clickup.com/t/864ea29wc https://github.com/autifyhq/autify-cli/pull/389 To support the --url-replacements option with a space delimiter, we need to surround the arg with quotes. I confirmed it works well on autify-cli with this commit https://github.com/autifyhq/autify-cli/pull/389/commits/84b25fd9d2fe19e536e948b4abe2948cbfea8629 The escaping looks good. Then, it reminds us that URLs containing , will be broken (also for the CircleCI integrations). True, we need to have another delimiter to pass the multiple-replacements option via CI integrations. I will raise an internal ticket but will not deal with it here, since it'll also be a breaking change.
2025-04-01T06:37:58.986849
2024-06-05T06:29:32
2335016551
{ "authors": [ "cozdas" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3985", "repo": "autodesk-forks/OpenColorIO", "url": "https://github.com/autodesk-forks/OpenColorIO/pull/4" }
gharchive/pull-request
Adsk Contrib - Add cmake option OCIO_HAS_BUILTIN_YAML_CONFIGS Adding a cmake option to remove the built-in yaml based configs (CGConfig and StudioConfig). When this option is turned off, the tests that rely on the built-in configs will also be removed. Yes, when the YAML switch is added this will be controlled by it too. These PRs are for rolling in all the previous work in separable pieces.
2025-04-01T06:37:58.988783
2023-02-10T10:38:47
1579444107
{ "authors": [ "salahelfarissi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3986", "repo": "autodesk-platform-services/aps-iot-extensions-demo", "url": "https://github.com/autodesk-platform-services/aps-iot-extensions-demo/issues/6" }
gharchive/issue
The file extension loaded into the viewer is not supported I have a 2022 Revit model (.rvt). I want to know whether the problem comes from the version of the software or from the formats that are supported by the app. I used the same Revit file as in the README. The file extension loaded into the viewer is not supported @petrbroz Thank you. Solved. https://gist.github.com/salahelfarissi/784796c339ea39ec917f919db6f203fc
2025-04-01T06:37:59.012349
2018-05-19T04:53:23
324601167
{ "authors": [ "dymurray", "jmrodri" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3987", "repo": "automationbroker/sbcli", "url": "https://github.com/automationbroker/sbcli/pull/1" }
gharchive/pull-request
prune vendor using pruning rules
Before prune:

$ du -sh vendor/
121M    vendor/

After prune:

$ du -sh vendor/
41M     vendor/

Thanks @jmrodri You're going to have to show me how to do this :)
2025-04-01T06:37:59.021312
2020-06-18T19:18:53
641490562
{ "authors": [ "autonomousapps", "sboishtyan" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3988", "repo": "autonomousapps/dependency-analysis-android-gradle-plugin", "url": "https://github.com/autonomousapps/dependency-analysis-android-gradle-plugin/issues/202" }
gharchive/issue
How could I disable verbose output
chatty(false) is a no-op now. Is there a way to disable console output?
Thanks for asking. I have made an effort with recent releases to reduce console output by default. Do you find this is still too verbose?
Yes, we have a lot of modules (around 300) and a lot of plugins, and they all want to print something. Right now your plugin makes a lot of noise in the logs. It makes the logs hard to read, and I prefer to look for your plugin's advice only in files.
Thanks for the response. Would you like the ability to suppress all output from this plugin?
It would be enough for me.
I'm not sure I want to make this an extension method (like chatty), but I'm also not totally opposed to it. What about using a system property to disable it? You could do it via the command line or in gradle.properties.
For me, it would be enough. Feel free to make any design decision you think fits your vision.
Note to self: I think I only need to disable logging in the AdviceSubprojectAggregationTask class. This is one of only two places where logger.quiet is used. The other is FailOrWarnTask, which runs just once per invocation of buildHealth, compared to once per subproject for the Advice... task.
I have resolved this by adding a system property the plugin will respond to. If you add systemProp.dependency.analysis.silent=true to gradle.properties, then logging will be greatly reduced. You may also use -Ddependency.analysis.silent=true on the command line. This has not yet been published, but is available as a snapshot if you want to test.
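To summarize the fix described above as a snippet (taken directly from the options in this thread):

# gradle.properties
systemProp.dependency.analysis.silent=true

# or equivalently, one-off on the command line:
# ./gradlew buildHealth -Ddependency.analysis.silent=true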
2025-04-01T06:37:59.026378
2024-10-10T13:23:07
2578844536
{ "authors": [ "autonomousapps", "emartynov" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3989", "repo": "autonomousapps/dependency-analysis-gradle-plugin", "url": "https://github.com/autonomousapps/dependency-analysis-gradle-plugin/issues/1283" }
gharchive/issue
ignoreKtx(true) is not working
Plugin version: 2.1.4
Gradle version: 8.10.2
JDK version: 23
(Optional) Kotlin and Kotlin Gradle Plugin (KGP) version: 2.0.20
(Optional) Android Gradle Plugin (AGP) version: 8.6.2
(Optional) reason output for bugs relating to incorrect advice

./gradlew sha:vide:reason --id androidx.lifecycle:lifecycle-runtime:2.8.6

:shared:video-player
\--- io.coil-kt:coil-base:2.7.0
     \--- androidx.lifecycle:lifecycle-runtime:2.8.6

Source: developDebug, main
--------------------------
(no usages)

./gradlew sha:vide:reason --id libs.androidx.lifecycle.runtime

Shortest path from :shared:video-player to androidx.lifecycle:lifecycle-runtime-ktx:2.8.6 (libs.androidx.lifecycle.runtime) for nowsecureReleaseUnitTestRuntimeClasspath:
:shared:video-player
\--- androidx.lifecycle:lifecycle-runtime-ktx:2.8.6

Source: developDebug, main
--------------------------
(no usages)

Describe the bug
The ktx dependency is proposed to be removed and the normal dependency is proposed to be added even though we have

dependencyAnalysis {
  structure {
    ignoreKtx(true) // default is false
  }
}

in the root folder.
Expected behavior
ignoreKtx is respected
Additional context
The plugin is applied in the root folder and then applied in every module. The ignoreKtx option is applied in the root file.
Thanks for the issue. Do you have a reproducer?
Sorry for the delay, let me try in a small project.
2025-04-01T06:37:59.028189
2022-05-09T15:36:11
1229909868
{ "authors": [ "handstandsam" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3990", "repo": "autonomousapps/gradle-glossary", "url": "https://github.com/autonomousapps/gradle-glossary/issues/7" }
gharchive/issue
Add definition of "Script Plugin" https://kotlinlang.slack.com/archives/C19FD9681/p1652110468534199?thread_ts=1652107016.526049&cid=C19FD9681 Oops, @martinbonnin was wrong, it is there. My fault for not double-checking though πŸ˜‚ https://github.com/autonomousapps/gradle-glossary#script-plugin CLOSING
2025-04-01T06:37:59.029448
2021-01-23T05:34:27
792446713
{ "authors": [ "Jah-On" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3991", "repo": "autopilot-rs/autopy", "url": "https://github.com/autopilot-rs/autopy/issues/67" }
gharchive/issue
Compiling for Windows? Hello, I made a trimmed down version and building for Linux is easy. I'm having issues with building on Windows. Can you post directions online? Thank you! I figured it out
2025-04-01T06:37:59.087415
2022-03-03T00:33:57
1157837646
{ "authors": [ "kenji-miyake", "xmfcx" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3992", "repo": "autowarefoundation/autoware.core", "url": "https://github.com/autowarefoundation/autoware.core/pull/4" }
gharchive/pull-request
chore: sync issue and PR templates
Sync files that will be added in https://github.com/autowarefoundation/autoware/pull/56.
pre-commit.ci run
@xmfcx cc @mitsudome-r I'll explain this CI here. If we change the settings like this, and run the sync-files workflow, this kind of PR is created. It enables us to sync files between repositories easily. https://github.com/autowarefoundation/autoware.core/pull/12 This time I've run it manually; the workflow usually runs automatically every day.
Why not run it with every commit?
Every commit in which repository? :thinking: Anyway, we don't have to run this workflow so frequently. I believe daily execution is enough. If necessary, you can run it anytime you like using workflow_dispatch (which is what I've used this time).
Ah, this repository doesn't have the ability to get event notifications from https://github.com/autowarefoundation/autoware-github-actions/tree/main/sync-files, so we cannot trigger this when changes to that repo occur, right?
@xmfcx Actually, it's technically possible, for example with https://github.com/peter-evans/repository-dispatch. But I think it's a bit complex for this use case.
2025-04-01T06:37:59.094674
2023-04-10T07:52:41
1660395707
{ "authors": [ "takayuki5168" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3993", "repo": "autowarefoundation/autoware.universe", "url": "https://github.com/autowarefoundation/autoware.universe/pull/3339" }
gharchive/pull-request
feat(obstacle_cruise_planner): implement slow down planner
Description
Implemented a slow down planner, inserting a slow down point into the trajectory where the trajectory comes close to dynamic/static obstacles.
Related links
launcher PR: https://github.com/autowarefoundation/autoware_launch/pull/288
Tests performed
Planning simulator works well.
TODO
[x] scenario sim: https://evaluation.tier4.jp/evaluation/reports/50ce7861-d8b3-5ef9-87b1-f68421f860a8?project_id=prd_jt
Notes for reviewers
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
[x] I've confirmed the contribution guidelines.
[x] The PR follows the pull request guidelines.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
[ ] The PR follows the pull request guidelines.
[ ] The PR has been properly tested.
[ ] The PR has been reviewed by the code owners.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
[ ] There are no open discussions or they are tracked via tickets.
[ ] The PR is ready for merge.
After all checkboxes are checked, anyone who has write access can merge the PR.
slow down virtual wall
2025-04-01T06:37:59.103163
2023-08-30T08:11:52
1873161267
{ "authors": [ "kosuke55", "kyoichi-sugahara" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3994", "repo": "autowarefoundation/autoware.universe", "url": "https://github.com/autowarefoundation/autoware.universe/pull/4816" }
gharchive/pull-request
fix(goal_planner): fix goal search for narrow shoulder lane
Description
Fix goal search for a narrow shoulder lane, considering patterns where the bound of the vehicle at the pull over lane's center line is outside the bound of the lanelet.
before (skip checking is in lane)
after
Related links
Tests performed
psim !
Notes for reviewers
Interface changes
Effects on system behavior
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
[x] I've confirmed the contribution guidelines.
[x] The PR follows the pull request guidelines.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
[ ] The PR follows the pull request guidelines.
[ ] The PR has been properly tested.
[ ] The PR has been reviewed by the code owners.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
[ ] There are no open discussions or they are tracked via tickets.
[ ] The PR is ready for merge.
After all checkboxes are checked, anyone who has write access can merge the PR.
@kosuke55 could you give me time to check this PR? If it's urgent I will take a look briefly
@kyoichi-sugahara OK, thanks!!b
2025-04-01T06:37:59.109799
2024-10-11T07:03:20
2580605880
{ "authors": [ "youtalk" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3995", "repo": "autowarefoundation/autoware", "url": "https://github.com/autowarefoundation/autoware/pull/5332" }
gharchive/pull-request
fix(docker): install CUDA development drivers in development containers
Description
Resolved https://github.com/autowarefoundation/autoware/issues/5219
The cause of #5219 is that, due to the changes in #5159, only the CUDA runtime drivers are now being installed in both the development containers and the runtime containers, although the development containers require the CUDA development drivers.
https://github.com/autowarefoundation/autoware/blob/main/ansible/roles/cuda/tasks/main.yaml#L28-L50
This PR installs the CUDA development drivers in the development containers.
cc @marioney
Tests performed
https://github.com/youtalk/autoware/actions/runs/11288251647
https://github.com/youtalk/autoware/actions/runs/11288252539
Effects on system behavior
Not applicable.
Interface changes
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
[x] I've confirmed the contribution guidelines.
[x] The PR follows the pull request guidelines.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
[ ] The PR follows the pull request guidelines.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
[ ] There are no open discussions or they are tracked via tickets.
After all checkboxes are checked, anyone who has write access can merge the PR.
I hope it was a self-hosted runner problem. Let me merge it to run on a GitHub runner.
2025-04-01T06:37:59.119131
2023-02-01T04:37:43
1565418732
{ "authors": [ "isamu-takagi", "kenji-miyake", "tkhmy" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3996", "repo": "autowarefoundation/autoware_adapi_msgs", "url": "https://github.com/autowarefoundation/autoware_adapi_msgs/pull/24" }
gharchive/pull-request
feat(autoware_adapi_v1_msgs): add vehicle status msgs
Signed-off-by: tkhmy<EMAIL_ADDRESS>
Description
Create msgs for vehicle status
Related links
https://github.com/autowarefoundation/autoware/issues/3232
https://github.com/autowarefoundation/autoware-documentation/pull/312
Tests performed
Notes for reviewers
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
[ ] I've confirmed the contribution guidelines.
[ ] The PR follows the pull request guidelines.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
[ ] The PR follows the pull request guidelines.
[ ] The PR has been properly tested.
[ ] The PR has been reviewed by the code owners.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
[ ] There are no open discussions or they are tracked via tickets.
[ ] The PR is ready for merge.
After all checkboxes are checked, anyone who has write access can merge the PR.
@isamu-takagi I think we will need to add the maximum velocity. Do you think it is better to put it in this place?
@tkhmy If it means the limit value of the hardware, I think that it can be provided as vehicle information. If it is a config value, it seems better to consider making it a planning API, including a setting service.
@isamu-takagi It should be the velocity limit. Ya, I think we should put it in the planning API instead of the vehicle API.
@mitsudome-r @isamu-takagi @yukkysaito @kenji-miyake Hi, I created the message for visualizing the vehicle status. Can you help to review it? Thank you!
@kenji-miyake @yukkysaito I think it's okay except for the typo (int8). Do you have any other comments?
@isamu-takagi I think @mitsudome-r needs to get agreement with the other AWF members (at least @xmfcx).
@mitsudome-r @xmfcx Could you check this PR?
2025-04-01T06:37:59.120347
2020-08-05T20:58:11
673842927
{ "authors": [ "ZackPierce", "jonlamb-gh" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3997", "repo": "auxoncorp/modality-probe", "url": "https://github.com/auxoncorp/modality-probe/pull/188" }
gharchive/pull-request
Unifying the Reporting API: without RaceBuffer for now Closes #179 Based on #185 , but replaces the RaceBuffer with a basic ring buffer to get us by until the RaceBuffer work settles. :+1:
2025-04-01T06:37:59.158127
2019-02-22T21:59:33
413595813
{ "authors": [ "ThatNerdyPikachu", "aveao", "leo60228" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3998", "repo": "aveao/robocop-ng", "url": "https://github.com/aveao/robocop-ng/pull/18" }
gharchive/pull-request
Add ReSwitched Silverβ„’ one real addition to one meme addition is a good ratio right Can you make it an embed? thanks I hate it
2025-04-01T06:37:59.195566
2024-04-06T03:12:32
2229047515
{ "authors": [ "averbraeck", "vjcortesa" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3999", "repo": "averbraeck/housinggame-player", "url": "https://github.com/averbraeck/housinggame-player/issues/51" }
gharchive/issue
Improve clarity of 1. Your budget and expectations screen [ ] It is unclear to the players what values are constant every round: The facilitator can explain that aspect by adding an * to the applicable values and a note below. I do not think it is wise to hard-code this. Future versions of the game could have changes in these values as a result of news measures (e.g., increased living costs). Therefore, I would not like to change this. Also, the (*) gives extra clutter on the screen.
2025-04-01T06:37:59.196645
2021-05-11T19:15:15
888284501
{ "authors": [ "mvsoder" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4000", "repo": "avereon/xenon", "url": "https://github.com/avereon/xenon/issues/202" }
gharchive/issue
Auto save of new asset not working When auto-save tries to save an asset it "appears" to save because the save button is disabled but the asset is not really saved anywhere. Furthermore, Save As is not enabled to save it somewhere else. This ended up being a problem with the data model which has now been resolved. Save As will be fixed in a different issue.
2025-04-01T06:37:59.198045
2019-04-16T18:51:04
433933023
{ "authors": [ "JohnHunter809" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4001", "repo": "averma1/RockstarRestaurant", "url": "https://github.com/averma1/RockstarRestaurant/issues/89" }
gharchive/issue
Load/Save Employee PINs to file
Done When: the PINs for every employee can be loaded and saved to file
30 min: planning and discussion with the group about what is needed for this and how to approach it. I started coding it but realized I need to merge unfinished work to really get going
1.5 hr: did work on saving to file
1.5 hrs: fixing issues and getting I/O to work
2025-04-01T06:37:59.205674
2023-06-20T13:11:24
1765383713
{ "authors": [ "Shreya111111", "Yashbhadiyadra" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4002", "repo": "avinashkranjan/Amazing-Python-Scripts", "url": "https://github.com/avinashkranjan/Amazing-Python-Scripts/issues/1849" }
gharchive/issue
Adding Wikipedia Scraper
Aim
What is the objective of the Script?
It is basically a technique or process in which large amounts of data from a huge number of websites are passed through web scraping software coded in a programming language, and as a result, structured data is extracted.
Details
What features will your script have?
Web scraping is an automatic process of extracting information from the web. It allows scraping based on JavaScript and Python frameworks. It is compatible with almost all websites and retrieves the necessary information.
@avinashkranjan and @1e9abhi1e10 plz assign this issue to me
Go ahead @Shreya111111
@1e9abhi1e10 @Yashbhadiyadra Plz review the PR https://github.com/avinashkranjan/Amazing-Python-Scripts/pull/1867 which fixes https://github.com/avinashkranjan/Amazing-Python-Scripts/issues/1849
2025-04-01T06:37:59.208857
2023-06-08T06:43:29
1747189336
{ "authors": [ "Abhinavcode13", "avinashkranjan" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4003", "repo": "avinashkranjan/Pentesting-and-Hacking-Scripts", "url": "https://github.com/avinashkranjan/Pentesting-and-Hacking-Scripts/issues/231" }
gharchive/issue
Vulnerability Assessment and Scanning Script
Aim
The objective of Vulnerability Assessment and Scanning is to identify and assess vulnerabilities within a system, network, or application. This process involves using various tools and techniques to systematically identify weaknesses that could be exploited by attackers, including security weaknesses in configurations, software, services, or infrastructure components. Once vulnerabilities are identified, they need to be evaluated and prioritized based on their severity and potential impact on the system's security, which allows security teams to focus their efforts on addressing the most critical vulnerabilities first. By conducting vulnerability assessments and scanning, organizations can gain insights into their security weaknesses and take appropriate measures to strengthen their overall security posture, reduce the risk of exploitation, and protect their systems and data from potential attacks.
Details
A vulnerability scanning script using the popular open-source tool Nmap.
Do I want to work on this:
[x] Yes
[ ] No
Please assign me this issue under gssoc 23 @avinashkranjan
Go Ahead @Abhinavcode13
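As one concrete (purely illustrative) starting point, such a script often just wraps an Nmap invocation that enables service/version detection and the vuln script category, e.g.:

nmap -sV --script vuln -oN scan-report.txt <target>

Here -sV probes service versions, --script vuln runs Nmap's vulnerability-detection NSE scripts, and -oN writes a normal-format report; the Python wrapper would then parse and prioritize that output.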
2025-04-01T06:37:59.283015
2023-03-23T19:05:26
1638137364
{ "authors": [ "Thinner77", "awawa-dev" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4004", "repo": "awawa-dev/HyperHDR", "url": "https://github.com/awawa-dev/HyperHDR/issues/536" }
gharchive/issue
JSON-API: No Authorization with valid token sent
Bug report, debug log and your config file (FULL LOGS ARE MANDATORY)
Steps to reproduce
API Authentication: on
Internet API Access: off
Local Admin API Authentication: on
Local API Authentication: off

curl http://localhost:8090/json-rpc -H 'Content-Type: application/json' -H 'Authorization : token [valid token]' -d '{"command":"config","subcommand":"getconfig"}'

What is expected?
Output of the current config.
What is actually happening?
Result of the curl request:

{ "command": "config", "error": "No Authorization", "success": false, "tan": 0 }

When Local Admin API Authentication is disabled, the request will succeed. Maybe my curl request or my understanding of the security system is wrong, but please have a look at this: JsonAPI::handleConfigCommand() checks for _adminAuthorized:

void JsonAPI::handleConfigCommand(const QJsonObject& message, const QString& command, int tan)
{
    ...
    else if (subcommand == "getconfig")
    {
        if (_adminAuthorized)
            sendSuccessDataReply(QJsonDocument(_hyperhdr->getQJsonConfig()), full_command, tan);
        else
            sendErrorReply("No Authorization", command, tan);
    }

API::isTokenAuthorized() is not setting _adminAuthorized but _authorized:

bool API::isTokenAuthorized(const QString& token)
{
    (_authManager->thread() != this->thread())
        ? QMetaObject::invokeMethod(_authManager, "isTokenAuthorized", Qt::BlockingQueuedConnection, Q_RETURN_ARG(bool, _authorized), Q_ARG(QString, token))
        : _authorized = _authManager->isTokenAuthorized(token);
    return _authorized;
}

System
HyperHDR Server:
Build: master (GitHub-bc24df7/a9a00f9-1678986833)
Build time: Mar 23 2023 09:35:26
Git Remote: https://github.com/awawa-dev/HyperHDR.git
Version: <IP_ADDRESS>beta0
UI Lang: en (BrowserLang: de)
UI Access: default
Avail Capt: Linux (V4L2)
Database: read/write

HyperHDR Server OS:
Distribution: Raspbian GNU/Linux 11 (bullseye)
Architecture: arm
CPU Model: ARMv6-compatible processor rev 7 (v6l)
CPU Type: Raspberry Pi Zero W Rev 1.1
CPU Revision: 9000c1
CPU Hardware: BCM2835
Kernel: linux (6.1.19+ (WS: 32))
Qt Version: 5.15.2
Browser: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:109.0) Gecko/20100101 Firefox/111.0

When Local Admin API Authentication is disabled, the request will succeed. When enabled, you first need to ask the user (a dialog will appear on the HyperHDR page if it is open) to authorize the token before using it (authorize>requestToken). I still can't remember if it can be done with http://ip:8090/json-rpc or the JSON port (default 19444).
Hi, the token was created directly in the Web UI and I can see it in Token Management. There is a requestToken subcommand, and according to the docs a popup should appear in the Web UI to accept/reject the token request, but there seems to be no difference in token creation. Hyperion.ng has the same problem; there is an old (05/2021) issue about this topic, but I don't expect to get that fixed. This is one of the reasons I'm trying to switch to HyperHDR, because I hope you are more motivated and/or skilled and have a better coding style.
A request to http://ip:8090/json-rpc?request={%22command%22:%22authorize%22,%22subcommand%22:%22requestToken%22,%22comment%22:%22testtokentest%22} results in:

{ "command": "authorize", "error": "Command not implemented", "success": false, "tan": 0 }

BTW: All references to _authorized are in API.cpp and I couldn't find any code checking _authorized; it's only ever set. This may be caused by the coding style of the hyperion.ng developers; maybe they removed it, I don't know.
A simple fix of the issue would be to set _adminAuthorized = _authorized; at the end of API::isTokenAuthorized(), but I don't know if this will break something else in the security system. wbr
_adminAuthorized is different from _authorized and requires user interaction: just creating a token is not enough. If you don't want the user to interfere, the local admin option should be disabled (but it's enabled by default and the user has to disable it manually). Even if I redesign it to make it work over HTTPS POST (HTTP & GET requests are too risky), user interaction will still be required.
A request to .... results in: yes I checked it: it's disabled for http/https requests, so only the JSON API RPC port can be used.
Some more tests:

http://<IP_ADDRESS>:8090/json-rpc?request={"command":"config","subcommand":"getconfig"}
{ "command": "config", "error": "No Authorization", "success": false, "tan": 0 }

http://<IP_ADDRESS>:8090/json-rpc?request={"command":"authorize","subcommand":"login","token":"911faca7-0d11-4e06-9424-e46c6c6784b0"}
{ "command": "authorize", "error": "Command not implemented", "success": false, "tan": 0 }

curl --request POST http://localhost:8090/json-rpc -H 'Content-Type: application/json' --data-raw '{"command":"config","subcommand":"getconfig"}'
{ "command": "config", "error": "No Authorization", "success": false, "tan": 0 }

curl --request POST http://localhost:8090/json-rpc -H 'Content-Type: application/json' --data-raw '{"command":"authorize","subcommand":"login","token":"911faca7-0d11-4e06-9424-e46c6c6784b0"}'
{ "command": "authorize", "error": "Command not implemented", "success": false, "tan": 0 }

curl --request POST http://localhost:8090/json-rpc -H 'Content-Type: application/json' -H 'Authorization : token 911faca7-0d11-4e06-9424-e46c6c6784b0' --data-raw '{"command":"config","subcommand":"getconfig"}'
{ "command": "config", "error": "No Authorization", "success": false, "tan": 0 }

echo '{"command":"config","subcommand":"getconfig"}' | nc localhost 19444
{"command":"config","error":"No Authorization","success":false,"tan":0}
waiting for input... need to CTRL-C

echo '{"command":"authorize","subcommand":"login","token":"911faca7-0d11-4e06-9424-e46c6c6784b0"}' | nc localhost 19444
{"command":"authorize-login","success":true,"tan":0}
waiting for input... need to CTRL-C

Looks like the JSON API can be used by raw access to port 19444 only, but there is no way to use it in shell scripts (I would like to switch the V4L device input with a shell script triggered by lirc irexec, executing getconfig, using jq to change it, and executing setconfig to save the config). wbr
Just change the token if it exists in your config for this example to work: a message on the HyperHDR page will pop up when you execute the command.
OK, there is a login subcommand, let's try this:

echo '{"command":"authorize","subcommand":"login","token":"d9f2c817-9b8e-4358-9133-995e611b09ab"}' | websocat -n1 ws://localhost:8090
{"command":"authorize-login","success":true,"tan":0}

Token must be longer than 36 chars https://github.com/awawa-dev/HyperHDR/blob/6a6b29dfa970ff1c0b1b4d46192d27deaedab70c/sources/api/JsonAPI.cpp#L1531 otherwise, for 36 chars, it triggers a method that won't unlock admin access, only ordinary authorization https://github.com/awawa-dev/HyperHDR/blob/6a6b29dfa970ff1c0b1b4d46192d27deaedab70c/sources/api/JsonAPI.cpp#L1541
OK, silly question: how do I get a valid token longer than 36 bytes to unlock admin access? wbr
Did you check the 'auth' table for a token for the user?
Also, login with a password returns it: https://github.com/awawa-dev/HyperHDR/blob/6a6b29dfa970ff1c0b1b4d46192d27deaedab70c/sources/api/JsonAPI.cpp#L1562
Hi, yes there is another entry in the auth table:

user = Hyperhdr
password = 4e0d2fa2cf3741d8999b884f5b77dcbe70c1978abbc9ee656fe1046eb08b788fac4db5903453735f77302358fce50ef51bbe5ea43a132c8d6fc713cf1f8d5860
token = df7d84dabef464722b5253b4187489e82260def11b16e05cafae27958265946d169ad062767754d17f13a4e424b0c35e8cc08849b25b02199d4597581752a90f
salt = 510c2bc6125aab9a53aa2dd91f655a1903093ff02d335bb565fd32186bad47eb7c73cd597196a78ef6731ad5003ef29271eae5c847de914e96db0c0e283084df
comment =
portal_token =
id =
created_at = 2023-03-23T15:25:33Z
last_use = 2023-03-29T22:11:56Z

Looks like the token is hashed too. Let's try to log in with the password:

echo '{"command":"authorize","subcommand":"login","password":"hyperhdr"}' | websocat -n1 ws://localhost:8090
{"command":"authorize-login","info":{"token":"df7d84dabef464722b5253b4187489e82260def11b16e05cafae27958265946d169ad062767754d17f13a4e424b0c35e8cc08849b25b02199d4597581752a90f"},"success":true,"tan":0}

The hashed token is longer than 36 bytes, maybe we are logged in now?

echo '{"command":"config","subcommand":"getconfig"}' | websocat -n1 ws://localhost:8090
{"command":"config","error":"No Authorization","success":false,"tan":0}

Maybe the token has to be used in further requests instead:

echo '{"command":"config","subcommand":"getconfig","token":"df7d84dabef464722b5253b4187489e82260def11b16e05cafae27958265946d169ad062767754d17f13a4e424b0c35e8cc08849b25b02199d4597581752a90f"}' | websocat -n1 ws://localhost:8090
{"command":"config","error":"Errors during specific message validation, please consult the HyperHDR Log","success":false,"tan":0}
WEBSOCKET : <ERROR> While validating schema against json data of 'JsonRpc@::1':[root].token: no schema definition

OK, the key "token" is not defined in schema-config.json. Let's try to log in with the token:

echo '{"command":"authorize","subcommand":"login","token":"df7d84dabef464722b5253b4187489e82260def11b16e05cafae27958265946d169ad062767754d17f13a4e424b0c35e8cc08849b25b02199d4597581752a90f"}' | websocat -n1 ws://localhost:8090
{"command":"authorize-login","success":true,"tan":0}

echo '{"command":"config","subcommand":"getconfig"}' | websocat -n1 ws://localhost:8090
{"command":"config","error":"No Authorization","success":false,"tan":0}

Any more ideas? The code flow looks like this, and there are no errors in the log (--debug option):

void JsonAPI::handleAuthorizeCommand(const QJsonObject& message, const QString& command, int tan)
...
if (subc == "login")
...
if (token.length() > 36)
{
    if (API::isUserTokenAuthorized(token))
...
bool API::isUserTokenAuthorized(const QString& userToken)
{
    bool res;
    QMetaObject::invokeMethod(_authManager, "isUserTokenAuthorized", Qt::BlockingQueuedConnection, Q_RETURN_ARG(bool, res), Q_ARG(QString, DEFAULT_CONFIG_USER), Q_ARG(QString, userToken));
    if (res)
    {
        _authorized = true;
        _adminAuthorized = true;
        // Listen for ADMIN ACCESS protected signals
        connect(_authManager, &AuthManager::newPendingTokenRequest, this, &API::onPendingTokenRequest, Qt::UniqueConnection);
    }
    return res;
}

bool AuthManager::isUserAuthorized(const QString& user, const QString& pw)
{
    if (isUserAuthBlocked())
        return false;
    if (!_authTable->isUserAuthorized(user, pw))
    {
        setAuthBlock(true);
        return false;
    }
    return true;
}

bool AuthTable::isUserAuthorized(const QString& user, const QString& pw)
{
    if (userExist(user) && (calcPasswordHashOfUser(user, pw) == getPasswordHashOfUser(user)))
    {
        updateUserUsed(user);
        return true;
    }
    return false;
}

wbr
2025-04-01T06:37:59.292544
2018-07-23T02:34:54
343468087
{ "authors": [ "awelkie", "ejmahler" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4005", "repo": "awelkie/RustFFT", "url": "https://github.com/awelkie/RustFFT/pull/33" }
gharchive/pull-request
Add a "good-thomas double butterfly" algorithm impl When I added the good-thomas implementation a while back, I tried it with very large sizes, and saw that it didn't really stand up to mixed-radix. Something I didn't try is very small sizes. So the planner now does Good-Thomas double butterfly instead of Mixed-Radix Double Butterfly whenever possible. I also tried using the main Good-Thomas Algorithm instead of Mixed Radix when sizes are less than a few thousand, but I got mixed results. Some benchmarks were improved, while others were worse. If we did something like FFTW where we measured performance as a part of the planning process, it would be worth it to test mixed radix vs good-thomas performance for given sizes, but it seems to be too unreliable to do it all the time. I also tried another stab at computing the reordering indexes on the fly, and I got a version that's faster than the original, but it's still slower than precomputing them, both at small and large sizes. Thanks! As an aside, you may have noticed I haven't been putting much time into this project lately. At this point, I think you have a more complete understanding of the code and you seem to have a good vision for pushing this project forward. Would you be willing to assume ownership of this project? You're already a collaborator, so what this means is I would just defer to you for PR decisions and you would manage the version hosted on crates.io. Could you add me ad an owner on the crate? Username is ejmahler @awelkie Have you had a chance to look at this? At the PR or adding you as an owner? I sent the crates.io invitation 5 days ago when you asked, let me know if you didn't get it. I skimmed the PR and it looks fine, but as I said I'll defer to you. You should have permissions to merge pull requests, right? The crates.io account. I didn't notice that you had sent an invite. I just accepted it! thanks
2025-04-01T06:37:59.342684
2020-07-26T14:01:34
665785163
{ "authors": [ "Knusper", "ajeetdsouza", "aloisdg" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4006", "repo": "awesome-lists/awesome-bash", "url": "https://github.com/awesome-lists/awesome-bash/pull/53" }
gharchive/pull-request
Add zoxide
zoxide is a blazing fast autojump alternative written in Rust. It's orders of magnitude faster than other autojumpers, cross-shell, and extremely feature-rich.
First, you didn't disclose that it was your project. Please do it next time. With 4.3k stars I think it is fine for the popular-enough part. Last thing though: this tool is neither written in bash (which could be fine) nor is it bash related. Well, it is not more related to bash than to zsh or fish. I think zoxide would be perfect for an awesome-cli or something like this. We may have included tools like this in the past. I don't know (who wants to check?). Finally, I don't have a clear answer to give. Who has an opinion to share?
This tool is neither written in bash (which could be fine) nor is it bash related IMHO, this is one of the strengths of the project. Because it's not linked to a shell, it can work across other applications like ranger/nnn/vim/emacs without a problem, which adds so much more value to the tool as a shell plugin. I wouldn't say it isn't bash related, though. It has first-class support for bash, and the fact that it is able to support other shells at the same time shouldn't diminish its value as a bash plugin.
We may have included tools like this in the past. I don't know (who wants to check?). I just checked the Command Line Productivity section. Almost all of the tools are written in other languages:
- aliases is written in Rust, supports only bash right now but mentions zsh under future plans
- bashhub is written in Python, supports bash+zsh
- commacd is written in POSIX shell, supports bash+zsh
- hstr is written in C, supports bash+zsh
- qfc is written in Python, supports bash+zsh
The only tools that actually are written in Bash were:
- bashmarks
- has
- sshrc (although the link is broken)
Would it be more relevant on awesome-cli-apps or awesome-cli? Did awesome-zsh or awesome-fish include it?
Would it be more relevant on awesome-cli-apps or awesome-cli? zoxide is, at the end of the day, a shell plugin. It needs to be set up on your shell. It comes with a CLI, but that CLI is basically used to set up the shell plugin.
Did awesome-zsh or awesome-fish include it? It's included in awesome-zsh. awesome-fish didn't include it because they require every tool to be written in Fish rather than for Fish.
I am fine with merging stuff not written in bash if it is relevant to bash in some way.
If you're trying to curate a list of high-quality Bash plugins, I don't see why you would want to exclude a plugin simply because it supports other shells. Consider the starship project. I use it on Bash, because it's the best prompt I could find for Bash. Does it matter to me, as a user, that it's not written in Bash, or that it works on zsh as well? Absolutely not. I haven't understood why you say zoxide isn't relevant to Bash. It's a Bash plugin. The fact that it's also a zsh plugin doesn't change the fact that it's a Bash plugin. Finally, 5/8 projects I checked in the list were not written in Bash. Would you want to remove those from the README, for no fault other than the fact that they support other shells?
The list was not maintained much and thus did not have real contribution guidelines. I am in favor of adopting the philosophy of the awesome-fish list. However, I could also envision some middle ground where we have a category of enhancements that are not "pure" bash.
2025-04-01T06:37:59.362281
2021-10-07T03:52:41
1019579499
{ "authors": [ "Jack-Works", "fregante" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4007", "repo": "awesome-webextension/webpack-target-webextension", "url": "https://github.com/awesome-webextension/webpack-target-webextension/issues/24" }
gharchive/issue
No support for MV3 native ES background worker
Blocked by https://github.com/w3c/ServiceWorker/issues/1356; otherwise, we need to emit import statements, which is not easy in a webpack bundle.
I think that maybe it already works with this, regardless of manifest version: Yep: https://github.com/pixiebrix/pixiebrix-extension/pull/7472
I'm unsure if your comment relates to this issue, but I'll write down what I recently found as a memo. Firstly, Chrome supports service workers with native ES modules now; you need to write your manifest like this:

"background": {
  "service_worker": "background.worker.js",
  "type": "module"
},

Then the service worker can use import or export statements, not importScripts. Currently, we use the JSONP-style chunk format (the default of webpack) to load chunks. This way works well, so even if this issue is not resolved, this plugin is OK to use.

// JSONP-style chunk format
"use strict";
(globalThis["webpackChunk"] = globalThis["webpackChunk"] || []).push(...)

As an ES Module enthusiast, I hope we can support ES Modules as the chunk format, and also as the chunk loading format.

// ES Module chunk format
export const id = 232;
export const ids = [232];
export const modules = {
  /***/ 232:
  /***/ (() => {
  /***/ })
};

If the chunk format becomes ESM, the only way to load it is static import or dynamic import. Dynamic import already works to load JSONP-style chunks in the content script (https://github.com/awesome-webextension/webpack-target-webextension#content-script); the problem is that dynamic import is not allowed in an ES Module service worker, even if the dynamic import is called before the first event loop ends. This requires me to emit not dynamic imports, but static imports. Webpack currently only supports dynamic import of those chunks (see __webpack_require__.f.j if you're interested). This becomes harder, but not impossible. I think it needs a lot of time to figure out how to make this work. I can emit import statements at the top level, but I need to know their file names; unluckily, file names are generated from a runtime function (__webpack_require__.u) and it's very hard to do that at compile time. Since everything works well today, this is not very urgent to fix.
Then the service worker can use import or export statements, not importScripts. That's very good to know. Parcel forced me to use that syntax but I never investigated it. I'll start using it in webpack too, because importScripts breaks "Pause on uncaught errors" in the dev tools: the debugger pauses on importScripts regardless of the actual position and source maps.
dynamic import That didn't seem to work for me πŸ€·β€β™‚οΈ I got the same error as before. Changing it to an import statement fixed it. So technically now I'm using a native ES background worker in webpack-target-webextension. This issue is fixed? Maybe it just needs some documentation?
Native ES service worker works, dynamic import does not. While using this plugin, compiled ES modules (including dynamic import) work after bundling, loaded by importScripts. A native ES service worker does not work with this plugin currently (the screenshot you gave). I just took another look at this. This is not possible until there is ES Module support in the content script.
e is ES Module support in content script I think you meant "in background workers" right?
No, to share chunks between the background and content script, they must use a format that both environments support.
Now ES Module is only supported in the background worker and not supported in the content script, so there is no meaning in investigating this. What do you mean? I definitely am using both static and dynamic ESM in content scripts: https://robwu.nl/crxviewer/?crx=https%3A%2F%2Fchromewebstore.google.com%2Fdetail%2Frefined-github%2Fhlepfoohegkhhmjieoechaddaejaokhf%3Fhl%3Den See content-script.js. There are some tricks though: you need to use import(chrome.runtime.getURL('actual-content-script.js')) in your entry point in order to later be able to use import statements all JS files need to be in web_accessible_resources hmm wait you're right, an extra file will be enough to load ESM in content script.
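For illustration, here is a minimal sketch of that extra-file trick, assuming nothing beyond the standard chrome.runtime.getURL API and the dynamic-import behavior described above. The file names content-script-loader.js and content-script-main.js are hypothetical placeholders, not names from this thread:

```js
// content-script-loader.js - the file registered under content_scripts in the manifest.
// A classic (non-module) content script may still use dynamic import of an
// extension URL; the file it loads can then use static import/export freely.
(async () => {
  // Resolve the extension-internal URL of the real ESM entry point.
  const url = chrome.runtime.getURL('content-script-main.js');
  await import(url); // loads content-script-main.js as a native ES module
})();
```

For this to work, content-script-main.js and every file it imports must be listed under web_accessible_resources in the manifest, as noted above.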
2025-04-01T06:37:59.397853
2019-03-11T09:27:15
419577216
{ "authors": [ "bwobbones", "gregevari", "kaustavghosh06" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4008", "repo": "aws-amplify/amplify-cli", "url": "https://github.com/aws-amplify/amplify-cli/issues/1019" }
gharchive/issue
Where do I add permissions when using amplify-cli? **Which Category is your question related to?** API/Function **What AWS Services are you utilizing?** Lambda/DynamoDB **Provide additional details e.g. code snippets** I have a Lambda that gathers data from DynamoDB, both created using the CLI. When I execute it, I get a permission error: User: blah is not authorized to perform: dynamodb:DescribeTable. This is easy enough to fix: I added the required DynamoDB permissions through the console, but this feels bad. I could edit the CloudFormation template, adding the permissions there before pushing the changes up, but the template is generated, which also feels bad. Any idea where I should be doing this? I can't find any documentation around it. @bwobbones You could probably modify the CloudFormation file in amplify/backend/function/<function-name>/cloudformation-file.json to add the permissions. Thanks @kaustavghosh06, do you know if that file will be overwritten if I push changes? @gregevari What do you mean by "push changes"? I don't believe this file is changed dynamically by the CLI after it's created. @kaustavghosh06 pushing with amplify push. I think you cover my question though, thanks!
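To make the answer above concrete, here is a hedged sketch of the kind of IAM policy statement you might add to the generated function template. Everything here is illustrative: the table ARN, region, account id, and action list are placeholders, and the exact resource name of the execution policy inside the generated file can differ from project to project.

```json
{
  "Effect": "Allow",
  "Action": [
    "dynamodb:DescribeTable",
    "dynamodb:GetItem",
    "dynamodb:Query",
    "dynamodb:Scan"
  ],
  "Resource": "arn:aws:dynamodb:<region>:<account-id>:table/<your-table-name>"
}
```

A statement like this would go into the PolicyDocument.Statement array of the Lambda execution role's policy in that CloudFormation file; dynamodb:DescribeTable is the action from the error message above, while the other actions are assumptions about what a read-heavy function typically needs.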
2025-04-01T06:37:59.405518
2019-09-26T18:26:08
499044946
{ "authors": [ "Genkilabs", "TrueLecter", "UnleashedMind", "stefanotauriello" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4009", "repo": "aws-amplify/amplify-cli", "url": "https://github.com/aws-amplify/amplify-cli/issues/2423" }
gharchive/issue
add-graphql-datasource is not showing list of secrets Describe the bug No keys available to select. To Reproduce Steps to reproduce the behavior: Create a new project, amplify init, amplify add api. Create an RDS instance (Aurora PostgreSQL compatible with PostgreSQL 10.7, serverless). Execute amplify api add-graphql-datasource and try to complete the prompts. See the error. Expected behavior Being able to use RDS as a data source. Screenshots If I just press enter, this happens. Additional context OS: Windows 10; amplify -v: 3.10.0. @TrueLecter Did you select the "serverless" mode when you created the RDS database? It is required to select the "serverless" mode, as stated in our documentation. Yes. All clusters were created with the serverless template. The role shows as serverless in the console as well. @TrueLecter Did you create the password for the "master" user and then use the query editor to create a database in the newly created cluster? I will send a PR to guard against those scenarios and print out error messages. Ok, so I was finally able to connect to the database in the Query Editor. However, now I'm getting the next issue: I was also getting an error stating that there was no database named . After that I created a database with the same name as the username and started receiving an issue about an unrecognized configuration parameter. This seems to still be the case (CLI ver. 4.12.0). Is there any info on when Amplify will support Postgres? (In fact, I did not find any documentation that it was not supported beyond this issue...) Is there any update, please?
2025-04-01T06:37:59.410934
2022-06-30T21:12:26
1290646313
{ "authors": [ "codecov-commenter", "danielleadams" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4010", "repo": "aws-amplify/amplify-cli", "url": "https://github.com/aws-amplify/amplify-cli/pull/10677" }
gharchive/pull-request
chore: use correct commit when publishing git tag Description of changes This changes the prerelease scripts to make sure we are publishing tags and releases linked to the correct commit SHA. Issue #, if available Description of how you validated changes Checklist [x] PR description included [ ] yarn test passes [ ] Tests are changed or added [ ] Relevant documentation is changed or added (and PR referenced) [ ] New AWS SDK calls or CloudFormation actions have been added to relevant test and service IAM policies By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. Codecov Report :exclamation: No coverage uploaded for pull request base (dev@5ea0b9a). Click here to learn what that means. The diff coverage is n/a.

```
@@           Coverage Diff           @@
##             dev    #10677   +/-  ##
======================================
  Coverage       ?    47.37%
======================================
  Files          ?       669
  Lines          ?     33066
  Branches       ?      6673
======================================
  Hits           ?     15665
  Misses         ?     15723
  Partials       ?      1678
```

:mega: Codecov can now indicate which changes are the most critical in Pull Requests. Learn more
2025-04-01T06:37:59.442058
2023-01-06T12:35:13
1522517255
{ "authors": [ "esteban-serfe", "hloriana" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4011", "repo": "aws-amplify/amplify-hosting", "url": "https://github.com/aws-amplify/amplify-hosting/issues/3228" }
gharchive/issue
Cloning repository fails trying to read the cache Before opening, please confirm: [X] I have checked to see if my question is addressed in the FAQ. [X] I have searched for duplicate or closed issues. [X] I have read the guide for submitting bug reports. [X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue. [X] I have removed any sensitive information from my code snippets and submission. App Id d31qkmp0h12ar6 AWS Region us-west-2 Amplify Hosting feature Build settings, Monorepo Describe the bug I'm using a Monorepo configuration for this application and try to make the deploy using a custom amplify.yml file. Currently the main issue is that the checkout job fails with the following 2 relevant warnings: 2023-01-06T12:25:44.542Z [INFO]: # Retrieving environment cache... 2023-01-06T12:25:44.612Z [WARNING]: ! Unable to write cache: {"code":"ERR_BAD_REQUEST","message":"Request failed with status code 404"})} 2023-01-06T12:26:00.536Z [INFO]: # Retrieving cache... 2023-01-06T12:26:00.536Z [INFO]: # Retrieved cache 2023-01-06T12:26:40.089Z [ERROR]: !!! TypeError: m.indexOf is not a function 2023-01-06T12:26:40.168Z [INFO]: # Starting environment caching... 2023-01-06T12:26:40.168Z [INFO]: # Environment caching completed After that the build step fails. I've tested the following configuration changes: Node 14 & 16 Overwrite the amplify CLI version (multiple versions and the same as we are using locally) Removed the caches from the amplify.yml file declaration Removed the test step Reconnect the repository multiple times (as is the indication that appears on the interface) Expected behavior We expect to see the checkout to work as expected as the indications are warnings and not errors. Reproduction steps This happens when triggering a build or pushing a new change from gitlab integration. Build Settings No response Log output 2023-01-06T12:25:44.414Z [INFO]: # Switching to commit: 4a5a6f7d9966f36b966e056f491b0948b4bea32a 2023-01-06T12:25:44.458Z [INFO]: Agent pid 159 2023-01-06T12:25:44.458Z [INFO]: Identity added: /root/.ssh/git_rsa (/root/.ssh/git_rsa) Note: switching to '4a5a6f7d9966f36b966e056f491b0948b4bea32a'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by switching back to a branch. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -c with the switch command. Example: git switch -c <new-branch-name> Or undo this operation with: git switch - Turn off this advice by setting config variable advice.detachedHead to false HEAD is now at 4a5a6f7d9 Testing configuration 2023-01-06T12:25:44.530Z [INFO]: Successfully cleaned up Git credentials 2023-01-06T12:25:44.531Z [INFO]: # Checking for Git submodules at: /codebuild/output/src955501831/src/creator-web/.gitmodules 2023-01-06T12:25:44.542Z [INFO]: # Retrieving environment cache... 2023-01-06T12:25:44.612Z [WARNING]: ! 
Unable to write cache: {"code":"ERR_BAD_REQUEST","message":"Request failed with status code 404"})} 2023-01-06T12:25:44.612Z [INFO]: ---- Setting Up SSM Secrets ---- 2023-01-06T12:25:44.612Z [INFO]: SSM params {"Path":"/amplify/d35m6bal8x8kl9/citesting/","WithDecryption":true} 2023-01-06T12:25:44.666Z [INFO]: # Defaulting to Node version 16 2023-01-06T12:25:54.411Z [INFO]: # Node version 16 is available for installation 2023-01-06T12:25:54.502Z [INFO]: # Installing Node version 16 2023-01-06T12:26:00.450Z [INFO]: # Now using Node version 16 2023-01-06T12:26:00.531Z [INFO]: No live updates for this build run 2023-01-06T12:26:00.536Z [INFO]: # Retrieving cache... 2023-01-06T12:26:00.536Z [INFO]: # Retrieved cache 2023-01-06T12:26:40.089Z [ERROR]: !!! TypeError: m.indexOf is not a function 2023-01-06T12:26:40.168Z [INFO]: # Starting environment caching... 2023-01-06T12:26:40.168Z [INFO]: # Environment caching completed Terminating logging... Additional information This application is configured on the Amplify console to use another application's backend. Hi @esteban-serfe 👋🏽 thanks for raising this issue. It's possible that there could be an issue with your amplify.yml file and we are misrepresenting the error. Could you please share the file so we can make sure it is configured correctly? Hi @hloriana This is the amplify.yml file from the last build. The issue appears within the students application, not the creators.

```yaml
version: 1
applications:
  - appRoot: creators
    backend:
      phases:
        preBuild:
          commands:
            # Run the lint over the lambdas functions > check #97113"
            #- npm i
            #- npm run lint:lambdas
            - if [[ ! -v ENV ]]; then export ENV=${USER_BRANCH}; fi;
            - if [[ ! -v ENV && -z "$ENV" ]]; then export ENV=${AWS_BRANCH}; fi
            - echo "Using $ENV as the Backend environment"
            #- whereis jq
            #- yum update && yum install -y jq
            #- aws cloudformation wait stack-update-complete --stack-name $(amplify env get --name $ENV --json | jq '.awscloudformation.StackId')
        build:
          commands:
            - amplifyPush --simple
    frontend:
      phases:
        preBuild:
          commands:
            - nvm use $VERSION_NODE_14
            - export NODE_OPTIONS=\"--max-old-space-size=8192\"
            #- npm ci --no-audit
            #- echo "=== Running Lint to validate it works as expected ==="
            #- "npm run lint"
            - npm i --no-audit --production
        build:
          commands:
            - npm run build
      artifacts:
        baseDirectory: build
        files:
          - "**/*"
      cache:
        paths:
          - node_modules/**/*
    test:
      artifacts:
        baseDirectory: cypress
        configFilePath: "**/mochawesome.json"
        files:
          - "**/*.png"
          - "**/*.mp4"
          - "report/mochawesome-report/**/*"
      phases:
        preTest:
          commands:
            - echo "=== Install cypress reporters and friends ==="
            - npm install --no-audit wait-on pm2 mocha mochawesome mochawesome-merge mochawesome-report-generator
            - echo '=== Configure the environment data sample ==='
            - 'echo "{ \"CYPRESS_TEST_USER_CREATOR\": \"${CYPRESS_TEST_USER_CREATOR}\", \"CYPRESS_TEST_PASS_CREATOR\": \"${CYPRESS_TEST_PASS_CREATOR}\" }" > cypress.env.json'
            - npx pm2 start npm -- start
            - npx wait-on http://localhost:3000/
        test:
          commands:
            - 'npx cypress run --reporter mochawesome --reporter-options "reportDir=cypress/report/mochawesome-report,overwrite=false,html=false,json=true,timestamp=mmddyyyy_HHMMss" --config video=false'
        postTest:
          commands:
            - echo "=== Generate tests output by merging ==="
            - "npx mochawesome-merge cypress/report/mochawesome-report/mochawesome*.json > cypress/report/mochawesome.json"
            - "npx pm2 kill"
  - appRoot: students/everprep-students
    backend:
      phases:
        build:
          commands:
            # - set
            - true
    frontend:
      phases:
        preBuild:
          commands:
            - nvm use $VERSION_NODE_14
            - export NODE_OPTIONS=\"--max-old-space-size=4096\"
            - echo "=== Install dependencies ==="
            - npm ci --no-audit --production
            # - if [[ ! -v ENV ]]; then export ENV=${USER_BRANCH}; fi;
            # - if [[ ! -v ENV && -z "$ENV" ]]; then export ENV=${AWS_BRANCH}; fi
            # - echo "Using $ENV as the Backend environment"
            # - export AWSCLOUDFORMATIONCONFIG='{"configLevel":"project","accessKeyId":"$AWS_ACCESS_KEY_ID","secretAccessKey":"$AWS_SECRET_ACCESS_KEY","region":"$AWS_REGION"}'
            # - export AMPLIFY='{"envName":"$ENV","appId":"$AWS_APP_ID","defaultEditor":"code"}'
            # - export PROVIDERS='{"awscloudformation":$AWSCLOUDFORMATIONCONFIG}'
            # - export CODEGEN='{"generateCode":false,"generateDocs":false}'
            # - export REACTCONFIG='{"SourceDir":"src","DistributionDir":"build","BuildCommand":"npm run-script build","StartCommand":"npm run-script start"}'
            # - export FRONTEND='{"frontend":"javascript","framework":"react","config":$REACTCONFIG}'
            # - export PORT=3001
            # - amplify pull $ENV --yes --amplify ${AMPLIFY} --providers ${PROVIDERS} --frontend ${FRONTEND}
            # - test -f src/aws-exports.js
        build:
          commands:
            - npm run build
      artifacts:
        baseDirectory: build
        files:
          - "**/*"
      # cache:
      #   paths:
      #     - node_modules/**/*
      # test:
      #   artifacts:
      #     baseDirectory: cypress
      #     configFilePath: "**/mochawesome.json"
      #     files:
      #       - "**/*.png"
      #       - "**/*.mp4"
      #       - "report/mochawesome-report/**/*"
      #   phases:
      #     preTest:
      #       commands:
      #         - "npm install --no-audit wait-on pm2<EMAIL_ADDRESS>mochawesome mochawesome-merge mochawesome-report-generator"
      #         - "npx pm2 start npm -- start"
      #         - "npx wait-on http://localhost:3001"
      #     test:
      #       commands:
      #         - 'npx cypress run --reporter mochawesome --reporter-options \"reportDir=cypress/report/mochawesome-report,overwrite=false,html=false,json=true,timestamp=mmddyyyy_HHMMss\" --config video=false'
      #     postTest:
      #       commands:
      #         - "npx pm2 kill"
      #         - "npx mochawesome-merge cypress/report/mochawesome-report/mochawesome*.json > cypress/report/mochawesome.json"
```
2025-04-01T06:37:59.486074
2023-08-10T17:53:22
1845937280
{ "authors": [ "gauravsapkal1", "himanshu-mobstac" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4012", "repo": "aws-amplify/amplify-hosting", "url": "https://github.com/aws-amplify/amplify-hosting/issues/3640" }
gharchive/issue
while deploying my nextjs project with sentry on aws amplify the build is failing Before opening, please confirm: [X] I have searched for duplicate or closed issues and discussions. [X] I have read the guide for submitting bug reports. [X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue. JavaScript Framework Next.js Amplify APIs Not applicable Amplify Categories Not applicable Environment information # Put output below this line Describe the bug I am using Next.js 11.1.3. I added Sentry error tracking to my Next.js project; when I build and run it locally it works fine, but when I deploy it on AWS Amplify the build fails with the following error:

```
> Build error occurred
Error: spawn ENOMEM
    at ChildProcess.spawn (node:internal/child_process:420:11)
    at spawn (node:child_process:733:9)
    at fork (node:child_process:169:10)
    at ChildProcessWorker.initialize (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:141:45)
    at new ChildProcessWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:132:10)
    at WorkerPool.createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:44:12)
    at new BaseWorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/base/BaseWorkerPool.js:135:27)
    at new WorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:30:1)
    at new Worker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/index.js:167:26)
    at createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:15:28)
    at new Worker1 (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:21:9)
    at /codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:432:31
    at async Span.traceAsyncFn (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/telemetry/trace/trace.js:60:20)
    at async Object.build [as default] (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:77:25) {
  errno: -12,
  code: 'ENOMEM',
  syscall: 'spawn'
}
failed: true, timedOut: false, isCanceled: false, killed: false
2023-08-10T16:47:33.927Z [ERROR]: 300s › darxjey0xrl5c › Error: Command failed with exit code 1: node_modules/.bin/next build
```

To Reproduce The AWS Amplify build log shows the same spawn ENOMEM failure reproduced above. Expected behavior My Next.js project, which includes Sentry error tracking, should build without failing. Reproduction steps After deploying my Next.js Sentry project on AWS Amplify, the build fails. Code Snippet

**next.config.js**

```javascript
const { withSentryConfig } = require("@sentry/nextjs");

const nextConfig = {
  ...nextConfigurations,
  productionBrowserSourceMaps: true,
  sentry: {
    widenClientFileUpload: true,
    transpileClientSDK: true,
    hideSourceMaps: true,
    disableLogger: true,
  }
};

const sentryWebpackPluginOptions = {
  org: process.env.NEXT_PUBLIC_SENTRY_ORG_NAME,
  project: process.env.NEXT_PUBLIC_SENTRY_PROJECT_NAME,
  authToken: process.env.NEXT_PUBLIC_SENTRY_AUTH_TOKEN,
  sourceMapFilename: '[name].[hash].js.map',
  silent: true,
};

module.exports = withSentryConfig(nextConfig, sentryWebpackPluginOptions);
```

**sentry.client.config.js**

```javascript
import * as Sentry from "@sentry/nextjs";
import { ContextLines } from "@sentry/integrations";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  tracesSampleRate: 0.2,
  replaysSessionSampleRate: 0.1,
  replaysOnErrorSampleRate: 0.1,
  integrations: [
    new Sentry.Replay(),
    new ContextLines({
      frameContextLines: 7,
    }),
  ],
});
```

**app.js**

```javascript
Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  integrations: [
    new BrowserTracing()
  ],
  tracesSampleRate: 0.2,
});
```

This setup works well for me locally: when I build locally, the source map gets uploaded to my Sentry dashboard and I also get a proper stack trace. But when I try to deploy on AWS Amplify, the build fails with the same spawn ENOMEM error shown above. I don't understand why I get this problem only when I deploy to AWS Amplify; locally everything works fine. ### Log output <details> // Put your logs below this line Build error occurred Error: spawn ENOMEM at ChildProcess.spawn (node:internal/child_process:420:11) at spawn (node:child_process:733:9) at fork (node:child_process:169:10) at ChildProcessWorker.initialize (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:141:45) at new ChildProcessWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:132:10) at WorkerPool.createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:44:12) at new BaseWorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/base/BaseWorkerPool.js:135:27) at new WorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:30:1) at new Worker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/index.js:167:26) at createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:15:28) at new Worker1 (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:21:9) at /codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:432:31 at async Span.traceAsyncFn (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/telemetry/trace/trace.js:60:20) at async Object.build [as default] (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:77:25) { errno: -12, code: 'ENOMEM', syscall: 'spawn' } info - Using webpack 5. Reason: Enabled by default https://nextjs.org/docs/messages/webpack5 info - Checking validity of types... info - Creating an optimized production build... info - Using external babel configuration from /codebuild/output/src489982623/src/...-nextjs/.babelrc info - Collecting page data...
at makeError (/root/.//node_modules/execa/lib/error.js:60:11) at handlePromise (/root/.//node_modules/execa/index.js:118:26) at runMicrotasks () at processTicksAndRejections (node:internal/process/task_queues:96:5) at async Builder.build (/root/.//node_modules/@sls-next/lambda-at-edge/dist/build.js:377:13) at async NextjsComponent.build (/root/.//node_modules/@sls-next/-component/dist/component.js:165:13) at async NextjsComponent.default (/root/.//node_modules/@sls-next/-component/dist/component.js:22:13) at async fn (/root/.npm/_npx/780a6c1398234b48/node_modules/@/template/utils.js:280:41) at async Promise.all (index 0) at async executeGraph (/root/.npm/_npx/780a6c1398234b48/node_modules/@/template/utils.js:294:3) at async Template.default (/root/.npm/_npx/780a6c1398234b48/node_modules/@/template/.js:67:38) at async Object.runComponents (/root/.npm/_npx/780a6c1398234b48/node_modules/@/cli/src/index.js:222:17) { shortMessage: 'Command failed with exit code 1: node_modules/.bin/next build', command: 'node_modules/.bin/next build', escapedCommand: '"node_modules/.bin/next" build', exitCode: 1, signal: undefined, signalDescription: undefined, stdout: 'info - Using webpack 5. Reason: Enabled by default https://nextjs.org/docs/messages/webpack5\n' + 'info - Checking validity of types...\n' + 'info - Creating an optimized production build...\n' + 'info - Using external babel configuration from /codebuild/output/src489982623/src/...-nextjs/.babelrc\n' + 'info - Collecting page data...', stderr: '\n' + 'warn - As of Tailwind CSS v2.2, lightBlue has been renamed to sky.\n' + 'warn - Update your configuration file to silence this warning.\n' + '\n' + 'warn - As of Tailwind CSS v3.0, warmGray has been renamed to stone.\n' + 'warn - Update your configuration file to silence this warning.\n' + '\n' + 'warn - As of Tailwind CSS v3.0, trueGray has been renamed to neutral.\n' + 'warn - Update your configuration file to silence this warning.\n' + '\n' + 'warn - As of Tailwind CSS v3.0, coolGray has been renamed to gray.\n' + 'warn - Update your configuration file to silence this warning.\n' + '\n' + 'warn - As of Tailwind CSS v3.0, blueGray has been renamed to slate.\n' + 'warn - Update your configuration file to silence this warning.\n' + '(node:4289) [DEP_WEBPACK_CHUNK_HAS_ENTRY_MODULE] DeprecationWarning: Chunk.hasEntryModule: Use new ChunkGraph API\n' + '(Use node --trace-deprecation ... 
to show where the warning was created)\n' + '(node:4289) [DEP_WEBPACK_CHUNK_ADD_MODULE] DeprecationWarning: Chunk.addModule: Use new ChunkGraph API\n' + 'warn - Compiled with warnings\n' + '\n' + './components/dashB/FooterDashboard.js\n' + "Attempted import error: 'support_id' is not exported from '../../constants' (imported as 'support_id').\n" + '\n' + './node_modules/typescript/lib/typescript.js\n' + "Module not found: Can't resolve 'perf_hooks' in '/codebuild/output/src489982623/src/...-nextjs/node_modules/typescript/lib'\n" + '\n' + './node_modules/typescript/lib/typescript.js\n' + 'Critical dependency: the request of a dependency is an expression\n' + '\n' + './node_modules/typescript/lib/typescript.js\n' + 'Critical dependency: the request of a dependency is an expression\n' + '\n' + './node_modules/engine.io-client/node_modules/ws/lib/buffer-util.js\n' + "Module not found: Can't resolve 'bufferutil' in '/codebuild/output/src489982623/src/...-nextjs/node_modules/engine.io-client/node_modules/ws/lib'\n" + '\n' + './node_modules/engine.io-client/node_modules/ws/lib/validation.js\n' + "Module not found: Can't resolve 'utf-8-validate' in '/codebuild/output/src489982623/src/...-nextjs/node_modules/engine.io-client/node_modules/ws/lib'\n" + '\n' + './components/dashB/FooterDashboard.js\n' + "Attempted import error: 'support_id' is not exported from '../../constants' (imported as 'support_id').\n" + '\n' + './node_modules/next/dist/server/load-components.js\n' + 'Critical dependency: the request of a dependency is an expression\n' + '\n' + './node_modules/next/dist/server/load-components.js\n' + 'Critical dependency: the request of a dependency is an expression\n' + '\n' + './node_modules/next/dist/server/load-components.js\n' + 'Critical dependency: the request of a dependency is an expression\n' + '\n' + './node_modules/next/dist/server/require.js\n' + 'Critical dependency: the request of a dependency is an expression\n' + '\n' + './node_modules/next/dist/server/require.js\n' + 'Critical dependency: the request of a dependency is an expression\n' + '\n' + './node_modules/next/dist/server/require.js\n' + 'Critical dependency: the request of a dependency is an expression\n' + '\n' + './node_modules/typescript/lib/typescript.js\n' + 'Critical dependency: the request of a dependency is an expression\n' + '\n' + './node_modules/typescript/lib/typescript.js\n' + 'Critical dependency: the request of a dependency is an expression\n' + '\n' + '\n' + '> Build error occurred\n' + 'Error: spawn ENOMEM\n' + ' at ChildProcess.spawn (node:internal/child_process:420:11)\n' + ' at spawn (node:child_process:733:9)\n' + ' at fork (node:child_process:169:10)\n' + ' at ChildProcessWorker.initialize (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:141:45)\n' + ' at new ChildProcessWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/workers/ChildProcessWorker.js:132:10)\n' + ' at WorkerPool.createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:44:12)\n' + ' at new BaseWorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/base/BaseWorkerPool.js:135:27)\n' + ' at new WorkerPool (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/WorkerPool.js:30:1)\n' + ' at new Worker 
(/codebuild/output/src489982623/src/...-nextjs/node_modules/next/node_modules/jest-worker/build/index.js:167:26)\n' + ' at createWorker (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:15:28)\n' + ' at new Worker1 (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/lib/worker.js:21:9)\n' + ' at /codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:432:31\n' + ' at async Span.traceAsyncFn (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/telemetry/trace/trace.js:60:20)\n' + ' at async Object.build [as default] (/codebuild/output/src489982623/src/...-nextjs/node_modules/next/dist/build/index.js:77:25) {\n' + ' errno: -12,\n' + " code: 'ENOMEM',\n" + " syscall: 'spawn'\n" + '}', </details> ### aws-exports.js _No response_ ### Manual configuration _No response_ ### Additional configuration _No response_ ### Mobile Device _No response_ ### Mobile Operating System _No response_ ### Mobile Browser _No response_ ### Mobile Browser Version _No response_ ### Additional information and screenshots _No response_ @cwomack ok, but can you share the URL where you shared this issue? How can I see it there? @gauravsapkal1 Did you find any solution to this issue?
2025-04-01T06:37:59.496478
2024-01-10T11:32:38
2074173852
{ "authors": [ "Jay2113", "PythonCircuit", "arundna", "jitendra-koodo" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4013", "repo": "aws-amplify/amplify-hosting", "url": "https://github.com/aws-amplify/amplify-hosting/issues/3897" }
gharchive/issue
BUG: SSG Build failed: Failed to find the deploy-manifest.json file in the build output Before opening, please confirm: [X] I have checked to see if my question is addressed in the FAQ. [X] I have searched for duplicate or closed issues. [X] I have removed any sensitive information from my code snippets and submission. Amplify Hosting feature Deployments Is your feature request related to a problem? Please describe: Building an SSG Next.js site gives a build error. Describe how you'd like this feature to work https://github.com/aws-amplify/amplify-hosting/issues/3853 Getting the same error for a Next.js SSG deployment:

```
2024-01-10T10:41:58.716Z [INFO]: ## Completed Frontend Build
2024-01-10T10:41:58.721Z [INFO]: ## Build completed successfully
2024-01-10T10:41:58.722Z [INFO]: # Starting caching...
2024-01-10T10:41:58.732Z [INFO]: # Creating cache artifact...
2024-01-10T10:42:33.446Z [INFO]: # Created cache artifact
2024-01-10T10:42:33.544Z [INFO]: # Uploading cache artifact...
2024-01-10T10:42:37.786Z [INFO]: # Uploaded cache artifact
2024-01-10T10:42:37.881Z [INFO]: # Caching completed
2024-01-10T10:42:37.891Z [ERROR]: !!! CustomerError: Failed to find the deploy-manifest.json file in the build output. Please verify that it exists within the "baseDirectory" specified in your buildSpec. If it's not there, we will also check the .amplify-hosting directory as a fallback. When using a framework adapter for hosting on Amplify, double-check that the adapter settings are correct.
2024-01-10T10:42:37.892Z [INFO]: # Starting environment caching...
2024-01-10T10:42:37.893Z [INFO]: # Environment caching completed
Terminating logging...
```

This is actually a bug, but a "bug" option was not there when I clicked "New Issue". @jitendra-koodo 👋 This repository only accepts new feature requests for AWS Amplify Hosting. For technical support, we encourage you to open a case with AWS technical support if you have an AWS support plan. If you do not have an active AWS support plan, we encourage you to leverage our Amplify community Discord server, where community members and staff try to help each other with Amplify. Where are we supposed to report the bugs? They don't actually care @jitendra-koodo. They do what they want. They lie about ISR, and it's still included in their docs as if it works exactly like Vercel's infrastructure, but it does not. Their customer service at AWS Web Services is a joke; they just want to upgrade you to premium by ticking you off with the lack of expertise in the first-tier support. Hi @jitendra-koodo 👋, if you have an AWS support plan we encourage you to report bugs by creating a support case. If you do not have an active AWS support plan, we encourage you to leverage our community Discord server, where you can ask questions by creating a new thread in the amplify-help channel and community members and staff will try to answer your queries.
2025-04-01T06:37:59.498175
2020-04-30T13:01:04
609953768
{ "authors": [ "drochetti", "lawmicha", "vishaldroisys" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4014", "repo": "aws-amplify/amplify-ios", "url": "https://github.com/aws-amplify/amplify-ios/issues/410" }
gharchive/issue
@auth: API_KEY is supported or not? We have a Postgres database and we have selected API_KEY as the authentication mode, and the API type is GraphQL. Thanks in advance. Hi @vishaldroisys, can you explain your use case in more detail? Are you trying to make a call directly to the Postgres database, or through API Gateway or AppSync with data resolvers? Closing due to inactivity. Feel free to re-open if you're still experiencing the same issue.
2025-04-01T06:37:59.499624
2020-01-01T23:09:52
544408850
{ "authors": [ "iartemiev", "wooj2" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4015", "repo": "aws-amplify/amplify-ios", "url": "https://github.com/aws-amplify/amplify-ios/pull/281" }
gharchive/pull-request
Attempt to fix CircleCI builds for API By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. This PR only targets API -- working on the DataStore unit tests in a separate PR. LGTM! Can probably combine this with https://github.com/aws-amplify/amplify-ios/pull/282
2025-04-01T06:37:59.525545
2023-06-27T20:37:53
1777713640
{ "authors": [ "tjleing", "tylerjroach" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4016", "repo": "aws-amplify/amplify-ui-android", "url": "https://github.com/aws-amplify/amplify-ui-android/issues/52" }
gharchive/issue
❗❗Liveness: Camera preview not centered on screen (Fixed in version 1.0.2) Which UI component? Liveness Describe the bug Upgrading Material3 UI (androidx.compose.material3:material3) from 1.0.1 to version 1.1.0 caused an issue where the camera preview was top-aligned on the screen during the liveness challenge, instead of center-aligned. This results in the camera preview not aligning to the face oval. Impacted Amplify UI Liveness versions: 1.0.0 (only if the customer upgraded the androidx.compose.material3:material3 dependency to 1.1.0+) and 1.0.1. As a workaround for these versions, you can force-downgrade the material3 lib by adding the snippet below to the app build.gradle.

```groovy
configurations.all {
    resolutionStrategy {
        force('androidx.compose.material3:material3:1.0.1')
    }
}
```

Amplify UI Liveness 1.0.2 includes a fix to ensure the camera is properly centered on the screen and is able to render correctly with 'androidx.compose.material3:material3:1.1.0' or later. Closing, notification has been given.
2025-04-01T06:37:59.588884
2024-03-20T14:59:07
2197757142
{ "authors": [ "PatMyron", "shawnbucholtz" ], "license": "MIT-0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4017", "repo": "aws-cloudformation/cfn-lint", "url": "https://github.com/aws-cloudformation/cfn-lint/issues/3103" }
gharchive/issue
WS1004 CloudFormation Lint Version cfn-lint 0.86.0 What operating system are you using? WSL2 Ubuntu on Windows Describe the bug cfn-lint incorrectly reports: [cfn-lint] WS1004: Lambda function xxx does not have a corresponding log group with a Retention property when there is a !Ref to a log group with a retention property. Expected behavior When there is an explicit !Ref to a defined log group with the RetentionInDays property set, WS1004 should not be flagged. Reproduction template

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Resources:
  LambdaProxyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      FunctionName: MyFunction
      Handler: index.handler
      LoggingConfig:
        LogFormat: JSON
        LogGroup: !Ref LambdaProxyLogGroup
      MemorySize: 512
      PackageType: Zip
      ReservedConcurrentExecutions: 1
      Runtime: nodejs18.x
      Timeout: 10
      Tracing: Active
  LambdaProxyLogGroup:
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupClass: STANDARD
      RetentionInDays: 180
```
2025-04-01T06:37:59.654242
2021-08-19T17:25:31
974888641
{ "authors": [ "RedbackThomson", "vijtrip2" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4018", "repo": "aws-controllers-k8s/community", "url": "https://github.com/aws-controllers-k8s/community/issues/908" }
gharchive/issue
Auto-generate controllers when new runtime released Regenerate all service controllers to runtime v0.13.0 and code-gen v0.13.0 to include recent bug fixes. Now up and running!
2025-04-01T06:37:59.658688
2021-09-11T15:03:44
993861572
{ "authors": [ "haarchri", "vijtrip2" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4019", "repo": "aws-controllers-k8s/community", "url": "https://github.com/aws-controllers-k8s/community/issues/951" }
gharchive/issue
apigatewayv2 - DomainNameConfigurations missing in DomainNameObservation for apigatewayv2 in DomainNameObservation struct - DomainNameConfigurations struct is missing https://github.com/aws/aws-sdk-go/blob/v1.37.10/service/apigatewayv2/api.go#L12137-L12158 at the moment: type DomainNameObservation struct { APIMappingSelectionExpression *string `json:"apiMappingSelectionExpression,omitempty"` DomainName *string `json:"domainName,omitempty"` } what we need: type DomainNameObservation struct { APIMappingSelectionExpression *string `json:"apiMappingSelectionExpression,omitempty"` DomainName *string `json:"domainName,omitempty"` DomainNameConfigurations []*DomainNameConfiguration `json:"domainNameConfigurations,omitempty"` } we have one Issue in Crossplane Provider-AWS for this: https://github.com/crossplane/provider-aws/issues/826 Hi Thanks for pointing it out. APIGatewayv2 controller is currently at aws-sdk-go v1.35.5 https://github.com/aws-controllers-k8s/apigatewayv2-controller/blob/main/apis/v1alpha1/ack-generate-metadata.yaml#L8 We are working on upgrading it but have some problems with how controller-gen handles maps of maps , which were introduced after v1.35.5 apigatewayv2 controller now updated to v1.37.10 https://github.com/aws-controllers-k8s/apigatewayv2-controller/blob/main/apis/v1alpha1/ack-generate-metadata.yaml#L8
2025-04-01T06:37:59.661416
2023-10-23T22:43:39
1958141905
{ "authors": [ "jefferyfry", "kkvinjam" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4020", "repo": "aws-ia/cfn-abi-lacework-polygraph", "url": "https://github.com/aws-ia/cfn-abi-lacework-polygraph/pull/72" }
gharchive/pull-request
Updated submodules/lacework-control-tower-cfn. Updated submodules/lacework-control-tower-cfn. Removed extraneous test parameters from lacework-control-tower-cfn cfn-abi-control-tower-integration.template.yaml. /do-e2e-tests @jefferyfry tests fail due to Issues with below resources from ControlTower submodule: LaceworkAuthFunction LaceworkSetupFunction LaceworkAccountFunction All 3 failed due to S3 permission or incorrect reference to the key. Resource handler returned message: "Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist. ( @kkvinjam Added missing lambda zip files for ControlTower submodule.
2025-04-01T06:37:59.669178
2024-02-21T21:32:57
2147751009
{ "authors": [ "dreamorosi" ], "license": "MIT-0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4021", "repo": "aws-powertools/powertools-lambda-typescript", "url": "https://github.com/aws-powertools/powertools-lambda-typescript/issues/2126" }
gharchive/issue
Feature request: allow to pass custom logger for warning and debug logs Use case The Metrics utility emits some warning logs to notify customers that certain expected conditions are not being met. This is the case for when a namespace is not specified or when the publishStoredMetrics() method is called on an empty buffer. Currently customers have no way of suppressing these warnings and some customers have reported wanting to do so (#2036). I think this is a fair ask and whatever implementation we settle for in this issue will also be reused for other utilities that emit either warnings or debug logs (Idempotency, Tracer, and Parameters). Solution/User Experience We could define a new type/interface in the commons package and expose it: interface UtilityLogger { trace?: (...content: any[]) => void; debug: (...content: any[]) => void; info: (...content: any[]) => void; warn: (...content: any[]) => void; error: (...content: any[]) => void; } From there, customers can use it as a reference to create their own logger and pass it to the Metrics utility. In the example below I'm making the warn method a no-op to disable the warnings entirely, but customers can write their own custom implementation to decide whether the logs are emitted or not. import { Metrics } from '@aws-lambda-powertools/metrics'; import type { UtilityLogger } from '@aws-lambda-powertools/commons/types'; const myLogger: UtilityLogger = { debug: console.debug, info: console.info, warn: {}, // no-op - but customers can add their own logic error: console.error, }; const metrics = new Metrics({ namespace: 'serverlessAirline', serviceName: 'orders', logger: myLogger, }); Customers should also be able to pass an instance of Powertools Logger if they wish to do so: import { Metrics } from '@aws-lambda-powertools/metrics'; import { Logger } from '@aws-lambda-powertools/logger'; const logger = new Logger({ serviceName: 'orders', logLevel: 'ERROR', }); const metrics = new Metrics({ namespace: 'serverlessAirline', serviceName: 'orders', logger, }); My main concern with this is avoiding confusion and conveying clearly that this is only a logger that will be used for debug and warning logs but not to emit the EMF metrics themselves. The Metrics utility maintain its own Console object that logs the metrics using console.log (notice that the log() method is not part of the suggested interface). This is needed for the Metrics utility to work with the Advanced Logging Configuration feature. Alternative solutions If my memory serves me right the AWS Lambda Node.js managed runtime treats warning emitted via process.emitWarning(warning[, options]) as errors rendering this method unviable. As part of this issue however we should still test this option just in case I'm wrong. Other alternatives that I'm not inclined to consider would go along the lines of adding a doNotWarnOnEmptyMetrics option to suppress these warnings. Acknowledgment [X] This feature request meets Powertools for AWS Lambda (TypeScript) Tenets [ ] Should this be considered in other Powertools for AWS Lambda languages? i.e. Python, Java, and .NET Future readers Please react with πŸ‘ and your use case to help us understand customer demand. @heitorlessa & @am29d would love your opinion on this and especially your point of view on the concern I share at the end of the "Solution/User Experience" section. Also please let me know if anything is not clear, happy to clarify & expound on any detail. Thanks!
2025-04-01T06:37:59.680510
2024-06-23T19:54:58
2368866002
{ "authors": [ "mccartni-aws", "rsgrewal-aws" ], "license": "MIT-0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:4022", "repo": "aws-samples/amazon-bedrock-samples", "url": "https://github.com/aws-samples/amazon-bedrock-samples/pull/211" }
gharchive/pull-request
Customer support Streamlit application with Guardrails Issue #, if available: #210 Description of changes: Features: CloudFormation Template: Defines a guardrail for the customer support chatbot. Filters out harmful content and protects sensitive information. Configures Content Policy, Sensitive Information Policy, Topic Policy, and Word Policy. Python Scripts: deploy_guardrails_infra.sh: Bash script to deploy the CloudFormation stack and retrieve the guardrail identifier. streamlit_guardrails_app.py: Streamlit app to interact with the chatbot, using the guardrails set up in the CloudFormation stack. Requirements: Added requirements.txt to install the necessary Python packages, including Streamlit. Documentation: Detailed README with step-by-step instructions to set up the environment, deploy the CloudFormation stack, and run the Streamlit app. Included links to relevant resources and documentation for further reference. By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. Please redo this PR following the new layout and the process for creating the required files. We request all files to have a presence on the Cookbook / Recipes website, which now fronts this GitHub repo. Some of the mandatory sections of the notebook need to include: an Overview (what are we demonstrating, what use case, what you will learn); the architectural pattern, why we selected it, and a diagram; the libraries to install; which model we chose and why; and a markup cell for every code cell. THIS PR needs to go to responsibleai / use-cases