added: string (dates from 2025-04-01 04:05:38 to 2025-04-01 07:14:06)
created: timestamp[us] (dates from 2001-10-09 16:19:16 to 2025-01-01 03:51:31)
id: string (lengths 4 to 10)
metadata: dict
source: string (2 classes)
text: string (lengths 0 to 1.61M)
2025-04-01T04:34:41.325133
2022-07-14T18:10:40
1305132079
{ "authors": [ "ameroyer" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8426", "repo": "microsoft/DeepSpeed", "url": "https://github.com/microsoft/DeepSpeed/issues/2097" }
gharchive/issue
[BUG] Discrepancy between patch and reload functionals in FLOPs profiler

In the FLOPs profiler, the list of functionals restored in _reload_functionals is missing some functions wrapped in _patch_functionals. This leads to wrong FLOPs counts when using consecutive calls to profiler.start_profile (and possibly some minor overhead when using these functionals later on).

To reproduce
Example comparing torch.nn.LayerNorm (not properly restored) and torch.nn.Linear (properly restored). Repeated calls with the exact same inputs should give the same result.

    modules_test = [torch.nn.LayerNorm(256), torch.nn.Linear(256, 128)]
    num_calls = 3
    for m in modules_test:
        print(f"\n\nTesting {m}")
        for i in range(num_calls):
            flops, macs, params = get_model_profile(m, (1, 256), print_profile=False)
            print(f"  Call {i + 1}: flops = {flops}; macs = {macs}; params = {params}")

giving the output:

    Testing LayerNorm((256,), eps=1e-05, elementwise_affine=True)
      Call 1: flops = 1.28 K; macs = 0 MACs; params = 512
      Call 2: flops = 2.56 K; macs = 0 MACs; params = 512
      Call 3: flops = 3.84 K; macs = 0 MACs; params = 512

    Testing Linear(in_features=256, out_features=128, bias=True)
      Call 1: flops = 65.54 K; macs = 32.77 KMACs; params = 32.9 k
      Call 2: flops = 65.54 K; macs = 32.77 KMACs; params = 32.9 k
      Call 3: flops = 65.54 K; macs = 32.77 KMACs; params = 32.9 k

Expected behavior
_reload_functionals should restore all functionals wrapped in _patch_functionals. It seems that (layer, group, instance)norm are missing, but I haven't checked the list carefully.

Issue addressed/fixed in this thread: https://github.com/microsoft/DeepSpeed/pull/2068
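For illustration, here is a minimal sketch of the patch/restore symmetry the report asks for. The names used here (ORIGINALS, PATCHED, wrap_counting, patch_functionals, reload_functionals) are illustrative stand-ins and not DeepSpeed's actual profiler internals; the point is only that whatever list drives the wrapping on start_profile must also drive the restore on end_profile.

```python
# Hedged sketch, not DeepSpeed code: the helper names below are assumptions
# made for illustration of the patch/restore symmetry described above.
import torch
import torch.nn.functional as F

ORIGINALS = {}  # name -> original torch.nn.functional callable
PATCHED = ["linear", "layer_norm", "group_norm", "instance_norm"]

def wrap_counting(fn, counter):
    def wrapped(*args, **kwargs):
        counter["calls"] += 1  # stand-in for the real FLOPs accounting
        return fn(*args, **kwargs)
    return wrapped

def patch_functionals(counter):
    for name in PATCHED:
        ORIGINALS[name] = getattr(F, name)
        setattr(F, name, wrap_counting(ORIGINALS[name], counter))

def reload_functionals():
    # The reported bug amounts to this loop skipping some names (e.g. the
    # norm functionals); iterating over the same PATCHED list used when
    # wrapping guarantees every wrapper is removed.
    for name in PATCHED:
        setattr(F, name, ORIGINALS[name])

if __name__ == "__main__":
    counter = {"calls": 0}
    patch_functionals(counter)
    _ = F.layer_norm(torch.randn(2, 4), normalized_shape=(4,))
    reload_functionals()
    print(counter["calls"])  # 1
```

If the restore list drifts from the patch list, each new start_profile wraps the already-wrapped layer_norm again, which matches the 1.28 K / 2.56 K / 3.84 K progression in the LayerNorm output above.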
2025-04-01T04:34:41.327146
2023-07-22T10:44:04
1816718923
{ "authors": [ "phoekz", "walbourn" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8427", "repo": "microsoft/DirectX-Headers", "url": "https://github.com/microsoft/DirectX-Headers/issues/112" }
gharchive/issue
CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC is missing an Init_1_2 member function and constructor This is a very minor nitpick, but CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC under d3dx12_root_signature.h doesn't have a member function variant of the Init_1_2 function, while 1_0 and 1_1 have one. Similarly, a constructor for 1_2 is also missing. This was addressed in https://github.com/microsoft/DirectX-Headers/commit/6e3bd047c9c3ecc53bc2c044f516bc027be0ad9d
2025-04-01T04:34:41.330181
2021-08-26T01:04:57
979731053
{ "authors": [ "vdwtanner", "zhou-mengbo" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8428", "repo": "microsoft/DirectX-Specs", "url": "https://github.com/microsoft/DirectX-Specs/pull/74" }
gharchive/pull-request
Update HLSL_ShaderModel6_5.md Hi, Before my change, it says "Note how subset with mask.x == 0x0b refers to lane 1, which is either inactive or is a helper lane". Since mask 0x0b is for lane 0 and it's Prefix, lane 1 shouldn't be considered anyway. I think setting the second bit of the mask in lane 2 will reduce the possibility of confusion. It also mentioned "The groups are assumed to be non-intersecting". However, masks 0xb and 0x9 overlap in the example, so I suggest updating 0xb to 0x2. BTW, do you really want the groups to be non-intersecting? I think there are quite a few real-world scenarios that need to reuse the same lane in different groups. Thanks, Mengbo Hi Mengbo, I checked in with someone with more context on this, and it is actually correct as written. I added some additional clarification in https://github.com/microsoft/DirectX-Specs/pull/116 to hopefully make this more clear going forward. Thanks!
2025-04-01T04:34:41.368549
2018-10-01T15:54:25
365527836
{ "authors": [ "SindhujaElangovan", "TYLEROL", "asmaafayed", "naveedmcp" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8429", "repo": "microsoft/EasyRepro", "url": "https://github.com/microsoft/EasyRepro/issues/187" }
gharchive/issue
Message: Test method Microsoft.Dynamics365.UIAutomation.Sample.Web.CreateAccount.WEBTestCreateNewAccount threw exception: System.InvalidOperationException: unknown error: Xrm is not defined Message: Test method Microsoft.Dynamics365.UIAutomation.Sample.Web.CreateAccount.WEBTestCreateNewAccount threw exception: System.InvalidOperationException: unknown error: Xrm is not defined (Session info: chrome=69.0.3497.100) (Driver info: chromedriver=2.38.552522 (437e6fbedfa8762dec75e2c5b3ddb86763dc9dcb),platform=Windows NT 10.0.14393 x86_64) @TYLEROL After I updated all tools, still getting the same error; Microsoft.Dynamics365.UIAutomation.Sample.Web.CreateAccount.WEBTestCreateNewAccount threw exception: OpenQA.Selenium.WebDriverException: unknown error: Xrm is not defined (Session info: chrome=69.0.3497.100) (Driver info: chromedriver=2.42.591088 (7b2b2dca23cca0862f674758c9a3933e685c27d5),platform=Windows NT 10.0.14393 x86_64) @naveedmcp - Thanks for raising this issue. Have you altered the CreateAccount default sample test at all? If not, could you please provide more of the stack trace from the error test? I suspect this may be coming from the following line: xrmBrowser.GuidedHelp.CloseGuidedHelp(); You could try commenting that line out and see if your results change. If not, is the application getting 100% of the way through the login experience? The steps should be: Attempt to access org, get redirected to O365 Login Page Input username, then supply password Interact with the Stay Sign In dialog Wait for main Dynamics 365 page to load Resume test steps If this is not helpful, then the full stack trace when the exception is thrown, plus a screenshot of the browser window at time of exception will be helpful for investigation. Please make sure there is no PII in the screenshot of the browser window Thanks, Tyler @TYLEROL Thanks for your reply. After commenting the line of code mentioned above, i am now getting the following error; Message: Test method Microsoft.Dynamics365.UIAutomation.Sample.Web.CreateAccount.WEBTestCreateNewAccount threw exception: System.NullReferenceException: Object reference not set to an instance of an object. 
At the time of the error, following is the screen state; Input username, then supply the password Following is the complete stack trace; Test Name: WEBTestCreateNewAccount Test FullName: Microsoft.Dynamics365.UIAutomation.Sample.Web.CreateAccount.WEBTestCreateNewAccount Test Source: C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Sample\Web\Create\CreateAccount.cs : line 22 Test Outcome: Failed Test Duration: 0:00:16.0783748 Result StackTrace: at Microsoft.Dynamics365.UIAutomation.Api.Navigation.<>c__DisplayClass3_0.b__0(IWebDriver driver) in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Api\Pages\Navigation.cs:line 87 at Microsoft.Dynamics365.UIAutomation.Browser.DelegateBrowserCommand1.ExecuteCommand(IWebDriver driver, Object[] params) in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Browser\DelegateBrowserCommand.cs:line 28 at Microsoft.Dynamics365.UIAutomation.Browser.BrowserCommand1.Execute[T1,T2,T3,T4,T5,T6,T7,T8,T9](IWebDriver driver, T1 p1, T2 p2, T3 p3, T4 p4, T5 p5, T6 p6, T7 p7, T8 p8, T9 p9) in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Browser\BrowserCommand.cs:line 136 at Microsoft.Dynamics365.UIAutomation.Browser.BrowserCommand1.Execute(IWebDriver driver) in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Browser\BrowserCommand.cs:line 36 at Microsoft.Dynamics365.UIAutomation.Browser.BrowserPage.Execute[TResult](BrowserCommandOptions options, Func2 delegate) in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Browser\BrowserPage.cs:line 182 at Microsoft.Dynamics365.UIAutomation.Api.Navigation.OpenSubArea(String area, String subArea, Int32 thinkTime) in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Api\Pages\Navigation.cs:line 80 at Microsoft.Dynamics365.UIAutomation.Sample.Web.CreateAccount.WEBTestCreateNewAccount() in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Sample\Web\Create\CreateAccount.cs:line 29 Result Message: Test method Microsoft.Dynamics365.UIAutomation.Sample.Web.CreateAccount.WEBTestCreateNewAccount threw exception: System.NullReferenceException: Object reference not set to an instance of an object. 
@TYLEROL This is the stack trace when i uncomment the line back; Test Name: WEBTestCreateNewAccount Test FullName: Microsoft.Dynamics365.UIAutomation.Sample.Web.CreateAccount.WEBTestCreateNewAccount Test Source: C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Sample\Web\Create\CreateAccount.cs : line 22 Test Outcome: Failed Test Duration: 0:00:12.7332615 Result StackTrace: at OpenQA.Selenium.Remote.RemoteWebDriver.UnpackAndThrowOnError(Response errorResponse) at OpenQA.Selenium.Remote.RemoteWebDriver.Execute(String driverCommandToExecute, Dictionary2 parameters) at OpenQA.Selenium.Remote.RemoteWebDriver.ExecuteScriptCommand(String script, String commandName, Object[] args) at OpenQA.Selenium.Remote.RemoteWebDriver.ExecuteScript(String script, Object[] args) at OpenQA.Selenium.Support.Events.EventFiringWebDriver.ExecuteScript(String script, Object[] args) at Microsoft.Dynamics365.UIAutomation.Browser.SeleniumExtensions.ExecuteScript(IWebDriver driver, String script, Object[] args) in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Browser\Extensions\SeleniumExtensions.cs:line 114 at Microsoft.Dynamics365.UIAutomation.Api.GuidedHelp.get_IsEnabled() in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Api\Pages\GuidedHelp.cs:line 31 at Microsoft.Dynamics365.UIAutomation.Api.GuidedHelp.<CloseGuidedHelp>b__3_0(IWebDriver driver) in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Api\Pages\GuidedHelp.cs:line 50 at Microsoft.Dynamics365.UIAutomation.Browser.DelegateBrowserCommand1.ExecuteCommand(IWebDriver driver, Object[] params) in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Browser\DelegateBrowserCommand.cs:line 28 at Microsoft.Dynamics365.UIAutomation.Browser.BrowserCommand1.Execute[T1,T2,T3,T4,T5,T6,T7,T8,T9](IWebDriver driver, T1 p1, T2 p2, T3 p3, T4 p4, T5 p5, T6 p6, T7 p7, T8 p8, T9 p9) in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Browser\BrowserCommand.cs:line 136 at Microsoft.Dynamics365.UIAutomation.Browser.BrowserCommand1.Execute(IWebDriver driver) in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Browser\BrowserCommand.cs:line 36 at Microsoft.Dynamics365.UIAutomation.Browser.BrowserPage.Execute[TResult](BrowserCommandOptions options, Func`2 delegate) in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Browser\BrowserPage.cs:line 182 at Microsoft.Dynamics365.UIAutomation.Api.GuidedHelp.CloseGuidedHelp(Int32 thinkTime) in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Api\Pages\GuidedHelp.cs:line 46 at Microsoft.Dynamics365.UIAutomation.Sample.Web.CreateAccount.WEBTestCreateNewAccount() in C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Sample\Web\Create\CreateAccount.cs:line 26 Result Message: Test method Microsoft.Dynamics365.UIAutomation.Sample.Web.CreateAccount.WEBTestCreateNewAccount threw exception: OpenQA.Selenium.WebDriverException: unknown error: Xrm is not defined (Session info: chrome=69.0.3497.100) (Driver info: chromedriver=2.42.591088 (7b2b2dca23cca0862f674758c9a3933e685c27d5),platform=Windows NT 10.0.14393 x86_64) Following is the screen; Thanks @naveedmcp. Based on your latest screenshot, this issue is because the EasyRepro sample code uses our default login code. The default login code expects the standard Office 365 login window as opposed to a federated login page. 
You can adjust this by calling an overloaded Login method that supports a CustomLoginAction. I'll get the ADFSLogin.cs sample added back to the latest branches, however if you look at the releases/v8.1 branch, this sample file should exist. I've included it below as well. The password field on the ADFS login window uses a different element id compared to the Office 365 login page so the problem is the password is not being input as expected. The code eventually tries to move on but is unable to because you are not on a Dynamics 365 web page. Please let us know if you have any additional questions on implementing this in your scenario.

    [TestMethod]
    public void TestCustomLogin()
    {
        using (var xrmBrowser = new XrmBrowser(TestSettings.Options))
        {
            xrmBrowser.LoginPage.Login(_xrmUri, _username, _password, CustomLoginAction);
        }
    }

    public void CustomLoginAction(LoginRedirectEventArgs args)
    {
        //Login Page details go here. You will need to find out the id of the password field on the form as well as the submit button.
        //You will also need to add a reference to the Selenium Webdriver to use the base driver.
        //Example
        //--------------------------------------------------------------------------------------
        var d = args.Driver;
        d.FindElement(By.Id("passwordInput")).SendKeys(args.Password.ToUnsecureString());
        d.ClickWhenAvailable(By.Id("submitButton"), new TimeSpan(0, 0, 2));

        //Insert any additional code as required for the SSO scenario

        //Wait for CRM Page to load
        d.WaitUntilVisible(By.XPath(Elements.Xpath[Reference.Login.CrmMainPage]),
            new TimeSpan(0, 0, 60),
            e =>
            {
                e.WaitForPageToLoad();
                e.SwitchTo().Frame(0);
                e.WaitForPageToLoad();
            },
            f => { throw new Exception("Login page failed."); });
        //--------------------------------------------------------------------------------------
    }

Best Regards, Tyler

@TYLEROL Thanks for the code; getting the following build error:

    Severity: Error
    Code: CS0103
    Description: The name 'By' does not exist in the current context
    Project: Microsoft.Dynamics365.UIAutomation.Sample
    File: C:\Project Work\EasyRepro-releases-v9.0\Microsoft.Dynamics365.UIAutomation.Sample\Web\Create\CreateAccount.cs
    Line: 56
    Suppression State: Active

@TYLEROL it worked. please keep posted when you have the fix ready. i have to copy the same method to all tests or put it in the base class and inherit

@naveedmcp - I just submitted and merged a PR with the beta/v9.0.2 branch. This change adds a method called ADFSLoginAction to the LoginDialog.cs code for the web client. You should be able to call this from any of the code files using the following:

    xrmBrowser.LoginPage.Login(_xrmUri, _username, _password, xrmBrowser.LoginPage.ADFSLoginAction);

We'll get something added in the future for the Unified Interface scenarios. If you have any problems with the latest change, please re-open this issue and add any details as appropriate. Thanks, Tyler

@naveedmcp - I pushed one additional update this morning that adds similar functionality to the Unified Interface.

    xrmApp.OnlineLogin.Login(_xrmUri, _username, _password, client.ADFSLoginAction);

Thanks! Tyler

Hello Tylerol, Good Day. I'm facing the same issue "System.InvalidOperationException: unknown error: Xrm is not defined" but in a different browser - Firefox. I'm getting this error once after the username and password are entered automatically; it gets stuck in the Stay Sign In dialog and it is not redirected to the CRM home page. I tried with your fix code but after that I'm getting "System.NullReferenceException: Object reference not set to an instance of an object."
And this time it gets stuck in the password page itself by not entering the password Firefox version - 63.0.3 (64 bit) geckodriver - v0.23.0-win64 Hope you can help me to resolve this issue.Thank You Regards, Sindhuja @TYLEROL Hello @TYLEROL I am very new to EasyRepro and using the latest version but still getting the same error <> I am using "Firefox" and ADFLogin xrmBrowser.LoginPage.Login(_xrmUri, _username, _password, xrmBrowser.LoginPage.ADFSLoginAction); xrmBrowser.GuidedHelp.CloseGuidedHelp(); Here's the stack error Test Name: WEBTestCreateNewAccount Test FullName: Microsoft.Dynamics365.UIAutomation.Sample.Web.CreateAccount.WEBTestCreateNewAccount Test Source: F:\MOJ\CRM_Automation\Microsoft.Dynamics365.UIAutomation.Sample\Web\Create\CreateAccount.cs : line 21 Test Outcome: Failed Test Duration: 0:00:16.0861886 Result StackTrace: at OpenQA.Selenium.Remote.RemoteWebDriver.UnpackAndThrowOnError(Response errorResponse) at OpenQA.Selenium.Remote.RemoteWebDriver.Execute(String driverCommandToExecute, Dictionary2 parameters) at OpenQA.Selenium.Remote.RemoteWebDriver.ExecuteScriptCommand(String script, String commandName, Object[] args) at OpenQA.Selenium.Remote.RemoteWebDriver.ExecuteScript(String script, Object[] args) at Microsoft.Dynamics365.UIAutomation.Browser.SeleniumExtensions.ExecuteScript(IWebDriver driver, String script, Object[] args) in F:\MOJ\CRM_Automation\Microsoft.Dynamics365.UIAutomation.Browser\Extensions\SeleniumExtensions.cs:line 118 at Microsoft.Dynamics365.UIAutomation.Api.GuidedHelp.get_IsEnabled() in F:\MOJ\CRM_Automation\Microsoft.Dynamics365.UIAutomation.Api\Pages\GuidedHelp.cs:line 33 at Microsoft.Dynamics365.UIAutomation.Api.GuidedHelp.<CloseGuidedHelp>b__3_0(IWebDriver driver) in F:\MOJ\CRM_Automation\Microsoft.Dynamics365.UIAutomation.Api\Pages\GuidedHelp.cs:line 58 at Microsoft.Dynamics365.UIAutomation.Browser.DelegateBrowserCommand1.ExecuteCommand(IWebDriver driver, Object[] params) in F:\MOJ\CRM_Automation\Microsoft.Dynamics365.UIAutomation.Browser\DelegateBrowserCommand.cs:line 28 at Microsoft.Dynamics365.UIAutomation.Browser.BrowserCommand1.Execute[T1,T2,T3,T4,T5,T6,T7,T8,T9](IWebDriver driver, T1 p1, T2 p2, T3 p3, T4 p4, T5 p5, T6 p6, T7 p7, T8 p8, T9 p9) in F:\MOJ\CRM_Automation\Microsoft.Dynamics365.UIAutomation.Browser\BrowserCommand.cs:line 136 at Microsoft.Dynamics365.UIAutomation.Browser.BrowserCommand1.Execute(IWebDriver driver) in F:\MOJ\CRM_Automation\Microsoft.Dynamics365.UIAutomation.Browser\BrowserCommand.cs:line 36 at Microsoft.Dynamics365.UIAutomation.Browser.BrowserPage.Execute[TResult](BrowserCommandOptions options, Func`2 delegate) in F:\MOJ\CRM_Automation\Microsoft.Dynamics365.UIAutomation.Browser\BrowserPage.cs:line 182 at Microsoft.Dynamics365.UIAutomation.Api.GuidedHelp.CloseGuidedHelp(Int32 thinkTime) in F:\MOJ\CRM_Automation\Microsoft.Dynamics365.UIAutomation.Api\Pages\GuidedHelp.cs:line 54 at Microsoft.Dynamics365.UIAutomation.Sample.Web.CreateAccount.WEBTestCreateNewAccount() in F:\MOJ\CRM_Automation\Microsoft.Dynamics365.UIAutomation.Sample\Web\Create\CreateAccount.cs:line 28 Result Message: Test method Microsoft.Dynamics365.UIAutomation.Sample.Web.CreateAccount.WEBTestCreateNewAccount threw exception: OpenQA.Selenium.WebDriverException: ReferenceError: Xrm is not defined Result StandardOutput: BrowserAutomation Information: 9000 : BrowserInitialized invoked. BrowserAutomation Information: 9001 : BrowserInitialized completed. 
BrowserAutomation Information: 10001 : Command Start: Login - Attempt 1/0 BrowserAutomation Information: 10002 : Command Stop: Login - 1 attempts - total execution time 4841.4539ms BrowserAutomation Information: 10001 : Command Start: Close Guided Help - Attempt 1/0 BrowserAutomation Error: 10003 : Command Error: Close Guided Help - Attempt 1/0 - OpenQA.Selenium.WebDriverException - ReferenceError: Xrm is not defined BrowserAutomation Information: 9000 : BrowserDisposing invoked. BrowserAutomation Information: 9001 : BrowserDisposing completed.
2025-04-01T04:34:41.376475
2021-09-07T11:57:20
989905527
{ "authors": [ "Ashfaq-ali", "TYLEROL" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8430", "repo": "microsoft/EasyRepro", "url": "https://github.com/microsoft/EasyRepro/pull/1186" }
gharchive/pull-request
Fix issues on GetHeaderTitle and OpenRecord Type of change [x] Bug fix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [ ] Other (updates to documentation, formatting, etc.) Description Wave 2 release related issues: Update to Grid container Xpath Fetch headerTitle from title attribute instead of Text Issues addressed https://github.com/microsoft/EasyRepro/issues/1179 https://github.com/microsoft/EasyRepro/issues/1172 All submissions: [x] My code follows the code style of this project. [ ] Do existing samples that are effected by this change still run? [ ] I have added samples for new functionality. [ ] I raise detailed error messages when possible. [ ] My code does not rely on labels that have the option to be hidden. Which browsers was this tested on? [x ] Chrome [ ] Firefox [ ] IE [ ] Edge I tried these changes for OpenRecord. It doesn't work for me. I am using this version of CRM Online: 2021 release wave 1 enabled Server version: 9.2.21081.00147 Client version: 1.4.3101-2108.1 Hi, Thanks for reviewing. This fix is specifically for the Wave 2 release, I assume wave2 fixes will be packaged as a new version. I have tested against CRM version 2021 release wave 2 enabled Server version: 9.2.21084.00137 Client version: 1.4.3129-2108.4 Thanks, Ash @Ashfaq-ali - Can you please re-target this towards the new branch 'develop-2021ReleaseWave2'? This branch will be used for 2021 Release Wave 2 fixes (and eventually pushed back into Develop once the platform has a common base) @TYLEROL Sure, switched to develop-2021ReleaseWave2 branch. Thanks
2025-04-01T04:34:41.383603
2020-09-07T09:00:58
694886008
{ "authors": [ "badrishc", "shadimari" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8431", "repo": "microsoft/FASTER", "url": "https://github.com/microsoft/FASTER/issues/321" }
gharchive/issue
NullReferenceException when executing c# benchmarking tool w/ checkpoints When executing C# benchmarking tool with the following configurations: reads_percent = -1 (i.e. 100% RMW) kCheckpointMilliseconds= 10000 kRunSeconds = 60 seconds kUseSmallData = true/false kUseSyntheticData = true kCheckpointStoreContents = true kAffinitizedSession = true/false kSmallMemoryLog = true/false Threads exit with NullReferenceException while checkpointing from the main-thread. Basically, the error occurs during the invocation of session.CompletePending(false), and in specific this line of code in FASTERThread.cs: ref Value value = ref pendingContext.value.Get(); StackTrace: System.NullReferenceException: Object reference not set to an instance of an object. at FASTER.core.FasterKV2.InternalCompleteRetryRequest[Input,Output,Context,FasterSession](FasterExecutionContext3 opCtx, FasterExecutionContext3 currentCtx, PendingContext3 pendingContext, FasterSession fasterSession) in D:\arabyads\sandbox\FASTER\cs\src\core\Index\FASTER\FASTERThread.cs:line 217 at FASTER.core.FasterKV2.InternalCompleteRetryRequests[Input,Output,Context,FasterSession](FasterExecutionContext3 opCtx, FasterExecutionContext3 currentCtx, FasterSession fasterSession) in D:\arabyads\sandbox\FASTER\cs\src\core\Index\FASTER\FASTERThread.cs:line 204 at FASTER.core.FasterKV2.InternalCompletePending[Input,Output,Context,FasterSession](FasterExecutionContext3 ctx, FasterSession fasterSession, Boolean wait) in D:\arabyads\sandbox\FASTER\cs\src\core\Index\FASTER\FASTERThread.cs:line 167 at FASTER.core.ClientSession6.CompletePending(Boolean spinWait, Boolean spinWaitForCommit) in D:\arabyads\sandbox\FASTER\cs\src\core\ClientSession\ClientSession.cs:line 396 at FASTER.benchmark.FASTER_YcsbBenchmark.RunYcsb(Int32 thread_idx) in D:\arabyads\sandbox\FASTER\cs\benchmark\FasterYcsbBenchmark.cs:line 164 at FASTER.benchmark.FASTER_YcsbBenchmark.<>c__DisplayClass29_1.b__1() in D:\arabyads\sandbox\FASTER\cs\benchmark\FasterYcsbBenchmark.cs:line 304 at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) --- End of stack trace from previous location where exception was thrown --- at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() Thanks for the report. The linked PR fixes this issue. Also, you do not need to set kCheckpointStoreContents as its intention is for backup for fast future benchmark runs when running without periodic checkpoints. You only need to set kCheckpointMilliseconds to enable periodic checkpointing. We have updated the benchmark code with renamed variables in the PR to remove the confusion: kCheckpointMilliseconds => kPeriodicCheckpointMilliseconds kCheckpointStoreContents => kBackupStoreForFastRestarts Thanks for the update and the clarification. Issue is fixed.
2025-04-01T04:34:41.384680
2021-11-19T05:35:07
1058155939
{ "authors": [ "liususan091219" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8432", "repo": "microsoft/FLAML", "url": "https://github.com/microsoft/FLAML/pull/293" }
gharchive/pull-request
checkpoint naming in nonray mode, fix ray mode, delete checkpoints in nonray mode …ning test mode, delete all the checkpoints in non-ray mode Closes Issue #290 #291
2025-04-01T04:34:41.387927
2022-04-25T21:18:02
1215055592
{ "authors": [ "ChumpChief" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8433", "repo": "microsoft/FluidFramework", "url": "https://github.com/microsoft/FluidFramework/pull/10040" }
gharchive/pull-request
Update definition of public/private for repo policy check In #10028 I discovered that examples are considered public, which seems an incorrect labeling since we don't publish them to NPM (and I believe the purpose of this check is to ensure published packages don't depend on non-published packages). This change would do the following: Define "public" and "private" based on whether the packages publish to NPM Public: @fluidframework, @fluid-experimental Private: @fluid-internal, @fluid-example, @fluid-tools Update the "private" flags in package.json files to match this definition Convert the bubblebench packages to "example" rather than "experimental" -- they were always under an examples subdirectory, I think this was just an oversight. Add a missing LICENSE file to property-query-service. Just realized I'll need to break this up for integration - will push a PR to only do the policy check update and hold the repo updates until we integrate that. Closing in favor of #10065.
2025-04-01T04:34:41.390788
2022-09-26T05:56:30
1385467366
{ "authors": [ "chensixx", "vladsud" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8434", "repo": "microsoft/FluidFramework", "url": "https://github.com/microsoft/FluidFramework/pull/12114" }
gharchive/pull-request
Add new error type fluidInvalidSchema For more information about how to contribute to this repo, visit this page. For specific guidelines for Pull Requests in this repo, visit this wiki page. The sections included below are suggestions for what you may want to include. Feel free to remove or alter parts of this template that do not offer value for your specific change. Description Adding a new error type fluidInvalidSchema for errors that happen when the server tries to open a non-Fluid file with a Fluid-like extension, e.g. ".note". A separate PR on handling the new error will merge after this. I believe you had the same PR against main. And I think it's more appropriate to bring it to main, as it's not breaking. Pushing it to next will delay substantially when we can start actually using the new error code.
2025-04-01T04:34:41.391980
2023-12-21T22:38:50
2053138761
{ "authors": [ "RishhiB", "zhenmichael" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8435", "repo": "microsoft/FluidFramework", "url": "https://github.com/microsoft/FluidFramework/pull/18967" }
gharchive/pull-request
added check for empty previousVersions before downloading blob The previous build-docs implementation assumed that previousVersions would be populated; this change adds a check for an empty "previousVersions" since 2.0 has not yet been released. Did you mean to return console.log? Doesn't that just return a void?
2025-04-01T04:34:41.395693
2021-10-20T08:12:48
1031109468
{ "authors": [ "AMollis", "havokentity" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8436", "repo": "microsoft/MixedRealityLearning", "url": "https://github.com/microsoft/MixedRealityLearning/issues/47" }
gharchive/issue
Error when using ASA 2.10.2 and MRTK 2.7.2 for Android builds I am getting this error when I add MRTK 2.7.2 to a Unity project which has ASA 2.10.2 whenever I try to do an Android build. Library\PackageCache\com.microsoft.azure.spatial-anchors-sdk.core@0d9691ef23ae-1634109003085\Runtime\Scripts\SpatialAnchorExtensions.cs(17,17): error CS0234: The type or namespace name 'MixedReality' does not exist in the namespace 'Microsoft' (are you missing an assembly reference?) Can someone please help? Marking this stale issue as "won't fix". If relevant, please try out the new MRTK3 tutorial samples. @ https://github.com/microsoft/MixedRealityLearning/tree/development/MRTK3 Tutorials. If the new MRTK3 samples don't have a relevant fix for your problem, please open a new issue. Thank you.
2025-04-01T04:34:41.399914
2022-05-25T18:03:21
1248474098
{ "authors": [ "Zee2", "elbuhofantasma" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8437", "repo": "microsoft/MixedRealityToolkit-Unity", "url": "https://github.com/microsoft/MixedRealityToolkit-Unity/issues/10615" }
gharchive/issue
Leap Motion tracked hand mesh switches to opposite hand while attempting to use teleport Describe the bug When the user has one or both hands in the view of the Leap Motion device, the tracked hand mesh can switch to the opposite hand while attempting to use the teleport gesture. To reproduce Steps to reproduce the behavior: Set up a fresh 2.8 clone for Leap Motion (Standalone and UWP both have this issue) Open the Leap Motion Hand Tracking Example scene Press the play button to start the example scene Attempt to use one hand to teleport to another location See hand mesh switch to the opposite hand Expected behavior The hand mesh does not switch to the opposite hand while trying to use a gesture. Screenshots https://user-images.githubusercontent.com/65038391/170331377-04a8e5c4-6b75-4992-9374-5e61970fab9a.mp4 Your setup (please complete the following information) Unity Version 2021.3.3, 2020.3.34, 2019.4.39 MRTK Version prerelease/2.8 Target platform (please complete the following information) Leap Motion Additional context This issue does not occur if both hands are trying to use the teleport gesture but will occur if both hands are being tracked and only one hand attempts the teleport gesture. This issue also occurs when building the project and running it on a PC as seen in the attached video. Assuming this is an Ultraleap hand tracking quality problem (their tracking algos getting confused about which hand is which...)
2025-04-01T04:34:41.419419
2021-03-18T03:44:10
834361047
{ "authors": [ "Wyrlock", "zxubian" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8438", "repo": "microsoft/MixedRealityToolkit-Unity", "url": "https://github.com/microsoft/MixedRealityToolkit-Unity/issues/9523" }
gharchive/issue
Missing Script in HandMenu prefabs To reproduce In an empty project, get the following features via the Feature Tool: Open any of the hand menu prefabs (e.g.) HandMenu_Large_AutoWorldLock_On_HandDrop.prefab @packages\com.microsoft.mixedreality.toolkit.examples\Common\Prefabs Expected behavior The prefab is valid and can be dropped in the scene and used. Actual behaviour Selecting the prefab gives the following Unity warning in the inspector: "Prefab has missing scripts. Open Prefab to fix the issue." Opening the prefab reveals that the bottommost script is missing. Screenshots If applicable, add screenshots to help explain your problem. I know this is a bit old, but what script was it? I'm experiencing the same thing, but after already importing Examples.
2025-04-01T04:34:41.423542
2020-04-01T11:51:28
591855793
{ "authors": [ "NikCharlebois", "codecov-io" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8439", "repo": "microsoft/Office365DSC", "url": "https://github.com/microsoft/Office365DSC/pull/436" }
gharchive/pull-request
Fix assert-o365dsctemplate

Pull Request (PR) description: Small fix for when the template is correctly asserted.
This Pull Request (PR) fixes the following issues: N/A
This change is

Codecov Report
Merging #436 into Dev will not change coverage by %. The diff coverage is n/a.

    @@          Coverage Diff          @@
    ##            Dev     #436   +/-  ##
    =====================================
      Coverage    91%     91%
    =====================================
      Files       107     107
      Lines       11443   11443
      Branches    10      10
    =====================================
      Hits        10420   10420
      Misses      1013    1013
      Partials    10      10
2025-04-01T04:34:41.426145
2020-03-16T12:03:00
582237703
{ "authors": [ "darrelmiller", "irvinesunday", "xuzhg" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8440", "repo": "microsoft/OpenAPI.NET.OData", "url": "https://github.com/microsoft/OpenAPI.NET.OData/pull/52" }
gharchive/pull-request
Adds support for paging through entities collection Closes https://github.com/microsoftgraph/microsoft-graph-explorer-api/issues/199 Proposes: Adding the x-ms-pageable extension to allow for paging of collections. Extending the current test to include the extension added above. Looks good to me. 🚢 @irvinesunday This PR has failing tests. Can you make the necessary changes so the tests pass? Thanks. @darrelmiller This is closed; did you mean this PR? https://github.com/microsoft/OpenAPI.NET.OData/pull/55
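As context for the proposal above, here is a hedged sketch of what a pageable collection operation can carry, written as a plain Python dictionary since the exact output shape isn't shown in this thread; the operation name, response entry, and nextLinkName value are illustrative assumptions rather than the library's verified emission.

```python
# Hedged illustration only: the keys around "x-ms-pageable" are assumptions
# about a typical GET-collection operation, not verified converter output.
import json

get_entities_operation = {
    "summary": "Get entities from the collection",
    "responses": {
        "200": {"description": "Retrieved collection"},
    },
    # x-ms-pageable conventionally names the response property that holds
    # the link a client follows to fetch the next page of results.
    "x-ms-pageable": {
        "nextLinkName": "@odata.nextLink",
    },
}

if __name__ == "__main__":
    print(json.dumps(get_entities_operation, indent=2))
```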
2025-04-01T04:34:41.444390
2023-05-30T23:58:52
1733177019
{ "authors": [ "LucGenetier", "toshio-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8441", "repo": "microsoft/Power-Fx", "url": "https://github.com/microsoft/Power-Fx/pull/1539" }
gharchive/pull-request
[[Power-Fx]] Localization hand-back [CTAS - RunID=20230530-235707-5y8x0bvy57] Translation handback check-in (PR Created by automation). ✅ No public API change.
2025-04-01T04:34:41.445773
2022-03-03T01:00:45
1157851619
{ "authors": [ "CarlosFigueiraMSFT", "MikeStall" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8442", "repo": "microsoft/Power-Fx", "url": "https://github.com/microsoft/Power-Fx/pull/189" }
gharchive/pull-request
Expose extra information about failed tests in the runner Exposes extra information about the tests that failed on the runner, instead of only summary (total, pass, fail) information. https://github.com/microsoft/Power-Fx/pull/190 may supersede this PR. Closing for now, waiting until #190 is completed to see if this will still be needed.
2025-04-01T04:34:41.447926
2024-07-26T22:13:40
2433042916
{ "authors": [ "LucGenetier", "anderson-joyle" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8443", "repo": "microsoft/Power-Fx", "url": "https://github.com/microsoft/Power-Fx/pull/2562" }
gharchive/pull-request
Custom structural print This PR aims to provide the host with a way to anonymize expressions with more deterministic mask values. ✅ No public API change.
2025-04-01T04:34:41.449787
2022-02-22T19:23:24
1147283513
{ "authors": [ "jeroenterheerdt", "tirnovar" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8444", "repo": "microsoft/PowerBI-Icons", "url": "https://github.com/microsoft/PowerBI-Icons/issues/21" }
gharchive/issue
Dashboard and Goal Icons are not centered Hello guys, I am so sorry to bother you, but the Dashboard and Goal SVG icons are not centered, so there is some space on the left of the icon. thanks, updated!
2025-04-01T04:34:41.452034
2021-11-14T18:15:08
1053004864
{ "authors": [ "JakeDansker", "franky920920" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8445", "repo": "microsoft/PowerToys", "url": "https://github.com/microsoft/PowerToys/issues/14430" }
gharchive/issue
PowerToys Run Web Search Description of the new feature / enhancement Allow a web search directly from the PowerToys Run search window. Scenario when this would be used? This would be incredibly useful to search the web from a keyboard shortcut Supporting information Allow 3rd party search engines. Duplicate of #9526 Thanks for your suggestion! This issue is a duplicate. I'm linking this issue to #3245 to centralize the discussion thread, please go to that thread for discussion. Thanks for your help to make PowerToys a better piece of software!
2025-04-01T04:34:41.473704
2021-12-30T05:30:55
1090886829
{ "authors": [ "Jay-o-Way", "TheJoeFin", "bellesbae", "crutkas", "franky920920", "htcfreek" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8446", "repo": "microsoft/PowerToys", "url": "https://github.com/microsoft/PowerToys/issues/15213" }
gharchive/issue
ciceroUIWndFrame PowerLauncher.exe Application Error Microsoft PowerToys version 0.51.1 Running as admin [X] Yes Area(s) with issue? General Steps to reproduce Getting this pop up error as soon as I reboot. ciceroUIWndFrame PowerLauncher.exe Application Error Goes away after I uninstall powertoys but returns after I install it and reboot. screenshot: https://paste.pics/FI607 ✔️ Expected Behavior No response ❌ Actual Behavior Pop up error Other Software No response Apparently Cicero is the Speech and Handwriting Recognition feature from Microsoft Office. https://www.troublefixers.com/avoid-cicerouiwndframe-error-while-windows-shut-down/ @crutkas is this related to #15022? Apparently Cicero is the Speech and Handwriting Recognition feature from Microsoft Office. https://www.troublefixers.com/avoid-cicerouiwndframe-error-while-windows-shut-down/ @crutkas is this related to #15022? Could be but I can't find the fix they're referring to. Might be for older Windows. And I don't even have the full Office version installed. Okay. What about the Windows OS version of the "Touch Keyboard and Handwriting Panel Service"? Any changes to that? Touch Keyboard and Handwriting Panel Service Its status is running but startup type is set to Manual. @crutkas Do we have a tracking issue for this ciceroUIWndFrame error message? I saw this issue multiple times the last time. And this issue is annoying me too every time I reboot/shutdown my system. we've seen reports of it but no one provided a bug report. If there's a fix for this, please advise :) @crutkas I tried to find logs for this on my pc. But there aren't any. 😕 And it seems that the bug is gone for now on my pc. I formatted and reinstalled powertoys. Had the same error upon restart. The error is still there. What do you mean "formatted" @bellesbae can you provide a bug report? /reportbug PowerToysReport_2022-01-25-12-32-43.zip By formatted, I meant I formatted my PC thinking it was a problem with my PC. But I managed to record a video and slowed it down and realized it was a problem with PowerToys. See attached for the bug report. event ID 1026: Application: PowerToys.PowerLauncher.exe CoreCLR Version: 5.0.1321.56516 .NET Version: 5.0.13 Description: The process was terminated due to an unhandled exception. Exception Info: System.NullReferenceException: Object reference not set to an instance of an object. 
at System.Windows.Data.BindingExpression.Deactivate() at System.Windows.Data.BindingExpression.DetachOverride() at System.Windows.Data.BindingExpressionBase.Detach() at System.Windows.Data.BindingExpressionBase.OnDetach(DependencyObject d, DependencyProperty dp) at System.Windows.DependencyObject.SetValueCommon(DependencyProperty dp, Object value, PropertyMetadata metadata, Boolean coerceWithDeferredReference, Boolean coerceWithCurrentValue, OperationType operationType, Boolean isInternal) at System.Windows.DependencyObject.SetValue(DependencyProperty dp, Object value) at System.Windows.Baml2006.WpfMemberInvoker.SetValue(Object instance, Object value) at MS.Internal.Xaml.Runtime.ClrObjectRuntime.SetValue(XamlMember member, Object obj, Object value) at MS.Internal.Xaml.Runtime.ClrObjectRuntime.SetValue(Object inst, XamlMember property, Object value) at MS.Internal.Xaml.Runtime.PartialTrustTolerantRuntime.SetValue(Object obj, XamlMember property, Object value) at System.Xaml.XamlObjectWriter.SetValue(Object inst, XamlMember property, Object value) at System.Xaml.XamlObjectWriter.Logic_ApplyPropertyValue(ObjectWriterContext ctx, XamlMember prop, Object value, Boolean onParent) at System.Xaml.XamlObjectWriter.Logic_DoAssignmentToParentProperty(ObjectWriterContext ctx) at System.Xaml.XamlObjectWriter.Logic_AssignProvidedValue(ObjectWriterContext ctx) at System.Xaml.XamlObjectWriter.WriteEndMember() at System.Xaml.XamlWriter.WriteNode(XamlReader reader) at System.Windows.Markup.WpfXamlLoader.TransformNodes(XamlReader xamlReader, XamlObjectWriter xamlWriter, Boolean onlyLoadOneNode, Boolean skipJournaledProperties, Boolean shouldPassLineNumberInfo, IXamlLineInfo xamlLineInfo, IXamlLineInfoConsumer xamlLineInfoConsumer, XamlContextStack1 stack, IStyleConnector styleConnector) at System.Windows.Markup.WpfXamlLoader.Load(XamlReader xamlReader, IXamlObjectWriterFactory writerFactory, Boolean skipJournaledProperties, Object rootObject, XamlObjectWriterSettings settings, Uri baseUri) at System.Windows.ResourceDictionary.CreateObject(KeyRecord key) at System.Windows.ResourceDictionary.OnGettingValue(Object key, Object& value, Boolean& canCache) at System.Windows.ResourceDictionary.OnGettingValuePrivate(Object key, Object& value, Boolean& canCache) at System.Windows.ResourceDictionary.GetValueWithoutLock(Object key, Boolean& canCache) at System.Windows.ResourceDictionary.GetValue(Object key, Boolean& canCache) at System.Windows.ResourceDictionary.GetValueWithoutLock(Object key, Boolean& canCache) at System.Windows.ResourceDictionary.GetValue(Object key, Boolean& canCache) at System.Windows.ResourceDictionary.GetValueWithoutLock(Object key, Boolean& canCache) at System.Windows.ResourceDictionary.GetValue(Object key, Boolean& canCache) at System.Windows.ResourceDictionary.GetValueWithoutLock(Object key, Boolean& canCache) at System.Windows.ResourceDictionary.GetValue(Object key, Boolean& canCache) at System.Windows.DeferredResourceReference.GetValue(BaseValueSourceInternal valueSource) at System.Windows.DeferredAppResourceReference.GetValue(BaseValueSourceInternal valueSource) at System.Windows.DependencyPropertyChangedEventArgs.get_NewValue() at System.Windows.Shell.WindowChrome._OnChromeChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) at System.Windows.FrameworkElement.OnPropertyChanged(DependencyPropertyChangedEventArgs e) at System.Windows.DependencyObject.NotifyPropertyChange(DependencyPropertyChangedEventArgs args) at System.Windows.DependencyObject.UpdateEffectiveValue(EntryIndex 
entryIndex, DependencyProperty dp, PropertyMetadata metadata, EffectiveValueEntry oldEntry, EffectiveValueEntry& newEntry, Boolean coerceWithDeferredReference, Boolean coerceWithCurrentValue, OperationType operationType) at System.Windows.StyleHelper.ApplyStyleOrTemplateValue(FrameworkObject fo, DependencyProperty dp) at System.Windows.StyleHelper.InvalidateContainerDependents(DependencyObject container, FrugalStructList1& exclusionContainerDependents, FrugalStructList1& oldContainerDependents, FrugalStructList1& newContainerDependents) at System.Windows.StyleHelper.DoStyleInvalidations(FrameworkElement fe, FrameworkContentElement fce, Style oldStyle, Style newStyle) at System.Windows.StyleHelper.UpdateStyleCache(FrameworkElement fe, FrameworkContentElement fce, Style oldStyle, Style newStyle, Style& styleCache) at System.Windows.FrameworkElement.OnStyleChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) at System.Windows.FrameworkElement.OnPropertyChanged(DependencyPropertyChangedEventArgs e) at System.Windows.DependencyObject.NotifyPropertyChange(DependencyPropertyChangedEventArgs args) at System.Windows.DependencyObject.UpdateEffectiveValue(EntryIndex entryIndex, DependencyProperty dp, PropertyMetadata metadata, EffectiveValueEntry oldEntry, EffectiveValueEntry& newEntry, Boolean coerceWithDeferredReference, Boolean coerceWithCurrentValue, OperationType operationType) at System.Windows.DependencyObject.SetValueCommon(DependencyProperty dp, Object value, PropertyMetadata metadata, Boolean coerceWithDeferredReference, Boolean coerceWithCurrentValue, OperationType operationType, Boolean isInternal) at System.Windows.DependencyObject.SetValue(DependencyProperty dp, Object value) at System.Windows.FrameworkElement.SetResourceReference(DependencyProperty dp, Object name) at System.Windows.FrameworkElement.OnPropertyChanged(DependencyPropertyChangedEventArgs e) at System.Windows.DependencyObject.NotifyPropertyChange(DependencyPropertyChangedEventArgs args) at System.Windows.DependencyObject.UpdateEffectiveValue(EntryIndex entryIndex, DependencyProperty dp, PropertyMetadata metadata, EffectiveValueEntry oldEntry, EffectiveValueEntry& newEntry, Boolean coerceWithDeferredReference, Boolean coerceWithCurrentValue, OperationType operationType) at System.Windows.DependencyObject.SetValueCommon(DependencyProperty dp, Object value, PropertyMetadata metadata, Boolean coerceWithDeferredReference, Boolean coerceWithCurrentValue, OperationType operationType, Boolean isInternal) at System.Windows.DependencyObject.SetValue(DependencyProperty dp, Object value) at System.Windows.Baml2006.WpfMemberInvoker.SetValue(Object instance, Object value) at MS.Internal.Xaml.Runtime.ClrObjectRuntime.SetValue(XamlMember member, Object obj, Object value) at MS.Internal.Xaml.Runtime.ClrObjectRuntime.SetValue(Object inst, XamlMember property, Object value) at System.Xaml.XamlObjectWriter.SetValue(Object inst, XamlMember property, Object value) at System.Xaml.XamlObjectWriter.Logic_ApplyPropertyValue(ObjectWriterContext ctx, XamlMember prop, Object value, Boolean onParent) at System.Xaml.XamlObjectWriter.Logic_DoAssignmentToParentProperty(ObjectWriterContext ctx) at System.Xaml.XamlObjectWriter.Logic_AssignProvidedValue(ObjectWriterContext ctx) at System.Xaml.XamlObjectWriter.WriteEndMember() at System.Xaml.XamlWriter.WriteNode(XamlReader reader) at System.Windows.Markup.WpfXamlLoader.TransformNodes(XamlReader xamlReader, XamlObjectWriter xamlWriter, Boolean onlyLoadOneNode, Boolean skipJournaledProperties, 
Boolean shouldPassLineNumberInfo, IXamlLineInfo xamlLineInfo, IXamlLineInfoConsumer xamlLineInfoConsumer, XamlContextStack`1 stack, IStyleConnector styleConnector) at System.Windows.Markup.WpfXamlLoader.Load(XamlReader xamlReader, IXamlObjectWriterFactory writerFactory, Boolean skipJournaledProperties, Object rootObject, XamlObjectWriterSettings settings, Uri baseUri) at System.Windows.Markup.WpfXamlLoader.LoadBaml(XamlReader xamlReader, Boolean skipJournaledProperties, Object rootObject, XamlAccessLevel accessLevel, Uri baseUri) at System.Windows.Markup.XamlReader.LoadBaml(Stream stream, ParserContext parserContext, Object parent, Boolean closeStream) at System.Windows.Application.LoadComponent(Object component, Uri resourceLocator) at PowerLauncher.ReportWindow.InitializeComponent() at PowerLauncher.ReportWindow..ctor(Exception exception) at PowerLauncher.Helper.ErrorReporting.Report(Exception e, Boolean waitForClose) at PowerLauncher.Helper.ErrorReporting.DispatcherUnhandledException(Object sender, DispatcherUnhandledExceptionEventArgs e) at System.Windows.Threading.Dispatcher.CatchException(Exception e) at System.Windows.Threading.Dispatcher.CatchExceptionStatic(Object source, Exception e) at System.Windows.Threading.ExceptionWrapper.CatchException(Object source, Exception e, Delegate catchHandler) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler) at System.Windows.Threading.Dispatcher.LegacyInvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Int32 numArgs) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam) Interesting, The above comment seems like https://github.com/microsoft/PowerToys/issues/15846#issuecomment-1032544541 Is this issue still relevant in v0.73? /needinfo
2025-04-01T04:34:41.476669
2022-03-14T07:14:05
1167978788
{ "authors": [ "franky920920", "omar2205" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8447", "repo": "microsoft/PowerToys", "url": "https://github.com/microsoft/PowerToys/issues/17031" }
gharchive/issue
FR: Quick draw on screen Description of the new feature / enhancement Pressing a hotkey would allow me to draw on the screen, hitting another key clears the screen or hitting a third key saves a screenshot. Scenario when this would be used? Helpful If I'm explaining or reviewing something. Supporting information No response Duplicate of #149 Issue is a duplicate Thank you for your issue! We've linked your report against another issue to centralize the thread. Thanks for helping us to make PowerToys a better piece of software.
2025-04-01T04:34:41.478198
2022-05-15T14:01:29
1236297040
{ "authors": [ "crutkas", "pimporte85" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8448", "repo": "microsoft/PowerToys", "url": "https://github.com/microsoft/PowerToys/issues/18301" }
gharchive/issue
QuickLook Description of the new feature / enhancement Quicklook for documents, pictures, pdf, etc... Scenario when this would be used? It is very helpful for take a quick look to any document Supporting information can you create this feature please ? Quicklook. you can find it in the microsoft store /dup #80
2025-04-01T04:34:41.482135
2022-07-11T15:24:22
1300871752
{ "authors": [ "RavenMacDaddy", "jefflord" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8449", "repo": "microsoft/PowerToys", "url": "https://github.com/microsoft/PowerToys/issues/19328" }
gharchive/issue
PT Run - Windows Search plugin does not honor "Hide extensions for known file types" Microsoft PowerToys version 0.60.0 Running as admin [X] Yes Area(s) with issue? PowerToys Run Steps to reproduce Under Windows folder Option, if you check this: The file ext are still shown in PT Run: Built in Windows Search does hide as configured: ✔️ Expected Behavior Hide file extension when configured at the system level. ❌ Actual Behavior File extensions are always shown. Other Software No response Rather than simply respecting the setting of File Explorer, I'd like to see the global option for PowerToys Run to show these or not.
2025-04-01T04:34:41.484953
2023-01-06T05:29:36
1521975259
{ "authors": [ "jaimecbernardo", "quocthang0507" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8450", "repo": "microsoft/PowerToys", "url": "https://github.com/microsoft/PowerToys/issues/23133" }
gharchive/issue
PowerToys automatically exits when I click the author's name in What’s new! Microsoft PowerToys version v0.66.0 Installation method PowerToys auto-update Running as admin None Area(s) with issue? General Steps to reproduce PowerToys automatically exits when I click the author's name in What’s new! ✔️ Expected Behavior Open browser ❌ Actual Behavior Close app Other Software No response I can replicate this. Thanks for opening the issue. This has been addressed as part of the 0.67 release cycle. Thank you!
2025-04-01T04:34:41.521266
2023-03-02T12:09:57
1606688981
{ "authors": [ "jaimecbernardo", "simulatorwinner" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8451", "repo": "microsoft/PowerToys", "url": "https://github.com/microsoft/PowerToys/issues/24451" }
gharchive/issue
Unable to install powertoys Microsoft PowerToys version 0.68.0 Installation method GitHub Running as admin Yes Area(s) with issue? Installer Steps to reproduce Install latest PowerToysSetup-0.68.0-x64.exe Run with admin Wait Window provided in Actual Behavior pops up ✔️ Expected Behavior Powertoys to get installed on pc as it normally would be ❌ Actual Behavior Other Software No response Log File: [2498:21AC][2023-03-02T17:36:02]i001: Burn v<IP_ADDRESS>26, Windows v10.0 (Build 22621: Service Pack 0), path: C:\Users\HELLOS~1\AppData\Local\Temp{75540B05-6672-4DD2-A161-9E94773848AA}.cr\PowerToysSetup-0.68.0-x64.exe [2498:21AC][2023-03-02T17:36:02]i000: Initializing string variable 'InstallFolder' to value '[ProgramFiles64Folder]PowerToys' [2498:21AC][2023-03-02T17:36:02]i000: Initializing string variable 'MsiLogFolder' to value '[LocalAppDataFolder]\Microsoft\PowerToys' [2498:21AC][2023-03-02T17:36:02]i000: Initializing version variable 'DetectedPowerToysVersion' to value '<IP_ADDRESS>' [2498:21AC][2023-03-02T17:36:02]i000: Initializing version variable 'TargetPowerToysVersion' to value '0.68.0' [2498:21AC][2023-03-02T17:36:02]i000: Initializing version variable 'DetectedWindowsBuildNumber' to value '0' [2498:21AC][2023-03-02T17:36:02]i009: Command Line: '"-burn.clean.room=C:\Users\Hello Sir\Downloads\PowerToysSetup-0.68.0-x64.exe" -burn.filehandle.attached=584 -burn.filehandle.self=592' [2498:21AC][2023-03-02T17:36:02]i000: Setting string variable 'WixBundleOriginalSource' to value 'C:\Users\Hello Sir\Downloads\PowerToysSetup-0.68.0-x64.exe' [2498:21AC][2023-03-02T17:36:02]i000: Setting string variable 'WixBundleOriginalSourceFolder' to value 'C:\Users\Hello Sir\Downloads' [2498:21AC][2023-03-02T17:36:02]i000: Setting string variable 'WixBundleLog' to value 'C:\Users\HELLOS~1\AppData\Local\Temp\powertoys-bootstrapper-msi-0.68.0_20230302173602.log' [2498:21AC][2023-03-02T17:36:02]i000: Setting string variable 'WixBundleName' to value 'PowerToys (Preview) x64' [2498:21AC][2023-03-02T17:36:02]i000: Setting string variable 'WixBundleManufacturer' to value 'Microsoft Corporation' [2498:2034][2023-03-02T17:36:02]i000: Setting numeric variable 'WixStdBALanguageId' to value 1033 [2498:2034][2023-03-02T17:36:02]i000: Setting version variable 'WixBundleFileVersion' to value '<IP_ADDRESS>' [2498:21AC][2023-03-02T17:36:02]i100: Detect begin, 3 packages [2498:21AC][2023-03-02T17:36:02]i000: Setting version variable 'DetectedPowerToysVersion' to value '<IP_ADDRESS>' [2498:21AC][2023-03-02T17:36:02]i000: Setting string variable 'DetectedWindowsBuildNumber' to value '22621' [2498:21AC][2023-03-02T17:36:02]i000: Registry key not found. Key = 'Software\Microsoft\EdgeUpdate\Clients{F3017226-FE2A-4295-8BDF-00C3A9A7E4C5}' [2498:21AC][2023-03-02T17:36:02]i000: Setting numeric variable 'HasWebView2PerUser' to value 0 [2498:21AC][2023-03-02T17:36:02]i000: Setting numeric variable 'HasWebView2PerMachine' to value 1 [2498:21AC][2023-03-02T17:36:02]i052: Condition 'HasWebView2PerMachine OR HasWebView2PerUser' evaluates to true. 
[2498:21AC][2023-03-02T17:36:02]i103: Detected related package: {3F721FAF-9D33-4840-9082-F9D5E0ED6970}, scope: PerMachine, version: <IP_ADDRESS>, language: 0 operation: MajorUpgrade [2498:21AC][2023-03-02T17:36:02]i103: Detected related package: {3F721FAF-9D33-4840-9082-F9D5E0ED6970}, scope: PerMachine, version: <IP_ADDRESS>, language: 0 operation: MajorUpgrade [2498:21AC][2023-03-02T17:36:02]i101: Detected package: TerminatePowerToys, state: Absent, cached: None [2498:21AC][2023-03-02T17:36:02]i101: Detected package: WebView2, state: Present, cached: None [2498:21AC][2023-03-02T17:36:02]i101: Detected package: PowerToysSetup_0.68.0_x64.msi, state: Absent, cached: None [2498:21AC][2023-03-02T17:36:02]i052: Condition 'TargetPowerToysVersion >= DetectedPowerToysVersion OR WixBundleInstalled' evaluates to true. [2498:21AC][2023-03-02T17:36:02]i052: Condition 'DetectedWindowsBuildNumber >= 19041 OR WixBundleInstalled' evaluates to true. [2498:21AC][2023-03-02T17:36:02]i199: Detect complete, result: 0x0 [2498:2034][2023-03-02T17:36:04]i000: Setting numeric variable 'EulaAcceptCheckbox' to value 1 [2498:21AC][2023-03-02T17:36:04]i200: Plan begin, 3 packages, action: Install [2498:21AC][2023-03-02T17:36:04]w321: Skipping dependency registration on package with no dependency providers: TerminatePowerToys [2498:21AC][2023-03-02T17:36:04]i000: Setting string variable 'WixBundleLog_TerminatePowerToys' to value 'C:\Users\HELLOS~1\AppData\Local\Temp\powertoys-bootstrapper-msi-0.68.0_20230302173602_000_TerminatePowerToys.log' [2498:21AC][2023-03-02T17:36:04]w321: Skipping dependency registration on package with no dependency providers: WebView2 [2498:21AC][2023-03-02T17:36:04]i000: Setting string variable 'WixBundleRollbackLog_PowerToysSetup_0.68.0_x64.msi' to value 'C:\Users\HELLOS~1\AppData\Local\Temp\powertoys-bootstrapper-msi-0.68.0_20230302173602_001_PowerToysSetup_0.68.0_x64.msi_rollback.log' [2498:21AC][2023-03-02T17:36:04]i000: Setting string variable 'WixBundleLog_PowerToysSetup_0.68.0_x64.msi' to value 'C:\Users\HELLOS~1\AppData\Local\Temp\powertoys-bootstrapper-msi-0.68.0_20230302173602_001_PowerToysSetup_0.68.0_x64.msi.log' [2498:21AC][2023-03-02T17:36:04]i201: Planned package: TerminatePowerToys, state: Absent, default requested: Present, ba requested: Present, execute: Install, rollback: None, cache: Yes, uncache: Yes, dependency: None [2498:21AC][2023-03-02T17:36:04]i201: Planned package: WebView2, state: Present, default requested: Present, ba requested: Present, execute: None, rollback: None, cache: No, uncache: No, dependency: None [2498:21AC][2023-03-02T17:36:04]i201: Planned package: PowerToysSetup_0.68.0_x64.msi, state: Absent, default requested: Present, ba requested: Present, execute: Install, rollback: Uninstall, cache: Yes, uncache: No, dependency: Register [2498:21AC][2023-03-02T17:36:04]i299: Plan complete, result: 0x0 [2498:21AC][2023-03-02T17:36:04]i300: Apply begin [2498:21AC][2023-03-02T17:36:04]i010: Launching elevated engine process. [2498:21AC][2023-03-02T17:36:06]i011: Launched elevated engine process. [2498:21AC][2023-03-02T17:36:07]i012: Connected to elevated engine. [12AC:36C8][2023-03-02T17:36:07]i358: Pausing automatic updates. [12AC:36C8][2023-03-02T17:36:07]i359: Paused automatic updates. [12AC:36C8][2023-03-02T17:36:07]i360: Creating a system restore point. [12AC:36C8][2023-03-02T17:36:07]i361: Created a system restore point. 
[12AC:36C8][2023-03-02T17:36:07]i370: Session begin, registration key: SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall{146aebdf-0b08-4bd1-b274-987efc9d40e7}, options: 0x7, disable resume: No [12AC:36C8][2023-03-02T17:36:07]i000: Caching bundle from: 'C:\Users\HELLOS~1\AppData\Local\Temp{2F214A5B-9752-48DC-929A-F80D67E4D039}.be\PowerToysSetup-0.68.0-x64.exe' to: 'C:\ProgramData\Package Cache{146aebdf-0b08-4bd1-b274-987efc9d40e7}\PowerToysSetup-0.68.0-x64.exe' [12AC:36C8][2023-03-02T17:36:07]i320: Registering bundle dependency provider: {146aebdf-0b08-4bd1-b274-987efc9d40e7}, version: <IP_ADDRESS> [12AC:36C8][2023-03-02T17:36:07]i371: Updating session, registration key: SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall{146aebdf-0b08-4bd1-b274-987efc9d40e7}, resume: Active, restart initiated: No, disable resume: No [12AC:24CC][2023-03-02T17:36:09]i305: Verified acquired payload: TerminatePowerToys at path: C:\ProgramData\Package Cache.unverified\TerminatePowerToys, moving to: C:\ProgramData\Package Cache\CEEB2F4674AB44E9EBCE9175CE716612D979979C\terminate_powertoys.cmd. [12AC:24CC][2023-03-02T17:36:10]i305: Verified acquired payload: PowerToysSetup_0.68.0_x64.msi at path: C:\ProgramData\Package Cache.unverified\PowerToysSetup_0.68.0_x64.msi, moving to: C:\ProgramData\Package Cache{B216161C-A8ED-4248-B2FC-75A9BA9D3832}v0.68.0\PowerToysSetup-0.68.0-x64.msi. [12AC:36C8][2023-03-02T17:36:10]i301: Applying execute package: TerminatePowerToys, action: Install, path: C:\ProgramData\Package Cache\CEEB2F4674AB44E9EBCE9175CE716612D979979C\terminate_powertoys.cmd, arguments: '"C:\ProgramData\Package Cache\CEEB2F4674AB44E9EBCE9175CE716612D979979C\terminate_powertoys.cmd"' [2498:21AC][2023-03-02T17:36:11]i319: Applied execute package: TerminatePowerToys, result: 0x0, restart: None [12AC:36C8][2023-03-02T17:36:11]i323: Registering package dependency provider: {B216161C-A8ED-4248-B2FC-75A9BA9D3832}, version: 0.68.0, package: PowerToysSetup_0.68.0_x64.msi [12AC:36C8][2023-03-02T17:36:11]i301: Applying execute package: PowerToysSetup_0.68.0_x64.msi, action: Install, path: C:\ProgramData\Package Cache{B216161C-A8ED-4248-B2FC-75A9BA9D3832}v0.68.0\PowerToysSetup-0.68.0-x64.msi, arguments: ' ARPSYSTEMCOMPONENT="1" BOOTSTRAPPERINSTALLFOLDER="C:\Program Files\PowerToys"' [12AC:36C8][2023-03-02T17:40:19]e000: Error 0x80070643: Failed to install MSI package. [12AC:36C8][2023-03-02T17:40:19]e000: Error 0x80070643: Failed to execute MSI package. [2498:21AC][2023-03-02T17:40:19]e000: Error 0x80070643: Failed to configure per-machine MSI package. [2498:21AC][2023-03-02T17:40:19]i319: Applied execute package: PowerToysSetup_0.68.0_x64.msi, result: 0x80070643, restart: None [2498:21AC][2023-03-02T17:40:19]e000: Error 0x80070643: Failed to execute MSI package. 
[12AC:36C8][2023-03-02T17:40:19]i318: Skipped rollback of package: PowerToysSetup_0.68.0_x64.msi, action: Uninstall, already: Absent [2498:21AC][2023-03-02T17:40:19]i319: Applied rollback package: PowerToysSetup_0.68.0_x64.msi, result: 0x0, restart: None [12AC:36C8][2023-03-02T17:40:19]i329: Removed package dependency provider: {B216161C-A8ED-4248-B2FC-75A9BA9D3832}, package: PowerToysSetup_0.68.0_x64.msi [12AC:36C8][2023-03-02T17:40:19]i351: Removing cached package: PowerToysSetup_0.68.0_x64.msi, from path: C:\ProgramData\Package Cache{B216161C-A8ED-4248-B2FC-75A9BA9D3832}v0.68.0 [12AC:36C8][2023-03-02T17:40:19]i351: Removing cached package: TerminatePowerToys, from path: C:\ProgramData\Package Cache\CEEB2F4674AB44E9EBCE9175CE716612D979979C [12AC:36C8][2023-03-02T17:40:19]i372: Session end, registration key: SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall{146aebdf-0b08-4bd1-b274-987efc9d40e7}, resume: None, restart: None, disable resume: No [12AC:36C8][2023-03-02T17:40:19]i330: Removed bundle dependency provider: {146aebdf-0b08-4bd1-b274-987efc9d40e7} [12AC:36C8][2023-03-02T17:40:19]i352: Removing cached bundle: {146aebdf-0b08-4bd1-b274-987efc9d40e7}, from path: C:\ProgramData\Package Cache{146aebdf-0b08-4bd1-b274-987efc9d40e7} [12AC:36C8][2023-03-02T17:40:19]i371: Updating session, registration key: SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall{146aebdf-0b08-4bd1-b274-987efc9d40e7}, resume: None, restart initiated: No, disable resume: No [2498:21AC][2023-03-02T17:40:20]i399: Apply complete, result: 0x80070643, restart: None, ba requested restart: No Looks like the MSI for 0.67.0 needed to uninstall was cleaned from your system. Please follow these instructions to get it from the 0.67 installer: https://learn.microsoft.com/en-us/windows/powertoys/install#extracting-the-msi-from-the-bundle /needinfo
2025-04-01T04:34:41.820097
2023-04-22T18:30:53
1679648975
{ "authors": [ "kkshinichi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8452", "repo": "microsoft/PowerToys", "url": "https://github.com/microsoft/PowerToys/issues/25662" }
gharchive/issue
PowerToys Run: $display not showing System Settings > Display Microsoft PowerToys version 0.69.1 Installation method GitHub Running as admin Yes Area(s) with issue? PowerToys Run Steps to reproduce Trigger PowerToys Run (alt+space) Type $display ✔️ Expected Behavior There will be an item called Display (System Settings > Display) ❌ Actual Behavior There's no item of Display (System Settings > Display) This item shows up on my other Windows 10 PC, but not on Windows 11 PC. Other Software No response It was renamed to Display properties Control Panel > Hardware and Sound instead. It redirects to Display Settings.
2025-04-01T04:34:41.830071
2023-10-01T18:52:04
1920915010
{ "authors": [ "davidldennison", "htcfreek", "jaimecbernardo", "jorgeb77" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8453", "repo": "microsoft/PowerToys", "url": "https://github.com/microsoft/PowerToys/issues/28940" }
gharchive/issue
Previewing files HTML, JavaScript and CSS in File Explorer Description of the new feature / enhancement Please, I would like you to add the ability to preview HTML, JavaScript and CSS files to the Windows File Explorer. Scenario when this would be used? To preview the contents of a file before opening it. Supporting information No response Can you please be more specific about what you mean? File Explorer add-ons allow you to view the contents of HTML, JavaScript and CSS files in the preview pane. Here's the docs showing a .cs file as an example: https://learn.microsoft.com/en-us/windows/powertoys/file-explorer#enabling-the-explorer-pane-in-windows-11 Does this help? /needinfo That's what I mean: the plugin does not allow you to preview html, css and javascript files. Try it yourself to see for yourself. I would also like it to allow the preview of other types of programming files. Thank you very much for your attention. Is this issue still a problem? Sometimes it works. I don't know what this is due to. @htcfreek it's definitely still an issue. I've tried for months to get it to work and for some reason sometimes it works, sometimes it breaks after. Sometimes there's no preview at all once installed.
2025-04-01T04:34:41.832914
2024-01-30T20:55:30
2108759970
{ "authors": [ "CityguyUSA", "OPS-X" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8454", "repo": "microsoft/PowerToys", "url": "https://github.com/microsoft/PowerToys/issues/31199" }
gharchive/issue
FancyZones UI is awful Description of the new feature / enhancement FancyZones UI is terribly unintuitive. It needs to be rethought. Every time I attempt to use it I can't figure it out. The editor seems to overlay everything, and I find trying to drag and drop an open panel next to impossible. The interaction between the existing windows and the editor is convoluted. And once you've created a setup it's supposed to automatically open it next time...how? Scenario when this would be used? I love the idea of FancyZones; it just needs a better, more intuitive UI. Supporting information Maybe first off just have the editor be another window rather than covering the entire display. How about inverting the approach: set up a display with the programs you want by dragging their boundaries to the desired spot. Click save to create your FancyZone and create a shortcut on the desktop that would open the named FancyZone and the associated apps into the proper layout. I literally googled for "fancy zones awfull ui" after pulling my hair out within 10 minutes of trying to make some simple changes to get a basic 3 panel layout. Literally everything works the opposite of what one would intuitively expect, so please fix this asap.
2025-04-01T04:34:41.836993
2024-03-10T18:23:24
2177839262
{ "authors": [ "Aaron-Junker", "RektSemih", "jaimecbernardo" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8455", "repo": "microsoft/PowerToys", "url": "https://github.com/microsoft/PowerToys/issues/31833" }
gharchive/issue
powertoys keyboard manager does not open Microsoft PowerToys version 0.79.0 Installation method GitHub Running as admin Yes Area(s) with issue? Keyboard Manager Steps to reproduce PowerToysReport_2024-03-10-21-22-16.zip ✔️ Expected Behavior I expected the Keyboard Manager to open when I pressed it. ❌ Actual Behavior When I press to open the keyboard manager, nothing happens. Other Software No response @niels9001 From the logs I see some RTL error, but the user doesn't use an RTL locale; any idea what happens here? /dup #31708 @RektSemih , can you please add to that original issue? Also please try the local builds and report there if you're able to? https://github.com/microsoft/PowerToys/issues/31708#issuecomment-1986515217
2025-04-01T04:34:41.849461
2024-09-04T07:59:58
2504591418
{ "authors": [ "acave-solweb", "kaszeta", "rmegal" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8456", "repo": "microsoft/PowerToys", "url": "https://github.com/microsoft/PowerToys/issues/34581" }
gharchive/issue
Most modules disabled after v0.84.0 update Microsoft PowerToys version 0.84.0 Installation method PowerToys auto-update Running as admin No Area(s) with issue? General Steps to reproduce Unknown, ran update within application and waited for it to finish. Tried restarting but didn't change module enabled state. ✔️ Expected Behavior No response ❌ Actual Behavior Most modules disabled, when almost all were enabled Other Software No response I have attached the runner log, which is much bigger than others after only a couple of hours uptime today. (477KB at upload for today, vs 10-12KB for previous days.) runner-log_2024-09-04.txt Additionally here is some of the settings files for the 'malformed json' error modules: Hosts-settings.json Env-settings.json FancyZones-settings.json From runner-log, this line indicates the modules were disabled at the same time I must have started the update from within the application: [2024-09-04 08:40:45.218391] [p-13720] [t-13724] [info] apply_general_settings: ... (See previous attachment for full line) The update-log only has four lines: update-log_2024-09-04 [2024-09-04 08:40:47.554095] [p-23252] [t-24072] [info] update logger is initialized [2024-09-04 08:40:58.663091] [p-23252] [t-24072] [error] Couldn't obtain github version info: Network error [2024-09-04 08:43:23.202284] [p-23208] [t-28212] [info] update logger is initialized [2024-09-04 08:44:30.182181] [p-17224] [t-17748] [info] update logger is initialized Additionally I restarted PowerToys again and the repeated errors about malformed JSON appear to have stopped, and my settings have persisted for now. runner-log_2024-09-04-SUPPLEMENT.txt Hopefully this isn't a Butterfly flipping a bit causing these issues 😅 Updating to v0.85.1 has resulted in the same problem, again. As before, it seems to disable the modules a couple of seconds before the logger is initialised. Fortunately, I ensured the settings had correctly backed-up after the last time (they hadn't backed-up previously), but ideally this bug should not occur. runner-log_2024-10-08.txt update-log_2024-10-08.txt Additional info: Windows 10 Pro - 22H2 - 19045.4291 I updated the app using the update button within the application manually. Saw this after updating to v0.86.0: Running on: Edition Windows 10 Enterprise Version 22H2 Installed on ‎2/‎14/‎2022 OS build 19045.5073 Experience Windows Feature Experience Pack 1000.19060.1000.0 I just started the application from the Windows notification for an update right after logging in, and most modules were disabled again, before I even tried updating. Is it possible that launching the settings too quickly is causing this? Or something else reset the active modules? I'm seeing the same thing as @rmegal; particularly, not only are some modules getting disabled on upgrade (FancyZones and Quick Accent, in particular), but modules I don't use (File Locksmith and Always on Top, in particular) are getting activated. PowerToys 0.81.1 Installed 12/24/2024 Edition Windows 11 Pro Version 24H2 Installed on ‎10/‎27/‎2024 OS build 26100.2033 Experience Windows Feature Experience Pack 1000.26100.23.0
2025-04-01T04:34:41.858314
2024-12-09T15:09:39
2727366396
{ "authors": [ "KadinMEastway", "TheJoeFin" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8457", "repo": "microsoft/PowerToys", "url": "https://github.com/microsoft/PowerToys/issues/36274" }
gharchive/issue
Snippets Description of the new feature / enhancement Similar to Raycast on macOS, Snippet features include the following things: Snippet menu I can create a snippet (name, keyword, snippet text) ~ Keyword referring to a sequence of inputted characters, not a key binding or shortcut I can insert placeholders into the snippet text using curly brackets {} to allow for additional customization I can manage/edit a snippet Machine-wide Functionality When I type a keyword in any text field, it will be auto expanded(replaced) with the corresponding snippet text I can use a hotkey to quick view all my snippets so I can directly choose the right snippets if I forgot the keywords When I pick a snippet from the quick view, the snippet text will auto paste to the text field I focused and copy the snippet text to clipboard This is very similar to another issue that was closed due to lack of author feedback (#26970), however it is NOT a duplicate of #1758 or #671. Please see supporting information for examples. Scenario when this would be used? Any time you have text/characters that are being used semi-frequently: Special characters/symbols (Emoticons, unicode characters, keyboard symbols) Form entry (Addresses, emails, phone numbers, account links, current date) Large messages (LinkedIn intros, customer feedback requests, thank you emails) Templates (Git commit messages, Project README.md's, GitHub issues, C++ Makefiles) Supporting information Here are links to the snippet extension in Raycast on MacOS detailing functionality and use examples: (~3 min YT Demo)https://www.youtube.com/watch?v=gSbZjxgl1Qc (Docs)https://www.raycast.com/core-features/snippets (Example Snippets)https://ray.so/snippets We've found some similar issues: Text Snippet and Expend #26970 , similarity score: 83% If any of the above are duplicates, please consider closing this issue out and adding additional context in the original issue. Note: You can give me feedback by 👍 or 👎 this comment. Leaving this open as the old issue was closed with no movement. (Will close in the event that #26970 is reopened) @KadinMEastway I did watch the video on Raycast and there are a few different issues in PowerToys which cover all of the features mentioned in that video. First would be expanding text: #5074 Then would be adding variables: like covered in #30902 Do you know of any tool for Windows which does the text replacement after typing a string of characters? I'm not sure if there are documented APIs for this. /needinfo @TheJoeFin thanks for linking the additional issue (#5074). This does seem much more similar, and as such I am fine with them falling under the same feature. Looking forward to when/if this becomes an official part of PowerToys, but I will instead look for another tool that offers the same functionality in the meantime. @KadinMEastway sounds good, if you do find a tool that does what you want please share what it is here, that'll be helpful for sure
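The request above describes the expansion mechanics in prose only. As a rough sketch of that behaviour (an editor's illustration, not an existing PowerToys API; every name below is invented for the example), the core keyword-to-snippet replacement with {placeholder} filling could look like this in TypeScript:

```ts
type Snippet = { name: string; keyword: string; text: string };

// Hypothetical expansion step: if the typed text ends with a snippet keyword,
// swap the keyword for the snippet body and fill any {placeholder} tokens.
function expand(typed: string, snippets: Snippet[], values: Record<string, string> = {}): string {
  for (const s of snippets) {
    if (!typed.endsWith(s.keyword)) continue;
    const body = s.text.replace(/\{(\w+)\}/g, (match, key) => values[key] ?? match);
    return typed.slice(0, -s.keyword.length) + body;
  }
  return typed;
}

// expand("see you ;sig", [{ name: "sig", keyword: ";sig", text: "Regards, {name}" }], { name: "Kadin" })
// returns "see you Regards, Kadin"
```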
2025-04-01T04:34:41.875634
2020-11-18T14:45:45
745723192
{ "authors": [ "craigjar", "danmarshall" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8458", "repo": "microsoft/SandDance", "url": "https://github.com/microsoft/SandDance/issues/284" }
gharchive/issue
Feature Request: Support for Databricks Please look into adding support or integration with Databricks. This could possibly just be a guide on how to configure it to get it working there. With the partnership between Microsoft and Databricks for the Azure Databricks service, making this visualization tool available there would take advantage of that partnership and provide benefits to MS as well as Databricks customers. Hi @craigjar, thanks for the suggestion. Can you provide a link to their API? I'm not familiar with Databricks, so if you can tell me where specifically this fits into this workflow, that would also be helpful. Databricks is similar to Jupyter: a browser-based notebook interface with an Apache Spark backend. Here is a link to the documentation where examples for several other Python libraries are provided https://docs.microsoft.com/en-us/azure/databricks/notebooks/visualizations/#in-this-section-2 Here is another doc which provides examples for HTML, D3, and SVG visualizations: https://docs.microsoft.com/en-us/azure/databricks/notebooks/visualizations/html-d3-and-svg
2025-04-01T04:34:41.877682
2024-09-23T19:36:34
2543491824
{ "authors": [ "XiaoshanHsj", "torlarse" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8459", "repo": "microsoft/SpeechT5", "url": "https://github.com/microsoft/SpeechT5/issues/92" }
gharchive/issue
Speech conversion: process whole input without stopping I have followed the example in the Huggingface blog post, and can replicate that voice conversion example in a Colab session. The voice conversion stops consistently in the middle of the example input sentence. Noob question: where do I start making the model convert all the audio in the input? Is this a software engineering problem (cut audio with some silence detection logic, loop through sending audio-only to converter), or is it more of a fine-tuning or parameter adjusting effort? The end goal would be to use the SpeechT5 model to replicate the product ElevenLabs is offering. There you can upload a sound file up to 50 MB, and expect a voice conversion that time-wise matches the whole input. Appreciate any hints or tips. @XiaoshanHsj sorry, I am not the author of SpeechT5.
2025-04-01T04:34:41.889776
2019-05-07T00:51:43
440976064
{ "authors": [ "AjawadMahmoud", "Gnbrkm41", "aaron-sonin", "adjlyadv", "ankitbko", "philselmer", "rw3iss", "vhanla", "zadjii-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8460", "repo": "microsoft/Terminal", "url": "https://github.com/microsoft/Terminal/pull/449" }
gharchive/pull-request
Fix build errors in VS2019 and Build Tools v142 Without this, building in VS2019 results in multiple function is not a member of std errors. This fixes that. 😃 EDIT: Important to mention I'm using VS2019 16.0.3 and build tools v. 142. Got it to build in VS2019, but debugging throws a memory exception, oddly. Please see #461 https://github.com/microsoft/Terminal/issues/461#issuecomment-489929615 @rw3iss Where do I get the WinRT C++ Library? @vhanla Read my comment? https://marketplace.visualstudio.com/items?itemName=CppWinRTTeam.cppwinrt101804264 nice work Do I still need to download VS2017 build tools? I'm still getting this error after applying your patch: The build tools for v142 (Platform Toolset = 'v142') cannot be found. To build using the v142 build tools, please install v142 build tools. Alternatively, you may upgrade to the current Visual Studio tools by selecting the Project menu or right-click the solution, and then selecting "Retarget solution". @AjawadMahmoud - I think you just need to change the build type from ARM64 to x64 or install the v143 build tools for ARM. /azp run I managed to build it in VS2019 without changing the target to v142 and without any code change. Followed this to install WinRT and then right-clicked on solution -> install missing feature. Is this supposed to be done? On VS17 it seemed to work fine without the include. @ankitbko seems to be suggesting it can be done without changes as well. You shouldn't install the cppwinrt VSIX anymore, that's been deprecated in favor of the nuget package. @philselmer, hmm.. I tried many scenarios but always ended up with some errors. I'll spend some more time on this later. You shouldn't install the cppwinrt VSIX anymore, that's been deprecated in favor of the nuget package. Is this really deprecated? It is still present in the official docs of WinRT. The Nuget package description mentions that MSBuild support is removed from the vsix, but I am not sure if there are other dependencies required by the project that are included in this vsix. Ahhh okay I see what they've done. I'm dealing with outdated information from the last time we met with them. I see now that the VSIX does have some natvis stuff to make debugging easier. When we were last working with them, we were helping get the build rules into nuget, but I didn't realize they kept the vsix. That's my bad.
2025-04-01T04:34:41.908349
2019-03-22T06:38:40
424067213
{ "authors": [ "foolisheep", "msftclas", "orta" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8461", "repo": "microsoft/TypeScript-Node-Starter", "url": "https://github.com/microsoft/TypeScript-Node-Starter/pull/183" }
gharchive/pull-request
Fix 2 build errors; use interfaces whenever possible. Fixes 2 build errors with the following error messages. src/controllers/user.ts:96:5 - error TS2345: Argument of type '{ email: any; password: any; }' is not assignable to parameter of type 'Partial'. Object literal may only specify known properties, and 'email' does not exist in type 'Partial'. 96 email: req.body.email, ~~~~~~~~~~~~~~~~~~~~~ src/controllers/user.ts:106:16 - error TS7006: Parameter 'err' implicitly has an 'any' type. 106 user.save((err) => { ~~~ npm ERR! code ELIFECYCLE Use interfaces whenever possible; refactor the user model to make it more understandable. Use the name User to represent the TypeScript type User. Use the name UserDocument to represent the mongoose documents which can be saved and retrieved from the database. Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. :x: foolisheep sign now. You have signed the CLA already but the status is still pending? Let us recheck it. Hi, I'm not able to reproduce these errors on master. It looks like you've made quite a few changes in this PR, which makes it tricky to merge right now (lots of merge conflicts). Is there any chance you can show me how to reproduce this issue? Going to close this due to lack of a response - thanks for trying to help out though!
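The PR text above names the User/UserDocument split but the diff itself is not included here. A minimal sketch of what that naming convention looks like, assuming mongoose and its Document/Model types (the fields and schema are illustrative, not copied from the PR):

```ts
import { Document, Model, Schema, model } from "mongoose";

// Plain TypeScript shape of a user, usable anywhere in the app.
export type User = {
  email: string;
  password: string;
};

// Mongoose document: the persisted variant that can be saved to and retrieved from the database.
export interface UserDocument extends User, Document {}

const userSchema = new Schema({
  email: { type: String, unique: true },
  password: String,
});

export const UserModel: Model<UserDocument> = model<UserDocument>("User", userSchema);
```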
2025-04-01T04:34:41.925652
2018-03-05T14:58:01
302331506
{ "authors": [ "FredyC", "JulianG", "MrStLouis", "RyanCavanaugh", "aleclarson", "danieldelcore", "gamegee", "hypeofpipe", "kelly-tock", "kitsonk", "leoyli", "mhegazy" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8462", "repo": "microsoft/TypeScript", "url": "https://github.com/microsoft/TypeScript/issues/22331" }
gharchive/issue
Conflicts between typings exposed globally TypeScript Version: 2.7.0-dev.201xxxxx Search Terms: cypress; beforeEach; global conflict; duplicate identifier Code Reproduction repo Expected behavior: To be able to use Cypress and Jest in the same project without explicitly specifying which @types packages to use. Actual behavior: See README in a repo Related Issues: https://github.com/cypress-io/cypress/issues/1087 As you can see in a linked issue, there were some workarounds suggested, but I am not happy about any of them. Setting skipLibCheck because of inclusion of a test framework feels mega dirty. Specifying what @types/* packages to use is not a good DX imo. I already have these listed in my project and I am sure it will grow over time. Worst part is it's not always easy to find which package needs to be included there. "types": ["node", "react", "jest", "debug", "googlemaps", "markerclustererplus"], Possible solution I think it might be worth to expand configuration of types to have a include/exclude variant. That way I could exclude mocha typings instead of listing everything I need to include. Also if I would want to write Cypress tests with TypeScript, I would be able to extend root tsconfig a include mocha there instead. The issue here is caused by the declaration files defining the same global functions. this can be remedied by either 1. both defining them the same exact type, 2. refactoring the declaration files to be modules instead of global declarations, or 3. by refactoring the declaration files to have a third common package both packages depend on and have the shared declarations. Hey, so you say this not something that can be fixed on TypeScript side? And what about my suggestion? Seeing a number of thumbs here I would say I am not the only one bothered by this. Why forcing the solution to the userland when there is such an easy fix? Two different test frameworks, not really possible to force either one of them to use exact same type as other I am not sure it can work in this case since there are global functions and you don't generally want to have imports in your tests I cannot imagine how that would work and it feels a rather weird coupling together two different test frameworks. I think it might be worth to expand configuration of types to have a include/exclude variant. you already have types which is a list. i do not expect the list of your @types package to be that large, that you can not enumerate them all modulo mocha in the "types" property in your tsconfig.json. @mhegazy Well, it certainly is possible but is it a good DX? I mean why not give another option that makes things easier and does not break anything? I am not arguing there is value. The question is the cost of adding a new feature both in terms of complexity to new users, and in terms of maintenance cost vs the value added. I would argue the real issue is the declaration of mocha and jest. Well, they are both testing frameworks relying on globals (sadly). It's not really something that can be changed without a breaking change. The complexity you mention is optional, the types config field can accept either array or object with include/exclude properties. For most people who are not solving this issue, it won't matter that this option is in there. @mhegazy Can you please reconsider? @FredyC did you end up with a workaround? @kelly-tock The workaround is obvious since there is no exclude option, you have to do the opposite. @mhegazy I don't understand why is to difficult for you to even talk about it. 
You did admit there is some value in it, so why cut off the discussion so suddenly? I have the same problem, and I don't even know which types to list in my tsconfig.json. I was already listing "chrome" because I'm using some chrome APIs. Any suggestions on how to find the list of types my project is using? Any suggestions on how to find the list of types my project is using? --traceResolution Is there any update? Having the same problem. I am afraid that as long as this stays closed, it will never get any attention, despite what @RyanCavanaugh said before. @FredyC found an interesting solution though - here https://github.com/nrwl/nx/issues/816 @FrozenPandaz What about the workaround of adding import 'jest'; to test-setup.ts? It seems to do the job and I haven't seen any side effects. Feels kinda far fetched. I've opted for a much simpler solution and have a cypress folder with its own package.json and yarn.lock. That way dependencies don't collide, as long as I don't need Jest when writing cypress tests, which would not make much sense anyway :) @FredyC, +1. I was bothered by the type conflicts here... The black list feature is helpful for mono-repos to have their @types properly managed at the root level. Please reconsider adding this support; since we have a white list, logically it should not be too difficult to add a black list, though I might be wrong. @hypeofpipe I wish I had found your comment a week ago. These type collisions have been murdering my productivity and now I can finally publish 🙏 I think what would really help in this case is to at least have the ability to turn off module @type declarations when they do clash. For example: Root tsconfig.json { "paths": { "mocha": ["typings/mocha.d.ts"] } } typings/mocha.ts declare module 'mocha'; We're really at the mercy of library authors to avoid using globals, but without another way for projects like Jest to express types for their globals or a way for users to intelligently turn them off, we're left with having to explicitly define each and every @type dependency in our tsconfig. I don't think this is a solution. Is there any update? I am having the same problem for the KeyboardEvent type. The global name is conflicting with @types/react and DOM types (from typescript). An open issue related to this can be found at #37053. Show your support with upvotes and comments.
2025-04-01T04:34:41.930816
2018-10-22T09:31:06
372451603
{ "authors": [ "OliverJAsh", "yhaskell" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8463", "repo": "microsoft/TypeScript", "url": "https://github.com/microsoft/TypeScript/issues/28040" }
gharchive/issue
Declarations are not generated for a high order component TypeScript Version: 3.2.0-dev.201xxxxx, 2.9.1, 3.1.3 Search Terms: declaration, high-order, hoc Code export function Executor(func: () => any) { class ExecutingClass { execute() { func(); } } return ExecutingClass; } Expected behavior: Running both tsc and tsc -d produces no errors. Actual behavior: When I'm running tsc on this code, everything works fine. When I'm running tsc -d on this code, I have this error: hoc.ts:1:17 - error TS4060: Return type of exported function has or is using private name 'ExecutingClass'. 1 export function Executor(func: () => any) { It is still possible to generate typings for this if you change code to this: export function Executor(func: () => any) { const ExecutingClass = class { execute() { func(); } } return ExecutingClass; } export default Executor; We were hit by this when we were migrating to use project references (composite enables declaration). We have several React higher-order components and each one returned an error similar to the one above. import React, { Component, ComponentType } from 'react'; // ERROR ❌ // Exported variable 'hoc' has or is using private name 'WrapperComponent'. export const hoc = <ComposedProps,>(ComposedComponent: ComponentType<ComposedProps>) => { class WrapperComponent extends Component<ComposedProps> { render() { return <ComposedComponent {...this.props} />; } } return WrapperComponent; }; We worked around it by switching from class declarations to class expressions (as per above), or from class components to function components.
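The closing comment mentions a second workaround, switching from class components to function components, without showing it. A hedged sketch of that variant, reusing the ComponentType import from the snippet above:

```tsx
import React, { ComponentType } from 'react';

// The wrapper is a plain arrow function, so the inferred return type is a
// function type that can be written structurally in the .d.ts output; no
// locally declared class name has to appear in the emitted declaration.
export const hoc = <ComposedProps,>(ComposedComponent: ComponentType<ComposedProps>) => {
  const WrapperComponent = (props: ComposedProps) => <ComposedComponent {...props} />;
  return WrapperComponent;
};
```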
2025-04-01T04:34:41.937444
2020-04-22T00:55:31
604375443
{ "authors": [ "DanielRosenwasser", "RyanCavanaugh", "ahejlsberg", "amcasey" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8464", "repo": "microsoft/TypeScript", "url": "https://github.com/microsoft/TypeScript/issues/38102" }
gharchive/issue
New error in 3.9: TS 7503 - can't index with number TypeScript Version: 3.9.0-dev.20200420 Search Terms: Intersection, Union, 7503 Code export type TypedArray = Int32Array | Uint8Array; export function isTypedArray(a: {}): a is Int32Array | Uint8Array { return a instanceof Int32Array || a instanceof Uint8Array; } export function flatten<T extends number|TypedArray>(arr: T) { if (isTypedArray(arr)) { arr[1]; // TS 7503 } } 3.8 behavior: No error 3.9 behavior: Error Playground Link: https://www.typescriptlang.org/play/?ts=3.9.0-dev.20200420&ssl=1&ssc=1&pln=11&pc=2#code/KYDwDg9gTgLgBDAnmYcAqzgBMCCUoCGicAvHAJIB2MAzAEx6HEA+cAqgJbUAcjRA3AChBoSLDgAzAK6UAxjA4RKcDgGcMKXPiIAKAgC44AbwC+ASkMEVqitXp8W7LjF7biRwXDhRgMKVGUrLlUYAjlgCAlbWgY3OGZWIMoQsNkIqM4eByETYVFoeGk5BSVJABsCGBhgSgAeNDhQasosG0opAFsAI2AoZg1sBwA+PXxDNDNjTxUonTUBrSZRqDNJjy8NuAJ8AG0ARgBdIS9ckyA Related Issues: I'm guessing this is another empty intersection bug, but I wasn't sure how to de-dup it. Inspiration is from https://github.com/tensorflow/tfjs/blob/95995e7fec478e86b4baf0d34f902d3e345ee54b/tfjs-core/src/util.ts#L144 FYI @ahejlsberg The error text is Element implicitly has an 'any' type because expression of type '1' can't be used to index type '(T & Int32Array) | (T & Uint8Array)'. Property '1' does not exist on type '(T & Int32Array) | (T & Uint8Array)'.(7053) @ahejlsberg not sure why this changed; I don't recall any changes around indexing. Observations: The narrowed type of arr is the same in 3.9 and 3.8.3 arr[1] has type number in 3.8.3 Changing to arr[1 as number] doesn't fix it (just produces an error about no numeric index signature instead) It appears to have something to do with discriminants and intersections. Simplified example: interface NumList { kind: 'n'; [x: number]: number; } interface StrList { kind: 's'; [x: number]: string; } export function foo<T extends NumList | StrList>(arr: T & NumList | T & StrList) { let zz = arr[1]; // Error } Error goes away without the kind property or when it is changed to something that isn't a discriminant. Didn't @orta look into a similar bug?
2025-04-01T04:34:41.944149
2020-05-10T01:42:24
615295573
{ "authors": [ "corollari", "j-oliveras" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8465", "repo": "microsoft/TypeScript", "url": "https://github.com/microsoft/TypeScript/issues/38448" }
gharchive/issue
Maps are not properly displayed on typescript playground This issue isn't related to the typescript compiler but to the playground present on typescript.org, which, according to #2246 and #17677, implements an instance of monaco-editor. I've opened this issue in this repository instead of the monaco-editor one because it seems that the problem is caused by the extensions to monaco-editor that have been added to build the playground. Feel free to close this issue if I'm mistaken. TypeScript Version: 3.8.3 Search Terms: Map console.log playground Expected behavior: The standard representation of the Map object Map { 1 => 32 } is printed on the console.log call. Actual behavior: An empty object {} is printed on the console.log call. Related Issues: There's an stackoverflow question from 2017 which complains about the same behaviour on a repl.it environment Code let m = new Map<number,number>() m.set(1,32) console.log(m) Output "use strict"; let m = new Map(); m.set(1, 32); console.log(m); Compiler Options { "compilerOptions": { "noImplicitAny": true, "strictNullChecks": true, "strictFunctionTypes": true, "strictPropertyInitialization": true, "strictBindCallApply": true, "noImplicitThis": true, "noImplicitReturns": true, "alwaysStrict": true, "esModuleInterop": true, "declaration": true, "experimentalDecorators": true, "emitDecoratorMetadata": true, "moduleResolution": 2, "target": "ES2017", "jsx": "React", "module": "ESNext" } } Playground Link: Provided I see the expected behavior using Firefox. I've just tested it on both Firefox 75.0 and Chromium 81.0.4044.122 and have been able to reproduce the issue on the two of them.
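Not part of the original thread, but a common way to sidestep the display problem while it is unfixed is to turn the Map into a plain array before logging; a small sketch:

```ts
const m = new Map<number, number>();
m.set(1, 32);

// Arrays survive the playground's console serialization, where the Map itself shows up as {}.
console.log(Array.from(m.entries())); // [[1, 32]]
console.log([...m]);                  // same result via spread
```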
2025-04-01T04:34:41.949931
2021-02-06T10:42:24
802668814
{ "authors": [ "didinele" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8466", "repo": "microsoft/TypeScript", "url": "https://github.com/microsoft/TypeScript/issues/42677" }
gharchive/issue
Maximum call stack size exceeded Bug Report 🔎 Search Terms 🕗 Version & Regression Information This is a crash This is the behavior in every version I tried I was unable to test this on prior versions because it depends on the brand new template literal types introduced in 4.1 ⏯ Playground Link Playground link with relevant code 💻 Code import { EventEmitter } from 'events'; interface Test<T> { on(event: 'error', listener: (error: any) => any): this; on(event: `__${string}`, listener: (data: string, isError: true) => any | ((data: T, isError: false) => any)): this; once(event: 'error', listener: (error: any) => any): this; once(event: `__${string}`, listener: (data: string, isError: true) => any | ((data: T, isError: false) => any)): this; emit(event: 'error', error: any): boolean; emit(event: `__${string}`, data: string, isError: true): boolean; emit(event: `__${string}`, data: T, isError: false): boolean; } class Test<T> extends EventEmitter { public test() { // In my case - I was using a package that could provide null, or any type of data, so they used `any` - but I was expecting a string. const data: any = 'test'; const key = `__${data as string}` as const; // In my case, I had 2 values that were nullable - the error, or the data. One of them is guranteed to be present, so "(a ?? b)!" is the way to go. const nullableProp1: T | null = 'test' as any; const nullableProp2: string | null = 'test' as any; // If you start writing out this line, it will compile with regular type errors until the 2nd parameter, at which point everything breaks. if (key) this.emit(key, (nullableProp1 ?? nullableProp2)!, true as boolean); } } 🙁 Actual behavior The Typescript compiler fails to pick an overload, and seems to recurse down past the maximum stack call size. This is reflected in the browser console when on the playground, and of course, locally 🙂 Expected behavior The Typescript compiler successfully picks an overload and continues on. Apologies, just tried typescript@next and it seems resolved in 4.2.0-dev.20210206.
2025-04-01T04:34:41.956493
2022-02-04T05:07:15
1123813778
{ "authors": [ "MartinJohns", "RyanCavanaugh", "yordis" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8467", "repo": "microsoft/TypeScript", "url": "https://github.com/microsoft/TypeScript/issues/47735" }
gharchive/issue
Conditional Type Failed extending never Bug Report 🔎 Search Terms Conditional Type 🕗 Version & Regression Information Version: v4.6.0-dev.20220116 ⏯ Playground Link Workbench Repro 💻 Code export type PathParam<T> = T extends never ? {} : { path: T }; export type QueryParam<T> = T extends never ? {} : { query: T }; export type BodyParam<T> = T extends never ? {} : { body: T }; export type OptionsParam = { options?: { signal?: AbortSignal; }; } export type UrlParams<TPath, TQuery> = PathParam<TPath> & QueryParam<TQuery>; export type OperationParams<TPath = unknown, TQuery = unknown, TBody = unknown> = OptionsParam & UrlParams<TPath, TQuery> & BodyParam<TBody>; type Example1 = UrlParams<{ name: string }, never> // ^? type Example2 = OperationParams<{ name: string }, { sort: string}, never> // ^? 🙁 Actual behavior The following Asserts happened: type Example1 = never type Example2 = never 🙂 Expected behavior The following Asserts to happen: type Example1 = { path: { name: string } }; type Example2 = OptionsParam & { path: { name: string } } & { query: { sort: string } } & { options?: { signal?: AbortSignal; }; This is working as intended. never is essentially an empty union type, and using union types with conditional type inference results in distributive behaviour. Distributing over an empty union results in an empty union, as there's nothing to distribute over. See the documentation: https://www.typescriptlang.org/docs/handbook/2/conditional-types.html#inferring-within-conditional-types @MartinJohns any chance you know how to accomplish what I am intending to do? export type PathParam<T> = [T] extends [never] ? {} : { path: T }; export type QueryParam<T> = [T] extends [never] ? {} : { query: T }; export type BodyParam<T> = [T] extends [never] ? {} : { body: T }; @yordis From the documentation I linked: Typically, distributivity is the desired behavior. To avoid that behavior, you can surround each side of the extends keyword with square brackets. I see it now 🤦🏻 , but it went over my head, my apologies, I literally went to sleep with that page open for 20 mins. Thank you so much for your patience and support.
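To make the distributivity explanation above concrete, here is a small side-by-side of the two conditional forms (type names invented for the illustration):

```ts
type Distributive<T> = T extends never ? true : false;
type NonDistributive<T> = [T] extends [never] ? true : false;

type A = Distributive<never>;    // never: an empty union leaves nothing to distribute over
type B = NonDistributive<never>; // true: wrapping both sides in a tuple disables distribution
```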
2025-04-01T04:34:41.959478
2022-03-07T21:08:59
1161926491
{ "authors": [ "RyanCavanaugh", "muuvmuuv", "shicks", "weswigham" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8468", "repo": "microsoft/TypeScript", "url": "https://github.com/microsoft/TypeScript/issues/48163" }
gharchive/issue
Do not issue Left side of comma operator is unused and has no side effects error on (0, expr) this-removing pattern Both our and babel's output emits (0, expr) in many places to unbind the this from the call that's about to be done. Ergo, the 0, expr pattern is not without side-effects, and is, in fact, a very highly used pattern that we probably shouldn't issue an error on. It's probably OK to special-case the literal-zero LHS as a signal to silence the error. Damn, any workaround for this? I am in a NX Workspace, and we are trying to compile a custom executor. > NX ⨯ Unable to compile TypeScript: apps/entergon/webpack.config.ts:9:19 - error TS2695: Left side of comma operator is unused and has no side effects. 9 const _default = (0, _webpack.createConfig)("entergon", (config)=>{ This is a duplicate of #35866 This should be able to be closed now that #51989 is merged. Thanks!
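For readers who have not seen the pattern under discussion, a small illustration of why emitters produce (0, expr) (an editor's example, not actual TypeScript or Babel output):

```ts
const obj = {
  name: "obj",
  getName() {
    // In a strict-mode module, `this` is undefined when the function is called bare.
    return this ? this.name : undefined;
  },
};

obj.getName();      // "obj": called as a method, so `this` is `obj`
(0, obj.getName)(); // undefined: the comma expression detaches the function from its receiver
```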
2025-04-01T04:34:41.984569
2023-02-03T02:35:11
1569113038
{ "authors": [ "ITenthusiasm", "RyanCavanaugh" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8469", "repo": "microsoft/TypeScript", "url": "https://github.com/microsoft/TypeScript/issues/52585" }
gharchive/issue
Allow Generic Child Classes to Satisfy Only a Subset of Parent Constructor Overloads Suggestion 🔍 Search Terms constructor Allow Generic Child Classes to Satisfy Only a Subset of Parent Constructor Overloads overloads extend constructor ✅ Viability Checklist My suggestion meets these guidelines: [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code [x] This wouldn't change the runtime behavior of existing JavaScript code [x] This could be implemented without emitting different JS based on the types of the expressions [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.) [x] This feature would agree with the rest of TypeScript's Design Goals. ⭐ Suggestion Currently, the following does not compile in TypeScript 4.9 interface ParentClassConstructor { new <T extends number>(arg: T): ParentClass; new <T extends string>(arg: T): ParentClass; } interface ParentClass { method(): void; } const ParentClass: ParentClassConstructor = class<T extends number | string> { constructor(arg: T) { // Do something with generic argument } method(): void { // Do anything } }; class ChildClass<T extends number> extends ParentClass<T> { constructor(number: T) { // Do something with number argument } } Instead, I get the following error when trying to work with the ChildClass: Type 'T' does not satisfy the constraint 'string'. Type 'number' is not assignable to type 'string'. In fact, changing the T type parameter for ChildClass to T extends number | string also fails. It seems that if the generic type parameter of the child class does not satisfy each of the possible generic constructors in the parent constructor interface, then the compiler will fail. My suggestion/feature request is that TypeScript would be updated to permit the use of child classes whose generic type parameters satisfy a subset (at least 1) of the generic overloads of the parent class constructor. This would allow the code above to successfully compile. If it's worth noting, the following is already legal currently: interface ParentClassConstructor { new (arg: number): ParentClass; new (arg: string): ParentClass; } interface ParentClass { method(): void; } const ParentClass: ParentClassConstructor = class { constructor(arg: number | string) { // Do something with argument } method(): void { // Do anything } }; interface ChildClassConstructor { new (arg: number): ChildClass; } interface ChildClass { method(): void; } const ChildClass: ChildClassConstructor = class extends ParentClass { constructor(number: number) { // Do something with argument super(number) // Only call `number`-based constructor of parent } } So I'm hoping this isn't an unreasonable request. 💻 Use Cases / Examples A feature like this would be incredibly helpful when it comes to maintenance of libraries using classes that require (or are improved by) well-defined generics. The best libraries give end-developers tools that give them the features they need without restricting their options, but these libraries often provide baked in solutions for common use cases. When these libraries depend on the concept of classes, there's often a base class -- with the baked-in solutions extending the base class for ease. This guarantees that some devs can use the baked in solution if it's sufficient, while others can reach for the more low-level option if needed. I'm working on a frontend tool that follows the idea of an "Observer" (e.g., IntersectionObserver, MutationObserver, etc.). 
But it's written with JS and interacts with the DOM. This class (let's call it BaseObserver) has multiple constructor overloads to support different kinds of options for setting up the observer. And these constructors need to be generic to make it clear what combinations of constructor argument types work and what combinations don't work. (Unions alone won't make it clear what combinations are or are not legal.) In some cases, generic constructors are also needed for DOM-related type inference. There are common use cases that this observer will be needed for. And rather than require the end-dev to come up with a solution on their own, I'm seeking to export other reliable, tested solutions as well. In the end, I'd like to have something like this: /* -------------------- Base Observer -------------------- */ interface BaseObserverConstructor { new <T extends number>(arg: T): BaseObserver; new <T extends string>(arg: T): BaseObserver; } interface BaseObserver { observe(element: HTMLElement): void; unobserve(element: HTMLElement): void; disconnect(): void; } export const BaseObserver: BaseObserverConstructor = class<T extends number | string> { constructor(arg: T) { // Do something with argument } observe(element: HTMLElement): void { // Observe the element } unobserve(element: HTMLElement): void { // Undo the observation } disconnect(): void { // `unobserve` everything } }; /* -------------------- Specific Observer -------------------- */ interface SpecificObserverConstructor { new <T extends number>(arg: T): SpecificObserver; } interface SpecificObserver extends BaseObserver { additionalNeededMethods(): void; } export const SpecificObserver: SpecificObserverConstructor = class<T extends number> extends BaseObserver<T> { constructor(arg: T) { // Do something with the argument super(arg); // ONLY call the `number`-based overload of `BaseObserver` } observe(element: HTMLElement): void { // Do any additional setup required super.observe(element); } unobserve(element: HTMLElement): void { // Do any additional teardown required super.unobserve(element); } additionalNeededMethods(): void { // Anything extra needed to guarantee this specific class works as needed } }; Although I don't know ahead of time what the developer will do with BaseObserver, I do know what the exact shape and purpose of SpecificObserver will need to be. So I can make SpecificObserver conform to a subset of the generic types supported by BaseObserver and provide that to the end-devs. But to do that, I need the compiler's permission. :sweat_smile: The example doesn't show motivation for why ParentClassConstructor should have generic call signatures, instead of being generic itself, or simply not being generic at all. Expanding on that would be helpful. For sure! As a note, the repository of the aforementioned tool is private, and I've been trying to keep the code examples small; but I'll do my best to expand more. (If it's insufficient, please let me know.) Continuing with the BaseObserver (which generally represents the project/problem actively being worked on), one of the primary ways that it operates is by attaching event listeners to the Document. More specifically, the user will indicate what type of event they want to listen for, and what to do when that event is emitted (through a supplied listener callback). I guess in a sense, you could call the observer a sophisticated, carefully designed wrapper around document.addEventListener/document.removeEventListener for accomplishing a specific purpose. 
Note that our tool also applies some "enhancements" to the provided listener callback(s). We want the users of our package to have all of the information that they may need while listening for these events. This necessitates a generic because there are various types of events (Event, InputEvent, MouseEvent, etc.). So we end up with something like this: type EventType = keyof DocumentEventMap; type ObserverEvent<T extends EventType> = DocumentEventMap[T]; type ObserverEventListener<T extends EventType> = (event: ObserverEvent<T>) => unknown; interface BaseObserverConstructor { new <T extends EventType>(type: T, listener: ObserverEventListener<T>): BaseObserver; } However, the end user may want to run a given listener not just for one type of event, but for multiple types of events. This, yet again, requires a generic. // ... interface BaseObserverConstructor { new <T extends EventType>(type: T, listener: ObserverEventListener<T>): BaseObserver; new <T extends ReadonlyArray<EventType>>(types: T, ObserverEventListener<T[number]>): BaseObserver; } There is another use case for another generic overload as well. But if we stop here, we already see that we're in a situation where we have 2 different type guards. Per the earlier comment, this will be impossible to extend (with TypeScript's current capabilities). Inside the class itself, this isn't really a big deal, as the generic is really only used in the constructor: // ... interface BaseObserver { observe(element: HTMLElement): void; unobserve(element: HTMLElement): void; disconnect(): void; } export const BaseObserver: BaseObserverConstructor = class<T extends EventType | ReadonlyArray<EventType>> { constructor(typeOrTypes: T, listener: ObserverEventListener<EventType>) { // Do something with arguments } observe(element: HTMLElement): void { // Observe the element } unobserve(element: HTMLElement): void { // Undo the observation } disconnect(): void { // `unobserve` everything } }; (For the same reason, making the class a generic won't do any good. It's the constructors that must be generic.) But this is a big deal when: 1) We try to extend our own class internally to provide baked-in solutions, 2) An end-developer tries to extend our low-level class, or 3) An end developer wants accurate types, guards, and IntelliSense on the arguments which they pass to the constructor. (As mentioned earlier, a mere union of types here is insufficient.) Currently, the first 2 options are impossible because TypeScript does not support this kind of extension. The third reason is the most important: We want to provide a good developer experience to our end users. And for that reason, generic constructors cannot be removed. (Just as it would be very bad DX if document.addEventListener couldn't tell you that your listener is passed a MouseEvent when listening for "mouseover", so it is with our BaseObserver and its extensions. And there's more to our observers than just calling document.addEventListener.) An alternative to this solution is for BaseObserver to be reduced from multiple constructors to 1 (the first constructor), and to have the end-developer create as many instances of BaseObserver as are necessary to accomplish what would have been possible with a proper generic, overloaded constructor. This would resolve the type errors. However, this developer experience is much worse. There are some things that would make more sense to group together under one observer. 
And it's technically more efficient/performant (assuming the class is designed well) to just instantiate one class rather than create multiple classes that basically do the same thing barring some minor difference like event types. Is that helpful? Unhelpful? I've been thinking about this a bit more. I'm wondering if a potential solution to this problem would be a "catch-all constructor". Currently, when we're overloading functions, the final function definition actually doesn't get exposed to the user at all. It's simply a "catch-all" that's intended to help the developer carefully write the correct implementation of the function for all potential use cases. function handleNumberOrString(num: number); function handleNumberOrString(str: string); function handleNumberOrString(arg: number | string) { // Account for all scenarios } The constructor has this same behavior if it's defined within the class body. But if the constructor overloads are all defined as part of an interface, then there's no "catch-all" opportunity for the developer to take advantage of and/or work with -- despite the fact that the developer will effectively need to account for a catch-all within the actual body of their constructor. If we could define a catch-all constructor type that didn't get exposed to users, then maybe instead of type checking every possible constructor in the interface, TS could just check the "catch-all constructor" during the use of extends. (As a bonus, this could also potentially create a way for TS to type guard class expressions relying on a constructor interface, because it could throw errors if the constructor in the class body was incompatible with the catch-all constructor in the interface.) Does that make sense? This isn't really making sense to me, to be honest. Trying to paste all your examples into a coherent snippet, I either get no errors, or correct errors (bona fide constraint violations). A simpler example that still demonstrates the thing you'd like to do (but aren't allowed to) would be really useful. Yeah, sorry. In the end I realized that the last comment I gave wouldn't really work out as an idea for a solution. Maybe I should minimize it? Or are you saying that the previous comments (the ones before this guy) that I gave earlier also don't make sense? And that you'd want those in a single snippet? A single self-contained snippet showing what you're trying to do / why it's not working is always ideal, right. The class rule is pretty strict. In the general case, it's not sufficient to show that you only match one overload or the other, since an actual invocation could end up matching either overload. If you have two overloads, f(x: T) and f(x: U) and only match on the second overload for U, an actual call could be f(T & U) and match the T overload instead. In this specific example that's not possible, but it's not really easy to see why. Anyway, you can rewrite this to a working form that I think captures most of the intent here: type NumericIndexOf<T> = T extends { [n: number]: infer E } ? E : never; interface BaseObserverConstructor<T> { new<T>(type: T & EventType, listener: ObserverEventListener<T & EventType>): BaseObserver; new<T>(types: T & ReadonlyArray<EventType>, listener: ObserverEventListener<NumericIndexOf<T> & EventType>): BaseObserver; } Wow, that's some pretty intense magic right there. :fire: Thanks for the snippet! 
Unfortunately, there's one more overload that maps the ReadonlyArray of types to a ReadonlyArray of listeners (whose function arguments correspond to the provided types, one-by-one). Is that possible with this approach? I tried to extend what you showed me but couldn't get anything working. If you have two overloads, f(x: T) and f(x: U) and only match on the second overload for U, an actual call could be f(T & U) and match the T overload instead. That is indeed a dilemma. :thinking: However, TS doesn't account for this when it comes to non-generic function overloads, does it? Consider this scenario: function doSomethingWithArgs(params: (...args: any[]) => any): void; function doSomethingWithArgs(params: { prop: string }): void; function doSomethingWithArgs(params: ((...args: any[]) => any) | { prop: string }): void { if ("prop" in params) return console.log("Object "); return console.log("Function"); } doSomethingWithArgs(() => {}); doSomethingWithArgs({prop: "string" }); const myFunc = () => {}; myFunc.prop = "jank-string"; doSomethingWithArgs(myFunc); All 3 invocations of this function are allowed by TypeScript. However, unexpected results may come from the third invocation. My understanding was that to an extent, TS left it up to developers to carefully craft their implementation of their overloads to prevent such problems. Devs are also encouraged to list more specific overloads first so that the correct, most simple overload is always displayed to the end user. Is this considered a bug (whether in TS or in the implementation)? If it isn't a bug, would it make sense to allow similar things for generics?
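For what it's worth, here is an untested sketch of how a third overload (an array of types paired with an array of listeners, matched position by position) might be written in the same style as the intersection trick above. The ListenersFor helper and the Sketch-suffixed interface name are my own inventions, it reuses EventType, ObserverEventListener, NumericIndexOf, and BaseObserver from the earlier snippets, and I don't know whether it actually gets past the class assignability check that started this thread:

// Maps a tuple/array of event types to a tuple/array of matching listeners.
type ListenersFor<T extends ReadonlyArray<EventType>> = {
    [K in keyof T]: ObserverEventListener<T[K] & EventType>;
};

interface BaseObserverConstructorSketch {
    new <T>(type: T & EventType, listener: ObserverEventListener<T & EventType>): BaseObserver;
    new <T>(types: T & ReadonlyArray<EventType>, listener: ObserverEventListener<NumericIndexOf<T> & EventType>): BaseObserver;
    // Hypothetical third overload: one listener per event type, position by position.
    new <T extends ReadonlyArray<EventType>>(types: T, listeners: ListenersFor<T>): BaseObserver;
}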
2025-04-01T04:34:42.001452
2023-02-24T10:57:45
1598403850
{ "authors": [ "andrewbranch", "mkubilayk" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8470", "repo": "microsoft/TypeScript", "url": "https://github.com/microsoft/TypeScript/issues/52950" }
gharchive/issue
Using baseUrl breaks package exports lookup Bug Report A producer package exposes multiple entrypoints using Node's package exports. When this package is consumed using the following compiler options, TypeScript fails to resolve the entry points. { "compilerOptions": { "module": "commonjs", "moduleResolution": "node16", "baseUrl": "/path/to/modules/store", } } This problem doesn't happen when baseUrl is absent -- relying on the default lookup behavior. 🔎 Search Terms baseUrl, base url, package exports, module resolution 🕗 Version & Regression Information This is the behavior in every version I tried: 4.7, 4.8 and 4.9. I was unable to test this on prior versions because "node16" module resolution was first introduced in 4.7. ⏯ Playground Link http://github.com/mkubilayk/typescript-package-exports-baseurl 💻 Code Please see the full code and reproduction steps in the repo but the most relevant bits are as follows. Producer: { "name": "producer-with-package-exports", "version": "1.0.0", "main": "out/index.js", "types": "out/index.d.ts", "exports": { ".": { "types": "./out/index.d.ts", "require": "./out/index.js" }, "./entrypoint-a": { "types": "./out/entrypoint-a.d.ts", "require": "./out/entrypoint-a.js" } } } Consumer (targetting "commonjs"): import { something } from "producer-with-package-exports"; import { somethingElse } from "producer-with-package-exports/entrypoint-a"; 🙁 Actual behavior When using the baseUrl compiler option in the consumer "producer-with-package-exports/entrypoint-a" fails to resolve: import { somethingElse } from "producer-with-package-exports/entrypoint-a"; // ^ // Cannot find module 'producer-with-package-exports/entrypoint-a' or its corresponding type declarations. ts(2307) 🙂 Expected behavior Using baseUrl should not stop TypeScript from looking at package exports in the producer. So "producer-with-package-exports/entrypoint-a" should be resolved correctly -- matching the behaviour when baseUrl is not present. I have also verified the producer package using @andrewbranch's https://arethetypeswrong.github.io (💙) and got the following report as expected: Expand to see the full report { "packageName": "producer-with-package-exports", "containsTypes": true, "entrypointResolutions": { ".": { "node10": { "name": ".", "resolution": { "fileName": "/node_modules/producer-with-package-exports/out/index.d.ts", "isJson": false, "isTypeScript": true }, "implementationResolution": { "fileName": "/node_modules/producer-with-package-exports/out/index.js", "isJson": false, "isTypeScript": false }, "trace": [ "======== Resolving module 'producer-with-package-exports' from '/index.ts'. 
========", "Explicitly specified module resolution kind: 'Node10'.", "Loading module 'producer-with-package-exports' from 'node_modules' folder, target file types: TypeScript, Declaration.", "Found 'package.json' at '/node_modules/producer-with-package-exports/package.json'.", "File '/node_modules/producer-with-package-exports.ts' does not exist.", "File '/node_modules/producer-with-package-exports.tsx' does not exist.", "File '/node_modules/producer-with-package-exports.d.ts' does not exist.", "'package.json' does not have a 'typesVersions' field.", "'package.json' does not have a 'typings' field.", "'package.json' has 'types' field 'out/index.d.ts' that references '/node_modules/producer-with-package-exports/out/index.d.ts'.", "File '/node_modules/producer-with-package-exports/out/index.d.ts' exists - use it as a name resolution result.", "======== Module name 'producer-with-package-exports' was successfully resolved to '/node_modules/producer-with-package-exports/out/index.d.ts' with Package ID<EMAIL_ADDRESS>========" ] }, "node16-cjs": { "name": ".", "resolution": { "fileName": "/node_modules/producer-with-package-exports/out/index.d.ts", "moduleKind": 1, "isJson": false, "isTypeScript": true }, "implementationResolution": { "fileName": "/node_modules/producer-with-package-exports/out/index.js", "moduleKind": 1, "isJson": false, "isTypeScript": false }, "trace": [ "======== Resolving module 'producer-with-package-exports' from '/index.ts'. ========", "Explicitly specified module resolution kind: 'Node16'.", "Resolving in CJS mode with conditions 'node', 'require', 'types'.", "File '/package.json' does not exist.", "Loading module 'producer-with-package-exports' from 'node_modules' folder, target file types: TypeScript, JavaScript, Declaration, JSON.", "Found 'package.json' at '/node_modules/producer-with-package-exports/package.json'.", "Entering conditional exports.", "Matched 'exports' condition 'types'.", "Using 'exports' subpath '.' with target './out/index.d.ts'.", "File '/node_modules/producer-with-package-exports/out/index.d.ts' exists - use it as a name resolution result.", "Resolved under condition 'types'.", "Exiting conditional exports.", "======== Module name 'producer-with-package-exports' was successfully resolved to '/node_modules/producer-with-package-exports/out/index.d.ts' with Package ID<EMAIL_ADDRESS>========" ] }, "node16-esm": { "name": ".", "resolution": { "fileName": "/node_modules/producer-with-package-exports/out/index.d.ts", "moduleKind": 1, "isJson": false, "isTypeScript": true }, "trace": [ "======== Resolving module 'producer-with-package-exports' from '/index.mts'. ========", "Explicitly specified module resolution kind: 'Node16'.", "Resolving in ESM mode with conditions 'node', 'import', 'types'.", "File '/package.json' does not exist.", "Loading module 'producer-with-package-exports' from 'node_modules' folder, target file types: TypeScript, JavaScript, Declaration, JSON.", "Found 'package.json' at '/node_modules/producer-with-package-exports/package.json'.", "Entering conditional exports.", "Matched 'exports' condition 'types'.", "Using 'exports' subpath '.' 
with target './out/index.d.ts'.", "File '/node_modules/producer-with-package-exports/out/index.d.ts' exists - use it as a name resolution result.", "Resolved under condition 'types'.", "Exiting conditional exports.", "======== Module name 'producer-with-package-exports' was successfully resolved to '/node_modules/producer-with-package-exports/out/index.d.ts' with Package ID<EMAIL_ADDRESS>========" ] }, "bundler": { "name": ".", "resolution": { "fileName": "/node_modules/producer-with-package-exports/out/index.d.ts", "isJson": false, "isTypeScript": true }, "trace": [ "======== Resolving module 'producer-with-package-exports' from '/index.ts'. ========", "Explicitly specified module resolution kind: 'Bundler'.", "File '/package.json' does not exist.", "Loading module 'producer-with-package-exports' from 'node_modules' folder, target file types: TypeScript, JavaScript, Declaration, JSON.", "Found 'package.json' at '/node_modules/producer-with-package-exports/package.json'.", "Entering conditional exports.", "Matched 'exports' condition 'types'.", "Using 'exports' subpath '.' with target './out/index.d.ts'.", "File '/node_modules/producer-with-package-exports/out/index.d.ts' exists - use it as a name resolution result.", "Resolved under condition 'types'.", "Exiting conditional exports.", "======== Module name 'producer-with-package-exports' was successfully resolved to '/node_modules/producer-with-package-exports/out/index.d.ts' with Package ID<EMAIL_ADDRESS>========" ] } }, "./entrypoint-a": { "node10": { "name": "./entrypoint-a", "trace": [ "======== Resolving module 'producer-with-package-exports/entrypoint-a' from '/index.ts'. ========", "Explicitly specified module resolution kind: 'Node10'.", "Loading module 'producer-with-package-exports/entrypoint-a' from 'node_modules' folder, target file types: TypeScript, Declaration.", "Found 'package.json' at '/node_modules/producer-with-package-exports/package.json'.", "'package.json' does not have a 'typesVersions' field.", "File '/node_modules/producer-with-package-exports/entrypoint-a.ts' does not exist.", "File '/node_modules/producer-with-package-exports/entrypoint-a.tsx' does not exist.", "File '/node_modules/producer-with-package-exports/entrypoint-a.d.ts' does not exist.", "'package.json' does not have a 'typings' field.", "'package.json' has 'types' field 'out/index.d.ts' that references '/node_modules/producer-with-package-exports/entrypoint-a/out/index.d.ts'.", "Loading module as file / folder, candidate module location '/node_modules/producer-with-package-exports/entrypoint-a/out/index.d.ts', target file types: TypeScript, Declaration.", "File name '/node_modules/producer-with-package-exports/entrypoint-a/out/index.d.ts' has a '.d.ts' extension - stripping it.", "Directory '/node_modules/@types' does not exist, skipping all lookups in it.", "Loading module 'producer-with-package-exports/entrypoint-a' from 'node_modules' folder, target file types: JavaScript, JSON.", "Found 'package.json' at '/node_modules/producer-with-package-exports/package.json'.", "'package.json' does not have a 'typesVersions' field.", "File '/node_modules/producer-with-package-exports/entrypoint-a.js' does not exist.", "File '/node_modules/producer-with-package-exports/entrypoint-a.jsx' does not exist.", "'package.json' has 'main' field 'out/index.js' that references '/node_modules/producer-with-package-exports/entrypoint-a/out/index.js'.", "Loading module as file / folder, candidate module location 
'/node_modules/producer-with-package-exports/entrypoint-a/out/index.js', target file types: JavaScript, JSON.", "File name '/node_modules/producer-with-package-exports/entrypoint-a/out/index.js' has a '.js' extension - stripping it.", "======== Module name 'producer-with-package-exports/entrypoint-a' was not resolved. ========" ] }, "node16-cjs": { "name": "./entrypoint-a", "resolution": { "fileName": "/node_modules/producer-with-package-exports/out/entrypoint-a.d.ts", "moduleKind": 1, "isJson": false, "isTypeScript": true }, "implementationResolution": { "fileName": "/node_modules/producer-with-package-exports/out/entrypoint-a.js", "moduleKind": 1, "isJson": false, "isTypeScript": false }, "trace": [ "======== Resolving module 'producer-with-package-exports/entrypoint-a' from '/index.ts'. ========", "Explicitly specified module resolution kind: 'Node16'.", "Resolving in CJS mode with conditions 'node', 'require', 'types'.", "File '/package.json' does not exist.", "Loading module 'producer-with-package-exports/entrypoint-a' from 'node_modules' folder, target file types: TypeScript, JavaScript, Declaration, JSON.", "Found 'package.json' at '/node_modules/producer-with-package-exports/package.json'.", "Entering conditional exports.", "Matched 'exports' condition 'types'.", "Using 'exports' subpath './entrypoint-a' with target './out/entrypoint-a.d.ts'.", "File '/node_modules/producer-with-package-exports/out/entrypoint-a.d.ts' exists - use it as a name resolution result.", "Resolved under condition 'types'.", "Exiting conditional exports.", "======== Module name 'producer-with-package-exports/entrypoint-a' was successfully resolved to '/node_modules/producer-with-package-exports/out/entrypoint-a.d.ts' with Package ID<EMAIL_ADDRESS>========" ] }, "node16-esm": { "name": "./entrypoint-a", "resolution": { "fileName": "/node_modules/producer-with-package-exports/out/entrypoint-a.d.ts", "moduleKind": 1, "isJson": false, "isTypeScript": true }, "trace": [ "======== Resolving module 'producer-with-package-exports/entrypoint-a' from '/index.mts'. ========", "Explicitly specified module resolution kind: 'Node16'.", "Resolving in ESM mode with conditions 'node', 'import', 'types'.", "File '/package.json' does not exist.", "Loading module 'producer-with-package-exports/entrypoint-a' from 'node_modules' folder, target file types: TypeScript, JavaScript, Declaration, JSON.", "Found 'package.json' at '/node_modules/producer-with-package-exports/package.json'.", "Entering conditional exports.", "Matched 'exports' condition 'types'.", "Using 'exports' subpath './entrypoint-a' with target './out/entrypoint-a.d.ts'.", "File '/node_modules/producer-with-package-exports/out/entrypoint-a.d.ts' exists - use it as a name resolution result.", "Resolved under condition 'types'.", "Exiting conditional exports.", "======== Module name 'producer-with-package-exports/entrypoint-a' was successfully resolved to '/node_modules/producer-with-package-exports/out/entrypoint-a.d.ts' with Package ID<EMAIL_ADDRESS>========" ] }, "bundler": { "name": "./entrypoint-a", "resolution": { "fileName": "/node_modules/producer-with-package-exports/out/entrypoint-a.d.ts", "isJson": false, "isTypeScript": true }, "trace": [ "======== Resolving module 'producer-with-package-exports/entrypoint-a' from '/index.ts'. 
========", "Explicitly specified module resolution kind: 'Bundler'.", "File '/package.json' does not exist.", "Loading module 'producer-with-package-exports/entrypoint-a' from 'node_modules' folder, target file types: TypeScript, JavaScript, Declaration, JSON.", "Found 'package.json' at '/node_modules/producer-with-package-exports/package.json'.", "Entering conditional exports.", "Matched 'exports' condition 'types'.", "Using 'exports' subpath './entrypoint-a' with target './out/entrypoint-a.d.ts'.", "File '/node_modules/producer-with-package-exports/out/entrypoint-a.d.ts' exists - use it as a name resolution result.", "Resolved under condition 'types'.", "Exiting conditional exports.", "======== Module name 'producer-with-package-exports/entrypoint-a' was successfully resolved to '/node_modules/producer-with-package-exports/out/entrypoint-a.d.ts' with Package ID<EMAIL_ADDRESS>========" ] } } }, "fileExports": { "/node_modules/producer-with-package-exports/out/index.d.ts": { "something": { "name": "something", "flags": 2, "valueDeclarationRange": [ 21, 35 ] } }, "/node_modules/producer-with-package-exports/out/index.js": { "__esModule": { "name": "__esModule", "flags": 1048580, "valueDeclarationRange": [ 14, 75 ] }, "something": { "name": "something", "flags": 1048580, "valueDeclarationRange": [ 105, 122 ] } }, "/node_modules/producer-with-package-exports/out/entrypoint-a.d.ts": { "somethingElse": { "name": "somethingElse", "flags": 2, "valueDeclarationRange": [ 21, 39 ] } }, "/node_modules/producer-with-package-exports/out/entrypoint-a.js": { "__esModule": { "name": "__esModule", "flags": 1048580, "valueDeclarationRange": [ 14, 75 ] }, "somethingElse": { "name": "somethingElse", "flags": 1048580, "valueDeclarationRange": [ 109, 130 ] } } } } baseUrl is not a replacement for node_modules package lookups; it’s a replacement for relative paths. The implicit reasoning you have here is that when you have an import specifier like "package/subpath", we should always be looking for "/{some resolved prefix}/package/package.json" in order to figure out what to do with the /subpath portion of the path. But this logic is specific to node_modules lookups, and does not apply to baseUrl lookups (or paths, which are the other way you can resolve non-relative names). baseUrl is literally just a prefix to convert non-relative module specifiers to absolute ones, and was designed for use in AMD module loaders for browsers, like RequireJS. It shouldn’t be used for anything involving Node.
2025-04-01T04:34:42.004874
2019-11-05T21:27:11
518047736
{ "authors": [ "ahejlsberg" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8471", "repo": "microsoft/TypeScript", "url": "https://github.com/microsoft/TypeScript/pull/34927" }
gharchive/pull-request
Fix excess property checking for unions with index signatures This PR revises our excess property checking logic to properly handle union types with index signatures. For example, both of the following now error where previously they didn't: const obj1: { [x: string]: number } | { [x: number]: number } = { a: 'abc' }; // Error const obj2: { [x: string]: number } | { a: number } = { a: 5, c: 'abc' }; // Error Fixes #34611. @typescript-bot test this @typescript-bot run dt @typescript-bot perf test this Performance is unaffected, so we're fine there. RWC tests have a few error elaboration changes, but they look fine. DT tests reveal genuine errors in the tests of three packages, xml, mergerino, and styled-system__css, that apparently were masked by our old excess property checking logic. Adding the Breaking Change label since we now catch more issues.
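For readers wondering how far the tightened check reaches, a quick sketch of cases I'd expect to still be accepted (these are not taken from the PR's test suite):

const ok1: { [x: string]: number } | { [x: number]: number } = { a: 1 };   // 'a' fits the string index signature, so no excess property error
const ok2: { [x: string]: number } | { a: number } = { a: 5 };             // exact match for the second union member
const ok3: { [x: string]: number } | { a: number } = { a: 5, b: 10 };      // 'b' fits the string index signature of the first member

Only literals like the two examples above, where a property fits neither a declared property nor an index signature of any union member, should now be flagged.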
2025-04-01T04:34:42.009909
2020-02-05T01:39:47
560089036
{ "authors": [ "ahejlsberg", "amcasey" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8472", "repo": "microsoft/TypeScript", "url": "https://github.com/microsoft/TypeScript/pull/36622" }
gharchive/pull-request
Cache results of isGenericObjectType and isGenericIndexType With this PR we cache the results of isGenericObjectType and isGenericIndexType for unions and intersections. This improves total compile time of the repro in #36564 by about 6.5%. @typescript-bot perf test this @ahejlsberg How do these predicates relate to couldContainTypeVariables? Is there a reason these are ObjectFlags and that uses a property? Context: instantiateType is frequently called on types that can't be instantiated and it seems like we could save some time (~2% on the material-ui benchmark above, in my naive implementation) by checking one or more of these cached values. How do these predicates relate to couldContainTypeVariables? Is there a reason these are ObjectFlags and that uses a property? The isGenericObjectType and isGenericIndexType functions are used to determine whether conditional types, indexed access types, and index types should be resolved, or whether resolution should be deferred in a higher-order type of the particular kind. It is possible for these functions to return false even if the given type references type variables, e.g. for the type { foo: T } where T is a type variable. The couldContainTypeVariables function computes whether a type possibly contains type variables, i.e. whether it possibly could be affected by instantiation. This function may return true even when the type references no type variables because it isn't possible to explore all types fully in depth. There's no good reason for this function to use a dedicated property. An ObjectFlags flag would be better. In scenarios that are heavy on union and intersection type instantiation, it might make sense to first call couldContainTypeVariables since the cached result would allow us to bail out quicker. Follow-up questions: Why do we only set those flags on union and intersection types? It looks like we might do non-trivial work for other types? Do we need both sets of flags? I would guess that only one of the two makes sense for any given type or that, if both make sense, only the || of the values matters.
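For anyone following along, a purely illustrative sketch of the flag-based caching pattern under discussion — the names are made up and this is not the checker's real code; the idea is just one bit to record that the answer has been computed and a second bit to record what the answer was:

// Illustrative only; not TypeScript compiler internals.
const enum GenericCheckFlags {
    IsGenericObjectTypeComputed = 1 << 0,
    IsGenericObjectType = 1 << 1,
}

interface TypeSketch { id: number; }
interface UnionOrIntersectionTypeSketch extends TypeSketch {
    cachedFlags: number;
    types: TypeSketch[];
}

function isGenericObjectTypeCached(
    type: UnionOrIntersectionTypeSketch,
    isGenericObjectType: (t: TypeSketch) => boolean
): boolean {
    if (!(type.cachedFlags & GenericCheckFlags.IsGenericObjectTypeComputed)) {
        // Compute once over the constituents, then remember both "computed" and the result itself.
        type.cachedFlags |= GenericCheckFlags.IsGenericObjectTypeComputed |
            (type.types.some(isGenericObjectType) ? GenericCheckFlags.IsGenericObjectType : 0);
    }
    return !!(type.cachedFlags & GenericCheckFlags.IsGenericObjectType);
}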
2025-04-01T04:34:42.011242
2022-04-28T07:23:07
1218313006
{ "authors": [ "Andarist", "jakebailey" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8473", "repo": "microsoft/TypeScript", "url": "https://github.com/microsoft/TypeScript/pull/48869" }
gharchive/pull-request
Add visible alias statements for non-visible binding patterns when emitting declaration Fixes https://github.com/microsoft/TypeScript/issues/30598 Well, I'm confident enough that I'll just merge this. Thanks for working on this and waiting.
2025-04-01T04:34:42.018731
2022-05-09T14:27:40
1229817968
{ "authors": [ "Andarist", "RyanCavanaugh", "jakebailey" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8474", "repo": "microsoft/TypeScript", "url": "https://github.com/microsoft/TypeScript/pull/49027" }
gharchive/pull-request
Automatically enable debugMode when recording with Replay I'm not sure if you have heard about Replay (https://www.replay.io/) and I will totally understand if you decide to reject this PR. However, the cost is small and it would make my life way easier :p as I'm using Replay to debug TypeScript quite a bit lately. I also highly encourage anyone interested in trying it out as it makes diagnosing and exploring code much easier. For context - it is a time travel debugger that actually works with arbitrary scripts 😉 It's a little annoying that we have to probe process like that. Is there any other signal? What kind of an automated signal could we probe? I mean - is there even any other way to accomplish this than to probe a global (either directly checking something on process, or similar, or process.env)? I think that the Replay team could be open to introducing something else - but I truly don't know what else can be used to signal this. I was hoping for an environment variable 😅 Is there any particular reason why you are more OK with an env var than with a process property? For context - I originally asked them if there is any env var that could be used to detect this because that was intuitively what I was expecting to be exposed. That's how I've learned that there is this process property available. So my intuition was kinda like yours - but I don't mind any of those ways to detect something. It seems that this global object (process.recordreplay) has some additional methods exposed on it (like you can get a predicted ID of the recording, the current "pause point" and stuff like that). So from their PoV, it makes sense that this is an object on the process as they wouldn't be able to expose methods using env vars alone. So exposing an additional env var could be seen as somewhat redundant from their PoV here as this global object will exist anyway. If you are strongly opposed to checking this global object then I can circle back to the Replay team and ask if there is any chance that they would add some env var, but I wonder if that's truly necessary. @RyanCavanaugh any more thoughts on this one? If you think that it's not the right time for this kind of a thing and that we can revisit this in the future, when/if Replay gets more popular, then I'm OK with closing this one. I got pinged about this, so I'm taking a look. I personally don't see a problem with this, though I do think that it'd be nice to see some sort of documentation that implies that this is "the way" to detect Replay and that it's not going to disappear or change in the future. I've been emailing the folks at Replay and while this is experimental and the docs aren't yet complete, it does seem like probing process is the thing to do. That being said, I do wonder if we'd be better off defining: declare global { namespace NodeJS { interface Process { recordreplay?: { currentPointURL?: (args: never) => void; } } } } In sys.ts then checking typeof process.recordreplay?.currentPointURL === "function". Or, maybe just typeof (process as any).recordreplay?.currentPointURL === "function". In any case, in lieu of official docs, I think this looks alright. Realizing I misread and we could really just check typeof (process as any).recordreplay !== "undefined" or even (process as any).recordreplay. The currentPointURL check was just an extra bit of caution on my side - so this check wouldn't yield false positives. I think it's very unlikely that this would actually yield false positives even with a simplified check though.
So I adjusted the condition as you suggested. I also went with the as any cast, since it keeps the whole thing local to that one line of code.
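For reference, a sketch of the rough shape the final condition takes with the simplified probe and the local as any cast (this is not necessarily the exact merged line):

// Sketch only; assumes `process` may or may not exist in the host environment.
declare const process: any;

const isRecordingWithReplay =
    typeof process !== "undefined" && !!(process as any).recordreplay;

// Per the PR title, debugMode would then be enabled automatically when this is true.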
2025-04-01T04:34:42.023015
2024-06-08T17:50:50
2341761775
{ "authors": [ "withinboredom" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8475", "repo": "microsoft/WSL", "url": "https://github.com/microsoft/WSL/issues/11671" }
gharchive/issue
Consider prompting the user during uninstall Is your feature request related to a problem? Please describe. A WSL distro is installed with wsl --install Ubuntu-22.04 However, it is uninstalled with wsl --unregister Ubuntu-22.04 If you accidentally type: wsl --uninstall Ubuntu-22.04 You're in for a bad day. Describe the solution you'd like If extra arguments are provided, the command should error. Describe alternatives you've considered Prompt the user to make sure this is the intended action Logs are not required.
2025-04-01T04:34:42.032325
2019-09-18T23:51:06
495505734
{ "authors": [ "Biswa96", "danwalmsley", "therealkenc" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8476", "repo": "microsoft/WSL", "url": "https://github.com/microsoft/WSL/issues/4520" }
gharchive/issue
update-binfmts does not persist when WSL2 restarted Please use the following bug reporting template to help produce issues which are actionable and reproducible, including all command-line steps necessary to induce the failure condition. Please fill out all the fields! Issues with missing or incomplete issue templates will be closed. If you have a feature request, please post to the UserVoice. If this is a console issue (a problem with layout, rendering, colors, etc.), please post to the console issue tracker. Important: Do not open GitHub issues for Windows crashes (BSODs) or security issues. Please direct all Windows crashes and security issues to<EMAIL_ADDRESS> Ideally, please configure your machine to capture minidumps, repro the issue, and send the minidump from "C:\Windows\minidump". Please fill out the below information: Your Windows build number: (Type ver at a Windows Command Prompt) 10.0.18980.1 What you're doing and what's happening: (Copy&paste the full set of specific command-line steps necessary to reproduce the behavior, and their output. Include screen shots if that helps demonstrate the problem.) I'm installing qemu inside Ubuntu (WSL2 enabled), then doing... sudo update-binfmts --enable qemu-aarch64, then I'm using code that runs aarch64 binaries; all is well. What's wrong / what should be happening instead: after restarting WSL2 I have to run this command again; it's not persisted. In other environments this would normally persist. Strace of the failing command, if applicable: (If some_command is failing, then run strace -o some_command.strace -f some_command some_args, and link the contents of some_command.strace in a gist here). For WSL launch issues, please collect detailed logs. See our contributing instructions for assistance. No systemd so no systemd-binfmt.service. Landing zone is #994. @therealkenc that's fine, but is there maybe a workaround for my issue? Like a config file where you can specify commands that get run as root whenever the VM fires up, or before the Ubuntu console is opened? One bizarre idea would be: set root as default user > Add the binfmt command in root's .bashrc (or other) file > Then add su to change to normal user. The go-to solution for all things missing from /init is to put the program in /etc/sudoers with NOPASSWD and launch it in .bashrc. There are a few other ways you can go. Mine personally is using the shell history...... @therealkenc sorry for the delay in response, thank you very much - exactly what I needed.
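In case it helps the next person, a sketch of the sudoers + .bashrc workaround described above — the username and paths are placeholders, and the exact update-binfmts location may differ by distro:

# /etc/sudoers.d/update-binfmts (edit with: visudo -f /etc/sudoers.d/update-binfmts)
youruser ALL=(root) NOPASSWD: /usr/sbin/update-binfmts

# appended to ~/.bashrc
sudo update-binfmts --enable qemu-aarch64 2>/dev/null || true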
2025-04-01T04:34:42.222819
2021-04-06T11:07:15
851314622
{ "authors": [ "PavelSosin-320", "benhillis", "dfoxg", "einarpersson", "guaycuru", "ilya-bobyr", "moreje", "mprevot", "oomek", "rybacki78", "therealkenc", "tilenkranjc", "xquangdang" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8477", "repo": "microsoft/WSL", "url": "https://github.com/microsoft/WSL/issues/6765" }
gharchive/issue
WSL 2 slow startup (~10s from cold start) Expected behavior WSL 2 start really slow on my new machine with a Ryzen 7 4800H and 16GB of ram. My prev machine with core i7 8750H start wsl2 almost immediately (~1-2s). Actual behavior WSL 2 startup took too long, ~8-10s. If I just leave the Windows terminal open with 1 WSL 2 tab for a few mins, create a new tab that took about 4-5s. Environment Microsoft Windows [Version 10.0.21343.1000] WSL logs: [ 0.000000] Linux version 5.4.91-microsoft-standard-WSL2 (oe-user@oe-host) (gcc version 9.3.0 (GCC)) #1 SMP Mon Jan 25 18:39:31 UTC 2021 [ 0.000000] Command line: initrd=\initrd.img panic=-1 nr_cpus=16 swiotlb=force pty.legacy_count=0 [ 0.000000] KERNEL supported cpus: [ 0.000000] Intel GenuineIntel [ 0.000000] AMD AuthenticAMD [ 0.000000] Centaur CentaurHauls [ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' [ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 [ 0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. [ 0.000000] BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable [ 0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000e0fff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000001fffff] ACPI data [ 0.000000] BIOS-e820: [mem 0x0000000000200000-0x00000000f7ffffff] usable [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x00000001c39fffff] usable [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] DMI not present or invalid. [ 0.000000] Hypervisor detected: Microsoft Hyper-V [ 0.000000] Hyper-V: features 0xae7f, privilege high: 0x3b8030, hints 0xc2c, misc 0xe0bed7b2 [ 0.000000] Hyper-V Host Build:21343-10.0-1-0.1000 [ 0.000000] Hyper-V: LAPIC Timer Frequency: 0x1e8480 [ 0.000000] tsc: Marking TSC unstable due to running on Hyper-V [ 0.000000] Hyper-V: Using hypercall for remote TLB flush [ 0.000000] clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns:<PHONE_NUMBER>20 ns [ 0.000002] tsc: Detected 2894.463 MHz processor [ 0.000011] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved [ 0.000012] e820: remove [mem 0x000a0000-0x000fffff] usable [ 0.000019] last_pfn = 0x1c3a00 max_arch_pfn = 0x400000000 [ 0.000055] MTRR default type: uncachable [ 0.000055] MTRR fixed ranges enabled: [ 0.000056] 00000-3FFFF write-back [ 0.000057] 40000-7FFFF uncachable [ 0.000057] 80000-8FFFF write-back [ 0.000058] 90000-FFFFF uncachable [ 0.000058] MTRR variable ranges enabled: [ 0.000059] 0 base<PHONE_NUMBER>00 mask FFFF00000000 write-back [ 0.000060] 1 base<PHONE_NUMBER>00 mask FFF000000000 write-back [ 0.000060] 2 disabled [ 0.000061] 3 disabled [ 0.000061] 4 disabled [ 0.000061] 5 disabled [ 0.000062] 6 disabled [ 0.000062] 7 disabled [ 0.000076] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [ 0.000094] last_pfn = 0xf8000 max_arch_pfn = 0x400000000 [ 0.000109] Using GB pages for direct mapping [ 0.000570] RAMDISK: [mem 0x03035000-0x03043fff] [ 0.000575] ACPI: Early table checksum verification disabled [ 0.000588] ACPI: RSDP 0x00000000000E0000 000024 (v02 VRTUAL) [ 0.000591] ACPI: XSDT 0x0000000000100000 000044 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000596] ACPI: FACP 0x0000000000101000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000600] ACPI: DSDT 
0x00000000001011B8 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) [ 0.000602] ACPI: FACS 0x0000000000101114 000040 [ 0.000604] ACPI: OEM0 0x0000000000101154 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000606] ACPI: SRAT 0x000000000011F33C 0003B0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000608] ACPI: APIC 0x000000000011F6EC 0000C8 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000613] ACPI: Local APIC address 0xfee00000 [ 0.000869] Zone ranges: [ 0.000870] DMA [mem 0x0000000000001000-0x0000000000ffffff] [ 0.000871] DMA32 [mem 0x0000000001000000-0x00000000ffffffff] [ 0.000872] Normal [mem 0x0000000100000000-0x00000001c39fffff] [ 0.000873] Movable zone start for each node [ 0.000873] Early memory node ranges [ 0.000874] node 0: [mem 0x0000000000001000-0x000000000009ffff] [ 0.000875] node 0: [mem 0x0000000000200000-0x00000000f7ffffff] [ 0.000875] node 0: [mem 0x0000000100000000-0x00000001c39fffff] [ 0.001482] Zeroed struct page in unavailable ranges: 18273 pages [ 0.001484] Initmem setup node 0 [mem 0x0000000000001000-0x00000001c39fffff] [ 0.001485] On node 0 totalpages: 1816735 [ 0.001486] DMA zone: 59 pages used for memmap [ 0.001487] DMA zone: 22 pages reserved [ 0.001488] DMA zone: 3743 pages, LIFO batch:0 [ 0.001529] DMA32 zone: 16320 pages used for memmap [ 0.001530] DMA32 zone: 1011712 pages, LIFO batch:63 [ 0.020259] Normal zone: 12520 pages used for memmap [ 0.020261] Normal zone: 801280 pages, LIFO batch:63 [ 0.020940] ACPI: Local APIC address 0xfee00000 [ 0.020947] ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) [ 0.021917] IOAPIC[0]: apic_id 16, version 17, address 0xfec00000, GSI 0-23 [ 0.021923] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.021925] ACPI: IRQ9 used by override. [ 0.021927] Using ACPI (MADT) for SMP configuration information [ 0.021935] smpboot: Allowing 16 CPUs, 0 hotplug CPUs [ 0.021945] [mem 0xf8000000-0xffffffff] available for PCI devices [ 0.021946] Booting paravirtualized kernel on Hyper-V [ 0.021948] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns:<PHONE_NUMBER>2750000 ns [ 0.189816] setup_percpu: NR_CPUS:256 nr_cpumask_bits:256 nr_cpu_ids:16 nr_node_ids:1 [ 0.190924] percpu: Embedded 50 pages/cpu s167256 r8192 d29352 u262144 [ 0.190933] pcpu-alloc: s167256 r8192 d29352 u262144 alloc=1*2097152 [ 0.190934] pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 [ 0.190952] Built 1 zonelists, mobility grouping on. Total pages: 1787814 [ 0.190954] Kernel command line: initrd=\initrd.img panic=-1 nr_cpus=16 swiotlb=force pty.legacy_count=0 [ 0.192845] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) [ 0.193695] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) [ 0.193759] mem auto-init: stack:off, heap alloc:off, heap free:off [ 0.219409] Memory: 4094392K/7266940K available (16390K kernel code, 1624K rwdata, 3180K rodata, 1572K init, 2216K bss, 229116K reserved, 0K cma-reserved) [ 0.219786] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 [ 0.219797] ftrace: allocating 44895 entries in 176 pages [ 0.238233] rcu: Hierarchical RCU implementation. [ 0.238235] rcu: RCU restricting CPUs from NR_CPUS=256 to nr_cpu_ids=16. [ 0.238236] All grace periods are expedited (rcu_expedited). [ 0.238236] rcu: RCU calculated value of scheduler-enlistment delay is 10 jiffies. 
[ 0.238237] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 [ 0.240721] Using NULL legacy PIC [ 0.240722] NR_IRQS: 16640, nr_irqs: 552, preallocated irqs: 0 [ 0.242121] random: crng done (trusting CPU's manufacturer) [ 0.242150] Console: colour dummy device 80x25 [ 0.242152] printk: console [tty0] enabled [ 0.242160] ACPI: Core revision 20190816 [ 0.242301] Failed to register legacy timer interrupt [ 0.242302] APIC: Switch to symmetric I/O mode setup [ 0.242303] Switched APIC routing to physical flat. [ 0.242346] Hyper-V: Using IPI hypercalls [ 0.242347] Hyper-V: Using enlightened APIC (xapic mode) [ 0.242684] Calibrating delay loop (skipped), value calculated using timer frequency.. 5788.92 BogoMIPS (lpj=28944630) [ 0.242687] pid_max: default: 32768 minimum: 301 [ 0.242711] LSM: Security Framework initializing [ 0.242736] Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) [ 0.242749] Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) [ 0.243057] x86/cpu: User Mode Instruction Prevention (UMIP) activated [ 0.243072] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 512 [ 0.243073] Last level dTLB entries: 4KB 2048, 2MB 2048, 4MB 1024, 1GB 0 [ 0.243076] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization [ 0.243078] Spectre V2 : Mitigation: Full AMD retpoline [ 0.243078] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch [ 0.243078] Spectre V2 : Enabling Restricted Speculation for firmware calls [ 0.243079] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier [ 0.243080] Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl [ 0.243081] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp [ 0.243403] Freeing SMP alternatives memory: 48K [ 0.243831] smpboot: CPU0: AMD Ryzen 7 4800H with Radeon Graphics (family: 0x17, model: 0x60, stepping: 0x1) [ 0.243909] Performance Events: PMU not available due to virtualization, using software events only. [ 0.244028] rcu: Hierarchical SRCU implementation. [ 0.244288] smp: Bringing up secondary CPUs ... [ 0.244406] x86: Booting SMP configuration: [ 0.244408] .... 
node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 [ 2.493402] smp: Brought up 1 node, 16 CPUs [ 2.493403] smpboot: Max logical packages: 1 [ 2.493405] smpboot: Total of 16 processors activated (92647.73 BogoMIPS) [ 2.505012] node 0 initialised, 735858 pages in 20ms [ 2.513733] devtmpfs: initialized [ 2.513733] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns:<PHONE_NUMBER>2750000 ns [ 2.513733] futex hash table entries: 4096 (order: 6, 262144 bytes, linear) [ 2.513733] xor: automatically using best checksumming function avx [ 2.513940] NET: Registered protocol family 16 [ 2.514254] ACPI: bus type PCI registered [ 2.514254] PCI: Fatal: No config space access function found [ 2.522787] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages [ 2.532772] raid6: Forced to use recovery algorithm avx2x2 [ 2.532772] raid6: Forced gen() algo avx2x4 [ 2.532772] ACPI: Added _OSI(Module Device) [ 2.532772] ACPI: Added _OSI(Processor Device) [ 2.532772] ACPI: Added _OSI(3.0 _SCP Extensions) [ 2.532772] ACPI: Added _OSI(Processor Aggregator Device) [ 2.532772] ACPI: Added _OSI(Linux-Dell-Video) [ 2.532772] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) [ 2.532772] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) [ 2.538132] ACPI: 1 ACPI AML tables successfully acquired and loaded [ 2.543022] ACPI: Interpreter enabled [ 2.543027] ACPI: (supports S0 S5) [ 2.543028] ACPI: Using IOAPIC for interrupt routing [ 2.543039] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 2.543650] ACPI: Enabled 2 GPEs in block 00 to 0F [ 2.547213] iommu: Default domain type: Translated [ 2.547316] SCSI subsystem initialized [ 2.547384] hv_vmbus: Vmbus version:5.0 [ 2.547384] PCI: Using ACPI for IRQ routing [ 2.547384] PCI: System does not support PCI [ 2.547384] hv_vmbus: Unknown GUID: c376c1c3-d276-48d2-90a9-c04748072c60 [ 2.547384] clocksource: Switched to clocksource hyperv_clocksource_tsc_page [ 2.547384] hv_vmbus: Unknown GUID: 6e382d18-3336-4f4b-acc4-2b7703d4df4a [ 2.547384] hv_vmbus: Unknown GUID: dde9cbc0-5060-4436-9448-ea1254a5d177 [ 2.679152] VFS: Disk quotas dquot_6.6.0 [ 2.679165] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 2.679183] FS-Cache: Loaded [ 2.679207] pnp: PnP ACPI init [ 2.679473] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active) [ 2.679527] pnp: PnP ACPI: found 1 devices [ 2.697879] NET: Registered protocol family 2 [ 2.698253] tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) [ 2.698273] TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) [ 2.698869] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) [ 2.699090] TCP: Hash tables configured (established 65536 bind 65536) [ 2.699141] UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) [ 2.699170] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) [ 2.699250] NET: Registered protocol family 1 [ 2.701654] RPC: Registered named UNIX socket transport module. [ 2.701656] RPC: Registered udp transport module. [ 2.701656] RPC: Registered tcp transport module. [ 2.701657] RPC: Registered tcp NFSv4.1 backchannel transport module. [ 2.701662] PCI: CLS 0 bytes, default 64 [ 2.701816] Trying to unpack rootfs image as initramfs... 
[ 2.702330] Freeing initrd memory: 60K [ 2.702336] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 2.702339] software IO TLB: mapped [mem 0xf4000000-0xf8000000] (64MB) [ 2.710510] kvm: no hardware support [ 2.715132] kvm: Nested Virtualization enabled [ 2.715148] kvm: Nested Paging enabled [ 2.715149] SVM: Virtual VMLOAD VMSAVE supported [ 2.731630] Initialise system trusted keyrings [ 2.732168] workingset: timestamp_bits=46 max_order=21 bucket_order=0 [ 2.734223] squashfs: version 4.0 (2009/01/31) Phillip Lougher [ 2.734952] NFS: Registering the id_resolver key type [ 2.734969] Key type id_resolver registered [ 2.734969] Key type id_legacy registered [ 2.734973] Installing knfsd (copyright (C) 1996 okir@monad.swb.de). [ 2.737346] Key type cifs.idmap registered [ 2.737692] fuse: init (API version 7.31) [ 2.738406] SGI XFS with ACLs, security attributes, realtime, scrub, repair, no debug enabled [ 2.739743] 9p: Installing v9fs 9p2000 file system support [ 2.739765] FS-Cache: Netfs '9p' registered for caching [ 2.739842] FS-Cache: Netfs 'ceph' registered for caching [ 2.739847] ceph: loaded (mds proto 32) [ 2.743075] NET: Registered protocol family 38 [ 2.743079] Key type asymmetric registered [ 2.743080] Asymmetric key parser 'x509' registered [ 2.743095] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) [ 2.748201] hv_vmbus: registering driver hv_pci [ 2.749643] hv_pci 06ab4550-cdf8-4ac6-a003-0a7163ce9489: PCI VMBus probing: Using version 0x10002 [ 2.752665] hv_pci 06ab4550-cdf8-4ac6-a003-0a7163ce9489: PCI host bridge to bus cdf8:00 [ 2.754436] pci cdf8:00:00.0: [1414:008e] type 00 class 0x030200 [ 2.783346] ACPI: AC Adapter [AC1] (off-line) [ 2.784538] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled [ 2.786110] battery: ACPI: Battery Slot [BAT1] (battery present) [ 2.786351] Non-volatile memory driver v1.3 [ 2.798902] brd: module loaded [ 2.803485] loop: module loaded [ 2.803563] hv_vmbus: registering driver hv_storvsc [ 2.803697] Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011) [ 2.805092] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. [ 2.805094] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld<EMAIL_ADDRESS>All Rights Reserved. 
[ 2.805124] tun: Universal TUN/TAP device driver, 1.6 [ 2.805516] PPP generic driver version 2.4.2 [ 2.805878] PPP BSD Compression module registered [ 2.805881] PPP Deflate Compression module registered [ 2.805888] PPP MPPE Compression module registered [ 2.805889] NET: Registered protocol family 24 [ 2.805901] hv_vmbus: registering driver hv_netvsc [ 2.806408] VFIO - User Level meta-driver version: 0.3 [ 2.807028] hv_vmbus: registering driver hyperv_keyboard [ 2.808616] rtc_cmos 00:00: RTC can wake from S4 [ 2.812448] scsi host0: storvsc_host_t [ 2.840439] rtc_cmos 00:00: registered as rtc0 [ 2.840453] rtc_cmos 00:00: alarms up to one month, 114 bytes nvram [ 2.841376] device-mapper: ioctl: 4.41.0-ioctl (2019-09-16) initialised<EMAIL_ADDRESS>[ 2.841945] device-mapper: raid: Loading target version 1.14.0 [ 2.842271] hv_utils: Registering HyperV Utility Driver [ 2.842273] hv_vmbus: registering driver hv_utils [ 2.842393] hv_vmbus: registering driver hv_balloon [ 2.842413] dxgk:err: dxg_drv_init Version: 5 [ 2.842420] hv_vmbus: registering driver dxgkrnl [ 2.842492] hv_utils: cannot register PTP clock: 0 [ 2.843855] hv_utils: TimeSync IC version 4.0 [ 2.846112] drop_monitor: Initializing network drop monitor service [ 2.846136] Mirror/redirect action on [ 2.847161] IPVS: Registered protocols (TCP, UDP) [ 2.847197] IPVS: Connection hash table configured (size=4096, memory=64Kbytes) [ 2.847368] hv_balloon: Using Dynamic Memory protocol version 2.0 [ 2.850314] hv_balloon: Cold memory discard enabled [ 2.853475] IPVS: ipvs loaded. [ 2.853480] IPVS: [rr] scheduler registered. [ 2.853481] IPVS: [wrr] scheduler registered. [ 2.853482] IPVS: [sh] scheduler registered. [ 2.853596] ipip: IPv4 and MPLS over IPv4 tunneling driver [ 2.859602] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully [ 2.862399] Initializing XFRM netlink socket [ 2.862551] NET: Registered protocol family 10 [ 2.863564] Segment Routing with IPv6 [ 2.871500] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver [ 2.871673] NET: Registered protocol family 17 [ 2.871705] Bridge firewalling registered [ 2.871717] 8021q: 802.1Q VLAN Support v1.8 [ 2.871751] sctp: Hash tables configured (bind 256/256) [ 2.871846] 9pnet: Installing 9P2000 support [ 2.871864] Key type dns_resolver registered [ 2.872379] Key type ceph registered [ 2.873033] libceph: loaded (mon/osd proto 15/24) [ 2.873037] hv_vmbus: registering driver hv_sock [ 2.873401] NET: Registered protocol family 40 [ 2.873444] IPI shorthand broadcast: enabled [ 2.873753] registered taskstats version 1 [ 2.873764] Loading compiled-in X.509 certificates [ 2.874397] Btrfs loaded, crc32c=crc32c-generic [ 2.877995] rtc_cmos 00:00: setting system clock to 2021-04-06T10:59:59 UTC<PHONE_NUMBER>) [ 2.878037] Unstable clock detected, switching default tracing clock to "global" If you want to keep using the local clock, then add: "trace_clock=local" on the kernel command line [ 2.886438] Freeing unused kernel image memory: 1572K [ 2.913165] Write protecting the kernel read-only data: 22528k [ 2.915765] Freeing unused kernel image memory: 1992K [ 2.917231] Freeing unused kernel image memory: 916K [ 2.917234] Run /init as init process [ 3.623952] scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 [ 3.625255] sd 0:0:0:0: Attached scsi generic sg0 type 0 [ 3.629686] sd 0:0:0:0: [sda] 536870912 512-byte logical blocks: (275 GB/256 GiB) [ 3.629688] sd 0:0:0:0: [sda] 4096-byte physical blocks [ 3.630122] sd 0:0:0:0: [sda] Write Protect is off [ 3.630127] sd 0:0:0:0: [sda] Mode 
Sense: 0f 00 00 00 [ 3.631986] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 3.813038] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 4.033746] EXT4-fs (sda): mounted filesystem with ordered data mode. Opts: (null) [ 4.183816] sd 0:0:0:0: [sda] Attached SCSI disk [ 4.496267] Adding 2097152k swap on /swap/file. Priority:-2 extents:1 across:2097152k [ 5.643442] scsi 0:0:0:1: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 [ 5.644821] sd 0:0:0:1: Attached scsi generic sg1 type 0 [ 5.645545] scsi 0:0:0:2: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 [ 5.646765] sd 0:0:0:2: Attached scsi generic sg2 type 0 [ 5.648208] sd 0:0:0:1: [sdb] 711544 512-byte logical blocks: (364 MB/347 MiB) [ 5.648603] sd 0:0:0:1: [sdb] Write Protect is on [ 5.648608] sd 0:0:0:1: [sdb] Mode Sense: 0f 00 80 00 [ 5.649437] sd 0:0:0:1: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA [ 5.649878] sd 0:0:0:2: [sdc] 536870912 512-byte logical blocks: (275 GB/256 GiB) [ 5.649882] sd 0:0:0:2: [sdc] 4096-byte physical blocks [ 5.650129] sd 0:0:0:2: [sdc] Write Protect is off [ 5.650132] sd 0:0:0:2: [sdc] Mode Sense: 0f 00 00 00 [ 5.651113] sd 0:0:0:2: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 5.661064] sd 0:0:0:1: [sdb] Attached SCSI disk [ 5.662326] EXT4-fs (sdb): mounted filesystem without journal. Opts: (null) [ 5.662879] sd 0:0:0:2: [sdc] Attached SCSI disk [ 6.295724] EXT4-fs (sdc): recovery complete [ 6.297376] EXT4-fs (sdc): mounted filesystem with ordered data mode. Opts: discard,errors=remount-ro,data=ordered [ 6.449378] hv_pci 4505998c-7367-4ad0-ba1d-bb85d00b909a: PCI VMBus probing: Using version 0x10002 [ 6.554176] hv_pci 4505998c-7367-4ad0-ba1d-bb85d00b909a: PCI host bridge to bus 7367:00 [ 6.554181] pci_bus 7367:00: root bus resource [mem 0xe00000000-0xe00002fff window] [ 6.558458] pci 7367:00:00.0: [1af4:1049] type 00 class 0x010000 [ 6.569737] pci 7367:00:00.0: reg 0x10: [mem 0xe00000000-0xe00000fff 64bit] [ 6.574224] pci 7367:00:00.0: reg 0x18: [mem 0xe00001000-0xe00001fff 64bit] [ 6.578979] pci 7367:00:00.0: reg 0x20: [mem 0xe00002000-0xe00002fff 64bit] [ 6.611437] pci 7367:00:00.0: BAR 0: assigned [mem 0xe00000000-0xe00000fff 64bit] [ 6.614688] pci 7367:00:00.0: BAR 2: assigned [mem 0xe00001000-0xe00001fff 64bit] [ 6.617984] pci 7367:00:00.0: BAR 4: assigned [mem 0xe00002000-0xe00002fff 64bit] [ 6.717733] FS-Cache: Duplicate cookie detected [ 6.717737] FS-Cache: O-cookie c=00000000ec19bcb3 [p=00000000c2e84d59 fl=222 nc=0 na=1] [ 6.717739] FS-Cache: O-cookie d=00000000b4cb25a7 n=00000000f1bfd785 [ 6.717740] FS-Cache: O-key=[10] '34323934393337393433' [ 6.717745] FS-Cache: N-cookie c=0000000049829d2e [p=00000000c2e84d59 fl=2 nc=0 na=1] [ 6.717746] FS-Cache: N-cookie d=00000000b4cb25a7 n=00000000575349de [ 6.717747] FS-Cache: N-key=[10] '34323934393337393433' [ 7.292459] hv_pci 841ef9b2-85a0-406b-8cf0-7d306abedad8: PCI VMBus probing: Using version 0x10002 [ 7.296138] 9pnet_virtio: no channels available for device drvfs [ 7.296148] WARNING: mount: waiting for virtio device... 
[ 7.396638] 9pnet_virtio: no channels available for device drvfs [ 7.440018] hv_pci 841ef9b2-85a0-406b-8cf0-7d306abedad8: PCI host bridge to bus 85a0:00 [ 7.440024] pci_bus 85a0:00: root bus resource [mem 0xe00004000-0xe00006fff window] [ 7.444863] pci 85a0:00:00.0: [1af4:1049] type 00 class 0x010000 [ 7.456307] pci 85a0:00:00.0: reg 0x10: [mem 0xe00004000-0xe00004fff 64bit] [ 7.461183] pci 85a0:00:00.0: reg 0x18: [mem 0xe00005000-0xe00005fff 64bit] [ 7.465977] pci 85a0:00:00.0: reg 0x20: [mem 0xe00006000-0xe00006fff 64bit] [ 7.490653] pci 85a0:00:00.0: BAR 0: assigned [mem 0xe00004000-0xe00004fff 64bit] [ 7.492457] pci 85a0:00:00.0: BAR 2: assigned [mem 0xe00005000-0xe00005fff 64bit] [ 7.495097] pci 85a0:00:00.0: BAR 4: assigned [mem 0xe00006000-0xe00006fff 64bit] [ 7.497106] 9pnet_virtio: no channels available for device drvfs [ 7.611616] hv_pci 72eb2353-018f-41cd-9951-192848570d7a: PCI VMBus probing: Using version 0x10002 [ 7.710003] hv_pci 72eb2353-018f-41cd-9951-192848570d7a: PCI host bridge to bus 018f:00 [ 7.710008] pci_bus 018f:00: root bus resource [mem 0xe00008000-0xe0000afff window] [ 7.715032] pci 018f:00:00.0: [1af4:1049] type 00 class 0x010000 [ 7.721843] pci 018f:00:00.0: reg 0x10: [mem 0xe00008000-0xe00008fff 64bit] [ 7.724697] pci 018f:00:00.0: reg 0x18: [mem 0xe00009000-0xe00009fff 64bit] [ 7.727929] pci 018f:00:00.0: reg 0x20: [mem 0xe0000a000-0xe0000afff 64bit] [ 7.757718] pci 018f:00:00.0: BAR 0: assigned [mem 0xe00008000-0xe00008fff 64bit] [ 7.762831] pci 018f:00:00.0: BAR 2: assigned [mem 0xe00009000-0xe00009fff 64bit] [ 7.767145] pci 018f:00:00.0: BAR 4: assigned [mem 0xe0000a000-0xe0000afff 64bit] [ 7.879553] FS-Cache: Duplicate cookie detected [ 7.879558] FS-Cache: O-cookie c=000000001751915e [p=00000000c2e84d59 fl=222 nc=0 na=1] [ 7.879559] FS-Cache: O-cookie d=00000000b4cb25a7 n=00000000ca3f7c5c [ 7.879560] FS-Cache: O-key=[10] '34323934393338303539' [ 7.879564] FS-Cache: N-cookie c=0000000006bd3929 [p=00000000c2e84d59 fl=2 nc=0 na=1] [ 7.879564] FS-Cache: N-cookie d=00000000b4cb25a7 n=00000000de1ce056 [ 7.879565] FS-Cache: N-key=[10] '34323934393338303539' [ 7.994042] traps: gsettings-helpe[217] trap int3 ip:7f3724f3e215 sp:7ffc6a0d95b0 error:0 in libglib-2.0.so.0.5800.0[7f3724f04000+80000] [ 7.994050] potentially unexpected fatal signal 5. [ 7.994052] CPU: 4 PID: 217 Comm: gsettings-helpe Not tainted 5.4.91-microsoft-standard-WSL2 #1 [ 7.994058] RIP: 0033:0x7f3724f3e215 [ 7.994061] Code: 48 83 c4 08 5d 41 5c c3 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 53 89 fb e8 48 f6 01 00 85 c0 75 0b 85 db 0f 84 26 79 fc ff cc <5b> c3 bf 01 00 00 00 e8 cf 6c fc ff 66 66 2e 0f 1f 84 00 00 00 00 [ 7.994065] RSP: 002b:00007ffc6a0d95b0 EFLAGS: 00000202 [ 7.994067] RAX:<PHONE_NUMBER>000000 RBX:<PHONE_NUMBER>000001 RCX:<PHONE_NUMBER>000000 [ 7.994068] RDX:<PHONE_NUMBER>000001 RSI:<PHONE_NUMBER>000000 RDI:<PHONE_NUMBER>000001 [ 7.994070] RBP:<PHONE_NUMBER>000006 R08:<PHONE_NUMBER>000000 R09:<PHONE_NUMBER>000061 [ 7.994072] R10:<PHONE_NUMBER>61a650 R11:<PHONE_NUMBER>000246 R12:<PHONE_NUMBER>000002 [ 7.994074] R13:<PHONE_NUMBER>61a520 R14:<PHONE_NUMBER>000004 R15: 00007f372518b05a [ 7.994075] FS: 00007f37246db500 GS:<PHONE_NUMBER>000000 [ 8.032423] hv_pci dd99397f-d558-4788-8c55-75b904ac9cc1: PCI VMBus probing: Using version 0x10002 [ 8.036380] 9pnet_virtio: no channels available for device drvfs [ 8.036386] WARNING: mount: waiting for virtio device... 
[ 8.101287] hv_pci dd99397f-d558-4788-8c55-75b904ac9cc1: PCI host bridge to bus d558:00 [ 8.101290] pci_bus d558:00: root bus resource [mem 0xe0000c000-0xe0000efff window] [ 8.103797] pci d558:00:00.0: [1af4:1049] type 00 class 0x010000 [ 8.109797] pci d558:00:00.0: reg 0x10: [mem 0xe0000c000-0xe0000cfff 64bit] [ 8.113479] pci d558:00:00.0: reg 0x18: [mem 0xe0000d000-0xe0000dfff 64bit] [ 8.115882] pci d558:00:00.0: reg 0x20: [mem 0xe0000e000-0xe0000efff 64bit] [ 8.133126] pci d558:00:00.0: BAR 0: assigned [mem 0xe0000c000-0xe0000cfff 64bit] [ 8.134951] pci d558:00:00.0: BAR 2: assigned [mem 0xe0000d000-0xe0000dfff 64bit] [ 8.136682] pci d558:00:00.0: BAR 4: assigned [mem 0xe0000e000-0xe0000efff 64bit] [ 8.136870] 9pnet_virtio: no channels available for device drvfs [ 8.247184] hv_pci 4c574c27-4a97-4f75-a80e-93a827e90f3b: PCI VMBus probing: Using version 0x10002 [ 8.318736] hv_pci 4c574c27-4a97-4f75-a80e-93a827e90f3b: PCI host bridge to bus 4a97:00 [ 8.318739] pci_bus 4a97:00: root bus resource [mem 0xe00010000-0xe00012fff window] [ 8.321079] pci 4a97:00:00.0: [1af4:1049] type 00 class 0x010000 [ 8.327159] pci 4a97:00:00.0: reg 0x10: [mem 0xe00010000-0xe00010fff 64bit] [ 8.329777] pci 4a97:00:00.0: reg 0x18: [mem 0xe00011000-0xe00011fff 64bit] [ 8.332310] pci 4a97:00:00.0: reg 0x20: [mem 0xe00012000-0xe00012fff 64bit] [ 8.352042] pci 4a97:00:00.0: BAR 0: assigned [mem 0xe00010000-0xe00010fff 64bit] [ 8.354384] pci 4a97:00:00.0: BAR 2: assigned [mem 0xe00011000-0xe00011fff 64bit] [ 8.356545] pci 4a97:00:00.0: BAR 4: assigned [mem 0xe00012000-0xe00012fff 64bit] Spiritually dupe #5289 unless something else particularly stands out when comparing the dmesg output from your i7 side-by-side with your new Ryzen. Virtio9p (the PCI prints) does take a bit longer than the hvsocket-based 9p. It looks like you also have the GUI apps package installed which does make boot a bit slower. We're working on resolving both of these. @xquangdang - Could you try this on the most recent insider build with the updated wslg package? I happen to have a 3950x. 9 seconds with bleeding edge. Gist here if it helps. FYI ! with systemd-genie systemd-analyze "blames" 2 critical paths: very slow wslg start - 2sec extremely slow network stack start - about 2 sec total and holds other components until it is turned "ready". But this is a common Windows problem and gets worse from release to release. The outcome is "temporary failure in DNS resolution" which can take about forever. The impact of VPN has to be checked. Some distros have their own issues with Kernel 5.10.xxx and due to dependency with graphics stack. Overall Windows boot performance for iGPU + dGPU configuration is hardly acceptable. It takes seconds until Win makes a decision about a preferable GPU. @benhillis I afraid to find at the bottom of the pit the SSD may start in a portion of ms, GPU with its cooling system in a portion of seconds, AC/XS WiFi adapter in seconds. Unfortunately, Windows unlike Linux doesn't keep track of subsystems startup time. 😂 @benhillis Yet another critical path detected is LVM startup time. But it looks like a known problem with Kernel 5.10 ad some "Native" Linux distro like Suse and Fedora. WSL2 may need it for wsl -- mount and it is dup for slow mounted via wsl --mountdisks. With a pre-heated WiFi adapter the massive Fedora34 distro with full OCI dev environment the full startup time takes respectable systemd-analyze Startup finished in 1.599s (userspace) graphical.target reached after 1.457s in userspace. 
Does someone need some more investigation about this ? I see 40 seconds between startup and prompt. [ 0.000000] Linux version <IP_ADDRESS>-microsoft-standard-WSL2 (oe-user@oe-host) (x86_64-msft-linux-gcc (GCC) 9.3.0, GNU ld (GNU Binutils) <IP_ADDRESS>00220) #1 SMP Wed Mar 2 00:30:59 UTC 2022 [ 0.000000] Command line: initrd=\initrd.img panic=-1 pty.legacy_count=0 nr_cpus=16 [ 0.000000] KERNEL supported cpus: [ 0.000000] Intel GenuineIntel [ 0.000000] AMD AuthenticAMD [ 0.000000] Centaur CentaurHauls [ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' [ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 [ 0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. [ 0.000000] BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable [ 0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000e0fff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000001fffff] ACPI data [ 0.000000] BIOS-e820: [mem 0x0000000000200000-0x00000000f7ffffff] usable [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000bffffffff] usable [ 0.000000] BIOS-e820: [mem 0x0000001000000000-0x0000001d9d3fffff] usable [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] DMI not present or invalid. [ 0.000000] Hypervisor detected: Microsoft Hyper-V [ 0.000000] Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x20c2c, misc 0x20bed7b2 [ 0.000000] Hyper-V Host Build:19041-10.0-1-0.1766 [ 0.000000] Hyper-V: LAPIC Timer Frequency: 0x1e8480 [ 0.000000] Hyper-V: Using hypercall for remote TLB flush [ 0.000000] clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns:<PHONE_NUMBER>20 ns [ 0.000000] tsc: Marking TSC unstable due to running on Hyper-V [ 0.000001] tsc: Detected 3600.010 MHz processor [ 0.000008] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved [ 0.000009] e820: remove [mem 0x000a0000-0x000fffff] usable [ 0.000011] last_pfn = 0x1d9d400 max_arch_pfn = 0x400000000 [ 0.000023] MTRR default type: uncachable [ 0.000023] MTRR fixed ranges enabled: [ 0.000024] 00000-3FFFF write-back [ 0.000024] 40000-7FFFF uncachable [ 0.000024] 80000-8FFFF write-back [ 0.000025] 90000-FFFFF uncachable [ 0.000025] MTRR variable ranges enabled: [ 0.000026] 0 base<PHONE_NUMBER> mask 7F00000000 write-back [ 0.000026] 1 base<PHONE_NUMBER> mask<PHONE_NUMBER> write-back [ 0.000027] 2 base<PHONE_NUMBER> mask<PHONE_NUMBER> write-back [ 0.000027] 3 disabled [ 0.000028] 4 disabled [ 0.000028] 5 disabled [ 0.000028] 6 disabled [ 0.000028] 7 disabled [ 0.000032] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [ 0.000039] last_pfn = 0xf8000 max_arch_pfn = 0x400000000 [ 0.000046] Using GB pages for direct mapping [ 0.000293] RAMDISK: [mem 0x03835000-0x03844fff] [ 0.000294] ACPI: Early table checksum verification disabled [ 0.000301] ACPI: RSDP 0x00000000000E0000 000024 (v02 VRTUAL) [ 0.000304] ACPI: XSDT 0x0000000000100000 000044 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000307] ACPI: FACP 0x0000000000101000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000310] ACPI: DSDT 0x00000000001011B8 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) [ 0.000312] ACPI: FACS 0x0000000000101114 000040 [ 0.000313] ACPI: OEM0 0x0000000000101154 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 
00000001) [ 0.000315] ACPI: SRAT 0x000000000011F33C 000310 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000317] ACPI: APIC 0x000000000011F64C 0000C8 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000318] ACPI: Reserving FACP table memory at [mem 0x101000-0x101113] [ 0.000319] ACPI: Reserving DSDT table memory at [mem 0x1011b8-0x11f33b] [ 0.000319] ACPI: Reserving FACS table memory at [mem 0x101114-0x101153] [ 0.000320] ACPI: Reserving OEM0 table memory at [mem 0x101154-0x1011b7] [ 0.000320] ACPI: Reserving SRAT table memory at [mem 0x11f33c-0x11f64b] [ 0.000321] ACPI: Reserving APIC table memory at [mem 0x11f64c-0x11f713] [ 0.000325] ACPI: Local APIC address 0xfee00000 [ 0.000515] Zone ranges: [ 0.000516] DMA [mem 0x0000000000001000-0x0000000000ffffff] [ 0.000517] DMA32 [mem 0x0000000001000000-0x00000000ffffffff] [ 0.000518] Normal [mem 0x0000000100000000-0x0000001d9d3fffff] [ 0.000519] Device empty [ 0.000519] Movable zone start for each node [ 0.000520] Early memory node ranges [ 0.000520] node 0: [mem 0x0000000000001000-0x000000000009ffff] [ 0.000521] node 0: [mem 0x0000000000200000-0x00000000f7ffffff] [ 0.000522] node 0: [mem 0x0000000100000000-0x0000000bffffffff] [ 0.000523] node 0: [mem 0x0000001000000000-0x0000001d9d3fffff] [ 0.000526] Initmem setup node 0 [mem 0x0000000000001000-0x0000001d9d3fffff] [ 0.000527] On node 0 totalpages: 26825375 [ 0.000527] DMA zone: 59 pages used for memmap [ 0.000528] DMA zone: 22 pages reserved [ 0.000528] DMA zone: 3743 pages, LIFO batch:0 [ 0.000529] DMA32 zone: 16320 pages used for memmap [ 0.000529] DMA32 zone: 1011712 pages, LIFO batch:63 [ 0.000530] Normal zone: 403280 pages used for memmap [ 0.000530] Normal zone: 25809920 pages, LIFO batch:63 [ 0.000673] On node 0, zone DMA: 1 pages in unavailable ranges [ 0.000692] On node 0, zone DMA: 352 pages in unavailable ranges [ 0.010651] On node 0, zone Normal: 11264 pages in unavailable ranges [ 0.010672] ACPI: Local APIC address 0xfee00000 [ 0.010676] ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) [ 0.010854] IOAPIC[0]: apic_id 16, version 17, address 0xfec00000, GSI 0-23 [ 0.010857] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.010858] ACPI: IRQ9 used by override. [ 0.010859] Using ACPI (MADT) for SMP configuration information [ 0.010863] smpboot: Allowing 16 CPUs, 0 hotplug CPUs [ 0.010867] [mem 0xf8000000-0xffffffff] available for PCI devices [ 0.010868] Booting paravirtualized kernel on Hyper-V [ 0.010869] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns:<PHONE_NUMBER>2750000 ns [ 0.014235] setup_percpu: NR_CPUS:256 nr_cpumask_bits:256 nr_cpu_ids:16 nr_node_ids:1 [ 0.014711] percpu: Embedded 56 pages/cpu s190040 r8192 d31144 u262144 [ 0.014714] pcpu-alloc: s190040 r8192 d31144 u262144 alloc=1*2097152 [ 0.014716] pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 [ 0.014728] Hyper-V: PV spinlocks enabled [ 0.014729] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) [ 0.014731] Built 1 zonelists, mobility grouping on. 
Total pages: 26405694 [ 0.014732] Kernel command line: initrd=\initrd.img panic=-1 pty.legacy_count=0 nr_cpus=16 [ 0.020981] Dentry cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) [ 0.024138] Inode-cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) [ 0.024251] mem auto-init: stack:off, heap alloc:off, heap free:off [ 0.041980] Memory: 4085908K/107301500K available (16404K kernel code, 2544K rwdata, 9048K rodata, 1580K init, 2772K bss, 1886388K reserved, 0K cma-reserved) [ 0.042013] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 [ 0.042018] ftrace: allocating 51665 entries in 202 pages [ 0.053276] ftrace: allocated 202 pages with 4 groups [ 0.053473] rcu: Hierarchical RCU implementation. [ 0.053474] rcu: RCU restricting CPUs from NR_CPUS=256 to nr_cpu_ids=16. [ 0.053475] Rude variant of Tasks RCU enabled. [ 0.053475] Tracing variant of Tasks RCU enabled. [ 0.053476] rcu: RCU calculated value of scheduler-enlistment delay is 10 jiffies. [ 0.053476] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 [ 0.055699] Using NULL legacy PIC [ 0.055700] NR_IRQS: 16640, nr_irqs: 552, preallocated irqs: 0 [ 0.056038] random: crng done (trusting CPU's manufacturer) [ 0.056054] Console: colour dummy device 80x25 [ 0.056059] printk: console [tty0] enabled [ 0.056064] ACPI: Core revision 20200925 [ 0.056185] Failed to register legacy timer interrupt [ 0.056185] APIC: Switch to symmetric I/O mode setup [ 0.056187] Switched APIC routing to physical flat. [ 0.056198] Hyper-V: Using IPI hypercalls [ 0.056198] Hyper-V: Using enlightened APIC (xapic mode) [ 0.056253] Calibrating delay loop (skipped), value calculated using timer frequency.. 7200.02 BogoMIPS (lpj=36000100) [ 0.056255] pid_max: default: 32768 minimum: 301 [ 0.056269] LSM: Security Framework initializing [ 0.056341] Mount-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) [ 0.056551] Mountpoint-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) [ 0.056742] Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 [ 0.056743] Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 [ 0.056745] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization [ 0.056745] Spectre V2 : Mitigation: Full generic retpoline [ 0.056746] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch [ 0.056746] Spectre V2 : Enabling Restricted Speculation for firmware calls [ 0.056747] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier [ 0.056747] Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl [ 0.056748] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp [ 0.056749] TAA: Vulnerable: Clear CPU buffers attempted, no microcode [ 0.056750] SRBDS: Unknown: Dependent on hypervisor status [ 0.056750] MDS: Vulnerable: Clear CPU buffers attempted, no microcode [ 0.056852] Freeing SMP alternatives memory: 56K [ 0.057709] smpboot: CPU0: Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz (family: 0x6, model: 0x9e, stepping: 0xc) [ 0.057763] Performance Events: unsupported p6 CPU model 158 no PMU driver, software events only. [ 0.057777] rcu: Hierarchical SRCU implementation. [ 0.058066] smp: Bringing up secondary CPUs ... [ 0.058104] x86: Booting SMP configuration: [ 0.058105] .... node #0, CPUs: #1 [ 0.058391] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. 
[ 0.058391] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. [ 0.058391] #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 [ 0.058391] smp: Brought up 1 node, 16 CPUs [ 0.058391] smpboot: Max logical packages: 1 [ 0.058391] smpboot: Total of 16 processors activated (115200.32 BogoMIPS) [ 0.176277] node 0 deferred pages initialised in 120ms [ 0.177206] devtmpfs: initialized [ 0.177206] x86/mm: Memory block size: 128MB [ 0.178294] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns:<PHONE_NUMBER>2750000 ns [ 0.178294] futex hash table entries: 4096 (order: 6, 262144 bytes, linear) [ 0.178294] NET: Registered protocol family 16 [ 0.178294] thermal_sys: Registered thermal governor 'step_wise' [ 0.178294] cpuidle: using governor menu [ 0.178294] ACPI: bus type PCI registered [ 0.178294] PCI: Fatal: No config space access function found [ 0.178294] Kprobes globally optimized [ 0.178294] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages [ 0.178294] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages [ 0.178294] raid6: skip pq benchmark and using algorithm avx2x4 [ 0.178294] raid6: using avx2x2 recovery algorithm [ 0.178294] ACPI: Added _OSI(Module Device) [ 0.178294] ACPI: Added _OSI(Processor Device) [ 0.178294] ACPI: Added _OSI(3.0 _SCP Extensions) [ 0.178294] ACPI: Added _OSI(Processor Aggregator Device) [ 0.178294] ACPI: Added _OSI(Linux-Dell-Video) [ 0.178294] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) [ 0.178294] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) [ 0.187157] ACPI: 1 ACPI AML tables successfully acquired and loaded [ 0.187778] ACPI: Interpreter enabled [ 0.187780] ACPI: (supports S0 S5) [ 0.187781] ACPI: Using IOAPIC for interrupt routing [ 0.187785] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 0.187890] ACPI: Enabled 1 GPEs in block 00 to 0F [ 0.188543] iommu: Default domain type: Translated [ 0.188592] SCSI subsystem initialized [ 0.188594] ACPI: bus type USB registered [ 0.188600] usbcore: registered new interface driver usbfs [ 0.188602] usbcore: registered new interface driver hub [ 0.188608] usbcore: registered new device driver usb [ 0.188630] hv_vmbus: Vmbus version:5.2 [ 0.188630] PCI: Using ACPI for IRQ routing [ 0.188630] PCI: System does not support PCI [ 0.188630] hv_vmbus: Unknown GUID: c376c1c3-d276-48d2-90a9-c04748072c60 [ 0.188630] clocksource: Switched to clocksource hyperv_clocksource_tsc_page [ 0.304982] VFS: Disk quotas dquot_6.6.0 [ 0.304989] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 0.305001] FS-Cache: Loaded [ 0.305017] pnp: PnP ACPI init [ 0.305044] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active) [ 0.305082] pnp: PnP ACPI: found 1 devices [ 0.309505] NET: Registered protocol family 2 [ 0.309750] IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) [ 0.310889] tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) [ 0.311110] TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) [ 0.311715] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) [ 0.311792] TCP: Hash tables configured (established 524288 bind 65536) [ 0.311818] UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) [ 0.312095] UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) [ 0.312388] NET: Registered protocol family 1 [ 0.312939] RPC: Registered named 
UNIX socket transport module. [ 0.312939] RPC: Registered udp transport module. [ 0.312940] RPC: Registered tcp transport module. [ 0.312940] RPC: Registered tcp NFSv4.1 backchannel transport module. [ 0.312942] PCI: CLS 0 bytes, default 64 [ 0.312989] Trying to unpack rootfs image as initramfs... [ 0.313155] Freeing initrd memory: 64K [ 0.313157] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 0.313158] software IO TLB: mapped [mem 0x00000000f4000000-0x00000000f8000000] (64MB) [ 0.313219] kvm: no hardware support [ 0.313220] has_svm: not amd or hygon [ 0.313220] kvm: no hardware support [ 0.316023] Initialise system trusted keyrings [ 0.316090] workingset: timestamp_bits=46 max_order=25 bucket_order=0 [ 0.316811] squashfs: version 4.0 (2009/01/31) Phillip Lougher [ 0.316924] NFS: Registering the id_resolver key type [ 0.316931] Key type id_resolver registered [ 0.316931] Key type id_legacy registered [ 0.316933] nfs4filelayout_init: NFSv4 File Layout Driver Registering... [ 0.316934] nfs4flexfilelayout_init: NFSv4 Flexfile Layout Driver Registering... [ 0.316935] Installing knfsd (copyright (C) 1996 okir@monad.swb.de). [ 0.317308] Key type cifs.idmap registered [ 0.317351] fuse: init (API version 7.32) [ 0.317447] SGI XFS with ACLs, security attributes, realtime, scrub, repair, quota, no debug enabled [ 0.317650] 9p: Installing v9fs 9p2000 file system support [ 0.317656] FS-Cache: Netfs '9p' registered for caching [ 0.317683] FS-Cache: Netfs 'ceph' registered for caching [ 0.317685] ceph: loaded (mds proto 32) [ 0.322760] NET: Registered protocol family 38 [ 0.322761] xor: automatically using best checksumming function avx [ 0.322763] Key type asymmetric registered [ 0.322764] Asymmetric key parser 'x509' registered [ 0.322915] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) [ 0.324184] hv_vmbus: registering driver hv_pci [ 0.324705] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled [ 0.325088] Non-volatile memory driver v1.3 [ 0.327942] brd: module loaded [ 0.328736] loop: module loaded [ 0.328765] hv_vmbus: registering driver hv_storvsc [ 0.329096] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. [ 0.329096] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld<EMAIL_ADDRESS>All Rights Reserved. 
[ 0.329105] tun: Universal TUN/TAP device driver, 1.6 [ 0.329176] PPP generic driver version 2.4.2 [ 0.329271] PPP BSD Compression module registered [ 0.329272] PPP Deflate Compression module registered [ 0.329274] PPP MPPE Compression module registered [ 0.329274] NET: Registered protocol family 24 [ 0.329281] usbcore: registered new interface driver cdc_ether [ 0.329286] usbcore: registered new interface driver cdc_ncm [ 0.329286] hv_vmbus: registering driver hv_netvsc [ 0.329374] VFIO - User Level meta-driver version: 0.3 [ 0.329505] usbcore: registered new interface driver cdc_acm [ 0.329505] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters [ 0.329512] usbcore: registered new interface driver ch341 [ 0.329513] usbserial: USB Serial support registered for ch341-uart [ 0.329516] usbcore: registered new interface driver cp210x [ 0.329518] usbserial: USB Serial support registered for cp210x [ 0.329521] usbcore: registered new interface driver ftdi_sio [ 0.329523] usbserial: USB Serial support registered for FTDI USB Serial Device [ 0.329671] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller [ 0.329673] vhci_hcd vhci_hcd.0: new USB bus registered, assigned bus number 1 [ 0.329676] vhci_hcd: created sysfs vhci_hcd.0 [ 0.329878] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.10 [ 0.329879] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 0.329879] usb usb1: Product: USB/IP Virtual Host Controller [ 0.329880] usb usb1: Manufacturer: Linux <IP_ADDRESS>-microsoft-standard-WSL2 vhci_hcd [ 0.329880] usb usb1: SerialNumber: vhci_hcd.0 [ 0.330061] hub 1-0:1.0: USB hub found [ 0.330117] hub 1-0:1.0: 8 ports detected [ 0.330454] scsi host0: storvsc_host_t [ 0.330479] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller [ 0.330480] vhci_hcd vhci_hcd.0: new USB bus registered, assigned bus number 2 [ 0.330600] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
[ 0.330882] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.10 [ 0.330883] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 0.330883] usb usb2: Product: USB/IP Virtual Host Controller [ 0.330884] usb usb2: Manufacturer: Linux <IP_ADDRESS>-microsoft-standard-WSL2 vhci_hcd [ 0.330884] usb usb2: SerialNumber: vhci_hcd.0 [ 0.331012] hub 2-0:1.0: USB hub found [ 0.331056] hub 2-0:1.0: 8 ports detected [ 0.331246] hv_vmbus: registering driver hyperv_keyboard [ 0.331370] rtc_cmos 00:00: RTC can wake from S4 [ 0.332522] rtc_cmos 00:00: registered as rtc0 [ 0.332802] rtc_cmos 00:00: setting system clock to 2022-06-30T13:01:27 UTC<PHONE_NUMBER>) [ 0.332810] rtc_cmos 00:00: alarms up to one month, 114 bytes nvram [ 0.333055] device-mapper: ioctl: 4.43.0-ioctl (2020-10-01) initialised<EMAIL_ADDRESS>[ 0.333206] device-mapper: raid: Loading target version 1.15.1 [ 0.333228] usbcore: registered new interface driver usbhid [ 0.333229] usbhid: USB HID core driver [ 0.333311] hv_utils: Registering HyperV Utility Driver [ 0.333312] hv_vmbus: registering driver hv_utils [ 0.333334] hv_vmbus: registering driver hv_balloon [ 0.333338] hv_utils: cannot register PTP clock: 0 [ 0.333355] hv_vmbus: registering driver dxgkrnl [ 0.333359] (NULL device *): dxgk: dxg_drv_init Version: 2216 [ 0.333373] drop_monitor: Initializing network drop monitor service [ 0.333385] Mirror/redirect action on [ 0.333767] hv_utils: TimeSync IC version 4.0 [ 0.333800] hv_balloon: Using Dynamic Memory protocol version 2.0 [ 0.333898] IPVS: Registered protocols (TCP, UDP) [ 0.333924] IPVS: Connection hash table configured (size=4096, memory=64Kbytes) [ 0.333952] IPVS: ipvs loaded. [ 0.333953] IPVS: [rr] scheduler registered. [ 0.333954] IPVS: [wrr] scheduler registered. [ 0.333954] IPVS: [sh] scheduler registered. 
[ 0.334177] ipip: IPv4 and MPLS over IPv4 tunneling driver [ 0.334374] Free page reporting enabled [ 0.334375] hv_balloon: Cold memory discard hint enabled [ 0.335752] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully [ 0.336103] Initializing XFRM netlink socket [ 0.336140] NET: Registered protocol family 10 [ 0.336341] Segment Routing with IPv6 [ 0.337334] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver [ 0.337384] NET: Registered protocol family 17 [ 0.337396] Bridge firewalling registered [ 0.337401] 8021q: 802.1Q VLAN Support v1.8 [ 0.337414] sctp: Hash tables configured (bind 2048/2048) [ 0.337602] 9pnet: Installing 9P2000 support [ 0.337612] Key type dns_resolver registered [ 0.337615] Key type ceph registered [ 0.337695] libceph: loaded (mon/osd proto 15/24) [ 0.337739] NET: Registered protocol family 40 [ 0.337739] hv_vmbus: registering driver hv_sock [ 0.337756] IPI shorthand broadcast: enabled [ 0.337761] sched_clock: Marking stable (337402400, 328500)->(344566800, -6835900) [ 0.337951] registered taskstats version 1 [ 0.337957] Loading compiled-in X.509 certificates [ 0.338089] Btrfs loaded, crc32c=crc32c-generic [ 0.340368] Freeing unused kernel image (initmem) memory: 1580K [ 0.406412] Write protecting the kernel read-only data: 28672k [ 0.407103] Freeing unused kernel image (text/rodata gap) memory: 2024K [ 0.407504] Freeing unused kernel image (rodata/data gap) memory: 1192K [ 0.407506] Run /init as init process [ 0.407506] with arguments: [ 0.407506] /init [ 0.407507] with environment: [ 0.407507] HOME=/ [ 0.407507] TERM=linux [ 38.826209] hv_vmbus: Unknown GUID: 6e382d18-3336-4f4b-acc4-2b7703d4df4a [ 38.826268] hv_vmbus: Unknown GUID: dde9cbc0-5060-4436-9448-ea1254a5d177 [ 38.828949] hv_pci ff4c2fda-0b6c-4969-aee1-ee6b25a26548: PCI VMBus probing: Using version 0x10003 [ 38.829810] hv_pci ff4c2fda-0b6c-4969-aee1-ee6b25a26548: PCI host bridge to bus 0b6c:00 [ 38.830071] pci 0b6c:00:00.0: [1414:008e] type 00 class 0x030200 [ 38.834793] (NULL device *): dxgk: mmio allocated c00000000 200000000 c00000000 dffffffff [ 38.957277] scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 [ 38.957777] sd 0:0:0:0: Attached scsi generic sg0 type 0 [ 38.958799] sd 0:0:0:0: [sda] 536870912 512-byte logical blocks: (275 GB/256 GiB) [ 38.958800] sd 0:0:0:0: [sda] 4096-byte physical blocks [ 38.959023] sd 0:0:0:0: [sda] Write Protect is off [ 38.959025] sd 0:0:0:0: [sda] Mode Sense: 0f 00 00 00 [ 38.959368] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 38.960707] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 39.242092] EXT4-fs (sda): mounted filesystem with ordered data mode. Opts: discard,errors=remount-ro,data=ordered [ 39.346759] sd 0:0:0:0: [sda] Attached SCSI disk [ 40.446435] Adding 27262976k swap on /swap/file. Priority:-2 extents:5 across:27295744k [ 41.116585] scsi 0:0:0:1: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 [ 41.117146] sd 0:0:0:1: Attached scsi generic sg1 type 0 [ 41.118205] sd 0:0:0:1: [sdb] 536870912 512-byte logical blocks: (275 GB/256 GiB) [ 41.118206] sd 0:0:0:1: [sdb] 4096-byte physical blocks [ 41.118328] sd 0:0:0:1: [sdb] Write Protect is off [ 41.118329] sd 0:0:0:1: [sdb] Mode Sense: 0f 00 00 00 [ 41.118625] sd 0:0:0:1: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 41.121488] sd 0:0:0:1: [sdb] Attached SCSI disk [ 41.133377] EXT4-fs (sdb): mounted filesystem with ordered data mode. 
Opts: discard,errors=remount-ro,data=ordered [ 41.143652] FS-Cache: Duplicate cookie detected [ 41.143653] FS-Cache: O-cookie c=00000000dd6b555d [p=000000001bc55327 fl=222 nc=0 na=1] [ 41.143654] FS-Cache: O-cookie d=0000000083bfbaeb n=000000000f13ab36 [ 41.143654] FS-Cache: O-key=[10] '34323934393431343035' [ 41.143657] FS-Cache: N-cookie c=0000000089a002d8 [p=000000001bc55327 fl=2 nc=0 na=1] [ 41.143658] FS-Cache: N-cookie d=0000000083bfbaeb n=00000000a41eebbb [ 41.143658] FS-Cache: N-key=[10] '34323934393431343035' [ 41.269501] FS-Cache: Duplicate cookie detected [ 41.269502] FS-Cache: O-cookie c=0000000068ab73f0 [p=000000001bc55327 fl=222 nc=0 na=1] [ 41.269503] FS-Cache: O-cookie d=0000000083bfbaeb n=00000000cb304dd0 [ 41.269503] FS-Cache: O-key=[10] '34323934393431343138' [ 41.269507] FS-Cache: N-cookie c=00000000f8ae9a8b [p=000000001bc55327 fl=2 nc=0 na=1] [ 41.269507] FS-Cache: N-cookie d=0000000083bfbaeb n=00000000a974287d [ 41.269507] FS-Cache: N-key=[10] '34323934393431343138' [ 41.271220] FS-Cache: Duplicate cookie detected [ 41.271221] FS-Cache: O-cookie c=0000000068ab73f0 [p=000000001bc55327 fl=222 nc=0 na=1] [ 41.271221] FS-Cache: O-cookie d=0000000083bfbaeb n=00000000cb304dd0 [ 41.271222] FS-Cache: O-key=[10] '34323934393431343138' [ 41.271225] FS-Cache: N-cookie c=000000008cce34cf [p=000000001bc55327 fl=2 nc=0 na=1] [ 41.271225] FS-Cache: N-cookie d=0000000083bfbaeb n=000000007debbb98 [ 41.271226] FS-Cache: N-key=[10] '34323934393431343138' [ 41.272900] FS-Cache: Duplicate cookie detected [ 41.272901] FS-Cache: O-cookie c=0000000068ab73f0 [p=000000001bc55327 fl=222 nc=0 na=1] [ 41.272902] FS-Cache: O-cookie d=0000000083bfbaeb n=00000000cb304dd0 [ 41.272902] FS-Cache: O-key=[10] '34323934393431343138' [ 41.272905] FS-Cache: N-cookie c=00000000119ca479 [p=000000001bc55327 fl=2 nc=0 na=1] [ 41.272906] FS-Cache: N-cookie d=0000000083bfbaeb n=000000005c433c92 [ 41.272906] FS-Cache: N-key=[10] '34323934393431343138' [ 41.274450] FS-Cache: Duplicate cookie detected [ 41.274451] FS-Cache: O-cookie c=0000000068ab73f0 [p=000000001bc55327 fl=222 nc=0 na=1] [ 41.274451] FS-Cache: O-cookie d=0000000083bfbaeb n=00000000cb304dd0 [ 41.274452] FS-Cache: O-key=[10] '34323934393431343138' [ 41.274455] FS-Cache: N-cookie c=00000000119ca479 [p=000000001bc55327 fl=2 nc=0 na=1] [ 41.274456] FS-Cache: N-cookie d=0000000083bfbaeb n=00000000b4b79833 [ 41.274456] FS-Cache: N-key=[10] '34323934393431343138' [ 41.275958] FS-Cache: Duplicate cookie detected [ 41.275960] FS-Cache: O-cookie c=0000000068ab73f0 [p=000000001bc55327 fl=222 nc=0 na=1] [ 41.275960] FS-Cache: O-cookie d=0000000083bfbaeb n=00000000cb304dd0 [ 41.275960] FS-Cache: O-key=[10] '34323934393431343138' [ 41.275964] FS-Cache: N-cookie c=00000000119ca479 [p=000000001bc55327 fl=2 nc=0 na=1] [ 41.275964] FS-Cache: N-cookie d=0000000083bfbaeb n=00000000d26055ac [ 41.275964] FS-Cache: N-key=[10] '34323934393431343138' [ 49.208537] hv_balloon: Max. dynamic memory size: 104788 MB [ 68.910114] EXT4-fs (sdb): mounted filesystem with ordered data mode. 
Opts: discard,errors=remount-ro,data=ordered [ 68.921484] FS-Cache: Duplicate cookie detected [ 68.921486] FS-Cache: O-cookie c=00000000bc269de8 [p=000000001bc55327 fl=222 nc=0 na=1] [ 68.921487] FS-Cache: O-cookie d=0000000083bfbaeb n=00000000c5f50706 [ 68.921487] FS-Cache: O-key=[10] '34323934393434313833' [ 68.921490] FS-Cache: N-cookie c=00000000de05fe51 [p=000000001bc55327 fl=2 nc=0 na=1] [ 68.921491] FS-Cache: N-cookie d=0000000083bfbaeb n=00000000d6f5c564 [ 68.921491] FS-Cache: N-key=[10] '34323934393434313833' [ 69.049679] FS-Cache: Duplicate cookie detected [ 69.049681] FS-Cache: O-cookie c=00000000720c1e76 [p=000000001bc55327 fl=222 nc=0 na=1] [ 69.049682] FS-Cache: O-cookie d=0000000083bfbaeb n=00000000d4ae87e6 [ 69.049682] FS-Cache: O-key=[10] '34323934393434313936' [ 69.049685] FS-Cache: N-cookie c=0000000089a002d8 [p=000000001bc55327 fl=2 nc=0 na=1] [ 69.049686] FS-Cache: N-cookie d=0000000083bfbaeb n=000000008869727e [ 69.049686] FS-Cache: N-key=[10] '34323934393434313936' [ 69.051121] FS-Cache: Duplicate cookie detected [ 69.051122] FS-Cache: O-cookie c=00000000720c1e76 [p=000000001bc55327 fl=222 nc=0 na=1] [ 69.051122] FS-Cache: O-cookie d=0000000083bfbaeb n=00000000d4ae87e6 [ 69.051123] FS-Cache: O-key=[10] '34323934393434313936' [ 69.051126] FS-Cache: N-cookie c=000000008cce34cf [p=000000001bc55327 fl=2 nc=0 na=1] [ 69.051127] FS-Cache: N-cookie d=0000000083bfbaeb n=00000000080ff813 [ 69.051127] FS-Cache: N-key=[10] '34323934393434313936' [ 69.052353] FS-Cache: Duplicate cookie detected [ 69.052354] FS-Cache: O-cookie c=00000000720c1e76 [p=000000001bc55327 fl=222 nc=0 na=1] [ 69.052355] FS-Cache: O-cookie d=0000000083bfbaeb n=00000000d4ae87e6 [ 69.052355] FS-Cache: O-key=[10] '34323934393434313936' [ 69.052358] FS-Cache: N-cookie c=00000000f8ae9a8b [p=000000001bc55327 fl=2 nc=0 na=1] [ 69.052359] FS-Cache: N-cookie d=0000000083bfbaeb n=000000001ecc1257 [ 69.052359] FS-Cache: N-key=[10] '34323934393434313936' [ 69.053636] FS-Cache: Duplicate cookie detected [ 69.053637] FS-Cache: O-cookie c=00000000720c1e76 [p=000000001bc55327 fl=222 nc=0 na=1] [ 69.053637] FS-Cache: O-cookie d=0000000083bfbaeb n=00000000d4ae87e6 [ 69.053638] FS-Cache: O-key=[10] '34323934393434313936' [ 69.053641] FS-Cache: N-cookie c=00000000f8ae9a8b [p=000000001bc55327 fl=2 nc=0 na=1] [ 69.053641] FS-Cache: N-cookie d=0000000083bfbaeb n=00000000174e009c [ 69.053642] FS-Cache: N-key=[10] '34323934393434313936' [ 69.055108] FS-Cache: Duplicate cookie detected [ 69.055109] FS-Cache: O-cookie c=00000000720c1e76 [p=000000001bc55327 fl=222 nc=0 na=1] [ 69.055109] FS-Cache: O-cookie d=0000000083bfbaeb n=00000000d4ae87e6 [ 69.055110] FS-Cache: O-key=[10] '34323934393434313936' [ 69.055113] FS-Cache: N-cookie c=00000000f8ae9a8b [p=000000001bc55327 fl=2 nc=0 na=1] [ 69.055113] FS-Cache: N-cookie d=0000000083bfbaeb n=0000000023001571 [ 69.055114] FS-Cache: N-key=[10] '34323934393434313936' [ 99.961212] WSL2: Performing memory compaction. same here, > 1 min to get the prompt at first start! I can see warning in dmesg... 
dmesg [ 0.000000] Linux version <IP_ADDRESS>-microsoft-standard-WSL2 (oe-user@oe-host) (x86_64-msft-linux-gcc (GCC) 9.3.0, GNU ld (GNU Binutils) <IP_ADDRESS>00220) #1 SMP Wed Nov 23 01:01:46 UTC 2022 [ 0.000000] Command line: initrd=\initrd.img WSL_ROOT_INIT=1 panic=-1 nr_cpus=8 swiotlb=force console=hvc0 debug pty.legacy_count=0 [ 0.000000] KERNEL supported cpus: [ 0.000000] Intel GenuineIntel [ 0.000000] AMD AuthenticAMD [ 0.000000] Centaur CentaurHauls [ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' [ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 [ 0.000000] x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 [ 0.000000] x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 [ 0.000000] x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 [ 0.000000] x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. [ 0.000000] signal: max sigframe size: 3632 [ 0.000000] BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable [ 0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000e0fff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000001fffff] ACPI data [ 0.000000] BIOS-e820: [mem 0x0000000000200000-0x00000000f7ffffff] usable [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x00000001faffffff] usable [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] DMI not present or invalid. 
[ 0.000000] Hypervisor detected: Microsoft Hyper-V [ 0.000000] Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0xa4e24, misc 0xe4bed7b6 [ 0.000000] Hyper-V Host Build:22621-10.0-1-0.1037 [ 0.000000] Hyper-V: Nested features: 0x3e0101 [ 0.000000] Hyper-V: LAPIC Timer Frequency: 0x1e8480 [ 0.000000] Hyper-V: Using hypercall for remote TLB flush [ 0.000000] clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns:<PHONE_NUMBER>20 ns [ 0.000003] tsc: Detected 1497.603 MHz processor [ 0.000014] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved [ 0.000017] e820: remove [mem 0x000a0000-0x000fffff] usable [ 0.000021] last_pfn = 0x1fb000 max_arch_pfn = 0x400000000 [ 0.000052] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [ 0.000064] last_pfn = 0xf8000 max_arch_pfn = 0x400000000 [ 0.000074] Using GB pages for direct mapping [ 0.000509] RAMDISK: [mem 0x03a35000-0x03c0efff] [ 0.000512] ACPI: Early table checksum verification disabled [ 0.000532] ACPI: RSDP 0x00000000000E0000 000024 (v02 VRTUAL) [ 0.000537] ACPI: XSDT 0x0000000000100000 000044 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000544] ACPI: FACP 0x0000000000101000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000550] ACPI: DSDT 0x00000000001011B8 01E191 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) [ 0.000555] ACPI: FACS 0x0000000000101114 000040 [ 0.000558] ACPI: OEM0 0x0000000000101154 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000562] ACPI: SRAT 0x000000000011F349 000330 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000567] ACPI: APIC 0x000000000011F679 000088 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) [ 0.000570] ACPI: Reserving FACP table memory at [mem 0x101000-0x101113] [ 0.000572] ACPI: Reserving DSDT table memory at [mem 0x1011b8-0x11f348] [ 0.000573] ACPI: Reserving FACS table memory at [mem 0x101114-0x101153] [ 0.000574] ACPI: Reserving OEM0 table memory at [mem 0x101154-0x1011b7] [ 0.000575] ACPI: Reserving SRAT table memory at [mem 0x11f349-0x11f678] [ 0.000576] ACPI: Reserving APIC table memory at [mem 0x11f679-0x11f700] [ 0.000875] Zone ranges: [ 0.000877] DMA [mem 0x0000000000001000-0x0000000000ffffff] [ 0.000880] DMA32 [mem 0x0000000001000000-0x00000000ffffffff] [ 0.000882] Normal [mem 0x0000000100000000-0x00000001faffffff] [ 0.000883] Device empty [ 0.000885] Movable zone start for each node [ 0.000886] Early memory node ranges [ 0.000886] node 0: [mem 0x0000000000001000-0x000000000009ffff] [ 0.000888] node 0: [mem 0x0000000000200000-0x00000000f7ffffff] [ 0.000890] node 0: [mem 0x0000000100000000-0x00000001faffffff] [ 0.000891] Initmem setup node 0 [mem 0x0000000000001000-0x00000001faffffff] [ 0.001138] On node 0, zone DMA: 1 pages in unavailable ranges [ 0.001163] On node 0, zone DMA: 352 pages in unavailable ranges [ 0.017073] On node 0, zone Normal: 20480 pages in unavailable ranges [ 0.017142] ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) [ 0.017649] IOAPIC[0]: apic_id 8, version 17, address 0xfec00000, GSI 0-23 [ 0.017657] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.017664] ACPI: Using ACPI (MADT) for SMP configuration information [ 0.017666] TSC deadline timer available [ 0.017668] smpboot: Allowing 8 CPUs, 0 hotplug CPUs [ 0.017711] [mem 0xf8000000-0xffffffff] available for PCI devices [ 0.017714] Booting paravirtualized kernel on Hyper-V [ 0.017717] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns:<PHONE_NUMBER>2750000 ns [ 0.027084] setup_percpu: 
NR_CPUS:256 nr_cpumask_bits:256 nr_cpu_ids:8 nr_node_ids:1 [ 0.027613] percpu: Embedded 53 pages/cpu s177496 r8192 d31400 u262144 [ 0.027622] pcpu-alloc: s177496 r8192 d31400 u262144 alloc=1*2097152 [ 0.027646] pcpu-alloc: [0] 0 1 2 3 4 5 6 7 [ 0.027692] Hyper-V: PV spinlocks enabled [ 0.027695] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) [ 0.027700] Built 1 zonelists, mobility grouping on. Total pages: 2010949 [ 0.027703] Kernel command line: initrd=\initrd.img WSL_ROOT_INIT=1 panic=-1 nr_cpus=8 swiotlb=force console=hvc0 debug pty.legacy_count=0 [ 0.027810] Unknown kernel command line parameters "WSL_ROOT_INIT=1", will be passed to user space. [ 0.030229] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) [ 0.031093] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) [ 0.031263] mem auto-init: stack:off, heap alloc:off, heap free:off [ 0.056435] Memory: 4081508K/8174204K available (18452K kernel code, 2627K rwdata, 9760K rodata, 2028K init, 1876K bss, 254960K reserved, 0K cma-reserved) [ 0.056493] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1 [ 0.056507] ftrace: allocating 54585 entries in 214 pages [ 0.103570] ftrace: allocated 214 pages with 5 groups [ 0.103890] rcu: Hierarchical RCU implementation. [ 0.103892] rcu: RCU restricting CPUs from NR_CPUS=256 to nr_cpu_ids=8. [ 0.103893] Rude variant of Tasks RCU enabled. [ 0.103894] Tracing variant of Tasks RCU enabled. [ 0.103895] rcu: RCU calculated value of scheduler-enlistment delay is 10 jiffies. [ 0.103896] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8 [ 0.108021] Using NULL legacy PIC [ 0.108024] NR_IRQS: 16640, nr_irqs: 488, preallocated irqs: 0 [ 0.108459] random: crng init done [ 0.108484] Console: colour dummy device 80x25 [ 0.108493] ACPI: Core revision 20210730 [ 0.108569] Failed to register legacy timer interrupt [ 0.108570] APIC: Switch to symmetric I/O mode setup [ 0.111038] x2apic enabled [ 0.114302] Switched APIC routing to physical x2apic. [ 0.114330] Hyper-V: Using IPI hypercalls [ 0.114392] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x15964a3c5e7, max_idle_ns:<PHONE_NUMBER>79 ns [ 0.114401] Calibrating delay loop (skipped), value calculated using timer frequency.. 2995.20 BogoMIPS (lpj=14976030) [ 0.114404] pid_max: default: 32768 minimum: 301 [ 0.114420] LSM: Security Framework initializing [ 0.114426] landlock: Up and running. 
[ 0.114472] Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) [ 0.114482] Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) [ 0.114719] x86/cpu: User Mode Instruction Prevention (UMIP) activated [ 0.114773] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 [ 0.114774] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 [ 0.114778] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization [ 0.114781] Spectre V2 : Mitigation: Enhanced IBRS [ 0.114782] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch [ 0.114782] Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT [ 0.114783] RETBleed: Mitigation: Enhanced IBRS [ 0.114784] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier [ 0.114786] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp [ 0.114791] MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode [ 0.114792] SRBDS: Unknown: Dependent on hypervisor status [ 0.124397] Freeing SMP alternatives memory: 60K [ 0.124397] smpboot: CPU0: Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz (family: 0x6, model: 0x7e, stepping: 0x5) [ 0.124397] Performance Events: AnyThread deprecated, Icelake events, 32-deep LBR, full-width counters, Intel PMU driver. [ 0.124397] ... version: 5 [ 0.124397] ... bit width: 48 [ 0.124397] ... generic registers: 8 [ 0.124397] ... value mask: 0000ffffffffffff [ 0.124397] ... max period: 00007fffffffffff [ 0.124397] ... fixed-purpose events: 4 [ 0.124397] ... event mask: 0001000f000000ff [ 0.124397] rcu: Hierarchical SRCU implementation. [ 0.124397] smp: Bringing up secondary CPUs ... [ 0.124397] x86: Booting SMP configuration: [ 0.124397] .... node #0, CPUs: #1 [ 0.124397] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
[ 0.124397] #2 #3 #4 #5 #6 #7 [ 0.124397] smp: Brought up 1 node, 8 CPUs [ 0.124397] smpboot: Max logical packages: 1 [ 0.124397] smpboot: Total of 8 processors activated (23961.64 BogoMIPS) [ 0.130886] node 0 deferred pages initialised in 10ms [ 0.130941] devtmpfs: initialized [ 0.130941] x86/mm: Memory block size: 128MB [ 0.130941] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns:<PHONE_NUMBER>2750000 ns [ 0.130941] futex hash table entries: 2048 (order: 5, 131072 bytes, linear) [ 0.130941] NET: Registered PF_NETLINK/PF_ROUTE protocol family [ 0.130941] thermal_sys: Registered thermal governor 'step_wise' [ 0.134426] cpuidle: using governor menu [ 0.134443] ACPI: bus type PCI registered [ 0.134456] PCI: Fatal: No config space access function found [ 0.135848] Kprobes globally optimized [ 0.135879] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages [ 0.135879] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages [ 0.144460] raid6: skip pq benchmark and using algorithm avx512x4 [ 0.144460] raid6: using avx512x2 recovery algorithm [ 0.144460] ACPI: Added _OSI(Module Device) [ 0.144460] ACPI: Added _OSI(Processor Device) [ 0.144460] ACPI: Added _OSI(3.0 _SCP Extensions) [ 0.144460] ACPI: Added _OSI(Processor Aggregator Device) [ 0.144460] ACPI: Added _OSI(Linux-Dell-Video) [ 0.144460] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) [ 0.144460] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) [ 0.154713] ACPI: 1 ACPI AML tables successfully acquired and loaded [ 0.155587] ACPI: Interpreter enabled [ 0.155590] ACPI: PM: (supports S0 S5) [ 0.155591] ACPI: Using IOAPIC for interrupt routing [ 0.155598] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 0.155734] ACPI: Enabled 2 GPEs in block 00 to 0F [ 0.156950] iommu: Default domain type: Translated [ 0.156952] iommu: DMA domain TLB invalidation policy: lazy mode [ 0.157030] SCSI subsystem initialized [ 0.157039] ACPI: bus type USB registered [ 0.157047] usbcore: registered new interface driver usbfs [ 0.157051] usbcore: registered new interface driver hub [ 0.157055] usbcore: registered new device driver usb [ 0.157063] pps_core: LinuxPPS API ver. 1 registered [ 0.157063] pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti<EMAIL_ADDRESS>[ 0.157065] PTP clock support registered [ 0.157091] hv_vmbus: Vmbus version:5.3 [ 0.157091] PCI: Using ACPI for IRQ routing [ 0.157091] PCI: System does not support PCI [ 0.157091] hv_vmbus: Unknown GUID: c376c1c3-d276-48d2-90a9-c04748072c60 [ 0.157091] clocksource: Switched to clocksource tsc-early [ 0.164741] VFS: Disk quotas dquot_6.6.0 [ 0.164751] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 0.164766] FS-Cache: Loaded [ 0.164787] pnp: PnP ACPI init [ 0.164968] pnp: PnP ACPI: found 1 devices [ 0.173289] NET: Registered PF_INET protocol family [ 0.173656] IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) [ 0.174685] tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) [ 0.174695] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) [ 0.174698] TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) [ 0.174757] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) [ 0.175087] TCP: Hash tables configured (established 65536 bind 65536) [ 0.175113] UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) [ 0.175126] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) [ 0.175157] NET: Registered PF_UNIX/PF_LOCAL protocol family [ 0.175971] RPC: Registered named UNIX socket transport module. [ 0.175974] RPC: Registered udp transport module. [ 0.175975] RPC: Registered tcp transport module. [ 0.175976] RPC: Registered tcp NFSv4.1 backchannel transport module. [ 0.175979] PCI: CLS 0 bytes, default 64 [ 0.176007] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 0.176009] software IO TLB: mapped [mem 0x00000000f4000000-0x00000000f8000000] (64MB) [ 0.176050] KVM: vmx: using Hyper-V Enlightened VMCS [ 0.176053] Trying to unpack rootfs image as initramfs... [ 0.177304] Freeing initrd memory: 1896K [ 0.514393] kvm: already loaded the other module [ 0.518938] Initialise system trusted keyrings [ 0.519261] workingset: timestamp_bits=46 max_order=21 bucket_order=0 [ 0.520297] squashfs: version 4.0 (2009/01/31) Phillip Lougher [ 0.520791] NFS: Registering the id_resolver key type [ 0.520811] Key type id_resolver registered [ 0.520813] Key type id_legacy registered [ 0.520816] nfs4filelayout_init: NFSv4 File Layout Driver Registering... [ 0.520819] nfs4flexfilelayout_init: NFSv4 Flexfile Layout Driver Registering... [ 0.520821] Installing knfsd (copyright (C) 1996 okir@monad.swb.de). 
[ 0.522733] Key type cifs.idmap registered [ 0.523000] fuse: init (API version 7.34) [ 0.523527] SGI XFS with ACLs, security attributes, realtime, scrub, repair, quota, no debug enabled [ 0.524487] 9p: Installing v9fs 9p2000 file system support [ 0.524556] FS-Cache: Netfs '9p' registered for caching [ 0.524622] FS-Cache: Netfs 'ceph' registered for caching [ 0.524625] ceph: loaded (mds proto 32) [ 0.539214] NET: Registered PF_ALG protocol family [ 0.539219] xor: automatically using best checksumming function avx [ 0.539222] Key type asymmetric registered [ 0.539223] Asymmetric key parser 'x509' registered [ 0.539267] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 248) [ 0.540689] hv_vmbus: registering driver hv_pci [ 0.541589] hv_pci 7a122d1d-f71d-44d4-a8c0-74567279369b: PCI VMBus probing: Using version 0x10004 [ 0.543467] hv_pci 7a122d1d-f71d-44d4-a8c0-74567279369b: PCI host bridge to bus f71d:00 [ 0.543469] pci_bus f71d:00: root bus resource [mem 0x9ffe00000-0x9ffe02fff window] [ 0.543472] pci_bus f71d:00: No busn resource found for root bus, will use [bus 00-ff] [ 0.544934] pci f71d:00:00.0: [1af4:1043] type 00 class 0x010000 [ 0.546493] pci f71d:00:00.0: reg 0x10: [mem 0x9ffe00000-0x9ffe00fff 64bit] [ 0.547505] pci f71d:00:00.0: reg 0x18: [mem 0x9ffe01000-0x9ffe01fff 64bit] [ 0.548531] pci f71d:00:00.0: reg 0x20: [mem 0x9ffe02000-0x9ffe02fff 64bit] [ 0.554856] pci_bus f71d:00: busn_res: [bus 00-ff] end is updated to 00 [ 0.554865] pci f71d:00:00.0: BAR 0: assigned [mem 0x9ffe00000-0x9ffe00fff 64bit] [ 0.555577] pci f71d:00:00.0: BAR 2: assigned [mem 0x9ffe01000-0x9ffe01fff 64bit] [ 0.556099] pci f71d:00:00.0: BAR 4: assigned [mem 0x9ffe02000-0x9ffe02fff 64bit] [ 0.556732] ACPI: AC: AC Adapter [AC1] (off-line) [ 0.557638] ACPI: battery: Slot [BAT1] (battery present) [ 0.561559] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled [ 0.618353] Non-volatile memory driver v1.3 [ 0.618580] [drm] Initialized vgem 1.0.0 20120112 for vgem on minor 0 [ 0.620471] printk: console [hvc0] enabled [ 0.622282] brd: module loaded [ 0.623971] loop: module loaded [ 0.624393] hv_vmbus: registering driver hv_storvsc [ 0.625609] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. [ 0.626665] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld<EMAIL_ADDRESS>All Rights Reserved. 
[ 0.626745] scsi host0: storvsc_host_t [ 0.627671] tun: Universal TUN/TAP device driver, 1.6 [ 0.628737] PPP generic driver version 2.4.2 [ 0.629441] PPP BSD Compression module registered [ 0.629865] PPP Deflate Compression module registered [ 0.630282] PPP MPPE Compression module registered [ 0.630765] NET: Registered PF_PPPOX protocol family [ 0.631273] usbcore: registered new interface driver cdc_ether [ 0.631958] usbcore: registered new interface driver cdc_ncm [ 0.632494] usbcore: registered new interface driver r8153_ecm [ 0.633092] hv_vmbus: registering driver hv_netvsc [ 0.634595] VFIO - User Level meta-driver version: 0.3 [ 0.635631] usbcore: registered new interface driver cdc_acm [ 0.636379] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters [ 0.637196] usbcore: registered new interface driver ch341 [ 0.637743] usbserial: USB Serial support registered for ch341-uart [ 0.638397] usbcore: registered new interface driver cp210x [ 0.638787] usbserial: USB Serial support registered for cp210x [ 0.639261] usbcore: registered new interface driver ftdi_sio [ 0.639772] usbserial: USB Serial support registered for FTDI USB Serial Device [ 0.640850] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller [ 0.641762] vhci_hcd vhci_hcd.0: new USB bus registered, assigned bus number 1 [ 0.642733] vhci_hcd: created sysfs vhci_hcd.0 [ 0.643926] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.15 [ 0.644906] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 0.645567] usb usb1: Product: USB/IP Virtual Host Controller [ 0.646127] usb usb1: Manufacturer: Linux <IP_ADDRESS>-microsoft-standard-WSL2 vhci_hcd [ 0.646790] usb usb1: SerialNumber: vhci_hcd.0 [ 0.647472] hub 1-0:1.0: USB hub found [ 0.647878] hub 1-0:1.0: 8 ports detected [ 0.649168] vhci_hcd vhci_hcd.0: USB/IP Virtual Host Controller [ 0.649918] vhci_hcd vhci_hcd.0: new USB bus registered, assigned bus number 2 [ 0.651469] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
[ 0.653319] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.15 [ 0.654036] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 0.654792] usb usb2: Product: USB/IP Virtual Host Controller [ 0.655566] usb usb2: Manufacturer: Linux <IP_ADDRESS>-microsoft-standard-WSL2 vhci_hcd [ 0.656333] usb usb2: SerialNumber: vhci_hcd.0 [ 0.657379] hub 2-0:1.0: USB hub found [ 0.657796] hub 2-0:1.0: 8 ports detected [ 0.660544] hv_vmbus: registering driver hyperv_keyboard [ 0.661497] rtc_cmos 00:00: RTC can wake from S4 [ 0.664442] rtc_cmos 00:00: registered as rtc0 [ 0.665434] rtc_cmos 00:00: setting system clock to 2022-12-22T18:57:57 UTC<PHONE_NUMBER>) [ 0.666057] rtc_cmos 00:00: alarms up to one month, 114 bytes nvram [ 0.667268] device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised<EMAIL_ADDRESS>[ 0.668657] device-mapper: raid: Loading target version 1.15.1 [ 0.669432] usbcore: registered new interface driver usbhid [ 0.670021] usbhid: USB HID core driver [ 0.670499] hv_utils: Registering HyperV Utility Driver [ 0.671035] hv_vmbus: registering driver hv_utils [ 0.671557] hv_vmbus: registering driver hv_balloon [ 0.672104] hv_utils: TimeSync IC version 4.0 [ 0.672393] hv_vmbus: registering driver dxgkrnl [ 0.673088] hv_balloon: Using Dynamic Memory protocol version 2.0 [ 0.674331] drop_monitor: Initializing network drop monitor service [ 0.675992] Mirror/redirect action on [ 0.676756] Free page reporting enabled [ 0.676831] u32 classifier [ 0.677142] hv_balloon: Cold memory discard hint enabled with order 9 [ 0.677395] Performance counters on [ 0.678366] input device check on [ 0.678709] Actions configured [ 0.680139] IPVS: Registered protocols (TCP, UDP) [ 0.680685] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 0.681481] IPVS: ipvs loaded. [ 0.681911] IPVS: [rr] scheduler registered. [ 0.682398] IPVS: [wrr] scheduler registered. [ 0.682873] IPVS: [sh] scheduler registered. 
[ 0.683956] ipip: IPv4 and MPLS over IPv4 tunneling driver [ 0.684983] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully [ 0.685686] Initializing XFRM netlink socket [ 0.686421] NET: Registered PF_INET6 protocol family [ 0.687744] Segment Routing with IPv6 [ 0.688236] In-situ OAM (IOAM) with IPv6 [ 0.688687] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver [ 0.689475] NET: Registered PF_PACKET protocol family [ 0.690106] Bridge firewalling registered [ 0.690483] 8021q: 802.1Q VLAN Support v1.8 [ 0.690849] sctp: Hash tables configured (bind 256/256) [ 0.691407] 9pnet: Installing 9P2000 support [ 0.692164] Key type dns_resolver registered [ 0.692685] Key type ceph registered [ 0.693348] libceph: loaded (mon/osd proto 15/24) [ 0.694137] NET: Registered PF_VSOCK protocol family [ 0.694755] hv_vmbus: registering driver hv_sock [ 0.695423] IPI shorthand broadcast: enabled [ 0.695963] sched_clock: Marking stable (688735411, 5982800)->(723128700, -28410489) [ 0.697569] registered taskstats version 1 [ 0.698896] Loading compiled-in X.509 certificates [ 0.700860] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no [ 0.705077] Freeing unused kernel image (initmem) memory: 2028K [ 0.774589] Write protecting the kernel read-only data: 30720k [ 0.776029] Freeing unused kernel image (text/rodata gap) memory: 2024K [ 0.776883] Freeing unused kernel image (rodata/data gap) memory: 480K [ 0.777396] Run /init as init process [ 0.777680] with arguments: [ 0.777961] /init [ 0.778150] with environment: [ 0.778427] HOME=/ [ 0.778613] TERM=linux [ 0.778802] WSL_ROOT_INIT=1 [ 0.977668] hv_vmbus: Unknown GUID: 6e382d18-3336-4f4b-acc4-2b7703d4df4a [ 0.981164] hv_vmbus: Unknown GUID: dde9cbc0-5060-4436-9448-ea1254a5d177 [ 0.981971] hv_pci 5418748e-f074-4752-be2b-5543a744b4e8: PCI VMBus probing: Using version 0x10004 [ 0.984631] hv_pci 5418748e-f074-4752-be2b-5543a744b4e8: PCI host bridge to bus f074:00 [ 0.985653] pci_bus f074:00: No busn resource found for root bus, will use [bus 00-ff] [ 0.987182] pci f074:00:00.0: [1414:008e] type 00 class 0x030200 [ 0.999045] pci_bus f074:00: busn_res: [bus 00-ff] end is updated to 00 [ 1.001699] hv_vmbus: Unknown GUID: 6e382d18-3336-4f4b-acc4-2b7703d4df4a [ 1.002907] scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 [ 1.003346] hv_pci 58f09c14-6325-4782-804b-bebc0eceadfb: PCI VMBus probing: Using version 0x10004 [ 1.007452] hv_pci 58f09c14-6325-4782-804b-bebc0eceadfb: PCI host bridge to bus 6325:00 [ 1.007659] sd 0:0:0:0: Attached scsi generic sg0 type 0 [ 1.009006] pci_bus 6325:00: No busn resource found for root bus, will use [bus 00-ff] [ 1.010443] sd 0:0:0:0: [sda] 743696 512-byte logical blocks: (381 MB/363 MiB) [ 1.012653] sd 0:0:0:0: [sda] Write Protect is on [ 1.012966] pci 6325:00:00.0: [1414:008e] type 00 class 0x030200 [ 1.014161] sd 0:0:0:0: [sda] Mode Sense: 0f 00 80 00 [ 1.015194] sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA [ 1.026476] pci_bus 6325:00: busn_res: [bus 00-ff] end is updated to 00 [ 1.127481] hv_pci cb79be22-476b-4e83-a06e-ccaf26e88b73: PCI VMBus probing: Using version 0x10004 [ 1.130728] hv_pci cb79be22-476b-4e83-a06e-ccaf26e88b73: PCI host bridge to bus 476b:00 [ 1.131511] pci_bus 476b:00: root bus resource [mem 0x9ffe04000-0x9ffe06fff window] [ 1.132267] pci_bus 476b:00: No busn resource found for root bus, will use [bus 00-ff] [ 1.135433] pci 476b:00:00.0: [1af4:1049] type 00 class 0x010000 [ 1.140152] pci 476b:00:00.0: reg 0x10: [mem 0x9ffe04000-0x9ffe04fff 64bit] [ 
1.147541] pci 476b:00:00.0: reg 0x18: [mem 0x9ffe05000-0x9ffe05fff 64bit] [ 1.154689] pci 476b:00:00.0: reg 0x20: [mem 0x9ffe06000-0x9ffe06fff 64bit] [ 1.177070] pci_bus 476b:00: busn_res: [bus 00-ff] end is updated to 00 [ 1.178992] pci 476b:00:00.0: BAR 0: assigned [mem 0x9ffe04000-0x9ffe04fff 64bit] [ 1.184625] pci 476b:00:00.0: BAR 2: assigned [mem 0x9ffe05000-0x9ffe05fff 64bit] [ 1.188138] pci 476b:00:00.0: BAR 4: assigned [mem 0x9ffe06000-0x9ffe06fff 64bit] [ 1.554535] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x15964a3c5e7, max_idle_ns:<PHONE_NUMBER>79 ns [ 1.560652] clocksource: Switched to clocksource tsc [ 1.635519] EXT4-fs (sda): mounted filesystem without journal. Opts: (null). Quota mode: none. [ 1.745782] sd 0:0:0:0: [sda] Attached SCSI disk [ 2.783452] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 3.225207] scsi 0:0:0:1: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 [ 3.278172] sd 0:0:0:1: Attached scsi generic sg1 type 0 [ 3.303039] sd 0:0:0:1: [sdb] 4194312 512-byte logical blocks: (2.15 GB/2.00 GiB) [ 3.340927] sd 0:0:0:1: [sdb] 4096-byte physical blocks [ 3.347097] sd 0:0:0:1: [sdb] Write Protect is off [ 3.349100] sd 0:0:0:1: [sdb] Mode Sense: 0f 00 00 00 [ 3.353032] sd 0:0:0:1: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 3.419991] sd 0:0:0:1: [sdb] Attached SCSI disk [ 3.499583] hv_pci b8b20fd0-37e5-4d1c-a211-c0f9c0f1aa90: PCI VMBus probing: Using version 0x10004 [ 3.508791] hv_pci b8b20fd0-37e5-4d1c-a211-c0f9c0f1aa90: PCI host bridge to bus 37e5:00 [ 3.510806] pci_bus 37e5:00: root bus resource [mem 0xc00000000-0xe00001fff window] [ 3.513310] pci_bus 37e5:00: No busn resource found for root bus, will use [bus 00-ff] [ 3.520663] pci 37e5:00:00.0: [1af4:105a] type 00 class 0x088000 [ 3.523352] Adding 2097152k swap on /dev/sdb. 
Priority:-2 extents:1 across:2097152k [ 3.532004] pci 37e5:00:00.0: reg 0x10: [mem 0xe00000000-0xe00000fff 64bit] [ 3.543128] pci 37e5:00:00.0: reg 0x18: [mem 0xe00001000-0xe00001fff 64bit] [ 3.551756] pci 37e5:00:00.0: reg 0x20: [mem 0xc00000000-0xdffffffff 64bit] [ 3.569549] pci_bus 37e5:00: busn_res: [bus 00-ff] end is updated to 00 [ 3.571275] pci 37e5:00:00.0: BAR 4: assigned [mem 0xc00000000-0xdffffffff 64bit] [ 3.579733] pci 37e5:00:00.0: BAR 0: assigned [mem 0xe00000000-0xe00000fff 64bit] [ 3.589274] pci 37e5:00:00.0: BAR 2: assigned [mem 0xe00001000-0xe00001fff 64bit] [ 3.632777] virtiofs virtio2: Cache len: 0x200000000 @ 0xc00000000 [ 3.721220] memmap_init_zone_device initialised 2097152 pages in 40ms [ 3.758195] FS-Cache: Duplicate cookie detected [ 3.759185] FS-Cache: O-cookie c=00000005 [p=00000002 fl=222 nc=0 na=1] [ 3.760257] FS-Cache: O-cookie d=00000000c0c3f43d{9P.session} n=000000008230811e [ 3.761728] FS-Cache: O-key=[10] '34323934393337363630' [ 3.763261] FS-Cache: N-cookie c=00000006 [p=00000002 fl=2 nc=0 na=1] [ 3.764843] FS-Cache: N-cookie d=00000000c0c3f43d{9P.session} n=00000000d4ee9470 [ 3.769920] FS-Cache: N-key=[10] '34323934393337363630' [ 3.792831] scsi 0:0:0:2: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 [ 3.796511] sd 0:0:0:2: Attached scsi generic sg2 type 0 [ 3.799087] sd 0:0:0:2: [sdc] 536870912 512-byte logical blocks: (275 GB/256 GiB) [ 3.801713] sd 0:0:0:2: [sdc] 4096-byte physical blocks [ 3.803989] sd 0:0:0:2: [sdc] Write Protect is off [ 3.805439] sd 0:0:0:2: [sdc] Mode Sense: 0f 00 00 00 [ 3.807079] sd 0:0:0:2: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 3.817341] sd 0:0:0:2: [sdc] Attached SCSI disk [ 3.828241] EXT4-fs (sdc): mounted filesystem with ordered data mode. Opts: discard,errors=remount-ro,data=ordered. Quota mode: none. [ 49.524010] hv_balloon: Max. 
dynamic memory size: 7984 MB [ 151.995166] hv_pci ace788eb-48f3-4551-b73e-00cb068b60f5: PCI VMBus probing: Using version 0x10004 [ 151.998021] hv_pci ace788eb-48f3-4551-b73e-00cb068b60f5: PCI host bridge to bus 48f3:00 [ 151.998787] pci_bus 48f3:00: root bus resource [mem 0x9ffe08000-0x9ffe0afff window] [ 151.999520] pci_bus 48f3:00: No busn resource found for root bus, will use [bus 00-ff] [ 152.001553] pci 48f3:00:00.0: [1af4:1049] type 00 class 0x010000 [ 152.003512] pci 48f3:00:00.0: reg 0x10: [mem 0x9ffe08000-0x9ffe08fff 64bit] [ 152.005090] pci 48f3:00:00.0: reg 0x18: [mem 0x9ffe09000-0x9ffe09fff 64bit] [ 152.007605] pci 48f3:00:00.0: reg 0x20: [mem 0x9ffe0a000-0x9ffe0afff 64bit] [ 152.013383] pci_bus 48f3:00: busn_res: [bus 00-ff] end is updated to 00 [ 152.014336] pci 48f3:00:00.0: BAR 0: assigned [mem 0x9ffe08000-0x9ffe08fff 64bit] [ 152.015809] pci 48f3:00:00.0: BAR 2: assigned [mem 0x9ffe09000-0x9ffe09fff 64bit] [ 152.017878] pci 48f3:00:00.0: BAR 4: assigned [mem 0x9ffe0a000-0x9ffe0afff 64bit] [ 152.193635] hv_pci 30ae5498-226c-4f9a-8b89-4076026ba326: PCI VMBus probing: Using version 0x10004 [ 152.197243] hv_pci 30ae5498-226c-4f9a-8b89-4076026ba326: PCI host bridge to bus 226c:00 [ 152.198289] pci_bus 226c:00: root bus resource [mem 0x9ffe0c000-0x9ffe0efff window] [ 152.199105] pci_bus 226c:00: No busn resource found for root bus, will use [bus 00-ff] [ 152.201432] pci 226c:00:00.0: [1af4:1049] type 00 class 0x010000 [ 152.204814] pci 226c:00:00.0: reg 0x10: [mem 0x9ffe0c000-0x9ffe0cfff 64bit] [ 152.207861] pci 226c:00:00.0: reg 0x18: [mem 0x9ffe0d000-0x9ffe0dfff 64bit] [ 152.210252] pci 226c:00:00.0: reg 0x20: [mem 0x9ffe0e000-0x9ffe0efff 64bit] [ 152.220100] pci_bus 226c:00: busn_res: [bus 00-ff] end is updated to 00 [ 152.221113] pci 226c:00:00.0: BAR 0: assigned [mem 0x9ffe0c000-0x9ffe0cfff 64bit] [ 152.223297] pci 226c:00:00.0: BAR 2: assigned [mem 0x9ffe0d000-0x9ffe0dfff 64bit] [ 152.225860] pci 226c:00:00.0: BAR 4: assigned [mem 0x9ffe0e000-0x9ffe0efff 64bit] [ 152.432007] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22 [ 152.433098] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22 [ 152.433886] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22 [ 152.434944] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 [ 152.436587] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22 [ 152.437515] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22 [ 152.438483] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -22 [ 152.439373] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 [ 155.706041] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 [ 156.724883] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 [ 156.726034] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 [ 156.727046] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 [ 156.728095] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 [ 156.729057] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 [ 156.730220] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 [ 156.731184] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 [ 156.732531] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 [ 156.733987] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 [ 156.735097] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 [ 158.360930] misc dxg: dxgk: dxgkio_query_adapter_info: Ioctl failed: -2 Same here. 
Look at the delay in the unshorted output of dmesg after Adding 4194304k swap on /dev/sdb. [ 2.613965] EXT4-fs (sdc): mounted filesystem with ordered data mode. Opts: discard,errors=remount-ro,data=ordered. Quota mode: none. [ 2.619164] Adding 4194304k swap on /dev/sdb. Priority:-2 extents:1 across:4194304k [ 49.196057] hv_balloon: Max. dynamic memory size: 16344 MB [ 65.950886] /sbin/ldconfig: [ 65.950889] /usr/lib/wsl/lib/libcuda.so.1 is not a symbolic link [ 66.025550] hv_pci b5a1914e-7eff-475d-aede-30c08f905603: PCI VMBus probing: Using version 0x10004 [ 66.026447] 9pnet_virtio: no channels available for device drvfs Same here. Look at the delay in the unshorted output of dmesg after Adding 4194304k swap on /dev/sdb. [ 2.613965] EXT4-fs (sdc): mounted filesystem with ordered data mode. Opts: discard,errors=remount-ro,data=ordered. Quota mode: none. [ 2.619164] Adding 4194304k swap on /dev/sdb. Priority:-2 extents:1 across:4194304k [ 49.196057] hv_balloon: Max. dynamic memory size: 16344 MB [ 65.950886] /sbin/ldconfig: [ 65.950889] /usr/lib/wsl/lib/libcuda.so.1 is not a symbolic link [ 66.025550] hv_pci b5a1914e-7eff-475d-aede-30c08f905603: PCI VMBus probing: Using version 0x10004 [ 66.026447] 9pnet_virtio: no channels available for device drvfs Surprisingly, in my case, addressing the /usr/lib/wsl/lib/libcuda.so.1 is not a symbolic link error helped speed up the startup time. Here are the instructions: https://github.com/microsoft/WSL/issues/5663#issuecomment-1068499676 I still see about ~40s between the addition of the swap and the hv_balloon messages. I've tried reducing both memory and swap sizes for the VM, but it did not seem to make any differences. .wslconfig is where those settings are specified. I was wondering if the slowdown might have something to do with inaccessible mounted network devices that I have mapped to some of my disk drives. It looks like there is another config file, wsl.config that is responsible for populating initial /mnt/ content. Thanks for the hint with the network share! After enabling the Network-Share server the startup of wsl2 takes only 5.8 seconds! Did you figured out how you can ignore only one specific drive for automount? As far I can see, you are only be able to completely disable automount for all drives. Also I will give the libcuba a try, maybe I'm able to speedup the boot even more. Thanks for the hint with the network share! After enabling the Network-Share server the startup of wsl2 takes only 5.8 seconds! Did you figured out how you can ignore only one specific drive for automount? As far I can see, you are only be able to completely disable automount for all drives. Also I will give the libcuba a try, maybe I'm able to speedup the boot even more. How did you enable the Network-Share server? Thanks for the hint with the network share! After enabling the Network-Share server the startup of wsl2 takes only 5.8 seconds! Did you figured out how you can ignore only one specific drive for automount? As far I can see, you are only be able to completely disable automount for all drives. Also I will give the libcuba a try, maybe I'm able to speedup the boot even more. How did you enable the Network-Share server? Edit: Do you mean that after enabling the server that serves your network mapped drive wsl2 started to boot faster? Yes, exactly. I booted my local server which acts as a smb-share. After that, the wsl startup was much faster! Currently I disabled automount in wsl.conf since I don't need my C-Drive in wsl. 
I guess, it would be great if mounts would not pause the WSL boot process. Should it be possible to run them in parallel to the rest of the actions. I wonder if there is already a matching issue. Seems like issues with automount and inaccessible drives is a common source of problems. Here are just a few other issues: https://github.com/microsoft/WSL/issues/8953, https://github.com/microsoft/WSL/issues/7565 This comment explains how to disable automount for all drives, but instead mount just those that you care about: https://github.com/microsoft/WSL/issues/7565#issuecomment-1227496759 I disabled automount but I still see a long pause ( ~30s) when first booting WSL if a network drive in my host machine is not available, even though that drive is not set to mount in fstab Is seems like the issue is related to network drives. I have business machine which has few mapped drives. When I'm at work WSL start within 4 sec. When I'm at home and all my work drives are unavailable it starts in 50 sec. When at home connected to VPN (drives are online) again it starts in 4 sec. I have disabled automount in wsl.conf. Using Debian distro in WSL. I am also experiencing slow startup times. If mounting of C:\ is causing the delay, it would really be great if the mounting could be done in parallell/lazily. It it very rare that I need immediate access to C:\ (but I want the prompt to be available asap) I guess, it would be great if mounts would not pause the WSL boot process. Should it be possible to run them in parallel to the rest of the actions. I wonder if there is already a matching issue. Is seems like the issue is related to network drives. I have business machine which has few mapped drives. When I'm at work WSL start within 4 sec. When I'm at home and all my work drives are unavailable it starts in 50 sec. When at home connected to VPN (drives are online) again it starts in 4 sec. I have disabled automount in wsl.conf. Using Debian distro in WSL. I have exactly the same behavior! I can add that it also applies to FileExplorer which remains unresponsive at first launch in the similar situation. It's incredible that mount trials can't be done in background , letting the main process responsive and usable even if this fails. I'm not sure if it's related, but disconnecting inaccessible mapped network drives solved it for me. Here's an issue -> #9358 I'm not sure if it's related, but disconnecting inaccessible mapped network drives solved it for me. Here's an issue -> #9358 Again, this is not a solution, but a workaround with tradeoffs. For me personally disconnecting mapped drives is not an option. I have 6 mapped drives that are offline for most of the time.
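For anyone landing here for the workaround discussed above, here is a minimal sketch of the relevant configuration. The file paths are the standard ones; the drive letter and mount point are illustrative assumptions, not values from this thread.

```ini
# /etc/wsl.conf inside the distro: stop WSL from auto-mounting every Windows drive,
# but still process /etc/fstab so you can mount just the drives you need.
[automount]
enabled = false
mountFsTab = true
```

```
# /etc/fstab: mount only the drive(s) you actually use (example entry)
C: /mnt/c drvfs defaults 0 0
```

A `wsl --shutdown` from Windows is needed afterwards for the new settings to take effect.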
2025-04-01T04:34:42.235015
2021-09-02T14:15:18
986791182
{ "authors": [ "D3vil0p3r", "felipecrs" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8478", "repo": "microsoft/WSL", "url": "https://github.com/microsoft/WSL/issues/7381" }
gharchive/issue
Cannot find direct download URL for Ubuntu (non-versioned) AppX For solving https://github.com/microsoft/winget-pkgs/issues/26805, I need the URL where I can download the Ubuntu.appx (the non-versioned version) which is found at https://www.microsoft.com/en-us/p/ubuntu/9nblggh4msv6. I suppose this download URL exists already, as wsl --install --distribution ubuntu works. It's possible to get the appx bundles with https://store.rg-adguard.net/ by referring to https://www.microsoft.com/en-us/p/ubuntu/9nblggh4msv6: But I cannot download such files from the regular download servers, as we need a direct link for integrating with WinGet. @felipecrs did you find a solution to your issue? I have the same issue (see the reference above). Not exactly the way I wanted, but: Winget can now install MS Store apps directly wsl --install Ubuntu is simple enough
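Expanding on that resolution, a hedged sketch of the two current options. The Store product ID below is the one from the URL in this issue; the winget flags are standard, but I have not verified this exact invocation end to end:

```powershell
# Install the non-versioned Ubuntu package from the Microsoft Store source
winget install --id 9NBLGGH4MSV6 --source msstore --accept-source-agreements --accept-package-agreements

# Or let WSL pull it directly
wsl --install Ubuntu
```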
2025-04-01T04:34:42.243304
2022-07-12T18:25:23
1302465825
{ "authors": [ "joachimnielandt" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8479", "repo": "microsoft/WSL", "url": "https://github.com/microsoft/WSL/issues/8598" }
gharchive/issue
Network crashes - Connectivity to ubuntu guest fails after file upload / network stress Version Microsoft Windows [Version 10.0.22000.778] WSL Version [X] WSL 2 [ ] WSL 1 Kernel Version <IP_ADDRESS> Distro Version Ubuntu-22.04 Other Software Client: Docker Engine - Community Version: 20.10.17 Repro Steps Network connectivity seems to drop after stressing the network. I experienced this by running a tomcat docker container and uploading a large (200MB) .war file - invariably this fails running iperf3 server in ubuntu and iperf3 client in windows - after a while this fails After failure, I cannot ping my docker services or connect to my iperf3 server. After wsl --shutdown, network is restored (until it fails again). Services remain running in WSL though: I can access them from within Ubuntu, just not from Windows. Expected Behavior Network connectivity remains, even when stressed. Actual Behavior Network fails. Diagnostic Logs Not sure how to proceed - is it possible to generate a network-stack debug log for WSL? Pleasure, thanks for pointing out how to create a log file, wasn't aware of this option before WslLogs-2022-07-13_10-20-00.zip . I created a second one, this time a bit more concise I hope. I shutdown wsl, then started docker inside, then initiated the tomcat upload of the .war file, then saw the network crash occur, then initiated a wsl shutdown. WslLogs-2022-07-13_11-54-02.zip This might be relevant, but not clear to me at this point. There's plan9 and docker activity (involving port 8080, which is exposing tomcat). Couple seconds later, the host network service is stopped?
2025-04-01T04:34:42.251334
2022-09-02T13:45:11
1360218548
{ "authors": [ "MaikoDeizepi", "NotTheDr01ds" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8480", "repo": "microsoft/WSL", "url": "https://github.com/microsoft/WSL/issues/8790" }
gharchive/issue
WSL not updating and not recognized Version Microsoft Windows [Version 10.0.19044.1889] WSL Version [ ] WSL 2 [X] WSL 1 Kernel Version <IP_ADDRESS> Distro Version Ubuntu 22.04 Other Software No response Repro Steps I cannot update WSL and WSL commands are not recognized. PS C:\WINDOWS\system32> wsl --install -d Ubuntu-20.04 Ubuntu 20.04 LTS is already installed. Launching Ubuntu 20.04 LTS... PS C:\WINDOWS\system32> wsl --list --verbose Windows Subsystem for Linux has no installed distributions. Distributions can be installed by visiting the Microsoft Store: https://aka.ms/wslstore PS C:\WINDOWS\system32> Expected Behavior I want it to work the same as described at https://docs.microsoft.com/pt-br/windows/wsl/basic-commands#install-a-specific-linux-distribution Actual Behavior WSL is not recognized and does not update to version 2 Diagnostic Logs PS C:\WINDOWS\system32> wsl --install -d Ubuntu-20.04 Ubuntu 20.04 LTS is already installed. Launching Ubuntu 20.04 LTS... PS C:\WINDOWS\system32> wsl --list --verbose Windows Subsystem for Linux has no installed distributions. Distributions can be installed by visiting the Microsoft Store: https://aka.ms/wslstore PS C:\WINDOWS\system32> Did you perhaps run wsl --install -d Ubuntu-20.04 under an Administrator (or different) account? I would try: Checking Add or Remove Programs to see if Ubuntu shows installed there. If so, uninstall it. I'm assuming that you have no data inside Ubuntu, since it sounds like you haven't been able to launch it yet, right? Once uninstalled, reboot. Open PowerShell as your regular user (not an administrator account) and try wsl --install -d Ubuntu-20.04 again. Note that your issue above says that the Distro version is 22.04, but then talks about 20.04. Do you have both installed? This is okay, I just want to make sure readers of the issue have a clear picture of what you're working with.
2025-04-01T04:34:42.264117
2021-07-23T19:23:32
951840887
{ "authors": [ "AdamBraden", "andrewleader", "mrlacey" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8481", "repo": "microsoft/WindowsAppSDK", "url": "https://github.com/microsoft/WindowsAppSDK/issues/1111" }
gharchive/issue
Support NuGet packages using WinAppSDK without requiring parent app to install WinAppSDK Summary Traditionally, if you install a NuGet package in your app, it "just works", because the package automatically includes any dependencies it requires. However, with WinAppSDK, if a NuGet library uses the WinAppSDK, the parent app also still has to install the WinAppSDK NuGet package in their app, otherwise the library won't work. Make this automatically work, or at least provide an error when this happens! Is this because of some way the WinAppSDK NuGet package is configured or something else? I've looked at the linked issues and can't see how this is being caused. The way this is written it seems like a NuGet bug as a declared dependency isn't being respected. We need to teach the props/targets to deploy the main packages. There is already logic for handling framework packages. This is a workitem already tracked internally as 33356079.
2025-04-01T04:34:42.265422
2022-03-16T22:44:40
1171632944
{ "authors": [ "dhoehna" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8482", "repo": "microsoft/WindowsAppSDK", "url": "https://github.com/microsoft/WindowsAppSDK/pull/2277" }
gharchive/pull-request
Adding contracts for Environment Manager Adding contracts to Environment Manager. /azp run Abandoning PR because this change is present in https://github.com/microsoft/WindowsAppSDK/pull/2278
2025-04-01T04:34:42.295306
2019-09-20T16:26:26
496441832
{ "authors": [ "MatkovIvan", "ivanicin", "jwargo", "winnieli1" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8483", "repo": "microsoft/appcenter", "url": "https://github.com/microsoft/appcenter/issues/1062" }
gharchive/issue
Tracking NSErrors Describe the solution you'd like An API like Microsoft.AppCenter.Crashes.Crashes.TrackError, but for native exceptions. Describe alternatives you've considered Right now you can raise exception when the NSError appears and then use standard logging, but if that is the only purpose of raising exception it feels like a wrong thing to do. Additional context Add any other context or screenshots about the feature request here. @ivanicin thanks for this suggestion, I'll share this with the service PM. Thanks for the feature request! It sounds like you're looking for a way to log non fatal crashes (handled exceptions) for iOS apps? This isn't on our short term roadmap but is absolutely on our radar. I'll post here if I have any updates! (Reference #192 for similar request) Yes I would say it is the same, just that issue doesn't contain any expected keywords in the title or even description (use logging like crashalitycs is not something everyone will be searching for). There is some difference though as I suggest logging of all native issues so it would include Java exceptions too. But the main purpose was for that NSError logging. Duplicate of #192
2025-04-01T04:34:42.297168
2019-06-04T08:30:23
451864042
{ "authors": [ "SogoGolf", "winnieli1208" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8484", "repo": "microsoft/appcenter", "url": "https://github.com/microsoft/appcenter/issues/516" }
gharchive/issue
add a new column in crash reporting "Issues" that provides a ref# Describe the solution you'd like want to be able to quickly/easily get a "reference #" for each crash item so we can include it in our commits (ie. that fixes the given issue) Describe alternatives you've considered go up to the address bar in browser; highlight the URL in appcenter (of the crash). copy and paste that into the commit....yuk Thanks for the request! I agree it's a pretty yucky experience. Due to other priorities at the moment, we don't have immediate plans to address this. We'll keep this issue open to keep track of community support and I'll post here if we have any updates. Thanks!
2025-04-01T04:34:42.299472
2020-01-15T08:27:25
550036146
{ "authors": [ "phwecker", "sabbour" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8485", "repo": "microsoft/aroworkshop", "url": "https://github.com/microsoft/aroworkshop/issues/50" }
gharchive/issue
2.5 & 2.6 -- Sample App code no longer matches instructions The ratings-api exposed port after following the instructions is 8080, not 3000. The ratings-web vue version specified in package.json causes the compile to fail. Parameters hardcoded in the package.json start script override env settings and prevent the app from connecting to the API. I also filed these issues in the sample app repo. Updated with https://github.com/microsoft/aroworkshop/pull/51
2025-04-01T04:34:42.303341
2024-04-02T10:30:39
2220143823
{ "authors": [ "Josephrp", "ekzhu", "jackgerrits", "thinkall" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8486", "repo": "microsoft/autogen", "url": "https://github.com/microsoft/autogen/issues/2254" }
gharchive/issue
[Issue]: Handle the outdated blog contents Describe the issue The blog contents can be outdated; people may still refer to them and run into issues such as #2235. We may need to update the blog contents when related APIs change. Steps to reproduce No response Screenshots and logs No response Additional Information No response For this specific issue you linked, do you want to create a PR for it? I can confirm that the outdated blogs and documentation assets are a pretty huge problem when I introduce developers to the library. I don't really have a lot to offer, but what I can suggest is to make a project and engage with the original authors and community members, to break this monster task down into things that are at least possible to envisage as contributors. So my best advice: make a project with an issue per blog, something like that; otherwise I really feel it's a lot to ask from a volunteer, or it's hard for volunteers to coordinate. 100% agree - if we can identify the blogs that are out of date then we can distribute the work. Even putting a notice on the blog that the content is no longer up to date would be helpful in terms of warning users. For this specific issue you linked, do you want to create a PR for it? #2273
2025-04-01T04:34:42.316390
2023-10-01T23:24:19
1921027983
{ "authors": [ "alhridoy", "codecov-commenter", "sonichi" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8487", "repo": "microsoft/autogen", "url": "https://github.com/microsoft/autogen/pull/69" }
gharchive/pull-request
Format issue Fixed the formatting issue in the README. Codecov Report Merging #69 (67a313a) into main (49ad771) will decrease coverage by 1.01%. The diff coverage is n/a. @@ Coverage Diff @@ ## main #69 +/- ## ========================================== - Coverage 37.48% 36.48% -1.01% ========================================== Files 17 17 Lines 1998 1998 Branches 439 439 ========================================== - Hits 749 729 -20 - Misses 1180 1198 +18 - Partials 69 71 +2 Flag Coverage Δ unittests 36.48% <ø> (-0.96%) :arrow_down: Flags with carried forward coverage won't be shown. Click here to find out more. see 1 file with indirect coverage changes @sonichi In this revision, I've replaced the bullet points with headings to distinguish the use cases from the features. Can you check this? This looks like: I think we should use heading for the two use cases, while using bullets for the lists. The two use cases correspond to those in https://microsoft.github.io/autogen/docs/Getting-Started#multi-agent-conversation-framework and https://microsoft.github.io/autogen/docs/Getting-Started#enhanced-llm-inferences Hi @sonichi , I've made the changes you suggested to the README.md file. Here's a summary of what I did: I added a new section called "Features of AutoGen" at the beginning of the document. This section includes bullet points highlighting the general features of AutoGen, such as multi-agent conversations, customization, and human participation. I restructured the document to highlight the two main use cases: "Multi-Agent Conversation Framework" and "Enhanced LLM Inferences". Each use case is now a separate section with its own heading. Under each use case section, I included a brief description of the use case, a code example, and links to more code examples and documentation. I believe these changes make the README.md file more organized and easier to understand. Please review the changes and let me know if there's anything else you'd like me to improve. Hi @sonichi , I've made the changes you suggested to the README.md file. Here's a summary of what I did: I added a new section called "Features of AutoGen" at the beginning of the document. This section includes bullet points highlighting the general features of AutoGen, such as multi-agent conversations, customization, and human participation. I restructured the document to highlight the two main use cases: "Multi-Agent Conversation Framework" and "Enhanced LLM Inferences". Each use case is now a separate section with its own heading. Under each use case section, I included a brief description of the use case, a code example, and links to more code examples and documentation. I believe these changes make the README.md file more organized and easier to understand. Please review the changes and let me know if there's anything else you'd like me to improve. Thanks. Can we remove the bullet points for the two use cases? Then make the "Features" as bullet points under the first use case. Hi @sonichi , I've made the changes you suggested to the README.md file. The "Features of AutoGen" are now bullet points under the "Multi-Agent Conversation Framework" section. I've also removed the bullet points for the two use cases and provided clear examples for both "Multi-Agent Conversation Framework" and "Enhanced LLM Inferences" sections. Please review the changes and let me know if there's anything else you'd like me to improve. Thanks for your guidance and feedback!
2025-04-01T04:34:42.319962
2022-10-17T17:38:23
1411960075
{ "authors": [ "MonsieurTib", "kendallroden" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8488", "repo": "microsoft/azure-container-apps", "url": "https://github.com/microsoft/azure-container-apps/issues/458" }
gharchive/issue
Unable to create app Limited to VNet from cli Please provide us with the following information: This issue is a: (mark with an x) [x] bug report -> please search issues before submitting [ ] documentation issue or request [ ] regression (a behavior that used to work and stopped in a new release) Issue description az containerapp create (cli) does not expose a parameter for ingress traffic "Limited to VNet". When Ingress is set to "Internal", Ingress traffic is set to "Limited to Container Apps Environment". Screenshots If applicable, add screenshots to help explain your problem. Hi there - sorry for any confusion. When you deploy the app in a VNet scenario and want the "Limited to VNet" behavior, you can use the "--ingress internal" flag in the CLI. Hi @kendallroden, thank you for the clarification, it works.
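For reference, a minimal CLI sketch of the accepted answer; the resource names and image are placeholder/quickstart values I picked, with `--ingress internal` being the flag this issue is about:

```bash
az containerapp create \
  --name my-app \
  --resource-group my-rg \
  --environment my-vnet-environment \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --target-port 80 \
  --ingress internal
```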
2025-04-01T04:34:42.338087
2019-03-29T08:09:36
426865836
{ "authors": [ "A-AontwSR", "TingluoHuang", "lg2de", "vtbassmatt" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8489", "repo": "microsoft/azure-pipelines-agent", "url": "https://github.com/microsoft/azure-pipelines-agent/issues/2194" }
gharchive/issue
Windows agent should be configured for Windows Certificate Store by default Agent Version and Platform 2.136.1 OS of the machine running the agent? Windows Azure DevOps Type and Version on-premises TFS 16.131.28226.3 What's not working? To get the windows agent to run completely with a TFS server using self-signed SSL certificate I need to configure manually: config.cmd --gituseschannel --sslcacert ./locationtoyourcert.pem The one option is required for git fetch, the other is required to (explicitly download artifacts e.g. using 'Download Build Artifacts' task). Is it possible to configure the agent by default to use the Windows Certificate Store? @lg2de that's a little bit hard, the agent is an dotnet core application, it's already use Windows cert store. the problem is all the different tools you may use during your pipeline, i don't think there is a universal way to force everything to use windows cert store, plus some linux tool that built on old version openssl may not support windows cert store at all. :) The package is for window, so it "knows" it is for windows. The batch file (config.cmd) is specific for windows. I could set required options internally. i am talking about all the different tools you used during the build, like npm, node, python, go, etc, they may have no idea about windows certificate store. :) I'll create a PR to illustrate my thoughts. @TingluoHuang I analyzed the code. I found that the option gituseschannel is for Windows only. I do not see any reason why we should not use SChannel by default. Would you accept a PR changing default behavior? Do we need an option to disable SChannel by default? @vtbassmatt to help. :) Seems like a potentially back-compat breaking change with very limited positive effects. Unlikely to take that as a PR, sorry. @vtbassmatt, I wonder how you come up with "limited positive effects". I can't imagine who wouldn't want to use the certificate store in Windows. @lg2de it's limited because, for anyone for whom this matters, they're already specifying -gituseschannel at config time. For everyone else, the change is neutral (vast majority of cases) or potentially harmful (misconfiguration where OpenSSL knows about a corporate root CA but Windows cert store doesn't, for example). Yes, I wish SChannel was the default on Windows. And we're looking at ways to safely introduce that change. @vtbassmatt I doubt if many users are specifying -gitusschannel when configuring the Agent. This option is nowhere mentioned in the docs and not mentioned in the config.cmd --help. How would they know about this option? Except by searching the Agent's source code and finding this issue. At least that's how I found out. Please document this usefull option! Hm, it should be reported in help now. I'll investigate why it's not. Ah, got it. We didn't actually replace our hard-coded help page in the agent. I'll go get this one added. https://github.com/microsoft/azure-pipelines-agent/pull/2940
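Putting the two flags from this issue into a fuller unattended configuration call, as a sketch only — the URL, PAT, pool, and agent names are placeholders, not values from this issue:

```cmd
.\config.cmd --unattended ^
    --url https://your-tfs-or-azure-devops-url ^
    --auth pat --token <your-pat> ^
    --pool Default --agent MyAgent ^
    --gituseschannel ^
    --sslcacert .\locationtoyourcert.pem
```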
2025-04-01T04:34:42.341100
2023-10-03T00:37:17
1923066536
{ "authors": [ "DenisRumyantsev", "merlynomsft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8490", "repo": "microsoft/azure-pipelines-agent", "url": "https://github.com/microsoft/azure-pipelines-agent/pull/4458" }
gharchive/pull-request
Check task deprecation Description: Check task deprecation in the Initialize job step for each task used in the pipeline. How it looks: In the Initialize job logs: In the web page of the pipeline run: Adding @geekzter for review
2025-04-01T04:34:42.373356
2019-04-30T17:17:16
438886926
{ "authors": [ "Anumita", "BobTheMadCow", "GrimGadget", "Rahul43nit", "bansalaseem", "darzu", "dazinator", "kanukhosla", "santosh4227", "siljemda", "vsarunov" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8491", "repo": "microsoft/azure-pipelines-tasks", "url": "https://github.com/microsoft/azure-pipelines-tasks/issues/10274" }
gharchive/issue
Azure Devops - Key Vault task - Specified service connection needs to have x y z Required Information Entering this information will route you directly to the right team and expedite traction. Question, Bug, or Feature? Type: Bug Enter Task Name: Azure Key Vault list here (V# not needed): https://github.com/Microsoft/azure-pipelines-tasks/tree/master/Tasks Environment Server - Azure Pipelines or TFS on-premises? Azure Pipelines If using Azure Pipelines, provide the account name, team project name, build definition name/build number: account<EMAIL_ADDRESS>project: lab release info: releaseId=389&environmentId=646&deploymentGroupPhaseId=567&agentName=COVD01 Agent - Hosted or Private: If using Hosted agent, provide agent queue name: If using private agent, provide the OS of the machine running the agent and the agent version: OS Name Microsoft Windows Server 2016 Datacenter Version 10.0.14393 Build 14393 (This is a VM hosted on Azure) Issue Description We have an azure subscription. I created a new Key Vault within that subscription. I used the following powershell script to create a service principal and get all the details I needed to create a new service connection in Azure DevOps: https://github.com/Microsoft/azure-pipelines-extensions/blob/master/TaskModules/powershell/Azure/SPNCreation.ps1 I have a release pipeline on Azure Devops. I added an "Azure Key Vault" task to this pipeline. I was able to select my subscription, and select my keyvault from the drop downs within the task: However when I create a release, and this step is executed, it fails with the following error message written to the release log: QlikClient: "Access denied. The specified Azure service connection needs to have Get, List secret management permissions on the selected key vault. To set these permissions, download the ProvisionKeyVaultPermissions.ps1 script from build/release logs and execute it, or set them from the Azure portal." The thing is, when I set up the new service connection, I used a service principal that has full access. So I have no idea why access is denied. Note: Our Azure Devops Organisation is under a different subscription from our Azure Key vault - ot sure if that makes any different. Happy to provide any other details required. Note, the task log below says there is a ps1 script somewhere I can run, but i'm not sure where I have to navigate to in Azure Devops in order to download it. Task logs 2019-04-30T16:54:59.3448695Z ##[section]Starting: Get QlikClient PFX From KeyVault 2019-04-30T16:54:59.3453310Z ============================================================================== 2019-04-30T16:54:59.3453421Z Task : Azure Key Vault 2019-04-30T16:54:59.3453507Z Description : Download Azure Key Vault Secrets 2019-04-30T16:54:59.3453581Z Version : 1.0.32 2019-04-30T16:54:59.3453657Z Author : Microsoft Corporation 2019-04-30T16:54:59.3453742Z Help : More Information 2019-04-30T16:54:59.3453842Z ============================================================================== 2019-04-30T16:55:00.2810600Z SubscriptionId: 5adb7e7f-fe52-4809-b539-2e6bbdbaf890. 2019-04-30T16:55:00.2817095Z Key vault name: Hub-Dev. 2019-04-30T16:55:00.2842285Z Downloading secret value for: QlikClient. 2019-04-30T16:55:00.6533488Z ##[error] QlikClient: "Access denied. The specified Azure service connection needs to have Get, List secret management permissions on the selected key vault. 
To set these permissions, download the ProvisionKeyVaultPermissions.ps1 script from build/release logs and execute it, or set them from the Azure portal." 2019-04-30T16:55:00.6534149Z Uploading C:\azagent\A1_work\r2\a\ProvisionKeyVaultPermissions.ps1 as attachment 2019-04-30T16:55:00.6580465Z ##[section]Finishing: Get QlikClient PFX From KeyVault Troubleshooting Checkout how to troubleshoot failures and collect debug logs: https://docs.microsoft.com/en-us/vsts/build-release/actions/troubleshooting Error logs Ah so I found the generated powershell scrip that the task was mentioning, in devops: I needed to click on "Download all logs": Then the file is: I'll try running that powershell script and trying again Yep.. running that powershell script fixed it. It would be good if someone can clarify why this is necessary though - given the previous powershell script that was run I thought the service connection used should have full access required..? @dazinator Good to see that you were able to resolve it yourself. The azure service connection does not have permission on the key-vault and users need to explicitly give permissions to access the keyvault. When we create keyvault backed variable groups, we take the users via the assignment of permissions flow but when the key vault task gets used directly, then users should run the power shell script to assign the permissions. @dazinator Yep.. running that powershell script fixed it. It would be good if someone can clarify why this is necessary though - given the previous powershell script that was run I thought the service connection used should have full access required..? I have tried executing PowerShell script before accessing Key vault. Unfortunately I am getting same error before this task. Please let me know where to execute this PS Script. I have also checked all permissions on azure portal, service principal have full access. @dazinator Where do you run the script? Do you run it before the azureKeyVault task or after when it fails? Could you please help with this one? I'm running into the same issue except that my dev machine is a Mac and the build agent is Linux, so I don't have anywhere to run this .ps1. Is there an az CLI script equivalent? Also it's not clear to me: is this script a run-once thing or does it need to be part of the pipeline tasks? Any help much appreciated. @darzu, you can run the following command for updating the key vault access policies for secrets az keyvault set-policy --name "" --spn --secret-permissions get list In case you want to know more on setting access policies for key vault, refer to https://docs.microsoft.com/en-us/azure/key-vault/key-vault-manage-with-cli2#authorizing-an-application-to-use-a-key-or-secret @darzu , is this working for you? Closing this issue. @darzu please reopen if you are still facing any issues The specified Azure service connection needs to have "Get, List" secret management permissions on the selected key vault. Click "Authorize" to enable Azure Pipelines to set these permissions or manage secret permissions in the Azure portal. Getting the same error for service connection "Free Trial". Can some one help with resolution, please provide the url of the script, it's hard time getting the script mentioned in the above comment. The specified Azure service connection needs to have "Get, List" secret management permissions on the selected key vault. Click "Authorize" to enable Azure Pipelines to set these permissions or manage secret permissions in the Azure portal. 
Getting the same error for service connection "Free Trial". Can some one help with resolution, please provide the url of the script, it's hard time getting the script mentioned in the above comment. If you don't know the ID of the Service Principal you should download it by going to the log page and downloading all logs, because the downloaded script will contain the ID. Otherwise, here is the content of the script: $ErrorActionPreference="Stop"; Login-AzureRmAccount -SubscriptionId XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX; $spn=(Get-AzureRmADServicePrincipal -SPN XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX); $spnObjectId=$spn.Id; Set-AzureRmKeyVaultAccessPolicy -VaultName yourVaultName -ObjectId $spnObjectId -PermissionsToSecrets get,list; Stumbled upon this by accident, but thought I'd chip in some background knowledge for anyone else who finds themselves here, as there's a fundamental bit of knowledge not being explicitly stated. There are two types of access being conflated in the initial question: Azure Resource role permissions and KeyVault Access Policies. These are referred to here as the Management Plane and the Data Plane. I've not spotted any other resource type that carries two types of permission sets like this. The service principal in @dazinator 's example has full access on the Management Plane, but no access on the Data Plane*, hence why his deployment failed. This can be fixed in the portal by going to the KeyVault resource and, in the Access Policies blade, adding an access policy granting the service principal permission to the Data Plane. This is what the ProvisonKeyVaultPermissions script is doing: creating an access policy. *It's worth noting that full permissions in the Management Plane includes the ability to create Access Policies, and thus control permissions on the Data Plane. why i am getting this mail? On Fri, May 29, 2020 at 3:35 PM BobTheMadCow<EMAIL_ADDRESS>wrote: Stumbled upon this by accident, but thought I'd chip in some background knowledge for anyone else who finds themselves here, as there's a fundamental bit of knowledge not being explicitly stated. There are two types of access being conflated in the initial question: Azure Resource role permissions and KeyVault Access Policies. These are referred to here https://docs.microsoft.com/en-us/azure/key-vault/general/secure-your-key-vault as the Management Plane and the Data Plane. I've not spotted any other resource type that carries two types of permission sets like this. The service principal in @dazinator https://github.com/dazinator 's example has full access on the Management Plane, but no access on the Data Plane*, hence why his deployment failed. This can be fixed in the portal by going to the KeyVault resource and, in the Access Policies blade, adding an access policy granting the service principal permission to the Data Plane. This is what the ProvisonKeyVaultPermissions script is doing: creating an access policy. *It's worth noting that full permissions in the Management Plane includes the ability to create Access Policies, and thus control permissions on the Data Plane. — You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/microsoft/azure-pipelines-tasks/issues/10274#issuecomment-635889652, or unsubscribe https://github.com/notifications/unsubscribe-auth/AJSPRVKYOHCUBE26XS3G2GDRT6CFTANCNFSM4HJOS4QA . This seems somewhat less than automated. 
How do I capture the Service Connection's object id so that I can apply the access policy in the pipeline? Experiencing the same issue. I uploaded the 4_ProvisionKeyVaultPermissions.ps1 file to /home/username via powershell in azure portal. Executed --> az account set --subscription $SUBSCRIPTION_NAME ./4_ProvisionKeyVaultPermissions.ps1 PS /home/username> .\4_ProvisionKeyVaultPermissions.ps1 WARNING: Interactive authentication is not supported in this session, please run cmdlet 'Connect-AzAccount -UseDeviceAuthentication'. Set-AzKeyVaultAccessPolicy: /home/username/4_ProvisionKeyVaultPermissions.ps1:5 Line | 5 | Set-AzureRmKeyVaultAccessPolicy -VaultName username-PAT -Ob … | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | Vault 'username-PAT' does not exist in current subscription. If this vault exists in your tenant, please switch to the correct subscription in order to modify the Access Policies of this vault. I switched to the correct subscription and also gave list,get permission to the application via GUI. still getting the same error.
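Spelling out the az CLI command quoted earlier in this thread (its placeholders were lost in the text above) — the vault name is the one from this issue, while the service principal application ID is hypothetical:

```bash
# Grant the service connection's service principal Get/List on secrets (data plane)
az keyvault set-policy \
  --name "Hub-Dev" \
  --spn "<service-principal-app-id>" \
  --secret-permissions get list
```

The equivalent with the current Az PowerShell module would be roughly `Set-AzKeyVaultAccessPolicy -VaultName Hub-Dev -ServicePrincipalName <service-principal-app-id> -PermissionsToSecrets get,list`.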
2025-04-01T04:34:42.377165
2020-06-15T09:02:42
638673220
{ "authors": [ "Jaffacakes82", "anuragc617" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8492", "repo": "microsoft/azure-pipelines-tasks", "url": "https://github.com/microsoft/azure-pipelines-tasks/issues/13119" }
gharchive/issue
Azure Resource Group Deployment fails on resource group delete (classic view) Bug Azure Resource Group deployment task fails when trying to delete a resource group with error: ##[error]Error: Task failed while initializing. Error: Location is required for deployment NOTE: we're using the classic editor to configure this task as opposed to the YAML editor. There is no way to set the 'location' parameter in the classic editor for the delete resource group action. I also see in the documentation that 'location' is only mandatory for the create or update resource group action Enter Task Name: Azure Resource Group Deployment V3 Environment Server - Azure Pipelines Agent - Private @Jaffacakes82 thank you for reporting this I am able to reproduce on my end. We will try to fix this asap. Looks like this is fixed, thanks!
2025-04-01T04:34:42.379469
2019-12-22T09:22:22
541427400
{ "authors": [ "balteravishay", "vithati" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8493", "repo": "microsoft/azure-pipelines-tasks", "url": "https://github.com/microsoft/azure-pipelines-tasks/pull/12021" }
gharchive/pull-request
Support for helm lint Closes: #11023 Enable helm lint functionality Does not require a kubernetes configuration to function Currently this PR does not change patch/minor version @balteravishay , the Helm task's 'command' input is editable and it supports arguments. You can type the command and arguments; this file https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/HelmDeployV0/src/helmcommands/uinotimplementedcommands.ts takes care of execution. I see the Pipeline UI shows a warning, but you can ignore that. I will raise a bug on the pipeline UI. Abandoning this PR after discussing the 'editable' feature with the owner of the repo.
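Although this PR was abandoned, the maintainer's suggestion (typing the command into the editable input) would look roughly like this in YAML. Treat the input values as assumptions to verify against the HelmDeploy task documentation; only `command` and `arguments` are confirmed by the comment above:

```yaml
- task: HelmDeploy@0
  inputs:
    connectionType: 'None'           # lint does not need a cluster connection (assumption)
    command: 'lint'                  # typed into the editable command input
    arguments: './charts/my-chart'   # hypothetical chart path
```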
2025-04-01T04:34:42.383606
2021-07-28T17:58:14
955107036
{ "authors": [ "anatolybolshakov" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8494", "repo": "microsoft/azure-pipelines-tasks", "url": "https://github.com/microsoft/azure-pipelines-tasks/pull/15098" }
gharchive/pull-request
[CopyFilesV2] Considered delay between retries for cp action Task name: CopyFilesV2 Description: we need to consider the delayBetweenRetries input for the cp action as well - to make retries more effective against flakiness issues. Also reworked retry logic in retryHelper: removed extra methods for retry; fixed the message that shows the number of remaining attempts; fixed undefined instead of a function name in the error message; added an option to ignore ENOENT for the tl.stats function **Documentation changes required:**N Added unit tests: N Attached related issue: N Checklist: [x] Task version was bumped - please check the instructions on how to do it - bumped only patch since we are going to hotfix. [x] Checked that applied changes work as expected Tested changes locally, looks good! LGTM! (can't approve since I created it originally)
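As a rough illustration of the retry-with-delay pattern described above (this is not the task's actual retryHelper code — the function name and options are invented for the sketch):

```typescript
// Generic retry wrapper: runs an async action, waiting delayMs between failed attempts.
async function withRetries<T>(
    action: () => Promise<T>,
    attempts: number,
    delayMs: number,
    ignoreEnoent: boolean = false
): Promise<T | undefined> {
    for (let attempt = 1; ; attempt++) {
        try {
            return await action();
        } catch (err: any) {
            if (ignoreEnoent && err?.code === 'ENOENT') {
                return undefined; // treat a missing file as a non-fatal result
            }
            if (attempt >= attempts) {
                throw err; // out of attempts - surface the original error
            }
            console.log(`Attempt ${attempt} failed: ${err?.message}. ` +
                `Retrying in ${delayMs} ms (${attempts - attempt} attempt(s) left).`);
            await new Promise(resolve => setTimeout(resolve, delayMs));
        }
    }
}
```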
2025-04-01T04:34:42.384804
2022-10-07T01:42:29
1400511201
{ "authors": [ "anpaz", "xinyi-joffre" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8495", "repo": "microsoft/azure-quantum-python", "url": "https://github.com/microsoft/azure-quantum-python/issues/393" }
gharchive/issue
Tests for 1Qbit and Toshiba should use marks In general, we control which tests to run with custom marks; this allows us to decide at runtime which 3rd-party providers' tests should run. 1Qbit and Toshiba are not following this pattern; instead, they are using their own environment variables to explicitly enable the tests. They should be migrated to the general marks schema. Closing since both 1QBit and Toshiba are now deprecated.
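For context, the general marks schema referred to here is plain pytest markers, selected at runtime with `-m`; a minimal sketch (the marker name is illustrative, not necessarily the repo's actual one):

```python
import pytest

@pytest.mark.oneqbit  # hypothetical provider marker
def test_submit_oneqbit_job():
    ...
```

A provider's tests can then be run or excluded by selecting the marker, e.g. `pytest -m oneqbit` or `pytest -m "not oneqbit"` (custom markers should also be registered in pytest.ini or pyproject.toml to avoid warnings).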
2025-04-01T04:34:42.387378
2020-07-16T05:31:45
657863685
{ "authors": [ "alanrenmsft", "coveralls" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8496", "repo": "microsoft/azuredatastudio", "url": "https://github.com/microsoft/azuredatastudio/pull/11373" }
gharchive/pull-request
fix the issue with explorer widget context menu This PR fixes #10765. For the new notebook action, we need to return the IConnectionProfile to avoid the circular structure issue. For the script as create action, it is never meant to be exposed on the database context. Coverage decreased (-0.1%) to 37.407% when pulling<PHONE_NUMBER>f6ed112743813c450d2f90484f8d59 on alanren/fix-explorer-menu into cc78d6a8f09c1bd835858f50a4be2eaaf6287b2e on main.
2025-04-01T04:34:42.397579
2021-02-09T00:45:31
804091653
{ "authors": [ "Charles-Gagnon", "alanrenmsft", "coveralls" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8497", "repo": "microsoft/azuredatastudio", "url": "https://github.com/microsoft/azuredatastudio/pull/14205" }
gharchive/pull-request
Fix ModelView addItem/withItem ordering Fix for issue described in https://github.com/microsoft/azuredatastudio/pull/14198 I didn't track down why/if this actually changed with https://github.com/microsoft/azuredatastudio/pull/13842 - but the issue here is that addItem/addItems/insertItem were adding the items immediately to the container - but withItems was delaying that until the container component was created on the main thread side - at which point it would then go create and add the ItemConfigs and add them to the container. But because this wasn't done first we would have the addItem queued up first and thus it would try to add that first and then afterwards go add all the defined itemConfigs. I vaguely recall that when I added the whole "initial" actions stuff in https://github.com/microsoft/azuredatastudio/pull/13317 I purposely didn't have those added as initial actions. But I can't remember exactly why and don't see any issues currently poking around a bunch of different dialogs/dashboards so I think it's fine to go forward with this now and then if we see issues try to find out what's broken there. Pull Request Test Coverage Report for Build 549794316 2 of 3 (66.67%) changed or added relevant lines in 2 files are covered. 1 unchanged line in 1 file lost coverage. Overall coverage increased (+0.005%) to 43.502% Changes Missing Coverage Covered Lines Changed/Added Lines % src/sql/workbench/browser/modelComponents/viewBase.ts 0 1 0.0% Files with Coverage Reduction New Missed Lines % extensions/import/src/wizard/pages/fileConfigPage.ts 1 73.46% Totals Change from base Build 549600724: 0.005% Covered Lines: 24935 Relevant Lines: 52502 💛 - Coveralls I think this should be ported to the release branch.
2025-04-01T04:34:42.404588
2021-06-24T22:08:23
929647563
{ "authors": [ "coveralls", "lucyzhang929" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8498", "repo": "microsoft/azuredatastudio", "url": "https://github.com/microsoft/azuredatastudio/pull/15905" }
gharchive/pull-request
Remove layout call in output component ngOnInit This PR starts to address https://github.com/microsoft/azuredatastudio/issues/15543. Removing the layout call improves the loading time for the 475kb notebook (with result grids) that Julie used by up to 30%. I did a bunch of testing to make sure removing this does not affect anything. Tested: notebooks with large text outputs notebooks with result grids of various sizes resizing ADS window and then open/run a notebook with outputs zoom in and then open/run a notebook with outputs Please let me know if there are any other things I should test. In addition, here is an ad hoc build if anyone wants to test: https://mssqltools.visualstudio.com/CrossPlatBuildScripts/_build/results?buildId=108751&view=artifacts&pathAsName=false&type=publishedArtifacts Pull Request Test Coverage Report for Build 969524828 0 of 0 changed or added relevant lines in 0 files are covered. No unchanged relevant lines lost coverage. Overall coverage increased (+0.003%) to 42.953% Totals Change from base Build 969175730: 0.003% Covered Lines: 26340 Relevant Lines: 55694 💛 - Coveralls
2025-04-01T04:34:42.414867
2022-04-28T13:49:13
1218761179
{ "authors": [ "coveralls", "nemanja-milovancevic" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8499", "repo": "microsoft/azuredatastudio", "url": "https://github.com/microsoft/azuredatastudio/pull/19244" }
gharchive/pull-request
UI for the Backup/Restore Managed Instance Feature In the following pull request, you can find changes necessary to enable backup and restore features for the Managed Instance version of the SQL Server. Additionally, user can backup SQL Server Box databases to URL and restore them from URL. Feel free to comment if you want to disable this feature for the SQL Server Box. Backup and restore don't work for Managed Instance in ADS. This problem exists because ADS didn't support backup to URL and restore from URL and partially because Managed Instance only supports a subset of backup and restore features. Managed Instance internally takes automatic backups, so the user can only make a copy-only backup. This type of backup is independent of other backups, and other backups are not dependent on copy-only backup. If you want to backup to URL: Connect to the Managed Instance. Choose the Databases tab on the main dashboard. Choose a database from the list. Wait for the spinner to disappear. You'll see a dialog like this: Click on the Browse button. You'll see a URL browser dialog: Choose an Azure account from the list. If there is no Azure account in the list, click on the Link account to link a new one. Choose tenant, subscription, storage account, and blob container. Click on the Create Credentials button. Optionally rename the backup file. Click the OK button. URL browser dialog will disappear. Click on the Backup button. If you want to restore a backup from a URL, you can do it in a similar fashion. The only major difference is that you would need to choose a backup file from the blob. That's why the restore URL browser dialog looks different from the backup URL browser dialog. Pull Request Test Coverage Report for Build<PHONE_NUMBER> 0 of 4 (0.0%) changed or added relevant lines in 1 file are covered. No unchanged relevant lines lost coverage. Overall coverage decreased (-0.002%) to 42.232% Changes Missing Coverage Covered Lines Changed/Added Lines % src/sql/workbench/api/common/sqlExtHost.protocol.ts 0 4 0.0% Totals Change from base Build<PHONE_NUMBER>: -0.002% Covered Lines: 27809 Relevant Lines: 61612 💛 - Coveralls
2025-04-01T04:34:42.423248
2022-05-04T23:08:24
1226051309
{ "authors": [ "coveralls", "kisantia" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8500", "repo": "microsoft/azuredatastudio", "url": "https://github.com/microsoft/azuredatastudio/pull/19292" }
gharchive/pull-request
change import back to import type * as azdataType in create project from db dialog I forgot about getAzdataApi() and why the import was originally import type * as azdataType when I was adding connection api calls in #19129 - turns out it was because the vscode extension doesn't load and throws an error about azdata not being found, even though this dialog isn't actually used in vscode. Pull Request Test Coverage Report for Build<PHONE_NUMBER> 2 of 3 (66.67%) changed or added relevant lines in 1 file are covered. 2 unchanged lines in 1 file lost coverage. Overall coverage decreased (-0.005%) to 42.209% Changes Missing Coverage Covered Lines Changed/Added Lines % extensions/sql-database-projects/src/dialogs/createProjectFromDatabaseDialog.ts 2 3 66.67% Files with Coverage Reduction New Missed Lines % extensions/notebook/src/book/bookTreeView.ts 2 38.21% Totals Change from base Build<PHONE_NUMBER>: -0.005% Covered Lines: 27884 Relevant Lines: 61780 💛 - Coveralls
2025-04-01T04:34:42.429618
2022-07-27T16:58:54
1319841371
{ "authors": [ "coveralls", "erpett" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8501", "repo": "microsoft/azuredatastudio", "url": "https://github.com/microsoft/azuredatastudio/pull/20182" }
gharchive/pull-request
updating changelog for 1.38 This PR fixes # Pull Request Test Coverage Report for Build<PHONE_NUMBER> 0 of 0 changed or added relevant lines in 0 files are covered. 4 unchanged lines in 1 file lost coverage. Overall coverage decreased (-0.01%) to 42.126% Files with Coverage Reduction New Missed Lines % extensions/notebook/src/jupyter/serverInstance.ts 4 75.5% Totals Change from base Build<PHONE_NUMBER>: -0.01% Covered Lines: 28181 Relevant Lines: 62523 💛 - Coveralls
2025-04-01T04:34:42.436721
2022-11-03T02:31:46
1434018264
{ "authors": [ "coveralls", "lewis-sanchez" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8502", "repo": "microsoft/azuredatastudio", "url": "https://github.com/microsoft/azuredatastudio/pull/21088" }
gharchive/pull-request
Resolves smoketest build errors. This PR fixes the missing utility function definitions that were preventing notebook smoke tests from building. Remaining todo item is to bring the notebook smoke tests online in the build pipelines. Pull Request Test Coverage Report for Build<PHONE_NUMBER> 0 of 0 changed or added relevant lines in 0 files are covered. 6 unchanged lines in 2 files lost coverage. Overall coverage decreased (-0.008%) to 41.953% Files with Coverage Reduction New Missed Lines % extensions/machine-learning/src/common/processService.ts 2 81.48% extensions/notebook/src/jupyter/serverInstance.ts 4 75.5% Totals Change from base Build<PHONE_NUMBER>: -0.008% Covered Lines: 28912 Relevant Lines: 64020 💛 - Coveralls
2025-04-01T04:34:42.443997
2023-01-18T20:55:39
1548093344
{ "authors": [ "coveralls", "raymondtruong" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8503", "repo": "microsoft/azuredatastudio", "url": "https://github.com/microsoft/azuredatastudio/pull/21630" }
gharchive/pull-request
[Port request] vbump sqltoolsservice in release/1.41 This PR bumps the version of sqltoolsservice in mssql from <IP_ADDRESS> to <IP_ADDRESS>, which includes the latest changes from SQL Migration. Discussed internally in ADS Shiproom. See https://github.com/microsoft/sqltoolsservice/pull/1814 and https://github.com/microsoft/azuredatastudio/issues/21629 for more details. Pull Request Test Coverage Report for Build<PHONE_NUMBER> 0 of 0 changed or added relevant lines in 0 files are covered. 1 unchanged line in 1 file lost coverage. Overall coverage increased (+0.008%) to 41.813% Files with Coverage Reduction New Missed Lines % extensions/notebook/src/book/bookTreeView.ts 1 37.72% Totals Change from base Build<PHONE_NUMBER>: 0.008% Covered Lines: 28970 Relevant Lines: 64404 💛 - Coveralls
2025-04-01T04:34:42.449043
2024-02-26T19:25:50
2154948247
{ "authors": [ "coveralls", "smartguest" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8504", "repo": "microsoft/azuredatastudio", "url": "https://github.com/microsoft/azuredatastudio/pull/25410" }
gharchive/pull-request
[Loc] emergency update to langpack source files 2-26-2024 Updates the version number to the proper value and also includes strings that were checked in from the last langpack (non new). Will be port requested soon. Pull Request Test Coverage Report for Build<PHONE_NUMBER> Details 0 of 0 changed or added relevant lines in 0 files are covered. No unchanged relevant lines lost coverage. Overall coverage remained the same at 41.757% Totals Change from base Build<PHONE_NUMBER>: 0.0% Covered Lines: 30827 Relevant Lines: 69082 💛 - Coveralls
2025-04-01T04:34:42.469878
2020-01-07T19:04:17
546453880
{ "authors": [ "coveralls", "fuselabs", "tomlm" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8505", "repo": "microsoft/botbuilder-dotnet", "url": "https://github.com/microsoft/botbuilder-dotnet/pull/3229" }
gharchive/pull-request
fix include activity and enum camelcase fix #3220 fix #3228 rename IncludeActivity => activityProcessed with default value of true, and dialog actions to always set TurnPath.ACTIVITYPROCESSED when passing focus to another dialog fixed unit tests around ActivityProcessed behavior added appropriate attributes and updated .schema and dialog files for enums Pull Request Test Coverage Report for Build 98923 0 of 0 changed or added relevant lines in 0 files are covered. 480 unchanged lines in 62 files lost coverage. Overall coverage increased (+0.003%) to 78.1% Files with Coverage Reduction New Missed Lines % /libraries/Adapters/Microsoft.Bot.Builder.Adapters.Facebook/FacebookAdapterOptions.cs 1 92.31% /libraries/integration/Microsoft.Bot.Builder.Integration.ApplicationInsights.Core/ApplicationBuilderExtensions.cs 1 75.0% /libraries/Microsoft.Bot.Builder.AI.LUIS/LuisApplication.cs 1 97.37% /libraries/Microsoft.Bot.Builder.AI.LUIS/V3/LuisApplication.cs 1 97.37% /libraries/Microsoft.Bot.Builder.AI.LUIS/V3/LuisUtil.cs 1 95.24% /libraries/Microsoft.Bot.Builder.Dialogs.Adaptive/SequenceContext.cs 1 52.78% /libraries/Microsoft.Bot.Builder.Dialogs.Adaptive/TriggerConditions/OnIntent.cs 1 93.88% /libraries/integration/Microsoft.Bot.Builder.Integration.AspNet.Core/HttpHelper.cs 2 130.77% /libraries/integration/Microsoft.Bot.Builder.Integration.AspNet.Core/Skills/SkillHttpClient.cs 2 0.0% /libraries/Microsoft.Bot.Builder.AI.LUIS/Generator/DateTimeSpec.cs 2 72.73% Totals Change from base Build 98911: 0.003% Covered Lines: 14283 Relevant Lines: 18288 💛 - Coveralls :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Builder.dll compared against version 4.6.3 :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Builder.AI.Luis.dll compared against version 4.6.3 :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Builder.AI.QnA.dll compared against version 4.6.3 :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Builder.ApplicationInsights.dll compared against version 4.6.3 :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Builder.Azure.dll compared against version 4.6.3 :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Builder.Dialogs.dll compared against version 4.6.3 :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Builder.Integration.ApplicationInsights.Core.dll compared against version 4.6.3 :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Builder.Integration.AspNet.Core.dll compared against version 4.6.3 :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Builder.TemplateManager.dll compared against version 4.6.3 :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Builder.Testing.dll compared against version 4.6.3 :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Configuration.dll compared against version 4.6.3 :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Connector.dll compared against version 4.6.3 :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Schema.dll compared against version 4.6.3 :heavy_check_mark: No Binary Compatibility issues for Microsoft.Bot.Streaming.dll compared against version 4.6.3
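As a rough illustration of "dialog actions always set TurnPath.ACTIVITYPROCESSED when passing focus to another dialog" described at the top of this PR, a dialog action could do something like the following; this is a fragment, and the member names and child dialog id are placeholders rather than the SDK's exact code:

```csharp
// Illustrative fragment of a dialog action handing focus to a child dialog.
public override async Task<DialogTurnResult> BeginDialogAsync(
    DialogContext dc, object options = null, CancellationToken cancellationToken = default)
{
    // "turn.activityProcessed" is the turn path this PR standardizes on (default true).
    dc.State.SetValue("turn.activityProcessed", true);

    // "childDialogId" is a placeholder for whatever dialog the action targets.
    return await dc.BeginDialogAsync("childDialogId", options, cancellationToken).ConfigureAwait(false);
}
```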
2025-04-01T04:34:42.472668
2020-05-19T01:16:57
620603058
{ "authors": [ "cleemullins", "tomlm" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8506", "repo": "microsoft/botbuilder-dotnet", "url": "https://github.com/microsoft/botbuilder-dotnet/pull/3952" }
gharchive/pull-request
[DRAFT] Make SkillDialog more robust by adding side-car routing DO NOT MERGE Added SkillConversationReferenceStorage(storage), which persists SkillConversationReference objects. Added SkillHostWaiting and Activities to SkillConversationReference. Made SendToSkill(dc)-based. Changed SkillHost to check whether SkillHostIsWaiting: if true, it puts the activity into SCR.Activities; if false, it's proactive, so it runs the activity pipeline in the callback. Changed SendToSkill() to get activities from the HttpResponse (in ExpectedMessages) or from SCR.Activities (in async mode). The logic is the same for both. Added proper modeling of Event (we need to emit the event and call rootDc.continueDialog()). Unit tests for storage. Closing per @tomlm during Teams conversation.
2025-04-01T04:34:42.479058
2022-02-10T13:04:01
1130042319
{ "authors": [ "BruceHaley", "sw-joelmut" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8507", "repo": "microsoft/botbuilder-dotnet", "url": "https://github.com/microsoft/botbuilder-dotnet/pull/6187" }
gharchive/pull-request
[#6173] Remove dotnet 2.1 support from 4.16 Fixes #6173 #minor Description This PR removes NetCore 2.1 from the project and updates the CI pipelines' yamls to stop building and testing the projects for this framework. Note: The libraries are still targeting netstandard2.0 to keep compatibility with the projects Microsoft.Bot.Builder.Integration.AspNet.WebApi and Microsoft.Bot.Builder.Integration.ApplicationInsights.WebApi that use net476 framework. Specific Changes Removed Microsoft.Bot.Builder.TestBot.NetCore21 project from the solution. Removed conditions for BuildTarget = netcoreapp2.1 from the test projects. Removed NETCOREAPP2_1 conditions used in the tests. Updated the following yaml files to remove the steps related to NetCore2.1: botbuilder-dotnet-ci-mac botbuilder-dotnet-ci ci-test-steps Testing This image shows the CI pipeline running after the changes. Hi, we have some libraries where we have explicit references to 2.1.0 assemblies, should we update those to 3.1 as part of this? ... @gabog, we updated multiple dependencies that were using the 2.1.x version to the latest 3.1.22 across the BotBuilder libraries. Currently, the CI pipelines are not passing successfully, because it's unable to find the 3.1.22 version in the custom registry feed. In our CI pipelines, we are using NuGet.org as the registry feed. Should we leave it as 3.1.22 and this will be fixed in the CI pipeline?. NU1102: Unable to find package Microsoft.Extensions.FileProviders.Physical with version (>= 3.1.22) - Found 5 version(s) in SDK_Dotnet_V4_org [ Nearest version: 3.1.6 ] /azp run
2025-04-01T04:34:42.516214
2024-07-05T18:49:09
2393016279
{ "authors": [ "BrendanAndrade", "FBraz-RMFarma", "YunnyChung", "at1as", "avilabiel", "firecode", "gitnavneet", "mclarke-logikor", "nikunj2001", "nodkrot", "obellemare-alcumus", "sharmk1", "sschoeb", "tim-tribble-ai", "tracyboehrer", "wilsonsantosnet" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8508", "repo": "microsoft/botbuilder-js", "url": "https://github.com/microsoft/botbuilder-js/issues/4708" }
gharchive/issue
Started receiving "unable to verify the first certificate" when interacting with the sdk Versions What package version of the SDK are you using<EMAIL_ADDRESS>What nodejs version are you using (v16.20.2) What os are you using (Mac) Describe the bug On July 3rd we started receiving "unable to verify the first certificate" error when starting and using botbuilder sdk APIs. It appears to happen to some but not all customers. To Reproduce Instantiate and use any operation botbuilder sdk (send or update message for example) Expected behavior No error "exception":{"message":"unable to verify the first certificate","stack":"Error: unable to verify the first certificate\n at new RestError (/var/www/app/node_modules/@azure/ms-rest-js/lib/restError.ts:18:5)\n at AxiosHttpClient.<anonymous> (/var/www/app/node_modules/@azure/ms-rest-js/lib/axiosHttpClient.ts:194:15)\n at step (/var/www/app/node_modules/@azure/ms-rest-js/node_modules/tslib/tslib.js:141:27)\n at Object.throw (/var/www/app/node_modules/@azure/ms-rest-js/node_modules/tslib/tslib.js:122:57)\n at rejected (/var/www/app/node_modules/@azure/ms-rest-js/node_modules/tslib/tslib.js:113:69)\n at runMicrotasks (<anonymous>)\n at processTicksAndRejections (node:internal/process/task_queues:96:5)"},"level":"error","message":"Failure updating message: unable to verify the first certificate" I'm getting the same error right now. I'm getting the same error right now. Getting the same error intermittently too! we are receiving same error as well We are seeing the same on our two apps as well. The OP indicates 4.16, which is quite old. What about the others in this thread? I'm on 4.22.2 Same issue for us. First occurrence: 2024-07-03 03:37:56 pm PDT. Since then 290 further occurrences. For the time being we're catching this error and simply retrying each request up to 3x and virtually all succeed. However we'd prefer to not have to do this. we are seeing issue on 4.22.3 as well .. Since this is happening on older versions, it's probably not related to a recent change in SDK. I'll check for issues with the other end. I'm assuming its failing during the request, especially since retries make it work. I'm wondering what the HTTP response status code is. Is it being throttled? Though that is a specific status code and handled automatically. We can help mitigate by increasing our retries. Since this is happening on older versions, it's probably not related to a recent change in SDK. I'll check for issues with the other end. I'm assuming its failing during the request, especially since retries make it work. I'm wondering what the HTTP response status code is. Is it being throttled? Though that is a specific status code and handled automatically. We can help mitigate by increasing our retries. We receive 500 if I'm not mistaken. Since it's intermittent, it feels like there is a load balancer pointing to a server with an expired certificate. My guess The trace provided in this issue (third comment for requests.exceptions.SSLError) seems to indicate it's occurring with Python as well: https://answers.microsoft.com/en-us/msteams/forum/all/unabletoverifyleafsignature-traffic-manager-teams/3ad8a8cb-bb41-4c46-ae4e-e31d0688b06e In which case it may have nothing to do with bot-framework, which seems likely given these type of certificate issues generally originate on the server. I have a support ticket open with Microsoft, however I've been informed that I don't have the correct tier of paid support to have anyone from engineering take a look. 
If anyone has "Microsoft Unified" or "Professional Direct Support" support I'd appreciate them raising the issue through a ticket. I'm wondering what the HTTP response status code is. Is it being throttled? Though that is a specific status code and handled automatically. We can help mitigate by increasing our retries. I think it's actually a success response code. Our 429/5XX logic wasn't catching this error. We had to add some custom logic to catch-and-retry it via matching on e.message.includes("unable to verify the first certificate"). As far as I can tell, the server believes the request is valid. Hi, We started facing this error with our Teams App on 3/Jul/2024 and it has already impacted an important release. Before this, our App has been running fine since Dec'23. We are using<EMAIL_ADDRESS>on<EMAIL_ADDRESS>The exception has the format: FetchError: request to failed, reason: unable to verify the first certificate type: system errno, code: UNABLE_TO_VERIFY_LEAF_SIGNATURE The target is of the form: "https://smba.trafficmanager.net/amer/v3/conversations/.../activities/..." We face this intermittently when our Teams app sends or updates an adaptive card using the sendActivity or updateActivity methods of the TurnContext object in the Bot Builder SDK for Node.js. We were able to replicate this issue in non-production and usually see it occur for 10% of the send/update attempts. Handling the exception and retrying the activity works on the first attempt. I think you should be able to fix this by identifying and fixing the failing certificate chain on the concerned nodes of your distributed infrastructure. Looking forward to a quick fix! Thanks. Are these all Teams bots? 4.21.0 is when JS SDK moved to MSAL auth (from ADAL). But 4.16 would still be ADAL of course. Ours is a Teams Bot. This is being actively investigated by the Teams group. Also experiencing this with a teams bot running 4.21.1 There has been at least two Sev 2's raised in that group. That is Microsoft terminology for high impact issue. It also means there will be eyes on it. I leave this open and post updates as I get them. Same issue with two of our application. Hey team, do you still see the issues? Could you please let us know if you still see the issues? Hey team, do you still see the issues? Could you please let us know if you still see the issues? The last instance we observed was at 2024-07-08 04:00:17 pm PDT (~3 hours ago). However, since it's intermittent and our app traffic volume is lower in the evening, we'll need to wait longer to be sure Thank you for sharing @at1as! For others, please do share whether you still see the issues or not with me here :) One more question to the group: could you share your end point you are targeting like @gitnavneet did? I am mainly interested in the region that the end point contains. ex: "https://smba.trafficmanager.net/amer/v3/conversations/.../activities/..." Hi, We started facing this error with our Teams App on 3/Jul/2024 and it has already impacted an important release. Before this, our App has been running fine since Dec'23. We are using<EMAIL_ADDRESS>on<EMAIL_ADDRESS>The exception has the format: FetchError: request to failed, reason: unable to verify the first certificate type: system errno, code: UNABLE_TO_VERIFY_LEAF_SIGNATURE The target is of the form: "https://smba.trafficmanager.net/amer/v3/conversations/.../activities/..." 
We face this intermittently when our Teams app sends or updates an adaptive card using the sendActivity or updateActivity methods of the TurnContext object in the Bot Builder SDK for Node.js. We were able to replicate this issue in non-production and usually see it occur for 10% of the send/update attempts. Handling the exception and retrying the activity works on the first attempt. I think you should be able to fix this by identifying and fixing the failing certificate chain on the concerned nodes of your distributed infrastructure. Looking forward to a quick fix! Thanks. Haven't seen the issue in the last few hours, but, will wait and see how things go tomorrow! Our region endpoint was amer Hey team, do you still see the issues? Could you please let us know if you still see the issues? Hey @YunnyChung, We tried replicating the issue again in non-production today, however, it has not recurred! 😄 In summary, We faced a total of 70 errors in production some of which caused a bad UX. The first exception occurred on 3-Jul-24 at 21:56:45 UTC. The last exception was ~12 hours ago on 8-Jul-24 at 20:35:26 UTC. The target endpoint was always in the amer region. We will continue monitoring the production logs for a few more days and report back here if the issue recurs. 🤔 Could you please share the root cause analysis? Thanks! Errors stopped for as well Thank you so much everyone for sharing the information here! Yes, please do let me know whether errors have stopped or still occurring. We will share the root cause analysis when it is ready. One another small favor I want to ask to this community: For those who encountered this issue, are you using Linux? Could you share the OS of the machine you used when this issue occurred? Thank you so much everyone for sharing the information here! Yes, please do let me know whether errors have stopped or still occurring. We will share the root cause analysis when it is ready. One another small favor I want to ask to this community: For those who encountered this issue, are you using Linux? Could you share the OS of the machine you used when this issue occurred? Our solution is deployed as an Azure Function app running on Windows. Thank you so much everyone for sharing the information here! Yes, please do let me know whether errors have stopped or still occurring. We will share the root cause analysis when it is ready. One another small favor I want to ask to this community: For those who encountered this issue, are you using Linux? Could you share the OS of the machine you used when this issue occurred? I didn't see any errors yesterday, thanks for that! However, today we received a new one, probably related: { "url": "https://smba.trafficmanager.net/amer/api/v2/[redacted]/channel/messages/[redacted]" "response": "read ECONNRESET" } Thank you so much everyone for sharing the information here! Yes, please do let me know whether errors have stopped or still occurring. We will share the root cause analysis when it is ready. One another small favor I want to ask to this community: For those who encountered this issue, are you using Linux? Could you share the OS of the machine you used when this issue occurred? Our bot is deployed as an service in Google Cloud Functions @FBraz-RMFarma Our bot is deployed as an service in Google Cloud Functions No kidding? That's pretty cool. I saw the ECONNRESET as well yesterday. We saw this error once too, today at 6:15:57 UTC. 
{ "level": "error", "message": "An error occurred while handling the event FetchError: request to https://smba.trafficmanager.net/amer/v3/conversations/.../activities/... failed, reason: read ECONNRESET" } Yeah, we got 223 issues in the last hour 😞 Yeah, we got 223 issues in the last hour 😞 @avilabiel , now you have me worried! 😮 How did those 223 new issues impact your app or end users? For our app, it caused an ephemeral error message, but the action succeeded with a retry. Thank you everyone for sharing the issue here 🙏🏻 For those who started to see the error, could you please share the following details? How often does this issue occur? (ex: All requests are failing or some? If this happens intermittently, how often? (ex: 1%, 5%, 10% of the requests etc)) When did you start to see the issue? (date with approximate time (with timezone) will be super appreciated 🙏🏻) If the issue stopped happening, could you share the last time you saw the issue? (again date with approximate time will be super helpful here) Could you share the end point with the region information? ex: https://smba.trafficmanager.net/amer/v3/conversations/.../activities/... Latest: 2024-07-10 08:00:42 am PDT (50 mins ago) First: 2024-07-09 08:00:03 pm PDT Occurrences: 37 Our app is heavily skewed to triggering on the hour, for certain hours, so 43 minutes of no occurrences does not tell us much. It is intermittent, but I don't have immediate counts. We see it for this endpoint (though it's our most heavily used, so this isn't necessarily an exhaustive list): https://smba.trafficmanager.net/amer/v3/conversations", "method":"POST" We are in active process for investigation & mitigation -quick two follow up questions here: (1) For those failed requests, do you see those requests' responses containing header with the ms-cv? (2) Could you let me now if you see the decrease of the errors starting 07/10/24 8:30 pm in UTC? @YunnyChung , The read ECONNRESET error occurred multiple times for our app today (10/Jul/24). Some users missed important updates. So, we have postponed the app launch to a wider group. (1) I did not notice any header with ms-cv in our logs. (2) Yes, errors have decreased. They have not occurred since 19:05:49 UTC. Some error timestamps (UTC): 6:15:57 (first), 14:16:51, 14:19:28, 15:21:01, 16:50:57, 19:05:49 (last). Target: https://smba.trafficmanager.net/amer/v3/conversations/.../activites/... Hey guys, please let me know if the problems have stopped? I have NOT seen any read ECONNRESET errors since 10/Jul/24, 19:05:49 UTC. 🎉 Our bot is able to reply to users again. All of our issues are now resolved. 🎉 @YunnyChung we randomly received today 8 of these ECONNRESET-Reports when trying to update an activity. FetchError: request to https://smba.trafficmanager.net/ch/v3/conversations/a%3A12zhsCtQNu1KBcePW8dV1g1toHlDUWb2jVVh3P-uhAUQo1R6TY3N7K5QA9v9XmkG9MgeIsSJZhwwNxMNAaEa4j_beiFBTIhC_kPqJ-cZUmlJxuWCX49R1Rs563GgjE90N/activities/f%3A025e984b-748d-1150-1ae5-e945fd8b3bd3 failed, reason: read ECONNRESET at ClientRequest.<anonymous> (/var/www/xxx/prod/bot/node_modules/node-fetch/lib/index.js:1501:11) at ClientRequest.emit (node:events:519:28) at TLSSocket.socketErrorListener (node:_http_client:500:9) at TLSSocket.emit (node:events:519:28) at emitErrorNT (node:internal/streams/destroy:169:8) at emitErrorCloseNT (node:internal/streams/destroy:128:3) at processTicksAndRejections (node:internal/process/task_queues:82:21) In general it works, but this on conversation seems to fail always. 
@YunnyChung we have started facing the ECONNRESET errors again with multiple users. It fails while trying to access the below endpoint: FetchError: request to https://smba.trafficmanager.net/amer/v3/conversations/ So far we have faced 5 instances of this issue.
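Based on the workaround several people in this thread describe (catching the TLS error and retrying), a minimal sketch of such a wrapper might look like the following; the matched messages and retry counts are the thread's own suggestions, not an official SDK recommendation:

```typescript
// Sketch of the catch-and-retry workaround described above; not part of botbuilder itself.
async function withTransientRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
    for (let attempt = 0; ; attempt++) {
        try {
            return await fn();
        } catch (e: unknown) {
            const message = e instanceof Error ? e.message : String(e);
            const transient =
                message.includes('unable to verify the first certificate') ||
                message.includes('UNABLE_TO_VERIFY_LEAF_SIGNATURE') ||
                message.includes('ECONNRESET');
            if (!transient || attempt >= maxRetries) {
                throw e;
            }
            // Small backoff before retrying the Connector call.
            await new Promise((resolve) => setTimeout(resolve, 250 * (attempt + 1)));
        }
    }
}

// Usage: wrap the SDK call that talks to smba.trafficmanager.net.
// await withTransientRetry(() => context.sendActivity('hello'));
```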
2025-04-01T04:34:42.523248
2018-12-14T14:47:11
391139377
{ "authors": [ "lupino3" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8509", "repo": "microsoft/cmd-call-graph", "url": "https://github.com/microsoft/cmd-call-graph/issues/16" }
gharchive/issue
Add a lint mode The tool can identify some flaws in batch files, for example goto / call to non-existing labels. Implement a --lint option that only outputs potential problems with a given script. Example checks: dead code misspelled labels for goto/calls
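A rough sketch of what the misspelled-label check could look like on top of the existing call graph; the node and connection shapes here are hypothetical, not the tool's current data model:

```python
# Hypothetical sketch of one --lint check; cmd-call-graph's real structures may differ.
def lint_undefined_targets(nodes):
    """Report goto/call targets that don't correspond to any defined label."""
    defined = {node.name for node in nodes}
    problems = []
    for node in nodes:
        for kind, target, line_number in node.connections:  # assumed (kind, target, line)
            if target not in defined:
                problems.append(
                    f"line {line_number}: {kind} to undefined label '{target}'"
                )
    return problems
```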
2025-04-01T04:34:42.586017
2024-04-11T20:31:34
2238549799
{ "authors": [ "marvinbuss", "prdpsvs" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8510", "repo": "microsoft/dbt-fabric", "url": "https://github.com/microsoft/dbt-fabric/pull/152" }
gharchive/pull-request
Add Synapse Spark Authentication Option Add an option to authenticate from Synapse Spark using mssparkutils. This is required since the azure-identity library does not work in Azure Synapse. Microsoft Fabric Spark may have a similar limitation. Reference: https://github.com/Azure/azure-sdk-for-python/issues/26997 Just sharing here some manual tests from Synapse Spark: @marvinbuss , Have you tried this in both Fabric and Synapse workspace notebooks? Note that the mssparkutils library is not available on PyPI to add it as a dependency. The dbt-fabric adapter is missing the package dependency. Here is an example. @prdpsvs I have only validated Synapse; for Fabric there is no DW supported yet for the token generation, as documented here: https://learn.microsoft.com/en-us/fabric/data-engineering/microsoft-spark-utilities#get-token Note that the mssparkutils library is not available on PyPI to add it as a dependency. The dbt-fabric adapter is missing the package dependency. Here is an example. There is only a dummy library supported, as documented here: https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/microsoft-spark-utilities?pivots=programming-language-python#package-dependencies Would you like me to add that to the requirements-dev.txt? @prdpsvs Please let me know how to proceed here. We can also have a quick sync to discuss how we can integrate the proposed solution. Thanks! Closing this PR as I raised another one.
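For reference, the proposed authentication path would look something like the sketch below when run inside a Synapse (or Fabric) Spark session; the token audience and the pyodbc wiring are illustrative, not the adapter's final code:

```python
# Sketch only: mssparkutils is preinstalled in Synapse/Fabric Spark sessions, not on PyPI.
import struct

from notebookutils import mssparkutils  # import path used by the Synapse runtime

# Acquire an AAD token for the SQL endpoint; the audience value here is illustrative.
token = mssparkutils.credentials.getToken("DW")

# pyodbc accepts an AAD access token via connection attribute 1256 (SQL_COPT_SS_ACCESS_TOKEN),
# packed as a length-prefixed UTF-16-LE byte string.
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)
attrs_before = {1256: token_struct}
# pyodbc.connect(connection_string, attrs_before=attrs_before)
```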
2025-04-01T04:34:42.592170
2024-11-06T00:51:09
2636790890
{ "authors": [ "liu-weipeng" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8511", "repo": "microsoft/demikernel", "url": "https://github.com/microsoft/demikernel/pull/1453" }
gharchive/pull-request
Enhancement: Support NetVSC/VF dual NIC in Hyper-V VMs Motivation On certain Hyper-V provisioned VMs, there are two NICs: one NetVSC and one VF. They can share the same MAC address and act as a single device from an external point of view. Because the VF is faster, it may be used to optimize data-plane performance, while the NetVSC is used for the control plane. This is transparent to the application layer if the networking stack runs inside the kernel. However, XDP bypasses the kernel, so we have to deal with it manually. This change would allow the user to configure two interfaces for XDP to attach to and poll, since incoming packets could come from either the VF or the NetVSC. Looks fine to me. Is there a reason to keep the VF rx queues separate from the other rx queues? Could we just append them to the same vec? I think we can merge them into one, although keeping them separate might serve some diagnostic purpose now or in the future.
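A minimal sketch of the "append them to the same vec" option discussed above; the types and field names are hypothetical stand-ins for Demikernel's actual XDP runtime types:

```rust
// Hypothetical sketch: poll NetVSC and VF receive queues from one flat list.
struct RxQueue {
    nic: &'static str, // "netvsc" or "vf", kept only for diagnostics
    // descriptor ring, umem handles, etc. elided
}

impl RxQueue {
    fn poll(&mut self) -> Option<Vec<u8>> {
        // Placeholder: pop the next received frame, if any.
        None
    }
}

struct XdpRuntime {
    rx_queues: Vec<RxQueue>, // NetVSC and VF queues appended to the same vec
}

impl XdpRuntime {
    fn poll_once(&mut self) -> Option<(&'static str, Vec<u8>)> {
        // Round-robin over every queue regardless of which NIC it belongs to.
        for queue in self.rx_queues.iter_mut() {
            if let Some(frame) = queue.poll() {
                return Some((queue.nic, frame));
            }
        }
        None
    }
}
```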
2025-04-01T04:34:42.596434
2023-11-21T18:07:44
2004863768
{ "authors": [ "dany118d", "krschau" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8512", "repo": "microsoft/devhome", "url": "https://github.com/microsoft/devhome/issues/1939" }
gharchive/issue
Problem on screen Dev Home version 0.701.323.0 Windows build number 10.0.22631.2715 Other software OS Build Version: 10.0.22631.2715.amd64fre.ni_release.220506-1250 .NET Version: .NET 6.0.24 Steps to reproduce the bug Just open the app and navigate the sections Expected result Normal images and screen view Actual result Blurry parts Included System Information CPU: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz Physical Memory: 15.85GB (5.19GB free) Processor Architecture: x64 NVIDIA GEFORCE GTX 1060 GPU + Intel UHD 630 Included Extensions Information Extensions: Microsoft.Windows.DevHome_0.701.323.0_x64__8wekyb3d8bbwe Microsoft.Windows.DevHomeGitHubExtension_0.700.323.0_x64__8wekyb3d8bbwe Hi @dany118d, was this a one time thing or are you seeing it all the time? If you can reproduce it, can you include a screenshot? /feedbackhub This isn't an issue we can fix on Dev Home, but if you submit it through Feedback Hub it should get to the right people who can.
2025-04-01T04:34:42.609505
2024-07-03T18:54:19
2389249489
{ "authors": [ "bbonaby", "crutkas", "huzaifa-d", "joadoumie" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8513", "repo": "microsoft/devhome", "url": "https://github.com/microsoft/devhome/issues/3359" }
gharchive/issue
PowerShell script for "adding user to hyper-v" Dev Home version 0.1501.547.0 Windows build number 10.0.22631.3874 Other software OS Build Version: 10.0.22631.3874.amd64fre.ni_release.220506-1250 .NET Version: .NET 8.0.6 Steps to reproduce the bug Click click "More details" you see it is running far more than what i consented to. https://github.com/microsoft/devhome/blob/b257651e55feb6ec2f0497cfbb3c9a9cfd256b8e/common/Environments/Scripts/HyperVSetupScript.cs is the file. Expected result Only run command to add me to the group as that is ONLY what i consented to Actual result it runs https://github.com/microsoft/devhome/blob/b257651e55feb6ec2f0497cfbb3c9a9cfd256b8e/common/Environments/Scripts/HyperVSetupScript.cs Included System Information CPU: Intel(R) Core(TM) Ultra 7 165H Physical Memory: 31.64GB (12.87GB free) Processor Architecture: x64 Included Extensions Information Extensions: Microsoft.Windows.DevHomeGitHubExtension.Canary_0.1500.530.0_x64__8wekyb3d8bbwe (Dev Home GitHub Extension (Canary)) Microsoft.Windows.DevHomeAzureExtension.Canary_0.1000.530.0_x64__8wekyb3d8bbwe (Dev Home Azure Extension (Canary)) Microsoft.Windows.DevHome_0.1501.533.0_x64__8wekyb3d8bbwe (Core Widget Extension) Microsoft.Windows.DevHome_0.1501.533.0_x64__8wekyb3d8bbwe (Hyper-V Extension) Microsoft.Windows.DevHomeGitHubExtension_0.1500.533.0_x64__8wekyb3d8bbwe (Dev Home GitHub Extension (Preview)) Microsoft.Windows.DevHome.Canary_0.1501.547.0_x64__8wekyb3d8bbwe (Core Widget Extension) Microsoft.Windows.DevHome.Canary_0.1501.547.0_x64__8wekyb3d8bbwe (Hyper-V Extension) Widget Service: MicrosoftWindows.Client.WebExperience_524.18000.0.0_x64__cw5n1h2txyewy Even though the function used is the same, different messages are presented to the user if only the user isn't part of the admin group (screenshot in the bug description) or if the feature isn't enabled - So, we are indeed asking permission for only what we do This is not a P3 as the user did not give permission for what the script actually executes under the hood. A user doesn't give to do everything in that script What do we need to take more permission for? The dialog i provided, The user gave permission for a single action, adding the user to the group, nothing else in that script should happen or attempt to be executed. I am not sure I understand. What would you like the dialog to say? Either the entire dialog needs to change for this flow or the action must mimic the prompt. Right now it says "Add user to hyper-v group". the code behind must ONLY do that action. That is what the user consented to. I think a great larger dialog would be maybe something like we do on PowerToys for showing a user what we're going to do for setup on something like Command Not Found. User sees clearly what exact actions need to be done and the output. [ ] Hyper-V installed [ ] Is Part of Hyper-V Manager [x] Requirement XYZ [ ] Requirement ABC Here is it in powertoys in an intentionally half state What text would you prefer to be shown? What text would you like to be shown instead? We originally created the script since the virtualization page in Windows Customization wasn't created yet. Now that its created we'd move this enablement/adding user to the admin group over to that section. Perhaps, we can speak with Jordi on his thoughts about making the Hyper-V section in the virtualization page an expander or another L3 page to incorporate the feedback from Clint. We'd just redirect the user to that page to setup Hyper-V: @joadoumie what are your thoughts? 
I know currently we use PowerShell scripts even in the virtualization page, but in the future I believe Jeff will be working on lifting the elevation code from the setup flow into Dev Home core so it can be used by everyone. After that we'll move away from the PowerShell scripts. That sounds great to me. We can restructure the wording where necessary as well in the virtualization page to make sure it's more clear for folks coming from the environments view. I think we need to adopt a flow more like I outlined. User can see what the list of things that are needed at enablement time for the extension and can do everything there vs going to a bunch of spots.
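For reference, the single action the consent dialog describes — adding the current user to the Hyper-V Administrators group — boils down to something like the following PowerShell; this is a sketch, not the exact contents of HyperVSetupScript.cs:

```powershell
# Sketch only: the one "add user to group" step the dialog asks consent for.
# Requires an elevated PowerShell session.
$group = "Hyper-V Administrators"
$user  = "$env:USERDOMAIN\$env:USERNAME"

if (-not (Get-LocalGroupMember -Group $group -Member $user -ErrorAction SilentlyContinue)) {
    Add-LocalGroupMember -Group $group -Member $user
}
```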
2025-04-01T04:34:42.615531
2023-05-29T20:58:24
1731228669
{ "authors": [ "UltimateZero", "gcarramate", "vineeththomasalex" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8514", "repo": "microsoft/devhome", "url": "https://github.com/microsoft/devhome/issues/941" }
gharchive/issue
Can't ask for GitHub permission after revoking access Dev Home version <IP_ADDRESS> Windows build number 10.0.22621.1702 Other software OS Build Version: 10.0.22621.1702.amd64fre.ni_release.220506-1250 .NET Version: .NET 6.0.16 Steps to reproduce the bug Link GitHub account to Dev Home Revoke access to account inside GitHub Try to link again, or add repositories Expected result I would be able to link my account again. Actual result Dev Home just closes. Included System Information CPU: AMD Ryzen 7 5800H with Radeon Graphics Physical Memory: 15.35GB (5.54GB free) Processor Architecture: x64 Included Extensions Information Extensions: Microsoft.Windows.DevHome_<IP_ADDRESS>_x64__8wekyb3d8bbwe Microsoft.Windows.DevHomeGitHubExtension_<IP_ADDRESS>_x64__8wekyb3d8bbwe Encountered this issue as well; even re-installing Dev Home and the Dev Home GitHub Extension doesn't fix it. You need to open Credential Manager and delete the GithubDevHomeExtension credential entry. I signed out, reset the app under Settings -> Apps, and even uninstalled Dev Home and installed it again. After trying all of this, the only way to fix it was by manually deleting the GithubDevHomeExtension credential entry in Credential Manager, as @UltimateZero mentioned. Since v0.4, logging out and logging back in will display the authorization page of the Dev Home GitHub Extension OAuth app while signing in to GitHub.
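The manual workaround above can also be done from a terminal; a sketch using the built-in cmdkey tool (the exact credential target name on a given machine may differ, so list first and copy the name cmdkey reports):

```powershell
# List stored credentials and find the Dev Home GitHub Extension entry.
cmdkey /list | Select-String -Pattern "GithubDevHomeExtension"

# Delete it, using the exact target name printed above ("GithubDevHomeExtension" is a placeholder).
cmdkey /delete:GithubDevHomeExtension
```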
2025-04-01T04:34:42.618188
2021-12-01T16:09:35
1068595195
{ "authors": [ "bwalderman", "neethynair" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8515", "repo": "microsoft/edge-selenium-tools", "url": "https://github.com/microsoft/edge-selenium-tools/issues/56" }
gharchive/issue
We are using Python Robot Framework automation. Is it possible to download and install the Edge driver automatically while the script runs, based on the version passed as a parameter? We are using Python Robot Framework automation. Is it possible to download and install the Edge driver automatically while the script runs, based on the version passed as a parameter? We are using Selenium 3.14, Robot Framework SeleniumLibrary 4.5.0, and Python 3.7. @neethynair you can use WebDriverManager, which is a 3rd-party tool that can automatically manage drivers for all browsers including Microsoft Edge. I'm going to close this issue since this repo is now deprecated. We're recommending users upgrade their tests to Selenium 4. If you need further help with the Microsoft Edge driver, you can contact the DevTools team at https://docs.microsoft.com/en-us/microsoft-edge/devtools-guide-chromium/contact
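A minimal example of the suggested approach with the third-party webdriver-manager package and Selenium 4; the package and class names below are as published on PyPI, and pinning a specific driver version is optional (the keyword argument name for that has changed between webdriver-manager releases):

```python
# pip install selenium webdriver-manager
from selenium import webdriver
from selenium.webdriver.edge.service import Service
from webdriver_manager.microsoft import EdgeChromiumDriverManager

# Downloads a matching msedgedriver automatically and returns its path.
service = Service(EdgeChromiumDriverManager().install())
driver = webdriver.Edge(service=service)

driver.get("https://www.example.com")
driver.quit()
```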
2025-04-01T04:34:42.631827
2024-06-07T06:54:03
2339727698
{ "authors": [ "apurvabhaleMS", "v-ajajvanu" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8516", "repo": "microsoft/fhir-server", "url": "https://github.com/microsoft/fhir-server/pull/3905" }
gharchive/pull-request
User story 100880 - Changes for Routing and Binders namespace. Description changes to below namespaces for user story 100880 Microsoft.Health.Fhir.Api.Features.Binders Microsoft.Health.Fhir.Api.Features.Routing Related issues user story AB#100880 Testing ran unit tests. FHIR Team Checklist Update the title of the PR to be succinct and less than 65 characters Add a milestone to the PR for the sprint that it is merged (i.e. add S47) Tag the PR with the type of update: Bug, Build, Dependencies, Enhancement, New-Feature or Documentation Tag the PR with Open source, Azure API for FHIR (CosmosDB or common code) or Azure Healthcare APIs (SQL or common code) to specify where this change is intended to be released. Tag the PR with Schema Version backward compatible or Schema Version backward incompatible or Schema Version unchanged if this adds or updates Sql script which is/is not backward compatible with the code. [ ] CI is green before merge Review squash-merge requirements Semver Change (docs) Patch|Skip|Feature|Breaking (reason) /azp run /azp run /azp run
2025-04-01T04:34:43.034043
2023-01-15T21:56:56
1534019482
{ "authors": [ "kabutz", "karianna", "kirk-microsoft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8517", "repo": "microsoft/gctoolkit", "url": "https://github.com/microsoft/gctoolkit/pull/255" }
gharchive/pull-request
Message Updated buffering @kirk-microsoft not sure how this interacts with #252 Weird, all of these changes are in the main message branch so I'll close this PR even though I thought I squash merged it as it came in.. hummm
2025-04-01T04:34:43.035537
2022-01-06T12:49:20
1095290334
{ "authors": [ "alloy" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8518", "repo": "microsoft/graphitation", "url": "https://github.com/microsoft/graphitation/issues/83" }
gharchive/issue
[ARRDT] Add support fragment arguments directives https://relay.dev/docs/api-reference/graphql-and-directives/#arguments https://relay.dev/docs/api-reference/graphql-and-directives/#argumentdefinitions Closed by #88
2025-04-01T04:34:43.050091
2023-11-07T19:28:08
1982074900
{ "authors": [ "cpendery", "zhelenskiy" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8519", "repo": "microsoft/inshellisense", "url": "https://github.com/microsoft/inshellisense/issues/24" }
gharchive/issue
Parse error near `__inshellisense Describe the bug The app fails with .inshellisense/key-bindings.zsh:10: parse error near `__inshellisense' when I open a terminal after the installation. To Reproduce Steps to reproduce the behavior: Install the app on a Mac M2 Pro. Open a new terminal. Get the message. Inshellisense does not work. Expected behavior Everything works fine. Environment OS: macOS 14.0 Output of inshellisense --version: 0.0.1-rc.1 Nodejs Version: 16.0.0 Additional context Running is -s zsh prints the following error (the same trace is printed twice):

ERROR pagedSuggestions.at is not a function
file:///opt/homebrew/lib/node_modules/@microsoft/inshellisense/build/ui/suggestions.js:48:48
- Suggestions (file:///opt/homebrew/lib/node_modules/@microsoft/inshellisense/build/ui/suggestions.js:48:48)
- renderWithHooks (/opt/homebrew/lib/node_modules/@microsoft/inshellisense/node_modules/react-reconciler/cjs/react-reconciler.development.js:7478:18)
- mountIndeterminateComponent (/opt/homebrew/lib/node_modules/@microsoft/inshellisense/node_modules/react-reconciler/cjs/react-reconciler.development.js:11247:13)
- beginWork (/opt/homebrew/lib/node_modules/@microsoft/inshellisense/node_modules/react-reconciler/cjs/react-reconciler.development.js:12760:16)
- beginWork$1 (/opt/homebrew/lib/node_modules/@microsoft/inshellisense/node_modules/react-reconciler/cjs/react-reconciler.development.js:19569:14)
- performUnitOfWork (/opt/homebrew/lib/node_modules/@microsoft/inshellisense/node_modules/react-reconciler/cjs/react-reconciler.development.js:18703:12)
- workLoopSync (/opt/homebrew/lib/node_modules/@microsoft/inshellisense/node_modules/react-reconciler/cjs/react-reconciler.development.js:18609:5)
- renderRootSync (/opt/homebrew/lib/node_modules/@microsoft/inshellisense/node_modules/react-reconciler/cjs/react-reconciler.development.js:18577:7)
- performSyncWorkOnRoot (/opt/homebrew/lib/node_modules/@microsoft/inshellisense/node_modules/react-reconciler/cjs/react-reconciler.development.js:18193:20)
- flushSyncCallbacks (/opt/homebrew/lib/node_modules/@microsoft/inshellisense/node_modules/react-reconciler/cjs/react-reconciler.development.js:2936:22)

The above error occurred in the <Suggestions> component: at Suggestions (file:///opt/homebrew/lib/node_modules/@microsoft/inshellisense/build/ui/suggestions.js:42:39) at ink-box at file:///opt/homebrew/lib/node_modules/@microsoft/inshellisense/node_modules/ink/build/components/Box.js:5:27 at UI (file:///opt/homebrew/lib/node_modules/@microsoft/inshellisense/build/ui/ui-root.js:11:15) at App (file:///opt/homebrew/lib/node_modules/@microsoft/inshellisense/node_modules/ink/build/components/App.js:19:9) React will try to recreate this component tree from scratch using the error boundary you provided, InternalApp. This might be a combination of both node version && CRLF issues. See #16 for the parsing error as this is a duplicate of that. Which node version are you using @zhelenskiy? @cpendery mentioned it in the description: 16.0.0. @zhelenskiy, can you try again with 0.0.1-rc.3 once it gets published to npm? The same. @zhelenskiy, you can try running the is uninstall command to clear the key bindings and then create a fresh binding for zsh via is bind. It may still have left over CRLFs from the first time you created the binding. Ok, now it doesn't show anything in the new terminal, but it also does nothing at all. I use iTerm by the way. Great, so you can start a new inshellisense session by running ^A or via running is -s zsh. It won't start one for you automatically on shell start. You'll need at least node 16.6.0 & higher based on Mozilla's docs for the .at function. This is the same issue as #14.
2025-04-01T04:34:43.055682
2021-03-25T23:36:40
841431702
{ "authors": [ "jamesadevine", "mmoskal", "tballmsft" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8520", "repo": "microsoft/jacdac", "url": "https://github.com/microsoft/jacdac/issues/304" }
gharchive/issue
lo-pulse delay Describe the bug Waking up from sleep, transferring the RTC timer to the regular timer, and setting up reception takes 50-60us on F0 running at 8MHz (depending on how much of the clock state has to be updated). MakeCode on micro:bit has a 14us lo-pulse and 42us delay, for a total of 56us, so sometimes we manage to set it up in time and sometimes we don't. This is why modules lose packets from micro:bit. I did some fixes in https://github.com/microsoft/jacdac-stm32x0/commit/21d02d92e321f01a6829859643aa269bf590fc3a and now it generally falls under 56us, though every now and then it doesn't. The spec has a minimum delay of 40us, and I think we should raise that to 50us so that we have some buffer. I don't think this is going to be a problem for lower-end MCUs as they won't have complicated timers to update on wake up (if they even sleep). It's also not a problem for G0 as it runs at 16MHz. @jamesadevine comments? wow, nice catch. This is after max cycle skimming? Again it feels like we shouldn't necessarily change the length of the low data gap without good reason. Are you also proposing to change the top threshold so the low pulse data gap could be 50-100 us? Even with 50 as the new minimum the stm32f0 will not necessarily be compatible with every other jd implementation... Why wouldn't it be compatible? I was missing 3us or so with the 42us delay. The times I quoted for F0 are from the beginning of the lo-pulse (because that's when we get an interrupt). I think F0 is pretty much the worst case - we sleep it, it is slow when it wakes, and we use the RTC because it doesn't have a low-power timer (so we have to convert RTC to TIM17 on wake up). As discussed offline with @jamesadevine we need to update the docs to say: the min delay after the lo-pulse is 50us; implementations should stay as close as possible to, but above, 50us; we can increase the max time from 100us to 200us (we almost never pay that)
2025-04-01T04:34:43.062020
2020-11-09T06:23:38
738723706
{ "authors": [ "Berkunath", "chrisglein" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8521", "repo": "microsoft/microsoft-ui-xaml", "url": "https://github.com/microsoft/microsoft-ui-xaml/issues/3574" }
gharchive/issue
WinUI Desktop app crashes on Navigate to View from another Library I have created an application with a NavigationView that contains a Frame. While trying to navigate to a view from another library using the Frame.Navigate method, the application crashes. If the view is inside the same library, Frame.Navigate works properly. Steps to reproduce the bug Run the attached sample. The application crashes on Frame.Navigate. Expected behavior The application should not throw any exception. Version Info WinUI NuGet package version - 3.0.0-preview2.200713.0, Windows app type - WinUI Desktop app, OS version(s) - 10.0.18362.1139 Sample: FrameNavigation.zip Reminding me of #3371, which also was an issue doing things cross-library with WinUI3 preview. @JesseCol The actual crash seems to be in the UDK. Is that expected to show up that way or is that a red herring in this type of issue? @Berkunath Could you try this repro on preview3 of WinUI3? Also, does this same scenario work without WinUI3 on a plain OS XAML app? @chrisglein After WinUI 3 Preview 3, it seems to work fine without crashing.
2025-04-01T04:34:43.067406
2024-04-09T19:55:29
2234199538
{ "authors": [ "TheJoeFin", "ranjeshj" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8522", "repo": "microsoft/microsoft-ui-xaml", "url": "https://github.com/microsoft/microsoft-ui-xaml/issues/9525" }
gharchive/issue
Ink on InkCanvas flashes when Ink Canvas (or parent control) is enabled or disabled Describe the bug When there is ink on an InkCanvas and the parent control's IsEnabled value is changed, the ink flashes. Steps to reproduce the bug The issue is visible in my app, Ink Calendar and Journal. Switch between views (which enables and disables the parent FlipView) Tap the lock symbol (also enables and disables the parent FlipView) Notice flashing ink. Expected behavior Ink, once loaded, should not flash when the parent control's IsEnabled changes. Screenshots https://github.com/microsoft/microsoft-ui-xaml/assets/7809853/b2f519ac-ef78-462a-ad9e-172ae797182c NuGet package version WinUI 2 - Microsoft.UI.Xaml 2.8.2 Windows version Windows 11 (22H2): Build 22621 Additional context I am actually using WinUI 2.8.6 and on Windows 11 23H2. I reviewed all of the possibly related issues and they are not duplicates of this issue. 😊 @TheJoeFin sorry, we are only making business-critical fixes in winui2 at this point - https://github.com/microsoft/microsoft-ui-xaml/blob/main/docs/contribution_handling.md#winui-2
2025-04-01T04:34:43.071851
2024-07-11T13:25:26
2403747607
{ "authors": [ "Wandtket", "codendone" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8523", "repo": "microsoft/microsoft-ui-xaml", "url": "https://github.com/microsoft/microsoft-ui-xaml/issues/9809" }
gharchive/issue
Titlebar buttons inactive on window maximize. Describe the bug Titlebar buttons inactive when window is in a maximized state (except for minimize button on occasion). When you move the window out of the maximized position the buttons become clickable again. It worked properly in 1.5.240428000. Steps to reproduce the bug Create a new WinUI Project, Extend Content into Title Bar, Maximize the window and try to interact with anything in the top right most part of the screen. Expected behavior Buttons should remain clickable when maximized. Screenshots NuGet package version Windows App SDK 1.5.5: 1.5.240627000 Packaging type Packaged (MSIX) Windows version Windows 11 version 22H2 (22621, 2022 Update) IDE Visual Studio 2022 Additional context No response @Wandtket Can you attach a simple repro app? We're not able to repro. @codendone As Requested, TitleBar.zip I added a TabView with the buttons I used in the screenshot. This was resolved as duplicate of #9749 -- fixed by the same change.
2025-04-01T04:34:43.073890
2021-07-19T07:59:46
947377480
{ "authors": [ "journeyman-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8524", "repo": "microsoft/moabsim-py", "url": "https://github.com/microsoft/moabsim-py/pull/14" }
gharchive/pull-request
Journeyman/model import test pytest -s tests\test_model_import.py --brain_name testmoab --custom_assess_name my_custom_1 --log_analy_workspace <YOUR_LOG_ANALYTICS_WORKSPACE> Bonsai does not automatically start logging of custom assessment using CLI. I'll have to think about how to address this. There's a thread going where this was going to be changed. Please use bonsai-cli from brain/src/Services/bonsaicli2 because logs don't automatically get stored without it. Waiting on release to Pypi git checkout arkadiya/custom-assessment-log-all-and-limit-20 cd brain/src/Services/bonsaicli2 pip install -e . Also episode lengths are incorrect https://bizair.visualstudio.com/Machine Teaching/_workitems/edit/24479/
2025-04-01T04:34:43.079489
2021-05-19T15:24:39
895582905
{ "authors": [ "hediet", "knieriem" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:8525", "repo": "microsoft/monaco-editor", "url": "https://github.com/microsoft/monaco-editor/issues/2489" }
gharchive/issue
Monarch: Markdown syntax highlighting: continuation lines of list items rendered as code blocks When using the File viewer/editor in Azure DevOps Git repositories, which appears to be based on Monaco/Monarch, I am observing an issue with syntax highlighting in the case of Markdown files, like CHANGELOG.md: Continuation lines of list elements are highlighted as if they were code blocks, if the indentation is 4 spaces (3 spaces between the - and the text), or one tab: - item continuation paragraph, still part of the list item In the Language Editor within the Monaco Editor of the Monarch documentation this is displayed as: Monarch's language syntax definition for Markdown contains a rule for code blocks that renders indented lines (4 spaces or a tab) as string: [ // list (starting with * or number) [/^\s*([\*\-+:]|\d+\.)\s/, 'keyword'], // code block (4 spaces indent) [/^(\t|[ ]{4})[^ ].*$/, 'string'] ] This also explains why the effect is visible only if the indentation of the continuation lines is exactly 4 spaces or 1 tab, not if it is e.g. 3 spaces. Perhaps the grammar for lists would need to be more complex. While it is not a big issue, it gives the viewer the impression that there is something wrong with the Markdown syntax (normal text, rendered in red), which from my view isn't the case. monaco-editor version: 0.24.0 Browser: Chrome OS: Linux/Windows Unfortunately, we don't have the resources to fix this bug right now, but we're more than happy to accept a PR for this! Do you have suggestions for what the proper rule should be? Closing due to inactivity. If this bug still applies, please create a new issue. Make sure to select the form for bugs and include a minimal reproducible example. Thanks!
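Since the maintainers asked for suggestions on the proper rule, one possible direction — a hedged, untested sketch rather than an accepted fix — is to push a Monarch state when a list item starts, so that indented lines inside a list are treated as continuation text instead of a code block:

```js
// Hypothetical adjustment to the Markdown Monarch grammar fragment above, untested.
tokenizer: {
  root: [
    // list item start: enter a state that knows we are inside a list
    [/^\s*([\*\-+:]|\d+\.)\s/, { token: 'keyword', next: '@listItem' }],
    // code block (4 spaces indent) outside of lists
    [/^(\t|[ ]{4})[^ ].*$/, 'string'],
  ],
  listItem: [
    // a blank line ends the list context (simplification; real Markdown is looser)
    [/^\s*$/, { token: '', next: '@pop' }],
    // another list item keeps us in the list
    [/^\s*([\*\-+:]|\d+\.)\s/, 'keyword'],
    // indented continuation lines stay plain text instead of 'string'
    [/^(\t|[ ]{4,})\S.*$/, ''],
    // anything else: re-lex the line with the root rules
    [/^./, { token: '@rematch', next: '@pop' }],
  ],
},
```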