|azure|terraform|terraform-provider-azure|
I have an array state variable `flashingBoxLoaderItems` that collects the "receipts" from individual async fetches happening in the component, to let me know when all the fetches are complete. There's also a boolean variable `flashingBoxLoaderComplete` that is set in an `Effect` monitoring any changes on `flashingBoxLoaderItems`, to hide the loader when everything's done.

```
const [flashingBoxLoaderItems, setFlashingBoxLoaderItems] = useState<string[]>([]);
const [flashingBoxLoaderComplete, setFlashingBoxLoaderComplete] = useState(false);
```

Example of the fetches. Note that the state variable is correctly updated for the Effect using the spread `...` syntax, so the effect does pick it up:

```
const fetch1 = async () => {
  try {
    const response = await get<any>("/url1/");
    //...
  } catch {
    //...
  } finally {
    setFlashingBoxLoaderItems([...flashingBoxLoaderItems, 'fetch1']);
  }
}
// ETC. - Same for fetch2(), fetch3(), etc.
```

Effect to check all the receipts and set the final `complete` variable:

```
useEffect(() => {
  let fetchesExpected = ['fetch1','fetch2','fetch3','fetch4','fetch5'];
  let result = fetchesExpected.every(i => flashingBoxLoaderItems.includes(i));
  setFlashingBoxLoaderComplete(result);
}, [flashingBoxLoaderItems]);
```

But this doesn't work correctly. The debugger shows that `flashingBoxLoaderItems` doesn't grow incrementally as expected: sometimes it has 1 or 2 strings instead of the expected 4 or 5 on the 4th or 5th fetch. My suspicion is that the fetches all happen at different times and the array isn't being maintained synchronously, so the `complete` variable never gets set to `true` after the 5th fetch as it should.

What does work is simply calling `flashingBoxLoaderItems.push('fetchN')` after each fetch. But in that case, since there's no state variable change, there's no re-render, so I never get the chance to hide/show the loader.
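For context on the suspicion above: each fetch's `finally` block closes over the `flashingBoxLoaderItems` value from the render in which the fetch was started, so concurrent fetches can overwrite each other's receipts. React's functional updater form of the setter avoids this. A plain-JavaScript sketch of the difference (a simulation of the setter, not real React):

```javascript
// Minimal simulation of the stale-closure problem (not real React).
let state = [];

// A setState that accepts either a value or an updater function,
// mimicking React's useState setter.
function setState(valueOrUpdater) {
  state = typeof valueOrUpdater === "function" ? valueOrUpdater(state) : valueOrUpdater;
}

// Broken: both "fetches" close over the SAME initial snapshot ([]),
// so the second call overwrites the first receipt.
const snapshot = state; // what each render's closure would capture
setState([...snapshot, "fetch1"]);
setState([...snapshot, "fetch2"]); // snapshot is still [], so "fetch1" is lost
console.log(state); // state is now ["fetch2"] — the "fetch1" receipt was lost

// Fixed: the functional updater always receives the LATEST state.
state = [];
setState(prev => [...prev, "fetch1"]);
setState(prev => [...prev, "fetch2"]);
console.log(state); // state is now ["fetch1", "fetch2"] — all receipts kept
```

In the component this would mean writing `setFlashingBoxLoaderItems(prev => [...prev, 'fetch1'])` in each `finally` block.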
I have defined a task in AWS ECS. I have copied the task ARN into the API call: ``` rsp = ecs.execute_command( container='test', command=json.dumps(event), task='arn:aws:ecs:##-####-#:############:task-definition/test:1', interactive=True ) ``` I receive the following error complaining about the task identifier: ``` botocore.errorfactory.InvalidParameterException: An error occurred (InvalidParameterException) when calling the ExecuteCommand operation: Task Identifier is invalid ``` I am not sure what is wrong as the task identifier has been copied from the AWS web interface where the task was created. Am I just missing something extremely obvious?
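For what it's worth, the ARN in the call is a *task definition* ARN (`:task-definition/test:1`), while the `task` parameter of `ExecuteCommand` expects the ID or ARN of a *running task*. A small sketch of the distinction (the cluster name and the boto3 lookup in the trailing comments are assumptions, not from the question):

```python
# The ARN in the question is a *task definition* ARN, but ExecuteCommand's
# `task` parameter expects a *running task* (ID or ARN):
#   task definition: arn:aws:ecs:region:account:task-definition/name:revision
#   running task:    arn:aws:ecs:region:account:task/cluster-name/task-id

def is_running_task_arn(arn: str) -> bool:
    """Return True for a running-task ARN, False for a task-definition ARN."""
    resource = arn.split(":", 5)[-1]  # e.g. "task/my-cluster/0123abcd"
    return resource.startswith("task/")

print(is_running_task_arn("arn:aws:ecs:us-east-1:123456789012:task-definition/test:1"))    # False
print(is_running_task_arn("arn:aws:ecs:us-east-1:123456789012:task/my-cluster/0123abcd"))  # True

# With boto3, a running task could be looked up first (the cluster name
# "my-cluster" is an assumption):
#   task_arns = ecs.list_tasks(cluster="my-cluster")["taskArns"]
#   rsp = ecs.execute_command(
#       cluster="my-cluster",
#       container="test",
#       command=json.dumps(event),
#       task=task_arns[0],
#       interactive=True,
#   )
```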
boto3 ecs.execute_command: Task Identifier is invalid
|python|amazon-web-services|boto3|amazon-ecs|
```
const icone = document.querySelector("i");
console.log(icone); // I submit

icone.addEventListener('click', function() {
    console.log('icône cliqué');
    console.log('Before:', icone.classList);
    icone.classList.toggle('fa-meh-blank');
    icone.classList.toggle('fa-smile-wink');
    console.log('After:', icone.classList);
});
```

I think the code is working properly now. You forgot to remove the old class before adding the new one.
|javascript|php|json|stripe-payments|stripe-payment-intent|
I have an HTML text which will contain some Thymeleaf variables. If there are unknown variables or similar issues, I always get an error message and the String is no longer processed with variables at all. I would like to replace the values for which I got an error with just an empty String (""). How can I do this? Here is a simple test: `customer.` (with a dot) is an issue for Thymeleaf, so I get an error.

```
public static void main(String[] args) {
    StringTemplateResolver templateResolver = new StringTemplateResolver();
    templateResolver.setTemplateMode(TemplateMode.HTML);

    // Create a template engine
    TemplateEngine templateEngine = new TemplateEngine();
    templateEngine.setTemplateResolver(templateResolver);

    String templateString = "<p>Hello, {{customer.name}} !</p>";

    Customer customer = new Customer();
    customer.setName("Meier");

    Context context = new Context();
    context.setVariable("customer.", customer);

    templateString = replaceCustomVariableSyntax(templateString);

    // Process the template with the variables
    StringWriter stringWriter = new StringWriter();
    try {
        templateEngine.process(templateString, context, stringWriter);
    } catch (Exception e) {
        // Catch any exceptions and return an empty string
    }
    System.out.println(stringWriter.toString());
}
```
Replace variables that cause errors with an empty String
|thymeleaf|
# CSES Problem Set GridPaths? I hope you're doing well. I'm currently working on a C++ program for maze traversal, and I've run into a few issues that I could use some assistance with. The program uses a depth-first search (DFS) algorithm to navigate through a 7x7 grid maze with certain restrictions. I've encountered unexpected results, and I suspect there might be a bug or oversight in my code. The program is designed to handle different movements ('U', 'R', 'D', 'L') and '?' as a wildcard, allowing the algorithm to explore all possible paths. I've attached the code below, and I would greatly appreciate it if someone could take a look and provide insights or suggestions on how to improve it. Specifically, I'm interested in understanding why the counts variable doesn't seem to be updating correctly. ``` #include <bits/stdc++.h> using namespace std; #define B begin() #define E end() #define ll long long #define forin for(int i = 0; i < n; i++) int counts; int graph[7][7] = {0}; map<int,int> m; pair<int,int> turn[4] = {{0,-1},{1,0},{0,1},{-1,0}}; void dfs(int x,int y,int deep){ if(x==6&&y==0){ if(deep==48){ counts++; } return; } graph[x][y] = 1; if(m[deep]==-1){ for(int i=0;i<4;i++){ int xx = x+turn[i].first; int yy = y+turn[i].second; if(xx>=0&&xx<7&&yy>=0&&yy<7){ if(graph[xx][yy]==0){ dfs(xx,yy,deep+1); } } } }else { int xx = x+turn[m[deep]].first; int yy = y+turn[m[deep]].second; if(xx>=0&&xx<7&&yy>=0&&yy<7){ if(graph[xx][yy]==0){ dfs(xx,yy,deep+1); } } } graph[x][y] = 0; } int main() { string s; cin >> s; counts = 0; for(int i=0;i<s.size();i++){ if(s[i]=='?') m[i] = -1; else if(s[i]=='U') m[i] = 0; else if(s[i]=='R') m[i] = 1; else if(s[i]=='D') m[i] = 2; else if(s[i]=='L') m[i] = 3; } dfs(0,0,0); cout << counts; return 0; } ```
We are trying to replace WebSphere eXtreme Scale in our environment and are planning to use Redis. The only reason we use WebSphere eXtreme Scale is to provide HTTP session persistence. I believe we can use JCache/Redisson to have WebSphere Liberty use Redis for session persistence. Has anyone done that? If yes, would you please share your configuration for redisson-jcache.yaml and your server.xml configuration? Thanks. Here is the stack trace:

```
[3/27/24 1:04:12:216 UTC] 0000002b com.ibm.ws.logging.internal.impl.IncidentImpl I FFDC1015I: An FFDC Incident has been created: "java.lang.IllegalStateException: Default configuration hasn't been specified! com.ibm.ws.session.store.cache.CacheHashMap 250" at ffdc_24.03.27_01.04.11.0.log
[3/27/24 1:04:12:218 UTC] 0000002b com.ibm.ws.session.store.cache.CacheHashMap E SESN0307E: An exception occurred when initializing the cache. The exception is: java.lang.IllegalStateException: Default configuration hasn't been specified!
	at org.redisson.jcache.JCacheManager.createCache(JCacheManager.java:119)
	at com.ibm.ws.session.store.cache.CacheHashMap.cacheInit(CacheHashMap.java:192)
	at com.ibm.ws.session.store.cache.CacheHashMap.lambda$new$0(CacheHashMap.java:136)
	at java.security.AccessController.doPrivileged(AccessController.java:690)
	at com.ibm.ws.session.store.cache.CacheHashMap.<init>(CacheHashMap.java:135)
	at com.ibm.ws.session.store.cache.CacheStore.<init>(CacheStore.java:35)
	at com.ibm.ws.session.store.cache.CacheStoreService.createStore(CacheStoreService.java:323)
	at com.ibm.ws.session.SessionContext.createStore(SessionContext.java:355)
	at com.ibm.ws.session.SessionContext.createStore(SessionContext.java:344)
	at com.ibm.ws.session.SessionContext.createCoreSessionManager(SessionContext.java:257)
	at com.ibm.ws.session.SessionContext.createCoreSessionManager(SessionContext.java:190)
	at com.ibm.ws.webcontainer.session.impl.HttpSessionContextImpl.createCoreSessionManager(HttpSessionContextImpl.java:963)
	at com.ibm.ws.session.SessionContext.<init>(SessionContext.java:160)
	at com.ibm.ws.session.SessionContext.<init>(SessionContext.java:145)
	at com.ibm.ws.webcontainer.session.impl.HttpSessionContextImpl.<init>(HttpSessionContextImpl.java:61)
	at com.ibm.ws.webcontainer.session.impl.SessionContextRegistryImpl.createSessionContextObject(SessionContextRegistryImpl.java:98)
	at com.ibm.ws.webcontainer.session.impl.SessionContextRegistryImpl.createSessionContext(SessionContextRegistryImpl.java:86)
	at com.ibm.ws.webcontainer.session.impl.SessionContextRegistryImpl.getSessionContext(SessionContextRegistryImpl.java:309)
	at com.ibm.ws.webcontainer.WebContainer.getSessionContext(WebContainer.java:707)
	at com.ibm.ws.webcontainer.VirtualHost.getSessionContext(VirtualHost.java:190)
	at com.ibm.ws.webcontainer.webapp.WebGroup.getSessionContext(WebGroup.java:158)
	at com.ibm.ws.webcontainer.webapp.WebApp.createSessionContext(WebApp.java:1344)
	at com.ibm.ws.webcontainer.webapp.WebApp.commonInitializationStart(WebApp.java:1327)
	at com.ibm.ws.webcontainer.osgi.webapp.WebApp.commonInitializationStart(WebApp.java:256)
	at com.ibm.ws.webcontainer.webapp.WebApp.initialize(WebApp.java:1039)
	at com.ibm.ws.webcontainer.webapp.WebApp.initialize(WebApp.java:6705)
	at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost.startWebApp(DynamicVirtualHost.java:474)
	at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost.startWebApplication(DynamicVirtualHost.java:469)
	at com.ibm.ws.webcontainer.osgi.WebContainer.startWebApplication(WebContainer.java:1197)
	at com.ibm.ws.webcontainer.osgi.WebContainer.access$100(WebContainer.java:112)
	at com.ibm.ws.webcontainer.osgi.WebContainer$3.run(WebContainer.java:994)
	at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:247)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:522)
	at java.util.concurrent.FutureTask.run(FutureTask.java:277)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.lang.Thread.run(Thread.java:825)
```
You want the `local.fin_config` from your master `terragrunt.hcl` to be accessible in each project's `terragrunt.hcl` file. But, as far as I know, configurations defined in the master file are not automatically available in the child configurations: [Terragrunt](https://terragrunt.gruntwork.io/) does not natively support the inheritance of locals across configurations.

However, [inputs defined in an included `terragrunt.hcl` file](https://terragrunt.gruntwork.io/docs/reference/config-blocks-and-attributes/#include) can be accessed by the child configuration. That means you can try to use inputs for shared configuration rather than locals. Modify your master `terragrunt.hcl` in `terraform/infra` to output `fin_config` as an input that can be inherited by child configurations:

```hcl
# In master terragrunt.hcl
locals {
  # Your locals remain the same
}

inputs = {
  fin_config = local.fin_config
}
```

In your project-specific `terragrunt.hcl` (e.g., within `project1`), you should be able to access `fin_config` through the `inputs` of the included configuration:

```hcl
include "root" {
  path = find_in_parent_folders()
}

generate "provider" {
  path      = "providers.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "azurerm" {
  features {}
  subscription_id            = "abc"
  skip_provider_registration = true
  alias                      = "\${include.root.inputs.fin_config.law}"
}

provider "azurerm" {
  features {}
  subscription_id            = "\${include.root.inputs.fin_config.sub}"
  skip_provider_registration = true
}
EOF
}
```

For project-specific configurations (e.g., different Azure subscription IDs), you should define these directly within each project's `terragrunt.hcl` as inputs or manage them through additional `.tfvars` files specified in the `extra_arguments` block.

-----

> `Attempt to get attribute from null value`

The error should mean that, when the child configurations try to access `fin_config` via `include.root.inputs.fin_config`, the `fin_config` is not properly initialized or passed down as expected.
Terragrunt's `include` mechanism allows child configurations to inherit `inputs` from parent configurations, but this inheritance does not extend to the `locals` in the same straightforward way. When you try to pass `locals` from the parent configuration to the child through `inputs`, it is important that these `locals` are *fully resolved* and available at the time the child configurations are parsed. If `fin_config` is dependent on any dynamic values or other `locals` that are not resolved in time, it may result in the child configurations receiving a `null` value when they attempt to access `fin_config` through `include.root.inputs.fin_config`.

A workaround for this issue involves making sure that any values you wish to pass down from the master to the child configurations are not dependent on unresolved `locals`, or are resolved before they are assigned to `inputs`. The master `terragrunt.hcl` would be (simplified example):

```hcl
# Define a local that you intend to pass down
locals {
  common_config = {
    law = "some-value",
    sub = "another-value"
  }
}

# Directly assign the local value to inputs
inputs = {
  fin_config = local.common_config
}
```

The child `terragrunt.hcl`:

```hcl
include "root" {
  path = find_in_parent_folders()
}

generate "provider" {
  path      = "providers.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "azurerm" {
  features {}
  subscription_id            = "abc"
  skip_provider_registration = true
  alias                      = "\${include.root.inputs.fin_config["law"]}"
}

provider "azurerm" {
  features {}
  subscription_id            = "\${include.root.inputs.fin_config["sub"]}"
  skip_provider_registration = true
}
EOF
}
```

So make sure any `locals` used within the `inputs` block of the master configuration are statically defined or resolved early enough to be fully available when the child configurations are processed. When accessing map attributes in the `contents` of the `generate` block, use the `["key"]` notation to ensure proper interpolation.
Double-check the structure and definitions in your configurations to avoid any timing issues with the evaluation of `locals`.
I have a problem creating a project with Visual Studio 2022 Enterprise on Windows 10. I try to create a project with the template "Angular and ASP.NET Core" and don't change the default options (.NET 8.0, ...). When creating the project I get the message "The version of Angular CLI was not valid." The template tries to call "ng version" from the project directory, calling "c:\Users\<username>\AppData\Roaming\npm\ng.cmd". At home (using Visual Studio Community) all works well: the ASP.NET Core server project gets created, and also the Angular client project. At work (using Visual Studio Enterprise) I get this weird error directly after creating the project; the ASP.NET Core server project gets created, but the client project is completely empty. I am able to call "ng version" from the command line and get the expected result: Angular CLI: 17.3.2, Node: 20.11.1, npm 10.2.4. I added some log "echos" to ng.cmd to make sure the right ng.cmd was called. I uninstalled all components (Angular, Node.js) and reinstalled them. I uninstalled Visual Studio Enterprise and installed Visual Studio Community. I tried installing other versions of @angular/cli (e.g. 17.1.0). Nothing changed the behavior. Do you have any ideas? Thanks and best regards, Andy
I'm currently working on managing role assignments in Terraform for Azure Storage Access, and I'm looking to streamline my code. Below is the snippet I'm working with, ```hcl locals { sa_we = "0c975d82-85a2-4b3a-bb23-9be5c681b66f" sa_gl = "9ee248b1-26f6-4d72-a3ac-7b77cf6c17f2" } resource "azurerm_role_assignment" "storage_account_access" { scope = azurerm_storage_account.jd-messenger.id role_definition_name = "Storage Blob Data Reader" principal_id = local.sa_we } resource "azurerm_role_assignment" "storage_account_access" { scope = azurerm_storage_account.jd-messenger.id role_definition_name = "Storage Blob Data Reader" principal_id = local.sa_gl } ``` I'm wondering if there's a more efficient way to handle these role assignments. Specifically, I'm interested in consolidating these duplicate resource blocks into a single block, eliminating redundancy while still specifying different principal_id values. Any insights or suggestions on how to achieve this would be greatly appreciated!
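For reference, one common way to consolidate duplicate role assignments (a sketch reusing the names from the snippet above, not validated against real infrastructure) is to iterate over a map of principal IDs with `for_each`:

```hcl
locals {
  # Map of identifier => principal_id; keys become part of the resource address
  storage_blob_readers = {
    sa_we = "0c975d82-85a2-4b3a-bb23-9be5c681b66f"
    sa_gl = "9ee248b1-26f6-4d72-a3ac-7b77cf6c17f2"
  }
}

resource "azurerm_role_assignment" "storage_account_access" {
  for_each             = local.storage_blob_readers
  scope                = azurerm_storage_account.jd-messenger.id
  role_definition_name = "Storage Blob Data Reader"
  principal_id         = each.value
}
```

Each entry creates one assignment addressed as `azurerm_role_assignment.storage_account_access["sa_we"]`, so assignments that already exist in state would need a `terraform state mv` (or re-import) to avoid being destroyed and recreated.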
[Having some trouble with `giraffe-template` on Mac M1](https://github.com/giraffe-fsharp/giraffe-template/issues/51), so decided to set up a Giraffe project manually. Started following the [Doing it manually](https://giraffe.wiki/#doing-it-manually) section of the Giraffe README, but got stuck right away, and I also couldn't see mentioned anywhere how the project could be served. > For the record, **the Giraffe docs are great**. I'm new to .NET, so the parts I'm struggling with are the basics of .NET project management, F#, and ASP.NET Core - it would be unreasonable to expect these topics covered in there.
How to start creating a Giraffe web project and how to serve it?
|asp.net-core|.net-core|f#|f#-giraffe|asp.net-core-cli|
|flutter|
Exception has occurred. FlutterError (setState() or markNeedsBuild() called during build. This _ModalScope<dynamic> widget cannot be marked as needing to build because the framework is already in the process of building widgets. A widget can be marked as needing to be built during the build phase only if one of its ancestors is currently building. This exception is allowed because the framework builds parent widgets before children, which means a dirty descendant will always be built. Otherwise, the framework might not visit this widget during this build phase. The widget on which setState() or markNeedsBuild() was called was: _ModalScope<dynamic>-[LabeledGlobalKey<_ModalScopeState<dynamic>>#96ae3] The widget which was currently being built when the offending call was made was: HaveAccountOrNot)

The code is:

```
import 'package:flutter/material.dart';
import 'package:food_app/screens/signup.dart';
import 'package:food_app/screens/widget/haveaaccountornot.dart';
import 'package:food_app/screens/widget/mybutton.dart';
import 'package:food_app/screens/widget/mypasswordtextformfield.dart';
import 'package:food_app/screens/widget/mytextformfield.dart';
import 'package:food_app/screens/widget/toptitle.dart';

class Login extends StatefulWidget {
  @override
  _LoginState createState() => _LoginState();
}

class _LoginState extends State<Login> {
  final TextEditingController email = TextEditingController();
  final TextEditingController password = TextEditingController();
  final GlobalKey<ScaffoldState> scaffold = GlobalKey<ScaffoldState>();

  static String p = r'^(([^<>()[\]\\.,;:\s@\"]+(\.[^<>()[\]\\.,;:\s@\"]+)*)|(\".+\"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$';
  static RegExp regExp = new RegExp(p);

  void vaildation() {
    if (email.text.isEmpty && password.text.isEmpty) {
      ScaffoldMessenger.of(context).showSnackBar(
        SnackBar(content: Text("Both field are required")),
      );
    } else if (email.text.isEmpty) {
      ScaffoldMessenger.of(context).showSnackBar(
        SnackBar(content: Text("Email is required")),
      );
    } else if (!regExp.hasMatch(email.text)) {
      ScaffoldMessenger.of(context).showSnackBar(
        SnackBar(content: Text("Email Is Not Vaild")),
      );
    } else if (password.text.isEmpty) {
      ScaffoldMessenger.of(context).showSnackBar(
        SnackBar(content: Text("password is required")),
      );
    } else if (password.text.length < 8) {
      ScaffoldMessenger.of(context).showSnackBar(
        SnackBar(content: Text("password is too small")),
      );
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      resizeToAvoidBottomInset: false,
      key: scaffold,
      backgroundColor: Color(0xfff8f8f8),
      body: SafeArea(
        child: Container(
          padding: EdgeInsets.symmetric(horizontal: 20),
          child: Column(
            mainAxisAlignment: MainAxisAlignment.spaceEvenly,
            children: [
              TopTitle(subsTitle: "Welcome To FoodZone", title: "Login"),
              Center(
                child: Container(
                  height: 200,
                  width: double.infinity,
                  child: Column(
                    mainAxisAlignment: MainAxisAlignment.center,
                    children: [
                      MyTextFormField(title: "Email", controller: email),
                      SizedBox(height: 10),
                      MyPasswordTextFormField(title: "Password", controller: password),
                      MyButton(
                        name: "Login",
                        onPressed: () {
                          vaildation();
                        },
                      ),
                      HaveAccountOrNot(
                        onTap: () {
                          Navigator.of(context).pushReplacement(
                            MaterialPageRoute(builder: (ctx) => SignUp()),
                          );
                        },
                        title: "I don't have an Account",
                        subsTitle: "SignUp",
                      ),
                    ],
                  ),
                ),
              ),
            ],
          ),
        ),
      ),
    );
  }
}
```
I used R code to find the rows without duplicated rows.

```
View(getResults(query_cnv))
#query_results <- getResults(query_cnv_with_data_type)
#query_cnv <- GDCquery(project = cancer_type,
#                      data.category = "Copy Number Variation",
#                      data.type = "Gene Level Copy Number Scores")
query_results <- getResults(query_cnv)
#print(query_results)
View(query_results)

unique_results <- unique(query_results)
View(unique_results)

GDCdownload(query_cnv)
print(query_cnv)

# View the first few rows
data_cnv <- GDCprepare(query_cnv)
data_cnv_unique <- data_cnv[!duplicated(data_cnv$cases), ]
```

The error information is:

```
Error in GDCprepare(query_cnv) : There are samples duplicated. We will not be able to prepare it
```

The duplicated items are:

````
|920 |TCGA-ZF-A9RG-01A-21D-A42D-01;TCGA-ZF-A9RG-10A-01D-A42G-01 |Genotyping Array |ASCAT2 |
|1014 |TCGA-ZF-A9RG-01A-21D-A42D-01;TCGA-ZF-A9RG-10A-01D-A42G-01 |Genotyping Array |ASCAT3 |
|1100 |TCGA-ZF-A9RL-01A-11D-A38F-01;TCGA-ZF-A9RL-10A-01D-A38I-01 |Genotyping Array |ASCAT2 |
|1102 |TCGA-ZF-A9RL-01A-11D-A38F-01;TCGA-ZF-A9RL-10A-01D-A38I-01 |Genotyping Array |ASCAT3 |
|139 |TCGA-ZF-A9RM-01A-11D-A38F-01;TCGA-ZF-A9RM-10A-01D-A38I-01 |Genotyping Array |ASCAT3 |
|1182 |TCGA-ZF-A9RM-01A-11D-A38F-01;TCGA-ZF-A9RM-10A-01D-A38I-01 |Genotyping Array |ASCAT2 |
|130 |TCGA-ZF-A9RN-10A-01D-A42G-01;TCGA-ZF-A9RN-01A-11D-A42D-01 |Genotyping Array |ASCAT3 |
|175 |TCGA-ZF-A9RN-10A-01D-A42G-01;TCGA-ZF-A9RN-01A-11D-A42D-01 |Genotyping Array |ASCAT2 |
|878 |TCGA-ZF-AA4N-10A-01D-A38I-01;TCGA-ZF-AA4N-01A-11D-A38F-01 |Genotyping Array |ASCAT2 |
|1091 |TCGA-ZF-AA4N-10A-01D-A38I-01;TCGA-ZF-AA4N-01A-11D-A38F-01 |Genotyping Array |ASCAT3 |
|261 |TCGA-ZF-AA4R-10A-01D-A38I-01;TCGA-ZF-AA4R-01A-11D-A38F-01 |Genotyping Array |ASCAT2 |
|1140 |TCGA-ZF-AA4R-10A-01D-A38I-01;TCGA-ZF-AA4R-01A-11D-A38F-01 |Genotyping Array |ASCAT3 |
|929 |TCGA-ZF-AA4T-10A-01D-A38I-01;TCGA-ZF-AA4T-01A-11D-A38F-01 |Genotyping Array |ASCAT3 |
|930 |TCGA-ZF-AA4T-10A-01D-A38I-01;TCGA-ZF-AA4T-01A-11D-A38F-01 |Genotyping Array |ASCAT2 |
|83 |TCGA-ZF-AA4U-10A-01D-A38I-01;TCGA-ZF-AA4U-01A-11D-A38F-01 |Genotyping Array |ASCAT3 |
|1028 |TCGA-ZF-AA4U-10A-01D-A38I-01;TCGA-ZF-AA4U-01A-11D-A38F-01 |Genotyping Array |ASCAT2 |
|461 |TCGA-ZF-AA4V-01A-11D-A38F-01;TCGA-ZF-AA4V-10A-01D-A38I-01 |Genotyping Array |ASCAT2 |
|1161 |TCGA-ZF-AA4V-01A-11D-A38F-01;TCGA-ZF-AA4V-10A-01D-A38I-01 |Genotyping Array |ASCAT3 |
|101 |TCGA-ZF-AA4W-01A-12D-A38F-01;TCGA-ZF-AA4W-10A-01D-A38I-01 |Genotyping Array |ASCAT2 |
|197 |TCGA-ZF-AA4W-01A-12D-A38F-01;TCGA-ZF-AA4W-10A-01D-A38I-01 |Genotyping Array |ASCAT3 |
|332 |TCGA-ZF-AA4X-10A-01D-A38I-01;TCGA-ZF-AA4X-01A-11D-A38F-01 |Genotyping Array |ASCAT2 |
|491 |TCGA-ZF-AA4X-10A-01D-A38I-01;TCGA-ZF-AA4X-01A-11D-A38F-01 |Genotyping Array |ASCAT3 |
|819 |TCGA-ZF-AA51-01A-21D-A390-01;TCGA-ZF-AA51-10A-01D-A393-01 |Genotyping Array |ASCAT3 |
|919 |TCGA-ZF-AA51-01A-21D-A390-01;TCGA-ZF-AA51-10A-01D-A393-01 |Genotyping Array |ASCAT2 |
|163 |TCGA-ZF-AA52-01A-12D-A390-01;TCGA-ZF-AA52-10A-01D-A393-01 |Genotyping Array |ASCAT2 |
|576 |TCGA-ZF-AA52-01A-12D-A390-01;TCGA-ZF-AA52-10A-01D-A393-01 |Genotyping Array |ASCAT3 |
|905 |TCGA-ZF-AA53-10A-01D-A393-01;TCGA-ZF-AA53-01A-11D-A390-01 |Genotyping Array |ASCAT3 |
|908 |TCGA-ZF-AA53-10A-01D-A393-01;TCGA-ZF-AA53-01A-11D-A390-01 |Genotyping Array |ASCAT2 |
|282 |TCGA-ZF-AA54-01A-11D-A390-01;TCGA-ZF-AA54-10A-01D-A393-01 |Genotyping Array |ASCAT3 |
|449 |TCGA-ZF-AA54-01A-11D-A390-01;TCGA-ZF-AA54-10A-01D-A393-01 |Genotyping Array |ASCAT2 |
|341 |TCGA-ZF-AA56-10A-01D-A393-01;TCGA-ZF-AA56-01A-31D-A390-01 |Genotyping Array |ASCAT2 |
|499 |TCGA-ZF-AA56-10A-01D-A393-01;TCGA-ZF-AA56-01A-31D-A390-01 |Genotyping Array |ASCAT3 |
|734 |TCGA-ZF-AA58-10A-01D-A42G-01;TCGA-ZF-AA58-01A-12D-A42D-01 |Genotyping Array |ASCAT3 |
|736 |TCGA-ZF-AA58-10A-01D-A42G-01;TCGA-ZF-AA58-01A-12D-A42D-01 |Genotyping Array |ASCAT2 |
|758 |TCGA-ZF-AA5H-10A-01D-A393-01;TCGA-ZF-AA5H-01A-11D-A390-01 |Genotyping Array |ASCAT3 |
|805 |TCGA-ZF-AA5H-10A-01D-A393-01;TCGA-ZF-AA5H-01A-11D-A390-01 |Genotyping Array |ASCAT2 |
|359 |TCGA-ZF-AA5N-10A-01D-A42G-01;TCGA-ZF-AA5N-01A-11D-A42D-01 |Genotyping Array |ASCAT3 |
|518 |TCGA-ZF-AA5N-10A-01D-A42G-01;TCGA-ZF-AA5N-01A-11D-A42D-01 |Genotyping Array |ASCAT2 |
|766 |TCGA-ZF-AA5P-10A-01D-A393-01;TCGA-ZF-AA5P-01A-11D-A390-01 |Genotyping Array |ASCAT3 |
|866 |TCGA-ZF-AA5P-10A-01D-A393-01;TCGA-ZF-AA5P-01A-11D-A390-01 |Genotyping Array |ASCAT2 |
````

There are duplicated items, but I don't know how to filter the information and create unique rows. How could I solve the problem?
Error in GDCprepare(query_cnv) : There are samples duplicated. We will not be able to prepare it
|r|
```
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY '123';
```

Replace `123` with your password. This worked for me.
Fluent Bit v3.0.0 is configured on Ubuntu 22.04.3 LTS and on Windows Server 2016 Standard. The configs (fluent-bit.conf, parser.conf) on the Linux and Windows machines are absolutely the same. However, I ran into a problem: the modify filter works correctly only on the Windows machine and does not work on Linux. Fluent Bit is configured to parse into two tables in ClickHouse; the first table is named 'access', the second is named 'flussonic'. The column name 'server_ip' is the same in both tables. I configured a modify filter that should add a constant value to the 'server_ip' column in each of the two ClickHouse tables. fluent-bit.conf:

```
[FILTER]
    name    modify
    match   access
    add     server_ip 192.168.1.2

[FILTER]
    name    modify
    match   flussonic
    add     server_ip 192.168.1.2
```

In this case, the value is only added to 'flussonic', and the constant value is not written to 'access'. I tried this setting:

```
[FILTER]
    name    modify
    match   *
    add     server_ip 192.168.1.2
```

In this case, the value is also only added to 'flussonic'. I tried changing the column name 'server_ip' to 'ip_server'. No result. There are no errors in the Fluent Bit logs that could help understand the problem. I would be grateful if you could tell me what to do and where I am wrong, because I am setting this up for the first time. Any ideas?
Modify filter of Fluent Bit doesn't add constant value
|clickhouse|fluent-bit|
In a Java Spring Boot web app, a microservice has three different entities, each with its own controller, repository, and service layer. How do I join these entities into one complete entity that can communicate independently and be accessed remotely through the existing service, controller, and repository layers, without creating a new combined repository, service, and controller? I expected to create one entity that could be accessed remotely but served by the different existing services.
Spring boot microservices
|spring-boot|
CSES Problem Set Grid Paths: why is this not accepted?
|c++|algorithm|
After updating the Vaadin version to 24.3.5, Vaadin for some reason saves to the cache a request to an internal resource instead of the page visited by an anonymous user, so after authorization the user sees the vaadinPush.js file. In this case we use vaadin.url-mapping, and this request to the resource goes without url-mapping. In fact, I don't even know where to start digging through the source code to override the class or method that performs this save. Expected behavior: Vaadin saves the request only to the page and not to internal resources.
Vaadin saves the request to its resource, not the page during authorization
|vaadin-flow|
I was playing with computed columns and I noticed some weird behaviour with persisted computed columns. The query:

```
DROP TABLE IF EXISTS #tmp

CREATE TABLE #tmp
(
    ExternalId UNIQUEIDENTIFIER NULL,
    UniqueId UNIQUEIDENTIFIER NOT NULL DEFAULT(NEWID()),
    --Id AS ISNULL(ExternalId, UniqueId),
    PersistedId AS ISNULL(ExternalId, UniqueId) PERSISTED
)

INSERT INTO #tmp (ExternalId) VALUES (null), (NEWID())

SELECT * FROM #tmp

UPDATE #tmp SET externalid = CASE WHEN ExternalId IS NULL THEN newid() ELSE null END

SELECT * FROM #tmp
```

If you run this, you get the following output:

**Just Persisted**

Select 1:

| ExternalId | UniqueId | PersistedId |
| ---------- | -------- | ----------- |
| NULL | fb544e9e-7d9b-47ec-b2ca-45484e02e343 | fb544e9e-7d9b-47ec-b2ca-45484e02e343 |
| c6b3b82a-db68-46e4-8cfe-a4a0eedad2cf | 56d0091f-0f08-49f3-a020-f73fd2074d7a | 3fc603ce-dcac-449e-8f0a-203cfda0a634 |

Select 2:

| ExternalId | UniqueId | PersistedId |
| ---------- | -------- | ----------- |
| 0ecdff52-a59c-421f-bf87-8938488b83ea | fb544e9e-7d9b-47ec-b2ca-45484e02e343 | 0ecdff52-a59c-421f-bf87-8938488b83ea |
| NULL | 56d0091f-0f08-49f3-a020-f73fd2074d7a | 56d0091f-0f08-49f3-a020-f73fd2074d7a |

You can see that the persisted column for select 1 has a different GUID to that of the ExternalId.
Now if I uncomment the `Id` line from the `CREATE TABLE` statement and run it again I get the following:

**Both**

Select 1:

| ExternalId | UniqueId | Id | PersistedId |
| ---------- | -------- | -- | ----------- |
| NULL | 1275aff9-0c59-4406-8bd5-ae694d228a6d | 1275aff9-0c59-4406-8bd5-ae694d228a6d | 1275aff9-0c59-4406-8bd5-ae694d228a6d |
| 4b7ac3d8-ad3e-4e94-b8df-c464b99e630c | e7980647-fe4f-45a2-9d41-53da0b8d780f | 4b7ac3d8-ad3e-4e94-b8df-c464b99e630c | 4b7ac3d8-ad3e-4e94-b8df-c464b99e630c |

Select 2:

| ExternalId | UniqueId | Id | PersistedId |
| ---------- | -------- | -- | ----------- |
| d606ea7b-f17b-48d8-8581-82c8736bf61f | 1275aff9-0c59-4406-8bd5-ae694d228a6d | d606ea7b-f17b-48d8-8581-82c8736bf61f | d606ea7b-f17b-48d8-8581-82c8736bf61f |
| NULL | e7980647-fe4f-45a2-9d41-53da0b8d780f | e7980647-fe4f-45a2-9d41-53da0b8d780f | e7980647-fe4f-45a2-9d41-53da0b8d780f |

As you can see, now that the Id is also being computed, the PersistedId gets the correct id. I've also tried inserting a static GUID in the "Just Persisted" case and it looks fine, so I'm assuming the persisted column is calling NEWID() again from the insert statement. Does anyone know a workaround for this, and whether it is expected behaviour? I can't imagine the ExternalId in the main project _(that this example code is for)_ would have a NEWID() used when inserting a new record, but I can't say it'll never happen.

**EDIT** This does seem to be a bug and not a PICNIC problem, so I've opened a Microsoft bug report: https://feedback.azure.com/d365community/idea/4611e2d2-1fd3-ee11-92bc-6045bd7aea25
I have to keep reminding myself that > a Giraffe project plugs into the [ASP.NET Core][1] pipeline or is itself an [ASP.NET Core][1] application , so if I can't find answers to my questions in the [Giraffe docs][2], then it is probably because it is an [ASP.NET Core][1] topic (or an F# / .NET / etc. one). ### How to create and serve a Giraffe project Steps 0. to 5. follow the [Get started with F# with command-line tools (.NET | Microsoft Learn)](https://learn.microsoft.com/en-us/dotnet/fsharp/get-started/get-started-command-line) article. 0. (_OPTIONAL_) **Create a new solution**. ``` dotnet new sln -o SampleSolution ``` 1. **Enter the solution's directory**. ``` cd SampleSolution ``` 2. **Create an empty [ASP.NET Core][1] project**. ``` dotnet new web -lang "F#" -o src/GiraffeWebExample ``` > INFO The available `dotnet new` templates are available on the links below. (Both seem to list them all, but not sure which one is more up-to-date.) > + \[Microsoft Learn]\[.NET CLI] [.NET default templates for `dotnet new`](https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-new-sdk-templates) > + \[Microsoft Learn]\[.NET CLI] [`dotnet new <TEMPLATE>`](https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-new) 3. (_OPTIONAL_) **Add new project to solution**. ``` dotnet sln add src/GiraffeWebExample/GiraffeWebExample.fsproj ``` 4. **Enter the project's directory**. ``` cd src/GiraffeWebExample/ ``` 5. **Install dependencies**. ``` dotnet add package Microsoft.AspNetCore.App dotnet add package Giraffe ``` > NOTE I got a warning below when adding Giraffe, so just pasting it here for completeness' sake: > ``` > /usr/local/share/dotnet/sdk/8.0.202/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.DefaultItems.Shared.targets(111,5): > warning NETSDK1080: A PackageReference to Microsoft.AspNetCore.App is not > necessary when targeting .NET Core 3.0 or higher. If Microsoft.NET.Sdk.Web > is used, the shared framework will be referenced automatically. 
Otherwise, > the PackageReference should be replaced with a FrameworkReference. > [/Users/toraritte/dev/shed/dotnet/giraffe/ByHand/src/ByHand/ByHand.fsproj] > ``` 6. **Add the "entry point"**. > NOTE Still haven't figured out what other ways .NET has to set up a web project, but the `EntryPoint` attribute is covered in the [[Microsoft Learn][F# Guide] Console Applications and Explicit Entry Points](https://learn.microsoft.com/en-us/dotnet/fsharp/language-reference/functions/entry-point) article. I chose to simply copy one of the sample codes from the [Doing it manually](https://giraffe.wiki/#doing-it-manually) section; I prefer the more functional approach, so here it is the second one: ``` open System open Microsoft.AspNetCore.Builder open Microsoft.AspNetCore.Hosting open Microsoft.Extensions.Hosting open Microsoft.Extensions.DependencyInjection open Giraffe let webApp = choose [ route "/ping" >=> text "pong" route "/" >=> htmlFile "/pages/index.html" ] let configureApp (app : IApplicationBuilder) = // Add Giraffe to the ASP.NET Core pipeline app.UseGiraffe webApp let configureServices (services : IServiceCollection) = // Add Giraffe dependencies services.AddGiraffe() |> ignore [<EntryPoint>] let main _ = Host.CreateDefaultBuilder() .ConfigureWebHostDefaults( fun webHostBuilder -> webHostBuilder .Configure(configureApp) .ConfigureServices(configureServices) |> ignore) .Build() .Run() 0 ``` 7. **Run / serve project**. ``` dotnet watch run ``` > INFO Started with the [Get started with ASP.NET Core](https://learn.microsoft.com/en-us/aspnet/core/getting-started/) article in the ASP.NET Core docs. [1]: https://learn.microsoft.com/en-us/aspnet/core/ [2]: https://giraffe.wiki/docs
I have two tables in my database: 'customers' and 'store_owners'. These tables have a many-to-many relationship. I want to create a chat feature between the 'store_owner' and their customers. However, I encountered a problem when I created a 'chat' table in my database. This table includes 'IdSender', 'IdReceiver', and 'message'. Both 'IdSender' and 'IdReceiver' are foreign keys from the previously mentioned tables. However, in the same conversation, a customer can be both a sender and a receiver, and the same applies to the store owner. What should I do? Please note that the 'customers' and 'store_owners' tables must remain separate. I am using React Native and Laravel. I attempted to create two foreign keys for the same ID, but as expected, it did not work.
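Since the question asks what to do: one common shape for this is to store a participant type next to each ID (a polymorphic association, which Laravel's Eloquent supports via `morphTo`/`morphMany`) instead of pointing a single foreign key at two tables. A minimal sketch of the table shape, using sqlite3 only so it runs anywhere; the real schema would live in a Laravel migration against MySQL, and the type/ID pairs would be enforced in application code:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE chat (
    id            INTEGER PRIMARY KEY,
    sender_type   TEXT    NOT NULL CHECK (sender_type   IN ('customer', 'store_owner')),
    sender_id     INTEGER NOT NULL,
    receiver_type TEXT    NOT NULL CHECK (receiver_type IN ('customer', 'store_owner')),
    receiver_id   INTEGER NOT NULL,
    message       TEXT    NOT NULL
);
""")

# the same customer (id 1) appears as sender in one row and receiver in another
con.execute("INSERT INTO chat VALUES (NULL, 'customer', 1, 'store_owner', 7, 'hi')")
con.execute("INSERT INTO chat VALUES (NULL, 'store_owner', 7, 'customer', 1, 'hello')")

print(con.execute("SELECT sender_type, message FROM chat ORDER BY id").fetchall())
# [('customer', 'hi'), ('store_owner', 'hello')]
```

This keeps 'customers' and 'store_owners' as separate tables; the chat row records which table each side belongs to.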
Creating a chat table in a database from two separate tables
EDIT: Made some changes to the code which answered some of my previously asked questions, but now I have new questions I am very new to threads and parallel programming so forgive me if I did things unconventionally. I have two threads, and I am trying to get them to communicate to one another. One of them runs shorter than the other so what I am trying to have is that when the shorter one ends, the longer one will also end. I had partial success with this by passing and scanning a certain object (here I use `sentinel`) to mark the completion of either of the threads, but I have several questions regarding this. Here is the sample code I am running: from threading import Thread import time from queue import Queue sentinel = object() q = Queue() q.put(0) def say_hello(subject, q): print("starting hello") for i in range(5): time.sleep(2) print(f"\nhello {subject}! iter:{i}") data = q.get() if data is sentinel: break else: q.put(data) print(f"from hello: {q.queue}") if i == 4: print("hello finished because of iter") q.put(sentinel) else: print("hello finished because of sentinel from foo") q.put(sentinel) def foo(q): print("starting foo") for j in range(2): time.sleep(1) print(f"\nfoo iter:{j}") data = q.get() if data is sentinel: break else: q.put(data) print(f"from foo: {q.queue}") if j == 1: print("foo finished because of iter") q.put(sentinel) else: print("foo finished because of sentinel from hello") q.put(sentinel) def t(): print("t:0\n") for k in range(1,6): time.sleep(1) print(f"\nt:{k}") def run(): time_thread = Thread(target = t) time_thread.start() hello_thread = Thread(target = say_hello, args = ["lem", q]) hello_thread.start() foo_thread = Thread(target = foo, args = [q]) foo_thread.start() time_thread.join() hello_thread.join() foo_thread.join() print("Done") run() In this case, `foo` should end first and `say_hello` will end following that, but this is not what I got. 
here is the output: t:0 starting hello starting foo t:1 foo iter:0 from foo: deque([0]) hello lem! iter:0 t:2 foo iter:1 from foo: deque([0]) foo finished because of iter from hello: deque([<object object at 0x000001B9CF908EA0>, 0]) t:3 t:4 hello lem! iter:1 hello finished because of sentinel from foo t:5 Done Now my question is: 1. Is this the right way of doing this? Is there perhaps an easier, cleaner, more conventional way of performing the same thing? 2. The output seems a little erratic such that they change ever so slightly every time I rerun it 3. Are local variables of identical names shared between threads? I was running all my loops with `i` before and it almost seems like it affects all the threads 4. It seems like `say_hello` doesn't quite run at the timestamps I want it to be. At `t:1` there shouldn't be any output from `say_hello` but there is. This is even more peculiar seeing that `from hello: deque...` is only printed after `foo`'s part did. I always suspected that they should print in pairs Any and all input is greatly appreciated. Thanks!
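On question 1 above: the usual, cleaner primitive for "stop the longer thread when the shorter one ends" is `threading.Event`, rather than passing a sentinel object through a `Queue`. A minimal sketch of the same shutdown idea (timings shortened; this is not the original code):

```python
import threading
import time

stop = threading.Event()
iterations_done = []

def foo():  # the shorter worker
    for j in range(2):
        time.sleep(0.05)
    stop.set()  # signal completion to everyone waiting

def say_hello():  # the longer worker
    for i in range(5):
        # wait() sleeps like time.sleep(), but returns True early once set
        if stop.wait(timeout=0.1):
            break
        iterations_done.append(i)

t1 = threading.Thread(target=say_hello)
t2 = threading.Thread(target=foo)
t1.start(); t2.start()
t1.join(); t2.join()

print(stop.is_set(), len(iterations_done) < 5)  # True True
```

`Event.wait(timeout)` doubles as the sleep, so the longer thread wakes up immediately when signalled instead of finishing its current sleep first.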
VB.Net project in Visual Studio: Microsoft Visual Studio Community 2022 (64-bit) - Current, with the Html Agility Pack.

When I try to load a web page via the HtmlWeb Load method, I receive this error: "Cannot access a disposed object".

I put the following code in the form load event. It doesn't get any simpler than this:

    Dim Web As HtmlWeb = New HtmlWeb()
    Dim Document = Web.Load("https://scrapeme.live/shop/")

When Web.Load executes, I get the error. Any advice would be greatly appreciated.
Is there a way to make this request with UrlFetchApp in Apps Script?

```curl -X GET http://test.com/api/demo -H 'Content-Type: application/json' -d '{"data": ["words"]}'```

I attempted it with this code:

```
const response = UrlFetchApp.fetch(API_URL, {
  method: "GET",
  headers: { "Content-Type": "application/json" },
  payload: JSON.stringify(payload)
});
```

*The payload and the endpoint are exactly the same as in the curl request, which I tested.*

However, I got this error: "Failed to deserialize the JSON body into the target type: missing field `name` at line 1 column 88".
I'm unable to create a mapping / configuration for the following sample data: `W001;MFS;4262;EMFS;{MFS;W001;;};109;11;"A";["TEID","Int "]` Which should result in something like: 1. W001 2. MFS 3. 4262 4. EMFS 5. {MFS;W001;;} 6. 109 7. ... Every time I try, the value for field 5 is `{MFS`, because the nested semicolon is recognized as the delimiter character. Also setting the quote character is not an option, in that case. Or an input like this: `MFS1;W001;8;STSO;TL1B;;[{"ON00000008";;DP;"LC00420017";{SO;1;;;;;;};[]}]` Where `[{"ON00000008";;DP;"LC00420017";{SO;1;;;;;;};[]}]` should be treated as a single field. Is this possible at all with CsvHelper or are there other recommendations?
CsvHelper - How to handle nested delimiter character in fields
|csvhelper|
Can anybody please assist me? I am trying to upgrade PHP on my UwAmp from 7.2.7 to 7.4.33 and/or 8.3.3, but whenever I switch to either of the newer versions, Apache stops running. PHP 7.2.7 works fine on my UwAmp. I need to upgrade to a higher version because a theme I want to use does not run on PHP 7.2.7.

What I have tried:

- I installed the PHP versions directly into the C:\UwAmp\bin\php folder and started UwAmp. UwAmp picked up the new versions and installed them without a problem, but Apache does not want to run. Also, the new PHP versions do not show under the PHP installation tab ("UwAmp PHP repository").
- I installed the PHP versions directly on Windows via "Edit the system environment variables", with no joy.
- I configured UwAmp's "httpd_uwamp.conf", with no joy.

There are not many UwAmp tutorials on YouTube. I found plenty of tutorials for WAMP and XAMPP, but only a few for UwAmp, and none that address this problem.
Double check the product code on the chips, they must include something like «FN4» or «FH4», if not, then there's no Flash at all inside them [See the datasheet][1]. On my supermini boards, chips say «ESP32-C3\n432023\nUE00MAK173» (the 432023 is the week code). No «F», so no Flash, thus `esptool` cannot get Flash size: ``` C:\Users\XXXXXX\.espressif\frameworks\esp-idf-v5.2.1>python -m esptool flash_id esptool.py v4.7.0 Found 1 serial ports Serial port COM4 Connecting... Detecting chip type... ESP32-C3 Chip is ESP32-C3 (QFN32) (revision v0.4) Features: WiFi, BLE Crystal is 40MHz MAC: dc:da:0c:8e:cf:38 Uploading stub... Running stub... Stub running... Manufacturer: 3f Device: ffff Detected flash size: Unknown Hard resetting via RTS pin... ``` and you must provide an external SPI Flash. Bought these boards from Aliexpress, and according to some photographs sent by other buyers, some boards come with the «FH4» and mine (and others?) w/o it. [1]: https://i.stack.imgur.com/eVZ7L.png
I solved the problem. This configuration is for Windows, but I think it will work on Linux too. You need to configure 3 files:

1. httpd.conf
2. httpd-vhosts.conf
3. httpd-ssl.conf

General rules: do not use `0.0.0.0:80`, `0.0.0.0:443`, `[::]:80` or `[::]:443`. Keep everything in the original form, `*:80` and `*:443`.

**1. httpd.conf**

Keep the original `Listen *:80` (or `Listen 80`). Never define any `<VirtualHost *:80>` in httpd.conf; we will define it in httpd-vhosts.conf. Never define any `<VirtualHost *:443>` in httpd.conf; we will define it in httpd-ssl.conf. Never define any SSL certificate in httpd.conf, otherwise the system gets confused and makes you wait 5s periodically. Define a Directory tag for each vhost like this:

    DocumentRoot "C:/xampp/htdocs"
    <Directory "C:/xampp/htdocs">
        Options Indexes FollowSymLinks Includes
        AllowOverride All
        Require all granted
    </Directory>

    <Directory "C:/xampp/htdocs2">
        Options Indexes FollowSymLinks Includes
        AllowOverride All
        Require all granted
    </Directory>

**2. httpd-vhosts.conf**

Never define any SSL certificate in httpd-vhosts.conf, and never define any `<VirtualHost *:443>` in the vhosts file; we will define it in httpd-ssl.conf, otherwise you have to wait 5s periodically. Define `<VirtualHost>` only for port 80, and don't use `0.0.0.0:80` in the tags; keep the original form, like below:

    <VirtualHost *:80>
        ServerName domain1.com
        DocumentRoot "C:/xampp/htdocs/htdocs"
        ErrorLog "logs/error-hh2.log"
    </VirtualHost>

    <VirtualHost *:80>
        ServerName domain2.com
        DocumentRoot "C:/xampp/htdocs/htdocs2"
        ErrorLog "logs/error-hh2.log"
    </VirtualHost>

**3. httpd-ssl.conf**

Apache starts listening on port 443 in this file, so after `Listen 443` (or at the bottom) add the following:

    # keep original
    Listen 443

    # edit the existing vhost
    <VirtualHost *:443>
        ServerName domain1.com
        DocumentRoot "C:/xampp/htdocs/htdocs"
        ErrorLog "logs/error-hh.log"
        SSLEngine On
        SSLCertificateFile conf/ssl_hh/server.crt
        SSLCertificateKeyFile conf/ssl_hh/server.key
        # keep the FilesMatch / BrowserMatch parameters as original if not needed
    </VirtualHost>

    # add for the second domain
    <VirtualHost *:443>
        ServerName domain2.com
        DocumentRoot "C:/xampp/htdocs/htdocs2"
        ErrorLog "logs/error-hh2.log"
        SSLEngine On
        SSLCertificateFile conf/ssl_hh/server2.crt
        SSLCertificateKeyFile conf/ssl_hh/server2.key
    </VirtualHost>

That is all.
I am running A1111 on my AWS g4dn instance and it is working fine. However, the issue is that when the app restarts, it changes the URL, and I then need to SSH into the instance and take the new URL. Can we either:

- Fix the URL where A1111 is running, so that it won't change even when the app restarts, or
- Redirect the console logs to a log file, so that another app can read that file and extract the URL?

It is kind of a pain to always log into the instance and take the new URL.
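On the second bullet, a sketch of the log-file idea. This assumes a stock launch script named `webui.sh` and a Gradio `--share` URL of the usual `*.gradio.live` shape; adjust both to your setup:

```shell
# run A1111 so everything it prints also lands in a log file:
#   ./webui.sh --share 2>&1 | tee -a /var/log/a1111.log

# another app (or a cron job) can then pull the newest URL out of the log:
extract_url() {
  grep -o 'https://[a-z0-9]*\.gradio\.live' "$1" | tail -n 1
}

# demo against a fake log line of the shape Gradio prints:
printf 'Running on public URL: https://abc123.gradio.live\n' > /tmp/a1111_demo.log
extract_url /tmp/a1111_demo.log   # prints https://abc123.gradio.live
```

For the first bullet, serving A1111 on a fixed local port (`--listen --port 7860`) behind a reverse proxy on the instance's Elastic IP avoids the rotating share URL altogether.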
[enter image description here](https://i.stack.imgur.com/3dIm0.png) I have a method called Background Blur, but it completely blurs the background of the window, and even if the window is transparent, the corners of the window look bad. How can I ensure that this is applied only to the background of the border? Blur.cs: ``` namespace TestApp { internal enum AccentState { ACCENT_DISABLED = 1, ACCENT_ENABLE_GRADIENT = 0, ACCENT_ENABLE_TRANSPARENTGRADIENT = 2, ACCENT_ENABLE_BLURBEHIND = 3, ACCENT_INVALID_STATE = 4 } [StructLayout(LayoutKind.Sequential)] internal struct AccentPolicy { public AccentState AccentState; public int AccentFlags; public int GradientColor; public int AnimationId; } [StructLayout(LayoutKind.Sequential)] internal struct WindowCompositionAttributeData { public WindowCompositionAttribute Attribute; public IntPtr Data; public int SizeOfData; } internal enum WindowCompositionAttribute { // ... WCA_ACCENT_POLICY = 19 // ... } /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> /// public partial class Blurx { [DllImport("user32.dll")] internal static extern int SetWindowCompositionAttribute(IntPtr hwnd, ref WindowCompositionAttributeData data); internal void EnableBlur(dynamic Element) { IntPtr hwnd; PresentationSource source = PresentationSource.FromVisual(Element); hwnd = source != null ? 
((HwndSource)source).Handle : IntPtr.Zero; AccentPolicy accent = new AccentPolicy { AccentState = AccentState.ACCENT_ENABLE_BLURBEHIND }; int accentStructSize = Marshal.SizeOf(accent); var accentPtr = Marshal.AllocHGlobal(accentStructSize); Marshal.StructureToPtr(accent, accentPtr, false); WindowCompositionAttributeData data = new WindowCompositionAttributeData { Attribute = WindowCompositionAttribute.WCA_ACCENT_POLICY, SizeOfData = accentStructSize, Data = accentPtr }; SetWindowCompositionAttribute(hwnd, ref data); Marshal.FreeHGlobal(accentPtr); } public void Show(dynamic Element) { EnableBlur(Element); } } } ``` MainWindow.Xaml.cs: ``` namespace TestApp { /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { private Blurx Blur; public MainWindow() { InitializeComponent(); Blur = new Blurx(); } private void Window_Loaded(object sender, RoutedEventArgs e) { Blur.Show(Main); } private void Window_MouseLeftButtonDown(object sender, MouseButtonEventArgs e) { DragMove(); } } } ``` MainWindow.Xaml: ``` <Window x:Class="TestApp.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:local="clr-namespace:Telegram_Group_Media_Scraper" mc:Ignorable="d" WindowStyle="None" Background="Transparent" AllowsTransparency="True" Title="MainWindow" Loaded="Window_Loaded" MouseLeftButtonDown="Window_MouseLeftButtonDown" Height="500" Width="500"> <Border CornerRadius="50" x:Name="Main" Background="#7FDE7AFF"> </Border> </Window> ```
Your output:

```lang-none
this is thread 1
this is thread 2
main exists
thread 2 exists
thread 1 exists
thread 1 exists
```

Before getting to *"it prints "thread 1 exists" twice"*, notice that it prints after "main exists": threads still running after `main` returns can lead to unpredictable results.

--

First, you should tidy up your code:

```cpp
#include <thread>
#include <iostream>
#include <future>
#include <syncstream>

void log(const char* str)
{
    std::osyncstream ss(std::cout);
    ss << str << std::endl;
}

void worker1(std::future<int> fut)
{
    log("this is thread 1");
    fut.get();
    log("thread 1 exits");
}

void worker2(std::promise<int> prom)
{
    log("this is thread 2");
    prom.set_value(10);
    log("thread 2 exits");
}

int main()
{
    std::promise<int> prom;
    std::future<int> fut = prom.get_future();

    // Fire the 2 threads:
    std::thread t1(worker1, std::move(fut));
    std::thread t2(worker2, std::move(prom));

    t1.join();
    t2.join();

    log("main exits");
}
```

Key points:

* CRITICAL: Replace the `while` loop and `detach()` with `join()` in `main()` to ensure that the main thread waits for all child threads to finish before exiting.
* Trim the `#include` lines to include only what's necessary - for better practice.
* Remove unused variables - for better practice.
* Remove the unused `using namespace` directive - for better practice.
* In addition, I would also replace the `printf()` calls with `std::osyncstream`.

[Demo][1]

Now, the output is:

```lang-none
this is thread 1
this is thread 2
thread 2 exits
thread 1 exits
main exits
```

--

*"The detach used but not join here is the requirements from test. I cannot change that."* - In that case:

```cpp
#include <thread>
#include <iostream>
#include <future>
#include <syncstream>
#include <mutex>
#include <condition_variable>
#include <cstdint>

std::mutex mtx;
std::condition_variable cv;
uint8_t workers_finished = 0; // Counter for finished workers

void log(const char* str)
{
    std::osyncstream ss(std::cout);
    ss << str << std::endl;
}

void worker1(std::future<int> fut)
{
    log("this is thread 1");
    fut.get();
    log("thread 1 exits");
    std::lock_guard lock(mtx);
    ++workers_finished;
    cv.notify_one(); // Signal main thread
}

void worker2(std::promise<int> prom)
{
    log("this is thread 2");
    prom.set_value(10);
    log("thread 2 exits");
    std::lock_guard lock(mtx);
    ++workers_finished;
    cv.notify_one(); // Signal main thread
}

int main()
{
    std::promise<int> prom;
    std::future<int> fut = prom.get_future();

    // Fire the 2 threads:
    std::thread t1(worker1, std::move(fut));
    std::thread t2(worker2, std::move(prom));

    t1.detach();
    t2.detach();

    {
        std::unique_lock lock(mtx);
        while (workers_finished < 2)
        {
            cv.wait(lock); // Wait until notified (or spurious wakeup)
        }
    }

    log("main exits");
}
```

Now, the output is: (The same)

```lang-none
this is thread 1
this is thread 2
thread 2 exits
thread 1 exits
main exits
```

[1]: https://onlinegdb.com/jnZN98KIu
```c
#include <stdio.h>
#include <math.h>

#define MAX_BITS 64

int counter = 0, dec_eqv = 0;
int runcounter = 1;

int binToDec(char *bit)
{
    printf("dec_eqv is %d\n", dec_eqv);
    if (counter == 0 && runcounter) {
        dec_eqv = 0;
        int i = 0;
        while (*(bit + i++))
            ;
        counter = (i - 1) - 1;
        runcounter = 0;
        printf("counter is %d\n", counter);
    }
    if (*bit != 0) {
        printf("*bit not zero\n");
        if (*bit == '1') {
            dec_eqv += (int)pow(2, counter--);
        } else {
            counter--;
        }
        binToDec(bit + 1);
    } else {
        printf("here *bit is 0\n");
        runcounter = 1;
        printf("here dec_eqv is %d\n", dec_eqv);
        return dec_eqv;
        printf("Skipping return\n");
    }
    return -1;
}

int main()
{
    char bin[MAX_BITS];
    printf("Enter binary number (64-bit max): ");
    scanf("%[^\n]s", bin);
    int result = binToDec(bin);
    printf("Decimal equivalent of 0b%s is %d.", bin, result);
    return 0;
}
```

I've added the ```printf()``` statements in ```binToDec()``` for debugging, and it seems that the compiler is ignoring the ```return dec_eqv;``` statement in the final ```else``` block and directly executing the ```return -1;``` at the end. What could possibly be wrong? I am using the Clang/LLVM compiler on Ubuntu 23.10 AMD64.
Similar to Chef Pharaoh's answer, but shorter and more ergonomic: git reset "*" (The quotes are there to avoid shell expansion.)
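A quick demonstration of what the command does, in a throwaway repo (the `user.*` config lines are only there so the demo can commit anywhere):

```shell
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo

echo a > a.txt
git add a.txt && git commit -qm init

echo change >> a.txt   # modify a tracked file
echo b > b.txt         # create a new file
git add .              # stage both

git reset "*"          # unstage everything; the working tree is untouched
git status --porcelain
# prints:
#  M a.txt
# ?? b.txt
```

Both changes survive in the working tree; only the index is reset, which is exactly what makes this a safe "unstage all" shorthand.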
The issue I am having is that when a MUI Select is clicked, the menu items don't match the screen size after some screen resizing by the user. I have subscribed to the resize event and managed to calculate the Select width and pass it to the menu props like this:

    <Select
        id="somecss"
        className="anothercss__options"
        variant="outlined"
        displayEmpty
        placeholder={
            <span className="Someothercss">{placeholderVal}</span>
        }
        ref={selectRef}
        onOpen={() => {
            calculateInitialWidth(selectRef.current);
        }}
        MenuProps={{
            slotProps: {
                paper: {
                    sx: {
                        minWidth: 'unset',
                        width: menuWidth,
                    },
                },
            },
        }}
        value={selectedThingId || ''}
        onChange={onSelectThing}
    >

The fix works and the menu items now match the Select element, but here is the catch: if I switch to the dark theme, the menu items no longer respect it and the colors will not adjust. How can I target only the width of those elements?

Note: min-width is also added because, strangely enough, MUI calculates the min width based on the initial screen size when the component mounts, so when you resize (you guessed it) it will not go below that min width. This applies only to the menu items, not the Select parent.
MUI MenuItem doesn't follow Select width
|reactjs|material-ui|sx|
I have multiple lists (in a .txt file) which I'd like to quickly convert to an array. I've seen this question asked and answered here for Notepad++, but not for Xcode. Is it possible to do the same here?

    AliceBlue
    AntiqueWhite
    Aqua
    Aquamarine
    Azure
    Beige
    Bisque
    Black
    BlanchedAlmond

and convert it to an array literal...

    var myArray = ["AliceBlue", "AntiqueWhite", ... ]

//the highest rated answer for this on the Notepad++ thread: https://stackoverflow.com/questions/8849357/add-quotation-at-the-start-and-end-of-each-line-in-notepad

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/VR6Eo.png

*I should add:* I can use the expression (.+) in the find function, but not the "\1".
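For what it's worth, the transformation the Notepad++ answer performs (find `(.+)`, replace with `"\1",`) can be sanity-checked outside the editor. A small Python sketch of the same regex substitution applied to each line:

```python
import re

lines = ["AliceBlue", "AntiqueWhite", "Aqua"]

# same find/replace as the Notepad++ screenshot: wrap each line in quotes + comma
quoted = [re.sub(r"(.+)", r'"\1",', line) for line in lines]

# stitch the quoted lines into one array literal, dropping the trailing comma
literal = "var myArray = [" + " ".join(quoted).rstrip(",") + "]"
print(literal)
# var myArray = ["AliceBlue", "AntiqueWhite", "Aqua"]
```

The capture group `(.+)` grabs the whole line and the replacement re-emits it wrapped in quotes, which is the entire trick.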
Add quotation to start and end of each line in Xcode
|regex|xcode|replace|find|
I'm trying to compile my OpenGL shared library using g++ in MinGW but I get an error with undefined function reference. I link all the necessary static libraries, but it doesn't work. Previously it was compiled via VC++, but after moving to MinGW it broke. Could this be some kind of MinGW feature? ### Error ``` Warning: resolving _GetModuleHandleA@4 by linking to _GetModuleHandleA ... ... c:/mingw/bin/../lib/gcc/mingw32/6.3.0/crtbegin.o:cygming-crtbegin.c:(.text+0x29): undefined reference to `LoadLibraryA@4' ``` ### Compilation settings ``` g++ -g -Wall -shared -IC:/Users/nikita/.jdks/corretto-11.0.22/include -IC:/Users/nikita/.jdks/corretto-11.0.22/include/win32 -LC:/Windows/System32 -lopengl32 -lkernel32 -luser32 -lgdi32 -lwinspool -lcomdlg32 -ladvapi32 -lshell32 -lole32 -loleaut32 -luuid -lodbc32 -lodbccp32 -o artifact src/shared.h src/gl/shared-gl.h src/gl/windows.cpp ```
Clang possibly skipping lines of code while compiling
|c|recursion|return|compiler-construction|clang|
I was running into the same exact error. It turned out that the automated tests were clicking an exit button I had that quit the application and the tests thought that my application was crashing. I'm not sure if that is exactly what is happening for you but taking the below steps helped me figure out the issue - Go to Pre Launch Report --> Overview - Click the view details arrow in the Report History failing artifact row - Under the Stability section click view details - Click show more and scroll down to the section that shows the devices that crashed. Click the view details arrow - A video should show up of the automated test with the test click locations highlighted - Observe the last few clicks of the video for any hints on what can be causing the crash For me I noticed one of the last clicks done was located near my exit button. So I removed the exit button from my next release and the next release passed.
Many times port 9222 does not work; try a different number like 9214, 9234, etc.

First of all, go to the Chrome installation location:

    cd "C:\Program Files\Google\Chrome\Application"

Run the command below to start the browser on the port (it might not be 9222; I have also sometimes faced the issue of Chrome not opening on it):

    chrome.exe --remote-debugging-port=9214 --no-first-run --no-default-browser-check --user-data-dir="C:\Users\gakhuran\AppData\Local\Google\Chrome\User Data"

Once the browser is opened, you can log in etc. and do the prerequisites. The code below should help in connecting to that already opened browser:

    IBrowser browser = await Playwright.Chromium.ConnectOverCDPAsync("http://localhost:9214");
    var default_context = browser.Contexts[0];
    var Page1 = default_context.Pages[0];

I have been using this. The only problem is that the port sometimes resets; the rest all works.
I had a problem with sending data to the server and then having this data sent to me in the Telegram bot. When the 'Buy' button *(mainButtonClicked)* is pressed, nothing happens and nothing is sent to the server either, but if you configure the button to send data to the server when you press 'Add', then everything is sent.

**Here is the code on GitHub; the relevant parts are in:** tg-web-app-node/index.js and tg-web-app-react/src/components/ProductList/ProductList.jsx

**Links:**

https://github.com/Topicesst/tg-web-app-node.git

https://github.com/Topicesst/tg-web-app-react.git

**Here is the problematic code:**

```
const getTotalPrice = (items = []) => {
    return items.reduce((acc, item) => {
        return acc += item.price
    }, 0)
}

const ProductList = () => {
    const [addedItems, setAddedItems] = useState([]);
    const {tg, queryId} = useTelegram();

    const onSendData = useCallback(() => {
        const data = {
            products: addedItems,
            totalPrice: getTotalPrice(addedItems),
            queryId,
        }
        fetch('http://80.85.143.220:8000/web-data', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(data)
        })
    }, [addedItems])

    useEffect(() => {
        tg.onEvent('mainButtonClicked', onSendData)
        return () => {
            tg.offEvent('mainButtonClicked', onSendData)
        }
    }, [onSendData])

    const onAdd = (product) => {
        const alreadyAdded = addedItems.find(item => item.id === product.id);
        let newItems = [];

        if(alreadyAdded) {
            newItems = addedItems.filter(item => item.id !== product.id);
        } else {
            newItems = [...addedItems, product];
        }

        setAddedItems(newItems)

        if(newItems.length === 0) {
            tg.MainButton.hide();
        } else {
            tg.MainButton.show();
            tg.MainButton.setParams({
                text: `Buy ${getTotalPrice(newItems)}`
            })
        }
    }

    return (
        <div className={'list'}>
            {products.map(item => (
                <ProductItem
                    product={item}
                    onAdd={onAdd}
                    className={'item'}
                />
            ))}
        </div>
    );
};

export default ProductList;
```

I would be very grateful for the help, because even acquaintances could not solve the problem. I checked whether the button responds and whether it sends at least some data to the server - it does.
C# WPF Border background Blur
|c#|wpf|.net-8.0|
So somehow along the lines of declare namespace output = "http://www.w3.org/2010/xslt-xquery-serialization"; declare option output:method "text"; string-join( ( string-join(('id', 'last name', 'first name'), '|'), string-join(('--', '--', '--'), '|'), for $emp in //employee return string-join(($emp/id, $emp/lname, $emp/fname), '|') ), '&#10;' )
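Assuming `employee` elements carrying `id`, `lname` and `fname` children, the joins above produce a header row, a separator row, and one pipe-delimited row per employee. The same shape sketched in Python, just to make the expected output concrete (employee data is made up):

```python
employees = [
    {"id": "1", "lname": "Smith", "fname": "Ann"},
    {"id": "2", "lname": "Jones", "fname": "Bob"},
]

# mirror of the XQuery: header row, separator row, then one row per employee
rows = [
    "|".join(("id", "last name", "first name")),
    "|".join(("--", "--", "--")),
    *("|".join((e["id"], e["lname"], e["fname"])) for e in employees),
]
print("\n".join(rows))
# id|last name|first name
# --|--|--
# 1|Smith|Ann
# 2|Jones|Bob
```

The outer `string-join(..., '&#10;')` in the XQuery plays the role of the final `"\n".join(rows)` here.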
I am training a multi-task model which does ner and text classification. But when I change the model name in AutoModel.from_pretrained it gives error. <br> In the code below, the commented "self.model" doesn't give error and runs fine, but the "emilyalsentzer/Bio_ClinicalBERT" gives error following error:<br> **Error:**<br> RuntimeError: CUDA error: device-side assert triggered<br> CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.<br> Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.<br> **Model code:** class MultiTaskModel(nn.Module): def __init__(self,model_name=None,model_type='BERT',**kwargs): super(MultiTaskModel, self).__init__() self.model=AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT") # model that causes the error. # self.model = AutoModel.from_pretrained('medicalai/ClinicalBERT') # this line works fine self.num_labels_ner=kwargs['num_labels_ner'] self.num_labels_stat=kwargs['num_labels_status'] #print(self.model.config) self.dropout=nn.Dropout(0.2) self.pre_classifier = torch.nn.Linear(self.model.config.hidden_size, 256) # Layers for NER self.linear_ner=nn.Linear(256,self.num_labels_ner) # Layers for status_time self.linear_status=nn.Linear(256,self.num_labels_stat) def forward( self, input_ids=None, attention_mask=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): outputs=self.model( input_ids, attention_mask=attention_mask, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict) #print("Lengths of output ",len(outputs)) sequence_output = outputs[0] #print(outputs[0].shape) # pooled_output = outputs[1] pooled_output = sequence_output[:,0,:] #for ner sequence_output=self.dropout(sequence_output) 
sequence_output=self.pre_classifier(nn.ReLU()(sequence_output)) ner_out=self.dropout(sequence_output) ner_logits=self.linear_ner(ner_out) # for status_time pooled_output=self.pre_classifier(pooled_output) stat_out=self.dropout(pooled_output) stat_logits=self.linear_status(stat_out) # print("Stat_logits shape ",stat_logits.shape) return ner_logits,stat_logits
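A common cause of this particular device-side assert is an input ID that is out of range for the new checkpoint's embedding table, e.g. when the tokenizer was loaded from a different name than the model (the two checkpoints here do not share a vocabulary size). A toy check of that condition; the vocab sizes below are made-up illustrative numbers, not the real ones for these checkpoints:

```python
# stand-ins for tokenizer.vocab_size and model.config.vocab_size
vocab_size_tokenizer = 30522
vocab_size_model = 28996

input_ids = [101, 2054, 29000, 102]

# any id >= the model's vocab size indexes past the embedding matrix,
# which on GPU surfaces as "CUDA error: device-side assert triggered"
out_of_range = [i for i in input_ids if i >= vocab_size_model]
print(out_of_range)  # [29000]
```

Running the forward pass once on CPU (or with `CUDA_LAUNCH_BLOCKING=1`) usually turns this into a readable IndexError pointing at the embedding lookup; loading the tokenizer from the same name as the model avoids it.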
React Node Telegram bot problem with mainButtonClicked
|reactjs|node.js|button|telegram-bot|
Make sure the YAML indentation and format in the `parameters` section of your `SecretProviderClass` are correct. YAML is very sensitive to indentation, and even a small mistake can lead to parsing errors. You would find a similar error in [`Azure/secrets-store-csi-driver-provider-azure` issue 290](https://github.com/Azure/secrets-store-csi-driver-provider-azure/issues/290) for illustration. ```yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: test-tls spec: provider: gcp secretObjects: - secretName: test-tls-csi type: kubernetes.io/tls data: - objectName: "testcert.pem" key: tls.key - objectName: "testcert.pem" key: tls.crt parameters: secrets: | - resourceName: "projects/${PROJECT_ID}/secrets/test_ssl_secret/versions/latest" fileName: "testcert.pem" ``` The `resourceName`/`fileName` in the `parameters` section are properly indented as a part of the list under `secrets`. And [Online YAML Parser](http://yaml-online-parser.appspot.com/) seems to error on: ```yaml volumeMounts: - name: testsecret mountPath: /var/secret volumes: - name: testsecret csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "test-tls" ``` The error: ``` ERROR: while parsing a block mapping in "<unicode string>", line 1, column 5: volumeMounts: ^ expected <block end>, but found '<block mapping start>' in "<unicode string>", line 4, column 11: volumes: ^ ``` A better indentation: ```yaml volumeMounts: - name: testsecret mountPath: /var/secret volumes: - name: testsecret csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "test-tls" ``` The `volumes` block was being indented in a way that made it appear as a continuation of the properties of `testsecret` under `volumeMounts`, which is not structurally valid. Now, `volumeMounts` and `volumes` are at the same indentation level, indicating they are part of the same Kubernetes resource definition (e.g., a Pod spec).
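One quick way to catch these before applying the manifest is to round-trip it through a YAML parser. A sketch using PyYAML (assumed to be installed); a badly indented manifest fails to parse, a correct one comes back as the expected nested mapping:

```python
import yaml

snippet = """\
volumeMounts:
  - name: testsecret
    mountPath: /var/secret
volumes:
  - name: testsecret
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "test-tls"
"""

spec = yaml.safe_load(snippet)
print(sorted(spec))                         # ['volumeMounts', 'volumes']
print(spec["volumes"][0]["csi"]["driver"])  # secrets-store.csi.k8s.io
```

`kubectl apply --dry-run=client -f file.yaml` performs a similar structural check against the cluster's schemas.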
I have been trying all day to write a simple bit of code that makes a Discord bot join a channel and then play a sound, but for some reason I can't seem to get it to work. I have tried many different things but it just doesn't play the sound. I also get no error or anything. Could somebody explain to me what I am doing wrong? The code can be found below.

    const Discord = require("discord.js");
    const discordTTS = require('discord-tts');
    const { Client, GatewayIntentBits } = require('discord.js');
    const { joinVoiceChannel } = require('@discordjs/voice');
    const { createAudioPlayer, NoSubscriberBehavior } = require('@discordjs/voice');
    const { createAudioResource } = require('@discordjs/voice');
    const { join } = require('node:path');
    const { AudioPlayerStatus } = require('@discordjs/voice');

    const player = createAudioPlayer({
        behaviors: {
            noSubscriber: NoSubscriberBehavior.Pause,
        },
    });

    const client = new Client({
        intents: [
            GatewayIntentBits.Guilds,
            GatewayIntentBits.GuildMessages,
            GatewayIntentBits.MessageContent,
            GatewayIntentBits.GuildMembers,
        ],
    });

    client.on('ready', (c) => {
        console.log(c.user.tag + " is online!");
    });

    client.on('messageCreate', (message) => {
        if (message.content === 'hello') {
            const connection = joinVoiceChannel({
                channelId: message.member.voice.channel.id,
                guildId: message.member.voice.guild.id,
                adapterCreator: message.member.voice.guild.voiceAdapterCreator,
                selfDeaf: false,
            });

            const player = createAudioPlayer();
            connection.subscribe(player);
            resource = createAudioResource('/sounds/alert.mp3');
            player.play(resource);
        }
    });

It joins the channel fine but then doesn't play the sound. I double-checked and the file (alert.mp3) is in the sounds folder.
After switching the UWP project from debug to release mode, an error appears even though SplashScreen.png is included in the .appxmanifest:

```
Severity	Code	Description	Project	File	Line	Suppression State	Details
Error	DEP0700: Registration of the app failed. [0x80073CF6] AppxManifest.xml(32,27): error 0x80070003: Cannot install or update package xxxx_t07g111c25az6 because the splash screen image [SplashScreen.png] cannot be located. Verify that the package contains an image that can be used as a splash screen for the application, and that the package manifest points to the correct location in the package where this splash screen image can be found.	xxxx.App
```

I cleaned and rebuilt my project multiple times. Excluding and re-including the project didn't help. Deleting the bin and obj folders had no effect. Removing the splash screen and adding it again didn't help either.
DEP0700: Registration of the app failed UWP in release mode
|c#|uwp|release|
```powershell
$SQLServer = "Server Name"
$SQLDBName = "DB Name"
$csvData = Import-Csv -Path "D:\Sampath\SQL\DynamicSql\CustomerData.csv"

# SQL query for periodic payments.
forEach ($data in $csvData) {
    $SqlQuery = "SELECT ID,CUSTOMERNUMBER,ENTITYNUMBER,ACCOUNT FROM [XXXX].[dbo].[XXXX] WITH(NOLOCK) where PSTATUS = 'F' AND [entityNumber] = '$($data.ENTITYNUMBER)'"
    $SqlConnection = New-Object System.Data.SqlClient.SqlConnection("Server=$SQLServer;Database=$SQLDBName;Integrated Security=True")
    $SqlCmd = New-Object System.Data.SqlClient.SqlCommand($SqlQuery, $SqlConnection)
    $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter($SqlCmd)
    $DataSet = New-Object System.Data.DataSet
    $SqlAdapter.Fill($DataSet)
}

$DataSet.Tables[0] | Export-Csv "D:\Sampath\SQL\DynamicSql\Results.csv" -NoTypeInformation -Encoding UTF8
```

I have a requirement to extract data from a SQL database based on the input data in a CSV file. This input CSV file has 30+ rows, so I want to fetch the data row by row and save all the results in one output CSV file. When I run the above script, it runs without error, but no results are showing. Could you please suggest where I am making a mistake?
Exporting data from the database for each row of an input CSV file and saving the results to one output CSV file
|sql-server|powershell|loops|foreach|sql-scripts|
What's the difference between the [Microsoft.ApiManagement/service/portalsettings](https://learn.microsoft.com/en-us/azure/templates/microsoft.apimanagement/service/portalsettings-signin?pivots=deployment-language-bicep) and [Microsoft.ApiManagement/service/portalconfigs](https://learn.microsoft.com/en-us/azure/templates/microsoft.apimanagement/service/portalconfigs?pivots=deployment-language-bicep) resources of Azure API Management? I want to deploy some Azure API Management Developer Portal configuration using Bicep. One of the items is to remove the 'Username and password' Identity Provider because we're going to use Microsoft Entra ID. But I'm not sure which resource to use. I've removed the Identity Provider manually and noticed that both the `enabled` property in the `portalsettings` resource (name `signup`) is set to `false` and the `enableBasicAuth` property in the `portalconfigs` resource is set to `false`. I couldn't find anything in the documentation that explains the difference between the two resources.
Different model name gives `TORCH_USE_CUDA_DSA` error in AutoModel.from_pretrained
|nlp|huggingface-transformers|
Try `//*[local-name()='TagA']`. Matching on `local-name()` selects the element by its name alone, regardless of which namespace (or prefix) the document binds it to.
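For illustration, here is a sketch of the same namespace-agnostic matching in Python. The stdlib `xml.etree.ElementTree` has no full XPath 1.0 engine (so `local-name()` itself isn't available there), but since Python 3.8 it supports an equivalent `{*}` namespace wildcard; the document and tag names below are made up:

```python
import xml.etree.ElementTree as ET

# A namespaced document: a plain './/TagA' search would find nothing,
# because every element lives in the 'urn:example' namespace.
xml = """<root xmlns="urn:example">
  <TagA>hello</TagA>
  <child><TagA>world</TagA></child>
</root>"""

tree = ET.fromstring(xml)

# '{*}TagA' matches TagA in any namespace, like //*[local-name()='TagA']
matches = tree.findall('.//{*}TagA')
print([el.text for el in matches])  # -> ['hello', 'world']
```

With a full XPath 1.0 processor (e.g. in a browser, XSLT, or lxml), the original `//*[local-name()='TagA']` expression works directly.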
I suggest making a capsule collision around the sword, then getting all actors overlapping the capsule, looping over them into an array, dealing damage to every actor in the array, and then clearing it for the next attack. Using Tick etc. will cost FPS and can sometimes fail to detect the traces correctly; IMO overlapped actors are more accurate and won't break whether the game runs at 1 FPS or 120.
In Rust, I have created a struct `Foo`. I now want to initialise that struct from a string using a macro, for example:

```rust
create_struct!("Foo");
```

I am struggling to do this - any help would be hugely appreciated! Thanks.

src/main.rs:

```rust
pub struct Foo {
}


// Macro to initialize an instance of the struct with the given name
macro_rules! create_struct {($struct_name:literal) => {$struct_name {}}}

fn main() {let my_struct = create_struct!("Foo");}
```

Output on compilation:

```
error: macro expansion ignores token `{` and any following
 --> src/main.rs:6:69
  |
6 | macro_rules! create_struct {($struct_name:literal) => {$struct_name {}}}
  |                                                                     ^
7 |
8 | fn main() {let my_struct = create_struct!("Foo");}
  | --------------------- caused by the macro expansion here
  |
  = note: the usage of `create_struct!` is likely invalid in expression context
```
I'm trying to get the name from the database but I can't get it