Pytorch: Compute gradient dx given Ax=b without solving the whole system
|pytorch|autograd|
In Flutter we serialize (toJson) the model, but how can we generate a `toJson` that also includes the getters? Simple example:

```
class MyClass {
  const MyClass({
    required this.a,
    required this.b,
  });

  final double a;
  final double b;

  double get average => (a + b) / 2;
}
```

We use the `json_serializable` package for generating the `toJson` method. It generates the `my_class.g.dart` file where we have the `toJson` and `fromJson` methods. The problem is that this way the getter (the `average` field in our example) is not serialized. What can we do to provide full serialization including class getters? Is there a way to achieve it using the `json_serializable` package, or does some other package offer that?
Flutter - generate toJson method for model including getter
|flutter|flutter-dependencies|
You can alias your fields using `.` to build document structure. For your case, you can do this:

```
drop table if exists #Project

create table #Project
(
    Id int identity(1, 1),
    Description nvarchar(100),
    Note nvarchar(100)
)

insert into #Project
values
    ('Daphne', 'Ocala county - Barn'),
    ('Sunny', 'Riverdon county - Prison'),
    ('Sasha', 'Sommer county - School')

select
    (
        select
            [ExternalRefNbr.value] = cast(Id as nvarchar(30)),
            [Description.value] = Description,
            [Note.value] = Note
        for json path, without_array_wrapper
    )
from #Project
```
I ran into a similar problem in VBA and this worked for me; maybe you can leverage something similar: https://stackoverflow.com/questions/58929416/setting-default-printer-through-vba

```
CreateObject("WScript.Network").SetDefaultPrinter "RecOffice_Pink"
```

where "RecOffice_Pink" does not need to include the port!
Consuming Strapi API with Java - convert to Java objects
|java|json|gson|
Per the JSON Schema documentation:

- **contains** means that at least one item of the array is valid
- **items** means that all elements of the array are valid

https://json-schema.org/understanding-json-schema/reference/array

So it's up to your use case whether you want to validate at least one item or every item.
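A minimal pair of schemas illustrating the difference (the `multipleOf` constraint is just an arbitrary example, not from the question). An array like `[1, 2, 3]` is valid against this one, because a single even element suffices:

```json
{ "type": "array", "contains": { "multipleOf": 2 } }
```

but invalid against this one, which requires every element to match:

```json
{ "type": "array", "items": { "multipleOf": 2 } }
```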
Help me find the optimal solution. I have a compiled application (written in Go, if it matters). The application should run as a daemon on Linux and write logs to the user's home directory. The problem is that the daemon will be launched as root, so the home directory will be root's. I tried adding the following line to the postinst script of the deb package:

`sed -i "s:%homedir%:${HOME}:g" /lib/systemd/system/mydemon.service`

(My application has a CLI parameter that accepts the directory for log writing.) But if the installation is done with sudo, we get the wrong directory. I also tried getting the user's home directory in the program itself using `os.UserHomeDir()`, but it returns root's directory if the application runs under root. Perhaps there's a way to obtain the home directory based on the session. For example, on Windows my service also runs under the "SYSTEM" account, but I get the current user's directory through the session (using WinAPI: getting the user token, then `GetUserProfileDirectory`). Is there something similar that can be done in Linux?
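One partial sketch (an assumption, not a full answer): when the binary is started via `sudo`, sudo preserves the invoking user's login name in the `SUDO_USER` environment variable, so the original home directory can be looked up from it. This does not help for a service started directly by systemd as root; there you would typically use a fixed log directory or a `User=` directive instead.

```go
package main

import (
	"fmt"
	"os"
	"os/user"
)

// invokingUserHome returns the home directory of the user who launched the
// process: the SUDO_USER account when running under sudo, otherwise the
// current user's home (which is /root when running directly as root).
func invokingUserHome() (string, error) {
	if name := os.Getenv("SUDO_USER"); name != "" {
		u, err := user.Lookup(name)
		if err != nil {
			return "", err
		}
		return u.HomeDir, nil
	}
	return os.UserHomeDir()
}

func main() {
	home, err := invokingUserHome()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(home)
}
```

Called from a postinst script run under sudo, this resolves the installing user's home rather than root's.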
Obtaining the user's home directory for a daemon application running as root in linux
|linux|go|debian|systemd|dpkg|
A lot depends on the weights on your edges. You'll get very different values depending on the weights and how those weights are being interpreted. You'll undoubtedly need to open up the Layout->Settings... dialog for the spring embedded layout and play with the values to get the results you want.
Changes to be made:

```
@GeneratedValue(strategy = GenerationType.IDENTITY)
```

and in the properties file:

```
spring.jpa.hibernate.ddl-auto=update
```
Edit: I'm doing away with pseudocode; here's the actual code.

I think the answer to this question may lie [here](https://stackoverflow.com/questions/14220321/how-do-i-return-the-response-from-an-asynchronous-call/14220323#14220323), but I can't understand the answer. The value of `searchedItem` remains unaltered. The following is my actual code.

```lang-js
const searchedItem = async (req, res) => {
  const { requestedId } = req.query
  const client = await getClient()
  try {
    let searchedItem = 'literallyAnything'
    const allCollections = await db('dbName').listCollections().toArray()
    allCollections.forEach(async (collection) => {
      const allUnits = await client
        .db('dbName')
        .collection(collection.name)
        .find({})
        .toArray()
      for (const topic of allUnits) {
        if (typeof topic === 'object' && Object.keys(topic).length > 1) {
          for (const item in topic) {
            if (typeof topic[item] === 'object') {
              if (topic[item].id == requestedId) {
                searchedItem = await topic[item]
                break
              }
            }
          }
        }
      }
    })
    res.status(200).json(searchedItem)
  } catch (error) {
    // res.status(500).json(error)
  }
}
```

So, as mentioned above, I read this [answer](https://stackoverflow.com/questions/14220321/how-do-i-return-the-response-from-an-asynchronous-call/14220323#14220323), but I don't understand it. To tackle the above, I decided to create another method whose sole job would be to send the response to the frontend, and I decided to call it from the if statement itself, where I know the value of `searchedItem` still exists. This worked up to an extent: I do get the desired output in my browser, but I also get the error `Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client`, and my server crashes. I'm truncating the code for the sake of brevity.
```lang-js
// The new method that I created
const sendResponse = (response, statusCode, data) =>
  response.status(statusCode).json(data)
```

```lang-js
// Including only the modified if statement
if (topic[item].id == requestedId) {
  searchedItem = currentChild
  sendResponse(res, 200, searchedItem)
}
```

What change should I implement in my code? Thank you.
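A sketch of the usual fix for this pattern (not the asker's code): `forEach` never awaits an `async` callback, so the response is sent before any query finishes; a sequential `for…of` loop that returns as soon as the item is found avoids both the stale `searchedItem` and the double response. The Mongo calls are replaced here with plain in-memory arrays so the control flow can be shown on its own:

```javascript
// Each entry in `collections` stands in for one collection's
// `find({}).toArray()` result; in the real handler those would be awaited
// inside this for…of loop rather than inside a forEach callback.
async function findItemById(collections, requestedId) {
  for (const allUnits of collections) {
    for (const topic of allUnits) {
      if (typeof topic === 'object' && Object.keys(topic).length > 1) {
        for (const key in topic) {
          const child = topic[key]
          if (typeof child === 'object' && child !== null && child.id == requestedId) {
            return child // first match wins; nothing else runs afterwards
          }
        }
      }
    }
  }
  return null // nothing matched
}
```

In the Express handler you would then send exactly one response, e.g. `res.status(200).json(await findItemById(...))`, which is what prevents `ERR_HTTP_HEADERS_SENT`.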
Temporary lifetime variable MSVC different behaviour from GCC
|c++|
I am getting an error 1004 "Unable to get the Match property of the WorksheetFunction class". Any ideas?

```
Sub helper()
    Dim k As Long
    For k = 20 To 49
        If Cells(k, 4).Value Like "*Opportunity*" Then
            Cells(k, 5).Value = WorksheetFunction.Index(Range("W20:W49"), WorksheetFunction.Match(Cells(k, 22).Value, Range("D20:D49"), 0))
        ElseIf Cells(k, 4) <> "" Then
            Cells(k, 5).FormulaR1C1 = _
                "=IFERROR(IF(RC[-1]>1,VLOOKUP(RC[-1],'Data Extract'!C1:C33,2,FALSE),""""),"""")"
        Else
            ''
        End If
    Next k
End Sub
```
Index match to retrieve based on contains VBA
|vba|indexing|match|
Please suggest how to identify whether the script is right or not. [JMeter query image](https://i.stack.imgur.com/9UZbZ.png) When I record a script using the JMeter recorder, I add the suggested exclude patterns, but CSS and other JS requests still show up in the recording.
Jmeter query about script recording
|jmeter|
I downloaded the library https://github.com/mgp25/Chat-API. This is a WhatsApp API. I did everything as written in the documentation (github.com/mgp25/Chat-API/wiki). First, I wrote the following script:

```
<?php
require_once 'src/Registration.php';

$debug = true;
$username = '123456789'; // my phone number

$w = new Registration($username, $debug);
$w->codeRequest('sms');
?>
```

Then my phone received a message with the code for registration. Next, I wrote the following script:

```
<?php
require_once 'src/Registration.php';

$debug = true;
$username = '123456789';

$w = new Registration($username, $debug);
$w->codeRegister('654321'); // code that I have received
?>
```

In response, I received:

```
[status] => ok
[login] => login
[pw] => password
[type] => existing
[expiration] => 1443256747
[kind] => free
[price] => 39.0
[cost] => 0.89
[currency] => руб
[price_expiration] => 1414897682
```

Next, I try to log in:

```
<?php
set_time_limit(10);
require_once 'src/whatsprot.class.php';
require_once 'src/events/MyEvents.php';
date_default_timezone_set('Europe/Moscow');

$username = '123456789';
$password = 'password';
$nickname = 'nickname';
$debug = true;

$w = new WhatsProt($username, $nickname, $debug);
$w->connect();
$w->loginWithPassword($password);
```

Here the script goes into an infinite loop. The function `loginWithPassword()` is in the file whatsprot.class.php (github.com/mgp25/Chat-API/blob/master/src/whatsprot.class.php), on line 277. On line 287 it calls the function `doLogin()`. This function is in the file Login.php (github.com/mgp25/Chat-API/blob/master/src/Login.php), on line 24. On line 49 is the infinite loop. The same problem is described here: https://github.com/mgp25/Chat-API/issues/2140
I'm currently trying to deploy a Next.js app on GitHub Pages using GitHub Actions, but I get a page 404 error even after it successfully deploys. I've looked around a bunch of similarly named questions and am having trouble figuring this out. Here is my GitHub repo: https://github.com/Mctripp10/mctripp10.github.io Here is my website: https://mctripp10.github.io I used the *Deploy Next.js site to Pages* workflow that GitHub provides. Here is the `nextjs.yml` file:

```lang-yaml
# Sample workflow for building and deploying a Next.js site to GitHub Pages
#
# To get started with Next.js see: https://nextjs.org/docs/getting-started
#
name: Deploy Next.js site to Pages

on:
  # Runs on pushes targeting the default branch
  push:
    branches: ["dev"]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Detect package manager
        id: detect-package-manager
        run: |
          if [ -f "${{ github.workspace }}/yarn.lock" ]; then
            echo "manager=yarn" >> $GITHUB_OUTPUT
            echo "command=install" >> $GITHUB_OUTPUT
            echo "runner=yarn" >> $GITHUB_OUTPUT
            exit 0
          elif [ -f "${{ github.workspace }}/package.json" ]; then
            echo "manager=npm" >> $GITHUB_OUTPUT
            echo "command=ci" >> $GITHUB_OUTPUT
            echo "runner=npx --no-install" >> $GITHUB_OUTPUT
            exit 0
          else
            echo "Unable to determine package manager"
            exit 1
          fi
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: ${{ steps.detect-package-manager.outputs.manager }}
      - name: Setup Pages
        uses: actions/configure-pages@v4
        with:
          # Automatically inject basePath in your Next.js configuration file and disable
          # server side image optimization (https://nextjs.org/docs/api-reference/next/image#unoptimized).
          #
          # You may remove this line if you want to manage the configuration yourself.
          static_site_generator: next
      - name: Restore cache
        uses: actions/cache@v4
        with:
          path: |
            .next/cache
          # Generate a new cache whenever packages or source files change.
          key: ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}-${{ hashFiles('**.[jt]s', '**.[jt]sx') }}
          # If source files changed but packages didn't, rebuild from a prior cache.
          restore-keys: |
            ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}-
      - name: Install dependencies
        run: ${{ steps.detect-package-manager.outputs.manager }} ${{ steps.detect-package-manager.outputs.command }}
      - name: Build with Next.js
        run: ${{ steps.detect-package-manager.outputs.runner }} next build
      - name: Static HTML export with Next.js
        run: ${{ steps.detect-package-manager.outputs.runner }} next export
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: ./out

  # Deployment job
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
```

I got this on the build step:

```lang-none
Route (app)                              Size     First Load JS
┌ ○ /_not-found                          875 B          81.5 kB
├ ○ /pages/about                         2.16 kB        90.2 kB
├ ○ /pages/contact                       2.6 kB         92.5 kB
├ ○ /pages/experience                    2.25 kB        90.3 kB
├ ○ /pages/home                          2.02 kB        92 kB
└ ○ /pages/projects                      2.16 kB        90.2 kB
+ First Load JS shared by all            80.6 kB
  ├ chunks/472-0de5c8744346f427.js       27.6 kB
  ├ chunks/fd9d1056-138526ba479eb04f.js  51.1 kB
  ├ chunks/main-app-4a98b3a5cbccbbdb.js  230 B
  └ chunks/webpack-ea848c4dc35e9b86.js   1.73 kB

○  (Static)  automatically rendered as static HTML (uses no initial props)
```

Full image: [Build with Next.js][1]

I read in https://stackoverflow.com/questions/58039214/next-js-pages-end-in-404-on-production-build that perhaps it has something to do with having sub-folders inside the `pages` folder, but I'm not sure how to fix that, as I wasn't able to get it to work without sub-foldering `page.js` files for each page.

[1]: https://i.stack.imgur.com/wSlPq.png
I am trying to use the **Microsoft.Build** NuGet package to:

- open a solution
- enumerate projects in it
- discover the DLLs that are created.

<sup>\<rant>Failure modes seem to be super-abundant, error messages are uninformative, documentation seems to be scarce and misdirected, and examples are outdated. After hours upon hours of troubleshooting in the dark, and solving issue after issue that popped up, I finally arrived at an "Internal MSBuild Error", which brings me to Stack Overflow.\</rant></sup>

My "scratch" solution contains just one net8.0 project, as follows:

```
<Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>net8.0</TargetFramework>
    </PropertyGroup>
    <ItemGroup>
        <PackageReference Include="Microsoft.Build" Version="17.9.5" />
        <PackageReference Include="Microsoft.Build.Utilities.Core" Version="17.9.5" />
    </ItemGroup>
</Project>
```

This project contains just one source file, as follows:

```
namespace ConsoleApp1;

using System;
using System.IO;
using System.Collections.Generic;
using Microsoft.Build.Construction;
using Microsoft.Build.Definition;
using Microsoft.Build.Evaluation;
using Microsoft.Build.Evaluation.Context;

class Program
{
    static void Main( string[] args )
    {
        Directory.SetCurrentDirectory( @"C:\" ); //ensure that the error we encounter further down is not due to the current directory
        msBuildApiTest( @"D:\Personal\MyVisualStudioSolution\Solution.sln" ); // <-- this fails
        msBuildApiTest( @"D:\Personal\scratch\scratch.sln" ); // <-- this would also fail
    }

    static void msBuildApiTest( string solutionFilePath )
    {
        string msBuildExtensionsPath = @"C:\Program Files\dotnet\sdk\8.0.102";
        string msBuildSdksPath = Path.Combine( msBuildExtensionsPath, "Sdks" );
        Environment.SetEnvironmentVariable( "MSBuildSDKsPath", msBuildSdksPath ); //Prevents InvalidProjectFileException: The SDK 'Microsoft.NET.Sdk' specified could not be found.
        Environment.SetEnvironmentVariable( "MSBuildEnableWorkloadResolver", "false" ); //Prevents InvalidProjectFileException: The SDK 'Microsoft.NET.SDK.WorkloadAutoImportPropsLocator' specified could not be found.

        ProjectOptions projectOptions = new();
        projectOptions.EvaluationContext = EvaluationContext.Create( EvaluationContext.SharingPolicy.Shared );
        projectOptions.LoadSettings = ProjectLoadSettings.DoNotEvaluateElementsWithFalseCondition;
        projectOptions.GlobalProperties = new Dictionary<string, string>();
        projectOptions.GlobalProperties.Add( "SolutionDir", Path.GetDirectoryName( solutionFilePath ) + "\\" ); //The trailing backslash is OF PARAMOUNT IMPORTANCE.
        projectOptions.GlobalProperties.Add( "MSBuildExtensionsPath", msBuildExtensionsPath ); //Prevents InvalidProjectFileException: The imported project "D:\Personal\scratch\ConsoleApp1\bin\Debug\net8.0\Current\Microsoft.Common.props" was not found.

        ProjectCollection projectCollection = new( ToolsetDefinitionLocations.Default );
        SolutionFile solutionFile = SolutionFile.Parse( solutionFilePath );
        foreach( ProjectInSolution projectInSolution in solutionFile.ProjectsInOrder )
        {
            if( projectInSolution.ProjectType is SolutionProjectType.SolutionFolder or SolutionProjectType.SharedProject )
                continue;
            Console.WriteLine( $"{projectInSolution.ProjectType}\t{projectInSolution.ProjectName}\t{projectInSolution.RelativePath}" );
            Project project1 = Project.FromFile( projectInSolution.AbsolutePath, projectOptions ); // <-- this fails
            Project project2 = new( projectInSolution.AbsolutePath, projectOptions.GlobalProperties, "Current", projectCollection ); // <-- this would also fail
        }
    }
}
```

Either of the last two statements fails. The exception is a `Microsoft.Build.Exceptions.InvalidProjectFileException`.
The message of the exception is as follows: > `The expression "[MSBuild]::GetTargetFrameworkIdentifier(net8.0)" cannot be evaluated.` > `MSB0001: Internal MSBuild Error: A required NuGet assembly was not found.` > `Expected Path: D:\Personal\scratch\ConsoleApp1\bin\Debug\net8.0` > `C:\Program Files\dotnet\sdk\8.0.102\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.TargetFrameworkInference.targets` Note: The "Expected Path" (whatever that means) points to the output directory of my scratch solution, which makes no sense, because that's not the solution I am trying to parse, and I have not supplied Microsoft.Build with any path to my scratch solution. As a matter of fact, I even switch the current directory to `C:\` so as to make sure that the error is not due to that. Note: The `string msBuildExtensionsPath = @"C:\Program Files\dotnet\sdk\8.0.102";` part works on my machine, you might have to change it for your machine. Automatic discovery would be nice, but it is beyond the scope of this scratch app. Note: The solution at "D:\Personal\MyVisualStudioSolution\Solution.sln" is the solution that I am trying to parse, consisting of many net8.0 C# projects and one net472 project. However, it does not matter, because I get the exact same failure when I point this simple scratch app to try to parse itself. The question is: Why is this failing, and what must I do to make it work?
You should try setting your HTML language to Urdu. Change the following line in the HTML head:

```
<html lang="ur">
```
Getting this error in Postman when trying http://localhost:3000/user:

> AssertionError: expected 'application/json; charset=utf-8' to include 'text/html'

But the status code is 200.

```
app.get('/user', (req, res) => {
    User.find().then(users => res.status(200).json(users))
        .catch(err => {
            console.error(`Error in Database ${err}`);
            res.status(500).json({ error: 'Internal Server Error' });
        })
});
```

I tried putting this piece of code just above `User.find()`:

```
res.header('Content-Type', 'application/json; charset=utf-8');
```

But the problem remains the same.
AssertionError: expected 'application/json; charset=utf-8' to include 'text/html'
|node.js|express|postman|mern|
I create a Plotly bar chart using the following line:

```
fig = px.bar(df_to_print, x="bin_dist", y=metric, color='net_id', barmode="group",
             text=metric, color_discrete_sequence=px.colors.qualitative.Vivid)
```

and got this chart (using Streamlit): [enter image description here](https://i.stack.imgur.com/D8z2O.png)

I want to bold the maximum value in each group, something like: [enter image description here](https://i.stack.imgur.com/vOcHd.png)

Any ideas? Trying:

```
for net_id in df_to_print['net_id'].unique():
    max_value = df_to_print[df_to_print['net_id'] == net_id][metric].max()
    max_index = df_to_print[(df_to_print['net_id'] == net_id) & (df_to_print[metric] == max_value)].index
    fig.data[0].marker.line.width[max_index] = 2
```

but got the exception:

> TypeError: 'NoneType' object does not support item assignment
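One possible sketch (not from the question): Plotly renders simple HTML tags such as `<b>` inside bar text labels, so a label column can be built in pandas that wraps each per-group maximum in `<b>…</b>` and is then passed as `text="label"` to `px.bar`. The column names mirror the question; the data here is made up:

```python
import pandas as pd

# Hypothetical data in the shape the question implies.
df = pd.DataFrame({
    "bin_dist": ["a", "a", "b", "b"],
    "metric":   [1.0, 3.0, 2.0, 5.0],
    "net_id":   ["n1", "n2", "n1", "n2"],
})

# Mark the maximum of each net_id group and wrap it in <b> tags,
# which Plotly renders as bold text on the bars.
is_max = df.groupby("net_id")["metric"].transform("max") == df["metric"]
df["label"] = df["metric"].astype(str)
df.loc[is_max, "label"] = "<b>" + df.loc[is_max, "label"] + "</b>"
```

Then `px.bar(df, x="bin_dist", y="metric", color="net_id", barmode="group", text="label", ...)` shows the bolded labels; switching the `groupby` key changes whether "group" means per `net_id` or per `bin_dist`.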
You need to move the tooltip initialization into the [ngAfterViewInit()](https://angular.io/api/core/AfterViewInit) hook instead, when the view is initialized:

```
ngAfterViewInit() {
    var tooltipTriggerList = [].slice.call(document.querySelectorAll('[data-bs-toggle="tooltip"]'))
    var tooltipList = tooltipTriggerList.map(function (tooltipTriggerEl) {
        return new Tooltip(tooltipTriggerEl)
    });
}
```
I need to crop each column from a scanned PDF. I tried lots of solutions from here, but none of them worked. For example, I have the image below. [![enter image description here][1]][1] I need to write a Python script to get the images below. Can you help me with it? [![enter image description here][2]][2] ---------- [![enter image description here][3]][3] [1]: https://i.stack.imgur.com/YdPSG.jpg [2]: https://i.stack.imgur.com/S71XD.png [3]: https://i.stack.imgur.com/AETC0.png
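Not from the question, but a minimal sketch of one common approach: a vertical projection profile. Binarize the page, sum the ink per image column, and cut wherever a sufficiently wide blank gap appears. A real scan would first go through OpenCV thresholding and deskewing; here a tiny synthetic array stands in for the page:

```python
import numpy as np

def split_columns(binary, gap=5):
    """Split a binarized page (text=1, background=0) into vertical columns.

    Uses a vertical projection profile: image columns whose ink sum stays
    zero for at least `gap` consecutive pixels are treated as separators.
    Returns a list of (start, end) x-ranges, one per text column.
    """
    profile = binary.sum(axis=0)  # total ink per image column
    cols, start, run = [], None, 0
    for x, v in enumerate(profile):
        if v > 0:
            if start is None:
                start = x    # ink begins: open a new text column
            run = 0
        else:
            if start is not None:
                run += 1
                if run >= gap:  # blank gap wide enough: close the column
                    cols.append((start, x - run + 1))
                    start, run = None, 0
    if start is not None:
        cols.append((start, len(profile)))  # last column runs to the edge
    return cols
```

Cropping is then `page[:, start:end]` for each returned range. On a synthetic page with ink at columns 2-8 and 20-26, this returns `[(2, 9), (20, 30)]` (the trailing column extends to the image edge because no closing gap follows it).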
Extract each column as image from scanned pdf
|python|opencv|image-processing|
I have something like this:

```
Future<void> f1() { ... }
Future<void> f2() { ... }
Future<void> f3() { ... }

void main() async {
  f1();
  f2(); // might throw an Exception
  await f3();
}
```

Note: I am deliberately not awaiting f1 or f2; I am only awaiting the consequences in f3. How do I handle the possibility of an exception in f2? The normal try/catch procedure doesn't work. I have seen some discussion of catchError, but I don't really understand it. I would like to do the equivalent of:

```
Future<void> f1() { ... }
Future<void> f2() { ... }
Future<void> f3() { ... }

void main() async {
  f1();
  try {
    f2(); // might throw an Exception
  } on MyException catch( e ) {
    print('this is what I expected to happen, so carry on regardless');
  }
  await f3();
}
```
How to handle errors coming back from Futures when I am not awaiting the Future?
|dart|exception|future|
I need to write a function that sorts this array based on the dialog_node and previous_sibling keys. The previous_sibling of the next object matches the dialog_node value of the previous object in the array.

```
export function orderDialogNodes(nodes) {
  // Create a mapping of dialog_node to its corresponding index in the array
  const nodeIndexMap = {};
  nodes.forEach((node, index) => {
    nodeIndexMap[node.dialog_node] = index;
  });

  // Sort the array based on the previous_sibling property
  nodes.sort((a, b) => {
    const indexA = nodeIndexMap[a.previous_sibling];
    const indexB = nodeIndexMap[b.previous_sibling];
    return indexA - indexB;
  });

  return nodes;
}

const inputArray = [
  {
    type: "folder",
    name: "Item 2",
    dialog_node: "node_2_1702794723026",
    previous_sibling: "node_9_1702956631016",
  },
  {
    type: "folder",
    name: "Item 3",
    dialog_node: "node_3_1702794877277",
    previous_sibling: "node_2_1702794723026",
  },
  {
    type: "folder",
    name: "Item 1",
    dialog_node: "node_9_1702956631016",
    previous_sibling: "node_7_1702794902054",
  },
];

const orderedArray = orderDialogNodes(inputArray);
console.log(orderedArray);
```
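Since each `previous_sibling` points at the node that must come immediately before it, the array is really a linked list, and a comparator-based `sort` cannot express that reliably (nodes whose sibling is missing from the map compare as `NaN`). A sketch that walks the chain instead; same data shape as above, the function name is my own:

```javascript
function orderBySiblingChain(nodes) {
  // Map from a dialog_node id to the node that should come right after it.
  const nextOf = new Map(nodes.map(n => [n.previous_sibling, n]));
  const ids = new Set(nodes.map(n => n.dialog_node));
  // The head is the node whose previous_sibling is not in the array at all.
  let current = nodes.find(n => !ids.has(n.previous_sibling));
  const ordered = [];
  while (current) {
    ordered.push(current);
    current = nextOf.get(current.dialog_node);
  }
  return ordered;
}
```

On the `inputArray` shown above this yields Item 1, Item 2, Item 3.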
I have a VM and I encrypted my OS disk. Now everything functions as normal, and I can see a small lock on the C:\ drive. But the question is: what's the real purpose of the encryption? How does it protect the disks, and in what way? I mean, another user that has access can still come and RDP in and view the disk, so in what scenario is this of any use? How does it stop unauthorised access? I mean, someone can take the disks, attach them to a VM, and they have the data? (Is that right?) Also, if I were to take a snapshot and create a new VM with these disks, will the encryption still be there?

**UPDATE**

*Encrypting the OS disk ensures that data remains inaccessible without the encryption key, deterring unauthorized access even if the disk is stolen*

You mention the data is inaccessible if the disk is stolen; how does that work? If the disk is encrypted on my Azure subscription via Key Vault entries that hold the encryption keys, does that mean that if I restore the snapshots on my tenant then it will work? And does it mean that if the disks and snapshots are stolen and someone tries to restore the disks to a VM, it won't work because it can't find the Key Vault keys on their tenant? Is that how it works?
|php|whatsapp|
How can I update multiple checkbox values?
I found the code/library and all I needed on BasselItech: https://www.youtube.com/watch?v=zzs2xnyCczo I tried modifying the code using `.Hide()` and setting visible to false, but the Form always appears, and I would just like to have it run in the background with a NotifyIcon. Thank you all. Here is my actual code:

```
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.IO.Ports;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using USB_Barcode_Scanner;
using System.Threading;
using System.Timers;
using System.Windows.Threading;
using System.Diagnostics;

namespace USB_Barcode_Scanner_Tutorial___C_Sharp
{
    public partial class Form1 : Form
    {
        private static SerialPort currentPort;
        private delegate void updateDelegate(string txt);

        public Form1()
        {
            InitializeComponent();
            //BarcodeScanner barcodeScanner = new BarcodeScanner(textBox1);
            //barcodeScanner.BarcodeScanned += BarcodeScanner_BarcodeScanned;

            string[] ports = System.IO.Ports.SerialPort.GetPortNames();
            string com_PortName = "COM4";
            int com_BaudRate = 9600;

            currentPort = new SerialPort(com_PortName, com_BaudRate, Parity.None, 8, StopBits.One);
            currentPort.Handshake = Handshake.None;
            currentPort.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived);
            currentPort.ReadTimeout = 1000;
            currentPort.WriteTimeout = 500;

            //Begin communications
            currentPort.Open();
        }

        private void BarcodeScanner_BarcodeScanned(object sender, BarcodeScannerEventArgs e)
        {
            textBox1.Text = e.Barcode;
        }

        private void CodeFunktionenFuerQRCode_ausfuehren(string inhalt)
        {
            if (inhalt.StartsWith("http")) // if a web link is detected
            {
                Process.Start("explorer.exe", inhalt);
            }
            else if (inhalt.StartsWith("helios")) // if a HELiOS link is detected
            {
                Process.Start("explorer.exe", inhalt);
            }
            else // everything else
            {
            }

            #region write the link content into the textbox
            if (textBox2.Text.Length > 0)
            {
                textBox2.Text = "";
            }
            textBox2.Text = inhalt;
            #endregion
        }

        private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            string strFromPort = "";
            try
            {
                strFromPort = currentPort.ReadExisting(); // read data from the port
            }
            catch { }

            // close the port; CDU: must stay open so scanning can continue
            if (!currentPort.IsOpen)
            {
                currentPort.Open(); // open the port (definitions see above)
                System.Threading.Thread.Sleep(100); /// wait briefly so all data can be read from the scanner buffer
                currentPort.DiscardInBuffer(); /// CDU: clear the buffer so scanning can continue
            }
            else
            {
                currentPort.DiscardInBuffer(); /// CDU: clear the buffer so scanning can continue
            }

            BeginInvoke(new updateDelegate(CodeFunktionenFuerQRCode_ausfuehren), strFromPort);
        }
    }
}
```
I'm currently trying to apply an Edge-weighted Spring Embedded Layout algorithm in Cytoscape to a network, following the Materials and Methods from a study. When I apply the Layout, I'm not obtaining the same results. In my case I'm having some nodes pointing outwards and some others highly concentrated in the middle of the graph, but I want the nodes to be equally distributed in the surface. This is what I have: ![What I obtain][1] [1]: https://i.stack.imgur.com/puYsQ.png And this is what I want: ![What I want to have][2] [2]: https://i.stack.imgur.com/dLrRc.png
How can I change the parameters in Cytoscape to obtain this layout?
I have two applications: one is an ASP.NET Core Web API and the other is a Maui Blazor application. I successfully logged into Azure using `PublicClientApplicationBuilder` with my client ID. However, I'm encountering an issue with the access token. I want to utilize the same access token that I obtained and send it to my API project for authentication. Is this achievable? If so, could someone provide me with an example of how to configure the `program.cs` file in the API project to validate the access token?
Validating Access Token in ASP.NET Core Web API project
|c#|azure|microsoft-graph-api|asp.net-core-webapi|
I have a `ChildView` that you can access via `NavigationLink` and that will display in a `NavigationSplitView`'s detail pane.

```
NavigationSplitView {
    ListView()
} detail: {
}
```

And:

```
.navigationDestination(for: ChildModel.self) { post in
    ChildView()
}
```

`ChildView` declares an Environment property for the model context:

```
@Environment(\.modelContext) private var modelContext
```

It also has a SwiftData `@Query` property declared to retrieve all stored DbModel objects:

```
@Query private var dbModel: [DbModel]
```

However, it always returns empty (when that is not true) and I see this warning pop up:

> Set a .modelContext in view's environment to use Query

I have seen https://stackoverflow.com/questions/76878894/query-in-view-with-model-container-attached-shows-error-saying-it-does-not-hav which is similar, but I wanted to follow up for a better understanding. If I follow the advice in the answer above, and drop the @Query to replace it with a `FetchDescriptor` in the `ChildView` constructor:

```
init() {
    let predicate = #Predicate<DbModel> { $0.id == self.id }
    let descriptor = FetchDescriptor<DbModel>(predicate: predicate)
    if let models = try? modelContext.fetch(descriptor) {
        /// do something here
    }
}
```

It crashes:

> Thread 1: Fatal error: 'try!' expression unexpectedly raised an error: SwiftData.SwiftDataError(_error: SwiftData.SwiftDataError._Error.loadIssueModelContainer)

And has an accompanying warning:

> Accessing Environment<ModelContext>'s value outside of being installed on a View. This will always read the default value and will not update.

If instead I refer to a reference of the context that I store after app launch, it works fine:

```
if let models = try? AppController.shared.modelContextReference.fetch(descriptor) {
```

That's progress, but I would appreciate clarification on two things:

1) How can I ensure the `ChildView` has environment access to the model context from a SwiftUI perspective, and for it to return the correct results (with no warning)?
As in without needing to refer to a stored reference inside a class object. 2) I know `@Query` automatically stays up to date every time my data changes, and will reinvoke my SwiftUI view so it stays in sync. However, given I’m being forced to use a `FetchDescriptor` in the view’s constructor, does the same hold true? As in will the view — depending on a `FetchDescriptor` inside the `.init()` and not a `@Query` — also update whenever there are changes detected to the DbModel set? The wider context is I am only interested in `DbModel` results for `ChildView` that are relevant (i.e. share the same id). So I’m also curious whether — [short of having ability to dynamically query][1] — that this will perform similarly to a `@Query` that has a filter defined with no dynamic value check. [1]: https://stackoverflow.com/a/76530446/698971
C#: code that reads from a USB code scanner (COM4) only works with a Form; I need it to run in the background with just a NotifyIcon
|c#|qr-code|barcode-scanner|notifyicon|
I have 2 clusters, both publicly accessible, with the same security groups, but in different public subnets. I can connect to one cluster from local, but not the other. I tried connecting via the cluster IP, tried adding my IP as well as 0.0.0.0/0 to the security groups, and rebooted the cluster after allowing public access; nothing helped.

Edit - Solution: When configuring the cluster, I got to select the subnet group, and I chose the default one, which was a mixture of public and private subnets. So AWS randomly chose a subnet from it for my cluster, which was a private one. That is why I hadn't been able to connect to it from local.
```
const { EventEmitter } = require('events');
const { execFile } = require('child_process');

class ItemGetter extends EventEmitter {
    constructor () {
        super();
        this.on('item', item => this.handleItem(item));
    }

    handleItem (item) {
        console.log('Receiving Data: ' + item);
    }

    getAllItems () {
        for (let i = 0; i < 15; i++) {
            this
                .getItem(i)
                .then(item => this.emit('item', item))
                .catch(console.error);
        }
        console.log('=== Loop ended ===');
    }

    async getItem (item = '') {
        console.log('Getting data:', item);
        return new Promise((resolve, reject) => {
            // execFile takes the arguments as an array; reject instead of
            // throwing, since a throw inside the callback would not reject
            // the promise
            execFile('echo', [String(item)], (error, stdout, stderr) => {
                if (error) {
                    reject(error);
                    return;
                }
                resolve(item);
            });
        });
    }
}

(new ItemGetter()).getAllItems();
```

Your logic first runs the loop, calling all the getItem methods, then outputs '=== Loop ended ===', and only after that do the promise resolutions run. So if you want to get the result of each getItem execution independently of the others, just don't fight the asynchronous logic; frequently the right solution is much simpler than it seems ;)

Note: in this solution you will get the same output, because the loop calling getItem runs faster than the promises with exec. But here each item is handled exactly when its own promise resolves, instead of after awaiting the resolution of all promises. If you want to make the getAllItems method asynchronous too, and await until all results have been handled while still keeping each promise running independently, you should implement your own logic, e.g. using a counter of started getItem calls that increments in the loop and decrements on each promise resolution; this can be less trivial.
So, I recently tried to make a project using the CC1101 wireless module and the [RadioLib](https://github.com/jgromes/RadioLib) library. I tried the CC1101_Receive_Adress and CC1101_Transmit_Adress examples from said library when I noticed that the last symbol in the "Hello world!" message was getting corrupted - it was always replaced with something random, such as "$", "@" or something similar. I also tried sending a readout from a BMI sensor, but the other side did not receive that at all and only showed random symbols. I tried sending the readouts as int, float and byte variables, but it did not help. What could be causing this? Is it more likely a software or a hardware problem? The problems occurred when using an Uno R3 on the transmitting side and an Uno R4 WiFi on the receiving end, with the pinout the same on both sides: Vcc - 3.3V GND - GND MOSI - D11 SClk - D13 MISO - D12 GDO2 - D2 GDO0 (optional, but connected) - D3 CSN - D10 I am not sure if it matters since the name of the module is always the same, but I used [this specific variant](https://www.aliexpress.com/item/1005006440698413.html?spm=a2g0o.productlist.main.47.72497ff7czTHpM&algo_pvid=5d370251-5dea-4290-bf35-31c045851554&algo_exp_id=5d370251-5dea-4290-bf35-31c045851554-23&pdp_npi=4%40dis%21CZK%21257.39%2111.73%21%21%2178.03%213.55%21%40211b612817115494208761030ec04b%2112000037175069114%21sea%21CZ%210%21AB&curPageLogUid=jRATp7SUW13F&utparam-url=scene%3Asearch%7Cquery_from%3A) of the CC1101. If you need any more information, I'll do my best to provide it. When trying to solve this problem I tried changing the sent data to a variable, but that did not get sent at all. I also plan to try replacing the modules, but I currently have no spares.
Hi everyone! I'm just learning Bubble and I have run into some difficulties. The problem is that I need to receive data, process it and send it back, while extracting certain values from the JSON response. Example:
```
{
  "cards": [
    {
      "nmID": 12383295,
      "imtID": 9297853,
      "nmUUID": "018c08a7-5d05-7a9c-9423-80e599047ab8",
      "subjectID": 3137,
      "subjectName": "Урбеч",
      "vendorCode": "nao-5nao-5",
      "brand": "",
      "title": "Title",
      "description": "Some description here",
      "dimensions": {
        "length": 9,
        "width": 9,
        "height": 11
      },
      "sizes": [
        {
          "chrtID": 38470141,
          "techSize": "0",
          "wbSize": "",
          "skus": [
            "4626015349130"
          ]
        }
      ],
      "createdAt": "2020-05-13T01:46:46Z",
      "updatedAt": "2024-02-24T12:41:16.4192Z"
    },
    {
      "nmID": 212400431,
      "imtID": 191822480,
      "nmUUID": "018ddabe-fec7-7476-8dac-05138b28488b",
      "subjectID": 192,
      "subjectName": "Polo",
      "vendorCode": "test",
      "brand": "Brand",
      "title": "Polo",
      "description": "Description here",
      "dimensions": {
        "length": 20,
        "width": 25,
        "height": 21
      },
      "sizes": [
        {
          "chrtID": 339615565,
          "techSize": "XL",
          "wbSize": "52",
          "skus": [
            "2039558028783"
          ]
        }
      ],
      "createdAt": "2024-02-24T10:52:46.832077Z",
      "updatedAt": "2024-02-24T10:52:46.832077Z"
    },
    {
      "nmID": 10617728,
      "imtID": 163219155,
      "nmUUID": "018c08ac-0b85-7c10-85a0-58c2b61dbf7c",
```
So I need to find in the response the data of the one card containing the value nmID 12383295, change "description": "Some description here" to the value from a variable, and send it back. Please advise how to do this in Bubble. I receive the list of products via an API request; I need to find the one I need (for example, with nmID 12383295) and change its description. [enter image description here](https://i.stack.imgur.com/3Lopu.png)
How to save, edit and post API responses from Bubble?
|json|api|bubble.io|
null
I create a Plotly bar chart using the following line
```
fig = px.bar(df_to_print, x="bin_dist", y=metric, color='net_id', barmode="group", text=metric,
             color_discrete_sequence=px.colors.qualitative.Vivid)
```
and got this chart (using Streamlit): [enter image description here](https://i.stack.imgur.com/D8z2O.png) I want to bold the maximum value in each group, something like: [enter image description here](https://i.stack.imgur.com/vOcHd.png) Any ideas? My entire function:

    def _print_production_pyramid_kpis_three_nets_above(production_pyramid_kpis_three_nets_above_df: pd.DataFrame):
        """ printing plotly graphs for 3 nets and above by user pick """
        for object_name in list(production_pyramid_kpis_three_nets_above_df["object"].unique()):
            df_to_print = production_pyramid_kpis_three_nets_above_df[production_pyramid_kpis_three_nets_above_df["object"] == object_name]
            for metric in st.session_state["three_nets_metrics_pick"]:
                if object_name != '4w' and metric in ['3d_recall', '3d_diff', '4w_direct']:
                    continue
                if metric in ['recall', '3d_recall', '4w_direct']:
                    df_to_print[metric] = round(df_to_print[metric] * 100, 2)
                fig = px.bar(df_to_print, x="bin_dist", y=metric, color='net_id', barmode="group", text=metric,
                             color_discrete_sequence=px.colors.qualitative.Vivid)
                if metric in ['recall', '3d_recall', '4w_direct']:
                    fig.update_traces(texttemplate='%{text:.2f}%', textposition='outside')
                else:
                    fig.update_traces(textposition='outside')
                for net_id in df_to_print['net_id'].unique():
                    max_value = df_to_print[df_to_print['net_id'] == net_id][metric].max()
                    max_index = df_to_print[(df_to_print['net_id'] == net_id) & (df_to_print[metric] == max_value)].index
                    # Bold the maximum value bar
                    fig.data[0].marker.line.width[max_index] = 2
                st.write(f'### {object_name} - {metric}')
                st.write(fig.data)
                st.plotly_chart(fig, use_container_width=True)

Trying:

    for net_id in df_to_print['net_id'].unique():
        max_value = df_to_print[df_to_print['net_id'] == net_id][metric].max()
        max_index = df_to_print[(df_to_print['net_id'] == net_id) & (df_to_print[metric] == max_value)].index
        fig.data[0].marker.line.width[max_index] = 2

but got the exception: `TypeError: 'NoneType' object does not support item assignment`
I struggle with finding an elegant way to convert a variable of type `Optional<String[]>` to `Optional<String>` and joining all elements of the given array. Is there an elegant solution for this? ```java Optional<String[]> given = Optional.ofNullable(new String[]{"a", "b"}); Optional<String> joinedString = ....; Assertions.assertThat(joinedString.get()).isEqualTo("ab"); ```
How to turn an Optional of a String array into an Optional of a String?
|java|stream|optional-chaining|
The answer here is that your path to your project likely contains a space: path/to/PycharmProjects/Foo Bar Project/main.py ^ ^ This means you may be able to do things such as install packages through the Terminal, but you'll get the SDK error afterwards. Once you Refactor the project directory (<kbd>SHIFT</kbd> + <kbd>F6</kbd> or right-click on the project directory and select **Refactor**) to remove the spaces in the path, you should be able to run without seeing the SDK error.
I am trying to make a decision tree using the rpart function in R. I have the y variable "outcome" and 4 variables as x. All of them are factors. Every tree I tried to make returns only the first node when I plot it. Here is the code:
```
library(rpart)
library(rpart.plot)

train<-read.csv('train.csv', head=T, sep=",", dec=".")

train$gender<-factor(train$gender)
train$hr<-factor(train$hr)
train$sbp<-factor(train$sbp)
train$dbp<-factor(train$dbp)
train$outcome<-factor(train$outcome)

model <- rpart(outcome~., data=train, method="class")
rpart.plot(model)
```
At `model <-` I also tried other combinations of variables, not just all of them, but again I had the same problem. I also tried using numeric variables and they worked with the "outcome" variable.
Decision tree using rpart for factor returns only the first node
I am currently working on a Spring Boot project with Thymeleaf. The following form should send the data to a REST endpoint:
```html
<form th:action="@{/post/create}" method="post" th:object="${postDto}" accept-charset="UTF-8">
    <div class="mb-3">
        <label for="title" class="form-label">Titel</label>
        <input type="text" class="form-control" id="title" name="title" th:field="*{title}" required>
    </div>
    <div class="mb-3">
        <label for="content" class="form-label">Beschreibung</label>
        <textarea class="form-control" id="content" name="content" rows="4" th:field="*{content}" required></textarea>
    </div>
    <div class="mb-3">
        <label for="event" class="form-label">Event</label>
        <select class="form-control" id="event" name="event" th:field="*{eventId}">
            <option th:value="${0}">Kein Event</option>
            <option th:each="event : ${events}" th:value="${event.id}" th:text="${event.name} + ' - ' + ${event.getClassName()}"></option>
        </select>
    </div>
    <div class="mb-3">
        <label for="topic" class="form-label">Thema</label>
        <select class="form-control" id="topic" name="topic" th:field="*{topicId}">
            <option th:value="${0}">Kein Thema</option>
            <option th:each="topic : ${topics}" th:value="${topic.id}" th:text="${topic.name}"></option>
        </select>
    </div>
    <div class="mb-3">
        <label for="visibility" class="form-label">Sichtbarkeit</label>
        <select class="form-control" id="visibility" name="visibility" th:field="*{visibility}">
            <option th:value="${0}">Für alle sichtbar</option>
            <option th:each="role : ${roles}" th:value="${role.getVisibilityScore()}" th:text="${role.getVisibilityScore()} + ' - ' + ${role.name}"></option>
        </select>
    </div>
    <div class="modal-footer">
        <input type="hidden" name="_csrf" value="${_csrf.token}"/>
        <button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Schließen</button>
        <button type="submit" class="btn btn-primary">Absenden</button>
    </div>
</form>
```
I tried setting the following settings in the `application.properties`:
```
server.servlet.encoding.charset=UTF-8
server.servlet.encoding.enabled=true
server.servlet.encoding.force=true
spring.thymeleaf.encoding=UTF-8
```
The REST endpoint is receiving the data and should create a Post object in the ServiceImpl:
```java
@PostMapping("/post/create")
public String createPost(@ModelAttribute @Valid PostDto postDto, BindingResult bindingResult, Model model) {
    if (bindingResult.hasErrors()) {
        model.addAttribute(ERROR_MESSAGE_ATTRIBUTE, "Es gab Probleme, den Beitrag anzulegen. Versuche es erneut.");
        return TEMPLATE_LOCATION;
    }
    try {
        postService.savePost(postDto);
        model.addAttribute(SUCCESS_MESSAGE_ATTRIBUTE, "Beitrag wurde erstellt!");
    } catch (RuntimeException e) {
        model.addAttribute(ERROR_MESSAGE_ATTRIBUTE, e.getMessage());
    }
    return TEMPLATE_LOCATION;
}
```
ServiceImpl:
```java
@Override
public void savePost(PostDto postDto) {
    Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
    User user = userService.findByMailAddress(authentication.getName());
    if (!permissionService.canCreatePosts(user)) {
        throw new InsufficientPermissionsException(INSUFFICIENT_PERMISSIONS_EXCEPTION);
    }
    Post post = Post.builder()
            .title(postDto.getTitle())
            .content(postDto.getContent())
            .visibility(postDto.getVisibility())
            .event(eventService.getById(postDto.getEventId()))
            .topic(topicService.getById(postDto.getTopicId()))
            .creator(user)
            .creationDate(LocalDateTime.now())
            .build();
    postRepository.save(post);
    topicService.mailToSubscribers(post.getTopic());
}
```
Using Spring Data JPA, the object should get persisted in a database so users can read the post on the site. After submitting the form, I logged the PostDto object and it is already messed up: characters like "ä", "ö", "ü" and others are replaced with "?".
PostDto:
```java
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class PostDto {

    @NotNull
    @NotEmpty
    private String title;

    @Lob
    @Column(length = 100000)
    @NotNull
    @NotEmpty
    private String content;

    private Long eventId;

    private Long topicId;

    @Min(0)
    @Max(100)
    @NotNull
    private Long visibility;
}
```
I tried everything I found on the web and used ChatGPT, but found no solution.
I'm using stringApp (v2.0.2) with Cytoscape (Version: 3.10.1). When I put in a gene name (e.g., FOS, KDR, ...) using 'STRING: protein query', a message box says 'Your query returned no results.' A network diagram used to appear normally, and I could get the data I wanted, until December 2023.
The resource `google_service_account_iam_member` allows you to grant members access to use a given service account. In [this example][1], the SA is granted the ability to use the default GCE SA:

    data "google_compute_default_service_account" "default" {
    }

    resource "google_service_account" "sa" {
      account_id   = "my-service-account"
      display_name = "A service account that Jane can use"
    }

    # Allow SA service account use the default GCE account
    resource "google_service_account_iam_member" "gce-default-account-iam" {
      service_account_id = data.google_compute_default_service_account.default.name
      role               = "roles/iam.serviceAccountUser"
      member             = "serviceAccount:${google_service_account.sa.email}"
    }

On the other side, the resource `google_project_iam_member` grants identities access to all resources in the project. With this example you will grant admin access to all Google Storage buckets:

    resource "google_service_account" "sa" {
      account_id   = "my-service-account"
      display_name = "A service account that Jane can use"
    }

    resource "google_project_iam_member" "project" {
      project = "your-project-id"
      role    = "roles/storage.admin"
      member  = "serviceAccount:${google_service_account.sa.email}"
    }

The last option that you mentioned in the update is `google_compute_*_iam_member`, which grants access at the resource level. That's why, when you run this code sample, you will grant admin access to only one Google Storage bucket:

    resource "google_service_account" "sa" {
      account_id   = "my-service-account"
      display_name = "A service account that Jane can use"
    }

    resource "google_storage_bucket" "example" {
      name     = "example"
      location = "US"
    }

    resource "google_storage_bucket_iam_member" "member" {
      bucket = google_storage_bucket.example.name
      role   = "roles/storage.admin"
      member = "serviceAccount:${google_service_account.sa.email}"
    }

To follow least-privilege role assignment, you should grant access to the specific Google service with the resource-level `google_*_iam_member` resources.
You can find these resources in their specific category in the Terraform registry. For example, the resource for the storage bucket is placed under `Cloud Storage` -> `Resources` -> [`google_storage_bucket_iam`][2]: [![enter image description here][3]][3] [1]: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_service_account_iam#google_service_account_iam_member [2]: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/storage_bucket_iam [3]: https://i.stack.imgur.com/0TiO0.png
Why are my wirelessly sent data getting corrupted?
|corruption|wireless|
null
### Answer 2 If you comment out the two vispy lines in `update_data` and optionally add some prints, you'll notice that the GUI still acts the same. That is it freezes. So this has nothing to do with vispy. As a general rule for realtime apps, any non-visualization operations should be in the separate thread (the data source). This includes your update interval logic. Put another way, only send data when the visualization should be updated. In terms of the Qt event loop, you're basically dumping thousands of events onto the queue of tasks Qt has to process. Qt can't process any new UI events until it finishes handling of your data events. This basically ends up being a `for` loop in python over thousands of events. The solution, at least to start, is to handle all your update interval stuff inside the data source thread. Only emit the signal when the data is ready to be displayed. Plus, 10000 points per second is a lot of data to view in realtime if that's your goal. You may need to work on averaging or subsetting the data to reduce how often you send updated data. ### Answer 1 It looks like this is based on the [vispy realtime data examples](https://vispy.org/gallery/scene/realtime_data/ex03b_data_sources_threaded_loop.html), right? When I wrote these I don't remember how much testing I did with a non-sleeping data source so there is always a chance that this code does not behave as I expect. You mention this isn't updating in the "specified intervals", what do you mean by that? It could be that you are flooding the GUI with so many update events that it isn't able to update before you see the final result. Without your sleep (correct me if I'm wrong) you're basically going from the first iteration to the last iteration as fast as the CPU can go, right? In this case, what would you expect to see? In a more realistic example the data source creating the data would take some actual time, but your example creates all the data instantly. 
Are you sure the application is hanging or is it just done producing all the data? If I'm wrong about all the above, then one difference I see is your use of `deque` which I have very little experience with. I'm wondering if you see any difference in behavior if you instead make a new numpy array inside your data source for every iteration.
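To make the "only emit when the visualization should be updated" advice concrete, here is a minimal framework-agnostic sketch. The class name, the `push` API, and the ~30 Hz default interval are my own choices rather than anything from the vispy examples; in a real Qt worker, `emit_fn` would be the data signal's `emit`:

```python
import time

class ThrottledEmitter:
    """Forward data to the GUI at most once every `interval` seconds."""

    def __init__(self, emit_fn, interval=1 / 30):  # ~30 GUI updates per second
        self.emit_fn = emit_fn    # e.g. self.new_data.emit in a Qt worker
        self.interval = interval
        self._last = None

    def push(self, data):
        """Call this for every produced chunk; emits only when an update is due."""
        now = time.monotonic()
        if self._last is None or now - self._last >= self.interval:
            self._last = now
            self.emit_fn(data)
            return True
        return False  # dropped; could instead be averaged into the next emit
```

In the data-source thread you would call `push(new_chunk)` inside the production loop, so the GUI receives at most ~30 events per second no matter how fast data is produced, and the Qt event queue never backs up.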
I am new to creating compilers and interpreters and as an exercise, I created a handwritten lexer in Java that spits out tokens looking like the following. public Token(TokenType type, String lexeme, Object literal, int line) { this.type = type; this.lexeme = lexeme; this.literal = literal; this.line = line; } Now I want to create a parser using ANTLR, sadly I am running into some issues when trying to link my lexer with the ANTLR-generated parser. I have tried to implement a TokenSource (this is an ANTLR interface see: <https://www.antlr.org/api/Java/org/antlr/v4/runtime/TokenSource.html>), this can be used by a CharStream that the parser can use. My first question: Is this a good approach or are there better ways to link a custom lexer with an ANTLR-generated parser? My second question: ANTLR token types are integers, so the interface wants me to implement a getType() that returns an int. My token types are in an enum (so they are integers) but how do I link these integers/types with the ones in the ANTLR parser grammar (so they both see the type as the same type)?
I got the same error in Visual Studio, then I decided to switch to Rider, which told me the build was failing because the Live Coding console was running (even though I didn't have UE open). I restarted my computer to kill the Live Coding console and managed to build.
I am working on a virtualised data grid for my application. I use transform: translateY for the table offset on scroll to make table virtualised. I developed all the functionality in React 17 project, but when migrated to React 18 I found that the data grid behaviour changed for the worse - the data grid started to bounce on scroll. I prepared the minimal representing code extract, which shows my problem. To assure that the code is the same for React 17 and React 18, I change only the import of ReactDOM from 'react-dom/client' to 'react-dom' (which is of course incorrect, since the latter is deprecated) in my index.tsx file. React 18: [enter image description here][1] This is the code: index.html ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8" /> <title>Virtuailsed table</title> </head> <body> <noscript>You need to enable JavaScript to run this app.</noscript> <div id="root"></div> </body> </html> ``` index.js ``` // import ReactDOM from "react-dom"; import ReactDOM from "react-dom/client"; import { useState } from "react"; import "./styles.css"; let vendors = []; for (let i = 0; i < 1000; i++ ){ vendors.push({ id: i, edrpou: i, fullName: i, address: i }) } const scrollDefaults = { scrollTop: 0, firstNode: 0, lastNode: 70, }; function App() { const [scroll, setScroll] = useState(scrollDefaults); const rowHeight = 20; const tableHeight = rowHeight * vendors.length + 40; const handleScroll = (event) => { const scrollTop = event.currentTarget.scrollTop; const firstNode = Math.floor(scrollTop / rowHeight); setScroll({ scrollTop: scrollTop, firstNode: firstNode, lastNode: firstNode + 70, }); }; const vendorKeys = Object.keys(vendors[0]); return ( <div style={{ height: "1500px", overflow: "auto" }} onScroll={handleScroll} > <div className="table-fixed-head" style={{ height: `${tableHeight}px` }}> <table style={{transform: `translateY(${scroll.scrollTop}px)`}}> <thead style={{ position: "relative" }}> <tr> {vendorKeys.map((key) => <td>{key}</td>)} </tr> 
        </thead>
        <tbody >
          {vendors.slice(scroll.firstNode, scroll.lastNode).map((item) => (
            <tr style={{ height: rowHeight }} key={item.id}>
              {vendorKeys.map((key) => <td><div className="data">{item[key]}</div></td>)}
            </tr>
          ))}
        </tbody>
      </table>
    </div>
    </div>
  );
}

// const rootElement = document.getElementById("root");
// ReactDOM.render(<App />, rootElement);

const root = ReactDOM.createRoot(
  document.getElementById('root')
);
root.render(
  <App />
);
```
styles.css
```
* {
  padding: 0;
  margin: 0
}

.table-fixed-head thead th{
  background-color: white;
}

.row {
  line-height: 20px;
  background: #dafff5;
  max-width: 200px;
  margin: 0 auto;
  box-shadow: 0 0 1px 0 rgba(0, 0, 0, 0.5);
}

.data{
  width: 150px;
  white-space: nowrap;
  overflow: hidden;
  margin-right: 20px;
}
```
I have spent 1.5 days trying to find the reason why the table bounces on scroll in React 18, without result. BTW, `overscroll-behavior: none` doesn't work.

[1]: https://i.stack.imgur.com/AHoNg.gif
I want to create a login system in a React Native app using Firebase Auth. There I have used Validate.js for the frontend validation, but I am stuck on how to implement the confirm password field: I am not able to validate that the Confirm Password field is the same as the Password field. I have pasted the code below.

**Signup.js**
```js
export default function Signup({ navigation }) {
  const [isLoading, setIsLoading] = useState(false);
  const [formState, dispatchFormState] = useReducer(reducer, initialState);

  const inputChangedHandler = useCallback(
    (inputId, inputValue) => {
      const result = validateInput(inputId, inputValue);
      dispatchFormState({ inputId, validationResult: result, inputValue });
    },
    [dispatchFormState]
  );

  const signupHandler = () => {
    /// code
  };

  return (
    <SafeAreaProvider>
      <View>
        <Image source={appIcon} />
        <Text> Getting Started</Text>
        <Text> Create an account to continue !</Text>
        <View>
          <Inputs
            id="username"
            placeholder="Username"
            errorText={formState.inputValidities["username"]}
            onInputChanged={inputChangedHandler}
          />
          <Inputs
            id="email"
            placeholder="Enter your email"
            errorText={formState.inputValidities["email"]}
            onInputChanged={inputChangedHandler}
          />
          <InputsPassword
            id="password"
            placeholder="Password"
            errorText={formState.inputValidities["password"]}
            onInputChanged={inputChangedHandler}
          />
          <InputsPassword
            id="ConfirmPassword"
            placeholder="ConfirmPassword"
            errorText={formState.inputValidities["password"]}
            onInputChanged={inputChangedHandler}
          />
        </View>
        <Buttons title="SIGN UP" onPress={signupHandler} isLoading={isLoading} />
        <View>
          <Text>Already have an account?</Text>
          <TouchableOpacity
            onPress={() => {
              navigation.push("Login");
            }}>
            <Text>Log In</Text>
          </TouchableOpacity>
        </View>
        <StatusBar style="auto" />
      </View>
    </SafeAreaProvider>
  );
}
```
**Validate.js**
```js
import { validate } from "validate.js";

export const validateString = (id, value) => {
  const constraints = {
    presence: {
      allowEmpty: false,
    },
  };
  if (value !== "") {
constraints.format = { pattern: ".+", flags: "i", msg: "Value can't be blank.", }; } const validationResult = validate({ [id]: value }, { [id]: constraints }); return validationResult && validationResult[id]; }; export const validateEmail = (id, value) => { const constraints = { presence: { allowEmpty: false, }, }; if (value !== "") { constraints.email = true; } const validationResult = validate({ [id]: value }, { [id]: constraints }); return validationResult && validationResult[id]; }; export const validatePassword = (id, value) => { const constraints = { presence: { allowEmpty: false, }, }; if (value !== "") { constraints.length = { minimum: 6, msg: "must be atleast 6 characters", }; } const validationResult = validate({ [id]: value }, { [id]: constraints }); return validationResult && validationResult[id]; }; ``` **formActions.js** ```js import { validateConfirmPassword, validateEmail, validatePassword, validateString, } from "../validation.js"; export const validateInput = (inputId, inputValue) => { if (inputId === "username") { return validateString(inputId, inputValue); } if (inputId === "email") { return validateEmail(inputId, inputValue); } if (inputId === "password" || inputId === "confirmPassword") { return validatePassword(inputId, inputValue); } }; ``` **formReducer.js** ```js export const reducer = (state, action) => { const { validationResult, inputId, inputValue } = action; const updatedValues = { ...state.inputValues, [inputId]: inputValue, }; const updatedValidities = { ...state.inputValidities, [inputId]: validationResult, }; let updatedFormIsValid = true; for (const key in updatedValidities) { if (updatedValidities[key] !== undefined) { updatedFormIsValid = false; break; } } return { inputValues: updatedValues, inputValidities: updatedValidities, formIsValid: updatedFormIsValid, }; }; ```
[enter image description here][1] I created a return icon while designing, but this text appeared on it. What is this text and how do I solve it? There does not seem to be an error in my code; I searched the internet but could not find any results, and I don't know how to investigate further.

[1]: https://i.stack.imgur.com/V1z9V.png

    Container(
      alignment: Alignment.center, // back icon
      height: Get.height * 0.0494,
      width: Get.width * 0.1,
      decoration: BoxDecoration(
        color: const Color(0xff2a2a2a),
        borderRadius: BorderRadius.circular(10),
      ),
      child: SuperTextIconButton(
        'Back',
        onPressed: () => Get.back(),
        getIcon: Icons.arrow_back_ios_new,
        buttonColor: const Color(0xff7ED550),
      ),
    )
A better solution would be to call this function after scanning and before calling another activity:

    public void stopEmdkManager() {
        if (this.emdkManager != null) {
            this.emdkManager.release();
            this.emdkManager = null;
        }
    }
This is what I want to achieve: I have endpoints of the form `/entity1/<id>/entity2/<id>/stuff`, i.e.:
```
let route = warp::path("entity1")
    .and(warp::path::param::<u64>())
    .and(warp::path("entity2"))
    .and(warp::path::param::<u64>())
    .and(warp::path("stuff"))
    .and(warp::path::end())
    .and(with_auth(db.clone())) // Authenticates user using Authorization header
    .and_then(handler_function);
```
I have several similar endpoints that load entity2 from the database, reject the request if it does not match entity1, and then do stuff with the loaded entity. My idea is to create a filter that loads the entity from the database and passes it to the handler function (rejecting the request during validation if necessary) with something like `.and(with_entity2(db.clone()))` and
```
pub fn with_entity2(db: &Db) -> impl Filter<Extract = (Entity2,), Error = warp::Rejection> + Clone {
    // Get path params here?
}
```
I guess I could use `FullPath`, but I would need to parse it myself. Is it possible to get the path params? Or is there a better option? I wouldn't need the extracted params anymore, but I don't want to do database queries before the user is authenticated and the whole path matches.
How to use extracted path params in filters in warp / rust?
|rust|filter|warp|
null
Maybe it is overridden by other CSS code; use
```css
#learndash_profile .profile_info .profile_avatar {
   display: none !important;
}
```
From the [JSON Schema reference][1], regarding array validation: > While the `items` schema must be valid for every item in the array, the `contains` schema only needs to validate against one or more items in the array. There are some examples in the docs where you can see that the `items` validation restricts the array to have all values compliant with the rule. But in the case of `contains`, only one *valid* value is enough to make it compliant. This explains why the errors can be detailed in the particular case of `items`. [1]: https://json-schema.org/understanding-json-schema/reference/array#contains
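As a sketch of the semantics (plain Python standing in for a validator, not a real JSON Schema implementation): `items` behaves like `all(...)` over the array, while `contains` behaves like `any(...)`:

```python
def check_items(arr, pred):
    """`items`-style: every element must satisfy the subschema."""
    return all(pred(x) for x in arr)

def check_contains(arr, pred):
    """`contains`-style: at least one element must satisfy it."""
    return any(pred(x) for x in arr)

# JSON numbers exclude booleans, which Python counts as ints
is_number = lambda x: isinstance(x, (int, float)) and not isinstance(x, bool)

assert check_items([1, 2, 3], is_number)          # all numbers -> valid
assert not check_items(["a", 1], is_number)       # "a" breaks `items`
assert check_contains(["a", 1], is_number)        # one number satisfies `contains`
assert not check_contains(["a", "b"], is_number)  # nothing matches -> invalid
```

This also shows why `items` can report per-element errors: every failing element is individually in violation, whereas `contains` only fails as a whole.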
ANTLR4 How to link custom lexer with ANTLR generated parser
|java|parsing|antlr|antlr4|lexer|
You can see the difference if and only if the function actually needs to do something, either to catch the exception or to call a destructor, when an exception is thrown. https://godbolt.org/z/PrqEeKrqP Here's an example. If every function **called** in a block is marked as `noexcept`, or if the **calling** function itself is `noexcept`, the compiler can safely omit all exception handling logic. Otherwise, a hidden `catch` block is generated to call destructors. Note that I compile the example code with the `-shared` flag to avoid inlining.
I don't know enough about the default trigger to comment on #1 To answer #2 I would consider using an event grid event. This way there's no polling and events are received only when the blob is created. So setting up a new function against an existing storage container would imply an invocation only for new blobs that are subsequently added. Noting that event grid guarantees "at least once delivery" (due to possibility of transient faults) so you do need to account for that. REF https://learn.microsoft.com/en-us/azure/azure-functions/storage-considerations?tabs=azure-cli#trigger-on-a-blob-container
Allow me to start with a basic disclaimer: I always use VMs to run more complex server-like applications. In this example I run minikube on a VM, as I want to play around with Kubernetes a bit. Once I'm done, I kill the VM, or back it up somewhere external to get rid of the stuff quickly. Installing this on my local OS is not a viable option to me. Now to the question/issue I want to resolve. I have a small Spring Boot Service, nothing worth mentioning. I want to deploy that service into a Pod. Following online documentations and tutorials, I have setup my VM on Debian 12 with Docker, Helm and Minikube. I used a kubernetes/helm plugin for my IDE to have all the necessary config files generated, and followed a simple onboarding tutorial to get an idea of how the basics work. My issue arises with my desire to have Minikube on a VM, instead of having it on the base client. I fail to build the docker image on the VM through Maven. What works: - I can build the .jar-File - I can copy the dockerfile and the .jar-File to the VM using Maven-Antrun-Plugin with SCP - I can build the image manually on the VM using "docker build -t name ." 
- I can use SSHEXEC to create a file through the touch command or a directory through mkdir

What doesn't work:

- Using the antrun plugin for SSHEXEC to execute the above command with the proper path results in the proper terminal feedback, but not in a created image
- I tested the same through a shell script "buildImage.sh", which runs the command in the proper folder, and had SSHEXEC just run the script, yielding the same results

I use the following execution (removed personal info) to run the sshexec
```
<execution>
    <id>builddocker</id>
    <phase>pre-integration-test</phase>
    <goals>
        <goal>run</goal>
    </goals>
    <configuration>
        <target>
            <sshexec host="xxx"
                     username="xxx"
                     password="xxx"
                     failonerror="true"
                     trust="true"
                     timeout="120000"
                     usepty="true"
                     command="docker build -t xxx ${kubernetes.image.path}"/>
        </target>
    </configuration>
</execution>
```
This is the first setup. The setup where I execute the shell script is a bit more convoluted, but works the same. The only difference, and why I tested it, is that the docker command is executed in the same working directory where the dockerfile is located. My original hypothesis was that SSHEXEC closes the connection before the command finishes; the docker feedback in the execution console disproves this, though. There are no error messages from docker. Docker finishes with "Writing image sha256:" and "naming to docker.io/library/xxx". But as soon as I run **docker images ls** or **minikube images ls**, the image isn't there. I also triple-checked that maven uses the same user as I use on my ssh terminal to check the results. Right now I'm a bit out of ideas, so I thought I'd open a question here to see if someone else has an idea. I also never rule out that I missed something really silly :) PS: If there is another approach to deploy the java/docker stuff on the VM (meaning better than maven) I'm happy to hear it too. Maven is just "the most obvious answer" when I think of automatic builds in Java.
```vba
Sub DeleteRowsonCriteria()

    Dim lastRow As Long, dataRow As Long
    Dim prodTran As String, prodOIS As String

    lastRow = ActiveSheet.UsedRange.SpecialCells(xlCellTypeLastCell).Row

    For dataRow = lastRow To 3 Step -1
        prodTran = Range("A" & dataRow).Text
        prodOIS = Range("AA" & dataRow).Text
        If prodTran = "Ordered" Then
            Rows(dataRow).Delete
        ElseIf prodTran = "Cancelled" Then
            Rows(dataRow).Delete
        ElseIf prodOIS = "Cancelled" Then
            Rows(dataRow).Delete
        ElseIf prodOIS = "Refund" Then
            Rows(dataRow).Delete
        End If
    Next dataRow

End Sub
```
I'm currently trying to deploy a Next.js app on GitHub Pages using GitHub Actions, but I get a 404 error even after it successfully deploys. I've looked around a bunch of similarly named questions and am having trouble figuring this out.

Here is my GitHub repo: https://github.com/Mctripp10/mctripp10.github.io

Here is my website: https://mctripp10.github.io

I used the *Deploy Next.js site to Pages* workflow that GitHub provides. Here is the `nextjs.yml` file:

```lang-yaml
# Sample workflow for building and deploying a Next.js site to GitHub Pages
#
# To get started with Next.js see: https://nextjs.org/docs/getting-started
#
name: Deploy Next.js site to Pages

on:
  # Runs on pushes targeting the default branch
  push:
    branches: ["dev"]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Detect package manager
        id: detect-package-manager
        run: |
          if [ -f "${{ github.workspace }}/yarn.lock" ]; then
            echo "manager=yarn" >> $GITHUB_OUTPUT
            echo "command=install" >> $GITHUB_OUTPUT
            echo "runner=yarn" >> $GITHUB_OUTPUT
            exit 0
          elif [ -f "${{ github.workspace }}/package.json" ]; then
            echo "manager=npm" >> $GITHUB_OUTPUT
            echo "command=ci" >> $GITHUB_OUTPUT
            echo "runner=npx --no-install" >> $GITHUB_OUTPUT
            exit 0
          else
            echo "Unable to determine package manager"
            exit 1
          fi
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: ${{ steps.detect-package-manager.outputs.manager }}
      - name: Setup Pages
        uses: actions/configure-pages@v4
        with:
          # Automatically inject basePath in your Next.js configuration file and disable
          # server side image optimization (https://nextjs.org/docs/api-reference/next/image#unoptimized).
          #
          # You may remove this line if you want to manage the configuration yourself.
          static_site_generator: next
      - name: Restore cache
        uses: actions/cache@v4
        with:
          path: |
            .next/cache
          # Generate a new cache whenever packages or source files change.
          key: ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}-${{ hashFiles('**.[jt]s', '**.[jt]sx') }}
          # If source files changed but packages didn't, rebuild from a prior cache.
          restore-keys: |
            ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}-
      - name: Install dependencies
        run: ${{ steps.detect-package-manager.outputs.manager }} ${{ steps.detect-package-manager.outputs.command }}
      - name: Build with Next.js
        run: ${{ steps.detect-package-manager.outputs.runner }} next build
      - name: Static HTML export with Next.js
        run: ${{ steps.detect-package-manager.outputs.runner }} next export
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: ./out

  # Deployment job
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
```

I got this on the build step:

```lang-none
Route (app)                              Size     First Load JS
┌ ○ /_not-found                          875 B          81.5 kB
├ ○ /pages/about                         2.16 kB        90.2 kB
├ ○ /pages/contact                       2.6 kB         92.5 kB
├ ○ /pages/experience                    2.25 kB        90.3 kB
├ ○ /pages/home                          2.02 kB        92 kB
└ ○ /pages/projects                      2.16 kB        90.2 kB
+ First Load JS shared by all            80.6 kB
  ├ chunks/472-0de5c8744346f427.js       27.6 kB
  ├ chunks/fd9d1056-138526ba479eb04f.js  51.1 kB
  ├ chunks/main-app-4a98b3a5cbccbbdb.js  230 B
  └ chunks/webpack-ea848c4dc35e9b86.js   1.73 kB

○  (Static)  automatically rendered as static HTML (uses no initial props)
```

Full image: [Build with Next.js][1]

I read in https://stackoverflow.com/questions/58039214/next-js-pages-end-in-404-on-production-build that perhaps it has something to do with having sub-folders inside the `pages` folder, but I'm not sure how to fix that, as I wasn't able to get it to work without sub-foldering `page.js` files for each page.

EDIT: Here is my `next.config.js` file:

```
/** @type {import('next').NextConfig} */
const nextConfig = {
    basePath: '/pages',
    output: 'export',
}

module.exports = nextConfig
```

[1]: https://i.stack.imgur.com/wSlPq.png
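One detail worth checking (a hypothesis, not a confirmed fix): `mctripp10.github.io` is a user site served from the domain root, so `basePath: '/pages'` tells the exported site to expect every page and asset under `/pages/...`, which would leave nothing at `https://mctripp10.github.io/` itself. A sketch of the config without that setting, assuming the app should live at the root (the `configure-pages` step in the workflow above injects the correct `basePath` automatically anyway):

```
/** @type {import('next').NextConfig} */
const nextConfig = {
    // No basePath: user/organization Pages sites are served from the domain root.
    output: 'export',
}

module.exports = nextConfig
```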
I create a Plotly bar chart using the next line

```
fig = px.bar(df_to_print, x="bin_dist", y=metric, color='net_id',
             barmode="group", text=metric,
             color_discrete_sequence=px.colors.qualitative.Vivid)
```

and got this chart (using Streamlit):

[enter image description here](https://i.stack.imgur.com/D8z2O.png)

I want to bold the maximum value in each group, something like:

[enter image description here](https://i.stack.imgur.com/vOcHd.png)

Any ideas?
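One possible approach, since Plotly renders bar text labels with a subset of HTML (including `<b>` tags): precompute a label column that wraps each per-group maximum in `<b></b>` and pass that column as `text`. A minimal sketch with toy data; the column names `bin_dist`, `net_id`, and `metric` are taken from the question's `px.bar` call:

```python
import pandas as pd

# Toy data shaped like the question's frame.
df = pd.DataFrame({
    "bin_dist": ["0-10", "0-10", "10-20", "10-20"],
    "net_id":   ["a", "b", "a", "b"],
    "metric":   [1.2, 3.4, 5.6, 2.1],
})

# Per-row maximum of its bin_dist group, then wrap the winners in <b></b>.
group_max = df.groupby("bin_dist")["metric"].transform("max")
df["label"] = df["metric"].astype(str)
df.loc[df["metric"] == group_max, "label"] = "<b>" + df["label"] + "</b>"

print(df["label"].tolist())  # → ['1.2', '<b>3.4</b>', '<b>5.6</b>', '2.1']

# Then pass the label column instead of the raw metric:
# fig = px.bar(df, x="bin_dist", y="metric", color="net_id",
#              barmode="group", text="label",
#              color_discrete_sequence=px.colors.qualitative.Vivid)
```

Ties would all be bolded with this equality test; that may or may not be the behavior you want.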