> I wrote the below bash shell script to check whether the input value is a character string or a number (using a mathematical function):
>
>     #!/bin/bash
>     uniq_value=$1
>     if `$(echo "$uniq_value / $uniq_value" | bc)` ; then
>         echo "Given value is number"
>     else
>         echo "Given value is string"
>     fi
>
> The execution result is as follows:
>
>     $ sh -x test.sh abc
>     + uniq_value=abc
>     +++ echo 'abc / abc'
>     +++ bc
>     Runtime error (func=(main), adr=5): Divide by zero
>     + echo 'Given value is number'
>     Given value is number
>
> There is an error like this: "Runtime error (func=(main), adr=5): Divide by zero". Can anyone please suggest how to rectify this error?
>
> The expected result for the input "abc123xy" should be "Given value is string".
> The expected result for the input "3.045" should be "Given value is number".
> The expected result for the input "6725302" should be "Given value is number".
>
> After this I will assign a series of values to the "uniq_value" variable in a loop, hence getting the output from this script is very important.
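For reference, the expected classification above amounts to testing whether the whole input matches a numeric pattern rather than dividing by it. Here is a quick sketch (in Python rather than bash, purely to illustrate the expected outputs; the helper name `classify` is made up here):

```python
import re

def classify(value: str) -> str:
    # Accept integers and decimals such as "6725302" and "3.045";
    # anything else (e.g. "abc123xy") counts as a string.
    if re.fullmatch(r"\d+(\.\d+)?", value):
        return "Given value is number"
    return "Given value is string"

for v in ("abc123xy", "3.045", "6725302"):
    print(v, "->", classify(v))
```

The same idea carries over to bash with a pattern test instead of `bc`, which also avoids the divide-by-zero path entirely.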
Look at this:

    for (i = 0; i < N; i += 8)
        for (j = 0; j < M; j += 4) {
            if (j == i || j == i + 4) { /* same block */

For an 8x8 matrix both `N` and `M` are 8, so it's like:

    for (i = 0; i < 8; i += 8)
        for (j = 0; j < 8; j += 4) {
            if (j == i || j == i + 4) { /* same block */

So which values will `i` and `j` take?

    i==0, j==0
    i==0, j==4
    // Now the inner loop ends as j becomes 8 due to j += 4
    // Now the outer loop ends as i becomes 8 due to i += 8

So the `if` statement will only be executed for (i,j) as (0,0) and (0,4). In both cases the `if` condition will be true. As the last statement in the body of the `if` is a `continue`, the code will never reach:

    for (i1 = i; i1 < i + 8; i1++)
        for (j1 = j; j1 < j + 4; j1++)
            B[j1][i1] = A[i1][j1];

In other words - your algorithm is wrong.
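The pair enumeration above can be checked mechanically; here is a small sketch (in Python, mirroring the C loop bounds) that lists every (i, j) pair the loops visit:

```python
# Mirror the C loops: for (i = 0; i < 8; i += 8) / for (j = 0; j < 8; j += 4)
pairs = [(i, j) for i in range(0, 8, 8) for j in range(0, 8, 4)]
print(pairs)  # [(0, 0), (0, 4)]

# Both pairs satisfy the "same block" test (j == i || j == i + 4),
# so with a `continue` at the end of the if-body, the transpose loops
# after it are never reached.
same_block = [(i, j) for (i, j) in pairs if j == i or j == i + 4]
print(same_block == pairs)  # True
```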
The thing is that I need to create a table and export it as a photo. There are some elements that I need to be aligned to the right, but I want them to have some margin with respect to the end of the cell. What I did is to convert these numbers to a string, and then add a blank space at the end (representing the margin):

```
coste_total = 1234567.89
cadena = "{:,.2f}".format(coste_total)
print(cadena)  # Output: 1,234,567.89
```

After I do this, I put it in the Excel file and add the margin:

```
ws.append((row[1]['Anyo'], row[1]['Mes'], coste_total + ' ', termino_energia + ' ', ter_pot + ' ', exceso + ' ', resto + ' '))
```

However, when I do this and export the file, I get the corner marked by Excel's green error-checking markers, because it detects that the values are numbers, and thus the exported image has the markers too:

![Error marks](https://i.stack.imgur.com/D40yk.png)

My question is if there is any other way to leave a margin, or if there is a way to ignore these errors via openpyxl. Thanks
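A side note on the padding step itself: whatever is done about the markers, the margin is just string assembly, and the space should be appended to the *formatted string*, not to the raw number. As written above, `coste_total + ' '` would raise a `TypeError` if `coste_total` is still a float. A minimal sketch of that formatting step (the helper name `with_margin` is made up here):

```python
def with_margin(value: float) -> str:
    """Format a number for display and append a trailing space as a margin.

    Appending the space to the formatted string (not the raw float)
    avoids a TypeError and guarantees the cell receives text.
    """
    return "{:,.2f}".format(value) + " "

print(repr(with_margin(1234567.89)))  # '1,234,567.89 '
```

An alternative worth trying, so the cells can stay numeric, is right-aligning with an indent via openpyxl's cell styles (e.g. `openpyxl.styles.Alignment(horizontal='right', indent=1)`), which gives a visual margin without converting numbers to text.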
GitLab CI Pipeline Incorrectly Triggers on All Branches Despite Specific Workflow Rules
|git|gitlab|gitlab-ci|
I am developing a simple test web application with registration and the ability to log in. I try to add a new row to the database via Postman using this POST request:

```
https://localhost:7239/api/Auth/Register
```

But I get this error:

> System.InvalidOperationException: Unable to resolve service for type 'WebApplication1.Data.AppDbContext' while attempting to activate 'WebApplication1.Controllers.AuthController'.

```
at Microsoft.Extensions.DependencyInjection.ActivatorUtilities.ThrowHelperUnableToResolveService(Type type, Type requiredBy)
at lambda_method13(Closure, IServiceProvider, Object[])
at Microsoft.AspNetCore.Mvc.Controllers.ControllerFactoryProvider.<>c__DisplayClass6_0.<CreateControllerFactory>g__CreateController|0(ControllerContext controllerContext)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync()
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResourceFilter>g__Awaited|25_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResourceExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.InvokeFilterPipelineAsync()
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddlewareImpl.Invoke(HttpContext context)
```

The database is up and running. Here is my `AuthController.cs`:

```
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using WebApplication1.Data;
using WebApplication1.Models;
using System.Threading.Tasks;

namespace WebApplication1.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class AuthController : ControllerBase
    {
        private readonly AppDbContext _context;

        public AuthController(AppDbContext context)
        {
            _context = context;
        }

        [HttpPost("register")]
        public async Task<IActionResult> Register(User user)
        {
            if (ModelState.IsValid)
            {
                _context.Users.Add(user);
                await _context.SaveChangesAsync();
                return Ok("User registered successfully.");
            }
            return BadRequest("Invalid model state.");
        }

        [HttpPost("login")]
        public async Task<IActionResult> Login(User user)
        {
            var existingUser = await _context.Users.FirstOrDefaultAsync(u => u.Username == user.Username && u.Password == user.Password);
            if (existingUser != null)
            {
                return Ok("Login successful.");
            }
            return BadRequest("Invalid username or password.");
        }
    }
}
```

`AppDbContext.cs`:

```
using Microsoft.EntityFrameworkCore;
using WebApplication1.Models;

namespace WebApplication1.Data
{
    public class AppDbContext(DbContextOptions<AppDbContext> options) : DbContext(options)
    {
        public DbSet<User> Users { get; set; }
    }
}
```

Part of `Startup.cs`:

```
public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<AppDbContext>(options =>
        options.UseNpgsql(Configuration.GetConnectionString("DefaultConnection")));
    services.AddControllers();
}
```

`appsettings.json`:

```
{
  "ConnectionStrings": {
    "DefaultConnection": "Host=localhost;Port=5432;Database=database;Username=postgres;Password=qwertyps4;"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*"
}
```

And finally model `User.cs`:

```
using System.ComponentModel.DataAnnotations;

namespace WebApplication1.Models
{
    public class User
    {
        public int Id { get; set; }

        [Required]
        public string Username { get; set; }

        [Required]
        public string Password { get; set; }
    }
}
```

I have checked all the files several times, but I just can't figure out the problem.
|c#|postgresql|asp.net-core-mvc|connection|
I'm currently trying to deploy a Next.js app on GitHub Pages using GitHub Actions, but I get a 404 error page even after it successfully deploys. I've looked around a bunch of similarly named questions and am having trouble figuring this out. I'll add that this is my first Next.js project and I've never hosted a website anywhere before.

Here is my GitHub repo: https://github.com/Mctripp10/mctripp10.github.io

Here is my website: mctripp10.github.io

I used the *deploy Next.js site to pages* workflow that GitHub provides. Here is the nextjs.yml file:

```
# Sample workflow for building and deploying a Next.js site to GitHub Pages
#
# To get started with Next.js see: https://nextjs.org/docs/getting-started
#
name: Deploy Next.js site to Pages

on:
  # Runs on pushes targeting the default branch
  push:
    branches: ["dev"]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Detect package manager
        id: detect-package-manager
        run: |
          if [ -f "${{ github.workspace }}/yarn.lock" ]; then
            echo "manager=yarn" >> $GITHUB_OUTPUT
            echo "command=install" >> $GITHUB_OUTPUT
            echo "runner=yarn" >> $GITHUB_OUTPUT
            exit 0
          elif [ -f "${{ github.workspace }}/package.json" ]; then
            echo "manager=npm" >> $GITHUB_OUTPUT
            echo "command=ci" >> $GITHUB_OUTPUT
            echo "runner=npx --no-install" >> $GITHUB_OUTPUT
            exit 0
          else
            echo "Unable to determine package manager"
            exit 1
          fi
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: ${{ steps.detect-package-manager.outputs.manager }}
      - name: Setup Pages
        uses: actions/configure-pages@v4
        with:
          # Automatically inject basePath in your Next.js configuration file and disable
          # server side image optimization (https://nextjs.org/docs/api-reference/next/image#unoptimized).
          #
          # You may remove this line if you want to manage the configuration yourself.
          static_site_generator: next
      - name: Restore cache
        uses: actions/cache@v4
        with:
          path: |
            .next/cache
          # Generate a new cache whenever packages or source files change.
          key: ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}-${{ hashFiles('**.[jt]s', '**.[jt]sx') }}
          # If source files changed but packages didn't, rebuild from a prior cache.
          restore-keys: |
            ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json', '**/yarn.lock') }}-
      - name: Install dependencies
        run: ${{ steps.detect-package-manager.outputs.manager }} ${{ steps.detect-package-manager.outputs.command }}
      - name: Build with Next.js
        run: ${{ steps.detect-package-manager.outputs.runner }} next build
      - name: Static HTML export with Next.js
        run: ${{ steps.detect-package-manager.outputs.runner }} next export
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: ./out

  # Deployment job
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
```

I got this on the build step:

```
Route (app)                              Size     First Load JS
┌ ○ /_not-found                          875 B    81.5 kB
├ ○ /pages/about                         2.16 kB  90.2 kB
├ ○ /pages/contact                       2.6 kB   92.5 kB
├ ○ /pages/experience                    2.25 kB  90.3 kB
├ ○ /pages/home                          2.02 kB  92 kB
└ ○ /pages/projects                      2.16 kB  90.2 kB
+ First Load JS shared by all            80.6 kB
  ├ chunks/472-0de5c8744346f427.js       27.6 kB
  ├ chunks/fd9d1056-138526ba479eb04f.js  51.1 kB
  ├ chunks/main-app-4a98b3a5cbccbbdb.js  230 B
  └ chunks/webpack-ea848c4dc35e9b86.js   1.73 kB

○  (Static)  automatically rendered as static HTML (uses no initial props)
```

Full image: [Build with Next.js][1]

I read in this post https://stackoverflow.com/questions/58039214/next-js-pages-end-in-404-on-production-build that perhaps it has something to do with having sub-folders inside the pages folder, but I'm not sure how to fix that, as I wasn't able to get it to work without sub-foldering page.js files for each page. Any help would be greatly appreciated! Thanks!

[1]: https://i.stack.imgur.com/wSlPq.png
I have an Execute Process Task which executes a Python script:

[![enter image description here][1]][1] [![enter image description here][2]][2]

[1]: https://i.stack.imgur.com/n0qzu.png
[2]: https://i.stack.imgur.com/YDWK0.png

I want to pass the root variable value from SSIS to Python. When I reference the project parameter value in the Python script, it throws an error. How do I do this?
How to pass a variable value in SSIS to Python Script : Execute Process task
|python|ssis|sql-server-data-tools|ssis-2012|msbi|
If, even after having incorporated the remove icon, you still can't see the remove button (as happened to me), then, after implementing all the previous recommendations, make sure that the module of your component imports these four modules: MatChipGrid, MatChipRow, MatChipInput, MatChipRemove. The one specifically responsible for the remove button is MatChipRemove. This is the import you should add:

```
import { NgModule } from '@angular/core';
import { MyComponent } from "./my.component";
import { CommonModule } from "@angular/common";
import { MatButtonModule } from "@angular/material/button";
import { MatFormFieldModule } from "@angular/material/form-field";
import { MatInputModule } from "@angular/material/input";
import { ReactiveFormsModule } from "@angular/forms";
import { MatIconModule } from "@angular/material/icon";
import { MatChipGrid, MatChipInput, MatChipRemove, MatChipRow } from "@angular/material/chips";

@NgModule({
  declarations: [MyComponent],
  imports: [
    CommonModule,
    MatButtonModule,
    MatFormFieldModule,
    MatInputModule,
    ReactiveFormsModule,
    MatIconModule,
    MatChipGrid,
    MatChipRow,
    MatChipInput,
    MatChipRemove,
  ],
  exports: [MyComponent]
})
export class MyModule { }
```
You can import the JavaScript and CSS files in index.html from the GitHub owl-carousel repository.
I want to write a script for running regression models across a whole data.table, where my function fits the model and extracts information for later analysis. I have a very large number of models to fit, so I want to make this fast. I want to vectorize this function in order to do this, but have struggled as there are NA values interspersed across the data and I do not want to impute missing values, just fit each model with the data available. I also want to extract a few statistics from the model. So my question is, can this function be vectorized and if so how? In summary, the aim is to fit a simple linear regression model of the "dependent_var" (first column) versus each of predictor columns (in isolation) using the "covariate" column as a covariate (last column). So from each model, I want the following: ``` return(indepent_var = col, pearson = cor_coef, pearson_p = p_value, indepent_var_beta = slope, indepent_var_beta_sig = slope_sig, adj_r_squared = adj_r_squared, indepent_var_beta_se = se_slope, intercept_se = se_intercept, n_obs = nrow(dfx_subset)) # dfx subset being the subset used to fit the model in question. ``` # Toy data As an example dataset ``` library(data.table) set.seed(123) num_rows <- 300 num_cols <- 6000 # Random data data <- matrix(rnorm(num_rows * num_cols), nrow = num_rows) # Many columns include NAs which need to be handled during fit without imputation prop_na <- runif(num_cols, min = 0.1, max = 0.5) for (i in 1:num_cols) { num_na <- round(prop_na[i] * num_rows) idx_na <- sample(1:num_rows, num_na) data[idx_na, i] <- NA } # Mock table, dependent variable and binary covariate variable DT <- as.data.table(cbind(data, covariate = sample(0:1, num_rows, replace = TRUE))) setnames(DT, old = c("V1"), new = c("dependent_var")) ``` I may include more covariates in the future, so would like to include Pearson coefficients for some quick checks/potential future analyses. 
As I have a lot of data to process I'd ideally just submit this once on the HPC. Here is my function so far: # Linear regression function ``` lm_analysis <- function(col, dfx) { tryCatch({ dfx_subset <- dfx[complete.cases(dfx[[1]], dfx[[col]], dfx[["covariate"]]), ] ## Compute Pearson correlation cor_test <- cor.test(dfx_subset[[1]], dfx_subset[[col]]) cor_coef <- cor_test$estimate p_value <- cor_test$p.value ## Linear regression lm_result <- lm(dfx_subset[[1]] ~ dfx_subset[[col]] + dfx_subset[["covariate"]], data = dfx_subset) slope <- lm_result$coefficients[2] model_summary <- summary(lm_result) slope_sig <- model_summary$coefficients[2, 4] adj_r_squared <- model_summary$adj.r.squared se_slope <- model_summary$coefficients[2, "Std. Error"] se_intercept <- model_summary$coefficients[1, "Std. Error"] return(c(indepent_var = col, pearson = cor_coef, pearson_p = p_value, indepent_var_beta = slope, indepent_var_beta_sig = slope_sig, adj_r_squared = adj_r_squared, indepent_var_beta_se = se_slope, intercept_se = se_intercept, n_obs = nrow(dfx_subset))) }, error = function(e){ return(NULL) }) } ``` # Function application: ``` predictor_cols <- setdiff(names(DT), c("dependent_var", "covariate")) lm_results <- lapply(predictor_cols, lm_analysis, dfx = DT) results_lmfit <- as.data.table(do.call(rbind, lm_results)) ``` I do not want to filter any observations before the analysis. Also ideally I do not want to parallelise this component as I plan to do this on the outer loop, where I will be performing the above operation on a data.table of dependent variables. I am at a loose end in terms of how to vectorize this, and due to the mismatching data between different datasets I wasn't sure if this could easily be achievable. Any advice would be appreciated.
R linear regression function vectorization
|r|performance|vectorization|linear-regression|
I am getting this error:

```
denied: Unauthenticated request. Unauthenticated requests do not have permission "artifactregistry.repositories.uploadArtifacts" on resource "projects/digidoc-dev/locations/asia-south1/repositories/digidoc-art" (or it may not exist)
```

I tried logging in to authenticate, but it still failed, even though I have granted all the permissions.
Custom .NET Core service factory with auto dispose features
|c#|asp.net-core|.net-core|.net-5|
It's a known issue with the latest release; it has now been resolved and the fix will come in the next release. One way to fix it is to roll back to a previous version, i.e. 10.7.1, or, if the issue persists like mine did, you can just add this line of code:

```
window.navigator.userAgent = "ReactNative";
```

I pasted mine inside the App.js file before any Firebase-related import.
|gcc|coredump|address-sanitizer|
Is it possible to use few-shot learning in Retrieval-Augmented Generation (RAG)? I can use the few-shot prompt separately and create a RAG chain with a template, but I couldn't use the few-shot template in the RAG chain.

```
# few-shot prompt
prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    suffix="Pergunta: {input}",
    input_variables=["input"]
)

# RetrievalQA (RAG)
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type='stuff',
    retriever=retriever,
    verbose=True,
    return_source_documents=True
)
```
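Conceptually, combining the two is just prompt assembly: the retrieved documents become one more variable alongside the few-shot examples. A library-free sketch of the merged template (the function name and field labels here are illustrative, not a LangChain API):

```python
def build_rag_fewshot_prompt(examples, context_docs, question):
    """Assemble a few-shot prompt that also embeds retrieved context.

    `examples` are (question, answer) pairs, `context_docs` are the texts
    returned by the retriever, and `question` is the user input.
    """
    shots = "\n\n".join(f"Pergunta: {q}\nResposta: {a}" for q, a in examples)
    context = "\n".join(context_docs)
    return f"{shots}\n\nContexto:\n{context}\n\nPergunta: {question}\nResposta:"

p = build_rag_fewshot_prompt([("2+2?", "4")], ["Doc about arithmetic."], "3+3?")
print(p)
```

In LangChain specifically, `RetrievalQA.from_chain_type` accepts `chain_type_kwargs={"prompt": ...}` for the `'stuff'` chain type; the custom prompt then needs to expose the `{context}` and `{question}` variables so the chain can fill in the retrieved documents.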
I am struggling with a problem. I migrated my app from Rails 6 + webpacker to Rails 7 + jsbundling + webpack following this tutorial: https://github.com/rails/jsbundling-rails/blob/main/docs/switch_from_webpacker.md However, I get the error "The asset 'transportation_calculator.js' is not present in the asset pipeline" every time I access a page. Here is my manifest.json:

```
//= link_tree ../../javascript .js
//= link_directory ../stylesheets .scss
//= link_tree ../images
//= link_tree ../builds
```

In my application.html.erb I have:

```
<%= stylesheet_link_tag "application", media: "all", "data-turbolinks-track": "reload" %><%= javascript_include_tag "application", "data-turbolinks-track": "reload", defer: true %>
```

And I start the haml page that I'm trying to load with `= javascript_include_tag 'transportation_calculator.js'`. Thank you very much for your help!

I tried to change the path in webpack.config.js to:

```
entry: {
  // add your css or sass entries
  application: [
    './app/javascript/application.js',
    './app/assets/stylesheets/application.scss',
  ],
}
```

without success. The error remained the same.
The asset xxx is not present in the asset pipeline - Rails 7
|ruby-on-rails-7|
Hello everybody, I have a problem using Next's dynamic routing (Next 13). I have this structure:

```
- user/
-- [id]/
--- page.js
```

and this is working in dev mode but not in prod. What am I trying? I made a "page.js" inside the user folder with this config in it:

```
export const dynamic = 'auto'
export const dynamicParams = true
export const revalidate = false
export const fetchCache = 'auto'
export const runtime = 'nodejs'
export const preferredRegion = 'auto'
export const maxDuration = 5

export default function MyComponent() {}
```

I am not using "generateStaticParams" because the param is generated randomly on the backend and sent to the user's email. Is there any way to solve this? Thanks for the answers.
I am migrating from the old OpenSSL version openssl-0.9.8i with VS-2012 to openssl-1.1.1w with VS-2017. I am getting the below error:

> ..\openssl-1.1.1w\include\openssl/asn1_mac.h(10): fatal error C1189: #error: "This file is obsolete; please update your software."

In the older version "asn1_mac.h" had some different content, while in the newer version it has only a line saying "This file is obsolete; please update your software." Could you please help me understand which file I need to refer to in the newer version?
\openssl-1.1.1w\include\openssl/asn1_mac.h(10): fatal error C1189: #error: "This file is obsolete; please update your software
|openssl|
I want to make my extension suggest a "CompletionList" to users. Users can run "editor.action.triggerSuggest". The process of my extension is as follows:

1. Users write some text.
2. The user presses the "completion command".
3. The VS Code extension provides completions.
4. The extension executes executeCommand("editor.action.triggerSuggest").

But I encounter some issues. When a specific character has already been entered, no matter how many CompletionItems I create, they won't appear unless that specific character is included in the suggestions. For instance, if the cursor is positioned right after the letter 'r' in the word 'for', the suggestion list won't appear, but if the cursor is one space after 'r', the suggestion list appears as expected.

[enter image description here](https://i.stack.imgur.com/JJbEv.png) [enter image description here](https://i.stack.imgur.com/9tGqq.png)

Is there a way to solve this issue? I apologize if my English proficiency makes it difficult for you to understand.

https://code.visualstudio.com/docs/getstarted/settings

Following this site, I tried to modify settings.json, but I couldn't resolve the errors.
How to Control Visual Studio Code Extension Intellisense?
|javascript|typescript|visual-studio-code|google-chrome-extension|vscode-extensions|
So, I'm sure this is a bit naive, but this is purely for experimental purposes and/or a learning exercise. In essence, I'd like to see if I can reduce the footprint of the closure created when we use `Task.Run(()=>Func<>())` by creating a class that I initialize only once. The objective would be to avoid creating a 'new' instance of this every time we run, which would probably be less efficient than the closure itself, I imagine (but this is mere speculation, I know).

Creating a basic class to do so is rather simple, as you can find examples of that here on the stack. However, where I run into an issue is that, if I want to use members and functions from another class, having to encapsulate them or inject them into the class we're going to `Run` on, while it may be less data than the original class itself, is probably not going to be that much of an improvement. So say I have something along the lines of:

```
internal async Task<PathObject> PopulatePathObjectAsync(Vector3Int origin, Vector3Int destination, PathObject path)
{
    return await Task.Factory.StartNew(() => PopulatePathObject(origin, destination, path));
}

/// Not sure if we want to make this a task or not because we may just parallelize and await the outer task.
/// We'll have to decide when we get down to finalization of the architecture and how it's used.
internal PathObject PopulatePathObject(Vector3Int origin, Vector3Int destination, PathObject path)
{
    Debug.Log($"Pathfinding Search On Thread: ({System.Threading.Thread.CurrentThread.ManagedThreadId})");
    if (!TryVerifyPath(origin, destination, ref path, out PathingNode currentNode))
        return path;

    var openNodes = m_OpenNodeHeap;
    m_ClosedNodes.Clear();
    openNodes.ClearAndReset();
    openNodes.AddNode(currentNode);

    for (int i = CollectionBufferSize; openNodes.Count > 0 && i >= 0; i--)
    {
        currentNode = ProcessNextOpenNode(openNodes);
        if (NodePositionMatchesVector(currentNode, destination))
        {
            return path.PopulatePathBufferFromOriginToDestination(currentNode, origin, PathState.CompletePath);
        }
        ProcessNeighboringNodes(currentNode, destination);
    }
    return path.PopulatePathBufferFromOriginToDestination(currentNode, origin, PathState.IncompletePath);
}
```

In order to ditch the lambda, the closure, and the creation (or perhaps cast?) of the delegate, I would need a class that actually encapsulates that `PopulatePathObject` function in its entirety, either by literally copying the members necessary or by passing them as arguments. This all seems like it would negate any benefits gained. So is there a way I could have something like:

```
private class PopulatePathObjectTask
{
    private readonly Vector2Int m_Origin;
    private readonly Vector3Int m_Destination;
    private readonly PathObject m_Path;

    public PopulatePathObjectTask(Vector2Int origin, Vector3Int destination, PathObject path)
    {
        m_Origin = origin;
        m_Destination = destination;
        m_Path = path;
    }

    public PathObject PopulatePathObject(Vector3Int origin, Vector3Int destination, PathObject path)
    {
        /// Obviously here, without access to the actual AStar class responsible for the search,
        /// I don't have access to the functions or the class members, such as the heap or the hashset
        /// that represents the closed nodes, as well as the calculated buffer size based on the space-state
        /// dimensions. With that, I'd just be recreating the class and not avoiding much, if any,
        /// of the overhead created by the closure capturing the class in the first place.
    }
}
```

That I could use to access the function that already exists? I've been toying with the idea of creating a static member and using dependency injection for the open/closed node collections, but I thought, or rather hoped, someone might have some more insight into this, other than "it's pointless and even the *possible* overhead reduction or performance gains will be so minimal that it's pointless". Which, granted, you're probably right, but I'm doing this as an exercise and I'd like to be able to actually measure the differences. Quite frankly, I'm probably not even going to use it; I might even ditch the AStar for JPS instead, but I would like to know before moving on. Again, just as an exercise in expanding my understanding. I'm not entirely sure, but it would seem as if the closure would have to capture the entire AStar object, one would hope by reference. Thanks in advance. Hopefully I don't run into people who leave me seven comments in all caps about how dumb this is and then delete them later and make me look like I'm responding to no one :)
(C#) Reducing Closure Overhead In Task.Run/Factory.StartNew With Predefined Object
|c#|closures|task-parallel-library|
Just install it and it will work perfectly: `npm install react-tilt react react-dom`
`Enum` classes (with members) [are final][1], even at runtime, so there is no need to use `Self`. Use `Letter` directly: <sub>(playground link: [Pyright][2])</sub> ```py from enum import Enum class Letter(Enum): ... @property def neighbors(self) -> list['Letter']: match self: case self.A: # Or `Letter.A` return [self.B] # Or `Letter.B` ... ``` The behaviour in question is [said][3] to be as designed: > The `Self` type is effectively a type variable that has an upper bound of the containing class. It is not the same as the class itself. Also, you can remove the last, wildcard branch: ```py case _: # ~ # Pyright will raise an error here since the checking # done by other branches are already exhaustive. raise ValueError ``` [1]: https://github.com/python/cpython/blob/1932da0c3dadb39b0d560c5367bb2b79381e4a15/Lib/enum.py#L956-L960 [2]: https://pyright-play.net/?pythonVersion=3.12&strict=true&code=GYJw9gtgBApgdgV2gSwgBzCALlAooiAWACgSBjAGwEMBnGqAGRiyxhAAp8kBKALhKiCoAQSgBeKACJhkgUIBC4qfNnEhUAMJLJG1XMEABNODRssAT31QAJjGBQ4MZAHMAFgCNMNdjRgVg3FAAtAB8UBTINFgA2gDkTCxssQC6-GrqQhBUWGSuUL7%2BaRnFUGS0MPl%2BwAB0wkUlDSDMCCBwUNEFNfLJVg1lvpX%2B1fL1DcVNWC1tHVW1ADSDNRo96WOl5YvVGqNrQhNT7Z3DK7vrAwD6O6cgVJEVAGpUFAgwuCDgICQkxshwWOwJVggWrVRwuDxebjfEC-f4WUwA5hAkFgtyeEA0bjcIA [3]: https://github.com/microsoft/pyright/issues/7169
For this to work you'll need to write a [recursive conditional type](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-4-1.html#recursive-conditional-types) that walks through the [tuple type](https://www.typescriptlang.org/docs/handbook/2/objects.html#tuple-types) of the input array and maps them to another array using the name mapper, and builds up an output by concatenating these arrays. For example, here's a [tail-recursive conditional type]( https://www.typescriptlang.org/docs/handbook/release-notes/typescript-4-5.html#tail-recursion-elimination-on-conditional-types ) called `MapNames<T, M>` which takes an input tuple `T` and a mapping type `M` that converts the elements of `T` into a new array: type MapNames< T extends readonly string[], M extends Record<T[number], readonly any[]>, A extends any[] = [] > = T extends readonly [ infer F extends keyof M, ... infer R extends readonly string[] ] ? MapNames<R, M, [...A, ...M[F]]> : A It uses [variadic tuple types](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-4-0.html#variadic-tuple-types) to manipulate both the input and output tuples. Note that the third type parameter `A` is the *accumulator* and its use makes it tail-recursive and therefore amenable to fairly long input tuples. We break the input tuple `T` into the first element `F` and the rest of the elements `R`. Then `F` is a key of the mapping type `F`, and we concatenate the array `M[F]` to the end of the accumulator, and recurse. 
Then you can write a generic `mapNames()` function that takes inputs of types `T` and an `M` and returns a `MapNames<T, M>`: function mapNames< const T extends readonly string[], const M extends Record<T[number], readonly any[]> >(t: T, m: M): MapNames<T, M> { return t.reduce<string[]>((acc, e) => { acc.push(...(m as any)[e]); return acc; }, []) as any; } This function is implemented the same way as your example, although there's no way for the compiler to understand that the implementation works for such a complicated recursive conditional generic type. So we have to loosen type safety inside the function via [type assertions](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#type-assertions) and [the `any` type](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#any), or something like it. --- Let's test it out: const t1 = ["a", "b", "c"] as const; const nameMap = { "a": [], "b": ["j"], "c": ["k", "l"], } as const; const t2 = mapNames(t1, nameMap); // ^? const t2: ["j", "k", "l"] console.log(t2); // ["j", "k", "l"] Looks good. The type of `t2` matches the actual value at runtime. [Playground link to code](https://www.typescriptlang.org/play?#code/LAKALgngDgpgBAWQIZQHJILYwM4B5RyFwAqcMAHmDAHYAm2cATjErQPbUA2Ec2YjAS2oBzANoBdADQEiCMpRr04AJRgBjNo1q5io6gFcMAIxiMpTFuy48k1CBIB80kETgBBeVToNb98XABeOAlQB0CST0UGZlYObmCZQiEAM1M4ADFI7zgAaxgINmTEZ1cAOnK4FLTlLKUYq3i+QREQkH8AfkQUdCw8ZUli4PLStwHhhFF08XEwgC53UFBk-Wo1MAEOOAxuzBx8FyINaj4Iii86yzieJqExKUS4I5O5M6iVdU1tXQNjU3N6q5wXyOUIACjA82IAww8wQAEpYTtejoBggwgBvB7MMD6RjUOBgUrMWj6NQwXA3FozUGgpBqNQDGBwwIYh6uOlqUpQfTYAAWoOGoIwQJ8djhohg4jhAG42URsbj8RzZQdCABfAYSZlIUUQFVqxYgUBPMAEgCM4VEACIkFaBlajHa4Fa1Fb-DrHhw+CqTXBqLtkFBwpjVc7bfMJCUiA6rRGrQArN1Rwgu2PBK05J1WzhJ0BqkWe45gH1e01gABM4W2aF22HBZoG-qwgZloAA9G3XAA9drGr1sTgwUqcNjCcHl1tGqdAA)
PyTorch LSTM not using hidden layer
|python|machine-learning|pytorch|lstm|
ERROR in ./src/index.js 8:0-51 Module not found: Error: Can't resolve './react-router-dom' in 'C:\Users\nagendra\nagendra-website\src' How can I solve this error, and what needs to change?
Deprecation notice: ReactDOM.render is no longer supported in React 18 (ReactDOM)
|reactjs|dom|
In a Linux shell, a dollar sign `$` inside double quotes gets interpreted as a variable. Since `$1` is undefined, it gets translated into an empty string, so the double-quoted argument effectively becomes:

    print if /#define\s+ELFOSABIV_LATEST\s+(\S+)/

when passed to perl, resulting in the entire line being printed when the `if` condition is evaluated as true. You can remedy it by enclosing the one-liner in single quotes instead to prevent dollar signs from being interpreted by the shell:

    perl -ne 'print $1 if /#define\s+ELFOSABIV_LATEST\s+(\S+)/' /home/test/elf/elf.h

Note that in Windows you do have to quote arguments with double quotes, so if you want to maintain just one version of the script you would be better off writing an actual script instead of a one-liner command:

    # filter.pl
    while (<>) {
        print $1 if /#define\s+ELFOSABIV_LATEST\s+(\S+)/;
    }

so that:

    perl filter.pl /home/test/elf/elf.h

would output `6U` on both platforms. On the other hand, for your particular use case, you can rewrite your perl code so that it doesn't use any `$` to reference variables, allowing you to use double quotes to enclose the argument on both platforms:

    perl -ne "print /#define\s+ELFOSABIV_LATEST\s+(\S+)/" /home/test/elf/elf.h

But again, referencing a variable with a `$` is sometimes unavoidable, in which case you should write it as a separate script to be portable.
Is there data leakage while splitting the data this way?
It gets boring when you are always doing it with the same programming language (required by work). To make it more challenging, try to learn a new programming language when you have free time.
Having a type and a property with the exact same name isn't uncommon. Yes, it looks a little weird, but renaming properties to avoid this clash looks even weirder, admittedly. Eric Lippert had [a blog post on this exact topic](https://learn.microsoft.com/en-us/archive/blogs/ericlippert/color-color). There is no ambiguity for the compiler, however.
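To illustrate the "Color Color" situation the linked post describes, here's a minimal C# sketch (the type and member names here are made up for illustration, not taken from any real code):

```csharp
public class Color
{
    public static readonly Color Red = new Color();
}

public class Widget
{
    // A property whose name is identical to its type -- legal C#.
    public Color Color { get; set; }

    public void Paint()
    {
        // The simple name "Color" could mean the property or the type.
        // Since Red is a static member, the compiler resolves "Color.Red"
        // against the *type* Color, so this compiles without ambiguity.
        Color = Color.Red;
    }
}
```

The compiler's member-access rules handle this case specially: when a simple name denotes both a variable/property and a type, it uses whichever interpretation makes the member access valid, which is exactly why no error is reported here.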
I'm fairly new to Python, so go easy on me. I'm trying to make the following inputs appear at the same time in the console:

    input("Please enter your feet: ")
    input("Please enter your inches: ")

Desired output:

    Please enter your feet:
    Please enter your inches:

at the same time for the user to enter. Whenever I look up how to take multiple inputs at the same time, I usually get a variation of answers stating to use the split function and to assign the input to two different variables:

    x, y = input("Enter two nums: ").split()

But I feel that this would be confusing for the user, so I want to be able to prompt them with two different entries at the same time on different lines.
next/navigation Error 404 dynamic routes only in production NEXT13
|javascript|reactjs|next.js|
<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-css -->

    .parent {
      height: 100vh;
    }

    .copy-txt {
      color: rgba(255, 255, 255, 0.5);
      font-size: 10px;
      text-align: center;
    }

    .copy-txt > a {
      color: rgba(255, 255, 255, 0.5);
      font-size: 10px;
    }

    .copyright {
      transform-origin: 0% 100%;
      position: absolute;
      rotate: -90deg;
      transition: rotate 0.3s ease-in-out;
      margin-left: 75px;
      background-color: rgb(10, 28, 46);
      width: 150px;
      height: 60px;
      padding-top: 0px;
      top: 40%;
      color: rgba(255, 255, 255, .5);
      text-align: center;
    }

    .parent:hover .copyright {
      display: block;
      rotate: 0deg;
      transition: rotate 0.3s ease-in-out;
    }

<!-- language: lang-html -->

    <div class="parent">
      <div class="copyright">
        <p>
          <span class="copy-txt">
            &copy;&nbsp;<a href="https://davidgs.com/">David G. Simmons 2023 </a>
          </span>
          <br />
          <span class="copy-txt">All rights reserved</span>
        </p>
      </div>
    </div>

<!-- end snippet -->
The problem is that you are trying to assign the same ref to all your components. A ref needs to be assigned to a single instance of a component, so we can modify your code to the following:

```
import React, { useRef, useEffect } from "react";

const DynamicComponent = React.forwardRef(({ name }, ref) => {
  // Some component logic here...
  return <div ref={ref}>{name}</div>;
});

const HOC = ({ dynamicComponentProps }) => {
  const dynamicRefs = dynamicComponentProps.map(() => useRef(null));

  useEffect(() => {
    dynamicRefs.forEach((ref) => {
      console.log(ref.current); // This should log the div element
      // Some other logic with ref.current...
    });
  }, [dynamicRefs]);

  const renderDynamicComponent = () => {
    return dynamicComponentProps.map((props, index) => (
      <DynamicComponent ref={dynamicRefs[index]} key={index} {...props} />
    ));
  };

  return <div>{renderDynamicComponent()}</div>;
};

const App = () => {
  const dynamicComponentProps = [
    { name: "Component A" },
    { name: "Component B" },
    { name: "Component C" },
  ];

  return <HOC dynamicComponentProps={dynamicComponentProps} />;
};

export default App;
```

In this updated version we have a separate ref for each component. To make this work we had to modify a few other things as well:

1. Changed `DynamicComponent` to use `forwardRef` so that we can pass a ref.
2. In the `useEffect` we log the array of refs.
3. In `renderDynamicComponent` we assign a different ref to each component.

**EDIT**

As pointed out in the comments, the above code breaks React's rule against calling hooks inside a loop, so we can change the code a little: instead of having an array of refs, we now have a ref containing an array.

```
import React, { useRef, useEffect } from "react";

const DynamicComponent = React.forwardRef(({ name }, ref) => {
  // Some component logic here...
  return <div ref={ref}>{name}</div>;
});

const HOC = ({ dynamicComponentProps }) => {
  const dynamicRefs = useRef(
    dynamicComponentProps.map(() => ({ current: null }))
  );

  useEffect(() => {
    dynamicRefs.current.forEach((ref) => {
      console.log(ref.current); // This should log the div element
      // Some other logic with ref.current...
    });
  }, [dynamicRefs.current]);

  const renderDynamicComponent = () => {
    return dynamicComponentProps.map((props, index) => (
      <DynamicComponent
        ref={dynamicRefs.current[index]}
        key={index}
        {...props}
      />
    ));
  };

  return <div>{renderDynamicComponent()}</div>;
};

const App = () => {
  const dynamicComponentProps = [
    { name: "Component A" },
    { name: "Component B" },
    { name: "Component C" },
  ];

  return <HOC dynamicComponentProps={dynamicComponentProps} />;
};

export default App;
```
WSO2 MI 4.2.0

I am using WSO2 Micro Integrator version 4.2.0 and a rollover policy based on a time period for the log files. I'm trying to delete rollover files older than 58 days, meaning that I want to keep ~58 days of logs, with the following configuration (as recommended on https://apim.docs.wso2.com/en/latest/administer/logging-and-monitoring/logging/managing-log-growth/):

    appender.CARBON_LOGFILE.strategy.action.type = Delete
    appender.CARBON_LOGFILE.strategy.action.basepath = ${sys:carbon.home}/repository/logs/
    appender.CARBON_LOGFILE.strategy.action.maxdepth = 1
    appender.CARBON_LOGFILE.strategy.action.condition.type = IfLastModified
    appender.CARBON_LOGFILE.strategy.action.condition.age = 58D
    appender.CARBON_LOGFILE.strategy.action.PathConditions.type = IfFileName
    appender.CARBON_LOGFILE.strategy.action.PathConditions.glob = wso2carbon-

Here is an image of all the configurations I have in log4j.properties for the carbon_logfile: [log4j config](https://i.stack.imgur.com/Q18l6.png)

But the configuration doesn't seem to have any effect on wso2carbon log rotation or on restart of the service. The service has 60 files with the pattern wso2carbon-* increasing daily. [files](https://i.stack.imgur.com/KyuY5.png)

Has anyone come across any similar issues? Is there something wrong in the configuration for the delete action to be applied?
Try adding another CTE to determine the next distinct order date.

```sql
WITH ranked_transactions AS (
    SELECT
        t.*,
        DENSE_RANK() OVER (PARTITION BY t.customer_key ORDER BY t.order_date) AS order_rank
    FROM transaction_records t
),
next_order_dates AS (
    SELECT
        customer_key,
        MIN(order_date) AS next_distinct_order_date
    FROM ranked_transactions
    GROUP BY customer_key
)
SELECT
    t.customer_key,
    t.order_id,
    t.order_date,
    t.quantity,
    t.amount,
    t.order_rank,
    n.next_distinct_order_date AS next_order_date
FROM ranked_transactions t
LEFT JOIN next_order_dates n
    ON t.customer_key = n.customer_key
ORDER BY t.customer_key, t.order_date;
```
Suppose I have a workload that branches out like a tree. I have to process `n` A items. Processing each A item requires processing of `m` B items. This goes on for another level or two. And I have the following functions:

```go
func handler() {
	var aList []A
	var wg sync.WaitGroup
	for _, a := range aList {
		wg.Add(1)
		go func(a A) {
			defer wg.Done()
			processA(a)
		}(a)
	}
	wg.Wait()
}

func processA(a A) error {
	var wg sync.WaitGroup
	for _, b := range a.BList {
		wg.Add(1)
		go func(b B) {
			defer wg.Done()
			processB(b)
		}(b)
	}
	wg.Wait()
	return nil
}

func processB(b B) error {
	var wg sync.WaitGroup
	for _, c := range b.CList {
		wg.Add(1)
		go func(c C) {
			defer wg.Done()
			processC(c)
		}(c)
	}
	wg.Wait()
	return nil
}
```

Now the nature of all of these tasks is that they're BSP (Bulk Synchronous Processes). By that I mean that they need **no communication amongst themselves**. And more importantly, if I had an infinite number of cores, there would be **NO** waiting in any thread/goroutine.

Now let's come back to Earth: I am running this on a Lambda function that will have 2/3/4 cores to offer. My workload is still such that **no memory limits will be hit**. Now should I change my code to limit the number of goroutines, if what I want is speedup? Is my code lagging due to too much context switching? Or is there no such thing as "too many goroutines"?
I need to install a self-signed SSL certificate to the trusted root store on the device from my UWP app code. How can I achieve this?
How to install an SSL certificate from UWP app code
|c#|windows|security|ssl|uwp|
You can simplify your code to a single `SELECT` (and eliminate the nested loops):

```lang-sql
DECLARE
  p_nation      STRING_LIST := STRING_LIST('AA', 'BB', 'CC');
  p_lastrundate VARCHAR2(20) := '01-JAN-1970 00:00:00';
BEGIN
  DBMS_OUTPUT.PUT_LINE('CIDGEN' || '||' || 'NATION');
  FOR rec IN (
    SELECT B.CIDGEN, B.NATION
    FROM   COMPANY B
           INNER JOIN NATION_LOOKUP C ON B.NATION = C.CODE
           INNER JOIN TABLE(p_nation) n ON B.nation = n.COLUMN_VALUE
    WHERE  B.CIDGEN IN (
             SELECT CIDGEN
             FROM   RAN.OA_REQUEST_RESPONSE_STATUS
             WHERE  RESPONSE_STATUS = 'FailedRequest'
             AND    RESPONSE_UPDATE_STAMP >= TO_DATE(p_lastrundate, 'DD-MON-YYYY HH24:MI:SS')
           )
  ) LOOP
    DBMS_OUTPUT.PUT_LINE(rec.CIDGEN || '||' || rec.NATION);
  END LOOP;
END;
/
```

Which, for the sample data:

```lang-sql
CREATE TABLE company (cidgen, nation) AS
  SELECT 'Acme Corporation', 'AA' FROM DUAL UNION ALL
  SELECT 'Octan', 'CC' FROM DUAL UNION ALL
  SELECT 'Umbrella Corporation', 'BB' FROM DUAL UNION ALL
  SELECT 'Wayne Enterprises', 'BB' FROM DUAL;

CREATE TABLE nation_lookup (code) AS
  SELECT 'AA' FROM DUAL UNION ALL
  SELECT 'BB' FROM DUAL UNION ALL
  SELECT 'CC' FROM DUAL;

CREATE TABLE RAN.OA_REQUEST_RESPONSE_STATUS (cidgen, response_status, response_update_stamp) AS
  SELECT 'Acme Corporation', 'FailedRequest', SYSDATE FROM DUAL UNION ALL
  SELECT 'Octan', 'FailedRequest', SYSDATE FROM DUAL UNION ALL
  SELECT 'Umbrella Corporation', 'FailedRequest', SYSDATE FROM DUAL UNION ALL
  SELECT 'Wayne Enterprises', 'FailedRequest', SYSDATE FROM DUAL;

CREATE TYPE string_list IS TABLE OF VARCHAR2(200);
```

Outputs:

```
status
CIDGEN||NATION
Acme Corporation||AA
Octan||CC
Umbrella Corporation||BB
Wayne Enterprises||BB
```

[fiddle](https://dbfiddle.uk/ZrmpBPxa)
`ngClass` and `ngStyle` are actual directives and need to be imported before use in a standalone component:

    import { NgFor, NgClass } from '@angular/common';

    @Component({
      selector: 'app-attribute',
      standalone: true,
      imports: [NgFor, NgClass],
      templateUrl: './attribute.component.html',
      styleUrl: './attribute.component.css'
    })
There is a QLabel added to the status bar of the main window. When long text is set, the QLabel expands and expands the window with it. I need the size of the QLabel and the window not to change, and only the text that fits to be shown (as happens with a fixed-size widget). The width of the QLabel cannot be set to a fixed value, because it must change depending on the width of the window. Probably there are some size policy settings for widgets? I tried setting a fixed width and different size policies.
QLabel: how to prevent expanding?
|qt|qlabel|qtwidgets|qsizepolicy|
I am trying to split the data and rearrange the data in a CSV file. My data looks something like this ```none 1:100011159-T-G,CDD3-597,G,G 1:10002775-GA,CDD3-597,G,G 1:100122796-C-T,CDD3-597,T,T 1:100152282-CAAA-T,CDD3-597,C,C 1:100011159-T-G,CDD3-598,G,G 1:100152282-CAAA-T,CDD3-598,C,C ``` and I want a table that looks like this: | ID | 1:100011159-T-G | 1:10002775-GA | 1:100122796-C-T |1:100152282-CAAA-T | |---------------|-----------------|---------------|------------------|-------------------| | CDD3-597 | GG | GG | TT | CC | | CDD3-598 | GG | | | CC | I have written the following code: <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-html --> import pandas as pd input_file = "trail_berry.csv" output_file = "trail_output_result.csv" # Read the CSV file without header df = pd.read_csv(input_file, header=None) print(df[0].str.split(',', n=2, expand=True)) # Extract SNP Name, ID, and Alleles from the data df[['SNP_Name', 'ID', 'Alleles']] = df[0].str.split(',', n=-1, expand=True) # Create a new DataFrame with unique SNP_Name values as columns result_df = pd.DataFrame(columns=df['SNP_Name'].unique(), dtype=str) # Populate the new DataFrame with ID and Alleles data for _, row in df.iterrows(): result_df.at[row['ID'], row['SNP_Name']] = row['Alleles'] # Reset the index result_df.reset_index(inplace=True) result_df.rename(columns={'index': 'ID'}, inplace=True) # Fill NaN values with an appropriate representation (e.g., 'NULL' or '') result_df = result_df.fillna('NULL') # Save the result to a new CSV file result_df.to_csv(output_file, index=False) # Print a message indicating that the file has been saved print("Result has been saved to {}".format(output_file)) <!-- end snippet --> but this has been giving me the following error: Traceback (most recent call last): File "berry_trail.py", line 11, in <module> df[['SNP_Name', 'ID', 'Alleles']] = df[0].str.split(',', n=-1, expand=True) File 
"/nas/longleaf/home/svennam/.local/lib/python3.5/site-packages/pandas/core/frame.py", line 3367, in __setitem__ self._setitem_array(key, value) File "/nas/longleaf/home/svennam/.local/lib/python3.5/site-packages/pandas/core/frame.py", line 3389, in _setitem_array raise ValueError('Columns must be same length as key') Can someone please help, I am having hard time figuring this out.Thanks in advance! ValueError: Columns must be same length as key
See this, for example: [HOW C# ARRAY INITIALIZERS WORK][1] [1]: https://web.archive.org/web/20190719134346/http://bartdesmet.net/blogs/bart/archive/2008/08/21/how-c-array-initializers-work.aspx
I am using MassTransit version 7.3.1 and RabbitMQ to create a messaging mechanism (I am sending a message with each request the API receives), and I installed the GreenPipes library to use it later. But after some monitoring I found out that GreenPipes is using a lot of CPU (see the image). Does MassTransit use GreenPipes? And why is GreenPipes' CPU usage so high even though I am not using it directly?

[![enter image description here][1]][1]

  [1]: https://i.stack.imgur.com/9qs6k.png
How many Goroutines is too many Goroutines?
|multithreading|go|operating-system|goroutine|
After the suggestion from @rcgldr I modified the solution. The solution is not really a functional way of doing things, but at least it runs within the specified time.

*One thing to note is that if you submit the same answer at different times of day **OR** repeatedly submit the same solution, you get different tests failing due to timeout. A bit strange! (must be a flaw in their system) and depending on your luck all tests are passed.*

The main difference between the original and the modified one is the replacement of the while loop with a for loop.

    object Solution {
      import scala.collection.mutable
      import scala.io.StdIn.{readInt, readLine}

      private def init(): (Int => Unit, () => Unit, () => Int) = {
        val stack = mutable.Stack[Int]()
        val interim = mutable.Stack[Int]()
        var _top: Int = 0

        def move(source: mutable.Stack[Int], target: mutable.Stack[Int]): Unit = {
          val i = source.size
          if (i > 0) {
            for (j <- 1 to i) target.push(source.pop())
          }
        }

        def enque(it: Int): Unit = {
          if (interim.isEmpty) _top = it
          interim.push(it)
        }

        def deque(): Unit = {
          move(interim, stack)
          if (stack.nonEmpty) stack.pop()
          if (stack.nonEmpty) _top = stack.top
          move(stack, interim)
        }

        def top(): Int = _top

        (enque, deque, top)
      }

      def main(args: Array[String]): Unit = {
        val (enque, deque, top) = init()
        val n = readInt()
        for (_ <- 1 to n) {
          val t1 = readLine().split(" ").map(x => x.toInt)
          t1 match {
            case Array(1, n) => enque(n)
            case Array(2)    => deque()
            case Array(3)    => println(top())
          }
        }
      }
    }
|angular|angular-material|
I have connectionstring of my application defined in a seperate config file. While migrating to .NET8 im facing platform not supported error when trying to decrypt it. Is there any way to decrypt it in c#? Here is my config file: ```xml <?xml version="1.0" encoding="utf-8"?> <connectionStrings configProtectionProvider="RsaProtectedConfigurationProvider"> <EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element" xmlns="http://www.w3.org/2001/04/xmlenc#"> <EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#aes256-cbc" /> <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#"> <EncryptedKey xmlns="http://www.w3.org/2001/04/xmlenc#"> <EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-oaep-mgf1p" /> <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#"> <KeyName>Rsa Key</KeyName> </KeyInfo> <CipherData> <CipherValue>lT+4rnY5uXs2FXfNh4PSZOzbihEyLNOHH2+aQB2mAuElk6NBLFtEKKGr+V0nUDE74w4N1zX00nWUqM2A3u5RiPc3NSxY3qF/Ff5CxMdmTIpmPpyJ1aIfPF4ldCAePQksikahbsMXk5+MREBZ+kzsEsCvoQa/JrjO/1oz/tG9vZZCG3GD/Rsp1PbVX1IKdy2WrEO1cp4YQVLdPmXJM7TkvXabz3mIptLfb2qI/csZ8ZvHHPFDXtd4aM6BBibO5MINAR0eeFPZU1gtkXxv+h+i5szht0O/FlP4nrLfBM/6Wz7Y4QEZJRBKaIicbzizqqjzIG6B4n2Ho6+3ImAf3XLTDw==</CipherValue> </CipherData> </EncryptedKey> </KeyInfo> <CipherData> <CipherValue>ekEvjGOU3CKUhcqn178SRAn6mcL5Z3OJQqFEbSH4qjvZqDMZmGyS1TiDqCf38aJMxo1gFM3o9N18awCMXmoij/xBVJTvoC6wuLLTK+dGNrj9KUlVpPdErOq+3ZBj80ewYa6ZNp2Z7H7i7ttNt2SJNSA7OoOK8heYURuMHW5yUlfnHwmnZPgxptofLsWyFtdNKwzy/W2+rNMPQ7i4V8oosjw4hvgfzxBK00Eip424RqFxNVo4kV1rNMTEaoLnn0LIhk4G4ZrpdnKdQ2K2bKI99ODRAEpA02oc66sO7wGR+ZfmCEewt8dUoCX8L55GZGISrW2xhZE8WqT7YjWs6g5DiHiZRNR2kh8Mjp+y9AOLuJ1ilLHGpp55R2PxqczMQXxdtvRoeAGC9CqaZex2KsBYGBhTK1tx7DchLrCLeiZZbxtdvL3/NMsYTuM8HP/ZXzKLmk3bp5v1RU6hELyl8uQOsuDZBcMndnwYphOAGpmI6TL/ZoNsMUtGV7RhfUn/7Z/8Ktgc1r8rvOqhC0wdVCzOVclEyhlmjg2yBXefO9lcC9UzKjw5C5Yv6OozT1p9vpI8YaMLfK1aR3U24CjQONgjD+c7gXRRK2mDw+ILeEXkJdQ=</CipherValue> </CipherData> </EncryptedData> </connectionStrings> ```
|c#|asp.net-core|rabbitmq|masstransit|
I want to sort the values of this dictionary from low to high: For example, input: ``` average = {'ali': 7.83, 'mahdi': 13.4, 'hadi': 16.2, 'hasan': 3.57} ``` I want the output to be like this: ``` {'hasan': 3.57, 'ali': 7.83, 'mahdi': 13.4, 'hadi': 16.2} ``` I tested this method, but because my data is decimal, it gives an error: ``` dict(sorted(average.items(), key=lambda item: item[1])) ``` Error: ``` ave_dict_sort = dict(sorted(average.items(), key=lambda item: item[1])) ^^^^^^^^^^^^^ AttributeError: 'float' object has no attribute 'items' ```
I have a web application (.NET Framework 4.8 API) and I need to be able to test on my local machine with various settings, and I do not want them accidentally checked into the source code repository. For example, Web.config would have our standard settings but my Web.Local.config would have settings I may need to debug a client's problem. It would be marked as "ignored" for the purposes of git commits. I installed the SlowCheetah extension and NuGet package, but that does not seem to work. When testing, the transform is not being performed: the value in the Web.config is shown on the page. The TransformOnBuild property is not showing in the project file. I am even open to adding a post-build script.

Visual Studio 2019 Community, .NET Framework 4.8
How to do a web.config transform during build for local debugging?
|c#|.net|visual-studio|web-config|
I'm using the Apphud SDK to fetch in-app purchase products. The SDK is used to purchase products in an iOS app. I use the following code:

    Apphud.paywallsDidLoadCallback { paywalls in
        if let paywall = paywalls.first(where: { $0.identifier == "your_paywall_id" }) {

I get the following error:

    Cannot pass function of type '([ApphudPaywall]) async -> Void' to parameter expecting synchronous function type

`paywallsDidLoadCallback` is defined as:

    @MainActor @objc public static func paywallsDidLoadCallback(_ callback: @escaping ([ApphudPaywall]) -> Void) {
        ApphudInternal.shared.performWhenOfferingsReady {
            callback(ApphudInternal.shared.paywalls)
        }
    }

Is this because the Apphud SDK does not support async/await? Since this is a third-party SDK, is there a quick fix? Please help!
apphud sdk and completion handlers
|ios|swift|in-app-purchase|
The problem was that in my Users table in the database, the email field had a capital 'E', like 'Email'. However, Laravel by default searches for 'email' with a lowercase 'e', and that's why. If you don't want to modify the column's name in your database, you can paste the following code into your User model:

    public function getEmailForPasswordReset(): string
    {
        return $this->Email;
    }

This code will override `getEmailForPasswordReset` on your User model, and you can customize the field that's going to be searched. Hope it helps some lost soul like me!
I don't know if I did it right, but I found a solution to make it work the way I want. Based on the answer here https://stackoverflow.com/a/67083089/21533506 I added the following code:

```
add_filter( 'woocommerce_package_rates', 'disable_shipping_method_based_on_location', 10, 2 );
function disable_shipping_method_based_on_location( $rates, $package ) {
    $city = $package['destination']['city'];

    // If the city is Paris, hide flat_rate:2, flat_rate:4 and local_pickup:3
    if ( 'Paris' === $city ) {
        unset( $rates['flat_rate:2'] );
        unset( $rates['flat_rate:4'] );
    } else {
        // Otherwise, hide flat_rate:6, flat_rate:7 and local_pickup:3 for all other localities
        unset( $rates['flat_rate:6'] );
        unset( $rates['flat_rate:7'] );
        unset( $rates['local_pickup:3'] );
    }
    return $rates;
}
```

After that I added the following code to hide all delivery methods until the address is filled in:

```
add_filter('woocommerce_package_rates', 'hide_shipping_until_address');
function hide_shipping_until_address($rates) {
    $address = WC()->customer->get_shipping();
    $city  = isset($address['city']) ? $address['city'] : '';
    $state = isset($address['state']) ? $address['state'] : '';

    if (empty($city) || empty($state)) {
        foreach ($rates as $rate_id => $rate) {
            unset($rates[$rate_id]);
        }
    }
    return $rates;
}
```

After that I modified the code from here https://stackoverflow.com/a/77896304/21533506 to display the local pickup in the first position:

```
add_filter( 'woocommerce_package_rates', 'filter_woocommerce_package_rates', 100, 2 );
function filter_woocommerce_package_rates( $rates, $package ) {
    $free_shipping_exists = false; // Initialize free shipping flag

    // Loop through shipping rates for the current shipping package
    foreach ( $rates as $rate_key => $rate ) {
        // If method is free shipping, set free shipping flag to true
        if ( 'free_shipping' === $rate->method_id ) {
            $free_shipping_exists = true;
            break;
        }
    }

    // If free shipping exists, hide other shipping methods except local pickup
    if ( $free_shipping_exists ) {
        foreach ( $rates as $rate_key => $rate ) {
            // If method is not local pickup and not free shipping, unset it
            if ( 'local_pickup' !== $rate->method_id && 'free_shipping' !== $rate->method_id ) {
                unset( $rates[$rate_key] );
            }
            // If method is local pickup, set cost to zero
            elseif ( 'local_pickup' === $rate->method_id ) {
                $rates[$rate_key]->cost  = 0;
                $rates[$rate_key]->taxes = array_fill_keys( array_keys( $rates[$rate_key]->taxes ), 0 );
            }
        }
    }
    return $rates;
}
```
Getting error while pushing image to GCP Artifact Registry
|docker|image|google-cloud-platform|artifact|
null
I am working on a virtualised data grid for my application. I use `transform: translateY` for the table offset on scroll to make the table virtualised. I developed all the functionality in a React 17 project, but when I migrated to React 18 I found that the data grid behaviour changed for the worse: the data grid started to bounce on scroll. I prepared a minimal code extract that reproduces my problem. To make sure that the code is the same for React 17 and React 18, I change only the import of ReactDOM from 'react-dom/client' to 'react-dom' (which is of course incorrect, since the latter is deprecated) in my index.tsx file. This is the code:

index.html

```
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Virtualised table</title>
  </head>
  <body>
    <noscript>You need to enable JavaScript to run this app.</noscript>
    <div id="root"></div>
  </body>
</html>
```

index.js

```
// import ReactDOM from "react-dom";
import ReactDOM from "react-dom/client";
import { useState } from "react";
import "./styles.css";

let vendors = [];
for (let i = 0; i < 1000; i++) {
  vendors.push({
    id: i,
    edrpou: i,
    fullName: i,
    address: i
  });
}

const scrollDefaults = {
  scrollTop: 0,
  firstNode: 0,
  lastNode: 70,
};

function App() {
  const [scroll, setScroll] = useState(scrollDefaults);
  const rowHeight = 20;
  const tableHeight = rowHeight * vendors.length + 40;

  const handleScroll = (event) => {
    const scrollTop = event.currentTarget.scrollTop;
    const firstNode = Math.floor(scrollTop / rowHeight);

    setScroll({
      scrollTop: scrollTop,
      firstNode: firstNode,
      lastNode: firstNode + 70,
    });
  };

  const vendorKeys = Object.keys(vendors[0]);

  return (
    <div
      style={{ height: "1500px", overflow: "auto" }}
      onScroll={handleScroll}
    >
      <div className="table-fixed-head" style={{ height: `${tableHeight}px` }}>
        <table style={{ transform: `translateY(${scroll.scrollTop}px)` }}>
          <thead style={{ position: "relative" }}>
            <tr>
              {vendorKeys.map((key) => <td>{key}</td>)}
            </tr>
          </thead>
          <tbody>
            {vendors.slice(scroll.firstNode, scroll.lastNode).map((item) => (
              <tr style={{ height: rowHeight }} key={item.id}>
                {vendorKeys.map((key) => <td><div className="data">{item[key]}</div></td>)}
              </tr>
            ))}
          </tbody>
        </table>
      </div>
    </div>
  );
}

// const rootElement = document.getElementById("root");
// ReactDOM.render(<App />, rootElement);

const root = ReactDOM.createRoot(
  document.getElementById('root')
);
root.render(
  <App />
);
```

styles.css

```
* {
  padding: 0;
  margin: 0;
}

.table-fixed-head thead th {
  background-color: white;
}

.row {
  line-height: 20px;
  background: #dafff5;
  max-width: 200px;
  margin: 0 auto;
  box-shadow: 0 0 1px 0 rgba(0, 0, 0, 0.5);
}

.data {
  width: 150px;
  white-space: nowrap;
  overflow: hidden;
  margin-right: 20px;
}
```

I have spent 1.5 days trying to find the reason why the table bounces on scroll in React 18, without result. BTW, `overscroll-behavior: none` doesn't work.