Hey guys, so in class we went over this bit of Java and we were told to find the runtime.

```
for(int i=0; i<N; i=i+2){
    for(int j=N; j<N; j++){
        for(int k=0; k<N; k++){
            System.out.println(i*j);
            System.out.println(i);
        }
    }
}
for(int k=0; k<100; k++){
    System.out.println(k);
}
```

Apparently the runtime for this is O(n), but I don't know why. Wouldn't this be O(n^3) because of the for loops? Are there any tips or tricks for being able to tell immediately what the runtime is that I am missing? Thanks!
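One way to check intuition like this, assuming the middle loop's condition really is `j=N; j<N` (so its body never runs), is to count iterations empirically. Here is a quick Python sketch of the same loop structure:

```python
def count_iterations(N):
    """Count how many times each loop body runs for the Java snippet above."""
    inner = 0
    outer = 0
    i = 0
    while i < N:          # for(int i=0; i<N; i=i+2)
        outer += 1
        j = N
        while j < N:      # for(int j=N; j<N; j++) -- false immediately, body skipped
            inner += 1
            j += 1
        i += 2
    tail = 100            # the final constant-bound loop
    return outer, inner, tail

# The middle loop starts at j == N, so its condition fails on the first check:
# total work grows like N/2 + 100, i.e. O(N), not O(N^3).
print(count_iterations(1000))  # (500, 0, 100)
```

So the trick is not counting the loops, but checking whether each loop's bound actually depends on N and whether its condition can ever be true.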
Why is the runtime for this O(n)?
|data-structures|big-o|
null
There are several problems:

- the data in the question is an image, so no one can reproduce it without tediously retyping it, and even then that does not guarantee one would have exactly what you have. In the future please provide a minimal reproducible example in text form using `dput`. See the information at the top of the [tag:r] tag page for guidance on posting.
- there is no point in assigning NA to `a1data` because it is just overwritten in the next line anyway, as if it had never existed
- `as.vector(a1data['x'])` is a one-element list whose sole component contains the vector `a1data$x`. That is why it thinks you have one element.

Assuming that the time index is consecutive integers starting at 1, all we need is

    ts(a1data$x)

For example, using the first 3 rows of the built-in data frame `BOD`, which has 2 columns named `Time` and `demand`:

    BOD3 <- BOD[1:3, ]
    BOD3
    ##   Time demand
    ## 1    1    8.3
    ## 2    2   10.3
    ## 3    3   19.0

    ts(BOD3$demand)
    ## Time Series:
    ## Start = 1
    ## End = 3
    ## Frequency = 1
    ## [1]  8.3 10.3 19.0
How to add new custom page in Timber Wordpress
|wordpress|timber|custom-pages|
null
```
$("#examplee").DataTable({
    "data": data,
    "columns": [
        { "data": 'numeroDossier' },
        { "data": 'nomOuRs' },
        { "data": 'tel' },
        { "data": 'province' },
        {
            data: null,
            render: function () {
                return '<a><i class="btn fa-solid success fa-user-tag" onclick="get_assaj(\''+data.id+'\')" title="selectionner assujetti"></i></a><a><i class="btn fa fa-solid fa-user-pen" style="color: #B197FC" onclick="edit_assaj('+ data +')"></i></a>';
            }
        }
    ]
})
```

I have a table that displays the elements retrieved from an API in jQuery. I would like to retrieve the ID of each row so I can add a delete or an edit action.
How do I get the data.id of each row using return in jQuery?
|html|jquery|api|
null
Which `backend` parameter do I need to add? I get this error:

```
Internal Server Error: /accounts/google/login/callback/
Traceback (most recent call last):
  File "/home/zaibe/Desktop/project2/env/lib/python3.10/site-packages/django/core/handlers/exception.py", line 55, in inner
    response = get_response(request)
  File "/home/zaibe/Desktop/project2/env/lib/python3.10/site-packages/django/core/handlers/base.py", line 197, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
TypeError: api_google_oauth2_callback() missing 1 required positional argument: 'backend'
[17/Mar/2024 06:39:23] "GET /accounts/google/login/callback/?code=4%2F0AeaYSHADSCseU_Nkg2BMLc5P8UpfRRqCJUNRIAyHrcW_tX4uQDpPADdj5rTJRS8v6siZHw&scope=email+profile+openid+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email&authuser=0&prompt=consent HTTP/1.1" 500 65562
```

```
from django.http import JsonResponse
from django.contrib.auth import authenticate, login
from django.views.decorators.csrf import csrf_exempt
from .forms import UserRegistrationForm
from .models import Profile
from django.shortcuts import redirect
from social_django.utils import psa
import json
from social_django.views import auth
from django.views.generic import RedirectView
from django.shortcuts import render


def default_callback(request):
    return render(request, 'default_callback.html')


@psa()
def api_google_oauth2_login(request):
    backend = 'social_core.backends.google.GoogleOAuth2'
    return redirect('social:begin', backend=backend)


@psa()
def api_google_oauth2_callback(request):
    backend = 'social_core.backends.google.GoogleOAuth2'
    return auth(request, backend=backend)


@csrf_exempt
def api_user_login(request):
    if request.method == 'POST':
        # Retrieve raw JSON data from the request body
        data = json.loads(request.body)
        # Extract username and password from the JSON data
        username = data.get('username')
        password = data.get('password')
        if username is None or password is None:
            return JsonResponse({'error': 'Missing credentials'}, status=400)
        user = authenticate(request, username=username, password=password)
        if user is not None:
            login(request, user)
            return JsonResponse({'message': 'Authenticated successfully'}, status=200)
        else:
            return JsonResponse({'error': 'Invalid login'}, status=401)
    else:
        return JsonResponse({'error': 'Invalid request method'}, status=405)


@csrf_exempt
def api_user_register(request):
    if request.method == 'POST':
        form = UserRegistrationForm(request.POST)
        if form.is_valid():
            new_user = form.save(commit=False)
            new_user.set_password(form.cleaned_data['password'])
            new_user.save()
            Profile.objects.create(user=new_user)
            return JsonResponse({'message': 'Registration successful'}, status=201)
        else:
            return JsonResponse({'error': form.errors}, status=400)
    else:
        return JsonResponse({'error': 'Invalid request method'}, status=405)


class UserRedirectView(RedirectView):
    """
    This view is needed by the dj-rest-auth library in order for the Google
    login to work. It's a bug.
    """
    permanent = False

    def get_redirect_url(self):
        return "http://127.0.0.1:8000/accounts/google/login/callback/"  # Replace with your actual redirect URL
```

and this is the URL file code

```
from django.urls import path, include
from . import views

urlpatterns = [
    path('login/', views.api_user_login, name='api_user_login'),
    path('register/', views.api_user_register, name='api_user_register'),
    path('auth/', include('social_django.urls', namespace='social')),  # Social authentication URLs
    # Add the following URLs for Google OAuth2 authentication
    path('google/login/', views.api_google_oauth2_login, name='api_google_oauth2_login'),
    path('google/login/callback/', views.api_google_oauth2_callback, name='api_google_oauth2_callback'),
    path('default-callback/', views.default_callback, name='default_callback'),  # Remove trailing slash
    # Removed <str:backend> from the callback URL since it's not needed in this case
    # Add the redirect view URL
    path("~redirect/", view=views.UserRedirectView.as_view(), name="redirect"),
]
```
Django project with a RESTful API for user registration and login using Google or Apple OAuth
|django|django-rest-framework|django-views|google-oauth|social-auth-app-django|
null
|ruby-on-rails|ruby|testing|ruby-riot|
I have created a microservice for sharing proto files (the server), and I have a module and a service for calling it. Now, on the client side, I want to call that microservice as soon as my application starts. I already use `onApplicationBootstrap()` in my app.module file, but that does not work, because I use `onModuleInit` for my gRPC client config in other modules. Is there any way to run this before any other modules?
Running a service file before the bootstrap function in NestJS
|javascript|node.js|nestjs|
null
Riot is a fast, expressive, and contextual unit testing framework for [tag:Ruby]. For more information, see the [GitHub page][1]. [1]: https://github.com/thumblemonks/riot
null
Riot is a fast, expressive, and contextual unit testing framework for Ruby.
null
When I click on the data in the map, it automatically draws a line vertically and horizontally. I have tried everything with all the properties but am still not able to remove it. How can I remove that line? **Please find the attached screenshot** [![I want to remove those lines where we have the label of 42; there we have a horizontal and a vertical line][1]][1] [1]: https://i.stack.imgur.com/hk4yr.png
I wrote a program in which two clients connect to a server; one of the clients sends a message to the server, and the server forwards that message to the other client. I use `base_rdset` as the listen set and `rdset` as the ready set, and the clients' socket descriptors are stored in an array. When I start the server, sometimes it runs well, but sometimes it fails with the error `select: Bad file descriptor`. What is happening in this program?

```c
#include <55header.h>
#define SIZE 3

int main(int argc, char *argv[])
{
    // ./server 192.168.176.132 8080
    ARGS_CHECK(argc, 3);
    struct sockaddr_in addr;
    addr.sin_family = AF_INET;                 // ipv4
    addr.sin_port = htons(atoi(argv[2]));      // port
    addr.sin_addr.s_addr = inet_addr(argv[1]); // ip
    int socket_fd = socket(AF_INET, SOCK_STREAM, 0);
    ERROR_CHECK(socket_fd, -1, "socket");
    int res_bind = bind(socket_fd, (struct sockaddr *)&addr, sizeof(addr));
    ERROR_CHECK(res_bind, -1, "bind");
    int res_listen = listen(socket_fd, 10);
    ERROR_CHECK(res_listen, -1, "listen");
    fd_set base_rdset;
    fd_set rdset;
    char buf[1024];
    int net_fds[SIZE] = {0};
    FD_SET(socket_fd, &base_rdset);
    while (1) {
        FD_ZERO(&rdset);
        memcpy(&rdset, &base_rdset, sizeof(base_rdset));
        int ready_num = select(10, &rdset, NULL, NULL, NULL);
        ERROR_CHECK(ready_num, -1, "select");
        if (FD_ISSET(socket_fd, &rdset)) {
            for (int i = 0; i < SIZE; i++) {
                if (net_fds[i] == 0) {
                    net_fds[i] = accept(socket_fd, NULL, NULL);
                    FD_SET(net_fds[i], &base_rdset);
                    break;
                }
            }
        }
        for (int i = 0; i < SIZE; i++) {
            if (net_fds[i] == 0) {
                continue;
            }
            if (FD_ISSET(net_fds[i], &rdset)) {
                memset(buf, 0, sizeof(buf));
                int count_chars = recv(net_fds[i], buf, sizeof(buf), 0);
                ERROR_CHECK(count_chars, -1, "recv");
                if (count_chars == 0) {
                    close(net_fds[i]);
                    FD_CLR(net_fds[i], &base_rdset);
                    net_fds[i] = 0;
                    continue;
                }
                for (int j = 0; j < SIZE; j++) {
                    if (j == i || net_fds[j] == 0) {
                        continue;
                    }
                    send(net_fds[j], buf, count_chars, 0);
                }
            }
        }
    }
    return 0;
}
```

[enter image description here](https://i.stack.imgur.com/9JRij.png) After I added `FD_ZERO(&base_rdset);`, the problem seems to be solved, but I am not sure. I want to know why this error occurred.
null
Assuming the data has been stored in table named Numbers ``` SQL WITH NUM AS ( SELECT A.*, ROW_NUMBER() OVER(PARTITION BY ID1, ID2 ORDER BY VALUE) AS RN FROM NUMBERS A ) SELECT A.ID1, A.ID2, A.LOWER, A.VALUE,A.UPPER, A.MEASUREMENT, SUM(COALESCE(B.MEASUREMENT, 0)) AS DESIRED FROM NUM A LEFT JOIN NUM B ON A.ID1 = B.ID1 AND A.ID2 = B.ID2 AND A.LOWER >= B.LOWER AND A.UPPER <= B.UPPER AND A.RN > B.RN GROUP BY A.ID1, A.ID2, A.LOWER, A.VALUE,A.UPPER, A.MEASUREMENT ``` [Fiddle link with query output][1] [1]: https://dbfiddle.uk/y4BVt32r
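The self-join logic above can also be sketched procedurally. This is a minimal Python illustration (with hypothetical rows, since the original table is not shown) of summing the measurements of lower-ranked rows whose `[lower, upper]` interval contains the current row's interval:

```python
def desired_sums(rows):
    """rows: list of dicts with id1, id2, lower, value, upper, measurement.
    Mirrors the SQL: rank rows per (id1, id2) by value, then for each row sum
    the measurements of lower-ranked rows whose interval contains it."""
    from collections import defaultdict
    groups = defaultdict(list)
    for r in rows:
        groups[(r["id1"], r["id2"])].append(r)
    out = {}
    for key, grp in groups.items():
        grp.sort(key=lambda r: r["value"])       # ROW_NUMBER() ... ORDER BY value
        for rn, a in enumerate(grp):
            total = sum(
                b["measurement"]
                for prev_rn, b in enumerate(grp)
                if prev_rn < rn                  # A.RN > B.RN
                and a["lower"] >= b["lower"]     # containment, as in the join
                and a["upper"] <= b["upper"]
            )
            out[(key, a["value"])] = total
    return out

# Hypothetical data: the second row's interval sits inside the first row's.
rows = [
    {"id1": 1, "id2": 1, "lower": 0, "value": 5, "upper": 10, "measurement": 2},
    {"id1": 1, "id2": 1, "lower": 1, "value": 6, "upper": 9,  "measurement": 3},
]
print(desired_sums(rows))  # {((1, 1), 5): 0, ((1, 1), 6): 2}
```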
You could try:

```
lapply(
  my_list,
  \(x) if ('col3' %in% names(x))
    transform(x, col3 = replace(col3, is.na(col3) & col1 %in% c('v2', 'v3'), 'VAL'))
  else x
)
```

Output:

```
[[1]]
  col1  col2 col3   col4
1   v1  wood  cup   <NA>
2   v2  <NA>  VAL   pear
3   v3 water fork banana
4   V2  <NA>  VAL   <NA>
5   V1 water <NA>  apple

[[2]]
  col1 col2 col4
1   v1 wood <NA>
2   v2 <NA> pear

[[3]]
  col1 col3   col4
1   v1  cup   <NA>
2   v2  VAL   pear
3   v3  VAL banana
4   V3  VAL   <NA>
```
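For comparison, the same conditional replacement can be sketched in plain Python over a list of dict-of-columns tables (hypothetical data; `None` plays the role of `NA`):

```python
def fill_col3(tables, targets=("v2", "v3"), value="VAL"):
    """For each table that has a 'col3' column, replace missing col3 entries
    with `value` on rows whose col1 is in `targets`; leave other tables alone."""
    out = []
    for t in tables:
        if "col3" in t:
            t = dict(t)  # shallow copy so the input list is not mutated
            t["col3"] = [
                value if c3 is None and c1 in targets else c3
                for c1, c3 in zip(t["col1"], t["col3"])
            ]
        out.append(t)
    return out

tables = [
    {"col1": ["v1", "v2", "v3"], "col3": ["cup", None, None]},
    {"col1": ["v1", "v2"]},  # no col3: returned unchanged
]
print(fill_col3(tables)[0]["col3"])  # ['cup', 'VAL', 'VAL']
```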
I have an Avalonia project in C# that was just set up. Currently I am testing whether Avalonia fits my requirements, and I am stuck at a simple point: I want to create a custom user control of type `UserControl` and provide a property in that control that should be set by a binding from the view that is using the control.

What I have:

1. A new `MainWindow.axaml` with this content

```
<Window xmlns="https://github.com/avaloniaui"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:c="using:My.Avalonia.Controls"
        xmlns:vm="using:My.Avalonia.ViewModels"
        mc:Ignorable="d" d:DesignWidth="800" d:DesignHeight="450"
        x:Class="My.Avalonia.Views.MainWindow"
        Icon="/Assets/avalonia-logo.ico"
        Title="My.Avalonia"
        x:DataType="vm:MainViewModel">
  <StackPanel>
    <c:MyControl Text="{Binding MyObject.DisplayName, Mode=OneWay}"/>
    <!-- This works: -->
    <Label Content="{Binding MyObject.DisplayName, Mode=OneWay}"/>
  </StackPanel>
</Window>
```

2. A new `UserControl`, created by the Avalonia template.

In `MyControl.axaml`:

```
<UserControl xmlns="https://github.com/avaloniaui"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             xmlns:c="using:My.Avalonia.Controls"
             mc:Ignorable="d" d:DesignWidth="800" d:DesignHeight="450"
             x:Class="My.Avalonia.Controls.MyControl"
             x:Name="iconControl"
             x:DataType="c:MyControl">
  <Design.DataContext>
    <c:MyControl />
  </Design.DataContext>
  <Label Content="{Binding Text, Mode=OneWay}" Background="Aqua"/>
</UserControl>
```

In `MyControl.axaml.cs`:

```
using Avalonia;
using Avalonia.Controls;
using Avalonia.Media;
using My.Avalonia.Models;
using System.Collections.Specialized;
using System.ComponentModel;

namespace My.Avalonia.Controls
{
    public partial class MyControl : UserControl, INotifyPropertyChanged
    {
        public MyControl()
        {
            InitializeComponent();
            DataContext = this;
        }

        public static readonly StyledProperty<string?> TextProperty =
            AvaloniaProperty.Register<MyControl, string?>(nameof(Text));

        public string? Text
        {
            get { return GetValue(TextProperty); }
            set { SetValue(TextProperty, value); }
        }
    }
}
```

My problem: when I run the program, the binding simply does not work. The source string variable has a valid string, because when I bind it to a standard label, it works. In the debug output I see the following line: `Exception thrown: 'System.InvalidCastException' in System.Private.CoreLib.dll`. It disappears if I remove the binding. It stays if I remove the Label in `MyControl`, so the source seems to be the binding between the MainWindow and the UserControl. I don't understand why this is thrown, as both have the type `string?`. Did I miss something in the Avalonia documentation?
I'm trying to create a code for a **perfectly optimal chess endgame**. By that I mean that the losing player tries to delay the checkmate as long as possible, while the winning player tries to checkmate the opponent as soon as possible. This is currently my best code for the chess endgame ([pastebin link](https://pastebin.com/zkcbgANy)):

```python
import chess

def simplify_fen_string(fen):
    parts = fen.split(' ')
    simplified_fen = ' '.join(parts[:4])  # Keep only the position information
    return simplified_fen

def evaluate_position(board):
    #print(f"Position: {board.fen()}")
    if board.is_checkmate():
        ### print(f"Position: {board.fen()}, return -1000")
        return -1000  # The side to move is checkmated
    elif board.is_stalemate() or board.is_insufficient_material() or board.can_claim_draw():
        ### print(f"Position: {board.fen()}, return 0")
        return 0  # Draw
    else:
        #print(f"Position: {board.fen()}, return None")
        return None  # The game continues

def create_AR_entry(result, children, last_move):
    return {"result": result, "children": children, "last_move": last_move, "best_child": None}

def update_best_case(best_case):
    if best_case == 0:
        return best_case
    if best_case > 0:
        return best_case - 1
    else:
        return best_case + 1

def update_AR_for_mate_in_k(board, AR, simplified_initial_fen, max_k=1000):
    evaluated_list = []
    #print(f"")
    for k in range(1, max_k + 1):
        print(f"K = {k}")
        changed = False
        for _t in range(2):  # Make sure the update runs twice for every k
            print(f"_t = {_t}")
            for fen in list(AR.keys()):
                #print(f"Fen = {fen}, looking for {simplified_initial_fen}, same = {fen == simplified_initial_fen}")
                board.set_fen(fen)
                if AR[fen]['result'] is not None:
                    if fen == simplified_initial_fen:
                        print(f"Finally we found a mate! {AR[fen]['result']}")
                        return
                    continue  # If we already have an evaluation, skip
                # Default values for the best and worst case
                best_case = float("-inf")
                #worst_case = float("inf")
                nones_present = False
                best_child = None
                for move in board.legal_moves:
                    #print(f"Move = {move}")
                    board.push(move)
                    next_fen = simplify_fen_string(board.fen())
                    #AR[fen]['children'].append(next_fen)
                    if next_fen not in AR:
                        AR[next_fen] = create_AR_entry(evaluate_position(board), None, move)
                        evaluated_list.append(next_fen)
                        if ((len(evaluated_list)) % 100000 == 0):
                            print(f"Evaluated: {len(evaluated_list)}")
                    board.pop()
                    #for child in AR[fen]['children']:
                    next_eval = AR[next_fen]['result']
                    if next_eval is not None:
                        if (-next_eval > best_case):
                            best_case = max(best_case, -next_eval)
                            best_child = next_fen
                        #worst_case = min(worst_case, -next_eval)
                    else:
                        nones_present = True
                if nones_present:
                    if best_case > 0:
                        AR[fen]['result'] = update_best_case(best_case)
                        AR[fen]['best_child'] = best_child
                        changed = True
                else:
                    # Update the evaluation according to the best and worst case
                    #if worst_case == -1000:  # If all moves lead to mate, the player to move can be mated in k moves
                    #    AR[fen] = -1000 + k
                    #    changed = True
                    #elif best_case <= 0:  # If the best case is no better than a draw, it means a draw or a loss
                    #    AR[fen] = max(best_case, 0)  # Avoid setting a value below 0 if a draw is possible
                    #    changed = True
                    #elif best_case == 1000:  # If at least one move mates the opponent, the player to move can force mate in k moves
                    #    AR[fen] = 1000 - k
                    #    changed = True
                    AR[fen]['result'] = update_best_case(best_case)
                    AR[fen]['best_child'] = best_child
                    changed = True
                ### print(f"Position = {fen}, results = {best_case} {nones_present} => {AR[fen]['result']}")
                if (fen == "8/8/3R4/8/8/5K2/8/4k3 b - -" or fen == "8/8/3R4/8/8/5K2/8/5k2 w - -"):
                    print("^^^^^^^^")  # remove here
                #break
            #if not changed:
            #    break  # Stop the loop if nothing changed
        #if not changed:
        #    break  # Stop the main loop if nothing changed in the last iteration

def print_draw_positions(AR):
    """Print all drawn positions (value 0) recorded in the AR dictionary."""
    print("Drawn positions:")
    for fen, value in AR.items():
        if True or (value > 990 and value < 1000):
            print(f"FEN>: {fen}, Value: {value}", "\n", chess.Board(fen), "<\n")

def find_path_to_end(AR, fen):
    if AR[fen]['result'] is None:
        print(f"Unfortunately, there is no path that is known to be the best")
    fen_i = fen
    print(chess.Board(fen_i), "\n<")
    path = fen
    while AR[fen_i]['best_child'] is not None:
        fen_i = AR[fen_i]['best_child']
        print(chess.Board(fen_i), "\n<")
        path = path + ", " + fen_i
    print(f"Path is: {path}")

def main():
    initial_fen = "1k6/5P2/2K5/8/8/8/8/8 w - - 0 1"
    initial_fen_original = "8/8/8/8/3Q4/5K2/8/4k3 w - - 0 1"
    initial_fen_mate_in_one_aka_one_ply = "3r1k2/5r1p/5Q1K/2p3p1/1p4P1/8/8/8 w - - 2 56"
    initial_fen_mate_in_two_aka_three_plies = "r5k1/2r3p1/pb6/1p2P1N1/3PbB1P/3pP3/PP1K1P2/3R2R1 b - - 4 28"
    initial_fen_mated_in_two_plies = "r5k1/2r3p1/p7/bp2P1N1/3PbB1P/3pP3/PP1K1P2/3R2R1 w - - 5 29"
    mate_in_two_aka_three_plies_simple = "8/8/8/8/3R4/5K2/8/4k3 w - - 0 1"
    mated_in_one_aka_two_plies_simple = "8/8/3R4/8/8/5K2/8/4k3 b - - 1 1"
    mate_in_one_aka_one_ply_simple = "8/8/3R4/8/8/5K2/8/5k2 w - - 2 2"
    initial_fen = mate_in_two_aka_three_plies_simple
    initial_fen = "1k6/5P2/2K5/8/8/8/8/8 w - - 0 1"
    initial_fen = "1k6/8/2K5/8/8/8/8/8 w - - 0 1"
    initial_fen = "8/8/8/8/8/7N/1k5K/6B1 w - - 0 1"
    initial_fen = "7K/8/k1P5/7p/8/8/8/8 w - - 0 1"
    simplified_fen = simplify_fen_string(initial_fen)
    board = chess.Board(initial_fen)
    # Initialize AR with the starting position
    AR = {simplified_fen: {"result": None, "last_move": None, "children": None, "best_child": None}}
    update_AR_for_mate_in_k(board, AR, simplified_fen, max_k=58)  # Update AR
    #print_draw_positions(AR)
    print(f"AR for initial fen is = {AR[simplified_fen]}")
    find_path_to_end(AR, simplified_fen)

main()
```

However, for the initial FEN `"8/8/8/4k3/2K4R/8/8/8 w - - 0 1"` it doesn't give the optimal result like this one: https://lichess.org/analysis/8/8/8/4k3/2K4R/8/8/8_w_-_-_0_1?color=white Rather, it gives 27 plies [like this](https://pastebin.com/hZ6AaBZe), while the lichess link above gives 1000 - 977 == 23 plies, which I suppose is the correct number. Any help finding the bug will be highly appreciated.
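As a side note on the scoring convention the code relies on: `update_best_case` shrinks a mate score by one each ply it is propagated up, so scores closer to ±1000 mean faster mates. A dependency-free sketch of that negamax-style propagation (over an abstract game tree, not real chess positions) looks like this:

```python
MATE = 1000  # terminal loss for the side to move, as in evaluate_position()

def toward_zero(score):
    """Shrink a mate score by one ply as it propagates up (cf. update_best_case)."""
    if score > 0:
        return score - 1
    if score < 0:
        return score + 1
    return 0

def solve(tree):
    """Negamax over a nested-list tree: int leaves are scores from the mover's
    point of view, lists are positions whose children are the legal moves.
    Returns the score for the side to move."""
    if isinstance(tree, int):
        return tree
    best = max(-solve(child) for child in tree)
    return toward_zero(best)

print(solve([-MATE]))        # 999: one move reaches a child where the opponent is mated
print(solve([[[-MATE]]]))    # 997: the same mate is three plies deep
print(solve([[-MATE], 0]))   # 0: first option walks into mate, so take the draw
```

The decrement per ply is exactly what makes the winning side prefer the shortest mate and the losing side the longest one.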
Use case: I want to have a REST API that will fire an event in the system using RabbitMQ/AMQP, and I want to have the SecurityContext of the currently logged in user propagated to that event, in order to use the audit annotations in my JPA Repository (lastModifiedBy) and annotations for checking roles for users in services. I currently have this piece of code when declaring the AmqpTemplate bean: ``` rabbitTemplate.setBeforePublishPostProcessors( MessagePostProcessor { message -> val authentication = SecurityContextHolder.getContext().authentication message.messageProperties.setHeader("x-user-id", objectMapper.writeValueAsString(authentication)) message } ) ``` I want to serialize the Authentication object and then deserialize it on the side of the consumer. I'm currently having it like this: ``` @RabbitListener(queues = [TrackingEventPublisher.QUEUE_NAME]) fun handleTrackingMessage(trackingEvent: TrackingEvent, headers: MessageHeaders) { val xUserId = headers["x-user-id"].toString() val propagatedAuthentication = try { objectMapper.readValue(xUserId, AnonymousAuthenticationToken::class.java) } catch (_: Exception) { objectMapper.readValue(xUserId, OAuth2AuthenticationToken::class.java) } SecurityContextHolder.setContext( SecurityContextHolder.createEmptyContext().apply { authentication = propagatedAuthentication } ) logger.info { "Logging event: ${trackingEvent.uuid} started, headers: ${headers}" } logger.info { "Logging event: security ${SecurityContextHolder.getContext().authentication}" } trackingService.save(trackingEvent) logger.info { "Logging event: ${trackingEvent.uuid} finished" } } ``` The problem with this approach is that ObjectMapper can't actually deserialize the Authentication objects. It can't do it because the fields of that object are abstract classes or interfaces and they can't be instantiated (they also don't have a default constructor and Jackson is requiring it). 
I wonder if there's some idiomatic way to do it, or maybe something that has been already done by someone in some library. I know that there is a different approach that utilizes Spring Integration and it does have support for SecurityContext propagation, but I don't feel ready for learning about EIP, so I first want to do something simpler that is closer to the AMQP protocol, and not protocol-agnostic. Do you have any ideas how to actually approach this?
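One common workaround (sketched here in Python for brevity, with hypothetical field names) is to avoid serializing the framework's whole `Authentication` object graph: put only the minimal facts the consumer needs (principal name, granted authorities) into the message header, and rebuild a fresh, concrete token from those on the consumer side:

```python
import json

def auth_to_header(name, authorities):
    """Producer side: serialize only what the consumer needs, not the whole
    Authentication object (whose interface-typed fields Jackson can't rebuild)."""
    return json.dumps({"name": name, "authorities": sorted(authorities)})

def header_to_auth(header):
    """Consumer side: recover the principal name and authorities, then use
    them to construct a fresh authentication token in your framework."""
    data = json.loads(header)
    return data["name"], set(data["authorities"])

header = auth_to_header("alice", {"ROLE_USER", "ROLE_AUDITOR"})
name, roles = header_to_auth(header)
print(name, sorted(roles))  # alice ['ROLE_AUDITOR', 'ROLE_USER']
```

In Spring terms this would mean building, say, a `UsernamePasswordAuthenticationToken` from the recovered name and authorities inside the listener, rather than trying to deserialize the original token class from JSON.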
How do I propagate the current SecurityContext to my @RabbitListener in Spring Boot?
|spring|spring-security|spring-amqp|
I am using Apps Script to send emails from prompts in Google Sheets. When I test the script, the email appears in my Sent box; however, it does not appear in the recipient's inbox. I have tried manually sending the same email; it appears in my Sent box and does appear in the recipient's inbox. I do not understand how an email can appear in my Sent box both via Apps Script and manually, yet only the manually sent email appears in the recipient's inbox.
Email Sent Using Apps Script Appears in Sent Box but Does Not Appear in Recipient's Inbox
|google-apps-script|
Although I am still unsure as to what the actual issue may be, I have managed to find a fix. Instead of relying on the `JwtService` to provide the verifier instance, I went with the approach of defining it within the module, as seen [in the Ktor docs][1]. Now, my `configureSecurity()` looks like this:

    fun Application.configureSecurity() {
        val jwtService: JwtService by inject()
        val issuer = environment.config.property("jwt.issuer").getString()
        val audience = environment.config.property("jwt.audience").getString()
        val myRealm = environment.config.property("jwt.realm").getString()

        val jwkProvider = JwkProviderBuilder(issuer)
            .cached(10, 24, TimeUnit.HOURS)
            .rateLimited(10, 1, TimeUnit.MINUTES)
            .build()

        install(Authentication) {
            jwt("auth-jwt") {
                realm = myRealm
                verifier(jwkProvider, issuer) {
                    withAudience(audience)
                    withIssuer(issuer)
                    acceptLeeway(3)
                }
                validate { credential ->
                    jwtService.customValidator(credential)
                }
            }
        }
    }

This seems to fix the crash and everything works just fine. As to the root cause of the original issue, I have only managed to pinpoint that it was caused by the `jwtService.jwtVerifier` instance being used. That seemed to be calling `getRSAPublicKey()`, which would then attempt to create the `jwkProvider` lazily, and then crash.

    private val jwkProvider by lazy {
        JwkProviderBuilder(jwtIssuer)
            .cached(10, 24, TimeUnit.HOURS)
            .rateLimited(10, 1, TimeUnit.MINUTES)
            .build()
    }

    private fun getRSAPublicKey() =
        jwkProvider["3nGxSDQMSbeyMhuFT79exJ2hfnP8am"].publicKey as RSAPublicKey

If someone figures out a fix for the issue while still using the `JwtService` class, I will accept that answer.

[1]: https://ktor.io/docs/jwt.html#validate-payload
You need to convert to a continuous scale to use minor ticks, since there are no minor breaks on a discrete axis:

```r
dt %>%
  ggplot(aes(var1, as.numeric(factor(ca)), fill = var2)) +
  geom_col(width = 0.8, orientation = 'y') +
  stat_summary(orientation = 'y', fun = sum, geom = "point",
               colour = "grey40", fill = "grey40",
               aes(shape = var2), size = 2) +
  geom_vline(xintercept = 0, colour = "grey30", linetype = "dotted") +
  scale_y_continuous('ca', labels = levels(factor(dt$ca)),
                     breaks = seq_along(levels(factor(dt$ca)))) +
  scale_shape_manual(values = c(20, 20, 20)) +
  guides(y = guide_axis(minor.ticks = TRUE)) +
  theme(axis.minor.ticks.length.y = unit(3, 'mm'),
        axis.ticks.length.y = unit(0, 'mm'))
```

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/EG8Sy.jpg
null
Assume I have the following bar chart made with `library(plotly)` (the space on the right side is intentional):

```r
library(dplyr)
library(plotly)
library(tidyr)

d <- tibble(cat = LETTERS[1:3], val = c(25, 10, 30), total = 40)

(bars <- d %>%
   mutate(remaining = total - val) %>%
   pivot_longer(cols = c(val, remaining)) %>%
   plot_ly(x = ~ value, y = ~ cat, type = "bar", orientation = 'h',
           color = ~ name, colors = c("#440154FF", "#FDE725FF")) %>%
   layout(xaxis = list(title = NA, range = c(0, 60)),
          yaxis = list(title = NA),
          showlegend = FALSE, barmode = "stack"))
```

[![Barchart showing the letters A - C on the y-axis and stacked bars in yellow and purple][1]][1]

I now would like to inset the following pie charts at `x == 50` and at the corresponding y-position:

```r
pies <- d %>%
  rowwise() %>%
  group_map(~ plot_ly(.x) %>%
              add_pie(values = ~ c(val, total - val),
                      marker = list(colors = c("#440154FF", "#FDE725FF"))))
```

The expected outcome looks like this (done by manually pasting the pies into the bar chart):

[![Barchart with pie charts added to the right of the bars][2]][2]

Ideally the x-axis would just span until 40 and there would be no visible axis below the pies.

----------

P.S.: I figured in this reprex that the colors are also messed up; how would I adjust the colors in the pie chart such that they match the colors in the bar chart?

[1]: https://i.stack.imgur.com/0I4qC.png
[2]: https://i.stack.imgur.com/OsIjg.png
null
When I launch Chrome to debug an Angular 17+ app, the debugger does not work. If I refresh, it stops at main.js; if I move forward from there, it debugs normally from that point on. Here is my launch configuration:

```
"configurations": [
  {
    "name": "Launch Chrome",
    "request": "launch",
    "type": "chrome",
    "url": "http://localhost:4202",
    "webRoot": "${workspaceFolder}"
  },
```

And my angular.json (I did have problems with the cache in the past):

```
{
  "$schema": "./node_modules/@angular/cli/lib/config/schema.json",
  "cli": {
    "cache": { "enabled": false },
    "analytics": false
  },
  "version": 1,
  "newProjectRoot": "projects",
  "projects": {
    "client": {
      "projectType": "application",
      "schematics": {
        "@schematics/angular:component": { "style": "scss" }
      },
      "root": "",
      "sourceRoot": "src",
      "prefix": "app",
      "architect": {
        "build": {
          "builder": "@angular-devkit/build-angular:application",
          "options": {
            "outputPath": "dist/client",
            "index": "src/index.html",
            "browser": "src/main.ts",
            "polyfills": ["zone.js"],
            "tsConfig": "tsconfig.app.json",
            "inlineStyleLanguage": "scss",
            "assets": ["src/favicon.ico", "src/assets"],
            "styles": [
              "@angular/material/prebuilt-themes/deeppurple-amber.css",
              "src/styles.scss"
            ],
            "scripts": []
          },
          "configurations": {
            "prod": {
              "budgets": [
                { "type": "initial", "maximumWarning": "500kb", "maximumError": "1mb" },
                { "type": "anyComponentStyle", "maximumWarning": "2kb", "maximumError": "4kb" }
              ],
              "outputHashing": "all",
              "fileReplacements": [
                { "replace": "src/environments/environment.ts", "with": "src/environments/environment.prod.ts" }
              ]
            },
            "development": {
              "optimization": false,
              "extractLicenses": false,
              "sourceMap": true,
              "fileReplacements": [
                { "replace": "src/environments/environment.ts", "with": "src/environments/environment.development.ts" }
              ]
            }
          },
          "defaultConfiguration": "production"
        },
        "serve": {
          "builder": "@angular-devkit/build-angular:dev-server",
          "configurations": {
            "production": { "buildTarget": "client:build:production" },
            "development": { "buildTarget": "client:build:development" }
          },
          "defaultConfiguration": "development"
        },
        "extract-i18n": {
          "builder": "@angular-devkit/build-angular:extract-i18n",
          "options": { "buildTarget": "client:build" }
        },
        "test": {
          "builder": "@angular-builders/jest:run",
          "options": {
            "tsConfig": "tsconfig.spec.json",
            "inlineStyleLanguage": ["scss"],
            "assets": ["src/favicon.ico", "src/assets"],
            "styles": [
              "@angular/material/prebuilt-themes/deeppurple-amber.css",
              "src/styles.scss"
            ],
            "scripts": []
          }
        }
      }
    }
  }
}
```

It's VS Code v1.86.2. What should I look for?
The warning was fixed with an update to:

    env: Python 3.9 conda
    pygraphviz 1.11 conda
    graphviz 8.1.0
**To configure Microsoft Account as the IDP,** you need to register the application in **Microsoft Entra ID tenant (Azure AD tenant)**: Add redirect URL as **`https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`** ![enter image description here](https://i.imgur.com/rkT1emc.png) And **supported account types**: Personal Microsoft accounts only: ![enter image description here](https://i.imgur.com/AdsTwOp.png) Create **Azure AD B2C application** in the **Azure AD B2C tenant** and added redirect URI as **`https://jwt.ms`** ![enter image description here](https://i.imgur.com/MhhxGgY.png) **Now create the IDP by passing the Azure AD application ID and secret value in the Microsoft Entra ID tenant (Azure AD tenant)**: ![enter image description here](https://i.imgur.com/tplS3NQ.png) Run user flow by selecting the IDP: ![enter image description here](https://i.imgur.com/2oHSxP9.png) Select Microsoft Account: ![enter image description here](https://i.imgur.com/6MEScnb.png) ***The user is now able to sign-in successfully and ID token is generated:*** ![enter image description here](https://i.imgur.com/Vir1oco.png) **If you want to fetch access tokens along with ID token**, then you need to Expose an API and scope in the Azure AD B2C application: ![enter image description here](https://i.imgur.com/0zvxciY.png) Grant API permission for the scope: ![enter image description here](https://i.imgur.com/NvKza30.png) While running the user flow, select **resource as your b2c application** and run: ![enter image description here](https://i.imgur.com/IYt5q2Y.png) Now both **ID and access tokens will be generated** when the user will sign in: ![enter image description here](https://i.imgur.com/Ok0AUd6.png)
There is a custom loader that can be configured in Tag Manager that will allow you to set your own path. This custom loader allows you to create any path you like, such as:

    t.example.com/my_special_tracker.js

The full step-by-step instructions are here (not my website): https://www.simoahava.com/analytics/custom-gtm-loader-server-side-tagging/

But here's the summary:

1. Install a code snippet to your Templates Gallery in Tag Manager
2. Create a new client from the template
3. While configuring the client, set your path, such as /gtm.js or my_special_tracker.js
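For illustration only (the hostnames, path, and container ID below are made up), the end result is that the GTM snippet on your pages loads from your own host and path instead of Google's:

```html
<!-- Default snippet: loads the container from Google's domain -->
<script async src="https://www.googletagmanager.com/gtm.js?id=GTM-XXXXXXX"></script>

<!-- With a custom loader client on your server-side tagging container,
     the same container is served from your own host and path -->
<script async src="https://t.example.com/my_special_tracker.js?id=GTM-XXXXXXX"></script>
```

Depending on how you configure the client, the container ID may stay in the query string or be baked into the loader path itself.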
I have to encrypt some data in my project. For this I'm using ECDH and AES-128-CBC. The problem is: I got the key from the other side, and I have to create a public key from it:

    public PublicKey createPublicKey(byte[] key) {
        X509EncodedKeySpec x509key = new X509EncodedKeySpec(key);
        KeyFactory keyfactory = KeyFactory.getInstance("EC");
        return keyfactory.generatePublic(x509key);
    }

But the code above throws an error:

**java.security.spec.InvalidKeySpecException: com.android.org.conscrypt.OpenSSLX509CertificateFactory$ParsingException: Error parsing public key**

What am I doing wrong? Does the key need to have a specified length? For now it's 32 bytes.
VS Code debugging Angular: first launch doesn't debug, second launch stops at main.js and then it's OK
|angular|typescript|visual-studio-code|
Since I updated 2FA to enable trusted devices, I get this error after the login, once the security code is entered:

> Key provided is shorter than 256 bits, only 64 bits provided

I updated my User.php to add:

    /**
     * @ORM\Column(type="integer")
     */
    private int $trustedVersion;

    ...

    public function getTrustedTokenVersion(): int
    {
        return $this->trustedVersion;
    }

I updated my database to add the new column trusted_version and updated security.yaml to use **scheb/2fa-trusted-device**.

I don't think the problem is about 2fa-trusted-device, but I'm a beginner with Symfony and I can't find a solution to this problem. Do you have any idea?
I found a solution. It is probably a dirty solution, but it works.

The problem was to terminate my Activity just after the user manually grants the MANAGE_EXTERNAL_STORAGE permission, knowing that:

- even after the user grants the permission, my Activity does not have the right (1)
- moreover, if I end my Activity with finish() and start the app again, the new Activity does not have the right either. I could not find an explanation for that; it looks like the new Activity keeps some context of the previous one.

My solution is to kill my own pid. I had implemented in my app the ability to launch bash commands, so I launch this command to terminate my Activity:

    static String HaraKiri = "PID=$(ps -ef | grep 'eu.eduphone.install' | grep -v 'grep' | grep -v 'eu.eduphone.install.' |awk '{ print $2 }');echo \"PID=$PID\";kill -15 \"$PID\"";
    ...
    ShellExec(HaraKiri);

**(1) About the reason why we must restart the Activity to get the right after the user grants it:**

In another discussion, https://github.com/termux/termux-app/issues/71#issuecomment-1869222653, https://github.com/agnostic-apollo says that:

- Unreliable/removable volumes like USB OTG devices that are only available on the /mnt/media_rw paths with their own filesystem (vfat/exfat) are assigned the root (0) owner and external_storage (1077) group.
- If an app has been granted the MANAGE_EXTERNAL_STORAGE permission, then the external_storage (1077) group is added to the list of groups that are assigned to the app process when it is forked from zygote, allowing it to access unreliable/removable volumes with the external_storage (1077) group.

My running Activity is not in the group 1077 because it was forked before this group was assigned to the app.
Given the following C program (MSVC does not optimize away the "work" for me, for other compilers you may need to add an `asm` statement): ```c #include <inttypes.h> #include <stdlib.h> #define SIZE 10000 typedef struct { int32_t a, b, c; } Struct; void do_work(Struct* data) { int32_t* a = malloc(sizeof(int32_t) * SIZE), * b = malloc(sizeof(int32_t) * SIZE), * c = malloc(sizeof(int32_t) * SIZE); int32_t* a_ptr = a, * b_ptr = b, * c_ptr = c; for (size_t i = 0; i < SIZE; i++, a_ptr++, b_ptr++, c_ptr++, data++) { *a_ptr = data->a; *b_ptr = data->b; *c_ptr = data->c; } free(a); free(b); free(c); } int main() { Struct* data = malloc(sizeof(Struct) * SIZE); for (size_t i = 0; i < SIZE; i++) { data[i].a = i; data[i].b = i; data[i].c = i; } for (int i = 0; i < 500000; i++) { do_work(data); } free(data); } ``` **Edit:** Disassembly of `do_work()`: ```asm do_work PROC ; COMDAT $LN12: mov QWORD PTR [rsp+8], rbx mov QWORD PTR [rsp+16], rbp mov QWORD PTR [rsp+24], rsi push rdi sub rsp, 32 ; 00000020H mov rbx, rcx mov ecx, 40000 ; 00009c40H call QWORD PTR __imp_malloc mov ecx, 40000 ; 00009c40H mov rsi, rax call QWORD PTR __imp_malloc mov ecx, 40000 ; 00009c40H mov rbp, rax call QWORD PTR __imp_malloc mov r10, rsi lea rcx, QWORD PTR [rbx+8] sub r10, rax mov r11, rbp sub r11, rax mov rdi, rax mov r8, rax mov r9d, 10000 ; 00002710H npad 6 $LL4@do_work: mov edx, DWORD PTR [rcx-8] lea rcx, QWORD PTR [rcx+12] mov DWORD PTR [r10+r8], edx lea r8, QWORD PTR [r8+4] mov eax, DWORD PTR [rcx-16] mov DWORD PTR [r11+r8-4], eax mov eax, DWORD PTR [rcx-12] mov DWORD PTR [r8-4], eax sub r9, 1 jne SHORT $LL4@do_work mov rcx, rsi call QWORD PTR __imp_free mov rcx, rbp call QWORD PTR __imp_free mov rcx, rdi mov rbx, QWORD PTR [rsp+48] mov rbp, QWORD PTR [rsp+56] mov rsi, QWORD PTR [rsp+64] add rsp, 32 ; 00000020H pop rdi rex_jmp QWORD PTR __imp_free do_work ENDP ``` (I have a similar program in Rust with the same conclusions). 
Intel VTune reports that this program is 63.1% memory bound, and 52.4% store bound, with store latency of 26%. It recommends to search for false sharing, but I fail to see how there could be false sharing here. There is no concurrency, all data is owned by one core, the access patterns should be easily predicted and prefetched. I don't see why the CPU needs to stall on the stores here. I thought that maybe the low and high bits of the addresses of the three allocations are the same and that causes them to be mapped to the same cache lines, but I remember reading that modern CPUs don't just drop some bits to assign a cache line but do more complex calculations. Another thought was that maybe after the allocations are freed the CPU is still busy flushing the stores, and in the next run they are assigned the same address (or a close one) by the allocator and that brings problems for the CPU as it has to wait before storing new data. So I tried to not free the allocations, but that caused the code to be much slower. I'm on Windows 11, laptop Intel Core i9-13900HX, 32 logical cores, 8 Performance Cores and 16 Efficient Cores.
Inset pie chart into bar chart
|r|plotly|
My app has min SDK version 19 and the APK has the following structure:

[![enter image description here][1]][1]

After I switched to min SDK version 21, the structure changed:

[![enter image description here][2]][2]

And SOTI can read it now. It sounds to me like SOTI has a bug in how it reads this configuration. I don't expect SOTI to fix it, because the `restriction` element is part of API 21.

[1]: https://i.stack.imgur.com/HV48i.png
[2]: https://i.stack.imgur.com/M1fOu.png
The 400 Bad Request error, as we know it, mostly has to do with syntax, which means that the request might be malformed. The reason I was getting the 400 Bad Request error is that I was trying to migrate to the new FCM API v1, and the new FCM API accepts a slightly different JSON payload than what the legacy FCM API used to accept.

The only change that I made to my code is at the place where I form the payload. This is the payload that I had earlier:

    body = new
    {
        token = pushToken,
        notification = new
        {
            title = "Patient Flow",
            body = message,
            sound = soundFileName,
        },
        data = new
        {
            type = notificationType
        }
    };

The updated payload:

    body = new
    {
        message = new
        {
            token = pushToken,
            data = new
            {
                title = "Patient Flow",
                body = NotificationMessage,
                sound = soundFileName,
                notificationType = notificationType
            }
        }
    };

It basically required me to add the new key called 'message' wrapping the body, and that fixed the issue for me.
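For reference, the JSON that the updated C# payload serializes to (values here are placeholders), as sent to the v1 endpoint `https://fcm.googleapis.com/v1/projects/<project-id>/messages:send`, looks roughly like this; note that in the v1 API every value inside `data` must be a string:

```json
{
  "message": {
    "token": "<device-registration-token>",
    "data": {
      "title": "Patient Flow",
      "body": "<notification message>",
      "sound": "<sound file name>",
      "notificationType": "<type>"
    }
  }
}
```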
I have a problem: the MouseLeftButtonUp event handler on an Image doesn't work when the parent (for example, a Grid) has its own MouseLeftButtonDown event handler. However, when I remove the handler from the parent Grid, the MouseLeftButtonUp event handler on the Image works. So what should I do to ensure the MouseLeftButtonUp event handler executes on the Image when it is a child of a Grid that has its own MouseLeftButtonDown event handler? I hope someone will help. Thanks!

Xaml.cs code:

```
public void Exit_MouseLeftButtonUp(object sender, MouseButtonEventArgs e) => Application.Current.Shutdown();
```

Xaml code:

```
<Border Background="#7E2553" CornerRadius="20">
    <Grid Background="#1D2B53" MouseLeftButtonDown="WindowPanel_MouseLeftButtonDown">
        <Image Width="48" Height="48" Margin="0,0,32,0" HorizontalAlignment="Right" VerticalAlignment="Center" Cursor="Hand" MouseLeftButtonUp="Exit_MouseLeftButtonUp">
            <Image.Source>
                <DrawingImage>
                    <DrawingImage.Drawing>
                        <DrawingGroup ClipGeometry="M0,0 V300 H300 V0 H0 Z">
                            <DrawingGroup Opacity="1">
                                <DrawingGroup.ClipGeometry>
                                    <RectangleGeometry RadiusX="0" RadiusY="0" Rect="0,0,300,300" />
                                </DrawingGroup.ClipGeometry>
                                <DrawingGroup Opacity="1">
                                    <GeometryDrawing Brush="White" Geometry="F1 M300,300z M0,0z M244.802,61.643C234.168,50.875 224.661,42.297 210.837,35.523 202.151,31.261 191.669,34.934 187.47,43.717 183.243,52.501 186.889,63.072 195.566,67.334 205.86,72.375 212.776,77.95 220.72,85.992 259.717,125.427 259.717,189.586 220.72,229.011 201.83,248.125 176.694,258.624 149.984,258.624 123.266,258.624 98.138,248.116 79.247,229.011 40.251,189.586 40.251,125.427 79.247,85.992 87.218,77.941 95.099,72.384 104.885,67.352 113.15,63.081 116.608,52.519 112.597,43.726 108.584,34.952 99.952,31.341 91.482,35.487 78.051,42.082 65.844,50.875 55.184,61.643 2.901,114.498 2.901,200.488 55.184,253.352 81.33,279.775 115.662,293 149.994,293 184.334,293 218.666,279.784 244.803,253.352 297.104,200.506 297.104,114.507 244.802,61.643z M149.984,174C159.849,174,167.855,165.993,167.855,156.129L167.882,24.871C167.882,15.007 159.876,7 150.011,7 140.145,7 132.139,15.007 132.139,24.871L132.139,78.486 132.112,156.128C132.112,166.002,140.118,174,149.984,174z" />
                                </DrawingGroup>
                            </DrawingGroup>
                        </DrawingGroup>
                    </DrawingImage.Drawing>
                </DrawingImage>
            </Image.Source>
        </Image>
    </Grid>
</Border>
```

I tried MouseLeftButtonUp and PreviewMouseLeftButtonUp on the Image; neither works. However, other events like MouseLeftButtonDown work fine.
Error "java.security.spec.InvalidKeySpecException: Error parsing public key" with Java
|java|android|aes|x509|
The simplest implementation is

    int my_strncmp(const char *s1, const char *s2, size_t n)
    {
        if (n) {
            do {
                unsigned char a = *s1++, b = *s2++;
                int r = a - b;
                if (r) {
                    return r; // 0 < r ? +1 : -1; // if normalizing to {-1,0,+1}
                }
                if (!a) {
                    return 0;
                }
            } while (--n);
        }
        return 0;
    }

although it will be much slower on long strings than an optimized implementation.

It is very important to use `unsigned char` for the comparison. If you use

    /*unsigned*/ char a = *s1++, b = *s2++;

the result will be wrong. Take for example s1 = "" and s2 = "\xff". With

    char a = *s1++, b = *s2++;

b will be 0xff and will be **sign**-extended to `0xffffffff` in the expression `(a - b)`, so the result will be `0 - 0xffffffff == 1` (positive), whereas with

    unsigned char a = *s1++, b = *s2++;

the result is `0 - 0xff == 0xFFFFFF01` (negative).

A simple test (MSVC-specific; `LONG` and `__debugbreak` come from the Windows headers):

    void test()
    {
        char a = 0, b = 0;
        do {
            do {
                LONG i = my_strncmp(&a, &b, 1);
                LONG j = strncmp(&a, &b, 1);
                if ((i ^ j) & 0x80000000) {
                    __debugbreak();
                }
            } while (--b);
        } while (--a);
    }
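As a portable, standalone sanity check, here is the same function again with a few edge-case assertions spelled out, including the `"\xff"` case discussed above; these spot-check what the MSVC loop verifies exhaustively:

```c
#include <stddef.h>

/* Same logic as the version above, repeated so this snippet compiles
   standalone; returns <0, 0 or >0 like the C library strncmp. */
int my_strncmp(const char *s1, const char *s2, size_t n)
{
    while (n--) {
        unsigned char a = (unsigned char)*s1++;
        unsigned char b = (unsigned char)*s2++;
        if (a != b)
            return a - b;   /* unsigned bytes keep the sign correct */
        if (!a)
            return 0;       /* both strings ended together */
    }
    return 0;
}
```

The high-bit byte must compare *greater* than the empty string's terminator, the comparison must stop after `n` bytes, and `n == 0` must always yield 0.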
There are many services you can use to play around with WordPress.

1. InstaWP - they have a limit of 3 sites at a time, and you can spin up a site for 7 days. They offer an internal migration tool which can be used to migrate to any hosting service.
2. TasteWP - I think they have a 48-hour limit on temporary sites.
3. LocalWP - it is a local development environment; you will need to download and install a 350 MB file to get started. You can keep the site forever, but you can't share the sites easily.
If you are using **personal** Microsoft accounts *(Outlook, Gmail)* to log in, your `code` value will start with *M.C*, which is the default behavior.

I registered one multi-tenant application and granted `API permissions` as below:

![enter image description here](https://i.imgur.com/h5L8n3n.png)

In my case, I used https://jwt.ms as the **redirect URI** in my app registration:

![enter image description here](https://i.imgur.com/gKC9Kk2.png)

When I ran the below authorize URL and signed in with a **personal** Microsoft account, I too got a `code` value starting with *M.C*:

```
https://login.microsoftonline.com/common/oauth2/v2.0/authorize?
client_id=appID
&redirect_uri=https://jwt.ms
&response_type=code
&prompt=select_account
&scope=openid offline_access Calendars.ReadWrite User.Read
&state=12345
```

![enter image description here](https://i.imgur.com/YjBTjOE.png)

Now, I used this `code` to generate an access token via Postman with the below parameters and got a **response** like this:

```http
POST https://login.microsoftonline.com/common/oauth2/v2.0/token
grant_type: authorization_code
client_id: appId
client_secret: secret
scope: https://graph.microsoft.com/.default
code: paste_code_from_above
redirect_uri: https://jwt.ms
```

**Response:**

![enter image description here](https://i.imgur.com/8i5z7HO.png)

When I used this token to fetch the user's calendar events, I got a **response** successfully like this:

```http
GET https://graph.microsoft.com/v1.0/me/events
```

**Response:**

![enter image description here](https://i.imgur.com/8zMFFuI.png)

**Reference:** [php - How to use user's access token or access token based on Tenant ID in Microsoft graph API? - Stack Overflow](https://stackoverflow.com/questions/77431323/how-to-use-users-access-token-or-access-token-based-on-tenant-id-in-microsoft-g/77437176#77437176)
I have a service in Spring Boot and I want to get an optional entity and check that it is not null before doing something.

User repository:

    public interface UserRepository extends JpaRepository<UserEntity, Long> {
        Optional<UserEntity> findByLastName(String lastName);
    }

Role repository:

    public interface RoleRepository extends JpaRepository<RoleEntity, Long> {
        Optional<RoleEntity> findByName(String name);
    }

How can I write this code properly:

    public UserEntityDto addRoleToUser(String username, String rolename) {
        Optional<UserEntity> usrDb = userRep.findByLastName(username);
        Optional<RoleEntity> roleDb = roleRep.findByName(rolename);
        UserEntity userEntity = usrDb.orElse(null);
        RoleEntity roleEntity = roleDb.orElse(null);
        if (userEntity != null && roleEntity != null) {
            userEntity.getRoles().add(roleEntity);
        }
        return userEntityMapper.fromUserEntity(userEntity);
    }
Spring Boot: how to get an optional entity properly and check for null
|java|spring|spring-boot|option-type|
<!-- language-all: sh --> To complement [Prodige69's helpful answer](https://stackoverflow.com/a/78182028/45375): Another way to **wait _only_ for `msiexec.exe`** (and child processes _synchronously_ launched from it, if any) rather than its entire child process _tree_ (`msiexec.exe` plus any child processes launched _asynchronously_ from it, which is the cause of the problem here) is to use **_direct invocation_ or invocation via `cmd.exe /c`**: These are ***syntactically easier* alternatives** to using `(Start-Process ... -PassThru).WaitForExit()` or `Start-Process ... -PassThru | Wait-Process`, which also automatically reflect `msiexec`'s process exit code in the [automatic `$LASTEXITCODE` variable](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Automatic_Variables#lastexitcode): * Using **direct invocation**, via a trick to make the call _synchronous_: ``` # To build an argument list *programmatically*, create an *array* $installerArguments = '/i', 'sample.msi', '/qn' # Executes *synchronously*, due to `| Out-Null` # Process exit code is reported in $LASTEXITCODE afterwards. # Equivalent of: # msiexec /i sample.msi /qn | Out-Null msiexec $installerArguments | Out-Null ``` * Calling **via `cmd /c`**, which allows you to control quoting explicitly, which is required for property values that require _partial_ quoting: ``` # Here, encode all arguments in a *single string*, using embedded double-quoting. $installerArgumentList = '/i sample.msi /qn PROP="Value with spaces"' # Executes *synchronously*, due to `cmd /c` # Process exit code is reported in $LASTEXITCODE afterwards. cmd /c "msiexec $installerArguments" ``` See [this answer](https://stackoverflow.com/a/50868019/45375) for background information.
I am making a form using Next.js 14. I made an actions.ts file where I built a generic method that logs to the console.

[enter image description here][1]
[enter image description here][2]

In the form component, loginForm.tsx, I use the action property on the form tag, and from there I call the generic method that I made in actions.ts.

[enter image description here][3]

What happens is that when I click on the save button of the form, I can see in the DevTools Network tab that a request was made, and in the Payload section of that request all the information in the form fields is displayed.

My doubts are:

- Why does the user and password information appear in DevTools -> Network tab -> Payload?
- Is that considered a security vulnerability?
- Is there any way to prevent it from displaying that information?

I was investigating the topic; apparently it has to do with the HTML tag <form action={}></form>, but there was no mention of how to solve that detail.

[1]: https://i.stack.imgur.com/BX8D2.png
[2]: https://i.stack.imgur.com/48uIs.png
[3]: https://i.stack.imgur.com/FlBp2.png
Maybe try to plot in this order: `conditions -> genes -> category`:

    nodes <- unique(unlist(example.data))
    links <- rbind(
      data.frame(source = match(example.data$conditions, nodes) - 1,
                 target = match(example.data$genes, nodes) - 1,
                 value = 1),
      data.frame(source = match(example.data$genes, nodes) - 1,
                 target = match(example.data$category, nodes) - 1,
                 value = 1))
    
    plot_ly(
      type = "sankey",
      orientation = "h",
      node = list(label = nodes),
      link = as.list(links)
    )

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/E7Hlu.png
I'm using a pretty complex custom handler in DRF. For example, for a given response, `response.data` could look like this:

```python
{'global_error': None,
 'non_field_errors': [],
 'field_errors': {'important_field': [ErrorDetail(string='Ce champ est obligatoire.', code='required')]}}
```

However, when getting the actual response from the API, the `ErrorDetail` will be transformed into a simple string, losing the code information.

Is there a simple way to ensure `ErrorDetail` is always written in a response as `{"message": "...", "code": "..."}` without transforming the response manually in the custom handler?

I know DRF's `get_full_details()` method returns exactly this on an exception, but here I'm at the response level.
In DRF, how to inject the full `ErrorDetail` into the response using a custom exception handler?
|python|django|django-rest-framework|
I am implementing an IntersectionObserver to determine when certain div elements are coming into the viewport; once they are, I trigger a script which loads the Google ad into the already defined slot. The intersection observer currently works when the element comes within 1 pixel of the viewport, but I want to extend the viewport by 350px using the 'rootMargin' property defined in the options object. Reading the documentation, the root has to be an ancestor of the target, which it currently is; however, the root margin is not having any effect whatsoever. The current outcome is still the same. Here is my code:

```
let options = {
    root: document,
    rootMargin: '500px 0px 0px 0px',
    threshold: 0,
};

const observer = new IntersectionObserver(
    (entries) => {
        entries.forEach((entry) => {
            console.log(entry);
            if (entry.isIntersecting) {
                window.dispatchEvent(new CustomEvent("intersecting", { detail: { slotName: entry.target.id } }));
                observer.unobserve(entry.target);
            }
        });
    },
    { options }
);

const addObservers = () => {
    const middleAds = "inart";
    const readNextAds = 'rect';
    const asideAds = 'rdnxt';
    const mobile = 'minart';
    const observedElement = document.querySelectorAll(`[id^=${middleAds}], [id^=${readNextAds}], [id^=${asideAds}]`);
    for (let i = 0; i < observedElement.length; i++) {
        observer.observe(observedElement[i]);
    }
};
```

Can I even use a direct reference to document, or do I need to make another div that takes up the current viewport and wraps around the targeted child elements?
> mov al, A ; Load A into AL register
> imul B
> movsx bx, cc
> imul bx
> movsx bx, D
> imul bx

This code calculates `A * B * cc * D`, simply multiplying the 4 numbers.

> but when I use `idiv` the code doesn't give any outputs.

You can't just substitute that instruction: `imul` writes its high half to AH / DX / EDX / RDX, whereas `idiv` reads its high half from AH / DX / EDX / RDX. You need to develop a new solution from scratch.

The solution for this task must not only use the division instruction, but also respect the algebraic order of operations as indicated by the parentheses. Although the 4 variables were defined as *signed* integers, I see their values are all positive numbers. Therefore I will present a solution that uses the `div` instruction. I would hate to take the fun away, so I will leave the signed version of this up to you...

```
.data
A BYTE 10
B BYTE 2
C BYTE 20
D BYTE 5

.code
main PROC

  ; (C % D)
  movzx ebx, D
  test  bl, bl
  jz    ERROR        ; #DE can't divide by zero
  movzx eax, C
  xor   edx, edx
  div   ebx          ; EDX:EAX / EBX --> EDX is remainder
  test  edx, edx
  jz    ERROR        ; #DE can't divide by zero (*)
  mov   ecx, edx

  ; (A % B)
  movzx ebx, B
  test  bl, bl
  jz    ERROR        ; #DE can't divide by zero
  movzx eax, A
  xor   edx, edx
  div   ebx          ; EDX:EAX / EBX --> EDX is remainder

  ; (A % B) % (C % D)
  mov   eax, edx
  xor   edx, edx
  div   ecx          ; EDX:EAX / ECX --> EDX is remainder

  ; Result is in EDX
```

With the current data set, the program errors out at the asterisk!
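For cross-checking the logic, here is a small C sketch of the same guarded computation, `(A % B) % (C % D)`; the function name and the success/failure return convention are my own, but the divide-by-zero checks mirror the ERROR branches in the assembly, in the same order:

```c
#include <stdint.h>

/* Returns 1 and stores (a % b) % (c % d) in *out on success;
 * returns 0 when any divisor would be zero (the ERROR paths). */
int mod_expr(uint8_t a, uint8_t b, uint8_t c, uint8_t d, uint32_t *out)
{
    if (d == 0)
        return 0;           /* C % D would fault */
    uint32_t cd = c % d;
    if (cd == 0)
        return 0;           /* (A % B) % 0 would fault */
    if (b == 0)
        return 0;           /* A % B would fault */
    *out = (a % b) % cd;
    return 1;
}
```

With the data set from the answer (A=10, B=2, C=20, D=5), `C % D` is 0, so this sketch takes the same error path the assembly does.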
Before anything: you might see my English isn't really good, and I'm sorry about that.

By clicking a button in WPF on .NET 8, I want to create a new instance of another window that I made, to enter values and then use these values later. The problem is that when I create an instance of that window in the C# code-behind of the main window `MainWindow.xaml`, run the app and press that button, this error occurs:

> System.IO.FileLoadException: 'The given assembly name was invalid.'

This message comes from this code:

```
public Window_AddNew()
{
    InitializeComponent(); // the error was on this line
}
```

By the way, this code is from the C# code-behind that belongs to the other window that I want to show and make an instance of; its name is `Window_AddNew.xaml`.

The button's code that should open the other window `Window_AddNew.xaml`:

```
private void btn_new_Click(object sender, RoutedEventArgs e)
{
    Window_AddNew window_AddNew = new Window_AddNew();
    // Window window_AddNew = new Window_AddNew(); // I tried this also but there's no change
    window_AddNew.Show();
}
```

One more note: I've noticed that the problem happens while this line in the `MainWindow.xaml` C# code is being executed:

    Window_AddNew window_AddNew = new Window_AddNew();

I've tried

    Window window_AddNew = new Window_AddNew();

instead of

    Window_AddNew window_AddNew = new Window_AddNew();

and I also tried

    window_AddNew.ShowDialog();

instead of

    window_AddNew.Show();

but there is no change in the problem. I hope someone can help soon. Thank you all.
This is the code in the `MainWindow.xaml`: ``` using System.Text; using System.Windows; using System.Windows.Controls; using System.Windows.Data; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Imaging; using System.Windows.Navigation; using System.Windows.Shapes; using System.Data; using Microsoft.Data.SqlClient; using System.Configuration; namespace _3__Train_AdoNet_3 { /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); } private void btn_new_Click(object sender, RoutedEventArgs e) { Window_AddNew window_AddNew = new Window_AddNew(); //Window window_AddNew = new Window_AddNew(); //window_AddNew.Show(); window_AddNew.ShowDialog(); } } } ``` And this is the code in `Window_AddNew.xaml`: ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Windows; using System.Windows.Controls; using System.Windows.Data; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Imaging; using System.Windows.Shapes; using System.Data; using Microsoft.Data.SqlClient; using System.Configuration; namespace _3__Train_AdoNet_3 { /// <summary> /// Interaction logic for Window_AddNew.xaml /// </summary> public partial class Window_AddNew : Window { static string sqlConnString = ConfigurationManager.ConnectionStrings["sqlConnStr"].ConnectionString; SqlConnection sqlConn = new SqlConnection(sqlConnString); public Window_AddNew() { InitializeComponent(); } private void cmb_ProgrammingLanguages_ContextMenuOpening(object sender, ContextMenuEventArgs e) { DataTable dt_ProgrammingLanguages = new DataTable(); SqlDataAdapter sql_da_proglang = new SqlDataAdapter("sp_selectProgrammingLanguage", sqlConn); sql_da_proglang.Fill(dt_ProgrammingLanguages); cmb_ProgrammingLanguages.ItemsSource = 
dt_ProgrammingLanguages.DefaultView; } } } ```
Can't open a new instance of another window in my WPF .NET 8 app
|c#|wpf|xaml|.net-8.0|
I am using MathJax and Shiny to display equations, but the output text doesn't seem to be very high quality. How do I increase the DPI or the resolution of the output? The text seems to be a dull or transparent black, or isn't at its full "brightness", if that makes sense.

    library(shiny)
    library(mathjaxr)
    
    ui <- fluidPage(
      title = 'MathJax Examples',
      uiOutput('ex3'))
    
    server <- function(input, output, session){
      output$ex3 <- renderUI({
        withMathJax(
          helpText(
            "$$\\log_{10}p_w = \\frac{-1.1489t}{273.1+t}-1.330\\text{e-}05t^2 + 9.084\\text{e-}08t^3 - 1.08\\text{e-}09t^4 +\\log_{10}p_i\\\\[15pt] \\log_{10}p_i=\\frac{-2445.5646}{273.1+t}+8.2312\\log_{10}\\left(273.1+t\\right)-0.01677006\\left(273.1+t\\right)+1.20514\\text{e-}05\\left(273.1+t\\right)^2-6.757169\\\\[15pt] p_i=saturated\\space vapor\\space pressure\\space over\\space ice\\space \\left(mmHg\\right)$$"
          ))
      })}
    
    shinyApp(ui = ui, server = server)
How to increase quality of mathjax output in Shiny R
|r|text|shiny|mathjax|
I have been using a `~/.config` directory as a git repo, forked from https://github.com/benbrastmckie/.config to `https://github.com/<myusername>/.config` and git cloned. I want to keep its remote repo focused on my Neovim-related configuration, so my other local files in `.config` are git-excluded for now. The list of excluded files can be viewed with `cat ~/.config/.git/info/exclude`.

Now I am trying to set up a `~/.dotfiles` repo. In this case, I am thinking of including the whole `.config` directory in `.dotfiles`, adding the excluded files too. I want to do it such that the version control of `.dotfiles` functions properly without affecting what I have set up for `.config`. There are different kinds of methods available for version-controlling `.dotfiles`, but I am confused about which to follow in this situation.
IntersectionObserver rootMargin doesn't seem to work correctly
|javascript|observers|intersection-observer|
Do this $imageName = time().'.'.$image->extension(); $image->move(public_path('products'), $imageName); and while fetching <img src="{{ asset('products/'.$imageName) }}" alt="Uploaded Image">
If you want to delete subfolders under the root folder in an ADLS account, you can follow the procedure below: Upon the success of the copy activity, add a Get Metadata activity with the child items field using the Mainfolder dataset. It will display the list of subfolders as shown below: ![enter image description here](https://i.imgur.com/exEvOPo.png) Add a Foreach activity to the Get Metadata activity with enabled sequential and `@activity('Get Metadata1').output.childItems` items. Inside the Foreach activity, add a Delete activity using a dataset with the dataset parameter `path` for the directory to delete activity with the dynamic value `@concat('Mainfolder/',item().name)` as shown below: ![enter image description here](https://i.imgur.com/139eulz.png) Debug the pipeline, and it will debug successfully without any errors, deleting the subfolders in the main folder as shown below: ![enter image description here](https://i.imgur.com/hOSEjQ3.png) Before Pipeline debug: ![enter image description here](https://i.imgur.com/YVfFdu9.png) After pipeline debug: ![enter image description here](https://i.imgur.com/NbsFToA.png) **Note:** If your folder is in blob storage, it will delete the main folder itself because blob storage does not support empty folders.
It's kind of, sort of, the case in this particular example, but you shouldn't care. Observe:

```
#include <iostream>

int main(void)
{
  {
    char test[3];
    std::cout << "std::size(char[]): " << std::size(test) << '\n';
    std::cout << "sizeof(char[]): " << sizeof(test) << '\n';
  }
  std::cout << '\n';
  {
    int test[3];
    std::cout << "std::size(int[]): " << std::size(test) << '\n';
    std::cout << "sizeof(int[]): " << sizeof(test) << '\n';
  }
  return 0;
}
```

```
stieber@gatekeeper:~ $ g++ -std=c++20 Test.cpp && ./a.out
std::size(char[]): 3
sizeof(char[]): 3

std::size(int[]): 3
sizeof(int[]): 12
```

`sizeof()` simply gives you the storage size of an object; this is something you'd need when allocating memory via `malloc`, for example. `std::size()` gives you the number of items inside, which is a very different thing, and more likely what you want. Although using C-style arrays comes with its own pitfalls anyway, so that's also something you might not want to do.
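As an aside, before C++17 (which introduced `std::size`) the common idiom was `sizeof(arr) / sizeof(arr[0])`; it computes the same element count, but unlike `std::size` it compiles silently, and gives a wrong answer, if the array has decayed to a pointer:

```cpp
#include <iterator>  // std::size

int arr[3];

// Element count two ways; both are compile-time constants.
static_assert(sizeof(arr) / sizeof(arr[0]) == 3, "classic idiom");
static_assert(std::size(arr) == 3, "C++17 std::size");

// int *p = arr;
// sizeof(p) / sizeof(p[0]);  // compiles, but is sizeof(int*)/sizeof(int), not 3
// std::size(p);              // does not compile: no overload for pointers
```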
I have an "src/bin" folder to store the code and a "tests" folder to store my test scripts. I am using the below pytest command in my pipeline:

    pytest /builds/cdf/Platform/onprem-core-functions/test/ --cov=src --cov-report xml:coverage/cov.xml --cov-branch

but it fails with the below error:

    CoverageWarning: No data was collected. (no-data-collected)

Also, please note that if I keep my test script in the home directory instead of the "test" folder, then the below pytest command runs successfully:

    pytest --cov=src --cov-report xml:coverage/cov.xml --cov-branch

However, in order to keep all my test scripts outside my application code, I intend to move them into a separate "test" folder instead of cluttering the home directory. Can you please advise where I am going wrong?
When .NET (Core) was first released for Linux, it was not yet available in the official Ubuntu repo. So instead, many of us added the Microsoft APT repo in order to install it. Now, the packages are part of the Ubuntu repo, and they are conflicting with the Microsoft packages. This error is a result of mixed packages. So you need to pick which one you're going to use, and ensure they don't mix. Personally, I decided to stick with the Microsoft packages because I figured they'll be better kept up-to-date. First, remove all existing packages to get to a clean state: ```bash sudo apt remove dotnet* aspnetcore* netstandard* ``` Then, create a file in `/etc/apt/preferences.d` (I named mine `99microsoft-dotnet.pref`, following the convention that files in such `*.d` directories are typically prefixed with a 2-digit number so that they sort and load in a predictable order) with the following contents: ``` Package: * Pin: origin "packages.microsoft.com" Pin-Priority: 1001 ``` Then, the regular update & install: ```bash sudo apt update && sudo apt install -y dotnet-sdk-8.0 ``` Note, the above example shows .NET 8; replace with another version if you prefer. .NET SDKs are installed side-by-side, so you can also install multiple versions. **If you would rather use the official Ubuntu packages**, remove all the existing packages as above, but instead of creating the `/etc/apt/preferences.d` entry, just delete the Microsoft repo: ```bash sudo rm /etc/apt/sources.list.d/microsoft-prod.list sudo apt update sudo apt install dotnet-sdk-7.0 ``` However, note that the Microsoft repo contains other packages such as PowerShell, SQL Server Command-Line Tools, etc., so removing it may not be desirable. I'm sure it's possible to make the APT config more specific to just these packages, but this is working for me for now. Hopefully Microsoft and Ubuntu work together to fix this soon. 
More info on the issue and various solutions is available here: * https://learn.microsoft.com/en-us/dotnet/core/install/linux-package-mixup * https://github.com/dotnet/core/issues/7699
Good evening, you can use a nested for loop for this. For example, if you want to make a grid for Minesweeper, you can do it like so:

```
from tkinter import Tk, Canvas

root = Tk()
screenWidth = 800
screenHeight = 1000
root.geometry("{}x{}".format(screenWidth, screenHeight))

size = 10  # size of one cell

w = Canvas(root, width=screenWidth, height=screenHeight)
w.pack()

colour = "grey"  # fill colour of each cell
out = "black"    # outline colour of each cell

for i in range(round(screenWidth / size)):
    for j in range(round(screenHeight / size)):
        w.create_rectangle(i * size, j * size, (i + 1) * size, (j + 1) * size,
                           fill=colour, outline=out)
        # You can also add a tag to each cell; if you want to make
        # Minesweeper, that makes each cell easier to control.

root.mainloop()
```

You can adjust the size, fill colour, and outline colour.
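As a side note on cost: the nested loops above draw one rectangle per cell, so the total count is (width / size) × (height / size). A quick sanity check of that arithmetic (plain Python, no Tk window needed, using the same numbers as the example):

```python
screenWidth = 800
screenHeight = 1000
size = 10  # size of one cell, as in the example above

cols = round(screenWidth / size)   # cells along the x axis
rows = round(screenHeight / size)  # cells along the y axis
print(cols, rows, cols * rows)     # prints: 80 100 8000
```

So this particular grid creates 8000 canvas items; for much smaller cell sizes the item count grows quadratically, which is worth keeping in mind.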
I have an ASP.NET Core 6.0 and Angular 13 / TypeScript 4.5 project. I use Swagger and NSwag to create TypeScript clients.

My swagger.json looks like this (truncated):

```
{
  "openapi": "3.0.1",
  "info": {
    "title": "Reinsurance.Api",
    "version": "1.0"
  },
  "paths": {
    "/api/v1/Adam/GetData": {
      "get": {
        "tags": [ "Adam" ],
        "parameters": [
          { "name": "id", "in": "query", "schema": { "type": "integer", "format": "int32" } }
        ],
        "responses": {
          "200": {
            "description": "Success",
            "content": {
              "text/plain": { "schema": { "$ref": "#/components/schemas/AdamListResponse" } },
              "application/json": { "schema": { "$ref": "#/components/schemas/AdamListResponse" } },
              "text/json": { "schema": { "$ref": "#/components/schemas/AdamListResponse" } }
            }
          }
        }
      }
    },
    "/api/v1/Badam/GetData": {
      "get": {
        "tags": [ "Badam" ],
        "parameters": [
          { "name": "id", "in": "query", "schema": { "type": "integer", "format": "int32" } },
          { "name": "PageNumber", "in": "query", "schema": { "type": "integer", "format": "int32" } },
          { "name": "PageSize", "in": "query", "schema": { "type": "integer", "format": "int32" } }
        ],
        "responses": {
          "200": {
            "description": "Success",
            "content": {
              "text/plain": { "schema": { "$ref": "#/components/schemas/BadamResponse" } },
              "application/json": { "schema": { "$ref": "#/components/schemas/BadamResponse" } },
              "text/json": { "schema": { "$ref": "#/components/schemas/BadamResponse" } }
            }
          }
        }
      }
    },
```

And my NSwag config file looks like this:

```
{
  "runtime": "Net80",
  "defaultVariables": null,
  "documentGenerator": {
    "fromDocument": {
      "json": "",
      "url": "http://localhost:8080/swagger/v1/swagger.json",
      "output": null,
      "newLineBehavior": "Auto"
    }
  },
  "codeGenerators": {
    "openApiToTypeScriptClient": {
      "className": "{controller}HttpService",
      "moduleName": "",
      "namespace": "",
      "typeScriptVersion": 4.5,
      "template": "Angular",
      "promiseType": "Promise",
      "httpClass": "HttpClient",
      "withCredentials": false,
      "useSingletonProvider": false,
      "injectionTokenType": "InjectionToken",
      "rxJsVersion": 6.0,
      "dateTimeType": "Date",
      "nullValue": "Undefined",
      "generateClientClasses": true,
      "generateClientInterfaces": true,
      "generateOptionalParameters": false,
      "exportTypes": true,
      "wrapDtoExceptions": true,
      "exceptionClass": "ApiException",
      "clientBaseClass": null,
      "wrapResponses": false,
      "wrapResponseMethods": [],
      "generateResponseClasses": true,
      "responseClass": "SwaggerResponse",
      "protectedMethods": [],
      "configurationClass": null,
      "useTransformOptionsMethod": false,
      "useTransformResultMethod": false,
      "generateDtoTypes": true,
      "operationGenerationMode": "MultipleClientsFromFirstTagAndOperationId",
      "markOptionalProperties": true,
      "generateCloneMethod": false,
      "typeStyle": "Class",
      "enumStyle": "Enum",
      "useLeafType": false,
      "classTypes": [],
      "extendedClasses": [],
      "extensionCode": "",
      "generateDefaultValues": true,
      "excludedTypeNames": [],
      "excludedParameterNames": [],
      "handleReferences": false,
      "generateTypeCheckFunctions": false,
      "generateConstructorInterface": true,
      "convertConstructorInterfaceData": false,
      "importRequiredTypes": true,
      "useGetBaseUrlMethod": false,
      "baseUrlTokenName": "API_BASE_URL",
      "queryNullValue": "",
      "useAbortSignal": false,
      "inlineNamedDictionaries": false,
      "inlineNamedAny": false,
      "includeHttpContext": false,
      "templateDirectory": "templates",
      "serviceHost": null,
      "serviceSchemes": null,
      "output": "http-services.ts",
      "newLineBehavior": "Auto"
    }
  }
}
```

NSwag creates two different TypeScript clients, which is good. But NSwag adds a "2" suffix to BadamHttpService's GetData method. For example, my clients look like this:

```
AdamHttpService.GetData
BadamHttpService.GetData2
```

If I create another controller that has a GetData action, NSwag will create a GetData3 method.

Also, I changed **operationGenerationMode** from **MultipleClientsFromFirstTagAndOperationId** to **MultipleClientsFromOperationId**, but then NSwag creates only one client.

How can I fix it?
Is there a way to create a column from a single string value that is inherently and by default already a string column and not an object column? I don't want to spend any time casting an object column back to a string column. ```python df = pd.DataFrame(dict(a=range(10))) df["new"] = "my string" df.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 10 entries, 0 to 9 Data columns (total 2 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 a 10 non-null int64 1 new 10 non-null object dtypes: int64(1), object(1) memory usage: 288.0+ bytes ``` Even if I initialize an empty string column first, it still returns an object column. ```python df = pd.DataFrame(dict(a=range(10))) df["new"] = pd.Series(dtype="string") df.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 10 entries, 0 to 9 Data columns (total 2 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 a 10 non-null int64 1 new 0 non-null string dtypes: int64(1), string(1) memory usage: 288.0 bytes df["new"] = "my string" df.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 10 entries, 0 to 9 Data columns (total 2 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 a 10 non-null int64 1 new 10 non-null object dtypes: int64(1), object(1) memory usage: 288.0+ bytes ``` This is the only way that I have found that works, but it seems like so much code & effort for accomplishing something that should be simple. ```python df = pd.DataFrame(dict(a=range(10))) df["new"] = pd.Series(["my string"] * len(df), dtype="string", index=df.index) df.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 10 entries, 0 to 9 Data columns (total 2 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 a 10 non-null int64 1 new 10 non-null string dtypes: int64(1), string(1) memory usage: 288.0 bytes ```
Setting up version control for .dotfiles while .config is connected to a forked repo
I'm implementing a parallel version of the Bellman-Ford algorithm in C++ using std::atomic. This is my main function, executed in multiple threads:

```
void calculateDistances(size_t start, size_t end, const Graph& graph,
                        std::vector<std::atomic<double>>& distances, bool& haveChange)
{
    for (size_t source = start; source < end; ++source) {
        for (const auto& connection : graph[source]) {
            const size_t& destination = connection.destination;
            const double& distance = connection.distance;

            double oldDistance = distances[destination];
            while (distances[source] + distance < oldDistance) {
                if (distances[destination].compare_exchange_strong(oldDistance, distances[source] + distance)) {
                    haveChange = true;
                    break;
                }
            }
        }
    }
}
```

Here I'm trying to update `distances[destination]` with `distances[source] + distance` if the latter is smaller. However, both `distances[destination]` and `distances[source]` can be changed by another thread during this operation, so I'm using `compare_exchange_strong` here.

But even with this code, a data race is still present and some of the iterations are skipped, resulting in failure of the algorithm on some input data. Why is this happening, and how can I fix it?