|javascript|angular|typescript|twitter-bootstrap|
null
|delphi|nested|
I had the same problem. The thing with `gc.collect()` and `torch.cuda.empty_cache()` is that these methods don't remove the model from your GPU, they only clean the cache. So you need to delete your model from CUDA memory after each trial, and probably clean the cache as well; without doing this, every trial leaves another model on your CUDA device. So I put these lines at the end of the `objective` function:

```py
del model
gc.collect()
torch.cuda.empty_cache()
```
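As a quick illustration of why the `del` matters (a CPU stand-in, no GPU or PyTorch required — `FakeModel` here is a hypothetical placeholder for the real model object), a `weakref` can show that the object is only reclaimed once the last strong reference is dropped:

```python
import gc
import weakref

class FakeModel:
    """Stand-in for a GPU-resident model; any large object behaves the same."""

trial_model = FakeModel()
probe = weakref.ref(trial_model)   # observe the object without keeping it alive

# Mirrors the end of the objective function: drop the last strong
# reference, then force a collection pass.
del trial_model
gc.collect()

print(probe() is None)  # True: the object is really gone
```

With a real model, the same pattern releases the CUDA allocations backing the tensors, and `torch.cuda.empty_cache()` then returns the cached blocks to the driver.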
I've already set up workload identity federation. We have created a management service account through which we impersonate other service accounts. We have two different release pipelines to deploy a Cloud Function, each using its own service account (SA): SA1 and SA2. When I deploy the first release pipeline with SA1, it deploys perfectly. When I run the second pipeline right after the previous one, it does not deploy properly. When I check the logs, it starts deploying with SA1 and hits a permission issue. It's as if the previous SA's credentials don't expire completely and the deployment starts with the previous one. Task for the second pipeline: [![enter image description here][1]][1] **Release pipeline logs**

```
Authenticated with external account credentials for: [app-deployer@project2-2629.iam.gserviceaccount.com].
2024-02-20T12:53:37.5811778Z Your current project is [project2-2629].  You can change this setting by running:
2024-02-20T12:53:37.5812138Z   $ gcloud config set project PROJECT_ID
2024-02-20T12:53:38.9624162Z AccessDeniedException: 403 app-deployer@project1-2620.iam.gserviceaccount.com does not have storage.objects.list access to the Google Cloud Storage bucket. Permission 'storage.objects.list' denied on resource (or it may not exist).
```

I've already tried force-recreating the token with `--force` and `revoke --all`, but it didn't work.

[1]: https://i.stack.imgur.com/VV9Zk.png
I have an SSRS report that I need to export as a PPT file. In order to generate the slides, I put rectangles of the exact size of a slide in cm (33.867 x 19.05 cm) for each slide, with the necessary content inside. There is a footer on each page (2 cm), so each rectangle has a height of 17.05 cm instead of 19.05 cm. I removed the margins and made sure the report size is greater than the body size. I don't know why, but I'm not able to export it as I need, with one rectangle per page: there are always blank pages, and the first rectangle prints on 2 pages, for example. [image export ppt](https://i.stack.imgur.com/KJpIA.png) Thanks in advance for the help.
I wasn't able to make the proposed `static_unique_ptr_cast` work. Here is a working version:

```
#include <iostream>
#include <memory>

class Base {
public:
    virtual void Do() { std::cout << "Base::Do" << std::endl; }
};

class Derived : public Base {
public:
    void Do() override { std::cout << "Derived::Do" << std::endl; }
};

template<typename T>
void destroy_der(T* p) {
    auto der = dynamic_cast<Derived*>(p);
    std::cout << "destroy_derived" << std::endl;
    delete der;
}

template<typename Derived, typename Base>
std::unique_ptr<Derived> static_unique_ptr_cast(std::unique_ptr<Base>&& p) {
    auto d = dynamic_cast<Derived*>(p.release());
    return std::unique_ptr<Derived>(d);
}

template<typename Derived, typename DelDerived, typename Base, typename DelBase>
std::unique_ptr<Derived, DelDerived> static_unique_ptr_cast_del(std::unique_ptr<Base, DelBase>&& p) {
    auto d = dynamic_cast<Derived*>(p.release());
    return std::unique_ptr<Derived, DelDerived>(d, reinterpret_cast<DelDerived>(std::move(p.get_deleter())));
}

int main(void) {
    {
        std::unique_ptr<Base> b(new Derived);
        std::unique_ptr<Derived> d = static_unique_ptr_cast<Derived>(std::move(b));
        d->Do();
    }
    {
        std::unique_ptr<Base, void(*)(Base*)> b(new Derived, destroy_der<Base>);
        std::unique_ptr<Derived, void(*)(Derived*)> d =
            static_unique_ptr_cast_del<Derived, void(*)(Derived*)>(std::move(b));
        d->Do();
    }
    return 0;
}
```
We are migrating to Azure platform as a service, and it looks like we can't use Crystal Reports. We use CR to generate PDFs based on the .rpt files. I want to see if there are any low-code tools out there with a visual designer that will take database data (or JSON data) and generate a PDF. I know we can work around this by using HTML and converting to PDF (this isn't ideal). I also know we can use Power BI; I'm looking to see if there are Power BI alternatives (just to have a range of possibilities). Edit: this is for a .NET Framework MVC C# application.
Alternative to Crystal Reports in Azure PAAS (app model)
|azure|crystal-reports|
I'm using an incredibly simple pipeline to build my Angular app. I run it on an OpenShift cluster. These are the two tasks:

```
tasks:
  - name: fetch-source
    params:
      - name: url
        value: $(params.repo-url)
      - name: revision
        value: feature/tekton
    taskRef:
      kind: ClusterTask
      name: git-clone
    workspaces:
      - name: output
        workspace: shared-data
  - name: build-push
    params:
      - name: IMAGE
        value: $(params.image-reference)
      - name: DOCKERFILE
        value: $(params.containerfile-path)
      - name: CONTEXT
        value: $(params.context-dir-path)
    runAfter:
      - fetch-source
    taskRef:
      kind: ClusterTask
      name: buildah
    workspaces:
      - name: source
        workspace: shared-data
```

I'm trying to build this container file:

```
FROM node:latest as builder
USER root

# Set the working directory
WORKDIR /usr/local/app

# Add the source code to app
COPY ./ /usr/local/app/
RUN chown -R root /usr/local/app
RUN npm ci

# Generate the build of the application
RUN npm run build

# Stage 2: Serve app with nginx server
FROM nginx:latest
[...]
```

The command `npm ci` fails with the error:

```
npm ERR! code EMFILE
npm ERR! syscall open
npm ERR! path /root/.npm/_cacache/index-v5/d8/c0/7503a3169361b6a2c8813bf0eddb92eed6417987f616accf18698f2b71f4
npm ERR! errno -24
npm ERR! EMFILE: too many open files, open '/root/.npm/_cacache/index-v5/d8/c0/7503a3169361b6a2c8813bf0eddb92eed6417987f616accf18698f2b71f4'
npm ERR! A complete log of this run can be found in: /root/.npm/_logs/2024-03-27T13_42_33_018Z-debug-0.log
subprocess exited with status 232
subprocess exited with status 232
Error: building at STEP "RUN npm ci": exit status 232
```

In my understanding, npm tries to open too many files and hits the ulimit of the container (1024). The BuildConfig that does the same operation has no problem, and I don't know how that can be possible. I suspect the difference is that the build that works is executed in a privileged context, but I don't know if it's safe to do the same with Tekton tasks. Any idea where the problem could be?
I have a Next.js application hosted on Ubuntu. When I run `npm run start` or `npm start`, the application runs, but when the terminal is closed, the service stops and the site goes down. On the same server, I also have an Express application, which I start with pm2. But when I try to run the Next.js application using pm2, it shows the status as errored and does not start. I tried npm commands and pm2, but it didn't work.
Issue with next js site deployment (production mode)
|next.js|deployment|
null
I am a newbie in bash on Linux. I have already made a boot menu in Batch and PowerShell, and now I want to make it in bash. In bash, when I type `efibootmgr`, the output is:

```
BootCurrent: 0003
Timeout: 2 seconds
BootOrder: 0011,0000,0001,0013,0014,0015,0004,0003,0016,0017
Boot0000* ThrottleStop UEFI
Boot0001* rEFInd Boot Manager
Boot0003* MX23 LinuX
Boot0004* Windows Boot Manager
etc...
```

I tried:

```
#!/bin/bash
cd /home/mrkey7/Desktop/
sudo efibootmgr > file.txt
echo "$(grep "Boot00" file.txt)"
echo [R] Reboot
echo [S] Shutdown
echo [E] Exit
read -n 1 -p "Choose:" ans;
case $ans in
    r|R) sudo reboot;;
    s|S) sudo poweroff;;
    *) exit;;
esac
```

Output:

```
Boot0000* ThrottleStop UEFI
Boot0001* rEFInd Boot Manager
Boot0003* MX23 LinuX
Boot0004* Windows Boot Manager
[R] Reboot
[S] Shutdown
[E] Exit
Choose:
```

I want output like:

```
[1] ThrottleStop UEFI
[2] rEFInd Boot Manager
[3] MX23 LinuX
[4] Windows Boot Manager
[R] Reboot
[S] Shutdown
[E] Exit
Choose:
```

Then when I press "1", the PC will boot to "ThrottleStop UEFI", pressing "2" boots to "rEFInd Boot Manager", etc.
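For illustration, the numbering and the choice-to-entry lookup the desired menu needs can be sketched like this. The sample text stands in for real `sudo efibootmgr` output so the parsing can be shown on its own, and `efibootmgr -n` (set BootNext) is the call that makes the firmware boot a given entry on the next restart:

```shell
#!/bin/bash
# Sample efibootmgr output; replace with: sample=$(sudo efibootmgr)
sample='BootCurrent: 0003
Timeout: 2 seconds
BootOrder: 0011,0000,0001,0013
Boot0000* ThrottleStop UEFI
Boot0001* rEFInd Boot Manager
Boot0003* MX23 LinuX
Boot0004* Windows Boot Manager'

# Keep only the "BootXXXX* label" lines (BootCurrent/BootOrder don't match)
mapfile -t entries < <(printf '%s\n' "$sample" | grep -E '^Boot[0-9]{4}')

ids=(); labels=()
for i in "${!entries[@]}"; do
    line=${entries[$i]}
    ids[$i]=${line:4:4}                             # e.g. "0003" from "Boot0003* ..."
    labels[$i]=${line#Boot[0-9][0-9][0-9][0-9]\* }  # text after the "BootXXXX* " marker
    printf '[%d] %s\n' "$((i + 1))" "${labels[$i]}"
done
echo "[R] Reboot"
echo "[S] Shutdown"
echo "[E] Exit"

# Interactive part, commented out so the parsing above runs anywhere:
# read -n 1 -p "Choose: " ans
# case $ans in
#     [1-9]) sudo efibootmgr -n "${ids[$((ans - 1))]}" && sudo reboot ;;
#     r|R)   sudo reboot ;;
#     s|S)   sudo poweroff ;;
#     *)     exit ;;
# esac
```

Note that `-n`/`--bootnext` only affects the next boot; changing the permanent order is `-o`/`--bootorder` instead.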
As of reticulate version 1.34.0.9000, you need to explicitly pass the conda env in one of these 3 forms (note: 1 and 2 also work in older versions):

1. `use_condaenv(condaenv="r-reticulate")`
2. `use_condaenv(condaenv="C:\\Users\\caleb\\AppData\\Local\\r-miniconda\\envs\\r-reticulate\\")`
3. `use_condaenv(condaenv="C:\\Users\\caleb\\AppData\\Local\\r-miniconda\\envs\\r-reticulate\\python.exe")`
I have a table which represents sequences of points, and I need to get sums over all possible combinations. The main problem is how to do it with the minimum number of operations, because the real table is huge.

|Col1|Col2|Col3|Col4|Col5|Col6|ct|
|----|----|----|----|----|----|--|
|id1 |id2 |id3 |id4 |id5 |id6 |30|
|id8 |id3 |id5 |id2 |id4 |id6 |45|

The expected result is:

```
id3|id5|75
id3|id4|75
id3|id6|75
id5|id6|75
id2|id4|75
id2|id6|75
id4|id6|75
```

I would be grateful for any help.
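Reading the expected output, a pair seems to count only when it appears in the same relative order in every row, and its total is the sum of `ct` over those rows. Under that (assumed) interpretation, the logic can be sketched outside SQL with `itertools`:

```python
from collections import Counter, defaultdict
from itertools import combinations

rows = [
    (["id1", "id2", "id3", "id4", "id5", "id6"], 30),
    (["id8", "id3", "id5", "id2", "id4", "id6"], 45),
]

sums = Counter()          # total ct per ordered pair
seen = defaultdict(int)   # in how many rows the pair occurs, in this order
for ids, ct in rows:
    for pair in combinations(ids, 2):  # all pairs, preserving row order
        sums[pair] += ct
        seen[pair] += 1

# keep only pairs present (in the same order) in every row
result = {p: s for p, s in sums.items() if seen[p] == len(rows)}
```

This reproduces the seven pairs above, each summing to 75. In SQL the same idea is typically an unpivot to (row, position, id) followed by a self-join constrained by position order, so it may be worth stating in the question whether the order constraint is really intended.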
I'm working on a React app and I use react-avatar-editor to resize a user profile picture. I have a problem while trying to zoom in on the picture: at one point the zoom also applies to the browser tab instead of just the picture.

```
import { useRef, useState } from "react";
import AvatarEditor from "react-avatar-editor";

const Avatar = () => {
  const editorRef = useRef(null);
  const [profilePhoto, setProfilePhoto] = useState(() => {
    return localStorage.getItem("profilePhoto");
  });
  const [scale, setScale] = useState(1);

  const handleFileChange = (event) => {
    const file = event.target.files?.[0];
    if (file) {
      setProfilePhoto(file);
    }
  };

  const handleZoomChange = (e) => {
    const newScale = scale + e.deltaY * -0.01;
    setScale(Math.min(Math.max(1, newScale), 2));
    //e.stopPropagation();
  };

  return (
    <div onWheel={handleZoomChange}>
      <input type="file" accept="image/*" onChange={handleFileChange} />
      <AvatarEditor
        ref={editorRef}
        image={profilePhoto}
        width={250}
        height={250}
        borderRadius={50}
        scale={scale}
      />
      <button
        onClick={() => {
          if (editorRef.current) {
            const editedImage = editorRef.current
              .getImageScaledToCanvas()
              .toDataURL();
            localStorage.setItem("profilePhoto", editedImage);
            setProfilePhoto(editedImage);
          }
        }}
      >
        Save
      </button>
    </div>
  );
};

export default Avatar;
```

I tried the `stopPropagation()` function but it didn't work.
SpringBatch ItemRepositoryReader, How to change datasources in a JPARepository?
|spring-batch|jparepository|
Given an existing table:

```
create table if not exists test.my_table (
  name string,
  value int64
);

insert into test.my_table (name, value) values ('alex', 10);
```

You can merge new values using `UNION ALL` and several `select` statements:

```
merge into test.my_table t
using (
  (select 'alex' name, 15 value)
  union all
  (select 'dimitri' name, 20 value)
) s
on t.name = s.name
when matched then update set t.value = s.value
when not matched then insert row
```

You need the `select` to name the columns, and you need the named columns to join the existing table to the new values.
Also answered here: https://forum.camunda.io/t/what-is-the-life-cycle-of-an-external-task-topic/50957. For reference: there is no dedicated storage of the topic in any Camunda table. The topic name is saved in the table ACT_RU_EXT_TASK as part of the external task. Once all external tasks associated with a topic are completed, those entries are gone from the database.
I use this, but I cannot confirm whether it works in Jupyter notebooks or not:

```
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # or "1", "2", "3", ...
```
```
import pandas as pd

data = {'A': ['1, 1, 1', '1', '2', '3', '1'],
        'B': ['1', '1,1,1,1', '2', '4', '1'],
        'C': ['1, 1', '2', '3', '5', '1']}
df = pd.DataFrame(data)

def all_same(row):
    cc = set()
    for val in row:
        try:
            val = set(val.replace(" ", "").split(","))
        except AttributeError:
            # cell with a non-string value!
            # decide if you want to skip this or re-raise;
            # for simplicity I just skip it
            continue
        cc.update(val)
    return len(cc) == 1

df = df[~df.apply(all_same, axis=1)]
```
It's going to depend a bit on your XSLT processor. In the Java world you're generally using either Xalan (which is XSLT 1.0) or Saxon (which is XSLT 3.0), but you've tagged the question XSLT 2.0, so we're left guessing a bit. One way to handle this is with `use-when` attributes: `<xsl:message use-when="$debug-level gt 3">Gotcha!</xsl:message>` However, using static parameters in `use-when` like this is a 3.0 feature.
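For completeness, a minimal 3.0-style sketch of that pattern (the parameter name matches the message above; `static="yes"` is what makes the parameter usable inside `use-when`):

```xml
<!-- XSLT 3.0: a static parameter, set at compile time,
     decides whether the message element survives at all -->
<xsl:param name="debug-level" static="yes" as="xs:integer" select="0"/>
<xsl:message use-when="$debug-level gt 3">Gotcha!</xsl:message>
```

The value is supplied like an ordinary stylesheet parameter, but at compile time; check your processor's documentation for its exact static-parameter syntax.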
|python|pysimplegui|
```
import pandas as pd

data = {'A': ['1, 1, 1', '1', '2', '3', '1'],
        'B': ['1', '1,1,1,1', '2', '4', '1'],
        'C': ['1, 1', '2', '3', '5', '1']}
df = pd.DataFrame(data)

def all_same(row):
    cc = set()
    for val in row:
        try:
            val = set(val.replace(" ", "").split(","))
        except AttributeError:
            # cell with a non-string value!
            # decide if you want to skip this or re-raise;
            # for simplicity I just skip it
            continue
        cc.update(val)
    return len(cc) == 1

df = df[~df.apply(all_same, axis=1)]
```
Here are the relevant links * https://www.sandbox.paypal.com/bizsignup/ * https://developer.paypal.com/dashboard/accounts -> Link other sandbox accounts to this developer account * https://www.sandbox.paypal.com/businessprofile/settings/email * https://developer.paypal.com/dashboard/notifications
I get an error when using Pyrogram with Threading in Tkinter
|python|multithreading|tkinter|customtkinter|pyrogram|
null
If I run the code below, `xStep1` does not have an attribute named `pivot`. According to the docs it should have this attribute. Any help would be appreciated.

```
x = Diagonal(10000)
x[1, 3] = 0.3
x[3, 1] = 0.3
xStep1 = chol(x, pivot = TRUE)
```
If you have similar enums which are all based on a single value, you can let them implement a common interface: ```java public interface ValueAware<V> { V getValue(); } ``` And then use the following method to find instances based on a value: ```java public static <T extends Enum<T> & ValueAware<V>, V> Optional<T> findEnumByValue(Class<T> enumClass, V valueToMatch) { return EnumSet.allOf(enumClass).stream() .filter(element -> Objects.equals(element.getValue(), valueToMatch)) .findAny(); } ``` The example enum would look like this: ```java public enum Animal implements ValueAware<String> { DOG("bark"), CAT("meow"); private final String sound; Animal(String sound) { this.sound = sound; } @Override public String getValue() { return sound; } } ``` Example usage: ```java Optional<Animal> cat = findEnumByValue(Animal.class, "meow"); ```
{"OriginalQuestionIds":[45781692],"Voters":[{"Id":7582247,"DisplayName":"Ted Lyngmo","BindingReason":{"GoldTagBadge":"c++"}}]}
I used React to develop my website and deployed it on GitHub Pages, but the problem is that the home page works while the other pages cannot be found. A page loads and shows, and then suddenly goes off and says "404, file can not be found" after a few seconds. I don't understand why; I have tried various methods to fix it. Kindly help. My repository is: https://github.com/Bunmi2020/valor-motion-pictures The website is https://valormotionpictures.com/
One of my pages keeps showing file not found
|http-status-code-404|custom-error-pages|file-not-found|
null
Here is my prometheus.yml:

```
global:
  scrape_interval: 3s # Set the scrape interval to every 3 seconds. Default is every 1 minute.
  evaluation_interval: 5s # Evaluate rules every 5 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]
```

Prometheus is launched like:

```
prometheus --config.file=/etc/prometheus/prometheus.yml --web.enable-remote-write-receiver --enable-feature=native-histograms
```

K6 load test:

```
k6 run -o experimental-prometheus-rw dist/get-all-node-requests-test.js
```

I see metrics appear in Prometheus, but k6 generates metrics in ms and I would like to have ms in Prometheus (Y axis). Could you please advise how I can change it? [![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/ycD50.png
|sapui5|sap|
You can try the snippet below. It will work as per your requirement; the comparison uses a single month index (`year * 12 + month`) so that it stays correct across year boundaries:

```
%dw 2.0
output application/json
var inpDate = "03/01/2024" as Date {format: "MM/dd/yyyy"}
var today = now() as Date {format: "MM/dd/yyyy"}
---
{
    inpDate: inpDate,
    today: today,
    isCheck: (inpDate.year * 12 + inpDate.month) < (today.year * 12 + today.month)
}
```
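A quick way to convince yourself of the month comparison: the sum `month + year` misorders dates across year boundaries (December 2023 sums to 2035, January 2024 to only 2025), while the linear index `year * 12 + month` always orders correctly. A Python check of that claim:

```python
from datetime import date

def month_index(d):
    # A date's position on a linear month axis; earlier months get smaller values.
    return d.year * 12 + d.month

dec_2023 = date(2023, 12, 1)
jan_2024 = date(2024, 1, 15)

# The sum month + year gets this pair backwards...
assert dec_2023.month + dec_2023.year > jan_2024.month + jan_2024.year
# ...while the month index orders it correctly.
assert month_index(dec_2023) < month_index(jan_2024)
```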
I'm helping a user get set up with Cytoscape on our HPC, and I'm wondering if there are ways besides the OpenCL support to get higher performance from Cytoscape. What I mean is: I have a 94-CPU node that seems to be running one of the layout operations on a sample dataset at almost the same speed as the user's desktop machine. I can tell by looking in the `Cytoscape.vmoptions` that the available HPC memory is being provided to the program, so I don't think that's what's preventing higher performance. If OpenCL is the requirement for multi-CPU-core leverage then I'll try to work that out, but I thought it was mostly for leveraging the GPU.
I am trying to plot a scatterplot for the data frame posted below. I want the scatterplot to show the different stages as different markers (circle, square, etc.) and the different products as different colors (red, blue, etc.). So far I have done that, but I have a hard time showing a legend that depicts this. This is what I wrote:

```
df = pd.DataFrame([[1500, 24, 'open', 'drive'],
                   [2900, 30, 'open', 'walk'],
                   [1200, 50, 'closed', 'drive'],
                   [4000, 80, 'open', 'air'],
                   [8000, 70, 'ongoing', 'air'],
                   [6100, 40, 'ongoing', 'walk'],
                   [7200, 85, 'closed', 'drive'],
                   [3300, 25, 'closed', 'drive'],
                   [5400, 45, 'open', 'walk'],
                   [5900, 53, 'open', 'air']])
df.columns = ['Cost', 'Duration', 'Stage', 'Product']

label_encoder = LabelEncoder()
markers = {0: 'o', 1: 's', 2: '^'}
df['Product_encoded'] = label_encoder.fit_transform(df['Product'])
df['Stage_encoded'] = label_encoder.fit_transform(df['Stage'])
df['Stage_encoded'] = df['Stage_encoded'].map(markers)

X = np.array(df)
for idx, cl in enumerate(np.unique(df['Stage_encoded'])):
    plt.scatter(x=X[df['Stage_encoded'] == cl, 0],
                y=X[df['Stage_encoded'] == cl, 1],
                marker=cl,
                c=[colors[i] for i in X[df['Stage_encoded'] == cl, 4]])
plt.legend()
```

This shows the plot and gives the points the appropriate color and marker, but I want the legend to show both the marker and the color.
```
create table login (
    meter_no number(30,5),
    username varchar2(20),
    name varchar2(20),
    password varchar2(20),
    user varchar2(20)
);
```

```
ERROR at line 1:
ORA-00904: : invalid identifier
```
|sql|oracle|
How do I use a custom function? The example in the Nuxt Content documentation is not working:

```
export default async function customSearchContent(search: Ref<string>) {
  const runtimeConfig = useRuntimeConfig()
  const { integrity, api } = runtimeConfig.public.content
  const { data } = await useFetch(`${api.baseURL}/search${integrity ? '.' + integrity : ''}.json`)
  const { results } = useFuse(search, data)
  return results
}
```
I am interested in the 'CSI-Prediction' repository by sharanmourya available on GitHub at https://github.com/sharanmourya/CSI-Prediction. This repository contains PyTorch code for 'Spectral Temporal Graph Neural Network for massive MIMO CSI Prediction.' I would like to run the code and reproduce the results mentioned in the associated research paper. I have gone through the repository and the README file, but I would appreciate more detailed instructions on how to properly execute the code. If anyone has successfully run this code or has experience with similar projects, I would greatly appreciate your guidance. Any detailed instructions, clarifications, or tips on running this code would be invaluable. Thank you in advance for your help! In step 2 of the setup process, it is mentioned to use STNet for compression. Could you please provide more information on how to acquire and utilize STNet for the compression process? Are there any specific instructions or guidelines to follow?
Inquiry about Running the Source Code - CSI-Prediction Repository
|pytorch|stem|
null
I am trying to analyse execution time of a query. I used the following statement to check CPU and elapsed time: SET STATISTICS IO, TIME ON; For a single query, I can understand the CPU and elapsed time. But when it comes to a complex query, say as follows (Pretty big and for the time being, not sharing any sample data): DECLARE @userID BIGINT = 100, @type bit =0 DECLARE @fetchDate DATETIME; SET @fetchDate = DATEADD(yy,-2,datediff(d,0,getdate())) DECLARE @subuserId BIGINT; SELECT @subuserId = dbo.fn_getSubstituteForUser(@userID) DECLARE @userGradeLevelCode NVARCHAR(20); SELECT @userGradeLevelCode = GradeLevelCode FROM Tbl_UserPost pst LEFT JOIN Tbl_MasterGradeLevel gd ON pst.GradeLevel_ID = gd.GradeLevel_ID WHERE pst.IsActive = 1 AND pst.User_ID = @userID SELECT RANK() OVER (PARTITION BY MemoForAllID ORDER BY MemoForAllDetailID DESC) r, SenderID, MemoForAllID, ReceiverIndividualIDs, MemoForAllDetailID, Status, IsUpdated, IsTransfered, CreatedBy INTO #temp_VW_Tbl_MemoForAllDetail FROM VW_Tbl_MemoForAllDetail WHERE IsActive = 1 AND CreatedOn > @fetchDate SELECT header.MemoForAllID, detail.MemoForAllDetailID, header.Subject, header.Code, header.DocumentNumber, detail.SenderID, CASE WHEN (dd.GradeLevelCode = 'DEL') THEN dd.DepartmentName WHEN @type = 0 OR detail.Status IN ('REVIEW', 'DRAFT', 'CORRECTION') THEN dd.EmployeeName ELSE CASE WHEN (@userID IN (SELECT dlg1.DelegateID FROM Tbl_MemoForAllDelegateDetail dlg1 WHERE dlg1.MemoForAllID = header.MemoForAllID AND dlg1.IsActive = 1) AND detail.Status NOT IN ('REVIEW', 'CORRECTION')) THEN (SELECT DirectorateName FROM tbl_MasterDirectorate msd INNER JOIN tbl_UserPost pst ON pst.Directorate_ID = msd.Directorate_ID WHERE pst.IsActive = 1 AND pst.User_ID = (SELECT TOP 1 dlg1.SenderID FROM Tbl_MemoForAllDelegateDetail dlg1 WHERE dlg1.MemoForAllID = header.MemoForAllID AND dlg1.DelegateID = @userID AND dlg1.IsActive = 1)) ELSE (SELECT DirectorateName FROM tbl_MasterDirectorate msd INNER JOIN tbl_UserPost pst ON 
pst.Directorate_ID = msd.Directorate_ID WHERE pst.IsActive = 1 AND pst.User_ID = header.SenderID) END END AS SenderName, header.MemoForAllDate, header.Priority AS PriorityName, detail.Status, CASE WHEN (@userID IN (SELECT dlg1.DelegateID FROM Tbl_MemoForAllDelegateDetail dlg1 WHERE dlg1.MemoForAllID = header.MemoForAllID AND dlg1.IsActive = 1) AND detail.Status NOT IN ('REVIEW', 'CORRECTION')) THEN (SELECT TOP 1 dlg1.CreatedOn FROM Tbl_MemoForAllDelegateDetail dlg1 WHERE dlg1.MemoForAllID = header.MemoForAllID AND dlg1.DelegateID = @userID AND dlg1.IsActive = 1) ELSE header.MemoForAllDate END AS SentOn, CASE WHEN (@userID IN (SELECT value FROM STRING_SPLIT(header.CCIDs, ','))) THEN CAST(1 AS BIT) WHEN (@userID IN (SELECT value FROM STRING_SPLIT(header.CCSubIDs, ','))) THEN CAST(1 AS BIT) WHEN (@userID IN (SELECT value FROM STRING_SPLIT(dlgt.CCDelegateIDs, ','))) THEN CAST(1 AS BIT) ELSE CAST(0 AS BIT) END AS IsCCUser, CASE WHEN (@subuserId <> 0) THEN CAST(0 AS BIT) ELSE CAST(1 AS BIT) END AS CanTakeAction, detail.IsUpdated, CASE WHEN (@userID IN (SELECT a.SenderID FROM Tbl_MemoForAllDelegateDetail a WHERE a.MemoForAllID = header.MemoForAllID AND a.IsActive = 1)) THEN CAST(1 AS BIT) ELSE CAST(0 AS BIT) END AS HasAssignedDelegate, CASE WHEN (@userID IN (SELECT value FROM STRING_SPLIT(header.RecipientIDs, ',')) AND detail.Status NOT IN ('REVIEW', 'CORRECTION')) THEN CAST(1 AS BIT) WHEN (@userID IN (SELECT value FROM STRING_SPLIT(header.RecipientSubIDs, ',')) AND detail.Status NOT IN ('REVIEW', 'CORRECTION')) THEN CAST(1 AS BIT) WHEN (@userID IN (SELECT dlg.DelegateID FROM Tbl_MemoForAllDelegateDetail dlg WHERE dlg.MemoForAllID = header.MemoForAllID AND dlg.IsActive = 1 ) AND detail.Status NOT IN ('REVIEW', 'CORRECTION')) THEN CAST(1 AS BIT) ELSE CAST(0 AS BIT) END AS CanFreeze, CASE WHEN (@userID IN (SELECT a.User_ID FROM Tbl_MemoForAllFreezeDetail a WHERE a.MemoForAllID = header.MemoForAllID AND a.IsActive = 1 )) THEN CAST(1 AS BIT) ELSE CAST(0 AS BIT) END AS 
HasFreezed, CASE WHEN ( header.MemoForAllID IN (SELECT DISTINCT a.MemoForAllRefID FROM Tbl_MemoForAllHeader a WHERE a.CreatedBy = @userID and CreatedOn > @fetchDate) AND @userID IN (SELECT a.User_ID FROM Tbl_MemoForAllFreezeDetail a WHERE a.MemoForAllID = header.MemoForAllID AND a.IsActive = 1 ) ) THEN CAST(1 AS BIT) ELSE CAST(0 AS BIT) END AS IsReffered, 'MemoForAll' MemoType, CAST(0 AS BIT) IsMemoClosed, CAST(0 AS BIT) IsFinalSent, CASE WHEN (@userID IN (SELECT dlg.DelegateID FROM Tbl_MemoForAllDelegateDetail dlg WHERE dlg.MemoForAllID = header.MemoForAllID AND dlg.IsActive = 1 ) AND detail.Status NOT IN ('REVIEW', 'CORRECTION')) THEN CAST(1 AS bigint) ELSE CAST(0 AS bigint) END AS RecipientDelegate_ID, CAST(0 AS BIT) IsForwarded, CASE WHEN (header.Status NOT IN ('DRAFT','REVIEW') AND @userGradeLevelCode IN ('DIR')) THEN CAST(1 AS BIT) ELSE CAST(0 AS BIT) END AS CanTransfer, detail.IsTransfered, detail.CreatedBy FROM Tbl_MemoForAllHeader header LEFT JOIN Tbl_MemoForAllDelegateDetail dlgt ON header.MemoForAllID = dlgt.MemoForAllID AND dlgt.ID = ( SELECT TOP 1 ID FROM Tbl_MemoForAllDelegateDetail WHERE MemoForAllID = header.MemoForAllID AND IsActive = 1 ) INNER JOIN #temp_VW_Tbl_MemoForAllDetail detail ON (header.MemoForAllID = detail.MemoForAllID) inner join VW_UserInfo dd on dd.User_ID=detail.SenderID WHERE detail.r =1 and header.CreatedOn > @fetchDate and header.IsActive = 1 AND header.IsForAllDirectors = 0 AND ( (header.IsForAllDirectors = 0 AND header.Status IN ('ADD_DELEGATE') AND @userID IN(SELECT value FROM STRING_SPLIT(header.RecipientIDs,',')) ) OR @userID IN(SELECT value FROM STRING_SPLIT(header.CCSecretaryIDs,',')) OR (header.Status IN ('REVIEW') AND (header.OriginatorID = @userID OR header.OriginatorSubID = @userID)) ) AND header.Status NOT IN ('DRAFT') DROP TABLE #temp_VW_Tbl_MemoForAllDetail The performance degrades in production, in my understanding it could be due to no. 
of `CAST()` calls and subqueries with `CASE` statements (I didn't write the original query). I'd be glad to hear any suggestions for optimizing it based on that assumption. Now my actual question is something different that I am unable to figure out. When I run the query on production, I get the following: **Image 1**: [![Query Execution 1][1]][1] For 4970 rows, it takes around **00:01:20**. Locally, it's pretty fast. When I use the statement `SET STATISTICS IO, TIME ON;`, I got the below: **Image 2**: [![Query Execution 2][2]][2] My question is whether the execution time I see in the first image should match the second image. I would like to understand how the execution time in the second image is calculated: I can see an elapsed time for each scan, so to get the actual execution time, should I sum up all the elapsed times in the second image? A brief explanation would be great. **N.B.**: I am pretty new to analysing `SQL` queries, so please pardon me if I missed anything, and do suggest anything that can improve the post.

[1]: https://i.stack.imgur.com/glbhE.png
[2]: https://i.stack.imgur.com/7Ih2Z.png
<!-- begin snippet: js hide: false console: true babel: null -->

<!-- language: lang-js -->

$(function() {
  $('.map').maphilight();
});

var myData = {
  "left-eye": {
    "title": "",
    "image": "",
    "description": "Lorem ipsum A dolor sin amet. ."
  },
  "mouth": {
    "title": "",
    "description": "Lorem ipsum B dolor sin amet. Lorem ipsum B dolor sin amet.Lorem ipsum B dolor sin amet.Lorem ipsum B dolor sin amet.Lorem ipsum B dolor sin amet.Lorem ipsum B dolor sin amet.Lorem ipsum B dolor sin amet.Lorem ipsum B dolor sin amet.vLorem ipsum B dolor sin amet.vLorem ipsum B dolor sin amet.Lorem ipsum B dolor sin amet.</p>Lorem ipsum B dolor sin amet."
  },
};

var areas = document.getElementsByTagName('area'),
    bubble = document.getElementById('myBubble'),
    bubbleContent = document.getElementById('myBubbleContent'),
    bubbleClose = document.getElementById('myBubbleCloseButton');

// On click of an area, open popup
for (var i = 0, l = areas.length; i < l; i++) {
  areas[i].addEventListener('click', openBubble, false);
}

// On click of close button, close popup
bubbleClose.addEventListener('click', closeBubble, false);

function openBubble() {
  var content = myData[this.id];
  bubbleContent.innerHTML = '<h3>' + content.title + '</h3>' +
    '<img src="' + content.image + '" alt="" />' +
    '<p>' + content.description + '</p>';
  bubble.style.top = (event.clientY - 20) + "px";
  bubble.style.left = (event.clientX - 200) + "px";
  bubble.className = 'shown';
}

function closeBubble() {
  bubble.className = '';
}

<!-- language: lang-css -->

#myWrapper {
  position: relative;
  font-family: Arial, Helvetica, sans-serif;
}
#myImage {
  display: block;
}
#myBubble {
  background: #fff;
  border: 1px solid #aaa;
  padding: .5rem 1rem;
  display: none;
  top: 1.5rem;
  left: 1.5rem;
  width: 40%;
}
#myBubble.shown {
  position: fixed;
  display: block;
  cursor: pointer;
  font-size: 0.8rem;
  margin-top: 0.2rem;
  filter: drop-shadow(0 2px 4px rgba(0,0,0,0.5));
  width: 20%;
}
#myBubble.shown:before {
  content: "";
  position: absolute;
  top: 20px;
  left: -30px;
  z-index: 1;
  border: solid 15px transparent;
  border-right-color: #FFF;
}
#myBubble img {
  display: block;
  width: 100%;
}
#myBubbleCloseButton {
  position: absolute;
  top: 0;
  right: 0;
  padding: .5rem;
  line-height: 1;
  cursor: pointer;
  color: #1A8782;
}
#myBubbleCloseButton:hover {
  color: #1A8782;
}

<!-- language: lang-html -->

<script src="https://cdn.rawgit.com/davidjbradshaw/image-map-resizer/master/js/imageMapResizer.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/maphilight/1.4.0/jquery.maphilight.min.js"></script>
<div id="myWrapper">
  <img id="myImage" src="http://tinypic.com/images/goodbye.jpg" usemap="#myMap" alt="" class="map" />
  <map name="myMap" id="myMap">
    <area id="left-eye" coords="118,36,148,80" shape="rect" href="javascript:void(0);">
    <area id="mouth" coords="121,84,198,118" shape="rect">
  </map>
  <div id="myBubble">
    <div id="myBubbleContent"></div>
    <div id="myBubbleCloseButton">✕</div>
  </div>
</div>

<!-- end snippet -->
### What are the details of your problem? I have a [Sonatype Nexus Repository Manager](https://www.sonatype.com/products/sonatype-nexus-repository) that has an unknown number of hosted repositories for the [Maven-3 m2](https://maven.apache.org/settings.html) format. I'd like to easily make sure all available options are being used from this Nexus. I can see many options being listed available on the Nexus search but at maven build are not being found. 1. Sonatype.Nexus.Repo.1 1. Sonatype.Nexus.Repo.2 1. Sonatype.Nexus.Repo.A 1. Sonatype.Nexus.Repo.N 1. Sonatype.Nexus.Repo.A2 ### What did you try and what were you expecting? I was expecting that if I gave the overarching Sonatype Nexus URL e.g. Sonatype.Nexus.Repo that that'd be fine enough for redirects to figure out and auto-manage. But actually, none of the artifacts seemed to resolve. I don't know if I just haven't been searching the right key terms or my phrasing has been wonky. Is there a way to force Maven to use all the sub-repositories of the main Nexus? Is there a Nexus URL format that will automatically provide all the available repositories' Maven packages in one link? Is there a way to have Maven check every mirror listed for `<mirrorOf>central</mirrorOf>`?
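One approach worth checking (hedged: the host and group names below are placeholders): Nexus can expose a *group* repository that aggregates many hosted and proxy repositories behind a single URL, and Maven's `settings.xml` can mirror everything to that one URL. A sketch:

```xml
<!-- settings.xml sketch; "maven-public" is Nexus's conventional default
     group repository name, but your instance may use a different one -->
<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <!-- mirrorOf * routes every repository request through the group;
           use "central" instead to redirect only Maven Central -->
      <mirrorOf>*</mirrorOf>
      <url>https://nexus.example.com/repository/maven-public/</url>
    </mirror>
  </mirrors>
</settings>
```

If the hosted repositories aren't already members of a group, a Nexus administrator has to add them to one via the group's member list; Maven itself cannot discover sub-repositories from a base URL, which would explain why pointing it at the overarching URL resolved nothing.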
I'm trying to run a DAG with some `DbtTestOperator` and `DbtRunOperator` tasks but got an error because the `dbt_packages` folder is not in the repo bucket. Even though I added the `dbt_utils` package in the `packages.yml` file, the DAG failed because of the nonexistent `dbt_packages` folder.

packages.yml

```
packages:
  - package: calogica/dbt_expectations
    version: [">=0.5.0", "<0.6.0"]

  - package: dbt-labs/dbt_utils
    version: 0.7.1
```

airflow logs

```
[2024-03-11, 11:28:13 UTC] {dbt_hook.py:126} INFO - Output:
[2024-03-11, 11:28:17 UTC] {dbt_hook.py:130} INFO - 11:28:17 Running with dbt=1.0.9
[2024-03-11, 11:28:17 UTC] {dbt_hook.py:130} INFO - 11:28:17 Encountered an error:
[2024-03-11, 11:28:17 UTC] {dbt_hook.py:130} INFO - Runtime Error
[2024-03-11, 11:28:17 UTC] {dbt_hook.py:130} INFO - Failed to read package: Runtime Error
[2024-03-11, 11:28:17 UTC] {dbt_hook.py:130} INFO - no dbt_project.yml found at expected path /home/airflow/gcs/dags/dbt_models/dbt_packages/dbt_utils/dbt_project.yml
[2024-03-11, 11:28:17 UTC] {dbt_hook.py:130} INFO -
[2024-03-11, 11:28:17 UTC] {dbt_hook.py:130} INFO - Error encountered in /home/airflow/gcs/dags/dbt_models/dbt_packages/dbt_utils
[2024-03-11, 11:28:17 UTC] {dbt_hook.py:132} INFO - Command exited with return code 2
[2024-03-11, 11:28:17 UTC] {taskinstance.py:1904} ERROR - Task failed with exception
```

What action would resolve this issue?
Error in Cloud Composer with DBT where path /dbt_packages/dbt_utils/dbt_project.yml is not found
|airflow|dbt|airflow-2.x|
Try this approach:

```sql
UPDATE dbo.[user]
SET NewName = (
    SELECT STRING_AGG(r.RandomWord, ' ')
    FROM STRING_SPLIT(Name, ' ') AS parts
    CROSS APPLY (
        SELECT (
            -- One random uppercase letter (A-Z) per character of the word.
            -- NEWID() is re-evaluated per row, unlike RAND(), which SQL Server
            -- evaluates only once per query.
            SELECT TOP (LEN(parts.value))
                   CHAR(65 + ABS(CHECKSUM(NEWID())) % 26)
            FROM master..spt_values AS s
            WHERE s.type = 'P'
            FOR XML PATH('') -- concatenate the generated letters into one string
        ) AS RandomWord
    ) AS r
);
```

This replaces each word in the `Name` column with a random string of the same length while preserving the spaces between words: `STRING_SPLIT` breaks `Name` into words, the `TOP (LEN(parts.value))` subquery generates one random letter per character of the word, `FOR XML PATH('')` concatenates those letters into a string, and `STRING_AGG` stitches the words back together. Note that neither `STRING_SPLIT` nor `STRING_AGG` guarantees word order on its own; if the order of words matters, you need SQL Server 2022's `enable_ordinal` argument together with `STRING_AGG ... WITHIN GROUP`.
Zoom problem while using onWheel with react-avatar-editor
|reactjs|
When you subtract `p1` from `p2`, i.e. `(p2 - p1)`, you get the distance between the two pointers measured in elements, not in raw bytes. Since both pointers point to elements within the same array, the subtraction tells you how many elements apart they are. So `p2 - p1` gives you 2, indicating the pointers are 2 `int` elements apart (an address difference of 8 bytes on a platform where `int` is 4 bytes). If you instead want the difference between the pointed-to *values*, dereference the pointers first: `printf("%d", *p2 - *p1);`
I have a Symfony application using Doctrine ORM. When creating a new Task entity with relationships and trying to persist it, I get a 500 Internal Server Error.

My `TaskController.php`:

```php
/**
 * @Route("/tasks", name="task_list")
 */
public function listTasks(TaskRepository $taskRepository): Response
{
    $tasks = $taskRepository->findAll();
    dump($tasks);

    return $this->render('task/list.html.twig', [
        'tasks' => $tasks,
    ]);
}

/**
 * @Route("/tasks/create", name="task_create")
 */
public function createTask(Request $request): Response
{
    $task = new Task();
    $task->setCreatedAt(new \DateTime());
    $task->setUpdatedAt(NULL);

    $form = $this->createForm(TaskType::class, $task);
    $form->handleRequest($request);

    if ($form->isSubmitted() && $form->isValid()) {
        $this->entityManager->persist($task); // Use the EntityManager to persist the task
        dd($task);
        $this->entityManager->flush(); // Use the EntityManager to flush changes

        return $this->redirectToRoute('task_list');
    }

    return $this->render('task/create.html.twig', [
        'form' => $form->createView(),
    ]);
}
```

My `routes.yaml`:

```yaml
task_list:
    path: /tasks
    controller: App\Controller\TaskController::listTasks

task_create:
    path: /tasks/create
    controller: App\Controller\TaskController::createTask
```

My entities:

`User` Entity:

```php
<?php

namespace App\Entity;

use ApiPlatform\Metadata\ApiResource;
use App\Repository\UserRepository;
use Doctrine\DBAL\Types\Types;
use Doctrine\ORM\Mapping as ORM;

#[ORM\Entity(repositoryClass: UserRepository::class)]
#[ApiResource]
class User
{
    #[ORM\Id]
    #[ORM\GeneratedValue]
    #[ORM\Column]
    private ?int $id = null;

    #[ORM\Column(length: 255, nullable: true)]
    private ?string $username = null;

    #[ORM\Column(length: 255, nullable: true)]
    private ?string $name = null;

    #[ORM\Column(length: 255, nullable: true)]
    private ?string $surname = null;

    #[ORM\Column(length: 255, nullable: true)]
    private ?string $email = null;

    #[ORM\Column(length: 255, nullable: true)]
    private ?string $password = null;

    #[ORM\Column(length: 
100, nullable: true)] private ?string $role = null; #[ORM\Column(type: Types::DATETIME_MUTABLE, nullable: true)] private ?\DateTimeInterface $created_at = null; public function getId(): ?int { return $this->id; } public function setId(int $id): static { $this->id = $id; return $this; } public function getUsername(): ?string { return $this->username; } public function setUsername(?string $username): static { $this->username = $username; return $this; } public function getName(): ?string { return $this->name; } public function setName(?string $name): static { $this->name = $name; return $this; } public function getSurname(): ?string { return $this->surname; } public function setSurname(?string $surname): static { $this->surname = $surname; return $this; } public function getEmail(): ?string { return $this->email; } public function setEmail(?string $email): static { $this->email = $email; return $this; } public function getPassword(): ?string { return $this->password; } public function setPassword(?string $password): static { $this->password = $password; return $this; } public function getRole(): ?string { return $this->role; } public function setRole(?string $role): static { $this->role = $role; return $this; } public function getCreatedAt(): ?\DateTimeInterface { return $this->created_at; } public function setCreatedAt(?\DateTimeInterface $created_at): static { $this->created_at = $created_at; return $this; } } ``` `Task` Entity: ```php <?php namespace App\Entity; use ApiPlatform\Metadata\ApiResource; use App\Repository\TaskRepository; use Doctrine\DBAL\Types\Types; use Doctrine\ORM\Mapping as ORM; #[ORM\Entity(repositoryClass: TaskRepository::class)] #[ApiResource] class Task { #[ORM\Id] #[ORM\GeneratedValue] #[ORM\Column] private ?int $id = null; #[ORM\Column(length: 255)] private ?string $title = null; #[ORM\Column(type: Types::TEXT, nullable: true)] private ?string $description = null; #[ORM\Column(length: 100, nullable: true)] private ?string $status = null; 
#[ORM\Column(type: Types::DATETIME_MUTABLE, nullable: true)] private ?\DateTimeInterface $createdAt = null; #[ORM\Column(type: Types::DATETIME_MUTABLE, nullable: true)] private ?\DateTimeInterface $updatedAt = null; #[ORM\ManyToOne] #[ORM\JoinColumn(nullable: false)] private ?User $user = null; public function getId(): ?int { return $this->id; } public function setId(int $id): static { $this->id = $id; return $this; } public function getTitle(): ?string { return $this->title; } public function setTitle(string $title): static { $this->title = $title; return $this; } public function getDescription(): ?string { return $this->description; } public function setDescription(?string $description): static { $this->description = $description; return $this; } public function getStatus(): ?string { return $this->status; } public function setStatus(?string $status): static { $this->status = $status; return $this; } public function getCreatedAt(): ?\DateTimeInterface { return $this->createdAt; } public function setCreatedAt(?\DateTimeInterface $createdAt): static { $this->createdAt = $createdAt; return $this; } public function getUpdatedAt(): ?\DateTimeInterface { return $this->updatedAt; } public function setUpdatedAt(?\DateTimeInterface $updatedAt): static { $this->updatedAt = $updatedAt; return $this; } public function getUser(): ?User { return $this->user; } public function setUser(?User $user): static { $this->user = $user; return $this; } } ``` Kindly, help me.
Symfony 7: Doctrine EntityManager persist() fails on new entity with relationships
|php|symfony|doctrine-orm|doctrine|entitymanager|
```
$upgradeInvoke = Invoke-Command -Session $session -ScriptBlock {
    try {
        Shutdown.exe /r /f /t 1200 /d p:4:2 /c "Triggered reboot timer for 20 minutes"
        $arguments = "/s", "/v`"/qn REBOOT=ReallySuppress`""
        $process = Start-Process -FilePath $using:setupFullPath -ArgumentList $arguments -PassThru -Wait -Verb runas
        Write-Output $process.ExitCode
    }
    catch {
        Write-Output "An error occurred: $($_.Exception.Message)"
    }
} -WarningAction SilentlyContinue -ErrorAction Stop
```

I'm trying to install VMware Tools on a remote machine after disabling UAC. The `Invoke-Command` itself works — I can see the User32 event logged in Event Viewer for the reboot — but `Start-Process` is not getting triggered. The same script works when any other user is logged in via RDP or the direct console. It looks weird, doesn't it? Moreover, it works fine on Windows Server 2012 R2, 2016, and 2019 Standard; it does not work on Windows Server 2022 Standard.
Is there a way to have a joiner/indexer or composite node with an update policy? The update policy has these requirements:

1. The output is ready once at least one port receives a value.
2. There is a mechanism to check if a port is ready at the receiving node, e.g. a null pointer, a default-constructed value, a user-defined invalid value, or a flag indicating whether the port is ready.
3. If a port has only one value, this value is conserved and is re-sent once another port with a new value is ready (if all ports hold the same old message, then no updates).

**Example**: an update node with two ports, 1 and 2.

Value x1 is pushed into port 1 => the node is ready and sends the "pair" {x1, nullptr} while keeping x1 in the queue.

Value x2 arrives at port 1 => the node is ready and sends {x2, nullptr}; x1 is overwritten and x2 is conserved.

Value y1 arrives at port 2 => the node is ready and sends {x2, y1}; x2 and y1 are conserved.

Value x3 arrives at port 1 => the node is ready and sends {x3, y1}; x3 and y1 are conserved.

This looks like a combination of either a joiner/indexer or composite node with an `overwrite_node`. My value type is `shared_ptr`, so `nullptr` would work for me.
Update Node from OneTBB Library
|c++|graph|intel|tbb|tbb-flow-graph|
Use a set, aggregate with `issubset`:

```
Type = {3,4,5}

df['Date'] = pd.to_datetime(df['Date'])

keep = df.groupby('Date')['Type'].agg(Type.issubset)

out = df[df['Date'].isin(keep.index[keep])]
```

Variant:

```
Type = {3,4,5}

df['Date'] = pd.to_datetime(df['Date'])

out = df[df.groupby('Date')['Type'].transform(Type.issubset)]
```

Output:

```
        Date  Type  Value
2 2024-03-12     3      3
3 2024-03-12     4      5
4 2024-03-12     5      5
5 2024-03-13     3      3
6 2024-03-13     4      5
7 2024-03-13     5      2
```
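For reference, a runnable reproduction of the `transform` approach — the first two input rows (a date missing one of the required types) are assumed, since only the filtered output is shown:

```python
import pandas as pd

# Assumed input: 2024-03-11 lacks Type 5, so it should be filtered out;
# the remaining rows match the output shown above.
df = pd.DataFrame({
    'Date': ['2024-03-11', '2024-03-11',
             '2024-03-12', '2024-03-12', '2024-03-12',
             '2024-03-13', '2024-03-13', '2024-03-13'],
    'Type': [3, 4, 3, 4, 5, 3, 4, 5],
    'Value': [1, 2, 3, 5, 5, 3, 5, 2],
})

Type = {3, 4, 5}
df['Date'] = pd.to_datetime(df['Date'])
# transform broadcasts the scalar bool returned by Type.issubset(group)
# back onto every row of that group
out = df[df.groupby('Date')['Type'].transform(Type.issubset)]
print(out)
```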
I have Python/Flask code that executes two SQL queries:

```
def valeurs_thematiques():
    query_thematiques = """ SELECT NOMTHEMATIQUE FROM THEMATIQUES """
    connection = connect(path)
    cursor = connection.cursor()
    cursor.execute(query_thematiques)
    res_them = cursor.fetchall()
    connection.close()
    return res_them

res_them = valeurs_thematiques()

def valeurs_activites():
    query_activites = """ SELECT NOMTHEMATIQUE FROM THEMATIQUES """
    connection = connect(path)
    cursor = connection.cursor()
    cursor.execute(query_activites)
    res_act = cursor.fetchall()
    connection.close()
    return res_act

res_act = valeurs_activites()

@app.route('/formulaire_creat_congres')
def formulaire_creat_congres():
    return render_template('formulaire_creat_congres.html', res_them=res_them, res_act=res_act)
```

Then, I have an HTML page with a loop that displays the results of the query in checkbox inputs:

```
<table>
  <tr>
    <td colspan="2">
      <label><strong>Thématiques :</strong></label>
    </td>
  </tr>
  <tr>
    <td colspan="2">
      {% for t in res_them %}
      <div>
        <input type="checkbox" id="{{ t[0] }}" name="nomtheme" value="{{ t[0] }}">
        <label for="{{ t[0] }}">{{ {t[0]} }}</label>
      </div>
      {% endfor %}
    </td>
  </tr>
</table>

<table>
  <tr>
    <td colspan="2">
      <label><strong>Activités :</strong></label>
    </td>
  </tr>
  <tr>
    <td colspan="2">
      {% for a in res_act %}
      <div>
        <input type="checkbox" id="{{ a[0] }}" value="{{ a[0] }}" name="nomactivite">
        <label for="{{ a[0] }}">{{ a[0] }}</label>
      </div>
      {% endfor %}
    </td>
  </tr>
  <tr>
    <td colspan="2"><input type="submit" value="Ajouter"></td>
  </tr>
</table>
```

The results of the queries are displayed correctly on the '/formulaire_creat_congres' page. I want to retrieve the checkbox parameters with `nomtheme = request.args.getlist("nomtheme")`.
However, when I display the value of the variable nomtheme, I either get an empty list or the list: ['on', 'on'] : ``` @app.route('/confirmation_creat_congres', methods = ["GET","POST"]) def confirmation_creat_congres(): if request.method=='GET': codcongres = request.args.get("codcongres") titrecongres = request.args.get("titrecongres") numedition = request.args.get("numedition") date_debut_c = request.args.get("date_debut_c") date_fin_c = request.args.get("date_fin_c") URL = request.args.get("URL") nomtheme = request.args.getlist('nomtheme') nomactivite = request.args.getlist("nomactivite") # codcongres=codcongres,titrecongres=titrecongres, numedition=numedition, #date_debut_c=date_debut_c, date-fin_c=date_fin_c, URL=URL, nomtheme=nomtheme,nomactivite=nomactivite else : codcongres = request.form.get("codcongres") titrecongres = request.form.get("titrecongres") numedition = request.form.get("numedition") date_debut_c = request.form.get("date_debut_c") date_fin_c = request.form.get("date_fin_c") URL = request.form.get("URL") nomtheme = request.form.getlist("nomtheme") nomactivite = request.form.getlist("nomactivite") connection = connect(path) cursor = connection.cursor() query11 = """ SELECT CODETHEMATIQUE FROM THEMATIQUES WHERE NOMTHEMATIQUE = ? """ codthemes = [] codactivites = [] for theme in nomtheme: cursor.execute(query11, (theme,)) codtheme = cursor.fetchone() if codtheme is not None: codthemes.append(codtheme) query22 = """ SELECT CODEACTIVITE FROM ACTIVITES WHERE NOMACTIVITE = ? """ for activite in nomactivite: cursor.execute(query22, (activite,)) codactivite = cursor.fetchone() if codactivite is not None: codactivites.append(codactivite) query = """ INSERT INTO congres (CODCONGRES, TITRECONGRES, NUMEDITIONCONGRES, DTDEBUTCONGRES, DTFINCONGRES, URLSITEWEBCONGRES) VALUES (?, ?, ?, ?, ?, ?) """ query1 = """ INSERT INTO TRAITER (CODCONGRES, CODETHEMATIQUE) VALUES (?, ?) """ query2 = """ INSERT INTO PROPOSER (CODEACTIVITE, CODCONGRES) VALUES (?, ?) 
""" cursor.execute(query, (codcongres,titrecongres,numedition,date_debut_c,date_fin_c,URL)) connection.commit() for codtheme1 in codthemes: cursor.execute(query1, (codcongres, codtheme1)) for codactivite1 in codactivites : cursor.execute(query2,(codactivite1, codcongres)) connection.commit() connection.close() message="La création d'un nouveau congrès est réussie ! " print("Requête HTTP :", request.url) print("Valeurs de nomtheme :", nomtheme) return render_template('confirmation_creat_congres.html', message=message, nomtheme=nomtheme) ``` It should contain the results of the previous query that have been checked (checkbox input).
You are getting the errors because you have not indented after your if-statements: the engine expects everything that should run when the if-statement evaluates to `true` to be **one indentation in** from the if-statement. Solving the error for each if-statement would simply look like:

```yaml
- ${{ if contains(parameters.host, '_T1_') }}:
  - task: Maven@3 # <-- One indentation in from if-statement
    # Task continues here
```

As they are currently (in your example) on the same indentation, the engine does not understand what you want to run when the if-statement evaluates to `true`, hence it tells you it expected at least one key-value pair to be found after the if-statement (one indentation in).
You shouldn't open and close the file each time you write something to it, as you did in your `WriteToFile` function (in append mode).

Just create one `ofstream` object (part of the std library) and pass it to your `WriteToFile` function, like this:

    void SongLibrary::SaveAllToFile(string fileName)
    {
        ofstream outfile(fileName); // note: fileName, matching the parameter
        for (Song song : m_songs)
        {
            WriteToFile(song.ToFileString(), outfile);
        }
        outfile.close();
    }

And change your `WriteToFile` function to handle this object:

    void SongLibrary::WriteToFile(string data, ofstream& outfile)
    {
        outfile << data;
    }
I have an Enum that I already use for multiple other purposes; in those cases I use the Enum like a public variable of the class, meaning I can access it like `EntityManager.FAll`. However, in my new use case I want to use the Enum as a function parameter of another class's function, like `CEntityManager::EBroadcastTypes`. But no matter what I try, compilation always fails, telling me either that when using the scope operator I need to be using a class or namespace, even though this is a class (error code: C2653), or that `EBroadcastTypes` isn't a known identifier (error code: C2061).

To further 'visualize' this, as an example: I want to use this Enum for 'filtering channels' when ray casting, so that I can check only for specific wanted entities.

```
EntityManager.h

class CEntityManager
{
public:
    enum EBroadcastTypes
    {
        FAll,
        FVehicle,
        FProjectile,
        FDynamic, // Any Entity that can move on update (ie. Vehicle and Shells)
        FStatic,  // Any Entity that never moves (ie. Scenery)
    };

    struct BroadcastFilter
    {
        EBroadcastTypes type;
        vector<string> channels;
    };

    vector<BroadcastFilter> m_BroadcastFilters;

    vector<string> GetChannelsOfFilter(EBroadcastTypes Type)
    {
        for (const auto& BroadcastFilter : m_BroadcastFilters)
        {
            if (BroadcastFilter.type == Type)
            {
                return BroadcastFilter.channels;
            }
        }
    }
}
```

```
Main.h

// Primary Update(tick) function
Update()
{
    if()// On Some condition
    {
        TFloat32 range = 400.0f
        HitResult Hit = RayCast(ray, EntityManager::FAll, range);
        // Do something with Hit
        ...
    }
}
```

The RayCast file doesn't contain a class, simply functions that fall under the same category and are used across multiple points/classes throughout the code.
```
CRayCast.h

#include "EntityManager.h"

struct CRay
{
    CVector3 m_Origin;
    CVector3 m_Direction;
};

HitResult RayCast(CRay ray, CEntityManager::EBroadcastType Type, TFloat32 Range);
```

```
CRayCast.cpp

#include "CRayCast.cpp"

// Actually using EntityManager here, other than for getting the Enum.
extern CEntityManager EntityManager;

HitResult RayCast(CRay, CEntityManager::EBroadcastTypes Type, TFloat32 Range)
{
    // Do the raycasting here
    // ...
}
```

So is there any way to actually use the Enum like this? I tried including the header as well as forward declaring both the class and the enum (the compiler then told me it didn't know what those forward declarations were).
WSO2 MI 4.2.0

I am using WSO2 Micro Integrator version 4.2.0 and a rollover policy based on a time period for the log files. I'm trying to delete rollover files older than 58 days, meaning that I want to keep ~58 days of logs, with the following configuration (as recommended on https://apim.docs.wso2.com/en/latest/administer/logging-and-monitoring/logging/managing-log-growth/):

```
appender.CARBON_LOGFILE.strategy.action.type = Delete
appender.CARBON_LOGFILE.strategy.action.basepath = ${sys:carbon.home}/repository/logs/
appender.CARBON_LOGFILE.strategy.action.maxdepth = 1
appender.CARBON_LOGFILE.strategy.action.condition.type = IfLastModified
appender.CARBON_LOGFILE.strategy.action.condition.age = 58D
appender.CARBON_LOGFILE.strategy.action.PathConditions.type = IfFileName
appender.CARBON_LOGFILE.strategy.action.PathConditions.glob = wso2carbon-
```

Here is an image of all the configurations I have in log4j.properties for the carbon_logfile: [log4j config](https://i.stack.imgur.com/Q18l6.png)

But the configuration seems to have no effect on wso2carbon log rotation, even after a restart of the service. The service has 60 files matching the pattern wso2carbon-* and the count increases each day. [files](https://i.stack.imgur.com/KyuY5.png)

Has anyone come across any similar issue? Is there something wrong in the configuration that keeps the delete action from being applied?
n <- 9 success <- 0 throw <- function(last_throw = 1, hist = "") { if (nchar(hist) == n) { success <<- success + 1L return(NULL) } for (i in last_throw:6) throw(last_throw = i, hist = paste0(hist, i)) } success # [1] 2002 Simplified: throw <- function(last_throw = 1L, len = 0L) { if (len == n) { success <<- success + 1L return(NULL) } for (i in last_throw:6L) throw(last_throw = i, len = len + 1L) } success # [1] 2002
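As a cross-check (sketched in Python): the recursion counts the non-decreasing length-9 rolls, i.e. multisets of size 9 drawn from 6 faces, so the count has the stars-and-bars closed form C(6+9-1, 9):

```python
from math import comb
from itertools import combinations_with_replacement

n, faces = 9, 6
# Stars and bars: multisets of size n from `faces` values
closed_form = comb(faces + n - 1, n)
print(closed_form)  # 2002

# Brute-force agreement with the recursive count above
brute_force = sum(1 for _ in combinations_with_replacement(range(1, faces + 1), n))
assert brute_force == closed_form
```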
If you want to replicate the **SectionList** component using FlashList, your data array needs to look different. It is still a flat array of items.

In a regular FlashList implementation, like a **FlatList**, you have an array like this:

    const data = [{...card0}, {...card1}, {...card2}]

If you want to implement a **SectionList**, your data array needs to look like:

    const data = [{ title: '', type: 0 }, { ...card0, type: 1 }, { ...card1, type: 1 }, { title: '', type: 0 }, { ...card2, type: 1 }]

And your **FlashList component**:

    const ContactsFlashList = () => {
      return (
        <FlashList
          data={contacts}
          renderItem={({ item }) => {
            if (item.type === 0) {
              // Rendering header (render item.title, not the whole object)
              return <Text style={styles.header}>{item.title}</Text>;
            } else {
              // Render item
              return <Text>{item.firstName}</Text>;
            }
          }}
          getItemType={(item) => {
            // To achieve better performance, specify the type based on the item
            return item.type === 0 ? "sectionHeader" : "row";
          }}
          estimatedItemSize={100}
        />
      );
    };

[https://shopify.github.io/flash-list/docs/guides/section-list][1]

[1]: https://shopify.github.io/flash-list/docs/guides/section-list
In my VM, the IP address changed, so changing the **/etc/hosts** file by adding the following at the end solved the issue. 192.168.0.7 puppet-master puppet-master.local 192.168.0.8 puppet-client puppet-client.local 192.168.0.7 puppet puppet.local Here puppet-master is the name of the puppet server system and puppet-client is the hostname of puppet-agent system
I have this code but I don't actually get the email text. Have I got to decode the email text? import sys import imaplib import getpass import email import email.header from email.header import decode_header import base64 def read(username, password, sender_of_interest): # Login to INBOX imap = imaplib.IMAP4_SSL("imap.mail.com", 993) imap.login(username, password) imap.select('INBOX') # Use search(), not status() # Print all unread messages from a certain sender of interest if sender_of_interest: status, response = imap.uid('search', None, 'UNSEEN', 'FROM {0}'.format(sender_of_interest)) else: status, response = imap.uid('search', None, 'UNSEEN') if status == 'OK': unread_msg_nums = response[0].split() else: unread_msg_nums = [] data_list = [] for e_id in unread_msg_nums: data_dict = {} e_id = e_id.decode('utf-8') _, response = imap.uid('fetch', e_id, '(RFC822)') html = response[0][1].decode('utf-8') email_message = email.message_from_string(html) data_dict['mail_to'] = email_message['To'] data_dict['mail_subject'] = email_message['Subject'] data_dict['mail_from'] = email.utils.parseaddr(email_message['From']) #data_dict['body'] = email_message.get_payload()[0].get_payload() data_dict['body'] = email_message.get_payload() data_list.append(data_dict) print(data_list) # Mark them as seen #for e_id in unread_msg_nums: #imap.store(e_id, '+FLAGS', '\Seen') imap.logout() return data_dict So I do this: print('Getting the email text bodiies ... ') emailData = read(usermail, pw, sender_of_interest) print('Got the data!') for key in emailData.keys(): print(key, emailData[key]) The output is: > mail_to me@mail.com > mail_subject Get json file > mail_from ('Pedro Rodriguez', 'pedro@gmail.com') > body [<email.message.Message object at 0x7f7d9f928df0>, <email.message.Message object at 0x7f7d9f928f70>] How to actually get the email text?
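For context on the `[<email.message.Message object ...>]` output: on a multipart message, `get_payload()` returns a list of sub-parts, so the text apparently has to be pulled from the `text/plain` part and decoded. A sketch of what I understand is needed (the `get_body_text` helper name is my own, not from any library):

```python
import email

# Sketch: pull the plain-text body out of a (possibly multipart) message.
def get_body_text(email_message):
    if email_message.is_multipart():
        # walk() visits the message and every nested sub-part
        for part in email_message.walk():
            if part.get_content_type() == 'text/plain':
                charset = part.get_content_charset() or 'utf-8'
                # decode=True undoes base64/quoted-printable transfer encoding
                return part.get_payload(decode=True).decode(charset, errors='replace')
        return ''
    charset = email_message.get_content_charset() or 'utf-8'
    return email_message.get_payload(decode=True).decode(charset, errors='replace')
```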
How to get the text of the email body?
|email|
I have a table which represents sequences of points, and I need to get the sum over all possible combinations. The main problem is how to do it with minimum work, because the real table is huge.

|Col1|col2|col3|col4|col5|col6|ct|
|:---|:---|:---|:---|:---|:---|:-|
|Id1 |id2 |id3 |id4 |id5 |id6 |30|
|Id8 |id3 |id5 |id2 |id4 |id6 |45|

The expected result is:

    Id3|id5|75
    Id3|id4|75
    Id3|id6|75
    Id5|id6|75
    Id2|id4|75
    Id2|id6|75
    Id4|id6|75

I would be grateful for any help.
Remote X11 apps are slow unless running over a Gigabit LAN, and I'm looking for a protocol somewhere between ANSI escape codes and Postscript programs, that enables rendering graphical UI widgets and windows sent by a remote process over SSH to the local X11 server. Can I somehow seamlessly redirect connections from remote GUI processes to a local process that processes these Postscript-like requests and outputs X11 requests to the local server ? Normal X11 forwarded connections go directly to the local $DISPLAY socket which expects verbose, high-bandwidth X11 requests. I think Plan 9 had something like this but I can't find any reference at the moment. VS Code also has a clever approach where it serves HTML+JS from the SSH remote.
In my application the user can assume one of 3 different roles. These users can be assigned to programs, and they can be assigned to 3 different fields, each of those exclusively to a role. In my API I'm trying to query all my users and annotate the programs they are in:

```
def get_queryset(self):
    queryset = (
        User.objects
        .select_related('profile')
        .prefetch_related('managed_programs', 'supervised_programs', 'case_managed_programs')
        .annotate(
            programs=Case(
                When(
                    role=RoleChoices.PROGRAM_MANAGER,
                    then=ArraySubquery(
                        Program.objects.filter(program_managers=OuterRef('pk'), is_active=True)
                        .values('id', 'name')
                        .annotate(
                            data=Func(
                                Value('{"id": '),
                                F('id'),
                                Value(', "name": "'),
                                F('name'),
                                Value('"}'),
                                function='concat',
                                output_field=CharField(),
                            )
                        )
                        .values('data')
                    ),
                ),
                When(
                    role=RoleChoices.SUPERVISOR,
                    then=ArraySubquery(
                        Program.objects.filter(supervisors=OuterRef('pk'), is_active=True)
                        .values('id', 'name')
                        .annotate(
                            data=Func(
                                Value('{"id": '),
                                F('id'),
                                Value(', "name": "'),
                                F('name'),
                                Value('"}'),
                                function='concat',
                                output_field=CharField(),
                            )
                        )
                        .values('data')
                    ),
                ),
                When(
                    role=RoleChoices.CASE_MANAGER,
                    then=ArraySubquery(
                        Program.objects.filter(case_managers=OuterRef('pk'), is_active=True)
                        .values('id', 'name')
                        .annotate(
                            data=Func(
                                Value('{"id": '),
                                F('id'),
                                Value(', "name": "'),
                                F('name'),
                                Value('"}'),
                                function='concat',
                                output_field=CharField(),
                            )
                        )
                        .values('data')
                    ),
                ),
                default=Value(list()),
                output_field=ArrayField(base_field=JSONField()),
            )
        )
        .order_by('id')
    )
    return queryset
```

This works (almost) flawlessly and gives me only 5 DB hits, perfect, or not... The problem is that I'm using Django HashID fields for the Program PK, and this query returns the pure integer value for each Program.
I've tried a more "normal" approach, by getting the data using a `SerializerMethodField`: ``` @staticmethod def get_programs(obj): role_attr = { RoleChoices.PROGRAM_MANAGER: 'managed_programs', RoleChoices.SUPERVISOR: 'supervised_programs', RoleChoices.CASE_MANAGER: 'case_managed_programs', } try: programs = getattr(obj, role_attr[obj.role], None).values_list('id', 'name') return [{'id': str(id), 'name': name} for id, name in programs] except (AttributeError, KeyError): return [] ``` This gives me the result I need, but the query quantity skyrockets. It seems that it's not taking advantage of the `prefetch_related`, but I don't understand how is this possible, considering I'm using the same queryset. So, I have two options here: - Use the annotations but having the HashID returning, instead of integer PK - Have the SerializerMethodField reuse the prefetched data, instead of requerying Is there a way to accomplish any of those? EDIT: A small heads-up, I've decided to use the first approach and Hash the ID manually inside the serializer ``` programs = serializers.SerializerMethodField() @staticmethod def get_programs(obj): return [ {"id": str(Hashid(value=program['id'], salt=settings.SECRET_KEY, min_length=13)), "name": program['name']} for program in obj.programs ] ``` For now it works, but I'd be more satisfied if there's a more direct way to accomplish this.
All projects use 'org.springframework.boot' version '3.2.3':

1. Spring authorization OAuth 2.0 server, http://auth-server:9000
2. User Interface (React JS application)
3. Gateway (aka OAuth 2.0 client)
4. Resource server

Below is the `SecurityConfig` of the resource server:

```
@Configuration
@EnableWebSecurity
public class SecurityConfig {

    public interface Jwt2AuthoritiesConverter extends Converter<Jwt, Collection<? extends GrantedAuthority>> {
    }

    @Bean
    Jwt2AuthoritiesConverter authoritiesConverter() {
        return jwt -> {
            @SuppressWarnings("unchecked")
            final Collection<String> roles = (Collection<String>) jwt.getClaims().getOrDefault("roles", List.of());
            return roles.stream().map(role -> new SimpleGrantedAuthority(role)).collect(Collectors.toList());
        };
    }

    public interface Jwt2AuthenticationConverter extends Converter<Jwt, AbstractAuthenticationToken> {
    }

    @Bean
    Jwt2AuthenticationConverter authenticationConverter(Jwt2AuthoritiesConverter authoritiesConverter) {
        return jwt -> new JwtAuthenticationToken(jwt, authoritiesConverter.convert(jwt));
    }

    @Bean
    SecurityFilterChain securityFilterChain(HttpSecurity http,
            Converter<Jwt, AbstractAuthenticationToken> authenticationConverter) throws Exception {
        http.oauth2ResourceServer(oauth2ResourceServer -> oauth2ResourceServer
                .jwt(jwt -> jwt.jwtAuthenticationConverter(authenticationConverter)));
        http.csrf(csrf -> csrf.disable());
        http.securityMatcher("/")
                .authorizeHttpRequests(authorize -> authorize
                        .requestMatchers(HttpMethod.OPTIONS, "/**").permitAll()
                        .requestMatchers("/lot-games/api/autogen/**").hasRole("ADMIN")
                        .requestMatchers("/lot-games/api/**").permitAll()
                        .anyRequest().authenticated());
        return http.build();
    }
}
```

The GET methods work perfectly well, but as soon as a POST method is used I get the answer "An expected CSRF token cannot be found" in the browser. From the logs I understand that this answer is formed by the Gateway, since the resource server's logs (even at the springframework: TRACE level) are empty, but on the gateway:

```
[7a7f852d-5] HTTP POST "/lot-games/api/services/v1/games/123", headers={masked}
[7a7f852d-5] Completed 403 FORBIDDEN, headers={masked}
[7a7f852d-1, L:/127.0.0.1:8072 ! R:/127.0.0.1:51774] Handling completed
```

I thought there would be no problem — I would just add this code to the gateway's `SecurityConfig`:

```
@Configuration
@EnableWebFluxSecurity
public class SecurityConfig {

    @Bean
    SecurityWebFilterChain securityWebFilterChain(ServerHttpSecurity http) {
        http.csrf(Customizer.withDefaults());
        return http.build();
    }
}
```

or

```
@Bean
SecurityWebFilterChain springSecurityFilterChain(ServerHttpSecurity http) {
    return http.csrf(CsrfSpec::disable).build();
}
```

but everything breaks down and I get an error:

```
This application has no configured error view, so you are seeing this as a fallback.
Mon Mar 11 19:54:33 MSK 2024
[1dd5aeb3-6] There was an unexpected error (type=Not Found, status=404). No static resource oauth2/authorization/lot-games-client-authorization-code.
org.springframework.web.reactive.resource.NoResourceFoundException: 404 NOT_FOUND "No static resource oauth2/authorization/lot-games-client-authorization-code."
    at org.springframework.web.reactive.resource.ResourceWebHandler.lambda$handle$1(ResourceWebHandler.java:431)
    Suppressed: The stacktrace has been enhanced by Reactor, refer to additional information below:
Error has been observed at the following site(s):
    *__checkpoint ⇢ org.springframework.cloud.gateway.filter.WeightCalculatorWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ LogoutWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ ServerRequestCacheWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ SecurityContextServerWebExchangeWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ ReactorContextWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ CsrfWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ HttpHeaderWriterWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ ServerWebExchangeReactorContextWebFilter [DefaultWebFilterChain]
    *__checkpoint ⇢ org.springframework.security.web.server.WebFilterChainProxy [DefaultWebFilterChain]
    *__checkpoint ⇢ HTTP GET "/oauth2/authorization/lot-games-client-authorization-code" [ExceptionHandlingWebHandler]
```

Can you help me?
An expected CSRF token cannot be found or No static resource oauth2/authorization/lot-games-client-authorization-code
|spring-security-oauth2|spring-cloud-gateway|spring-resource-server|
null
You can use a view composer, or share data directly with all views, using a closure in the `boot` method:

```
use Illuminate\Support\Facades\View;

public function boot(): void
{
    View::composer('*', function ($view) {
        $user = auth()->user();
        $view->with('current_user', $user);
    });
}
```
I have created `APINetworkManagerAll`; here I have created `serviceCall` and I am calling it from a view controller. `postGenericCall2()` is called and a response is also coming, but my given `param` values are not coming back — instead the decoder says they are nil. Why? I am passing the same values in Postman and the response comes back properly. Where am I wrong? Please guide me.

```
struct RequestObjectAll {
    var params: [String: Any]? = nil
    var method: HTTPMethod
    var urlPath: String
    var isTokenNeeded: Bool
    var isLoaderNeed: Bool = false
    var isErrorNeed: Bool = false
    var vc: UIViewController?
}

class APINetworkManagerAll: NSObject {

    static let sharedInstance = APINetworkManagerAll()

    fileprivate override init() {
        super.init()
    }

    func serviceCall<T: Decodable>(requestObject: RequestObjectAll, completion: @escaping (Result<T, Error>) -> Void) {
        if requestObject.isLoaderNeed {
            requestObject.vc?.showLoader()
        }
        guard let url = URL(string: requestObject.urlPath) else {
            if requestObject.isLoaderNeed {
                requestObject.vc?.hideLoader()
            }
            completion(.failure(NetworkError.invalidURL))
            return
        }
        var urlRequest = URLRequest(url: url)
        urlRequest.httpMethod = requestObject.method.rawValue
        guard let httpBody = try? JSONSerialization.data(withJSONObject: requestObject.params ?? ["": ""], options: []) else { return }
        urlRequest.httpBody = httpBody
        let task = URLSession.shared.dataTask(with: urlRequest) { data, _, error in
            if requestObject.isLoaderNeed {
                requestObject.vc?.hideLoader()
            }
            if let error = error {
                completion(.failure(error))
                return
            }
            if let data = data {
                do {
                    let response = try JSONDecoder().decode(T.self, from: data)
                    completion(.success(response))
                } catch {
                    completion(.failure(error))
                }
            } else {
                let error = NSError(domain: "YourDomain", code: 0, userInfo: [NSLocalizedDescriptionKey: "No data received"])
                completion(.failure(error))
            }
        }
        task.resume()
    }
}
```

```
func postGenericCall2() {
    let param = ["name": "john", "job": "AAA"]
    let requestObject = RequestObjectAll(
        params: param,
        method: .post,
        urlPath: "https://reqres.in/api/users",
        isTokenNeeded: false,
        isLoaderNeed: true,
        vc: self
    )
    APINetworkManagerAll.sharedInstance.serviceCall(requestObject: requestObject) { (result: Result<PostGenModelSmall, Error>) in
        switch result {
        case .success(let response):
            // Handle a successful response
            print("result of post service call.....: \(response)")
        case .failure(let error):
            // Handle the error
            print("Error decoding JSON: \(error)")
        }
    }
}

struct PostGenModelSmall: Codable {
    let name: String
    let job: String
    let id: String
    let createdAt: String
}
```

error:

> Error decoding JSON: keyNotFound(CodingKeys(stringValue: "name", intValue: nil), Swift.DecodingError.Context(codingPath: [], debugDescription: "No value associated with key CodingKeys(stringValue: \"name\", intValue: nil) (\"name\").", underlyingError: nil))
I need to create a custom deserializer for arrays because I sometimes receive them as strings. I did it, but now it can't deserialize normal arrays. Is there a way to call the default deserializer and only call the custom one when it fails? The code:

```
class ArrayDeserializer : JsonDeserializer<Array<Int>>() {
    override fun deserialize(p: JsonParser?, ctxt: DeserializationContext?): Array<Int> {
        return try {
            ctxt?.readValue(p, String::class.java).toArray()
        } catch (e: Exception) {
            ctxt?.readValue(p, Array<Int>::class.java) ?: throw Error()
        }
    }
}
```

When the array is in the correct format it throws a `StackOverflowError`.
null
Here is my solution: first sort the array, then call the recursive function.

```
public static void main(String[] args) {
    int[] input = new int[]{2, 7, 11, 15};
    Arrays.sort(input);
    System.out.println(twoSumWithRecursion(input, 99, 0, input.length - 1));
}

private static String twoSumWithRecursion(int[] input, int target, int i, int j) {
    if (i == input.length - 1) {
        return "Target not found";
    }
    if (input[i] + input[j] == target) {
        return String.format("Two numbers: %s %s", input[i], input[j]);
    }
    if (input[i] + input[j] < target) {
        return twoSumWithRecursion(input, target, i + 1, j);
    }
    return twoSumWithRecursion(input, target, i, j - 1);
}
```
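For comparison — this is my own sketch, not part of the answer above, and the class and method names are hypothetical — the same sorted two-pointer search can be written iteratively, which avoids recursion depth on large inputs:

```java
import java.util.Arrays;

public class TwoSum {

    // Two-pointer search over a sorted array: move the left pointer up when
    // the sum is too small, and the right pointer down when it is too large.
    public static int[] twoSumIterative(int[] sorted, int target) {
        int i = 0, j = sorted.length - 1;
        while (i < j) {
            int sum = sorted[i] + sorted[j];
            if (sum == target) {
                return new int[]{sorted[i], sorted[j]};
            }
            if (sum < target) {
                i++;
            } else {
                j--;
            }
        }
        return null; // no pair adds up to target
    }

    public static void main(String[] args) {
        int[] input = new int[]{2, 7, 11, 15};
        Arrays.sort(input); // the search assumes ascending order
        System.out.println(Arrays.toString(twoSumIterative(input, 9)));
    }
}
```

Like the recursive version, this only works on a sorted array, and runs in O(n) time after the O(n log n) sort.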
I am new to Thymeleaf and Spring Boot. This is my first project and I am unable to solve the above-mentioned error. My index.html looks like:

```
<!DOCTYPE HTML>
<html xmlns:layout="http://www.ultraq.net.nz/thymeleaf/layout"
      xmlns:th="http://www.thymeleaf.org"
      layout:decorate="~{layout/default}">
<head>
    <title>Test-BLH</title>
</head>
<body style="background-position: 0 -60px;" class="d-flex flex-column min-vh-100">
    <div layout:fragment="content" th:remove="tag">
        <h1 th:text="${name}"></h1>
    </div>
</body>
</html>
```

And the controller looks as follows:

```
@Controller
public class LogController {

    @GetMapping
    public String show(Model model) {
        model.addAttribute("name", "Hello_World");
        return "logs/index";
    }
}
```

I have added a ThymeleafConfig class as follows:

```
@Configuration
public class ThymeleafConfig {

    @Bean
    public SpringResourceTemplateResolver templateResolver() {
        SpringResourceTemplateResolver resolver = new SpringResourceTemplateResolver();
        resolver.setPrefix("classpath:templates/");
        resolver.setSuffix(".html");
        resolver.setTemplateMode("HTML5");
        resolver.setCacheable(false);
        return resolver;
    }

    @Bean
    public SpringTemplateEngine templateEngine() {
        SpringTemplateEngine engine = new SpringTemplateEngine();
        engine.setTemplateResolver(templateResolver());
        return engine;
    }

    @Bean
    public ThymeleafViewResolver viewResolver() {
        ThymeleafViewResolver resolver = new ThymeleafViewResolver();
        resolver.setTemplateEngine(templateEngine());
        return resolver;
    }
}
```

I have searched all the related posts and applied the suggestions to my project, but still did not find the solution. Where am I going wrong? Can anyone suggest what to do to resolve this? Any help would be appreciated. Thanks in advance.
Post method generic codable API response not coming properly in swift
|json|swift|web-services|parameters|codable|
I have the code below:

```
'use client';

import { Swiper, SwiperSlide } from 'swiper/react';

import 'swiper/css';
import 'swiper/css/pagination';

export default function Component() {
  const cards = [{ id: 0 }, { id: 1 }, { id: 2 }];

  return (
    <Swiper slidesPerView={3} spaceBetween={30} className="mySwiper">
      {cards.map((card) => (
        <SwiperSlide key={card.id}>
          <div className="h-52 bg-gray-400">{card.id}</div>
        </SwiperSlide>
      ))}
    </Swiper>
  );
}
```

The problem: the slides have full width before Swiper calculates their width:

[enter image description here](https://i.stack.imgur.com/ULnSx.png)

After a few seconds the width is set and the slides look as I expected:

[enter image description here](https://i.stack.imgur.com/sHpkZ.png)

What should I do to render the slides at their correct width and avoid UI flickering? I tried setting a fixed width for the slides but it didn't help.
Next.js Swiper slide has full width before width calculation
|javascript|reactjs|typescript|next.js13|swiper.js|