Contrary to what romainl writes in [his answer](https://stackoverflow.com/a/78250848/796259), Vim's own [`:help :!`](https://vimhelp.org/various.txt.html#%3A%21) suggests the following:

> On Unix the command normally runs in a non-interactive
> shell. If you want an interactive shell to be used
> (to use aliases) set 'shellcmdflag' to "-ic".

That would be `:set shellcmdflag=-ic`. The caveat is that your shell will not terminate after running your command but prompt you for input instead. You'll have to `fg` to return to Vim. I would guess that's why romainl discarded it as a solution.

FWIW, I wrote a tiny alias a while ago which will return me to Vim when I type `fg`, regardless of whether I'm in the shell through Vim's `:shell` or `:terminal` commands or <kbd>Ctrl-Z</kbd>:

    alias fg="[[ -v VIM ]] && exit || fg"

This will work for `:!` with `'shellcmdflag'` set to "-ic" as well.
I have the following situation. I had a function that ran on a Windows App Service Plan. That function processed blobs. The function had the default `LogsAndContainerScan` trigger.

Now, after some time, I decided to rewrite this function and also migrate it from Windows to Linux. To accomplish this I created another Function App running on a new App Service Plan for Linux. During the deployment I deployed and started the new function app on Linux, and stopped the old one for Windows.

To my big surprise, the new function started to process blobs that had been processed long ago by the previous function. After some digging and reading answers on Stack Overflow, for example [this one](https://stackoverflow.com/questions/41008374/azure-functions-configure-blob-trigger-only-for-new-events) or [this one](https://stackoverflow.com/questions/51675455/stop-azure-blob-trigger-function-from-being-triggered-on-existing-blobs-when-fun), it seems to me that the function will process a blob only if it does not have a [blob receipt](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-trigger?tabs=python-v2%2Cisolated-process%2Cnodejs-v4&pivots=programming-language-csharp#blob-receipts) inside the `azure-webjobs-hosts` blob container.

When I looked at my `azure-webjobs-hosts` blob container I found out that there are actually two folders in there - one for my previous function, and one for my new function. So I conclude that even though there were receipts for the existing blobs, they were in the folder of the old function app. When I created a new function app, it looked for the receipts in a different folder, couldn't find them, and so started to process all of the blobs again. Which basically means that whenever I decide to create another function app with a blob trigger, it will try to reprocess all of the existing files.

My questions:

1. Is my reasoning above correct, and does every new function app reprocess blobs that were already processed before? If not, why did it happen in my situation?
2. Is there any way I can avoid this situation in the future, when I, for example, decide to create yet another function app that will operate on the same blob container?
New Azure function app processes blobs that were already processed by another function app
|azure|azure-functions|azure-blob-storage|azure-blob-trigger|
I wrote an embedded function inside my feature file and, on a conditional basis, I want to call the function only if the first object of the data array doesn't match the `dataNotFound` definition. Appreciate your help.

    Scenario: xxxxxx
    # Getting data array from DB
    * def deleteTokens =
      """
      function(lenArray) {
        for (var i = 0; i < lenArray; i++) {
          karate.call('deleteTokensCreated.feature', {ownerId: data[i].owner_id, token: data[i].token});
        }
      }
      """
    * def dataNotFound = {"message": "Data not found!"}
    * def deletedTokens = call deleteTokens lenArray

The line `* eval if (data[0] != dataNotFound) call deleteTokens lenArray` doesn't work.
In 2024 the following syntax seems to work:

    db.Debug().Where("stock IN ?", values).Find(&paintings)

`Debug()` will show you the raw query, which will be in the correct SQL syntax: `WHERE stock IN (?,?)`.

P.S. Remove `Debug()` in production.
If I have a table with a column named `json_stuff`, and I have two rows with `{ "things": "stuff" }` and `{ "more_things": "more_stuff" }` in their `json_stuff` column, what query can I make across the table to receive `[ things, more_things ]` as a result? How can I get all keys from a JSON column in PostgreSQL?
So I have some code here using the FPGrowth library in PySpark: [![][1]][1] What I'm trying to do is create the freqItemsets and associationRules manually, to better understand the PCY (Park-Chen-Yu) algorithm. Can someone help? Please note we can only use PySpark and nothing else related to data mining, which means we are not allowed to use libraries like pandas and so on. [1]: https://i.stack.imgur.com/N6fZd.png
I created a new chatbot via Teams Toolkit according to the tutorial. When I tried to use the 'Debug in Teams' option on the company's intranet, which forces my PC to use a proxy to access the Internet, I received the error message "(×) Error: Unable to execute dev tunnel operation 'create'. Tunnel service returned status code: 407 Proxy Authentication Required". Following the notification info, I selected the 'Debug in Test Tool' option and a web-based Teams chat window launched. I can send messages to the chatbot, but when the bot attempts to reply, it still produces an error. [log](https://i.stack.imgur.com/GRCxF.png) When I checked the VS Code terminal, the error detail is "[onTurnError] unhandled error: AxiosError: Request failed with status code 407". I think I must set up a proxy in this workspace. I tried the "set HTTP_PROXY=xxx" command and also adding "HTTP_PROXY=xxx" to all the .env files in the ./env folder, but the error still exists.
How to setup proxy for Chatbot?
|proxy|teams-toolkit|
How to delete added UiElement
I would be inclined to only populate the `character` list with active players, rather than empty ones. That way you can append new players directly to the list when they're created. Nevertheless, this should give you what you're after: ``` # create a new player. NOTE: "No" is updated when appending to character list player = {"No": 1, "Name":"Player", "Level": 30, "Health": 300} # find the character index idx = 0 for c in character: if c['Name'] == 'EMPTY': break else: idx += 1 # update player number player['No'] = idx + 1 # replace element in list character[idx] = player ```
Here's a solution that implements the same semantics as MySQL's `JSON_KEYS()`, which...: - is `NULL` safe (i.e. when the array is empty, it produces `[]`, not `NULL`, or an empty result set) - produces a JSON array, which is what I would have expected from how the question was phrased. ```sql SELECT o, ( SELECT coalesce(json_agg(j), json_build_array()) FROM json_object_keys(o) AS j (j) ) FROM ( VALUES ('{}'::json), ('{"a":1}'::json), ('{"a":1,"b":2}'::json) ) AS t (o) ``` Replace `json` by `jsonb` if needed. Producing: ```lang-none |o |coalesce | |-------------|----------| |{} |[] | |{"a":1} |["a"] | |{"a":1,"b":2}|["a", "b"]| ```
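As a quick cross-check of these semantics outside the database, the same NULL-safe key aggregation can be sketched in plain Python (purely illustrative; the SQL above is the actual solution):

```python
import json

# The three sample objects from the SQL VALUES list above.
rows = ['{}', '{"a":1}', '{"a":1,"b":2}']

# json_object_keys + json_agg with coalesce-to-empty-array semantics:
# an empty object yields [], never None.
key_lists = [list(json.loads(r).keys()) for r in rows]
print(key_lists)  # [[], ['a'], ['a', 'b']]
```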
Waf fails to configure C compiler on github actions
|c|github-actions|waf|
Since the Faddeeva function is a convolution of a Gaussian and a Lorentzian, I would expect them to look vaguely similar; however, when I add a non-resonant contribution, scipy.special.wofz does not gain the expected shape. Don't worry about the amplitudes, I am just wondering about the overall line shape.

```
import numpy as np
from scipy.special import wofz
import matplotlib.pyplot as plt

x = np.linspace(2185, 2300, 1000)
gamma = 2
sigma = 2
z = ((2230 - x) + 1j*gamma)/sigma

plt.plot(x, np.abs(1/(2230-x-1j*gamma)+5)**2, color='green', label='lorentzian')
plt.plot(x, (np.exp(-((2230-x)**2)/(2*sigma**2))+5)**2, color='red', label='gaussian')
plt.plot(x, np.abs(wofz(z)+5)**2, color='blue', label='faddeeva')

convolve = np.convolve(1/(2230-x-1j*gamma), np.exp(-((2230-x)**2)/(2*sigma**2)), mode='same')*(x[1]-x[0])
plt.plot(x, np.abs(convolve+5)**2, label='convolution')

plt.legend()
plt.show()
```

[![enter image description here](https://i.stack.imgur.com/6eSKT.png)](https://i.stack.imgur.com/6eSKT.png) There is also a strange lump (non-technical term) at the end of the range of the convolution, for whatever reason. I tried adding the convolution, which should in theory give a similar line shape, but it clearly does not.
Why does the Faddeeva function not look similar to the convolution of a Lorentzian and a Gaussian?
|python|scipy|
|linux|gdb|coredump|
I have an HTML table with the following structure:

```
<table>
  <thead>..</thead>
  <tbody>
    <tr class="certain">
      <td><div class="that-one"></div>some</td>
      <td>content</td>
      <td>etc</td>
    </tr>
    several other rows..
  </tbody>
</table>
```

And I am trying to figure out what to do with the `<div class="that-one">` (or any other element, if needed) inside the table so it can be painted outside the table. I have tried a negative `left` property, `transform: translateX(-20px)`, and `overflow: visible`. I know that there is something different about how HTML tables are rendered. I cannot change the structure of the table, just add whatever element inside. Finding: both the negative `left` property and `transform: translateX(-20px)` work in Firefox but don't work in Chrome (it behaves like `overflow: hidden`). I have some JavaScript workaround in mind, but would rather do without it. Also, I don't want it as a CSS pseudo-element, because there will be a click event bound to it.
You can try something like this:

```
PopScope(
  canPop: canPop,
  onPopInvoked: (bool value) {
    setState(() {
      canPop = !value;
    });
    if (canPop) {
      ScaffoldMessenger.of(context).showSnackBar(
        const SnackBar(
          content: Text("Click once more to go back"),
          duration: Duration(milliseconds: 1500),
        ),
      );
    }
  },
  child: child,
)
```
It's not only about PHP, but you can do it with PHP too by making a router (you can search Google for that). You can also do it with an .htaccess file, path redirects, and rewrite rules. Here is the code I prepared for your example; write it into the .htaccess file on your server:

    <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /

        # Rewrite rule for www.website/custom_post/category/post_name
        RewriteRule ^custom_post/category/([^/]+)/([^/]+)/?$ index.php?category=$1&post_name=$2 [L,QSA]

        # If the request is not a file or directory, pass it to index.php
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
    </IfModule>
I'm trying to initialize an array of hashtables of a specific length in a short way:

    $array=@(@{"status"=1})*3

This does not work as I expect it to: when I change a value in the hashtable for one element of the array, all elements are updated.

    $a1=@(@{"status"=1},@{"status"=1},@{"status"=1})
    $a2=@(@{"status"=1})*3
    $a3=@(1)*3

    $a1[1]["status"]=2
    $a2[1]["status"]=2
    $a3[1]=2

    Write-Host "`nArray a1:" $a1
    Write-Host "`nArray a2:" $a2
    Write-Host "`nArray a3:" $a3

I expect that only the second element in the array is affected by the update in all three scenarios. But this is what I get:

    Array a1:
    Name                           Value
    ----                           -----
    status                         1
    status                         2
    status                         1

    Array a2:
    status 2
    status 2
    status 2

    Array a3:
    1
    2
    1
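The effect matches reference semantics: repeating the array copies references to the same hashtable, rather than cloning it. For comparison, Python's list repetition has the identical pitfall (a cross-language illustration only, not PowerShell):

```python
# Repetition copies *references* to one shared dict, like @(@{...})*3.
a2 = [{"status": 1}] * 3
a2[1]["status"] = 2
print(a2)  # all three elements show status 2 -- same underlying dict

# Building each element independently avoids the aliasing.
a1 = [{"status": 1} for _ in range(3)]
a1[1]["status"] = 2
print([d["status"] for d in a1])  # [1, 2, 1]
```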
Shorthand for creating an array of hashtables in PowerShell malfunctions?
|powershell|multidimensional-array|hash|pass-by-reference|
(https://i.stack.imgur.com/1YpBw.png) I am encountering an issue generating a report using the pandas-profiling library. I have uninstalled and reinstalled the library in a newly created environment, but I am still not able to generate any report; only the table shown in the screenshot is displayed. I was expecting to see the report being generated with a green progress bar, but instead I see 0% rendering and the tables. No report is rendered using pandas-profiling; I only see this output:

    Summarize dataset: 0%| | 0/5 [00:00<?, ?it/s]
    Generate report structure: 0%| | 0/1 [00:00<?, ?it/s]
    Render widgets: 0%| | 0/1 [00:00<?, ?it/s]

Please help me fix it. [1]: https://i.stack.imgur.com/pGEYM.png
You can convert a string to an ObjectID within the find query (or aggregation pipeline) with [`$toObjectId`](https://www.mongodb.com/docs/manual/reference/operator/aggregation/toObjectId/): ```js db.collection.find({ $expr: { $eq: ["$_id", { $toObjectId: "65fee1b40839f1f886ee94a9" }] } }) ``` [Mongo Playground](https://mongoplayground.net/p/RS1ysdBXetH) So your Python code should be: ```py document_text_id: str = ... # like "65fee1b40839f1f886ee94a9" result = collection.delete_one({ "$expr": { "$eq": ["$_id", { "$toObjectId": document_text_id }] } }) ```
|javascript|session|next.js|
The [official documentation][1] for the `StructLayoutAttribute` states:

> For blittable types, LayoutKind.Sequential controls both the layout
> in managed memory and the layout in unmanaged memory. For
> **non-blittable types**, it controls the layout when the class or
> structure is marshaled to unmanaged code, but **does not control** the
> **layout in managed memory**.

From this list of [Blittable and Non-Blittable Types][2], `System.Boolean` is non-blittable. However, the following struct will have the sequential layout in managed memory although it contains the non-blittable (but [unmanaged][3]) bool field.

    [StructLayout(LayoutKind.Sequential)]
    struct Unmanaged
    {
        public byte b1;
        public int i1;
        public bool bool1;
        public byte b2;
        //public string str; // -> uncomment to have auto layout

        public unsafe override string ToString()
        {
            //https://forum.unity.com/threads/question-about-size-and-padding-of-a-type-in-c.1274090/
            var b1FieldOffset = (long)Unsafe.AsPointer(ref this.b1) - (long)Unsafe.AsPointer(ref this);
            var i1FieldOffset = (long)Unsafe.AsPointer(ref this.i1) - (long)Unsafe.AsPointer(ref this);
            var b2FieldOffset = (long)Unsafe.AsPointer(ref this.b2) - (long)Unsafe.AsPointer(ref this);
            var bool1Offset = (long)Unsafe.AsPointer(ref this.bool1) - (long)Unsafe.AsPointer(ref this);

            var sb = new StringBuilder();
            sb.AppendLine($"Size: {Unsafe.SizeOf<Unmanaged>()}");
            sb.AppendLine($"b1 Offset: {b1FieldOffset}");
            sb.AppendLine($"i1 Offset: {i1FieldOffset}");
            sb.AppendLine($"bool1 Offset: {bool1Offset}");
            sb.AppendLine($"b2 Offset: {b2FieldOffset}");
            return sb.ToString();
        }
    }

ToString will output:

    Size: 12
    b1 Offset: 0
    i1 Offset: 4
    bool1 Offset: 8
    b2 Offset: 9

If we uncomment the string field, which is both non-blittable AND not unmanaged, we would get the following managed memory layout (sequential is disregarded):

    Size: 16
    b1 Offset: 12
    i1 Offset: 8
    bool1 Offset: 13
    b2 Offset: 14

After already providing feedback w.r.t. an outdated tutorial for the Pack field in the same documentation (decimal being two 4-byte fields and one 8-byte field as of .NET 5+, not four 4-byte fields), I suspect that the documentation regarding **blittable/non-blittable** may be out of date and that what it *really* means is **unmanaged/managed** per [Unmanaged types][3].

Should I post another feedback for the documentation to change, or is my suspicion unfounded?

[1]: https://learn.microsoft.com/en-us/dotnet/api/system.runtime.interopservices.structlayoutattribute?view=net-8.0
[2]: https://learn.microsoft.com/en-us/dotnet/framework/interop/blittable-and-non-blittable-types
[3]: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/unmanaged-types
Are "blittable types" really unmanaged types for StructLayout Sequential
|c#|.net|struct|
### Simple regex without index support

Without index support (only sensible for trivial cardinalities!) this is fastest in Postgres, while doing **exactly** what you ask for:

~~~pgsql
SELECT post_id, COALESCE(string_agg(topic_id::text, ','), 'Vague!') AS topic
FROM  (
   SELECT p.post_id, k.topic_id
   FROM   posts p
   LEFT   JOIN keywords k ON p.content ~* ('\m' || k.keyword || '\M')
   ORDER  BY p.post_id, k.topic_id
   ) sub
GROUP  BY post_id
ORDER  BY post_id;
~~~

Concerning my regex, I cite [the manual:][1]

> `\m` ... matches only at the beginning of a word
> `\M` ... matches only at the end of a word

This covers the start (`^`) and end (`$`) of the string implicitly. (`\W`, as suggested in another answer, matches any non-word character and is wrong for the task.)

Note how I apply `ORDER BY` once in a subquery instead of per-aggregate. See:

- [How to combine ORDER BY and LIMIT with an aggregate function?][2]

In this constellation, a simple `COALESCE` catches the case of no matches.

### FTS index for strict matching

The simple (naive) approach above scales with O(N*M), i.e. terribly with a non-trivial number of rows in each table. Typically, you want index support.
While strictly matching keywords, the best index should be a **[Full Text Search index][3] with the `'simple'` dictionary**, and a query that can actually use that index:

~~~pgsql
CREATE INDEX post_content_fts_simple_idx ON post USING gin (to_tsvector('simple', content));

SELECT post_id, COALESCE(topics, 'Vague!') AS topics
FROM  (
   SELECT post_id, string_agg(topic_id::text, ',') AS topics
   FROM  (
      SELECT p.post_id, k.topic_id
      FROM   keyword k
      JOIN   post p ON to_tsvector('simple', p.content) @@ to_tsquery('simple', k.keyword)
      ORDER  BY p.post_id, k.topic_id
      ) sub
   GROUP  BY post_id
   ) sub1
RIGHT  JOIN post p USING (post_id)
ORDER  BY post_id;
~~~

### FTS index for matching English words

To match natural language words with built-in stemming, use a matching **dictionary, `'english'`** in the example:

~~~pgsql
CREATE INDEX post_content_fts_en_idx ON post USING gin (to_tsvector('english', content));

SELECT post_id, COALESCE(topics, 'Vague!') AS topics
FROM  (
   SELECT post_id, string_agg(topic_id::text, ',') AS topics
   FROM  (
      SELECT p.post_id, k.topic_id
      FROM   keyword k
      JOIN   post p ON to_tsvector('english', p.content) @@ to_tsquery('english', k.keyword)
      ORDER  BY p.post_id, k.topic_id
      ) sub
   GROUP  BY post_id
   ) sub1
RIGHT  JOIN post p USING (post_id)
ORDER  BY post_id;
~~~

[fiddle](https://dbfiddle.uk/5YQwykLS)

For fuzzy matching consider a trigram index. See:

- https://stackoverflow.com/questions/1566717/postgresql-like-query-performance-variations/13452528#13452528

Related:

- [Pattern matching with LIKE, SIMILAR TO or regular expressions][4]
- [Get partial match from GIN indexed TSVECTOR column][5]

[1]: https://www.postgresql.org/docs/current/functions-matching.html#POSIX-CONSTRAINT-ESCAPES-TABLE
[2]: https://dba.stackexchange.com/a/213724/3684
[3]: https://www.postgresql.org/docs/current/textsearch-tables.html#TEXTSEARCH-TABLES-INDEX
[4]: https://dba.stackexchange.com/a/10696/3684
[5]: https://dba.stackexchange.com/a/157982/3684
As of 2024, you can create a subdomain programmatically using `/execute/SubDomain/addsubdomain`. Check this [cPanel documentation][1] for more information.

    function create_subdomain($subDomain, $cPanelUser, $cPanelPass, $rootDomain) {
        $subdomainRequest = "/execute/SubDomain/addsubdomain?domain=" . $subDomain . "&rootdomain=" . $rootDomain . "&dir=public_html/" . $subDomain . "." . $rootDomain;
        $openSocket = fsockopen('localhost', 2082);
        if (!$openSocket) { return "Socket error"; exit(); }
        $authString = $cPanelUser . ":" . $cPanelPass;
        $authPass = base64_encode($authString);
        $buildHeaders = "GET " . $subdomainRequest . "\r\n";
        $buildHeaders .= "HTTP/1.0\r\n";
        $buildHeaders .= "Host:localhost\r\n";
        $buildHeaders .= "Authorization: Basic " . $authPass . "\r\n";
        $buildHeaders .= "\r\n";
        fputs($openSocket, $buildHeaders);
        while (!feof($openSocket)) {
            fgets($openSocket, 128);
        }
        fclose($openSocket);
        $newDomain = "http://" . $subDomain . "." . $rootDomain . "/";
        return "Created subdomain $newDomain";
    }

    echo create_subdomain("Subdomain", "cPanel-Username", "cPanel-Password", "Root-Domain");

[1]: https://api.docs.cpanel.net/openapi/cpanel/operation/addsubdomain/
I am trying to convert the "regTemp" byte array to a bmp file after `DBMerge(long dbHandle, byte[] temp1, byte[] temp2, byte[] temp3, byte[] regTemp, int[] regTempLen)` *(this function combines registered fingerprint templates and returns the result in regTemp)* generates the final byte array.

**I found many answers on Stack Overflow, but so far no luck. Some answers:**

https://stackoverflow.com/a/60769564/9044234 *- gives java.lang.ArrayIndexOutOfBoundsException: Index 2048 out of bounds for length 2048*

https://stackoverflow.com/a/1193769/9044234 *- gives java.lang.IllegalArgumentException: image == null!*

**You can find the whole source code and documentation here:** https://github.com/sayednaweed/slk20r-zkteco-java-sample

    public class ZKFPDemo extends JFrame {
        private JTextArea textArea;
        // pre-register template
        private byte[][] regtemparray = new byte[3][2048];
        private long mhDB = 0;

        private void OnExtractOK(byte[] template, int len) {
            int[] _retLen = new int[1];
            _retLen[0] = 2048;
            byte[] regTemp = new byte[_retLen[0]];
            if (0 == (ret = FingerprintSensorEx.DBMerge(mhDB, regtemparray[0], regtemparray[1], regtemparray[2], regTemp, _retLen))) {
                // Here I want to convert regTemp to bmp.
                textArea.setText("Merged successfully.");
            } else {
                textArea.setText("Failed to merge.");
            }
        }
    }

**I used this fingerprint scanner in a C# project with the C# SDK. Using the following code I was able to convert it to a BitmapSource and display the image:**

    public static BitmapSource ToBitmapSource(byte[] buffer)
    {
        BitmapSource bitmap = null;
        if (buffer != null && !(buffer.Length < 10))
        {
            using (var stream = new MemoryStream(buffer))
            {
                bitmap = BitmapFrame.Create(
                    stream,
                    BitmapCreateOptions.None,
                    BitmapCacheOption.OnLoad);
            }
        }
        return bitmap;
    }

It's been days and I am unable to solve the issue. Please someone help me, thanks in advance.
I'm generating multiple figures in a for loop and then I want to save these figures in one PDF file. Therefore I use this code:

    from matplotlib.backends.backend_pdf import PdfPages

    def save_image(filename):
        # save image in pdf format, one page per plot
        p = PdfPages(filename)
        fig_nums = plt.get_fignums()
        figs = [plt.figure(n) for n in fig_nums]
        for fig in figs:
            fig.savefig(p, format='pdf')
            fig.clf()
        p.close()

    save_image(filename_results)
    plt.close('all')

This code works, but in the generated file my heatmap gets tilted by 45 degrees (see [here](https://i.stack.imgur.com/3AKtT.png)). When I save the figures in another format (png, jpeg) or just do plt.show(), this doesn't happen. Therefore I assume the issue is in the save_image function. Weirdly enough, the code was not doing this for weeks; now suddenly it does. I'm using Python 3.1.9 and matplotlib 3.8.3. Can you please help me fix this issue? I need to save in the PDF format to ensure a high-resolution image.
Save Matplotlib plots to PDF with PdfPages changes my plots
|python|matplotlib|savefig|pdfpages|
I have a problem with a Chrome extension. How can I find this element and get the value 12752? I can't do it by class, because the class is repeated and I can't change the HTML code.

```
<span class="tip_trigger" data-original-title="Some text" onclick="copyTextToClipboard('12752')">12752</span>
```

Then I need to change the onclick handler so it does not copy to the clipboard but opens a link in a new tab instead. Everything I tried doesn't work.
How to find this element and change HTML in a Google Chrome extension
|google-chrome-extension|
Suppose I have the following dataset:

|Personalnumber|Category|Year|Month|Index_ID|Previous_Index_ID|
|----|----|----|----|----|----|
|1|100|2022|8|42100| |
|1|100|2022|9|9534|42100|
|1|9400|2023|9|4| |
|1|9400|2023|10|485|4|
|2|100|2022|1|214|102|
|2|100|2022|2|194231|214|
|3|200|2022|2|2111| |
|3|200|2022|3|1012|2111|
|3|200|2022|4|9876|1012|
|3|200|2022|5|8794|9876|
|3|200|2022|6|24142|8794|
|4|100|2022|4|42100| |
|4|200|2022|7|12| |
|4|200|2022|8|14|12|
|4|200|2022|9|485|14|

The first column (`Personalnumber`) is a number that specifies a person. There is an additional column (`Category`) that gives a category. There are entries for year and month (`Year`, `Month`). There is an index column (`Index_ID`) and, most importantly, a column stating a reference, the previous index a case might relate to (`Previous_Index_ID`).

So, let's make it more understandable: The first **case** belongs to person 1 within category 100. We have two entries that belong to this case. It starts with the index 42100. The next record has the index 9534; it is related to the first one, as the column "Previous_Index_ID" has the value 42100.

The second **case** belongs to person 1 within category 9400. We have two entries that belong to this case. It starts with the index 4. The next record has the index 485; it is related to the first one, as the column "Previous_Index_ID" has the value 4.

The third **case**:

    2;100;2022;1;214;102
    2;100;2022;2;194231;214

belongs to person 2 within category 100. Here we can see that we do not have the first record, which would have index 102, in our dataset.

It continues like this; for example, person 3 has 5 records:

    3;200;2022;2;2111;
    3;200;2022;3;1012;2111
    3;200;2022;4;9876;1012
    3;200;2022;5;8794;9876
    3;200;2022;6;24142;8794

This is one **case**. Now I want to add a column with a ***unique identifier for each case***.
My code is as follows:

    import pandas as pd

    myfile = pd.read_csv(r"C:\pathtofile\testfile.csv", sep=";")
    myfile['newID'] = myfile.groupby(['Personalnumber','Category'], sort=False).ngroup().add(1)
    print(myfile)

And indeed the output is as desired:

        Personalnumber  Category  Year  Month  Index_ID  Previous_Index_ID  newID
    0                1       100  2022      8     42100                NaN      1
    1                1       100  2022      9      9534            42100.0      1
    2                1      9400  2023      9         4                NaN      2
    3                1      9400  2023     10       485                4.0      2
    4                2       100  2022      1       214              102.0      3
    5                2       100  2022      2    194231              214.0      3
    6                3       200  2022      2      2111                NaN      4
    7                3       200  2022      3      1012             2111.0      4
    8                3       200  2022      4      9876             1012.0      4
    9                3       200  2022      5      8794             9876.0      4
    10               3       200  2022      6     24142             8794.0      4
    11               4       100  2022      4     42100                NaN      5
    12               4       200  2022      7        12                NaN      6
    12               4       200  2022      8        14                 12      6
    12               4       200  2022      9       485                 14      6

The column newID shows the correct case numbering. Now an additional case comes into play:

    1;100;2022;8;101;
    1;100;2022;9;204;101
    1;100;2022;10;4344;204
    1;100;2022;11;2069;4344

This case also belongs to person 1, category 100. Now the data looks like this:

|Personalnumber|Category|Year|Month|Index_ID|Previous_Index_ID|
|----|----|----|----|----|----|
|1|100|2022|8|42100| |
|1|100|2022|8|101| |
|1|100|2022|9|9534|42100|
|1|100|2022|9|204|101|
|1|100|2022|10|4344|204|
|1|100|2022|11|2069|4344|
|1|9400|2023|9|4| |
|1|9400|2023|10|485|4|
|2|100|2022|1|214|102|
|2|100|2022|2|194231|214|
|3|200|2022|2|2111| |
|3|200|2022|3|1012|2111|
|3|200|2022|4|9876|1012|
|3|200|2022|5|8794|9876|
|3|200|2022|6|24142|8794|
|4|100|2022|4|42100| |
|4|200|2022|7|12| |
|4|200|2022|8|14|12|
|4|200|2022|9|485|14|

As you can see, it gets mixed up and my code leads to wrong results. The reason is that the new case falls into the same "place": it also has category 100 and belongs to person 1. However, from the columns Index_ID and Previous_Index_ID it is clear that this is another case. These two columns show the traces from which one can differentiate between them and see that these are two different cases.
(Of course there could also be even further cases that "fall into the same place", so it is not limited to just two as in this example.) So my problem now is to get the following desired output:

        Personalnumber  Category  Year  Month  Index_ID  Previous_Index_ID  newID
    0                1       100  2022      8     42100                NaN      1
    1                1       100  2022      8       101                NaN      2
    2                1       100  2022      9      9534            42100.0      1
    3                1       100  2022      9       204              101.0      2
    4                1       100  2022     10      4344              204.0      2
    5                1       100  2022     11      2069             4344.0      2
    6                1      9400  2023      9         4                NaN      3
    7                1      9400  2023     10       485                4.0      3
    8                2       100  2022      1       214              102.0      4
    9                2       100  2022      2    194231              214.0      4
    10               3       200  2022      2      2111                NaN      5
    11               3       200  2022      3      1012             2111.0      5
    12               3       200  2022      4      9876             1012.0      5
    13               3       200  2022      5      8794             9876.0      5
    14               3       200  2022      6     24142             8794.0      5
    15               4       100  2022      4     42100                NaN      6
    16               4       200  2022      7        12                NaN      7
    16               4       200  2022      8        14                 12      7
    16               4       200  2022      9       485                 14      7

How can I do this? The Index_ID is not unique over the complete dataset; it is only unique per year and month. So you can see that the Index_ID 42100 occurs in 2022 month 8 (Personalnumber 1) and also in 2022 month 4 (Personalnumber 4). Or Index_ID 485 occurs in 2023 month 10 (Personalnumber 1) and also in 2022 month 9 (Personalnumber 4). However, it is of course unique within a year and month. (The index numbers are set completely at random, so sorting ascending or descending on the Index_ID or Previous_Index_ID column is not a solution.)

***EDIT*** regarding my comment below: Consider the following example:

    Personalnumber;Category;Year;Month;Index_ID;Previous_Index_ID
    398;14;2022;1;10708;1
    398;14;2022;2;50242;10708
    398;14;2022;3;76850;50242
    398;14;2022;4;120861;76850
    398;14;2022;5;110883;120861
    398;14;2022;6;188043;110883
    398;14;2022;7;9432;188043
    398;14;2022;8;175715;9432
    398;14;2022;9;142837;175715
    398;14;2022;10;152659;142837
    398;14;2022;11;52335;152659
    398;14;2022;12;156366;52335
    398;14;2023;1;16416;156366
    398;14;2023;2;163499;16416
    398;14;2023;3;1;163499

With the last line (398;14;2023;3;1;163499), the code in the proposed answer from Muhammed Samed Özmen throws a recursion error.
I think the recursion error might arise due to 398;14;2022;1;10708;**1** and 398;14;2023;3;**1**;163499. However, if I change the last record to Index_ID = 2, like this:

    Personalnumber;Category;Year;Month;Index_ID;Previous_Index_ID
    398;14;2022;1;10708;1
    398;14;2022;2;50242;10708
    398;14;2022;3;76850;50242
    398;14;2022;4;120861;76850
    398;14;2022;5;110883;120861
    398;14;2022;6;188043;110883
    398;14;2022;7;9432;188043
    398;14;2022;8;175715;9432
    398;14;2022;9;142837;175715
    398;14;2022;10;152659;142837
    398;14;2022;11;52335;152659
    398;14;2022;12;156366;52335
    398;14;2023;1;16416;156366
    398;14;2023;2;163499;16416
    398;14;2023;3;2;163499

then it works and it sets a newID for this case as it should (all these records belong to one case).
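One iterative way to derive a case identifier, avoiding recursion entirely, is to follow the Index_ID → Previous_Index_ID chains while scanning the rows in chronological order. The following is a minimal pure-Python sketch of the idea, not a full solution: it assumes the rows are already sorted by (Year, Month), and it keys open chains by Index_ID alone, so on the real data (where Index_ID is only unique per year and month) the dictionary keys would need to be (Year, Month, Index_ID) tuples.

```python
rows = [
    # (Personalnumber, Category, Year, Month, Index_ID, Previous_Index_ID)
    (1, 100, 2022, 8, 42100, None),
    (1, 100, 2022, 8, 101, None),
    (1, 100, 2022, 9, 9534, 42100),
    (1, 100, 2022, 9, 204, 101),
    (1, 100, 2022, 10, 4344, 204),
]

case_of = {}   # tail Index_ID of each still-open chain -> case id
next_case = 1
assigned = []  # newID per row, in input order

for pn, cat, y, m, idx, prev in rows:
    if prev is not None and prev in case_of:
        # This row continues an existing chain: inherit its case id
        # and move the chain's tail forward to the new Index_ID.
        cid = case_of.pop(prev)
    else:
        # No known predecessor: this row starts a new case.
        cid = next_case
        next_case += 1
    case_of[idx] = cid
    assigned.append(cid)

print(assigned)  # [1, 2, 1, 2, 2]
```

Because each row is visited exactly once and chains are advanced with a dictionary lookup, this runs in linear time and cannot hit Python's recursion limit, even on self-referencing data like `10708;1` followed much later by `1;163499`.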
In your above code, set `ignoreMargins: true`. Read more about [FullPage][1] and [ignoreMargins][2].

    buildBackground: (_) => pw.FullPage(
      ignoreMargins: true,
      child: pw.Container(color: PdfColors.white),
    ),

[1]: https://pub.dev/documentation/pdf/3.10.8/widgets/FullPage-class.html
[2]: https://pub.dev/documentation/pdf/3.10.8/widgets/FullPage/ignoreMargins.html
A version without Python-level looping and with reversing the whole string once instead of each word separately. ```python def reverseEachWord(s): return " ".join(s[::-1].split()[::-1]) print(reverseEachWord("the dog ran")) ``` [Attempt This Online!](https://ato.pxeger.com/run?1=m72soLIkIz9vwYKlpSVpuhY3M1NS0xSKUstSi4pTXROTM8Lzi1I0ijWtuBSAoCi1pLQoT0FJQUkvKz8zT6M42spK1zBWr7ggJ7NEQxPC0-TiKijKzCvRQDdFqSQjVSElP12hKDFPSVMTYiHUXpj9AA) Benchmark with the top answer's solution: ``` With s = 'The dog ran': 2.30 μs ' '.join(w[::-1] for w in s.split()) 0.72 μs " ".join(s[::-1].split()[::-1]) With s = 'The dog ran' * 1000: 458 μs ' '.join(w[::-1] for w in s.split()) 185 μs " ".join(s[::-1].split()[::-1]) ``` Benchmark script: ```python from timeit import repeat codes = [ "' '.join(w[::-1] for w in s.split())", '" ".join(s[::-1].split()[::-1])', ] for code in codes: t = min(repeat(code, "s = 'The dog ran'", number=10**5)) print(f'{t*1e1:4.2f} μs ', code) print() for code in codes: t = min(repeat(code, "s = 'The dog ran' * 1000", number=10**2)) print(f'{t*1e4:3.0f} μs ', code) ``` [Attempt This Online!](https://ato.pxeger.com/run?1=pZFNTkMhFEbjlFXcMOHxQl-g1sSQdBedNR1UCxYjPwGal8a4Eied6CpcgwtwNT7ADjTOyoxwPs6Xe1_fwzHvvTud3g5Zz26_rkYdvYVsrDIZjA0-ZogqqG1G6N7vVIIlrBFMBxMgw6M3rhvXUs7EBrSPMIJxkIYUnkzuKMWssgQDbmxq7BloN0oY2iBU8sVRvqguWcN5Utop2mp05YUBLkXIaq9g5x8gbh3BDNzB3qm4FLzvbyit4RCNy50mz7kXSsjFMNcv8PmRgLDqoAg1hF7qhx4E5_x3jfl_NRbyeuB_arT5_6zhvI5v)
I tried many things, but this was the only solution I found for my use case when using nested pages with back buttons:

```
<a href="javascript:history.back()">Back</a>
```
The continuity correction implementation in SciPy was *not* correct, as discussed in [`scipy/scipy#2118`](https://github.com/scipy/scipy/issues/2118). What *should* happen is for the correction to nudge the test statistic toward the mean of the asymptotic distribution, slightly increasing the p-value to compensate for the discrete nature of the exact distribution. Since this question was posted, `mannwhitneyu` has been rewritten, and the new implementation of the continuity correction is available [`here`](https://github.com/scipy/scipy/blob/4edfcaa3ce8a387450b6efce968572def71be089/scipy/stats/_mannwhitneyu.py#L183-L188). Also, note that `use_continuity=-1` never should have been allowed; that was a bug, too.
> My expectation would be that creating the instance `Base` with the specified type `str` it is used for `V` which should be compatible in the `test()` return value as it converts to `Test[str](t)`

Your `b = Base[str]()` line has no retroactive effect on the definition of `class Base(Generic[V]):` whatsoever - the definition of `Base.test` simply fails the type check because `t` is `str` (via `t = '1'`), and so what's being returned is `Test[str]` and not a generic `Test[V]` - it's not a way to return "mixed" types (see [`typing.AnyStr`](https://docs.python.org/3/library/typing.html#typing.AnyStr) as an example; the documentation there has a discussion on this).

Changing that assignment to `t = 1` (and to `b = Base[int]()`), mypy will complain about the opposite:

```lang-text
error: Argument 1 to "Test" has incompatible type "int"; expected "str"  [arg-type]
```

Using `pyright` actually gives a somewhat better error message:

```lang-text
/tmp/so_78174062.py:16:24 - error: Argument of type "Literal['1']" cannot be assigned to parameter "a" of type "V@Base" in function "__init__"
    Type "Literal['1']" cannot be assigned to type "V@Base" (reportArgumentType)
```

Which clearly highlights that these are two distinct types (one being a `str` and the other being a [`TypeVar` (type variable)](https://docs.python.org/3/library/typing.html#typing.TypeVar)).

Based on the given code in the question, the `test` method really should have this signature:

```lang-python
def test(self) -> Test[str]:
    t = '1'
    return Test[str](t)
```

The above would type check correctly.
You can create a subdomain programmatically using `/execute/SubDomain/addsubdomain`. Check this [cPanel documentation][1] for more information.

    function create_subdomain($subDomain, $cPanelUser, $cPanelPass, $rootDomain) {
        $subdomainRequest = "/execute/SubDomain/addsubdomain?domain=" . $subDomain . "&rootdomain=" . $rootDomain . "&dir=public_html/" . $subDomain . "." . $rootDomain;
        $openSocket = fsockopen('localhost', 2082);
        if (!$openSocket) {
            return "Socket error";
        }
        $authString = $cPanelUser . ":" . $cPanelPass;
        $authPass = base64_encode($authString);
        $buildHeaders = "GET " . $subdomainRequest . "\r\n";
        $buildHeaders .= "HTTP/1.0\r\n";
        $buildHeaders .= "Host:localhost\r\n";
        $buildHeaders .= "Authorization: Basic " . $authPass . "\r\n";
        $buildHeaders .= "\r\n";
        fputs($openSocket, $buildHeaders);
        while (!feof($openSocket)) {
            fgets($openSocket, 128);
        }
        fclose($openSocket);
        $newDomain = "http://" . $subDomain . "." . $rootDomain . "/";
        return "Created subdomain $newDomain";
    }

    echo create_subdomain("Subdomain", "cPanel-Username", "cPanel-Password", "Root-Domain");

[1]: https://api.docs.cpanel.net/openapi/cpanel/operation/addsubdomain/
The [handle class](https://www.mathworks.com/help/matlab/handle-classes.html) and its copy-by-reference behavior are the natural way to implement linkage. It is, however, possible to implement a linked list in Matlab without OOP -- an abstract list which does *not* splice an existing array in the middle to insert a new element, as complained about in [this comment](https://stackoverflow.com/questions/1413860/matlab-linked-list#comment23877880_1422443). (Although I do have to use a Matlab data type somehow, and adding a new element to an existing Matlab array always requires memory allocation somewhere.)

This is possible because we can model linkage in ways other than pointers/references. It is *not* because of [closure](https://en.wikipedia.org/wiki/Closure_(computer_programming)) with [nested functions](https://www.mathworks.com/help/matlab/matlab_prog/nested-functions.html). I will nevertheless use closure to encapsulate a few *persistent* variables. At the end, I will include an example to show that closure alone confers no linkage, and so [this answer](https://stackoverflow.com/a/1421186/3181104) as written is incorrect.

At the end of the day, though, a linked list in Matlab is only an academic exercise. Matlab, aside from the aforementioned handle class and classes inheriting from it (called subclasses in Matlab), is purely copy-by-value. Matlab optimizes and automates how copying works under the hood, avoiding deep copies whenever it can. That is probably the better take-away for OP's question. It is also why a linked list is not obvious to make in Matlab.
-------------

##### Example Matlab linked list:

```lang-matlab
function headNode = makeLinkedList(value)
% value is the value of the initial node
% for simplicity, we will require an initial node; and won't implement insert before head node
% for the purpose of this example, we accommodate only double as value
% we will also limit max list size to 2^31-1 as opposed to the usual 2^48 in Matlab vectors

m_id2ind=containers.Map('KeyType','int32','ValueType','int32'); % pre R2022b, faster to split than to array value
m_idNext=containers.Map('KeyType','int32','ValueType','int32');

%if exist('value','var') && ~isempty(value)
    m_data=value;   % stores value for all nodes
    m_id2ind(1)=1;
    m_idNext(1)=0;  % 0 denotes no next node
    m_id=1;         % id of head node
    m_endId=1;
%else
%    m_data=double.empty;
%    % not implemented
%end

headNode = struct('value',value,...
    'next',@next,...
    'head',struct.empty,...
    'push_back',@addEnd,...
    'addAfter',@addAfter,...
    'deleteAt',@deleteAt,...
    'nodeById',@makeNode,...
    'id',m_id);

    function nextNode=next(node)
        if m_idNext(node.id)==0
            warning('There is no next node.')
            nextNode=struct.empty;
        else
            nextNode=makeNode(m_idNext(node.id));
        end
    end

    function node=makeNode(id)
        if isKey(m_id2ind,id)
            node=struct('value',id2val(id),...
                'next',@next,...
                'head',headNode,...
                'push_back',@addEnd,...
                'addAfter',@addAfter,...
                'deleteAt',@deleteAt,...
                'nodeById',@makeNode,...
                'id',id);
        else
            warning('No such node!')
            node=struct.empty;
        end
    end

    function temp=id2val(id)
        temp=m_data(m_id2ind(id));
    end

    function addEnd(value)
        addAfter(value,m_endId);
    end

    function addAfter(value,id)
        m_data(end+1)=value;
        temp=numel(m_data); % new id will be new list length
        if (id==m_endId)
            m_idNext(temp)=0;
            m_endId=temp;
        else
            m_idNext(temp)=m_idNext(id); % new node points to the old successor
        end
        m_id2ind(temp)=temp;
        m_idNext(id)=temp;
    end

    function deleteAt(id)
    end
end
```

With the above .m file, the following runs:

```lang-matlab
>> clear all  % remember to clear all before making new lists
>> headNode = makeLinkedList(1);
>> node2=headNode.next(headNode);
Warning: There is no next node.
> In makeLinkedList/next (line 33)
>> headNode.push_back(2);
>> headNode.push_back(3);
>> node2=headNode.next(headNode);
>> node3=node2.next(node2);
>> node3=node3.next(node3);
Warning: There is no next node.
> In makeLinkedList/next (line 33)
>> node0=node2.head;
>> node2=node0.next(node0);
>> node2.value

ans =

     2

>> node3=node2.next(node2);
>> node3.value

ans =

     3
```

`.next()` in the above can take any valid node `struct` -- not limited to itself. Similarly, `push_back()` etc. can be done from any node. A node cannot reference itself implicitly and automatically, because a non-OOP [`struct`](https://www.mathworks.com/help/matlab/ref/struct.html) in Matlab does not have a `this` pointer or `self` reference.

In the above example, nodes are given unique IDs; a dictionary is used to map ID to data (index) and to map ID to next ID. (With pre-R2022b `containers.Map()`, it's more efficient to have 2 dictionaries even though we have the same key and same value type across the two.) So when inserting a new node, we simply need to update the relevant next IDs. A (double) array was chosen to store the node values (which are doubles) and that is the data type Matlab is designed to work with and be efficient at. As long as no new allocation is required to append an element, insertion is constant time.
Matlab automates the management of memory allocation. Since we are not doing array operations on the underlying array, Matlab is unlikely to take the extra step of making a copy of a new contiguous array every time there is a resize. [Cell array](https://www.mathworks.com/help/matlab/ref/cell.html) may incur less re-allocation, but with some trade-offs. Since [dictionary](https://www.mathworks.com/help/matlab/ref/dictionary.html) is used, I am not sure if this solution qualifies as purely [functional](https://en.wikipedia.org/wiki/Functional_programming).

------------

##### re: closure vs linkage

In short, closure does not confer linkage. Matlab's nested functions have access to variables in parent functions directly -- as long as they are not shadowed by local variables of the same names. But there is no variable passing, and thus no pass-by-reference, and thus we can't model linkage with this non-existent referencing.

I did take advantage of closure above to make variables persistent, since the scope (called [workspace](https://www.mathworks.com/help/matlab/matlab_prog/base-and-function-workspaces.html) in Matlab) being referred to means all variables in the scope will persist. That said, Matlab also has a [persistent](https://www.mathworks.com/help/matlab/ref/persistent.html) specifier, so closure is not the only way.

To showcase this distinction, the example below will not work: every time `previousNode` or `nextNode` is passed, it is passed by value. There is no way to access the original `struct` across function boundaries. And thus, even with nested functions and closure, there is no linkage!

```lang-matlab
function newNode = SOtest01(value,previousNode,nextNode)
if ~exist('previousNode','var') || isempty(previousNode)
    i_prev=m_prev();
else
    i_prev=previousNode;
end
if ~exist('nextNode','var') || isempty(nextNode)
    i_next=m_next();
else
    i_next=nextNode;
end
newNode=struct('value',m_value(),...
    'prev',i_prev,...
    'next',i_next);

    function out=m_value
        out=value;
    end
    function out=m_prev
        out=previousNode;
    end
    function out=m_next
        out=nextNode;
    end
end
```
Processing Baseline 04.00 S2 L1C products (2022-01-25) contain an offset in the metadata. Before storage inside L1C (in a 16-bit integer format), a quantization gain and an offset are applied to the computed TOA reflectance. The transformation of reflectances into 16-bit integers is performed according to the following equation:

    L1C_TOA = (L1C_DN + RADIO_ADD_OFFSET) / QUANTIFICATION_VALUE

    QUANTIFICATION_VALUE = 10000
    RADIO_ADD_OFFSET = -1000
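The formula above can be applied directly to recover TOA reflectance from the stored digital numbers. A minimal sketch (the function name is mine; the default offset and quantification values are the ones stated for baseline 04.00):

```python
def dn_to_reflectance(dn, radio_add_offset=-1000, quantification_value=10000):
    """Convert a Sentinel-2 L1C digital number (baseline >= 04.00)
    to TOA reflectance using the metadata offset."""
    return (dn + radio_add_offset) / quantification_value

# With the -1000 offset, a stored DN of 2000 corresponds to a TOA reflectance of 0.1
print(dn_to_reflectance(2000))  # → 0.1
```

For older baselines (before 04.00) there is no offset in the metadata, so `radio_add_offset` would be 0.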
I am getting an error like this while creating private routing in TypeScript. Can anyone help?

> Type '{ exact: true; render: (routerProps: RouterProps) => Element; }'
> is not assignable to type 'IntrinsicAttributes & RouteProps'.
> Property 'exact' does not exist on type 'IntrinsicAttributes &
> RouteProps'.

<!-- begin snippet: js hide: false console: true babel: true -->

<!-- language: lang-js -->

    import React, { Suspense } from "react";
    import { Route, Routes, RouterProps, useLocation, Navigate } from "react-router-dom";

    interface RenderRouteProps extends RouterProps {}

    const RenderRoute: React.FC<CustomRoute> = props => {
      const { component } = props;
      const Component: React.ComponentType<RenderRouteProps> = component!
      return (
        <Route exact render={(routerProps: RouterProps) => <Component {...routerProps} {...props} />}/>
      );
    };

    const PrivateRoute = (props: PrivateRouteProps & {redirectPath?: RouteRedirectProps, animate?: boolean}) => {
      const location = useLocation();
      const { appRoutes, redirectPath } = props;

      return (
        <Suspense>
          <Routes location={location}>
            {appRoutes.map((route, index) => (
              <RenderRoute key={index} {...route} />
            ))}
            {redirectPath?.length && redirectPath.map((path, index) => (
              path && <Navigate to={path.to} key={index} />
            ))}
          </Routes>
        </Suspense>
      )
    };

    export default PrivateRoute;

<!-- end snippet -->
I am using socket.io in my React Native chat app. When I open the screen for the first time it works perfectly, but when I navigate back to other screens, all the API calls get slower and my chat screen also gets slow (this is problem 1). Next, a message received from the socket arrives instantly, but if I emit a message it takes too much time (more than 15 seconds).

This is my code:

```
useEffect(() => {
  socketConnection.on('connect', () => {
    console.log('Connected to server');
    joinChat();
  });

  if (socketConnection) {
    setSocket(socketConnection);
  }

  socketConnection.on('private message', async function (data) {
    try {
      console.log(' ~ data:', data);
      const obj = {from: 1, message: data?.message, time: date, to: username};
      setChats([...chats, obj]);
    } catch (error) {
      console.error('Error updating chat data:', error);
    }
  });

  socketConnection.on('error', error => {
    console.error('Socket connection error:', error);
  });

  return () => {
    socketConnection.disconnect();
  };
}, []);
```
You need to run `updateSelectizeInput` if one of the `selectizeInput`s receives a selection. Therefore you could include the code below: the first `lapply` attaches an `observeEvent` to all `selectizeInput`s, and the `lapply` inside contains the updates, excluding the just-changed input and all which already have a selection.

```
lapply(
  X = cols_to_filter,
  FUN = function(colChanged) {
    observeEvent(input[[paste0("filter_", colChanged)]], {
      relevantData <- data()[get(colChanged) %in% input[[paste0("filter_", colChanged)]]]
      lapply(
        X = cols_to_filter[!cols_to_filter %in% colChanged],
        FUN = function(colToChange) {
          if (is.character(input[[paste0("filter_", colToChange)]])) {
            return()
          }
          updateSelectizeInput(inputId = paste0("filter_", colToChange),
                               choices = relevantData[[colToChange]])
        }
      )
    })
  }
)
```

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/j5tuT.gif
I've searched through loads of forum posts from the Roblox Developer Forum for an answer to this, and I have had no luck finding one, so I'm hoping you can help.

Basically, I want to have a ServerScript (located in ServerScriptService) wait for a LocalScript (located in a GUI button) to fire a RemoteEvent (using FireServer) with the user's selected police division. I currently only have one GUI button set up (Frontline Policing). The LocalScript can be found below.

**LocalScript**

```lua
local div = game.ReplicatedStorage.Events.divisionEvent
local button = script.Parent
local gui = script.Parent.Parent.Parent.Parent

button.MouseButton1Down:Connect(function()
	div:FireServer("Frontline")
	gui.Enabled = false
end)
```

TLDR - I am wondering if it's possible to get a ServerScript to wait for a LocalScript to fire a RemoteEvent with the needed information. I've done loads of research through around 50-60 Roblox Developer Forum posts, and none of them are:

- The same issue as mine.
- The proper resolution.
- Even associated with my issue.
A macro to the rescue:

```rust
macro_rules! generate_prop_changed {
    ( $( $method:ident => $prop:ident; )* ) => {
        $(
            pub fn $method(&self) -> bool {
                // No original means everything counts as changed;
                // otherwise compare the field against the current value.
                self.original
                    .as_ref()
                    .map_or(true, |val| val.$prop != self.current.$prop)
            }
        )*
    };
}

impl User {
    generate_prop_changed! {
        age_changed => age;
        name_changed => name;
    }
}
```

If you want to play in PRO-land, try:

1. Getting rid of the need to provide the method names (hint: [`paste`](https://docs.rs/paste)).
2. Getting rid of the need to provide the field names too, by wrapping the struct declaration with the macro.
3. Or, the hardest: create a [derive macro](https://doc.rust-lang.org/stable/reference/procedural-macros.html#derive-macros) for it.
I need to calculate a matrix exponential with a Taylor series using MPI. The matrix is small, 3 x 3 for example. Meanwhile:

```
vector<vector<double>> matrixExp(const vector<vector<double>>& A) {
    int n = A.size();
    vector<vector<double>> E(n, vector<double>(n, 0));
    vector<vector<double>> T(n, vector<double>(n, 0));
    vector<vector<double>> localE(n, vector<double>(n, 0));
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < n; i++) E[i][i] = 1;
    for (int i = 0; i < n; i++) localE[i][i] = 0;

    T = E;
    for (int j = 1; j <= rank; j++) {
        T = matrixMult(T, A);
        T = matrixDiv(T, j);
    }
    localE = T;

    for (int i = rank + 1; i <= N; i += size) {
        for (int j = i; j < i + size; j++) {
            T = matrixMult(T, A);
            T = matrixDiv(T, j);
        }
        localE = matrixSum(localE, T);
    }

    MPI_Reduce(localE[0].data(), E[0].data(), n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Reduce(localE[1].data(), E[1].data(), n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Reduce(localE[2].data(), E[2].data(), n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    return E;
}
```

But I don't know how to optimize this:

```
for (int j = i; j < i + size; j++) {
    T = matrixMult(T, A);
    T = matrixDiv(T, j);
}
```

Maybe it's impossible with this implementation.
I'm trying to run my frontend and server-side app, but I'm getting this error when I try to update my dog profile:

```
PUT http://localhost:8080/dog-profile/dogprofile 404 (Not Found)
dispatchXhrRequest @ xhr.js:272
xhr @ xhr.js:63
dispatchRequest @ dispatchRequest.js:61
_request @ Axios.js:179
request @ Axios.js:49
httpMethod @ Axios.js:218
wrap @ bind.js:9
onSubmit @ dogprofile.js:46
eval @ index.esm.mjs:2214
await in eval (async)
callCallback @ react-dom.development.js:4164
invokeGuardedCallbackDev @ react-dom.development.js:4213
invokeGuardedCallback @ react-dom.development.js:4277
invokeGuardedCallbackAndCatchFirstError @ react-dom.development.js:4291
executeDispatch @ react-dom.development.js:9041
processDispatchQueueItemsInOrder @ react-dom.development.js:9073
processDispatchQueue @ react-dom.development.js:9086
dispatchEventsForPlugins @ react-dom.development.js:9097
eval @ react-dom.development.js:9288
batchedUpdates$1 @ react-dom.development.js:26135
batchedUpdates @ react-dom.development.js:3991
dispatchEventForPluginEventSystem @ react-dom.development.js:9287
dispatchEvent @ react-dom.development.js:6457
dispatchDiscreteEvent @ react-dom.development.js:6430

dogprofile.js:57 Error updating dog profile: AxiosError {message: 'Request failed with status code 404', name: 'AxiosError', code: 'ERR_BAD_REQUEST', config: {…}, request: XMLHttpRequest, …}
AxiosError: Request failed with status code 404
    at settle (webpack-internal:///./node_modules/axios/lib/core/settle.js:24:12)
    at XMLHttpRequest.onloadend (webpack-internal:///./node_modules/axios/lib/adapters/xhr.js:125:66)
    at Axios.request (webpack-internal:///./node_modules/axios/lib/core/Axios.js:54:41)
    at async onSubmit (webpack-internal:///./pages/dogprofile.js:55:30)
    at async eval (webpack-internal:///./node_modules/react-hook-form/dist/index.esm.mjs:2214:13)
```

I've been stuck on this for hours now, and I'm so lost on what to do. This is what I have on my frontend, `dogprofile.js`:

```
import React, { useState, useEffect } from 'react';
import { useForm } from 'react-hook-form';
import { Form, Button } from 'react-bootstrap';
import api from '../services/api';
import { toast } from 'react-toastify';

const DogProfile = () => {
  const { register, handleSubmit, setValue } = useForm();
  const [dogProfile, setDogProfile] = useState({
    name: '',
    breed: '',
    age: '',
    weight: '',
    aggressionStatus: '',
    lastVisitDate: ''
  });
  const [user, setUser] = useState(null);

  useEffect(() => {
    setUser(JSON.parse(localStorage.getItem('userLoggedIn')));
  }, []);

  useEffect(() => {
    const fetchDogProfile = async () => {
      try {
        const token = localStorage.getItem('authToken');
        const response = await api.get('/dog-profile/dogprofile', {
          headers: { 'x-auth-token': token }
        });
        setDogProfile(response.data);
      } catch (error) {
        console.error('Error fetching dog profile:', error);
      }
    };
    if (user) {
      fetchDogProfile();
    }
  }, [user]);

  const onSubmit = async (data) => {
    try {
      const token = localStorage.getItem('authToken');
      const response = await api.put('/dog-profile/dogprofile', data, {
        headers: { 'x-auth-token': token }
      });
      console.log(response);
      toast.success('Dog profile updated successfully', {
        autoClose: 1000,
        position: "bottom-center"
      });
    } catch (error) {
      console.error('Error updating dog profile:', error);
      toast.error('Failed to update dog profile');
    }
  };

  return (
    <>
      <h1 className='my-3'>Dog Profile</h1>
      <Form onSubmit={handleSubmit(onSubmit)}>
        <Form.Group controlId="name">
          <Form.Label>Name</Form.Label>
          <Form.Control defaultValue={dogProfile.name} {...register('name')} />
        </Form.Group>
        <Form.Group controlId="breed">
          <Form.Label>Breed</Form.Label>
          <Form.Control defaultValue={dogProfile.breed} {...register('breed')} />
        </Form.Group>
        <Form.Group controlId="age">
          <Form.Label>Age</Form.Label>
          <Form.Control defaultValue={dogProfile.age} {...register('age')} />
        </Form.Group>
        <Form.Group controlId="weight">
          <Form.Label>Weight</Form.Label>
          <Form.Control defaultValue={dogProfile.weight} {...register('weight')} />
        </Form.Group>
        <Form.Group controlId="aggressionStatus">
          <Form.Label>Aggression Status</Form.Label>
          <Form.Control defaultValue={dogProfile.aggressionStatus} {...register('aggressionStatus')} />
        </Form.Group>
        <Form.Group controlId="lastVisitDate">
          <Form.Label>Last Visit Date</Form.Label>
          <Form.Control type="date" defaultValue={dogProfile.lastVisitDate} {...register('lastVisitDate')} />
        </Form.Group>
        <Button variant="primary" type="submit">
          Save Changes
        </Button>
      </Form>
    </>
  );
};

export default DogProfile;
```

And these are my server-side files.

`dogProfileController.js`:

```
const DogProfile = require("../models/dogProfile");

const getDogProfile = async (req, res) => {
  try {
    const dogProfile = await DogProfile.findOne({ owner: req.user.userId });
    if (!dogProfile) {
      return res.status(404).json({ msg: "Dog profile not found" });
    }
    res.json(dogProfile);
  } catch (error) {
    console.error("Error fetching dog profile:", error);
    res.status(500).send("Internal Server Error");
  }
};

const updateDogProfile = async (req, res) => {
  try {
    // Logic for updating dog profile based on the user's ID from req.user
    const updatedDogProfile = await DogProfile.findOneAndUpdate(
      { owner: req.user.userId }, // Filter by owner ID
      req.body, // Update with request body
      { new: true } // Return the updated document
    );
    if (!updatedDogProfile) {
      return res.status(404).json({ msg: "Dog profile not found" });
    }
    console.log("Updated Dog Profile:", updatedDogProfile);
    res.json(updatedDogProfile);
    console.log("Dog profile updated successfully");
  } catch (error) {
    console.error("Error updating dog profile:", error);
    res.status(500).send("Internal Server Error");
  }
};

module.exports = {
  getDogProfile,
  updateDogProfile,
};
```

`models/dogProfile.js`:

```
const mongoose = require("mongoose");

const dogProfileSchema = new mongoose.Schema({
  owner: { type: mongoose.Schema.Types.ObjectId, ref: "Customer", required: true },
  name: { type: String, required: true },
  breed: { type: String, required: true },
  age: { type: Number, required: true },
  weight: { type: Number, required: true },
  aggressionStatus: { type: String, required: true },
  lastVisitDate: { type: Date, required: true },
});

const DogProfile = mongoose.model("DogProfile", dogProfileSchema);

module.exports = DogProfile;
```

`routes/dogProfileRoutes.js`:

```
// routes/dogProfileRoutes.js
const express = require("express");
const Dog = require("../models/dogProfile");
const dogProfileController = require("../controllers/dogProfileController");
const authMiddleware = require("../middleware/authMiddleware");

const router = express.Router();

router.use(authMiddleware);

router.get("/dogprofile", dogProfileController.getDogProfile);
router.put("/dogprofile", dogProfileController.updateDogProfile);

module.exports = router;
```

`routes.js`:

```
// routes/routes.js
const express = require('express');
const authRoutes = require('./authRoutes');
const appointmentRoutes = require('./appointmentRoutes');
const customerRoutes = require('./customerRoutes');
const aboutRoute = require('./aboutRoute'); // Import the aboutRoute module
const servicesRoute = require('./servicesRoute'); // Import the servicesRoute module
const contactsRoute = require('./contactsRoute'); // Import the contactsRoute module
const homeRoute = require('./homeRoute'); // Import the homeRoute module
const dogProfileRoutes = require('./dogProfileRoutes'); // Import the dogProfileRoutes module

const router = express.Router();

// Delegate specific functionalities to separate route files
router.use('/auth', authRoutes);
router.use('/appointments', appointmentRoutes);
router.use('/customers', customerRoutes);
router.use('/dog-profile', dogProfileRoutes);
// Include the about route
router.use('/about', aboutRoute);
router.use('/services', servicesRoute); // Include the services route
// Include the contacts route
router.use('/contacts', contactsRoute);
// Include the home route
router.use('/', homeRoute);

module.exports = router;
```
Getting a 404 error: AxiosError {message: 'Request failed with status code 404'}
|reactjs|mongodb|axios|serverside-javascript|
My issue with this message was literally a permission gap. To solve the problem, you can visit the [Service Usage access control reference][1] page. Based on the permissions there, you can use, for example, the `roles/serviceusage.serviceUsageAdmin` role on your service account. In my case I did the following:

    gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:YOUR_SA" --role="roles/serviceusage.serviceUsageAdmin"

You can use different roles based on your needs, for example: `roles/serviceusage.serviceUsageConsumer`.

After running the command, I was able to run the GitHub Action (`gcloud builds submit`) using the service account.

[1]: https://cloud.google.com/service-usage/docs/access-control
The OAuth2 process runs normally, but `http://authorize-server.com/...authorize` (302) comes with a `Set-Cookie` header.

Expected behavior: no `Set-Cookie` should be included in any situation. If the JSESSIONID is required for this process, should I manually end the session at `/api/oauth2/endpoint`? Because I want to use JWT for authentication by setting up a resource server, the remaining steps do not require a session.

```kotlin
@Bean
fun filterChain(http: HttpSecurity): SecurityFilterChain {
    http
        .csrf { it.disable() }
        .formLogin { it.disable() }
        .logout { it.disable() }
        .httpBasic { it.disable() }
        .anonymous { it.disable() }
        .oauth2ResourceServer { it.jwt { } }
        .cors { }
        .sessionManagement { it.sessionCreationPolicy(SessionCreationPolicy.STATELESS) }
        .requestCache { it.requestCache(NullRequestCache()) }
        .securityContext {
            it.securityContextRepository(NullSecurityContextRepository())
            it.requireExplicitSave(true)
        }
        .authorizeHttpRequests {
            it.requestMatchers("/api/oauth2/endpoint").permitAll()
            it.requestMatchers("/api/ping").permitAll()
            it.anyRequest().authenticated()
        }
        .oauth2Login {
            it.authorizationEndpoint {
                it.authorizationRequestResolver(
                    oAuth2AuthorizationRequestResolver(
                        registrationRepository,
                        oAuth2AuthorizationRequestCustomizer
                    )
                )
            }
            it.tokenEndpoint {
                it.accessTokenResponseClient(
                    oAuth2AccessTokenResponseClient(
                        oAuth2AuthorizationCodeGrantRequestEntityConverter,
                        mapOAuth2AccessTokenResponseConverter
                    )
                )
            }
            it.userInfoEndpoint { it.userService(oAuth2UserService) }
            it.defaultSuccessUrl("/api/oauth2/endpoint", false)
            it.failureHandler { request, response, exception ->
                exception.printStackTrace()
            }
        }
    return http.build()
}
```

```kotlin
@Bean
fun corsConfigurationSource(): CorsConfigurationSource {
    val configuration = CorsConfiguration()
    configuration.allowedOriginPatterns = mutableListOf("*")
    configuration.allowedMethods = mutableListOf("*")
    configuration.allowedHeaders = mutableListOf("*")
    configuration.allowCredentials = true
    val source = UrlBasedCorsConfigurationSource()
    source.registerCorsConfiguration("/**", configuration)
    return source
}

@Bean
fun decoder(): JwtDecoder {
    val originalKey = "b0f29fc0d32efdbabff03d4aae352b4936e69b0c3c6b8a0b067ae2453f96b431".toByteArray()
    val secretKeySpec = SecretKeySpec(originalKey, "HmacSHA256")
    return NimbusJwtDecoder.withSecretKey(secretKeySpec).build()
}

@Bean
fun encoder(): JwtEncoder {
    val originalKey = "b0f29fc0d32efdbabff03d4aae352b4936e69b0c3c6b8a0b067ae2453f96b431".toByteArray()
    val secretKeySpec = SecretKeySpec(originalKey, "HmacSHA256")
    return NimbusJwtEncoder(ImmutableSecret(secretKeySpec))
}

@Bean
fun authenticationConverter(): JwtAuthenticationConverter {
    val grantedAuthoritiesConverter = JwtGrantedAuthoritiesConverter()
    grantedAuthoritiesConverter.setAuthorityPrefix("")
    val authenticationConverter = JwtAuthenticationConverter()
    authenticationConverter.setJwtGrantedAuthoritiesConverter(grantedAuthoritiesConverter)
    return authenticationConverter
}

// @Bean
fun oAuth2AuthorizationRequestResolver(
    clientRegistrationRepository: ClientRegistrationRepository,
    oAuth2AuthorizationRequestCustomizer: OAuth2AuthorizationRequestCustomizer
): OAuth2AuthorizationRequestResolver {
    val resolver = DefaultOAuth2AuthorizationRequestResolver(
        clientRegistrationRepository,
        OAuth2AuthorizationRequestRedirectFilter.DEFAULT_AUTHORIZATION_REQUEST_BASE_URI
    )
    resolver.setAuthorizationRequestCustomizer(oAuth2AuthorizationRequestCustomizer)
    return resolver
}

// @Bean
fun oAuth2AccessTokenResponseClient(
    oAuth2AuthorizationCodeGrantRequestEntityConverter: OAuth2AuthorizationCodeGrantRequestEntityConverter,
    mapOAuth2AccessTokenResponseConverter: MapOAuth2AccessTokenResponseConverter
): OAuth2AccessTokenResponseClient<OAuth2AuthorizationCodeGrantRequest> {
    val authorizationCodeTokenResponseClient = DefaultAuthorizationCodeTokenResponseClient()
    authorizationCodeTokenResponseClient.setRequestEntityConverter(oAuth2AuthorizationCodeGrantRequestEntityConverter)
    val tokenResponseHttpMessageConverter = OAuth2AccessTokenResponseHttpMessageConverter()
    tokenResponseHttpMessageConverter.supportedMediaTypes = listOf(MediaType.APPLICATION_JSON, MediaType.TEXT_PLAIN)
    tokenResponseHttpMessageConverter.setAccessTokenResponseConverter(mapOAuth2AccessTokenResponseConverter)
    val restTemplate = RestTemplate(listOf(FormHttpMessageConverter(), tokenResponseHttpMessageConverter))
    authorizationCodeTokenResponseClient.setRestOperations(restTemplate)
    return authorizationCodeTokenResponseClient
}
```

`http://authorize-server.com/...authorize` (302) should not include `Set-Cookie`. (What I'm puzzled about is that the `state` parameter in the OAuth2 process can already determine the source, so why still use a cookie + JSESSIONID?)
Even if SessionCreationPolicy.STATELESS is set, http://authorize-server.com/...authorize (302) comes with Set-Cookie and JSESSIONID
|spring-security|spring-security-oauth2|
> I've got a separate class for the tree itself, ...

Presumably, this class maintains a pointer to the tree:

```lang-cpp
template <class T>
class BTree {
    Node<T>* root;
    // ...
};
```

After function `remove` deletes a root node, it must set the `root` pointer to `nullptr`. What's happening now is:

1. The root is deleted once, as `delnode`, in function `remove`.
2. It is deleted a second time in the destructor of class `BTree`.

Thus, the double-free error.
Using the POS product of Odoo 17, I am trying to add one more item to the burger-icon menu in the navbar. I have successfully added the item "Admin Panel", and now I just want to call a function when the user clicks on "Admin Panel". But it continuously gives an error.

This is my topper.js file:

```
/** @odoo-module */

import { patch } from "@web/core/utils/patch";
import { Navbar } from "@point_of_sale/app/navbar/navbar";
//import { v4 as uuidv4 } from 'uuid';

Navbar.template = "pos_top_message.TopbarExtension"

patch(Navbar.prototype, {
    setup() {
        super.setup();
    },
});

odoo.define("pos_top_message.topbar", [], function (require) {
    'use strict'
    console.log("Hello odoo..")
    var adminPanel = function () {
        console.log("Kalu mathalu");
    };
    adminPanel();
});
```

This is my view/pos_top_message.xml file:

```
<li class="menu-item navbar-button" t-on-click="adminPanel">
    <a class="dropdown-item py-2">
        Admin Panel
    </a>
</li>
```

And I am getting the error:

```
OwlError: Invalid handler (expected a function, received: 'undefined')
OwlError@
mainEventHandler@http://localhost:8069/web/assets/3/efce9ad/point_of_sale.assets_prod.min.js:1565:112
listener@http://localhost:8069/web/assets/3/efce9ad/point_of_sale.assets_prod.min.js:742:31
```
# A confusion of getting to the bottom of a jq parse

I'm learning jq and would appreciate your help. I have two queries below, both producing the same kind of output: integer values separated by spaces (e.g. `1 2 3 4 5`), and likewise for string values. The source is a multi-nested weather dump. I'm trying to understand how to return just a single indexed value (`[0]`, ...) and/or a proper array of the values that I can sort — in json and/or bash. I've tried a number of things, and yes, I have read the man page; I'm loving jq, just stuck.

## Query 1

    drp=$(cat "$HOME/..." | jq '.DailyForecasts[].Day.RainProbability')

The query above works fine and gives integer output. There are 5 array sets, so 5 numbers: `number number number number number` — space separated, no `[]`.

JSON:

    "Day": {
        "Icon": 2,
        "IconPhrase": "Mostly sunny",
        "HasPrecipitation": false,
        "ShortPhrase": "Mostly sunny and breezy",
        "LongPhrase": "Mostly sunny and breezy",
        "PrecipitationProbability": 1,
        "ThunderstormProbability": 0,
        "RainProbability": 1,
        "SnowProbability": 0,
        "IceProbability": 0,
        "Wind": {
            "Speed": {
                "Value": 15,
                "Unit": "mi/h",
                "UnitType": 9
    ...
    }

## Query 2

I've tried to map this a number of ways (here going for the grass value):

    apg=$(cat "$HOME/..." | jq '.DailyForecasts[].AirAndPollen | .[1] | .Value')

JSON (same description as Query 1):

    "AirAndPollen": [
        {
            "Name": "AirQuality",
            "Value": 44,
            "Category": "Good",
            "CategoryValue": 1,
            "Type": "Ozone"
        },
        {
            "Name": "Grass",
            "Value": 12,
            "Category": "Moderate",
            "CategoryValue": 2
        },
        ...
    ]
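For what it's worth, here is a sketch of how the bare value stream can be wrapped into a real JSON array, which can then be sorted or indexed (the sample data below is a made-up stand-in shaped like the dump above, not the real file):

```shell
# Hypothetical stand-in for the real weather dump file:
json='{"DailyForecasts":[{"Day":{"RainProbability":40}},{"Day":{"RainProbability":1}}]}'

# Wrapping the stream in [...] collects the values into one JSON array:
printf '%s' "$json" | jq -c '[.DailyForecasts[].Day.RainProbability]'         # -> [40,1]

# That array can be sorted inside jq ...
printf '%s' "$json" | jq -c '[.DailyForecasts[].Day.RainProbability] | sort'  # -> [1,40]

# ... or indexed to pull out a single element:
printf '%s' "$json" | jq '[.DailyForecasts[].Day.RainProbability] | .[0]'     # -> 40
```

(`-c` prints the array compactly on one line; without it jq pretty-prints across several lines.)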
The items in my Combo Box are being fetched from my data source. But how do I allow the user to manually enter an option in my Combo Box? This option should then be saved in my data source. [![enter image description here][1]][1] I've tried different approaches with no luck. [1]: https://i.stack.imgur.com/Qoz2Z.png
How to manually add a value to data source with combo box in Power Apps
|combobox|powerapps|powerapps-canvas|powerapps-collection|
null
```
from datetime import datetime, timedelta
import base64
import hashlib
import hmac
from urllib.parse import urlsplit, parse_qs, urlencode


def sign_url(base_url: str, path: str, key_name: str, base64_key: str, expiration_time: datetime) -> str:
    """Generates a signed URL for accessing a private resource with a specified expiration time.

    Args:
        base_url: The base URL (domain) of the resource, without scheme (http/https).
        path: The path to the resource on the base URL.
        key_name: The name of the signing key.
        base64_key: The signing key, base64 encoded.
        expiration_time: The expiration time for the signed URL.

    Returns:
        A signed URL with query parameters including the expiration time, key name, and signature.
    """
    # Ensure the path starts with '/'
    if not path.startswith('/'):
        path = '/' + path

    # Construct the full URL to be signed
    full_url = f"https://{base_url}{path}"

    # Parse the URL to prepare for signing
    parsed_url = urlsplit(full_url)
    query_params = parse_qs(parsed_url.query, keep_blank_values=True)

    # Convert expiration time to a UNIX timestamp (seconds since epoch)
    expiration_timestamp = int(expiration_time.timestamp())

    # Add 'Expires' and 'KeyName' to the query parameters
    query_params['Expires'] = str(expiration_timestamp)
    query_params['KeyName'] = key_name

    # Reconstruct the URL with the added query parameters for signing
    url_to_sign = f"{parsed_url.scheme}://{parsed_url.netloc}{parsed_url.path}?{urlencode(query_params, doseq=True)}"

    # Decode the base64-encoded key
    decoded_key = base64.urlsafe_b64decode(base64_key)

    # Create a signature using SHA-256
    digest = hmac.new(decoded_key, url_to_sign.encode("utf-8"), hashlib.sha256).digest()
    signature = base64.urlsafe_b64encode(digest).decode("utf-8")

    # Append the signature to the URL
    signed_url = f"{url_to_sign}&Signature={signature}"

    return signed_url


# Usage example with the provided information
if __name__ == "__main__":
    base_url = "some-domain.services"  # Load Balancer domain
    path = "/no-fetch/three-cats2.jpg"  # Path to the object in the CDN bucket
    key_name = "The_key_name"
    base64_key = "auto_generated_code"
    expiration_time = datetime.utcnow() + timedelta(hours=1)  # URL expires in 1 hour

    signed_url = sign_url(base_url, path, key_name, base64_key, expiration_time)
    print("Signed URL:", signed_url)
```
In a bot built with aiogram-dialog there is a wizard for creating a support ticket, which has a "description" field. If the content type of the description is text, document, video, audio, or photo, everything works correctly. But if the description type is voice (a voice message), then pressing any button in the preview window crashes the bot with the error from the title. Link to the bot on GitHub: https://github.com/oooazimut/SupportBot/tree/operator The dialog for creating a ticket is in dialogs/task.
Aiogram error aiogram.exceptions.TelegramBadRequest: Telegram server says - Bad Request: there is no text in the message to edit
|python|aiogram|
null
I am writing a console application in C# (VS2022) where I receive a JSON string from Orchestrator (UiPath) and try to deserialize it to an IEnumerable. I've spent all day reading everything about this error, but I can't figure out what I'm doing wrong, so now I'd really like someone to help me or at least point me in the right direction. Thanks in advance!

This is the string:

```
{
    "@odata.context": "https://xxxxx.com/odata/$metadata#QueueProcessingStatuses",
    "@odata.count": 156,
    "value": [
        {
            "ItemsToProcess": 1,
            "ItemsInProgress": 0,
            "QueueDefinitionId": 397,
            "QueueDefinitionKey": "e5xxxxxx-xxxx-4c76-b145-f120d5de97dd",
            "QueueDefinitionName": "_Framework2.0_TestQueue",
            "QueueDefinitionDescription": null,
            "QueueDefinitionAcceptAutomaticallyRetry": false,
            "QueueDefinitionMaxNumberOfRetries": 0,
            "QueueDefinitionEnforceUniqueReference": false,
            "ProcessingMeanTime": 0,
            "SuccessfulTransactionsNo": 0,
            "ApplicationExceptionsNo": 0,
            "BusinessExceptionsNo": 0,
            "SuccessfulTransactionsProcessingTime": 0,
            "ApplicationExceptionsProcessingTime": 0,
            "BusinessExceptionsProcessingTime": 0,
            "TotalNumberOfTransactions": 0,
            "LastProcessed": "2022-03-01T08:42:10.11Z",
            "ReleaseName": null,
            "ReleaseId": null,
            "IsProcessInCurrentFolder": null,
            "SpecificDataJsonSchemaExists": false,
            "OutputDataJsonSchemaExists": false,
            "AnalyticsDataJsonSchemaExists": false,
            "ProcessScheduleId": null,
            "QueueFoldersCount": 1,
            "Id": 397,
            "Tags": []
        }
    ]
}
```

This is the class:

```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Text.Json.Serialization;
using System.Threading.Tasks;
using System.Text.Json;

namespace ConsoleApp1.Model
{
    public class GetQueueStatusModel
    {
        [JsonPropertyName("@odata.context")]
        public string? odatacontext { get; set; }

        [JsonPropertyName("@odata.count")]
        public int odatacount { get; set; }

        public List<Value>? value { get; set; }
    }

    public class Value
    {
        public int ItemsToProcess { get; set; }
        public int ItemsInProgress { get; set; }
        public int QueueDefinitionId { get; set; }
        public string? QueueDefinitionKey { get; set; }
        public string? QueueDefinitionName { get; set; }
        public object? QueueDefinitionDescription { get; set; }
        public bool QueueDefinitionAcceptAutomaticallyRetry { get; set; }
        public int QueueDefinitionMaxNumberOfRetries { get; set; }
        public bool QueueDefinitionEnforceUniqueReference { get; set; }
        public int ProcessingMeanTime { get; set; }
        public int SuccessfulTransactionsNo { get; set; }
        public int ApplicationExceptionsNo { get; set; }
        public int BusinessExceptionsNo { get; set; }
        public int SuccessfulTransactionsProcessingTime { get; set; }
        public int ApplicationExceptionsProcessingTime { get; set; }
        public int BusinessExceptionsProcessingTime { get; set; }
        public int TotalNumberOfTransactions { get; set; }
        public DateTime LastProcessed { get; set; }
        public object? ReleaseName { get; set; }
        public object? ReleaseId { get; set; }
        public object? IsProcessInCurrentFolder { get; set; }
        public bool SpecificDataJsonSchemaExists { get; set; }
        public bool OutputDataJsonSchemaExists { get; set; }
        public bool AnalyticsDataJsonSchemaExists { get; set; }
        public object? ProcessScheduleId { get; set; }
        public int QueueFoldersCount { get; set; }
        public int Id { get; set; }
        public List<object>? Tags { get; set; }
    }
}
```

This is where I try to deserialize it:

```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using ConsoleApp1.Interfaces;
using ConsoleApp1.Model;
using System.Text.Json;
using System.ComponentModel.DataAnnotations;
using System.Text.Json.Serialization;

namespace ConsoleApp1.Repository
{
    internal class GetQueueStatusRepository : IGetQueueStatusRepository
    {
        private readonly HttpClient _httpClient;

        public GetQueueStatusRepository(HttpClient httpClient)
        {
            _httpClient = httpClient;
        }

        public async Task<IEnumerable<GetQueueStatusModel>> GetQueueStatus()
        {
            var result = await _httpClient.GetAsync("odata/QueueProcessingRecords/UiPathODataSvc.RetrieveQueuesProcessingStatus");
            result.EnsureSuccessStatusCode();
            var response = await result.Content.ReadAsStringAsync();
            return DeserializeResult<IEnumerable<GetQueueStatusModel>>(response);
        }

        private T? DeserializeResult<T>(string json)
        {
            var options = new JsonSerializerOptions
            {
                PropertyNameCaseInsensitive = true,
                WriteIndented = true,
                DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
                ReadCommentHandling = JsonCommentHandling.Skip,
                AllowTrailingCommas = true
            };
            return JsonSerializer.Deserialize<T>(json, options);
        }
    }
}
```

Interface:

```
using ConsoleApp1.Model;

namespace ConsoleApp1.Interfaces
{
    internal interface IGetQueueStatusRepository
    {
        Task<IEnumerable<GetQueueStatusModel>> GetQueueStatus();
    }
}
```

I have tried some converters for JSON, but I couldn't make them work for me. I also tried some regex to edit the string, but with no luck.
I don't understand what to do with: System.Text.Json.JsonException: 'The JSON value could not be converted to System.Collections.Generic.IEnumerable`1
|json|ienumerable|json-deserialization|system.text.json|jsonconverter|
null
Please provide the source code next time. But I want to answer why we use `break` in general.

- The `break` statement is used to exit a loop prematurely when a certain condition is met.
- In this context, once we find a duplicate element, there's no need to continue searching for more duplicates.
- By using `break`, we exit the inner loop as soon as we find the first duplicate, saving time and avoiding unnecessary iterations.
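Since the question's code was not posted, here is a minimal sketch of the loop pattern described above (all names are illustrative):

```python
def has_duplicate(items):
    """Return True as soon as any duplicate pair is found."""
    found = False
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                found = True
                break  # stop the inner loop: one match is enough
        if found:
            break      # stop the outer loop too
    return found


print(has_duplicate([1, 2, 3, 2]))  # True
print(has_duplicate([1, 2, 3]))     # False
```

Without the two `break` statements the loops would keep comparing every remaining pair even after the answer is already known.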
The OP's problem qualifies perfectly for a solution of mainly 2 combined techniques ...

1) the [`map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) based creation of a list of [async function/s (expressions)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/async_function), each function representing a delaying broadcast task (_delaying_ and not _delayed_ because the task executes immediately but delays its returning time).

2) the creation of an [async generator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/AsyncGenerator) via an [async generator-function (expression)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/async_function*), where the latter consumes / works upon the created list of delaying tasks, and where the async generator itself will be iterated via the [`for await...of` statement](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for-await...of).

In addition one needs to write kind of a `wait` function, which can be achieved easily via an async function which returns a [`Promise`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/Promise) instance, where the latter resolves the promise via [`setTimeout`](https://developer.mozilla.org/en-US/docs/Web/API/setTimeout) and a customizable delay value.

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-js -->

    const queueData = ["Sample Data 1", "Sample Data 2", "Sample Data 3"];

    // create a list of async function based "delaying tasks".
    const delayingTasks = queueData
      .map(data => async () => {
        wss.broadcast(
          JSON.stringify({ data })
        );
        await wait(1500);

        return `successful broadcast of "${ data }"`;
      });

    // create an async generator from the "delaying tasks".
    const scheduledTasksPool = (async function* (taskList) {
      let task;
      while (task = taskList.shift()) {
        yield await task();
      }
    })(delayingTasks);

    // utilize the async generator of "delaying tasks".
    (async () => {
      for await (const result of scheduledTasksPool) {
        console.log({ result });
      }
    })();

<!-- language: lang-css -->

    .as-console-wrapper { min-height: 100%!important; top: 0; }

<!-- language: lang-html -->

    <script>
      const wss = {
        broadcast(payload) {
          console.log('broadcast of payload ...', payload);
        },
      };
      async function wait(timeInMsec = 1_000) {
        return new Promise(resolve =>
          setTimeout(resolve, Math.max(0, Math.min(timeInMsec, 20_000)))
        );
      }
    </script>

<!-- end snippet -->

And since the approach is two-fold, one even can customize each delay in between two tasks ... one just slightly has to change the format of the to-be-queued data and the task-generating mapper functionality (2 lines of code are affected) ...

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-js -->

    // changed format.
    const queueData = [
      { data: "Sample Data 1", delay: 1000 },
      { data: "Sample Data 2", delay: 3000 },
      { data: "Sample Data 3", delay: 2000 },
      { data: "Sample Data 4" },
    ];

    // create a list of async function based "delaying tasks".
    const delayingTasks = queueData
      .map(({ data, delay = 0 }) => async () => { // changed argument.
        wss.broadcast(
          JSON.stringify({ data })
        );
        await wait(delay); // changed ... custom delay.

        return `successful broadcast of "${ data }"`;
      });

    // create an async generator from the "delaying tasks".
    const scheduledTasksPool = (async function* (taskList) {
      let task;
      while (task = taskList.shift()) {
        yield await task();
      }
    })(delayingTasks);

    // utilize the async generator of "delaying tasks".
    (async () => {
      for await (const result of scheduledTasksPool) {
        console.log({ result });
      }
    })();

<!-- language: lang-css -->

    .as-console-wrapper { min-height: 100%!important; top: 0; }

<!-- language: lang-html -->

    <script>
      const wss = {
        broadcast(payload) {
          console.log('broadcast of payload ...', payload);
        },
      };
      async function wait(timeInMsec = 1_000) {
        return new Promise(resolve =>
          setTimeout(resolve, Math.max(0, Math.min(timeInMsec, 20_000)))
        );
      }
    </script>

<!-- end snippet -->
|python|matplotlib|seaborn|
In trying to understand how to work with classes, objects, and methods, please explain why `print(list.sort())` does not work like `list.sort()` followed by `print(list)`.

```python
list = [5, 1, 2]
print(list.sort())
```

The output is `None`,

vs.

```python
list.sort()
print(list)
```

The output is `[1, 2, 5]`.

As explained above, I expected `print(list.sort())` to output a sorted list instead of `None`.
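Here is a minimal sketch of what I observe, with the built-in `sorted()` added for comparison (the variable name is illustrative, chosen so as not to shadow the `list` built-in):

```python
values = [5, 1, 2]

# list.sort() mutates the list in place and returns None by design,
# so printing its return value prints None.
result = values.sort()
print(result)   # None
print(values)   # [1, 2, 5]

# sorted() leaves the original alone and returns a new sorted list.
values = [5, 1, 2]
print(sorted(values))  # [1, 2, 5]
print(values)          # [5, 1, 2]
```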
|python|list|methods|printing|
I have replicated part of the code to illustrate what mistake you are making and how you can correct it.

context.jsx

    import {createContext, useState} from 'react';

    export const ShopContext = createContext();

    const ShopProvider = ({ children }) => {
      const [data, setData] = useState([]);

      const defaultContext = {
        data,
        setData,
      };

      return (
        <ShopContext.Provider value={defaultContext}>
          {children}
        </ShopContext.Provider>
      );
    };

    export default ShopProvider;

index.jsx

    import React from 'react';
    import ReactDOM from 'react-dom/client';
    import ShopProvider from './context';
    import { App } from './App.jsx'

    ReactDOM.createRoot(
      document.querySelector('#root')
    ).render(<ShopProvider><App /></ShopProvider>)

App.jsx

    import React, {useContext, useEffect } from 'react';
    import {ShopContext} from './context';
    import Product from './Product';

    export function App(props) {
      const {setData} = useContext(ShopContext); // So you require to get the setter method from context.

      useEffect(() => {
        async function FetchData() {
          try {
            const response = await fetch("https://fakestoreapi.com/products");
            if (!response.ok) {
              throw new Error(`HTTP error: Status ${response.status}`);
            }
            let postsData = await response.json();
            postsData.sort((a, b) => {
              const nameA = a.title.toLowerCase();
              const nameB = b.title.toLowerCase();
              return nameA.localeCompare(nameB);
            });
            setData(postsData);
          } catch (err) {
            setData([]);
          }
        }
        FetchData();
      }, []);

      return (
        <div className='App'>
          <Product />
        </div>
      );
    }

Product.jsx

    import {useContext} from 'react';
    import {ShopContext} from './context';

    export default function Product() {
      const { data } = useContext(ShopContext);
      const id = 1;

      if (!data) {
        return <div>Loading product...</div>;
      }

      const product = data.find((item) => item?.id === id);

      return (
        <div>
          {product?.title}
        </div>
      );
    }

Let me know if you face any problem in understanding this illustration.
Problem while using react native socket io client
|javascript|reactjs|react-native|websocket|socket.io|
null
I am not sure if you were able to solve this, but have you tried updating the WSGI path in `.ebextensions`? For Python Flask I had to set it as:

    option_settings:
      "aws:elasticbeanstalk:container:python":
        WSGIPath: application:application

Also, you can check the configuration in the AWS EB console; the WSGI path mentioned there should also be the correct one.
Posting from https://learn.microsoft.com/en-us/answers/questions/1615486/container-instane-exec-api The API <https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.ContainerInstance/containerGroups/$groupName/containers/$containerName/exec?api-version=2023-05-01> returns a web socket URL with password. Is there any documentation about how to invoke it with a client such as `curl` or `websocat` ? <https://learn.microsoft.com/en-us/rest/api/container-instances/containers/execute-command?view=rest-container-instances-2023-05-01&tabs=HTTP> Azure CLI `az container exec` automatically opens web socket but does not provide any details when used with `--debug` option. Would Java API class ContainerExec also automatically interact with WebSocket and provide stdout of the command? <https://learn.microsoft.com/en-us/java/api/com.microsoft.azure.management.containerinstance.containerexec?view=azure-java-legacy>