In a Python dictionary, if you insert items with the same key, each later item simply overwrites the earlier one, so your dict would finally end up with only one item under the `"A"` key. My suggestion is to flatten everything into a single sequence and parse it in pairs: one item is a key, the next is its value, the next is a key, and so on. You can get there simply by replacing `":"` with `","`, as I did in the following code:

```
array = ("A", "Red", "A", "Blue", "A", "Green", "B", "Yellow", "C", "Black", "B", "smth")
dictionary = {}
for item_key, item in zip(array[::2], array[1::2]):
    if item_key in dictionary:
        if type(dictionary[item_key]) == list:
            dictionary[item_key].append(item)
        else:
            a = dictionary[item_key]
            dictionary[item_key] = [a, item]
    else:
        dictionary[item_key] = item
```

This code should help you.
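If you always want lists as values (rather than a mix of bare strings and lists), `collections.defaultdict` from the standard library gives a slightly simpler sketch of the same pair-parsing idea:

```python
from collections import defaultdict

array = ("A", "Red", "A", "Blue", "A", "Green", "B", "Yellow", "C", "Black", "B", "smth")

dictionary = defaultdict(list)  # missing keys start out as empty lists
for item_key, item in zip(array[::2], array[1::2]):
    dictionary[item_key].append(item)

print(dict(dictionary))
# {'A': ['Red', 'Blue', 'Green'], 'B': ['Yellow', 'smth'], 'C': ['Black']}
```

The trade-off is that keys that appear only once also hold a one-element list, which is often easier to consume downstream than a mixed type.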
All you need to do is add the `style` parameter as I have done below:

```
import random as rd
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt

# Make some sample data
episode = list(range(200))
season = []
for k in range(20):
    for i in range(10):
        season.append(k)
rating = []
for i in range(200):
    rating.append(rd.uniform(7.75, 8.25))
df = pd.DataFrame()
df['Episode Number'] = episode
df['Rating'] = rating
df['Season'] = season

### PLOT
plt.figure(figsize=(20, 8))
plt.ylim(0, 12)

# scatterplot
sns.scatterplot(
    data=df, x='Episode Number', y='Rating',
    hue='Season', style='Season', palette='tab10', s=50
)

# regression line
sns.regplot(
    data=df, x='Episode Number', y='Rating',
    scatter=False, ci=None,
    line_kws={'color': 'red', 'linestyle': '-', 'linewidth': 3, 'alpha': 0.3}
)

plt.legend(loc="lower left", ncol=2, title='Season')
plt.show()
```

The chart will look like:

[![enter image description here][1]][1]

  [1]: https://i.stack.imgur.com/Z4fGa.png
While comparing, convert `var1` to a string. See below:

    var1 = int(input("enter a number:"))
    num = str(var1)[::-1]
    if str(var1) == num:
        print("its a palindrome")
    else:
        print("its not a palindrome")
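The same check can be wrapped in a small function so the string conversion happens in one place:

```python
def is_palindrome(n: int) -> bool:
    """Check whether the decimal digits of n read the same backwards."""
    s = str(n)
    return s == s[::-1]

print(is_palindrome(121))  # True
print(is_palindrome(123))  # False
```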
> Is there a way to figure out how I generated it? What you have there is a [U+00AD 'SOFT HYPHEN'](https://unicode-explorer.com/c/00AD). The linked page gives the key combinations for typing it on Windows and Linux. > How can I avoid, find, and fix this problem if it happens in a real-world scenario? I'd recommend the [Gremlins tracker](https://marketplace.visualstudio.com/items?itemName=nhoizey.gremlins) extension.
This is more related to your system/Windows configuration than to OpenVINO. As a workaround, you can try enabling both CPUs as follows:

1. Press "Windows+R" to open a Run dialogue box. Type "msconfig" in the "Open" field and press "Enter." The System Configuration window opens.
2. Click the "Boot" tab, then click the "Advanced Options" button. The BOOT Advanced Options window opens.
3. Click the check box labeled "Number of processors" to place a check mark inside it.
4. Click the drop-down list below the "Number of processors" label, then select "2."
5. Click "OK" to close the BOOT Advanced Options window, then click "OK" again to close the System Configuration window.
6. Exit all applications, then restart the computer.

After you restart the PC, you should notice that Windows boots and opens applications faster.
Pretty much what it says on the tin. I have a JS script that isn't staying within the div element it is designated to on my page; it "sits" in the footer section instead.

[Page design with the JS script result "sitting" in the footer](https://i.stack.imgur.com/0Dpil.png)

```
<header>
    <div>icon</div>
    <div>header</div>
    <div style="padding:5px;">
        script is supposed to go here
        <script src="JS/SpinningBaton.js"></script>
    </div>
</header>
```

That's how I currently have the HTML block written in an attempt to resolve the issue. Unfortunately, nothing has changed. What am I doing wrong? For context, the div element is 165px in size while the JS script is 155px, so it should fit without overflow. I have tried putting the script within different elements inside the div and resizing the div element, but the script definitely stays in the footer area regardless of what I have tried so far.
The function takes about 20-35ms when run with `static inline` on GCC or Clang with at least -O1 (at -O0 it takes the same 400-600ms as without the `static` keyword). When `static` is removed, the function takes 400+ms to execute on an array of 1 billion bytes/chars. When single-threaded, the function's time doesn't change whether `static` is used or not. On MSVC it always takes 400ms or more, even with /O2 and the AVX2 arch flag. If I just replace the AVX2 code with a simple call to `std::count(begin, end, target)`, it runs just as fast as the AVX2 version no matter whether `static` is specified or not (even on MSVC this time).

Code:

```c++
static inline uint64_t opt_count(const char* begin, const char* end, const char target) noexcept {
    const __m256i avx2_Target = _mm256_set1_epi8(target);
    uint64_t result = 0;
    static __m256i cnk1, cnk2;
    static __m256i cmp1, cmp2;
    static uint32_t msk1, msk2;
    uint64_t cst;
    for (; begin < end; begin += 64) {
        cnk1 = _mm256_load_si256((const __m256i*)(begin));
        cnk2 = _mm256_load_si256((const __m256i*)(begin + 32));
        cmp1 = _mm256_cmpeq_epi8(cnk1, avx2_Target);
        cmp2 = _mm256_cmpeq_epi8(cnk2, avx2_Target);
        msk1 = _mm256_movemask_epi8(cmp1);
        msk2 = _mm256_movemask_epi8(cmp2);
        // Casting and shifting is faster than 2 popcnt calls
        cst = static_cast<uint64_t>(msk2) << 32;
        result += _mm_popcnt_u64(msk1 | cst);
    }
    return result;
}
```

Caller:

```c++
uint64_t opt_count_parallel(const char* begin, const char* end, const char target) noexcept {
    const size_t num_threads = std::thread::hardware_concurrency() * 2;
    const size_t total_length = end - begin;
    if (total_length < num_threads * 2) {
        return opt_count(begin, end, target);
    }
    const size_t chunk_size = (total_length + num_threads - 1) / num_threads;
    std::vector<std::future<uint64_t>> futures;
    futures.reserve(num_threads);
    for (size_t i = 0; i < num_threads; ++i) {
        const char* chunk_begin = begin + (i * chunk_size);
        const char* chunk_end = std::min(end, chunk_begin + chunk_size);
        futures.emplace_back(std::async(std::launch::async, opt_count, chunk_begin, chunk_end, target));
    }
    uint64_t total_count = 0;
    for (auto& future : futures) {
        total_count += future.get();
    }
    return total_count;
}
```

In a different file I allocate a buffer with `new`, align it, time each iteration of the `opt_count_parallel` call, and print its output. I have tried using both `std::thread` and `std::future`, and both give more or less the same result. Here is the godbolt diff view: https://godbolt.org/z/9P87bndsb . I don't see much difference in the assembly, but I'm not knowledgeable enough to understand the small differences. I've also tried assigning `avx2_Target` outside `opt_count`, in `opt_count_parallel`, which made no difference. I looked at GCC's `-fopt-info` and the output was the same on both occasions. I've also tried force inlining and noalign, but again no noticeable difference. I've also tried debugging/profiling, but it's a bit of an annoyance, since the speedup is lost at -O0 and profiling just shows that everything takes uniformly longer to execute.
Using `static` on an AVX2 counter function increases performance ~10x in an MT environment without any change in compiler optimizations
|c++|multithreading|optimization|avx2|
Since you are open to other options, I thought I could answer your question in Python, if that's OK. Here's how you can do it with seaborn, a nice data visualization package that has a lot of helpful functions for statistics and ML. First, to get you started with Python, I would recommend [anaconda][1], as its installation already includes a ton of nice packages which are all compatible with each other. After getting acquainted, you can import the following packages for the example:

    import seaborn as sns  # nice plotting
    %matplotlib notebook
    import matplotlib.pyplot as plt  # also for plotting; the line above is a magic command to define the backend
    import pandas as pd  # for data handling

I defined the data without Afghanistan for now, as it seems to be a bad outlier that negatively affects the regression models; I'll show that at the end. For now, here are the results without AFG:

    data = {
        'Location': ['Albania', 'Albania', 'Algeria', 'Algeria', 'Angola', 'Angola',
                     'Antigua and Barbuda', 'Antigua and Barbuda', 'Argentina', 'Argentina'],
        'Year': [1990, 2020, 1990, 2020, 1990, 2020, 1990, 2020, 1990, 2020],
        'A': [76.4334, 79.6756, 68.5167, 75.9125, 82.6756, 83.1922, 73.6497, 80.7452, 75.9699, 81.3155],
        'B': [32.87494, 8.27645, 205.81979, 77.69479, 1101.50579, 221.90649, 58.0267, 21.24795, 87.81427, 44.90008]
    }  # define the data without Afghanistan

    df = pd.DataFrame(data)  # generate dataframe

    plt.figure()  # figure
    sns.scatterplot(data=df, x='A', y='B', hue='Location', style='Year')  # scatter the data, style by year, colour by country
    sns.regplot(data=df[df['Year'] == 1990], x='A', y='B', scatter=False, color='blue')  # plot model for 1990
    sns.regplot(data=df[df['Year'] == 2020], x='A', y='B', scatter=False, color='orange')  # plot model for 2020
    plt.grid()  # add grid
    plt.legend(loc="lower right", ncols=3)  # and legend

This leads to the following plot:

[![regression plot without AFG][2]][2]

Of course, if you have the data in a better format, you can use `pd.read_excel()` or `pd.read_csv()` to get the data from saved files. Check the data with Afghanistan here:

[![Plots with AFG][3]][3]

As you can see, the regression primarily captures the variance between AFG and the other groups. This can be solved, for example, by using [robust regression][4] instead. I hope this answer encourages you to start learning Python. The advantage is that there are a lot of resources these days to get started with and even master Python. Also, ChatGPT can help you get started (you will come back complaining about it anyway ;) )

  [1]: https://www.anaconda.com/
  [2]: https://i.stack.imgur.com/DS0Kh.png
  [3]: https://i.stack.imgur.com/lsVqM.png
  [4]: https://en.wikipedia.org/wiki/Robust_regression
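To illustrate the robust-regression idea mentioned above, here is a minimal sketch of iteratively reweighted least squares with Huber weights, using only numpy. The tuning constant `k = 1.345` and the synthetic data are my own assumptions for the demo, not taken from the question; a library implementation (e.g. statsmodels' RLM) would be the practical choice.

```python
import numpy as np

def huber_irls(x, y, k=1.345, n_iter=50):
    """Fit y ~ a + b*x with Huber weights via iteratively reweighted least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start from ordinary least squares
    for _ in range(n_iter):
        r = y - X @ beta                           # residuals under current fit
        s = max(np.median(np.abs(r)) / 0.6745, 1e-8)  # robust scale estimate
        # Huber weights: 1 for small residuals, shrinking for large ones
        w = np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

# One extreme outlier (like AFG in the data above) barely moves the robust fit:
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[0] = 1000.0  # outlier
intercept, slope = huber_irls(x, y)
```

An ordinary least-squares fit on the same data would be dragged far away from slope 2 by the single outlier; the reweighting loop progressively discounts it instead.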
Javascript is recording if someone scrolls more than 400. If someone scrolls more than 400, I want to record this event in PHP somehow; here I will just alert it. So, if someone scrolls more than 400, there should be an alert coming from the PHP variable, and not the JS variable. (Actually, I just want this function to be triggered once, and I want to record the maximum scroll depth and INSERT that value into a database table, but a solution to my question above will be good enough.) Right now, my code is alerting "PHP: scroll> 400 just occured!" even though no scroll occurred. Can someone provide a working Ajax solution? This is my code in header.php:

```
<?php
echo '
<script type="text/javascript">
window.onscroll = function() {
    var scrollLimit = 400;
    if (window.scrollY >= scrollLimit) {
        alert("Javascript: scroll> 400 just occured!")
    }
};
</script>';

// $FromJStoPHP should trigger when window.scrollY >= scrollLimit
if (window.scrollY >= scrollLimit) {
    $FromJStoPHP='PHP: scroll> 400 just occured!';
    echo '<script type="text/javascript">alert("'.$FromJStoPHP.'");</script>';
}
?>
```
I am trying to generate a signature with the code below, to be used from the mobile side with the Apple App Store, but the generated signature is invalid.

Apple link: https://developer.apple.com/documentation/storekit/in-app_purchase/subscriptions_and_offers/generating_a_signature_for_promotional_offers

My code:

```
var nonce = Guid.NewGuid().ToString().ToLower();
long timestamp = (long)DateTime.UtcNow.Subtract(new DateTime(1970, 1, 1)).TotalMilliseconds;
string payload = request.AppBundleID + "\u2063" + request.KeyIdentifier + "\u2063" + request.ProductIdentifier + "\u2063"
               + request.OfferIdentifier + "\u2063" + request.ApplicationUsername + "\u2063" + nonce + "\u2063" + timestamp;

string keyP8Base64 = System.IO.File.ReadAllText(Path.Combine(webHostEnvironment.WebRootPath, "FileName.p8"))
    .Replace("-----BEGIN PRIVATE KEY-----", "")
    .Replace("-----END PRIVATE KEY-----", "")
    .Replace("\n", "")
    .Replace("\r", "");
byte[] keyP8Bytes = Convert.FromBase64String(keyP8Base64);
var privateKey = CngKey.Import(keyP8Bytes, CngKeyBlobFormat.Pkcs8PrivateBlob);

using (ECDsaCng dsa = new ECDsaCng(privateKey))
{
    dsa.HashAlgorithm = CngAlgorithm.Sha256;
    var unsignedData = Encoding.UTF8.GetBytes(payload);

    // Sign data
    var signature = dsa.SignData(unsignedData);

    // Convert signature to ASN.1 format
    var r = new BigInteger(signature.Take(32).ToArray(), isUnsigned: true, isBigEndian: true);
    var s = new BigInteger(signature.Skip(32).ToArray(), isUnsigned: true, isBigEndian: true);

    var rBytes = r.ToByteArray();
    var sBytes = s.ToByteArray();
    if (rBytes.Length > 1 && rBytes[0] == 0) rBytes = rBytes.Skip(1).ToArray();
    if (sBytes.Length > 1 && sBytes[0] == 0) sBytes = sBytes.Skip(1).ToArray();

    var derSignature = new byte[6 + rBytes.Length + sBytes.Length];
    derSignature[0] = 0x30; // ASN.1 sequence tag
    derSignature[1] = (byte)(4 + rBytes.Length + sBytes.Length); // Length of the sequence
    derSignature[2] = 0x02; // Integer tag for r
    derSignature[3] = (byte)rBytes.Length; // Length of r
    Buffer.BlockCopy(rBytes, 0, derSignature, 4, rBytes.Length);
    derSignature[4 + rBytes.Length] = 0x02; // Integer tag for s
    derSignature[5 + rBytes.Length] = (byte)sBytes.Length; // Length of s
    Buffer.BlockCopy(sBytes, 0, derSignature, 6 + rBytes.Length, sBytes.Length);

    var derSignatureString = Convert.ToBase64String(derSignature);

    if (dsa.VerifyData(unsignedData, signature))
    {
        return new PromoSignatureResponse
        {
            KeyIdentifier = request.KeyIdentifier,
            Nonce = nonce,
            Signature = derSignatureString,
            Timestamp = timestamp.ToString()
        };
    }
}
```

Example of a generated signature that the App Store rejects as invalid: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvcYI0Wg2vrkuK6jzPNf1qUSkTJeYSNKzu3hJSJhKXuH7c2NMD7QomM5yTdXhs4GFGNmaAPhjS3p3mOXcabfW9g=="
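For reference, a correct raw-to-DER conversion must add a 0x00 padding byte whenever the high bit of r or s is set, because DER integers are signed and the value would otherwise be read as negative. Note also that .NET's `BigInteger.ToByteArray()` with no arguments returns little-endian bytes, while DER integers are big-endian. Here is a minimal sketch in Python of the transformation the C# code above is attempting; it illustrates the DER layout for P-256 (where the body always fits in a single length byte), not Apple's required procedure:

```python
def raw_sig_to_der(raw: bytes) -> bytes:
    """Convert a raw 64-byte ECDSA P-256 signature (r || s) to ASN.1 DER."""
    def encode_int(b: bytes) -> bytes:
        b = b.lstrip(b"\x00") or b"\x00"  # strip redundant leading zeros
        if b[0] & 0x80:                   # high bit set: pad so the value stays positive
            b = b"\x00" + b
        return b"\x02" + bytes([len(b)]) + b

    body = encode_int(raw[:32]) + encode_int(raw[32:])
    return b"\x30" + bytes([len(body)]) + body

# r with its high bit set gets a padding byte; s does not
raw = bytes([0x80] + [0] * 31) + bytes([0x01] * 32)
der = raw_sig_to_der(raw)
```

In the C# snippet, the branch that strips a leading zero exists, but there is no branch that adds one when `rBytes[0] >= 0x80`, which is the mirror-image case.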
Trigger JS when scrolling and record event to PHP
|javascript|php|scroll|
I'm trying to create an APL document (for an Alexa widget) that contains a background image, several text components, and a footer action button. I would like the user to be able to touch anywhere on the widget (outside of the footer action button) to open the corresponding Alexa skill. I added a VectorGraphic with a width and height of 100% each. When I touch anywhere outside of the text components, I receive the "onPress" event. But when I touch on the text components, I don't receive the "onPress" event. Do I need to wrap each of the text components in a TouchWrapper? Or is there a better way to achieve what I'm after? I've included a mockup of my APL document below. Thanks in advance!

```
{
  "type": "APL",
  "version": "2023.2",
  "import": [
    { "name": "alexa-layouts", "version": "1.7.0" }
  ],
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "Container",
        "height": "100%",
        "items": [
          {
            "type": "AlexaBackground",
            "colorOverlay": "true",
            "backgroundImageSource": "https://teamassist-resources.s3.amazonaws.com/stadium_small_500x347.jpg"
          },
          {
            "type": "VectorGraphic",
            "position": "absolute",
            "height": "100%",
            "width": "100%",
            "source": "alexaPhotoOverlayScrim",
            "scale": "fill",
            "onPress": [
              {
                "type": "SendEvent",
                "arguments": ["openSkill"],
                "flags": { "interactionMode": "STANDARD" }
              }
            ]
          },
          {
            "type": "AlexaFooterActionButton",
            "id": "actionButtonId",
            "buttonText": "Text to Phone",
            "position": "absolute",
            "bottom": 0,
            "primaryAction": [
              {
                "type": "SendEvent",
                "arguments": ["send text"],
                "flags": { "interactionMode": "STANDARD" }
              }
            ]
          },
          {
            "type": "Text",
            "id": "eventDateText",
            "text": "2024-06-10",
            "textAlign": "center",
            "paddingTop": "${viewportSizeClass == @viewportClassMediumLarge ? @spacingLarge : @spacingSmall}",
            "fontSize": "${viewportSizeClass == @viewportClassMediumXSmall ? @fontSize2XSmall : @fontSizeSmall}"
          },
          {
            "type": "Text",
            "text": "Sporting FC 2006 vs Red Bulls FC 2006",
            "textAlign": "center",
            "paddingTop": "${viewportSizeClass == @viewportClassMediumLarge ? @spacingLarge : @spacingSmall}",
            "fontSize": "${viewportSizeClass == @viewportClassMediumXSmall ? @fontSize2XSmall : @fontSizeSmall}"
          },
          {
            "type": "Text",
            "text": " ",
            "textAlign": "center",
            "paddingTop": "${viewportSizeClass == @viewportClassMediumLarge ? @spacingLarge : @spacingSmall}",
            "fontSize": "@fontSize2XSmall"
          },
          {
            "type": "Text",
            "text": "Scheels Soccer Park",
            "textAlign": "center",
            "fontStyle": "italic",
            "fontSize": "${viewportSizeClass == @viewportClassMediumXSmall ? @fontSize2XSmall : @fontSizeXSmall}"
          },
          {
            "type": "Text",
            "text": "@ 6:30 p.m. (Field 6)",
            "textAlign": "center",
            "fontStyle": "normal",
            "fontSize": "${viewportSizeClass == @viewportClassMediumXSmall ? @fontSize2XSmall : @fontSizeXSmall}"
          }
        ]
      }
    ]
  }
}
```
C# code for generating a signature for Apple promotional offers is producing an invalid signature
|c#|ios|subscription|storekit|
Hi, I am trying to update database table rows with the same row ID using a PHP for loop, but it ends up setting all the rows to the last iteration's data. When I echo the update query, it shows the correct update statement for each iteration. Below I have included the code and the necessary screenshots.

PHP for loop code:

```
if (!empty($_POST['cycleStartDate']) && !empty($_POST['mainSystemLink']) && !empty($_POST['paymentMethod'])) {
    $invoice_stat = 'no';

    // Assuming you have the initial payment date
    // If payment method is monthly, add 1 month to the current date for the next iteration
    if ($paymentMethod == 'monthly') {
        $description = $_POST['mainSystemLink']; // Assuming the description is the same for all iterations
        $amount = $_POST['payment']; // Assuming the amount is the same for all iterations
        $aId = $_POST['AutoId']; // Assuming the AutoId is the same for all iterations
        $cycleStartDate = $_POST['cycleStartDate'];

        for ($i = 0; $i < 12; $i++) {
            // Construct the query
            $query = "UPDATE `hosting_payment` SET `payment_date` = '$cycleStartDate', `description`='$main_sys_link', `amount`='$payment', `payment_method`='$paymentMethod' WHERE `project_id`='$proj_id'";

            // Execute the query
            $result = mysqli_query($con, $query);

            // Update paymentDate for the next iteration
            $cycleStartDate = date('Y-m-d', strtotime($cycleStartDate . ' +1 month'));
            echo $query;
        }
    }
}
```

Update query echo output:

```
59Record updated successfully
UPDATE `hosting_payment` SET `payment_date` = '2024-01-05', `description`='www.dasisprinters.com1', `amount`='30000', `payment_method`='monthly' WHERE `project_id`='59'
UPDATE `hosting_payment` SET `payment_date` = '2024-02-05', `description`='www.dasisprinters.com1', `amount`='30000', `payment_method`='monthly' WHERE `project_id`='59'
UPDATE `hosting_payment` SET `payment_date` = '2024-03-05', `description`='www.dasisprinters.com1', `amount`='30000', `payment_method`='monthly' WHERE `project_id`='59'
UPDATE `hosting_payment` SET `payment_date` = '2024-04-05', `description`='www.dasisprinters.com1', `amount`='30000', `payment_method`='monthly' WHERE `project_id`='59'
UPDATE `hosting_payment` SET `payment_date` = '2024-05-05', `description`='www.dasisprinters.com1', `amount`='30000', `payment_method`='monthly' WHERE `project_id`='59'
UPDATE `hosting_payment` SET `payment_date` = '2024-06-05', `description`='www.dasisprinters.com1', `amount`='30000', `payment_method`='monthly' WHERE `project_id`='59'
UPDATE `hosting_payment` SET `payment_date` = '2024-07-05', `description`='www.dasisprinters.com1', `amount`='30000', `payment_method`='monthly' WHERE `project_id`='59'
UPDATE `hosting_payment` SET `payment_date` = '2024-08-05', `description`='www.dasisprinters.com1', `amount`='30000', `payment_method`='monthly' WHERE `project_id`='59'
UPDATE `hosting_payment` SET `payment_date` = '2024-09-05', `description`='www.dasisprinters.com1', `amount`='30000', `payment_method`='monthly' WHERE `project_id`='59'
UPDATE `hosting_payment` SET `payment_date` = '2024-10-05', `description`='www.dasisprinters.com1', `amount`='30000', `payment_method`='monthly' WHERE `project_id`='59'
UPDATE `hosting_payment` SET `payment_date` = '2024-11-05', `description`='www.dasisprinters.com1', `amount`='30000', `payment_method`='monthly' WHERE `project_id`='59'
UPDATE `hosting_payment` SET `payment_date` = '2024-12-05', `description`='www.dasisprinters.com1', `amount`='30000', `payment_method`='monthly' WHERE `project_id`='59'
```

Database screenshot:

[![enter image description here][1]][1]

  [1]: https://i.stack.imgur.com/rjgnb.png

Please can you check and let me know the necessary fixes. Thanks for your valuable time!
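For what it's worth, the behaviour described can be reproduced outside SQL. Since each query's WHERE clause matches every row with `project_id` 59, every iteration rewrites all of those rows, and only the final iteration's values survive. A minimal Python sketch of that effect (the row structure here is invented for illustration, not the actual schema):

```python
# Simulate 12 payment rows that all share project_id 59, then replay the loop:
rows = [{"project_id": 59, "payment_date": None} for _ in range(12)]

dates = [f"2024-{m:02d}-05" for m in range(1, 13)]
for d in dates:
    # Like "UPDATE ... WHERE project_id='59'", this touches EVERY matching row,
    # so each iteration overwrites what the previous one wrote.
    for row in rows:
        if row["project_id"] == 59:
            row["payment_date"] = d

# After the loop, every row holds only the last date.
```

To give each row its own date, the WHERE clause would need a condition that identifies exactly one row per iteration (e.g. a primary key), not a value shared by all twelve.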
Widget APL Touch Component Advice
|widget|apl|
@helder-sepulveda's answer has a minor bug: validation will fail when all variables are set to `"false"`. Here's a working example:

```lang-hcl
variable "sc1_default" { default = "true" }
variable "sc2_default" { default = "false" }
variable "sc3_default" { default = "false" }
variable "sc4_default" { default = "true" }

provider "null" {}

locals {
  joined_scs    = join("", [var.sc1_default, var.sc2_default, var.sc3_default, var.sc4_default])
  scs_are_valid = replace(local.joined_scs, "false", "") == "true"
}

resource "null_resource" "validation" {
  lifecycle {
    precondition {
      condition     = local.scs_are_valid
      error_message = "One and only one SC should be set to true."
    }
  }
}
```

Running `terraform plan`:

```lang-txt
Planning failed. Terraform encountered an error while generating this plan.

│ Error: Resource precondition failed
│
│   on main.tf line 27, in resource "null_resource" "validation":
│   27:     condition     = local.scs_are_valid
│     ├────────────────
│     │ local.scs_are_valid is false
│
│ One and only one SC should be set to true.
```
The document title in the bottom right of that second screenshot is shown in italics, which means that the document does not actually exist. The ID is just shown in the Firestore console because you have subcollections with data under it. In the fields section of the console, you'll also find a message explaining this:

> This document does not exist, it will not appear in queries or snapshots. [Learn more](https://firebase.google.com/docs/firestore/using-console#non-existent_ancestor_documents)

And since the document doesn't actually exist, calling `exists` on that path correctly returns false. If you want it to return true, you'll have to ensure that the document actually exists. Also see:

* https://stackoverflow.com/questions/48137582/firestore-db-documents-shown-in-italics
* https://stackoverflow.com/questions/52826365/parent-document-is-being-marked-as-deleted-italics-by-default
* https://stackoverflow.com/questions/54499104/problem-with-getting-list-of-document-collection-from-directory
```
\documentclass[12pt]{article}
\usepackage{emoji}

\begin{document}

These are colour emojis using the \texttt{emoji} package and LuaLaTeX:
\emoji{leaves} \emoji{rose}

You can use emoji-modifiers:
\emoji{woman-health-worker-medium-skin-tone}
\emoji{family-man-woman-girl-boy}
\emoji{flag-malaysia}
\emoji{flag-united-kingdom}

\end{document}
```

Also, set the compiler to **LuaLaTeX** from Menu --> Settings

[![enter image description here][1]][1]

[Reference][2]

  [1]: https://i.stack.imgur.com/PULgl.png
  [2]: https://www.overleaf.com/learn/latex/Questions/Inserting_emojis_in_LaTeX_documents_on_Overleaf
Edited from original answer: It turns out they added support in Android API 26 to make a USSD call. The function is `TelephonyManager.sendUssdRequest`. It does not exist on devices older than API 26, and which USSD commands are supported is not specified; I would rely only on the bare basics and not on advanced menu functionality. Still, if you have any other way of getting that information, use that instead.

In reality, USSD is kind of a dead technology. It existed in pre-data days to provide a limited ability to make network calls. In the modern day, when every phone has data, there's no advantage to using it over a web service. Remember that USSD codes vary by OEM, carrier, etc.; they are not universal. So by using any USSD code at all, you are limiting your app to working on only a subset of phones, most likely those from a specific OEM or carrier. Expect a low Play Store rating, because you'll be given one star by everyone without that model/carrier who downloads your app.
If you want a single plot with two y-axes, it's possible with ggplot2 but requires a bit of fiddling.

1. You can add a second y-axis with a magnitude scaled relative to the first one using

```
+ scale_y_continuous(
    sec.axis = sec_axis(~./6000, name = "Belysning")
)
```

Here, I'm taking the first axis to go up to 6000. I would recommend manually setting the first y-axis limit so the two are 100% congruent.

2. Now you add the points/lines for "Belysning" to the plot. Note that they're plotted relative to y-axis 1, so you need to rescale them.

```
+ geom_line(aes(y = Belysning * 6000), color = "gold2")
```

3. Combined, it should look something like:

```
plot_data <- cbind(complete_july_data, complete_august_data)
plot_data <- full_join(result_df, plot_data, join_by(Dato == DATE))

y_max <- 6000

plot <- ggplot(plot_data, aes(x = Dato)) +
  geom_bar(aes(y = Count, fill = MANUAL.ID.), stat = "identity", width = 0.7, position = "dodge") +
  geom_line(aes(y = Belysning * y_max), color = "gold2") +
  geom_point(aes(y = Belysning * y_max), color = "goldenrod", size = 2) +
  labs(title = "Kumulative kurver for juli og august ved Borupgård 2018", x = "Dato", fill = "Art") +
  scale_y_continuous(
    limits = c(0, y_max),
    name = "Kumulative observationer",
    sec.axis = sec_axis(~./y_max, name = "Belysning")
  ) +
  scale_x_date(limits = as.Date(c("2018-07-26", "2018-08-29")), date_breaks = "5 day", date_labels = "%Y-%m-%d") +
  theme_bw() +
  theme(
    axis.text.x = element_text(angle = 45, hjust = 1),
    axis.title.y.right = element_text(color = "gold2")
  )
```

I tried simplifying by merging all three datasets into one. If that causes issues, just add them in singly like you were doing. I created a sample dataset to look like yours, and it gives me this:

[![enter image description here][1]][1]

  [1]: https://i.stack.imgur.com/3WrIh.png
PHP for loop does not update each iteration into MySQL DB table
|php|mysql|loops|for-loop|sql-update|
I'm using a scrollable widget like `ListView.builder` to display a list of `VideoPlayer` widgets. However, when scrolling, the `VideoPlayerController` widgets stop initializing after scrolling through a few. My attempt to fix this issue was to dispose of every `VideoPlayerController` behind the widget currently being displayed. That fixed the problem, but now I need a way to re-display the previous `VideoPlayer` widget if the user decides to scroll back. Since its controller is already disposed, I get this error:

```
FlutterError (A VideoPlayerController was used after being disposed. Once you have called dispose() on a VideoPlayerController, it can no longer be used.)
```

What's the best way I could achieve this? The only solution I could think of is to recreate a new `VideoPlayerController` and re-initialize that controller, but is there any better way?
Best way to reuse a VideoPlayerController after disposing it
|flutter|dart|video-player|
The [old docs](https://legacy.reactjs.org/docs/hooks-custom.html#useyourimagination) give a rough idea of how `useReducer` is implemented:

    function useReducer(reducer, initialState) {
      const [state, setState] = useState(initialState);

      function dispatch(action) {
        const nextState = reducer(state, action);
        setState(nextState);
      }

      return [state, dispatch];
    }

From the looks of it, it seems like calling multiple `dispatch`es sequentially will mimic sequential calling of `setState`. Now we know that calling `setState` multiple times without using prevState will cause bugs, as these calls are batched, e.g. https://codesandbox.io/p/sandbox/react-dev-s27ysl?file=%2Fsrc%2FApp.js : three `setState` calls in a row don't increase the counter by 3. A similar program with `dispatch`, however, causes the counter to increase by 3: https://jsfiddle.net/ynfem670/1/ So this makes me believe that `useReducer` leverages `setState(function(prevState){...})` as follows:

    function useReducer(reducer, initialState) {
      const [state, setState] = useState(initialState);

      function dispatch(action) {
        const nextState = reducer(state, action); // sort of uses prevState
        setState(prevState => nextState); // Note that reducer already applies
                                          // logic to state, so we don't need to mutate prevState.
      }

      return [state, dispatch];
    }

**Question 1:** How is `useReducer` implemented internally? Not the exact code, but the general idea or pseudocode.

--------

On a separate note, I don't see any way to grab prevProps in either `useState` or `dispatch`. In class-based React we have:

    this.setState((state, props) => ({
      counter: state.counter + props.increment
    }));

The hook-based `setState` and `dispatch` don't get a props parameter:

    const [state, setState] = useState({});
    setState(prevState => {
      // Object.assign would also work
      return {...prevState, ...updatedValues};
    });

One could argue that in function components, on every render, we always get a live binding to props. In that case, why don't we use `this.props` in `this.setState((state, props) => {/* this.props */})`?
I'm having trouble reading a file that contains Base64 encoded items. Everything's OK until I try to add new items to the file and then read it. This is the error:

> Error: System.FormatException: The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters.
> at System.Convert.FromBase64_Decode(Char* startInputPtr, Int32 inputLength, Byte* startDestPtr, Int32 destLength)
> at System.Convert.FromBase64CharPtr(Char* inputPtr, Int32 inputLength)
> at System.Convert.FromBase64String(String s)
> Error: 49652976

This is the code where I write the new line:

```
System.IO.File.AppendAllText(
    @"C:\Users\Public\Documents\Jurassic Command Line\" + _isla + "_Dinosaurs.jcl",
    System.Convert.ToBase64String(System.Text.Encoding.ASCII.GetBytes("\n" + _especie)));
MessageBox.Show(System.Convert.ToBase64String(System.Text.Encoding.ASCII.GetBytes("\n" + _especie)));
```

And this is the code where the file is read:

```
comboBox.Items.Clear();
_data = File.ReadAllText(@"C:\Users\Public\Documents\Jurassic Command Line\Nublar_Dinosaurs.jcl");
_dataDecoded = Decodificar(_data);
File.WriteAllText(_filePath, _dataDecoded);
string[] lines = File.ReadAllLines(_filePath);

static string Decodificar(string data)
{
    byte[] encodedDataAsBytes = Convert.FromBase64String(data);
    string returnValue = Encoding.ASCII.GetString(encodedDataAsBytes);
    return returnValue;
}
```

I've tried everything I could find, but nothing works.
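As an aside on the failure mode: appending a second Base64 blob to a file that already holds one produces a concatenation that is generally not itself valid Base64, because the first blob's `=` padding ends up in the middle of the string, which is exactly what the FormatException message complains about. A common pattern is to keep one Base64 string per line and decode line by line. A minimal Python sketch of that pattern (the Python `base64` module stands in for .NET's `Convert` class here, purely for illustration, and the item names are invented):

```python
import base64

# Hypothetical items standing in for _especie values.
especies = ["Tyrannosaurus", "Velociraptor"]

# Writing: store one Base64 string per line so each line decodes on its own.
lines = [base64.b64encode(e.encode("ascii")).decode("ascii") for e in especies]
file_content = "\n".join(lines)

# Reading: decode per line, never the whole file as a single Base64 string.
decoded = [base64.b64decode(line).decode("ascii") for line in file_content.split("\n")]
```

This is a sketch of the general approach, not a verified fix for the full program above; the key point is that the decode boundary must match the encode boundary.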
This is my code ``` import React, { useState } from 'react'; import { Link } from 'react-router-dom'; import { SidebarData } from './SidebarData'; import { LuMenu } from "react-icons/lu"; import { RxCross2 } from "react-icons/rx"; function Navbar() { const [sidebar, setSidebar] = useState(false) const showSidebar = () => setSidebar(!sidebar) return ( <> <div className='navbar'> <Link to="#" className='menu-bars'> <h1><LuMenu onClick={showSidebar}/> </h1> </Link> </div> <nav className={sidebar ? "nav-menu active" : "nav-menu"}> <ul className="nav-menu-items" onClick={showSidebar}> <li className='navbar-toggle'> {/* <Link to="#" className='menu-bars'> <RxCross2 /> </Link> */} </li> {SidebarData.map((item, index) => { return ( <li key={index} className={item.cName}> {/* <Link to={item.path}> {item.icon} <span>{item.title}</span> </Link> */} </li> ) })} </ul> </nav> </> ) } export default Navbar; import React from 'react'; import { AiFillHome } from "react-icons/ai"; import { LuPencilLine } from "react-icons/lu"; export const SidebarData = [ { title: "Home", path: "/", icon: <AiFillHome/>, cName: "nav-text" }, { title: "Entry", path: "/entry", icon: <LuPencilLine/>, cName: "nav-text" } ] import React from 'react'; import { BrowserRouter as Router, Routes, Route } from 'react-router-dom'; import Home from './pages/Home'; import Entry from './pages/Entry'; import Header from './components/Header'; function App() { return ( <div className="App"> <Header/> <Router> <Routes> <Route path="/" element={<Home />} /> <Route path="/entry" element={<Entry />} /> </Routes> </Router> </div> ); } export default App; ``` This is the error that i am getting. 
```
[Error] TypeError: Right side of assignment cannot be destructured
LinkWithRef (bundle.js:37906)
renderWithHooks (bundle.js:24762)
updateForwardRef (bundle.js:27331)
callCallback (bundle.js:14358)
dispatchEvent
invokeGuardedCallbackDev (bundle.js:14402)
invokeGuardedCallback (bundle.js:14459)
beginWork$1 (bundle.js:34323)
performUnitOfWork (bundle.js:33571)
workLoopSync (bundle.js:33494)
renderRootSync (bundle.js:33467)
performConcurrentWorkOnRoot (bundle.js:32862:93)
workLoop (bundle.js:43931)
flushWork (bundle.js:43909)
performWorkUntilDeadline (bundle.js:44146)

[Error] The above error occurred in the <Link> component:
LinkWithRef@http://localhost:3000/static/js/bundle.js:37893:14
div
Navbar@http://localhost:3000/static/js/bundle.js:249:80
div
div
Header
div
App

Consider adding an error boundary to your tree to customize error handling behavior.
Visit https://reactjs.org/link/error-boundaries to learn more about error boundaries.
logCapturedError (bundle.js:26865)
(anonymous function) (bundle.js:26890)
callCallback (bundle.js:22796)
commitUpdateQueue (bundle.js:22814)
commitLayoutEffectOnFiber (bundle.js:30860)
commitLayoutMountEffects_complete (bundle.js:31953)
commitLayoutEffects_begin (bundle.js:31942)
commitLayoutEffects (bundle.js:31888)
commitRootImpl (bundle.js:33797)
commitRoot (bundle.js:33677)
finishConcurrentRender (bundle.js:32997)
performConcurrentWorkOnRoot (bundle.js:32925)
workLoop (bundle.js:43931)
flushWork (bundle.js:43909)
performWorkUntilDeadline (bundle.js:44146)
```

I observed that when I comment out the `<Link>` part of the code, I do not get the error; otherwise I get the error mentioned above. Am I doing something wrong? I haven't found this error anywhere, so I am posting it here. I tried using `useNavigation`, but I could not implement it.
"Right side of assignment cannot be destructured" when using the Link component from 'react-router-dom' in React
|reactjs|routes|react-router|react-router-dom|react-18|
For Expect <sup>(which uses [tag:tcl])</sup> the `*` char as in `/home/john/TestDir/*` has no special meaning. You can explicitly use a shell to run your command: ~~~ spawn bash -c "rsync -rv /home/john/TestDir/* jim@192.168.0.20:/home/jim/TestDir" # ^^^^^^^ ~~~
``` # Import the AWS PowerShell module Import-Module AWSPowerShell -Force # Define the S3 bucket and object key $bucketName = "bucket-name" $objectKey = "new ad account/fd new ad account V4.ps1" # Define the local file path where you want to save the downloaded code $localFilePath = "D:\Script Test\fd new ad account V4.ps1" # Replace with your desired local file path # Download the code from S3 try { Read-S3Object -BucketName $bucketName -Key $objectKey -File $localFilePath -ErrorAction Stop Write-Host "File downloaded successfully from S3." } catch { Write-Error "Failed to download file from S3: $_" exit 1 } # Execute the downloaded code try { & $localFilePath Write-Host "Script executed successfully." } catch { Write-Error "Failed to execute script: $_" exit 1 } exit 0 ``` Make sure that the IAM role attached to your WorkSpace has the necessary permissions to access the S3 bucket. Your policy looks good but replace "arn:aws:s3:::your-bucket-name/*" with the actual ARN and make sure the role attached to your WorkSpace has the rights. Also, replace "arn:aws:s3:::your-bucket-name/*" in your bucket policy. > By default, IAM users don't have permissions for WorkSpaces resources > and operations. To allow IAM users to manage WorkSpaces resources, you > must create an IAM policy that explicitly grants them permissions, and > attach the policy to the IAM users or groups that require those > permissions. > > To provide access, add permissions to your users, groups, or roles: > > Users and groups in AWS IAM Identity Center: > > Create a permission set. Follow the instructions in Create a > permission set in the AWS IAM Identity Center User Guide. > > Users managed in IAM through an identity provider: > > Create a role for identity federation. Follow the instructions in > Creating a role for a third-party identity provider (federation) in > the IAM User Guide. > > IAM users: > > Create a role that your user can assume. 
Follow the instructions in > Creating a role for an IAM user in the IAM User Guide. > > (Not recommended) Attach a policy directly to a user or add a user to > a user group. Follow the instructions in Adding permissions to a user > (console) in the IAM User Guide. > > For more information about IAM policies, see Policies and Permissions > in the IAM User Guide guide. > > WorkSpaces also creates an IAM role, workspaces_DefaultRole, which > allows the WorkSpaces service access to required resources. https://docs.aws.amazon.com/workspaces/latest/adminguide/workspaces-access-control.html#workspaces-iam-role
In my main batch file I ask the user for an input (limited to certain strings) and store it as a variable. I then need to append a string to that variable to form the name of another variable, and return the value of that variable, which is stored in another file.

A fruity example of what I'm trying to do, for simplicity. In my config file I would have something like this:

```
set banana_colour=yellow
set orange_colour=orange
set strawberry_colour=red
etc...
```

Then in my main file I would have:

```
set /p fruit="Please input a fruit: "
```

I would then like to be able to return the value of a variable in the config file by concatenating the user input with a string (in this example `_colour`). So if the user entered *banana*, I would echo the value of the variable named `banana_colour`, which in this case would be `yellow`.

I tried to set a new variable which was the concatenation of the input and the string, and then echo the variable with that name. For example:

```
set colour=%fruit%_colour
echo %colour%
```

But of course, that does not return the value of the variable with that name; it just returns the name: `banana_colour` instead of `yellow`.
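For reference, a sketch of the two usual indirection techniques in batch for exactly this situation (untested here; the first form needs delayed expansion enabled, and both assume the config file has already been called so the `_colour` variables exist):

```batch
setlocal enabledelayedexpansion
set /p fruit="Please input a fruit: "

rem %fruit% is expanded first, so !%fruit%_colour! becomes !banana_colour!,
rem which delayed expansion then resolves to its value.
echo !%fruit%_colour!

rem Alternative without delayed expansion: CALL makes cmd parse the line twice.
call echo %%%fruit%_colour%%
```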
How can I return the value of a batch variable whose name is built from another variable concatenated with a string?
|batch-file|variables|concatenation|
When using onDrag in SwiftUI on Mac how can I detect when the dragged object has been released anywhere?
Turns out, if you're using the Python Bolt framework, this is pretty trivial. Follow this example: https://slack.dev/bolt-python/tutorial/getting-started

Then add a handler for `file_shared` like so:

```
@app.event("file_shared")
def handle_file_shared_events(body, logger, client: WebClient, context):
    ....
```

You'll need to add OAuth permissions for `files:read`.
There are a few minor things to fix here.

- You are missing a `+` in `ToString`.
- You need to use `value` in your setters instead of `newRadius`, `newColor` and `newSpeed`.
- You are using the same name for the property and its backing field. In the old days we would prefix the field with an underscore, as I have in the working example below. Consider using the newer auto-property format instead, e.g. `public int Speed { get; set; }`.

```
using System;

namespace Fans
{
    class Fan
    {
        private int _speed;
        private double _radius;
        private string _color;

        // default constructor
        public Fan()
        {
            this._speed = 1;
            this._radius = 1.53;
            this._color = "green";
        }

        // convenience constructor
        public Fan(double newRadius)
        {
            this._radius = newRadius;
        }

        public override string ToString()
        {
            return "A " + _radius + " inch " + _color + " fan at a speed of " + _speed;
        }

        public int speed
        {
            get { return _speed; }
            set { _speed = value; }
        }

        public double radius
        {
            get { return _radius; }
            set { _radius = value; }
        }

        public string color
        {
            get { return _color; }
            set { _color = value; }
        }
    }
}
```
The strings are of the form:

```none
Item 5. Some text: 48
Item 5E. Some text,
```

The result of the search should produce 4 groups as follows.

```none
Group(1) = "#"
Group(2) = "5" or "5E"
Group(3) = "Some text"
Group(4) = "48" or ","
```

I have tried:

```
r"(.*)Group (.*)\.(.+)(?::(.+))|(,)"
r"(.*)Group (.*)\.(.+)(?:(?::(.+))|(,))"
r"(.*)Group (.*)\.(.+)(:(.+))|(,)"
```

I have tried a variety of ways to solve this, but none work as required. What should the regular expression be?
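For concreteness, one pattern that yields four groups on the two sample lines (a sketch; it assumes group 1 should capture the leading word, which is "Item" in the samples, and that the ": " separator itself is dropped):

```python
import re

# Group 1: leading word, 2: item number, 3: text, 4: "48" or ","
# (?::\s*)? optionally consumes the ": " separator without capturing it.
pattern = re.compile(r"^(\w+)\s+(\w+)\.\s+(.*?)(?::\s*)?(\d+|,)$")

m1 = pattern.match("Item 5. Some text: 48")
print(m1.groups())  # ('Item', '5', 'Some text', '48')

m2 = pattern.match("Item 5E. Some text,")
print(m2.groups())  # ('Item', '5E', 'Some text', ',')
```

The lazy `(.*?)` keeps the trailing `: 48` or `,` out of group 3, and the single `(\d+|,)` alternation puts either terminator into the same group number.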
How to create a regular expression to partition a string that terminates in either ": 45" or ",", without the ": "
I've been stuck on this error for two days and couldn't get any sleep. I don't understand what the problem is; Redux and Redux Toolkit have migrated to a newer version and I feel lost in the code. Here is formSlice.js with the fetchForms action:

```
export const fetchForms = createAsyncThunk('data/fetchAll', async () => {
  const response = await fetch("http://localhost:5000/autoroute/");
  const data = await response.json();
  return data;
});

const initialState = {
  data: [],
  status: 'idle',
  error: null
};

const formSlice = createSlice({
  name: 'data',
  initialState:{},
  extraReducers: (builder) => {
    builder
      .addCase(fetchForms.fulfilled, (state, action) => {
        state.data = action.payload;
      })
  },
});

// and this is the home component where I'm trying to display the data
const dispatch = useDispatch();
const data = useSelector((state) => state.data);
console.log(data)

useEffect(() => {
  dispatch(fetchForms());
}, [dispatch]);

return (
  <div>
    <h2>Form Data</h2>
    // and this is where the error occurs
    {data && data.map((form) => (
      <div key={form._id}>
        <p>Type of Accident: {form.accident}</p>
        <p>Route Type: {form.route}</p>
        <p>Lane: {form.lane}</p>
        <p>Number of Fatalities: {form.nbrmort}</p>
        <p>Number of Injuries: {form.nbrblesse}</p>
        {/* Other form details */}
      </div>
    ))}
  </div>
);
```
How is useReducer implemented in ReactJs source code?
|javascript|reactjs|react-hooks|state|
In Angular 17, I use ```html @for (notification of notifications; track notification) { <i>{{notification.quotationId}}</i> } ``` but in WebStorm I get the warning > Unresolved variable quotationId In TypeScript, the type of the property `notifications` is set to `PendingQuotations`, which is: ```typescript export type PendingQuotations = { quotationId: number, sender: string, date: string }[] ``` Why do I get this warning? This is how I set the property: ```typescript export class HeaderComponent implements OnInit { public notifications?: PendingQuotations; constructor(private generalService: GeneralService) { } ngOnInit(): void { this.$getNotifications().subscribe((pendingQuotations: PendingQuotations) => { if (pendingQuotations.length > 0) { this.notifications = pendingQuotations; } }); } $getNotifications(): Observable<PendingQuotations> { return this.generalService.get<Response>('/quotation/pending-quotations') .pipe( map((res: Response) => res.messages[0] as unknown as PendingQuotations) ); } } ``` This is is the value of `res`: ```json { "code": "PENDING_QUOTATIONS", "status": "OK", "timestamp": "13.01.2024, 14:33:24", "messages": [ [ { "id": 1, "quotationId": 2, "receiver": "receiver@example.com", "sender": "sender@example.com", "date": "2024-01-13T09:27:59.256+00:00" } ], "I am a string" ], "token": null } ```
I am trying to add the AWS SES mail service to my Spring Boot application. Following tutorials, I have added the following dependencies to my pom file:

```
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-aws</artifactId>
    <version>2.1.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-ses</artifactId>
    <version>1.11.505</version>
</dependency>
<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-web-api</artifactId>
    <version>7.0</version>
    <scope>provided</scope>
</dependency>
```

When I run the following configuration in my Spring Boot application, I get the error below. I could not find the reason. Is it a problem with the dependencies?

```
Caused by: java.lang.NoClassDefFoundError: javax/mail/MessagingException
	at com.xr.crosscheck.config.SESConfig.javaMailSender(SESConfig.java:38) ~[classes/:na]
	at com.xr.crosscheck.config.SESConfig$$SpringCGLIB$$0.CGLIB$javaMailSender$1(<generated>) ~[classes/:na]
	at com.xr.crosscheck.config.SESConfig$$SpringCGLIB$$FastClass$$1.invoke(<generated>) ~[classes/:na]
	at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:258) ~[spring-core-6.1.2.jar:6.1.2]
	at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:331) ~[spring-context-6.1.2.jar:6.1.2]
	at com.xr.crosscheck.config.SESConfig$$SpringCGLIB$$0.javaMailSender(<generated>) ~[classes/:na]
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na]
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
	at java.base/java.lang.reflect.Method.invoke(Method.java:568) ~[na:na]
	at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:140) ~[spring-beans-6.1.2.jar:6.1.2]
	... 67 common frames omitted
Caused by: java.lang.ClassNotFoundException: javax.mail.MessagingException
	at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641) ~[na:na]
	at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188) ~[na:na]
	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520) ~[na:na]
	... 78 common frames omitted
```

In my configuration file I try to create the AWS and JavaMailSender beans, and the above error occurs. My configuration file is:

```
@Configuration
public class SESConfig {

    @Value("${cloud.aws.credentials.access-key}")
    private String accessKey;

    @Value("${cloud.aws.credentials.secret-key}")
    private String secretKey;

    @Value("${cloud.aws.region.static}")
    private String region;

    @Bean
    public AmazonSimpleEmailService amazonSimpleEmailService() {
        BasicAWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
        return AmazonSimpleEmailServiceClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withRegion(Regions.EU_CENTRAL_1)
                .build();
    }

    @Bean
    public JavaMailSender javaMailSender(AmazonSimpleEmailService amazonSimpleEmailService) {
        return new SimpleEmailServiceJavaMailSender(amazonSimpleEmailService);
    }
}
```

Any advice on why this is happening would be helpful.
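For reference on the dependency question: the `javaee-web-api` artifact is declared with `<scope>provided</scope>`, so its `javax.mail` classes are not on the runtime classpath, which would explain the `ClassNotFoundException`. Would adding a runtime JavaMail dependency, for example the commonly used coordinates below (an assumption on my part, not something I have verified in this project), be the right fix?

```xml
<dependency>
    <groupId>com.sun.mail</groupId>
    <artifactId>javax.mail</artifactId>
    <version>1.6.2</version>
</dependency>
```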
Thank you all for pitching in. You were correct. There was nothing actually wrong with my code. I solved the memory depletion issue by turning off CacheBlobs in my query component. I'm using IBDAC, and all I needed to do was set that property to False. That allowed me to copy 107,389 documents into the database in a little over one hour on a fairly slow Windows 11 machine (running the app in the IDE). It seems I had been putting data into cache faster than it was being removed. Again, thank you for all your comments.
I am upgrading from SQLAlchemy 1.4 to SQLAlchemy 2 and I am running into an error: `Can't add additional column 'machine_id' when specifying __table__` when trying to create a versioned history table with inherited columns. Following the example [SQLAlchemy example](https://github.com/zzzeek/sqlalchemy/blob/main/examples/versioned_history/history_meta.py): ``` versioned_cls = type( "%sHistory" % cls.__name__, bases, { "_history_mapper_configured": True, "__table__": history_table, "__mapper_args__": dict( inherits=super_history_mapper, polymorphic_identity=local_mapper.polymorphic_identity, polymorphic_on=polymorphic_on, properties=properties, ), }, ) ``` ``` class MachineIdMixin(object): @declared_attr.cascading def machine_id(self) -> Mapped[String]: return mapped_column(String, ForeignKey("machines.id"), nullable=False, index=True) ``` ``` class MyTable(Base, MachineIdMixin, Versioned): __table__ = "mytable" id = Column(String, primary_key=True) ``` When attempting to run however I get the following error: `Can't add additional column 'machine_id' when specifying __table__`. When debugging on that line, the passed in column is `machine_id` whereas `mytable_history` as a column `mytable_history.machine_id`, and it incorrectly identifies as a column to be added. I've also checked: https://docs.sqlalchemy.org/en/20/orm/declarative_mixins.html#using-orm-declared-attr-to-generate-table-specific-inheriting-columns and it seems the mixin definition is correct. Is there something I am doing incorrectly?
SQLAlchemy 2 Can't add additional column when specifying __table__
|python-3.x|sqlalchemy|
I'm still pretty new to coding, not having written a single piece of code in my life until 4 weeks ago, and I have two questions. First, I've been struggling to figure out how to pull specific pieces of data out of a database that contains 3 tables. Secondly, I've been trying to figure out how to display that data in a GUI. All the tutorials I've read and videos I've watched have shown how to enter and delete values from a table using a GUI, but I haven't found anything about retrieving and displaying those values in a GUI. The closest I have gotten is retrieving and displaying the needed information in the terminal.

I'm using Python 3.12.0 along with CustomTkinter to make a GUI for my MySQL 8.3 database. My database is "HardwareDB", containing tables "doitbest", "truevalue", and "reference". The tables contain columns with headings such as SKU, Description, Price, Model Number, PrimaryUPC, etc. The basic idea is that I would scan a UPC barcode or enter a Model or SKU number into an entry box on the GUI and then have all the relevant information from both Do-It Best and True Value pop up in said GUI. This way I can compare the two side by side when doing our weekly orders. The "reference" table should allow me to compare similar, though not necessarily the same, products between the "doitbest" and "truevalue" tables.

When it comes to retrieving specific pieces of data, this is the closest I've come:

```
search = "SELECT Description FROM doitbest, truevalue, reference WHERE PrimaryUPC = UPC_entry.get()"
mycursor.execute(search)
myresult = mycursor.fetchall()

for x in myresult:
    print(x)
```

The idea behind this was that I scan a barcode into the entry box represented by `UPC_entry = customtkinter.CTkEntry`, but this doesn't work. It says, "mysql.connector.errors.ProgrammingError: 1630 (42000): FUNCTION upc_entry.get does not exist." However, if I replace the `UPC_entry.get()` with a barcode, it comes back with the relevant data in the terminal.

The second problem is how to display this data in the GUI rather than just in the terminal. Like I stated above, I haven't been able to find much information about that. For aesthetic reasons, I wanted to use entry boxes/widgets, but instead of using them to enter information, I wanted to retrieve the data and display it in the boxes. My other thought would be to use a combination of frames, background color, and CTkLabel to mimic the look of the entry widgets. I'm not dead set on that, however; I would settle for anything that makes it easy to compare the information. I just didn't know if you guys would have any suggestions for that. Heck, I'm not even dead set on CustomTkinter if you guys have a better suggestion.
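For what it's worth, here is the direction I think I need for the query part: a parameterized query, where the scanned value is passed to the driver separately instead of being pasted into the SQL string. This sketch uses Python's built-in sqlite3 module purely so it runs standalone; the sample row is made up, and mysql.connector uses `%s` as its placeholder where sqlite3 uses `?`:

```python
import sqlite3

# In-memory stand-in for the MySQL database (hypothetical sample data).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE doitbest (PrimaryUPC TEXT, Description TEXT, Price REAL)")
cur.execute("INSERT INTO doitbest VALUES ('012345678905', 'Claw Hammer 16oz', 14.99)")

scanned_upc = "012345678905"  # in the GUI this would come from UPC_entry.get()

# The value is bound to the ? placeholder; it is never part of the SQL text.
cur.execute("SELECT Description, Price FROM doitbest WHERE PrimaryUPC = ?", (scanned_upc,))
row = cur.fetchone()
print(row)
```

With mysql.connector the same call would be `mycursor.execute("SELECT ... WHERE PrimaryUPC = %s", (UPC_entry.get(),))`. For display, a CTkEntry (like a plain Tkinter Entry) can be filled programmatically with `entry.delete(0, "end")` followed by `entry.insert(0, value)`.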
ClassNotFoundException: javax.mail.MessagingException
|spring-boot|amazon-ses|
You can try the following: ```python from vedo import load # https://free3d.com/3d-model/091_aya-3dsmax-2020-189298.html mesh = load("091_W_Aya_100K.obj").texture("tex/091_W_Aya_2K_01.jpg") mesh.lighting('glossy') # change lighting (press k interactively) mesh.show() ``` [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/aQFHy.png
I'm using a scrollable widget like ListView.builder to display a list of VideoPlayer widgets. However, when scrolling, the VideoPlayerController instances stop initializing after I scroll past a few of them.

My attempt to fix this was to dispose of every VideoPlayerController behind the widget currently being displayed. That fixed the initialization problem, but now I need a way to re-display a previous VideoPlayer widget if the user scrolls back, and since its controller is already disposed I get this error:

```
FlutterError (A VideoPlayerController was used after being disposed.
Once you have called dispose() on a VideoPlayerController, it can no longer be used.)
```

What's the best way to achieve this? The only solution I can think of is to recreate a new VideoPlayerController and re-initialize it, but is there a better way?

Thanks in advance
|postgresql|
Mistakenly, you have used the same `id` for both elements; an `id` must be unique in the document, so you cannot use it twice. To get this working, replace the `id` attribute with a `class`, point your `querySelectorAll` at that class, and that's all. Give this snippet a try to see how it works!

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-js -->

//ANIMATING THE MAIN IMAGE ON THE START PAGE
let myPanel = document.getElementById("panel");
let subpanel = document.getElementById("panel-container");

const projectPanel = document.querySelectorAll(".panel-project");
// const projectSubpanel = document.querySelector("#project-img");

let mouseX, mouseY;
let transformAmount = 3;

function transformPanel(panel, subpanel, mouseEvent) {
  mouseX = mouseEvent.pageX;
  mouseY = mouseEvent.pageY;

  const centerX = panel.offsetLeft + panel.clientWidth / 2;
  const centerY = panel.offsetTop + panel.clientHeight / 2;

  const percentX = (mouseX - centerX) / (panel.clientWidth / 2);
  const percentY = -((mouseY - centerY) / (panel.clientHeight / 2));

  subpanel.style.transform =
    "perspective(400px) rotateY(" +
    percentX * transformAmount +
    "deg) rotateX(" +
    percentY * transformAmount +
    "deg)";
}

function handleMouseEnter(panel, subpanel) {
  setTimeout(() => {
    subpanel.style.transition = "";
  }, 100);
  subpanel.style.transition = "transform 0.1s";
}

function handleMouseLeave(panel, subpanel) {
  subpanel.style.transition = "transform 0.1s";
  setTimeout(() => {
    subpanel.style.transition = "";
  }, 100);

  subpanel.style.transform = "perspective(400px) rotateY(0deg) rotateX(0deg)";
}

myPanel.addEventListener("mousemove", (event) =>
  transformPanel(myPanel, subpanel, event)
);

myPanel.addEventListener("mouseenter", () =>
  handleMouseEnter(myPanel, subpanel)
);

myPanel.addEventListener("mouseleave", () =>
  handleMouseLeave(myPanel, subpanel)
);

projectPanel.forEach((el) => {
  const projectSubpanel = el.querySelector(".project-img");
  el.addEventListener("mousemove", (event) =>
transformPanel(el, projectSubpanel, event) ); el.addEventListener("mouseenter", () => handleMouseEnter(el, projectSubpanel) ); el.addEventListener("mouseleave", () => handleMouseLeave(el, projectSubpanel) ); }); <!-- language: lang-css --> * { margin: 0; padding: 0; box-sizing: border-box; } :root { /* rgba(116, 116, 116) */ --gray-color: #ededed; } body { font-family: 'Raleway', sans-serif; } /* //////////////////LOADER///////////////////// */ .loader { position: absolute; width: 100%; height: 100vh; overflow: hidden; z-index: 99; } .loader-box { display: flex; flex-direction: column; align-items: center; justify-content: center; width: 100%; height: 100vh; font-size: 20rem; color: transparent; -webkit-text-stroke: 3px #cbcbcb; } .loader-box.first { position: absolute; top: 0; left: 0; background-color: var(--gray-color); z-index: 2; } .loader-box.second { position: absolute; top: 0; left: 0; background-color: #b8b8b8; z-index: 1; } /* //////////////////NAV///////////////////// */ nav { position: absolute; left: 50%; transform: translateX(-50%); height: 100px; width: 90%; display: flex; align-items: center; padding-block: 10px; z-index: 2; background-color: #fff; overflow: hidden; } nav a { padding: 5px 20px; text-decoration: none; color: rgb(62, 62, 62); } .name-logo { margin-right: auto; text-transform: uppercase; font-size: 1.8rem; color: rgb(116, 116, 116); font-weight: 500; } .menu { display: flex; list-style: none; font-size: 1.1rem; } /* //////////////////Start Page///////////////////// */ .start-page { /* background-color: red; */ overflow: hidden; height: 100vh; display: flex; justify-content: space-around; align-items: center; } .start-image { width: 50%; height: 100%; } .start-page #panel { display: flex; justify-content: center; align-items: center; filter: grayscale(60%) } .start-text { display: flex; flex-direction: column; justify-content: center; align-items: center; width: 50%; height: 100%; } .start-text p, .start-title, .image { 
background-color: #fff; z-index: 3; } .start-text p { font-size: 1.3rem; margin-right: 100px; } .start-title { text-transform: uppercase; font-size: 5rem; color: transparent; -webkit-text-stroke: 2px black; } .start-title span { color: transparent; -webkit-text-stroke: 3px black; } /* PARALAX */ #panel, #panel-container { width: 500px; height: 650px; } #panel-container { display: flex; justify-content: center; align-items: center; position: relative; background: url("https://picsum.photos/id/237/200/300") center top; background-size: cover; background-position: 50% 50%; transform: perspective(400px) rotateX(0deg) rotateY(0deg); transform-style: preserve-3d; /* box-shadow: 1.5rem 2.5rem 5rem 0.7rem rgba(0, 0, 0, 0.13); */ } .panel-content { color: black; text-align: center; padding: 20px; transform: translateZ(80px) scale(0.8); transform-style: preserve-3d; overflow-wrap: break-word; } /* LINE */ .decoration-line { position: absolute; top: 0; left: 50%; height: 300vh; width: 1px; background-color: black; z-index: -1; } .decoration-line.one { left: 10%; } .decoration-line.three { left: 90%; } /* MY WORK */ .my-work { width: 100%; } .my-work h1 { font-size: 6rem; width: 60%; margin: 0 auto 100px auto; text-transform: uppercase; color: transparent; -webkit-text-stroke: 3px black; background-color: #fff; text-align: center; z-index: 3; } .projects { position: relative; width: 78%; left: 11%; } .project { display: flex; justify-content: space-evenly; align-items: center; width: 100%; margin-bottom: 100px; } .panel-project { min-width: 550px; height: 300px; } .project-img { display: flex; justify-content: center; align-items: center; position: relative; transform: perspective(400px) rotateX(0deg) rotateY(0deg); transform-style: preserve-3d; min-width: 550px; height: 300px; background-size: cover; background-position: center; border: 1px solid black; border-radius: 10px; filter: grayscale(60%) } .project-desription { width: 50%; text-align: center; background-color: #fff; 
padding: 0 50px; } .project-desription h2 { font-size: 2.5rem; -webkit-text-stroke: 1px black; color: transparent; margin-bottom: 30px; } .project-desription p { line-height: 1.5rem; } .reverse { flex-direction: row-reverse; } #komornik { background-image: url("https://picsum.photos/id/237/200/300"); } #hulk-factory { background-image: url("https://picsum.photos/id/237/200/300"); } @media only screen and (max-width:1200px) { /* LOADER */ .loader-box { font-size: 15rem; } /* START PAGE */ .start-text p { font-size: 1rem; } .start-title { font-size: 4rem; } #panel-container { width: 380px; } /* MY WORK */ .project-img { min-width: 400px; height: 200px; } .project-desription p { line-height: 1.2rem; } } @media only screen and (max-width:992px) { /* MY WORK */ .my-work h1 { font-size: 4rem; } .project { flex-direction: column; } .project-desription { width: 100%; text-align: center; background-color: #fff; margin-top: 25px; padding: 0 50px; } .project-desription h2 { font-size: 2.5rem; -webkit-text-stroke: 1px black; color: transparent; margin-bottom: 30px; } .project-desription p { line-height: 1.5rem; } } @media only screen and (max-width:768px) { /* LOADER */ .loader-box { font-size: 10rem; } /* MENU */ .menu { display: none; } nav { align-items: flex-start; width: 100%; transition: .3s; } .name-logo { padding: 15px 0 0 30px; } .menu-active { height: 300px; } .menu-active .menu { position: absolute; top: 30%; display: block; text-align: center; width: 100%; font-size: 1.7rem; } .menu-active li { padding: 15px 0; } .hamburger-menu { position: relative; width: 50px; height: 50px; cursor: pointer; margin-right: 40px; } .hamburger-line, .hamburger-line::before, .hamburger-line::after { content: ""; position: absolute; width: 100%; height: 2px; background-color: black; display: block; } .hamburger-line { top: 50%; left: 0; transform: translateY(-50%); } .hamburger-line::before { top: -12px; } .hamburger-line::after { top: 12px; } /* START PAGE */ .start-page { 
padding-top: 120px; flex-direction: column; justify-content: center; } .start-image { width: 100%; } .start-text { width: 80%; } .start-title { font-size: 3rem; } #panel { height: 400px; } #panel-container { height: 50vh; width: 100%; top: 50px; } .start-text p, .start-title, .image { background-color: transparent; z-index: 0; } .decoration-line { display: none; } } @media only screen and (max-width:576px) { /* LOADER */ .loader-box { font-size: 7rem; } .name-logo { font-size: 1.3rem; } .start-title { font-size: 2.2rem; } .start-title { -webkit-text-stroke: 1px black; } .start-title span { -webkit-text-stroke: 2px black; } /* MY WORK */ .my-work { width: 100%; } .my-work h1 { margin-top: 50px; font-size: 3.5rem; width: 100%; } .project-desription { padding: 0 0; } } @media only screen and (max-height:800px) { #panel, #panel-container { width: 350px; height: 450px; } .start-text p { font-size: 1rem; } .start-title { font-size: 3.5rem; } /* LOADER */ } <!-- language: lang-html --> <nav> <a class="name-logo" href="#">LOREM IPSUM</a> <div class="hamburger-menu"> <div class="hamburger-line"></div> </div> <ul class="menu "> <li class="menu-li"><a href="">Start</a></li> <li class="menu-li"><a href="">Work</a></li> <li class="menu-li"><a href="">Contact</a></li> </ul> </nav> <main> <div class="decoration-line one"></div> <div class="decoration-line two"></div> <div class="decoration-line three"></div> <div class="start-page"> <div class="image"> <div id="panel"> <div id="panel-container"> <div id="panel-content"></div> </div> </div> </div> <div class="start-text"> <h1 class="start-title">Lorem ipsum dolor sit amet consectetur adipisicing elit. </h1> <p> Lorem ipsum dolor sit amet consectetur adipisicing elit. 
Saepe quo distinctio adipisci accusamus libero vel quos, cum, autem dolore laborum quibusdam recusandae odio ex odit aperiam aliquam ipsum enim iure </p> </div> </div> <div class="my-work"> <h1>Lorem ipsum</h1> <div class="projects"> <div class="project"> <a class="panel-project" href="" target="_blank"> <div id="komornik" class="project-img"></div> </a> <div class="project-desription"> <h2>First Dog</h2> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Saepe quo distinctio adipisci accusamus libero vel quos, cum, autem dolore laborum quibusdam recusandae odio ex odit aperiam aliquam ipsum enim iure. Cupiditate fugiat quia, quo possimus a fugit. Deserunt nesciunt placeat quam? Dignissimos ipsa aut atque veritatis aspernatur, quaerat iste et?</p> </div> </div> <div class="project reverse"> <a class="panel-project" href="" target="_blank"> <div id="hulk-factory" class="project-img"></div> </a> <div class="project-desription"> <h2>Second dog</h2> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Saepe quo distinctio adipisci accusamus libero vel quos, cum, autem dolore laborum quibusdam recusandae odio ex odit aperiam aliquam ipsum enim iure. Cupiditate fugiat quia, quo possimus a fugit. Deserunt nesciunt placeat quam? Dignissimos ipsa aut atque veritatis aspernatur, quaerat iste et?</p> </div> </div> </div> </main> <!-- end snippet -->
|python|runtime-error|hyperparameters|keras-tuner|
I want to create a fixed 1H to 24H intraday CC table or plot. Even though I have a fixed timeframe, the values are still changing and incorrect: 1H timeframe target chart: Only the 1H CC seems to be correct if I compare it with the CC indicator by TV. The others are incorrect. All the other timeframe target charts are incorrect. Here is the script: ``` //@version=5 // Rules indicator(title="I: Correlation Coefficient Table", shorttitle="I: CC T", precision=2, scale=scale.right) // Inputs sym_s = input.symbol(defval="BINANCE:BTCUSD", title="Symbol Source") src = input.source(defval=close, title="Source") len = input.int(defval=14, title="Length", minval=1, step=1) sym_t = input.symbol(defval="BINANCE:ETHUSD", title="Symbol Target") // Core src_s_1h = request.security(symbol=sym_s, timeframe="60", expression=src) src_s_2h = request.security(symbol=sym_s, timeframe="120", expression=src) src_s_3h = request.security(symbol=sym_s, timeframe="180", expression=src) src_t_1h = request.security(symbol=sym_t, timeframe="60", expression=src) src_t_2h = request.security(symbol=sym_t, timeframe="120", expression=src) src_t_3h = request.security(symbol=sym_t, timeframe="180", expression=src) cc_1h = ta.correlation(source1=src_s_1h, source2=src_t_1h, length=len) cc_2h = ta.correlation(source1=src_s_2h, source2=src_t_2h, length=len) cc_3h = ta.correlation(source1=src_s_2h, source2=src_t_3h, length=len) cc_r_1h = math.round(number=cc_1h, precision=3) cc_r_2h = math.round(number=cc_2h, precision=3) cc_r_3h = math.round(number=cc_3h, precision=3) // Table cc_table = table.new(position=position.bottom_center, columns=5, rows=3, bgcolor=color.black, frame_color=color.white, frame_width=2, border_color=color.white, border_width=1) table.cell(table_id=cc_table, column=1, row=1, text="TF", bgcolor=color.white) table.cell(table_id=cc_table, column=1, row=2, text="CC", bgcolor=color.white) table.cell(table_id=cc_table, column=2, row=1, text="1H", text_color=color.white, bgcolor=color.black) 
table.cell(table_id=cc_table, column=3, row=1, text="2H", text_color=color.white, bgcolor=color.black)
table.cell(table_id=cc_table, column=4, row=1, text="3H", text_color=color.white, bgcolor=color.black)
table.cell(table_id=cc_table, column=2, row=2, text=str.tostring(value=cc_r_1h), text_color=color.white, bgcolor=color.black)
table.cell(table_id=cc_table, column=3, row=2, text=str.tostring(value=cc_r_2h), text_color=color.white, bgcolor=color.black)
table.cell(table_id=cc_table, column=4, row=2, text=str.tostring(value=cc_r_3h), text_color=color.white, bgcolor=color.black)
```

Is it possible to somehow do an input.symbol() where I pass something like BINANCE:BTCUSD:1H (or 60)? I tried this and it didn't work, and I can't find a format page on how to do this. Could the ta.correlation() function be cursed?
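A sketch of the kind of thing I was hoping for, using a separate timeframe input that is combined with the symbol at the request.security() call (names are illustrative and this is untested):

```
tf_t = input.timeframe(defval="60", title="Target Timeframe")
src_t = request.security(symbol=sym_t, timeframe=tf_t, expression=src)
```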
I have a very frustrating issue.

```
[FunctionName("MyFunction")]
public async Task MyFunctionAsync(
    [ServiceBusTrigger("%AzureTopicName%", "mysubscription", Connection = "event-bus-connection")]
    Message message)
{
    //My logic here
}
```

This never gets hit. I have checked and verified all configuration and there are no issues; messages are being written to the Service Bus, which would not happen if there were a config issue. However, my trigger never fires even though messages are sitting in the subscription.

Has anyone ever seen this? This is local execution, not Azure.

Paul
Local Azure Function Service Bus trigger doesn't pick up messages and gives no information
|azure|servicebus|
I'm encountering the "Parent directory is not in asset database" error when using AssetDatabase.MoveAsset in a Unity Editor script. This happens even though I've implemented the following steps to create the parent directory: 1. **Folder Creation:** Using Directory.CreateDirectory or unity methods based on AssetDatabase.CreateFolder. 2. **Asset Import:** Calling AssetDatabase.ImportAsset with the ImportAssetOptions.ImportRecursive flag. 3. **AssetDatabase Refresh**: Using AssetDatabase.Refresh explicitly in various places in my code. **Code Example (Illustrative):** ``` public class AssetData { public string Path { get; private set; } public string Name { get; private set; } public string SuggestedPath { get; private set; } public string FileExtension { get { return System.IO.Path.GetExtension(Path).ToLower(); } } public bool IsModified { get; set; } = false; public AssetData(string path, string name, string suggestedPath) { Path = path; Name = name; SuggestedPath = suggestedPath; } } ``` ``` using System; using System.Collections.Generic; using System.IO; using UnityEditor; using UnityEngine; [ExecuteInEditMode] public class AssetProcesser : MonoBehaviour { private Queue<AssetData> m_assetsToProcess; public void CorrectAssets(List<AssetData> assets) { m_assetsToProcess = new Queue<AssetData>(assets); EditorApplication.update += ProcessAssetCorrectionQueue; AssetDatabase.StartAssetEditing(); } private void ProcessAssetCorrectionQueue() { if (m_assetsToProcess.Count == 0) { FinishAssetCorrection(); return; } try { foreach (AssetData asset in m_assetsToProcess) { ProcessAsset(asset); } } finally { AssetDatabase.StopAssetEditing(); } } private void ProcessAsset(AssetData asset) { string guid = AssetDatabase.AssetPathToGUID(asset.Path); string currentPath = AssetDatabase.GUIDToAssetPath(guid); string newPath = asset.SuggestedPath; string newParentDirectory = Path.GetDirectoryName(newPath); CreateFolderIfNeeded(newParentDirectory); 
ImportFolderRecursive(newParentDirectory); AssetDatabase.Refresh(ImportAssetOptions.ImportRecursive); string error = AssetDatabase.MoveAsset(currentPath, newPath); // Error I am getting : Parent directory is not in asset database if (!string.IsNullOrEmpty(error)) { ToolsLogs.LogError($"Failed to rename asset {asset.Path}: {error}"); } AssetDatabase.ImportAsset(newPath, ImportAssetOptions.ImportRecursive); } private static void CreateFolderIfNeeded(string folderPath) { if (!folderPath.StartsWith("Assets/")) { ToolsLogs.LogError("Invalid path. Path must be within the Assets folder."); return; } string[] folders = folderPath.Substring("Assets/".Length).Split('/'); string currentPath = "Assets"; if (!AssetDatabase.IsValidFolder(folderPath)) { try { for (int i = 0; i < folders.Length; i++) { string folderName = folders[i]; currentPath = Path.Combine(currentPath, folderName); if (AssetDatabase.IsValidFolder(currentPath)) { continue; } string createdFolderGuid = AssetDatabase.CreateFolder(Path.GetDirectoryName(currentPath), Path.GetFileName(currentPath)); if (!string.IsNullOrEmpty(createdFolderGuid)) { AssetDatabase.ImportAsset(currentPath, ImportAssetOptions.ImportRecursive); Debug.Log($"Created folder: {createdFolderGuid}"); } } } catch (Exception ex) { Debug.LogError($"Error creating directory: {ex.Message}"); } } } private static void ImportFolderRecursive(string folderPath) { AssetDatabase.ImportAsset(folderPath, ImportAssetOptions.ImportRecursive | ImportAssetOptions.ForceSynchronousImport); } private void FinishAssetCorrection() { EditorUtility.ClearProgressBar(); EditorApplication.update -= ProcessAssetCorrectionQueue; AssetDatabase.Refresh(); } } ``` **Environment:** Unity Version: [2021.3.15f1] Operating System: [macOS] **Things I've Tried:** - **Debugging with Logging**: Verified that the folder exists on the file system but is still not recognized by the Asset Database. - **Forced Delays:** Increased delays and experimented with coroutines. 
- **Alternative Folder Creation Methods**: Tried variations in folder creation logic. **Questions:** 1. What are the common reasons why the Asset Database might not immediately recognize a newly created directory? 2. Are there more effective ways to force synchronization between the file system and the Asset Database? 3. Could there be other parts of my asset processing script interfering with folder recognition? Additional Notes: 1) I'm open to exploring the possibility of issues with meta files if the more common solutions don't resolve the problem. 2) This script runs exclusively within the Unity Editor and is not part of any runtime/build behaviour. The provided code example is a simplified representation of my actual codebase. Thank you in advance for any insights or suggestions!
"Parent directory is not in asset database" Error in Unity AssetDatabase.MoveAsset Despite Creating Folder
|c#|unity-game-engine|unity-editor|unity3d-editor|
I am working on an IoT application with over 15K lines of code using PlatformIO IDE, Espressif 6.2.0, and the Arduino framework for the ESP32 WROOM target. I left the device running for about 28 hours, and everything was working fine until I encountered the following exception, which simply stated that the ISR WDT (Interrupt Service Routine Watchdog Timer) had timed out, which is understandable. However, what happened next really confused me. The device attempted to reboot **97** times, and each attempt resulted in the same error. On the 98th attempt, the device got stuck in the reboot process and did not proceed further. It seemed as if the CPU was halted, but I do not know why or how exactly.

I have a single ISR in my program, attached to a digital input. In that ISR, I only increment an integer variable by one and nothing else.

My question is, why might this have happened? Suppose my program has a major problem that manifests after 28 hours; even so, why doesn't the issue stop after rebooting? Has anyone had a similar experience, and if so, what were the possible solutions?

Here is the start and end of the log for your reference:

```
[T-97928] [N] Free Heap Size: 55432, Min Heap Size: 16372
[T-97929] [N] Free Heap Size: 55464, Min Heap Size: 16372
[97929352][E][Wire.cpp:513] requestFrom(): i2cRead returned Error 263
Guru Meditation Error: Core 1 panic'ed (Interrupt wdt timeout on CPU1).
Core 1 register dump: PC : 0x40081467 PS : 0x00050a35 A0 : 0x4008522c A1 : 0x3ffbfaac A2 : 0x00000000 A3 : 0x3ffbe12c A4 : 0x3ffbe14c A5 : 0x00000000 A6 : 0x3ffbe14c A7 : 0x00000000 A8 : 0x00000001 A9 : 0x3ffbfa8c A10 : 0x00000000 A11 : 0x00000000 A12 : 0x00000001 A13 : 0x00000001 A14 : 0x00000001 A15 : 0x3ffb8ff0 SAR : 0x0000001b EXCCAUSE: 0x00000006 EXCVADDR: 0x00000000 LBEG : 0x4008a468 LEND : 0x4008a47e LCOUNT : 0xffffffff Core 1 was running in ISR context: EPC1 : 0x400f6137 EPC2 : 0x00000000 EPC3 : 0x00000000 EPC4 : 0x00000000 Backtrace: 0x40081464:0x3ffbfaac |<-CORRUPTED Core 0 register dump: PC : 0x4018d46a PS : 0x00060735 A0 : 0x800f4bcc A1 : 0x3ffcc580 A2 : 0x00000000 A3 : 0x80000001 A4 : 0x800921f8 A5 : 0x3ffaf660 A6 : 0x00000003 A7 : 0x00060023 A8 : 0x800f46ae A9 : 0x3ffcc550 A10 : 0x00000000 A11 : 0x80000001 A12 : 0x800921f8 A13 : 0x3ffaf670 A14 : 0x00000003 A15 : 0x00060023 SAR : 0x0000001d EXCCAUSE: 0x00000006 EXCVADDR: 0x00000000 LBEG : 0x00000000 LEND : 0x00000000 LCOUNT : 0x00000000 Backtrace: 0x4018d467:0x3ffcc580 0x400f4bc9:0x3ffcc5a0 0x400902c4:0x3ffcc5c0 ELF file SHA256: ed76320984d99ab6 Rebooting... [T-0] [N] Setup Started. Guru Meditation Error: Core 1 panic'ed (Interrupt wdt timeout on CPU1). 
Core 1 register dump: PC : 0x4008520c PS : 0x00050035 A0 : 0x4000bff0 A1 : 0x3ffbfadc A2 : 0x00000000 A3 : 0x00000040 A4 : 0x00001000 A5 : 0x3ffbfaac A6 : 0x00000000 A7 : 0x3ffbe12c A8 : 0x0002566a A9 : 0x3ffbfaac A10 : 0x00000001 A11 : 0x00000804 A12 : 0x00000001 A13 : 0x3ffbfa8c A14 : 0x00000000 A15 : 0x00000000 SAR : 0x0000001b EXCCAUSE: 0x00000006 EXCVADDR: 0x00000000 LBEG : 0x4008a5d0 LEND : 0x4008a5db LCOUNT : 0xffffffff Core 1 was running in ISR context: EPC1 : 0x400f6137 EPC2 : 0x00000000 EPC3 : 0x00000000 EPC4 : 0x00000000 Backtrace: 0x40085209:0x3ffbfadc |<-CORRUPTED Core 0 register dump: PC : 0x4018d46a PS : 0x00060735 A0 : 0x800f4bcc A1 : 0x3ffcc580 A2 : 0x00000000 A3 : 0x4008981c A4 : 0x00060520 A5 : 0x3ffbcfb0 A6 : 0x007bfaf8 A7 : 0x003fffff A8 : 0x800f46ae A9 : 0x3ffcc550 A10 : 0x00000000 A11 : 0x3ffbfaf4 A12 : 0x3ffbfaf4 A13 : 0x00000000 A14 : 0x00060520 A15 : 0x00000000 SAR : 0x0000001d EXCCAUSE: 0x00000006 EXCVADDR: 0x00000000 LBEG : 0x00000000 LEND : 0x00000000 LCOUNT : 0x00000000 Backtrace: 0x4018d467:0x3ffcc580 0x400f4bc9:0x3ffcc5a0 0x400902c4:0x3ffcc5c0 ELF file SHA256: ed76320984d99ab6 . . . . 97 Exception Like This . . . . Guru Meditation Error: Core 1 panic'ed (Interrupt wdt timeout on CPU1). 
Core 1 register dump: PC : 0x40085204 PS : 0x00050a35 A0 : 0x4008182e A1 : 0x3ffbfadc A2 : 0x00000000 A3 : 0x3ffb78e0 A4 : 0x00001000 A5 : 0x3ffbfaac A6 : 0x00000000 A7 : 0x3ffbe12c A8 : 0x3ffbe168 A9 : 0x00000000 A10 : 0x3ffbe14c A11 : 0x00000001 A12 : 0x00000001 A13 : 0x3ffbfa8c A14 : 0x00000000 A15 : 0x00000000 SAR : 0x0000001b EXCCAUSE: 0x00000006 EXCVADDR: 0x00000000 LBEG : 0x4008a468 LEND : 0x4008a47e LCOUNT : 0xffffffff Core 1 was running in ISR context: EPC1 : 0x400f6137 EPC2 : 0x00000000 EPC3 : 0x00000000 EPC4 : 0x00000000 Backtrace: 0x40085201:0x3ffbfadc |<-CORRUPTED Core 0 register dump: PC : 0x4018d46a PS : 0x00060935 A0 : 0x800f4bcc A1 : 0x3ffcc580 A2 : 0x00000000 A3 : 0x80000001 A4 : 0x800921f8 A5 : 0x3ffcc4a0 A6 : 0x00000003 A7 : 0x00060023 A8 : 0x800f46ae A9 : 0x3ffcc550 A10 : 0x00000000 A11 : 0x80000001 A12 : 0x800921f8 A13 : 0x3ffaf630 A14 : 0x00000003 A15 : 0x00060023 SAR : 0x0000001d EXCCAUSE: 0x00000006 EXCVADDR: 0x00000000 LBEG : 0x00000000 LEND : 0x00000000 LCOUNT : 0x00000000 Backtrace: 0x4018d467:0x3ffcc580 0x400f4bc9:0x3ffcc5a0 0x400902c4:0x3ffcc5c0 ELF file SHA256: ed76320984d99ab6 Rebooting... [T-0] [N] Setup Started. ```
```
const express = require('express');
const jwt = require('jsonwebtoken');
const passport = require('passport');
const localStrategy = require('passport-local').Strategy;

const router = express.Router();

passport.use(
    new localStrategy(async (email, password, done) => {
        try {
            const user = { email: "piyu", password: "123" }
            return done(null, user)
        } catch (err) {
            return done(err)
        }
    })
)

...

router
    .route('/login')
    .get(async (req, res) => {
        return res.render("login")
    })
    .post(async (req, res, next) => {
        passport.authenticate('local', async (err, user, info) => {
            if (err) {
                return next(err.message);
            }
            if (!user) {
                return res.send("user not found")
            }
            if (user) {
                return res.send("user found successfully")
            }
        })(req, res, next)
    })

module.exports = router;
```

Here I have hardcoded some things because I was trying stuff out. Inside "passport.use" I have hardcoded the user object, so the callback function in "passport.authenticate" should always receive a user object and print "user found successfully". But for some reason the opposite has been happening, and "user not found" is printed on the webpage. I tried `console.log(user)` inside "passport.authenticate" and found that the callback function received neither a user object nor an error. So I tried `console.log("123")` inside "passport.use" and found that when the 'local' strategy is invoked from "passport.authenticate", the verify callback in "passport.use" is never called for some reason.
I am trying to adopt a functional programming style in Google Sheets and have had quite satisfactory results with functors, recursive functions and curried functions. However, I am stuck on the pipe function.

It is quite easy to implement the pipe function in other modern programming languages, because they allow us to wrap functions in an array and later unwrap them, with the unwrapped functions working exactly like the originals. For example, with JavaScript, we have

```js
function pipe(){
  return input => [...arguments].reduce((result, func)=>func(result), input)
}
```

so that we are allowed to do this:

```js
pipe(plusThree, multiplyFour, minusSeven)(2)
```

However, in Google Sheets it seems that the only way to create an array is to put items in a "virtual" range (a bundle of cells), no matter whether we use MAKEARRAY or {"func", "arr"}. Unfortunately, once we put a function in a cell, it immediately executes and returns an error. Therefore, the formula I tried fails:

```
=LET(
  plusOne, LAMBDA(num, num+1),
  multTwo, LAMBDA(num, num*2),
  pipe, LAMBDA(functions,
    LAMBDA(input,
      LET(
        recursiveFunction, LAMBDA(self, idx,
          IF(
            idx < COLUMNS(functions),
            CHOOSECOLS(functions, idx)(self(self, idx+1)),
            CHOOSECOLS(functions, idx)(input)
          )
        ),
        recursiveFunction(recursiveFunction, 1)))),
  result, pipe({plusOne, multTwo})(5),
  result
)
```

(My apologies for using a recursive function. It would be much easier to read written with REDUCE, which I tried for the very first time and failed, sadly.)

The key point is that I have no other way to unwrap an array of functions and have the unwrapped functions in it behave like the ones before wrapping! And LAMBDA in Google Sheets does not allow a variable number of arguments in a function (which is soooo frustrating). Zero arguments is not allowed, either. (We can use a meaningless placeholder to simulate it, though.)
So even though CHOOSE seems like a candidate for iterating over arguments without wrapping them in a (range-like) array, our pipe function cannot use it as the key mechanism for iterating over functions passed as arguments.

I am really eager to know whether there is a way to simulate a pipe function in Google Sheets. The spreadsheet I am working on is noted here, for anyone it may concern: https://docs.google.com/spreadsheets/d/1Z1_udcul2sNtzBtJbCeDlnMR2J4JymytfScKFyUtiw8/edit#gid=1987950802

Any discussions or ideas are much appreciated!
If you have an infinite `Stream` like a `NetworkStream`, replacing string tokens on-the-fly would make a lot of sense. But because you are processing a finite stream of which you need the *complete* content, such a filtering doesn't make sense because of the performance impact. My argument is that you would have to use buffering. The size of the buffer is of course restricting the amount of characters you can process. Assuming you use a ring buffer or kind of a queue, you would have to remove one character to append a new. This leads to a lot of drawbacks when compared to processing the complete content. Pros & Cons | Full-content search & replace | Buffered (real-time) search & replace | | -------- | -------------- | | Single pass over the full range | multiple passes over the same range | | Advanced search (e.g. backtracking, look-ahead etc.) | Because characters get dropped at the start of the buffer, the search engine loses context | | Because we have the full context available, we won't ever miss a match | Because characters get dropped at the start of the buffer, there will be edge cases where the searched token does not fit into the buffer. Of course, we can resize the buffer to a length = n * token_length. However, this limitation impacts performance as the optimal buffer size is now constrained by the token size. The worst case is that the token length equals the length of the stream. Based on the search algorithm we would have to keep input in the buffer until we match all key words and then discard the matched part. If there is no match the buffer size would equal the size of the stream. It makes sense to evaluate the expected input and search keys for the worst-case scenarios and switch to a full-context search to get rid of the buffer management overhead. Just to highlight that the efficiency of a real-time search is very much limited by the input and the search keys (expected scenario). It can't be faster than full-context search. 
It potentially consumes less memory under optimal circumstances. | | Don't have to worry about an optimal buffer size to maximize efficiency | Buffer size becomes more and more important the longer the source content is. Too small buffers result in too many buffer shifts and too many passes over the same range. Note, the key cost for the original search & replace task is the string search and the replace/modification, so it makes sense to reduce the number of comparisons to an absolute minimum. | | Consumes more memory. However, this is not relevant in this case as we will store the complete response in (client) memory anyways. And if we use the appropriate text search technique, we avoid all the extra `string` allocations that occur during the search & replace. | Only the size of the buffers are allocated. However, this is not relevant in this case as we will store the complete response in (client) memory anyways. | Does not allow to filter the stream in real-time (not relevant in this case)). However, a hybrid solution is possible where we would filter certain parts in real-time (e.g. a preamble) | Allows to filter the stream in real-time so that we can make decisions based on content e.g. abort the reading (not relevant in this case).| I stop here as the most relevant performance costs search & replace are better for the full-content solution. For the real-time search & replace we basically would have to implement our own algorithm that has to compete against the .NET search and replace algorithms. No problem, but considering the effort and the final use case I would not waste any time on that. An efficient solution could implement a custom `TextReader`, an advanced `StreamReader` that operates on a `StringBuilder` to search and replace characters. While `StringBuilder` offers a significant performance advantage over the string search & replace it does not allow complex search patterns like word boundaries. 
For example, word boundaries are only possible if the pattern explicitly includes the bounding characters. For example, replacing "int" in the input "internal int " with "pat" produces "paternal pat". If we want to replace only "int" in "internal int " we would have to use a regular expression. Because regular expressions only operate on `string`, we pay for this with efficiency.

The following example implements a `StringReplaceStreamReader` that extends `TextReader` to act as a specialized `StreamReader`. For best performance, tokens are replaced *after* the complete stream has been read. For brevity it only supports the `ReadToEndAsync`, `Read` and `Peek` methods. It supports simple search, where the search pattern is simply matched against the input (called simple-search). It also supports two variants of regular expression search and replace for more advanced search and replace scenarios. The first variant is based on a set of key-value pairs, while the second variant uses a regex pattern provided by the caller.

Because simple-search involves iterating the source dictionary entries + multiple passes (one for each entry), this search mode is expected to be the slowest algorithm, although the replace using a `StringBuilder` itself is actually faster. Under these circumstances, the `Regex` search & replace is expected to be significantly faster than the simple search and replace using the `StringBuilder`, as it can process the input in a single pass.

The `StringReplaceStreamReader` search behavior is configurable via the constructor.

Usage example
------------------

```c#
private Dictionary<string, string> replacementTable;

private async Task ReplaceInStream(Stream sourceStream)
{
    this.replacementTable = new Dictionary<string, string>
    {
        { "private", "internal" },
        { "int", "BUTTON" },
        { "Read", "Not-To-Read" }
    };

    // Search and replace using simple search
    // (slowest).
    await using var streamReader = new StringReplaceStreamReader(sourceStream, Encoding.Default, this.replacementTable, StringComparison.OrdinalIgnoreCase);
    string text = await streamReader.ReadToEndAsync();

    // Search and replace variant #1 using regex with key-value pairs instead of a regular expression pattern
    // for advanced scenarios (fast)
    await using var streamReader2 = new StringReplaceStreamReader(sourceStream, Encoding.Default, this.replacementTable, SearchMode.Regex);
    string text2 = await streamReader2.ReadToEndAsync();

    // Search and replace variant #2 using regex and a regular expression pattern
    // for advanced scenarios (fast).
    // The matchEvaluator callback actually provides the replacement value for each match.
    string searchPattern = this.replacementTable.Keys
        .Select(key => $@"\b{key}\b")
        .Aggregate((current, newValue) => $"{current}|{newValue}");
    await using var streamReader3 = new StringReplaceStreamReader(sourceStream, Encoding.Default, searchPattern, Replace);
    string text3 = await streamReader3.ReadToEndAsync();
}

private string Replace(Match match)
    => this.replacementTable.TryGetValue(match.Value, out string replacement)
        ? replacement
        : match.Value;
```

Implementation
-------------------

**SearchMode**

```c#
public enum SearchMode
{
    Default = 0,
    Simple,
    Regex
}
```

**StringReplaceStreamReader.cs**

```c#
public class StringReplaceStreamReader : TextReader, IDisposable, IAsyncDisposable
{
    public Stream BaseStream { get; }
    public long Length => this.BaseStream.Length;
    public bool EndOfStream => this.BaseStream.Position == this.BaseStream.Length;
    public SearchMode SearchMode { get; }

    private const int DefaultCapacity = 4096;
    private readonly IDictionary<string, string> stringReplaceTable;
    private readonly StringComparison stringComparison;
    private readonly Encoding encoding;
    private readonly Decoder decoder;
    private readonly MatchEvaluator? matchEvaluator;
    private readonly Regex?
regularExpression; private readonly byte[] byteBuffer; private readonly char[] charBuffer; public StringReplaceStreamReader(Stream stream, Encoding encoding, IDictionary<string, string> stringReplaceTable) : this(stream, encoding, stringReplaceTable, StringComparison.OrdinalIgnoreCase, SearchMode.Simple) { } public StringReplaceStreamReader(Stream stream, Encoding encoding, IDictionary<string, string> stringReplaceTable, SearchMode searchMode) : this(stream, encoding, stringReplaceTable, StringComparison.OrdinalIgnoreCase, searchMode) { } public StringReplaceStreamReader(Stream stream, Encoding encoding, IDictionary<string, string> stringReplaceTable, StringComparison stringComparison) : this(stream, encoding, stringReplaceTable, stringComparison, SearchMode.Simple) { } public StringReplaceStreamReader(Stream stream, Encoding encoding, IDictionary<string, string> stringReplaceTable, StringComparison stringComparison, SearchMode searchMode) { ArgumentNullException.ThrowIfNull(stream, nameof(stream)); ArgumentNullException.ThrowIfNull(stringReplaceTable, nameof(stringReplaceTable)); this.BaseStream = stream; this.encoding = encoding ?? 
Encoding.Default;
        this.decoder = this.encoding.GetDecoder();
        this.stringReplaceTable = stringReplaceTable;
        this.stringComparison = stringComparison;
        this.SearchMode = searchMode;
        this.regularExpression = null;
        this.matchEvaluator = ReplaceMatch;

        if (searchMode is SearchMode.Regex)
        {
            RegexOptions regexOptions = CreateDefaultRegexOptions(stringComparison);
            var searchPatternBuilder = new StringBuilder();
            foreach (KeyValuePair<string, string> entry in stringReplaceTable)
            {
                string pattern = @$"\b{entry.Key}\b";
                searchPatternBuilder.Append(pattern);
                searchPatternBuilder.Append('|');
            }

            string searchPattern = searchPatternBuilder.ToString().TrimEnd('|');
            this.regularExpression = new Regex(searchPattern, regexOptions);
        }

        this.byteBuffer = new byte[StringReplaceStreamReader.DefaultCapacity];
        int charBufferSize = this.encoding.GetMaxCharCount(this.byteBuffer.Length);
        this.charBuffer = new char[charBufferSize];
    }

    public StringReplaceStreamReader(Stream stream, Encoding encoding, string searchAndReplacePattern, MatchEvaluator matchEvaluator)
        : this(stream, encoding, searchAndReplacePattern, matchEvaluator, RegexOptions.None)
    { }

    public StringReplaceStreamReader(Stream stream, Encoding encoding, string searchAndReplacePattern, MatchEvaluator matchEvaluator, RegexOptions regexOptions)
    {
        ArgumentNullException.ThrowIfNull(stream, nameof(stream));
        ArgumentException.ThrowIfNullOrWhiteSpace(searchAndReplacePattern, nameof(searchAndReplacePattern));
        ArgumentNullException.ThrowIfNull(matchEvaluator, nameof(matchEvaluator));

        this.BaseStream = stream;
        this.encoding = encoding ?? Encoding.Default;
        this.decoder = this.encoding.GetDecoder();
        this.matchEvaluator = matchEvaluator;
        this.SearchMode = SearchMode.Regex;
        this.stringReplaceTable = new Dictionary<string, string>();
        this.stringComparison = StringComparison.OrdinalIgnoreCase;

        if (regexOptions is RegexOptions.None)
        {
            regexOptions = CreateDefaultRegexOptions(stringComparison);
        }
        else if ((regexOptions & RegexOptions.Compiled) == 0)
        {
            regexOptions |= RegexOptions.Compiled;
        }

        this.regularExpression = new Regex(searchAndReplacePattern, regexOptions);
        this.byteBuffer = new byte[StringReplaceStreamReader.DefaultCapacity];
        int charBufferSize = this.encoding.GetMaxCharCount(this.byteBuffer.Length);
        this.charBuffer = new char[charBufferSize];
    }

    public override int Peek()
    {
        int value = Read();
        this.BaseStream.Seek(this.BaseStream.Position - 1, SeekOrigin.Begin);
        return value;
    }

    public override int Read() => this.BaseStream.ReadByte();

    public override Task<string> ReadToEndAsync() => ReadToEndAsync(CancellationToken.None);

    public override async Task<string> ReadToEndAsync(CancellationToken cancellationToken)
    {
        if (!this.BaseStream.CanRead)
        {
            throw new InvalidOperationException("Source stream is not readable.");
        }

        var textBuilder = new StringBuilder(StringReplaceStreamReader.DefaultCapacity);
        int bytesRead = 0;
        int charsRead = 0;
        while (!this.EndOfStream)
        {
            cancellationToken.ThrowIfCancellationRequested();
            bytesRead = await this.BaseStream.ReadAsync(this.byteBuffer, 0, this.byteBuffer.Length, cancellationToken);
            bool flush = this.EndOfStream;
            charsRead = this.decoder.GetChars(this.byteBuffer, 0, bytesRead, this.charBuffer, 0, flush);
            textBuilder.Append(this.charBuffer, 0, charsRead);
        }

        cancellationToken.ThrowIfCancellationRequested();
        SearchAndReplace(textBuilder, cancellationToken, out string result);
        return result;
    }

    public ValueTask DisposeAsync() => ((IAsyncDisposable)this.BaseStream).DisposeAsync();

    private void SearchAndReplace(StringBuilder textBuilder, CancellationToken cancellationToken, out string result)
    {
cancellationToken.ThrowIfCancellationRequested(); if (this.SearchMode is SearchMode.Simple or SearchMode.Default) { foreach (KeyValuePair<string, string> entry in this.stringReplaceTable) { cancellationToken.ThrowIfCancellationRequested(); textBuilder.Replace(entry.Key, entry.Value); } result = textBuilder.ToString(); } else if (this.SearchMode is SearchMode.Regex) { string input = textBuilder.ToString(); result = this.regularExpression!.Replace(input, this.matchEvaluator!); } else { throw new NotImplementedException($"Search mode {this.SearchMode} is not implemented."); } } private string ReplaceMatch(Match match) => this.stringReplaceTable.TryGetValue(match.Value, out string replacement) ? replacement : match.Value; private RegexOptions CreateDefaultRegexOptions(StringComparison stringComparison) { RegexOptions regexOptions = RegexOptions.Multiline | RegexOptions.Compiled; if (stringComparison is StringComparison.CurrentCultureIgnoreCase or StringComparison.InvariantCultureIgnoreCase or StringComparison.OrdinalIgnoreCase) { regexOptions |= RegexOptions.IgnoreCase; } return regexOptions; } } ```
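To make the word-boundary caveat above concrete, here is a quick demonstration of the "internal int " example, written in Python for brevity; plain substring replacement behaves like `StringBuilder.Replace`, while the `\b`-anchored pattern behaves like the .NET `Regex.Replace` path:

```python
import re

s = "internal int "

# Plain substring replace also hits the "int" inside "internal"
print(s.replace("int", "pat"))       # -> "paternal pat "

# A word-boundary regex replaces only the standalone token
print(re.sub(r"\bint\b", "pat", s))  # -> "internal pat "
```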
I'm currently confused about windows and state. Suppose I have a program that counts user access data every minute and needs to compute a sum in each window. Assume I configure checkpointing for fault tolerance, triggered every 30 seconds. Then at 01:00 the program crashes. In theory it can only be restored to the checkpoint taken at 00:30, but no window fired at 00:30; so after recovery we would have the Kafka offsets from 00:30 together with the in-flight window state accumulated since 00:00. Is there any problem with my understanding?

Here is my program:

[flink graph][1]

The build-stream config is config.getDelayMaxDuration() = 60000 and config.getAggregateWindowMillisecond() = 60000:

    SingleOutputStreamOperator<BaseResult> wordCountSampleStream = subStream.assignTimestampsAndWatermarks(
            WatermarkStrategy.<MetricEvent>forBoundedOutOfOrderness(config.getDelayMaxDuration())
                    .withTimestampAssigner(new MetricEventTimestampAssigner())
                    .withIdleness(config.getWindowIdlenessTime())
            ).setParallelism(CommonJobConfig.getParallelismOfSubJob("WORD_COUNT_SAMPLE_TEST"))
            .flatMap(new WordCountToResultFlatMapFunction(config)).setParallelism(CommonJobConfig.getParallelismOfSubJob("WORD_COUNT_SAMPLE_TEST"))
            .keyBy(new BaseResultKeySelector())
            .window(TumblingEventTimeWindows.of(Time.milliseconds(config.getAggregateWindowMillisecond())))
            .apply(new WordCountWindowFunction(config)).setParallelism(CommonJobConfig.getParallelismOfSubJob("WORD_COUNT_SAMPLE_TEST"));

    wordCountSampleStream.addSink(sink).setParallelism(CommonJobConfig.getParallelismOfSubJob("WORD_COUNT_SAMPLE_TEST"));

window apply function:

    public class WordCountWindowFunction extends RichWindowFunction<BaseResult, BaseResult, String, TimeWindow> {
        private StreamingConfig config;
        private Logger logger = LoggerFactory.getLogger(WordCountWindowFunction.class);

        public WordCountWindowFunction(StreamingConfig config) {
            this.config = config;
        }

        @Override
        public void close() throws
Exception { super.close(); } @Override public void apply(String s, TimeWindow window, Iterable<BaseResult> input, Collector<BaseResult> out) throws Exception { WordCountEventPortrait result = new WordCountEventPortrait(); long curWindowTimestamp = window.getStart() / config.getAggregateWindowMillisecond() * config.getAggregateWindowMillisecond(); result.setDatasource("word_count_test"); result.setTimeSlot(curWindowTimestamp); for (BaseResult sub : input) { logger.info("in window cur sub is {} ", sub); WordCountEventPortrait curInvoke = (WordCountEventPortrait) sub; result.setTotalCount(result.getTotalCount() + curInvoke.getTotalCount()); result.setWord(curInvoke.getWord()); } logger.info("out window result is {} ", result); out.collect(result); } } sink function: public class ClickHouseRichSinkFunction extends RichSinkFunction<BaseResult> implements CheckpointedFunction { private ConcurrentHashMap<String, SinkBatchInsertHelper<BaseResult>> tempResult = new ConcurrentHashMap<>(); private ClickHouseDataSource dataSource; private Logger logger = LoggerFactory.getLogger(ClickHouseRichSinkFunction.class); @Override public void snapshotState(FunctionSnapshotContext context) throws Exception { for (Map.Entry<String, SinkBatchInsertHelper<BaseResult>> helper : tempResult.entrySet()) { helper.getValue().insertAllTempData(); } } @Override public void initializeState(FunctionInitializationContext context) throws Exception { } @Override public void open(Configuration parameters) throws Exception { Properties properties = new Properties(); properties.setProperty("user", CommonJobConfig.CLICKHOUSE_USER); properties.setProperty("password", CommonJobConfig.CLICKHOUSE_PASSWORD); dataSource = new ClickHouseDataSource(CommonJobConfig.CLICKHOUSE_JDBC_URL, properties); } @Override public void close() { AtomicInteger totalCount = new AtomicInteger(); tempResult.values().forEach(it -> { totalCount.addAndGet(it.getTempList().size()); batchSaveBaseResult(it.getTempList()); 
it.getTempList().clear(); }); } @Override public void invoke(BaseResult value, Context context) { tempResult.compute(value.getDatasource(), (datasource, baseResults) -> { if (baseResults == null) { baseResults = new SinkBatchInsertHelper<>(CommonJobConfig.COMMON_BATCH_INSERT_COUNT, needToInsert -> batchSaveBaseResult(needToInsert), CommonJobConfig.BATCH_INSERT_INTERVAL_MS); } baseResults.tempInsertSingle(value); return baseResults; }); } private void batchSaveBaseResult(List<BaseResult> list) { if (list.isEmpty()) { return; } String sql = list.get(0).getPreparedSQL(); try { try (PreparedStatement ps = dataSource.getConnection().prepareStatement(sql)) { for (BaseResult curResult : list) { curResult.addParamsToPreparedStatement(ps); ps.addBatch(); } ps.executeBatch(); } } catch (SQLException error) { log.error("has exception during batch insert,datasource is {} ", list.get(0).getDatasource(), error); } } } batch insert helper: public class SinkBatchInsertHelper<T> { private List<T> waitToInsert; private ReentrantLock lock; private int bulkActions; private AtomicInteger tempActions; private Consumer<List<T>> consumer; private AtomicLong lastSendTimestamp; private long sendInterval; private Logger logger = LoggerFactory.getLogger(SinkBatchInsertHelper.class); public SinkBatchInsertHelper(int bulkActions, Consumer<List<T>> consumer, long sendInterval) { this.waitToInsert = new ArrayList<>(); this.lock = new ReentrantLock(); this.bulkActions = bulkActions; this.tempActions = new AtomicInteger(0); this.consumer = consumer; this.sendInterval = sendInterval; this.lastSendTimestamp = new AtomicLong(0); } public void tempInsertSingle(T data) { lock.lock(); try { waitToInsert.add(data); if (tempActions.incrementAndGet() >= bulkActions || ((System.currentTimeMillis() - lastSendTimestamp.get()) >= sendInterval)) { batchInsert(); } } finally { lastSendTimestamp.set(System.currentTimeMillis()); lock.unlock(); } } public long insertAllTempData() { lock.lock(); try { long result = 
tempActions.get(); if (tempActions.get() > 0) { batchInsert(); } return result; } finally { lock.unlock(); } } private void batchInsert() { for(T t: waitToInsert){ logger.info("batch insert data:{}", t); } consumer.accept(waitToInsert); waitToInsert.clear(); tempActions.set(0); } public int getTempActions() { return tempActions.get(); } public List<T> getTempList() { lock.lock(); try { return waitToInsert; } finally { lock.unlock(); } } } The resulting phenomenon is: Suppose I cancel the task at 00:31:30, then when I restart the task, the statistics at 00:31:00 will be less than expected. I found out by printing records that it was because when the sink wrote the data at 00:30:00, the kafka consumer had actually consumed the data after 00:31:00, but this part of the data was not written to ck. , it was not replayed in the window when restarting, so this part of the data was lost. The statistics will not return to normal until 00:32:00. [1]: https://i.stack.imgur.com/03630.png
I have written an application in Tauri, Rust + JS React. When the machine connected to the app stops working, the app sends to the database the equivalent of an injection record, but with a different state (machine failure), to indicate how many injections could have happened during the failure. It looks like this:

```
I, [2024-03-28T03:29:07.845214 #723418] INFO -- : [373a6e4c-86cf-47d5-8e6a-ac31ce78d1bc] Parameters: {"cycles"=>[{"injector"=>"ST 70", "state"=>3, "length"=>55, "current_time"=>"3:29:7", "current_date"=>"2024-3-28"}], "cycle"=>{"cycles"=>[{"injector"=>"ST 70", "state"=>3, "length"=>55, "current_time"=>"3:29:7", "current_date"=>"2024-3-28"}]}}
I, [2024-03-28T03:29:07.854491 #723418] INFO -- : [373a6e4c-86cf-47d5-8e6a-ac31ce78d1bc] Completed 200 OK in 9ms (Views: 0.3ms | ActiveRecord: 0.0ms | Allocations: 2381)
```

This is the last log from Rails that was sent from my app to the Rails server. The request is sent by an interval in JS that fires every 55 seconds (as you can see, the length is set to 55), and the request is made with the axios library.

Everything would be fine if not for this: the next request from the app came 43 minutes later, as if the app had fallen asleep:

```
I, [2024-03-28T04:13:40.365444 #723418] INFO -- : [6dae3bff-6f3f-44c4-9961-0b8b2b41f7c2] Parameters: {"cycles"=>[{"injector"=>"ST 70", "state"=>3, "length"=>55, "current_time"=>"3:30:2", "current_date"=>"2024-3-28"}], "cycle"=>{"cycles"=>[{"injector"=>"ST 70", "state"=>3, "length"=>55, "current_time"=>"3:30:2", "current_date"=>"2024-3-28"}]}}
I, [2024-03-28T04:13:40.360601 #723418] INFO -- : [6e537b40-79c4-4cec-82f1-5b348731f892] Started POST "/cycles" for *** at 2024-03-28 04:13:40 +0100
I, [2024-03-28T04:13:40.369729 #723418] INFO -- : [6e537b40-79c4-4cec-82f1-5b348731f892] Processing by CyclesController#create as HTML
I, [2024-03-28T04:13:40.369892 #723418] INFO -- : [6e537b40-79c4-4cec-82f1-5b348731f892] Parameters: {"cycles"=>[{"injector"=>"ST 70", "state"=>3, "length"=>55, "current_time"=>"3:31:52", "current_date"=>"2024-3-28"}], "cycle"=>{"cycles"=>[{"injector"=>"ST 70", "state"=>3, "length"=>55, "current_time"=>"3:31:52", "current_date"=>"2024-3-28"}]}}
I, [2024-03-28T04:13:40.365232 #723418] INFO -- : [89cb0939-2fd8-48f7-b8e1-c8a7f3667771] Started POST "/cycles" for *** at 2024-03-28 04:13:40 +0100
I, [2024-03-28T04:13:40.371292 #723418] INFO -- : [89cb0939-2fd8-48f7-b8e1-c8a7f3667771] Processing by CyclesController#create as HTML
I, [2024-03-28T04:13:40.374927 #723418] INFO -- : [89cb0939-2fd8-48f7-b8e1-c8a7f3667771] Parameters: {"cycles"=>[{"injector"=>"ST 70", "state"=>3, "length"=>55, "current_time"=>"3:32:47", "current_date"=>"2024-3-28"}], "cycle"=>{"cycles"=>[{"injector"=>"ST 70", "state"=>3, "length"=>55, "current_time"=>"3:32:47", "current_date"=>"2024-3-28"}]}}
```

The app sent all the data generated during this "hung" period at once: even though the request arrived at 4:13, it included data 43 minutes old, as you can see from `current_time`.

It was as if the server was bombarded with all the data that accumulated during those 43 minutes, as if JS realised it had to send everything to the server. All the axios requests had been held in some kind of "cache", and when the app woke up, it sent them all at once. I know JavaScript is single-threaded, and there is a lot going on on the main thread in this case, but to be honest I've seen apps that handle much more and work completely fine. What happened? Is there any known issue with JS freezing for a period of time?
How to Retrieve Data from a MySQL Database and Display it in a GUI?
|python|mysql|tkinter|customtkinter|
I see two possible options here. The first is a "more formally correct", but way too permissive, approach relying on a `partial` hint:

```python
from __future__ import annotations

from functools import partial
from typing import Callable, TypeVar, ParamSpec, Any, Optional, Protocol, overload, Concatenate

R = TypeVar("R")
P = ParamSpec("P")


class YourCallable(Protocol[P, R]):
    @overload
    def __call__(self, value: float, *args: P.args, **kwargs: P.kwargs) -> R: ...
    @overload
    def __call__(self, value: None = None, *args: P.args, **kwargs: P.kwargs) -> partial[R]: ...


def lazy(func: Callable[Concatenate[float, P], R]) -> YourCallable[P, R]:
    def wrapper(value: float | None = None, *args: P.args, **kwargs: P.kwargs) -> R | partial[R]:
        if value is not None:
            return func(value, *args, **kwargs)
        else:
            if args:
                raise ValueError("Lazy call must provide keyword arguments only")
            return partial(func, **kwargs)

    return wrapper  # type: ignore[return-value]


@lazy
def test_multiply(value: float, *, multiplier: float) -> float:
    return value * multiplier


@lazy
def test_format(value: float, *, fmt: str) -> str:
    return fmt % value


print('test_multiply 5*2:', test_multiply(value=5, multiplier=2))
print('test_format 7.777 as .2f:', test_format(value=7.777, fmt='%.2f'))

func_mult_11 = test_multiply(multiplier=11)  # returns a partial function
print('Type of func_mult_11:', type(func_mult_11))
print('func_mult_11 5*11:', func_mult_11(value=5))

func_mult_11(value=5, multiplier=5)  # OK
func_mult_11(value='a')  # False negative: we want this to fail
```

The last two calls show what is good and bad about this approach. `partial` accepts any input arguments, so it is not sufficiently safe. If you want to override the arguments provided to the lazy callable initially, this is probably the best solution.

Note that I slightly changed the signatures of the input callables: without that you will not be able to use `Concatenate`. Note also that `KwArg`, `DefaultNamedArg` and company are all deprecated in favour of protocols.
You cannot use ParamSpec with kwargs only; args must also be present. If you trust your type checker, it is fine to use kwarg-only callables: all unnamed calls will be rejected at the type checking phase. However, I have another alternative to share if you do not want to override default args passed to the initial callable, which is fully safe, but emits false positives if you try to.

```python
from __future__ import annotations

from functools import partial
from typing import Callable, TypeVar, ParamSpec, Any, Optional, Protocol, overload, Concatenate

_R_co = TypeVar("_R_co", covariant=True)
R = TypeVar("R")
P = ParamSpec("P")


class ValueOnlyCallable(Protocol[_R_co]):
    def __call__(self, value: float) -> _R_co: ...


class YourCallableTooStrict(Protocol[P, _R_co]):
    @overload
    def __call__(self, value: float, *args: P.args, **kwargs: P.kwargs) -> _R_co: ...
    @overload
    def __call__(self, value: None = None, *args: P.args, **kwargs: P.kwargs) -> ValueOnlyCallable[_R_co]: ...


def lazy_strict(func: Callable[Concatenate[float, P], R]) -> YourCallableTooStrict[P, R]:
    def wrapper(value: float | None = None, *args: P.args, **kwargs: P.kwargs) -> R | partial[R]:
        if value is not None:
            return func(value, *args, **kwargs)
        else:
            if args:
                raise ValueError("Lazy call must provide keyword arguments only")
            return partial(func, **kwargs)

    return wrapper  # type: ignore[return-value]


@lazy_strict
def test_multiply_strict(value: float, *, multiplier: float) -> float:
    return value * multiplier


@lazy_strict
def test_format_strict(value: float, *, fmt: str) -> str:
    return fmt % value


print('test_multiply 5*2:', test_multiply_strict(value=5, multiplier=2))
print('test_format 7.777 as .2f:', test_format_strict(value=7.777, fmt='%.2f'))

func_mult_11_strict = test_multiply_strict(multiplier=11)  # returns a partial function
print('Type of func_mult_11:', type(func_mult_11_strict))
print('func_mult_11 5*11:', func_mult_11_strict(value=5))

func_mult_11_strict(value=5, multiplier=5)  # False positive: OK at runtime, but not allowed by mypy.
# E: Unexpected keyword argument "multiplier" for "__call__" of "ValueOnlyCallable"  [call-arg]
func_mult_11_strict(value='a')  # Expected.
# E: Argument "value" to "__call__" of "ValueOnlyCallable" has incompatible type "str"; expected "float"  [arg-type]
```

You can also mark `value` kw-only in the `ValueOnlyCallable` definition if you'd like; I just don't think it is reasonable for a function with only one argument. You can compare both approaches in this [playground](https://mypy-play.net/?mypy=latest&python=3.11&flags=strict&gist=f0d994cc5194d4d3f3eedf3e558b66c3).

If you do not want to use an ignore comment, the verbose option below should work. However, I do not think that verbosity is worth removing one ignore comment - it's up to you to decide.

```python
def lazy_strict(func: Callable[Concatenate[float, P], R]) -> YourCallableTooStrict[P, R]:
    @overload
    def wrapper(value: float, *args: P.args, **kwargs: P.kwargs) -> R: ...
    @overload
    def wrapper(value: None = None, *args: P.args, **kwargs: P.kwargs) -> ValueOnlyCallable[R]: ...

    def wrapper(value: float | None = None, *args: P.args, **kwargs: P.kwargs) -> R | ValueOnlyCallable[R]:
        if value is not None:
            return func(value, *args, **kwargs)
        else:
            if args:
                raise ValueError("Lazy call must provide keyword arguments only")
            return partial(func, **kwargs)

    return wrapper
```

Here's also a Pyright [playground](https://pyright-play.net/?strict=true&code=GYJw9gtgBA%2BjwFcAuCQFM5QJYQA5hCSgEMA7UsJYpLMUgZwFgAoF0SKRUgYyTDAA29bHgJFcxQlmIC24aEgCeuLKQDmI-ISgBhGQOIAjAWgA0UACrK0ANUnmACpOIQAyrjTdH4Pt0HmwADc0EAEwYgATcx06bmo0UniWFhgAJRg-KABeS2s7EAAKACI0jLAi8z9AyWlSJCyLEAQ0AEoWVOzcj3zi1KK25gdOpxAXd09ih37k1mZuA3phOwFmgHlSAUU9AQNjNAKHHzA-AQBtUr8AXRaALhYoB6gItGBYDP04Avo0AWBzapWaBunDC1BaUAAtAA%2BWDpPzAgB0SPujxm82IiygAE0wKhtrsTBZ%2BK4kCAsLwDkcTqcHOYLmBrndmI8oAABIIhUERFEPZ6vOBxHafb6-f4yZrA4CgpDmABUkjU9GBDgRCvoctlAGsAO5q5UInVq8HQ2FlRHI5mPdnBULhbmW3kvN6CgTCn5-KAAiVQABydDQnT9pDMUHlIEV%2BrVGsN4aVUBVMcVxphyzWGy2%2BiMJnOcIZ5oRaIWwhxeMze0plGOghp5lSjJ5bI5tsiDb5zo%2BMC%2B7rFgMl0rlevjqtj0d1sf1ifoyagqXzDetnLtradAo7XdFnvFQN9-sD-oH46HUdDWrHEaHk%2BnEikMlOdbnsxYbYMAC9FAUuNxgfis2hTjEeHiRIkD-KVwhleNLlra5IRhEsQB-PYaxnS4mRZNttVGXAPEKL1tzA6goAAHx3YM92DA9zxVY9ZVPQcEzPKdYJnYioGvGhb3vBsWSwV48OwYQKCIIMgW4llHnQFAQFITgEB4Ao8Mo9UT0vMSHh%2Bb40PE7TJNQGT2OkAQPzkrxQxoujYwGFldOkqBMOIbCQgeABiKAlA8YEsDUCh0FOGzSAhPDLhmJ8nVfRQYHoUlySQYyeG-MtswAuIQOA0D%2B0g6Dp3gxDCWJaLeGQriHUbG0uWXV57Mc3Ctz7cClMjEcVMYidGOnWcoCRAsSoXZt7XQp0qpwhTatIgMchEhqjya2jJ1ao1mNTNB1k2XK-3vTqLQGyqsOGvC6sIkiRPIkMwyo4dFVHeiDTa5iOhIpaVozHZfzvVC1OwPitwEqAhLGrTtOstApJkz8RsBJSrssj6NNEkrAagfy2MkDijM-SHmqNBskaGkIZlZcLQteECopgCAEAEGhcE2cHvQIiDZXMcnKawamsBCA6kGnemAcR4G9M3QFQygZmqYEdmQHxwnmDbEmkHgAgIGoWn8IyxnOAgJBgSikBpx13mkeATWoAAUkF5oZlwMk6gKAByOWyYpsXFCgABWWUACYblt8wHdF1mabwrJXaZp2A4lrIPZaAYrdUWL7bQUngEVwiAHYEVTzOSGEBEPeAb3fcT%2BXk5AJXYqD9PM9T8wjfqW2Tdz4BbejmZP0dymYAARk7zo-bD6n339tmQiybvwSgVz-OEYhkZvARZJ4Gg6BYWObdtqwPCgMBXjb0Wu87gu3OsOLuHb%2BWx5j63493p397d2Vu8Pm%2BO%2B7lXg5b5hn-Pzu35DkX%2B-FiPV249XKrAANJsBMmffeb9bbEGbi5KAAAxGQ3xfpoDUNQLAwRgTagDLqOobkAAWWBhB8E4MQLAshZgE2IG%2BSKBUkBEzckXM%2BAcIo6xiirTmcpQ4s2HiATm3NpQG35rZfisp-78MAZLGh4UGFkl4Mwh2Jcy4KK4ftEE9VQw101trUketSSiJBhrIgZs8KWyvnbPu0iXbuy9j7FhpMh6bHURSIOf8XERyjpfOO1jWGqLThnLOGJOp50PiolO8tOHuK3FkSumddF1wbnnZuAxIE8Ggd3NxRAcg2OdjkgoXiR5j0QVPEgs9UYL14LQUgK8rHr2sFvHeUC96P0ce5fYX9945I-qva%2BrTb7d3vu0mugyX6d0KR4j%2B3TskxPLnEzxACI7AMQSgoQAZ8D0CwDQHBUBwEkCIE0OoOAQyGGQL9SgJAdhgDwREKAhgXYQGUIoBEUAACiwIACqwYAAeHheBoHuZqNAihtQEHuQqBAEAEhECKMUkARROAECgCUd4QoYBIu3qix66Y1pIqgKcF0EIFTBU-uM7%2BUy4lwIQRPD5-zPAgQiG8z5UAACC4ZoWwtRXhJF5C0UujgFi14RRcWrUSmgJFRDQmqD8HgLBewj6byKDrIoABuKAaAGWAvuUUemBLTgKghJ0slQA), because `mypy` failed to find a mistake in my original answer and Pyright did.
GNU sed offers a simpler way to do it:

```
sed '0,/word/{//d}
     0,/word/{//d}
     0,/word/{//d}
    '
```

or even, since there's only the one search:

```
sed '/word/ { 0,//{//d}; 0,//{//d}; 0,//{//d}; }'
```

which also scans each line only once, to fix a (minor but possibly measurable) scaling issue noticed in comments.
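If sed is not a hard requirement, the same "delete only the first N matching lines" idea is easy to express in a general-purpose language. Here is a rough Python sketch of the equivalent logic (the function name is mine, and note it uses plain substring matching rather than a regex, unlike sed):

```python
def drop_first_n_matches(lines, word, n=3):
    """Return lines with the first n lines containing word removed."""
    out, dropped = [], 0
    for line in lines:
        if dropped < n and word in line:
            dropped += 1
            continue  # skip this occurrence
        out.append(line)
    return out

print(drop_first_n_matches(["a word", "b", "word c", "word d", "word e"], "word"))
```

Like the second sed form, this makes a single pass over the input, so it scales linearly regardless of how many occurrences are to be dropped.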
For our use-case, reCaptcha v3 or Enterprise does not work; please don't downvote without actually reading the question.

We have been using Google ReCaptcha v2 for some time with the goal of preventing human users who are idle from using auto-clickers / macros to cheat and level up in our web-based game while they are away. For example, they run auto-clickers / macros to move their character and level up while they are away, but their browsers are real and pass v3 and Enterprise background captcha checks instantly, because they are actually active in their browser maybe half of the day. While away, they run these auto-clicker tools to continue leveling up.

For this reason, we still use captcha v2, as it is the only option which has a graphical challenge which trips up auto-clickers, macros, and users trying to avoid becoming idle while they are away (to continue leveling up in our games). Our issue is that reCaptcha v2 does not show graphical challenges to 99% of users, as it passes them silently and returns "No Challenge", making it essentially silent, and just as bad for us as v3 or Enterprise.

hCaptcha has an option to "Force Challenge", which makes every single user get the graphical challenge for every request. Does Google ReCaptcha (any version) have such an option, which forces a graphical challenge for each captcha? This is what we are looking for.

To clarify, background (silent) non-graphical captchas such as v3/Enterprise do not work for us, as they will pass 99% of the time for our users. These are human users with real browser sessions; they have just been idle for a few hours and are running auto-clickers while they are away. When we detect them, we want to display a captcha. For this reason we (along with almost all AAA gaming companies) still use reCaptcha v2, as it is the only option Google offers that has a graphical / puzzle-based captcha. We only show a captcha to users who we are almost positive are cheating, so we are OK if it is very difficult.
I am trying out the various ways of creating a `std::vector` on the fly and passing it to another function:

    #include <iostream>
    #include <vector>

    void print(std::vector<int> a)
    {
        std::cout << a.size() << '\n';
    }

    int main()
    {
        print({1, 2, 3, 4, 5});
        print(std::vector<int>{1, 2, 3, 4, 5});
        print(std::vector<int>({1, 2, 3, 4, 5}));
    }

This produces the desired output:

    $ clang++ -std=c++11 foo.cpp && ./a.out
    5
    5
    5

I want to know what the differences are between these three invocations:

    print({1, 2, 3, 4, 5});
    print(std::vector<int>{1, 2, 3, 4, 5});
    print(std::vector<int>({1, 2, 3, 4, 5}));

Here is another example:

    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> a = {1, 2, 3, 4, 5};
        // std::cout << (a == {1, 2, 3, 4, 5}) << '\n'; // error
        std::cout << (a == std::vector<int>{1, 2, 3, 4, 5}) << '\n';
        std::cout << (a == std::vector<int>({1, 2, 3, 4, 5})) << '\n';
    }

Here is the output:

    $ clang++ -std=c++11 foo.cpp && ./a.out
    1
    1

I am hoping that the answer to this question can become a reference answer on this topic, where the answer discusses the following aspects of these invocations:

- The above three and any other similar ways of passing a `std::vector` to another function.
- Semantic differences between the various methods.
- Performance differences (if any) between the various methods.
- Best practices (if any) due to which one invocation is favourable over another.
Pinescript CC values incorrect and changing even though the timeframe cannot be changed
|pine-script|pine-script-v5|