I get **Security Issues** (CVE-2019-20444, CVE-2019-20445) with threat level 9 for all versions of the jars flink-rpc-akka-loader and flink-rpc-akka while scanning. Has anyone faced this issue? Please share your resolution. Thanks.

[flink-rpc-akka-loader_vulnerable][1]

[1]: https://i.stack.imgur.com/pPzQJ.png
I implemented something similar a while ago. A nice type to help with declaring your default config:

```ts
// Deeply make all fields optional
// (assumes a `Primitive` union type, e.g. string | number | boolean | null | undefined, defined elsewhere)
type DeepOptional<T> = T extends Primitive | any[]
  ? T // If primitive or array, return the type as-is
  : { [P in keyof T]?: DeepOptional<T[P]> }; // Make this key optional and recurse
```

Then you can define your default config with IntelliSense, like:

```ts
const a: DeepOptional<Config> = { ... };
```

For the resulting type to be correct, what you want to achieve can be done with a merge function returning an intersection of the types:

```ts
function merge<A extends object, B extends object>(a: A, b: B): A & B
```

Since the result is `A & B`, `B` will overwrite `A`'s optional declaration if the same key is given without it. You can check this out for a working example: https://github.com/Aderinom/typedconf/blob/master/src/config.builder.ts#L118
Google one-tap stopped working. Console shows: [GSI_LOGGER]: The given login_uri is not allowed for the given client ID. Login for this site has been up and running for years, including as recently as yesterday. Not sure if it's related, but we received an email last month indicating that the site would be migrating to FedCM in April. The email also indicated that no issues were detected so we could expect it to be seamless. We are seeing the login failures on both of the sites in the OAuth project. I've now followed the [migration doc](https://developers.google.com/identity/gsi/web/guides/fedcm-migration), adding `data-use_fedcm_for_prompt="true"` to the `g_id_onload` block but no luck. The email indicated we could postpone the migration by setting that value to `false` but it still doesn't work.
Google one-tap sign-in stopped working. Possibly related to FedCM changes?
|google-signin|fedcm|
null
I'm part of the development team for a system that uses qualified injection (at runtime) of services on endpoints. We are given the context and version, and from those we create the corresponding bean. However, we currently use dependency lookup, and we would like to use only DI. Is there any way to do this without making calls directly to the Spring container? I researched several approaches, but they all ended up calling the Spring container.

```java
@RestController
@RequestMapping("${spring.application.name}/{context}/v{version}")
public class SomeResource extends RestResponseResource {

    @Autowired
    BeanProducer producer;

    @GetMapping(value = "/some-get", produces = "application/json")
    public ResponseEntity<SomeResponseDTO> someResource(@PathVariable("context") String context,
                                                        @PathVariable("version") Integer version) {
        // the qualifier name is built from the context and version path variables
        SomeService service = producer
                .getInstance(SomeServiceInterface.class, context + SomeServiceInterface.class.getSimpleName() + version)
                .orElseThrow();
        return ResponseEntity.ok(service.someMethod());
    }
}
```

and the **producer.getInstance()** method looks like:

```java
public <T> Optional<T> getInstance(Class<T> type, String qualifier, Object... args) {
    Optional<T> bean = Optional.empty();
    try {
        bean = Optional.ofNullable(Strings.isBlank(qualifier)
                ? this.beanFactory.getBean(type, args)
                : type.cast(this.beanFactory.getBean(qualifier, args)));
    } catch (BeanNotOfRequiredTypeException | NoSuchBeanDefinitionException e) {
        log.debug("qualified bean not found: {}", qualifier);
    }
    return bean;
}
```

As can be seen above, I make use of the `BeanFactory` interface to obtain the desired beans from the qualifiers. The goal is to avoid calls to the Spring IoC container classes.
I'm trying to do some preprocessing on my dataset. Specifically, I'm trying to remove paywall language from the text (in bold below) but I keep getting an empty string as my output. Here is the sample text: > In order to put a stop to the invasive bush honeysuckle or Lonicera > Maackii currently taking over forests in Missouri and Kansas, > according to Debbie Neff of Excelsior Springs has organized an… > Premium Content is available to subscribers only. **Please login here to > access content or go here to purchase a subscription.** and my custom function: ```py import re import string import nltk from nltk.corpus import stopwords # function to detect paywall-related text def detect_paywall(text): paywall_keywords = ["login", "subscription", "purchase a subscription", "subscribers"] for keyword in paywall_keywords: if re.search(r'\b{}\b'.format(keyword), text, flags=re.IGNORECASE): return True return False # function for text preprocessing def preprocess_text(text): # Check if the text contains paywall-related content if detect_paywall(text): # Remove paywall-related sentences or language from the text sentences = nltk.sent_tokenize(text) cleaned_sentences = [sentence for sentence in sentences if not detect_paywall(sentence)] cleaned_text = ' '.join(cleaned_sentences) return cleaned_text.strip() # Remove leading/trailing whitespace # Tokenization tokens = nltk.word_tokenize(text) # Convert to lowercase tokens = [token.lower() for token in tokens] # Remove punctuation table = str.maketrans('', '', string.punctuation) stripped = [w.translate(table) for w in tokens] # Remove stopwords stop_words = set(stopwords.words('english')) words = [word for word in stripped if word.isalpha() and word not in stop_words] return ' '.join(words) ``` I've tried modifying the list of words to detect but to no avail. However, I found that removing "subscribers" from the list does remove the second sentence of the paywall language. 
But that's not really ideal, because the other half of the paywall text still remains. The function is also inconsistent: it works on this piece of text (it removes the paywall language), but not on the one above. > Of the hundreds of thousands of high school wrestlers, only a small percentage know what it’s like to win a state title. is part of that percentage. The Richmond junior joined that group by winning… **Premium Content is available to subscribers only. Please login here to access content or go here to purchase a subscription.**
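One thing worth checking: `preprocess_text` drops every *sentence* in which any keyword matches, so whenever the tokenizer fails to split the paywall text apart from the article text (for instance because the snippet ends in "…" without normal sentence punctuation), the whole chunk is treated as one sentence and removed, leaving an empty string. Below is a self-contained sketch of that failure mode, using a naive period-based splitter in place of NLTK purely for illustration:

```python
import re

PAYWALL_KEYWORDS = ["login", "subscription", "purchase a subscription", "subscribers"]

def is_paywall(sentence):
    """True if any paywall keyword occurs as a whole word/phrase."""
    return any(
        re.search(r"\b{}\b".format(re.escape(k)), sentence, flags=re.IGNORECASE)
        for k in PAYWALL_KEYWORDS
    )

def strip_paywall(text, splitter):
    """Keep only the 'sentences' that contain no paywall keyword."""
    return " ".join(s for s in splitter(text) if not is_paywall(s)).strip()

# Naive stand-in for nltk.sent_tokenize, just for this demo
naive_split = lambda text: [s for s in text.split(". ") if s]

# When article and paywall text end up in separate "sentences", only the paywall is dropped:
ok = "The wrestler won a state title. Premium Content is available to subscribers only."
print(strip_paywall(ok, naive_split))

# When they land in ONE "sentence", the entire chunk is removed, yielding an empty string:
bad = "organized an… Premium Content is available to subscribers only"
print(strip_paywall(bad, naive_split))
```

If NLTK's `sent_tokenize` merges the trailing "an…" fragment with the first paywall sentence for one article but not another, that would explain the inconsistency you observe.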
Data is not filtering in props. Showing passdata.map is not a function
|reactjs|react-hooks|
null
Here is one possible option using [`cKDTree.query`][1] from [tag:scipy]:

```py
import numpy as np
import pandas as pd
from scipy.spatial import cKDTree

def knearest(gdf, **kwargs):
    notna = gdf["PPM_P"].notnull()
    arr_geom1 = np.c_[
        gdf.loc[notna, "geometry"].x,
        gdf.loc[notna, "geometry"].y,
    ]
    arr_geom2 = np.c_[
        gdf.loc[~notna, "geometry"].x,
        gdf.loc[~notna, "geometry"].y,
    ]
    dist, idx = cKDTree(arr_geom1).query(arr_geom2, **kwargs)
    _ser = pd.Series(
        gdf.loc[notna, "PPM_P"].to_numpy()[idx].tolist(),
        index=(~notna)[lambda s: s].index,
    )
    gdf.loc[~notna, "PPM_P"] = _ser[~notna].map(np.mean)
    return gdf

N = 2  # feel free to make it 5, or whatever..
out = knearest(gdf.to_crs(3662), k=range(1, N + 1))
```

Output (*with `N=2`*):

***NB**: Each red point (an FID having a null `PPM_P`) is associated with the N nearest green points.*

[![enter image description here][2]][2]

GeoDataFrame (*with intermediates*):

```py
# I filled some random FID with a PPM_P value to make the input meaningful
    FID  PPM_P (OP)  PPM_P (INTER)      PPM_P                       geometry
0     0   34.919571            NaN  34.919571  POINT (842390.581 539861.877)
1     1         NaN      37.480218  37.480218  POINT (842399.476 539861.532)
2     2         NaN      35.567003  35.567003  POINT (842408.370 539861.187)
3     3         NaN      35.567003  35.567003  POINT (842420.229 539860.726)
4     4   36.214436            NaN  36.214436  POINT (842429.124 539860.381)
5     5         NaN      38.127651  38.127651  POINT (842438.018 539860.036)
6     6         NaN      40.431946  40.431946  POINT (842446.913 539859.691)
7     7   40.823028            NaN  40.823028  POINT (842458.913 539862.868)
8     8         NaN      37.871299  37.871299  POINT (842378.298 539851.425)
9     9   40.823028            NaN  40.823028  POINT (842390.158 539850.965)
10   10   40.040865            NaN  40.040865  POINT (842399.052 539850.620)
11   11   36.214436            NaN  36.214436  POINT (842407.947 539850.275)
12   12   34.919571            NaN  34.919571  POINT (842419.947 539853.452)
13   13         NaN      38.127651  38.127651  POINT (842428.841 539853.107)
14   14   40.040865            NaN  40.040865  POINT (842437.736 539852.761)
15   15         NaN      40.431946  40.431946  POINT (842449.595 539852.301)
16   16         NaN      40.431946  40.431946  POINT (842458.489 539851.956)
17   17         NaN      40.431946  40.431946  POINT (842467.384 539851.611)
18   18         NaN      40.431946  40.431946  POINT (842476.278 539851.266)
19   19         NaN      37.871299  37.871299  POINT (842368.981 539840.859)
```

[1]: https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.query.html
[2]: https://i.stack.imgur.com/KefCq.png
For me, it worked by changing the double quotes (") to single quotes ('). You can read the doc [here][1].

[1]: https://dev.mysql.com/doc/refman/8.0/en/string-literals.html
Best way to remove all packages from the virtual environment.

<!-- language-all: shell -->

# Windows PowerShell:

    pip freeze > unins ; pip uninstall -y -r unins ; del unins

# Windows Command Prompt:

    pip freeze > unins && pip uninstall -y -r unins && del unins

# Linux:

    pip3 freeze > unins ; pip3 uninstall -y -r unins ; rm unins
I am trying to convert the "regTemp" byte array to a BMP file after `DBMerge(long dbHandle, byte[] temp1, byte[] temp2, byte[] temp3, byte[] regTemp, int[] regTempLen)` *(this function is used to combine registered fingerprint templates and returns the result in regTemp)* generates the final byte array.

**I found many answers on Stack Overflow, but so far no luck. Some answers:**

https://stackoverflow.com/a/60769564/9044234 *- gives java.lang.ArrayIndexOutOfBoundsException: Index 2048 out of bounds for length 2048*

https://stackoverflow.com/a/1193769/9044234 *- gives java.lang.IllegalArgumentException: image == null!*

**You can find the whole source code and documentation here:** https://github.com/sayednaweed/slk20r-zkteco-java-sample

```java
public class ZKFPDemo extends JFrame {
    private JTextArea textArea;
    // pre-registration templates
    private byte[][] regtemparray = new byte[3][2048];
    private long mhDB = 0;

    private void OnExtractOK(byte[] template, int len) {
        int[] _retLen = new int[1];
        _retLen[0] = 2048;
        byte[] regTemp = new byte[_retLen[0]];
        int ret;
        if (0 == (ret = FingerprintSensorEx.DBMerge(mhDB, regtemparray[0], regtemparray[1],
                regtemparray[2], regTemp, _retLen))) {
            textArea.setText("Merged successfully.");
        } else {
            textArea.setText("Failed to merge.");
        }
    }
}
```

**I used this fingerprint scanner in a C# project with the C# SDK; using the following code I was able to convert the buffer to a BitmapSource and display the image:**

```csharp
public static BitmapSource ToBitmapSource(byte[] buffer)
{
    BitmapSource bitmap = null;
    if (buffer != null && !(buffer.Length < 10))
    {
        using (var stream = new MemoryStream(buffer))
        {
            bitmap = BitmapFrame.Create(stream, BitmapCreateOptions.None, BitmapCacheOption.OnLoad);
        }
    }
    return bitmap;
}
```

It's been days and I am unable to solve the issue. Please, someone help me; thanks in advance.
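One thing worth checking before any conversion attempt: as the question itself notes, `regTemp` is the output of `DBMerge`, i.e. a merged fingerprint *template*, not an encoded image, so it carries no BMP/PNG/JPEG header for an image decoder to recognise (which would explain the `image == null!` error); the displayable image bytes typically come from the capture step instead. A quick way to check whether a buffer is an encoded image at all (illustrative Python sketch; the signatures are the standard file magic numbers):

```python
def image_format(buf):
    """Return the encoded-image format of a byte buffer, or None if no known header is present."""
    signatures = {
        b"BM": "bmp",
        b"\x89PNG\r\n\x1a\n": "png",
        b"\xff\xd8\xff": "jpeg",
    }
    for magic, fmt in signatures.items():
        if buf[: len(magic)] == magic:
            return fmt
    return None

print(image_format(b"BM" + bytes(62)))  # a buffer starting with a BMP header
print(image_format(bytes(2048)))        # raw template-like data: no image header
```

If the same check on `regTemp` yields no known header, the buffer must first be rendered into pixel data (or fetched from the SDK's image-capture call) before writing a BMP.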
null
null
null
null
I want to integrate a live video viewing feature into my website. Which YouTube API should I use to implement this functionality? I use the API https://www.googleapis.com/youtube/v3/search, but I get an execution limit exceeded error. Can you please suggest another API?
I want to integrate a live video viewing feature into my website. Which YouTube API should I use to implement this functionality?
|api|youtube|
null
How do I use the OHLC series? I cannot see any demo on the LightningChart official site. I tried the code below:

    candle[id] = chart[id].addOHLCSeries().setMouseInteractions(false);
Lightning Chart Ohlc Series
|lightningchart|
Spring uses [ASM library][1] for byte code manipulation. [ConfigurationClassParser][2] uses this library to load @Bean annotated methods. [![enter image description here][3]][3] [1]: https://github.com/spring-projects/spring-framework/blob/main/spring-core/src/main/java/org/springframework/asm/package-info.java [2]: https://github.com/spring-projects/spring-framework/blob/main/spring-context/src/main/java/org/springframework/context/annotation/ConfigurationClassParser.java#L448 [3]: https://i.stack.imgur.com/aw8wG.png
I have two tables (in an Azure SQL database):

```sql
CREATE TABLE [dbo].[Action](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [Action] [varchar](7) NOT NULL,
    [Resource] [varchar](16) NOT NULL,
    [Timestamp] [datetime] NOT NULL
)

ALTER TABLE [dbo].[Action] ADD CONSTRAINT [PK_Action] PRIMARY KEY CLUSTERED ( [ID] ASC )
```

and

```sql
CREATE TABLE [dbo].[ResourceScope](
    [Resource] [varchar](16) NOT NULL,
    [Scope] [varchar](100) NOT NULL
)

ALTER TABLE [dbo].[ResourceScope] ADD CONSTRAINT [PK_ResourceScope] PRIMARY KEY CLUSTERED ( [Scope] ASC, [Resource] ASC )
```

The `Action` table contains about 15M actions, regarding about 1.5M unique resources. The `ResourceScope` table contains about 10K rows with unique resources and one of a couple of different scopes. An API requires paging through the `Action` table with the following query, where both <id> and <scope> are variable, but the pages are always the same size.

```sql
SELECT TOP 101 [Action].[ID], [Action].[Action], [Action].[Resource], [Action].[Timestamp]
FROM [Action]
JOIN [ResourceScope] ON [ResourceScope].[Resource] = [Action].[Resource]
WHERE [Action].[ID] <= <id> AND [ResourceScope].[Scope] = <scope>
ORDER BY [Action].[ID] DESC
```

When paging through the data for a certain scope, about 95% of all queries take about 15ms, but the remaining 5% take anywhere between 500ms and 4000ms. The slow queries always involve exactly the same id/scope combinations, and I can narrow it down to the exact IDs that make the query slow. The IDs for which the query is slow have the exact same query plan as the fast ones, but the slow ones perform a huge number of clustered index seeks on the ResourceScope primary key, whereas the fast ones only perform a few executions (mostly around 100) of that clustered index seek on that index.
Example of a plan for a slow id: https://www.brentozar.com/pastetheplan/?id=ryScGHekR

Example of a plan for a fast id: https://www.brentozar.com/pastetheplan/?id=B1c6MrxyA

When I add an indexed view for a specific scope, like below, and page through that view, the 5% slow queries disappear.

```sql
CREATE VIEW [dbo].[DemoSetAction] WITH SCHEMABINDING AS
SELECT [dbo].[Action].[ID], [dbo].[Action].[Action], [dbo].[Action].[Resource], [dbo].[Action].[Timestamp]
FROM [dbo].[Action]
JOIN [dbo].[ResourceScope] ON [dbo].[ResourceScope].[Resource] = [dbo].[Action].[Resource]
WHERE [dbo].[ResourceScope].[Scope] = 'demo-set'

CREATE UNIQUE CLUSTERED INDEX IDX_DemoSet ON [dbo].[DemoSetAction] (ID);
```

Paging query on the view:

```sql
SELECT TOP 101 [DemoSetAction].[ID], [DemoSetAction].[Action], [DemoSetAction].[Resource], [DemoSetAction].[Timestamp]
FROM [DemoSetAction]
WHERE [DemoSetAction].[ID] <= <id>
ORDER BY [DemoSetAction].[ID] DESC
```

Since the scopes can be added dynamically (but are not expected to grow beyond ~25 unique scopes), I would prefer not to make indexed views for every scope.

Two questions:

1. What causes the slow queries?
2. Are there any indexes or other things I could add/change to get the same or similar query performance without the use of the indexed views?
Small percentage of parameters cause very slow queries on SQL Server
|sql|sql-server|azure-sql-database|
Below is my security config:

```java
public SecurityWebFilterChain securityWebFilterChain(ServerHttpSecurity http) {
    http
        .authorizeExchange(authorizeExchangeSpec -> authorizeExchangeSpec
            .pathMatchers("/login").permitAll()
            .anyExchange().authenticated()
            .and()
            .formLogin().disable()
            .csrf().disable()
            .oauth2Login())
        .exceptionHandling(exceptionHandlingSpec -> exceptionHandlingSpec
            .authenticationEntryPoint(new RedirectServerAuthenticationEntryPoint("/login")));
    return http.build();
}
```

I have a customized login page which works fine: when I access the application, the custom login page is rendered. But when I try to log out from the application using a /logout GET call, it does not work.

If I remove the custom login page configuration:

```java
.exceptionHandling(exceptionHandlingSpec -> exceptionHandlingSpec
    .authenticationEntryPoint(new RedirectServerAuthenticationEntryPoint("/login")))
```

and then try the /logout GET call from the client, it works fine and the default Spring Security behavior is seen for logout, where the call redirects to a logout confirmation page and a logout button is displayed.
**! EDIT**: a solution can be found at the bottom of the question.

I am using a Huawei E3372 4G USB dongle on Win8.1. This dongle's settings can be accessed via a browser by typing 192.168.8.1, and the user can enable a 4G connection by manually clicking the "Enable mobile data" button. This is the script I am trying to use to enable the mobile data connection, knowing I'm doing something wrong only on line 4:

```
#!/bin/bash
curl -s -X GET "http://192.168.8.1/api/webserver/token" > token.xml
TOKEN=$(grep -v '<?xml version="1.0" encoding="UTF-8"?><response><token>' -v '</token></response>' token.xml)
curl "http://192.168.8.1/api/dialup/mobile-dataswitch" -H "Host: 192.168.8.1" -H "User-Agent: Mozilla/5.0 (Windows NT 6.3; rv:68.0) Gecko/20100101 Goanna/4.8 Firefox/68.0 PaleMoon/29.0.1" -H "Accept: */*" -H "Accept-Language: en-US,en;q=0.5" --compressed -H "Content-Type: application/x-www-form-urlencoded; charset=UTF-8;" -H "_ResponseSource: Broswer" -H "__RequestVerificationToken: $TOKEN" -H "X-Requested-With: XMLHttpRequest" -H "Referer: http://192.168.8.1/html/content.html" -H "Cookie: SessionID=AgVjkIjBxOC0xPbys3nne7rA4I8GXNzUkZCcSOGPR8P3xss8XOuqRbdb0EgHidXhQXZ903xf0nk0F8J81ISqHpZ7kYvZaSW5wHWDqJ9w90pXj90cPwCm7F01fFcmp0gv" -H "Connection: keep-alive" --data-raw "<?xml version=""1.0"" encoding=""UTF-8""?><request><dataswitch>1</dataswitch></request>"
date
exec $SHELL
```

Upon executing the first curl command, the xml file's content looks like this:

```
<?xml version="1.0" encoding="UTF-8"?><response><token>ZsxY7Q9G90jh4FqUiAjxD9XmqLWf0rYg4RUNf6FoVzeTIlPPms0Ov1RERFFRY77o</token></response>
```

Just for test purposes, if I manually insert the token in the bash script, it works like a charm:

```
#!/bin/bash
curl "http://192.168.8.1/api/dialup/mobile-dataswitch" -H "Host: 192.168.8.1" -H "User-Agent: Mozilla/5.0 (Windows NT 6.3; rv:68.0) Gecko/20100101 Goanna/4.8 Firefox/68.0 PaleMoon/29.0.1" -H "Accept: */*" -H "Accept-Language: en-US,en;q=0.5" --compressed -H "Content-Type: application/x-www-form-urlencoded; charset=UTF-8;" -H "_ResponseSource: Broswer" -H "__RequestVerificationToken: ZsxY7Q9G90jh4FqUiAjxD9XmqLWf0rYg4RUNf6FoVzeTIlPPms0Ov1RERFFRY77o" -H "X-Requested-With: XMLHttpRequest" -H "Referer: http://192.168.8.1/html/content.html" -H "Cookie: SessionID=AgVjkIjBxOC0xPbys3nne7rA4I8GXNzUkZCcSOGPR8P3xss8XOuqRbdb0EgHidXhQXZ903xf0nk0F8J81ISqHpZ7kYvZaSW5wHWDqJ9w90pXj90cPwCm7F01fFcmp0gv" -H "Connection: keep-alive" --data-raw "<?xml version=""1.0"" encoding=""UTF-8""?><request><dataswitch>1</dataswitch></request>"
date
exec $SHELL
```

I've found several suggestions for the same or similar dongles; none of them worked for me, which must be due to my insufficient knowledge. My cry for help is about line 4 of the top-most script, where I am obviously making a mistake. Thank you in advance for your help.

===== EDIT: SOLUTION IS FOUND ! =====

markp-fuso's suggestion was the path to my solution. Kudos. I just noticed that besides a variable "token", which changes upon each "on/off" action, this dongle also has a less variable "SesTokInfo" which is not changed upon each "on/off" action (I just tested that manually), and it is different than what it was yesterday. It could be the "plug/unplug" of the dongle that causes that; I honestly can't know.

To whom it may concern: the final form of the working script, which I've just tested with a positive result twice, is below. (Note that the script contains the curl command to "Enable mobile data". The one to "Disable mobile data" should contain 0 instead of 1 in the `<dataswitch>1</dataswitch>` element.)
``` #!/bin/bash curl -s -X GET "http://192.168.8.1/api/webserver/token" > token.xml curl -s -X GET "http://192.168.8.1/api/webserver/SesTokInfo" > sestoken.xml TOKEN=$(sed -En 's|.*<token>(.*)</token>.*|\1|p' token.xml) SESTOKEN=$(sed -En 's|.*<SesInfo>(.*)</SesInfo>.*|\1|p' sestoken.xml) typeset -p TOKEN typeset -p SESTOKEN curl "http://192.168.8.1/api/dialup/mobile-dataswitch" -H "Host: 192.168.8.1" -H "User-Agent: Mozilla/5.0 (Windows NT 6.3; rv:68.0) Gecko/20100101 Goanna/4.8 Firefox/68.0 PaleMoon/29.0.1" -H "Accept: */*" -H "Accept-Language: en-US,en;q=0.5" --compressed -H "Content-Type: application/x-www-form-urlencoded; charset=UTF-8;" -H "_ResponseSource: Broswer" -H "__RequestVerificationToken: $TOKEN" -H "X-Requested-With: XMLHttpRequest" -H "Referer: http://192.168.8.1/html/content.html" -H "Cookie: SessionID=$SESTOKEN" -H "Connection: keep-alive" --data-raw "<?xml version=""1.0"" encoding=""UTF-8""?><request><dataswitch>1</dataswitch></request>" date exec $SHELL ```
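For reference, the token extraction that the `sed` lines perform can also be sketched in Python (illustrative only; the element names are taken from the XML responses shown above):

```python
import re

def extract(xml_text, tag):
    """Pull the text content of <tag>...</tag> out of a one-line XML response."""
    m = re.search(r"<{0}>(.*?)</{0}>".format(tag), xml_text)
    return m.group(1) if m else None

xml = ('<?xml version="1.0" encoding="UTF-8"?><response><token>'
       'ZsxY7Q9G90jh4FqUiAjxD9XmqLWf0rYg4RUNf6FoVzeTIlPPms0Ov1RERFFRY77o</token></response>')
print(extract(xml, "token"))
```

The same `extract(xml, "SesInfo")` call would pull the session token out of the SesTokInfo response.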
Use [child_process](https://nodejs.org/api/child_process.html) to call an external program:

- [zenity](https://man.archlinux.org/man/zenity.1.en) - GTK
  - `zenity --title="open file" --file-selection`
  - `zenity --title="save file" --file-selection --save --filename=asdf.txt`
  - `zenity --title="open directory" --file-selection --directory`
- [kdialog](https://apps.kde.org/kdialog/) - Qt
  - `kdialog --getopenfilename`
  - `kdialog --getsavefilename`
  - `kdialog --getexistingdirectory`
- [nativefiledialog](https://github.com/btzy/nativefiledialog-extended) - GTK/native, no CLI app?
- [qarma](https://github.com/luebking/qarma) - Qt, clone of zenity
- [node-file-dialog](https://github.com/manorit2001/node-file-dialog) - Python tkinter
- [dialogbox](https://github.com/martynets/dialogbox/) - Qt, no file picker, create complex dialogs
  - `dialogbox <<<'add label "asdf" msg; add pushbutton &Ok okay apply exit'`
- [gtkdialog](https://github.com/oshazard/gtkdialog) - 10 stars, 10 years ago
- [dialog](https://invisible-island.net/dialog/dialog.html) - ncurses (terminal)
- *[Android](https://en.wikipedia.org/wiki/Android_%28operating_system%29)*
- *[Android Studio](https://en.wikipedia.org/wiki/Android_Studio)*
- *[Java](https://en.wikipedia.org/wiki/Java_%28programming_language%29)*
- *[JRE](https://en.wikipedia.org/wiki/Java_virtual_machine#Java_Runtime_Environment)*
- *[Gradle](https://en.wikipedia.org/wiki/Gradle)*
null
```
function sendDataToServer(
  firstName, lastName, phoneNumber, address, birthday, age,
  idNumber, gender, degree, intake, semester, course
) {
  if (
    firstName !== "" &&
    lastName !== "" &&
    phoneNumber !== "" &&
    phoneNumber.length === 10 &&
    address !== "" &&
    birthday !== "" &&
    age !== "" &&
    age >= 18 &&
    age <= 30 &&
    idNumber !== "" &&
    idNumber.length === 12
  ) {
    console.log("sending date: " + birthday);
    console.log("sending nic: " + idNumber);
    console.log("sending phone: " + phoneNumber);
    console.log(typeof phoneNumber);
    console.log(typeof idNumber);
    console.log(typeof birthday);
    fetch("http://localhost:8080/student/add", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        firstName: firstName,
        lastName: lastName,
        phoneNumber: phoneNumber,
        address: address,
        birthday: birthday,
        age: age,
        idNumber: idNumber,
        gender: gender,
        degree: degree,
        intake: intake,
        semester: semester,
        course: course,
      }),
    })
      .then((response) => {
        if (response.ok) {
          alert("Student added successfully");
        } else {
          alert("An error occurred");
          console.log(response);
        }
      })
      .catch((error) => {
        console.log(error);
        alert("An error occurred");
      });
  } else {
    //alert("Please fill all required fields");
    console.log("Please fill all required fields");
  }
}
```

This function takes in various parameters representing user data, such as firstName, lastName, phoneNumber, address, birthday, age, idNumber, gender, degree, intake, semester, and course. These are all String values, and in the database they are also defined as Strings. I take the date, NIC and phone number as Strings and try to pass them, but the JSON body is not passing these three Strings, although it passes the other Strings. Why is that?
I want to have some git branch deployments. For a particular branch, I want to keep the latest image (sorted by imagePushedAt) and delete the rest. This is what I am trying:

```
image_tags_json=$(aws ecr describe-images --repository-name bi-dagster --query 'sort_by(imageDetails,& imagePushedAt)[*].[imageTags[], imagePushedAt]' --output json)

# Check if there are any image tags returned
if [[ $(echo "$image_tags_json" | jq -r 'length') -eq 0 ]]; then
    echo "No image tags found."
    exit 1
fi

# Extract image tags containing the branch name along with timestamps
branch_image_tags=$(echo "$image_tags_json" | jq -r --arg branch "$branch_name" '.[] | select(.[0] | arrays) | select(.[0][] | contains($branch)) | "\(.[0]) \(.[1])"')

# Find the latest timestamp
latest_timestamp=$(echo "$branch_image_tags" | awk '{print $2}' | sort -r | head -n1)

# Output the image tags except the one with the latest timestamp
tags_to_delete=$(echo "$branch_image_tags" | awk -v latest="$latest_timestamp" '$2 != latest {print $1}')
#echo $tags_to_delete

image_digests=$(echo "$tags_to_delete" | jq -r '. | join(" ")')
echo $image_digests

for digest in $image_digests; do
    aws ecr batch-delete-image --repository-name bi-dagster --image-ids imageDigest="$digest"
done
```

When I echo `image_digests`, I get output in this format. These are the correctly identified imageTags to be deleted, separated by spaces.

```
1233-1-DATA 238-1-DATA 157-1-DATA 661-1-DATA
```

But the problem comes when I actually try to delete them. I get this error on the last command.

```
{
    "imageIds": [],
    "failures": [
        {
            "imageId": {
                "imageDigest": "661-1-DATA"
            },
            "failureCode": "InvalidImageDigest",
            "failureReason": "Invalid request parameters: image digest should satisfy the regex '[a-zA-Z0-9-_+.]+:[a-fA-F0-9]+'"
        }
    ]
}
```

Edit: I create and push these images via GitHub. For each image, there's an "ImageIndex" and an "Image". The "Image" always has a "--" instead of the actual name (unlike in the ImageIndex).
My code now works to delete the ImageIndex objects but the Image objects are still there. [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/h0Xj5.png
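A note on why the delete fails: the values being passed are image *tags*, while the command is handing them over as `imageDigest=` (a digest must look like `sha256:<hex>`, which is exactly what the regex in the error enforces), so `imageTag="$digest"` is the likely fix. The keep-newest selection itself can be sketched independently of the AWS CLI (function and variable names here are illustrative):

```python
def tags_to_delete(tag_time_pairs):
    """Given (tag, pushed_at_iso) pairs for one branch, keep the newest tag and return the rest."""
    if not tag_time_pairs:
        return []
    newest_tag, _ = max(tag_time_pairs, key=lambda p: p[1])  # ISO timestamps sort lexicographically
    return [tag for tag, ts in tag_time_pairs if tag != newest_tag]

pairs = [
    ("238-1-DATA", "2024-03-01T10:00:00"),
    ("1233-1-DATA", "2024-03-20T10:00:00"),  # newest: kept
    ("157-1-DATA", "2024-02-11T10:00:00"),
]
print(tags_to_delete(pairs))
```

Deleting by tag removes the manifest the tag points at; whether the untagged "Image" objects linger afterwards depends on the repository's lifecycle policy.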
In my C# code I programmed a consumer with Confluent Kafka. I am reading the timestamp of the message and want to convert it to a known date and time format. I tried the following code, but I always get the error: System.FormatException: String 'Confluent.Kafka.Timestamp' was not recognized as a valid DateTime. How can I fix this problem?

```csharp
public void Kafka_consumer()
{
    // some code for getting a message from the Kafka topic
    Kafka_TimeStamp_string = consumeResult.Message.Timestamp.ToString();
    TimeStamp_dateTime = DateTime.ParseExact(Kafka_TimeStamp_string, "yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture);
}
```
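For reference, the underlying Kafka message timestamp is a Unix epoch value in milliseconds, and `Timestamp.ToString()` here yields the type name rather than that value, which is why `ParseExact` fails. The conversion itself looks like this (a Python sketch for illustration; my understanding is that Confluent.Kafka's `Timestamp` type exposes the value directly, e.g. via a `UtcDateTime` property, rather than through string parsing, but verify against your SDK version):

```python
from datetime import datetime, timedelta, timezone

# Example epoch-milliseconds value, as carried by a Kafka message timestamp
unix_ms = 1711456866387

# Exact conversion (avoids float rounding on the milliseconds)
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
dt = epoch + timedelta(milliseconds=unix_ms)

print(dt.strftime("%Y-%m-%d %H:%M:%S.%f")[:-3])
```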
How can I convert the Kafka message timestamp to date and time format in C#?
|c#|kafka-topic|confluent-kafka-dotnet|
Tomcat TPS is low. A load test was done with 100,000 threads per second, and we are getting 1200 TPS. How can the TPS be increased?

We have Tomcat 9.0.78 installed on a RHEL 9 server with 16 cores + 64 GB RAM (HDD hard disk).

Below are the configuration parameters set on the servers:

server.xml
----------

    maxThreads="10000"
    minSpareThreads="4000"
    maxConnections="100000"

OS config
---------

    ulimit and soft limits set to unlimited
JSON Body is Not Passing Certain Strings
|javascript|json|string|spring-boot|
null
I have been using a `~/.config` directory as a git repo that was forked from https://github.com/benbrastmckie/.config to `https://github.com/<myusername>/.config` and git cloned. I want to keep its remote repo focused on my Neovim-related configuration, so my other local files in `.config` are git-excluded for now. The list of excluded files can be viewed with `cat ~/.config/.git/info/exclude`. Now, I am trying to set up a `~/.dotfiles` repo. In this case, I am thinking of including the whole `.config` directory in `.dotfiles`, adding the excluded files too. I want to do it such that the version control of `.dotfiles` functions properly without affecting what I have set up for `.config`. There are different methods available to configure the version control of `.dotfiles`, but I am confused about which to follow in this situation.
Setting up the version control of .dotfiles while the .config is connected to a forked repo
|git|version-control|config|dotfiles|
```none
TypeError                                 Traceback (most recent call last)
<ipython-input-16-ecab93a49f0c> in <cell line: 2>()
      1 import pandas as pd
----> 2 dt=pd.read_csv("/content/survey_results_public.csv")
      3 dt

TypeError: 'str' object is not callable
```

```py
import pandas as pd
dt=pd.read_csv("/content/survey_results_public.csv")
dt
```
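A likely cause (an assumption, since the earlier notebook cells aren't shown) is that `pd.read_csv` was overwritten with a string in a previous cell, for example by typing `=` instead of calling the function; restarting the runtime clears it. A minimal reproduction without pandas:

```python
import types

# Stand-in object playing the role of the pandas module (illustrative only)
pd = types.SimpleNamespace(read_csv=lambda path: "<DataFrame>")

# A typo like this in an earlier cell silently replaces the function with a string...
pd.read_csv = "/content/survey_results_public.csv"

# ...so the next call raises the exact error from the traceback
try:
    pd.read_csv("/content/survey_results_public.csv")
    msg = None
except TypeError as e:
    msg = str(e)

print(msg)
```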
Str object is not callable in pandas
|python|pandas|matplotlib|
null
I am trying to make a to-do list. My app takes in data successfully, but when I want to delete an item from the list, it shows `TypeError: passdata.map is not a function`. I am just learning React and I cannot figure out where the problem is. N.B.: My code is split between two components.

```
import React, { useState } from "react";
import { OutputData } from "./OutputData";

export const Form = () => {
  const [data, setData] = useState("");
  const [val, setVal] = useState([]);

  const changed = (event) => {
    setData(event.target.value);
  };

  const clicked = () => {
    setVal((oldVal) => {
      return [...oldVal, data];
    });
    setData(" ");
  };

  const del = (id) => {
    setVal((newVal) => {
      return newVal.filter((val, index) => {
        return id !== index;
      });
    });
    setVal("");
  };

  return (
    <div>
      <div>
        <h1>To Do List</h1>
        <input type="text" onChange={changed} />
        <button onClick={clicked}>
          <span>+</span>
        </button>
      </div>
      <div>
        <OutputData passData={val} del={del}></OutputData>
      </div>
    </div>
  );
};
```

My second component:

```
import React from 'react'

export const OutputData = ({ passData, index, del }) => {
  return (
    <div>
      {passData.map((passData, index) => {
        return (
          <div className='lidiv' key={index}>
            <li>{passData}</li>
            <button onClick={() => del(index)}>Delete</button>
          </div>
        )
      })}
    </div>
  )
}
```

I want to know why this happens and how to solve this problem.
Default /logout does not work if /login is customised (Spring Security 5.7.11)
|spring-boot|spring-security|spring-webflux|spring-security-oauth2|spring-cloud-gateway|
I wrote an embedded function inside my feature file, and on a conditional basis I want to call the function only if the first object of the data array doesn't match the `dataNotFound` definition. Appreciate your help.

    Scenario: xxxxxx
    # Getting data array from DB
    * def deleteTokens =
      """
      function(lenArray) {
        for (var i = 0; i < lenArray; i++) {
          karate.call('deleteTokensCreated.feature', {ownerId: data[i].owner_id, token: data[i].token});
        }
      }
      """
    * def dataNotFound = {"message": "Data not found!"}
    * def deletedTokens = call deleteTokens lenArray

The following doesn't work:

    * eval if (data[0] != dataNotFound) call deleteTokens lenArray
Karate call embedded function conditionally
|karate|
null
I've completed [this guide](https://www.youtube.com/watch?v=eaQc7vbV4po) and now want to retrieve user details using a server component, but my cookie doesn't appear to be in the request. I customised my profile page and made it into a server component, but when I reload the page to call the fetch request, no token cookie is present. I have checked this with the following lines of code in the route handler:

```
const token = request.cookies.get("token")?.value || "";
console.log(token, "token");
```
How do I send an HttpOnly cookie with a fetch request in Next.js?
|javascript|next.js|cookies|next.js13|
null
I'm getting the below error from the Revalidate Itinerary API.

```none
{
    "status": "Incomplete",
    "type": "Application",
    "errorCode": "ERR.2SG.SERVICE_VERSION_DEPRECATED",
    "timeStamp": "2024-03-26T12:21:06.387Z",
    "message": "Requested service STPS_SHOPFLIGHTSREVALIDATEAPI version v4.3.0 is deprecated"
}
```

I did try to upgrade to version 5, but I'm getting a different response compared to before.
Home-llm model returns a different response every time
When I push a job designed in Talend Studio to GitLab, I see 3 files (`.item`, `.properties`, and `.screenshot`) per job under the `process` folder.

Questions:

1. What role do these 3 files play?
2. When I change an existing job and push it to Git, there are diffs in those 3 files, but the diffs seem cryptic to me. How can I read those files?

For example, `.screenshot` files.

Before changing:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<talendfile:ScreenshotsMap xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:talendfile="platform:/resource/org.talend.model/model/TalendFile.xsd" key="process" value="iVBORw0KGgoAAAANSUhEUgAABBgAAAHxCAIAAABI8JjTAAAv/klEQVR4nO3deXCc9Z3ncdXWztRUJamdqa3a2pl/Zqa2KscmsJVKMpPK1izxEO/.../Pzhyck+Pn5+fn5+fn5+fnDExL8/Pz8/Pz8/Pz8/OEJCX5+fn5+fn5+fn7+8IQEPz8/Pz8/Pz8/P394/x/MYxX37hKydQAAAABJRU5ErkJggg=="/>
```

After changing:

```xml
<talendfile:ScreenshotsMap xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:talendfile="platform:/resource/org.talend.model/model/TalendFile.xsd" key="process" value="iVBORw0KGgoAAAANSUhEUgAABBgAAAHxCAIAAABI8JjTAABCd0lEQVR4nO3de3Cc5Z3ge9XUztScIjmzU1u1tbPnjzN7TtUkbAJbqSQzKbb2EC/xzvlra+fMJDuZCRdzM2BQnJDrkAEmF2YyTnCCTYQDJDHhEmCDwRiDJtyv5m6DDdiAAVs3y5J1v3XL8Xm6X6nVkuVuPzaS3vfVR/.../PzRIyT4+fn5+fn5+fn5+aNHSPDz8/Pz8/Pz8/PzR4+Q4Ofn5+fn5+fn5+ePHiHBz8/Pz8/Pz8/Pzx89QoKfn5+fn5+fn5+fP3qEBD8/Pz8/Pz8/Pz9/9Pz/DP6n89Dd4IMAAAAASUVORK5CYII="/>
```
Tomcat 9 Thread issue - how to increase TPS in tomcat
|performance|tomcat|tps|
null
I've successfully implemented an asynchronous-start and terminating version of the LCR algorithm for leader election in a single ring network, as per my project's initial requirements. The algorithm allows processors to wake up at different rounds and terminates once a leader is elected, with all processors aware of the election result. My current implementation works as expected in a single ring setup.

Now, I need to extend this implementation to a more complex 'ring of rings' network topology, where each sub-ring (representing a subnetwork) is connected to form a larger ring (the main ring). The main ring should operate using the asynchronous-start variant of the LCR algorithm, while the sub-rings use the terminating LCR algorithm. The goal is to elect the processor with the maximum ID across the entire network as the leader, ensure all processors in the main ring are aware of the elected leader, and terminate all processors in the main ring upon completion.

**Challenges & Expectations:**

I'm grappling with the conceptual and implementation shift from a single ring to a ring of rings network. Specifically:

- **Interfacing between rings:** how to effectively manage the interaction between sub-rings and the main ring, especially in terms of message passing and leader election propagation.
- **Ensuring correctness and performance:** how to verify that the maximum ID is consistently elected across various network sizes and structures, and how to efficiently measure the rounds and message counts until termination.

I anticipate challenges in adapting the existing algorithm to support this hierarchical network structure without compromising the efficiency and correctness of the leader election process.
**Current Implementation:**

Main.java

```
public class Main {
    public static void main(String[] args) {
        int numProcessors = 5; // Total number of processors in the ring
        Ring ringNetwork = new Ring();

        // Initialise processors with unique IDs and start rounds
        for (int i = 0; i < numProcessors; i++) {
            int startRound = i + 1; // Example start rounds
            Processor processor = new Processor(i + 1, startRound);
            ringNetwork.addProcessor(processor);
        }

        // Start the leader election process
        ringNetwork.startElection();
        System.out.println("Leader elected: Processor ID " + ringNetwork.getElectedLeaderId());
    }
}
```

**Processor.java**

```
import java.util.Queue;
import java.util.LinkedList;
import java.util.concurrent.atomic.AtomicBoolean;

public class Processor {
    private final int id;
    private final int startRound;
    private Processor clockwiseNeighbor;
    private final Queue<Integer> messageQueue = new LinkedList<>();
    private final AtomicBoolean isActive = new AtomicBoolean(false);
    private final AtomicBoolean hasElected = new AtomicBoolean(false);
    private int leaderId = -1;

    public Processor(int id, int startRound) {
        this.id = id;
        this.startRound = startRound;
    }

    public void setClockwiseNeighbor(Processor neighbor) {
        this.clockwiseNeighbor = neighbor;
    }

    public void activate(int currentRound) {
        if (currentRound >= startRound && isActive.compareAndSet(false, true)) {
            // Process any messages that were received before activation but not processed
            while (!messageQueue.isEmpty()) {
                processMessage(messageQueue.poll());
            }
            sendMessage(id); // Send initial message with own ID
        }
    }

    // Send a message to the clockwise neighbor
    private void sendMessage(int message) {
        if (clockwiseNeighbor != null) {
            clockwiseNeighbor.receiveMessage(message, isActive.get());
        }
    }

    // Receive a message from the counter-clockwise neighbor
    public void receiveMessage(int message, boolean senderActive) {
        if (senderActive) { // Only queue messages from active senders
            messageQueue.add(message);
        }
    }

    public void processMessages() {
        while (!messageQueue.isEmpty() && isActive.get()) {
            processMessage(messageQueue.poll());
        }
    }

    // Process the received message
    private void processMessage(int receivedId) {
        if (receivedId > id && !hasElected.get()) {
            sendMessage(receivedId);
        } else if (receivedId == id) {
            hasElected.set(true);
            leaderId = id; // Elect self and terminate
            System.out.println("Processor " + id + " elected as leader");
            // Propagate leader ID for others to terminate
            sendMessage(id);
        } else if (hasElected.get()) {
            // If this processor has already elected a leader, propagate the leader's ID
            sendMessage(leaderId);
        }
    }

    public void acknowledgeLeader(int leaderId) {
        if (!hasElected.get()) {
            this.leaderId = leaderId;
            hasElected.set(true);
            isActive.set(false); // Terminate after acknowledging the leader
            System.out.println("Processor " + id + " acknowledges leader " + leaderId);
        }
    }

    public boolean hasElected() {
        return hasElected.get();
    }

    public int getId() {
        return id;
    }

    public int getLeaderId() {
        return leaderId;
    }
}
```

**Ring.java**

```
import java.util.ArrayList;
import java.util.List;

public class Ring {
    private final List<Processor> processors = new ArrayList<>();
    private int electedLeaderId = -1;
    private int currentRound = 1;

    // Add a processor to the ring
    public void addProcessor(Processor processor) {
        if (!processors.isEmpty()) {
            processors.get(processors.size() - 1).setClockwiseNeighbor(processor);
        }
        processors.add(processor);
    }

    // Close the ring by connecting the last and first processors
    public void closeRing() {
        if (!processors.isEmpty()) {
            processors.get(processors.size() - 1).setClockwiseNeighbor(processors.get(0));
        }
    }

    public void startElection() {
        closeRing(); // Ensure the ring is closed before starting
        boolean electionInProgress = true;
        while (electionInProgress) {
            simulateRound();
            electionInProgress = checkElectionProgress();
        }
    }

    private void simulateRound() {
        processors.forEach(p -> p.activate(currentRound));
        processors.forEach(Processor::processMessages);
        currentRound++; // Increment round for next simulation
    }

    private boolean checkElectionProgress() {
        // Check if a leader has been elected and propagate the leader ID to all processors
        for (Processor p : processors) {
            if (p.hasElected() && electedLeaderId == -1) {
                electedLeaderId = p.getLeaderId();
                processors.forEach(proc -> proc.acknowledgeLeader(electedLeaderId));
                break;
            }
        }
        // Check if all processors have acknowledged the leader
        return processors.stream().anyMatch(proc -> !proc.hasElected());
    }

    public int getElectedLeaderId() {
        return electedLeaderId;
    }
}
```

**Required Changes:**

To adapt my project to a ring of rings network, I need insights on:

- **Algorithmic adaptation:** strategies to extend the single-ring LCR algorithm to support the ring of rings topology while maintaining asynchronous start and termination.
- **Implementation guidance:** suggestions on managing the complexity of multiple interconnected rings and ensuring efficient message passing and leader election propagation.
- **Performance evaluation:** advice on setting up simulations for networks of varying sizes and structures to validate the correctness of the algorithm and measure its performance.

Any guidance, insights, or references to similar implementations would be greatly appreciated as I navigate this complex extension of my project.
Extending LCR Algorithm from Single Ring to Ring of Rings Network for Leader Election
|java|
null
So I was talking to a friend and they recommended joining the table to itself. You can LEFT JOIN the table to itself to retain all of the students (which generalizes the problem), and then restrict the join to the time frame you're interested in:

```sql
SELECT t1.student_name, t1.hw_ts, COUNT(*)
FROM (SELECT student_name, hw_ts FROM homework) AS t1
LEFT JOIN (SELECT student_name, hw_ts FROM homework) AS t2
  ON  t1.student_name = t2.student_name
  AND t2.hw_ts < t1.hw_ts
  AND t2.hw_ts > t1.hw_ts - INTERVAL '24 hours'
GROUP BY 1, 2
```

(Here `homework` stands in for your table name, and the exact `INTERVAL` syntax for "24 hours" varies by database.)
The problem is that you are trying to access port 3306, which is where your database is hosted, while Spring Boot by default serves on port 8080. If you access `localhost:8080/api/cadastrar-pj`, it will work fine.
See the following article: [Label][1]

A `Label` has no "Tap" or "Tapped" event, therefore you cannot work with `EventToCommandBehavior` in that way. The right way to use a tap command is how you did it in the first place.

[1]: https://learn.microsoft.com/en-us/dotnet/api/microsoft.maui.controls.label?view=net-maui-8.0
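For reference, "how you did it in the first place" would look roughly like this in XAML: a `TapGestureRecognizer` attached to the `Label`, with its `Command` bound to a view-model property (the `TapCommand` name here is just an illustration, not from your code):

```xml
<Label Text="Tap me">
    <Label.GestureRecognizers>
        <TapGestureRecognizer Command="{Binding TapCommand}" />
    </Label.GestureRecognizers>
</Label>
```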
I work on a WebForms project (.NET Framework) and I get this exception:

```
<!-- System.Web.HttpCompileException (0x80004005): c:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\root\a1f84433\8a878a8d\App_Web_genericusercontrol.ascx.9c4a4a40.c8huzmqq.0.cs(180): error CS0234: The type or namespace name 'Services' does not exist in the namespace 'Cnbp.Cbk' (are you missing an assembly reference?)
   at System.Web.Compilation.AssemblyBuilder.Compile()
   at System.Web.Compilation.BuildProvidersCompiler.PerformBuild()
   at System.Web.Compilation.BuildManager.CompileWebFile(VirtualPath virtualPath)
   at System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile, Boolean throwIfNotFound, Boolean ensureIsUpToDate)
   at System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile, Boolean throwIfNotFound, Boolean ensureIsUpToDate)
   at System.Web.Compilation.BuildManager.GetVPathBuildResult(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile, Boolean ensureIsUpToDate)
   at System.Web.UI.TemplateControl.LoadControl(VirtualPath virtualPath)
   at Cnbp.Cbk.FrontOffice.ContainerClient.Controls.SubGenericUserControl.GenerateBlocks(Boolean isReadOnly, Boolean editByBlock, LinkButton calcButton, ContributionBlockUserControl& blocCtrlContribution)
   at Cnbp.Cbk.FrontOffice.ContainerClient.Controls.SubContainerUserControl.BindControls() -->
```

The DLL in question already exists and is properly referenced in the project, but it is not detected during the live .ascx compilation. PS: the DLLs are in the GAC. I have tried lots of solutions but none worked; if someone has a solution, please share. Thanks in advance.
The problem is not that IntelliJ-IDEA doesn't support Java 22. The problem is that Gradle does not run on Java 22 (it only supports building projects for Java 22 but must be run with Java 21 maximum). From the [Gradle 8.7 release notes](https://docs.gradle.org/current/release-notes.html#support-for-building-projects-with-java-22): > **Support for building projects with Java 22** > > Gradle now supports using Java 22 for compiling, testing, and starting other Java programs. Selecting a language version is done using toolchains. > > You cannot run Gradle 8.7 itself with Java 22 because Groovy still needs to support JDK 22. However, future versions are expected to provide this support.
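As the release notes say, selecting the language version is done via toolchains. A hedged sketch of what that looks like in `build.gradle.kts` (the Groovy DSL equivalent is analogous): Gradle itself keeps running on JDK 21 or lower, while the toolchain provisions JDK 22 for compiling, testing, and running your code.

```kotlin
// build.gradle.kts — sketch: build for Java 22 while running Gradle on JDK <= 21
java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(22)
    }
}
```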
```
TypeError                                 Traceback (most recent call last)
<ipython-input-16-ecab93a49f0c> in <cell line: 2>()
      1 import pandas as pd
----> 2 dt=pd.read_csv("/content/survey_results_public.csv")
      3 dt

TypeError: 'str' object is not callable
```

```python
import pandas as pd
dt = pd.read_csv("/content/survey_results_public.csv")
dt
```
Can I share the link to the GitHub repository with you so that you can study it further? It does indeed return a `UserDetails` object, but in the Spring Security configuration part there could be problems that I cannot resolve.
null
I am used to opening Chrome DevTools and trying code snippets in the console: it's fast and powerful. However, as far as I can see, it's not possible to use `eval` or `with`, because such code snippets trigger the browser's security policy.

[![console refusing eval][1]][1]

I know I can tweak my browser by modifying the `chrome://flags`. However, in this case there doesn't seem to be such a flag. Am I right? Or is there a way to make the DevTools console more lenient?

[1]: https://i.stack.imgur.com/iUSch.png
How to tweak the security policy of Chrome, in order to run "unsafe" snippets in the console?
|google-chrome|content-security-policy|google-developer-tools|
{"Voters":[{"Id":23476278,"DisplayName":"Rohan jain"}]}
|javascript|reactjs|redux|redux-toolkit|
(I'll assume you're referring to the conditional operator in the [C family of programming languages](https://en.wikipedia.org/wiki/List_of_C-family_programming_languages). Tag your question with a specific language if you want to focus on that language.)

## Case 1: Base case

Let's start with the simplest case, a single conditional expression:

    condition ? expression1 : expression2

To evaluate this expression, follow these steps:

1. Evaluate `condition`.
2. If the result of Step 1 is true (or "truthy", depending on the language), then ignore `expression2` and evaluate `expression1`.
3. If the result of Step 1 is false (or "falsy", depending on the language), then ignore `expression1` and evaluate `expression2`.
4. Return the result of Step 2 (the value of `expression1`) or Step 3 (the value of `expression2`), whichever applies.

Notice that the order of evaluation goes from left to right: `condition` is evaluated first, followed by either `expression1` or `expression2`.

## Case 2: `condition` is a conditional expression

If the `condition` in `condition ? expression1 : expression2` is itself a conditional expression, then you get the following overall expression:

    (subcondition ? subexpression1 : subexpression2) ? expression1 : expression2

To evaluate this expression, follow the same steps as in the base case but replace `condition` with `subcondition ? subexpression1 : subexpression2`. In Step 1, you'll *recurse*:

1. Evaluate `subcondition ? subexpression1 : subexpression2`. To do this, follow the same steps as in the base case:
   1. Evaluate `subcondition`.
   2. If the result of Step 1.1 is true, then ignore `subexpression2` and evaluate `subexpression1`.
   3. If the result of Step 1.1 is false, then ignore `subexpression1` and evaluate `subexpression2`.
   4. Return the result of Step 1.2 (the value of `subexpression1`) or Step 1.3 (the value of `subexpression2`), whichever applies.
2. If the result of Step 1 is true, then ignore `expression2` and evaluate `expression1`.
3. If the result of Step 1 is false, then ignore `expression1` and evaluate `expression2`.
4. Return the result of Step 2 (the value of `expression1`) or Step 3 (the value of `expression2`), whichever applies.

Notice that the order of evaluation goes from left to right: `subcondition` is evaluated first, followed by either `subexpression1` or `subexpression2`, followed by either `expression1` or `expression2`.

## Case 3: `expression2` is a conditional expression

If the `expression2` in `condition ? expression1 : expression2` is itself a conditional expression, then you get the following overall expression:

    condition ? expression1 : (subcondition ? subexpression1 : subexpression2)

To evaluate this expression, follow the same steps as in the base case but replace `expression2` with `subcondition ? subexpression1 : subexpression2`. In Step 3, you'll *recurse*:

1. Evaluate `condition`.
2. If the result of Step 1 is true, then ignore `subcondition ? subexpression1 : subexpression2` and evaluate `expression1`.
3. If the result of Step 1 is false, then ignore `expression1` and evaluate `subcondition ? subexpression1 : subexpression2`. To do this, follow the same steps as in the base case:
   1. Evaluate `subcondition`.
   2. If the result of Step 3.1 is true, then ignore `subexpression2` and evaluate `subexpression1`.
   3. If the result of Step 3.1 is false, then ignore `subexpression1` and evaluate `subexpression2`.
   4. Return the result of Step 3.2 (the value of `subexpression1`) or Step 3.3 (the value of `subexpression2`), whichever applies.
4. Return the result of Step 2 (the value of `expression1`) or Step 3 (the value of `subcondition ? subexpression1 : subexpression2`), whichever applies.

Notice that the order of evaluation goes from left to right: `condition` is evaluated first, followed by either `expression1` or `subcondition`, followed by (in the `subcondition` case only) either `subexpression1` or `subexpression2`.

## Left versus right associativity

Let's now look at your original expression, but omitting all parentheses:

    condition1 ? expression1 : condition2 ? expression2 : expression3

This could potentially be interpreted in two different ways:

1. Left associative: `(condition1 ? expression1 : condition2) ? expression2 : expression3`
   * This is Case 2 above.
2. Right associative: `condition1 ? expression1 : (condition2 ? expression2 : expression3)`
   * This is Case 3 above.

An important thing to understand is that *associativity* and *order of evaluation* are two completely different concepts. For the conditional operator, left or right associativity simply determines whether you're in Case 2 or Case 3. But in both cases, the order of evaluation goes from left to right, with `condition1` evaluated first.
First, the code is incomplete: you did not include the sqflite purpose, definition, and usage. Regarding your problem of the FutureBuilder loading every time: it happens because you called `infoCard()` inside the build function, so every time the widget rebuilds, it calls the function and executes the query again. You can do either of the following:

- Declare the `Future<Widget>` outside the build function and initialize it in `initState`, like this:

```
late final Future<Widget> infoWidget;

@override
void initState() {
  super.initState();
  infoWidget = infoCard();
}
```

Now pass this `infoWidget` to the FutureBuilder. As the future lives outside the scope of the build function, it will not be created again and again.

- Alternatively, use a `FutureProvider`. I can see that you are using Riverpod, so please use `FutureProvider`, because it exists exactly for such scenarios.
null
I'm trying to implement a K-means algorithm with semi-random choosing of the initial centroids. I'm using Python to process the data and NumPy to choose the initial centers, and the CPython stable API to implement the iterative part of K-means in C. However, when I feed in relatively large datasets, I get `Segmentation fault (core dumped)`. So far I have tried to manage memory better and free all the global arrays before returning to Python; I also tried to free all local arrays before the end of each function.

This is the code in Python:

```python
def Kmeans(K, iter, eps, file_name_1, file_name_2):
    compound_df = get_compound_df(file_name_1, file_name_2)
    N, d = int(compound_df.shape[0]), int(compound_df.shape[1])
    data = np.array(pd.DataFrame.to_numpy(compound_df), dtype=float)
    assert int(iter) < 1000 and int(iter) > 1 and iter.isdigit(), "Invalid maximum iteration!"
    assert 1 < int(K) and int(K) < N, "Invalid number of clusters!"
    PP_centers = k_means_PP(compound_df, int(K))
    actual_centroids = []
    for center_ind in PP_centers:
        actual_centroids.append(data[center_ind])
    actual_centroids = np.array(actual_centroids, dtype=float)
    data = (data.ravel()).tolist()
    actual_centroids = (actual_centroids.ravel()).tolist()
    print(PP_centers)
    print(f.fit(int(K), int(N), int(d), int(iter), float(eps), actual_centroids, data))
```

This is the code in C that manages the `PyObject` creation; this is the Python object being returned to the `Kmeans` function:

```c
PyObject* convertCArrayToDoubleList(double* arr){
    int i, j;
    PyObject* K_centroid_list = PyList_New(K);
    if(!K_centroid_list)
        return NULL;
    for(i=0;i<K;++i){
        PyObject* current_center = PyList_New(d);
        if(!K_centroid_list){
            Py_DECREF(K_centroid_list);
            return NULL;
        }
        for(j=0;j<d;++j){
            PyObject* num = PyFloat_FromDouble(arr[i*d+j]);
            if(!num){
                Py_DECREF(K_centroid_list);
                Py_DECREF(current_center);
                return NULL;
            }
            PyList_SET_ITEM(current_center,j,num);
        }
        PyList_SET_ITEM(K_centroid_list,i,current_center);
    }
    return K_centroid_list;
}
```

I ran valgrind on some samples; there were some memory leaks but I could not identify the source. I also tried various combinations of freeing and `Py_DECREF` to reduce the leakage, but to no avail.
Start by creating a cumulative frequency table, or a fenwick tree. You'll have a record for each radius of circle, with value corresponding to explored weights at that distance from the origin. Then, begin a BFS from the origin. For each diagonal "frontier", you'll need to update your table/tree with the radius:weight key-value pair (add weight to existing value). You'll also need to then query the table/tree for the current cumulative sum at each radius just added, noting the maximum and updating a global running maximum accordingly. Once your search terminates, you'll have the maximum sum for your clipped-circle. If you want to reconstruct the circle, just store the max radius and BFS depth along with the global max sum itself. This will give you your solution in `O(N^2 log N)` time, as there will be N^2 updates and queries, which are `O(log N)` each. The intuition behind this solution is that by exploring along this diagonal "frontier" outward, you implicitly clip all your circles you query since the weights above/right of it haven't been added yet. By calculating the max (at each search depth) for just the radii that were just updated, you also enforce the constraint that the circles intersect the clipping line at an integer coordinate.
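A minimal sketch of the cumulative frequency table / Fenwick tree described above (a standard implementation; the class and method names are mine), supporting the point updates and prefix-sum queries in O(log N) that the complexity analysis relies on:

```python
class Fenwick:
    """1-indexed Fenwick (binary indexed) tree over n slots."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def update(self, i, delta):
        """Add delta to the value stored at index i."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i  # jump to the next node covering index i

    def query(self, i):
        """Return the cumulative sum over indices 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i  # strip the lowest set bit
        return s
```

In the BFS described above, you would call `update(radius, weight)` for each cell on the current frontier, then `query(radius)` for each just-updated radius to refresh the running maximum.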
I am creating a memoization example with a function that sums or averages the elements of an array and compares the result with the cached ones to retrieve them in case they are already stored. In addition, I want to store a result only if it differs considerably from the cached ones (i.e., passes a threshold, e.g. 5000 below). I created an example using a decorator to do so, but the results using the decorator are slightly slower than without the memoization, which is not OK. Also, is the logic of the decorator correct? My code is attached below:

```python
import time
import random
from collections import OrderedDict

def memoize(f):
    cache = {}
    def g(*args):
        if args[1] == 'avg':
            sum_key_arr = sum(args[0]) / len(list(args[0]))
        elif args[1] == 'sum':
            sum_key_arr = sum(args[0])
        print(sum_key_arr)
        if sum_key_arr not in cache:
            for key, value in OrderedDict(sorted(cache.items())).items():  # key in dict cannot be an array so I use the sum of the array as the key
                if abs(sum_key_arr - key) <= 5000:  # threshold is great here so that all values are approximated!
                    #print('approximated')
                    return cache[key]
                else:
                    #print('not approximated')
                    cache[sum_key_arr] = f(args[0], args[1])
                    return cache[sum_key_arr]
    return g

@memoize
def aggregate(dict_list_arr, operation):
    if operation == 'avg':
        return sum(dict_list_arr) / len(list(dict_list_arr))
    if operation == 'sum':
        return sum(dict_list_arr)
    return None

t = time.time()
for i in range(200, 150000):
    res = aggregate(list(range(i)), 'avg')
elapsed = time.time() - t
print(res)
print(elapsed)
```
I've searched through loads of posts on the Roblox Developer Forum for an answer to this and have had no luck, so I'm hoping you can help. Basically, I want a ServerScript (located in ServerScriptService) to wait for a LocalScript (located in a GUI button) to fire a RemoteEvent (using `FireServer`) with the user's selected police division. I currently have only one GUI button set up (Frontline Policing). The LocalScript can be found below.

**LocalScript**

```lua
local div = game.ReplicatedStorage.Events.divisionEvent
local button = script.Parent
local gui = script.Parent.Parent.Parent.Parent

button.MouseButton1Down:Connect(function()
	div:FireServer("Frontline")
	gui.Enabled = false
end)
```

TL;DR: I am wondering if it's possible to get a ServerScript to wait for a LocalScript to fire a RemoteEvent with the needed information. I've researched around 50-60 Roblox Developer Forum posts, and none of them were the same issue as mine, offered the proper resolution, or were even associated with my issue.
How to make a ServerScript wait for a LocalScript to fire a RemoteEvent
|lua|roblox|luau|roblox-studio|
null
That is not a "something"-delimited file, it's fixed width. ```r readLines(unz("~/Downloads/2018029_ascii.zip", "FRSS108PUF.dat"), n=3) # [1] "100011331 1 1 1 1 2 1 2 2 2 2 2 4 1 1 1 1 1 1 2 1 2 3 3 3 3 3 2 2 3 3 3 2 3 2 1 2 3 2 2 2 1 1 1 1 2 1 1 2 1 1 1 1 2 5 3 3 3 3 3 3 3 5 1 5 4 4 4 4 4 4 4 4 12000000000000000000000000000000000000000000000000000000000000000000000000017.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.30382293817.303822938 18.5837245717.927828408 18.5837245718.02058140118.02058140117.92782840818.471095936 0 18.5837245718.471095936 18.5837245718.47109593617.92782840818.36509251618.47109593617.92782840818.47109593618.36509251617.927828408" # [2] "100022331 2 1 2 2 2 1 2 2 2 2 2 4 1 1 2 1 1 1 2 2 1 4 3 3 2 3 3 4 2 3 4 4 3 2 1 4 1 2 2 2 2 2 1 4 2 2 1 1 3 1 2 1 2 4 4 4 4 4 3 3 2 4 1 4 3 4 3 3 3 2 2 3 
1200000000000000000000000000000000000000000000000000000000000000000000000005.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.4137039431 5.318726681 5.3187266815.41370394315.41370394315.41370394315.41370394315.41370394315.41370394315.41370394315.41370394315.41370394315.4137039431 5.318726681 5.3187266815.33290239175.33290239175.4137039431 0 5.318726681 5.3187266815.41370394315.4137039431 5.3187266815.22702449685.41370394315.41370394315.41370394315.41370394315.41370394315.39811066135.39811066135.39811066135.41370394315.41370394315.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.2292535051" # [3] "100032331 2 1 1 1 2 1 2 2 2 2 2 4 1 1 1 2 2 1 1 1 3 5 4 4 4 3 4 5 5 5 5 4 4 5 4 4 3 5 2 2 5 1 1 4 1 1 1 1 3 1 1 1 2 3 4 4 5 2 2 1 1 3 1 4 3 4 5 2 2 1 1 2 1200000000000000000000000000000000000000000000000000000000000000000000000005.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.4137039431 5.318726681 5.3187266815.41370394315.41370394315.41370394315.41370394315.41370394315.41370394315.41370394315.41370394315.41370394315.4137039431 5.318726681 
5.3187266815.33290239175.33290239175.41370394315.4137039431 5.318726681 5.3187266815.41370394315.4137039431 5.3187266815.22702449685.41370394315.4137039431 05.41370394315.41370394315.39811066135.39811066135.39811066135.41370394315.41370394315.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.22925350515.2292535051" ``` The hard part about fixed-width formats is determining the widths of each field. Fortunately (somewhat), the documentation zip has `LayoutPUF.pdf` that contains each field and the columns for each. The widths for that file should total 1441, since that's what we're getting from the file: ```r nchar(readLines(unz("~/Downloads/2018029_ascii.zip", "FRSS108PUF.dat"), n=3)) # [1] 1441 1441 1441 ``` Counting up the columns, we can use ```r widths <- c(5, rep(1, 4), rep(2, 73), rep(1, 74), rep(12, 101)) out <- read.fwf(unz("~/Downloads/2018029_ascii.zip", "FRSS108PUF.dat"), widths = widths) # Warning in readLines(file, n = thisblock) : # incomplete final line found on '~/Downloads/2018029_ascii.zip:FRSS108PUF.dat' str(out) # 'data.frame': 1527 obs. of 253 variables: # $ V1 : int 10001 10002 10003 10004 10005 10006 10007 10008 10009 10010 ... # $ V2 : int 1 2 2 3 2 2 1 1 2 3 ... # $ V3 : int 3 3 3 2 2 2 4 4 3 2 ... # $ V4 : int 3 3 3 1 1 1 2 4 4 2 ... # $ V5 : int 1 1 1 1 1 1 1 1 1 1 ... # $ V6 : int 1 2 2 2 2 1 2 2 2 2 ... # $ V7 : int 1 1 1 1 1 2 1 1 1 1 ... # $ V8 : int 1 2 1 2 2 2 2 1 2 1 ... # $ V9 : int 1 2 1 2 2 2 2 2 2 2 ... # [list output truncated] ``` Over to you to name all 253 columns. 
You can transcribe from the pdf (you might be able to scrape it, but that doesn't look like an awesome scrape-able pdf), starting with something like

```r
colnames(out) <- c("IDNUMBER", "DSIZCL3", "URBAN", "OEREG", "Q1", "Q2", ...)
```

It will be laborious, no doubt.